
Contents contributed and discussions participated by 張 旭

張 旭

phusion/passenger-docker: Docker base images for Ruby, Python, Node.js and Meteor web apps - 0 views

  • Ubuntu 20.04 LTS as base system
  • Ruby 2.7.5 is configured as the default.
  • Python 3.8
  • A build system, git, and development headers for many popular libraries, so that the most popular Ruby, Python and Node.js native extensions can be compiled without problems.
  • Nginx 1.18. Disabled by default
  • production-grade features, such as process monitoring, administration and status inspection.
  • Redis 5.0. Not installed by default.
  • The image has an app user with UID 9999 and home directory /home/app. Your application is supposed to run as this user.
  • running applications without root privileges is good security practice.
  • Your application should be placed inside /home/app.
  • COPY --chown=app:app
  • Passenger works like a mod_ruby, mod_nodejs, etc. It changes Nginx into an application server and runs your app from Nginx.
  • placing a .conf file in the directory /etc/nginx/sites-enabled
  • The best way to configure Nginx is by adding .conf files to /etc/nginx/main.d and /etc/nginx/conf.d
  • files in conf.d are included in the Nginx configuration's http context.
  • Any environment variables you set with docker run -e, Docker linking, or /etc/container_environment won't reach Nginx.
  • To preserve these variables, place an Nginx config file ending with *.conf in the directory /etc/nginx/main.d, in which you tell Nginx to preserve these variables.
  • By default, Phusion Passenger sets all of the following environment variables to the value production
  • Setting these environment variables yourself (e.g. using docker run -e RAILS_ENV=...) will not have any effect, because Phusion Passenger overrides all of these environment variables.
  • PASSENGER_APP_ENV environment variable
  • passenger-docker autogenerates an Nginx configuration file (/etc/nginx/conf.d/00_app_env.conf) during container boot.
  • The configuration file is in /etc/redis/redis.conf. Modify it as you see fit, but make sure daemonize no is set.
  • You can add additional daemons to the image by creating runit entries.
  • The shell script must be called run and must be executable.
  • The shell script must run the daemon without letting it daemonize/fork.
  • We use RVM to install and to manage Ruby interpreters.
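Taken together, the notes above map onto a short Dockerfile. A minimal sketch, assuming a pinned base-image tag and an app copied to /home/app/webapp; the tag and the two .conf file names are illustrative, not taken from the README:

    # Tag is illustrative; pin a real release of the base image
    FROM phusion/passenger-ruby27:2.3.0

    # Nginx/Passenger are disabled by default; enable them
    RUN rm -f /etc/service/nginx/down

    # Passenger reads the application environment from PASSENGER_APP_ENV
    ENV PASSENGER_APP_ENV=production

    # Site config goes into /etc/nginx/sites-enabled; drop the default vhost
    ADD webapp.conf /etc/nginx/sites-enabled/webapp.conf
    RUN rm -f /etc/nginx/sites-enabled/default

    # Preserve selected `docker run -e` variables for Nginx via /etc/nginx/main.d
    ADD env.conf /etc/nginx/main.d/env.conf

    # The app runs as user `app` (UID 9999) under /home/app
    COPY --chown=app:app . /home/app/webapp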
張 旭

我做系统架构的一些原则 (Some Principles I Follow in System Architecture) | 酷壳 - CoolShell - 0 views

  • If you can't articulate the benefit and are pursuing technology purely for technology's sake, it is meaningless.
  • Have corresponding solutions ready for both planned and unplanned downtime.
  • Constant, recurring human error.
  • Operations splits into infrastructure operations and application operations; development splits into core infrastructure development and business development.
  • Infrastructure operations and core developers mostly focus on resource utilization and performance, while application operations and business developers focus more on the application and service layer.
  • Some systems can no longer be cleanly labeled as infrastructure or application layer. Service governance, for example, involves low-level infrastructure technology but also needs cooperation from business teams; the same goes for Kubernetes, which contains low-level concerns such as networking but also needs the business side to provide readiness and liveness health checks, ConfigMaps, and so on.
  • Think of optimizing a city's traffic: once the city reaches a certain size, you cannot improve overall throughput by optimizing a few roads or blocks; you have to plan the city's functional areas as a whole to raise overall efficiency.
  • As systems grow more and more complex, cases keep occurring where users migrate their PHP, Python, .NET, or Node.js architectures entirely to a Java + Go architecture.
  • More industrial-grade technology.
  • Use the more mature, more industrial-grade technology stack rather than the stack you personally happen to know best.
  • Do not reinvent the wheel, and above all do not make invasive custom modifications ("魔改") to existing software.
  • It is completely unnecessary. Not reinventing the wheel and not making invasive modifications is not because you lack the ability, but because the era of doing everything yourself is long gone.
  • Many companies' architectures are held hostage by the personal preferences, strengths, and experience of the technology lead, instead of technology choices being made from an objective standpoint.
  • In China, essentially all of the e-commerce platforms, hundreds of banks, the three major telecom carriers, all of the insurance companies, the brokerage systems, hospital systems, e-government systems, and so on are built in Java; Java is also the mainstream language at AWS.
  • NoSQL databases all perform poorly at joins.
  • To avoid doing joins, people start duplicating data, then fail to manage the data-consistency problems that the duplication brings, which leads to all kinds of data corruption and loss.
  • Always use a relational database with full ACID support.
  • Performance problems always have a solution, and more tools exist for them than for anything else; compared with the completeness and extensibility of the architecture, they are really not worth worrying about too much.
  • Many companies' systems neither follow industry standards nor establish their own company-wide standards; they feel like a disorganized mob.
  • The classic example is HTTP status codes. The industry standard is 200 for success, 3xx for redirects, 4xx for client errors, and 5xx for server errors; I really do not understand why, success or failure, everyone likes to return 200 and then indicate the error in the body.
  • RESTful API guidelines matter a great deal; two references I consider the best written are PayPal's and Microsoft's.
  • A monitoring system should rather die itself than interfere with the actual application.
  • A company should review its software version upgrades at least once a year and converge on unified, consistent versions.
  • Architecture and software are not finished once written; they need continuous modification and maintenance, and 80% of software cost goes to maintenance.
  • Use service discovery or a service gateway to reduce the operational complexity that service dependencies bring.
  • Always apply established software design principles: SOLID (see the author's "Some Principles of Software Design"), IoC/DIP, architectural best practices from SOA or Spring Cloud (see the Service Interface rules in Steve Yegge's rant about the Amazon and Google platforms), practices for distributed-system architecture (see "Transaction Processing in Distributed Systems" or Microsoft's Cloud Design Patterns), and so on.
  • No automated tests, no good software documentation, no quality code, no standards or conventions.
  • Technical debt incurred in the past must all be repaid: foundations that were not laid properly must be re-laid, and supporting infrastructure that was never built must be built. If this infrastructure is not established in a correct, scientific way, you cannot have a good system.
  • Rather than spending great effort accommodating technical debt, just pay the debt off.
  • Build a debt-free "new district" and use an anti-corruption-layer architecture to keep technical debt from invading it.
  • The day you start making technical decisions purely from your past experience is the day you stop growing.
  • Before making any decision, it is best to spend a little time searching online for relevant material — tech blogs, articles, papers — and to look at how different companies and open-source projects do it; then compare the pros and cons of several options before forming your own decision.
  • The X-Y problem: a user wants to solve problem X, believes Y will solve it, and asks me how to do Y; in the end it turns out that for the original problem X the best solution is not Y but Z.
  • I like to keep asking "why"; this questioning makes the customer rethink the problem together with me.
  • Being aggressive is not being reckless, nor adopting every new technology in sight; it means actively embracing the new technologies that will change the future.
  • It is not that I skip whatever I dislike: I study blockchain and Rust just the same, and I know their advantages, but I will not use them at scale.
  • Progress always comes from exploration; exploration has a cost, but the payoff is greater.
  • Not daring to take risks is the biggest risk, not daring to make mistakes is the biggest mistake, and the fear of losing will make you lose even more.
張 旭

Choose when to run jobs | GitLab - 0 views

  • Rules are evaluated in order until the first match.
  • If no rules match, the job is not added to any other pipeline.
  • define a set of rules to exclude jobs in a few cases, but run them in all other cases
  • use all rules keywords, like if, changes, and exists, in the same rule. The rule evaluates to true only when all included keywords evaluate to true.
  • use parentheses with && and || to build more complicated variable expressions.
  • Use workflow to specify which types of pipelines can run.
  • every push to an open merge request’s source branch causes duplicated pipelines.
  • avoid duplicate pipelines by changing the job rules to avoid either push (branch) pipelines or merge request pipelines.
  • do not mix only/except jobs with rules jobs in the same pipeline.
  • For behavior similar to the only/except keywords, you can check the value of the $CI_PIPELINE_SOURCE variable
  • commonly used variables for if clauses
  • rules:changes expressions to determine when to add jobs to a pipeline
  • Use !reference tags to reuse rules in different jobs.
  • Use except to define when a job does not run.
  • only or except used without refs is the same as only:refs / except:refs
  • If you change multiple files, but only one file ends in .md, the build job is still skipped.
  • If you use multiple keywords with only or except, the keywords are evaluated as a single conjoined expression.
  • only includes the job if all of the keys have at least one condition that matches.
  • except excludes the job if any of the keys have at least one condition that matches.
  • With only, individual keys are logically joined by an AND
  • With except, individual keys are logically joined by an OR
  • To specify a job as manual, add when: manual to the job in the .gitlab-ci.yml file.
  • Use protected environments to define a list of users authorized to run a manual job.
  • Use when: delayed to execute scripts after a waiting period, or if you want to avoid jobs immediately entering the pending state.
  • To split a large job into multiple smaller jobs that run in parallel, use the parallel keyword
  • run a trigger job multiple times in parallel in a single pipeline, but with different variable values for each instance of the job.
  • The @ symbol denotes the beginning of a ref’s repository path. To match a ref name that contains the @ character in a regular expression, you must use the hex character code match \x40.
  • Compare a variable to a string
  • Check if a variable is undefined
  • Check if a variable is empty
  • Check if a variable exists
  • Check if a variable is empty
  • Matches are found when using =~.
  • Matches are not found when using !~.
  • Join variable expressions together with && or ||
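As a sketch of how these keywords combine, a hedged .gitlab-ci.yml fragment; the job name, script, and docs/ path are illustrative, while the variables are GitLab's predefined CI/CD variables:

    workflow:
      rules:
        # Prefer merge request pipelines; drop branch pipelines for branches with an open MR
        - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
        - if: '$CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS'
          when: never
        - if: '$CI_COMMIT_BRANCH'

    build-docs:
      script: ./scripts/build-docs.sh
      rules:
        # Rules are evaluated in order until the first match
        - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
          changes:
            - docs/**/*
        - when: manual          # run manually in all other cases
          allow_failure: true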
張 旭

Trunk-based Development | Atlassian - 0 views

  • Trunk-based development is a version control management practice where developers merge small, frequent updates to a core “trunk” or main branch.
  • Gitflow and trunk-based development. 
  • Gitflow, which was popularized first, is a stricter development model where only certain individuals can approve changes to the main code. This maintains code quality and minimizes the number of bugs.
  • Trunk-based development is a more open model since all developers have access to the main code. This enables teams to iterate quickly and implement CI/CD.
  • Developers can create short-lived branches with a few small commits compared to other long-lived feature branching strategies.
  • Gitflow is an alternative Git branching model that uses long-lived feature branches and multiple primary branches.
  • Gitflow also has separate primary branch lines for development, hotfixes, features, and releases.
  • Trunk-based development is far more simplified since it focuses on the main branch as the source of fixes and releases.
  • Trunk-based development eases the friction of code integration.
  • trunk-based development model reduces these conflicts.
  • Adding an automated test suite and code coverage monitoring for this stream of commits enables continuous integration.
  • When new code is merged into the trunk, automated integration and code coverage tests run to validate the code quality.
  • Trunk-based development strives to keep the trunk branch “green”, meaning it's ready to deploy at any commit.
  • With continuous integration, developers perform trunk-based development in conjunction with automated tests that run after each commit to a trunk.
  • If trunk-based development was like music it would be a rapid staccato -- short, succinct notes in rapid succession, with the repository commits being the notes.
  • Instead of creating a feature branch and waiting to build out the complete specification, developers can instead create a trunk commit that introduces the feature flag and pushes new trunk commits that build out the feature specification within the flag.
  • Automated testing is necessary for any modern software project intending to achieve CI/CD.
  • Short running unit and integration tests are executed during development and upon code merge.
  • Automated tests provide a layer of preemptive code review.
  • Once a branch merges, it is best practice to delete it.
  • A repository with a large amount of active branches has some unfortunate side effects
  • Merge branches to the trunk at least once a day
  • The “continuous” in CI/CD implies that updates are constantly flowing.
張 旭

Overriding Auto Devops - 0 views

  • most customers need to modify the DevOps pipeline to suit their needs
  • include Auto Devops and override it.
  • include all of Auto Devops, just as if the Auto Devops checkbox were checked for the project
  • skips for all the scans, as a way of speeding up the build process while working on the CI configuration
  • The Auto Devops test job, which uses Herokuish for testing, does not rely on the Docker image that’s generated during the Build job
  • moving the Test job to the Build stage to speed things along
  • Literally any part of Auto Devops can be overridden in your own CI configuration.
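A minimal sketch of the include-and-override pattern, assuming the standard Auto DevOps template name; the disable variables and the stage move are illustrative of the annotations above, not a verbatim recipe:

    include:
      - template: Auto-DevOps.gitlab-ci.yml   # include all of Auto DevOps

    variables:
      # Skip scan jobs while iterating on the CI configuration (assumed toggles)
      SAST_DISABLED: "true"
      DEPENDENCY_SCANNING_DISABLED: "true"
      CONTAINER_SCANNING_DISABLED: "true"

    # Override only what differs: move the Herokuish-based test job to the Build stage
    test:
      stage: build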
張 旭

stakater/Reloader: A Kubernetes controller to watch changes in ConfigMap and ... - 0 views

shared by 張 旭 on 09 Oct 21
  • reloader.stakater.com/search and reloader.stakater.com/auto do not work together.
  • If you have the reloader.stakater.com/auto: "true" annotation on your deployment, then it will always restart upon a change in configmaps or secrets it uses,
  • By default reloader watches in all namespaces.
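For reference, a minimal Deployment fragment with the auto annotation; resource and image names are illustrative:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
      annotations:
        reloader.stakater.com/auto: "true"   # rolling restart on changes to ConfigMaps/Secrets it uses
    spec:
      selector:
        matchLabels: { app: my-app }
      template:
        metadata:
          labels: { app: my-app }
        spec:
          containers:
            - name: my-app
              image: registry.example.com/my-app:1.0
              envFrom:
                - configMapRef: { name: my-app-config }   # a change here triggers the restart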
張 旭

Helm | Named Templates - 0 views

  • a special-purpose include function that works similarly to the template action.
  • when naming templates: template names are global.
  • templates in subcharts are compiled together with top-level templates, you should be careful to name your templates with chart-specific names.
  • One popular naming convention is to prefix each defined template with the name of the chart: {{ define "mychart.labels" }}
  • using the specific chart name as a prefix we can avoid any conflicts
  • But files whose name begins with an underscore (_) are assumed to not have a manifest inside.
  • The define action allows us to create a named template inside of a template file.
  • include it with the template action
  • a define does not produce output unless it is called with a template
  • define functions should have a simple documentation block ({{/* ... */}}) describing what they do.
  • template names are global.
  • A popular naming convention is to prefix each defined template with the name of the chart
  • When a named template (created with define) is rendered, it will receive the scope passed in by the template call.
  • No scope was passed in, so within the template we cannot access anything in .
  • Note that we pass . at the end of the template call. We could just as easily pass .Values or .Values.favorite or whatever scope we want
  • the template that is substituted in has the text aligned to the left. Because template is an action, and not a function, there is no way to pass the output of a template call to other functions; the data is simply inserted inline.
  • use indent to indent
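A small sketch of define/include with the chart-name prefix; "mychart" and the label values follow the docs-style example rather than any specific chart:

    {{/* templates/_helpers.tpl — underscore files produce no manifest */}}
    {{- define "mychart.labels" -}}
    labels:
      generator: helm
      chart: {{ .Chart.Name }}
    {{- end }}

    # templates/configmap.yaml — pass "." so the named template receives a scope
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: {{ .Release.Name }}-configmap
      {{- include "mychart.labels" . | nindent 2 }}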
張 旭

Helm | Variables - 0 views

shared by 張 旭 on 03 Oct 21
  • there is one variable that is always global - $ - this variable will always point to the root context.
  • # Many helm templates would use `.` below, but that will not work
  • {{- range
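A hedged illustration of $ inside a range block; the tlsSecrets value structure is an assumption for the example:

    {{- range .Values.tlsSecrets }}
    apiVersion: v1
    kind: Secret
    metadata:
      name: {{ .name }}
      labels:
        # Inside range, "." is the current list item; "$" still points at the root context
        app.kubernetes.io/name: {{ $.Chart.Name }}
        app.kubernetes.io/instance: {{ $.Release.Name }}
    type: kubernetes.io/tls
    data:
      tls.crt: {{ .certificate }}
      tls.key: {{ .key }}
    ---
    {{- end }}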
張 旭

Deploy tokens | GitLab - 0 views

  • If a user creates one named gitlab-deploy-token, the username and token of the deploy token is automatically exposed to the CI/CD jobs as CI/CD variables: CI_DEPLOY_USER and CI_DEPLOY_PASSWORD
  • The special handling for the gitlab-deploy-token deploy token is not implemented for group deploy tokens.
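A minimal job sketch using the auto-exposed variables; the job name, image tags, and the pull step are illustrative:

    pull-image:
      image: docker:latest
      services:
        - docker:dind
      script:
        # gitlab-deploy-token is exposed automatically as CI_DEPLOY_USER / CI_DEPLOY_PASSWORD
        - echo "$CI_DEPLOY_PASSWORD" | docker login -u "$CI_DEPLOY_USER" --password-stdin "$CI_REGISTRY"
        - docker pull "$CI_REGISTRY_IMAGE:latest"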
張 旭

How to configure a Kubernetes Multi-Pod Deployment - Stack Overflow - 0 views

  • A Deployment is meant to represent a single group of PODs fulfilling a single purpose together.
  • Deployments are meant to contain stateless services. If you need to store a state you need to create StatefulSet instead
張 旭

Helm | Flow Control - 0 views

  • Control structures (called "actions" in template parlance) provide you, the template author, with the ability to control the flow of a template's generation
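For example, a hedged if/eq block in the docs' ConfigMap style; the favorite.drink value is illustrative:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: {{ .Release.Name }}-configmap
    data:
      drink: {{ .Values.favorite.drink | quote }}
      {{- if eq .Values.favorite.drink "coffee" }}
      mug: "true"   # rendered only when the condition is true
      {{- end }}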
張 旭

Helm | Getting Started - 0 views

  • The templates/ directory is for template files. When Helm evaluates a chart, it will send all of the files in the templates/ directory through the template rendering engine. It then collects the results of those templates and sends them on to Kubernetes.
  • The charts/ directory may contain other charts (which we call subcharts).
  • we recommend using the suffix .yaml for YAML files and .tpl for helpers.
  • The helm get manifest command takes a release name (full-coral) and prints out all of the Kubernetes resources that were uploaded to the server.
  • Each file begins with --- to indicate the start of a YAML document, and then is followed by an automatically generated comment line that tells us what template file generated this YAML document.
  • name: field is limited to 63 characters because of limitations to the DNS system.
  • The template directive {{ .Release.Name }} injects the release name into the template. The values that are passed into a template can be thought of as namespaced objects, where a dot (.) separates each namespaced element.
  • The leading dot before Release indicates that we start with the top-most namespace for this scope
  • helm install --debug --dry-run goodly-guppy ./mychart. This will render the templates. But instead of installing the chart, it will return the rendered template to you
  • Using --dry-run will make it easier to test your code, but it won't ensure that Kubernetes itself will accept the templates you generate.
  • It's best not to assume that your chart will install just because --dry-run works.
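The annotations above correspond to a template and dry-run roughly like this; "mychart" and "goodly-guppy" follow the docs' example names:

    # templates/configmap.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      # .Release.Name injects the release name; the name: field is capped at 63 characters
      name: {{ .Release.Name }}-configmap
    data:
      myvalue: "Hello World"

    # Render the templates without installing anything:
    #   helm install --debug --dry-run goodly-guppy ./mychart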
張 旭

Helm | Template Function List - 0 views

shared by 張 旭 on 02 Oct 21
  • The definition of "empty" depends on type: Numeric: 0, String: "", Lists: [], Dicts: {}, Boolean: false, and always nil (aka null).
  • The empty function returns true if the given value is considered empty
  • in Go template conditionals, emptiness is calculated for you. Thus, you rarely need if empty .Foo. Instead, just use if .Foo
  • Unconditionally returns an empty string and an error with the specified text.
  • The ternary function takes two values, and a test value. If the test value is true, the first value will be returned. If the test value is empty, the second value will be returned.
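Hedged one-line examples of the functions mentioned above; the values keys are illustrative:

    # Truthiness is computed for you: prefer `if .Values.foo` over `if not (empty .Values.foo)`
    enabled: {{ if .Values.foo }}"yes"{{ else }}"no"{{ end }}

    # fail: unconditionally abort rendering with an error message
    {{- if and .Values.tls.enabled (not .Values.tls.secretName) }}
    {{- fail "tls.secretName is required when tls.enabled is true" }}
    {{- end }}

    # ternary: first value when the test is true, second otherwise
    replicas: {{ ternary 3 1 .Values.highAvailability }}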
張 旭

Helm | Template Functions and Pipelines - 0 views

  • When injecting strings from the .Values object into the template, we ought to quote these strings.
  • Helm has over 60 available functions. Some of them are defined by the Go template language itself. Most of the others are part of the Sprig template library
  • the "Helm template language" as if it is Helm-specific, it is actually a combination of the Go template language, some extra functions, and a variety of wrappers to expose certain objects to the templates.
  • Drawing on a concept from UNIX, pipelines are a tool for chaining together a series of template commands to compactly express a series of transformations.
  • the default function: default DEFAULT_VALUE GIVEN_VALUE
  • all static default values should live in the values.yaml, and should not be repeated using the default command (otherwise they would be redundant).
  • the default command is perfect for computed values, which can not be declared inside values.yaml.
  • When lookup returns an object, it will return a dictionary.
  • The synopsis of the lookup function is lookup apiVersion, kind, namespace, name -> resource or resource list
  • When no object is found, an empty value is returned. This can be used to check for the existence of an object.
  • The lookup function uses Helm's existing Kubernetes connection configuration to query Kubernetes.
  • Helm is not supposed to contact the Kubernetes API Server during a helm template or a helm install|upgrade|delete|rollback --dry-run, so the lookup function will return an empty list (i.e. dict) in such a case.
  • the operators (eq, ne, lt, gt, and, or and so on) are all implemented as functions. In pipelines, operations can be grouped with parentheses ((, and )).
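A short sketch combining quoting, default, pipelines, and lookup; the favorite.* values follow the docs' example, and the namespace check is an illustrative use of lookup:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: {{ .Release.Name }}-configmap
    data:
      # Quote strings from .Values; chain transformations with pipelines
      food: {{ .Values.favorite.food | upper | quote }}
      # default is for computed/optional values not declared in values.yaml
      drink: {{ .Values.favorite.drink | default "tea" | quote }}
      # lookup queries the live cluster; under `helm template` / --dry-run it returns an empty dict
      {{- $ns := lookup "v1" "Namespace" "" .Release.Namespace }}
      namespaceExists: {{ not (empty $ns) | quote }}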
張 旭

Helm | Values Files - 0 views

shared by 張 旭 on 02 Oct 21
  • a subchart, the values.yaml file of a parent chart
  • Individual parameters passed with --set
  • The list above is in order of specificity: values.yaml is the default, which can be overridden by a parent chart's values.yaml, which can in turn be overridden by a user-supplied values file, which can in turn be overridden by --set parameters.
  • --set has a higher precedence than the default values.yaml file
  • Values files can contain more structured content
  • If you need to delete a key from the default values, you may override the value of the key to be null, in which case Helm will remove the key from the overridden values merge.
  • Kubernetes would then fail because you can not declare more than one livenessProbe handler.
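A hedged illustration of the precedence chain and the null-deletion trick; the livenessProbe example mirrors the docs, while file names and the --set key are illustrative:

    # values.yaml (chart default)
    livenessProbe:
      httpGet:
        path: /healthz
        port: http

    # override.yaml (user-supplied with -f): null removes the default httpGet handler
    livenessProbe:
      httpGet: null
      exec:
        command: ["cat", "/tmp/healthy"]

    # --set has the highest precedence of all:
    #   helm install my-release ./mychart -f override.yaml --set replicaCount=3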
張 旭

Helm | Built-in Objects - 0 views

  • The built-in values always begin with a capital letter.
  • use only initial lower case letters in order to distinguish local names from those built-in.
  • Files.Get is a function for getting a file by name
  • While you cannot use it to access templates, you can use it to access other files in the chart.
  • Release: This object describes the release itself.
  • Values: Values passed into the template from the values.yaml file
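For instance, a hedged fragment touching Release, Chart, Values, and Files.Get; the config.ini file and owner value are assumed to exist in the chart:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      # Built-in objects start with a capital letter: .Release, .Chart, .Values, .Files
      name: {{ .Release.Name }}-files
      labels:
        chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    data:
      owner: {{ .Values.owner | default "unknown" | quote }}
      config.ini: |-
    {{ .Files.Get "config.ini" | indent 4 }}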
張 旭

Using NGINX Logging for Application Performance Monitoring - 0 views

  • One way of taking advantage of the flexibility of NGINX access logging is application performance monitoring (APM).
  • it’s simple to get detailed visibility into the performance of your applications by adding timing values to your code and passing them as response headers for inclusion in the NGINX access log.
  • $request_time – Full request time, starting when NGINX reads the first byte from the client and ending when NGINX sends the last byte of the response body
  • $upstream_response_time – Time between establishing a connection to an upstream server and receiving the last byte of the response body
  • capture timings in the application itself and include them as response headers, which NGINX then captures in its access log.
  • $upstream_header_time – Time between establishing a connection to an upstream server and receiving the first byte of the response header
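A sketch of a log_format capturing those timings plus an application-supplied timing header; the X-DB-Time header, format name, and upstream address are assumptions:

    http {
        log_format apm '$remote_addr "$request" '
                       'request_time=$request_time '
                       'upstream_header_time=$upstream_header_time '
                       'upstream_response_time=$upstream_response_time '
                       'app_db_time=$upstream_http_x_db_time';   # header set by the application

        server {
            listen 80;
            access_log /var/log/nginx/apm.log apm;
            location / {
                proxy_pass http://127.0.0.1:8080;
            }
        }
    }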
張 旭

Introducing the MinIO Operator and Operator Console - 0 views

  • Object-storage-as-a-service is a game changer for IT.
  • provision multi-tenant object storage as a service.
  • have the skill set to create, deploy, tune, scale and manage modern, application oriented object storage using Kubernetes
  • MinIO is purpose-built to take full advantage of the Kubernetes architecture.
  • MinIO and Kubernetes work together to simplify infrastructure management, providing a way to manage object storage infrastructure within the Kubernetes toolset.  
  • The operator pattern extends Kubernetes's familiar declarative API model with custom resource definitions (CRDs) to perform common operations like resource orchestration, non-disruptive upgrades, cluster expansion and to maintain high-availability
  • The Operator builds on the kubectl command set that the Kubernetes community is already familiar with and adds the kubectl minio plugin. The MinIO Operator and the MinIO kubectl plugin facilitate the deployment and management of MinIO Object Storage on Kubernetes - which is how multi-tenant object storage as a service is delivered.
  • choosing a leader for a distributed application without an internal member election process
  • The Operator Console makes Kubernetes object storage easier still. In this graphical user interface, MinIO created something so simple that anyone in the organization can create, deploy and manage object storage as a service.
  • The primary unit of managing MinIO on Kubernetes is the tenant.
  • The MinIO Operator can allocate multiple tenants within the same Kubernetes cluster.
  • Each tenant, in turn, can have different capacity (e.g., a small 500GB tenant vs a 100TB tenant), resources (1000m CPU and 4Gi RAM vs 4000m CPU and 16Gi RAM) and servers (4 pods vs 16 pods), as well as separate configuration for identity providers, encryption, and versions.
  • each tenant is a cluster of server pools (independent sets of nodes with their own compute, network, and storage resources), that, while sharing the same physical infrastructure, are fully isolated from each other in their own namespaces.
  • Each tenant runs their own MinIO cluster, fully isolated from other tenants
  • Each tenant scales independently by federating clusters across geographies.
張 旭

Services | GitLab - 0 views

  • The services keyword defines a Docker image that runs during a job linked to the Docker image that the image keyword defines. This allows you to access the service image during build time.
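For example, a hedged job using a service container; image versions, database credentials, and the test command are illustrative:

    test:
      image: ruby:3.2
      services:
        - postgres:15          # reachable from the job container at host "postgres"
      variables:
        POSTGRES_DB: test
        POSTGRES_PASSWORD: secret
        DATABASE_URL: "postgres://postgres:secret@postgres:5432/test"
      script:
        - bundle exec rspec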
張 旭

Understanding GitHub Actions - GitHub Docs - 0 views

  • A job is a set of steps that execute on the same runner. By default, a workflow with multiple jobs will run those jobs in parallel.
  • Workflows are made up of one or more jobs and can be scheduled or triggered by an event
  • An event is a specific activity that triggers a workflow.
  • configure a workflow to run jobs sequentially.
  • A step is an individual task that can run commands in a job. A step can be either an action or a shell command.
  • Each step in a job executes on the same runner, allowing the actions in that job to share data with each other.
  • Actions are standalone commands that are combined into steps to create a job.
  • Actions are the smallest portable building block of a workflow.
  • To use an action in a workflow, you must include it as a step.
  • You can use a runner hosted by GitHub, or you can host your own.
  • GitHub-hosted runners are based on Ubuntu Linux, Microsoft Windows, and macOS, and each job in a workflow runs in a fresh virtual environment.
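A minimal workflow tying the pieces together; the file path, job names, and step contents are illustrative:

    # .github/workflows/ci.yml
    name: CI
    on: [push, pull_request]            # events that trigger the workflow

    jobs:
      test:                             # jobs run in parallel unless ordered with `needs`
        runs-on: ubuntu-latest          # GitHub-hosted runner
        steps:
          - uses: actions/checkout@v4   # a step that uses an action
          - run: npm ci && npm test     # a step that runs shell commands
      deploy:
        needs: test                     # runs sequentially, only after `test` succeeds
        runs-on: ubuntu-latest
        steps:
          - run: echo "deploying..."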