Larvata / Group items tagged: productivity

張 旭

Auto DevOps | GitLab - 0 views

  • Auto DevOps provides pre-defined CI/CD configuration which allows you to automatically detect, build, test, deploy, and monitor your applications
  • Just push your code and GitLab takes care of everything else.
  • Auto DevOps will be automatically disabled on the first pipeline failure.
  • Your project will continue to use an alternative CI/CD configuration file if one is found
  • Auto DevOps works with any Kubernetes cluster;
  • GitLab Runner must use the Docker or Kubernetes executor, with privileged mode enabled.
  • Base domain (needed for Auto Review Apps and Auto Deploy)
  • Kubernetes (needed for Auto Review Apps, Auto Deploy, and Auto Monitoring)
  • Prometheus (needed for Auto Monitoring)
  • Prometheus must be configured to scrape your Kubernetes cluster.
  • The base domain can be set at the project level as a variable: KUBE_INGRESS_BASE_DOMAIN
  • A wildcard DNS A record matching the base domain(s) is required
  • Once set up, all requests will hit the load balancer, which in turn will route them to the Kubernetes pods that run your application(s).
  • review/ (every environment starting with review/)
  • staging
  • production
  • You need to define a separate KUBE_INGRESS_BASE_DOMAIN variable for each of the environments above.
  • Continuous deployment to production: Enables Auto Deploy with master branch directly deployed to production.
  • Continuous deployment to production using timed incremental rollout
  • Automatic deployment to staging, manual deployment to production
  • Auto Build creates a build of the application using an existing Dockerfile or Heroku buildpacks.
  • If a project’s repository contains a Dockerfile, Auto Build will use docker build to create a Docker image.
  • Each buildpack requires certain files to be in your project’s repository for Auto Build to successfully build your application.
  • Auto Test automatically runs the appropriate tests for your application using Herokuish and Heroku buildpacks by analyzing your project to detect the language and framework.
  • Auto Code Quality uses the Code Quality image to run static analysis and other code checks on the current code.
  • Static Application Security Testing (SAST) uses the SAST Docker image to run static analysis on the current code and checks for potential security issues.
  • Dependency Scanning uses the Dependency Scanning Docker image to run analysis on the project dependencies and checks for potential security issues.
  • License Management uses the License Management Docker image to search the project dependencies for their license.
  • Vulnerability Static Analysis for containers uses Clair to run static analysis on a Docker image and checks for potential security issues.
  • Review Apps are temporary application environments based on the branch’s code so developers, designers, QA, product managers, and other reviewers can actually see and interact with code changes as part of the review process. Auto Review Apps create a Review App for each branch. Auto Review Apps will deploy your app to your Kubernetes cluster only. When no cluster is available, no deployment will occur.
  • The Review App will have a unique URL based on the project ID, the branch or tag name, and a unique number, combined with the Auto DevOps base domain.
  • Review apps are deployed using the auto-deploy-app chart with Helm, which can be customized.
  • Your apps should not be manipulated outside of Helm (using Kubernetes directly).
  • Dynamic Application Security Testing (DAST) uses the popular open source tool OWASP ZAProxy to perform an analysis on the current code and checks for potential security issues.
  • Auto Browser Performance Testing utilizes the Sitespeed.io container to measure the performance of a web page.
  • add the paths to a file named .gitlab-urls.txt in the root directory, one per line.
  • After a branch or merge request is merged into the project’s default branch (usually master), Auto Deploy deploys the application to a production environment in the Kubernetes cluster, with a namespace based on the project name and unique project ID
  • Auto Deploy doesn’t include deployments to staging or canary by default, but the Auto DevOps template contains job definitions for these tasks if you want to enable them.
  • Apps are deployed using the auto-deploy-app chart with Helm.
  • For internal and private projects, a GitLab Deploy Token will be automatically created when Auto DevOps is enabled and the Auto DevOps settings are saved.
  • If the GitLab Deploy Token cannot be found, CI_REGISTRY_PASSWORD is used. Note that CI_REGISTRY_PASSWORD is only valid during deployment.
  • If present, DB_INITIALIZE will be run as a shell command within an application pod as a helm post-install hook.
  • Because it is a post-install hook, once any deploy succeeds, DB_INITIALIZE is not processed on subsequent deploys.
  • DB_MIGRATE will be run as a shell command within an application pod as a helm pre-upgrade hook.
    • 張 旭
       
      If the project type is different, you have to check how the buildpacks invoke the relevant command, e.g. Laravel's migration.
    • 張 旭
       
      If the image is built from your own Dockerfile, it seems you don't need to care about the buildpacks approach at all.
  • Once your application is deployed, Auto Monitoring makes it possible to monitor your application’s server and response metrics right out of the box.
  • annotate the NGINX Ingress deployment to be scraped by Prometheus using prometheus.io/scrape: "true" and prometheus.io/port: "10254"
  • If you are also using Auto Review Apps and Auto Deploy and choose to provide your own Dockerfile, make sure you expose your application to port 5000 as this is the port assumed by the default Helm chart.
  • While Auto DevOps provides great defaults to get you started, you can customize almost everything to fit your needs; from custom buildpacks, to Dockerfiles, Helm charts, or even copying the complete CI/CD configuration into your project to enable staging and canary deployments, and more.
  • If your project has a Dockerfile in the root of the project repo, Auto DevOps will build a Docker image based on the Dockerfile rather than using buildpacks.
  • Auto DevOps uses Helm to deploy your application to Kubernetes.
  • Bundled chart - If your project has a ./chart directory with a Chart.yaml file in it, Auto DevOps will detect the chart and use it instead of the default one.
  • Create a project variable AUTO_DEVOPS_CHART with the URL of a custom chart to use or create two project variables AUTO_DEVOPS_CHART_REPOSITORY with the URL of a custom chart repository and AUTO_DEVOPS_CHART with the path to the chart.
  • make use of the HELM_UPGRADE_EXTRA_ARGS environment variable to override the default values in the values.yaml file in the default Helm chart.
  • specify the use of a custom Helm chart per environment by scoping the environment variable to the desired environment.
    • 張 旭
       
      Auto DevOps is essentially a ready-made, works-out-of-the-box .gitlab-ci.yml that someone has already written for you.
  • Your additions will be merged with the Auto DevOps template using the behaviour described for include
  • copy and paste the contents of the Auto DevOps template into your project and edit this as needed.
  • In order to support applications that require a database, PostgreSQL is provisioned by default.
  • Set up the replica variables using a project variable and scale your application by just redeploying it!
  • You should not scale your application using Kubernetes directly.
  • Some applications need to define secret variables that are accessible by the deployed application.
  • Auto DevOps detects variables where the key starts with K8S_SECRET_ and makes these prefixed variables available to the deployed application as environment variables.
  • Auto DevOps pipelines will take your application secret variables to populate a Kubernetes secret.
  • Environment variables are generally considered immutable in a Kubernetes pod.
  • If you update an application secret without changing any code and then manually create a new pipeline, you will find that any running application pods will not have the updated secrets.
  • Variables with multiline values are not currently supported
  • The normal behavior of Auto DevOps is to use Continuous Deployment, pushing automatically to the production environment every time a new pipeline is run on the default branch.
  • If STAGING_ENABLED is defined in your project (e.g., set STAGING_ENABLED to 1 as a CI/CD variable), then the application will be automatically deployed to a staging environment, and a production_manual job will be created for you when you’re ready to manually deploy to production.
  • If CANARY_ENABLED is defined in your project (e.g., set CANARY_ENABLED to 1 as a CI/CD variable), then two manual jobs will be created: canary, which deploys the application to the canary environment, and production_manual, which you use when you’re ready to manually deploy to production.
  • If INCREMENTAL_ROLLOUT_MODE is set to manual in your project, then instead of the standard production job, four different manual jobs will be created: rollout 10%, rollout 25%, rollout 50%, and rollout 100%.
  • The percentage is based on the REPLICAS variable and defines the number of pods you want to have for your deployment.
  • To start a job, click on the play icon next to the job’s name.
  • Once you get to 100%, you cannot scale down, and you’d have to roll back by redeploying the old version using the rollback button in the environment page.
  • With INCREMENTAL_ROLLOUT_MODE set to manual and with STAGING_ENABLED
  • not all buildpacks support Auto Test yet
  • When a project has been marked as private, GitLab’s Container Registry requires authentication when downloading containers.
  • Authentication credentials will be valid while the pipeline is running, allowing for a successful initial deployment.
  • After the pipeline completes, Kubernetes will no longer be able to access the Container Registry.
  • We strongly advise using GitLab Container Registry with Auto DevOps in order to simplify configuration and prevent any unforeseen issues.
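Pulling several of the highlights above together: customizing Auto DevOps usually amounts to a few lines of project configuration. A minimal sketch of such a .gitlab-ci.yml, assuming an example domain and illustrative values (the include template and variable names are the ones quoted from the GitLab docs above; everything else is hypothetical):

```yaml
include:
  - template: Auto-DevOps.gitlab-ci.yml   # additions merge per `include` behaviour

variables:
  # Requires a wildcard DNS A record pointing at the cluster's load balancer.
  KUBE_INGRESS_BASE_DOMAIN: "example.com"
  # Deploy to staging automatically; gate production behind a manual job.
  STAGING_ENABLED: "1"
  # Scale by redeploying, never by manipulating Kubernetes directly.
  PRODUCTION_REPLICAS: "3"
  # Run as Helm post-install / pre-upgrade hooks inside an application pod.
  DB_INITIALIZE: "bundle exec rake db:setup"   # hypothetical command
  DB_MIGRATE: "bundle exec rake db:migrate"    # hypothetical command
  # Keys prefixed with K8S_SECRET_ surface as env vars in the deployed app;
  # real secret values belong in the CI/CD settings UI, not in the repo.
  K8S_SECRET_EXAMPLE_TOKEN: "placeholder"
```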
張 旭

Have you worshipped the airplane today? - On cargo cults in agile development - 敏捷進化趣 Agile FunEvo - 0 views

  •  
    "沒有溝通(產品的)願景和策略 產品藍圖和發佈日期在一年前就由CTO規劃好了 組織內沒有人跟顧客對談 CTO和利害關係人堅持所有的改動都要經由他們批准 因為保密資安等理由,禁止使用實體的看板或告示 利害關係人直接跟CTO對談,跳過Product Owner 由利害關係人來決定交付產品增量,而不是Product Owner 專案/產品只有在完成時才交付,而不是增量式的交付 避免利害關係人直接跟開發團隊對談 Product Backlog是由一個產品委員會決定的 就算對功能的價值有所懷疑,但還是硬上了(為了年終獎金…) 業務為了成交,答應客戶增加目前不存在的功能,而Product Owner不知情 就算是不重要的問題,也有固定的進度表和期限 負責產品管理的角色沒有取得商業智能(BI)資訊的權限,沒有充足的資訊和數據幫助決定 利害關係人使用需求文件來和產品與工程部門溝通 Product Owner大部分的時間都在撰寫和管理使用者故事(User Stories) 在Sprint開始後不久Sprint Backlog就變了 專門成立一個開發團隊來修Bugs和處理小的需求 利害關係人沒有參加過Scrum活動(例如Sprint Planning和Sprint Review) 主要是用『速率(Velocity)符合當初的承諾』來當指標評估Scrum是否成功 開發人員沒有參與創造使用者故事 同時處理的專案數量和工作會改變開發團隊的人數跟組成方式 在Daily Stand up中團隊成員對ScrumMaster報告 定期舉行自省會議(Retrospectives),但沒有改變隨之發生 開發團隊並不是跨功能(Cross Functional),而要靠其他團隊或部門才能完成工作"
張 旭

Introduction to GitLab Flow | GitLab - 0 views

  • GitLab flow is a clearly defined set of best practices. It combines feature-driven development and feature branches with issue tracking.
  • In Git, you add files from the working copy to the staging area. After that, you commit them to your local repo. The third step is pushing to a shared remote repository.
  • branching model
  • The biggest problem is that many long-running branches emerge that all contain part of the changes.
  • It is a convention to call your default branch master and to mostly branch from and merge to this.
  • Nowadays, most organizations practice continuous delivery, which means that your default branch can be deployed.
  • Continuous delivery removes the need for hotfix and release branches, including all the ceremony they introduce.
  • Merging everything into the master branch and frequently deploying means you minimize the amount of unreleased code, which is in line with lean and continuous delivery best practices.
  • GitHub flow assumes you can deploy to production every time you merge a feature branch.
  • You can deploy a new version by merging master into the production branch. If you need to know what code is in production, you can just checkout the production branch to see.
  • Production branch
  • Environment branches
  • have an environment that is automatically updated to the master branch.
  • deploy the master branch to staging.
  • To deploy to pre-production, create a merge request from the master branch to the pre-production branch.
  • Go live by merging the pre-production branch into the production branch.
  • Release branches
  • work with release branches if you need to release software to the outside world.
  • each branch contains a minor version
  • After announcing a release branch, only add serious bug fixes to the branch.
  • merge these bug fixes into master, and then cherry-pick them into the release branch.
  • Merging into master and then cherry-picking into release is called an “upstream first” policy
  • Tools such as GitHub and Bitbucket choose the name “pull request” since the first manual action is to pull the feature branch.
  • Tools such as GitLab and others choose the name “merge request” since the final action is to merge the feature branch.
  • If you work on a feature branch for more than a few hours, it is good to share the intermediate result with the rest of the team.
  • the merge request automatically updates when new commits are pushed to the branch.
  • If the assigned person does not feel comfortable, they can request more changes or close the merge request without merging.
  • In GitLab, it is common to protect the long-lived branches, e.g., the master branch, so that most developers can’t modify them.
  • if you want to merge into a protected branch, assign your merge request to someone with maintainer permissions.
  • After you merge a feature branch, you should remove it from the source control software.
  • Having a reason for every code change helps to inform the rest of the team and to keep the scope of a feature branch small.
  • If there is no issue yet, create the issue
  • The issue title should describe the desired state of the system.
  • For example, the issue title “As an administrator, I want to remove users without receiving an error” is better than “Admin can’t remove users.”
  • create a branch for the issue from the master branch
  • If you open the merge request but do not assign it to anyone, it is a “Work In Progress” merge request.
  • Start the title of the merge request with [WIP] or WIP: to prevent it from being merged before it’s ready.
  • When they press the merge button, GitLab merges the code and creates a merge commit that makes this event easily visible later on.
  • Merge requests always create a merge commit, even when the branch could be merged without one. This merge strategy is called “no fast-forward” in Git.
  • Suppose that a branch is merged but a problem occurs and the issue is reopened. In this case, it is no problem to reuse the same branch name since the first branch was deleted when it was merged.
  • At any time, there is at most one branch for every issue.
  • It is possible that one feature branch solves more than one issue.
  • GitLab closes these issues when the code is merged into the default branch.
  • If you have an issue that spans across multiple repositories, create an issue for each repository and link all issues to a parent issue.
  • use an interactive rebase (rebase -i) to squash multiple commits into one or reorder them.
  • you should never rebase commits you have pushed to a remote server.
  • Rebasing creates new commits for all your changes, which can cause confusion because the same change would have multiple identifiers.
  • if someone has already reviewed your code, rebasing makes it hard to tell what changed since the last review.
  • never rebase commits authored by other people.
  • it is a bad idea to rebase commits that you have already pushed.
  • If you revert a merge commit and then change your mind, revert the revert commit to redo the merge.
  • Often, people avoid merge commits by just using rebase to reorder their commits after the commits on the master branch.
  • Using rebase prevents a merge commit when merging master into your feature branch, and it creates a neat linear history.
  • every time you rebase, you have to resolve similar conflicts.
  • Sometimes you can reuse recorded resolutions (rerere), but merging is better since you only have to resolve conflicts once.
  • A good way to prevent creating many merge commits is to not frequently merge master into the feature branch.
  • keep your feature branches short-lived.
  • Most feature branches should take less than one day of work.
  • If your feature branches often take more than a day of work, try to split your features into smaller units of work.
  • You could also use feature toggles to hide incomplete features so you can still merge back into master every day.
  • you should try to prevent merge commits, but not eliminate them.
  • Your codebase should be clean, but your history should represent what actually happened.
  • If you rebase code, the history is incorrect, and there is no way for tools to remedy this because they can’t deal with changing commit identifiers
  • Commit often and push frequently
  • You should push your feature branch frequently, even when it is not yet ready for review.
  • A commit message should reflect your intention, not just the contents of the commit.
  • each merge request must be tested before it is accepted.
  • test the master branch after each change.
  • If new commits in master cause merge conflicts with the feature branch, merge master back into the branch to make the CI server re-run the tests.
  • When creating a feature branch, always branch from an up-to-date master.
  • Do not merge from upstream again if your code can work and merge cleanly without doing so.
張 旭

Basics - Træfik - 0 views

  • Modifier rules only modify the request. They do not have any impact on routing decisions being made.
  • A frontend consists of a set of rules that determine how incoming requests are forwarded from an entrypoint to a backend.
  • Entrypoints are the network entry points into Træfik
  • Modifiers and matchers
  • Matcher rules determine if a particular request should be forwarded to a backend
  • if any rule matches
  • if all rules match
  • In order to use regular expressions with Host and Path matchers, you must declare an arbitrarily named variable followed by the colon-separated regular expression, all enclosed in curly braces.
  • Use a Prefix matcher if your backend listens on a particular base path but also serves requests on sub-paths. For instance, PathPrefix: /products would match /products but also /products/shoes and /products/shirts. Since the path is forwarded as-is, your backend is expected to listen on /products
  • Use Path if your backend listens on the exact path only. For instance, Path: /products would match /products but not /products/shoes.
  • Modifier rules ALWAYS apply after the Matcher rules.
  • A backend is responsible for load-balancing the traffic coming from one or more frontends to a set of HTTP servers
  • wrr: Weighted Round Robin
  • drr: Dynamic Round Robin: increases weights on servers that perform better than others.
  • A circuit breaker can also be applied to a backend, preventing high loads on failing servers.
  • To proactively prevent backends from being overwhelmed with high load, a maximum connection limit can also be applied to each backend.
  • Sticky sessions are supported with both load balancers.
  • When sticky sessions are enabled, a cookie is set on the initial request.
  • The check is defined by a path appended to the backend URL and an interval (given in a format understood by time.ParseDuration) specifying how often the health check should be executed (the default being 30 seconds). Each backend must respond to the health check within 5 seconds.
  • The static configuration is the global configuration, which sets up connections to configuration backends and entrypoints.
  • We only need to enable the watch option to make Træfik watch configuration backend changes and generate its configuration automatically.
  • Separate the regular expression and the replacement by a space.
  • a comma-separated key/value pair where both key and value must be literals.
  • namespacing of your backends happens on the basis of hosts in addition to paths
  • Modifiers will be applied in a pre-determined order regardless of their order in the rule configuration section.
  • customize priority
  • Custom headers can be configured through the frontends, to add headers to either requests or responses that match the frontend's rules.
  • Security related headers (HSTS headers, SSL redirection, Browser XSS filter, etc) can be added and configured per frontend in a similar manner to the custom headers above.
  • Servers are simply defined using a url. You can also apply a custom weight to each server (this will be used by load-balancing).
  • Maximum connections can be configured by specifying an integer value for maxconn.amount, plus maxconn.extractorfunc, a strategy used to determine how to categorize requests when evaluating the maximum connections.
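Since Træfik is often driven by its Docker provider, the frontend/backend concepts above can be sketched as docker-compose labels (Træfik 1.x label names; the services, image, and limit values are illustrative assumptions):

```yaml
version: "3"
services:
  traefik:
    image: traefik:1.7
    command: --docker                 # watch Docker for backends
    ports:
      - "80:80"                       # the http entrypoint
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  products:
    image: example/products-api       # hypothetical backend
    labels:
      # Prefix matcher: matches /products and sub-paths like /products/shoes;
      # the path is forwarded as-is, so the backend must listen on /products.
      - "traefik.frontend.rule=PathPrefix:/products"
      # drr increases weights on servers that perform better than others.
      - "traefik.backend.loadbalancer.method=drr"
      # Cap concurrent connections, categorizing requests per client IP.
      - "traefik.backend.maxconn.amount=100"
      - "traefik.backend.maxconn.extractorfunc=client.ip"
      - "traefik.weight=10"           # custom per-server weight
```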
張 旭

The Twelve-Factor App - 0 views

  • Keep development, staging, and production as similar as possible
  • Developers write code, ops engineers deploy it.
  • The twelve-factor app is designed for continuous deployment by keeping the gap between development and production small
  • Backing services, such as the app’s database, queueing system, or cache, are one area where dev/prod parity is important
  • The twelve-factor developer resists the urge to use different backing services between development and production, even when adapters theoretically abstract away any differences in backing services.
  • declarative provisioning tools such as Chef and Puppet combined with light-weight virtual environments such as Docker and Vagrant allow developers to run local environments which closely approximate production environments.
  • all deploys of the app (developer environments, staging, production) should be using the same type and version of each of the backing services.
  •  
    "as similar as possible "
張 旭

The Asset Pipeline - Ruby on Rails Guides - 0 views

  • provides a framework to concatenate and minify or compress JavaScript and CSS assets
  • adds the ability to write these assets in other languages and pre-processors such as CoffeeScript, Sass and ERB
  • invalidate the cache by altering this fingerprint
  • Rails 4 automatically adds the sass-rails, coffee-rails and uglifier gems to your Gemfile
  • reduce the number of requests that a browser makes to render a web page
  • Starting with version 3.1, Rails defaults to concatenating all JavaScript files into one master .js file and all CSS files into one master .css file
  • In production, Rails inserts an MD5 fingerprint into each filename so that the file is cached by the web browser
  • The technique sprockets uses for fingerprinting is to insert a hash of the content into the name, usually at the end.
  • asset minification or compression
  • The sass-rails gem is automatically used for CSS compression if included in Gemfile and no config.assets.css_compressor option is set.
  • Supported languages include Sass for CSS, CoffeeScript for JavaScript, and ERB for both by default.
  • When a filename is unique and based on its content, HTTP headers can be set to encourage caches everywhere (whether at CDNs, at ISPs, in networking equipment, or in web browsers) to keep their own copy of the content
  • asset pipeline is technically no longer a core feature of Rails 4
  • With the asset pipeline, the preferred location for these assets is now the app/assets directory.
  • Fingerprinting is enabled by default for production and disabled for all other environments
  • The files in app/assets are never served directly in production.
  • Paths are traversed in the order that they occur in the search path
  • You should use app/assets for files that must undergo some pre-processing before they are served.
  • By default .coffee and .scss files will not be precompiled on their own
  • app/assets is for assets that are owned by the application, such as custom images, JavaScript files or stylesheets.
  • lib/assets is for your own libraries' code that doesn't really fit into the scope of the application or those libraries which are shared across applications.
  • vendor/assets is for assets that are owned by outside entities, such as code for JavaScript plugins and CSS frameworks.
  • Any path under assets/* will be searched
  • By default these files will be ready to use by your application immediately using the require_tree directive.
  • By default, this means the files in app/assets take precedence, and will mask corresponding paths in lib and vendor
  • Sprockets uses files named index (with the relevant extensions) for a special purpose
  • Rails.application.config.assets.paths
  • causes turbolinks to check if an asset has been updated and if so loads it into the page
  • if you add an erb extension to a CSS asset (for example, application.css.erb), then helpers like asset_path are available in your CSS rules
  • If you add an erb extension to a JavaScript asset, making it something such as application.js.erb, then you can use the asset_path helper in your JavaScript code
  • The asset pipeline automatically evaluates ERB
  • data URI — a method of embedding the image data directly into the CSS file — you can use the asset_data_uri helper.
  • Sprockets will also look through the paths specified in config.assets.paths, which includes the standard application paths and any paths added by Rails engines.
  • image_tag
  • the closing tag cannot be of the style -%>
  • asset_data_uri
  • app/assets/javascripts/application.js
  • sass-rails provides -url and -path helpers (hyphenated in Sass, underscored in Ruby) for the following asset classes: image, font, video, audio, JavaScript and stylesheet.
  • Rails.application.config.assets.compress
  • In JavaScript files, the directives begin with //=
  • The require_tree directive tells Sprockets to recursively include all JavaScript files in the specified directory into the output.
  • manifest files contain directives — instructions that tell Sprockets which files to require in order to build a single CSS or JavaScript file.
  • You should not rely on any particular order among those
  • Sprockets uses manifest files to determine which assets to include and serve.
  • the family of require directives prevents files from being included twice in the output
  • Directives are processed top to bottom, but the order in which files are included by require_tree is unspecified.
  • If require_self is called more than once, only the last call is respected.
  • require directive is used to tell Sprockets the files you wish to require.
  • You need not supply the extensions explicitly. Sprockets assumes you are requiring a .js file when done from within a .js file
  • paths must be specified relative to the manifest file
  • require_directory
  • Rails 4 creates both app/assets/javascripts/application.js and app/assets/stylesheets/application.css regardless of whether the --skip-sprockets option is used when creating a new rails application.
  • The file extensions used on an asset determine what preprocessing is applied.
  • app/assets/stylesheets/application.css
  • Additional layers of preprocessing can be requested by adding other extensions, where each extension is processed in a right-to-left manner
  • require_self
  • use the Sass @import rule instead of these Sprockets directives.
  • Keep in mind that the order of these preprocessors is important
  • In development mode, assets are served as separate files in the order they are specified in the manifest file.
  • when these files are requested they are processed by the processors provided by the coffee-script and sass gems and then sent back to the browser as JavaScript and CSS respectively.
  • css.scss.erb
  • js.coffee.erb
  • By default Rails assumes that assets have been precompiled and will be served as static assets by your web server
  • with the Asset Pipeline the :cache and :concat options aren't used anymore
  • Assets are compiled and cached on the first request after the server is started
  • RAILS_ENV=production bundle exec rake assets:precompile
  • Debug mode can also be enabled in Rails helper methods
  • If you set config.assets.initialize_on_precompile to false, be sure to test rake assets:precompile locally before deploying
  • a rake task to compile the asset manifests and other files in the pipeline
  • RAILS_ENV=production bin/rake assets:precompile
  • a recipe to handle this in deployment
  • links the folder specified in config.assets.prefix to shared/assets
  • config/initializers/assets.rb
  • The initialize_on_precompile change tells the precompile task to run without invoking Rails
  • The X-Sendfile header is a directive to the web server to ignore the response from the application, and instead serve a specified file from disk
  • the jquery-rails gem which comes with Rails as the standard JavaScript library gem.
  • Possible options for JavaScript compression are :closure, :uglifier and :yui
  • concatenate assets
crazylion lee

The Pragmatic Bookshelf | DevOps in Practice - 0 views

  •  
    "Delivering production software can often be a painful task. Long test periods and the integration between operations and development can ruin or delay a promising delivery. That's what DevOps can fix. DevOps is a cultural change that aims to smoothly integrate development and operations procedures, breaking the barriers between them and focusing on automation, collaboration, and sharing of knowledge and tools. This book shows you how to implement DevOps and Continuous Delivery practices to raise your system's deployment frequency, increasing your production application's stability and robustness."
張 旭

What is DevOps? | Atlassian - 0 views

  • DevOps is a set of practices that automates the processes between software development and IT teams, in order that they can build, test, and release software faster and more reliably.
  • increased trust, faster software releases, the ability to solve critical issues quickly, and better management of unplanned work.
  • bringing together the best of software development and IT operations.
  • DevOps is a culture, a movement, a philosophy.
  • a firm handshake between development and operations
  • DevOps isn’t magic, and transformations don’t happen overnight.
  • Infrastructure as code
  • Culture is the #1 success factor in DevOps.
  • Building a culture of shared responsibility, transparency and faster feedback is the foundation of every high performing DevOps team.
  •  'not our problem' mentality
  • DevOps is that change in mindset of looking at the development process holistically and breaking down the barrier between Dev and Ops.
  • Speed is everything.
  • Lack of automated test and review cycles blocks the release to production, and poor incident response time kills velocity and team confidence
  • Open communication helps Dev and Ops teams swarm on issues, fix incidents, and unblock the release pipeline faster.
  • Unplanned work is a reality that every team faces–a reality that most often impacts team productivity.
  • “cross-functional collaboration.”
  • All the tooling and automation in the world are useless if they aren’t accompanied by a genuine desire on the part of development and IT/Ops professionals to work together.
  • DevOps doesn’t solve tooling problems. It solves human problems.
  • Forming project- or product-oriented teams to replace function-based teams is a step in the right direction.
  • sharing a common goal and having a plan to reach it together
  • join sprint planning sessions, daily stand-ups, and sprint demos.
  • DevOps culture across every department
  • open channels of communication, and talk regularly
  • DevOps isn’t one team’s job. It’s everyone’s job.
  • automation eliminates repetitive manual work, yields repeatable processes, and creates reliable systems.
  • Build, test, deploy, and provisioning automation
  • continuous delivery: the practice of running each code change through a gauntlet of automated tests, often facilitated by cloud-based infrastructure, then packaging up successful builds and promoting them up toward production using automated deploys.
  • automated deploys alert IT/Ops to server “drift” between environments, which reduces or eliminates surprises when it’s time to release.
  • “configuration as code.”
  • when DevOps uses automated deploys to send thoroughly tested code to identically provisioned environments, “Works on my machine!” becomes irrelevant.
  • A DevOps mindset sees opportunities for continuous improvement everywhere.
  • regular retrospectives
  • A/B testing
  • failure is inevitable. So you might as well set up your team to absorb it, recover, and learn from it (some call this “being anti-fragile”).
  • Postmortems focus on where processes fell down and how to strengthen them – not on which team member f'ed up the code.
  • Our engineers are responsible for QA, writing, and running their own tests to get the software out to customers.
  • How long did it take to go from development to deployment? 
  • How long does it take to recover after a system failure?
  • service level agreements (SLAs)
  • DevOps isn't any single person's job. It's everyone's job.
  • DevOps is big on the idea that the same people who build an application should be involved in shipping and running it.
  • developers and operators pair with each other in each phase of the application’s lifecycle.
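The article is deliberately tool-agnostic, but the build/test/deploy automation it describes typically reduces to a pipeline definition; a sketch in GitLab-CI-style YAML (stage names, image tags, and scripts are assumptions):

```yaml
stages: [build, test, deploy]

build:
  stage: build
  script: docker build -t registry.example.com/app:$CI_COMMIT_SHA .

test:
  stage: test
  script: make test   # the automated gauntlet every change runs through

deploy:
  stage: deploy
  script: ./deploy.sh production   # hypothetical deploy script targeting
                                   # identically provisioned environments
  only: [master]
```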
張 旭

The Squeaky Blog | Why we don't use a staging environment - 0 views

  • Pre-live environments are never at parity with production
  • multiple people use staging to validate their changes before release.
  • Branches are then constantly out of sync with each other, and problems often surface when you merge, rebase, and backfill hotfixes.
  • Big Bang releases
  • there is a lengthy suite of tests and checks that run before it is deployed to staging. During this period, which could end up being hours, engineers will likely pick up another task. I’ve seen people merge, and then forget that their changes are on staging, more times than I can count.
  • only merge code that is ready to go live
  • written sufficient tests and have validated our changes in development.
  • All branches are cut from main, and all changes get merged back into main.
  • If we ever have an issue in production, we always roll forward.
  • Feature flags can be enabled on a per-user basis so we can monitor performance and gather feedback
  • Experimental features can be enabled by users in their account settings.
  • we have monitoring, logging, and alarms around all of our services. We also blue/green deploy, by draining and replacing a percentage of containers.
  • Dropping your staging environment in favour of true continuous integration and deployment can create a different mindset for shipping software.
  •  
    "Pre-live environments are never at parity with production "
張 旭

Production environment | Kubernetes - 0 views

  • to promote an existing cluster for production use
  • Separating the control plane from the worker nodes.
  • Having enough worker nodes available
  • You can use role-based access control (RBAC) and other security mechanisms to make sure that users and workloads can get access to the resources they need, while keeping workloads, and the cluster itself, secure. You can set limits on the resources that users and workloads can access by managing policies and container resources.
  • you need to plan how to scale to relieve increased pressure from more requests to the control plane and worker nodes or scale down to reduce unused resources.
  • Managed control plane: Let the provider manage the scale and availability of the cluster's control plane, as well as handle patches and upgrades.
  • The simplest Kubernetes cluster has the entire control plane and worker node services running on the same machine.
  • You can deploy a control plane using tools such as kubeadm, kops, and kubespray.
  • Secure communications between control plane services are implemented using certificates.
  • Certificates are automatically generated during deployment or you can generate them using your own certificate authority.
  • Separate and backup etcd service: The etcd services can either run on the same machines as other control plane services or run on separate machines
  • Create multiple control plane systems: For high availability, the control plane should not be limited to a single machine
  • Some deployment tools set up the Raft consensus algorithm to do leader election of Kubernetes services. If the primary goes away, another service elects itself and takes over.
  • Groups of zones are referred to as regions.
  • if you installed with kubeadm, there are instructions to help you with Certificate Management and Upgrading kubeadm clusters.
  • Production-quality workloads need to be resilient and anything they rely on needs to be resilient (such as CoreDNS).
  • Add nodes to the cluster: If you are managing your own cluster you can add nodes by setting up your own machines and either adding them manually or having them register themselves to the cluster’s apiserver.
  • Set up node health checks: For important workloads, you want to make sure that the nodes and pods running on those nodes are healthy.
  • Authentication: The apiserver can authenticate users using client certificates, bearer tokens, an authenticating proxy, or HTTP basic auth.
  • Authorization: When you set out to authorize your regular users, you will probably choose between RBAC and ABAC authorization.
  • Role-based access control (RBAC): Lets you assign access to your cluster by allowing specific sets of permissions to authenticated users. Permissions can be assigned for a specific namespace (Role) or across the entire cluster (ClusterRole).
  • Attribute-based access control (ABAC): Lets you create policies based on resource attributes in the cluster and will allow or deny access based on those attributes.
  • Set limits on workload resources
  • Set namespace limits: Set per-namespace quotas on things like memory and CPU
  • Prepare for DNS demand: If you expect workloads to massively scale up, your DNS service must be ready to scale up as well.
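Two of the controls above, namespace-scoped RBAC and per-namespace quotas, sketched as manifests (the namespace, names, and limits are illustrative assumptions):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role                      # namespace-scoped; use ClusterRole for cluster-wide
metadata:
  namespace: team-a
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: v1
kind: ResourceQuota             # per-namespace limits on memory and CPU
metadata:
  namespace: team-a
  name: compute-quota
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```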
crazylion lee

Flynn - The product that ops provides to developers - 1 views

  •  
    "Flynn is an open source platform (PaaS) for running applications in production. It does everything you need, out of the box:"
crazylion lee

Desktop Neo - rethinking the desktop interface for productivity. - 0 views

  •  
    "rethinking the desktop interface for productivity."
張 旭

Configuration - docker-sync 0.5.10 documentation - 0 views

  • Be sure to use a sync-name which is unique, since it will be a container name.
    • 張 旭
       
      By convention, docker-sync container names all end with the suffix -sync.
  • split your docker-compose configuration for production and development (as usual)
  • production stack (docker-compose.yml) does not need any changes and would look like this (and is portable, no docker-sync adjustments).
  • docker-compose-dev.yml (it needs to be called that way, and looks like this) will override
    • 張 旭
       
      The development docker-compose-dev.yml only overrides the volumes settings of the production docker-compose.yml, wiring in the volumes from docker-sync.yml; everything else stays the same.
  • nocopy # nocopy is important
  • docker-compose -f docker-compose.yml -f docker-compose-dev.yml up
  • add the external volume and the mount here
  • In case the folder we mount to has been declared as a VOLUME during image build, its content will be merged with the named volume we mount from the host
    • 張 旭
       
      If a volume is declared in the Dockerfile, its mount point is recorded at docker build time. When the container starts, the contents of the same-named volume on the host (server) are merged in (i.e. they take over); in other words, the container attaches to the pre-existing named volume on the host.
  • enforce the content from our host on the initial wiring
  • set your environment variables by creating a .env file at the root of your project
  •  
    "Be sure to use a sync-name which is unique, since it will be a container name."
張 旭

Using Orbs - CircleCI - 0 views

  • Orbs enable you to share, standardize, and simplify config across your projects.
  • Jobs consist of two parts: a set of steps, and the environment they should be executed within.
  • Executors define the environment in which the steps of a job will be run.
  • Commands are reusable sets of steps that you can invoke with specific parameters within an existing job.
  • you can pass my-executor as the value of a name key under executor. This method is primarily employed when passing parameters to executor invocations.
  • Development orbs are mutable and expire after 90 days.
  • Production Orbs are immutable and durable.
  • CircleCI allows development orbs that have versions that start with dev:
  • Production orbs are immutable
  • Each installation of CircleCI, including circleci.com, has only one registry where orbs can be kept.
  • Organization Admins publish production orbs.
  • Organization members publish development orbs
  • You must invoke jobs in the workflow stanza of config.yml file, making sure to pass any necessary parameters as subkeys to the job.
  • When you declare an executor in a configuration outside of jobs, you can use these declarations for all jobs in the scope of that declaration, enabling you to reuse a single executor definition across multiple jobs.
  • Orbs are transparent - If you can execute an orb, you and anyone else can view the source of that orb.
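A sketch of what invoking an orb job and reusing an executor looks like in .circleci/config.yml (the orb, its version pin, the images, and the parameters are assumptions):

```yaml
version: 2.1

orbs:
  node: circleci/node@4.7   # production orbs are immutable, so pin a version

executors:
  my-executor:              # declared outside jobs: reusable in every job in scope
    docker:
      - image: cimg/base:2021.04

jobs:
  lint:
    executor: my-executor   # reuse the single executor definition
    steps:
      - checkout
      - run: make lint

workflows:
  main:
    jobs:
      - lint
      - node/test:          # orb job invoked in the workflow stanza
          version: "16.13"  # parameters passed as subkeys to the job
```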
張 旭

Understanding the GitHub flow · GitHub Guides - 0 views

  • anything in the master branch is always deployable.
  • Your branch name should be descriptive
  • Commits also create a transparent history of your work that others can follow to understand what you've done and why.
  • each commit is considered a separate unit of change.
  • By writing clear commit messages, you can make it easier for other people to follow along and provide feedback.
  • Pull Requests initiate discussion about your commits.
  • If you're using a Fork & Pull Model, Pull Requests provide a way to notify project maintainers about the changes you'd like them to consider.
  • Pull Requests are designed to encourage and capture this type of conversation.
  • You can also continue to push to your branch in light of discussion and feedback about your commits.
  • With GitHub, you can deploy from a branch for final testing in production before merging to master.
  • If your branch causes issues, you can roll it back by deploying the existing master into production.
  • Once your changes have been verified in production, it is time to merge your code into the master branch.
  •  
    "anything in the master branch is always deployable."
張 旭

Production Notes - MongoDB Manual - 0 views

  • mongod will not start if dbPath contains data files created by a storage engine other than the one specified by --storageEngine.
  • mongod must possess read and write permissions for the specified dbPath.
  • WiredTiger supports concurrent access by readers and writers to the documents in a collection
  • Journaling guarantees that MongoDB can quickly recover write operations that were written to the journal but not written to data files in cases where mongod terminated due to a crash or other serious failure.
  • To use read concern level of "majority", replica sets must use WiredTiger storage engine.
  • Write concern describes the level of acknowledgement requested from MongoDB for write operations.
  • With stronger write concerns, clients must wait after sending a write operation until MongoDB confirms the write operation at the requested write concern level.
  • By default, authorization is not enabled, and mongod assumes a trusted environment
  • The HTTP interface is disabled by default. Do not enable the HTTP interface in production environments.
  • Avoid overloading the connection resources of a mongod or mongos instance by adjusting the connection pool size to suit your use case.
  • ensure that each mongod or mongos instance has access to two real cores or one multi-core physical CPU.
  • The WiredTiger storage engine is multithreaded and can take advantage of additional CPU cores
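Several of these notes map directly onto mongod's YAML configuration file; a sketch (the dbPath and connection limit are assumptions, and the net.http block applies to the older server versions this manual covers):

```yaml
# mongod.conf
storage:
  dbPath: /var/lib/mongodb   # mongod needs read and write permission here
  engine: wiredTiger         # must match any data files already in dbPath
  journal:
    enabled: true            # quick recovery of journaled but unflushed writes
security:
  authorization: enabled     # off by default; don't rely on a trusted network
net:
  maxIncomingConnections: 2000   # size connection resources to your use case
  http:
    enabled: false               # never enable the HTTP interface in production
```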
crazylion lee

Docker in Production: A History of Failure - The HFT Guy - 0 views

  •  
    Experiences of Docker failing at scale; worth a read.
crazylion lee

Rocket.Chat - 1 views

  •  
    "Rocket.Chat is an incredible product because we have an incredible developer community. Over 200 contributors have made our platform a dynamic and innovative toolkit, from group messages and video calls to helpdesk killer features. Our contributors are the reason we're the best cross-platform open source chat solution available today."
crazylion lee

Evolutionary Database Design - 0 views

  •  
    "Over the last decade we've developed and refined a number of techniques that allow a database design to evolve as an application develops. This is a very important capability for agile methodologies. The techniques rely on applying continuous integration and automated refactoring to database development, together with a close collaboration between DBAs and application developers. The techniques work in both pre-production and released systems, in green field projects as well as legacy systems."
crazylion lee

Scalable C (in progress) - GitBook - 0 views

  •  
    "In this book I'll explain "Scalable C," which kicks C into the 21st Century. We use actors, message passing, code generation, and other tricks. I've been writing C for 30 years. It's never been this fun and productive. - Pieter Hintjens"