Larvata / Group items tagged: review

張 旭

Auto DevOps | GitLab - 0 views

  • Auto DevOps provides pre-defined CI/CD configuration which allows you to automatically detect, build, test, deploy, and monitor your applications
  • Just push your code and GitLab takes care of everything else.
  • Auto DevOps will be automatically disabled on the first pipeline failure.
  • ...78 more annotations...
  • Your project will continue to use an alternative CI/CD configuration file if one is found
  • Auto DevOps works with any Kubernetes cluster;
  • using the Docker or Kubernetes executor, with privileged mode enabled.
  • Base domain (needed for Auto Review Apps and Auto Deploy)
  • Kubernetes (needed for Auto Review Apps, Auto Deploy, and Auto Monitoring)
  • Prometheus (needed for Auto Monitoring)
  • scrape your Kubernetes cluster.
  • project level as a variable: KUBE_INGRESS_BASE_DOMAIN
  • A wildcard DNS A record matching the base domain(s) is required
  • Once set up, all requests will hit the load balancer, which in turn will route them to the Kubernetes pods that run your application(s).
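    A minimal sketch of this setup, assuming a hypothetical base domain example.com and an ingress load balancer at 203.0.113.10 (both placeholders):

      # Hypothetical wildcard DNS A record (zone-file style) pointing at the
      # ingress load balancer, so every generated environment host resolves:
      #   *.example.com.  300  IN  A  203.0.113.10

      # Check that an arbitrary subdomain resolves to the load balancer:
      dig +short anything.example.com

      # Then expose the base domain to Auto DevOps as a project-level variable:
      #   KUBE_INGRESS_BASE_DOMAIN=example.com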
  • review/ (every environment starting with review/)
  • staging
  • production
  • need to define a separate KUBE_INGRESS_BASE_DOMAIN variable for all the above based on the environment.
  • Continuous deployment to production: Enables Auto Deploy with master branch directly deployed to production.
  • Continuous deployment to production using timed incremental rollout
  • Automatic deployment to staging, manual deployment to production
  • Auto Build creates a build of the application using an existing Dockerfile or Heroku buildpacks.
  • If a project’s repository contains a Dockerfile, Auto Build will use docker build to create a Docker image.
  • Each buildpack requires certain files to be in your project’s repository for Auto Build to successfully build your application.
  • Auto Test automatically runs the appropriate tests for your application using Herokuish and Heroku buildpacks by analyzing your project to detect the language and framework.
  • Auto Code Quality uses the Code Quality image to run static analysis and other code checks on the current code.
  • Static Application Security Testing (SAST) uses the SAST Docker image to run static analysis on the current code and checks for potential security issues.
  • Dependency Scanning uses the Dependency Scanning Docker image to run analysis on the project dependencies and checks for potential security issues.
  • License Management uses the License Management Docker image to search the project dependencies for their license.
  • Vulnerability Static Analysis for containers uses Clair to run static analysis on a Docker image and checks for potential security issues.
  • Review Apps are temporary application environments based on the branch’s code so developers, designers, QA, product managers, and other reviewers can actually see and interact with code changes as part of the review process. Auto Review Apps create a Review App for each branch. Auto Review Apps will deploy your app to your Kubernetes cluster only. When no cluster is available, no deployment will occur.
  • The Review App will have a unique URL based on the project ID, the branch or tag name, and a unique number, combined with the Auto DevOps base domain.
  • Review apps are deployed using the auto-deploy-app chart with Helm, which can be customized.
  • Your apps should not be manipulated outside of Helm (using Kubernetes directly).
  • Dynamic Application Security Testing (DAST) uses the popular open source tool OWASP ZAProxy to perform an analysis on the current code and checks for potential security issues.
  • Auto Browser Performance Testing utilizes the Sitespeed.io container to measure the performance of a web page.
  • add the paths to a file named .gitlab-urls.txt in the root directory, one per line.
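    A sketch of that file, assuming two hypothetical extra paths /foo and /bar to measure besides the root page:

      # Add extra paths for Auto Browser Performance Testing, one per line:
      printf '%s\n' / /foo /bar > .gitlab-urls.txt
      git add .gitlab-urls.txt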
  • After a branch or merge request is merged into the project’s default branch (usually master), Auto Deploy deploys the application to a production environment in the Kubernetes cluster, with a namespace based on the project name and unique project ID
  • Auto Deploy doesn’t include deployments to staging or canary by default, but the Auto DevOps template contains job definitions for these tasks if you want to enable them.
  • Apps are deployed using the auto-deploy-app chart with Helm.
  • For internal and private projects, a GitLab Deploy Token is automatically created when Auto DevOps is enabled and the Auto DevOps settings are saved.
  • If the GitLab Deploy Token cannot be found, CI_REGISTRY_PASSWORD is used. Note that CI_REGISTRY_PASSWORD is only valid during deployment.
  • If present, DB_INITIALIZE will be run as a shell command within an application pod as a helm post-install hook.
  • a post-install hook means that if any deploy succeeds, DB_INITIALIZE will not be processed thereafter.
  • DB_MIGRATE will be run as a shell command within an application pod as a helm pre-upgrade hook.
    • 張 旭
       
      If the project type differs, you have to check how the buildpacks invoke the command, e.g. Laravel's migration.
    • 張 旭
       
      If the image is built from your own Dockerfile, it seems you can ignore how the buildpacks do it.
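    Illustrative only, assuming a Rails-style app (as the notes above point out, the right commands depend on your buildpack or Dockerfile); one known way to define the hook commands is as project-level CI/CD variables, e.g. via the GitLab API:

      # DB_INITIALIZE runs as a helm post-install hook, DB_MIGRATE as a
      # pre-upgrade hook; both execute as shell commands in an app pod.
      curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" \
        "https://gitlab.example.com/api/v4/projects/<project_id>/variables" \
        --form "key=DB_INITIALIZE" --form "value=bundle exec rake db:setup"
      curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" \
        "https://gitlab.example.com/api/v4/projects/<project_id>/variables" \
        --form "key=DB_MIGRATE" --form "value=bundle exec rake db:migrate"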
  • Once your application is deployed, Auto Monitoring makes it possible to monitor your application’s server and response metrics right out of the box.
  • annotate the NGINX Ingress deployment to be scraped by Prometheus using prometheus.io/scrape: "true" and prometheus.io/port: "10254"
  • If you are also using Auto Review Apps and Auto Deploy and choose to provide your own Dockerfile, make sure you expose your application to port 5000 as this is the port assumed by the default Helm chart.
  • While Auto DevOps provides great defaults to get you started, you can customize almost everything to fit your needs; from custom buildpacks, to Dockerfiles, Helm charts, or even copying the complete CI/CD configuration into your project to enable staging and canary deployments, and more.
  • If your project has a Dockerfile in the root of the project repo, Auto DevOps will build a Docker image based on the Dockerfile rather than using buildpacks.
  • Auto DevOps uses Helm to deploy your application to Kubernetes.
  • Bundled chart - If your project has a ./chart directory with a Chart.yaml file in it, Auto DevOps will detect the chart and use it instead of the default one.
  • Create a project variable AUTO_DEVOPS_CHART with the URL of a custom chart to use or create two project variables AUTO_DEVOPS_CHART_REPOSITORY with the URL of a custom chart repository and AUTO_DEVOPS_CHART with the path to the chart.
  • make use of the HELM_UPGRADE_EXTRA_ARGS environment variable to override the default values in the values.yaml file in the default Helm chart.
  • specify the use of a custom Helm chart per environment by scoping the environment variable to the desired environment.
    • 張 旭
       
      Auto DevOps is basically a ready-made, pre-packaged .gitlab-ci.yml that someone has already written for you.
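    A sketch of the two customization routes just mentioned; every URL, chart name, and value below is a placeholder:

      # Option A: a single custom chart:
      #   AUTO_DEVOPS_CHART=https://charts.example.com/my-app-chart.tgz
      # Option B: a custom chart repository plus a chart path:
      #   AUTO_DEVOPS_CHART_REPOSITORY=https://charts.example.com
      #   AUTO_DEVOPS_CHART=stable/my-app
      # Override individual values.yaml defaults in the default chart:
      #   HELM_UPGRADE_EXTRA_ARGS=--set replicaCount=3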
  • Your additions will be merged with the Auto DevOps template using the behaviour described for include
  • copy and paste the contents of the Auto DevOps template into your project and edit this as needed.
  • In order to support applications that require a database, PostgreSQL is provisioned by default.
  • Set up the replica variables using a project variable and scale your application by just redeploying it!
  • You should not scale your application using Kubernetes directly.
  • Some applications need to define secret variables that are accessible by the deployed application.
  • Auto DevOps detects variables whose keys start with K8S_SECRET_ and makes these prefixed variables available to the deployed application as environment variables.
  • Auto DevOps pipelines will take your application secret variables to populate a Kubernetes secret.
  • Environment variables are generally considered immutable in a Kubernetes pod.
  • if you update an application secret without changing any code and then manually create a new pipeline, any running application pods will not have the updated secrets.
  • Variables with multiline values are not currently supported
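    For example, a hypothetical project variable K8S_SECRET_DATABASE_URL would surface inside the app as the DATABASE_URL environment variable. A quick cluster-side check might look like this (namespace and secret name are placeholders; the namespace is derived from the project name and ID, per the note above):

      kubectl -n my-app-12345678 get secrets
      kubectl -n my-app-12345678 describe secret my-app-secret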
  • The normal behavior of Auto DevOps is to use Continuous Deployment, pushing automatically to the production environment every time a new pipeline is run on the default branch.
  • If STAGING_ENABLED is defined in your project (e.g., set STAGING_ENABLED to 1 as a CI/CD variable), then the application will be automatically deployed to a staging environment, and a production_manual job will be created for you when you’re ready to manually deploy to production.
  • If CANARY_ENABLED is defined in your project (e.g., set CANARY_ENABLED to 1 as a CI/CD variable) then two manual jobs will be created: canary which will deploy the application to the canary environment production_manual which is to be used by you when you’re ready to manually deploy to production.
  • If INCREMENTAL_ROLLOUT_MODE is set to manual in your project, then instead of the standard production job, four different manual jobs will be created: rollout 10%, rollout 25%, rollout 50%, and rollout 100%
  • The percentage is based on the REPLICAS variable and defines the number of pods you want to have for your deployment.
  • To start a job, click on the play icon next to the job’s name.
  • Once you get to 100%, you cannot scale down, and you’d have to roll back by redeploying the old version using the rollback button in the environment page.
  • With INCREMENTAL_ROLLOUT_MODE set to manual and with STAGING_ENABLED
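    To make the REPLICAS arithmetic concrete, a quick shell check (integer math; how GitLab actually rounds fractional pod counts is not specified here):

      REPLICAS=10
      for pct in 10 25 50 100; do
        echo "rollout ${pct}% -> $(( REPLICAS * pct / 100 )) pod(s)"
      done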
  • not all buildpacks support Auto Test yet
  • When a project has been marked as private, GitLab’s Container Registry requires authentication when downloading containers.
  • Authentication credentials will be valid while the pipeline is running, allowing for a successful initial deployment.
  • After the pipeline completes, Kubernetes will no longer be able to access the Container Registry.
  • We strongly advise using GitLab Container Registry with Auto DevOps in order to simplify configuration and prevent any unforeseen issues.
crazylion lee

gerrit - Gerrit Code Review - Google Project Hosting - 0 views

  •  
    "Web based code review and project management for Git based projects. "
張 旭

Code Review: Beyond "Examine, Inspect, Critique", the Code Retrospective » Community » Coding Style - 0 views

  • In settings where people pick apart each other's mistakes, we instinctively close up to fend off criticism aimed at ourselves. The tension created by mutual fault-finding makes programmers shy away from Code Review and lowers morale, which greatly diminishes its effectiveness.
  • It is better to call Code Review a "code retrospective." If everyone gives up "fault-finding" in favor of "learning together"
  • Team members identify patterns together
  • ...10 more annotations...
  • The habits programmers bring to writing code
  • A single class file several hundred lines long
  • Complex nested if-else blocks
  • During a code retrospective there is no need to mention who wrote the code; speak only of "good patterns" and "anti-patterns," which lets the author relax
  • Good patterns and anti-patterns are simply good and bad programming habits. A code retrospective should focus on recognizing habits, not hunting bugs
  • Code retrospectives also need to be small in volume and sustained daily
  • Keep each daily code retrospective within half an hour; a steady trickle is what makes the practice sustainable
  • For the daily retrospective, spending half an hour together on 200-300 randomly sampled lines of code written that day is enough
  • If code is written well, someone who is not its author can read it without the author's help. I'd like a volunteer who did not write this code to explain to everyone what it does.
  • Try hard to spot former anti-patterns that have since been corrected, for example: "this code uses method extraction with expressive naming, fixing yesterday's 'long method' anti-pattern." Identifying only anti-patterns and never good patterns degrades the retrospective back into the dreaded code inspection and kills everyone's motivation to keep it up.
張 旭

Introduction to GitLab Flow | GitLab - 0 views

  • Git allows a wide variety of branching strategies and workflows.
  • not integrated with issue tracking systems
  • The biggest problem is that many long-running branches emerge that all contain part of the changes.
  • ...47 more annotations...
  • most organizations practice continuous delivery, which means that your default branch can be deployed.
  • Merging everything into the master branch and frequently deploying means you minimize the amount of unreleased code, which is in line with lean and continuous delivery best practices.
  • you can deploy to production every time you merge a feature branch.
  • deploy a new version by merging master into the production branch.
  • you can have your deployment script create a tag on each deployment.
  • to have an environment that is automatically updated to the master branch
  • commits only flow downstream, ensuring that everything is tested in all environments.
  • first merge these bug fixes into master, and then cherry-pick them into the release branch.
  • Merging into master and then cherry-picking into release is called an “upstream first” policy
  • “merge request” since the final action is to merge the feature branch.
  • “pull request” since the first manual action is to pull the feature branch
  • it is common to protect the long-lived branches
  • After you merge a feature branch, you should remove it from the source control software
  • When you are ready to code, create a branch for the issue from the master branch. This branch is the place for any work related to this change.
  • A merge request is an online place to discuss the change and review the code.
  • If you open the merge request but do not assign it to anyone, it is a “Work In Progress” merge request.
  • Start the title of the merge request with “[WIP]” or “WIP:” to prevent it from being merged before it’s ready.
  • To automatically close linked issues, mention them with the words “fixes” or “closes,” for example, “fixes #14” or “closes #67.” GitLab closes these issues when the code is merged into the default branch.
  • If you have an issue that spans across multiple repositories, create an issue for each repository and link all issues to a parent issue.
  • With Git, you can use an interactive rebase (rebase -i) to squash multiple commits into one or reorder them.
  • you should never rebase commits you have pushed to a remote server.
  • Rebasing creates new commits for all your changes, which can cause confusion because the same change would have multiple identifiers.
  • if someone has already reviewed your code, rebasing makes it hard to tell what changed since the last review.
  • never rebase commits authored by other people.
  • it is a bad idea to rebase commits that you have already pushed.
  • always use the “no fast-forward” (--no-ff) strategy when you merge manually.
  • you should try to avoid merge commits in feature branches
  • people avoid merge commits by just using rebase to reorder their commits after the commits on the master branch. Using rebase prevents a merge commit when merging master into your feature branch, and it creates a neat linear history.
  • you should never rebase commits you have pushed to a remote server
  • Sometimes you can reuse recorded resolutions (rerere), but merging is better since you only have to resolve conflicts once.
  • not frequently merge master into the feature branch.
  • utilizing new code,
  • resolving merge conflicts
  • updating long-running branches.
  • just cherry-picking a commit.
  • If your feature branch has a merge conflict, creating a merge commit is a standard way of solving this.
  • keep your feature branches short-lived.
  • split your features into smaller units of work
  • you should try to prevent merge commits, but not eliminate them.
  • Your codebase should be clean, but your history should represent what actually happened.
  • Splitting up work into individual commits provides context for developers looking at your code later.
  • push your feature branch frequently, even when it is not yet ready for review.
  • Commit often and push frequently
  • A commit message should reflect your intention, not just the contents of the commit.
  • Testing before merging
  • When using GitLab flow, developers create their branches from this master branch, so it is essential that it never breaks. Therefore, each merge request must be tested before it is accepted.
  • When creating a feature branch, always branch from an up-to-date master
  •  
    "Git allows a wide variety of branching strategies and workflows."
張 旭

Introduction to GitLab Flow | GitLab - 0 views

  • GitLab flow as a clearly defined set of best practices. It combines feature-driven development and feature branches with issue tracking.
  • In Git, you add files from the working copy to the staging area. After that, you commit them to your local repo. The third step is pushing to a shared remote repository.
  • branching model
  • ...68 more annotations...
  • The biggest problem is that many long-running branches emerge that all contain part of the changes.
  • It is a convention to call your default branch master and to mostly branch from and merge to this.
  • Nowadays, most organizations practice continuous delivery, which means that your default branch can be deployed.
  • Continuous delivery removes the need for hotfix and release branches, including all the ceremony they introduce.
  • Merging everything into the master branch and frequently deploying means you minimize the amount of unreleased code, which is in line with lean and continuous delivery best practices.
  • GitHub flow assumes you can deploy to production every time you merge a feature branch.
  • You can deploy a new version by merging master into the production branch. If you need to know what code is in production, you can just checkout the production branch to see.
  • Production branch
  • Environment branches
  • have an environment that is automatically updated to the master branch.
  • deploy the master branch to staging.
  • To deploy to pre-production, create a merge request from the master branch to the pre-production branch.
  • Go live by merging the pre-production branch into the production branch.
  • Release branches
  • work with release branches if you need to release software to the outside world.
  • each branch contains a minor version
  • After announcing a release branch, only add serious bug fixes to the branch.
  • merge these bug fixes into master, and then cherry-pick them into the release branch.
  • Merging into master and then cherry-picking into release is called an “upstream first” policy
  • Tools such as GitHub and Bitbucket choose the name “pull request” since the first manual action is to pull the feature branch.
  • Tools such as GitLab and others choose the name “merge request” since the final action is to merge the feature branch.
  • If you work on a feature branch for more than a few hours, it is good to share the intermediate result with the rest of the team.
  • the merge request automatically updates when new commits are pushed to the branch.
  • If the assigned person does not feel comfortable, they can request more changes or close the merge request without merging.
  • In GitLab, it is common to protect the long-lived branches, e.g., the master branch, so that most developers can’t modify them.
  • if you want to merge into a protected branch, assign your merge request to someone with maintainer permissions.
  • After you merge a feature branch, you should remove it from the source control software.
  • Having a reason for every code change helps to inform the rest of the team and to keep the scope of a feature branch small.
  • If there is no issue yet, create the issue
  • The issue title should describe the desired state of the system.
  • For example, the issue title “As an administrator, I want to remove users without receiving an error” is better than “Admin can’t remove users.”
  • create a branch for the issue from the master branch
  • If you open the merge request but do not assign it to anyone, it is a “Work In Progress” merge request.
  • Start the title of the merge request with [WIP] or WIP: to prevent it from being merged before it’s ready.
  • When they press the merge button, GitLab merges the code and creates a merge commit that makes this event easily visible later on.
  • Merge requests always create a merge commit, even when the branch could be merged without one. This merge strategy is called “no fast-forward” in Git.
  • Suppose that a branch is merged but a problem occurs and the issue is reopened. In this case, it is no problem to reuse the same branch name since the first branch was deleted when it was merged.
  • At any time, there is at most one branch for every issue.
  • It is possible that one feature branch solves more than one issue.
  • GitLab closes these issues when the code is merged into the default branch.
  • If you have an issue that spans across multiple repositories, create an issue for each repository and link all issues to a parent issue.
  • use an interactive rebase (rebase -i) to squash multiple commits into one or reorder them.
  • you should never rebase commits you have pushed to a remote server.
  • Rebasing creates new commits for all your changes, which can cause confusion because the same change would have multiple identifiers.
  • if someone has already reviewed your code, rebasing makes it hard to tell what changed since the last review.
  • never rebase commits authored by other people.
  • it is a bad idea to rebase commits that you have already pushed.
  • If you revert a merge commit and then change your mind, revert the revert commit to redo the merge.
  • Often, people avoid merge commits by just using rebase to reorder their commits after the commits on the master branch.
  • Using rebase prevents a merge commit when merging master into your feature branch, and it creates a neat linear history.
  • every time you rebase, you have to resolve similar conflicts.
  • Sometimes you can reuse recorded resolutions (rerere), but merging is better since you only have to resolve conflicts once.
  • A good way to prevent creating many merge commits is to not frequently merge master into the feature branch.
  • keep your feature branches short-lived.
  • Most feature branches should take less than one day of work.
  • If your feature branches often take more than a day of work, try to split your features into smaller units of work.
  • You could also use feature toggles to hide incomplete features so you can still merge back into master every day.
  • you should try to prevent merge commits, but not eliminate them.
  • Your codebase should be clean, but your history should represent what actually happened.
  • If you rebase code, the history is incorrect, and there is no way for tools to remedy this because they can’t deal with changing commit identifiers
  • Commit often and push frequently
  • You should push your feature branch frequently, even when it is not yet ready for review.
  • A commit message should reflect your intention, not just the contents of the commit.
  • each merge request must be tested before it is accepted.
  • test the master branch after each change.
  • If new commits in master cause merge conflicts with the feature branch, merge master back into the branch to make the CI server re-run the tests.
  • When creating a feature branch, always branch from an up-to-date master.
  • Do not merge from upstream again if your code can work and merge cleanly without doing so.
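    A short sketch of the branch hygiene described above (issue number and branch name are illustrative):

      # Always branch from an up-to-date master:
      git checkout master && git pull
      git checkout -b 14-remove-user-error

      # Commit often and push frequently, even before the branch is ready:
      git commit -m "Remove error when admin deletes a user"
      git push -u origin 14-remove-user-error

      # Squash or reorder local commits only while they are still unpushed:
      git rebase -i origin/master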
crazylion lee

Gerrit Code Review - index.md - 0 views

  •  
    "Gerrit provides web based code review and repository management for the Git version control system."
crazylion lee

User Onboarding | A frequently-updated compendium of web app first-run experiences - 0 views

  •  
    "Want to see how popular web apps handle their signup experiences? Here's every one I've ever reviewed, in one handy list. "
crazylion lee

Security/Server Side TLS - MozillaWiki - 0 views

  •  
    The goal of this document is to help operational teams with the configuration of TLS on servers. All Mozilla sites and deployment should follow the recommendations below. The Operations Security (OpSec) team maintains this document as a reference guide to navigate the TLS landscape. It contains information on TLS protocols, known issues and vulnerabilities, configuration examples and testing tools. Changes are reviewed and merged by the OpSec team, and broadcasted to the various Operational teams.
張 旭

Have You Worshipped the Plane Today? - On Cargo Cults in Agile Development - 敏捷進化趣 Agile FunEvo - 0 views

  •  
    "沒有溝通(產品的)願景和策略 產品藍圖和發佈日期在一年前就由CTO規劃好了 組織內沒有人跟顧客對談 CTO和利害關係人堅持所有的改動都要經由他們批准 因為保密資安等理由,禁止使用實體的看板或告示 利害關係人直接跟CTO對談,跳過Product Owner 由利害關係人來決定交付產品增量,而不是Product Owner 專案/產品只有在完成時才交付,而不是增量式的交付 避免利害關係人直接跟開發團隊對談 Product Backlog是由一個產品委員會決定的 就算對功能的價值有所懷疑,但還是硬上了(為了年終獎金…) 業務為了成交,答應客戶增加目前不存在的功能,而Product Owner不知情 就算是不重要的問題,也有固定的進度表和期限 負責產品管理的角色沒有取得商業智能(BI)資訊的權限,沒有充足的資訊和數據幫助決定 利害關係人使用需求文件來和產品與工程部門溝通 Product Owner大部分的時間都在撰寫和管理使用者故事(User Stories) 在Sprint開始後不久Sprint Backlog就變了 專門成立一個開發團隊來修Bugs和處理小的需求 利害關係人沒有參加過Scrum活動(例如Sprint Planning和Sprint Review) 主要是用『速率(Velocity)符合當初的承諾』來當指標評估Scrum是否成功 開發人員沒有參與創造使用者故事 同時處理的專案數量和工作會改變開發團隊的人數跟組成方式 在Daily Stand up中團隊成員對ScrumMaster報告 定期舉行自省會議(Retrospectives),但沒有改變隨之發生 開發團隊並不是跨功能(Cross Functional),而要靠其他團隊或部門才能完成工作"
張 旭

A Clear, Concise & Comfy Code Review Checklist - DEV Community - 0 views

  • 2 blocks doing similar things might be allowable, but 3 or more is a definitive red cross from me!
  • This would ultimately be integrated into your CI/CD pipelines running on each build/commit/deploy too; stopping any rogue commits getting in.
  • not to say that every code block that is duplicated needs to be refactored
  • ...13 more annotations...
  • Refactoring is a cyclical process
  • Before accessing variables within objects and collections, make sure they are there! PLEASE!
  • If that variable is a constant or won't be changed, use the Const keyword in applicable languages and the CAPITALISATION convention to make users aware of your decisions about them.
  • The name of a method is more important than we give it credit for; when a method changes, so should its name.
  • Make sure you are returning the right thing, trying to make it as generic as possible.
  • Void should do something, not change something!
  • Private vs Public, this is a big topic
  • keeping an eye on the access level of a method can stop issues further down the line
  • Gherkin is a Business Readable, Domain Specific Language created especially for behavior descriptions.
  • specify the 3 main points of a test, including what you expect to happen, using the following keywords: GIVEN, WHEN / AND, THEN.
  • look at how the code is structured, make sure methods aren't too long, don't have too many branches, and that for and if statements could be simplified.
  • Use your initiative and discuss if a rewrite would benefit maintainability for the future.
  • it's unnecessary to leave commented-out code behind when working in and around areas that contain it.
crazylion lee

Overview of Different MySQL Replication Solutions - Percona Database Performance Blog - 0 views

  •  
    "In this blog post, I will review some of the MySQL replication concepts that are part of the MySQL environment (and Percona Server for MySQL specifically). I will also try to clarify some of the misconceptions people have about replication. Since I've been working on the Solution Engineering team, I've noticed that - although information is plentiful - replication is often misunderstood or incompletely understood."
張 旭

What is DevOps? | Atlassian - 0 views

  • DevOps is a set of practices that automates the processes between software development and IT teams, in order that they can build, test, and release software faster and more reliably.
  • increased trust, faster software releases, ability to solve critical issues quickly, and better management of unplanned work.
  • bringing together the best of software development and IT operations.
  • ...39 more annotations...
  • DevOps is a culture, a movement, a philosophy.
  • a firm handshake between development and operations
  • DevOps isn’t magic, and transformations don’t happen overnight.
  • Infrastructure as code
  • Culture is the #1 success factor in DevOps.
  • Building a culture of shared responsibility, transparency and faster feedback is the foundation of every high performing DevOps team.
  •  'not our problem' mentality
  • DevOps is that change in mindset of looking at the development process holistically and breaking down the barrier between Dev and Ops.
  • Speed is everything.
  • Lack of automated test and review cycles blocks the release to production, and poor incident response time kills velocity and team confidence
  • Open communication helps Dev and Ops teams swarm on issues, fix incidents, and unblock the release pipeline faster.
  • Unplanned work is a reality that every team faces, a reality that most often impacts team productivity.
  • “cross-functional collaboration.”
  • All the tooling and automation in the world are useless if they aren’t accompanied by a genuine desire on the part of development and IT/Ops professionals to work together.
  • DevOps doesn’t solve tooling problems. It solves human problems.
  • Forming project- or product-oriented teams to replace function-based teams is a step in the right direction.
  • sharing a common goal and having a plan to reach it together
  • join sprint planning sessions, daily stand-ups, and sprint demos.
  • DevOps culture across every department
  • open channels of communication, and talk regularly
  • DevOps isn’t one team’s job. It’s everyone’s job.
  • automation eliminates repetitive manual work, yields repeatable processes, and creates reliable systems.
  • Build, test, deploy, and provisioning automation
  • continuous delivery: the practice of running each code change through a gauntlet of automated tests, often facilitated by cloud-based infrastructure, then packaging up successful builds and promoting them up toward production using automated deploys.
  • automated deploys alert IT/Ops to server “drift” between environments, which reduces or eliminates surprises when it’s time to release.
  • “configuration as code.”
  • when DevOps uses automated deploys to send thoroughly tested code to identically provisioned environments, “Works on my machine!” becomes irrelevant.
  • A DevOps mindset sees opportunities for continuous improvement everywhere.
  • regular retrospectives
  • A/B testing
  • failure is inevitable. So you might as well set up your team to absorb it, recover, and learn from it (some call this “being anti-fragile”).
  • Postmortems focus on where processes fell down and how to strengthen them – not on which team member f'ed up the code.
  • Our engineers are responsible for QA, writing, and running their own tests to get the software out to customers.
  • How long did it take to go from development to deployment? 
  • How long does it take to recover after a system failure?
  • service level agreements (SLAs)
  • Devops isn't any single person's job. It's everyone's job.
  • DevOps is big on the idea that the same people who build an application should be involved in shipping and running it.
  • developers and operators pair with each other in each phase of the application’s lifecycle.
張 旭

Notes on Horizontal Database Sharding · ScienJus's Blog - 0 views

  • Horizontal sharding (also called horizontal database partitioning) means taking data that would otherwise live in a single database and spreading it, by some strategy, across multiple databases whose table structures are identical. Each database then holds much less data, and the databases can be deployed on different physical servers, in theory enabling unlimited horizontal scaling.
  • When you hit your first database performance problem, the first solution to consider should be read/write splitting: put all writes on the primary database and all reads on the replicas
  • Configure one primary with multiple replicas
  • ...16 more annotations...
  • In a Master-Slave setup, all operations are still performed by the primary database, which then syncs to the replicas; a replica only needs to stand in for the primary after it goes down.
  • Generally speaking, read/write splitting plus caching can already cope with the vast majority of situations, and requires almost no changes at the business layer.
  • Vertical partitioning: put tables whose business domains are unrelated into separate databases. After the split, tables in different databases can no longer be joined in one query, but the load is spread out and each database can do its own read/write splitting.
  • Horizontal table splitting: create multiple tables with identical structure to spread the data, so that each table holds fewer rows and therefore gets faster.
  • Table splitting stays within a single database on a single machine, so if the server itself has hit its performance ceiling, splitting tables will not noticeably help.
  • After splitting tables, the sub-tables can still be queried together with union and similar commands; after splitting databases they cannot
  • Range-based sharding on id, e.g. store rows with id 1-20k in database A and rows with id 20k-40k in database B.
  • Hash-based sharding on id, e.g. store rows with id%2=0 in database A and rows with id%2=1 in database B.
  • Time-range sharding. Most software shares one trait: the newer the data, the more likely it is to be touched, while old data is almost never touched. So shard by insertion time (also known as hot/cold separation)
  • Lookup-table sharding: maintain an extra lookup table that maps each id to its database node. The upside is simple, flexible logic with no migration problems; the downside is an extra query against the lookup table every time, so the lookup table is usually cached in memory.
  • Sharding by geographic location
  • Sharding strategies
  • When designing tables, most people make the primary key an auto-incrementing int to guarantee id uniqueness. But because auto_increment is bound to a single table, after sharding each table's auto-increment ids are independent, so primary-key collisions are bound to happen
  • Still, many people want primary keys that are ordered integers even if not strictly sequential, which is useful for sorting and the like. In that case you need to implement your own id-generation algorithm, typically keeping ids ordered with a Unix timestamp and mixing in something like a MAC address for uniqueness.
  • With sharded databases, most join queries must be replaced by at least two queries: first fetch the matching ids from the related table, then use those ids to fetch the main records from the corresponding database.
  • As the last-resort cure for database performance bottlenecks, horizontal sharding does solve the problem effectively. But it makes the business logic far more complex (mainly redundant association tables and fields, and keeping that data updated), and it raises the hard problem of distributed transactions. So unless you truly must, do not attempt database sharding lightly.
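    A toy sketch of the id%2 routing idea in shell (the two shard names are placeholders; real routing would live in the application's data-access layer):

      # Route a record id to one of two shards by id % 2:
      shard_for_id() {
        local id=$1
        if (( id % 2 == 0 )); then
          echo "shard_a"   # e.g. database A's DSN
        else
          echo "shard_b"   # e.g. database B's DSN
        fi
      }
      shard_for_id 42   # prints: shard_a
      shard_for_id 7    # prints: shard_b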
張 旭

Scrum Cheat Sheet - Understand Scrum and Agile Software Development in 10 Minutes (with Chinese-English Glossary) - 敏捷進化趣 Agile FunEvo - 1 views

  • Daily Scrum (daily stand-up meeting): 10-15 minutes each day, never over time
  • Sprint: as the name suggests, once the team decides which Items to take on, it charges ahead at them.
  • The general rule is that the Sprint Backlog does not change during a Sprint (and every rule has exceptions).
  • ...6 more annotations...
  • At the start of a Sprint, discuss which Items the team can deliver in this Sprint.
  • At the end of the Sprint, a meeting about the product: the PO invites stakeholders to give feedback on the output, and only working software counts as output.
  • After the Sprint Review, the Scrum Team members (the Dev Team, possibly including the PO) discuss improvements to how the team worked during this Sprint and set improvement items for the next Sprint.
  • Product Owner (PO)
  • ScrumMaster
  • Development Team (Dev Team)
張 旭

Step Outside Ops to Do Ops Well (The Phoenix Project) book review - 0 views

  • 1. Business projects: requests that typically come from business units such as product development or sales, e.g. launching a new product, customer implementations, or planning a big promotion like Singles' Day. This work tends to be systemic and needs departments to cooperate closely.
    2. Internal projects: the infrastructure work the ops team builds around business projects, such as deployment automation, multi-environment builds, continuous delivery, and monitoring/alerting.
    3. Changes: change operations on ops components requested by other departments. Compared with business projects, changes are usually scattered, e.g. granting permissions, opening ports, provisioning machines. For changes, keep good operation records so that everything is traceable.
    4. Unplanned work: handling problems no one could foresee, i.e. "firefighting."
  • The more unplanned work there is, the more it eats into the time for the other three kinds of work, and the more those three pile up.
  • When ops' internal infrastructure cannot keep up and unplanned work is never examined and solved at the root, a vicious cycle sets in: the unfinished work keeps piling higher (the book likes to call this "work in progress") until it crushes people.
  • ...4 more annotations...
  • Some periodic work is also very important to ops, e.g. regularly inspecting components, load testing, randomly breaking the distributed system ChaosMonkey-style, disaster drills, backup drills, and so on. These can also be filed under internal projects
  • Once the pipeline is visualized, besides removing bottleneck nodes, other things such as task priorities, dependencies between tasks, and the number of tasks resolved per batch also become visible and get addressed.
  • Feedback and traceability. We need to know how each stage of every task pipeline performs, its efficiency and quality, and then explore improvements together with the departments involved in that stage. If deployment takes too long, work with the dev team to optimize the packaging flow and the middleware architecture the deployment depends on; if there are too many production incidents, join with dev and QA to examine where those incidents originate and devise solutions, and so on
  • The third step is to establish a cultural mechanism that keeps this way of working going: continuous improvement, getting better and better!
張 旭

What is a CSR (Certificate Signing Request)? - 0 views

  • usually generated on the server where the certificate will be installed and contains information that will be included in the certificate such as the organization name, common name (domain name), locality, and country.
  • A private key is usually created at the same time that you create the CSR, making a key pair.
  • CSR or Certificate Signing request is a block of encoded text that is given to a Certificate Authority when applying for an SSL Certificate
  • ...6 more annotations...
  • A certificate authority will use a CSR to create your SSL certificate, but it does not need your private key.
  • The certificate created with a particular CSR will only work with the private key that was generated with it.
  • Most CSRs are created in the Base-64 encoded PEM format.
  • generate a CSR and private key on the server that the certificate will be used on.
  • openssl req -in server.csr -noout -text
  • The bit-length of a CSR and private key pair determine how easily the key can be cracked using brute force methods.
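    A typical sequence, assuming a 2048-bit RSA key and the filenames server.key/server.csr (subject fields are placeholders):

      # Generate a private key and a CSR together (PEM, Base-64 encoded):
      openssl req -new -newkey rsa:2048 -nodes \
        -keyout server.key -out server.csr \
        -subj "/C=US/ST=CA/L=City/O=Example Org/CN=www.example.com"

      # Inspect the CSR before submitting it to the CA:
      openssl req -in server.csr -noout -text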
張 旭

Public Key Infrastructure (PKI) Overview - 0 views

  • A PKI allows you to bind public keys (contained in SSL certificates) with a person in a way that allows you to trust the certificate.
  • Public Key Infrastructures, like the one used to secure the Internet, most commonly use a Certificate Authority (also called a Registration Authority) to verify the identity of an entity and create unforgeable certificates.
  • An SSL Certificate Authority (also called a trusted third party or CA) is an organization that issues digital certificates to organizations or individuals after verifying their identity.
  • ...9 more annotations...
  • An SSL Certificate provides assurances that we are talking to the right server, but the assurances are limited.
  • In PKI, trust simply means that a certificate can be validated by a CA that is in our trust store.
  • An SSL Certificate in a PKI is a digital document containing a public key, entity information, and a digital signature from the certificate issuer.
  • it is much more practical and secure to establish a chain of trust to the Root certificate by signing an Intermediate certificate
  • A trust store is a collection of Root certificates that are trusted by default.
  • there are four primary trust stores that are relied upon for the majority of software: Apple, Microsoft, Chrome, and Mozilla.
  • a revocation system that allows a certificate to be listed as invalid if it was improperly issued or if the private key has been compromised.
  • Online Certificate Status Protocol (OCSP)
  • Certificate Revocation List (CRL)
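    Two standard openssl invocations that exercise the ideas above (hostname and file names are placeholders):

      # Fetch and display a server's certificate chain:
      openssl s_client -connect www.example.com:443 -showcerts </dev/null

      # Validate a leaf certificate against a Root plus Intermediate chain:
      openssl verify -CAfile root.pem -untrusted intermediate.pem leaf.pem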
張 旭

Warnings, Notes, & Tips - 0 views

  • Because AS3 manages topology records globally in /Common, records must only be managed through AS3, as it treats the records declaratively.
  • If a record is added outside of AS3, it will be removed if it is not included in the next AS3 declaration for topology records (AS3 completely overwrites non-AS3 topologies when a declaration is submitted).
  • using AS3 to delete a tenant (for example, sending DELETE to the /declare/<TENANT> endpoint) that contains GSLB topologies will completely remove ALL GSLB topologies from the BIG-IP.
  • ...12 more annotations...
  • When posting a large declaration (hundreds of application services in a single declaration), you may experience a 500 error stating that the save sys config operation failed.
  • Even if you have asynchronous mode set to false, after 45 seconds AS3 sets asynchronous mode to true (API swap), and returns an async response.
  • When creating a new tenant using AS3, it must not use the same name as a partition you separately create on the target BIG-IP system.
  • If you use the same name and then post the declaration, AS3 overwrites (or removes) the existing partition completely, including all configuration objects in that partition.
  • if you use AS3 to create a tenant (which creates a BIG-IP partition), manually adding configuration objects to that partition can have unexpected results
  • When you delete the Tenant using AS3, the system deletes both virtual servers.
  • if a Firewall_Address_List contains zero addresses, a dummy IPv6 address of ::1:5ee:bad:c0de is added in order to maintain a valid Firewall_Address_List. If an address is added to the list, the dummy address is removed.
  • use /mgmt/shared/appsvcs/declare?async=true if you have a particularly large declaration which will take a long time to process.
  • reviewing the Sizing BIG-IP Virtual Editions section (page 7) of Deploying BIG-IP VEs in a Hyper-Converged Infrastructure
  • To test whether your system has AS3 installed or not, use GET with the /mgmt/shared/appsvcs/info URI.
  • You may find it more convenient to put multi-line texts such as iRules into AS3 declarations by first encoding them in Base64.
  • no matter your BIG-IP user account name, audit logs show all messages from admin and not the specific user name.
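    A sketch of the tips above as shell commands (host and credentials are placeholders; the endpoints are the ones named in the notes):

      # Check whether AS3 is installed, and its version:
      curl -sku admin:'<password>' \
        "https://<big-ip-host>/mgmt/shared/appsvcs/info"

      # POST a large declaration asynchronously:
      curl -sku admin:'<password>' -X POST -H "Content-Type: application/json" \
        -d @declaration.json \
        "https://<big-ip-host>/mgmt/shared/appsvcs/declare?async=true"

      # Base64-encode a multi-line iRule for embedding in a declaration
      # (-w0 disables line wrapping; GNU coreutils):
      base64 -w0 my_irule.tcl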
張 旭

How to Write a Git Commit Message - 1 views

  • a well-crafted Git commit message is the best way to communicate context about a change to fellow developers (and indeed to their future selves).
  • A diff will tell you what changed, but only the commit message can properly tell you why.
  • a commit message shows whether a developer is a good collaborator
  • ...22 more annotations...
  • a well-cared for log is a beautiful and useful thing
  • Reviewing others’ commits and pull requests becomes something worth doing, and suddenly can be done independently.
  • Understanding why something happened months or years ago becomes not only possible but efficient.
  • how to write an individual commit message.
  • Markup syntax, wrap margins, grammar, capitalization, punctuation.
  • What should it not contain?
  • issue tracking IDs
  • pull request numbers
  • The seven rules of a great Git commit message
  • Use the body to explain what and why vs. how
  • Use the imperative mood in the subject line
  • it’s a good idea to begin the commit message with a single short (less than 50 characters) line summarizing the change, followed by a blank line and then a more thorough description.
  • forces the author to think for a moment about the most concise way to explain what’s going on.
  • If you’re having a hard time summarizing, you might be committing too many changes at once.
  • shoot for 50 characters, but consider 72 the hard limit
  • Imperative mood just means “spoken or written as if giving a command or instruction”.
  • Git itself uses the imperative whenever it creates a commit on your behalf.
  • when you write your commit messages in the imperative, you’re following Git’s own built-in conventions.
  • A properly formed Git commit subject line should always be able to complete the following sentence: If applied, this commit will <your subject line here>
  • explaining what changed and why
  • Code is generally self-explanatory in this regard (and if the code is so complex that it needs to be explained in prose, that’s what source comments are for).
  • there are tab completion scripts that take much of the pain out of remembering the subcommands and switches.
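    Putting the rules together, a sketch of a well-formed message (content is invented for illustration): an imperative, capitalized subject under 50 characters with no trailing period, a blank line, then a body wrapped at 72 columns explaining what and why:

      # Draft the message in a file, then commit with it:
      printf '%s\n' \
        "Refactor subsystem X for readability" \
        "" \
        "Explain what changed and why, not how; the diff already shows" \
        "the how. Wrap body lines at 72 characters so logs read well in" \
        "any terminal." \
        "" \
        "Resolves: #123" > /tmp/commit-msg
      git commit -F /tmp/commit-msg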
張 旭

Open source load testing tool review 2020 - 0 views

  • Hey is a simple tool, written in Go, with good performance and the most common features you'll need to run simple static URL tests.
  • Hey supports HTTP/2, which neither Wrk nor Apachebench does
  • Apachebench is very fast, so often you will not need more than one CPU core to generate enough traffic
  • ...16 more annotations...
  • Hey has rate limiting, which can be used to run fixed-rate tests.
  • Vegeta was designed to be run on the command line; it reads from stdin a list of HTTP transactions to generate, and sends results in binary format to stdout,
  • Vegeta is a really strong tool that caters to people who want a tool to test simple, static URLs (perhaps API end points) but also want a bit more functionality.
  • Vegeta can even be used as a Golang library/package if you want to create your own load testing tool.
  • Wrk is so damn fast
  • being fast and measuring correctly is about all that Wrk does
  • k6 is scriptable in plain Javascript
  • k6 is average or better. In some categories (documentation, scripting API, command line UX) it is outstanding.
  • Jmeter is a huge beast compared to most other tools.
  • Siege is a simple tool, similar to e.g. Apachebench in that it has no scripting and is primarily used when you want to hit a single, static URL repeatedly.
  • A good way of testing the testing tools is to not test them on your code, but on some third-party thing that is sure to be very high-performing.
  • use a tool like e.g. top to keep track of Nginx CPU usage while testing. If you see just one process, and see it using close to 100% CPU, it means you could be CPU-bound on the target side.
  • If you see multiple Nginx processes but only one is using a lot of CPU, it means your load testing tool is only talking to that particular worker process.
  • Network delay is also important to take into account as it sets an upper limit on the number of requests per second you can push through.
  • If, say, the Nginx default page requires a transfer of 250 bytes to load, it means that if the servers are connected via a 100 Mbit/s link, the theoretical max RPS rate would be around 100,000,000 divided by 8 (bits per byte) divided by 250 => 100M/2000 = 50,000 RPS. Though that is a very optimistic calculation: protocol overhead will make the actual number a lot lower, so in the case above I would start to worry that bandwidth was an issue if I saw I could push through at most 30,000 RPS, or something like that.
  • Wrk managed to push through over 50,000 RPS and that made 8 Nginx workers on the target system consume about 600% CPU.
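    The bandwidth ceiling above is easy to recompute for your own page size; a quick shell check of the article's arithmetic, plus a representative wrk invocation (target URL is a placeholder):

      # 100 Mbit/s link, 250 bytes per request => theoretical max RPS:
      echo $(( 100000000 / 8 / 250 ))   # prints 50000

      # 4 threads, 100 connections, 30 seconds against the target:
      wrk -t4 -c100 -d30s http://target.example.com/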