Larvata / Group items tagged: highlight

張 旭

Auto DevOps | GitLab - 0 views

  • Auto DevOps provides pre-defined CI/CD configuration which allows you to automatically detect, build, test, deploy, and monitor your applications
  • Just push your code and GitLab takes care of everything else.
  • Auto DevOps will be automatically disabled on the first pipeline failure.
  • Your project will continue to use an alternative CI/CD configuration file if one is found
  • Auto DevOps works with any Kubernetes cluster;
  • using the Docker or Kubernetes executor, with privileged mode enabled.
  • Base domain (needed for Auto Review Apps and Auto Deploy)
  • Kubernetes (needed for Auto Review Apps, Auto Deploy, and Auto Monitoring)
  • Prometheus (needed for Auto Monitoring)
  • scrape your Kubernetes cluster.
  • project level as a variable: KUBE_INGRESS_BASE_DOMAIN
  • A wildcard DNS A record matching the base domain(s) is required
  • Once set up, all requests will hit the load balancer, which in turn will route them to the Kubernetes pods that run your application(s).
  • review/ (every environment starting with review/)
  • staging
  • production
  • need to define a separate KUBE_INGRESS_BASE_DOMAIN variable for all the above based on the environment.
  • Continuous deployment to production: Enables Auto Deploy with master branch directly deployed to production.
  • Continuous deployment to production using timed incremental rollout
  • Automatic deployment to staging, manual deployment to production
  • Auto Build creates a build of the application using an existing Dockerfile or Heroku buildpacks.
  • If a project’s repository contains a Dockerfile, Auto Build will use docker build to create a Docker image.
  • Each buildpack requires certain files to be in your project’s repository for Auto Build to successfully build your application.
  • Auto Test automatically runs the appropriate tests for your application using Herokuish and Heroku buildpacks by analyzing your project to detect the language and framework.
  • Auto Code Quality uses the Code Quality image to run static analysis and other code checks on the current code.
  • Static Application Security Testing (SAST) uses the SAST Docker image to run static analysis on the current code and checks for potential security issues.
  • Dependency Scanning uses the Dependency Scanning Docker image to run analysis on the project dependencies and checks for potential security issues.
  • License Management uses the License Management Docker image to search the project dependencies for their license.
  • Vulnerability Static Analysis for containers uses Clair to run static analysis on a Docker image and checks for potential security issues.
  • Review Apps are temporary application environments based on the branch’s code so developers, designers, QA, product managers, and other reviewers can actually see and interact with code changes as part of the review process. Auto Review Apps create a Review App for each branch. Auto Review Apps will deploy your app to your Kubernetes cluster only. When no cluster is available, no deployment will occur.
  • The Review App will have a unique URL based on the project ID, the branch or tag name, and a unique number, combined with the Auto DevOps base domain.
  • Review apps are deployed using the auto-deploy-app chart with Helm, which can be customized.
  • Your apps should not be manipulated outside of Helm (using Kubernetes directly).
  • Dynamic Application Security Testing (DAST) uses the popular open source tool OWASP ZAProxy to perform an analysis on the current code and checks for potential security issues.
  • Auto Browser Performance Testing utilizes the Sitespeed.io container to measure the performance of a web page.
  • add the paths to a file named .gitlab-urls.txt in the root directory, one per line.
  • After a branch or merge request is merged into the project’s default branch (usually master), Auto Deploy deploys the application to a production environment in the Kubernetes cluster, with a namespace based on the project name and unique project ID
  • Auto Deploy doesn’t include deployments to staging or canary by default, but the Auto DevOps template contains job definitions for these tasks if you want to enable them.
  • Apps are deployed using the auto-deploy-app chart with Helm.
  • For internal and private projects, a GitLab Deploy Token is automatically created when Auto DevOps is enabled and the Auto DevOps settings are saved.
  • If the GitLab Deploy Token cannot be found, CI_REGISTRY_PASSWORD is used. Note that CI_REGISTRY_PASSWORD is only valid during deployment.
  • If present, DB_INITIALIZE will be run as a shell command within an application pod as a helm post-install hook.
  • a post-install hook means that if any deploy succeeds, DB_INITIALIZE will not be processed thereafter.
  • DB_MIGRATE will be run as a shell command within an application pod as a helm pre-upgrade hook.
    • 張 旭
       
      If the project type is different, check how the buildpacks invoke that command; for example, Laravel's migration.
    • 張 旭
       
      If the image is built from your own Dockerfile, it seems you can ignore the buildpacks approach entirely.
  • Once your application is deployed, Auto Monitoring makes it possible to monitor your application’s server and response metrics right out of the box.
  • annotate the NGINX Ingress deployment to be scraped by Prometheus using prometheus.io/scrape: "true" and prometheus.io/port: "10254"
  • If you are also using Auto Review Apps and Auto Deploy and choose to provide your own Dockerfile, make sure you expose your application to port 5000 as this is the port assumed by the default Helm chart.
  • While Auto DevOps provides great defaults to get you started, you can customize almost everything to fit your needs; from custom buildpacks, to Dockerfiles, Helm charts, or even copying the complete CI/CD configuration into your project to enable staging and canary deployments, and more.
  • If your project has a Dockerfile in the root of the project repo, Auto DevOps will build a Docker image based on the Dockerfile rather than using buildpacks.
  • Auto DevOps uses Helm to deploy your application to Kubernetes.
  • Bundled chart - If your project has a ./chart directory with a Chart.yaml file in it, Auto DevOps will detect the chart and use it instead of the default one.
  • Create a project variable AUTO_DEVOPS_CHART with the URL of a custom chart to use or create two project variables AUTO_DEVOPS_CHART_REPOSITORY with the URL of a custom chart repository and AUTO_DEVOPS_CHART with the path to the chart.
  • make use of the HELM_UPGRADE_EXTRA_ARGS environment variable to override the default values in the values.yaml file in the default Helm chart.
  • specify the use of a custom Helm chart per environment by scoping the environment variable to the desired environment.
    • 張 旭
       
      Auto DevOps is basically a ready-made .gitlab-ci.yml that someone has already written and prepared for you.
  • Your additions will be merged with the Auto DevOps template using the behaviour described for include
  • copy and paste the contents of the Auto DevOps template into your project and edit this as needed.
  • In order to support applications that require a database, PostgreSQL is provisioned by default.
  • Set up the replica variables using a project variable and scale your application by just redeploying it!
  • You should not scale your application using Kubernetes directly.
  • Some applications need to define secret variables that are accessible by the deployed application.
  • Auto DevOps detects variables where the key starts with K8S_SECRET_ and makes these prefixed variables available to the deployed application as environment variables.
  • Auto DevOps pipelines will take your application secret variables to populate a Kubernetes secret.
  • Environment variables are generally considered immutable in a Kubernetes pod.
  • if you update an application secret without changing any code and then manually create a new pipeline, you will find that any running application pods will not have the updated secrets.
  • Variables with multiline values are not currently supported
  • The normal behavior of Auto DevOps is to use Continuous Deployment, pushing automatically to the production environment every time a new pipeline is run on the default branch.
  • If STAGING_ENABLED is defined in your project (e.g., set STAGING_ENABLED to 1 as a CI/CD variable), then the application will be automatically deployed to a staging environment, and a production_manual job will be created for you when you’re ready to manually deploy to production.
  • If CANARY_ENABLED is defined in your project (e.g., set CANARY_ENABLED to 1 as a CI/CD variable) then two manual jobs will be created: canary which will deploy the application to the canary environment production_manual which is to be used by you when you’re ready to manually deploy to production.
  • If INCREMENTAL_ROLLOUT_MODE is set to manual in your project, then instead of the standard production job, 4 different manual jobs will be created: rollout 10% rollout 25% rollout 50% rollout 100%
  • The percentage is based on the REPLICAS variable and defines the number of pods you want to have for your deployment.
  • To start a job, click on the play icon next to the job’s name.
  • Once you get to 100%, you cannot scale down, and you’d have to roll back by redeploying the old version using the rollback button in the environment page.
  • With INCREMENTAL_ROLLOUT_MODE set to manual and with STAGING_ENABLED
  • not all buildpacks support Auto Test yet
  • When a project has been marked as private, GitLab’s Container Registry requires authentication when downloading containers.
  • Authentication credentials will be valid while the pipeline is running, allowing for a successful initial deployment.
  • After the pipeline completes, Kubernetes will no longer be able to access the Container Registry.
  • We strongly advise using GitLab Container Registry with Auto DevOps in order to simplify configuration and prevent any unforeseen issues.
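
Taken together, the notes above boil down to: include the Auto DevOps template and steer it with CI/CD variables. A minimal sketch of a project-level .gitlab-ci.yml, assuming a Rails-style app (the domain, secret name, and commands are hypothetical):

    include:
      - template: Auto-DevOps.gitlab-ci.yml

    variables:
      KUBE_INGRESS_BASE_DOMAIN: apps.example.com   # a wildcard DNS A record must point here
      STAGING_ENABLED: "1"                         # auto-deploy to staging, manual production_manual job
      K8S_SECRET_RAILS_MASTER_KEY: changeme        # exposed to the app as RAILS_MASTER_KEY;
                                                   # normally set as a project CI/CD variable
      DB_INITIALIZE: bundle exec rails db:setup    # runs once as a helm post-install hook
      DB_MIGRATE: bundle exec rails db:migrate     # runs as a helm pre-upgrade hook
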
張 旭

Best practices for writing Dockerfiles | Docker Documentation - 0 views

  • building efficient images
  • Docker builds images automatically by reading the instructions from a Dockerfile -- a text file that contains all commands, in order, needed to build a given image.
  • A Docker image consists of read-only layers each of which represents a Dockerfile instruction.
  • The layers are stacked and each one is a delta of the changes from the previous layer
  • When you run an image and generate a container, you add a new writable layer (the “container layer”) on top of the underlying layers.
  • By “ephemeral,” we mean that the container can be stopped and destroyed, then rebuilt and replaced with an absolute minimum set up and configuration.
  • Inadvertently including files that are not necessary for building an image results in a larger build context and larger image size.
  • To exclude files not relevant to the build (without restructuring your source repository) use a .dockerignore file. This file supports exclusion patterns similar to .gitignore files.
  • minimize image layers by leveraging build cache.
  • if your build contains several layers, you can order them from the less frequently changed (to ensure the build cache is reusable) to the more frequently changed
  • avoid installing extra or unnecessary packages just because they might be “nice to have.”
  • Each container should have only one concern.
  • Decoupling applications into multiple containers makes it easier to scale horizontally and reuse containers
  • Limiting each container to one process is a good rule of thumb, but it is not a hard and fast rule.
  • Use your best judgment to keep containers as clean and modular as possible.
  • do multi-stage builds and only copy the artifacts you need into the final image. This allows you to include tools and debug information in your intermediate build stages without increasing the size of the final image.
  • avoid duplication of packages and make the list much easier to update.
  • When building an image, Docker steps through the instructions in your Dockerfile, executing each in the order specified.
  • the next instruction is compared against all child images derived from that base image to see if one of them was built using the exact same instruction. If not, the cache is invalidated.
  • simply comparing the instruction in the Dockerfile with one of the child images is sufficient.
  • For the ADD and COPY instructions, the contents of the file(s) in the image are examined and a checksum is calculated for each file.
  • If anything has changed in the file(s), such as the contents and metadata, then the cache is invalidated.
  • cache checking does not look at the files in the container to determine a cache match.
  • In that case just the command string itself is used to find a match.
    • 張 旭
       
      For an instruction like RUN apt-get, this means the cache match is based on comparing the command string itself.
  • Whenever possible, use current official repositories as the basis for your images.
  • Using RUN apt-get update && apt-get install -y ensures your Dockerfile installs the latest package versions with no further coding or manual intervention.
  • cache busting
  • Docker executes these commands using the /bin/sh -c interpreter, which only evaluates the exit code of the last operation in the pipe to determine success.
  • set -o pipefail && to ensure that an unexpected error prevents the build from inadvertently succeeding.
  • The CMD instruction should be used to run the software contained by your image, along with any arguments.
  • CMD should almost always be used in the form of CMD ["executable", "param1", "param2"…]
  • CMD should rarely be used in the manner of CMD ["param", "param"] in conjunction with ENTRYPOINT
  • The ENV instruction is also useful for providing required environment variables specific to services you wish to containerize,
  • Each ENV line creates a new intermediate layer, just like RUN commands
  • COPY is preferred
  • COPY only supports the basic copying of local files into the container
  • the best use for ADD is local tar file auto-extraction into the image, as in ADD rootfs.tar.xz /
  • If you have multiple Dockerfile steps that use different files from your context, COPY them individually, rather than all at once.
  • using ADD to fetch packages from remote URLs is strongly discouraged; you should use curl or wget instead
  • The best use for ENTRYPOINT is to set the image’s main command, allowing that image to be run as though it was that command (and then use CMD as the default flags).
  • the image name can double as a reference to the binary as shown in the command
  • The VOLUME instruction should be used to expose any database storage area, configuration storage, or files/folders created by your docker container.
  • use VOLUME for any mutable and/or user-serviceable parts of your image
  • If you absolutely need functionality similar to sudo, such as initializing the daemon as root but running it as non-root, consider using "gosu".
  • always use absolute paths for your WORKDIR
  • An ONBUILD command executes after the current Dockerfile build completes.
  • Think of the ONBUILD command as an instruction the parent Dockerfile gives to the child Dockerfile
  • A Docker build executes ONBUILD commands before any command in a child Dockerfile.
  • Be careful when putting ADD or COPY in ONBUILD. The “onbuild” image fails catastrophically if the new build’s context is missing the resource being added.
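
A compact Dockerfile sketch that applies several of these points (multi-stage build, dependency-first ordering for cache reuse, a single RUN for apt-get update and install, COPY over ADD, exec-form CMD); the Go program and paths are hypothetical:

    # --- build stage: compilers and debug tools never reach the final image ---
    FROM golang:1.22 AS build
    WORKDIR /src
    COPY go.mod go.sum ./            # least frequently changed files first, to keep the cache reusable
    RUN go mod download
    COPY . .                         # source changes only invalidate layers from here on
    RUN CGO_ENABLED=0 go build -o /out/app ./cmd/app

    # --- final stage: only the built artifact is copied in ---
    FROM debian:bookworm-slim
    RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates \
        && rm -rf /var/lib/apt/lists/*          # update + install in one RUN ("cache busting")
    COPY --from=build /out/app /usr/local/bin/app
    VOLUME /var/lib/app                         # mutable, user-serviceable data
    WORKDIR /usr/local/bin
    CMD ["app"]                                 # exec form, with any default arguments
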
張 旭

Choose when to run jobs | GitLab - 0 views

  • Rules are evaluated in order until the first match.
  • no rules match, so the job is not added to any other pipeline.
  • define a set of rules to exclude jobs in a few cases, but run them in all other cases
  • use all rules keywords, like if, changes, and exists, in the same rule. The rule evaluates to true only when all included keywords evaluate to true.
  • use parentheses with && and || to build more complicated variable expressions.
  • Use workflow to specify which types of pipelines can run.
  • every push to an open merge request’s source branch causes duplicated pipelines.
  • avoid duplicate pipelines by changing the job rules to avoid either push (branch) pipelines or merge request pipelines.
  • do not mix only/except jobs with rules jobs in the same pipeline.
  • For behavior similar to the only/except keywords, you can check the value of the $CI_PIPELINE_SOURCE variable
  • commonly used variables for if clauses
  • rules:changes expressions to determine when to add jobs to a pipeline
  • Use !reference tags to reuse rules in different jobs.
  • Use except to define when a job does not run.
  • only or except used without refs is the same as only:refs / except:refs
  • If you change multiple files, but only one file ends in .md, the build job is still skipped.
  • If you use multiple keywords with only or except, the keywords are evaluated as a single conjoined expression.
  • only includes the job if all of the keys have at least one condition that matches.
  • except excludes the job if any of the keys have at least one condition that matches.
  • With only, individual keys are logically joined by an AND
  • With except, individual keys are logically joined by an OR
  • To specify a job as manual, add when: manual to the job in the .gitlab-ci.yml file.
  • Use protected environments to define a list of users authorized to run a manual job.
  • Use when: delayed to execute scripts after a waiting period, or if you want to avoid jobs immediately entering the pending state.
  • To split a large job into multiple smaller jobs that run in parallel, use the parallel keyword
  • run a trigger job multiple times in parallel in a single pipeline, but with different variable values for each instance of the job.
  • The @ symbol denotes the beginning of a ref’s repository path. To match a ref name that contains the @ character in a regular expression, you must use the hex character code match \x40.
  • Compare a variable to a string
  • Check if a variable is undefined
  • Check if a variable is empty
  • Check if a variable exists
  • Matches are found when using =~.
  • Matches are not found when using !~.
  • Join variable expressions together with && or ||
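
A sketch of how these pieces fit in a .gitlab-ci.yml (job name and paths are hypothetical): workflow limits which pipelines run, and rules are evaluated top to bottom until the first match.

    workflow:
      rules:
        - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
        - if: '$CI_COMMIT_BRANCH == "master"'        # avoids duplicate branch + MR pipelines

    build-docs:
      script: ./scripts/build-docs.sh                # hypothetical script
      rules:
        - if: '$CI_PIPELINE_SOURCE == "schedule"'    # exclude this one case
          when: never
        - changes:
            - docs/**/*
          when: manual                               # run manually when docs change
        - when: on_success                           # run in all other cases
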
張 旭

Using Workflows to Schedule Jobs - CircleCI - 1 views

  • A workflow is a set of rules for defining a collection of jobs and their run order.
  • Schedule workflows for jobs that should only run periodically.
  • run multiple jobs in parallel
  • rerun just the failed job
  • Builds without workflows require a build job.
  • Refer to the YAML Anchors/Aliases documentation for information about how to alias and reuse syntax to keep your .circleci/config.yml file small.
  • workflow orchestration with two parallel jobs
  • jobs run according to configured requirements, each job waiting to start until the required job finishes successfully
  • requires: key
  • fans-out to run a set of acceptance test jobs in parallel, and finally fans-in to run a common deploy job.
  • Holding a Workflow for a Manual Approval
  • Workflows can be configured to wait for manual approval of a job before continuing to the next job
  • add a job to the jobs list with the key type: approval
  • approval is a special job type that is only available to jobs under the workflow key
  • The name of the job to hold is arbitrary - it could be wait or pause, for example, as long as the job has a type: approval key in it.
  • schedule a workflow to run at a certain time for specific branches.
  • The triggers key is only added under your workflows key
  • using cron syntax to represent Coordinated Universal Time (UTC) for specified branches.
  • By default, a workflow is triggered on every git push
  • the commit workflow has no triggers key and will run on every git push
  • The nightly workflow has a triggers key and will run on the specified schedule
  • Cron step syntax (for example, */1, */20) is not supported.
  • use a context to share environment variables
  • use the same shared environment variables when initiated by a user who is part of the organization.
  • CircleCI does not run workflows for tags unless you explicitly specify tag filters.
  • CircleCI branch and tag filters support the Java variant of regex pattern matching.
  • Each workflow has an associated workspace which can be used to transfer files to downstream jobs as the workflow progresses.
  • The workspace is an additive-only store of data.
  • Jobs can persist data to the workspace
  • Downstream jobs can attach the workspace to their container filesystem.
  • Attaching the workspace downloads and unpacks each layer based on the ordering of the upstream jobs in the workflow graph.
  • Workflows that include jobs running on multiple branches may require data to be shared using workspaces
  • To persist data from a job and make it available to other jobs, configure the job to use the persist_to_workspace key.
  • Files and directories named in the paths: property of persist_to_workspace will be uploaded to the workflow’s temporary workspace relative to the directory specified with the root key.
  • Configure a job to get saved data by configuring the attach_workspace key.
  • persist_to_workspace
  • attach_workspace
  • To rerun only a workflow’s failed jobs, click the Workflows icon in the app and select a workflow to see the status of each job, then click the Rerun button and select Rerun from failed.
  • if you do not see your workflows triggering, a configuration error is preventing the workflow from starting.
  • check your Workflows page of the CircleCI app (not the Job page)
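
A sketch pulling these ideas together in .circleci/config.yml (job names and the cron schedule are hypothetical): fan-out/fan-in with requires, a manual approval gate, and a scheduled nightly workflow.

    workflows:
      version: 2
      commit:                          # runs on every git push (no triggers key)
        jobs:
          - build
          - test_unit:
              requires: [build]        # fan-out
          - test_acceptance:
              requires: [build]
          - hold:
              type: approval           # pauses until someone approves in the app
              requires: [test_unit, test_acceptance]
          - deploy:
              requires: [hold]         # fan-in
      nightly:
        triggers:
          - schedule:
              cron: "0 2 * * *"        # UTC; step syntax like */20 is not supported
              filters:
                branches:
                  only: [master]
        jobs:
          - build
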
張 旭

Introduction to GitLab Flow | GitLab - 0 views

  • GitLab flow as a clearly defined set of best practices. It combines feature-driven development and feature branches with issue tracking.
  • In Git, you add files from the working copy to the staging area. After that, you commit them to your local repo. The third step is pushing to a shared remote repository.
  • branching model
  • The biggest problem is that many long-running branches emerge that all contain part of the changes.
  • It is a convention to call your default branch master and to mostly branch from and merge to this.
  • Nowadays, most organizations practice continuous delivery, which means that your default branch can be deployed.
  • Continuous delivery removes the need for hotfix and release branches, including all the ceremony they introduce.
  • Merging everything into the master branch and frequently deploying means you minimize the amount of unreleased code, which is in line with lean and continuous delivery best practices.
  • GitHub flow assumes you can deploy to production every time you merge a feature branch.
  • You can deploy a new version by merging master into the production branch. If you need to know what code is in production, you can just checkout the production branch to see.
  • Production branch
  • Environment branches
  • have an environment that is automatically updated to the master branch.
  • deploy the master branch to staging.
  • To deploy to pre-production, create a merge request from the master branch to the pre-production branch.
  • Go live by merging the pre-production branch into the production branch.
  • Release branches
  • work with release branches if you need to release software to the outside world.
  • each branch contains a minor version
  • After announcing a release branch, only add serious bug fixes to the branch.
  • merge these bug fixes into master, and then cherry-pick them into the release branch.
  • Merging into master and then cherry-picking into release is called an “upstream first” policy
  • Tools such as GitHub and Bitbucket choose the name “pull request” since the first manual action is to pull the feature branch.
  • Tools such as GitLab and others choose the name “merge request” since the final action is to merge the feature branch.
  • If you work on a feature branch for more than a few hours, it is good to share the intermediate result with the rest of the team.
  • the merge request automatically updates when new commits are pushed to the branch.
  • If the assigned person does not feel comfortable, they can request more changes or close the merge request without merging.
  • In GitLab, it is common to protect the long-lived branches, e.g., the master branch, so that most developers can’t modify them.
  • if you want to merge into a protected branch, assign your merge request to someone with maintainer permissions.
  • After you merge a feature branch, you should remove it from the source control software.
  • Having a reason for every code change helps to inform the rest of the team and to keep the scope of a feature branch small.
  • If there is no issue yet, create the issue
  • The issue title should describe the desired state of the system.
  • For example, the issue title “As an administrator, I want to remove users without receiving an error” is better than “Admin can’t remove users.”
  • create a branch for the issue from the master branch
  • If you open the merge request but do not assign it to anyone, it is a “Work In Progress” merge request.
  • Start the title of the merge request with [WIP] or WIP: to prevent it from being merged before it’s ready.
  • When they press the merge button, GitLab merges the code and creates a merge commit that makes this event easily visible later on.
  • Merge requests always create a merge commit, even when the branch could be merged without one. This merge strategy is called “no fast-forward” in Git.
  • Suppose that a branch is merged but a problem occurs and the issue is reopened. In this case, it is no problem to reuse the same branch name since the first branch was deleted when it was merged.
  • At any time, there is at most one branch for every issue.
  • It is possible that one feature branch solves more than one issue.
  • GitLab closes these issues when the code is merged into the default branch.
  • If you have an issue that spans across multiple repositories, create an issue for each repository and link all issues to a parent issue.
  • use an interactive rebase (rebase -i) to squash multiple commits into one or reorder them.
  • you should never rebase commits you have pushed to a remote server.
  • Rebasing creates new commits for all your changes, which can cause confusion because the same change would have multiple identifiers.
  • if someone has already reviewed your code, rebasing makes it hard to tell what changed since the last review.
  • never rebase commits authored by other people.
  • it is a bad idea to rebase commits that you have already pushed.
  • If you revert a merge commit and then change your mind, revert the revert commit to redo the merge.
  • Often, people avoid merge commits by just using rebase to reorder their commits after the commits on the master branch.
  • Using rebase prevents a merge commit when merging master into your feature branch, and it creates a neat linear history.
  • every time you rebase, you have to resolve similar conflicts.
  • Sometimes you can reuse recorded resolutions (rerere), but merging is better since you only have to resolve conflicts once.
  • A good way to prevent creating many merge commits is to not frequently merge master into the feature branch.
  • keep your feature branches short-lived.
  • Most feature branches should take less than one day of work.
  • If your feature branches often take more than a day of work, try to split your features into smaller units of work.
  • You could also use feature toggles to hide incomplete features so you can still merge back into master every day.
  • you should try to prevent merge commits, but not eliminate them.
  • Your codebase should be clean, but your history should represent what actually happened.
  • If you rebase code, the history is incorrect, and there is no way for tools to remedy this because they can’t deal with changing commit identifiers
  • Commit often and push frequently
  • You should push your feature branch frequently, even when it is not yet ready for review.
  • A commit message should reflect your intention, not just the contents of the commit.
  • each merge request must be tested before it is accepted.
  • test the master branch after each change.
  • If new commits in master cause merge conflicts with the feature branch, merge master back into the branch to make the CI server re-run the tests.
  • When creating a feature branch, always branch from an up-to-date master.
  • Do not merge from upstream again if your code can work and merge cleanly without doing so.
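
As a concrete example of the "upstream first" policy described above, a bug fix might flow like this (branch names and the commit SHA are hypothetical):

    git checkout master
    git merge --no-ff fix/remove-user-error      # reviewed feature branch, merged with a merge commit
    git push origin master
    git checkout 12-3-stable                     # the affected release branch
    git cherry-pick -x 1a2b3c4                   # copy the fix; -x records where it came from
    git push origin 12-3-stable
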
張 旭

Run your CI/CD jobs in Docker containers | GitLab - 0 views

  • If you run Docker on your local machine, you can run tests in the container, rather than testing on a dedicated CI/CD server.
  • Run other services, like MySQL, in containers. Do this by specifying services in your .gitlab-ci.yml file.
  • By default, the executor pulls images from Docker Hub
  • Maps must contain at least the name option, which is the same image name as used for the string setting.
  • When a CI job runs in a Docker container, the before_script, script, and after_script commands run in the /builds/<project-path>/ directory. Your image may have a different default WORKDIR defined. To move to your WORKDIR, save the WORKDIR as an environment variable so you can reference it in the container during the job’s runtime.
  • The runner starts a Docker container using the defined entrypoint. The default comes from the Dockerfile and may be overridden in the .gitlab-ci.yml file.
  • attaches itself to a running container.
  • sends the script to the container’s shell stdin and receives the output.
  • To override the entrypoint of a Docker image, define an empty entrypoint in the .gitlab-ci.yml file, so the runner does not start a useless shell layer. However, that does not work for all Docker versions. For Docker 17.06 and later, the entrypoint can be set to an empty value. For Docker 17.03 and earlier, the entrypoint can be set to /bin/sh -c, /bin/bash -c, or an equivalent shell available in the image.
  • The runner expects that the image has no entrypoint or that the entrypoint is prepared to start a shell command.
  • entrypoint: [""]
  • entrypoint: ["/bin/sh", "-c"]
  • A DOCKER_AUTH_CONFIG CI/CD variable
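
A sketch of a job that overrides the image entrypoint and adds a MySQL service (image names and variables are illustrative):

    test:
      image:
        name: registry.example.com/ruby-ci:3.2   # hypothetical image with a custom ENTRYPOINT
        entrypoint: [""]                         # Docker 17.06+; use ["/bin/sh", "-c"] for 17.03 and earlier
      services:
        - name: mysql:8.0
          alias: db
      variables:
        MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
      script:
        - bundle exec rake test                  # runs in /builds/<project-path>/ by default
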
張 旭

Networking with overlay networks | Docker Documentation - 0 views

  • The manager host will function as both a manager and a worker, which means it can both run service tasks and manage the swarm.
  • connected together using an overlay network called ingress
  • each of them now has an overlay network called ingress and a bridge network called docker_gwbridge
  • The docker_gwbridge connects the ingress network to the Docker host’s network interface so that traffic can flow to and from swarm managers and workers
  • recommended that you use separate overlay networks for each application or group of applications which will work together
  • You don’t need to create the overlay network on the other nodes, because it will be automatically created when one of those nodes starts running a service task which requires it.
  • The default publish mode of ingress, which is used when you do not specify a mode for the --publish flag, means that if you browse to port 80 on manager, worker-1, or worker-2, you will be connected to port 80 on one of the 5 service tasks, even if no tasks are currently running on the node you browse to.
  • Even though overlay networks are automatically created on swarm worker nodes as needed, they are not automatically removed.
  • The -dit flags mean to start the container detached (in the background), interactive (with the ability to type into it), and with a TTY (so you can see the input and output).
  • alpine containers running ash, which is Alpine’s default shell rather than bash
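
A sketch of the recommended per-application overlay network (names are hypothetical); --attachable is added so a standalone test container can join the service's network:

    docker network create -d overlay --attachable my-app-net
    docker service create --name web --network my-app-net --replicas 5 --publish 80:80 nginx
    # The network appears on a worker only when a task that needs it is scheduled there.
    docker run -dit --name probe --network my-app-net alpine ash   # ash is Alpine's default shell
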
張 旭

Reusing Config - CircleCI - 0 views

  • Change the version key to 2.1 in your .circleci/config.yml file and commit the changes to test your build.
  • Reusable commands are invoked with specific parameters as steps inside a job.
  • Commands can use other commands in the scope of execution
  • Executors define the environment in which the steps of a job will be run.
  • Executor declarations in config outside of jobs can be used by all jobs in the scope of that declaration, allowing you to reuse a single executor definition across multiple jobs.
  • It is also possible to allow an orb to define the executor used by all of its commands.
  • When invoking an executor in a job, any keys in the job itself will override those of the executor invoked.
  • Steps are used when you have a job or command that needs to mix predefined and user-defined steps.
  • Use the enum parameter type when you want to enforce that the value must be one from a specific set of string values.
  • Use an executor parameter type to allow the invoker of a job to decide what executor it will run on
  • invoke the same job more than once in the workflows stanza of config.yml, passing any necessary parameters as subkeys to the job.
  • If a job is declared inside an orb it can use commands in that orb or the global commands.
  • To use parameters in executors, define the parameters under the given executor.
  • Parameters are in-scope only within the job or command that defined them.
  • A single configuration may invoke a job multiple times.
  • Every job invocation may optionally accept two special arguments: pre-steps and post-steps.
  • Pre and post steps allow you to execute steps in a given job without modifying the job.
  • conditions are checked before a workflow is actually run
  • you cannot use a condition to check an environment variable.
  • Conditional steps may be located anywhere a regular step could and may only use parameter values as inputs.
  • A conditional step consists of a step with the key when or unless. Under this conditional key are the subkeys steps and condition
  • A condition is a single value that evaluates to true or false at the time the config is processed, so you cannot use environment variables as conditions
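
A sketch of a 2.1 config with a reusable executor and a parameterized command (image tag and names are hypothetical):

    version: 2.1

    executors:
      node-runner:
        docker:
          - image: cimg/node:18.17          # hypothetical convenience image tag

    commands:
      install_and_test:
        parameters:
          cache_key:
            type: string
            default: v1-deps
        steps:
          - checkout
          - restore_cache:
              key: << parameters.cache_key >>
          - run: npm ci && npm test

    jobs:
      test:
        executor: node-runner               # keys set in the job itself would override the executor's
        steps:
          - install_and_test:
              cache_key: v2-deps
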
張 旭

Speeding up Docker image build process of a Rails application | BigBinary Blog - 1 views

  • we do not want to execute bundle install and rake assets:precompile tasks while starting a container in each pod which would prevent that pod from accepting any requests until these tasks are finished.
  • run bundle install and rake assets:precompile tasks while or before containerizing the Rails application.
  • Kubernetes pulls the image, starts a Docker container using that image inside the pod and runs puma server immediately.
  • Since source code changes often, the previously cached layer for the ADD instruction is invalidated due to the mismatching checksums.
  • The ARG instruction in the Dockerfile defines RAILS_ENV variable and is implicitly used as an environment variable by the rest of the instructions defined just after that ARG instruction.
  • RUN instructions are used to install gems and precompile static assets using sprockets
  • Instead, Docker automatically re-uses the previously built layer for the RUN bundle install instruction if the Gemfile.lock file remains unchanged.
  • everyday we need to build a lot of Docker images containing source code from varying Git branches as well as with varying environments.
  • it is hard for Docker to cache layers for bundle install and rake assets:precompile tasks and re-use those layers during every docker build command run with different application source code and a different environment.
  • By default, Bundler installs gems at the location which is set by Rubygems.
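
The layering trick the post describes looks roughly like this (paths and versions are illustrative): copy only the dependency manifests before bundle install, so that layer's cache survives source-code changes.

    FROM ruby:3.2
    ARG RAILS_ENV=production            # available to the RUN instructions below
    ENV RAILS_ENV=$RAILS_ENV
    WORKDIR /app
    COPY Gemfile Gemfile.lock ./        # only the manifests: an unchanged Gemfile.lock keeps the cache warm
    RUN bundle install --jobs 4
    COPY . .                            # source changes invalidate layers only from here on
    RUN bundle exec rake assets:precompile
    CMD ["bundle", "exec", "puma"]
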
張 旭

Introduction to GitLab Flow | GitLab - 0 views

  • Git allows a wide variety of branching strategies and workflows.
  • not integrated with issue tracking systems
  • The biggest problem is that many long-running branches emerge that all contain part of the changes.
  • most organizations practice continuous delivery, which means that your default branch can be deployed.
  • Merging everything into the master branch and frequently deploying means you minimize the amount of unreleased code, which is in line with lean and continuous delivery best practices.
  • you can deploy to production every time you merge a feature branch.
  • deploy a new version by merging master into the production branch.
  • you can have your deployment script create a tag on each deployment.
  • to have an environment that is automatically updated to the master branch
  • commits only flow downstream, which ensures that everything is tested in all environments.
  • first merge these bug fixes into master, and then cherry-pick them into the release branch.
  • Merging into master and then cherry-picking into release is called an “upstream first” policy
  • “merge request” since the final action is to merge the feature branch.
  • “pull request” since the first manual action is to pull the feature branch
  • it is common to protect the long-lived branches
  • After you merge a feature branch, you should remove it from the source control software
  • When you are ready to code, create a branch for the issue from the master branch. This branch is the place for any work related to this change.
  • A merge request is an online place to discuss the change and review the code.
  • If you open the merge request but do not assign it to anyone, it is a “Work In Progress” merge request.
  • Start the title of the merge request with “[WIP]” or “WIP:” to prevent it from being merged before it’s ready.
  • To automatically close linked issues, mention them with the words “fixes” or “closes,” for example, “fixes #14” or “closes #67.” GitLab closes these issues when the code is merged into the default branch.
  • If you have an issue that spans across multiple repositories, create an issue for each repository and link all issues to a parent issue.
  • With Git, you can use an interactive rebase (rebase -i) to squash multiple commits into one or reorder them.
  • you should never rebase commits you have pushed to a remote server.
  • Rebasing creates new commits for all your changes, which can cause confusion because the same change would have multiple identifiers.
  • if someone has already reviewed your code, rebasing makes it hard to tell what changed since the last review.
  • never rebase commits authored by other people.
  • it is a bad idea to rebase commits that you have already pushed.
  • always use the “no fast-forward” (--no-ff) strategy when you merge manually.
  • you should try to avoid merge commits in feature branches
  • people avoid merge commits by just using rebase to reorder their commits after the commits on the master branch. Using rebase prevents a merge commit when merging master into your feature branch, and it creates a neat linear history.
  • you should never rebase commits you have pushed to a remote server
  • Sometimes you can reuse recorded resolutions (rerere), but merging is better since you only have to resolve conflicts once.
  • not frequently merge master into the feature branch.
  • utilizing new code,
  • resolving merge conflicts
  • updating long-running branches.
  • just cherry-picking a commit.
  • If your feature branch has a merge conflict, creating a merge commit is a standard way of solving this.
  • keep your feature branches short-lived.
  • split your features into smaller units of work
  • you should try to prevent merge commits, but not eliminate them.
  • Your codebase should be clean, but your history should represent what actually happened.
  • Splitting up work into individual commits provides context for developers looking at your code later.
  • push your feature branch frequently, even when it is not yet ready for review.
  • Commit often and push frequently
  • A commit message should reflect your intention, not just the contents of the commit.
  • Testing before merging
  • When using GitLab flow, developers create their branches from this master branch, so it is essential that it never breaks. Therefore, each merge request must be tested before it is accepted.
  • When creating a feature branch, always branch from an up-to-date master
張 旭

Orbs, Jobs, Steps, and Workflows - CircleCI - 0 views

  • Orbs are packages of config that you either import by name or configure inline to simplify your config, share, and reuse config within and across projects.
  • Jobs are a collection of Steps.
  • All of the steps in the job are executed in a single unit which consumes a CircleCI container from your plan while it’s running.
  • Workspaces persist data between jobs in a single Workflow.
  • Caching persists data between the same job in different Workflow builds.
  • Artifacts persist data after a Workflow has finished.
  • run using the machine executor which enables reuse of recently used machine executor runs,
  • docker executor which can compose Docker containers to run your tests and any services they require
  • macos executor
  • Steps are a collection of executable commands which are run during a job
  • In addition to the run: key, keys for save_cache:, restore_cache:, deploy:, store_artifacts:, store_test_results: and add_ssh_keys are nested under Steps.
  • checkout: key is required to checkout your code
  • run: enables addition of arbitrary, multi-line shell command scripting
  • orchestrating job runs with parallel, sequential, and manual approval workflows.
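
A single job using several of the step keys listed above (image and paths are illustrative):

    jobs:
      build:
        docker:
          - image: cimg/base:stable
        steps:
          - checkout                                   # required to check out your code
          - restore_cache:
              key: deps-{{ checksum "package-lock.json" }}
          - run: npm ci
          - save_cache:
              key: deps-{{ checksum "package-lock.json" }}
              paths: [node_modules]
          - run:
              name: Run tests
              command: npm test
          - store_test_results:
              path: test-results
          - store_artifacts:
              path: coverage
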
張 旭

Dynamic Provisioning | vSphere Storage for Kubernetes - 0 views

  • Storage Policy based Management (SPBM). SPBM provides a single unified control plane across a broad range of data services and storage solutions
  • Kubernetes StorageClasses allow the creation of PersistentVolumes on-demand without having to create storage and mount it into K8s nodes upfront
  • When a PVC is created, the PersistentVolume will be provisioned on a compatible datastore with the most free space that satisfies the gold storage policy requirements.
  • When a PVC is created, the vSphere Cloud Provider checks if the user specified datastore satisfies the gold storage policy requirements. If it does, the vSphere Cloud Provider will provision the PersistentVolume on the user specified datastore. If not, it will create an error telling the user that the specified datastore is not compatible with gold storage policy requirements.
  • The Kubernetes user will have the ability to specify custom vSAN Storage Capabilities during dynamic volume provisioning.
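
A sketch of dynamic provisioning with a StorageClass tied to an SPBM policy (policy and object names are hypothetical):

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: gold-sc
    provisioner: kubernetes.io/vsphere-volume
    parameters:
      storagePolicyName: gold           # SPBM policy; a datastore parameter may also pin a datastore
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: app-data
    spec:
      storageClassName: gold-sc
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: 10Gi
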
張 旭

Quick start - 0 views

  • Terragrunt will forward almost all commands, arguments, and options directly to Terraform, but based on the settings in your terragrunt.hcl file
  • the backend configuration does not support variables or expressions of any sort
  • the path_relative_to_include() built-in function,
  • The generate attribute is used to inform Terragrunt to generate the Terraform code for configuring the backend.
  • The find_in_parent_folders() helper will automatically search up the directory tree to find the root terragrunt.hcl and inherit the remote_state configuration from it.
  • Unlike the backend configurations, provider configurations support variables,
  • if you needed to modify the configuration to expose another parameter (e.g session_name), you would have to then go through each of your modules to make this change.
  • instructs Terragrunt to create the file provider.tf in the working directory (where Terragrunt calls terraform) before it calls any of the Terraform commands
  • large modules should be considered harmful.
  • it is a Bad Idea to define all of your environments (dev, stage, prod, etc), or even a large amount of infrastructure (servers, databases, load balancers, DNS, etc), in a single Terraform module.
  • Large modules are slow, insecure, hard to update, hard to code review, hard to test, and brittle (i.e., you have all your eggs in one basket).
  • Terragrunt allows you to define your Terraform code once and to promote a versioned, immutable “artifact” of that exact same code from environment to environment.
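
A sketch of the root and child terragrunt.hcl files described above (bucket and region are hypothetical):

    # root terragrunt.hcl
    remote_state {
      backend = "s3"
      generate = {
        path      = "backend.tf"            # Terragrunt writes this file before calling terraform
        if_exists = "overwrite_terraform"
      }
      config = {
        bucket = "my-terraform-state"
        key    = "${path_relative_to_include()}/terraform.tfstate"
        region = "us-east-1"
      }
    }

    generate "provider" {
      path      = "provider.tf"
      if_exists = "overwrite_terraform"
      contents  = <<EOF
    provider "aws" {
      region = "us-east-1"
    }
    EOF
    }

    # each module's terragrunt.hcl
    include {
      path = find_in_parent_folders()       # inherits the remote_state block above
    }
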
張 旭

Run the Docker daemon as a non-root user (Rootless mode) | Docker Documentation - 0 views

  • running the Docker daemon and containers as a non-root user
  • Rootless mode does not require root privileges even during the installation of the Docker daemon
  • Rootless mode executes the Docker daemon and containers inside a user namespace.
  • in rootless mode, both the daemon and the container are running without root privileges.
  • Rootless mode does not use binaries with SETUID bits or file capabilities, except newuidmap and newgidmap, which are needed to allow multiple UIDs/GIDs to be used in the user namespace.
  • expose privileged ports (< 1024)
  • add net.ipv4.ip_unprivileged_port_start=0 to /etc/sysctl.conf (or /etc/sysctl.d) and run sudo sysctl --system
  • dockerd-rootless.sh uses slirp4netns (if installed) or VPNKit as the network stack by default.
  • These network stacks run in userspace and might have performance overhead
  • This error occurs when the number of available entries in /etc/subuid or /etc/subgid is not sufficient.
  • This error occurs mostly when the host is running in cgroup v2. See the section Fedora 31 or later for information on switching the host to use cgroup v1.
  • docker run --net=host does not listen on ports in the host network namespace. This is expected behavior, as the daemon is namespaced inside RootlessKit’s network namespace; use docker run -p instead.
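
A sketch of setting this up on a systemd host (the UID and file name are illustrative; assumes the rootless extras package is installed):

    dockerd-rootless-setuptool.sh install                      # sets up the rootless daemon for the current user
    systemctl --user enable --now docker
    export DOCKER_HOST=unix:///run/user/1000/docker.sock       # 1000 = your UID
    # Allow rootless containers to bind privileged ports (< 1024):
    echo "net.ipv4.ip_unprivileged_port_start=0" | sudo tee /etc/sysctl.d/99-rootless.conf
    sudo sysctl --system
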
張 旭

Deploy services to a swarm | Docker Documentation - 0 views

  • Swarm services use a declarative model, which means that you define the desired state of the service, and rely upon Docker to maintain this state.
  • To create a single-replica service with no extra configuration, you only need to supply the image name.
  • A service can be in a pending state if its image is unavailable
  • If your image is available on a private registry which requires login, use the --with-registry-auth flag
  • When you update a service, Docker stops its containers and restarts them with the new configuration.
  • When updating an existing service, the flag is --publish-add. There is also a --publish-rm flag to remove a port that was previously published.
  • To update the command an existing service runs, you can use the --args flag.
  • force the service to use a specific version of the image
  • If the manager can’t resolve the tag to a digest, each worker node is responsible for resolving the tag to a digest, and different nodes may use different versions of the image.
  • After you create a service, its image is never updated unless you explicitly run docker service update with the --image flag as described below.
  • When you run service update with the --image flag, the swarm manager queries Docker Hub or your private Docker registry for the digest the tag currently points to and updates the service tasks to use that digest.
  • You can publish a service task’s port directly on the swarm node where that service is running.
  • You can rely on the routing mesh. When you publish a service port, the swarm makes the service accessible at the target port on every node, regardless of whether there is a task for the service running on that node or not.
  • To publish a service’s ports externally to the swarm, use the --publish <PUBLISHED-PORT>:<SERVICE-PORT> flag.
  • published port on every swarm node
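
A few representative commands (service names, images, and the registry are hypothetical):

    docker service create --name web --replicas 3 --publish 8080:80 nginx:1.25
    docker service create --name api --with-registry-auth registry.example.com/team/api:2.1
    docker service update --image nginx:1.25.4 web           # the manager resolves the tag to a digest
    docker service update --publish-add 8443:443 web         # --publish-rm removes a previously published port
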
張 旭

Pre-Built CircleCI Docker Images - CircleCI - 0 views

  • typically extensions of official Docker images and include tools especially useful for CI/CD.
  • Convenience images are based on the most recently built versions of upstream images, so it is best practice to use the most specific image possible.
  • add -jessie or -stretch to the end of each of those containers to ensure you’re only using that version of the Debian base OS.
  • language images
  • service images
  • All images add a circleci user as a system user
  • A language image should be listed first under the docker key in your configuration, making it the primary container during execution.
  • For example, if you want to add browsers to the circleci/golang:1.9 image, use the circleci/golang:1.9-browsers image.
  • Service images are convenience images for services like databases
  • should be listed after language images so they become secondary service containers.
  • To speed up builds using RAM volume, add the -ram suffix to the end of a service image tag
  • All convenience images have been extended with additional tools.
  • all images include the following packages, installed via apt-get
  • Most CircleCI convenience images are Debian Jessie- or Stretch-based images; however, some extend Ubuntu-based images.
  • The following packages are installed via curl
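
For example (image tags are illustrative):

    jobs:
      build:
        docker:
          - image: circleci/golang:1.9-browsers   # language image first = primary container
          - image: circleci/postgres:9.6-ram      # service image second; -ram variant uses a RAM volume
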
張 旭

Choosing an Executor Type - CircleCI - 0 views

  • Containers are an instance of the Docker Image you specify and the first image listed in your configuration is the primary container image in which all steps run.
  • In this example, all steps run in the container created by the first image listed under the build job
  • If you experience increases in your run times due to installing additional tools during execution, it is best practice to use the Building Custom Docker Images Documentation to create a custom image with tools that are pre-loaded in the container to meet the job requirements.
  • The machine option runs your jobs in a dedicated, ephemeral VM
  • Using the machine executor gives your application full access to OS resources and provides you with full control over the job environment.
  • Using machine may require additional fees in a future pricing update.
  • Using the macos executor allows you to run your job in a macOS environment on a VM.
  • In a multi-image configuration job, all steps are executed in the container created by the first image listed.
  • All containers run in a common network and every exposed port will be available on localhost from a primary container.
  • If you want to work with private images/registries, please refer to Using Private Images.
  • Docker also has built-in image caching and enables you to build, run, and publish Docker images via Remote Docker.
  • if you require low-level access to the network or need to mount external volumes consider using machine
張 旭

Docker for AWS persistent data volumes | Docker Documentation - 0 views

  • Cloudstor is a modern volume plugin built by Docker
  • Docker swarm mode tasks and regular Docker containers can use a volume created with Cloudstor to mount a persistent data volume.
  • Global shared Cloudstor volumes mounted by all tasks in a swarm service.
  • Workloads running in a Docker service that require access to low latency/high IOPs persistent storage, such as a database engine, can use a relocatable Cloudstor volume backed by EBS.
  • Each relocatable Cloudstor volume is backed by a single EBS volume.
  • If a swarm task using a relocatable Cloudstor volume gets rescheduled to another node within the same availability zone as the original node where the task was running, Cloudstor detaches the backing EBS volume from the original node and attaches it to the new target node automatically.
  • in a different availability zone,
  • Cloudstor transfers the contents of the backing EBS volume to the destination availability zone using a snapshot, and cleans up the EBS volume in the original availability zone.
  • Typically the snapshot-based transfer process across availability zones takes between 2 and 5 minutes unless the work load is write-heavy.
  • A swarm task is not started until the volume it mounts becomes available
  • Sharing/mounting the same Cloudstor volume backed by EBS among multiple tasks is not a supported scenario and leads to data loss.
  • a Cloudstor volume to share data between tasks, choose the appropriate EFS backed shared volume option.
  • When multiple swarm service tasks need to share data in a persistent storage volume, you can use a shared Cloudstor volume backed by EFS.
  • a volume and its contents can be mounted by multiple swarm service tasks without the risk of data loss
  • over NFS
  • the persistent data backed by EFS volumes is always available.
  • shared Cloudstor volumes only work in those AWS regions where EFS is supported.
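
Roughly, following the Docker for AWS documentation (volume names and option values are illustrative):

    # Relocatable, EBS-backed volume for a single low-latency workload (not shareable between tasks)
    docker volume create -d "cloudstor:aws" --opt ebstype=gp2 --opt size=25 db-data
    docker service create --name db --replicas 1 \
      --mount type=volume,volume-driver=cloudstor:aws,source=db-data,destination=/var/lib/db \
      myorg/db:latest

    # Shared, EFS-backed volume that many tasks can mount without data loss
    docker volume create -d "cloudstor:aws" --opt backing=shared shared-assets
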
張 旭

Database Profiler - MongoDB Manual - 0 views

  • The database profiler collects detailed information about Database Commands executed against a running mongod instance.
  • The profiler writes all the data it collects to the system.profile collection, a capped collection in the admin database.
  • db.setProfilingLevel(2)
  • The slowms and sampleRate profiling settings are global. When set, these settings affect all databases in your process.
  • db.setProfilingLevel(1, { slowms: 20 })
  • db.setProfilingLevel(0, { slowms: 20 })
  • show profile
  • The system.profile collection is a capped collection with a default size of 1 megabyte.
  • By default, sampleRate is set to 1.0, meaning all slow operations are profiled.
  • When logLevel is set to 0, MongoDB records slow operations to the diagnostic log at a rate determined by slowOpSampleRate.
  • The slowms field indicates operation time threshold, in milliseconds, beyond which operations are considered slow.
  • You cannot enable profiling on a mongos instance.
  • profiler logs information about database operations in the system.profile collection.
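
A short mongo-shell session tying these settings together (threshold values are illustrative):

    db.setProfilingLevel(1, { slowms: 20, sampleRate: 0.5 })  // profile only slow ops; sample half of them
    db.getProfilingStatus()                                   // confirm level, slowms, sampleRate
    show profile                                              // shell helper: most recent profile entries
    db.system.profile.find({ millis: { $gt: 20 } }).sort({ ts: -1 }).limit(5).pretty()
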
張 旭

Using Services to Keep Your Rails Controllers Clean and DRY - 0 views

  • I’ll typically create an actions folder for things like create_invoice, and folders for other service objects such as decorators, policies, and support. I also use a services folder, but I reserve it for service objects that talk to external entities, like Stripe, AWS, or geolocation services.
  • You can create your own actions, decorators, support objects, and services.
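
A minimal sketch of that split (class names and the external provider are hypothetical): an action object owns the application logic, and a service object under app/services handles the external call.

    # app/actions/create_invoice.rb
    class CreateInvoice
      def initialize(order)
        @order = order
      end

      def call
        invoice = Invoice.create!(order: @order, total: @order.total)
        PaymentGateway::Charge.new(invoice).call   # external work stays in a service object
        invoice
      end
    end

    # app/services/payment_gateway/charge.rb
    module PaymentGateway
      class Charge
        def initialize(invoice)
          @invoice = invoice
        end

        def call
          # call the external payment provider's API here
        end
      end
    end
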