A workflow is a set of rules for defining a collection of jobs and their run order.
Using Workflows to Schedule Jobs - CircleCI
- Refer to the YAML Anchors/Aliases documentation for information about how to alias and reuse syntax to keep your .circleci/config.yml file small.
- Jobs run according to configured requirements, with each job waiting to start until the required job finishes successfully.
- A workflow can fan out to run a set of acceptance test jobs in parallel, and finally fan in to run a common deploy job.
- The name of the job to hold is arbitrary - it could be wait or pause, for example, as long as the job has a type: approval key in it.
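A minimal sketch of both ideas together - fan-out into parallel tests, then an approval gate before deploy (the job names build, test-1, test-2, hold, and deploy are illustrative, not from the source):

```yaml
version: 2.1
workflows:
  build-test-deploy:
    jobs:
      - build
      # Fan-out: both test jobs start as soon as build succeeds.
      - test-1:
          requires: [build]
      - test-2:
          requires: [build]
      # The name "hold" is arbitrary; the type: approval key is what
      # pauses the workflow until someone clicks Approve in the UI.
      - hold:
          type: approval
          requires: [test-1, test-2]
      # Fan-in: deploy runs only after the approval job completes.
      - deploy:
          requires: [hold]
```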
- Each workflow has an associated workspace which can be used to transfer files to downstream jobs as the workflow progresses.
- Attaching the workspace downloads and unpacks each layer based on the ordering of the upstream jobs in the workflow graph.
- Workflows that include jobs running on multiple branches may require data to be shared using workspaces.
- To persist data from a job and make it available to other jobs, configure the job to use the persist_to_workspace key.
- Files and directories named in the paths: property of persist_to_workspace will be uploaded to the workflow’s temporary workspace relative to the directory specified with the root key.
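A sketch of the persist/attach pair (job names, images, and paths are illustrative): the upstream job uploads output.txt relative to the root directory, and a downstream job unpacks it at a mount point of its choosing:

```yaml
jobs:
  build:
    docker:
      - image: cimg/base:stable
    steps:
      - run: mkdir -p workspace && echo "artifact" > workspace/output.txt
      # Upload paths relative to the root directory into the workspace.
      - persist_to_workspace:
          root: workspace
          paths:
            - output.txt
  deploy:
    docker:
      - image: cimg/base:stable
    steps:
      # Download and unpack the persisted layers into /tmp/workspace.
      - attach_workspace:
          at: /tmp/workspace
      - run: cat /tmp/workspace/output.txt
```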
- To rerun only a workflow’s failed jobs, click the Workflows icon in the app and select a workflow to see the status of each job, then click the Rerun button and select Rerun from failed.
- If you do not see your workflows triggering, a configuration error is preventing the workflow from starting.
Auto DevOps | GitLab
- Auto DevOps provides pre-defined CI/CD configuration which allows you to automatically detect, build, test, deploy, and monitor your applications.
- Once set up, all requests will hit the load balancer, which in turn will route them to the Kubernetes pods that run your application(s).
- You need to define a separate KUBE_INGRESS_BASE_DOMAIN variable for each of the above, based on the environment.
- Continuous deployment to production: Enables Auto Deploy with the master branch directly deployed to production.
- If a project’s repository contains a Dockerfile, Auto Build will use docker build to create a Docker image.
- Each buildpack requires certain files to be in your project’s repository for Auto Build to successfully build your application.
- Auto Test automatically runs the appropriate tests for your application using Herokuish and Heroku buildpacks by analyzing your project to detect the language and framework.
- Auto Code Quality uses the Code Quality image to run static analysis and other code checks on the current code.
- Static Application Security Testing (SAST) uses the SAST Docker image to run static analysis on the current code and checks for potential security issues.
- Dependency Scanning uses the Dependency Scanning Docker image to run analysis on the project dependencies and checks for potential security issues.
- License Management uses the License Management Docker image to search the project dependencies for their licenses.
- Vulnerability Static Analysis for containers uses Clair to run static analysis on a Docker image and checks for potential security issues.
- Review Apps are temporary application environments based on the branch’s code so developers, designers, QA, product managers, and other reviewers can actually see and interact with code changes as part of the review process. Auto Review Apps create a Review App for each branch. Auto Review Apps will deploy your app to your Kubernetes cluster only. When no cluster is available, no deployment will occur.
- The Review App will have a unique URL based on the project ID, the branch or tag name, and a unique number, combined with the Auto DevOps base domain.
- Dynamic Application Security Testing (DAST) uses the popular open source tool OWASP ZAProxy to perform an analysis on the current code and checks for potential security issues.
- Auto Browser Performance Testing utilizes the Sitespeed.io container to measure the performance of a web page.
- After a branch or merge request is merged into the project’s default branch (usually master), Auto Deploy deploys the application to a production environment in the Kubernetes cluster, with a namespace based on the project name and unique project ID.
- Auto Deploy doesn’t include deployments to staging or canary by default, but the Auto DevOps template contains job definitions for these tasks if you want to enable them.
- For internal and private projects, a GitLab Deploy Token will be automatically created when Auto DevOps is enabled and the Auto DevOps settings are saved.
- If the GitLab Deploy Token cannot be found, CI_REGISTRY_PASSWORD is used. Note that CI_REGISTRY_PASSWORD is only valid during deployment.
- If present, DB_INITIALIZE will be run as a shell command within an application pod as a Helm post-install hook.
- Because it runs as a post-install hook, once any deploy succeeds, DB_INITIALIZE will not be processed on subsequent deploys.
- Once your application is deployed, Auto Monitoring makes it possible to monitor your application’s server and response metrics right out of the box.
- Annotate the NGINX Ingress deployment to be scraped by Prometheus using prometheus.io/scrape: "true" and prometheus.io/port: "10254".
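Those annotations belong on the controller’s pod template metadata; a trimmed sketch of where they sit in the Deployment (the deployment name is a placeholder):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  template:
    metadata:
      annotations:
        # Tell Prometheus to scrape these pods on the controller's metrics port.
        prometheus.io/scrape: "true"
        prometheus.io/port: "10254"
```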
- If you are also using Auto Review Apps and Auto Deploy and choose to provide your own Dockerfile, make sure you expose your application to port 5000 as this is the port assumed by the default Helm chart.
- While Auto DevOps provides great defaults to get you started, you can customize almost everything to fit your needs; from custom buildpacks, to Dockerfiles, Helm charts, or even copying the complete CI/CD configuration into your project to enable staging and canary deployments, and more.
- If your project has a Dockerfile in the root of the project repo, Auto DevOps will build a Docker image based on the Dockerfile rather than using buildpacks.
- Bundled chart: if your project has a ./chart directory with a Chart.yaml file in it, Auto DevOps will detect the chart and use it instead of the default one.
- Create a project variable AUTO_DEVOPS_CHART with the URL of a custom chart to use, or create two project variables: AUTO_DEVOPS_CHART_REPOSITORY with the URL of a custom chart repository and AUTO_DEVOPS_CHART with the path to the chart.
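For example, the two-variable form could also be set in .gitlab-ci.yml rather than the UI (the repository URL and chart path here are placeholders):

```yaml
variables:
  AUTO_DEVOPS_CHART_REPOSITORY: https://charts.example.com  # custom chart repository
  AUTO_DEVOPS_CHART: my-org/my-chart                        # path to the chart within it
```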
- Use the HELM_UPGRADE_EXTRA_ARGS environment variable to override the default values in the values.yaml file in the default Helm chart.
- Specify a custom Helm chart per environment by scoping the environment variable to the desired environment.
- Your additions will be merged with the Auto DevOps template using the behaviour described for include.
- Set up the replica variables using a project variable and scale your application by just redeploying it!
- Auto DevOps detects variables where the key starts with K8S_SECRET_ and makes these prefixed variables available to the deployed application as environment variables.
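As an illustration (the variable name and value are made up, and in practice this would be a masked CI/CD variable in the project settings rather than committed to .gitlab-ci.yml), a variable defined like this would surface inside the deployed pods with the K8S_SECRET_ prefix stripped, i.e. as DATABASE_URL:

```yaml
variables:
  K8S_SECRET_DATABASE_URL: postgres://user:pass@db.example.com/app
```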
- If you update an application secret without changing any code and then manually create a new pipeline, you will find that any running application pods will not have the updated secrets.
- The normal behavior of Auto DevOps is to use Continuous Deployment, pushing automatically to the production environment every time a new pipeline is run on the default branch.
- If STAGING_ENABLED is defined in your project (e.g., set STAGING_ENABLED to 1 as a CI/CD variable), then the application will be automatically deployed to a staging environment, and a production_manual job will be created for you when you’re ready to manually deploy to production.
- If CANARY_ENABLED is defined in your project (e.g., set CANARY_ENABLED to 1 as a CI/CD variable), then two manual jobs will be created: canary, which will deploy the application to the canary environment, and production_manual, which is to be used by you when you’re ready to manually deploy to production.
- If INCREMENTAL_ROLLOUT_MODE is set to manual in your project, then instead of the standard production job, four different manual jobs will be created: rollout 10%, rollout 25%, rollout 50%, and rollout 100%.
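A sketch of enabling these toggles via .gitlab-ci.yml variables (they can equally be set as project CI/CD variables in the UI):

```yaml
variables:
  STAGING_ENABLED: "1"              # adds a staging deploy plus a production_manual job
  CANARY_ENABLED: "1"               # adds manual canary and production_manual jobs
  INCREMENTAL_ROLLOUT_MODE: manual  # replaces production with rollout 10/25/50/100% jobs
```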
- The percentage is based on the REPLICAS variable and defines the number of pods you want to have for your deployment.
- Once you get to 100%, you cannot scale down, and you’d have to roll back by redeploying the old version using the rollback button in the environment page.
- When a project has been marked as private, GitLab’s Container Registry requires authentication when downloading containers.
- Authentication credentials will be valid while the pipeline is running, allowing for a successful initial deployment.
- We strongly advise using GitLab Container Registry with Auto DevOps in order to simplify configuration and prevent any unforeseen issues.
Kubernetes Basic Concepts · Kubernetes Guide
Ingress - Kubernetes
- Cluster network: a set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model.
- A Kubernetes Service (a way to expose an application running on a set of Pods as a network service) that identifies a set of Pods using label selectors (labels tag objects with identifying attributes that are meaningful and relevant to users).
- An Ingress can be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
- Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
- You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
- Both the host and path must match the content of an incoming request before the load balancer directs traffic to the referenced Service.
- HTTP (and HTTPS) requests to the Ingress that match the host and path of the rule are sent to the listed backend.
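A minimal single-rule Ingress illustrating host and path matching (the host, service name, and port are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  rules:
    - host: app.example.com      # requests must match this host...
      http:
        paths:
          - path: /testpath      # ...and this path prefix
            pathType: Prefix
            backend:
              service:
                name: test-service
                port:
                  number: 80
```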
- A default backend is often configured in an Ingress controller to service any requests that do not match a path in the spec.
- A fanout configuration routes traffic from a single IP address to more than one Service, based on the HTTP URI being requested.
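A simple fanout sketch: two paths on one host routed to different Services (host, service names, and ports are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout
spec:
  rules:
    - host: foo.bar.com
      http:
        paths:
          - path: /foo           # foo.bar.com/foo -> service1
            pathType: Prefix
            backend:
              service:
                name: service1
                port:
                  number: 4200
          - path: /bar           # foo.bar.com/bar -> service2
            pathType: Prefix
            backend:
              service:
                name: service2
                port:
                  number: 8080
```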
- Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address.
- If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name-based virtual host being required.
- You can secure an Ingress by specifying a Secret (which stores sensitive information such as passwords, OAuth tokens, and SSH keys) that contains a TLS private key and certificate.
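A sketch of TLS termination (the host and Secret name are placeholders): the referenced Secret must contain tls.crt and tls.key, and the host listed under tls must match a rule’s host:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
    - hosts:
        - https-example.foo.com
      secretName: testsecret-tls   # Secret of type kubernetes.io/tls
  rules:
    - host: https-example.foo.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service1
                port:
                  number: 80
```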
- An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingresses, such as the load balancing algorithm, backend weight scheme, and others.
- More advanced load balancing concepts (e.g. persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can instead get these features through the load balancer used for a Service.
- After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.
- An Ingress with no rules sends all traffic to a single default backend and .spec.defaultBackend is the backend that should handle requests in that case.
- If defaultBackend is not set, the handling of requests that do not match any of the rules will be up to the ingress controller.
- A common usage for a Resource backend is to ingress data to an object storage backend with static assets.
- Prefix: matches based on a URL path prefix split by /. Matching is case sensitive and done on a path element by element basis.
- In some cases, multiple paths within an Ingress will match a request. In those cases, precedence will be given first to the longest matching path.
- Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configuration including the name of the controller that should implement the class.
- The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress point (traffic to the Service and its Pods is in plaintext).
- TLS will not work on the default rule because the certificates would have to be issued for all the possible sub-domains.