"Apache nifi supports powerful and scalable directed graphs of data routing, transformation, and system mediation logic. Some of the high-level capabilities and objectives of Apache NiFi include:"
Refer to the YAML anchors/aliases documentation for information about how to alias and reuse syntax to keep your .circleci/config.yml file small.
A basic workflow orchestrates two parallel jobs. With the requires: key, jobs run according to configured requirements, each job waiting to start until the required job finishes successfully. A common pattern fans out to run a set of acceptance test jobs in parallel, and finally fans in to run a common deploy job, as in the sketch below.
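A minimal sketch of such a fan-out/fan-in workflow in .circleci/config.yml; the job names (build, acceptance_a, acceptance_b, deploy) are illustrative:

    version: 2
    workflows:
      version: 2
      build_accept_deploy:
        jobs:
          - build
          - acceptance_a:        # fan-out: both acceptance jobs wait on build
              requires:
                - build
          - acceptance_b:
              requires:
                - build
          - deploy:              # fan-in: deploy waits on all acceptance jobs
              requires:
                - acceptance_a
                - acceptance_b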
Holding a Workflow for a Manual Approval
Workflows can be configured to wait for manual approval of a job before continuing to the next job. To do this, add a job to the jobs list with the key type: approval. approval is a special job type that is only available to jobs under the workflow key. The name of the job to hold is arbitrary (it could be wait or pause, for example), as long as the job has a type: approval key in it, as the following sketch shows.
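A sketch with a hypothetical hold job; only the type: approval key matters, not the name:

    workflows:
      version: 2
      build_test_deploy:
        jobs:
          - build
          - hold:              # arbitrary name; type: approval pauses the workflow here
              type: approval
              requires:
                - build
          - deploy:
              requires:
                - hold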
A workflow can be scheduled to run at a certain time for specific branches. The triggers key is only added under your workflows key and uses cron syntax, interpreted as Coordinated Universal Time (UTC), for the specified branches. By default, a workflow is triggered on every git push: a commit workflow with no triggers key will run on every git push, while a nightly workflow with a triggers key will run on the specified schedule. Cron step syntax (for example, */1, */20) is not supported.
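A sketch contrasting the two behaviors; job names are illustrative:

    workflows:
      version: 2
      commit:                    # no triggers key: runs on every git push
        jobs:
          - test
      nightly:                   # triggers key: runs only on the cron schedule
        triggers:
          - schedule:
              cron: "0 0 * * *"        # midnight UTC; step syntax like */20 is not supported
              filters:
                branches:
                  only:
                    - master
        jobs:
          - test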
Use a context to share environment variables across jobs: jobs that reference the same context receive the same shared environment variables when the workflow is initiated by a user who is part of the organization.
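For example, assuming an organization context named org-global:

    workflows:
      version: 2
      build:
        jobs:
          - test:
              context: org-global    # jobs using this context share its environment variables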
CircleCI does not run workflows for tags unless you explicitly specify tag filters. CircleCI branch and tag filters support the Java variant of regex pattern matching.
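A sketch of a deploy job that runs only for release tags; the tag pattern is illustrative:

    workflows:
      version: 2
      release:
        jobs:
          - deploy:
              filters:
                tags:
                  only: /^v\d+\.\d+\.\d+$/   # Java-style regex matching tags like v1.2.3
                branches:
                  ignore: /.*/               # never run this job for branch pushes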
Each workflow has an associated workspace which can be used to transfer files to downstream jobs as the workflow progresses. The workspace is an additive-only store of data: jobs can persist data to the workspace, and downstream jobs can attach the workspace to their container filesystem. Attaching the workspace downloads and unpacks each layer based on the ordering of the upstream jobs in the workflow graph. Workflows that include jobs running on multiple branches may require data to be shared using workspaces.

To persist data from a job and make it available to other jobs, configure the job to use the persist_to_workspace key. Files and directories named in the paths: property of persist_to_workspace will be uploaded to the workflow's temporary workspace relative to the directory specified with the root key. To retrieve saved data, configure the downstream job with the attach_workspace key.
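A sketch, assuming a build job whose make build step writes output into workspace/dist and a deploy job that consumes it; the image and commands are illustrative:

    jobs:
      build:
        docker:
          - image: cimg/base:stable
        steps:
          - checkout
          - run: make build            # assumed to write output into workspace/dist
          - persist_to_workspace:
              root: workspace          # paths below are relative to this root
              paths:
                - dist
      deploy:
        docker:
          - image: cimg/base:stable
        steps:
          - attach_workspace:
              at: /tmp/workspace       # persisted files appear under /tmp/workspace/dist
          - run: ls /tmp/workspace/dist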
To rerun only a workflow’s failed jobs, click the Workflows icon in the app and select a workflow to see the status of each job, then click the Rerun button and select Rerun from failed.
If you do not see your workflows triggering, a configuration error is preventing the workflow from starting; check the Workflows page of the CircleCI app (not the Job page) for error messages.
Zabbix by default uses a "pull" model: the server connects to agents on each monitored machine, and the agents periodically gather the information and send it to the server. Prometheus likewise prefers a "pull" model, in which the server gathers information from client machines.
Prometheus requires an application to be instrumented with a Prometheus client library (available in different programming languages) to prepare metrics. For components that cannot be instrumented directly, exporters expose metrics for Prometheus (similar to "agents" for Zabbix).
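A sketch of a prometheus.yml fragment scraping both an instrumented application and an exporter; the hostnames and ports are illustrative:

    scrape_configs:
      - job_name: my_app                 # application instrumented with a client library
        static_configs:
          - targets: ["app.example.com:8000"]
      - job_name: node                   # node_exporter, playing a role similar to a Zabbix agent
        static_configs:
          - targets: ["host.example.com:9100"]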
Zabbix uses its own TCP-based communication protocol between agents and the server, while Prometheus uses HTTP with protocol buffers (plus a text format for ease of use with curl).
Prometheus offers a basic tool for exploring gathered data and visualizing it in simple graphs on its native server, and also offers a minimal dashboard builder, PromDash. But Prometheus is, and is designed to be, supported by modern visualization tools like Grafana.
Prometheus offers a solution for alerting that is separated from its core into the Alertmanager application.
Your project will continue to use an alternative CI/CD configuration file if one is found.
Auto DevOps works with any Kubernetes cluster; the GitLab Runner used must run with the Docker or Kubernetes executor, with privileged mode enabled.
Base domain (needed for Auto Review Apps and Auto Deploy)
Kubernetes (needed for Auto Review Apps, Auto Deploy, and Auto Monitoring)
Prometheus (needed for Auto Monitoring), configured to scrape your Kubernetes cluster
The base domain can be set at the project level as a variable: KUBE_INGRESS_BASE_DOMAIN. A wildcard DNS A record matching the base domain(s) is required. Once set up, all requests will hit the load balancer, which in turn routes them to the Kubernetes pods that run your application(s).
review/ (every environment starting with review/)
staging
production
You need to define a separate KUBE_INGRESS_BASE_DOMAIN variable for each of the above, scoped based on the environment.
Continuous deployment to production: enables Auto Deploy, with the master branch directly deployed to production.
Continuous deployment to production using timed incremental rollout
Automatic deployment to staging, manual deployment to production
Auto Build creates a build of the application using an existing Dockerfile or Heroku buildpacks. If a project's repository contains a Dockerfile, Auto Build will use docker build to create a Docker image. Each buildpack requires certain files to be in your project's repository for Auto Build to successfully build your application.
Auto Test automatically runs the appropriate tests for your application using Herokuish and Heroku buildpacks by analyzing your project to detect the language and framework.
Auto Code Quality uses the Code Quality image to run static analysis and other code checks on the current code.
Static Application Security Testing (SAST) uses the SAST Docker image to run static analysis on the current code and checks for potential security issues.
Dependency Scanning uses the Dependency Scanning Docker image to run analysis on the project dependencies and checks for potential security issues.
License Management uses the License Management Docker image to search the project dependencies for their licenses.
Vulnerability Static Analysis for containers uses Clair to run static analysis on a Docker image and checks for potential security issues.
Review Apps are temporary application environments based on the branch's code, so developers, designers, QA, product managers, and other reviewers can actually see and interact with code changes as part of the review process. Auto Review Apps create a Review App for each branch.

Auto Review Apps will deploy your app to your Kubernetes cluster only. When no cluster is available, no deployment will occur. The Review App will have a unique URL based on the project ID, the branch or tag name, and a unique number, combined with the Auto DevOps base domain. Review Apps are deployed using the auto-deploy-app chart with Helm, which can be customized. Your apps should not be manipulated outside of Helm (for example, using Kubernetes directly).
Dynamic Application Security Testing (DAST) uses the popular open source tool OWASP ZAProxy to perform an analysis on the current code and checks for potential security issues.
Auto Browser Performance Testing utilizes the Sitespeed.io container to measure the performance of a web page. To have it test additional paths, add them to a file named .gitlab-urls.txt in the root directory, one per line.
After a branch or merge request is merged into the project's default branch (usually master), Auto Deploy deploys the application to a production environment in the Kubernetes cluster, with a namespace based on the project name and unique project ID. Auto Deploy does not include deployments to staging or canary by default, but the Auto DevOps template contains job definitions for these tasks if you want to enable them.
Apps are deployed using the auto-deploy-app chart with Helm.
For internal and private projects, a GitLab Deploy Token will be automatically created when Auto DevOps is enabled and the Auto DevOps settings are saved. If the GitLab Deploy Token cannot be found, CI_REGISTRY_PASSWORD is used; note that CI_REGISTRY_PASSWORD is only valid during deployment.
If present, DB_INITIALIZE will be run as a shell command within an application pod as a Helm post-install hook. Note that a post-install hook means that if any deploy succeeds, DB_INITIALIZE will not be processed thereafter. DB_MIGRATE will be run as a shell command within an application pod as a Helm pre-upgrade hook.
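For example, in .gitlab-ci.yml, assuming a Rails-style application (the commands are illustrative):

    variables:
      DB_INITIALIZE: "bundle exec rake db:setup RAILS_ENV=production"    # run once, post-install
      DB_MIGRATE: "bundle exec rake db:migrate RAILS_ENV=production"     # run before every upgrade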
Once your application is deployed, Auto Monitoring makes it possible to monitor your application's server and response metrics right out of the box. To enable it, annotate the NGINX Ingress deployment to be scraped by Prometheus using prometheus.io/scrape: "true" and prometheus.io/port: "10254".
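A sketch of the relevant Kubernetes metadata on the NGINX Ingress controller's deployment:

    metadata:
      annotations:
        prometheus.io/scrape: "true"    # opt this deployment into Prometheus scraping
        prometheus.io/port: "10254"     # the controller's metrics port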
If you are also using Auto Review Apps and Auto Deploy and choose to provide your own Dockerfile, make sure you expose your application on port 5000, as this is the port assumed by the default Helm chart.
While Auto DevOps provides great defaults to get you started, you can customize almost everything to fit your needs: from custom buildpacks, to Dockerfiles, Helm charts, or even copying the complete CI/CD configuration into your project to enable staging and canary deployments, and more.
If your project has a Dockerfile in the root of the project repo, Auto DevOps
will build a Docker image based on the Dockerfile rather than using buildpacks.
Auto DevOps uses Helm to deploy your application to Kubernetes. Bundled chart: if your project has a ./chart directory with a Chart.yaml file in it, Auto DevOps will detect the chart and use it instead of the default one.
Create a project variable AUTO_DEVOPS_CHART with the URL of a custom chart to use, or create two project variables: AUTO_DEVOPS_CHART_REPOSITORY with the URL of a custom chart repository, and AUTO_DEVOPS_CHART with the path to the chart. You can also make use of the HELM_UPGRADE_EXTRA_ARGS environment variable to override the default values in the values.yaml file in the default Helm chart. The use of a custom Helm chart per environment can be specified by scoping the environment variable to the desired environment.
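A sketch of the two-variable form; the repository URL, chart path, and override are illustrative:

    variables:
      AUTO_DEVOPS_CHART_REPOSITORY: "https://charts.example.com"   # custom chart repository
      AUTO_DEVOPS_CHART: "example/my-app"                          # path to the chart in that repository
      HELM_UPGRADE_EXTRA_ARGS: "--set replicaCount=3"              # override values.yaml defaults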
Your additions will be merged with the Auto DevOps template using the behaviour described for include. Alternatively, you can copy and paste the contents of the Auto DevOps template into your project and edit it as needed.
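A minimal sketch of a .gitlab-ci.yml that includes the template and adds a hypothetical job on top:

    include:
      - template: Auto-DevOps.gitlab-ci.yml    # pull in the Auto DevOps template

    smoke_test:                                # your addition, merged with the template's jobs
      stage: test
      script:
        - ./scripts/smoke.sh                   # assumed script in your repository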
In order to support applications that require a database, PostgreSQL is provisioned by default. Set up the replica variables using a project variable and scale your application by just redeploying it. You should not scale your application using Kubernetes directly.
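A sketch of replica variables; the names follow the documented *_REPLICAS convention and the values are illustrative:

    variables:
      PRODUCTION_REPLICAS: "3"    # pods for the production environment
      REPLICAS: "1"               # default number of pods for other environments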
Some applications need to define secret variables that are accessible by the deployed application. Auto DevOps detects variables whose keys start with K8S_SECRET_ and makes these prefixed variables available to the deployed application as environment variables. Auto DevOps pipelines use your application secret variables to populate a Kubernetes secret. Environment variables are generally considered immutable in a Kubernetes pod, so if you update an application secret without changing any code and then manually create a new pipeline, any running application pods will not have the updated secrets. Variables with multiline values are not currently supported.
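A sketch, assuming the K8S_SECRET_ prefix is stripped before the variable reaches the pod, so the application sees DATABASE_PASSWORD; in practice you would set this as a project CI/CD variable rather than in .gitlab-ci.yml, and the value here is a placeholder:

    variables:
      K8S_SECRET_DATABASE_PASSWORD: "placeholder-secret"   # exposed to the app as DATABASE_PASSWORD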
The normal behavior of Auto DevOps is to use continuous deployment, pushing automatically to the production environment every time a new pipeline is run on the default branch. If STAGING_ENABLED is defined in your project (for example, set STAGING_ENABLED to 1 as a CI/CD variable), the application will be automatically deployed to a staging environment, and a production_manual job will be created for you when you are ready to manually deploy to production. If CANARY_ENABLED is defined in your project (for example, set CANARY_ENABLED to 1 as a CI/CD variable), two manual jobs will be created: canary, which deploys the application to the canary environment, and production_manual, which is to be used when you are ready to manually deploy to production.
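For example, enabling both strategies as CI/CD variables:

    variables:
      STAGING_ENABLED: "1"    # deploy to staging automatically; production becomes production_manual
      CANARY_ENABLED: "1"     # additionally create a manual canary job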
If INCREMENTAL_ROLLOUT_MODE is set to manual in your project, then instead of the standard production job, four different manual jobs will be created: rollout 10%, rollout 25%, rollout 50%, and rollout 100%. The percentage is based on the REPLICAS variable and defines the number of pods you want to have for your deployment.
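A sketch relating the rollout percentages to REPLICAS; the values are illustrative:

    variables:
      INCREMENTAL_ROLLOUT_MODE: "manual"
      REPLICAS: "10"    # rollout 10% scales to 1 pod, 50% to 5 pods, 100% to all 10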
To start a job, click on the play icon next to the job’s name.
Once you get to 100%, you cannot scale down; you would have to roll back by redeploying the old version using the rollback button on the environment page.
INCREMENTAL_ROLLOUT_MODE set to manual can also be combined with STAGING_ENABLED. Note that not all buildpacks support Auto Test yet.
When a project has been marked as private, GitLab's Container Registry requires authentication when downloading containers. Authentication credentials will be valid while the pipeline is running, allowing for a successful initial deployment. After the pipeline completes, Kubernetes will no longer be able to access the Container Registry. We strongly advise using GitLab Container Registry with Auto DevOps in order to simplify configuration and prevent any unforeseen issues.