Serverless was first used to describe applications that significantly or fully
depend on 3rd party applications / services (‘in the cloud’) to manage server-side
logic and state.
These are typically ‘rich client’ applications (think single page
web apps, or mobile apps) that use the vast ecosystem of cloud-accessible
databases (like Parse, Firebase), authentication services (Auth0, AWS Cognito),
etc.
Serverless can also mean applications where some amount of server-side logic
is still written by the application developer but unlike traditional architectures
is run in stateless compute containers that are event-triggered, ephemeral (may
only last for one invocation), and fully managed by a 3rd party.
‘Functions as a Service’ (FaaS).
AWS Lambda is one of the most popular implementations of FaaS at present.
A good example is
Auth0 - they started initially with BaaS ‘Authentication
as a Service’, but with Auth0 Webtask they are entering the
FaaS space.
a typical ecommerce app
a backend data-processing service
with zero administration.
FaaS offerings do not require coding to a specific framework or
library.
Horizontal scaling is completely automatic, elastic, and managed by the
provider
Functions in FaaS are triggered by event types defined by the provider.
a FaaS-supported message broker
from a
deployment-unit point of view FaaS functions are stateless.
allowed the client direct access to a
subset of our database
deleted the authentication logic in the original application and have
replaced it with a third party BaaS service
The client is in fact well on its way to becoming a Single Page Application.
implement a FaaS function that responds to http requests via an
API Gateway
port the search code from the Pet Store server to the Pet Store Search
function
replaced a long lived consumer application with a
FaaS function that runs within the event driven context
server
applications - is a key difference when comparing with other modern
architectural trends like containers and PaaS
the only code that needs to
change when moving to FaaS is the ‘main method / startup’ code, in that it is
deleted, and likely the specific code that is the top-level message handler
(the ‘message listener interface’ implementation), but this might only be a change
in method signature
With FaaS you need to write the function ahead of time to assume parallelism
Most providers also allow functions to be triggered as a response to inbound
http requests, typically in some kind of API gateway
you should assume that for any given
invocation of a function none of the in-process or host state that you create
will be available to any subsequent invocation.
FaaS functions are either naturally stateless, or use external storage to
store state across requests or for further input to handle a request.
certain classes of long lived task are not suited to FaaS
functions without re-architecture
if you were writing a
low-latency trading application you probably wouldn’t want to use FaaS systems
at this time
An
API Gateway is an HTTP server where routes / endpoints are defined in
configuration and each route is associated with a FaaS function.
API
Gateway will allow mapping from http request parameters to input arguments
for the FaaS function
API Gateways may also perform authentication, input validation,
response code mapping, etc.
the Serverless Framework makes working
with API Gateway + Lambda significantly easier than using the first principles
provided by AWS.
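As a rough sketch (service name, runtime, and handler are assumptions, not from the source), a Serverless Framework config wiring one HTTP route through API Gateway to a single function might look like:

cat > serverless.yml <<'EOF'
service: petstore-search        # hypothetical service name
provider:
  name: aws
  runtime: nodejs18.x
functions:
  search:
    handler: handler.search     # hypothetical handler module/function
    events:
      - http:                   # creates the API Gateway route for this function
          path: search
          method: get
EOF
serverless deploy               # packages the function and provisions API Gateway + Lambda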
Apex - a project to
‘Build, deploy, and manage AWS Lambda functions with ease.'
'Serverless'
to mean the union of a couple of other ideas - 'Backend as a Service' and
'Functions as a Service'.
"The YubiKey 4 is the strong authentication bullseye the industry has been aiming at for years, enabling one single key to secure an unlimited number of applications.
Yubico's 4th generation YubiKey is built on high-performance secure elements. It includes the same range of one-time password and public key authentication protocols as in the YubiKey NEO, excluding NFC, but with stronger public/private keys, faster crypto operations and the world's first touch-to-sign feature.
With the YubiKey 4 platform, we have further improved our manufacturing and ordering process, enabling customers to order exactly what functions they want in 500+ unit volumes, with no secrets stored at Yubico or shared with a third-party organization. The best part? An organization can securely customize 1,000 YubiKeys in less than 10 minutes.
For customers who require NFC, the YubiKey NEO is our full-featured key with both contact (USB) and contactless (NFC, MIFARE) communications."
Keyfiles are bare-minimum forms of security and are best suited for testing or
development environments.
With keyfile authentication, each
mongod instance in the replica set uses the contents of the keyfile as the
shared password for authenticating other members in the deployment.
On UNIX systems, the keyfile must not have group or world
permissions.
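A minimal sketch of that keyfile workflow (paths, ownership, and the replica set name are illustrative):

openssl rand -base64 756 > /etc/mongod-keyfile   # generate the shared secret
chmod 400 /etc/mongod-keyfile                    # no group or world permissions
chown mongodb:mongodb /etc/mongod-keyfile
mongod --replSet rs0 --keyFile /etc/mongod-keyfile   # each member uses the same keyfile contents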
You can use role-based access control
(RBAC) and other
security mechanisms to make sure that users and workloads can get access to the
resources they need, while keeping workloads, and the cluster itself, secure.
You can set limits on the resources that users and workloads can access
by managing policies and
container resources.
you need to plan how to scale to relieve increased
pressure from more requests to the control plane and worker nodes or scale down to reduce unused
resources.
Managed control plane: Let the provider manage the scale and availability
of the cluster's control plane, as well as handle patches and upgrades.
The simplest Kubernetes cluster has the entire control plane and worker node
services running on the same machine.
You can deploy a control plane using tools such
as kubeadm, kops, and kubespray.
Secure communications between control plane services
are implemented using certificates.
Certificates are automatically generated
during deployment or you can generate them using your own certificate authority.
Separate and backup etcd service: The etcd services can either run on the
same machines as other control plane services or run on separate machines
Create multiple control plane systems: For high availability, the
control plane should not be limited to a single machine
Some deployment tools set up the Raft
consensus algorithm to do leader election of Kubernetes services. If the
primary goes away, another service elects itself and takes over.
Groups of zones are referred to as regions.
if you installed with kubeadm, there are instructions to help you with
Certificate Management
and Upgrading kubeadm clusters.
Production-quality workloads need to be resilient and anything they rely
on needs to be resilient (such as CoreDNS).
Add nodes to the cluster: If you are managing your own cluster you can
add nodes by setting up your own machines and either adding them manually or
having them register themselves to the cluster’s apiserver.
Set up node health checks: For important workloads, you want to make sure
that the nodes and pods running on those nodes are healthy.
Authentication: The apiserver can authenticate users using client
certificates, bearer tokens, an authenticating proxy, or HTTP basic auth.
Authorization: When you set out to authorize your regular users, you will probably choose
between RBAC and ABAC authorization.
Role-based access control (RBAC): Lets you
assign access to your cluster by allowing specific sets of permissions to authenticated users.
Permissions can be assigned for a specific namespace (Role) or across the entire cluster
(ClusterRole).
Attribute-based access control (ABAC): Lets you
create policies based on resource attributes in the cluster and will allow or deny access
based on those attributes.
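For example (namespace, user, and role names are hypothetical), a namespaced Role and its binding can be created imperatively:

kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods -n dev
kubectl create rolebinding jane-reads-pods --role=pod-reader --user=jane -n dev
# a ClusterRole / ClusterRoleBinding pair would grant the same permissions cluster-wide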
Set limits on workload resources
Set namespace limits: Set per-namespace quotas on things like memory and CPU
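A sketch of such a quota (namespace and limits are made up):

kubectl create namespace team-a
kubectl create quota team-a-quota -n team-a \
  --hard=requests.cpu=4,requests.memory=8Gi,pods=20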
Prepare for DNS demand: If you expect workloads to massively scale up,
your DNS service must be ready to scale up as well.
The nodes
in a swarm use mutual Transport Layer Security (TLS) to authenticate, authorize,
and encrypt the communications with other nodes in the swarm.
By default, the manager node generates a new root Certificate
Authority (CA) along with a key pair, which are used to secure communications
with other nodes that join the swarm.
The manager node also generates two tokens to use when you join additional nodes
to the swarm: one worker token and one manager token.
Each time a new node joins the swarm, the manager issues a certificate to the
node
By default, each node in the swarm renews its certificate every three months.
If a cluster CA key or a manager node is compromised, you can
rotate the swarm root CA so that none of the nodes trust certificates
signed by the old root CA anymore.
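The corresponding commands, roughly (the advertise address is a placeholder):

docker swarm init --advertise-addr 192.168.0.10   # generates the root CA and join tokens
docker swarm join-token worker                    # print the worker join command + token
docker swarm join-token manager                   # print the manager join command + token
docker swarm ca --rotate                          # rotate the swarm root CA after a compromise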
"The nodes in a swarm use mutual Transport Layer Security (TLS) to authenticate, authorize, and encrypt the communications with other nodes in the swarm."
a stateless authentication mechanism as the user state is never saved in server memory
In authentication, when the user successfully logs in using their credentials, a JSON Web Token will be returned and must be saved locally (typically in local storage, but cookies can be also used), instead of the traditional approach of creating a session in the server and returning a cookie.
The user agent should send the JWT, typically in the Authorization header using the Bearer schema.
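For instance (URL and token value are placeholders), each request carries the token instead of a session cookie:

curl -H "Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.payload.signature" \
  https://api.example.com/v1/profile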
Helm will figure out where to install Tiller by reading your Kubernetes
configuration file (usually $HOME/.kube/config). This is the same file
that kubectl uses.
By default, when Tiller is installed, it does not have authentication enabled.
helm repo update
Without a max history set the history is kept indefinitely, leaving a large number of records for helm and tiller to maintain.
helm init --upgrade
Whenever you install a chart, a new release is created.
one chart can
be installed multiple times into the same cluster. And each can be
independently managed and upgraded.
helm list function will show you a list of all deployed releases.
helm delete
helm status
you
can audit a cluster’s history, and even undelete a release (with helm
rollback).
the Helm
server (Tiller).
The Helm client (helm)
brew install kubernetes-helm
Tiller, the server portion of Helm, typically runs inside of your
Kubernetes cluster.
it can also be run locally, and
configured to talk to a remote Kubernetes cluster.
Role-Based Access Control - RBAC for short
create a service account for Tiller with the right roles and permissions to access resources.
run Tiller in an RBAC-enabled Kubernetes cluster.
run kubectl get pods --namespace
kube-system and see Tiller running.
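A commonly used sequence for that (this grants the broad cluster-admin role; a tighter role is advisable on shared clusters):

kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller          # install Tiller using that service account
kubectl get pods --namespace kube-system    # the tiller-deploy pod should be running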
helm inspect
Helm will look for Tiller in the kube-system namespace unless
--tiller-namespace or TILLER_NAMESPACE is set.
For development, it is sometimes easier to work on Tiller locally, and
configure it to connect to a remote Kubernetes cluster.
even when running locally, Tiller will store release
configuration in ConfigMaps inside of Kubernetes.
helm version should show you both
the client and server version.
Because Tiller stores its data in Kubernetes ConfigMaps, you can safely
delete and re-install Tiller without worrying about losing any data.
helm reset
The --node-selectors flag allows us to specify the node labels required
for scheduling the Tiller pod.
--override allows you to specify properties of Tiller’s
deployment manifest.
helm init --override manipulates the specified properties of the final
manifest (there is no “values” file).
The --output flag allows us skip the installation of Tiller’s deployment
manifest and simply output the deployment manifest to stdout in either
JSON or YAML format.
By default, Tiller stores release information in ConfigMaps in the namespace
where it is running.
To switch from the default backend to the secrets
backend, you’ll have to do the migration for this on your own.
a beta SQL storage backend that stores release
information in an SQL database (only postgres has been tested so far).
Once you have the Helm Client and Tiller successfully installed, you can
move on to using Helm to manage charts.
Helm requires that kubelet have access to a copy of the socat program to proxy connections to the Tiller API.
A Release is an instance of a chart running in a Kubernetes cluster.
One chart can often be installed many times into the same cluster.
helm init --client-only
helm init --dry-run --debug
A panic in Tiller is almost always the result of a failure to negotiate with the
Kubernetes API server
Tiller and Helm have to negotiate a common version to make sure that they can safely
communicate without breaking API assumptions
helm delete --purge
Helm stores some files in $HELM_HOME, which is
located by default in ~/.helm
A Chart is a Helm package. It contains all of the resource definitions
necessary to run an application, tool, or service inside of a Kubernetes
cluster.
Think of it like the Kubernetes equivalent of a Homebrew formula,
an Apt dpkg, or a Yum RPM file.
A Repository is the place where charts can be collected and shared.
Set the $HELM_HOME environment variable
each time it is installed, a new release is created.
Helm installs charts into Kubernetes, creating a new release for
each installation. And to find new charts, you can search Helm chart
repositories.
chart repository is named
stable by default
helm search shows you all of the available charts
helm inspect
To install a new package, use the helm install command. At its
simplest, it takes only one argument: The name of the chart.
If you want to use your own release name, simply use the
--name flag on helm install
additional configuration steps you can or
should take.
Helm does not wait until all of the resources are running before it
exits. Many charts require Docker images that are over 600M in size, and
may take a long time to install into the cluster.
helm status
helm inspect
values
helm inspect values stable/mariadb
override any of these settings in a YAML formatted file,
and then pass that file during installation.
helm install -f config.yaml stable/mariadb
--values (or -f): Specify a YAML file with overrides.
--set (and its variants --set-string and --set-file): Specify overrides on the command line.
Values that have been --set can be cleared by running helm upgrade with --reset-values
specified.
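A short sketch using the stable/mariadb example above (the value names come from that chart's documentation, but verify them with helm inspect values):

cat > config.yaml <<'EOF'
mariadbUser: user0
mariadbDatabase: user0db
EOF
helm install -f config.yaml stable/mariadb
# the same overrides could be passed inline instead:
helm install --set mariadbUser=user0,mariadbDatabase=user0db stable/mariadb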
Chart
designers are encouraged to consider the --set usage when designing the format
of a values.yaml file.
--set-file key=filepath is another variant of --set.
It reads the file and uses its content as a value.
This makes it possible to inject multi-line text into values without dealing with indentation in YAML.
An unpacked chart directory
When a new version of a chart is released, or when you want to change
the configuration of your release, you can use the helm upgrade
command.
Because Kubernetes charts can be large and
complex, Helm tries to perform the least invasive upgrade.
It will only
update things that have changed since the last release
If both are used, --set values are merged into --values with higher precedence.
The helm get command is a useful tool for looking at a release in the
cluster.
helm rollback
A release version is an incremental revision. Every time an install,
upgrade, or rollback happens, the revision number is incremented by 1.
helm history
a release name cannot be
re-used.
you can rollback a
deleted resource, and have it re-activate.
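Putting those commands together (the release name happy-panda is just an example):

helm upgrade happy-panda stable/mariadb -f panda.yaml   # apply only what changed
helm get values happy-panda                             # inspect the release's computed values
helm history happy-panda                                # list revisions (1, 2, 3, ...)
helm rollback happy-panda 1                             # roll back to revision 1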
helm repo list
helm repo add
helm repo update
The Chart Development Guide explains how to develop your own
charts.
helm create
helm lint
helm package
Charts that are archived can be loaded into chart repositories.
chart repository server
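The typical chart-development loop looks roughly like this (chart name is arbitrary):

helm create mychart       # scaffold a new chart directory
helm lint mychart         # check it for problems
helm package mychart      # produce mychart-0.1.0.tgz for a chart repository
helm install ./mychart    # or install the unpacked chart directory directly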
Tiller can be installed into any namespace.
Limiting Tiller to only be able to install into specific namespaces and/or resource types is controlled by Kubernetes RBAC roles and rolebindings
Release names are unique PER TILLER INSTANCE
Charts should only contain resources that exist in a single namespace.
not recommended to have multiple Tillers configured to manage resources in the same namespace.
a client-side Helm plugin. A plugin is a
tool that can be accessed through the helm CLI, but which is not part of the
built-in Helm codebase.
Helm plugins are add-on tools that integrate seamlessly with Helm. They provide
a way to extend the core feature set of Helm, but without requiring every new
feature to be written in Go and added to the core tool.
Helm plugins live in $(helm home)/plugins
The Helm plugin model is partially modeled on Git’s plugin model
helm is referred to as the porcelain layer, with
plugins being the plumbing.
command is the command that this plugin will
execute when it is called.
Environment variables are interpolated before the plugin
is executed.
The command itself is not executed in a shell. So you can’t oneline a shell script.
Helm is able to fetch Charts using HTTP/S
Variables like KUBECONFIG are set for the plugin if they are set in the
outer environment.
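A minimal plugin layout might look like this (the plugin name and script are invented for illustration):

mkdir -p "$(helm home)/plugins/hello"
cat > "$(helm home)/plugins/hello/plugin.yaml" <<'EOF'
name: "hello"
version: "0.1.0"
usage: "print a greeting"
description: "toy plugin showing the command field"
command: "$HELM_PLUGIN_DIR/hello.sh"   # env vars are interpolated; not run in a shell
EOF
cat > "$(helm home)/plugins/hello/hello.sh" <<'EOF'
#!/bin/sh
echo "hello from a helm plugin (KUBECONFIG=$KUBECONFIG)"
EOF
chmod +x "$(helm home)/plugins/hello/hello.sh"
helm hello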
In Kubernetes, granting a role to an application-specific service account is a best practice to ensure that your application is operating in the scope that you have specified.
restrict Tiller’s capabilities to install resources to certain namespaces, or to grant a Helm client running in a pod access to a Tiller instance.
Service account with cluster-admin role
The cluster-admin role is created by default in a Kubernetes cluster
Deploy Tiller in a namespace, restricted to deploying resources only in that namespace
Deploy Tiller in a namespace, restricted to deploying resources in another namespace
When running a Helm client in a pod, in order for the Helm client to talk to a Tiller instance, it will need certain privileges to be granted.
SSL Between Helm and Tiller
The Tiller authentication model uses client-side SSL certificates.
creating an internal CA, and using both the
cryptographic and identity functions of SSL.
Helm is a powerful and flexible package-management and operations tool for Kubernetes.
The default installation applies no security configurations, which may be adequate
for a cluster that is well-secured in a private network with no data-sharing and no other users or teams.
With great power comes great responsibility.
Choose the Best Practices you should apply to your helm installation
Role-based access control, or RBAC
Tiller’s gRPC endpoint and its usage by Helm
Kubernetes employs a role-based access control (or RBAC) system (as do modern operating systems) to help mitigate the damage that can be done if credentials are misused or bugs exist.
In the default installation the gRPC endpoint that Tiller offers is available inside the cluster (not external to the cluster) without authentication configuration applied.
Tiller stores its release information in ConfigMaps. We suggest changing the default to Secrets.
release information
charts
charts are a kind of package that not only installs containers you may or may not have validated yourself, but it may also install into more than one namespace.
As with all shared software, in a controlled or shared environment you must validate all software you install yourself before you install it.
Helm’s provenance tools to ensure the provenance and integrity of charts
"Helm will figure out where to install Tiller by reading your Kubernetes configuration file (usually $HOME/.kube/config). This is the same file that kubectl uses."
By default, secrets are mounted into a service at /run/secrets/<secret-name>
docker secret create
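For example (secret, service, and image names are placeholders):

printf 'S3cr3tPass' | docker secret create db_password -   # create a secret from stdin
docker service create --name app --secret db_password myimage:latest
# inside the service's containers the value appears at /run/secrets/db_password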
If you use a distributed storage driver, such as Amazon S3, you can use a
fully replicated service. Each worker can write to the storage back-end
without causing write conflicts.
You can access the service on port 443 of any swarm node. Docker sends the
requests to the node which is running the service.
--publish published=443,target=443
The most important aspect is that a load balanced cluster of registries must
share the same resources
S3 or Azure, they should be
accessing the same resource and share an identical configuration.
you must make sure you are properly sending the
X-Forwarded-Proto, X-Forwarded-For, and Host headers to their “client-side”
values. Failure to do so usually makes the registry issue redirects to internal
hostnames or downgrade from https to http.
A properly secured registry should return 401 when the “/v2/” endpoint is hit
without credentials
registries should always
implement access restrictions.
REGISTRY_AUTH=htpasswd
REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd
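A rough sketch of wiring those variables up (paths, the user, and the TLS material are assumptions; htpasswd comes from apache2-utils):

mkdir -p auth
htpasswd -Bbn testuser testpassword > auth/htpasswd   # one bcrypt-hashed user entry
docker run -d -p 443:5000 --name registry \
  -v "$(pwd)/auth:/auth" \
  -e REGISTRY_AUTH=htpasswd \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  -v "$(pwd)/certs:/certs" \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  registry:2
curl -k https://localhost/v2/   # should return 401 without credentials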
The registry also supports delegated authentication which redirects users to a
specific trusted token server. This approach is more complicated to set up, and
only makes sense if you need to fully configure ACLs and need more control over
the registry’s integration into your global authorization and authentication
systems.
Baseimage-docker only advocates running multiple OS processes inside a single container.
Password and challenge-response authentication are disabled by default. Only key authentication is allowed.
A tool for running a command as another user
The Docker developers advocate the philosophy of running a single logical service per container. A logical service can consist of multiple OS processes.
All syslog messages are forwarded to "docker logs".
Baseimage-docker advocates running multiple OS processes inside a single container, and a single logical service can consist of multiple OS processes.
Baseimage-docker provides tools to encourage running processes as different users
sometimes it makes sense to run multiple services in a single container, and sometimes it doesn't.
Splitting your logical service into multiple OS processes also makes sense from a security standpoint.
using environment variables to pass parameters to containers is very much the "Docker way"
Baseimage-docker provides a facility to run a single one-shot command, while solving all of the aforementioned problems
the shell script must run the daemon without letting it daemonize/fork.
All executable scripts in /etc/my_init.d, if this directory exists. The scripts are run in lexicographic order.
variables will also be passed to all child processes
Environment variables on Unix are inherited on a per-process basis
there is no good central place for defining environment variables for all applications and services
centrally defining environment variables
One of the ideas behind Docker is that containers should be stateless, easily restartable, and behave like a black box.
a one-shot command in a new container
immediately exit after the command exits,
However the downside of this approach is that the init system is not started. That is, while invoking COMMAND, important daemons such as cron and syslog are not running. Also, orphaned child processes are not properly reaped, because COMMAND is PID 1.
add additional daemons (e.g. your own app) to the image by creating runit entries.
Nginx is one such example: it removes all environment variables unless you explicitly instruct it to retain them through the env configuration option.
Mechanisms for easily running multiple processes, without violating the Docker philosophy
Ubuntu is not designed to be run inside Docker
According to the Unix process model, the init process -- PID 1 -- inherits all orphaned child processes and must reap them
Syslog-ng seems to be much more stable
cron daemon
Rotates and compresses logs
/sbin/setuser
A tool for installing apt packages that automatically cleans up after itself.
a single logical service inside a single container
A daemon is a program which runs in the background of its system, such
as a web server.
The shell script must be called run, must be executable, and is to be
placed in the directory /etc/service/<NAME>. runsv will switch to
the directory and invoke ./run after your container starts.
If any script exits with a non-zero exit code, the booting will fail.
If your process is started with
a shell script, make sure you exec the actual process, otherwise the shell will receive the signal
and not your process.
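A sketch of such a run script (service and binary names are made up):

mkdir -p /etc/service/myapp
cat > /etc/service/myapp/run <<'EOF'
#!/bin/sh
# exec replaces the shell so signals reach the daemon itself,
# and the daemon must stay in the foreground (no daemonizing/forking)
exec /usr/local/bin/myapp --foreground
EOF
chmod +x /etc/service/myapp/run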
any environment variables set with docker run --env or with the ENV command in the Dockerfile, will be picked up by my_init
not possible for a child process to change the environment variables of other processes
they will not see the environment variables that were originally passed by Docker.
We ignore HOME, SHELL, USER and a bunch of other environment variables on purpose, because not ignoring them will break multi-user containers.
my_init imports environment variables from the directory /etc/container_environment
/etc/container_environment.sh - a dump of the environment variables in Bash format.
modify the environment variables in my_init (and therefore the environment variables in all child processes that are spawned after that point in time), by altering the files in /etc/container_environment
my_init only activates changes in /etc/container_environment when running startup scripts
environment variables don't contain sensitive data, then you can also relax the permissions
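For example, a startup script could persist a variable for all later child processes (names and value are arbitrary):

cat > /etc/my_init.d/10_set_env.sh <<'EOF'
#!/bin/sh
# hypothetical startup script: exposes APP_ENV to processes spawned afterwards
echo production > /etc/container_environment/APP_ENV
EOF
chmod +x /etc/my_init.d/10_set_env.sh
chmod 755 /etc/container_environment   # only if the values are not sensitive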
Syslog messages are forwarded to the console
syslog-ng is started separately before the runit supervisor process, and shutdown after runit exits.
RUN apt-get update && apt-get upgrade -y -o Dpkg::Options::="--force-confold"
/sbin/my_init --skip-startup-files --quiet --
By default, no keys are installed, so nobody can login
provide a pregenerated, insecure key (PuTTY format)
RUN /usr/sbin/enable_insecure_key
docker run YOUR_IMAGE /sbin/my_init --enable-insecure-key
RUN cat /tmp/your_key.pub >> /root/.ssh/authorized_keys && rm -f /tmp/your_key.pub
The default baseimage-docker installs syslog-ng, cron and sshd services during the build process
ED25519 is more vulnerable to quantum computation than is RSA
best practice to be using a hardware token
to use a yubikey via gpg: with this method you use your gpg subkey as an ssh key
sit down and spend an hour thinking about your backup and recovery strategy first
never share a private key between physical devices
allows you to revoke a single credential if you lose (control over) that device
If a private key ever turns up on the wrong machine,
you *know* the key and both source and destination
machines have been compromised.
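One way to follow that advice (file names and host are illustrative):

ssh-keygen -t ed25519 -a 100 -f ~/.ssh/id_ed25519_laptop -C "laptop"   # one key per device, own passphrase
ssh-copy-id -i ~/.ssh/id_ed25519_laptop.pub user@server.example.com
# losing the laptop then means deleting just that one line from authorized_keys on the server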
centralized management of authentication/authorization
I have set up a VPS, disabled passwords, and set up a key with a passphrase to gain access. At this point my greatest worry is losing this private key, as that means I can't access the server. What is a reasonable way to back up my private key?
a mountable disk image that's encrypted
a system that can update/rotate your keys across all of your servers on the fly in case one is compromised or assumed to be compromised.
different keys for different purposes per client device
fall back to password plus OTP
relying completely on the security of your disk, against either physical or cyber.
It is better to use a different passphrase for each key but it is also less convenient unless you're using a password manager (personally, I'm using KeePass)
- RSA is pretty standard, and generally speaking is fairly secure for key lengths >=2048. RSA-2048 is the default for ssh-keygen, and is compatible with just about everything.
public-key authentication has the somewhat unexpected side effect of preventing MITM, per this security consulting firm
Disable passwords and only allow keys even for root with PermitRootLogin without-password
You should definitely use a different passphrase for keys stored on separate computers,
Keycloak is an “Open source identity and access management” solution.
setup a central Identity Provider (IdP) that applications acting as Service Providers (SP) use to authenticate or authorize user access.
FreeIPA does a LOT more than just provide user info though. It can manage user groups, service lists, hosts, DNS, certificates, and much, much, more.
Your project will continue to use an alternative
CI/CD configuration file if one is found
Auto DevOps works with any Kubernetes cluster;
using the
Docker or Kubernetes
executor, with
privileged mode enabled.
Base domain (needed for Auto Review Apps and Auto Deploy)
Kubernetes (needed for Auto Review Apps, Auto Deploy, and Auto Monitoring)
Prometheus (needed for Auto Monitoring)
scrape your Kubernetes cluster.
project level as a variable: KUBE_INGRESS_BASE_DOMAIN
A wildcard DNS A record matching the base domain(s) is required
Once set up, all requests will hit the load balancer, which in turn will route
them to the Kubernetes pods that run your application(s).
review/ (every environment starting with review/)
staging
production
need to define a separate
KUBE_INGRESS_BASE_DOMAIN variable for all the above
based on the environment.
Continuous deployment to production: Enables Auto Deploy
with master branch directly deployed to production.
Continuous deployment to production using timed incremental rollout
Automatic deployment to staging, manual deployment to production
Auto Build creates a build of the application using an existing Dockerfile or
Heroku buildpacks.
If a project’s repository contains a Dockerfile, Auto Build will use
docker build to create a Docker image.
Each buildpack requires certain files to be in your project’s repository for
Auto Build to successfully build your application.
Auto Test automatically runs the appropriate tests for your application using
Herokuish and Heroku
buildpacks by analyzing
your project to detect the language and framework.
Auto Code Quality uses the
Code Quality image to run
static analysis and other code checks on the current code.
Static Application Security Testing (SAST) uses the
SAST Docker image to run static
analysis on the current code and checks for potential security issues.
Dependency Scanning uses the
Dependency Scanning Docker image
to run analysis on the project dependencies and checks for potential security issues.
License Management uses the
License Management Docker image
to search the project dependencies for their license.
Vulnerability Static Analysis for containers uses
Clair to run static analysis on a
Docker image and checks for potential security issues.
Review Apps are temporary application environments based on the
branch’s code so developers, designers, QA, product managers, and other
reviewers can actually see and interact with code changes as part of the review
process. Auto Review Apps create a Review App for each branch.
Auto Review Apps will deploy your app to your Kubernetes cluster only. When no cluster
is available, no deployment will occur.
The Review App will have a unique URL based on the project ID, the branch or tag
name, and a unique number, combined with the Auto DevOps base domain.
Review apps are deployed using the
auto-deploy-app chart with
Helm, which can be customized.
Your apps should not be manipulated outside of Helm (using Kubernetes directly).
Dynamic Application Security Testing (DAST) uses the
popular open source tool OWASP ZAProxy
to perform an analysis on the current code and checks for potential security
issues.
Auto Browser Performance Testing utilizes the Sitespeed.io container to measure the performance of a web page.
add the paths to a file named .gitlab-urls.txt in the root directory, one per line.
After a branch or merge request is merged into the project’s default branch (usually
master), Auto Deploy deploys the application to a production environment in
the Kubernetes cluster, with a namespace based on the project name and unique
project ID
Auto Deploy doesn’t include deployments to staging or canary by default, but the
Auto DevOps template contains job definitions for these tasks if you want to
enable them.
Apps are deployed using the
auto-deploy-app chart with
Helm.
For internal and private projects a GitLab Deploy Token
will be automatically created, when Auto DevOps is enabled and the Auto DevOps settings are saved.
If the GitLab Deploy Token cannot be found, CI_REGISTRY_PASSWORD is
used. Note that CI_REGISTRY_PASSWORD is only valid during deployment.
If present, DB_INITIALIZE will be run as a shell command within an
application pod as a helm post-install hook.
a post-install hook means that if any deploy succeeds,
DB_INITIALIZE will not be processed thereafter.
DB_MIGRATE will be run as a shell command within an application pod as
a helm pre-upgrade hook.
Once your application is deployed, Auto Monitoring makes it possible to monitor
your application’s server and response metrics right out of the box.
annotate
the NGINX Ingress deployment to be scraped by Prometheus using
prometheus.io/scrape: "true" and prometheus.io/port: "10254"
If you are also using Auto Review Apps and Auto Deploy and choose to provide
your own Dockerfile, make sure you expose your application to port
5000 as this is the port assumed by the
default Helm chart.
While Auto DevOps provides great defaults to get you started, you can customize
almost everything to fit your needs; from custom buildpacks,
to Dockerfiles, Helm charts, or
even copying the complete CI/CD configuration
into your project to enable staging and canary deployments, and more.
If your project has a Dockerfile in the root of the project repo, Auto DevOps
will build a Docker image based on the Dockerfile rather than using buildpacks.
Auto DevOps uses Helm to deploy your application to Kubernetes.
Bundled chart - If your project has a ./chart directory with a Chart.yaml
file in it, Auto DevOps will detect the chart and use it instead of the default
one.
Create a project variable
AUTO_DEVOPS_CHART with the URL of a custom chart to use or create two project variables AUTO_DEVOPS_CHART_REPOSITORY with the URL of a custom chart repository and AUTO_DEVOPS_CHART with the path to the chart.
make use of the HELM_UPGRADE_EXTRA_ARGS environment variable to override the default values in the values.yaml file in the default Helm chart.
specify the use of a custom Helm chart per environment by scoping the environment variable
to the desired environment.
Your additions will be merged with the
Auto DevOps template using the behaviour described for
include
copy and paste the contents of the Auto DevOps
template into your project and edit this as needed.
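A minimal customization might look like this (STAGING_ENABLED is one of the documented toggles; the additions are merged per the include behaviour above):

cat > .gitlab-ci.yml <<'EOF'
include:
  - template: Auto-DevOps.gitlab-ci.yml   # pull in the Auto DevOps template

variables:
  STAGING_ENABLED: "1"                    # deploy to staging, keep production manual
EOF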
In order to support applications that require a database,
PostgreSQL is provisioned by default.
Set up the replica variables using a
project variable
and scale your application by just redeploying it!
You should not scale your application using Kubernetes directly.
Some applications need to define secret variables that are
accessible by the deployed application.
Auto DevOps detects variables where the key starts with
K8S_SECRET_ and makes these prefixed variables available to the
deployed application as environment variables.
Auto DevOps pipelines will take your application secret variables to
populate a Kubernetes secret.
Environment variables are generally considered immutable in a Kubernetes
pod.
if you update an application secret without changing any
code and then manually create a new pipeline, you will find that any running
application pods will not have the updated secrets.
Variables with multiline values are not currently supported
The normal behavior of Auto DevOps is to use Continuous Deployment, pushing
automatically to the production environment every time a new pipeline is run
on the default branch.
If STAGING_ENABLED is defined in your project (e.g., set STAGING_ENABLED to
1 as a CI/CD variable), then the application will be automatically deployed
to a staging environment, and a production_manual job will be created for
you when you’re ready to manually deploy to production.
If CANARY_ENABLED is defined in your project (e.g., set CANARY_ENABLED to
1 as a CI/CD variable) then two manual jobs will be created:
canary which will deploy the application to the canary environment
production_manual which is to be used by you when you’re ready to manually
deploy to production.
If INCREMENTAL_ROLLOUT_MODE is set to manual in your project, then instead
of the standard production job, 4 different
manual jobs
will be created:
rollout 10%
rollout 25%
rollout 50%
rollout 100%
The percentage is based on the REPLICAS variable and defines the number of
pods you want to have for your deployment.
To start a job, click on the play icon next to the job’s name.
Once you get to 100%, you cannot scale down, and you’d have to roll
back by redeploying the old version using the
rollback button in the
environment page.
With INCREMENTAL_ROLLOUT_MODE set to manual and with STAGING_ENABLED
not all buildpacks support Auto Test yet
When a project has been marked as private, GitLab’s Container
Registry requires authentication when downloading
containers.
Authentication credentials will be valid while the pipeline is running, allowing
for a successful initial deployment.
After the pipeline completes, Kubernetes will no longer be able to access the
Container Registry.
We strongly advise using GitLab Container Registry with Auto DevOps in order to
simplify configuration and prevent any unforeseen issues.
designed to run on Rack
or complement existing web application frameworks such as Rails and Sinatra by
providing a simple DSL to easily develop RESTful APIs
Grape APIs are Rack applications that are created by subclassing Grape::API
Rails expects a subdirectory that matches the name of the Ruby module and a file name that matches the name of the class
mount multiple API implementations inside another one
mount on a path, which is similar to using prefix inside the mounted API itself.
four strategies in which clients can reach your API's endpoints: :path,
:header, :accept_version_header and :param
clients should pass the desired version as a request parameter,
either in the URL query string or in the request body.
clients should pass the desired version in the HTTP Accept header
clients should pass the desired version in the URL
clients should pass the desired version in the HTTP Accept-Version header.
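Roughly, the four strategies look like this from the client side (host, vendor name, and parameter name are placeholders):

curl https://api.example.com/v1/statuses                                              # :path
curl -H "Accept: application/vnd.example-v1+json" https://api.example.com/statuses    # :header
curl -H "Accept-Version: v1" https://api.example.com/statuses                         # :accept_version_header
curl "https://api.example.com/statuses?apiver=v1"                                     # :param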
add a description to API methods and namespaces
Request parameters are available through the params hash object
Parameters are automatically populated from the request body on POST and PUT
route string parameters will have precedence.
Grape allows you to access only the parameters that have been declared by your params block
By default declared(params) includes parameters that have nil values
all valid types
type: File
JSON objects and arrays of objects are accepted equally
any class can be
used as a type so long as an explicit coercion method is supplied
As a special case, variant-member-type collections may also be declared, by
passing a Set or Array with more than one member to type
Parameters can be nested using group or by calling requires or optional with a block
relevant if another parameter is given
Parameters options can be grouped
allow_blank can be combined with both requires and optional
Parameters can be restricted to a specific set of values
Parameters can be restricted to match a specific regular expression
Never define mutually exclusive sets with any required params
Namespaces allow parameter definitions and apply to every method within the namespace
define a route parameter as a namespace using route_param
create custom validation that use request to validate the attribute
rescue a Grape::Exceptions::ValidationErrors and respond with a custom response or turn the response into well-formatted JSON for a JSON API that separates individual parameters and the corresponding error messages
custom validation messages
Request headers are available through the headers helper or from env in their original form
define requirements for your named route parameters using regular
expressions on namespace or endpoint
route will match only if all requirements are met
mix in a module
define reusable params
using cookies method
a 201 for POST-Requests
204 for DELETE-Requests
200 status code for all other Requests
use status to query and set the actual HTTP Status Code
raising errors with error!
It is crucial to define this endpoint at the very end of your API, as it
literally accepts every request.
rescue_from will rescue the exceptions listed and all their subclasses.
Grape::API provides a logger method which by default will return an instance of the Logger
class from Ruby's standard library.
Grape supports a range of ways to present your data
Grape has built-in Basic and Digest authentication (the given block
is executed in the context of the current Endpoint).
Authentication
applies to the current namespace and any children, but not parents.
Blocks can be executed before or after every API call, using before, after,
before_validation and after_validation
Before and after callbacks execute in the following order
Grape by default anchors all request paths, which means that the request URL
should match from start to end
The namespace method has a number of aliases, including: group, resource,
resources, and segment. Use whichever reads the best for your API.
test a Grape API with RSpec by making HTTP requests and examining the response
POST JSON data and specify the correct content-type.
discussing the basic structure of an Nginx configuration file along with some guidelines on how to design your files
/etc/nginx/nginx.conf
In Nginx parlance, the areas that these brackets define are called "contexts" because they contain configuration details that are separated according to their area of concern
if a directive is valid in multiple nested scopes, a declaration in a broader context will be passed on to any child contexts as default values.
The children contexts can override these values at will
Nginx will error out on reading a configuration file with directives that are declared in the wrong context.
The most general context is the "main" or "global" context
Any directive that exists entirely outside of these blocks is said to inhabit the "main" context
The main context represents the broadest environment for Nginx configuration.
The "events" context is contained within the "main" context. It is used to set global options that affect how Nginx handles connections at a general level.
Nginx uses an event-based connection processing model, so the directives defined within this context determine how worker processes should handle connections.
the connection processing method is automatically selected based on the most efficient choice that the platform has available
a worker will only take a single connection at a time
When configuring Nginx as a web server or reverse proxy, the "http" context will hold the majority of the configuration.
The http context is a sibling of the events context, so they should be listed side-by-side, rather than nested
fine-tune the TCP keep alive settings (keepalive_disable, keepalive_requests, and keepalive_timeout)
The "server" context is declared within the "http" context.
multiple declarations
each instance defines a specific virtual server to handle client requests
Each client request will be handled according to the configuration defined in a single server context, so Nginx must decide which server context is most appropriate based on details of the request.
listen: The ip address / port combination that this server block is designed to respond to.
server_name: This directive is the other component used to select a server block for processing.
"Host" header
configure files to try to respond to requests (try_files)
issue redirects and rewrites (return and rewrite)
set arbitrary variables (set)
Location contexts share many relational qualities with server contexts
multiple location contexts can be defined, each location is used to handle a certain type of client request, and each location is selected by virtue of matching the location definition against the client request through a selection algorithm
Location blocks live within server contexts and, unlike server blocks, can be nested inside one another.
While server contexts are selected based on the requested IP address/port combination and the host name in the "Host" header, location blocks further divide up the request handling within a server block by looking at the request URI
The request URI is the portion of the request that comes after the domain name or IP address/port combination.
New directives at this level allow you to reach locations outside of the document root (alias), mark the location as only internally accessible (internal), and proxy to other servers or locations (using http, fastcgi, scgi, and uwsgi proxying).
These can then be used to do A/B testing by providing different content to different hosts.
configures Perl handlers for the location they appear in
set the value of a variable depending on the value of another variable
used to map MIME types to the file extensions that should be associated with them.
this context defines a named pool of servers that Nginx can then proxy requests to
The upstream context should be placed within the http context, outside of any specific server contexts.
The upstream context can then be referenced by name within server or location blocks to pass requests of a certain type to the pool of servers that have been defined.
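A skeleton showing how those contexts nest (all values are placeholders):

cat > /etc/nginx/nginx.conf <<'EOF'
user www-data;                   # main ("global") context

events {
    worker_connections 1024;     # connection-handling options
}

http {
    upstream app_servers {       # named pool, referenced from server/location blocks
        server 10.0.0.10:8080;
        server 10.0.0.11:8080;
    }

    server {                     # one virtual server
        listen 80;
        server_name example.com;

        location / {             # selected by matching the request URI
            proxy_pass http://app_servers;
        }
    }
}
EOF
nginx -t   # Nginx errors out if a directive is declared in the wrong context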
function as a high performance mail proxy server
The mail context is defined within the "main" or "global" context (outside of the http context).
Nginx has the ability to redirect authentication requests to an external authentication server
the if directive in Nginx will execute the instructions contained if a given test returns "true".
Since Nginx will test conditions of a request with many other purpose-made directives, if should not be used for most forms of conditional execution.
The limit_except context is used to restrict the use of certain HTTP methods within a location context.
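A sketch of such a block, matching the result described next (it belongs inside a server context; the subnet is the one from the summarized example):

cat > /etc/nginx/snippets/restricted-write.conf <<'EOF'
location /restricted-write {
    limit_except GET HEAD {      # GET and HEAD stay open to every client
        allow 192.168.1.1/24;    # other methods only from this subnet
        deny  all;
    }
}
EOF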
The result of the above example is that any client can use the GET and HEAD verbs, but only clients coming from the 192.168.1.1/24 subnet are allowed to use other methods.
Many directives are valid in more than one context
it is usually best to declare directives in the highest context to which they are applicable, and overriding them in lower contexts as necessary.
Declaring at higher levels provides you with a sane default
Nginx already engages in a well-documented selection algorithm for things like selecting server blocks and location blocks.
instead of relying on rewrites to get a user supplied request into the format that you would like to work with, you should try to set up two blocks for the request, one of which represents the desired method, and the other that catches messy requests and redirects (and possibly rewrites) them to your correct block.
incorrect requests can get by with a redirect rather than a rewrite, which should execute with lower overhead.
Do not use the Directory Manager account to authenticate remote services to the IPA LDAP server. Use a system account
The reason to use an account like this rather than creating a normal user account in IPA and using that is that the system account exists only for binding to LDAP. It is not a real POSIX user, can't log into any systems and doesn't own any files.
This user also has no special rights and is unable to write any data in the IPA LDAP server, only read.
When possible, configure your LDAP client to communicate over SSL/TLS.
The IPA CA certificate can be found in /etc/ipa/ca.crt
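An illustrative bind with such a system account over TLS (the DN and search base follow common FreeIPA conventions, but adjust them for your realm):

LDAPTLS_CACERT=/etc/ipa/ca.crt ldapsearch \
  -H ldaps://ipa.example.com \
  -D "uid=svc-myapp,cn=sysaccounts,cn=etc,dc=example,dc=com" \
  -W \
  -b "cn=users,cn=accounts,dc=example,dc=com" "(uid=jdoe)"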
(Recommended) If you have plans to upgrade this single control-plane kubeadm cluster
to high availability you should specify the --control-plane-endpoint to set the shared endpoint
for all control-plane nodes
set the --pod-network-cidr to
a provider-specific value.
kubeadm tries to detect the container runtime by using a list of well
known endpoints.
kubeadm uses the network interface associated
with the default gateway to set the advertise address for this particular control-plane node's API server.
To use a different network interface, specify the --apiserver-advertise-address=<ip-address> argument
to kubeadm init
Do not share the admin.conf file with anyone and instead grant users custom permissions by generating
them a kubeconfig file using the kubeadm kubeconfig user command.
The token is used for mutual authentication between the control-plane node and the joining
nodes. The token included here is secret. Keep it safe, because anyone with this
token can add authenticated nodes to your cluster.
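Pulling those flags together (endpoint, CIDR, address, token, and hash are placeholders):

kubeadm init \
  --control-plane-endpoint "lb.example.com:6443" \
  --pod-network-cidr 10.244.0.0/16 \
  --apiserver-advertise-address 192.168.0.10
# on each joining node, using the (secret) token printed by kubeadm init:
kubeadm join lb.example.com:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>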
You must deploy a
Container Network Interface
(CNI) based Pod network add-on so that your Pods can communicate with each other.
Cluster DNS (CoreDNS) will not start up before a network is installed.
Take care that your Pod network must not overlap with any of the host
networks
Make sure that your Pod network plugin supports RBAC, and so do any manifests
that you use to deploy it.
You can install only one Pod network per cluster.
The cluster created here has a single control-plane node, with a single etcd database
running on it.
The node-role.kubernetes.io/control-plane label is a restricted label and kubeadm manually applies it using
a privileged client after a node has been created.
By default, your cluster will not schedule Pods on the control plane nodes for security
reasons.
remove the node-role.kubernetes.io/control-plane:NoSchedule taint
from any nodes that have it, including the control plane nodes, meaning that the
scheduler will then be able to schedule Pods everywhere.
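That is done with a single kubectl command (the trailing minus removes the taint):

kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule-
# afterwards the scheduler can place ordinary Pods on control plane nodes too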