automatically discover services on the Docker host and let Træfik reconfigure itself when containers are created (or shut down) so HTTP traffic can be routed accordingly.
use Træfik as a layer-7 load balancer with SSL termination for a set of micro-services used to run a web application.
Docker containers can only communicate with each other over TCP when they share at least one network.
Under the hood, Docker creates iptables rules so containers can't reach other containers unless you explicitly allow it.
Træfik can listen to Docker events and reconfigure its own internal configuration when containers are created (or shut down).
Enable the Docker provider and listen for container events on the Docker unix socket we've mounted earlier.
Enable automatic request and configuration of SSL certificates using Let's Encrypt.
These certificates will be stored in the acme.json file, which you can back-up yourself and store off-premises.
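A minimal traefik.toml along these lines might look like the sketch below (Traefik 1.x syntax; the e-mail address and ports are placeholders):

    defaultEntryPoints = ["http", "https"]

    [entryPoints]
      [entryPoints.http]
      address = ":80"
        [entryPoints.http.redirect]
        entryPoint = "https"
      [entryPoints.https]
      address = ":443"
        [entryPoints.https.tls]

    # Docker provider: watch the mounted unix socket for container events.
    [docker]
    endpoint = "unix:///var/run/docker.sock"
    watch = true
    exposedByDefault = false

    # Let's Encrypt: request certificates automatically and keep them in acme.json.
    [acme]
    email = "you@example.com"
    storage = "acme.json"
    entryPoint = "https"
      [acme.httpChallenge]
      entryPoint = "http"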
there isn't a single container that has any published ports to the host -- everything is routed through Docker networks.
Thanks to Docker labels, we can tell Træfik how to create its internal routing configuration.
container labels and service labels
With the traefik.enable label, we tell Træfik to include this container in its internal configuration.
tell Træfik to use the web network to route HTTP traffic to this container.
Service labels allow managing many routes for the same container.
When both container labels and service labels are defined, container labels are used only as default values for missing service labels; no frontend/backend is created from container labels alone.
In the example, two service names are defined: basic and admin.
They allow creating two frontends and two backends.
Always specify the correct port where the container expects HTTP traffic using the traefik.port label.
all containers that are placed in the same network as Træfik will automatically be reachable from the outside world
With the traefik.frontend.auth.basic label, it's possible for Træfik to provide an HTTP basic-auth challenge for the endpoints you provide the label for.
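Put together, a docker-compose service using these labels could look roughly like this sketch (the image name, hostnames, ports and htpasswd hash are placeholders; note the doubled $$ needed to escape $ in compose files):

    version: "3"

    services:
      app:
        image: my-app
        networks:
          - web
        labels:
          - "traefik.enable=true"
          - "traefik.docker.network=web"
          # 'basic' service: public frontend
          - "traefik.basic.port=8080"
          - "traefik.basic.frontend.rule=Host:app.example.com"
          # 'admin' service: separate frontend protected by basic auth
          - "traefik.admin.port=8081"
          - "traefik.admin.frontend.rule=Host:admin.example.com"
          - "traefik.admin.frontend.auth.basic=admin:$$apr1$$examplehash"

    networks:
      web:
        external: true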
variable definitions can have default values assigned to them.
values are stored in separate files with the .tfvars extension.
looks through the working directory for a file named terraform.tfvars, or for files with the .auto.tfvars extension.
add the terraform.tfvars file to your .gitignore file and keep it out of version control.
include an example terraform.tfvars.example in your Git repository with all of the variable names recorded (but none of the values entered).
terraform apply -var-file=myvars.tfvars
Terraform allows you to keep input variable values in environment variables.
the prefix TF_VAR_
If Terraform does not find a value for a defined variable (from a default, a .tfvars file, an environment variable, or a CLI flag), it will prompt you for a value before running an action
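A sketch of how these sources fit together (the variable name and values are illustrative):

    # variables.tf
    variable "region" {
      description = "Region to deploy into"
      default     = "ams3"            # used when no other source supplies a value
    }

    # terraform.tfvars (loaded automatically; keep it out of version control)
    region = "fra1"

    # The same value supplied via an environment variable or a CLI flag:
    #   export TF_VAR_region=fra1
    #   terraform apply -var-file=myvars.tfvars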
state file contains a JSON object that holds your managed infrastructure’s current state
state is a snapshot of the various attributes of your infrastructure at the time it was last modified
sensitive information used to generate your Terraform state can be stored as plain text in the terraform.tfstate file.
Avoid checking your terraform.tfstate file into your version control repository.
Some backends, like Consul, also allow for state locking. If one user is applying a state, another user will be unable to make any changes.
Terraform backends allow the user to securely store their state in a remote location, such as a key/value store like Consul, or an S3 compatible bucket storage like Minio.
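For example, a minimal backend block using Consul, which also supports state locking (the address and path are placeholders):

    terraform {
      backend "consul" {
        address = "consul.example.com:8500"
        path    = "myproject/terraform"
      }
    }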
Helm will figure out where to install Tiller by reading your Kubernetes
configuration file (usually $HOME/.kube/config). This is the same file
that kubectl uses.
By default, when Tiller is installed, it does not have authentication enabled.
helm repo update
Without a max history set, the history is kept indefinitely, leaving a large number of records for helm and tiller to maintain.
helm init --upgrade
Whenever you install a chart, a new release is created.
one chart can
be installed multiple times into the same cluster. And each can be
independently managed and upgraded.
The helm list command will show you a list of all deployed releases.
helm delete
helm status
you
can audit a cluster’s history, and even undelete a release (with helm
rollback).
the Helm
server (Tiller).
The Helm client (helm)
brew install kubernetes-helm
Tiller, the server portion of Helm, typically runs inside of your
Kubernetes cluster.
it can also be run locally, and
configured to talk to a remote Kubernetes cluster.
Role-Based Access Control - RBAC for short
create a service account for Tiller with the right roles and permissions to access resources.
run Tiller in an RBAC-enabled Kubernetes cluster.
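A common way to do this, assuming the cluster-admin role is acceptable for your cluster, is to create the service account and bind it before running helm init:

    kubectl create serviceaccount tiller --namespace kube-system
    kubectl create clusterrolebinding tiller \
      --clusterrole=cluster-admin \
      --serviceaccount=kube-system:tiller
    helm init --service-account tiller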
run kubectl get pods --namespace kube-system and see Tiller running.
helm inspect
Helm will look for Tiller in the kube-system namespace unless
--tiller-namespace or TILLER_NAMESPACE is set.
For development, it is sometimes easier to work on Tiller locally, and
configure it to connect to a remote Kubernetes cluster.
even when running locally, Tiller will store release
configuration in ConfigMaps inside of Kubernetes.
helm version should show you both
the client and server version.
Because Tiller stores its data in Kubernetes ConfigMaps, you can safely delete and re-install Tiller without worrying about losing any data.
helm reset
The --node-selectors flag allows us to specify the node labels required
for scheduling the Tiller pod.
--override allows you to specify properties of Tiller’s
deployment manifest.
helm init --override manipulates the specified properties of the final
manifest (there is no “values” file).
The --output flag allows us skip the installation of Tiller’s deployment
manifest and simply output the deployment manifest to stdout in either
JSON or YAML format.
By default, tiller stores release information in ConfigMaps in the namespace
where it is running.
switch from the default backend to the secrets
backend, you’ll have to do the migration for this on your own.
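One way to switch is at install time via --override (a sketch; existing release records are not migrated for you):

    helm init --override 'spec.template.spec.containers[0].command'='{/tiller,--storage=secret}'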
a beta SQL storage backend that stores release
information in an SQL database (only postgres has been tested so far).
Once you have the Helm Client and Tiller successfully installed, you can
move on to using Helm to manage charts.
Helm requires that kubelet have access to a copy of the socat program to proxy connections to the Tiller API.
A Release is an instance of a chart running in a Kubernetes cluster.
One chart can often be installed many times into the same cluster.
helm init --client-only
helm init --dry-run --debug
A panic in Tiller is almost always the result of a failure to negotiate with the
Kubernetes API server
Tiller and Helm have to negotiate a common version to make sure that they can safely
communicate without breaking API assumptions
helm delete --purge
Helm stores some files in $HELM_HOME, which is
located by default in ~/.helm
A Chart is a Helm package. It contains all of the resource definitions
necessary to run an application, tool, or service inside of a Kubernetes
cluster.
Think of it like the Kubernetes equivalent of a Homebrew formula, an Apt dpkg, or a Yum RPM file.
A Repository is the place where charts can be collected and shared.
Set the $HELM_HOME environment variable
each time it is installed, a new release is created.
Helm installs charts into Kubernetes, creating a new release for
each installation. And to find new charts, you can search Helm chart
repositories.
chart repository is named
stable by default
helm search shows you all of the available charts
helm inspect
To install a new package, use the helm install command. At its
simplest, it takes only one argument: The name of the chart.
If you want to use your own release name, simply use the
--name flag on helm install
additional configuration steps you can or
should take.
Helm does not wait until all of the resources are running before it
exits. Many charts require Docker images that are over 600M in size, and
may take a long time to install into the cluster.
helm status
helm inspect values
helm inspect values stable/mariadb
override any of these settings in a YAML formatted file,
and then pass that file during installation.
helm install -f config.yaml stable/mariadb
--values (or -f): Specify a YAML file with overrides.
--set (and its variants --set-string and --set-file): Specify overrides on the command line.
Values that have been --set can be cleared by running helm upgrade with --reset-values
specified.
Chart
designers are encouraged to consider the --set usage when designing the format
of a values.yaml file.
--set-file key=filepath is another variant of --set.
It reads the file and uses its content as a value. This is useful for injecting multi-line text into values without dealing with indentation in YAML.
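A sketch tying these flags together (the chart and value keys are examples; actual keys depend on the chart's values.yaml):

    # config.yaml
    mariadbUser: user0
    mariadbDatabase: user0db

    helm install -f config.yaml stable/mariadb
    # equivalent overrides on the command line:
    helm install --set mariadbUser=user0,mariadbDatabase=user0db stable/mariadb
    # read a value from a file (key and file are hypothetical):
    helm install --set-file extraConfig=./extra.conf stable/mariadb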
An unpacked chart directory
When a new version of a chart is released, or when you want to change
the configuration of your release, you can use the helm upgrade
command.
Because Kubernetes charts can be large and complex, Helm tries to perform the least invasive upgrade.
It will only
update things that have changed since the last release
If both are used, --set values are merged into --values with higher precedence.
The helm get command is a useful tool for looking at a release in the
cluster.
helm rollback
A release version is an incremental revision. Every time an install,
upgrade, or rollback happens, the revision number is incremented by 1.
helm history
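For example (the release name and values file are illustrative):

    helm install --name happy-panda stable/mariadb            # revision 1
    helm upgrade -f panda.yaml happy-panda stable/mariadb      # revision 2
    helm rollback happy-panda 1                                # revision 3, using revision 1's config
    helm history happy-panda                                   # lists every revision of the release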
a release name cannot be
re-used.
you can rollback a
deleted resource, and have it re-activate.
helm repo list
helm repo add
helm repo update
The Chart Development Guide explains how to develop your own
charts.
helm create
helm lint
helm package
Charts that are archived can be loaded into chart repositories.
chart repository server
Tiller can be installed into any namespace.
Limiting Tiller to only be able to install into specific namespaces and/or resource types is controlled by Kubernetes RBAC roles and rolebindings
Release names are unique PER TILLER INSTANCE
Charts should only contain resources that exist in a single namespace.
not recommended to have multiple Tillers configured to manage resources in the same namespace.
a client-side Helm plugin. A plugin is a
tool that can be accessed through the helm CLI, but which is not part of the
built-in Helm codebase.
Helm plugins are add-on tools that integrate seamlessly with Helm. They provide
a way to extend the core feature set of Helm, but without requiring every new
feature to be written in Go and added to the core tool.
Helm plugins live in $(helm home)/plugins
The Helm plugin model is partially modeled on Git’s plugin model
helm is referred to as the porcelain layer, with plugins being the plumbing.
command is the command that this plugin will
execute when it is called.
Environment variables are interpolated before the plugin
is executed.
The command itself is not executed in a shell. So you can’t oneline a shell script.
Helm is able to fetch Charts using HTTP/S
Variables like KUBECONFIG are set for the plugin if they are set in the
outer environment.
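A plugin.yaml for a hypothetical plugin that prints the most recent release might look like this sketch:

    name: "last"
    version: "0.1.0"
    usage: "get the last release name"
    description: "get the last release name"
    ignoreFlags: false
    useTunnel: true
    command: "$HELM_BIN list --short --max 1 --date -r"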
In Kubernetes, granting a role to an application-specific service account is a best practice to ensure that your application is operating in the scope that you have specified.
restrict Tiller’s capabilities to install resources to certain namespaces, or to grant a Helm client running in a pod access to a Tiller instance.
Service account with cluster-admin role
The cluster-admin role is created by default in a Kubernetes cluster
Deploy Tiller in a namespace, restricted to deploying resources only in that namespace
Deploy Tiller in a namespace, restricted to deploying resources in another namespace
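A sketch of the "restricted to its own namespace" variant, assuming a namespace named tiller-world (the Role grants broad rights, but only inside that namespace):

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: tiller
      namespace: tiller-world
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: tiller-manager
      namespace: tiller-world
    rules:
    - apiGroups: ["", "batch", "extensions", "apps"]
      resources: ["*"]
      verbs: ["*"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: tiller-binding
      namespace: tiller-world
    subjects:
    - kind: ServiceAccount
      name: tiller
      namespace: tiller-world
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: tiller-manager

    helm init --service-account tiller --tiller-namespace tiller-world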
When running a Helm client in a pod, in order for the Helm client to talk to a Tiller instance, it will need certain privileges to be granted.
SSL Between Helm and Tiller
The Tiller authentication model uses client-side SSL certificates.
creating an internal CA, and using both the
cryptographic and identity functions of SSL.
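With certificates issued by such a CA, installing Tiller with TLS enforced and talking to it from the client looks roughly like this (file names are placeholders):

    helm init --tiller-tls --tiller-tls-verify \
      --tiller-tls-cert tiller.cert.pem --tiller-tls-key tiller.key.pem \
      --tls-ca-cert ca.cert.pem
    helm ls --tls --tls-ca-cert ca.cert.pem \
      --tls-cert helm.cert.pem --tls-key helm.key.pem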
Helm is a powerful and flexible package-management and operations tool for Kubernetes.
default installation applies no security configurations
appropriate only for a cluster that is well-secured in a private network, with no data-sharing and no other users or teams.
With great power comes great responsibility.
Choose the Best Practices you should apply to your helm installation
Role-based access control, or RBAC
Tiller’s gRPC endpoint and its usage by Helm
Kubernetes employs a role-based access control (RBAC) system (as do modern operating systems) to help mitigate the damage that can be done if credentials are misused or bugs exist.
In the default installation the gRPC endpoint that Tiller offers is available inside the cluster (not external to the cluster) without authentication configuration applied.
Tiller stores its release information in ConfigMaps. We suggest changing the default to Secrets.
release information
charts
charts are a kind of package that not only installs containers you may or may not have validated yourself, but may also install into more than one namespace.
As with all shared software, in a controlled or shared environment you must validate all software yourself before you install it.
Helm’s provenance tools to ensure the provenance and integrity of charts
"Helm will figure out where to install Tiller by reading your Kubernetes configuration file (usually $HOME/.kube/config). This is the same file that kubectl uses."
designed to run on Rack
or complement existing web application frameworks such as Rails and Sinatra by
providing a simple DSL to easily develop RESTful APIs
Grape APIs are Rack applications that are created by subclassing Grape::API
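A minimal sketch of such a subclass (the module, routes and payload are illustrative):

    require 'grape'

    module Twitter
      class API < Grape::API
        version 'v1', using: :path
        format :json
        prefix :api

        resource :statuses do
          desc 'Return a public timeline.'
          get :public_timeline do
            { statuses: [] }   # placeholder payload
          end
        end
      end
    end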
Rails expects a subdirectory that matches the name of the Ruby module and a file name that matches the name of the class
mount multiple API implementations inside another one
mount on a path, which is similar to using prefix inside the mounted API itself.
four strategies in which clients can reach your API's endpoints: :path,
:header, :accept_version_header and :param
clients should pass the desired version as a request parameter,
either in the URL query string or in the request body.
clients should pass the desired version in the HTTP Accept header.
clients should pass the desired version in the URL.
clients should pass the desired version in the HTTP Accept-Version header.
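Each strategy is set with version inside its own Grape::API class; the version string and vendor below are illustrative:

    version 'v1', using: :path                        # GET /v1/statuses/public_timeline
    version 'v1', using: :param, parameter: 'v'       # GET /statuses/public_timeline?v=v1
    version 'v1', using: :header, vendor: 'twitter'   # Accept: application/vnd.twitter-v1+json
    version 'v1', using: :accept_version_header       # Accept-Version: v1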
add a description to API methods and namespaces
Request parameters are available through the params hash object
Parameters are automatically populated from the request body on POST and PUT
route string parameters will have precedence.
Grape allows you to access only the parameters that have been declared by your params block
By default declared(params) includes parameters that have nil values
all valid types
type: File
JSON objects and arrays of objects are accepted equally
any class can be
used as a type so long as an explicit coercion method is supplied
As a special case, variant-member-type collections may also be declared, by
passing a Set or Array with more than one member to type
Parameters can be nested using group or by calling requires or optional with a block
parameters can be marked as relevant only if another parameter is given (using given)
Parameter options can be grouped
allow_blank can be combined with both requires and optional
Parameters can be restricted to a specific set of values
Parameters can be restricted to match a specific regular expression
Never define mutually exclusive sets with any required params
Namespaces allow parameter definitions and apply to every method within the namespace
define a route parameter as a namespace using route_param
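A sketch of these declarations inside a Grape::API class (parameter names, types and constraints are illustrative):

    params do
      requires :name, type: String, regexp: /\A[a-z]+\z/, allow_blank: false
      optional :status, type: Symbol, values: [:active, :archived], default: :active
      optional :address, type: Hash do
        requires :city, type: String
        optional :postcode, type: String
      end
    end
    post do
      declared(params)   # only the parameters declared above, not arbitrary input
    end

    # route_param turns a route parameter into its own namespace:
    params do
      requires :id, type: Integer, desc: 'Record id.'
    end
    route_param :id do
      get do
        { id: params[:id] }
      end
    end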
create a custom validation that uses the request to validate the attribute
rescue a Grape::Exceptions::ValidationErrors and respond with a custom response or turn the response into well-formatted JSON for a JSON API that separates individual parameters and the corresponding error messages
custom validation messages
Request headers are available through the headers helper or from env in their original form
define requirements for your named route parameters using regular
expressions on namespace or endpoint
route will match only if all requirements are met
mix in a module
define reusable params
using cookies method
a 201 for POST-Requests
204 for DELETE-Requests
200 status code for all other Requests
use status to query and set the actual HTTP Status Code
raising errors with error!
It is crucial to define this endpoint at the very end of your API, as it literally accepts every request.
rescue_from will rescue the exceptions listed and all their subclasses.
Grape::API provides a logger method which by default will return an instance of the Logger
class from Ruby's standard library.
Grape supports a range of ways to present your data
Grape has built-in Basic and Digest authentication (the given block
is executed in the context of the current Endpoint).
Authentication
applies to the current namespace and any children, but not parents.
Blocks can be executed before or after every API call, using before, after,
before_validation and after_validation
Before and after callbacks execute in the following order
Grape by default anchors all request paths, which means that the request URL
should match from start to end to match
The namespace method has a number of aliases, including: group, resource,
resources, and segment. Use whichever reads the best for your API.
test a Grape API with RSpec by making HTTP requests and examining the response
POST JSON data and specify the correct content-type.
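A sketch of such a spec, assuming an API class like the earlier sketch mounted under /api/v1 (the POST endpoint is hypothetical):

    require 'json'
    require 'rack/test'

    describe Twitter::API do
      include Rack::Test::Methods

      def app
        Twitter::API
      end

      it 'returns the public timeline' do
        get '/api/v1/statuses/public_timeline'
        expect(last_response.status).to eq(200)
      end

      it 'accepts JSON when the content type is set' do
        post '/api/v1/statuses', { status: 'hello' }.to_json,
             'CONTENT_TYPE' => 'application/json'
        expect(last_response.status).to eq(201)
      end
    end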
The control plane's components make global decisions about the cluster
Control plane components can be run on any machine in the cluster.
for simplicity, setup scripts typically start all control plane components on the same machine, and do not run user containers on this machine
The API server is the front end for the Kubernetes control plane.
kube-apiserver is designed to scale horizontally—that is, it scales by deploying more instances.
You can run several instances of kube-apiserver and balance traffic between those instances.
If your Kubernetes cluster uses etcd as its backing store, make sure you have a backup plan for the data.
watches for newly created
Pods with no assigned
node, and selects a node for them
to run on.
Factors taken into account for scheduling decisions include:
individual and collective resource requirements, hardware/software/policy
constraints, affinity and anti-affinity specifications, data locality,
inter-workload interference, and deadlines.
each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
Node controller
Job controller
Endpoints controller
Service Account & Token controllers
The cloud controller manager lets you link your
cluster into your cloud provider's API, and separates out the components that interact
with that cloud platform from components that only interact with your cluster.
If you are running Kubernetes on your own premises, or in a learning environment inside your
own PC, the cluster does not have a cloud controller manager.
An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.
The kubelet doesn't manage containers which were not created by Kubernetes.
kube-proxy is a network proxy that runs on each
node in your cluster,
implementing part of the Kubernetes
Service concept.
kube-proxy
maintains network rules on nodes. These network rules allow network
communication to your Pods from network sessions inside or outside of
your cluster.
kube-proxy uses the operating system packet filtering layer if there is one
and it's available.
Kubernetes supports several container runtimes: Docker,
containerd, CRI-O,
and any implementation of the Kubernetes CRI (Container Runtime
Interface).
Addons use Kubernetes resources (DaemonSet,
Deployment, etc)
to implement cluster features
namespaced resources
for addons belong within the kube-system namespace.
all Kubernetes clusters should have cluster DNS.
Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services.
Containers started by Kubernetes automatically include this DNS server in their DNS searches.
Container Resource Monitoring records generic time-series metrics
about containers in a central database, and provides a UI for browsing that data.
A cluster-level logging mechanism is responsible for
saving container logs to a central log store with search/browsing interface.
for a lot of people, the name “Docker” itself is synonymous with the word “container”.
Docker created a very ergonomic (nice-to-use) tool for working with containers – also called docker.
docker is designed to be installed on a workstation or server and comes with a bunch of tools to make it easy to build and run containers as a developer, or DevOps person.
containerd: This is a daemon process that manages and runs containers.
runc: This is the low-level container runtime (the thing that actually creates and runs containers).
libcontainer, a native Go-based implementation for creating containers.
Kubernetes includes a component called dockershim, which allows it to support Docker.
Kubernetes prefers to run containers through any container runtime which supports its Container Runtime Interface (CRI).
Kubernetes will remove support for Docker directly, and prefer to use only container runtimes that implement its Container Runtime Interface.
Both containerd and CRI-O can run Docker-formatted (actually OCI-formatted) images; they just do it without having to use the docker command or the Docker daemon.
Docker images are actually images packaged in the Open Container Initiative (OCI) format.
CRI is the API that Kubernetes uses to control the different runtimes that create and manage containers.
CRI makes it easier for Kubernetes to use different container runtimes
containerd is a high-level container runtime that came from Docker, and implements the CRI spec
containerd was separated out of the Docker project, to make Docker more modular.
CRI-O is another high-level container runtime which implements the Container Runtime Interface (CRI).
The idea behind the OCI is that you can choose between different runtimes which conform to the spec.
runc is an OCI-compatible container runtime.
A reference implementation is a piece of software that has implemented all the requirements of a specification or standard.
runc provides all of the low-level functionality for containers, interacting with existing low-level Linux features, like namespaces and control groups.
"Hetty is an HTTP toolkit for security research. It aims to become an open source alternative to commercial software like Burp Suite Pro, with powerful features tailored to the needs of the infosec and bug bounty community."