"Next-Gen Open Source Password Manager
Stop wasting time synchronizing your encrypted vault. Remember one master password to access your passwords, anywhere, anytime. No sync needed."
Each password validator must provide a help text to explain the requirements to
the user, validate a given password and return an error message if it does not
meet the requirements, and optionally receive passwords that have been set.
By default, validators are used in the forms to reset or change passwords and
in the createsuperuser and changepassword management
commands.
The application key is a random, 32-character string stored in the APP_KEY key in your .env file.
Once your app is running, there's one place it uses the APP_KEY: cookies.
Laravel uses the key for all encrypted cookies, including the session cookie, before handing them off to the user's browser, and it uses it to decrypt cookies read from the browser.
Encrypted cookies are an important security feature in Laravel.
All of this encryption and decryption is handled in Laravel by the Encrypter using PHP's built-in security tools, including OpenSSL.
Passwords are not encrypted, they are hashed.
Laravel's passwords are hashed using Hash::make() or bcrypt(), neither of which use APP_KEY.
Crypt (symmetric encryption) and Hash (one-way cryptographic hashing).
Laravel uses this same method for cookies, both the sender and receiver, using APP_KEY as the encryption key.
For something like user passwords, you should never have a way to decrypt them. Ever.
Unique: The collision rate (different inputs hashing to the same output) should be very small
Laravel hashing implements the native PHP password_hash() function, defaulting to a hashing algorithm called bcrypt.
a one-way hash, we cannot decrypt it. All that we can do is test against it.
When the user with this password attempts to log in, Laravel hashes their password input and uses PHP’s password_verify() function to compare the new hash with the database hash
User password storage should never be reversible, and therefore doesn’t need APP_KEY at all.
Any good credential management strategy should include rotation: changing keys and passwords on a regular basis
update the key on each server.
their sessions invalidated as soon as you change your APP_KEY.
make and test a plan to decrypt that data with your old key and re-encrypt it with the new key.
ED25519 is more vulnerable to quantum computation than is RSA
best practice to be using a hardware token
to use a yubikey via gpg: with this method you use your gpg subkey as an ssh key
sit down and spend an hour thinking about your backup and recovery strategy first
never share a private key between physical devices
allows you to revoke a single credential if you lose (control over) that device
If a private key ever turns up on the wrong machine,
you *know* the key and both source and destination
machines have been compromised.
centralized management of authentication/authorization
I have set up a VPS, disabled passwords, and set up a key with a passphrase to gain access. At this point my greatest worry is losing this private key, as that means I can't access the server. What is a reasonable way to back up my private key?
a mountable disk image that's encrypted
a system that can update/rotate your keys across all of your servers on the fly in case one is compromised or assumed to be compromised.
different keys for different purposes per client device
fall back to password plus OTP
relying completely on the security of your disk, against either physical or cyber attacks.
It is better to use a different passphrase for each key but it is also less convenient unless you're using a password manager (personally, I'm using KeePass)
- RSA is pretty standard, and generally speaking is fairly secure for key lengths >=2048. RSA-2048 is the default for ssh-keygen, and is compatible with just about everything.
public-key authentication has the somewhat unexpected side effect of preventing MITM attacks, per this security consulting firm
Disable passwords and only allow keys even for root with PermitRootLogin without-password
You should definitely use a different passphrase for keys stored on separate computers,
Putting this information in a Secret is safer and more flexible than putting it verbatim in a Pod definition or in a container image.
A Secret is an object that contains a small amount of sensitive data such as
a password, a token, or a key.
Users can create secrets, and the system also creates some secrets.
To use a secret, a pod needs to reference the secret.
A secret can be used with a pod in two ways: as files in a volume mounted on one or more of its containers, or used by kubelet when pulling images for the pod.
--from-file
You can also create a Secret in a file first, in json or yaml format,
and then create that object.
The
Secret contains two maps:
data and stringData.
The data field is used to store arbitrary data, encoded using
base64.
Kubernetes automatically creates secrets which contain credentials for
accessing the API and it automatically modifies your pods to use this type of
secret.
kubectl get and kubectl describe avoid showing the contents of a secret by
default.
stringData field is provided for convenience, and allows you to provide
secret data as unencoded strings.
where you are deploying an application
that uses a Secret to store a configuration file, and you want to populate
parts of that configuration file during your deployment process.
a field is specified in both data and stringData, the value from stringData
is used.
The keys of data and stringData must consist of alphanumeric characters,
‘-’, ‘_’ or ‘.’.
Newlines are not valid within these strings and must
be omitted.
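A minimal sketch of such a Secret manifest, using hypothetical names and illustrative values ("admin" and "s3cr3t" base64-encoded) to show the data and stringData maps side by side:

apiVersion: v1
kind: Secret
metadata:
  name: mysecret              # hypothetical name
type: Opaque
data:
  # base64-encoded values; purely illustrative
  username: YWRtaW4=
  password: czNjcjN0
stringData:
  # plain, unencoded string; if the same key also appeared in data, this value would win
  config.yaml: |
    apiUrl: https://example.com/api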
When using the base64 utility on Darwin/macOS, users should avoid using the -b option to split long lines.
create a Secret from generators and then apply it to create the object on the API server.
The generated Secret's name has a suffix appended by hashing the contents.
base64 --decode
Secrets can be mounted as data volumes or be exposed as environment variables to be used by a container in a pod.
Multiple pods can reference the same secret.
Each key in the secret data map becomes the filename under mountPath
each container needs its
own volumeMounts block, but only one .spec.volumes is needed per secret
use .spec.volumes[].secret.items field to change target path of each key:
If .spec.volumes[].secret.items is used, only keys specified in items are projected.
To consume all keys from the secret, all of them must be listed in the items field.
You can also specify the permission mode bits for the files that are part of a secret.
If you don’t specify any, 0644 is used by default.
JSON spec doesn’t support octal notation, so use the value 256 for
0400 permissions.
Inside the container that mounts a secret volume, the secret keys appear as
files and the secret values are base-64 decoded and stored inside these files.
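A sketch of a Pod that mounts a secret volume, selecting one key via items and setting restrictive permissions (names mysecret, secret-test-pod, and the image are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod       # hypothetical
spec:
  containers:
  - name: app
    image: nginx              # illustrative only
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: mysecret
      defaultMode: 256        # decimal 256 == octal 0400
      items:
      - key: username
        path: my-group/my-username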
Mounted Secrets are updated automatically
The kubelet checks whether the mounted secret is fresh on every periodic sync.
cache propagation delay depends on the chosen cache type
A container using a Secret as a
subPath volume mount will not receive
Secret updates.
Inside a container that consumes a secret via environment variables, the secret keys appear as normal environment variables containing the base-64 decoded values of the secret data.
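A sketch of the env-var form, again with hypothetical names (mysecret, SECRET_USERNAME):

apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod        # hypothetical
spec:
  containers:
  - name: app
    image: nginx              # illustrative only
    env:
    - name: SECRET_USERNAME   # appears as a normal env var, already base64-decoded
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: username
  restartPolicy: Never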
An imagePullSecret is a way to pass a secret that contains a Docker (or other) image registry
password to the Kubelet so it can pull a private image on behalf of your Pod.
a secret
needs to be created before any pods that depend on it.
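A sketch of referencing such a registry secret from a Pod; the image URL and the secret name myregistrykey are assumptions, and the secret itself would be created beforehand:

apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod     # hypothetical
spec:
  containers:
  - name: app
    image: registry.example.com/team/app:1.0   # hypothetical private image
  imagePullSecrets:
  - name: myregistrykey       # a registry-credentials Secret created ahead of time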
Secret API objects reside in a namespace. They can only be referenced by pods in that same namespace.
Individual secrets are limited to 1MiB in size.
Kubelet only supports use of secrets for Pods it gets from the API server.
Secrets must be created before they are consumed in pods as environment
variables unless they are marked as optional.
References to Secrets that do
not exist will prevent the pod from starting.
References via secretKeyRef to keys that do not exist in a named Secret
will prevent the pod from starting.
Once a pod is scheduled, the kubelet will try to fetch the
secret value.
Think carefully before sending your own ssh keys: other users of the cluster may have access to the secret.
Special characters such as $, *, and ! require escaping.
If the password you are using has special characters, you need to escape them using the \\ character.
You do not need to escape special characters in passwords from files.
make that key begin with a dot
Dotfiles in secret volume
.secret-file
a frontend container
which handles user interaction and business logic, but which cannot see the
private key;
a signer container that can see the private key, and responds
to simple signing requests from the frontend
When deploying applications that interact with the secrets API, access should be
limited using authorization policies such as RBAC
watch and list requests for secrets within a namespace are
extremely powerful capabilities and should be avoided
watch and list all secrets in a cluster should be reserved for only the most
privileged, system-level components.
additional
precautions with secret objects, such as avoiding writing them to disk where
possible.
A secret is only sent to a node if a pod on that node requires it
only the
secrets that a pod requests are potentially visible within its containers
each container in a pod has
to request the secret volume in its volumeMounts for it to be visible within
the container.
In the API server, secret data is stored in etcd.
limit access to etcd to admin users
Base64 encoding is not an
encryption method and is considered the same as plain text.
A user who can create a pod that uses a secret can also see the value of that secret.
anyone with root on any node can read any secret from the apiserver,
by impersonating the kubelet.
"The YubiKey 4 is the strong authentication bullseye the industry has been aiming at for years, enabling one single key to secure an unlimited number of applications.
Yubico's 4th generation YubiKey is built on high-performance secure elements. It includes the same range of one-time password and public key authentication protocols as in the YubiKey NEO, excluding NFC, but with stronger public/private keys, faster crypto operations and the world's first touch-to-sign feature.
With the YubiKey 4 platform, we have further improved our manufacturing and ordering process, enabling customers to order exactly what functions they want in 500+ unit volumes, with no secrets stored at Yubico or shared with a third-party organization. The best part? An organization can securely customize 1,000 YubiKeys in less than 10 minutes.
For customers who require NFC, the YubiKey NEO is our full-featured key with both contact (USB) and contactless (NFC, MIFARE) communications."
Edge router: A router that enforces the firewall policy for your cluster.
Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model.
Service: A Kubernetes Service that identifies a set of Pods using label selectors.
Services are assumed to have virtual IPs only routable within the cluster network.
Ingress exposes HTTP and HTTPS routes from outside the cluster to
services within the cluster.
Traffic routing is controlled by rules defined on the Ingress resource.
An Ingress can be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name based virtual hosting.
Exposing services other than HTTP and HTTPS to the internet typically
uses a service of type Service.Type=NodePort or
Service.Type=LoadBalancer.
You must have an ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
As with all other Kubernetes resources, an Ingress needs apiVersion, kind, and metadata fields
Ingress frequently uses annotations to configure some options depending on the Ingress controller,
Ingress resource only supports rules
for directing HTTP traffic.
An optional host.
A list of paths
A backend is a combination of Service and port names
has an associated backend
Both the host and path must match the content of an incoming request before the
load balancer directs traffic to the referenced Service.
HTTP (and HTTPS) requests to the
Ingress that matches the host and path of the rule are sent to the listed backend.
A default backend is often configured in an Ingress controller to service any requests that do not
match a path in the spec.
An Ingress with no rules sends all traffic to a single default backend.
Ingress controllers and load balancers may take a minute or two to allocate an IP address.
A fanout configuration routes traffic from a single IP address to more than one Service,
based on the HTTP URI being requested.
nginx.ingress.kubernetes.io/rewrite-target: /
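A sketch of a simple fanout Ingress, assuming the networking.k8s.io/v1 API and hypothetical host, Service names, and ports:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout         # hypothetical
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com         # illustrative host
    http:
      paths:
      - path: /service1
        pathType: Prefix
        backend:
          service:
            name: service1    # hypothetical Services
            port:
              number: 4200
      - path: /service2
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 8080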
describe ingress
get ingress
Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address.
route requests based on
the Host header.
If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name-based virtual host being required.
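A sketch of name-based virtual hosting: two rules on the same Ingress, routed by the Host header (hosts and Services are hypothetical; API version assumed to be networking.k8s.io/v1):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host     # hypothetical
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
  - host: bar.foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80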
You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate.
Currently the Ingress only
supports a single TLS port, 443, and assumes TLS termination.
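A sketch of the TLS wiring, with placeholder certificate data and hypothetical names (testsecret-tls, https-example.foo.com, service1):

apiVersion: v1
kind: Secret
metadata:
  name: testsecret-tls        # hypothetical
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded cert>   # placeholders, not real data
  tls.key: <base64-encoded key>
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-example-ingress   # hypothetical
spec:
  tls:
  - hosts:
    - https-example.foo.com   # must match a host in the rules section
    secretName: testsecret-tls
  rules:
  - host: https-example.foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80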
An Ingress controller is bootstrapped with some load balancing policy settings
that it applies to all Ingress, such as the load balancing algorithm, backend
weight scheme, and others.
More advanced concepts (persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can instead get these features through the load balancer used for a Service.
review the controller
specific documentation to see how they handle health checks
edit ingress
After you save your changes, kubectl updates the resource in the API server, which tells the
Ingress controller to reconfigure the load balancer.
kubectl replace -f on a modified Ingress YAML file.
Node: A worker machine in Kubernetes, part of a cluster.
in most common Kubernetes deployments, nodes in the cluster are not part of the public internet.
Edge router: A router that enforces the firewall policy for your cluster.
a gateway managed by a cloud provider or a physical piece of hardware.
Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model.
Service: A Kubernetes Service that identifies a set of Pods using label selectors.
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
An Ingress does not expose arbitrary ports or protocols.
You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
The name of an Ingress object must be a valid
DNS subdomain name
The Ingress spec
has all the information needed to configure a load balancer or proxy server.
Ingress resource only supports rules
for directing HTTP(S) traffic.
An Ingress with no rules sends all traffic to a single default backend and .spec.defaultBackend
is the backend that should handle requests in that case.
If defaultBackend is not set, the handling of requests that do not match any of the rules will be up to the
ingress controller
A common
usage for a Resource backend is to ingress data to an object storage backend
with static assets.
Exact: Matches the URL path exactly and with case sensitivity.
Prefix: Matches based on a URL path prefix split by /. Matching is case
sensitive and done on a path element by element basis.
multiple paths within an Ingress will match a request. In those
cases precedence will be given first to the longest matching path.
Hosts can be precise matches (for example “foo.bar.com”) or a wildcard (for
example “*.foo.com”).
No match, wildcard only covers a single DNS label
Each Ingress should specify a class, a reference to an
IngressClass resource that contains additional configuration including the name
of the controller that should implement the class.
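A sketch of such an IngressClass and how an Ingress would reference it, assuming a cluster where the networking.k8s.io/v1 API is available; the class and controller names are hypothetical:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: external-lb           # hypothetical class name
spec:
  controller: example.com/ingress-controller   # name of the implementing controller
# Referenced from an Ingress via:
#   spec:
#     ingressClassName: external-lb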
secure an Ingress by specifying a Secret
that contains a TLS private key and certificate.
The Ingress resource only
supports a single TLS port, 443, and assumes TLS termination at the ingress point
(traffic to the Service and its Pods is in plaintext).
TLS will not work on the default rule because the
certificates would have to be issued for all the possible sub-domains.
hosts in the tls section need to explicitly match the host in the rules
section.
"Felony is an open-source pgp keychain built on the modern web with Electron, React, and Redux. Felony is the first PGP app that's easy for anyone to use, without a tutorial. Security++ to the greatest extreme!"
Baseimage-docker only advocates running multiple OS processes inside a single container.
Password and challenge-response authentication are disabled by default. Only key authentication is allowed.
A tool for running a command as another user
The Docker developers advocate the philosophy of running a single logical service per container. A logical service can consist of multiple OS processes.
All syslog messages are forwarded to "docker logs".
Baseimage-docker advocates running multiple OS processes inside a single container, and a single logical service can consist of multiple OS processes.
Baseimage-docker provides tools to encourage running processes as different users
sometimes it makes sense to run multiple services in a single container, and sometimes it doesn't.
Splitting your logical service into multiple OS processes also makes sense from a security standpoint.
using environment variables to pass parameters to containers is very much the "Docker way"
Baseimage-docker provides a facility to run a single one-shot command, while solving all of the aforementioned problems
the shell script must run the daemon without letting it daemonize/fork.
All executable scripts in /etc/my_init.d, if this directory exists. The scripts are run in lexicographic order.
variables will also be passed to all child processes
Environment variables on Unix are inherited on a per-process basis
there is no good central place for defining environment variables for all applications and services
centrally defining environment variables
One of the ideas behind Docker is that containers should be stateless, easily restartable, and behave like a black box.
a one-shot command in a new container
immediately exit after the command exits,
However the downside of this approach is that the init system is not started. That is, while invoking COMMAND, important daemons such as cron and syslog are not running. Also, orphaned child processes are not properly reaped, because COMMAND is PID 1.
add additional daemons (e.g. your own app) to the image by creating runit entries.
Nginx is one such example: it removes all environment variables unless you explicitly instruct it to retain them through the env configuration option.
Mechanisms for easily running multiple processes, without violating the Docker philosophy
Ubuntu is not designed to be run inside Docker
According to the Unix process model, the init process -- PID 1 -- inherits all orphaned child processes and must reap them
Syslog-ng seems to be much more stable
cron daemon
Rotates and compresses logs
/sbin/setuser
A tool for installing apt packages that automatically cleans up after itself.
a single logical service inside a single container
A daemon is a program which runs in the background of its system, such
as a web server.
The shell script must be called run, must be executable, and is to be
placed in the directory /etc/service/<NAME>. runsv will switch to
the directory and invoke ./run after your container starts.
If any script exits with a non-zero exit code, the booting will fail.
If your process is started with
a shell script, make sure you exec the actual process, otherwise the shell will receive the signal
and not your process.
any environment variables set with docker run --env or with the ENV command in the Dockerfile, will be picked up by my_init
not possible for a child process to change the environment variables of other processes
they will not see the environment variables that were originally passed by Docker.
We ignore HOME, SHELL, USER and a bunch of other environment variables on purpose, because not ignoring them will break multi-user containers.
my_init imports environment variables from the directory /etc/container_environment
/etc/container_environment.sh - a dump of the environment variables in Bash format.
modify the environment variables in my_init (and therefore the environment variables in all child processes that are spawned after that point in time), by altering the files in /etc/container_environment
my_init only activates changes in /etc/container_environment when running startup scripts
environment variables don't contain sensitive data, then you can also relax the permissions
Syslog messages are forwarded to the console
syslog-ng is started separately before the runit supervisor process, and shutdown after runit exits.
RUN apt-get update && apt-get upgrade -y -o Dpkg::Options::="--force-confold"
/sbin/my_init --skip-startup-files --quiet --
By default, no keys are installed, so nobody can login
provide a pregenerated, insecure key (PuTTY format)
RUN /usr/sbin/enable_insecure_key
docker run YOUR_IMAGE /sbin/my_init --enable-insecure-key
RUN cat /tmp/your_key.pub >> /root/.ssh/authorized_keys && rm -f /tmp/your_key.pub
The default baseimage-docker installs syslog-ng, cron and sshd services during the build process
Kubernetes supports many types of volumes, and a Pod can
use any number of them simultaneously.
To use a volume, a Pod specifies what volumes to provide for the Pod (the
.spec.volumes
field) and where to mount those into Containers (the
.spec.containers.volumeMounts
field).
A process in a container sees a filesystem view composed from their Docker
image and volumes.
Volumes can not mount onto other volumes or have hard links to
other volumes.
Each Container in the Pod must independently specify where to
mount each volume
local
nfs
cephfs
awsElasticBlockStore
glusterfs
vsphereVolume
An awsElasticBlockStore volume mounts an Amazon Web Services (AWS) EBS
Volume into your Pod.
the contents of an EBS
volume are preserved and the volume is merely unmounted.
an
EBS volume can be pre-populated with data, and that data can be “handed off”
between Pods.
create an EBS volume using aws ec2 create-volume
the nodes on which Pods are running must be AWS EC2 instances
EBS only supports a single EC2 instance mounting a volume
check that the size and EBS volume
type are suitable for your use!
A cephfs volume allows an existing CephFS volume to be
mounted into your Pod.
the contents of a cephfs volume are preserved and the volume is merely
unmounted.
A Container using a ConfigMap as a subPath volume mount will not
receive ConfigMap updates.
An emptyDir volume is first created when a Pod is assigned to a Node, and
exists as long as that Pod is running on that node.
When a Pod is removed from a node for
any reason, the data in the emptyDir is deleted forever.
By default, emptyDir volumes are stored on whatever medium is backing the
node - that might be disk or SSD or network storage, depending on your
environment.
you can set the emptyDir.medium field to "Memory"
to tell Kubernetes to mount a tmpfs (RAM-backed filesystem)
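A sketch of a Pod using a RAM-backed emptyDir as a scratch cache (the Pod name, image, and mount path are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: cache-pod             # hypothetical
spec:
  containers:
  - name: app
    image: nginx              # illustrative only
    volumeMounts:
    - name: cache-volume
      mountPath: /cache
  volumes:
  - name: cache-volume
    emptyDir:
      medium: Memory          # tmpfs; contents are lost when the Pod leaves the node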
An fc volume allows an existing fibre channel volume to be mounted in a Pod.
configure FC SAN Zoning to allocate and mask those LUNs (volumes) to the target WWNs beforehand so that Kubernetes hosts can access them.
Flocker is an open-source clustered Container data volume manager. It provides management
and orchestration of data volumes backed by a variety of storage backends.
emptyDir
flocker
A flocker volume allows a Flocker dataset to be mounted into a Pod
have your own Flocker installation running
A gcePersistentDisk volume mounts a Google Compute Engine (GCE) Persistent
Disk into your Pod.
Using a PD on a Pod controlled by a ReplicationController will fail unless
the PD is read-only or the replica count is 0 or 1
A glusterfs volume allows a Glusterfs (an open
source networked filesystem) volume to be mounted into your Pod.
have your own GlusterFS installation running
A hostPath volume mounts a file or directory from the host node’s filesystem
into your Pod.
a
powerful escape hatch for some applications
access to Docker internals; use a hostPath
of /var/lib/docker
allowing a Pod to specify whether a given hostPath should exist prior to the
Pod running, whether it should be created, and what it should exist as
specify a type for a hostPath volume
the files or directories created on the underlying hosts are only writable by root.
hostPath:
# directory location on host
path: /data
# this field is optional
type: Directory
An iscsi volume allows an existing iSCSI (SCSI over IP) volume to be mounted
into your Pod.
have your own iSCSI server running
A feature of iSCSI is that it can be mounted as read-only by multiple consumers
simultaneously.
A local volume represents a mounted local storage device such as a disk,
partition or directory.
Local volumes can only be used as a statically created PersistentVolume.
Compared to hostPath volumes, local volumes can be used in a durable and
portable manner without manually scheduling Pods to nodes, as the system is aware
of the volume’s node constraints by looking at the node affinity on the PersistentVolume.
If a node becomes unhealthy,
then the local volume will also become inaccessible, and a Pod using it will not
be able to run.
PersistentVolume spec using a local volume and
nodeAffinity
PersistentVolume nodeAffinity is required when using local volumes. It enables
the Kubernetes scheduler to correctly schedule Pods using local volumes to the
correct node.
PersistentVolume volumeMode can now be set to “Block” (instead of the default
value “Filesystem”) to expose the local volume as a raw block device.
When using local volumes, it is recommended to create a StorageClass with
volumeBindingMode set to WaitForFirstConsumer
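A sketch of a local PersistentVolume with the required nodeAffinity, plus a matching StorageClass; the node name, disk path, and sizes are hypothetical:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv      # hypothetical
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1     # pre-mounted disk on the node
  nodeAffinity:               # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node      # hypothetical node
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer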
An nfs volume allows an existing NFS (Network File System) share to be
mounted into your Pod.
NFS can be mounted by multiple
writers simultaneously.
have your own NFS server running with the share exported
A persistentVolumeClaim volume is used to mount a
PersistentVolume into a Pod.
PersistentVolumes are a
way for users to “claim” durable storage (such as a GCE PersistentDisk or an
iSCSI volume) without knowing the details of the particular cloud environment.
A projected volume maps several existing volume sources into the same directory.
All sources are required to be in the same namespace as the Pod. For more details,
see the all-in-one volume design document.
Each projected volume source is listed in the spec under sources
A Container using a projected volume source as a subPath volume mount will not
receive updates for those volume sources.
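A sketch of a projected volume combining a Secret and a ConfigMap into one directory (mysecret, myconfigmap, and the paths are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: projected-pod         # hypothetical
spec:
  containers:
  - name: app
    image: busybox            # illustrative only
    volumeMounts:
    - name: all-in-one
      mountPath: /projected-volume
      readOnly: true
  volumes:
  - name: all-in-one
    projected:
      sources:                # every source must live in the Pod's namespace
      - secret:
          name: mysecret
          items:
          - key: username
            path: my-group/my-username
      - configMap:
          name: myconfigmap
          items:
          - key: config
            path: my-group/my-config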
RBD volumes can only be mounted by a single consumer in read-write mode - no
simultaneous writers allowed
A secret volume is used to pass sensitive information, such as passwords, to
Pods
store secrets in the Kubernetes API and mount them as files for
use by Pods
secret volumes are
backed by tmpfs (a RAM-backed filesystem) so they are never written to
non-volatile storage.
create a secret in the Kubernetes API before you can use it
A Container using a Secret as a subPath volume mount will not
receive Secret updates.
StorageOS runs as a Container within your Kubernetes environment, making local
or attached storage accessible from any node within the Kubernetes cluster.
Data can be replicated to protect against node failure. Thin provisioning and
compression can improve utilization and reduce cost.
StorageOS provides block storage to Containers, accessible via a file system.
A vsphereVolume is used to mount a vSphere VMDK Volume into your Pod.
supports both VMFS and VSAN datastore.
create VMDK using one of the following methods before using with Pod.
share one volume for multiple uses in a single Pod.
The volumeMounts.subPath
property can be used to specify a sub-path inside the referenced volume instead of its root.
Use the subPathExpr field to construct subPath directory names from Downward API environment variables
enable the VolumeSubpathEnvExpansion feature gate
The subPath and subPathExpr properties are mutually exclusive.
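A sketch of sharing one volume between two containers via subPath, along the lines of a LAMP-style Pod; the PVC name, images, and sub-directory names are assumptions (a real mysql container would also need credentials configured):

apiVersion: v1
kind: Pod
metadata:
  name: my-lamp-site          # hypothetical
spec:
  containers:
  - name: mysql
    image: mysql
    volumeMounts:
    - mountPath: /var/lib/mysql
      name: site-data
      subPath: mysql          # only the "mysql" sub-directory of the volume
  - name: php
    image: php:7-apache
    volumeMounts:
    - mountPath: /var/www/html
      name: site-data
      subPath: html           # a different sub-directory of the same volume
  volumes:
  - name: site-data
    persistentVolumeClaim:
      claimName: my-lamp-site-data   # hypothetical claim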
There is no limit on how much space an emptyDir or
hostPath volume can consume, and no isolation between Containers or between
Pods.
emptyDir and hostPath volumes will be able to
request a certain amount of space using a resource
specification, and to select the type of media to use, for clusters that have
several media types.
the Container Storage Interface (CSI)
and Flexvolume. They enable storage vendors to create custom storage plugins
without adding them to the Kubernetes repository.
all volume plugins (like
volume types listed above) were “in-tree” meaning they were built, linked,
compiled, and shipped with the core Kubernetes binaries and extend the core
Kubernetes API.
Container Storage Interface (CSI)
defines a standard interface for container orchestration systems (like
Kubernetes) to expose arbitrary storage systems to their container workloads.
Once a CSI compatible volume driver is deployed on a Kubernetes cluster, users
may use the csi volume type to attach, mount, etc. the volumes exposed by the
CSI driver.
The csi volume type does not support direct reference from Pod and may only be
referenced in a Pod via a PersistentVolumeClaim object.
This feature requires the CSIInlineVolume feature gate to be enabled: --feature-gates=CSIInlineVolume=true
In-tree plugins that support CSI Migration and have a corresponding CSI driver implemented
are listed in the “Types of Volumes” section above.
Mount propagation allows for sharing volumes mounted by a Container to
other Containers in the same Pod, or even to other Pods on the same node.
Mount propagation of a volume is controlled by mountPropagation field in Container.volumeMounts.
HostToContainer - This volume mount will receive all subsequent mounts
that are mounted to this volume or any of its subdirectories.
Bidirectional - This volume mount behaves the same as the HostToContainer mount.
In addition, all volume mounts created by the Container will be propagated
back to the host and to all Containers of all Pods that use the same volume.
Edit your Docker’s systemd service file. Set MountFlags as follows: MountFlags=shared
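A sketch of where the mountPropagation field sits in a container spec (Pod name, image, and paths are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: propagation-demo      # hypothetical
spec:
  containers:
  - name: app
    image: busybox            # illustrative only
    volumeMounts:
    - name: host-mnt
      mountPath: /mnt/host
      mountPropagation: HostToContainer   # receives mounts made later under this path on the host
  volumes:
  - name: host-mnt
    hostPath:
      path: /mnt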
A chart is a collection of files
that describe a related set of Kubernetes resources.
A single chart
might be used to deploy something simple, like a memcached pod, or
something complex, like a full web app stack with HTTP servers,
databases, caches, and so on.
Charts are created as files laid out in a particular directory tree,
then they can be packaged into versioned archives to be deployed.
A chart is organized as a collection of files inside of a directory.
values.yaml # The default configuration values for this chart
charts/ # A directory containing any charts upon which this chart depends.
templates/ # A directory of templates that, when combined with values,
# will generate valid Kubernetes manifest files.
version: A SemVer 2 version (required)
apiVersion: The chart API version, always "v1" (required)
Every chart must have a version number. A version must follow the
SemVer 2 standard.
non-SemVer names are explicitly
disallowed by the system.
When generating a
package, the helm package command will use the version that it finds
in the Chart.yaml as a token in the package name.
the appVersion field is not related to the version field. It is
a way of specifying the version of the application.
appVersion: The version of the app that this contains (optional). This needn't be SemVer.
If the latest version of a chart in the
repository is marked as deprecated, then the chart as a whole is considered to
be deprecated.
deprecated: Whether this chart is deprecated (optional, boolean)
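A sketch of a Chart.yaml pulling these fields together; the chart name, versions, and description are hypothetical:

apiVersion: v1                # chart API version, always "v1" here
name: mychart                 # hypothetical chart name
version: 1.2.3                # must follow SemVer 2
appVersion: "4.5.6"           # version of the packaged application; need not be SemVer
description: A sketch of a Chart.yaml
deprecated: false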
one chart may depend on any number of other charts.
dependencies can be dynamically linked through the requirements.yaml
file or brought in to the charts/ directory and managed manually.
the preferred method of declaring dependencies is by using a
requirements.yaml file inside of your chart.
A requirements.yaml file is a simple file for listing your
dependencies.
The repository field is the full URL to the chart repository.
you must also use helm repo add to add that repo locally.
helm dependency update
and it will use your dependency file to download all the specified
charts into your charts/ directory for you.
When helm dependency update retrieves charts, it will store them as
chart archives in the charts/ directory.
Managing charts with requirements.yaml is a good way to easily keep
charts updated, and also share requirements information throughout a
team.
All charts are loaded by default.
The condition field holds one or more YAML paths (delimited by commas).
If this path exists in the top parent’s values and resolves to a boolean value,
the chart will be enabled or disabled based on that boolean value.
The tags field is a YAML list of labels to associate with this chart.
all charts with tags can be enabled or disabled by
specifying the tag and a boolean value.
The --set parameter can be used as usual to alter tag and condition values.
Conditions (when set in values) always override tags.
The first condition path that exists wins and subsequent ones for that chart are ignored.
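A sketch of a requirements.yaml using repository, condition, and tags; the dependency names, versions, and repository URLs are assumptions:

dependencies:
- name: apache                # hypothetical dependency charts
  version: 1.2.3
  repository: http://example.com/charts
  condition: apache.enabled   # YAML path looked up in the top parent's values
  tags:
  - frontend
- name: mysql
  version: 3.2.1
  repository: http://another.example.com/charts
  condition: mysql.enabled
  tags:
  - backend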
The keys containing the values to be imported can be specified in the parent chart’s requirements.yaml file
using a YAML list. Each item in the list is a key which is imported from the child chart’s exports field.
specifying the key data in our import list, Helm looks in the exports field of the child
chart for data key and imports its contents.
the parent key data is not contained in the parent’s final values. If you need to specify the
parent key, use the ‘child-parent’ format.
To access values that are not contained in the exports key of the child chart’s values, you will need to
specify the source key of the values to be imported (child) and the destination path in the parent chart’s
values (parent).
To drop a dependency into your charts/ directory, use the
helm fetch command
A dependency can be either a chart archive (foo-1.2.3.tgz) or an
unpacked chart directory.
name cannot start with _ or '.'; such files are ignored by the chart loader.
a single release is created with all the objects for the chart and its dependencies.
Helm Chart templates are written in the
Go template language, with the
addition of 50 or so add-on template
functions from the Sprig library and a
few other specialized functions
When
Helm renders the charts, it will pass every file in that directory
through the template engine.
Chart developers may supply a file called values.yaml inside of a
chart. This file can contain default values.
Chart users may supply a YAML file that contains values. This can be
provided on the command line with helm install.
When a user supplies custom values, these values will override the
values in the chart’s values.yaml file.
Template files follow the standard conventions for writing Go templates
{{default "minio" .Values.storage}}
Values that are supplied via a values.yaml file (or via the --set
flag) are accessible from the .Values object in a template.
pre-defined, are available to every template, and
cannot be overridden
the names are case
sensitive
Release.Name: The name of the release (not the chart)
Release.IsUpgrade: This is set to true if the current operation is an upgrade or rollback.
Release.Revision: The revision number. It begins at 1, and increments with
each helm upgrade
Chart: The contents of the Chart.yaml
Files: A map-like object containing all non-special files in the chart.
Files can be
accessed using {{index .Files "file.name"}} or using the {{.Files.Get name}} or
{{.Files.GetString name}} functions.
.helmignore
access the contents of the file
as []byte using {{.Files.GetBytes}}
Any unknown Chart.yaml fields will be dropped
Chart.yaml cannot be
used to pass arbitrarily structured data into the template.
A values file is formatted in YAML.
A chart may include a default
values.yaml file
be merged into the default
values file.
The default values file included inside of a chart must be named
values.yaml
accessible inside of templates using the
.Values object
Values files can declare values for the top-level chart, as well as for
any of the charts that are included in that chart’s charts/ directory.
Charts at a higher level have access to all of the variables defined
beneath.
lower level charts cannot access things in
parent charts
Values are namespaced, but namespaces are pruned.
the scope of the values has been reduced and the
namespace prefix removed
Helm supports special “global” value.
a way of sharing one top-level variable with all
subcharts, which is useful for things like setting metadata properties
like labels.
If a subchart declares a global variable, that global will be passed
downward (to the subchart’s subcharts), but not upward to the parent
chart.
global variables of parent charts take precedence over the global variables from subcharts.
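A sketch of a parent chart's values.yaml with a global value alongside a subchart-scoped one (names are hypothetical):

# Parent chart's values.yaml
global:
  app: MyWordPress            # visible to this chart and every subchart as .Values.global.app
mysql:                        # scoped to the mysql subchart, where it appears as .Values.user
  user: admin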
helm lint
A chart repository is an HTTP server that houses one or more packaged
charts
Any HTTP server that can serve YAML files and tar files and can answer
GET requests can be used as a repository server.
Helm does not provide tools for uploading charts to
remote repository servers.
the only way to add a chart to $HELM_HOME/starters is to manually
copy it there.
Helm provides a hook mechanism to allow chart developers to intervene
at certain points in a release’s life cycle.
Execute a Job to back up a database before installing a new chart,
and then execute a second job after the upgrade in order to restore
data.
Hooks are declared as an annotation in the metadata section of a manifest
Hooks work like regular templates, but they have special annotations
pre-install
post-install: Executes after all resources are loaded into Kubernetes
pre-delete
post-delete: Executes on a deletion request after all of the release’s
resources have been deleted.
pre-upgrade
post-upgrade
pre-rollback
post-rollback: Executes on a rollback request after all resources
have been modified.
crd-install
test-success: Executes when running helm test and expects the pod to
return successfully (return code == 0).
test-failure: Executes when running helm test and expects the pod to
fail (return code != 0).
Hooks allow you, the chart developer, an opportunity to perform
operations at strategic points in a release lifecycle
Tiller then loads the hook with the lowest weight first (negative to positive)
Tiller returns the release name (and other data) to the client
If the resources is a Job kind, Tiller
will wait until the job successfully runs to completion.
if the job
fails, the release will fail. This is a blocking operation, so the
Helm client will pause while the Job is run.
If they
have hook weights (see below), they are executed in weighted order. Otherwise,
ordering is not guaranteed.
good practice to add a hook weight, and set it
to 0 if weight is not important.
The resources that a hook creates are not tracked or managed as part of the
release.
leave the hook resource alone.
To destroy such
resources, you need to either write code to perform this operation in a pre-delete
or post-delete hook or add "helm.sh/hook-delete-policy" annotation to the hook template file.
Hooks are just Kubernetes manifest files with special annotations in the
metadata section
One resource can implement multiple hooks
no limit to the number of different resources that
may implement a given hook.
When subcharts declare hooks, those are also evaluated. There is no way
for a top-level chart to disable the hooks declared by subcharts.
Hook weights can be positive or negative numbers but must be represented as
strings.
sort those hooks in ascending order.
Hook deletion policies
"before-hook-creation" specifies Tiller should delete the previous hook before the new hook is launched.
By default Tiller will wait for 60 seconds for a deleted hook to no longer exist in the API server before timing out.
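A sketch of a hook manifest: a pre-upgrade backup Job carrying the hook, weight, and delete-policy annotations (the Job name, image, and command are hypothetical):

apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-db-backup"   # hypothetical backup job
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "-5"           # weights must be strings; lowest runs first
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: backup
        image: example/db-backup:1.0      # hypothetical image
        command: ["/bin/backup.sh"]       # hypothetical command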
Custom Resource Definitions (CRDs) are a special kind in Kubernetes.
The crd-install hook is executed very early during an installation, before
the rest of the manifests are verified.
A common reason why the hook resource might already exist is that it was not deleted following use on a previous install/upgrade.
Helm uses Go templates for templating
your resource files.
two special template functions: include and required
include
function allows you to bring in another template, and then pass the results to other
template functions.
The required function allows you to declare a particular
values entry as required for template rendering.
If the value is empty, the template
rendering will fail with a user submitted error message.
When you are working with string data, you are always safer quoting the
strings than leaving them as bare words
Quote Strings, Don’t Quote Integers
when working with integers do not quote the values
env variables values which are expected to be string
to include a template, and then perform an operation
on that template’s output, Helm has a special include function
The above includes a template called toYaml, passes it $value, and
then passes the output of that template to the nindent function.
Go provides a way for setting template options to control behavior
when a map is indexed with a key that’s not present in the map
The required function gives developers the ability to declare a value entry
as required for template rendering.
The tpl function allows developers to evaluate strings as templates inside a template.
Rendering a external configuration file
(.Files.Get "conf/app.conf")
Image pull secrets are essentially a combination of registry, username, and password.
Automatically Roll Deployments When ConfigMaps or Secrets change
configmaps or secrets are injected as configuration
files in containers
a restart may be required should those
be updated with a subsequent helm upgrade
The sha256sum function can be used to ensure a deployment’s
annotation section is updated if another file changes
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
helm upgrade --recreate-pods
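A sketch of where that checksum annotation sits inside a Deployment template so pods roll when the referenced ConfigMap changes (the surrounding Deployment is elided; configmap.yaml is the chart's own template file):

kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}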
"helm.sh/resource-policy": keep
resources that should not be deleted when Helm runs a
helm delete
this resource becomes
orphaned. Helm will no longer manage it in any way.
create some reusable parts in your chart
In the templates/ directory, any file that begins with an underscore (_) is not expected to output a Kubernetes manifest file.
by convention, helper templates and partials are placed in a
_helpers.tpl file.
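A sketch of a named partial in _helpers.tpl and its use from a manifest template; the template name mychart.labels is hypothetical:

{{/* templates/_helpers.tpl */}}
{{- define "mychart.labels" -}}
app: {{ .Chart.Name }}
release: {{ .Release.Name }}
{{- end -}}

# In any manifest template of the chart:
metadata:
  labels:
{{ include "mychart.labels" . | indent 4 }}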
The current best practice for composing a complex application from discrete parts
is to create a top-level umbrella chart that
exposes the global configurations, and then use the charts/ subdirectory to
embed each of the components.
SAP’s Converged charts: These charts
install SAP Converged Cloud a full OpenStack IaaS on Kubernetes. All of the charts are collected
together in one GitHub repository, except for a few submodules.
Deis’s Workflow:
This chart exposes the entire Deis PaaS system with one chart. But it’s different
from the SAP chart in that this umbrella chart is built from each component, and
each component is tracked in a different Git repository.
YAML is a superset of JSON
any valid JSON structure ought to be valid in YAML.
As a best practice, templates should follow a YAML-like syntax unless
the JSON syntax substantially reduces the risk of a formatting issue.
There are functions in Helm that allow you to generate random data,
cryptographic keys, and so on.
a chart repository is a location where packaged charts can be
stored and shared.
A chart repository is an HTTP server that houses an index.yaml file and
optionally some packaged charts.
Because a chart repository can be any HTTP server that can serve YAML and tar
files and can answer GET requests, you have a plethora of options when it comes
down to hosting your own chart repository.
It is not required that a chart package be located on the same server as the
index.yaml file.
A valid chart repository must have an index file. The
index file contains information about each chart in the chart repository.
The Helm project provides an open-source Helm repository server called ChartMuseum that you can host yourself.
$ helm repo index fantastic-charts --url https://fantastic-charts.storage.googleapis.com
A repository will not be added if it does not contain a valid
index.yaml
add the repository to their helm client via the helm
repo add [NAME] [URL] command with any name they would like to use to
reference the repository.
Helm has provenance tools which help chart users verify the integrity and origin
of a package.
Integrity is established by comparing a chart to a provenance record
The provenance file contains a chart’s YAML file plus several pieces of
verification information
Chart repositories serve as a centralized collection of Helm charts.
Chart repositories must make it possible to serve provenance files over HTTP via
a specific request, and must make them available at the same URI path as the chart.
We don’t want to be “the certificate authority” for all chart
signers. Instead, we strongly favor a decentralized model, which is part
of the reason we chose OpenPGP as our foundational technology.
The Keybase platform provides a public
centralized repository for trust information.
A chart contains a number of Kubernetes resources and components that work together.
A test in a helm chart lives under the templates/ directory and is a pod definition that specifies a container with a given command to run.
The pod definition must contain one of the helm test hook annotations: helm.sh/hook: test-success or helm.sh/hook: test-failure
helm test
nest your test suite under a tests/ directory like <chart-name>/templates/tests/
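A sketch of such a test pod definition, modeled on the common connection-test pattern; the pod name, image, and the Service it probes are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-connection-test"   # hypothetical test name
  annotations:
    "helm.sh/hook": test-success                # run by `helm test`; expected to exit 0
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox                              # illustrative only
    command: ['wget']
    args: ['{{ .Release.Name }}-svc:80']        # hypothetical Service created by the chart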
Keyfiles are bare-minimum forms of security and are best suited for testing or
development environments.
With keyfile authentication, each
mongod instance in the replica set uses the contents of the keyfile as the
shared password for authenticating other members in the deployment.
On UNIX systems, the keyfile must not have group or world
permissions.
Processes are visible to other containers in the pod. This includes all
information visible in /proc, such as passwords that were passed as arguments
or environment variables. These are protected only by regular Unix permissions.
Container filesystems are visible to other containers in the pod through the
/proc/$pid/root link. This makes debugging easier, but it also means
that filesystem secrets are protected only by filesystem permissions.