"[" is a command. It's actually syntactic sugar for the built-in command test which checks and compares its arguments. The "]" is actually an argument to the [ command that tells it to stop checking for arguments!
why > and < get weird inside single square brackets -- Bash actually thinks you're trying to do an input or output redirect inside a command!
the [[ double square brackets ]] and (( double parens )) are not exactly commands. They're actually Bash language keywords, which is what makes them behave a little more predictably.
The [[ double square brackets ]] work essentially the same as [ single square brackets ], albeit with some more superpowers like more powerful regex support.
The (( double parentheses )) are actually a construct that allows arithmetic inside Bash.
If the result inside is zero, it returns an exit code of 1. (Essentially, zero is "falsey.")
the greater and less-than symbols work just fine inside arithmetic parens.
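A quick sketch of that difference (the variable values here are just for illustration):

a=3; b=10
[ "$a" > "$b" ] && echo "true?!"               # not a comparison: this redirects output to a file named 10
[ "$a" -gt "$b" ] || echo "a is not greater"   # the test command wants -gt for numbers
(( a > b )) || echo "a is not greater"         # arithmetic parens understand > directly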
exit code 0 for success.
exit code 1 for failure.
If the regex works out, the return code of the double square brackets is 0, and thus the function returns 0. If not, everything returns 1. This is a really great way to name regexes.
the stuff immediately after the if can be any command in the whole wide world, as long as it provides an exit code, which is pretty much always.
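For example, a small sketch of wrapping a regex in a named function and using it after if (looks_like_ipv4 is just an illustrative name):

looks_like_ipv4() {
  [[ $1 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]   # the function's exit code is whatever [[ ]] returned
}
if looks_like_ipv4 "10.0.0.1"; then echo "matched"; fi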
""[" is a command. It's actually syntactic sugar for the built-in command test which checks and compares its arguments. The "]" is actually an argument to the [ command that tells it to stop checking for arguments!"
When you quote $*, it will output all of the arguments received as one single string, separated by a space regardless of how they were quoted going in, but it will quote that string so that it doesn't get split up later.
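A small sketch of the difference between a quoted "$*" and "$@" (show_args is just an illustrative name):

show_args() {
  printf 'joined:   <%s>\n' "$*"   # one single string, arguments joined by spaces
  printf 'separate: <%s>\n' "$@"   # one string per original argument
}
show_args "first arg" second       # joined prints once, separate prints twice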
/sys/fs/cgroup/ is the recommended location for cgroup hierarchies, but it is not a standard.
most container specific metrics are available at the cgroup filesystem via /path/to/cgroup/memory.stat, /path/to/cgroup/memory.usage_in_bytes, /path/to/cgroup/memory.limit_in_bytes and others.
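For example, on a host using cgroup v1 something like the following should work; the exact cgroup path depends on your container runtime, and the path below is only illustrative:

CG=/sys/fs/cgroup/memory/docker/<container-id>   # illustrative path
cat "$CG/memory.usage_in_bytes"                  # current memory usage
cat "$CG/memory.limit_in_bytes"                  # memory limit for the cgroup
grep -E 'rss|cache' "$CG/memory.stat"            # more detailed breakdown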
The initial idea is to make the application invoke the destructor of each component as soon as it receives specific signals such as SIGTERM and SIGINT.
When you run a Docker container, by default it gets its own PID namespace, which means the containerized process is isolated from other processes on your host.
PID 1 inside a PID namespace has the important task of reaping zombie processes.
This uses /bin/bash as PID 1 and runs your program as a subprocess.
When a signal is sent to a shell, the signal won't be forwarded to its subprocesses.
By using the exec form, we can run our program as PID 1.
If you use the exec form to run a shell script that spawns your application, remember to use the exec builtin to replace the shell process; otherwise it behaves like scenario 1.
/bin/bash can handle reaping zombie processes.
with Tini, SIGTERM properly terminates your process even if you didn’t explicitly install a signal handler for it.
Run tini as PID 1 and it will forward signals to its subprocesses.
tini is a signal proxy and it can also deal with the zombie process issue automatically.
Run your program with tini by passing the --init flag to docker run.
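A minimal sketch (YOUR_IMAGE and your-program are placeholders):

docker run --init YOUR_IMAGE your-program   # Docker runs its bundled init (tini) as PID 1 and your program under it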
With docker stop, Docker waits 10 seconds (by default) for the container to stop before killing the process. The main process inside the container receives SIGTERM; if it hasn't exited after those 10 seconds, the Docker daemon sends SIGKILL to terminate it.
docker kill kills running containers immediately. It's more like kill -9 or kill -SIGKILL.
The parameter name or symbol to be expanded may be enclosed in braces, which are optional but serve to protect the variable to be expanded from characters immediately following it which could be interpreted as part of the name.
When braces are used, the matching ending brace is the first ‘}’ not escaped by a backslash or within a quoted string, and not within an embedded arithmetic expansion, command substitution, or parameter expansion.
${parameter}
braces are required when parameter is a positional parameter with more than one digit, or when parameter is followed by a character that is not to be interpreted as part of its name
If the first character of parameter is an exclamation point (!), and parameter is not a nameref, it introduces a level of variable indirection.
${parameter:-word}
If parameter is unset or null, the expansion of word is substituted. Otherwise, the value of parameter is substituted.
${parameter:=word}
If parameter is unset or null, the expansion of word is assigned to parameter.
${parameter:?word}
If parameter is null or unset, the expansion of word (or a message to that effect if word is not present) is written to the standard error and the shell, if it is not interactive, exits.
${parameter:+word}
If parameter is null or unset, nothing is substituted, otherwise the expansion of word is substituted.
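A quick sketch of these four forms (the variable names are illustrative):

unset name
echo "${name:-guest}"        # prints "guest"; name stays unset
echo "${name:=guest}"        # prints "guest" and assigns it to name
echo "${name:+is set}"       # prints "is set" because name now has a value
echo "${other:?must be set}" # writes an error to stderr; a non-interactive shell exits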
${parameter:offset}
${parameter:offset:length}
Substring expansion applied to an associative array produces undefined results.
${parameter/pattern/string}
The pattern is expanded to produce a pattern just as in filename expansion.
If pattern begins with ‘/’, all matches of pattern are replaced with string; normally only the first match is replaced.
The ‘^’ operator converts lowercase letters matching pattern to uppercase; the ‘,’ operator converts matching uppercase letters to lowercase.
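A small sketch of substring expansion, replacement, and case conversion (the case operators require Bash 4+):

s="hello world"
echo "${s:6}"     # world        (offset)
echo "${s:0:5}"   # hello        (offset:length)
echo "${s/o/0}"   # hell0 world  (only the first match replaced)
echo "${s//o/0}"  # hell0 w0rld  (leading / replaces all matches)
echo "${s^}"      # Hello world  (^ uppercases the first matching character)
echo "${s^^}"     # HELLO WORLD  (^^ uppercases all matching characters)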
every Kubernetes operation is exposed as an API endpoint and can be executed by an HTTP request to this endpoint.
the main job of kubectl is to carry out HTTP requests to the Kubernetes API
Kubernetes maintains an internal state of resources, and all Kubernetes operations are CRUD operations on these resources.
Kubernetes is a fully resource-centred system
Kubernetes API reference is organised as a list of resource types with their associated operations.
This is how kubectl works for all commands that interact with the Kubernetes cluster.
kubectl simply makes HTTP requests to the appropriate Kubernetes API endpoints.
it's totally possible to control Kubernetes with a tool like curl by manually issuing HTTP requests to the Kubernetes API.
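A hedged sketch of doing that: kubectl proxy handles authentication and exposes the API on localhost, and curl then talks to a real API endpoint:

kubectl proxy --port=8080 &
curl http://localhost:8080/api/v1/namespaces/default/pods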
Kubernetes consists of a set of independent components that run as separate processes on the nodes of a cluster.
components on the master nodes
Storage backend: stores resource definitions (usually etcd is used)
API server: provides Kubernetes API and manages storage backend
Controller manager: ensures resource statuses match specifications
Scheduler: schedules Pods to worker nodes
component on the worker nodes
Kubelet: manages execution of containers on a worker node
triggers the ReplicaSet controller, which is a sub-process of the controller manager.
the scheduler, which watches for Pod definitions that are not yet scheduled to a worker node.
creating and updating resources in the storage backend on the master node.
The kubelet of the worker node your ReplicaSet Pods have been scheduled to instructs the configured container runtime (which may be Docker) to download the required container images and run the containers.
Kubernetes components (except the API server and the storage backend) work by watching for resource changes in the storage backend and manipulating resources in the storage backend.
However, these components do not access the storage backend directly, but only through the Kubernetes API.
double usage of the Kubernetes API for internal components as well as for external users is a fundamental design concept of Kubernetes.
All other Kubernetes components and users read, watch, and manipulate the state (i.e. resources) of Kubernetes through the Kubernetes API
The storage backend stores the state (i.e. resources) of Kubernetes.
command completion is a shell feature that works by means of a completion script.
A completion script is a shell script that defines the completion behaviour for a specific command. Sourcing a completion script enables completion for the corresponding command.
kubectl completion zsh
/etc/bash_completion.d directory (create it, if it doesn't exist)
source <(kubectl completion bash)
source <(kubectl completion zsh)
autoload -Uz compinit
compinit
the API reference, which contains the full specifications of all resources.
kubectl api-resources
displays the resource names in their plural form (e.g. deployments instead of deployment). It also displays the shortname (e.g. deploy) for those resources that have one. Don't worry about these differences. All of these name variants are equivalent for kubectl.
.spec
This is where the custom columns output format comes in. It lets you freely define the columns and the data to display in them. You can choose any field of a resource to be displayed as a separate column in the output.
kubectl get pods -o custom-columns='NAME:metadata.name,NODE:spec.nodeName'
kubectl explain pod.spec.
kubectl explain pod.metadata.
browse the resource specifications and try it out with any fields you like!
JSONPath is a language to extract data from JSON documents (it is similar to XPath for XML).
with the custom columns output of kubectl, only a subset of the JSONPath capabilities is supported
Many fields of Kubernetes resources are lists, and this operator allows you to select items of these lists. It is often used with a wildcard as [*] to select all items of the list.
kubectl get pods -o custom-columns='NAME:metadata.name,IMAGES:spec.containers[*].image'
a Pod may contain more than one container.
The availability zones for each node are obtained through the special failure-domain.beta.kubernetes.io/zone label.
kubectl get nodes -o yaml
kubectl get nodes -o json
The default kubeconfig file is ~/.kube/config
If you work with multiple clusters, then you have connection parameters for multiple clusters configured in your kubeconfig file.
Within a cluster, you can set up multiple namespaces (a namespace is a kind of "virtual" cluster within a physical cluster)
overwrite the default kubeconfig file with the --kubeconfig option for every kubectl command.
Namespace: the namespace to use when connecting to the cluster
a one-to-one mapping between clusters and contexts.
When kubectl reads a kubeconfig file, it always uses the information from the current context.
just change the current context in the kubeconfig file
to switch to another namespace in the same cluster, you can change the value of the namespace element of the current context
kubectl also provides the --cluster, --user, --namespace, and --context options that allow you to overwrite individual elements and the current context itself, regardless of what is set in the kubeconfig file.
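For example (the context and namespace names are illustrative):

kubectl get pods --context=staging-cluster --namespace=team-a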
A popular tool for switching between clusters and namespaces is kubectx.
kubectl config get-contexts
just have to download the shell scripts named kubectl-ctx and kubectl-ns to any directory in your PATH and make them executable (for example, with chmod +x)
kubectl proxy
kubectl get roles
kubectl get pod
Kubectl plugins are distributed as simple executable files with a name of the form kubectl-x. The prefix kubectl- is mandatory.
To install a plugin, you just have to copy the kubectl-x file to any directory in your PATH and make it executable (for example, with chmod +x)
krew itself is a kubectl plugin
check out the kubectl-plugins GitHub topic
The executable can be of any type, a Bash script, a compiled Go program, a Python script, it really doesn't matter. The only requirement is that it can be directly executed by the operating system.
kubectl plugins can be written in any programming or scripting language.
you can write more sophisticated plugins with real programming languages, for example, using a Kubernetes client library. If you use Go, you can also use the cli-runtime library, which exists specifically for writing kubectl plugins.
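As a sketch, a toy plugin in Bash (kubectl-hello is a made-up name): save it as kubectl-hello somewhere on your PATH, chmod +x it, and run it as kubectl hello.

#!/bin/bash
# kubectl-hello: a toy kubectl plugin
echo "Hello from a kubectl plugin! Current context: $(kubectl config current-context)"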
a kubeconfig file consists of a set of contexts
changing the current context means changing the cluster, if you have only a single context per cluster.
Baseimage-docker only advocates running multiple OS processes inside a single container.
Password and challenge-response authentication are disabled by default. Only key authentication is allowed.
A tool for running a command as another user
The Docker developers advocate the philosophy of running a single logical service per container. A logical service can consist of multiple OS processes.
All syslog messages are forwarded to "docker logs".
Baseimage-docker advocates running multiple OS processes inside a single container, and a single logical service can consist of multiple OS processes.
Baseimage-docker provides tools to encourage running processes as different users
sometimes it makes sense to run multiple services in a single container, and sometimes it doesn't.
Splitting your logical service into multiple OS processes also makes sense from a security standpoint.
using environment variables to pass parameters to containers is very much the "Docker way"
Baseimage-docker provides a facility to run a single one-shot command, while solving all of the aforementioned problems
the shell script must run the daemon without letting it daemonize/fork into the background.
All executable scripts in /etc/my_init.d, if this directory exists. The scripts are run in lexicographic order.
variables will also be passed to all child processes
Environment variables on Unix are inherited on a per-process basis
there is no good central place for defining environment variables for all applications and services
centrally defining environment variables
One of the ideas behind Docker is that containers should be stateless, easily restartable, and behave like a black box.
a one-shot command in a new container
immediately exit after the command exits,
However the downside of this approach is that the init system is not started. That is, while invoking COMMAND, important daemons such as cron and syslog are not running. Also, orphaned child processes are not properly reaped, because COMMAND is PID 1.
add additional daemons (e.g. your own app) to the image by creating runit entries.
Nginx is one such example: it removes all environment variables unless you explicitly instruct it to retain them through the env configuration option.
Mechanisms for easily running multiple processes, without violating the Docker philosophy
Ubuntu is not designed to be run inside Docker
According to the Unix process model, the init process -- PID 1 -- inherits all orphaned child processes and must reap them
Syslog-ng seems to be much more stable
cron daemon
Rotates and compresses logs
/sbin/setuser
A tool for installing apt packages that automatically cleans up after itself.
a single logical service inside a single container
A daemon is a program which runs in the background of its system, such as a web server.
The shell script must be called run, must be executable, and is to be placed in the directory /etc/service/<NAME>. runsv will switch to the directory and invoke ./run after your container starts.
If any script exits with a non-zero exit code, the booting will fail.
If your process is started with a shell script, make sure you exec the actual process, otherwise the shell will receive the signal and not your process.
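A sketch of such a run script (myapp and its --foreground flag are placeholders for your own daemon and whatever option keeps it from daemonizing):

#!/bin/sh
# saved as /etc/service/myapp/run and made executable
exec /usr/local/bin/myapp --foreground   # exec replaces the shell so myapp receives signals; it must not fork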
any environment variables set with docker run --env or with the ENV command in the Dockerfile, will be picked up by my_init
not possible for a child process to change the environment variables of other processes
they will not see the environment variables that were originally passed by Docker.
We ignore HOME, SHELL, USER and a bunch of other environment variables on purpose, because not ignoring them will break multi-user containers.
my_init imports environment variables from the directory /etc/container_environment
/etc/container_environment.sh - a dump of the environment variables in Bash format.
modify the environment variables in my_init (and therefore the environment variables in all child processes that are spawned after that point in time), by altering the files in /etc/container_environment
my_init only activates changes in /etc/container_environment when running startup scripts
If the environment variables don't contain sensitive data, then you can also relax the permissions.
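A sketch of how a startup script might set such a variable (APP_ENV is an illustrative name):

echo "production" > /etc/container_environment/APP_ENV   # the file name becomes the variable name, its content the value
chmod 600 /etc/container_environment/APP_ENV             # optionally tighten permissions for sensitive values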
Syslog messages are forwarded to the console
syslog-ng is started separately before the runit supervisor process, and shutdown after runit exits.
RUN apt-get update && apt-get upgrade -y -o Dpkg::Options::="--force-confold"
/sbin/my_init --skip-startup-files --quiet --
By default, no keys are installed, so nobody can login
provide a pregenerated, insecure key (PuTTY format)
RUN /usr/sbin/enable_insecure_key
docker run YOUR_IMAGE /sbin/my_init --enable-insecure-key
RUN cat /tmp/your_key.pub >> /root/.ssh/authorized_keys && rm -f /tmp/your_key.pub
The default baseimage-docker installs syslog-ng, cron and sshd services during the build process
Most processes that you start on a Linux machine will run in the foreground. The command will begin execution, blocking use of the shell for the duration of the process.
By default, processes are started in the foreground. Until the program exits or changes state, you will not be able to interact with the shell.
Linux terminals are usually configured to send the "SIGINT" signal (typically signal number 2) to current foreground process when the CTRL-C key combination is pressed.
Another signal that we can send is the "SIGTSTP" signal (typically signal number 20).
A background process is associated with the specific terminal that started it, but does not block access to the shell
start a background process by appending an ampersand character ("&") to the end of your commands.
allowing you to continue typing commands at the same time.
The [1] represents the command's "job spec" or job number. We can reference this with other job and process control commands, like kill, fg, and bg by preceding the job number with a percentage sign. In this case, we'd reference this job as %1.
Once the process is stopped, we can use the bg command to start it again in the background
By default, the bg command operates on the most recently stopped process.
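A quick walkthrough of these job control commands:

sleep 300 &    # start in the background; the shell prints a job spec like [1] 12345
jobs           # list the current jobs and their numbers
fg %1          # bring job 1 back to the foreground
# press CTRL-Z to send SIGTSTP and stop it, then:
bg %1          # resume the stopped job in the background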
Whether a process is in the background or in the foreground, it is rather tightly tied with the terminal instance that started it
When a terminal closes, it typically sends a SIGHUP signal to all of the processes (foreground, background, or stopped) that are tied to the terminal.
a terminal multiplexer
start it using the nohup command
appending output to ‘nohup.out’
pgrep -a
The disown command, in its default configuration, removes a job from the jobs queue of a terminal.
You can pass the -h flag to the disown process instead in order to mark the process to ignore SIGHUP signals, but to otherwise continue on as a regular job
The huponexit shell option controls whether bash will send its child processes the SIGHUP signal when it exits.
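A sketch of the options for keeping a job alive past the terminal (long_running_task is a placeholder):

nohup ./long_running_task &   # option 1: immune to SIGHUP; output appended to nohup.out
./long_running_task &         # option 2: start normally...
disown -h %+                  # ...then mark the most recent job so SIGHUP is not delivered to it
shopt huponexit               # show whether this bash sends SIGHUP to its children when it exits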
the utility used a simple recursive descent parser without backtracking, which gave unary operators precedence over binary operators and ignored trailing arguments.
The x-hack is effective because no unary operators can start with x.
the x-hack could be used to work around certain bugs all the way up until 2015, seven years after StackOverflow wrote it off as an archaic relic of the past!
The Dash issue of [ "(" = ")" ] was originally reported in a form that affected both Bash 3.2.48 and Dash 0.5.4 in 2008. You can still see this on macOS bash today
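A sketch of the hack itself:

# prefix both sides with x so neither can be mistaken for an operator or option
if [ "x$var" = "x-e" ]; then echo "var is -e"; fi
# in modern shells, quoting alone is enough: [ "$var" = "-e" ]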
(Recommended) If you have plans to upgrade this single control-plane kubeadm cluster to high availability you should specify the --control-plane-endpoint to set the shared endpoint for all control-plane nodes.
Set the --pod-network-cidr to a provider-specific value.
kubeadm tries to detect the container runtime by using a list of well known endpoints.
kubeadm uses the network interface associated with the default gateway to set the advertise address for this particular control-plane node's API server.
To use a different network interface, specify the --apiserver-advertise-address=<ip-address> argument to kubeadm init.
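Putting those flags together, a hedged sketch (the endpoint, CIDR, and address are illustrative and depend on your environment):

kubeadm init \
  --control-plane-endpoint=cluster-endpoint.example.com \
  --pod-network-cidr=192.168.0.0/16 \
  --apiserver-advertise-address=10.0.0.10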
Do not share the admin.conf file with anyone and instead grant users custom permissions by generating them a kubeconfig file using the kubeadm kubeconfig user command.
The token is used for mutual authentication between the control-plane node and the joining nodes. The token included here is secret. Keep it safe, because anyone with this token can add authenticated nodes to your cluster.
You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other.
Cluster DNS (CoreDNS) will not start up before a network is installed.
Take care that your Pod network must not overlap with any of the host networks.
Make sure that your Pod network plugin supports RBAC, and so do any manifests that you use to deploy it.
You can install only one Pod network per cluster.
The cluster created here has a single control-plane node, with a single etcd database running on it.
The node-role.kubernetes.io/control-plane label is such a restricted label and kubeadm manually applies it using a privileged client after a node has been created.
By default, your cluster will not schedule Pods on the control plane nodes for security reasons.
Remove the node-role.kubernetes.io/control-plane:NoSchedule taint from any nodes that have it, including the control plane nodes, meaning that the scheduler will then be able to schedule Pods everywhere.
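Something like the following should do it (the trailing minus removes the taint from all nodes):

kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule-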
"testssl.sh is a free command line tool which checks a server's service on any port for the support of TLS/SSL ciphers, protocols as well as recent cryptographic flaws and more.
"
CMD instruction should be used to run the software contained by your image, along with any arguments.
CMD should be given an interactive shell (bash, python, perl, etc)
COPY them individually, rather than all at once
COPY is preferred
using ADD to fetch packages from remote URLs is strongly discouraged
always use COPY
The best use for ENTRYPOINT is to set the image's main command, allowing that image to be run as though it was that command (and then use CMD as the default flags).
the image name can double as a reference to the binary as shown in the command above
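A sketch of what that looks like from the user's side (mytool is an illustrative image whose ENTRYPOINT is the binary and whose CMD holds the default flags):

docker run mytool             # runs with the default flags from CMD, e.g. --help
docker run mytool --version   # arguments after the image name replace CMD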
ENTRYPOINT instruction can also be used in combination with a helper script
The VOLUME instruction should be used to expose any database storage area, configuration storage, or files/folders created by your docker container.
use USER to change to a non-root user
avoid installing or using sudo
avoid switching USER back and forth frequently.
always use absolute paths for your WORKDIR
ONBUILD is only useful for images that are going to be built FROM a given image
The “onbuild” image will fail catastrophically if the new build's context is missing the resource being added.