every Kubernetes operation is exposed as an API endpoint and can be executed by an HTTP request to this endpoint.
the main job of kubectl is to carry out HTTP requests to the Kubernetes API
Kubernetes maintains an internal state of resources, and all Kubernetes operations are CRUD operations on these resources.
Kubernetes is a fully resource-centred system
Kubernetes API reference is organised as a list of resource types with their associated operations.
This is how kubectl works for all commands that interact with the Kubernetes cluster.
kubectl simply makes HTTP requests to the appropriate Kubernetes API endpoints.
it's totally possible to control Kubernetes with a tool like curl by manually issuing HTTP requests to the Kubernetes API.
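For example, a minimal sketch (using kubectl proxy to handle authentication; the path below is the standard core/v1 Pod list endpoint):

    # let kubectl proxy authenticate against the API server for us
    kubectl proxy --port=8080 &
    # list all Pods in the default namespace with a plain HTTP request
    curl http://localhost:8080/api/v1/namespaces/default/pods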
Kubernetes consists of a set of independent components that run as separate processes on the nodes of a cluster.
components on the master nodes
Storage backend: stores resource definitions (usually etcd is used)
API server: provides Kubernetes API and manages storage backend
Controller manager: ensures resource statuses match specifications
Scheduler: schedules Pods to worker nodes
component on the worker nodes
Kubelet: manages execution of containers on a worker node
triggers the ReplicaSet controller, which is a sub-process of the controller manager.
the scheduler, which watches for Pod definitions that are not yet scheduled to a worker node.
creating and updating resources in the storage backend on the master node.
The kubelet of the worker node your ReplicaSet Pods have been scheduled to instructs the configured container runtime (which may be Docker) to download the required container images and run the containers.
Kubernetes components (except the API server and the storage backend) work by watching for resource changes in the storage backend and manipulating resources in the storage backend.
However, these components do not access the storage backend directly, but only through the Kubernetes API.
double usage of the Kubernetes API for internal components as well as for external users is a fundamental design concept of Kubernetes.
All other Kubernetes components and users read, watch, and manipulate the state (i.e. resources) of Kubernetes through the Kubernetes API
The storage backend stores the state (i.e. resources) of Kubernetes.
command completion is a shell feature that works by means of a completion script.
A completion script is a shell script that defines the completion behaviour for a specific command. Sourcing a completion script enables completion for the corresponding command.
kubectl completion zsh
/etc/bash_completion.d directory (create it if it doesn't exist)
source <(kubectl completion bash)
source <(kubectl completion zsh)
autoload -Uz compinit
compinit
the API reference, which contains the full specifications of all resources.
kubectl api-resources
displays the resource names in their plural form (e.g. deployments instead of deployment). It also displays the shortname (e.g. deploy) for those resources that have one. Don't worry about these differences. All of these name variants are equivalent for kubectl.
.spec
custom columns output format comes in. It lets you freely define the columns and the data to display in them. You can choose any field of a resource to be displayed as a separate column in the output
kubectl get pods -o custom-columns='NAME:metadata.name,NODE:spec.nodeName'
kubectl explain pod.spec.
kubectl explain pod.metadata.
browse the resource specifications and try it out with any fields you like!
JSONPath is a language to extract data from JSON documents (it is similar to XPath for XML).
with kubectl explain, only a subset of the JSONPath capabilities is supported
Many fields of Kubernetes resources are lists, and the [] subscript operator allows you to select items of these lists. It is often used with a wildcard as [*] to select all items of the list.
kubectl get pods -o custom-columns='NAME:metadata.name,IMAGES:spec.containers[*].image'
a Pod may contain more than one container.
The availability zones for each node are obtained through the special failure-domain.beta.kubernetes.io/zone label.
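For example (the dots inside the label key have to be escaped with backslashes so they are not interpreted as JSONPath field separators):

    kubectl get nodes -o custom-columns='NODE:metadata.name,ZONE:metadata.labels.failure-domain\.beta\.kubernetes\.io/zone'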
kubectl get nodes -o yaml
kubectl get nodes -o json
The default kubeconfig file is ~/.kube/config
with multiple clusters, then you have connection parameters for multiple clusters configured in your kubeconfig file.
Within a cluster, you can set up multiple namespaces (a namespace is a kind of "virtual" cluster within a physical cluster)
override the default kubeconfig file with the --kubeconfig option for every kubectl command.
Namespace: the namespace to use when connecting to the cluster
a one-to-one mapping between clusters and contexts.
When kubectl reads a kubeconfig file, it always uses the information from the current context.
just change the current context in the kubeconfig file
to switch to another namespace in the same cluster, you can change the value of the namespace element of the current context
kubectl also provides the --cluster, --user, --namespace, and --context options that allow you to override individual elements and the current context itself, regardless of what is set in the kubeconfig file.
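For example (context and namespace names are placeholders):

    # switch the current context
    kubectl config use-context my-context
    # point the current context at another namespace
    kubectl config set-context --current --namespace=my-namespace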
a popular tool for switching between clusters and namespaces is kubectx.
kubectl config get-contexts
just have to download the shell scripts named kubectl-ctx and kubectl-ns to any directory in your PATH and make them executable (for example, with chmod +x)
kubectl proxy
kubectl get roles
kubectl get pod
Kubectl plugins are distributed as simple executable files with a name of the form kubectl-x. The prefix kubectl- is mandatory,
To install a plugin, you just have to copy the kubectl-x file to any directory in your PATH and make it executable (for example, with chmod +x)
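A minimal sketch (the plugin name kubectl-hello is made up for illustration):

    #!/bin/bash
    # file: kubectl-hello, placed somewhere in PATH and made executable
    echo "hello from a kubectl plugin"

After chmod +x, kubectl discovers the file automatically and it can be invoked as kubectl hello.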
krew itself is a kubectl plugin
check out the kubectl-plugins GitHub topic
The executable can be of any type: a Bash script, a compiled Go program, a Python script; it really doesn't matter. The only requirement is that it can be directly executed by the operating system.
kubectl plugins can be written in any programming or scripting language.
you can write more sophisticated plugins with real programming languages, for example, using a Kubernetes client library. If you use Go, you can also use the cli-runtime library, which exists specifically for writing kubectl plugins.
a kubeconfig file consists of a set of contexts
changing the current context means changing the cluster, if you have only a single context per cluster.
In both the required_version and required_providers settings, each override
constraint entirely replaces the constraints for the same component in the
original block.
If both the base block and the override block set required_version, then the constraints in the base block are entirely ignored.
Terraform normally loads all of the .tf and .tf.json files within a
directory and expects each one to define a distinct set of configuration
objects.
If two files attempt to define the same object, Terraform returns
an error.
a
human-edited configuration file in the Terraform language native syntax
could be partially overridden using a programmatically-generated file
in JSON syntax.
Terraform has special handling of any configuration
file whose name ends in _override.tf or _override.tf.json
Terraform initially skips these override files when loading configuration,
and then afterwards processes each one in turn (in lexicographical order).
Terraform then merges the override block contents into the existing object.
Over-use of override files
hurts readability, since a reader looking only at the original files cannot
easily see that some portions of those files have been overridden without
consulting all of the override files that are present.
When using override
files, use comments in the original files to warn future readers about which
override files apply changes to each block.
A top-level block in an override file merges with a block in a normal
configuration file that has the same block header.
Within a top-level block, an attribute argument within an override block
replaces any argument of the same name in the original block.
Within a top-level block, any nested blocks within an override block replace
all blocks of the same type in the original block.
The contents of nested configuration blocks are not merged.
If more than one override file defines the same top-level block, the overriding effect is compounded, with later blocks taking precedence over earlier blocks.
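A sketch of how this merging plays out (resource names and values are made up):

    # main.tf
    resource "aws_instance" "web" {
      ami           = "ami-12345"
      instance_type = "t2.micro"
    }

    # web_override.tf
    resource "aws_instance" "web" {
      instance_type = "m5.large"
    }

The effective configuration keeps ami from the original block but uses the overridden instance_type.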
The settings within terraform blocks are considered individually when
merging.
If the required_providers argument is set, its value is merged on an
element-by-element basis, which allows an override block to adjust the
constraint for a single provider without affecting the constraints for
other providers.
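For example (version constraints are illustrative):

    # versions.tf
    terraform {
      required_providers {
        aws  = "~> 3.0"
        null = "~> 2.0"
      }
    }

    # versions_override.tf
    terraform {
      required_providers {
        aws = "3.42.0"
      }
    }

Only the aws constraint is replaced; the null constraint is left untouched.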
"In both the required_version and required_providers settings, each override constraint entirely replaces the constraints for the same component in the original block. "
for a lot of people, the name “Docker” itself is synonymous with the word “container”.
Docker created a very ergonomic (nice-to-use) tool for working with containers – also called docker.
docker is designed to be installed on a workstation or server and comes with a bunch of tools to make it easy to build and run containers as a developer, or DevOps person.
containerd: This is a daemon process that manages and runs containers.
runc: This is the low-level container runtime (the thing that actually creates and runs containers).
libcontainer, a native Go-based implementation for creating containers.
Kubernetes includes a component called dockershim, which allows it to support Docker.
Kubernetes prefers to run containers through any container runtime which supports its Container Runtime Interface (CRI).
Kubernetes will remove support for Docker directly, and prefer to use only container runtimes that implement its Container Runtime Interface.
Both containerd and CRI-O can run Docker-formatted (actually OCI-formatted) images; they just do it without having to use the docker command or the Docker daemon.
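For example, containerd's bundled ctr CLI can pull and run an image directly (a rough sketch; image references must be fully qualified):

    ctr images pull docker.io/library/nginx:latest
    ctr run --rm docker.io/library/nginx:latest nginx-test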
Docker images are actually images packaged in the Open Container Initiative (OCI) format.
CRI is the API that Kubernetes uses to control the different runtimes that create and manage containers.
CRI makes it easier for Kubernetes to use different container runtimes
containerd is a high-level container runtime that came from Docker, and implements the CRI spec
containerd was separated out of the Docker project, to make Docker more modular.
CRI-O is another high-level container runtime which implements the Container Runtime Interface (CRI).
The idea behind the OCI is that you can choose between different runtimes which conform to the spec.
runc is an OCI-compatible container runtime.
A reference implementation is a piece of software that has implemented all the requirements of a specification or standard.
runc provides all of the low-level functionality for containers, interacting with existing low-level Linux features, like namespaces and control groups.
deployment.yaml: A basic manifest for creating a Kubernetes deployment
using the suffix .yaml for YAML files and .tpl for helpers.
It is just fine to put a plain YAML file like this in the templates/ directory.
helm get manifest
The helm get manifest command takes a release name (full-coral) and prints
out all of the Kubernetes resources that were uploaded to the server. Each file
begins with --- to indicate the start of a YAML document
Names should be unique to a release
The name: field is limited to 63 characters because of limitations to
the DNS system.
release names are limited to 53 characters
{{ .Release.Name }}
A template directive is enclosed in {{ and }} blocks.
The values that are passed into a template can be thought of as namespaced objects, where a dot (.) separates each namespaced element.
The leading dot before Release indicates that we start with the top-most namespace for this scope
The Release object is one of the built-in objects for Helm
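For example, a minimal template along the lines of the docs' ConfigMap:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: {{ .Release.Name }}-configmap
    data:
      myvalue: "Hello World"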
When you want to test the template rendering, but not actually install anything, you can use helm install ./mychart --debug --dry-run
Using --dry-run will make it easier to test your code, but it won’t ensure that Kubernetes itself will accept the templates you generate.
Objects are passed into a template from the template engine.
create new objects within your templates
Objects can be simple, and have just one value. Or they can contain other objects or functions.
Release is one of the top-level objects that you can access in your templates.
Release.Namespace: The namespace to be released into (if the manifest doesn’t override)
Values: Values passed into the template from the values.yaml file and from user-supplied files. By default, Values is empty.
Chart: The contents of the Chart.yaml file.
Files: This provides access to all non-special files in a chart.
Files.Get is a function for getting a file by name
Files.GetBytes is a function for getting the contents of a file as an array of bytes instead of as a string. This is useful for things like images.
Template: Contains information about the current template that is being executed
BasePath: The namespaced path to the templates directory of the current chart
The built-in values always begin with a capital letter.
Go’s naming convention
use only initial lower case letters in order to distinguish local names from those built-in.
If this is a subchart, the values.yaml file of a parent chart
Individual parameters passed with --set
values.yaml is the default, which can be overridden by a parent chart’s values.yaml, which can in turn be overridden by a user-supplied values file, which can in turn be overridden by --set parameters.
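A sketch of that chain (the favoriteDrink key is made up):

    # values.yaml (chart default)
    favoriteDrink: coffee

    # a user-supplied file overrides the chart default
    helm install -f my-values.yaml ./mychart

    # and an individual --set overrides everything above
    helm install --set favoriteDrink=slushy ./mychart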
While structuring data this way is possible, the recommendation is that you keep your values trees shallow, favoring flatness.
If you need to delete a key from the default values, you may override the value of the key to be null, in which case Helm will remove the key from the overridden values merge.
Kubernetes would then fail because you cannot declare more than one livenessProbe handler.
When injecting strings from the .Values object into the template, we ought to quote these strings.
quote
Template functions follow the syntax functionName arg1 arg2...
While we talk about the “Helm template language” as if it is Helm-specific, it is actually a combination of the Go template language, some extra functions, and a variety of wrappers to expose certain objects to the templates.
Drawing on a concept from UNIX, pipelines are a tool for chaining together a series of template commands to compactly express a series of transformations.
pipelines are an efficient way of getting several things done in sequence
The repeat function will echo the given string the given number of times
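For example (assuming a favorite.drink key in values.yaml):

    drink: {{ .Values.favorite.drink | repeat 5 | quote }}
    # with drink: coffee this renders as: drink: "coffeecoffeecoffeecoffeecoffee"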
default DEFAULT_VALUE GIVEN_VALUE. This function allows you to specify a default value inside of the template, in case the value is omitted.
all static default values should live in the values.yaml, and should not be repeated using the default command
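For example:

    drink: {{ .Values.favorite.drink | default "tea" | quote }}
    # renders "tea" only when .Values.favorite.drink is unset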
Operators are implemented as functions that return a boolean value.
To use eq, ne, lt, gt, and, or, not, etc., place the operator at the front of the statement followed by its parameters, just as you would a function.
if and
if or
with to specify a scope
range, which provides a “for each”-style loop
block declares a special kind of fillable template area
A pipeline is evaluated as false if the value is:
a boolean false
a numeric zero
an empty string
a nil (empty or null)
an empty collection (map, slice, tuple, dict, array)
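For example, a conditional that only emits a line for one particular value (assuming the favorite.drink key again):

    {{ if eq .Values.favorite.drink "coffee" }}mug: true{{ end }}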
incorrect YAML because of the whitespacing
When the template engine runs, it removes the contents inside of {{ and }}, but it leaves the remaining whitespace exactly as is.
{{- (with the dash and space added) indicates that whitespace should be chomped left, while -}} means whitespace to the right should be consumed.
Newlines are whitespace!
an * at the end of the line indicates a newline character that would be removed
Be careful with the chomping modifiers.
the indent function
Scopes can be changed. with can allow you to set the current scope (.) to a particular object.
Inside of the restricted scope, you will not be able to access the other objects from the parent scope.
range
The range function will “range over” (iterate through) the pizzaToppings list.
Just as with sets the scope of ., so does a range operator: on each iteration, . is set to the current item.
The toppings: |- line is declaring a multi-line string.
not a YAML list. It’s a big string.
the data in a ConfigMap's data section is composed of key/value pairs, where both the key and the value are simple strings.
The |- marker in YAML takes a multi-line string.
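For example (assuming a pizzaToppings list in values.yaml):

    toppings: |-
      {{- range .Values.pizzaToppings }}
      - {{ . | title | quote }}
      {{- end }}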
range can be used to iterate over collections that have a key and a value (like a map or dict).
In Helm templates, a variable is a named reference to another object. It follows the form $name
Variables are assigned with a special assignment operator: :=
{{- $relname := .Release.Name -}}
capture both the index and the value
the integer index (starting from zero) to $index and the value to $topping
For data structures that have both a key and a value, we can use range to get both
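Sketches of both forms (pizzaToppings and favorite are assumed values):

    # integer index plus value, over a list
    {{- range $index, $topping := .Values.pizzaToppings }}
      {{ $index }}: {{ $topping }}
    {{- end }}

    # key plus value, over a map
    {{- range $key, $value := .Values.favorite }}
      {{ $key }}: {{ $value | quote }}
    {{- end }}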
Variables are normally not “global”. They are scoped to the block in which they are declared.
one variable that is always global - $ - this variable will always point to the root context.
e.g. {{ $.Release.Name }} resolves from the root context even inside a range or with block.
A powerful feature of the Helm template language is its ability to declare multiple templates and use them together.
A named template (sometimes called a partial or a subtemplate) is simply a template defined inside of a file, and given a name.
an important detail to keep in mind when naming templates: template names are global.
If you declare two templates with the same name, whichever one is loaded last will be the one used.
you should be careful to name your templates with chart-specific names.
templates in subcharts are compiled together with top-level templates
naming convention is to prefix each defined template with the name of the chart: {{ define "mychart.labels" }}
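A minimal sketch of defining and then consuming such a template (nindent is a Sprig function that re-indents the included block):

    {{- define "mychart.labels" -}}
    app: {{ .Chart.Name }}
    generator: helm
    {{- end -}}

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: {{ .Release.Name }}-configmap
      labels:
        {{- include "mychart.labels" . | nindent 4 }}

Passing . to include hands the current scope to the named template so that .Chart.Name resolves.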
Helm has over 60 available functions. Some of them are defined by the
Go
template language itself. Most of the others
are part of the
Sprig template library
the "Helm template language" as if it is Helm-specific, it
is actually a combination of the Go template language, some extra functions,
and a variety of wrappers to expose certain objects to the templates.
Drawing on a concept from UNIX, pipelines are a tool for chaining
together a series of template commands to compactly express a series of
transformations.
the default command is perfect for computed values, which cannot be declared inside values.yaml.
When lookup returns an object, it will return a dictionary.
The synopsis of the lookup function is lookup apiVersion, kind, namespace, name -> resource or resource list
When no object is found, an empty value is returned. This can be used to check
for the existence of an object.
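For example (namespace and name are placeholders):

    {{- $cm := lookup "v1" "ConfigMap" "default" "my-config" }}
    {{- if $cm }}
    configExists: true
    {{- end }}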
The lookup function uses Helm's existing Kubernetes connection configuration
to query Kubernetes.
Helm is not supposed to contact the Kubernetes API Server
during a helm template or a helm install|update|delete|rollback --dry-run,
so the lookup function will return an empty list (i.e. dict) in such a case.
the operators (eq, ne, lt, gt, and, or and so on) are
all implemented as functions. In pipelines, operations can be grouped with
parentheses ((, and )).
triggers - A map of values which should cause this set of provisioners to
re-run. Values are meant to be interpolated references to variables or
attributes of other resources.
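A sketch along the lines of the docs (the resource addresses are illustrative):

    resource "null_resource" "cluster" {
      # re-run the provisioner whenever the set of instance ids changes
      triggers = {
        cluster_instance_ids = join(",", aws_instance.cluster[*].id)
      }

      provisioner "local-exec" {
        command = "echo cluster membership changed"
      }
    }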
"triggers - A map of values which should cause this set of provisioners to re-run. Values are meant to be interpolated references to variables or attributes of other resources.
"
"MessagePack is an efficient binary serialization format. It lets you exchange data among multiple languages like JSON. But it's faster and smaller. Small integers are encoded into a single byte, and typical short strings require only one extra byte in addition to the strings themselves."
"Riemann aggregates events from your servers and applications with a powerful stream processing language. Send an email for every exception in your app. Track the latency distribution of your web app. See the top processes on any host, by memory and CPU. Combine statistics from every Riak node in your cluster and forward to Graphite. Track user activity from second to second."
Gobot is a set of libraries in the Go programming language for robotics and physical computing.
It provides a simple, yet powerful way to create solutions that incorporate multiple, different hardware devices at the same time.
Want to use Ruby on robots? Check out our sister project Artoo (http://artoo.io).
Want to use Node.js? Check out our sister project Cylon (http://cylonjs.com).
"Ledger is a powerful, double-entry accounting system that is accessed from the UNIX command-line. Ledger, begun in 2003, is written by John Wiegley and released under the BSD license. It has also inspired several ports to other languages."
"String API. It's difficult and obtuse, and people often wish it were more like string APIs in other languages. Today, I'm going to explain just why Swift's String API is designed the way it is (or at least, why I think it is) and why I ultimately think it's the best string API out there in terms of its fundamental design."
view templates are written in a language called ERB (Embedded Ruby) which is converted by the request cycle in Rails before being sent to the user.
Each action's purpose is to collect information to provide it to a view.
A view's purpose is to display this information in a human readable format.
routing file which holds entries in a special DSL (domain-specific language) that tells Rails how to connect incoming requests to controllers and actions.
You can create, read, update and destroy items for a resource and these operations are referred to as CRUD operations
A controller is simply a class that is defined to inherit from ApplicationController.
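For example:

    class PostsController < ApplicationController
      def new
      end
    end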
If not found, then it will attempt to load a template called application/new. It looks for one here because the PostsController inherits from ApplicationController
:formats specifies the format of template to be served in response. The default format is :html, and so Rails is looking for an HTML template.
:handlers, is telling us what template handlers could be used to render our template.
When you call form_for, you pass it an identifying object for this
form. In this case, it's the symbol :post. This tells the form_for
helper what this form is for.
note that the action attribute for the form is pointing at /posts/new
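A sketch of such a form, with the url option pointing it at the create action instead (as the guide goes on to do):

    <%= form_for :post, url: posts_path do |f| %>
      <p><%= f.text_field :title %></p>
      <p><%= f.submit %></p>
    <% end %>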
When a form is submitted, the fields of the form are sent to Rails as parameters.
parameters can then be referenced inside the controller actions, typically to perform a particular task
params method is the object which represents the parameters (or fields) coming in from the form.
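For example, a create action reading those parameters (using the require/permit pattern covered further down):

    def create
      @post = Post.new(post_params)
      if @post.save
        redirect_to @post
      else
        render 'new'
      end
    end

    private

    def post_params
      params.require(:post).permit(:title, :text)
    end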
Active Record is smart enough to automatically map column names to
model attributes,
Rails uses rake commands to run migrations,
and it's possible to undo a migration after it's been applied to your database
every Rails model can be initialized with its
respective attributes, which are automatically mapped to the respective
database columns.
migration creates a method named change which will be called when you
run this migration.
The action defined in this method is also reversible, which
means Rails knows how to reverse the change made by this migration, in case you
want to reverse it later
Migration filenames include a timestamp to ensure that they're processed in the
order that they were created.
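Roughly what such a generated migration looks like:

    class CreatePosts < ActiveRecord::Migration
      def change
        create_table :posts do |t|
          t.string :title
          t.text :text

          t.timestamps
        end
      end
    end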
@post.save returns a boolean indicating
whether the model was saved or not.
prevents an attacker from
setting the model's attributes by manipulating the hash passed to the model.
If you want to link to an action in the same controller, you don't
need to specify the :controller option, as Rails will use the current
controller by default.
inherits from
ActiveRecord::Base
Active Record supplies a great deal of functionality to
your Rails models for free, including basic database CRUD (Create, Read, Update,
Destroy) operations, data validation, as well as sophisticated search support
and the ability to relate multiple models to one another.
Rails includes methods to help you validate the data that you send to models
Rails can validate a variety of conditions in a model,
including the presence or uniqueness of columns, their format, and the
existence of associated objects.
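For example:

    class Post < ActiveRecord::Base
      validates :title, presence: true,
                        length: { minimum: 5 }
    end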
redirect_to will tell the browser to issue another request.
rendering is done within the same request as the form submission
Each request for a
comment has to keep track of the post to which the comment is attached, thus the
initial call to the find method of the Post model to get the post in question.
pluralize is a rails helper that takes a number and a string as its
arguments. If the number is greater than one, the string will be automatically pluralized.
The render method is used so that the @post object is passed back to the new template when it is rendered.
The method: :patch option tells Rails that we want this form to be submitted
via the PATCH HTTP method which is the HTTP method you're expected to use to
update resources according to the REST protocol.
it accepts a hash containing the attributes
that you want to update.
field_with_errors. You can define a CSS rule to make them stand out
belongs_to :post, which sets up an Active Record association
creates comments as a nested resource within posts
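A sketch of the two pieces, the model association plus the nested route:

    # app/models/comment.rb
    class Comment < ActiveRecord::Base
      belongs_to :post
    end

    # config/routes.rb
    resources :posts do
      resources :comments
    end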
call destroy on Active Record objects when you want to delete
them from the database.
Rails allows you to
use the dependent option of an association to achieve this.
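For example:

    class Post < ActiveRecord::Base
      has_many :comments, dependent: :destroy
    end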
store all external data as UTF-8
you're better off
ensuring that all external data is UTF-8
use UTF-8 as the internal storage of your database
Rails defaults to converting data from your database into UTF-8 at
the boundary.
:patch
By default forms built with the form_for helper are sent via POST
The :method and :'data-confirm'
options are used as HTML5 attributes so that when the link is clicked,
Rails will first show a confirm dialog to the user, and then submit the link with method delete.
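The guide's destroy link looks roughly like this:

    <%= link_to 'Destroy', post_path(post),
                method: :delete,
                data: { confirm: 'Are you sure?' } %>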
This is done via the JavaScript file jquery_ujs which is automatically included
into your application's layout (app/views/layouts/application.html.erb) when you
generated the application.
Without this file, the confirmation dialog box wouldn't appear.
just defines the partial template we want to render
As the render
method iterates over the @post.comments collection, it assigns each
comment to
a local variable named the same as the partial
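For example (the body attribute is assumed from the guide's Comment model):

    <%# app/views/posts/show.html.erb %>
    <%= render @post.comments %>

    <%# app/views/comments/_comment.html.erb %>
    <p><%= comment.body %></p>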
use the authentication system
require and permit
the method is often made private to make sure
it can't be called outside its intended context.
standard CRUD actions in each
controller in the following order: index, show, new, edit, create, update
and destroy.
these actions must be placed before any private or protected method in the controller in order to work