Infrastructure as code is at the heart of provisioning for cloud infrastructure, marking a significant shift away from monolithic, point-and-click management tools.
infrastructure as code enables operators to take a programmatic approach to provisioning.
Terraform provides a single workflow to provision and maintain infrastructure and services from all of your vendors, which also makes it easier to switch providers.
A Terraform provider is responsible for understanding API interactions with, and exposing the resources of, a given infrastructure, platform, or SaaS offering to Terraform.
write a Terraform file that describes the virtual machine you want, apply that file with Terraform, and the VM is created as described, without ever needing to log into the vSphere dashboard.
HashiCorp Configuration Language (HCL)
the provider credentials are passed in at the top of the script to connect to the vSphere account.
modules: a way to encapsulate infrastructure resources into a reusable format.
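A minimal sketch of what such a configuration might look like (the provider arguments are standard vsphere provider settings, but the variable names and module source are illustrative):

provider "vsphere" {
  user           = var.vsphere_user
  password       = var.vsphere_password
  vsphere_server = var.vsphere_server
}

# A hypothetical module wrapping a virtual machine definition.
module "web_vm" {
  source = "./modules/vm"
  name   = "web-01"
  cpus   = 2
  memory = 4096
}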
A machine image is a single static unit that contains a pre-configured
operating system and installed software which is used to quickly create new
running machines.
"A machine image is a single static unit that contains a pre-configured operating system and installed software which is used to quickly create new running machines."
The merged configuration is stored on disk in the .terraform directory, which should be excluded from version control.
When using partial configuration, Terraform requires at a minimum an empty backend block in one of the root Terraform configuration files to specify the backend type.
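For example, the root module can declare only the backend type and receive the remaining settings at init time (the bucket, key, and region values here are placeholders):

terraform {
  backend "s3" {}
}

terraform init \
  -backend-config="bucket=my-state-bucket" \
  -backend-config="key=prod/terraform.tfstate" \
  -backend-config="region=us-east-1"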
User variables are available globally within the rest
of the template.
The env function is available only within the default value of a user
variable, allowing you to default a user variable to an environment variable.
As Packer doesn't run
inside a shell, it won't expand ~
To set user variables from the command line, the -var flag is used as a
parameter to packer build (and some other commands).
Variables can also be set from an external JSON file. The -var-file flag
reads a file containing a key/value mapping of variables to values and sets
those variables.
-var-file=
sensitive variables can be kept out of the logs by adding them to the "sensitive-variables" list within the Packer template.
every Kubernetes operation is exposed as an API endpoint and can be executed by an HTTP request to this endpoint.
the main job of kubectl is to carry out HTTP requests to the Kubernetes API
Kubernetes maintains an internal state of resources, and all Kubernetes operations are CRUD operations on these resources.
Kubernetes is a fully resource-centred system
Kubernetes API reference is organised as a list of resource types with their associated operations.
This is how kubectl works for all commands that interact with the Kubernetes cluster.
kubectl simply makes HTTP requests to the appropriate Kubernetes API endpoints.
it's totally possible to control Kubernetes with a tool like curl by manually issuing HTTP requests to the Kubernetes API.
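For example, with kubectl proxy handling authentication to the API server, plain HTTP requests work (the namespace is illustrative):

kubectl proxy --port=8080 &
curl http://localhost:8080/api/v1/namespaces/default/pods
curl http://localhost:8080/apis/apps/v1/namespaces/default/deployments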
Kubernetes consists of a set of independent components that run as separate processes on the nodes of a cluster.
components on the master nodes
Storage backend: stores resource definitions (usually etcd is used)
API server: provides Kubernetes API and manages storage backend
Controller manager: ensures resource statuses match specifications
Scheduler: schedules Pods to worker nodes
component on the worker nodes
Kubelet: manages execution of containers on a worker node
Creating a ReplicaSet resource triggers the ReplicaSet controller, which is a sub-process of the controller manager.
the scheduler, which watches for Pod definitions that are not yet scheduled to a worker node.
creating and updating resources in the storage backend on the master node.
The kubelet of the worker node your ReplicaSet Pods have been scheduled to instructs the configured container runtime (which may be Docker) to download the required container images and run the containers.
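For instance, applying a minimal ReplicaSet manifest like the one below (names and image are illustrative) sets this whole chain in motion:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: demo-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: nginx:1.25

kubectl apply -f replicaset.yaml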
Kubernetes components (except the API server and the storage backend) work by watching for resource changes in the storage backend and manipulating resources in the storage backend.
However, these components do not access the storage backend directly, but only through the Kubernetes API.
This dual use of the Kubernetes API, by internal components as well as by external users, is a fundamental design concept of Kubernetes.
All other Kubernetes components and users read, watch, and manipulate the state (i.e. resources) of Kubernetes through the Kubernetes API
The storage backend stores the state (i.e. resources) of Kubernetes.
command completion is a shell feature that works by means of a completion script.
A completion script is a shell script that defines the completion behaviour for a specific command. Sourcing a completion script enables completion for the corresponding command.
kubectl completion zsh
/etc/bash_completion.d directory (create it, if it doesn't exist)
source <(kubectl completion bash)
source <(kubectl completion zsh)
autoload -Uz compinit
compinit
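To make completion persistent, a common approach is to put the corresponding source line in the shell's startup file, for example:

# ~/.bashrc
source <(kubectl completion bash)

# ~/.zshrc
autoload -Uz compinit
compinit
source <(kubectl completion zsh)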
the API reference, which contains the full specifications of all resources.
kubectl api-resources
displays the resource names in their plural form (e.g. deployments instead of deployment). It also displays the shortname (e.g. deploy) for those resources that have one. Don't worry about these differences. All of these name variants are equivalent for kubectl.
.spec
This is where the custom-columns output format comes in. It lets you freely define the columns and the data to display in them. You can choose any field of a resource to be displayed as a separate column in the output.
kubectl get pods -o custom-columns='NAME:metadata.name,NODE:spec.nodeName'
kubectl explain pod.spec.
kubectl explain pod.metadata.
browse the resource specifications and try it out with any fields you like!
JSONPath is a language to extract data from JSON documents (it is similar to XPath for XML).
with kubectl explain, only a subset of the JSONPath capabilities is supported
Many fields of Kubernetes resources are lists, and the [] subscript operator allows you to select items of these lists. It is often used with a wildcard as [*] to select all items of the list.
kubectl get pods -o custom-columns='NAME:metadata.name,IMAGES:spec.containers[*].image'
a Pod may contain more than one container.
The availability zones for each node are obtained through the special failure-domain.beta.kubernetes.io/zone label.
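Label keys that contain dots have to be escaped with backslashes in a custom-columns expression; a sketch using that label:

kubectl get nodes -o custom-columns='NAME:metadata.name,ZONE:metadata.labels.failure-domain\.beta\.kubernetes\.io/zone'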
kubectl get nodes -o yaml
kubectl get nodes -o json
The default kubeconfig file is ~/.kube/config
with multiple clusters, then you have connection parameters for multiple clusters configured in your kubeconfig file.
Within a cluster, you can set up multiple namespaces (a namespace is a kind of "virtual cluster" within a physical cluster).
override the default kubeconfig file with the --kubeconfig option for every kubectl command.
Namespace: the namespace to use when connecting to the cluster
a one-to-one mapping between clusters and contexts.
When kubectl reads a kubeconfig file, it always uses the information from the current context.
just change the current context in the kubeconfig file
to switch to another namespace in the same cluster, you can change the value of the namespace element of the current context
kubectl also provides the --cluster, --user, --namespace, and --context options that allow you to override individual elements and the current context itself, regardless of what is set in the kubeconfig file.
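In practice, switching and overriding typically look like this (context and namespace names are illustrative):

kubectl config use-context my-other-cluster                 # switch the current context
kubectl config set-context --current --namespace=staging    # change the namespace of the current context
kubectl get pods --context=my-other-cluster -n staging      # one-off override without touching the file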
A popular tool for switching between clusters and namespaces is kubectx.
kubectl config get-contexts
just have to download the shell scripts named kubectl-ctx and kubectl-ns to any directory in your PATH and make them executable (for example, with chmod +x)
kubectl proxy
kubectl get roles
kubectl get pod
Kubectl plugins are distributed as simple executable files with a name of the form kubectl-x. The prefix kubectl- is mandatory,
To install a plugin, you just have to copy the kubectl-x file to any directory in your PATH and make it executable (for example, with chmod +x)
krew itself is a kubectl plugin
check out the kubectl-plugins GitHub topic
The executable can be of any type, a Bash script, a compiled Go program, a Python script, it really doesn't matter. The only requirement is that it can be directly executed by the operating system.
kubectl plugins can be written in any programming or scripting language.
you can write more sophisticated plugins with real programming languages, for example, using a Kubernetes client library. If you use Go, you can also use the cli-runtime library, which exists specifically for writing kubectl plugins.
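A minimal hypothetical plugin, saved as kubectl-hello in a directory on your PATH, could look like this:

#!/bin/bash
# kubectl-hello: a toy plugin that greets and lists the Pods in the current namespace
echo "Hello from a kubectl plugin!"
kubectl get pods

chmod +x kubectl-hello
kubectl hello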
a kubeconfig file consists of a set of contexts
changing the current context means changing the cluster, if you have only a single context per cluster.
By default, terraform init downloads plugins into a subdirectory of the
working directory so that each working directory is self-contained.
Terraform optionally allows the
use of a local directory as a shared plugin cache, which then allows each
distinct plugin binary to be downloaded only once.
directory must already exist before Terraform will cache plugins;
Terraform will not create the directory itself.
When a plugin cache directory is enabled, the terraform init command will
still access the plugin distribution server to obtain metadata about which
plugins are available, but once a suitable version has been selected it will
first check to see if the selected plugin is already available in the cache
directory.
When possible, Terraform will use hardlinks or symlinks to avoid storing
a separate copy of a cached plugin in multiple directories.
Terraform will never itself delete a plugin from the
plugin cache once it's been placed there.
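The cache location is set in the CLI configuration file (~/.terraformrc on Unix-like systems) or through an environment variable, for example:

# ~/.terraformrc
plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"

# or, equivalently, in the shell:
mkdir -p "$HOME/.terraform.d/plugin-cache"   # Terraform will not create the directory itself
export TF_PLUGIN_CACHE_DIR="$HOME/.terraform.d/plugin-cache"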
The configuration file used to define what image we want built and how is called
a template in Packer terminology.
JSON struck the best balance between being human-editable and machine-editable, allowing both hand-made and machine-generated templates to be created easily.
validate the
template by running packer validate example.json. This command checks the
syntax as well as the configuration values to verify they look valid.
At the end of running packer build, Packer outputs the artifacts that were
created as part of the build.
Packer only builds images. It does not attempt to manage them in any way.
All strings within templates are processed by a common Packer templating
engine, where variables and functions can be used to modify the value of a
configuration parameter at runtime.
Anything template related happens within double-braces: {{ }}.
Functions are specified directly within the braces, such as
{{timestamp}}
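A minimal example.json in this spirit (the amazon-ebs builder settings, AMI ID, and region are placeholders) ties template, validation, build, and the {{timestamp}} function together:

{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-0123456789abcdef0",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "packer-example {{timestamp}}"
    }
  ]
}

packer validate example.json
packer build example.json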
Packer needs to decide on a port to use for VNC when building remotely.
vnc_disable_password - This must be set to "true" when using VNC with
ESXi 6.5 or 6.7
remote_type (string) - The type of remote machine that will be used to
build this VM rather than a local desktop product. The only value accepted
for this currently is esx5. If this is not set, a desktop product will
be used. By default, this is not set.
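A sketch of the relevant keys in a vmware-iso builder section (the host and credentials are placeholders):

{
  "type": "vmware-iso",
  "remote_type": "esx5",
  "remote_host": "esxi.example.com",
  "remote_username": "root",
  "remote_password": "{{user `esxi_password`}}",
  "vnc_disable_password": true
}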
variable definitions can have default values assigned to them.
values are stored in separate files with the .tfvars extension.
looks through the working directory for a file named terraform.tfvars, or for files with the .auto.tfvars extension.
add the terraform.tfvars file to your .gitignore file and keep it out of version control.
include a terraform.tfvars.example file in your Git repository with all of the variable names recorded (but none of the values entered).
terraform apply -var-file=myvars.tfvars
Terraform allows you to keep input variable values in environment variables.
the prefix TF_VAR_
If Terraform does not find a value for a defined variable (from a default, a .tfvars file, an environment variable, or a CLI flag), it will prompt you for a value before running an action.
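Putting those sources of values together (names and values are illustrative):

# variables.tf
variable "region" {
  type    = string
  default = "us-east-1"
}

# terraform.tfvars (kept out of version control)
region = "eu-west-1"

# environment variable or CLI flag alternatives:
export TF_VAR_region="eu-west-1"
terraform apply -var="region=eu-west-1"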
state file contains a JSON object that holds your managed infrastructure’s current state
state is a snapshot of the various attributes of your infrastructure at the time it was last modified
sensitive information used to generate your Terraform state can be stored as plain text in the terraform.tfstate file.
Avoid checking your terraform.tfstate file into your version control repository.
Some backends, like Consul, also allow for state locking. If one user is applying a state, another user will be unable to make any changes.
Terraform backends allow the user to securely store their state in a remote location, such as a key/value store like Consul, or an S3 compatible bucket storage like Minio.
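A remote backend with locking might be declared like this (the address and path are placeholders):

terraform {
  backend "consul" {
    address = "consul.example.com:8500"
    scheme  = "https"
    path    = "terraform/state/my-app"
    lock    = true
  }
}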
deploying code changes at every small iteration, reducing the chance of developing new code based on buggy or failed previous versions.
based on
automating the execution of scripts to minimize the chance of
introducing errors while developing applications.
For every push to the repository, you
can create a set of scripts to build and test your application
automatically, decreasing the chance of introducing errors to your app.
the abstraction of the infrastructure layer, which is now considered code. Deployment of a new application may require the deployment of new infrastructure code as well.
"big bang" deployments update whole or large parts of an application in one fell swoop.
Big bang deployments require the business to conduct extensive development and testing before release, and are often associated with the "waterfall model" of large sequential releases.
Rollbacks are often costly, time-consuming, or even impossible.
In a rolling deployment, an application’s new version gradually replaces the old one.
new and old versions will coexist without affecting functionality or user experience.
Each container is modified to download the latest image from the app vendor’s site.
two identical production environments work in parallel.
Once the testing results are successful, application traffic is routed from blue to green.
In a blue-green deployment, both systems use the same persistence layer or database back end.
You can have blue use the primary database for write operations and green use the secondary for read operations.
Blue-green deployments rely on traffic routing.
long TTL values can delay these changes.
The main challenge of canary deployment is to devise a way to route some users to the new application.
Using application logic to unlock new features for specific users and groups.
With CD, the CI-built code artifact is packaged and always ready to be deployed in one or more environments.
Use Build Automation tools to automate environment builds
Use configuration management tools
Enable automated rollbacks for deployments
An application performance monitoring (APM) tool can help your team monitor critical performance metrics including server response times after deployments.
Executors define the environment in which the steps of a job will be run.
Executor declarations in config outside of jobs can be used by all jobs in the scope of that declaration, allowing you to reuse a single executor definition across multiple jobs.
It is also possible to allow an orb to define the executor used by all of its commands.
When invoking an executor in a job, any keys defined in the job itself will override those of the invoked executor.
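A small sketch of a shared executor reused by two jobs (the image, job names, and run commands are illustrative):

version: 2.1
executors:
  node:
    docker:
      - image: cimg/node:16.13
jobs:
  test:
    executor: node
    steps:
      - checkout
      - run: npm test
  lint:
    executor: node
    steps:
      - checkout
      - run: npm run lint
workflows:
  main:
    jobs:
      - test
      - lint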
Steps are used when you have a job or command that needs to mix predefined and user-defined steps.
Use the enum parameter type when you want to enforce that the value must be one from a specific set of string values.
Use an executor parameter type to allow the invoker of a job to decide what
executor it will run on
invoke the same job more than once in the workflows stanza of config.yml, passing any necessary parameters as subkeys to the job.
If a job is declared inside an orb it can use commands in that orb or the global commands.
To use parameters in executors, define the parameters under the given executor.
Parameters are in-scope only within the job or command that defined them.
A single configuration may invoke a job multiple times.
Every job invocation may optionally accept two special arguments: pre-steps and post-steps.
Pre and post steps allow you to execute steps in a given job
without modifying the job.
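For example, the same parameterised job invoked twice, the second time wrapped with pre- and post-steps (all names and values are illustrative):

version: 2.1
jobs:
  greet:
    parameters:
      to:
        type: string
        default: "world"
    docker:
      - image: cimg/base:stable
    steps:
      - run: echo "Hello, << parameters.to >>"
workflows:
  main:
    jobs:
      - greet:
          name: greet-staging
          to: "staging"
      - greet:
          name: greet-production
          to: "production"
          pre-steps:
            - run: echo "about to greet production"
          post-steps:
            - run: echo "finished greeting production"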
conditions are checked before a workflow is actually run
you cannot use a condition to check an environment
variable.
Conditional steps may be located anywhere a regular step could and may only use parameter values as inputs.
A conditional step consists of a step with the key when or unless. Under this conditional key are the subkeys steps and condition
A condition is a single value that evaluates to true or false at the time the config is processed, so you cannot use environment variables as conditions
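A sketch of when/unless conditional steps inside a command (the command, parameter, and job names are illustrative):

version: 2.1
commands:
  deploy:
    parameters:
      production:
        type: boolean
        default: false
    steps:
      - when:
          condition: << parameters.production >>
          steps:
            - run: echo "deploying to production"
      - unless:
          condition: << parameters.production >>
          steps:
            - run: echo "deploying to staging"
jobs:
  release:
    docker:
      - image: cimg/base:stable
    steps:
      - deploy:
          production: true
workflows:
  main:
    jobs:
      - release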