"strace 會列出程式執行的 system call,效率很好且不需要 debug symbol。我比較常用它找出影響程式行為設定檔的位置。
基本概念是開檔、讀取、寫入最後都會用到 system call,system call 數量不多,知道要用什麼 system call 作什麼事,就可以用 strace 觀察特定的 system call 得知很多資訊。比方說開檔一定要用 open,所以觀察 open 就能知道程式讀了那些檔案。
"
"debugger.html is a modern JavaScript debugger from Mozilla, built as a
web application with React and Redux. This project was started early
this year in an effort to replace the current debugger within the Firefox Developer Tools. Also, we wanted to make a debugger capable of debugging multiple targets and functioning in a standalone mode."
"Nmap ("Network Mapper") is a free and open source (license) utility for network discovery and security auditing. Many systems and network administrators also find it useful for tasks such as network inventory, managing service upgrade schedules, and monitoring host or service uptime. Nmap uses raw IP packets in novel ways to determine what hosts are available on the network, what services (application name and version) those hosts are offering, what operating systems (and OS versions) they are running, what type of packet filters/firewalls are in use, and dozens of other characteristics. It was designed to rapidly scan large networks, but works fine against single hosts. Nmap runs on all major computer operating systems, and official binary packages are available for Linux, Windows, and Mac OS X. In addition to the classic command-line Nmap executable, the Nmap suite includes an advanced GUI and results viewer (Zenmap), a flexible data transfer, redirection, and debugging tool (Ncat), a utility for comparing scan results (Ndiff), and a packet generation and response analysis tool (Nping)."
You only need the single Dockerfile. You don't need a separate build script, you don't need to create any intermediate images, and you don't need to extract any artifacts to your local system at all.
Debugging a specific build stage
You can use the COPY --from instruction to
copy from a separate image, either using the local image name, a tag available
locally or on a Docker registry, or a tag ID.
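A minimal multi-stage sketch (base images, paths, and the Go toolchain are illustrative, not prescribed by the docs):

    FROM golang:1.21 AS builder
    WORKDIR /src
    COPY . .
    RUN go build -o /out/app .

    FROM alpine:3.19
    # copy only the built artifact from the named build stage
    COPY --from=builder /out/app /usr/local/bin/app
    # COPY --from can also reference a separate image entirely, e.g.:
    # COPY --from=nginx:latest /etc/nginx/nginx.conf /nginx.conf.example
    ENTRYPOINT ["app"]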
Docker builds images automatically by reading the instructions from a
Dockerfile -- a text file that contains all commands, in order, needed to
build a given image.
A Docker image consists of read-only layers each of which represents a
Dockerfile instruction.
The layers are stacked and each one is a delta of the
changes from the previous layer.
When you run an image and generate a container, you add a new writable layer
(the “container layer”) on top of the underlying layers.
By “ephemeral,” we mean that the container can be stopped
and destroyed, then rebuilt and replaced with an absolute minimum of setup and
configuration.
Inadvertently including files that are not necessary for building an image
results in a larger build context and larger image size.
To exclude files not relevant to the build (without restructuring your source
repository) use a .dockerignore file. This file supports exclusion patterns
similar to .gitignore files.
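An illustrative .dockerignore (the entries depend entirely on your project):

    .git
    node_modules
    *.log
    tmp/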
Minimize image layers by leveraging the build cache: if your build contains several layers, you can order them from the less frequently changed (to ensure the build cache is reusable) to the more frequently changed.
Avoid installing extra or unnecessary packages just because they might be “nice to have.”
Each container should have only one concern.
Decoupling applications into
multiple containers makes it easier to scale horizontally and reuse containers
Limiting each container to one process is a good rule of thumb, but it is not a
hard and fast rule.
Use your best judgment to keep containers as clean and modular as possible.
Do multi-stage builds and only copy the artifacts you need into the final image. This allows you to
include tools and debug information in your intermediate build stages without
increasing the size of the final image.
Sorting multi-line arguments alphanumerically helps avoid duplication of packages and makes the
list much easier to update.
When building an image, Docker steps through the instructions in your
Dockerfile, executing each in the order specified.
The next instruction is compared against all child images derived from that base
image to see if one of them was built using the exact same instruction. If
not, the cache is invalidated. For most instructions, simply comparing the instruction in the Dockerfile with one
of the child images is sufficient.
For the ADD and COPY instructions, the contents of the file(s)
in the image are examined and a checksum is calculated for each file.
If anything has changed in the file(s), such
as the contents and metadata, then the cache is invalidated.
Aside from ADD and COPY, cache checking does not look at the
files in the container to determine a cache match. In that case just
the command string itself is used to find a match.
Whenever possible, use current official repositories as the basis for your
images.
Using RUN apt-get update && apt-get install -y ensures your Dockerfile
installs the latest package versions with no further coding or manual
intervention.
cache busting
Docker executes these commands using the /bin/sh -c interpreter, which only
evaluates the exit code of the last operation in the pipe to determine success.
Prepend set -o pipefail && to ensure that an unexpected error prevents the
build from inadvertently succeeding.
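A sketch of both points (package names and the URL are illustrative; bash is used because the default /bin/sh in many images does not support pipefail and must itself be present in the image):

    RUN apt-get update && apt-get install -y \
        curl \
        nginx \
     && rm -rf /var/lib/apt/lists/*

    RUN ["/bin/bash", "-c", "set -o pipefail && curl -fsSL https://example.com/data | wc -l > /number"]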
The CMD instruction should be used to run the software contained by your
image, along with any arguments.
CMD should almost always be used in the form
of CMD ["executable", "param1", "param2"…]
CMD should rarely be used in the manner of CMD ["param", "param"] in
conjunction with ENTRYPOINT
The ENV instruction is also useful for providing required environment
variables specific to services you wish to containerize,
Each ENV line creates a new intermediate layer, just like RUN commands
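For instance (the version number and paths are illustrative):

    ENV PG_MAJOR=9.3
    ENV PATH=/usr/local/postgres-$PG_MAJOR/bin:$PATH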
COPY
is preferred
COPY only
supports the basic copying of local files into the container
the best use for ADD is local tar file
auto-extraction into the image, as in ADD rootfs.tar.xz /
If you have multiple Dockerfile steps that use different files from your
context, COPY them individually, rather than all at once.
using ADD to fetch packages from remote URLs is
strongly discouraged; you should use curl or wget instead
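A sketch of the discouraged pattern and the preferred one (URL and paths are illustrative):

    # Discouraged:
    # ADD https://example.com/big.tar.xz /usr/src/things/
    # Preferred: fetch, extract, and clean up in a single RUN layer
    RUN mkdir -p /usr/src/things \
     && curl -SL https://example.com/big.tar.xz \
      | tar -xJC /usr/src/things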
The best use for ENTRYPOINT is to set the image’s main command, allowing that
image to be run as though it was that command (and then use CMD as the
default flags).
the image name can double as a reference to the binary as
shown in the command
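For example, an image wrapping a single CLI tool (s3cmd here stands in for any binary):

    ENTRYPOINT ["s3cmd"]
    CMD ["--help"]
    # docker run <image>              -> shows s3cmd help
    # docker run <image> ls s3://bkt  -> runs that s3cmd command instead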
The VOLUME instruction should be used to expose any database storage area,
configuration storage, or files/folders created by your docker container.
use VOLUME for any mutable and/or user-serviceable
parts of your image
If you absolutely need
functionality similar to sudo (such as initializing the daemon as root but
running it as non-root), consider using “gosu”.
always use absolute paths for your
WORKDIR
An ONBUILD command executes after the current Dockerfile build completes.
Think
of the ONBUILD command as an instruction the parent Dockerfile gives
to the child Dockerfile
A Docker build executes ONBUILD commands before any command in a child
Dockerfile.
Be careful when putting ADD or COPY in ONBUILD. The “onbuild” image
fails catastrophically if the new build’s context is missing the resource being
added.
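A sketch of an “onbuild”-style parent image (the paths and the make step are illustrative):

    # In the parent image's Dockerfile:
    ONBUILD COPY . /app/src
    ONBUILD RUN make -C /app/src
    # Both instructions run at the start of any child build that is FROM this image,
    # and the COPY fails if the child's build context lacks the expected files.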
{{ ... }} for Expressions to print to the template output
use a dot (.) to access attributes of a variable
the outer double-curly braces are not part of the
variable, but the print statement.
If you access variables inside tags, don't
put the braces around them.
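For example (variable names are illustrative):

    {{ user.name }}                                     {# print an attribute with a dot #}
    {% if user.name %}Hi {{ user.name }}{% endif %}     {# no braces around variables inside tags #}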
If a variable or attribute does not exist, you will get back an undefined
value.
the default behavior is to evaluate to an empty string if
printed or iterated over, and to fail for every other operation.
If an object has an item and an attribute with the same
name, foo.bar looks up the attribute first and foo['bar'] the item first. Additionally, the attr() filter only looks up attributes.
Variables can be modified by filters. Filters are separated from the
variable by a pipe symbol (|) and may have optional arguments in
parentheses.
Multiple filters can be chained
Tests can be used
to test a variable against a common expression.
To test a variable, add is plus the name of the test after the variable.
to find out if a variable is defined, you can do name is defined,
which will then return true or false depending on whether name is defined
in the current template context.
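For instance:

    {{ name|striptags|title }}                            {# chained filters #}
    {% if name is defined %}Hello {{ name }}{% endif %}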
You can also strip whitespace in templates by hand. If you add a minus
sign (-) to the start or end of a block (e.g. a For tag), a
comment, or a variable expression, the whitespace before or after
that block will be removed.
Do not add whitespace between the tag and the minus sign.
You can also mark a block raw so Jinja does not process it.
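For example:

    {% for item in seq -%}
        {{ item }}
    {%- endfor %}
    {% raw %}{{ this is left untouched by Jinja }}{% endraw %}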
Template inheritance
allows you to build a base “skeleton” template that contains all the common
elements of your site and defines blocks that child templates can override.
The {% extends %} tag is the key here. It tells the template engine that
this template “extends” another template.
access templates in subdirectories with a slash
can’t define multiple {% block %} tags with the same name in the
same template
use the special
self variable and call the block with that name
self.title()
super()
put the name of the block after the end tag for better
readability
if the block is replaced by
a child template, a variable would appear that was not defined in the block or
passed to the context.
setting the block to “scoped” by adding the scoped
modifier to a block declaration
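A minimal sketch of inheritance (template and block names are illustrative):

    {# layout/base.html #}
    <title>{% block title %}{% endblock %}</title>
    <div id="content">{% block content %}{% endblock content %}</div>

    {# child.html #}
    {% extends "layout/base.html" %}        {# subdirectories use a slash #}
    {% block title %}Index{% endblock %}
    {% block content %}
        {{ super() }}                        {# render the parent's block content #}
        <h1>{{ self.title() }}</h1>          {# reuse another block's output #}
    {% endblock content %}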
If you have a variable that may
include any of the following chars (>, <, &, or ") you
SHOULD escape it unless the variable contains well-formed and trusted
HTML.
Jinja2 functions (macros, super, self.BLOCKNAME) always return template
data that is marked as safe.
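For example:

    {{ user.bio|e }}      {# escape a value that may contain <, >, &, or " #}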
With the default syntax, control structures appear inside
{% ... %} blocks.
the dictsort filter
loop.cycle
Unlike in Python, it’s not possible to break or continue in a loop
use loops recursively
add the recursive modifier
to the loop definition and call the loop variable with the new iterable
where you want to recurse.
The loop variable always refers to the closest (innermost) loop.
loop.changed(value) can be used to check whether the value changed at all.
You can also use the if statement to test if a variable is defined, not
empty and not false.
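A sketch combining these (names illustrative):

    <ul>
    {% for user in users %}
      <li class="{{ loop.cycle('odd', 'even') }}">{{ user.name }}</li>
    {% else %}
      <li>no users found</li>
    {% endfor %}
    </ul>
    {% if users %}...{% endif %}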
Macros are comparable with functions in regular programming languages.
If a macro name starts with an underscore, it’s not exported and can’t
be imported.
To pass a macro to another macro, you can use a call block; inside the called macro, caller() renders the contents of the call block.
a single trailing newline is stripped if present
other whitespace (spaces, tabs, newlines etc.) is returned unchanged
a block tag works in “both”
directions. That is, a block tag doesn’t just provide a placeholder to fill
- it also defines the content that fills the placeholder in the parent.
Python dicts are not ordered
caller(user)
call(user)
This is a simple dialog rendered by using a macro and
a call block.
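Roughly as in the Jinja docs' dialog example (class names illustrative):

    {% macro render_dialog(title, class='dialog') -%}
      <div class="{{ class }}">
        <h2>{{ title }}</h2>
        <div class="contents">{{ caller() }}</div>
      </div>
    {%- endmacro %}

    {% call render_dialog('Hello World') %}
      This is a simple dialog rendered by using a macro and a call block.
    {% endcall %}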
Filter sections allow you to apply regular Jinja2 filters on a block of
template data.
Assignments at
top level (outside of blocks, macros or loops) are exported from the template
like top level macros and can be imported by other templates.
using namespace
objects which allow propagating of changes across scopes
use block assignments to
capture the contents of a block into a variable name.
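For example:

    {% filter upper %}this text becomes uppercase{% endfilter %}
    {% set ns = namespace(found=false) %}                    {# survives scope boundaries #}
    {% set navigation %}<a href="/">Home</a>{% endset %}     {# block assignment #}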
The extends tag can be used to extend one template from another.
Blocks are used for inheritance and act as both placeholders and replacements
at the same time.
The include statement is useful to include a template and return the
rendered contents of that file into the current namespace
Included templates have access to the variables of the active context by
default.
putting often used code into macros
imports are cached
and imported templates don’t have access to the current template variables,
just the globals by default.
Macros and variables starting with one or more underscores are private and
cannot be imported.
By default, included templates are passed the current context and imported
templates are not.
imports are often used just as a module that holds macros.
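For example (template names are illustrative):

    {% include 'header.html' %}                        {# sees the current context #}
    {% import 'forms.html' as forms %}                 {# cached; only globals by default #}
    {{ forms.input('username') }}
    {% from 'forms.html' import input with context %}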
Integers and floating point numbers are created by just writing the
number down
Everything between two brackets is a list.
Tuples are like lists that cannot be modified (“immutable”).
A dict in Python is a structure that combines keys and values.
//
Divide two numbers and return the truncated integer result
The special constants true, false, and none are indeed lowercase
all Jinja identifiers are lowercase
(expr)
group an expression.
The is and in operators support negation using an infix notation
in
Perform a sequence / mapping containment test.
|
Applies a filter.
~
Converts all operands into strings and concatenates them.
use inline if expressions.
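For example:

    {{ 'Hello ' ~ name ~ '!' }}                            {# ~ concatenates as strings #}
    {{ 'available' if item is defined else 'missing' }}    {# inline if #}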
attr(): foo|attr('bar') works like foo.bar except that always an attribute is returned and items are not
looked up.
default(value, default_value=u'', boolean=False)
If the value is undefined it will return the passed default value,
otherwise the value of the variable.
dictsort(value, case_sensitive=False, by='key', reverse=False)
Sort a dict and yield (key, value) pairs.
format(value, *args, **kwargs)
Apply Python string formatting on an object.
groupby(value, attribute)
Group a sequence of objects by a common attribute. The attribute you
grouped by is stored in the grouper attribute and the list contains all
the objects that have this grouper in common.
indent(s, width=4, first=False, blank=False, indentfirst=None)
Return a copy of the string with each line indented by 4 spaces. The
first line and blank lines are not indented by default.
join(value, d=u'', attribute=None)
Return a string which is the concatenation of the strings in the
sequence.
map()
Applies a filter on a sequence of objects or looks up an attribute.
pprint(value, verbose=False)
Pretty print a variable. Useful for debugging.
reject()
Filters a sequence of objects by applying a test to each object,
and rejecting the objects with the test succeeding.
replace(s, old, new, count=None)
Return a copy of the value with all occurrences of a substring
replaced with a new one.
round(value, precision=0, method='common')
Round the number to a given precision. Even if rounded to 0 precision,
a float is returned.
select()
Filters a sequence of objects by applying a test to each object,
and only selecting the objects with the test succeeding.
sort(value, reverse=False, case_sensitive=False, attribute=None)
Sort an iterable. Per default it sorts ascending; if you pass it
true as first argument it will reverse the sorting.
striptags(value)
Strip SGML/XML tags and replace adjacent whitespace by one space.
tojson(value, indent=None)
Dumps a structure to JSON so that it's safe to use in <script>
tags.
trim(value)
Strip leading and trailing whitespace.
unique(value, case_sensitive=False, attribute=None)
Returns a list of unique items from the given iterable.
urlize(value, trim_url_limit=None, nofollow=False, target=None, rel=None)
Converts URLs in plain text into clickable links.
defined(value)
Return true if the variable is defined.
in(value, seq)
Check if value is in seq.
mapping(value)
Return true if the object is a mapping (dict etc.).
number(value)
Return true if the variable is a number.
sameas(value, other)
Check if an object points to the same memory address as another
object.
undefined(value)
Like defined() but the other way round.
A joiner is
passed a string and will return that string every time it’s called, except
the first time (in which case it returns an empty string).
namespace(...)
Creates a new container that allows attribute assignment using the
{% set %} tag.
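A few of these in use (data names are illustrative):

    {{ my_variable|default('my_variable is not defined') }}
    {{ users|sort(attribute='name')|map(attribute='name')|join(', ') }}
    {% if value is mapping %}{{ value|tojson }}{% endif %}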
The with statement makes it possible to create a new inner scope.
Variables set within this scope are not visible outside of the scope.
activate and deactivate the autoescaping from within
the templates
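For example:

    {% with foo = 42 %}{{ foo }}{% endwith %}              {# foo is not visible outside #}
    {% autoescape false %}{{ trusted_html }}{% endautoescape %}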
With both trim_blocks and lstrip_blocks enabled, you can put block tags
on their own lines, and the entire block line will be removed when
rendered, preserving the whitespace of the contents
Rails 4 automatically adds the sass-rails, coffee-rails and uglifier
gems to your Gemfile
reduce the number of requests that a browser makes to render a web page
Starting with version 3.1, Rails defaults to concatenating all JavaScript files into one master .js file and all CSS files into one master .css file
In production, Rails inserts an MD5 fingerprint into each filename so that the file is cached by the web browser
The technique sprockets uses for fingerprinting is to insert a hash of the
content into the name, usually at the end.
asset minification or compression
The sass-rails gem is automatically used for CSS compression if included
in Gemfile and no config.assets.css_compressor option is set.
Supported languages include Sass for CSS, CoffeeScript for JavaScript, and ERB for both by default.
When a filename is unique and based on its content, HTTP headers can be set to encourage caches everywhere (whether at CDNs, at ISPs, in networking equipment, or in web browsers) to keep their own copy of the content
The asset pipeline is technically no longer a core feature of Rails 4
The technique Rails uses for fingerprinting is to insert a hash of the content into the name, usually at the end
With the asset pipeline, the preferred location for these assets is now the app/assets directory.
Fingerprinting is enabled by default for production and disabled for all other
environments
The files in app/assets are never served directly in production.
Paths are traversed in the order that they occur in the search path
You should use app/assets for
files that must undergo some pre-processing before they are served.
By default .coffee and .scss files will not be precompiled on their own
app/assets is for assets that are owned by the application, such as custom images, JavaScript files or stylesheets.
lib/assets is for your own libraries' code that doesn't really fit into the scope of the application or those libraries which are shared across applications.
vendor/assets is for assets that are owned by outside entities, such as code for JavaScript plugins and CSS frameworks.
Any path under assets/* will be searched
By default these files will be ready to use by your application immediately using the require_tree directive.
By default, this means the files in app/assets take precedence, and will mask corresponding paths in lib and vendor
Sprockets uses files named index (with the relevant extensions) for a special purpose
Rails.application.config.assets.paths
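You can inspect or extend the search path, for example (the added path is illustrative):

    # config/initializers/assets.rb
    Rails.application.config.assets.paths << Rails.root.join("lib", "videoplayer", "flash")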
The 'data-turbolinks-track' option causes Turbolinks to check if
an asset has been updated and, if so, loads it into the page
if you add an erb extension to a CSS asset (for example, application.css.erb), then helpers like asset_path are available in your CSS rules
If you add an erb extension to a JavaScript asset, making it something such as application.js.erb, then you can use the asset_path helper in your JavaScript code
The asset pipeline automatically evaluates ERB
data URI — a method of embedding the image data directly into the CSS file — you can use the asset_data_uri helper.
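For example (file and image names are illustrative):

    /* app/assets/stylesheets/example.css.erb */
    .class { background-image: url(<%= asset_path 'image.png' %>) }
    #logo  { background: url(<%= asset_data_uri 'logo.png' %>) }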
Sprockets will also look through the paths specified in config.assets.paths,
which includes the standard application paths and any paths added by Rails
engines.
image_tag
the closing tag cannot be of the style -%>
asset_data_uri
app/assets/javascripts/application.js
sass-rails provides -url and -path helpers (hyphenated in Sass,
underscored in Ruby) for the following asset classes: image, font, video, audio,
JavaScript and stylesheet.
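For instance, in a Sass stylesheet (image name illustrative):

    // app/assets/stylesheets/example.scss
    .class { background-image: image-url("rails.png"); }
    // image-path("rails.png") returns "/assets/rails.png"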
Rails.application.config.assets.compress
In JavaScript files, the directives begin with //=
The require_tree directive tells Sprockets to recursively include all JavaScript files in the specified directory into the output.
manifest files contain directives — instructions that tell Sprockets which files to require in order to build a single CSS or JavaScript file.
You should not rely on any particular order among those
Sprockets uses manifest files to determine which assets to include and serve.
the family of require directives prevents files from being included twice in the output
which files to require in order to build a single CSS or JavaScript file
Directives are processed top to bottom, but the order in which files are included by require_tree is unspecified.
In JavaScript files, Sprockets directives begin with //=
If require_self is called more than once, only the last call is respected.
require
directive is used to tell Sprockets the files you wish to require.
You need not supply the extensions explicitly.
Sprockets assumes you are requiring a .js file when done from within a .js
file
paths must be
specified relative to the manifest file
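A typical JavaScript manifest (the required libraries are illustrative):

    // app/assets/javascripts/application.js
    //= require jquery
    //= require jquery_ujs
    //= require_tree .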
require_directory
Rails 4 creates both app/assets/javascripts/application.js and
app/assets/stylesheets/application.css regardless of whether the
--skip-sprockets option is used when creating a new rails application.
The file extensions used on an asset determine what preprocessing is applied.
app/assets/stylesheets/application.css
Additional layers of preprocessing can be requested by adding other extensions, where each extension is processed in a right-to-left manner
require_self
use the Sass @import rule
instead of these Sprockets directives.
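The CSS counterpart looks like this (when using Sass, the @import rule mentioned above is preferred over these directives so variables and mixins stay in scope):

    /* app/assets/stylesheets/application.css */
    /*
     *= require_self
     *= require_tree .
     */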
Keep in mind that the order of these preprocessors is important
In development mode, assets are served as separate files in the order they are specified in the manifest file.
when these files are
requested they are processed by the processors provided by the coffee-script
and sass gems and then sent back to the browser as JavaScript and CSS
respectively.
For example, a file named projects.css.scss.erb is processed first with ERB, then SCSS, and finally served as CSS; a projects.js.coffee.erb file is processed as ERB, then CoffeeScript, and served as JavaScript.
By default Rails assumes that assets have been precompiled and will be served as static assets by your web server
with the Asset Pipeline the :cache and :concat options aren't used anymore
Assets are compiled and cached on the first request after the server is started
Helm will figure out where to install Tiller by reading your Kubernetes
configuration file (usually $HOME/.kube/config). This is the same file
that kubectl uses.
By default, when Tiller is installed, it does not have authentication enabled.
helm repo update
Without a max history set the history is kept indefinitely, leaving a large number of records for helm and tiller to maintain.
helm init --upgrade
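For example (the limit value is illustrative):

    helm init --upgrade --history-max 200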
Whenever you install a chart, a new release is created.
one chart can
be installed multiple times into the same cluster. And each can be
independently managed and upgraded.
The helm list function will show you a list of all deployed releases.
helm delete
helm status
you
can audit a cluster’s history, and even undelete a release (with helm
rollback).
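A sketch of the release lifecycle (chart and release names are illustrative):

    helm install stable/mysql            # creates a release, e.g. "wintering-rodent"
    helm list
    helm status wintering-rodent
    helm delete wintering-rodent
    helm rollback wintering-rodent 1     # works because the deleted release's history is kept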
the Helm
server (Tiller).
The Helm client (helm)
brew install kubernetes-helm
Tiller, the server portion of Helm, typically runs inside of your
Kubernetes cluster.
it can also be run locally, and
configured to talk to a remote Kubernetes cluster.
Role-Based Access Control - RBAC for short
create a service account for Tiller with the right roles and permissions to access resources.
run Tiller in an RBAC-enabled Kubernetes cluster.
run kubectl get pods --namespace
kube-system and see Tiller running.
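A common sketch for an RBAC-enabled cluster (the broad cluster-admin binding is just the simplest documented option, not a security recommendation):

    kubectl --namespace kube-system create serviceaccount tiller
    kubectl create clusterrolebinding tiller \
      --clusterrole cluster-admin --serviceaccount=kube-system:tiller
    helm init --service-account tiller
    kubectl get pods --namespace kube-system    # Tiller should be running here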
helm inspect
Helm will look for Tiller in the kube-system namespace unless
--tiller-namespace or TILLER_NAMESPACE is set.
For development, it is sometimes easier to work on Tiller locally, and
configure it to connect to a remote Kubernetes cluster.
even when running locally, Tiller will store release
configuration in ConfigMaps inside of Kubernetes.
helm version should show you both
the client and server version.
Tiller stores its data in Kubernetes ConfigMaps, you can safely
delete and re-install Tiller without worrying about losing any data.
helm reset
The --node-selectors flag allows us to specify the node labels required
for scheduling the Tiller pod.
--override allows you to specify properties of Tiller’s
deployment manifest.
helm init --override manipulates the specified properties of the final
manifest (there is no “values” file).
The --output flag allows us skip the installation of Tiller’s deployment
manifest and simply output the deployment manifest to stdout in either
JSON or YAML format.
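For example (label, annotation, and output values are illustrative):

    helm init --node-selectors "beta.kubernetes.io/os"="linux"
    helm init --override metadata.annotations."deployment\.kubernetes\.io/revision"="1"
    helm init --output yaml > tiller-deployment.yaml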
By default, tiller stores release information in ConfigMaps in the namespace
where it is running.
switch from the default backend to the secrets
backend, you’ll have to do the migration for this on your own.
a beta SQL storage backend that stores release
information in an SQL database (only postgres has been tested so far).
Once you have the Helm Client and Tiller successfully installed, you can
move on to using Helm to manage charts.
Helm requires that kubelet have access to a copy of the socat program to proxy connections to the Tiller API.
A Release is an instance of a chart running in a Kubernetes cluster.
One chart can often be installed many times into the same cluster.
helm init --client-only
helm init --dry-run --debug
A panic in Tiller is almost always the result of a failure to negotiate with the
Kubernetes API server
Tiller and Helm have to negotiate a common version to make sure that they can safely
communicate without breaking API assumptions
helm delete --purge
Helm stores some files in $HELM_HOME, which is
located by default in ~/.helm
A Chart is a Helm package. It contains all of the resource definitions
necessary to run an application, tool, or service inside of a Kubernetes
cluster.
it like the Kubernetes equivalent of a Homebrew formula,
an Apt dpkg, or a Yum RPM file.
A Repository is the place where charts can be collected and shared.
Set the $HELM_HOME environment variable
each time it is installed, a new release is created.
Helm installs charts into Kubernetes, creating a new release for
each installation. And to find new charts, you can search Helm chart
repositories.
chart repository is named
stable by default
helm search shows you all of the available charts
helm inspect
To install a new package, use the helm install command. At its
simplest, it takes only one argument: The name of the chart.
If you want to use your own release name, simply use the
--name flag on helm install
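For example:

    helm search mariadb
    helm install stable/mariadb                     # auto-generated release name
    helm install --name my-release stable/mariadb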
additional configuration steps you can or
should take.
Helm does not wait until all of the resources are running before it
exits. Many charts require Docker images that are over 600M in size, and
may take a long time to install into the cluster.
helm status
helm inspect
values
helm inspect values stable/mariadb
override any of these settings in a YAML formatted file,
and then pass that file during installation.
helm install -f config.yaml stable/mariadb
--values (or -f): Specify a YAML file with overrides.
--set (and its variants --set-string and --set-file): Specify overrides on the command line.
Values that have been --set can be cleared by running helm upgrade with --reset-values
specified.
Chart
designers are encouraged to consider the --set usage when designing the format
of a values.yaml file.
--set-file key=filepath is another variant of --set.
It reads the file and use its content as a value.
inject a multi-line text into values without dealing with indentation in YAML.
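Sketches of each override style (the file contents, keys, and chart are illustrative):

    # config.yaml
    mariadbUser: user0
    mariadbDatabase: user0db

    helm install -f config.yaml stable/mariadb
    helm install --set mariadbUser=user0 stable/mariadb
    helm install --set-file my_script=dothings.sh stable/mariadb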
helm install can also install from sources other than a repository chart: a packaged chart archive, an unpacked chart directory, or a full URL.
When a new version of a chart is released, or when you want to change
the configuration of your release, you can use the helm upgrade
command.
Kubernetes charts can be large and
complex, Helm tries to perform the least invasive upgrade.
It will only
update things that have changed since the last release
If both are used, --set values are merged into --values with higher precedence.
The helm get command is a useful tool for looking at a release in the
cluster.
helm rollback
A release version is an incremental revision. Every time an install,
upgrade, or rollback happens, the revision number is incremented by 1.
helm history
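For example (the release name happy-panda and the values file are illustrative):

    helm upgrade -f panda.yaml happy-panda stable/mariadb
    helm get values happy-panda
    helm history happy-panda
    helm rollback happy-panda 1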
a release name cannot be
re-used.
you can rollback a
deleted resource, and have it re-activate.
helm repo list
helm repo add
helm repo update
The Chart Development Guide explains how to develop your own
charts.
helm create
helm lint
helm package
Charts that are archived can be loaded into chart repositories.
chart repository server
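A quick sketch of the chart-authoring workflow (chart name is illustrative):

    helm create mychart        # scaffold a new chart
    helm lint mychart
    helm package mychart       # produces mychart-0.1.0.tgz
    helm serve                 # Helm 2's simple local chart repository server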
Tiller can be installed into any namespace.
Limiting Tiller to only be able to install into specific namespaces and/or resource types is controlled by Kubernetes RBAC roles and rolebindings
Release names are unique PER TILLER INSTANCE
Charts should only contain resources that exist in a single namespace.
not recommended to have multiple Tillers configured to manage resources in the same namespace.
a client-side Helm plugin. A plugin is a
tool that can be accessed through the helm CLI, but which is not part of the
built-in Helm codebase.
Helm plugins are add-on tools that integrate seamlessly with Helm. They provide
a way to extend the core feature set of Helm, but without requiring every new
feature to be written in Go and added to the core tool.
Helm plugins live in $(helm home)/plugins
The Helm plugin model is partially modeled on Git's plugin model; you may sometimes hear helm referred to as the porcelain layer, with plugins being the plumbing.
command is the command that this plugin will
execute when it is called.
Environment variables are interpolated before the plugin
is executed.
The command itself is not executed in a shell. So you can’t oneline a shell script.
Helm is able to fetch Charts using HTTP/S
Variables like KUBECONFIG are set for the plugin if they are set in the
outer environment.
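A plugin.yaml along these lines (fields roughly as in the Helm 2 docs; the name and script are illustrative):

    name: "keybase"
    version: "0.1.0"
    usage: "Integrate Keybase tools with Helm"
    description: "This plugin provides Keybase services to Helm."
    ignoreFlags: false
    useTunnel: false
    command: "$HELM_PLUGIN_DIR/keybase.sh"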
In Kubernetes, granting a role to an application-specific service account is a best practice to ensure that your application is operating in the scope that you have specified.
You can restrict Tiller's capabilities to install resources only in certain namespaces, or grant a Helm client access to a Tiller instance.
Service account with cluster-admin role
The cluster-admin role is created by default in a Kubernetes cluster
Deploy Tiller in a namespace, restricted to deploying resources only in that namespace
Deploy Tiller in a namespace, restricted to deploying resources in another namespace
When running a Helm client in a pod, in order for the Helm client to talk to a Tiller instance, it will need certain privileges to be granted.
SSL Between Helm and Tiller
The Tiller authentication model uses client-side SSL certificates.
creating an internal CA, and using both the
cryptographic and identity functions of SSL.
Helm is a powerful and flexible package-management and operations tool for Kubernetes.
default installation applies no security configurations
with a cluster that is well-secured in a private network with no data-sharing or no other users or teams.
With great power comes great responsibility.
Choose the Best Practices you should apply to your helm installation
Role-based access control, or RBAC
Tiller’s gRPC endpoint and its usage by Helm
Kubernetes employs a role-based access control (or RBAC) system (as do modern operating systems) to help mitigate the damage that can be done if credentials are misused or bugs exist.
In the default installation the gRPC endpoint that Tiller offers is available inside the cluster (not external to the cluster) without authentication configuration applied.
Tiller stores its release information in ConfigMaps. We suggest changing the default to Secrets.
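The Helm 2 docs suggest switching the storage driver when initializing Tiller, roughly like this (treat the exact override path as something to verify against your Helm version):

    helm init --override 'spec.template.spec.containers[0].command'='{/tiller,--storage=secret}'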
release information
charts
charts are a kind of package that not only installs containers you may or may not have validated yourself, but it may also install into more than one namespace.
As with all shared software, in a controlled or shared environment you must validate all software you install yourself before you install it.
Helm’s provenance tools to ensure the provenance and integrity of charts
"Helm will figure out where to install Tiller by reading your Kubernetes configuration file (usually $HOME/.kube/config). This is the same file that kubectl uses."
Infrastructure as Code (IaC) is a technique for deploying and managing infrastructure using software, configuration files, and automated tools.
With the older methods, technicians must configure a device manually, perhaps with the aid of an interactive tool. Information is added to configuration files by hand or through the use of ad-hoc scripts. Configuration wizards and similar utilities are helpful, but they still require hands-on management. A small group of experts owns the expertise, the process is typically poorly defined, and errors are common.
The development of the continuous integration and continuous delivery (CI/CD) pipeline made the idea of treating infrastructure as software much more attractive.
Infrastructure as Code takes advantage of the software development process, making use of quality assurance and test automation techniques.
Consistency/Standardization
Each node in the network becomes what is known as a snowflake, with its own unique settings. This leads to a system state that cannot easily be reproduced and is difficult to debug.
With standard configuration files and software-based configuration, there is greater consistency between all equipment of the same type. A key IaC concept is idempotence.
Idempotence makes it easy to troubleshoot, test, stabilize, and upgrade all the equipment.
Infrastructure as Code is central to the culture of DevOps, which is a mix of development and operations
edits are always made to the source configuration files, never on the target.
A declarative approach describes the final state of a device, but does not mandate how it should get there. The specific IaC tool makes all the procedural decisions. The end state is typically defined through a configuration file, a JSON specification, or a similar encoding.
An imperative approach defines specific functions or procedures that must be used to configure the device. It focuses on what must happen, but does not necessarily describe the final state. Imperative techniques typically use scripts for the implementation.
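A rough illustration of the declarative style, here in Terraform-like syntax (the provider, resource type, and values are assumptions for this sketch, not from the article). The imperative counterpart would instead be a script of explicit commands that create and configure the instance step by step.

    # Declare the desired end state; the tool decides how to reach it.
    resource "linode_instance" "web" {
      label  = "web-1"
      region = "us-east"
      type   = "g6-standard-1"
    }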
With a push configuration, the central server pushes the configuration to the destination device.
If a device is mutable, its configuration can be changed while it is active
Immutable devices cannot be changed. They must be decommissioned or rebooted and then completely rebuilt.
an immutable approach ensures consistency and avoids drift. However, it usually takes more time to remove or rebuild a configuration than it does to change it.
System administrators should consider security issues as part of the development process.
Ansible is a very popular open source IaC application from Red Hat
Ansible is often used in conjunction with Kubernetes and Docker.
Linode offers a collection of
several Ansible guides for a more comprehensive overview.
Pulumi permits the use of a variety of programming languages to deploy and manage infrastructure within a cloud environment.
Terraform allows users to provision data center infrastructure using either JSON or Terraform’s own declarative language.
Terraform manages resources through the use of providers, which are similar to APIs.
The control plane's components make global decisions about the cluster
Control plane components can be run on any machine in the cluster.
for simplicity, set up scripts typically start all control plane components on
the same machine, and do not run user containers on this machine
The API server is the front end for the Kubernetes control plane.
kube-apiserver is designed to scale horizontally—that is, it scales by deploying more instances.
You can run several instances of kube-apiserver and balance traffic between those instances.
If your Kubernetes cluster uses etcd as its backing store, make sure you have a
backup plan for that data.
kube-scheduler watches for newly created
Pods with no assigned
node, and selects a node for them
to run on.
Factors taken into account for scheduling decisions include:
individual and collective resource requirements, hardware/software/policy
constraints, affinity and anti-affinity specifications, data locality,
inter-workload interference, and deadlines.
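The resource requirements come from the Pod spec; a minimal sketch (names and values illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: demo
    spec:
      containers:
      - name: app
        image: nginx
        resources:
          requests:
            cpu: "250m"
            memory: "64Mi"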
Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
Node controller
Job controller
Endpoints controller
Service Account & Token controllers
The cloud controller manager lets you link your
cluster into your cloud provider's API, and separates out the components that interact
with that cloud platform from components that only interact with your cluster.
If you are running Kubernetes on your own premises, or in a learning environment inside your
own PC, the cluster does not have a cloud controller manager.
An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.
The kubelet doesn't manage containers which were not created by Kubernetes.
kube-proxy is a network proxy that runs on each
node in your cluster,
implementing part of the Kubernetes
Service concept.
kube-proxy
maintains network rules on nodes. These network rules allow network
communication to your Pods from network sessions inside or outside of
your cluster.
kube-proxy uses the operating system packet filtering layer if there is one
and it's available.
Kubernetes supports several container runtimes: Docker,
containerd, CRI-O,
and any implementation of the Kubernetes CRI (Container Runtime
Interface).
Addons use Kubernetes resources (DaemonSet,
Deployment, etc)
to implement cluster features
Namespaced resources
for addons belong within the kube-system namespace.
While the other addons are not strictly required, all Kubernetes clusters should have cluster DNS, as many examples rely on it.
Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services.
Containers started by Kubernetes automatically include this DNS server in their DNS searches.
Container Resource Monitoring records generic time-series metrics
about containers in a central database, and provides a UI for browsing that data.
A cluster-level logging mechanism is responsible for
saving container logs to a central log store with search/browsing interface.
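To see the control plane components and addons actually running (kube-system is the conventional namespace for them):

    kubectl get pods --namespace kube-system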