For a lot of people, the name "Docker" is synonymous with the word "container".
Docker created a very ergonomic (nice-to-use) tool for working with containers – also called docker.
docker is designed to be installed on a workstation or server, and comes with a bunch of tools to make it easy to build and run containers as a developer or DevOps person.
containerd: This is a daemon process that manages and runs containers.
runc: This is the low-level container runtime (the thing that actually creates and runs containers).
libcontainer: This is a native Go-based implementation for creating containers.
Kubernetes includes a component called dockershim, which allows it to support Docker.
Kubernetes prefers to run containers through any container runtime which supports its Container Runtime Interface (CRI).
Kubernetes will remove support for Docker directly and prefer to use only container runtimes that implement its Container Runtime Interface.
Both containerd and CRI-O can run Docker-formatted (actually OCI-formatted) images, they just do it without having to use the docker command or the Docker daemon.
Docker images are actually images packaged in the Open Container Initiative (OCI) format.
CRI is the API that Kubernetes uses to control the different runtimes that create and manage containers.
CRI makes it easier for Kubernetes to use different container runtimes.
containerd is a high-level container runtime that came from Docker and implements the CRI spec.
containerd was separated out of the Docker project to make Docker more modular.
CRI-O is another high-level container runtime which implements the Container Runtime Interface (CRI).
The idea behind the OCI is that you can choose between different runtimes which conform to the spec.
runc is an OCI-compatible container runtime; it is the reference implementation of the OCI runtime spec.
A reference implementation is a piece of software that has implemented all the requirements of a specification or standard.
runc provides all of the low-level functionality for containers, interacting with existing low-level Linux features, like namespaces and control groups.
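To make "low-level" concrete, here is a minimal sketch of running a container directly with runc; the bundle directory, container name, and rootfs contents are illustrative, and the rootfs must be populated separately (for example, by exporting a container image):

```sh
# Prepare an OCI bundle: a directory with a rootfs/ and a config.json
mkdir -p mycontainer/rootfs   # rootfs/ must be filled with a root filesystem
cd mycontainer
runc spec                     # generate a default config.json for this bundle
sudo runc run demo            # create and run a container named "demo" from the bundle
```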
LXC (LinuX Containers) is an OS-level virtualization technology that allows the creation and running of multiple isolated Linux virtual environments (VEs) on a single control host.
Docker started as a side project at dotCloud (the company that later renamed itself Docker, Inc.) and was only open-sourced in 2013. It is really an extension of LXC's capabilities.
Docker is developed in the Go language and utilizes LXC, cgroups, and the Linux kernel itself. Since it’s based on LXC, a Docker container does not include a separate operating system; instead it relies on the operating system’s own functionality as provided by the underlying infrastructure.
Docker acts as a portable container engine, packaging the application and all its dependencies in a virtual container that can run on any Linux server.
In a VE, there is no preloaded emulation manager software as in a VM.
In a VE, the application (or OS) is spawned in a container and runs with no added overhead, except for a usually minuscule VE initialization process.
LXC boasts bare-metal performance characteristics because it only packages the needed applications; the OS is also just another application that can be packaged too.
Contrast this with a VM, which packages the entire OS and machine setup, including hard drive, virtual processors, and network interfaces. The resulting bloated mass usually takes a long time to boot and consumes a lot of CPU and RAM.
That said, VEs don't offer some other neat features of VMs, such as IaaS setups and live migration.
Think of LXC as a supercharged chroot on Linux: it allows you to isolate not only applications, but even the entire OS.
Libvirt allows the use of containers through the LXC driver by connecting to 'lxc:///'.
The LXC userspace tools, by contrast, are not compatible with libvirt, but are more flexible, with more userspace tooling.
Portable deployment across machines
Versioning: Docker includes git-like capabilities for tracking successive versions of a container
Component reuse: Docker allows building or stacking of already created packages.
Shared libraries: There is already a public registry (http://index.docker.io/) where thousands of people have uploaded the useful containers they have created.
Docker has been taking the DevOps world by storm since its launch back in 2013.
LXC, while older, has not been as popular with developers as Docker has proven to be.
LXC has a focus on sysadmins similar to that of solutions like the Solaris operating system with its Solaris Zones, Linux OpenVZ, and FreeBSD with its BSD Jails virtualization system.
Though it started out being built on top of LXC, Docker later moved beyond LXC containers to its own execution environment, called libcontainer.
Unlike LXC, which launches an operating system init for each container, Docker provides one OS environment, supplied by the Docker Engine.
LXC tooling sticks close to what system administrators running bare-metal servers are used to.
The LXC command line provides essential commands that cover routine management tasks, including the creation, launch, and deletion of LXC containers.
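As a rough sketch of that routine workflow (the container name, distribution, and release are illustrative):

```sh
sudo lxc-create -n web -t download -- -d ubuntu -r jammy -a amd64  # create a container
sudo lxc-start -n web      # launch it
sudo lxc-attach -n web     # get a shell inside the running container
sudo lxc-stop -n web       # stop it
sudo lxc-destroy -n web    # delete it
```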
Docker containers aim to be even lighter weight in order to support the fast, highly scalable, deployment of applications with microservice architecture.
With backing from Canonical, LXC and LXD have an ecosystem tightly bound to the rest of the open source Linux community.
Docker's broader ecosystem includes Docker Swarm, Docker Trusted Registry, Docker Compose, and Docker Machine.
Kubernetes facilitates the deployment of containers in your data center by representing a cluster of servers as a single system.
Swarm is Docker’s clustering, scheduling and orchestration tool for managing a cluster of Docker hosts.
rkt is a security-minded container engine that uses KVM for VM-based isolation and packs other enhanced security features.
Apache Mesos can run different kinds of distributed jobs, including containers.
Elastic Container Service is Amazon’s service for running and orchestrating containerized applications on AWS
LXC offers the advantages of a VE on Linux, mainly the ability to isolate your own private workloads from one another. It is a cheaper and faster solution to implement than a VM, but doing so requires a bit of extra learning and expertise.
Docker is a significant improvement on LXC's capabilities.
NodePort, by design, bypasses almost all network security in Kubernetes.
NetworkPolicy resources can currently only control NodePorts by allowing or disallowing all traffic on them.
One mitigation is to put a network filter in front of all the nodes; otherwise, if a NodePort-ranged Service is advertised to the public, it may serve as an invitation to black hats to scan and probe.
When Kubernetes creates a NodePort service, it allocates a port from a range specified in the flags that define your Kubernetes cluster. (By default, these are ports ranging from 30000-32767.)
By design, Kubernetes NodePort cannot expose standard low-numbered ports like 80 and 443, or even 8080 and 8443.
A port in the NodePort range can be specified manually, but this would mean the creation of a list of non-standard ports, cross-referenced with the applications they map to.
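A minimal sketch of such a Service with a manually chosen NodePort (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80          # the Service's cluster-internal port
      targetPort: 8080  # the container port on the backing pods
      nodePort: 30080   # manually chosen; must fall within 30000-32767 (omit to auto-allocate)
```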
And if you want the exposed application to be highly available, everything contacting the application has to know all of your node addresses, or at least more than one, along with those non-standard ports.
Ingress resources use an Ingress controller (the nginx one is common but not by any means the only choice) and an external load balancer or public IP to enable path-based routing of external requests to internal Services.
With a single point of entry to expose and secure, you also get simpler TLS management.
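A minimal sketch of such an Ingress (the hostname, path, Secret, and Service names are illustrative, and an Ingress controller must already be installed):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  tls:
    - hosts:
        - example.com
      secretName: example-tls   # hypothetical Secret holding the TLS certificate
  rules:
    - host: example.com
      http:
        paths:
          - path: /app
            pathType: Prefix
            backend:
              service:
                name: web       # requests to example.com/app are routed to this Service
                port:
                  number: 80
```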
Consider putting a real load balancer in front of your NodePort Services before opening them up to the world.
Google very recently released an alpha-stage bare-metal load balancer that, once installed in your cluster, will load-balance using BGP.
In short, NodePort Services are easy to create but hard to secure, hard to manage, and not especially friendly to others.
If he could write a decent compiler, wouldn't that be software everyone needs?
So he began writing a C compiler, which became the now rather famous GNU C Compiler (gcc)!
He also wrote more C libraries that programs could call (the GNU C library), as well as the BASH shell, the basic interface used to operate the operating system!
All of these were completed around 1990!
In view of the growing demand for graphical user interfaces (Graphical User Interface, GUI), MIT and other partner vendors first released the X Window System in 1984, and in 1988 the non-profit XFree86 organization was founded. The name XFree86 is actually a combination of X Window System + Free + x86!
For network transmission, since networks use bits as the unit, the commonly used network unit Mbps means Mbits per second, i.e., how many megabits are transferred each second.
(1) Northbridge: connects the faster components, such as the CPU, main memory, and the graphics card interface.
(2) Southbridge: connects the slower device interfaces, including hard disks, USB, network cards, and so on.
A CPU contains a microinstruction set; different microinstruction sets lead to better or worse CPU working efficiency.
The clock rate is the number of operations a CPU can perform per second, so a higher clock rate means the CPU can do more work per unit of time.
Early CPU architectures linked the system's most important components, the CPU, main memory, and the graphics card, mainly through the northbridge. Because every device had to connect through the northbridge, every device was expected to run at the same working frequency. The bus that links the CPU to the northbridge is the front-side bus (FSB).
The external clock (外頻) is the speed at which the CPU transfers data to and from external components, while the multiplier (倍頻) is a factor the CPU uses internally to boost its working speed; the CPU's actual frequency is the external clock times the multiplier (for example, a 100 MHz external clock with a 35x multiplier gives a 3.5 GHz CPU).
In newer CPU designs, the memory controller has been integrated into the CPU itself; for connecting the CPU to main memory and the graphics card, Intel uses QPI (Quick Path Interconnect) and DMI technologies, while AMD uses HyperTransport. These technologies let the CPU communicate directly and separately with main memory, the graphics card, and other devices, without going through an external bridge chip. How do you know how much data main memory can supply? That still comes down to the transfer speed between the CPU's memory controller chip and main memory, the "front-side bus (FSB) speed".
In both the required_version and required_providers settings, each override constraint entirely replaces the constraints for the same component in the original block. If both the base block and the override block set required_version, then the constraints in the base block are entirely ignored.
Terraform normally loads all of the .tf and .tf.json files within a
directory and expects each one to define a distinct set of configuration
objects.
If two files attempt to define the same object, Terraform returns
an error.
For example, a human-edited configuration file in the Terraform language native syntax could be partially overridden using a programmatically-generated file in JSON syntax.
Terraform has special handling of any configuration file whose name ends in _override.tf or _override.tf.json. Terraform initially skips these override files when loading configuration, and then afterwards processes each one in turn (in lexicographical order), merging the override block contents into the existing objects.
Over-use of override files
hurts readability, since a reader looking only at the original files cannot
easily see that some portions of those files have been overridden without
consulting all of the override files that are present.
When using override
files, use comments in the original files to warn future readers about which
override files apply changes to each block.
A top-level block in an override file merges with a block in a normal
configuration file that has the same block header.
Within a top-level block, an attribute argument within an override block
replaces any argument of the same name in the original block.
Within a top-level block, any nested blocks within an override block replace
all blocks of the same type in the original block.
The contents of nested configuration blocks are not merged.
If more than one override file defines the same top-level block, the overriding effect is compounded, with later blocks taking precedence over earlier blocks.
The settings within terraform blocks are considered individually when
merging.
If the required_providers argument is set, its value is merged on an
element-by-element basis, which allows an override block to adjust the
constraint for a single provider without affecting the constraints for
other providers.
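A minimal sketch of these merging rules (the file names follow the override convention; resource and provider values are illustrative):

```hcl
# main.tf
resource "aws_instance" "web" {
  instance_type = "t2.micro"
  ami           = "ami-12345678"
}

# override.tf -- merged over main.tf at load time
resource "aws_instance" "web" {
  instance_type = "t2.large"  # replaces the argument of the same name; ami is left untouched
}

# versions_override.tf -- required_providers merges element by element
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"      # only the aws entry is overridden; other providers keep their constraints
    }
  }
}
```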
"In both the required_version and required_providers settings, each override constraint entirely replaces the constraints for the same component in the original block. "
Each root terragrunt.hcl file (the one at the environment level, e.g. prod/terragrunt.hcl) should define a generate block to generate the AWS provider configuration to assume the role for that environment.
The include block tells Terragrunt to use the exact same Terragrunt configuration from the terragrunt.hcl file
specified via the path parameter.
"Each root terragrunt.hcl file (the one at the environment level, e.g prod/terragrunt.hcl) should define a generate block to generate the AWS provider configuration to assume the role for that environment. "
The generate attribute is used to inform Terragrunt to generate the Terraform code for configuring the backend.
The find_in_parent_folders() helper will automatically search up the directory tree to find the root terragrunt.hcl and inherit the remote_state configuration from it.
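A minimal sketch of such a child configuration (the directory layout is illustrative):

```hcl
# prod/app/terragrunt.hcl
include {
  path = find_in_parent_folders()  # locate the root terragrunt.hcl and inherit its configuration
}
```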
Unlike the backend configurations, provider configurations support variables, but if you needed to modify the configuration to expose another parameter (e.g., session_name), you would have to go through each of your modules to make this change.
This instructs Terragrunt to create the file provider.tf in the working directory (where Terragrunt calls terraform) before it calls any of the Terraform commands.
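A sketch of such a generate block (the region, account ID, and role name are illustrative):

```hcl
# prod/terragrunt.hcl
generate "provider" {
  path      = "provider.tf"   # file written into the working directory before terraform runs
  if_exists = "overwrite"
  contents  = <<EOF
provider "aws" {
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::123456789012:role/terragrunt"
  }
}
EOF
}
```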
Large modules should be considered harmful.
it is a Bad Idea to define all of your environments (dev, stage, prod, etc), or even a large amount of infrastructure (servers, databases, load balancers, DNS, etc), in a single Terraform module.
Large modules are slow, insecure, hard to update, hard to code review, hard to test, and brittle (i.e., you have all your eggs in one basket).
Terragrunt allows you to define your Terraform code once and to promote a versioned, immutable “artifact” of that exact same code from environment to environment.
All containers are restarted after upgrade, because the container spec hash value is changed.
The upgrade procedure on control plane nodes should be executed one node at a time.
/etc/kubernetes/admin.conf
kubeadm upgrade also automatically renews the certificates that it manages on this node.
To opt out of certificate renewal, the flag --certificate-renewal=false can be used.
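A sketch of the upgrade flow on control plane nodes (the target version is illustrative):

```sh
# On the first control plane node:
sudo kubeadm upgrade plan            # check which versions are available
sudo kubeadm upgrade apply v1.28.4   # upgrade; managed certificates are renewed automatically
# On each remaining control plane node, one at a time:
sudo kubeadm upgrade node
# To opt out of automatic certificate renewal:
sudo kubeadm upgrade apply v1.28.4 --certificate-renewal=false
```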
In a cluster, logs should have a separate storage and lifecycle independent of nodes, pods, or containers. This concept is called cluster-level logging.
Cluster-level logging architectures require a separate backend to store, analyze, and query logs
Kubernetes
does not provide a native storage solution for log data.
You can use kubectl logs --previous to retrieve logs from a previous instantiation of a container.
A container engine handles and redirects any output written to a containerized application's stdout and stderr streams.
The Docker JSON logging driver treats each line as a separate message.
By default, if a container restarts, the kubelet keeps one terminated container with its logs.
An important consideration in node-level logging is implementing log rotation,
so that logs don't consume all available storage on the node.
You can also set up a container runtime to
rotate an application's logs automatically.
The two kubelet flags container-log-max-size and container-log-max-files can be used to configure the maximum size for each log file and the maximum number of files allowed for each container respectively.
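The same limits can be set via the equivalent KubeletConfiguration fields; a sketch with illustrative values:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: "10Mi"  # rotate each container's log file once it reaches 10 MiB
containerLogMaxFiles: 5      # keep at most 5 log files per container
```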
The kubelet and container runtime do not run in containers.
On machines with systemd, the kubelet and container runtime write to journald. If
systemd is not present, the kubelet and container runtime write to .log files
in the /var/log directory.
System components inside containers always write
to the /var/log directory, bypassing the default logging mechanism.
Kubernetes does not provide a native solution for cluster-level logging
Use a node-level logging agent that runs on every node.
You can implement cluster-level logging by including a node-level logging agent on each node. The logging agent is a container that has access to a directory with log files from all of the application containers on that node. Because the logging agent must run on every node, it is recommended to run the agent as a DaemonSet.
Node-level logging creates only one agent per node and doesn't require any changes to the applications running on the node.
Containers write to stdout and stderr, but with no agreed format. A node-level agent collects these logs and forwards them for aggregation, as in the sketch below.
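A minimal sketch of such a DaemonSet (the agent image and mount paths are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: log-agent
  template:
    metadata:
      labels:
        name: log-agent
    spec:
      containers:
        - name: agent
          image: fluent/fluentd:v1.16  # stand-in for whatever node-level agent you use
          volumeMounts:
            - name: varlog
              mountPath: /var/log      # gives the agent access to the node's log directory
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
```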
Each sidecar container prints a log to its own stdout or stderr stream.
It is not recommended to write log entries with different formats to the same log stream. Also beware that writing logs to a file and then streaming them to stdout can double disk usage.
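A minimal sketch of that streaming-sidecar pattern, which also shows exactly the doubled-usage caveat, since the log lives both in the shared file and in the sidecar's stdout log (images and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  containers:
    - name: app
      image: busybox
      command: ['sh', '-c', 'while true; do date >> /var/log/app.log; sleep 1; done']
      volumeMounts:
        - name: logs
          mountPath: /var/log
    - name: log-streamer  # sidecar: republishes the shared log file on its own stdout
      image: busybox
      command: ['sh', '-c', 'tail -n+1 -F /var/log/app.log']
      volumeMounts:
        - name: logs
          mountPath: /var/log
  volumes:
    - name: logs
      emptyDir: {}
```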
If you have an application that writes to a single file, it's recommended to set /dev/stdout as the destination instead. In general, it's recommended to use stdout and stderr directly and leave rotation and retention policies to the kubelet.
Using a logging agent in a sidecar container can lead
to significant resource consumption. Moreover, you won't be able to access
those logs using kubectl logs because they are not controlled
by the kubelet.