Containers are instances of the Docker image you specify, and the first image listed in your configuration is the primary container image, in which all steps run.
In this example, all steps run in the container created by the first image listed under the build job.
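A configuration of that shape might look like the following sketch (image tags and steps are illustrative); the first image under docker: is the primary container:

```yaml
version: 2.1
jobs:
  build:
    docker:
      - image: cimg/python:3.10    # primary container: all steps run here
      - image: cimg/postgres:14.0  # secondary service container
    steps:
      - checkout
      - run: pytest
```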
If your run times increase because you install additional tools during execution, it is best practice to follow the Building Custom Docker Images documentation and create a custom image with those tools pre-loaded, so the container meets the job requirements from the start.
Do not use directories as dependencies for generated targets, ever: a directory's timestamp does not reliably reflect changes to its contents, so make cannot track staleness correctly.
Parallel make: add an explicit timestamp dependency (e.g. a .done stamp file) that make can synchronize threaded jobs on, to avoid race conditions.
Maintain clean targets - makefiles should be able to remove all content that is generated, so that "make clean" returns the sandbox/directory to a pristine state.
Wrap check/unit tests with an ENABLE_TESTS conditional.
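The tips above might be sketched in a makefile like this (target, tool, and variable names are illustrative; the generator and test runner are hypothetical):

```make
# Stamp file gives parallel make a single timestamp to synchronize on;
# depend on gen/.done, never on the gen/ directory itself.
gen/.done: schema.xml
	mkdir -p gen
	./generate-sources schema.xml -o gen
	touch $@

app: gen/.done main.c
	$(CC) -Igen -o $@ main.c

# Tests are wrapped in an ENABLE_TESTS conditional.
ifdef ENABLE_TESTS
check: app
	./run-tests app
endif

# "make clean" returns the directory to a pristine state.
clean:
	rm -rf gen app
```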
A build system, git, and development headers for many popular libraries, so that the most popular Ruby, Python and Node.js native extensions can be compiled without problems.
Nginx 1.18. Disabled by default
production-grade features, such as process monitoring, administration and status inspection.
Redis 5.0. Not installed by default.
The image has an app user with UID 9999 and home directory /home/app. Your application is supposed to run as this user.
Running applications without root privileges is good security practice.
Your application should be placed inside /home/app. Use COPY --chown=app:app so the copied files are owned by the app user.
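In a Dockerfile this might look like the following sketch (base image tag and destination path are illustrative):

```dockerfile
FROM phusion/passenger-ruby32    # base image tag illustrative
# Copy the application so it is owned by the unprivileged app user (UID 9999).
COPY --chown=app:app . /home/app/webapp
```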
Passenger works like mod_ruby, mod_nodejs, etc.: it turns Nginx into an application server and runs your app from Nginx.
You expose your application by placing a virtual host .conf file in the directory /etc/nginx/sites-enabled.
The best way to configure Nginx is by adding .conf files to /etc/nginx/main.d and /etc/nginx/conf.d
files in conf.d are included in the Nginx configuration's http context.
Nginx clears environment variables by default, so any environment variables you set with docker run -e, Docker linking, and /etc/container_environment won't reach Nginx. To preserve these variables, place an Nginx config file ending with .conf in the directory /etc/nginx/main.d, in which you tell Nginx to preserve them.
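Following that pattern, preserving a hypothetical SECRET_KEY variable might look like this (file name and variable are illustrative):

```nginx
# /etc/nginx/main.d/secret_key.conf
# Tell Nginx to preserve this environment variable.
env SECRET_KEY;
```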
By default, Phusion Passenger sets several environment variables (such as RAILS_ENV, RACK_ENV, and NODE_ENV) to the value production.
Setting these environment variables yourself (e.g. using docker run -e RAILS_ENV=...) will not have any effect, because Phusion Passenger overrides all of them. Instead, set the PASSENGER_APP_ENV environment variable, from which the others are derived.
passenger-docker autogenerates an Nginx configuration file (/etc/nginx/conf.d/00_app_env.conf) during container boot.
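Assuming the PASSENGER_APP_ENV mechanism described above, switching the app to a staging environment at container start might look like this (image name is illustrative):

```shell
# PASSENGER_APP_ENV drives the environment that the generated
# /etc/nginx/conf.d/00_app_env.conf applies inside the container.
docker run -e PASSENGER_APP_ENV=staging my-passenger-app
```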
The configuration file is in /etc/redis/redis.conf. Modify it as you see fit, but make sure daemonize no is set.
You can add additional daemons to the image by creating runit entries.
The shell script must be called run, must be executable, and must run the daemon in the foreground without letting it daemonize or fork.
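Following the runit convention described above, registering a daemon might look like this sketch (memcached is just an example service; paths follow the /etc/service layout):

```dockerfile
# In the Dockerfile: register the service with runit.
RUN mkdir /etc/service/memcached
COPY memcached.sh /etc/service/memcached/run
RUN chmod +x /etc/service/memcached/run
```

with a run script that keeps the daemon in the foreground:

```shell
#!/bin/sh
# exec replaces the shell; the daemon must not fork into the background.
exec /sbin/setuser memcache /usr/bin/memcached >> /var/log/memcached.log 2>&1
```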
We use RVM to install and to manage Ruby interpreters.
Kubernetes is all about sharing machines between applications. Sharing machines requires ensuring that two applications do not try to use the same ports; dynamic port allocation could solve that, but it brings a lot of complications to the system.
Every Pod gets its own IP address, so you do not need to explicitly create links between Pods and almost never need to deal with mapping container ports to host ports.
Pods can be treated much like VMs or physical hosts from the
perspectives of port allocation, naming, service discovery, load balancing,
application configuration, and migration.
- pods on a node can communicate with all pods on all nodes without NAT
- agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node
- pods in the host network of a node can communicate with all pods on all nodes without NAT
If your job previously ran in a VM, your VM had an IP and could
talk to other VMs in your project. This is the same basic model.
Containers within a Pod share their network namespaces, including their IP address, so containers within a Pod can all reach each other's ports on localhost; this also means that containers within a Pod must coordinate port usage. This is the "IP-per-pod" model.
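The shared network namespace can be seen in a minimal Pod manifest sketch (names, images, and commands are illustrative): the sidecar reaches the web server on localhost because both containers share the Pod's IP.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80   # must not clash with ports used by the sidecar
  - name: sidecar
    image: busybox
    # Polls the web container over the shared network namespace.
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 10; done"]
```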
You can request ports on the Node itself which forward to your Pod (called host ports), but this is a very niche operation. The Pod itself is blind to the existence or non-existence of host ports.
AOS is an Intent-Based Networking system that creates and manages complex datacenter environments from a simple integrated platform. The AOS Reference Design currently supports Layer-3 connected hosts that eliminate legacy Layer-2 switching problems.
Cisco Application Centric Infrastructure offers an integrated overlay and underlay SDN solution that supports containers, virtual machines, and bare metal servers.
The AWS VPC CNI offers integrated AWS Virtual Private Cloud (VPC) networking for Kubernetes clusters.
users can apply existing AWS VPC networking and security best practices for building Kubernetes clusters.
Using this CNI plugin allows Kubernetes pods to have the same IP address inside the pod as they do on the VPC network.
The CNI allocates AWS Elastic Networking Interfaces (ENIs) to each Kubernetes node and uses the secondary IP range from each ENI for pods on the node.
Big Cloud Fabric is a cloud native networking architecture, designed to run Kubernetes in private cloud/on-premises environments.
Cilium is L7/HTTP aware and can enforce network policies on L3-L7
using an identity based security model that is decoupled from network
addressing.
CNI-Genie is a CNI plugin that enables Kubernetes to simultaneously have access to different implementations of the Kubernetes network model at runtime.
CNI-Genie also supports assigning multiple IP addresses to a pod, each from a different CNI plugin.
cni-ipvlan-vpc-k8s contains a set
of CNI and IPAM plugins to provide a simple, host-local, low latency, high
throughput, and compliant networking stack for Kubernetes within Amazon Virtual
Private Cloud (VPC) environments by making use of Amazon Elastic Network
Interfaces (ENI) and binding AWS-managed IPs into Pods using the Linux kernel’s
IPvlan driver in L2 mode.
The plugins are designed to be straightforward to configure and deploy within a VPC.
Contiv provides configurable networking
Contrail, based on Tungsten Fabric, is a truly open, multi-cloud network virtualization and policy management platform.
DANM is a networking solution for telco workloads running in a Kubernetes cluster.
Flannel is a very simple overlay
network that satisfies the Kubernetes requirements.
Any traffic bound for that subnet will be routed directly to the VM by the GCE network fabric. In addition, IP forwarding must be enabled on the VM so it can forward traffic to its pods: sysctl net.ipv4.ip_forward=1
Jaguar provides overlay network using vxlan and Jaguar CNIPlugin provides one IP address per pod.
Knitter is a network solution which supports multiple networking in Kubernetes.
Kube-OVN is an OVN-based Kubernetes network fabric for enterprises.
Kube-router provides a Linux LVS/IPVS-based service proxy, a Linux kernel forwarding-based pod-to-pod networking solution with no overlays, and iptables/ipset-based network policy enforcer.
If you have a “dumb” L2 network, such as a simple switch in a “bare-metal”
environment, you should be able to do something similar to the above GCE setup.
Multus is a Multi CNI plugin to support the Multi Networking feature in Kubernetes using CRD based network objects in Kubernetes.
NSX-T can provide network virtualization for a multi-cloud and multi-hypervisor environment and is focused on emerging application frameworks and architectures that have heterogeneous endpoints and technology stacks.
NSX-T Container Plug-in (NCP) provides integration between NSX-T and container orchestrators such as Kubernetes
Nuage uses the open source Open vSwitch for the data plane along with a feature rich SDN Controller built on open standards.
Open vSwitch is a somewhat more mature but also more complicated way to build an overlay network.
OVN is an open source network virtualization solution developed by the Open vSwitch community.
Project Calico is an open source container networking provider and network policy engine.
Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet
Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking.
Calico can also be run in policy enforcement mode in conjunction with other networking solutions such as Flannel, aka canal, or native GCE, AWS or Azure networking.
Romana is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network
Weave Net runs as a CNI plug-in or stand-alone. In either version, it doesn't require any configuration or extra code to run, and in both cases the network provides one IP address per pod, as is standard for Kubernetes.
The network model is implemented by the container runtime on each node.
Run one or two control plane instances per failure zone, scaling those instances vertically first and then scaling horizontally after reaching the point of falling returns to (vertical) scaling.
Kubernetes nodes do not automatically steer traffic towards control-plane endpoints that are in the same failure zone.
To reduce load on the main etcd cluster, you can store Event objects in a separate, dedicated etcd instance: start and configure the additional etcd instance, then point the API server at it for events.
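Assuming a dedicated events etcd is already running at a reachable address (addresses here are illustrative), the API server can be pointed at it with the --etcd-servers-overrides flag:

```shell
# Route Event objects to a dedicated etcd; everything else stays on the main cluster.
kube-apiserver \
  --etcd-servers=https://etcd-main:2379 \
  --etcd-servers-overrides=/events#https://etcd-events:2379
```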
Kubernetes resource limits help to minimize the impact of memory leaks and other ways that pods and containers can impact other components.
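As a reminder of the mechanism, per-container requests and limits are declared in the pod spec; this sketch uses illustrative names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: addon-example
spec:
  containers:
  - name: agent
    image: example/agent:1.0   # illustrative image
    resources:
      requests:
        cpu: 100m        # the scheduler reserves this much
        memory: 128Mi
      limits:
        cpu: 250m        # throttled above this
        memory: 256Mi    # OOM-killed above this
```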
Addons' default limits are typically based on data collected from experience running
each addon on small or medium Kubernetes clusters.
When running on large
clusters, addons often consume more of some resources than their default limits.
Many addons scale horizontally - you add capacity by running more pods
The VerticalPodAutoscaler can run in recommender mode to provide suggested
figures for requests and limits.
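A VerticalPodAutoscaler in recommender-only mode sets updateMode to "Off", so it computes suggested requests and limits without evicting pods; the target name below is illustrative:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: addon-recommender
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-addon     # illustrative target workload
  updatePolicy:
    updateMode: "Off"       # recommend only; do not apply changes
```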
Some addons run as one copy per node, controlled by a DaemonSet: for example, a node-level log aggregator.
VerticalPodAutoscaler is a custom resource that you can deploy into your cluster
to help you manage resource requests and limits for pods.
The cluster autoscaler
integrates with a number of cloud providers to help you run the right number of
nodes for the level of resource demand in your cluster.
The addon resizer
helps you in resizing the addons automatically as your cluster's scale changes.