The control plane's components make global decisions about the cluster.
Control plane components can be run on any machine in the cluster.
For simplicity, setup scripts typically start all control plane components on
the same machine, and do not run user containers on this machine.
The API server is the front end for the Kubernetes control plane.
kube-apiserver is designed to scale horizontally—that is, it scales by deploying more instances.
You can run several instances of kube-apiserver and balance traffic between those instances.
If your Kubernetes cluster uses etcd as its backing store, make sure you have a
backup plan for the data.
kube-scheduler watches for newly created Pods with no assigned node, and
selects a node for them to run on.
Factors taken into account for scheduling decisions include:
individual and collective resource requirements, hardware/software/policy
constraints, affinity and anti-affinity specifications, data locality,
inter-workload interference, and deadlines.
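For instance, resource requests and affinity rules are both declared on the Pod itself. A minimal sketch follows; the disktype label, image, and values are illustrative assumptions, not taken from the original text:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scheduling-example
spec:
  affinity:
    nodeAffinity:
      # Hard constraint: only schedule onto nodes carrying this label.
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype        # illustrative node label
            operator: In
            values:
            - ssd
  containers:
  - name: app
    image: nginx
    resources:
      requests:                  # resource requirements the scheduler weighs
        cpu: "500m"
        memory: "256Mi"
```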
Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process. These controllers include:
Node controller
Job controller
Endpoints controller
Service Account & Token controllers
The cloud controller manager lets you link your
cluster into your cloud provider's API, and separates out the components that interact
with that cloud platform from components that only interact with your cluster.
If you are running Kubernetes on your own premises, or in a learning environment inside your
own PC, the cluster does not have a cloud controller manager.
The kubelet is an agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.
The kubelet doesn't manage containers which were not created by Kubernetes.
kube-proxy is a network proxy that runs on each
node in your cluster,
implementing part of the Kubernetes
Service concept.
kube-proxy
maintains network rules on nodes. These network rules allow network
communication to your Pods from network sessions inside or outside of
your cluster.
kube-proxy uses the operating system packet filtering layer if there is one
and it's available.
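As context, a Service manifest such as the following (name, selector, and ports are illustrative) is what kube-proxy translates into forwarding rules on each node:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service          # illustrative name
spec:
  selector:
    app: my-app              # traffic is forwarded to Pods with this label
  ports:
  - protocol: TCP
    port: 80                 # the Service's own port
    targetPort: 8080         # the container port on the backing Pods
```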
Kubernetes supports several container runtimes: Docker,
containerd, CRI-O,
and any implementation of the Kubernetes CRI (Container Runtime
Interface).
Addons use Kubernetes resources (DaemonSet, Deployment, etc.) to implement
cluster features. Because these provide cluster-level features, namespaced
resources for addons belong within the kube-system namespace.
All Kubernetes clusters should have cluster DNS.
Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services.
Containers started by Kubernetes automatically include this DNS server in their DNS searches; under the default cluster domain, for example, a Service is typically resolvable at a name like my-service.my-namespace.svc.cluster.local.
Container Resource Monitoring records generic time-series metrics
about containers in a central database, and provides a UI for browsing that data.
A cluster-level logging mechanism is responsible for
saving container logs to a central log store with a search/browsing interface.
Ingress exposes HTTP and HTTPS routes from outside the cluster to
services within the cluster.
Traffic routing is controlled by rules defined on the Ingress resource.
An Ingress can be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
Exposing services other than HTTP and HTTPS to the internet typically
uses a service of type Service.Type=NodePort or
Service.Type=LoadBalancer.
You must have an ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
As with all other Kubernetes resources, an Ingress needs apiVersion, kind, and metadata fields.
Ingress frequently uses annotations to configure some options depending on the Ingress controller; an example is the rewrite-target annotation.
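A minimal Ingress might look like the following sketch, using the networking.k8s.io/v1 schema; the path, Service name, and annotation shown are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    # Controller-specific option; this one is recognized by ingress-nginx.
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test           # illustrative Service name
            port:
              number: 80
```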
The Ingress resource only supports rules for directing HTTP traffic. Each HTTP rule contains the following information:
An optional host.
A list of paths, each of which has an associated backend. A backend is a combination of Service and port names.
Both the host and path must match the content of an incoming request before the
load balancer directs traffic to the referenced Service.
HTTP (and HTTPS) requests to the
Ingress that match the host and path of the rule are sent to the listed backend.
A default backend is often configured in an Ingress controller to service any requests that do not
match a path in the spec.
An Ingress with no rules sends all traffic to a single default backend.
Ingress controllers and load balancers may take a minute or two to allocate an IP address.
A fanout configuration routes traffic from a single IP address to more than one Service,
based on the HTTP URI being requested.
Depending on the controller, a fanout Ingress may use an annotation such as nginx.ingress.kubernetes.io/rewrite-target: /.
After creating the Ingress, you can inspect it with kubectl describe ingress and kubectl get ingress.
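A sketch of such a fanout; hosts, paths, Service names, and ports are all illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout-example
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: service1       # /foo traffic goes here
            port:
              number: 4200
      - path: /bar
        pathType: Prefix
        backend:
          service:
            name: service2       # /bar traffic goes here
            port:
              number: 8080
```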
Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address.
An Ingress with name-based virtual hosts routes requests based on
the Host header.
If you create an Ingress resource without any hosts defined in the rules, then any
web traffic to the IP address of your Ingress controller can be matched without a name-based
virtual host being required.
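A sketch of name-based virtual hosting, where both hosts share one IP address; hostnames and Service names are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
  - host: foo.bar.com              # requests with Host: foo.bar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
  - host: bar.foo.com              # requests with Host: bar.foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80
```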
You can secure an Ingress by specifying a Secret
that contains a TLS private key and certificate.
Currently the Ingress only
supports a single TLS port, 443, and assumes TLS termination.
An Ingress controller is bootstrapped with some load balancing policy settings
that it applies to all Ingress, such as the load balancing algorithm, backend
weight scheme, and others.
More advanced load balancing concepts (e.g. persistent sessions, dynamic
weights) are not yet exposed through the
Ingress. You can instead get these features through the load balancer used for
a Service.
Review the controller-specific documentation to see how they handle health checks.
To update an existing Ingress, you can run kubectl edit ingress on the resource.
After you save your changes, kubectl updates the resource in the API server, which tells the
Ingress controller to reconfigure the load balancer.
You can achieve the same outcome by invoking kubectl replace -f on a modified Ingress YAML file.
Node: A worker machine in Kubernetes, part of a cluster. In most common Kubernetes deployments, nodes in the cluster are not part of the public internet.
Edge router: A router that enforces the firewall policy for your cluster. This could be a gateway managed by a cloud provider or a physical piece of hardware.
Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model.
Service: A Kubernetes Service that identifies a set of Pods using label selectors. Unless mentioned otherwise, Services are assumed to have virtual IPs only routable within the cluster network.
An Ingress does not expose arbitrary ports or protocols.
The name of an Ingress object must be a valid
DNS subdomain name.
The Ingress spec
has all the information needed to configure a load balancer or proxy server.
The Ingress resource only supports rules
for directing HTTP(S) traffic.
An Ingress with no rules sends all traffic to a single default backend and .spec.defaultBackend
is the backend that should handle requests in that case.
If defaultBackend is not set, the handling of requests that do not match any of the rules will be up to the
ingress controller
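A sketch of an Ingress that relies solely on .spec.defaultBackend; the Service name is an illustrative catch-all:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-default-backend
spec:
  defaultBackend:
    service:
      name: default-http-backend   # illustrative catch-all Service
      port:
        number: 80
```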
A common
usage for a Resource backend is to ingress data to an object storage backend
with static assets.
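A sketch of a Resource backend; the apiGroup and StorageBucket kind stand in for a hypothetical custom resource backed by object storage:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource-backend
spec:
  defaultBackend:
    resource:
      apiGroup: k8s.example.com    # hypothetical API group
      kind: StorageBucket          # hypothetical custom resource
      name: static-assets
```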
Exact: Matches the URL path exactly and with case sensitivity.
Prefix: Matches based on a URL path prefix split by /. Matching is case
sensitive and done on a path element by element basis.
In some cases, multiple paths within an Ingress will match a request. In those
cases precedence will be given first to the longest matching path.
Hosts can be precise matches (for example "foo.bar.com") or a wildcard (for
example "*.foo.com"). A wildcard covers only a single DNS label: "*.foo.com"
matches "bar.foo.com" but does not match "baz.bar.foo.com" or "foo.com".
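A sketch combining an Exact path, a Prefix path, and a wildcard host; all names are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-path-and-wildcard
spec:
  rules:
  - host: "foo.bar.com"
    http:
      paths:
      - path: /exact
        pathType: Exact            # matches /exact only, case-sensitively
        backend:
          service:
            name: service1
            port:
              number: 80
  - host: "*.foo.com"              # matches bar.foo.com, not baz.bar.foo.com
    http:
      paths:
      - path: /
        pathType: Prefix           # matches any path under /
        backend:
          service:
            name: service2
            port:
              number: 80
```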
Each Ingress should specify a class, a reference to an
IngressClass resource that contains additional configuration including the name
of the controller that should implement the class.
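A sketch of an IngressClass; the controller string is an illustrative placeholder:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: external-lb                # referenced by an Ingress's .spec.ingressClassName
spec:
  controller: example.com/ingress-controller   # illustrative controller name
```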
You can secure an Ingress by specifying a Secret
that contains a TLS private key and certificate.
The Ingress resource only
supports a single TLS port, 443, and assumes TLS termination at the ingress point
(traffic to the Service and its Pods is in plaintext).
TLS will not work on the default rule because the
certificates would have to be issued for all the possible sub-domains.
The hosts in the tls section need to explicitly match the host in the rules
section.
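A sketch of such a TLS configuration; the Secret's data must hold keys named tls.crt and tls.key, the certificate data below is a placeholder, and the hostname and Service name are illustrative. Note that the host in the tls section matches the host in the rules section, as required:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: testsecret-tls
type: kubernetes.io/tls
data:
  tls.crt: base64 encoded cert     # placeholder, not real data
  tls.key: base64 encoded key      # placeholder, not real data
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
  - hosts:
      - https-example.foo.com
    secretName: testsecret-tls
  rules:
  - host: https-example.foo.com    # must match the tls hosts entry
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service1         # illustrative Service name
            port:
              number: 80
```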