Control structures (called "actions" in template parlance) provide you, the
template author, with the ability to control the flow of a template's
generation.
The templates/ directory is for template files. When Helm evaluates a chart,
it will send all of the files in the templates/ directory through the template
rendering engine. It then collects the results of those templates and sends them
on to Kubernetes.
The charts/ directory may contain other charts
(which we call subcharts).
We
recommend using the suffix .yaml for YAML files and .tpl for helpers.
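As a rough sketch, the layout described above looks like this (the specific file names under templates/ are illustrative):

    mychart/
      Chart.yaml
      values.yaml
      charts/              # subcharts live here
      templates/
        deployment.yaml    # sent through the template rendering engine
        _helpers.tpl       # template helpers, following the .tpl convention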
The helm get manifest command takes a release name (full-coral) and prints
out all of the Kubernetes resources that were uploaded to the server.
Each file
begins with --- to indicate the start of a YAML document, and then is followed
by an automatically generated comment line that tells us what template file
generated this YAML document.
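For example, for a release named full-coral whose chart contains a ConfigMap template, the output looks roughly like this (chart and template names are illustrative):

    $ helm get manifest full-coral

    ---
    # Source: mychart/templates/configmap.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: full-coral-configmap
    data:
      myvalue: "Hello World"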
The name: field is limited to 63 characters because of limitations to
the DNS system.
The template directive {{ .Release.Name }} injects the release name into the
template. The values that are passed into a template can be thought of as
namespaced objects, where a dot (.) separates each namespaced element.
The leading dot before Release indicates that we start with the top-most
namespace for this scope
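A minimal sketch of such a directive inside a ConfigMap template (the data key is illustrative):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: {{ .Release.Name }}-configmap
    data:
      myvalue: "Hello World"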
Running helm install --debug --dry-run goodly-guppy ./mychart will render the templates, but instead
of installing the chart, it will return the rendered templates to you.
Using --dry-run will make it easier to test your code, but it won't ensure
that Kubernetes itself will accept the templates you generate.
It's best not to
assume that your chart will install just because --dry-run works.
The fail function unconditionally returns an empty string and an error with the specified
text.
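A hedged sketch of using fail to abort rendering when a required value is missing (the .Values.region key is made up for illustration):

    {{- if not .Values.region }}
    {{- fail "Please set a value for region" }}
    {{- end }}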
The ternary function takes two values, and a test value. If the test value is
true, the first value will be returned. If the test value is empty, the second
value will be returned.
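For example (the pingEnabled value is made up):

    # renders "ping" when .Values.pingEnabled is true, otherwise "pong"
    mode: {{ ternary "ping" "pong" .Values.pingEnabled }}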
When injecting strings from the .Values
object into the template, we ought to quote these strings.
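For example (the value name is illustrative):

    drink: {{ quote .Values.favorite.drink }}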
Helm has over 60 available functions. Some of them are defined by the
Go
template language itself. Most of the others
are part of the
Sprig template library. While we talk about the "Helm template language" as if it is Helm-specific, it
is actually a combination of the Go template language, some extra functions,
and a variety of wrappers to expose certain objects to the templates.
Drawing on a concept from UNIX, pipelines are a tool for chaining
together a series of template commands to compactly express a series of
transformations.
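For example, a value can be sent through several functions in sequence (the value name is illustrative):

    food: {{ .Values.favorite.food | upper | quote }}    # e.g. renders as food: "PIZZA"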
The default function: default DEFAULT_VALUE GIVEN_VALUE. All static default values should live in the values.yaml,
and should not be repeated using the default command (otherwise they would be
redundant).
The default command, however, is perfect for computed values, which
cannot be declared inside values.yaml.
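A sketch of both uses (the value names are illustrative):

    # static default: better declared in values.yaml, shown only for contrast
    drink: {{ .Values.favorite.drink | default "tea" | quote }}
    # computed default: falls back to the release name, which cannot be declared in values.yaml
    name: {{ .Values.nameOverride | default .Release.Name }}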
When lookup returns an object, it will return a dictionary.
The synopsis of the lookup function is lookup apiVersion, kind, namespace, name -> resource or resource list
When no object is found, an empty value is returned. This can be used to check
for the existence of an object.
The lookup function uses Helm's existing Kubernetes connection configuration
to query Kubernetes.
Helm is not supposed to contact the Kubernetes API Server
during a helm template or a helm install|upgrade|delete|rollback --dry-run,
so the lookup function will return an empty list (i.e. dict) in such a case.
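A hedged sketch of checking whether an object exists (the ConfigMap name and namespace are illustrative):

    {{- $cm := lookup "v1" "ConfigMap" "default" "my-configmap" }}
    {{- if $cm }}
    # the ConfigMap exists; $cm behaves like a dictionary (e.g. $cm.data)
    {{- else }}
    # nothing found: lookup returned an empty value (e.g. during helm template)
    {{- end }}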
the operators (eq, ne, lt, gt, and, or and so on) are
all implemented as functions. In pipelines, operations can be grouped with
parentheses ((, and )).
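Since they are functions, the operators are written in prefix form, for example (value names are illustrative):

    {{ if and (eq .Values.env "production") (not .Values.debug) }}
    replicas: 3
    {{ end }}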
Values can come from several sources: the values.yaml file in the chart; if this is
a subchart, the values.yaml file of a parent chart; a values file passed to helm
install or helm upgrade with the -f flag; and individual parameters passed with --set.
The list above is in order of specificity: values.yaml is the default, which
can be overridden by a parent chart's values.yaml, which can in turn be
overridden by a user-supplied values file, which can in turn be overridden by
--set parameters.
--set has a higher precedence than the default values.yaml file
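For example, with all three mechanisms in play, the --set value wins (the value and file names are illustrative):

    # chart's values.yaml:        favoriteDrink: coffee
    # override.yaml (user file):  favoriteDrink: water
    helm install solid-vulture ./mychart -f override.yaml --set favoriteDrink=slurm
    # templates rendered for this release see favoriteDrink: slurm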
Values files can contain more structured content
If you need to delete a key from the default values, you may override the value
of the key to be null, in which case Helm will remove the key from the
overridden values merge.
Kubernetes would then fail because you cannot declare more than one
livenessProbe handler.
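A sketch of such an override, loosely based on the drupal livenessProbe example from the Helm documentation: the default httpGet handler is removed with null so that only the newly supplied exec handler remains:

    helm install stable/drupal \
      --set image=my-registry/drupal:0.1.0 \
      --set livenessProbe.exec.command=[cat,docroot/CHANGELOG.txt] \
      --set livenessProbe.httpGet=null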
One way of taking advantage of the flexibility of NGINX access logging is application performance monitoring (APM).
It's simple to get detailed visibility into the performance of your applications by adding timing values to your code and passing them as response headers for inclusion in the NGINX access log.
$request_time – Full request time, starting when NGINX reads the first byte from the client and ending when NGINX sends the last byte of the response body
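A hedged sketch of a log_format that records $request_time alongside an application timing header (the format name apm and the X-App-Time response header are assumptions):

    log_format apm '$remote_addr [$time_local] "$request" '
                   'request_time=$request_time '
                   'app_time=$upstream_http_x_app_time';

    access_log /var/log/nginx/apm.log apm;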
MinIO is purpose-built to take full advantage of the Kubernetes architecture.
MinIO and Kubernetes work together to simplify infrastructure management, providing a way to manage object storage infrastructure within the Kubernetes toolset.
The operator pattern extends Kubernetes's familiar declarative API model with custom resource definitions (CRDs) to perform common operations like resource orchestration, non-disruptive upgrades, and cluster expansion, and to maintain high availability.
The Operator builds on kubectl, the command set the Kubernetes community is already familiar with, and adds the kubectl minio plugin. The MinIO Operator and the MinIO kubectl plugin facilitate the deployment and management of MinIO Object Storage on Kubernetes, which is how multi-tenant object storage as a service is delivered.
Operators can also automate tasks such as choosing a leader for a distributed application without an internal member election process.
The Operator Console makes Kubernetes object storage easier still. In this graphical user interface, MinIO created something so simple that anyone in the organization can create, deploy and manage object storage as a service.
The primary unit of managing MinIO on Kubernetes is the tenant.
The MinIO Operator can allocate multiple tenants within the same Kubernetes cluster.
Each tenant, in turn, can have different capacity (i.e., a small 500GB tenant vs a 100TB tenant), resources (1000m CPU and 4Gi RAM vs 4000m CPU and 16Gi RAM) and servers (4 pods vs 16 pods), as well as separate configurations regarding Identity Providers, Encryption and versions.
Each tenant is a cluster of server pools (independent sets of nodes with their own compute, network, and storage resources) that, while sharing the same physical infrastructure, are fully isolated from each other in their own namespaces.
Each tenant runs their own MinIO cluster, fully isolated from other tenants
Each tenant scales independently by federating clusters across geographies.
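As a hedged sketch of creating a tenant with the kubectl minio plugin (the tenant name, namespace, and sizing values are illustrative, and the exact flag set may differ between plugin versions):

    kubectl minio tenant create tenant-1 \
      --servers 4 \
      --volumes 16 \
      --capacity 16Ti \
      --namespace tenant-1-ns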
The services keyword defines a Docker image that runs during a job
linked to the Docker image that the image keyword defines. This allows
you to access the service image during build time.
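A minimal sketch of the keyword in .gitlab-ci.yml (image names and the script are illustrative):

    test-job:
      image: node:18
      services:
        - postgres:15        # reachable from the job container under the host name "postgres"
      script:
        - npm test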
Services are an abstract way of exposing an application running on a set of pods as a network service.
Pods are ephemeral, which means that when they die, they are not resurrected. The Kubernetes cluster creates new pods on the same node or on a new node once a pod dies.
A service provides a single point of access from outside the Kubernetes cluster and allows you to dynamically access a group of replica pods.
For internal application access within a Kubernetes cluster, ClusterIP is the preferred method
To expose a service to external network requests, NodePort, LoadBalancer, and Ingress are possible options.
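A minimal sketch of a ClusterIP Service selecting a group of replica pods (names and ports are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      type: ClusterIP         # the default; NodePort or LoadBalancer expose it externally
      selector:
        app: my-app
      ports:
        - port: 80            # port exposed by the Service
          targetPort: 8080    # port the pods listen on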
Kubernetes Ingress is an API object that provides routing rules to manage external users' access to the services in a Kubernetes cluster, typically via HTTPS/HTTP.
Ingress capabilities include content-based routing, support for multiple protocols, and authentication.
Ingress is made up of an Ingress API object and the Ingress Controller.
Kubernetes Ingress is an API object that describes the desired state for exposing services to the outside of the Kubernetes cluster.
An Ingress Controller reads and processes the Ingress Resource information and usually runs as pods within the Kubernetes cluster.
If Kubernetes Ingress is the API object that provides routing rules to manage external access to services, Ingress Controller is the actual implementation of the Ingress API.
The Ingress Controller is usually a load balancer for routing external traffic to your Kubernetes cluster and is responsible for L4-L7 Network Services.
Layer 7 (L7) refers to the application level of the OSI stack—external connections load-balanced across pods, based on requests.
if Kubernetes Ingress is a computer, then Ingress Controller is a programmer using the computer and taking action.
Ingress Rules are a set of rules for processing inbound HTTP traffic. An Ingress with no rules sends all traffic to a single default backend service.
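A minimal sketch of an Ingress with one rule and a default backend (host, service names, and ports are illustrative):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-ingress
    spec:
      defaultBackend:
        service:
          name: default-svc
          port:
            number: 80
      rules:
        - host: app.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: app-svc
                    port:
                      number: 80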
the Ingress Controller is an application that runs in a Kubernetes cluster and configures an HTTP load balancer according to Ingress Resources.
The load balancer can be a software load balancer running in the cluster or a hardware or cloud load balancer running externally.
ClusterIP is the preferred option for internal service access and uses an internal IP address to access the service
A NodePort exposes a service on a static port on each node (VM) in the cluster.
Typically, a NodePort would be used to expose a single service (with no load-balancing requirements for multiple services).
Ingress enables you to consolidate the traffic-routing rules into a single resource and runs as part of a Kubernetes cluster.
An application is accessed from the Internet via Port 80 (HTTP) or Port 443 (HTTPS), and Ingress is an object that allows access to your Kubernetes services from outside the Kubernetes cluster.
To implement Ingress, you need to configure an Ingress Controller in your cluster—it is responsible for processing Ingress Resource information and allowing traffic based on the Ingress Rules.
Node Problem Detector is a daemon for monitoring and reporting about a node's health
Node Problem Detector collects information about node problems from various daemons
and reports these conditions to the API server as NodeCondition
and Event.
Node Problem Detector only supports file-based kernel logs;
log tools such as journald are not supported.
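Once Node Problem Detector is running, the conditions it reports (for example KernelDeadlock or ReadonlyFilesystem) appear on the node object; a sketch of checking them (the node name is illustrative):

    # full view of the node, including Conditions and recent Events
    kubectl describe node my-node
    # just the condition types and statuses
    kubectl get node my-node -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'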
When a node goes down, the pods of the broken node are still considered running for some time and still receive requests, and those requests will fail.
1- The Kubelet posts its status to the masters using --node-status-update-frequency=10s
2- A node dies
3- The kube controller manager is the one monitoring the nodes; using --node-monitor-period=5s, it checks, in the masters, the node status reported by the Kubelet.
4- The kube controller manager will see the node is unresponsive, and has a grace period of --node-monitor-grace-period=40s until it considers the node unhealthy.
node-status-update-frequency x (N-1) != node-monitor-grace-period
5- Once the node is marked as unhealthy, the kube controller manager will remove its pods based on --pod-eviction-timeout=5m0s
6- Kube proxy has a watcher over the API, so the very first moment the pods are evicted the proxy will notice and update the iptables of the node, removing the endpoints from the services so the failing pods won’t be accessible anymore.
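Putting the flags from the timeline together (the values shown are the defaults mentioned above; how they are passed depends on how the control-plane components are deployed):

    # kubelet
    kubelet --node-status-update-frequency=10s

    # kube-controller-manager
    kube-controller-manager \
      --node-monitor-period=5s \
      --node-monitor-grace-period=40s \
      --pod-eviction-timeout=5m0s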
To force-redirect an HTTP URL to HTTPS, I sometimes use a middleware to handle the redirect. This is a simple solution and doesn't require a change to the server or nginx configuration.
Alternatively, to force your HTTP website to be redirected to HTTPS, you can change your nginx configuration.
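A common sketch of that nginx configuration (the domain name is illustrative):

    server {
        listen 80;
        server_name example.com;
        # permanently redirect every HTTP request to its HTTPS equivalent
        return 301 https://$host$request_uri;
    }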