This guide discusses the basic structure of an Nginx configuration file, along with some guidelines on how to design your files.
/etc/nginx/nginx.conf
In Nginx parlance, the areas that these brackets define are called "contexts" because they contain configuration details that are separated according to their area of concern
if a directive is valid in multiple nested scopes, a declaration in a broader context will be passed on to any child contexts as default values.
The children contexts can override these values at will
Nginx will error out on reading a configuration file with directives that are declared in the wrong context.
The most general context is the "main" or "global" context
Any directive that exists entirely outside of these blocks is said to inhabit the "main" context.
The main context represents the broadest environment for Nginx configuration.
The "events" context is contained within the "main" context. It is used to set global options that affect how Nginx handles connections at a general level.
Nginx uses an event-based connection processing model, so the directives defined within this context determine how worker processes should handle connections.
By default, the connection processing method is selected automatically based on the most efficient choice the platform has available, and a worker will take only a single connection at a time.
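A minimal sketch of the main and events contexts (all values are illustrative, not recommendations):

```nginx
# Directives outside any block inhabit the "main" context
user  www-data;
worker_processes  auto;

events {
    worker_connections  1024;  # maximum simultaneous connections per worker
    multi_accept        off;   # default: a worker accepts one new connection at a time
}
```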
When configuring Nginx as a web server or reverse proxy, the "http" context will hold the majority of the configuration.
The http context is a sibling of the events context, so they should be listed side-by-side, rather than nested
Here you can fine-tune the TCP keep-alive settings (keepalive_disable, keepalive_requests, and keepalive_timeout).
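For example, the keep-alive directives might be tuned like this (values are illustrative, not recommendations):

```nginx
http {
    keepalive_disable   msie6;  # disable keep-alive for old MSIE clients
    keepalive_requests  100;    # requests served per keep-alive connection
    keepalive_timeout   65;     # seconds an idle keep-alive connection stays open
}
```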
The "server" context is declared within the "http" context.
multiple declarations
each instance defines a specific virtual server to handle client requests
Each client request will be handled according to the configuration defined in a single server context, so Nginx must decide which server context is most appropriate based on details of the request.
listen: The IP address/port combination that this server block is designed to respond to.
server_name: This directive is the other component used to select a server block for processing; it is matched against the value of the request's "Host" header.
Here you can configure files to try in response to requests (try_files), issue redirects and rewrites (return and rewrite), and set arbitrary variables (set).
Location contexts share many relational qualities with server contexts
multiple location contexts can be defined; each location is used to handle a certain type of client request, and each location is selected by matching the location definition against the client request through a selection algorithm
Location blocks live within server contexts and, unlike server blocks, can be nested inside one another.
While server contexts are selected based on the requested IP address/port combination and the host name in the "Host" header, location blocks further divide up the request handling within a server block by looking at the request URI
The request URI is the portion of the request that comes after the domain name or IP address/port combination.
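A sketch showing location blocks dividing request handling by URI (paths are illustrative):

```nginx
server {
    listen       80;
    server_name  example.com;
    root         /var/www/example.com;

    location / {
        try_files $uri $uri/ =404;  # try the file, then the directory, else return 404
    }

    location /old-path/ {
        return 301 /new-path/;      # issue a redirect for this URI prefix
    }
}
```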
New directives at this level allow you to reach locations outside of the document root (alias), mark the location as only internally accessible (internal), and proxy to other servers or locations (using http, fastcgi, scgi, and uwsgi proxying).
These can then be used to do A/B testing by providing different content to different hosts.
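Assuming the sentence above refers to the split_clients directive, an illustrative sketch:

```nginx
http {
    # Hash the client address and bucket clients into two variants
    split_clients "${remote_addr}" $variant {
        50%  "a";
        *    "b";
    }
}
```

The $variant variable can then be used, for example, to select different content roots for the two groups.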
configures Perl handlers for the location they appear in
set the value of a variable depending on the value of another variable
used to map MIME types to the file extensions that should be associated with them.
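The variable-mapping behavior described above comes from the map context; a sketch (hostname is illustrative):

```nginx
http {
    # $maintenance is set based on the value of $host
    map $host $maintenance {
        default          0;
        www.example.com  1;
    }
}
```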
this context defines a named pool of servers that Nginx can then proxy requests to
The upstream context should be placed within the http context, outside of any specific server contexts.
The upstream context can then be referenced by name within server or location blocks to pass requests of a certain type to the pool of servers that have been defined.
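A sketch of an upstream pool referenced from a location block (names and addresses are illustrative):

```nginx
http {
    upstream app_servers {                  # named pool of backend servers
        server 10.0.0.10:8080;
        server 10.0.0.11:8080;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app_servers;  # pass requests to the pool by name
        }
    }
}
```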
function as a high performance mail proxy server
The mail context is defined within the "main" or "global" context (outside of the http context).
Nginx has the ability to redirect authentication requests to an external authentication server
The if directive in Nginx will execute the instructions it contains if a given test returns "true".
Since Nginx will test conditions of a request with many other purpose-made directives, if should not be used for most forms of conditional execution.
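One of the few generally safe uses of if is returning a response immediately; a sketch:

```nginx
location / {
    # Safe: return exits processing right away; avoid if for anything more complex
    if ($request_method = POST) {
        return 405;
    }
}
```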
The limit_except context is used to restrict the use of certain HTTP methods within a location context.
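A sketch of such a configuration (path and subnet are illustrative):

```nginx
location /restricted {
    limit_except GET {        # allowing GET implicitly allows HEAD as well
        allow 192.168.1.1/24;
        deny  all;
    }
}
```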
The result of the above example is that any client can use the GET and HEAD verbs, but only clients coming from the 192.168.1.1/24 subnet are allowed to use other methods.
Many directives are valid in more than one context
it is usually best to declare directives in the highest context to which they are applicable, overriding them in lower contexts as necessary.
Declaring at higher levels provides you with a sane default
Nginx already engages in a well-documented selection algorithm for things like selecting server blocks and location blocks.
Instead of relying on rewrites to get a user-supplied request into the format you would like to work with, try to set up two blocks for the request: one representing the desired method, and another that catches messy requests and redirects (and possibly rewrites) them to your correct block.
incorrect requests can get by with a redirect rather than a rewrite, which should execute with lower overhead.
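A sketch of the two-block pattern (names are illustrative): one block catches the non-canonical host and redirects, the other does the real work.

```nginx
server {
    listen       80;
    server_name  www.example.com;                    # catch the messy form
    return 301   $scheme://example.com$request_uri;  # redirect rather than rewrite
}

server {
    listen       80;
    server_name  example.com;                        # the desired, canonical form
    root         /var/www/example.com;
}
```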
improve the performance and reliability of a server environment by distributing the workload across multiple servers (e.g. web, application, database).
ACLs are used to test some condition and perform an action (e.g. select a server, or block a request) based on the test result.
ACLs allow flexible network traffic forwarding based on a variety of factors, such as pattern matching and the number of connections to a backend
A backend is a set of servers that receives forwarded requests
adding more servers to your backend will increase your potential load capacity by spreading the load over multiple servers
mode http specifies that layer 7 proxying will be used
specifies the load balancing algorithm
health checks
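A sketch of a backend combining these settings (names and addresses are illustrative):

```
backend web-backend
    mode http                   # layer 7 proxying
    balance roundrobin          # load balancing algorithm
    option httpchk HEAD /       # health check: send an HTTP HEAD request for /
    server web1 10.0.0.10:80 check
    server web2 10.0.0.11:80 check
```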
A frontend defines how requests should be forwarded to backends
use_backend rules, which define which backends to use depending on which ACL conditions are matched, and/or a default_backend rule that handles every other case
A frontend can be configured for various types of network traffic
Load balancing this way will forward user traffic based on IP range and port
Generally, all of the servers in the web-backend should be serving identical content--otherwise the user might receive inconsistent content.
Using layer 7 allows the load balancer to forward requests to different backend servers based on the content of the user's request.
allows you to run multiple web application servers under the same domain and port
acl url_blog path_beg /blog matches a request if the path of the user's request begins with /blog.
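A sketch of a frontend combining these pieces (names are illustrative):

```
frontend http-in
    bind *:80
    acl url_blog path_beg /blog           # true when the request path begins with /blog
    use_backend blog-backend if url_blog  # ACL-matched traffic goes to the blog pool
    default_backend web-backend           # everything else
```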
Round Robin selects servers in turns
Selects the server with the least number of connections--it is recommended for longer sessions
This selects which server to use based on a hash of the source IP
ensure that a user will connect to the same server
Some applications require that a user continue to connect to the same backend server. This persistence is achieved through sticky sessions, using the appsession parameter in the backend that requires it.
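A sketch of a sticky-session backend, assuming an older HAProxy release (appsession was removed in HAProxy 1.6 in favor of stick tables; the cookie name and addresses are illustrative):

```
backend app-backend
    mode http
    balance roundrobin
    appsession JSESSIONID len 52 timeout 3h  # pin clients by their session cookie
    server app1 10.0.0.20:8080 check
    server app2 10.0.0.21:8080 check
```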
HAProxy uses health checks to determine if a backend server is available to process requests.
The default health check is to try to establish a TCP connection to the server
If a server fails a health check, and therefore is unable to serve requests, it is automatically disabled in the backend
For certain types of backends, like database servers in certain situations, the default health check is insufficient to determine whether a server is still healthy.
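For example, HAProxy can run a protocol-aware MySQL check instead of a bare TCP connect (a sketch; the check user is illustrative and must exist on the database server):

```
backend mysql-backend
    mode tcp
    balance leastconn
    option mysql-check user haproxy_check  # protocol-level check instead of plain TCP
    server db1 10.0.0.30:3306 check
    server db2 10.0.0.31:3306 check
```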
However, your load balancer is a single point of failure in these setups; if it goes down or gets overwhelmed with requests, it can cause high latency or downtime for your service.
A high availability (HA) setup is an infrastructure without a single point of failure
a static IP address that can be remapped from one server to another.
If that load balancer fails, your failover mechanism will detect it and automatically reassign the IP address to one of the passive servers.
for a lot of people, the name “Docker” itself is synonymous with the word “container”.
Docker created a very ergonomic (nice-to-use) tool for working with containers – also called docker.
docker is designed to be installed on a workstation or server and comes with a bunch of tools to make it easy to build and run containers as a developer, or DevOps person.
containerd: This is a daemon process that manages and runs containers.
runc: This is the low-level container runtime (the thing that actually creates and runs containers).
libcontainer, a native Go-based implementation for creating containers.
Kubernetes includes a component called dockershim, which allows it to support Docker.
Kubernetes prefers to run containers through any container runtime which supports its Container Runtime Interface (CRI).
Kubernetes will remove support for Docker directly, and prefer to use only container runtimes that implement its Container Runtime Interface.
Both containerd and CRI-O can run Docker-formatted (actually OCI-formatted) images; they just do it without having to use the docker command or the Docker daemon.
Docker images are actually images packaged in the Open Container Initiative (OCI) format.
CRI is the API that Kubernetes uses to control the different runtimes that create and manage containers.
CRI makes it easier for Kubernetes to use different container runtimes
containerd is a high-level container runtime that came from Docker, and implements the CRI spec
containerd was separated out of the Docker project, to make Docker more modular.
CRI-O is another high-level container runtime which implements the Container Runtime Interface (CRI).
The idea behind the OCI is that you can choose between different runtimes which conform to the spec.
runc is an OCI-compatible container runtime.
A reference implementation is a piece of software that has implemented all the requirements of a specification or standard.
runc provides all of the low-level functionality for containers, interacting with existing low-level Linux features, like namespaces and control groups.
Keyfiles are bare-minimum forms of security and are best suited for testing or development environments. With keyfile authentication, each mongod instance in the replica set uses the contents of the keyfile as the shared password for authenticating other members in the deployment. On UNIX systems, the keyfile must not have group or world permissions.
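A minimal sketch of generating such a keyfile (the file name is illustrative):

```shell
# Generate random content for the shared keyfile
openssl rand -base64 756 > mongodb-keyfile
# Remove group and world permissions: owner read-only
chmod 400 mongodb-keyfile
```

Each replica set member then points the security.keyFile setting in its mongod configuration at a copy of this file.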
Trunk-based development is a version control management practice where developers merge small, frequent updates to a core “trunk” or main branch.
The two models most often compared are Gitflow and trunk-based development.
Gitflow, which was popularized first, is a stricter development model where only certain individuals can approve changes to the main code. This maintains code quality and minimizes the number of bugs.
Trunk-based development is a more open model since all developers have access to the main code. This enables teams to iterate quickly and implement CI/CD.
Developers can create short-lived branches with a few small commits compared to other long-lived feature branching strategies.
Gitflow is an alternative Git branching model that uses long-lived feature branches and multiple primary branches.
Gitflow also has separate primary branch lines for development, hotfixes, features, and releases.
Trunk-based development is far more simplified since it focuses on the main branch as the source of fixes and releases.
Trunk-based development eases the friction of code integration.
trunk-based development model reduces these conflicts.
Adding an automated test suite and code coverage monitoring for this stream of commits enables continuous integration.
When new code is merged into the trunk, automated integration and code coverage tests run to validate the code quality.
Trunk-based development strives to keep the trunk branch “green”, meaning it's ready to deploy at any commit.
With continuous integration, developers perform trunk-based development in conjunction with automated tests that run after each commit to the trunk.
If trunk-based development were like music, it would be a rapid staccato -- short, succinct notes in rapid succession, with the repository commits being the notes.
Instead of creating a feature branch and waiting to build out the complete specification, developers can instead create a trunk commit that introduces the feature flag and pushes new trunk commits that build out the feature specification within the flag.
Automated testing is necessary for any modern software project intending to achieve CI/CD.
Short running unit and integration tests are executed during development and upon code merge.
Automated tests provide a layer of preemptive code review.
Once a branch merges, it is best practice to delete it.
A repository with a large amount of active branches has some unfortunate side effects
Merge branches to the trunk at least once a day
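The practices above can be sketched with plain git commands (repository and branch names are illustrative; git init -b requires Git 2.28+):

```shell
# Set up an example repository with a trunk branch
git init -b main demo && cd demo
git config user.email dev@example.com
git config user.name Dev
git commit --allow-empty -m "initial commit"

git checkout -b small-fix                   # short-lived branch off the trunk
git commit --allow-empty -m "small change"  # a few small commits
git checkout main
git merge small-fix                         # merge back at least once a day
git branch -d small-fix                     # delete the branch once it is merged
```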
The “continuous” in CI/CD implies that updates are constantly flowing.
cert-manager mainly uses two different custom Kubernetes resources, known as CRDs, to configure and control how it operates, as well as to store state. These resources are Issuers and Certificates. Certificates can be requested either by using annotations on the ingress with ingress-shim or by directly creating a Certificate resource.
The secret that is used in the ingress should match the secret defined in the certificate; a typo will result in the ingress-nginx-controller falling back to its self-signed certificate.
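A sketch of a Certificate resource whose secretName matches the ingress's TLS secret (names, namespace, and issuer are illustrative):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-com
  namespace: default
spec:
  secretName: example-com-tls   # must match the secret named in the ingress's tls section
  issuerRef:
    name: letsencrypt-prod
    kind: Issuer
  dnsNames:
    - example.com
```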