"Nix
The Purely Functional Package Manager
Nix is a powerful package manager for Linux and other Unix systems that makes package management reliable and reproducible. It provides atomic upgrades and rollbacks, side-by-side installation of multiple versions of a package, multi-user package management and easy setup of build environments."
"Security Onion is a Linux distro for intrusion detection, network security monitoring, and log management. It's based on Ubuntu and contains Snort, Suricata, Bro, OSSEC, Sguil, Squert, ELSA, Xplico, NetworkMiner, and many other security tools. The easy-to-use Setup wizard allows you to build an army of distributed sensors for your enterprise in minutes!"
Ed25519 is arguably more vulnerable to quantum computation than RSA: Shor's algorithm breaks both, but elliptic-curve keys are expected to fall to smaller quantum computers than comparable RSA keys
best practice is to use a hardware token
To use a YubiKey via GPG: with this method you use your GPG authentication subkey as an SSH key
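A minimal sketch of wiring that up, assuming GnuPG 2.1+ with an authentication subkey on the YubiKey (if the key isn't picked up automatically, its keygrip goes in ~/.gnupg/sshcontrol):

    # ~/.gnupg/gpg-agent.conf
    enable-ssh-support

    # in your shell profile, point SSH at gpg-agent's socket
    export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
    gpgconf --launch gpg-agent
    ssh-add -L    # should now list the GPG authentication subkey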
sit down and spend an hour thinking about your backup and recovery strategy first
never share a private key between physical devices
allows you to revoke a single credential if you lose (control over) that device
If a private key ever turns up on the wrong machine,
you *know* the key and both source and destination
machines have been compromised.
centralized management of authentication/authorization
I have set up a VPS, disabled passwords, and set up a key with a passphrase to gain access. At this point my greatest worry is losing this private key, as that means I can't access the server. What is a reasonable way to back up my private key?
a mountable disk image that's encrypted
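One way to build such an image on Linux, assuming cryptsetup/LUKS (size, file names, and the key path below are illustrative):

    dd if=/dev/zero of=keys.img bs=1M count=64     # 64 MB container file
    sudo cryptsetup luksFormat keys.img            # encrypt it (prompts for a passphrase)
    sudo cryptsetup open keys.img keybackup        # map it at /dev/mapper/keybackup
    sudo mkfs.ext4 /dev/mapper/keybackup
    sudo mount /dev/mapper/keybackup /mnt
    cp ~/.ssh/id_ed25519 /mnt/                     # copy the private key in
    sudo umount /mnt && sudo cryptsetup close keybackup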
a system that can update/rotate your keys across all of your servers on the fly in case one is compromised or assumed to be compromised.
different keys for different purposes per client device
fall back to password plus OTP
relying completely on the security of your disk, against either physical or cyber attack
It is better to use a different passphrase for each key but it is also less convenient unless you're using a password manager (personally, I'm using KeePass)
- RSA is pretty standard, and generally speaking is fairly secure for key lengths >=2048. RSA-2048 is the default for ssh-keygen, and is compatible with just about everything.
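For reference, the usual generation commands (a sketch; file names and comments are illustrative, and note that newer OpenSSH releases default to Ed25519 rather than RSA-2048):

    ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_vps -C "laptop->vps"
    ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_vps -C "laptop->vps"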
public-key authentication has the somewhat unexpected side effect of preventing MITM attacks, per this security consulting firm
Disable passwords and only allow keys even for root with PermitRootLogin without-password
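In sshd_config terms that hardening looks roughly like this (without-password is spelled prohibit-password in newer OpenSSH; both are accepted):

    # /etc/ssh/sshd_config
    PasswordAuthentication no
    ChallengeResponseAuthentication no
    PermitRootLogin without-password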
You should definitely use a different passphrase for keys stored on separate computers,
improve the performance and reliability of a server environment by distributing the workload across multiple servers (e.g. web, application, database).
ACLs are used to test some condition and perform an action (e.g. select a server, or block a request) based on the test result.
ACLs allow flexible network traffic forwarding based on a variety of factors, like pattern-matching and the number of connections to a backend
A backend is a set of servers that receives forwarded requests
adding more servers to your backend will increase your potential load capacity by spreading the load over multiple servers
mode http specifies that layer 7 proxying will be used
balance specifies the load balancing algorithm
health checks
A frontend defines how requests should be forwarded to backends
use_backend rules, which define which backends to use depending on which ACL conditions are matched, and/or a default_backend rule that handles every other case
A frontend can be configured to handle various types of network traffic
Load balancing this way will forward user traffic based on IP range and port
Generally, all of the servers in the web-backend should be serving identical content--otherwise the user might receive inconsistent content.
Using layer 7 allows the load balancer to forward requests to different backend servers based on the content of the user's request.
allows you to run multiple web application servers under the same domain and port
acl url_blog path_beg /blog matches a request if the path of the user's request begins with /blog.
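Pulling those pieces together, a minimal haproxy.cfg sketch (server names and addresses are placeholders):

    frontend www
        bind *:80
        mode http
        acl url_blog path_beg /blog
        use_backend blog-backend if url_blog
        default_backend web-backend

    backend web-backend
        mode http
        balance roundrobin
        server web1 10.0.0.1:80 check
        server web2 10.0.0.2:80 check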
roundrobin: Round Robin selects servers in turn (the default algorithm)
leastconn: selects the server with the least number of connections--it is recommended for longer sessions
source: selects which server to use based on a hash of the source IP, which helps ensure that a user will connect to the same server
require that a user continues to connect to the same backend server. This persistence is achieved through sticky sessions, using the appsession parameter in the backend that requires it.
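Note that appsession was dropped in later HAProxy releases; on a modern HAProxy the same stickiness is usually done with the cookie directive instead. A sketch (names and addresses hypothetical):

    backend app-backend
        balance roundrobin
        cookie SERVERID insert indirect nocache
        server app1 10.0.1.11:80 check cookie app1
        server app2 10.0.1.12:80 check cookie app2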
HAProxy uses health checks to determine if a backend server is available to process requests.
The default health check is to try to establish a TCP connection to the server
If a server fails a health check, and therefore is unable to serve requests, it is automatically disabled in the backend
For certain types of backends, like database servers in certain situations, the default health check is insufficient to determine whether a server is still healthy.
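For an HTTP backend, a common step up from the plain TCP check is an application-level probe; a sketch, assuming the app exposes a /health endpoint:

    backend web-backend
        option httpchk GET /health
        http-check expect status 200
        server web1 10.0.0.1:80 check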
However, your load balancer is a single point of failure in these setups; if it goes down or gets overwhelmed with requests, it can cause high latency or downtime for your service.
A high availability (HA) setup is an infrastructure without a single point of failure
a static IP address that can be remapped from one server to another.
If that load balancer fails, your failover mechanism will detect it and automatically reassign the IP address to one of the passive servers.
LXC (LinuX Containers) is an OS-level virtualization technology that allows creation and running of multiple isolated Linux virtual environments (VE) on a single control host.
Docker, started as a side project at the company previously called dotCloud, was only open-sourced in 2013. It is really an extension of LXC’s capabilities.
Docker is developed in the Go language and utilizes LXC, cgroups, and the Linux kernel itself. Since it’s based on LXC, a Docker container does not include a separate operating system; instead it relies on the operating system’s own functionality as provided by the underlying infrastructure.
Docker acts as a portable container engine, packaging the application and all its dependencies in a virtual container that can run on any Linux server.
In a VE there is no preloaded emulation manager software as there is in a VM.
In a VE, the application (or OS) is spawned in a container and runs with no added overhead, except for a usually minuscule VE initialization process.
LXC boasts bare-metal performance characteristics because it only packages the needed applications.
the OS is also just another application that can be packaged too.
a VM, which packages the entire OS and machine setup, including hard drive, virtual processors and network interfaces. The resulting bloated mass usually takes a long time to boot and consumes a lot of CPU and RAM.
VEs don’t offer some other neat features of VMs, such as IaaS setups and live migration.
Think of LXC as a supercharged chroot on Linux. It allows you to isolate not only applications, but even the entire OS.
Libvirt, which allows the use of containers through the LXC driver by connecting to 'lxc:///'.
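For example, libvirt's own CLI can talk to that driver directly:

    virsh -c lxc:/// list --all    # list containers known to the libvirt LXC driver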
The 'LXC' userspace tools, by contrast, are not compatible with libvirt, but are more flexible, with more userspace tooling.
Portable deployment across machines
Versioning: Docker includes git-like capabilities for tracking successive versions of a container
Component reuse: Docker allows building or stacking of already created packages.
Shared libraries: There is already a public registry (http://index.docker.io/) where thousands have already uploaded the useful containers they have created.
Docker has been taking the devops world by storm since its launch back in 2013.
LXC, while older, has not been as popular with developers as Docker has proven to be
LXC has a focus on sysadmins that’s similar to solutions like the Solaris operating system with its Solaris Zones, Linux with OpenVZ, and FreeBSD with its BSD Jails virtualization system
Though it started out being built on top of LXC, Docker later moved beyond LXC containers to its own execution environment called libcontainer.
Unlike LXC, which launches an operating system init for each container, Docker provides one OS environment, supplied by the Docker Engine
LXC tooling sticks close to what system administrators running bare metal servers are used to
The LXC command line provides essential commands that cover routine management tasks, including the creation, launch, and deletion of LXC containers.
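A sketch of that routine lifecycle with the lxc-* tools (container name and image parameters are illustrative):

    lxc-create -n web1 -t download -- -d ubuntu -r jammy -a amd64   # create from an image
    lxc-start -n web1
    lxc-ls -f            # list containers with state
    lxc-attach -n web1   # shell into the running container
    lxc-stop -n web1
    lxc-destroy -n web1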
Docker containers aim to be even lighter weight in order to support the fast, highly scalable, deployment of applications with microservice architecture.
With backing from Canonical, LXC and LXD have an ecosystem tightly bound to the rest of the open source Linux community.
Docker Swarm
Docker Trusted Registry
Docker Compose
Docker Machine
Kubernetes facilitates the deployment of containers in your data center by representing a cluster of servers as a single system.
Swarm is Docker’s clustering, scheduling and orchestration tool for managing a cluster of Docker hosts.
rkt is a security minded container engine that uses KVM for VM-based isolation and packs other enhanced security features.
Apache Mesos can run different kinds of distributed jobs, including containers.
Elastic Container Service is Amazon’s service for running and orchestrating containerized applications on AWS
LXC offers the advantages of a VE on Linux, mainly the ability to isolate your own private workloads from one another. It is a cheaper and faster solution to implement than a VM, but doing so requires a bit of extra learning and expertise.
Docker is a significant improvement of LXC’s capabilities.
With stacked control plane nodes, etcd members are colocated with the control plane nodes.
A stacked HA cluster is a topology where the distributed
data storage cluster provided by etcd is stacked on top of the cluster formed by the nodes managed by
kubeadm that run control plane components.
Each control plane node runs an instance of the kube-apiserver, kube-scheduler, and kube-controller-manager
Each control plane node creates a local etcd member and this etcd member communicates only with
the kube-apiserver of this node.
This topology couples the control planes and etcd members on the same nodes.
a stacked cluster runs the risk of failed coupling. If one node goes down, both an etcd member and a control
plane instance are lost
An HA cluster with external etcd is a topology where the distributed data storage cluster provided by etcd is external to the cluster formed by the nodes that run control plane components.
etcd members run on separate hosts, and each etcd host communicates with the kube-apiserver of each control plane node.
This topology decouples the control plane and etcd member. It therefore provides an HA setup where
losing a control plane instance or an etcd member has less impact and does not affect
the cluster redundancy as much as the stacked HA topology.
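In kubeadm terms, the external topology is declared in the ClusterConfiguration; a sketch (the control plane endpoint, etcd endpoints, and cert paths are placeholders):

    # kubeadm-config.yaml
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    controlPlaneEndpoint: "lb.example.com:6443"
    etcd:
      external:
        endpoints:
          - https://10.0.0.10:2379
          - https://10.0.0.11:2379
          - https://10.0.0.12:2379
        caFile: /etc/kubernetes/pki/etcd/ca.crt
        certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
        keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key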
Kubernetes releases before v1.24 included a direct integration with Docker Engine,
using a component named dockershim. That special direct integration is no longer
part of Kubernetes
You need to install a
container runtime
into each node in the cluster so that Pods can run there.
Kubernetes 1.26 requires that you use a runtime that
conforms with the
Container Runtime Interface (CRI).
On Linux, control groups
are used to constrain resources that are allocated to processes.
Both kubelet and the
underlying container runtime need to interface with control groups to enforce
resource management for pods and containers and set
resources such as cpu/memory requests and limits.
When the cgroupfs
driver is used, the kubelet and the container runtime directly interface with
the cgroup filesystem to configure cgroups.
The cgroupfs driver is not recommended when
systemd is the
init system
When systemd is chosen as the init
system for a Linux distribution, the init process generates and consumes a root control group
(cgroup) and acts as a cgroup manager.
Two cgroup managers result in two views of the available and in-use resources in
the system.
Changing the cgroup driver of a Node that has joined a cluster is a sensitive operation.
If the kubelet has created Pods using the semantics of one cgroup driver, changing the container
runtime to another cgroup driver can cause errors when trying to re-create the Pod sandbox
for such existing Pods. Restarting the kubelet may not solve such errors.
The approach to mitigate this instability is to use systemd as the cgroup driver for
the kubelet and the container runtime when systemd is the selected init system.
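A sketch of aligning both on systemd, assuming containerd as the runtime (the TOML section below is containerd 1.x's standard CRI runc options block):

    # /etc/containerd/config.toml
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true

    # kubelet side, via a KubeletConfiguration (e.g. passed to kubeadm)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd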
Kubernetes 1.26 defaults to using v1 of the CRI API.
If a container runtime does not support the v1 API, the kubelet falls back to
using the (deprecated) v1alpha2 API instead.
Kubernetes uses these values (each node's MAC address and product_uuid) to uniquely identify the nodes in the cluster.
Make sure that the br_netfilter module is loaded.
you should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config,
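The usual prerequisite commands, as in the kubeadm setup docs:

    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    br_netfilter
    EOF
    sudo modprobe br_netfilter

    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-iptables  = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.ipv4.ip_forward                 = 1
    EOF
    sudo sysctl --system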
kubeadm will not install or manage kubelet or kubectl for you, so you will
need to ensure they match the version of the Kubernetes control plane you want
kubeadm to install for you.
one minor version skew between the
kubelet and the control plane is supported, but the kubelet version may never exceed the API
server version.
Both the container runtime and the kubelet have a property called
"cgroup driver", which is important
for the management of cgroups on Linux machines.
"My First 5 Minutes on a Server, by Bryan Kennedy, is an excellent intro into securing a server against most attacks. We have a few modifications to his approach that we wanted to document as part of our efforts of externalizing our processes and best practices. We also wanted to spend a bit more time explaining a few things that younger engineers may benefit from."
PXC / MariaDB clusters really work better with writes on a single node than with multi-node writes.
ProxySQL set up for a cluster in single-writer mode, which is the most recommended mode for a cluster, to avoid write conflicts and split-brain scenarios.
ProxySQL listens on port 6032 for its admin interface and port 6033 for its MySQL interface by default
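So an admin session is just a MySQL client pointed at 6032 (admin/admin is the fresh-install default and should be changed):

    mysql -u admin -padmin -h 127.0.0.1 -P 6032 --prompt='ProxySQL> '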
Keycloak is an “Open source identity and access management” solution.
set up a central Identity Provider (IdP) that applications acting as Service Providers (SPs) use to authenticate or authorize user access.
FreeIPA does a LOT more than just provide user info though. It can manage user groups, service lists, hosts, DNS, certificates, and much, much, more.
allow Nginx to pass requests off to backend http servers for further processing
Nginx is often set up as a reverse proxy solution to help scale out infrastructure or to pass requests to other servers that are not designed to handle large client loads
explore buffering and caching to improve the performance of proxying operations for clients
Nginx is built to handle many concurrent connections at the same time.
provides you with flexibility in easily adding backend servers or taking them down as needed for maintenance
Proxying in Nginx is accomplished by manipulating a request aimed at the Nginx server and passing it to other servers for the actual processing
The servers that Nginx proxies requests to are known as upstream servers.
Nginx can proxy requests to servers that communicate using the http(s), FastCGI, SCGI, uwsgi, or memcached protocols through separate sets of directives for each type of proxy
When a request matches a location with a proxy_pass directive inside, the request is forwarded to the URL given by the directive
For example, when a request for /match/here/please is handled by this block, the request URI will be sent to the example.com server as http://example.com/match/here/please
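The matching config sketch (example.com and the location path as in the note above):

    location /match/here {
        proxy_pass http://example.com;
    }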
The request coming from Nginx on behalf of a client will look different than a request coming directly from a client
Nginx gets rid of any empty headers
Nginx, by default, will consider any header that contains underscores as invalid. It will remove these from the proxied request
The "Host" header is re-written to the value defined by the $proxy_host variable.
The upstream should not expect this connection to be persistent
Headers with empty values are completely removed from the passed request.
if your backend application will be processing non-standard headers, you must make sure that they do not have underscores
by default, this will be set to the value of $proxy_host, a variable that will contain the domain name or IP address and port taken directly from the proxy_pass definition
This is selected by default as it is the only address Nginx can be sure the upstream server responds to
(as it is pulled directly from the connection info)
$http_host: Sets the "Host" header to the "Host" header from the client request.
The headers sent by the client are always available in Nginx as variables. The variables will start with an $http_ prefix, followed by the header name in lowercase, with any dashes replaced by underscores.
$host contains, in order of preference: the host name from the request line itself, then the "Host" header from the client request, then the server name matching the request
set the "Host" header to the $host variable. It is the most flexible and will usually provide the proxied servers with a "Host" header filled in as accurately as possible
sets the "Host" header to the $host variable, which should contain information about the original host being requested
This variable takes the value of the original X-Forwarded-For header retrieved from the client and adds the Nginx server's IP address to the end.
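Put together, a commonly seen header-forwarding sketch (location path and upstream are placeholders):

    location /proxy-me {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://example.com;
    }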
The upstream directive must be set in the http context of your Nginx configuration.
Once defined, this name will be available for use within proxy passes as if it were a regular domain name
By default, this is just a simple round-robin selection process (each request will be routed to a different host in turn)
least_conn: specifies that new connections should always be given to the backend that has the least number of active connections.
ip_hash: distributes requests to different servers based on the client's IP address.
hash: mainly used with memcached proxying; as for the hash method, you must provide the key to hash against
Server weight: a server can be given a weight to receive a proportionally larger share of traffic, as in the upstream sketch below
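A sketch of an upstream pool with a balancing algorithm and a weighted server (names and addresses hypothetical):

    upstream backend_hosts {
        least_conn;                          # or ip_hash; or hash $request_uri;
        server host1.example.com weight=3;   # receives roughly 3x the traffic
        server host2.example.com;
    }

    server {
        listen 80;
        location /proxy-me {
            proxy_pass http://backend_hosts;
        }
    }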
Nginx's buffering and caching capabilities
Without buffers, data is sent from the proxied server and immediately begins to be transmitted to the client.
With buffers, the Nginx proxy will temporarily store the backend's response and then feed this data to the client
Nginx defaults to a buffering design
Buffering directives can be set in the http, server, or location contexts.
the sizing directives are configured per request, so increasing them beyond your need can affect your performance
When buffering is "off" only the buffer defined by the proxy_buffer_size directive will be used
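The buffering knobs in context (values here are illustrative, not tuned recommendations):

    location /proxy-me {
        proxy_buffering on;
        proxy_buffer_size 1k;
        proxy_buffers 24 4k;
        proxy_busy_buffers_size 8k;
        proxy_max_temp_file_size 2048m;
        proxy_temp_file_write_size 32k;
        proxy_pass http://example.com;
    }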
A high availability (HA) setup is an infrastructure without a single point of failure, and your load balancers are a part of this configuration.
multiple load balancers (one active and one or more passive) behind a static IP address that can be remapped from one server to another.
Nginx also provides a way to cache content from backend servers
The proxy_cache_path directive must be set in the http context.
The proxy_cache_bypass directive is set to the $http_cache_control variable. This will contain an indicator as to whether the client is explicitly requesting a fresh, non-cached version of the resource
any user-related data should not be cached
For private content, you should set the Cache-Control header to "no-cache", "no-store", or "private" depending on the nature of the data
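A caching sketch tying those directives together (zone name, sizes, and paths are placeholders):

    # in the http context
    proxy_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=backcache:8m max_size=50m;
    proxy_cache_key "$scheme$request_method$host$request_uri$is_args$args";
    proxy_cache_valid 200 302 10m;
    proxy_cache_valid 404 1m;

    # in a server block
    location /proxy-me {
        proxy_cache backcache;
        proxy_cache_bypass $http_cache_control;
        add_header X-Proxy-Cache $upstream_cache_status;
        proxy_pass http://example.com;
    }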