Laravel queues provide a unified API across a variety of different queue backends, such as Beanstalk, Amazon SQS, Redis, or even a relational database.
The queue configuration file is stored in config/queue.php. Among the included drivers are a synchronous driver that executes jobs immediately (for local use) and a null queue driver that discards queued jobs.
In your config/queue.php configuration file, there is a connections configuration option.
Any given queue connection may have multiple "queues", which may be thought of as different stacks or piles of queued jobs. Each connection configuration example in the queue configuration file contains a queue attribute. If you dispatch a job without explicitly defining which queue it should be dispatched to, the job will be placed on the queue that is defined in the queue attribute of the connection configuration.
Pushing jobs to multiple queues can be especially useful for applications that wish to prioritize or segment how jobs are processed. When starting a worker, you may specify which queues it should process by priority, as shown below.
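For example (the queue names high and default are illustrative), a worker can be told to drain the high queue before moving on to default:

php artisan queue:work --queue=high,default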
If your Redis queue connection uses a Redis Cluster, your queue names must contain a key hash tag. This ensures all of the Redis keys for a given queue are placed into the same hash slot.
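A minimal sketch of a Redis connection entry in config/queue.php whose queue name carries a key hash tag (the braces around default form the hash tag; the values are illustrative):

'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => '{default}', // hash tag keeps all keys for this queue in one hash slot
    'retry_after' => 90,
],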
All of the queueable jobs for your application are stored in the app/Jobs directory.
Job classes are very simple, normally containing only a handle method which is called when the job is processed by the queue.
Note that you may pass an Eloquent model directly into the queued job's constructor. Because of the SerializesModels trait that the job is using, Eloquent models will be gracefully serialized and unserialized when the job is processing.
When the job is actually handled, the queue system will automatically re-retrieve the full model instance from the database.
The handle method is called when the job is processed by the queue. The arguments passed to the dispatch method will be given to the job's constructor.
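A minimal sketch of such a job class; ProcessPodcast and the Podcast model are illustrative names, not part of the text above:

<?php

namespace App\Jobs;

use App\Podcast;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class ProcessPodcast implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    protected $podcast;

    // The arguments passed to dispatch() arrive here.
    public function __construct(Podcast $podcast)
    {
        $this->podcast = $podcast;
    }

    // Called when a queue worker processes the job; the model is
    // re-retrieved from the database thanks to SerializesModels.
    public function handle()
    {
        // Process the podcast...
    }
}

Dispatching it then looks like: ProcessPodcast::dispatch($podcast);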
To delay the execution of a queued job, you may use the delay method when dispatching a job. To dispatch a job immediately (synchronously), you may use the dispatchNow method. When using this method, the job will not be queued and will be run immediately within the current process.
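For example, continuing with the illustrative ProcessPodcast job:

// Delay processing by 10 minutes.
ProcessPodcast::dispatch($podcast)->delay(now()->addMinutes(10));

// Run synchronously in the current process, bypassing the queue.
ProcessPodcast::dispatchNow($podcast);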
Job chaining allows you to specify a list of queued jobs that should be run in sequence.
Deleting jobs using the $this->delete() method will not prevent chained jobs from being processed. The chain will only stop executing if a job in the chain fails.
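A sketch of a chain, assuming a Laravel version that provides withChain; the job names are illustrative:

ProcessPodcast::withChain([
    new OptimizePodcast,
    new ReleasePodcast,
])->dispatch();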
Note that this does not push jobs to different queue "connections" as defined by your queue configuration file, but only to specific queues within a single connection. To specify the queue, use the onQueue method when dispatching the job. To specify the connection, use the onConnection method when dispatching the job, as in the sketch below.
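For example (the connection and queue names are illustrative):

ProcessPodcast::dispatch($podcast)
    ->onConnection('sqs')       // a connection defined in config/queue.php
    ->onQueue('processing');    // a queue within that connection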
One approach is defining the maximum number of attempts on the job class itself. As an alternative to defining how many times a job may be attempted before it fails, you may define a time at which the job should time out, as sketched below.
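A sketch of both approaches on the job class (the values are illustrative):

class ProcessPodcast implements ShouldQueue
{
    // Maximum number of times the job may be attempted.
    public $tries = 5;

    // Alternatively, the time at which the job should no longer be attempted.
    public function retryUntil()
    {
        return now()->addSeconds(60);
    }
}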
Using the funnel method, you may limit jobs of a given type to only be processed by one worker at a time. Using the throttle method, you may throttle a given type of job to only run 10 times every 60 seconds.
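A sketch of the throttle approach inside a job's handle method; the lock key is illustrative, and the Redis facade (Illuminate\Support\Facades\Redis) must be imported at the top of the file:

public function handle()
{
    Redis::throttle('process-podcast')->allow(10)->every(60)->then(function () {
        // Job logic...
    }, function () {
        // Could not obtain the lock; release the job back onto the queue.
        return $this->release(10);
    });
}

The funnel method follows the same shape, e.g. Redis::funnel('process-podcast')->limit(1)->then(...), for the one-worker-at-a-time case.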
If an exception is thrown while the job is being processed, the job will automatically be released back onto the queue so it may be attempted again.
Instead of dispatching a job class, you may also dispatch a Closure. This is great for quick, simple tasks that need to be executed outside of the current request cycle.
When dispatching Closures to the queue, the Closure's code contents is cryptographically signed so it can not be modified in transit.
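For example (the Podcast model is illustrative):

$podcast = App\Podcast::find(1);

dispatch(function () use ($podcast) {
    $podcast->publish();
});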
Laravel includes a queue worker that will process new jobs as they are pushed onto the queue.
Once the queue:work command has started, it will continue to run until it is manually stopped or you close your terminal. Queue workers are long-lived processes and store the booted application state in memory. As a result, they will not notice changes in your code base after they have been started. So, during your deployment process, be sure to restart your queue workers.
You may customize your queue worker even further by only processing particular queues for a given connection.
The --once option may be used to instruct the worker to only process a single job from the queue
The --stop-when-empty option may be used to instruct the worker to process all jobs and then exit gracefully.
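For example (the connection and queue names are illustrative):

php artisan queue:work redis --queue=emails
php artisan queue:work --once
php artisan queue:work --stop-when-empty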
Daemon queue workers do not "reboot" the framework before processing each job.
Because of this, you should free any heavy resources after each job completes.
Since queue workers are long-lived processes, they will not pick up changes to your code without being restarted.
Be sure to restart the workers during your deployment process:
php artisan queue:restart
The queue uses the cache to store restart signals. Since the queue workers will die when the queue:restart command is executed, you should be running a process manager such as Supervisor to automatically restart the queue workers.
Each queue connection defines a retry_after option. This option specifies how many seconds the queue connection should wait before retrying a job that is being processed.
The --timeout option specifies how long the Laravel queue master process will wait before killing off a child queue worker that is processing a job.
When jobs are available on the queue, the worker will keep processing jobs with no delay in between them.
While sleeping, the worker will not process any new jobs - the jobs will be processed after the worker wakes up again
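Both values are set on the worker command line; the numbers below are illustrative:

php artisan queue:work --timeout=60 --sleep=3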
In the Supervisor configuration (sketched below), the numprocs directive will instruct Supervisor to run 8 queue:work processes and monitor all of them, automatically restarting them if they fail.
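A sketch of such a Supervisor program definition; the paths, connection, and options are illustrative:

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/forge/app.example.com/artisan queue:work sqs --sleep=3 --tries=3
autostart=true
autorestart=true
numprocs=8
redirect_stderr=true
stdout_logfile=/home/forge/app.example.com/worker.log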
Laravel includes a convenient way to specify the maximum number of times a job should be attempted.
You may define a failed method directly on your job class, allowing you to perform job-specific clean-up when a failure occurs. This is a great opportunity to notify your team via email or Slack.
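A sketch of the failed method on the job class (the Exception class must be imported at the top of the file):

public function failed(Exception $exception)
{
    // Notify the team of the failure, e.g. via email or Slack...
}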
php artisan queue:retry all
php artisan queue:flush
When injecting an Eloquent model into a job, it is automatically serialized before being placed on the queue and restored when the job is processed
Zabbix by default uses a "pull" model: the server connects to agents on each monitored machine, and the agents periodically gather the info and send it to the server.
Prometheus also prefers a "pull" model, in which the server gathers info from client machines.
Prometheus requires an application to be instrumented with a Prometheus client library (available in different programming languages) for preparing metrics. Separate exporters can expose metrics for Prometheus on behalf of software that cannot be instrumented directly (similar to "agents" for Zabbix).
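A minimal sketch using the official Python client library (prometheus_client); the metric name and port are illustrative:

from prometheus_client import Counter, start_http_server
import time

REQUESTS = Counter('app_requests_total', 'Total requests handled')

if __name__ == '__main__':
    start_http_server(8000)  # exposes /metrics for Prometheus to scrape
    while True:
        REQUESTS.inc()
        time.sleep(1)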
Zabbix uses its own TCP-based communication protocol between agents and the server.
Prometheus uses HTTP with protocol buffers (+ text format for ease of use with curl).
Prometheus offers a basic tool for exploring gathered data and visualizing it in simple graphs on its native server, and also offers a minimal dashboard builder, PromDash. But Prometheus is designed to be supported by modern visualization tools like Grafana.
Prometheus offers a solution for alerting that is separated from its core into the Alertmanager application.
Kubernetes supports many types of volumes, and a Pod can use any number of them simultaneously.
To use a volume, a Pod specifies what volumes to provide for the Pod (the .spec.volumes field) and where to mount those into Containers (the .spec.containers.volumeMounts field).
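A minimal sketch of those two fields in a Pod manifest; the image and names are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}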
A process in a container sees a filesystem view composed from their Docker image and volumes. Volumes cannot mount onto other volumes or have hard links to other volumes. Each Container in the Pod must independently specify where to mount each volume.
local
nfs
cephfs
awsElasticBlockStore
glusterfs
vsphereVolume
An awsElasticBlockStore volume mounts an Amazon Web Services (AWS) EBS volume into your Pod. When a Pod using an EBS volume is removed, the contents of the EBS volume are preserved and the volume is merely unmounted. This means that an EBS volume can be pre-populated with data, and that data can be "handed off" between Pods.
You must create an EBS volume (for example, using aws ec2 create-volume) before you can use it, and the nodes on which Pods are running must be AWS EC2 instances. EBS only supports a single EC2 instance mounting a volume, so check that the size and EBS volume type are suitable for your use!
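A sketch of a Pod using an awsElasticBlockStore volume; the image and names are illustrative, and the volume ID is a placeholder:

apiVersion: v1
kind: Pod
metadata:
  name: test-ebs
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-ebs
      name: test-volume
  volumes:
  - name: test-volume
    awsElasticBlockStore:
      volumeID: "<volume-id>"
      fsType: ext4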
A cephfs volume allows an existing CephFS volume to be mounted into your Pod. When a Pod using a cephfs volume is removed, the contents of the volume are preserved and the volume is merely unmounted.
A Container using a ConfigMap as a subPath volume mount will not receive ConfigMap updates.
An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever. By default, emptyDir volumes are stored on whatever medium is backing the node - that might be disk or SSD or network storage, depending on your environment.
However, you can set the emptyDir.medium field to "Memory" to tell Kubernetes to mount a tmpfs (RAM-backed filesystem) instead, as sketched below.
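For example (the volume name is illustrative):

volumes:
- name: cache-volume
  emptyDir:
    medium: Memory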
An fc volume allows an existing fibre channel volume to be mounted in a Pod. You must configure FC SAN zoning to allocate and mask those LUNs (volumes) to the target WWNs beforehand so that Kubernetes hosts can access them.
Flocker is an open-source clustered Container data volume manager. It provides management and orchestration of data volumes backed by a variety of storage backends.
A flocker volume allows a Flocker dataset to be mounted into a Pod. You must have your own Flocker installation running before you can use it.
A gcePersistentDisk volume mounts a Google Compute Engine (GCE) Persistent Disk into your Pod. Using a PD on a Pod controlled by a ReplicationController will fail unless the PD is read-only or the replica count is 0 or 1.
A glusterfs volume allows a Glusterfs (an open source networked filesystem) volume to be mounted into your Pod. You must have your own GlusterFS installation running before you can use it.
A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. This is a powerful escape hatch for some applications; for example, a Container needing access to Docker internals can use a hostPath of /var/lib/docker. You can also specify a type for a hostPath volume, allowing a Pod to specify whether a given hostPath should exist prior to the Pod running, whether it should be created, and what it should exist as. Be aware that the files or directories created on the underlying hosts are only writable by root.
hostPath:
# directory location on host
path: /data
# this field is optional
type: Directory
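For context, a sketch of a complete Pod manifest wrapping the hostPath stanza above; the image and names are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory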
An iscsi volume allows an existing iSCSI (SCSI over IP) volume to be mounted into your Pod. You must have your own iSCSI server running before you can use it. A feature of iSCSI is that it can be mounted as read-only by multiple consumers simultaneously.
A local volume represents a mounted local storage device such as a disk, partition or directory. Local volumes can only be used as a statically created PersistentVolume. Compared to hostPath volumes, local volumes can be used in a durable and portable manner without manually scheduling Pods to nodes, as the system is aware of the volume's node constraints by looking at the node affinity on the PersistentVolume.
If a node becomes unhealthy, then the local volume will also become inaccessible, and a Pod using it will not be able to run.
An example of a PersistentVolume spec using a local volume and nodeAffinity is sketched below.
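The storage size, path, storage class, and node name below are illustrative:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node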
PersistentVolume nodeAffinity is required when using local volumes. It enables the Kubernetes scheduler to correctly schedule Pods using local volumes to the correct node.
PersistentVolume volumeMode can now be set to "Block" (instead of the default value "Filesystem") to expose the local volume as a raw block device.
When using local volumes, it is recommended to create a StorageClass with volumeBindingMode set to WaitForFirstConsumer.
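A sketch of such a StorageClass; the name is illustrative:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer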
An nfs volume allows an existing NFS (Network File System) share to be mounted into your Pod. NFS can be mounted by multiple writers simultaneously. You must have your own NFS server running with the share exported before you can use it.
A persistentVolumeClaim volume is used to mount a PersistentVolume into a Pod. PersistentVolumes are a way for users to "claim" durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment.
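A sketch of a Pod mounting a PersistentVolumeClaim; the claim name and image are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: test-pvc-pod
spec:
  containers:
  - name: app
    image: k8s.gcr.io/test-webserver
    volumeMounts:
    - mountPath: /data
      name: storage
  volumes:
  - name: storage
    persistentVolumeClaim:
      claimName: my-claim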
A projected volume maps several existing volume sources into the same directory. All sources are required to be in the same namespace as the Pod. For more details, see the all-in-one volume design document. Each projected volume source is listed in the spec under sources.
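A sketch of a projected volume with a secret source and a configMap source; the names, keys, and paths are illustrative:

volumes:
- name: all-in-one
  projected:
    sources:
    - secret:
        name: mysecret
        items:
        - key: username
          path: my-group/my-username
    - configMap:
        name: myconfigmap
        items:
        - key: config
          path: my-group/my-config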
A Container using a projected volume source as a subPath volume mount will not receive updates for those volume sources.
RBD volumes can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed.
A secret volume is used to pass sensitive information, such as passwords, to Pods. You can store secrets in the Kubernetes API and mount them as files for use by Pods. Secret volumes are backed by tmpfs (a RAM-backed filesystem), so they are never written to non-volatile storage.
You must create a secret in the Kubernetes API before you can use it.
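A sketch of mounting an existing secret as a volume; the secret and volume names are illustrative:

volumes:
- name: secret-volume
  secret:
    secretName: my-secret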
A Container using a Secret as a subPath volume mount will not receive Secret updates.
StorageOS runs as a Container within your Kubernetes environment, making local or attached storage accessible from any node within the Kubernetes cluster. Data can be replicated to protect against node failure. Thin provisioning and compression can improve utilization and reduce cost. StorageOS provides block storage to Containers, accessible via a file system.
A vsphereVolume is used to mount a vSphere VMDK Volume into your Pod. It supports both VMFS and VSAN datastores. You must create the VMDK before using it with a Pod.
Sometimes it is useful to share one volume for multiple uses in a single Pod. The volumeMounts.subPath property can be used to specify a sub-path inside the referenced volume instead of its root, as in the sketch below.
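A sketch of one volume shared by two containers via subPath; the images, claim name, and paths are illustrative:

spec:
  containers:
  - name: mysql
    image: mysql
    volumeMounts:
    - mountPath: /var/lib/mysql
      name: site-data
      subPath: mysql
  - name: php
    image: php:7.0-apache
    volumeMounts:
    - mountPath: /var/www/html
      name: site-data
      subPath: html
  volumes:
  - name: site-data
    persistentVolumeClaim:
      claimName: my-lamp-site-data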
Use the subPathExpr field to construct subPath directory names from Downward API environment variables. To use this feature, you must enable the VolumeSubpathEnvExpansion feature gate.
The subPath and subPathExpr properties are mutually exclusive.
There is no limit on how much space an emptyDir or hostPath volume can consume, and no isolation between Containers or between Pods.
In the future, emptyDir and hostPath volumes will be able to request a certain amount of space using a resource specification, and to select the type of media to use, for clusters that have several media types.
Out-of-tree volume plugins include the Container Storage Interface (CSI) and Flexvolume. They enable storage vendors to create custom storage plugins without adding them to the Kubernetes repository.
Before these extension mechanisms, all volume plugins (like the volume types listed above) were "in-tree", meaning they were built, linked, compiled, and shipped with the core Kubernetes binaries and extend the core Kubernetes API.
Container Storage Interface (CSI) defines a standard interface for container orchestration systems (like Kubernetes) to expose arbitrary storage systems to their container workloads.
Once a CSI compatible volume driver is deployed on a Kubernetes cluster, users may use the csi volume type to attach, mount, etc. the volumes exposed by the CSI driver.
The csi volume type does not support direct reference from a Pod and may only be referenced in a Pod via a PersistentVolumeClaim object.
This feature requires the CSIInlineVolume feature gate to be enabled: --feature-gates=CSIInlineVolume=true
In-tree plugins that support CSI Migration and have a corresponding CSI driver implemented are listed in the "Types of Volumes" section above.
Mount propagation allows for sharing volumes mounted by a Container to other Containers in the same Pod, or even to other Pods on the same node.
Mount propagation of a volume is controlled by the mountPropagation field in Container.volumeMounts.
HostToContainer - This volume mount will receive all subsequent mounts that are mounted to this volume or any of its subdirectories.
Bidirectional - This volume mount behaves the same as the HostToContainer mount. In addition, all volume mounts created by the Container will be propagated back to the host and to all Containers of all Pods that use the same volume.
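A sketch of setting the field on a volumeMount; the image and names are illustrative:

containers:
- name: app
  image: k8s.gcr.io/test-webserver
  volumeMounts:
  - mountPath: /mnt/shared
    name: shared-volume
    mountPropagation: HostToContainer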
Edit your Docker's systemd service file. Set MountFlags as follows: MountFlags=shared
FreeIPA DNS integration allows administrators to manage and serve DNS records in a domain using the same CLI or Web UI as when managing identities and policies.
Single-master DNS is error prone, especially for inexperienced admins.
The goal is NOT to provide a general-purpose DNS server.
The DNS component in FreeIPA is optional, and the user may choose to manage all DNS records manually in another third-party DNS server.
Clients can be configured to automatically run DNS updates (nsupdate) when their IP address changes, thus keeping their DNS records up to date. DNS zones can be configured to synchronize a client's reverse (PTR) record along with the forward (A, AAAA) DNS record.
It is extremely hard to change the DNS domain in existing installations, so it is better to think ahead.
You should only use names which are delegated to you by the parent domain.
Not respecting this rule will cause problems sooner or later, for example with DNSSEC validation!
For internal names you can use an arbitrary sub-domain in a DNS sub-tree you own, e.g. int.example.com. Always respect the rules from the previous section.
The general advice about DNS views is: do not use them, because views make DNS deployment harder to maintain and their security benefits are questionable (when compared with ACLs).
The DNS integration is based on the bind-dyndb-ldap project, which enhances the BIND name server to be able to use the FreeIPA server LDAP instance as a data backend (data are stored in the cn=dns entry, using the schema defined by bind-dyndb-ldap).
The FreeIPA LDAP directory information tree is by default accessible to any user in the network. As DNS data are often considered sensitive, and as having access to the cn=dns tree would basically be equal to being able to run a zone transfer of all FreeIPA-managed DNS zones, the contents of this tree in LDAP are hidden by default.
Errors are logged to the standard system log (/var/log/messages or the system journal). The BIND configuration (/etc/named.conf) can be updated to produce a more detailed log.
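A sketch of a named.conf logging section that raises verbosity; the channel name, file path, and sizes are illustrative:

logging {
    channel debug_log {
        file "/var/named/data/named.run" versions 3 size 20m;
        severity debug 3;
        print-time yes;
    };
    category default { debug_log; };
};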
"FreeIPA DNS integration allows administrator to manage and serve DNS records in a domain using the same CLI or Web UI as when managing identities and policies."
In default usage, terraform init downloads and installs the plugins for any providers used in the configuration automatically, placing them in a subdirectory of the .terraform directory.
This allows each configuration to potentially use different versions of plugins.
In automation environments, it can be desirable to disable this behavior and instead provide a fixed set of plugins already installed on the system where Terraform is running. This then avoids the overhead of re-downloading the plugins on each execution.
A particular consideration in automation is the desire for an interactive approval step between plan and apply.
Run terraform init -input=false to initialize the working directory.
Run terraform plan -out=tfplan -input=false to create a plan and save it to the local file tfplan.
Run terraform apply -input=false tfplan to apply the plan stored in the file tfplan.
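Put together, a minimal sketch of this sequence in a CI script (the shell wrapper is illustrative; the flags follow the steps above):

#!/bin/sh
set -e
export TF_IN_AUTOMATION=true

terraform init -input=false
terraform plan -out=tfplan -input=false
terraform apply -input=false tfplan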
When the environment variable TF_IN_AUTOMATION is set to any non-empty value, Terraform makes some minor adjustments to its output to de-emphasize specific commands to run.
It can be difficult or impossible to ensure that the plan and apply subcommands are run on the same machine, in the same directory, with all of the same files present.
It is best to allow only one plan to be outstanding at a time, forcing plans to be approved (or dismissed) in sequence.
The -auto-approve option tells Terraform not to require interactive approval of the plan before applying it.
Obtain the archive created in the previous step and extract it at the same absolute path. This re-creates everything that was present after plan, avoiding strange issues where local files were created during the plan step.
"In default usage, terraform init downloads and installs the plugins for any providers used in the configuration automatically, placing them in a subdirectory of the .terraform directory. "
In a cluster, logs should have a separate storage and lifecycle independent of nodes, pods, or containers. This concept is called cluster-level logging.
Cluster-level logging architectures require a separate backend to store, analyze, and query logs
Kubernetes does not provide a native storage solution for log data.
You can use kubectl logs --previous to retrieve logs from a previous instantiation of a container.
A container engine handles and redirects any output generated to a containerized application's stdout and stderr streams
The Docker JSON logging driver treats each line as a separate message.
By default, if a container restarts, the kubelet keeps one terminated container with its logs.
An important consideration in node-level logging is implementing log rotation, so that logs don't consume all available storage on the node.
You can also set up a container runtime to rotate an application's logs automatically.
The two kubelet flags container-log-max-size and container-log-max-files can be used to configure the maximum size for each log file and the maximum number of files allowed for each container respectively.
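A sketch of the equivalent KubeletConfiguration fields; the values are illustrative:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: 10Mi
containerLogMaxFiles: 5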
The kubelet and container runtime do not run in containers.
On machines with systemd, the kubelet and container runtime write to journald. If systemd is not present, the kubelet and container runtime write to .log files in the /var/log directory.
System components inside containers always write to the /var/log directory, bypassing the default logging mechanism.
Kubernetes does not provide a native solution for cluster-level logging
Use a node-level logging agent that runs on every node.
You can implement cluster-level logging by including a node-level logging agent on each node. The logging agent is a container that has access to a directory with log files from all of the application containers on that node.
Because the logging agent must run on every node, it is recommended to run the agent as a DaemonSet, as sketched below.
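A minimal sketch of such a DaemonSet; the agent image, names, and mount paths are illustrative:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      name: log-agent
  template:
    metadata:
      labels:
        name: log-agent
    spec:
      containers:
      - name: log-agent
        image: fluent/fluentd:v1.16-1
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log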
Node-level logging creates only one agent per node and doesn't require any changes to the applications running on the node.
Containers write stdout and stderr, but with no agreed format. A node-level agent collects these logs and forwards them for aggregation.
Each sidecar container prints a log to its own stdout or stderr stream.
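A sketch of that pattern: one container writes to a file on a shared emptyDir, and a streaming sidecar tails it to its own stdout (names, images, and paths are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)" >> /var/log/app.log; i=$((i+1)); sleep 1; done']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-log
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/app.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}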
It is not recommended to write log entries with different formats to the same log stream.
Writing logs to a file and then streaming them to stdout can double disk usage.
If you have an application that writes to a single file, it's recommended to set /dev/stdout as the destination.
It's recommended to use stdout and stderr directly and leave rotation and retention policies to the kubelet.
Using a logging agent in a sidecar container can lead to significant resource consumption. Moreover, you won't be able to access those logs using kubectl logs because they are not controlled by the kubelet.
If you use AWS, you have two load-balancing options: ELB and ALB.
An ELB is a software-based load balancer which can be set up and configured in front of a collection of AWS Elastic Compute (EC2) instances.
The load balancer serves as a single entry point for consumers of the EC2 instances and distributes incoming traffic across all machines available to receive requests.
the ELB also performs a vital role in improving the fault tolerance of the services which it fronts.
The Open Systems Interconnection Model, or OSI Model, is a conceptual model which is used to facilitate communications between different computing systems.
Layer 1 is the physical layer, and represents the physical medium across which the request is sent. Layer 2 describes the data link layer, Layer 3 the network layer, and so on up through Layer 7, which serves the application layer.
The Classic ELB operates at Layer 4. Layer 4 represents the transport layer, and is controlled by the protocol being used to transmit the request.
A network device, of which the Classic ELB is an example, reads the protocol and port of the incoming request, and then routes it to one or more backend servers.
the ALB operates at Layer 7. Layer 7 represents the application layer, and as such allows for the redirection of traffic based on the content of the request.
Whereas a request to a specific URL backed by a Classic ELB would only enable routing to a particular pool of homogeneous servers, the ALB can route based on the content of the URL, and direct to a specific subgroup of backing servers existing in a heterogeneous collection registered with the load balancer.
The Classic ELB is a simple load balancer and is easy to configure.
As organizations move towards microservice architecture or adopt a container-based infrastructure, the ability to merely map a single address to a specific service becomes more complicated and harder to maintain.
The ALB manages routing based on user-defined rules. It can route traffic to different services based on either the host or the content of the path contained within the URL.
Services are an abstract way of exposing an application running on a set of pods as a network service.
Pods are ephemeral, which means that when they die, they are not resurrected. The Kubernetes cluster creates new pods in the same node or in a new node once a pod dies.
A service provides a single point of access from outside the Kubernetes cluster and allows you to dynamically access a group of replica pods.
For internal application access within a Kubernetes cluster, ClusterIP is the preferred method
To expose a service to external network requests, NodePort, LoadBalancer, and Ingress are possible options.
Kubernetes Ingress is an API object that provides routing rules to manage external users' access to the services in a Kubernetes cluster, typically via HTTPS/HTTP.
content-based routing, support for multiple protocols, and authentication.
Ingress is made up of an Ingress API object and the Ingress Controller.
Kubernetes Ingress is an API object that describes the desired state for exposing services to the outside of the Kubernetes cluster.
An Ingress Controller reads and processes the Ingress Resource information and usually runs as pods within the Kubernetes cluster.
If Kubernetes Ingress is the API object that provides routing rules to manage external access to services, Ingress Controller is the actual implementation of the Ingress API.
The Ingress Controller is usually a load balancer for routing external traffic to your Kubernetes cluster and is responsible for L4-L7 Network Services.
Layer 7 (L7) refers to the application level of the OSI stack—external connections load-balanced across pods, based on requests.
if Kubernetes Ingress is a computer, then Ingress Controller is a programmer using the computer and taking action.
Ingress Rules are a set of rules for processing inbound HTTP traffic. An Ingress with no rules sends all traffic to a single default backend service.
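A minimal sketch of an Ingress with a single path-based rule; the host, service name, and port are illustrative:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /app
        pathType: Prefix
        backend:
          service:
            name: app-service
            port:
              number: 80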
the Ingress Controller is an application that runs in a Kubernetes cluster and configures an HTTP load balancer according to Ingress Resources.
The load balancer can be a software load balancer running in the cluster or a hardware or cloud load balancer running externally.
ClusterIP is the preferred option for internal service access and uses an internal IP address to access the service
A NodePort exposes a service on a static port number on each node's IP address.
Typically, a NodePort would be used to expose a single service (with no load-balancing requirements for multiple services).
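A sketch of a NodePort Service; the names, selector, and port numbers are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080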
Ingress enables you to consolidate the traffic-routing rules into a single resource and runs as part of a Kubernetes cluster.
An application is accessed from the Internet via Port 80 (HTTP) or Port 443 (HTTPS), and Ingress is an object that allows access to your Kubernetes services from outside the Kubernetes cluster.
To implement Ingress, you need to configure an Ingress Controller in your cluster—it is responsible for processing Ingress Resource information and allowing traffic based on the Ingress Rules.