Laravel ships with a built-in queue system that helps you run tasks in the background.
The QueueManager is registered in the service container, and it knows how to connect to the different built-in queue drivers.
For example, when we call the Queue::push() method, the manager selects the desired queue driver, connects to it, and calls the push method on that driver.
"The Deep Learning textbook is a resource intended to help students and practitioners enter the field of machine learning in general and deep learning in particular. The online version of the book is now complete and will remain available online for free.
The deep learning textbook can now be pre-ordered on Amazon. Pre-orders should ship on December 16, 2016.
For up to date announcements, join our mailing list."
"Torus is an open source project for distributed storage coordinated through etcd.
Torus provides a resource pool and basic file primitives from a set of daemons running atop multiple nodes. These primitives are made consistent by being append-only and coordinated by etcd. From these primitives, a Torus server can support multiple types of volumes, the semantics of which can be broken into subprojects. It ships with a simple block-device volume plugin, but is extensible to more."
DevOps is a set of practices that automates the processes between software development and IT teams, in order that they can build, test, and release software faster and more reliably.
increased trust, faster software releases, the ability to solve critical issues quickly, and better management of unplanned work.
bringing together the best of software development and IT operations.
a firm handshake between development and operations
DevOps isn’t magic, and transformations don’t happen overnight.
Infrastructure as code
Culture is the #1 success factor in DevOps.
Building a culture of shared responsibility, transparency and faster feedback is the foundation of every high performing DevOps team.
'not our problem' mentality
DevOps is that change in mindset of looking at the development process holistically and breaking down the barrier between Dev and Ops.
Speed is everything.
A lack of automated test and review cycles blocks releases to production, and poor incident response time kills velocity and team confidence.
Open communication helps Dev and Ops teams swarm on issues, fix incidents, and unblock the release pipeline faster.
Unplanned work is a reality that every team faces–a reality that most often impacts team productivity.
“cross-functional collaboration.”
All the tooling and automation in the world are useless if they aren’t accompanied by a genuine desire on the part of development and IT/Ops professionals to work together.
DevOps doesn’t solve tooling problems. It solves human problems.
Forming project- or product-oriented teams to replace function-based teams is a step in the right direction.
sharing a common goal and having a plan to reach it together
join sprint planning sessions, daily stand-ups, and sprint demos.
DevOps culture across every department
open channels of communication, and talk regularly
continuous delivery: the practice of running each code change through a gauntlet of automated tests, often facilitated by cloud-based infrastructure, then packaging up successful builds and promoting them up toward production using automated deploys.
automated deploys alert IT/Ops to server “drift” between environments, which reduces or eliminates surprises when it’s time to release.
“configuration as code.”
when DevOps uses automated deploys to send thoroughly tested code to identically provisioned environments, “Works on my machine!” becomes irrelevant.
A DevOps mindset sees opportunities for continuous improvement everywhere.
regular retrospectives
A/B testing
failure is inevitable. So you might as well set up your team to absorb it, recover, and learn from it (some call this “being anti-fragile”).
Postmortems focus on where processes fell down and how to strengthen them – not on which team member f'ed up the code.
Our engineers are responsible for QA, writing and running their own tests to get the software out to customers.
How long did it take to go from development to deployment?
How long does it take to recover after a system failure?
service level agreements (SLAs)
Devops isn't any single person's job. It's everyone's job.
DevOps is big on the idea that the same people who build an application should be involved in shipping and running it.
developers and operators pair with each other in each phase of the application’s lifecycle.
Kubernetes supports many types of volumes, and a Pod can use any number of them simultaneously.
To use a volume, a Pod specifies what volumes to provide for the Pod (the .spec.volumes field) and where to mount those into Containers (the .spec.containers.volumeMounts field).
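A minimal sketch of how those two fields fit together (the Pod name, image, volume name, and mount path below are placeholders, and emptyDir stands in for any volume type):

apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
  - name: app
    image: busybox
    # .spec.containers.volumeMounts: where the volume appears inside this container
    volumeMounts:
    - name: cache-volume
      mountPath: /cache
  # .spec.volumes: the volumes made available to the Pod
  volumes:
  - name: cache-volume
    emptyDir: {}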
A process in a container sees a filesystem view composed from its Docker image and volumes.
Volumes cannot mount onto other volumes or have hard links to other volumes.
Each Container in the Pod must independently specify where to mount each volume.
local
nfs
cephfs
awsElasticBlockStore
glusterfs
vsphereVolume
An awsElasticBlockStore volume mounts an Amazon Web Services (AWS) EBS volume into your Pod.
When a Pod is removed, the contents of an EBS volume are preserved and the volume is merely unmounted.
an EBS volume can be pre-populated with data, and that data can be "handed off" between Pods.
create an EBS volume using aws ec2 create-volume
the nodes on which Pods are running must be AWS EC2 instances
EBS only supports a single EC2 instance mounting a volume
check that the size and EBS volume type are suitable for your use!
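A sketch of a Pod using such a volume (the Pod name, image, mount path, and volume ID are placeholders; the volume ID would come from aws ec2 create-volume):

apiVersion: v1
kind: Pod
metadata:
  name: ebs-demo
spec:
  containers:
  - name: app
    image: busybox
    volumeMounts:
    - name: ebs-volume
      mountPath: /data
  volumes:
  - name: ebs-volume
    awsElasticBlockStore:
      # ID of a pre-created EBS volume
      volumeID: vol-0123456789abcdef0
      fsType: ext4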
A cephfs volume allows an existing CephFS volume to be mounted into your Pod.
When a Pod is removed, the contents of a cephfs volume are preserved and the volume is merely unmounted.
A Container using a ConfigMap as a subPath volume mount will not receive ConfigMap updates.
An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node.
When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever.
By default, emptyDir volumes are stored on whatever medium is backing the node - that might be disk or SSD or network storage, depending on your environment.
you can set the emptyDir.medium field to "Memory" to tell Kubernetes to mount a tmpfs (RAM-backed filesystem)
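A sketch of the volumes stanza for a RAM-backed emptyDir (the volume name is a placeholder):

volumes:
- name: scratch
  emptyDir:
    # back the volume with tmpfs instead of the node's default storage medium
    medium: Memory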
An fc volume allows an existing fibre channel volume to be mounted in a Pod.
configure FC SAN Zoning to allocate and mask those LUNs (volumes) to the target WWNs beforehand so that Kubernetes hosts can access them.
Flocker is an open-source clustered Container data volume manager. It provides management and orchestration of data volumes backed by a variety of storage backends.
A flocker volume allows a Flocker dataset to be mounted into a Pod
have your own Flocker installation running
A gcePersistentDisk volume mounts a Google Compute Engine (GCE) Persistent Disk into your Pod.
Using a PD on a Pod controlled by a ReplicationController will fail unless the PD is read-only or the replica count is 0 or 1
A glusterfs volume allows a Glusterfs (an open source networked filesystem) volume to be mounted into your Pod.
have your own GlusterFS installation running
A hostPath volume mounts a file or directory from the host node's filesystem into your Pod.
a powerful escape hatch for some applications
access to Docker internals; use a hostPath of /var/lib/docker
allowing a Pod to specify whether a given hostPath should exist prior to the Pod running, whether it should be created, and what it should exist as
specify a type for a hostPath volume
the files or directories created on the underlying hosts are only writable by root.
hostPath:
  # directory location on host
  path: /data
  # this field is optional
  type: Directory
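Wired into a full Pod spec, that stanza looks roughly like this (the Pod name, image, and mount path are placeholders; DirectoryOrCreate is one of the supported type values and creates the host directory if it is missing):

apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  containers:
  - name: app
    image: busybox
    volumeMounts:
    - name: host-data
      mountPath: /host-data
  volumes:
  - name: host-data
    hostPath:
      path: /data
      # create /data on the host if it does not already exist
      type: DirectoryOrCreate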
An iscsi volume allows an existing iSCSI (SCSI over IP) volume to be mounted into your Pod.
have your own iSCSI server running
A feature of iSCSI is that it can be mounted as read-only by multiple consumers simultaneously.
A local volume represents a mounted local storage device such as a disk, partition or directory.
Local volumes can only be used as a statically created PersistentVolume.
Compared to hostPath volumes, local volumes can be used in a durable and portable manner without manually scheduling Pods to nodes, as the system is aware of the volume's node constraints by looking at the node affinity on the PersistentVolume.
If a node becomes unhealthy, then the local volume will also become inaccessible, and a Pod using it will not be able to run.
PersistentVolume spec using a local volume and nodeAffinity
PersistentVolume nodeAffinity is required when using local volumes. It enables the Kubernetes scheduler to correctly schedule Pods using local volumes to the correct node.
PersistentVolume volumeMode can now be set to "Block" (instead of the default value "Filesystem") to expose the local volume as a raw block device.
When using local volumes, it is recommended to create a StorageClass with volumeBindingMode set to WaitForFirstConsumer
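A sketch of both pieces together (the names, capacity, device path, and hostname value are placeholders):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
# delay binding until a Pod using the claim is scheduled
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-demo
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  # required: pin the volume to the node that actually has the disk
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node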
An nfs volume allows an existing NFS (Network File System) share to be mounted into your Pod.
NFS can be mounted by multiple writers simultaneously.
have your own NFS server running with the share exported
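A sketch of the volumes stanza, assuming an NFS server and export like the placeholders below:

volumes:
- name: nfs-volume
  nfs:
    # address of the NFS server and the exported path
    server: 10.0.0.5
    path: /exports
    readOnly: false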
A persistentVolumeClaim volume is used to mount a PersistentVolume into a Pod.
PersistentVolumes are a way for users to "claim" durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment.
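A sketch of a claim and a Pod that mounts it (names, sizes, and paths are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo
spec:
  containers:
  - name: app
    image: busybox
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-claim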
A projected volume maps several existing volume sources into the same directory.
All sources are required to be in the same namespace as the Pod. For more details, see the all-in-one volume design document.
Each projected volume source is listed in the spec under sources
A Container using a projected volume source as a subPath volume mount will not receive updates for those volume sources.
RBD volumes can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed
A secret volume is used to pass sensitive information, such as passwords, to Pods
store secrets in the Kubernetes API and mount them as files for use by Pods
secret volumes are backed by tmpfs (a RAM-backed filesystem) so they are never written to non-volatile storage.
create a secret in the Kubernetes API before you can use it
A Container using a Secret as a subPath volume mount will not receive Secret updates.
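A sketch of the volumes stanza; my-secret is a placeholder for a Secret that already exists in the API:

volumes:
- name: secret-volume
  secret:
    # name of an existing Secret object whose keys become files in the volume
    secretName: my-secret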
StorageOS runs as a Container within your Kubernetes environment, making local or attached storage accessible from any node within the Kubernetes cluster.
Data can be replicated to protect against node failure. Thin provisioning and compression can improve utilization and reduce cost.
StorageOS provides block storage to Containers, accessible via a file system.
A vsphereVolume is used to mount a vSphere VMDK Volume into your Pod.
supports both VMFS and VSAN datastores.
create a VMDK using one of the following methods before using it with a Pod.
share one volume for multiple uses in a single Pod.
The volumeMounts.subPath property can be used to specify a sub-path inside the referenced volume instead of its root.
Use the subPathExpr field to construct subPath directory names from Downward API environment variables
enable the VolumeSubpathEnvExpansion feature gate
The subPath and subPathExpr properties are mutually exclusive.
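A sketch of two containers sharing one volume via subPath (the Pod name, images, claim name, and paths are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: site-data
      mountPath: /var/lib/app
      # only the "data" subdirectory of the volume is mounted here
      subPath: data
  - name: web
    image: nginx
    volumeMounts:
    - name: site-data
      mountPath: /usr/share/nginx/html
      # a different subdirectory of the same volume
      subPath: html
  volumes:
  - name: site-data
    persistentVolumeClaim:
      claimName: site-data-claim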
There is no limit on how much space an emptyDir or hostPath volume can consume, and no isolation between Containers or between Pods.
In the future, emptyDir and hostPath volumes will be able to request a certain amount of space using a resource specification, and to select the type of media to use, for clusters that have several media types.
Out-of-tree volume plugins include the Container Storage Interface (CSI) and Flexvolume. They enable storage vendors to create custom storage plugins without adding them to the Kubernetes repository.
Before CSI and Flexvolume, all volume plugins (like the volume types listed above) were "in-tree", meaning they were built, linked, compiled, and shipped with the core Kubernetes binaries and extended the core Kubernetes API.
Container Storage Interface (CSI) defines a standard interface for container orchestration systems (like Kubernetes) to expose arbitrary storage systems to their container workloads.
Once a CSI compatible volume driver is deployed on a Kubernetes cluster, users may use the csi volume type to attach, mount, etc. the volumes exposed by the CSI driver.
The csi volume type does not support direct reference from Pod and may only be referenced in a Pod via a PersistentVolumeClaim object.
This feature requires the CSIInlineVolume feature gate to be enabled: --feature-gates=CSIInlineVolume=true
In-tree plugins that support CSI Migration and have a corresponding CSI driver implemented are listed in the "Types of Volumes" section above.
Mount propagation allows for sharing volumes mounted by a Container to other Containers in the same Pod, or even to other Pods on the same node.
Mount propagation of a volume is controlled by the mountPropagation field in Container.volumeMounts.
HostToContainer - This volume mount will receive all subsequent mounts that are mounted to this volume or any of its subdirectories.
Bidirectional - This volume mount behaves the same as the HostToContainer mount. In addition, all volume mounts created by the Container will be propagated back to the host and to all Containers of all Pods that use the same volume.
Edit Docker's systemd service file. Set MountFlags as follows: MountFlags=shared
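A sketch of how the field is set on a container's volumeMounts (the container name, image, volume name, and path are placeholders):

containers:
- name: app
  image: busybox
  volumeMounts:
  - name: shared-data
    mountPath: /data
    # receive mounts made on the host under this volume after the container starts
    mountPropagation: HostToContainer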
In this case, the destination is Elasticsearch. And because Elasticsearch can be down or struggling, or the network can be down, the shipper would ideally be able to buffer and retry.
Logstash is typically used for collecting, parsing, and storing logs for future use as part of log management.
Logstash's biggest con or "Achilles' heel" has always been performance and resource consumption (the default heap size is 1GB).
This can be a problem for high traffic deployments, where Logstash servers would need to be comparable with the Elasticsearch ones.
Filebeat was made to be that lightweight log shipper that pushes to Logstash or Elasticsearch.
The main differences between Logstash and Filebeat are that Logstash has more functionality, while Filebeat uses fewer resources.
Filebeat is just a tiny binary with no dependencies.
For example, you can tune how aggressive it should be in searching for new files to tail, and when to close file handles if a file hasn't changed for a while.
For example, the apache module will point Filebeat to default access.log and error.log paths
Filebeat's scope is very limited: initially it could only send logs to Logstash and Elasticsearch, but now it can send to Kafka and Redis, and in 5.x it also gains filtering capabilities.
Filebeat can parse JSON
you can push directly from Filebeat to Elasticsearch, and have Elasticsearch do both parsing and storing.
You shouldn't need a buffer when tailing files because, just like Logstash, Filebeat remembers where it left off.
For larger deployments, you’d typically use Kafka as a queue instead, because Filebeat can talk to Kafka as well
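A minimal filebeat.yml sketch of that setup (paths, hosts, and the topic name are placeholders; configs from the 5.x era use filebeat.prospectors where newer versions use filebeat.inputs):

filebeat.inputs:
- type: log
  paths:
    - /var/log/app/*.log
  # parse each line as JSON so the fields arrive already structured
  json.keys_under_root: true

# ship directly to Elasticsearch and let it do the parsing and storing...
output.elasticsearch:
  hosts: ["localhost:9200"]

# ...or, for larger deployments, point Filebeat at Kafka instead:
#output.kafka:
#  hosts: ["kafka:9092"]
#  topic: "logs"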
The default syslog daemon on most Linux distros, rsyslog can do so much more than just picking logs from the syslog socket and writing to /var/log/messages.
It can tail files, parse them, buffer (on disk and in memory) and ship to a number of destinations, including Elasticsearch.
rsyslog is the fastest shipper
Its grammar-based parsing module (mmnormalize) works at constant speed no matter the number of rules (we tested this claim).
If you use it as a simple router/shipper, any decent machine will be limited by network bandwidth.
It’s also one of the lightest parsers you can find, depending on the configured memory buffers.
rsyslog requires more work to get the configuration right
the main difference between Logstash and rsyslog is that Logstash is easier to use, while rsyslog is lighter.
rsyslog fits well in scenarios where you need something very light yet capable (an appliance, a small VM, collecting syslog from within a Docker container).
rsyslog also works well when you need that ultimate performance.
syslog-ng is an alternative to rsyslog (though historically it was actually the other way around).
It is a modular syslog daemon that can do much more than just syslog.
Unlike rsyslog, it features a clear, consistent configuration format and has nice documentation.
Similarly to rsyslog, you’d probably want to deploy syslog-ng on boxes where resources are tight, yet you do want to perform potentially complex processing.
syslog-ng has an easier, more polished feel than rsyslog, but likely not that ultimate performance
Fluentd was built on the idea of logging in JSON wherever possible (which is a practice we totally agree with) so that log shippers down the line don’t have to guess which substring is which field of which type.
Fluentd plugins are in Ruby and very easy to write.
While you can push structured data through Fluentd, it's not made to have the flexibility of other shippers on this list (Filebeat excluded).
There is also Fluent Bit, which is to Fluentd what Filebeat is to Logstash.
Fluentd is a good fit when you have diverse or exotic sources and destinations for your logs, because of the number of plugins.
Splunk isn't a log shipper; it's a commercial logging solution.
Graylog is another complete logging solution, an open-source alternative to Splunk.
everything goes through graylog-server, from authentication to queries.
Graylog is nice because you have a complete logging solution, but it’s going to be harder to customize than an ELK stack.
there is a lengthy suite of tests and checks that run before it is deployed to staging. During this period, which could end up being hours, engineers will likely pick up another task. I’ve seen people merge, and then forget that their changes are on staging, more times than I can count.
only merge code that is ready to go live
This means we have written sufficient tests and validated our changes in development.
All branches are cut from main, and all changes get merged back into main.
If we ever have an issue in production, we always roll forward.
Feature flags can be enabled on a per-user basis so we can monitor performance and gather feedback
Experimental features can be enabled by users in their account settings.
we have monitoring, logging, and alarms around all of our services. We also blue/green deploy, by draining and replacing a percentage of containers.
Dropping your staging environment in favour of true continuous integration and deployment can create a different mindset for shipping software.