Laravel queues provide a unified API across a variety of different queue backends, such as Beanstalk, Amazon SQS, Redis, or even a relational database.
The queue configuration file is stored in config/queue.php
A synchronous (sync) driver is included that will execute jobs immediately (for local use).
A null queue driver is also included which discards queued jobs.
In your config/queue.php configuration file, there is a connections configuration option.
Any given queue connection may have multiple "queues", which may be thought of as different stacks or piles of queued jobs.
Each connection configuration example in the queue configuration file contains a queue attribute.
If you dispatch a job without explicitly defining which queue it should be dispatched to, the job will be placed on the queue defined in the queue attribute of the connection configuration.
Pushing jobs to multiple queues can be especially useful for applications that wish to prioritize or segment how jobs are processed.
When starting a worker, you may specify which queues it should process by priority.
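For example, a worker started like this will process all jobs on the high queue before touching any jobs on the default queue (the queue names are illustrative):
php artisan queue:work --queue=high,default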
If your Redis queue connection uses a Redis Cluster, your queue names must contain a key hash tag.
This is required to ensure all of the Redis keys for a given queue are placed into the same hash slot.
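For example, in config/queue.php you can wrap the queue name in braces so it becomes a key hash tag (a sketch; adapt it to your own connection settings):

    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => '{default}',
        'retry_after' => 90,
    ],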
All of the queueable jobs for your application are stored in the app/Jobs directory.
Job classes are very simple, normally containing only a handle method which is called when the job is processed by the queue.
we were able to pass an Eloquent model directly into the queued job's constructor. Because of the SerializesModels trait that the job is using, Eloquent models will be gracefully serialized and unserialized when the job is processing.
When the job is actually handled, the queue system will automatically re-retrieve the full model instance from the database.
The handle method is called when the job is processed by the queue; you may type-hint dependencies on handle and the Laravel service container will inject them automatically.
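As a sketch, a minimal job class might look like this (ProcessPodcast and the Podcast model are illustrative names):

    <?php

    namespace App\Jobs;

    use App\Podcast;
    use Illuminate\Bus\Queueable;
    use Illuminate\Contracts\Queue\ShouldQueue;
    use Illuminate\Foundation\Bus\Dispatchable;
    use Illuminate\Queue\InteractsWithQueue;
    use Illuminate\Queue\SerializesModels;

    class ProcessPodcast implements ShouldQueue
    {
        use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

        protected $podcast;

        // The Eloquent model is serialized/unserialized gracefully via SerializesModels.
        public function __construct(Podcast $podcast)
        {
            $this->podcast = $podcast;
        }

        // Called when the job is processed by the queue.
        public function handle()
        {
            // Process the podcast...
        }
    }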
The arguments passed to the dispatch method will be given to the job's constructor
To delay the execution of a queued job, you may use the delay method when dispatching it.
To dispatch a job immediately (synchronously), you may use the dispatchNow method.
When using this method, the job will not be queued and will be run immediately within the current process
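A few dispatch sketches, using the hypothetical ProcessPodcast job from above:

    // Queue the job; constructor arguments are passed through dispatch.
    ProcessPodcast::dispatch($podcast);

    // Delay processing by 10 minutes.
    ProcessPodcast::dispatch($podcast)->delay(now()->addMinutes(10));

    // Run immediately in the current process instead of queueing.
    ProcessPodcast::dispatchNow($podcast);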
Job chaining allows you to specify a list of queued jobs that should be run in sequence.
Deleting jobs using the $this->delete() method will not prevent chained jobs from being processed. The chain will only stop executing if a job in the chain fails.
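A chaining sketch, with hypothetical job names:

    // OptimizePodcast and ReleasePodcast run in sequence after ProcessPodcast succeeds.
    ProcessPodcast::withChain([
        new OptimizePodcast,
        new ReleasePodcast,
    ])->dispatch();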
Note that this does not push jobs to different queue "connections" as defined by your queue configuration file, but only to specific queues within a single connection.
To specify the queue, use the onQueue method when dispatching the job
To specify the connection, use the onConnection method when dispatching the job
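For example (the queue and connection names are illustrative):

    // Push onto the "processing" queue of the default connection.
    ProcessPodcast::dispatch($podcast)->onQueue('processing');

    // Push onto a specific connection defined in config/queue.php.
    ProcessPodcast::dispatch($podcast)->onConnection('sqs');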
You may define the maximum number of attempts on the job class itself.
As an alternative to defining how many times a job may be attempted before it fails, you may define a time at which the job should timeout.
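A sketch of both options on the job class (the values are illustrative):

    // Maximum number of times the job may be attempted.
    public $tries = 5;

    // Alternatively, keep retrying the job until this point in time.
    public function retryUntil()
    {
        return now()->addMinutes(5);
    }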
Using the funnel method, you may limit jobs of a given type to only be processed by one worker at a time.
Using the throttle method, you may throttle a given type of job to only run 10 times every 60 seconds.
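For example, inside a job's handle method (assuming the Redis facade is imported; the lock key and release delay are illustrative):

    // Allow only one worker at a time to process jobs of this type.
    Redis::funnel('process-podcast')->limit(1)->then(function () {
        // Job logic...
    }, function () {
        // Could not obtain the lock; release the job back onto the queue.
        return $this->release(10);
    });

    // Or: allow this job type to run 10 times every 60 seconds.
    Redis::throttle('process-podcast')->allow(10)->every(60)->then(function () {
        // Job logic...
    }, function () {
        return $this->release(10);
    });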
If an exception is thrown while the job is being processed, the job will automatically be released back onto the queue so it may be attempted again.
You may also dispatch a Closure. This is great for quick, simple tasks that need to be executed outside of the current request cycle.
When dispatching Closures to the queue, the Closure's code content is cryptographically signed so it cannot be modified in transit.
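For example (the $podcast variable is illustrative):

    dispatch(function () use ($podcast) {
        $podcast->publish();
    });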
Laravel includes a queue worker that will process new jobs as they are pushed onto the queue.
Once the queue:work command has started, it will continue to run until it is manually stopped or you close your terminal.
Queue workers are long-lived processes and store the booted application state in memory.
As a result, they will not notice changes in your code base after they have been started.
So, during your deployment process, be sure to restart your queue workers.
You may customize your queue worker even further by only processing particular queues for a given connection.
The --once option may be used to instruct the worker to only process a single job from the queue
The --stop-when-empty option may be used to instruct the worker to process all jobs and then exit gracefully.
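For example (the connection and queue names are illustrative):
php artisan queue:work redis --queue=emails
php artisan queue:work --once
php artisan queue:work --stop-when-empty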
Daemon queue workers do not "reboot" the framework before processing each job.
Therefore, you should free any heavy resources after each job completes.
Since queue workers are long-lived processes, they will not pick up changes to your code without being restarted.
Restart the workers during your deployment process:
php artisan queue:restart
The queue uses the cache to store restart signals, so you should verify that a cache driver is properly configured before using this feature.
Since the queue workers will die when the queue:restart command is executed, you should be running a process manager such as Supervisor to automatically restart them.
Each queue connection defines a retry_after option. This option specifies how many seconds the queue connection should wait before retrying a job that is being processed.
The --timeout option specifies how long the Laravel queue master process will wait before killing off a child queue worker that is processing a job.
When jobs are available on the queue, the worker will keep processing jobs with no delay in between them.
While sleeping, the worker will not process any new jobs - the jobs will be processed after the worker wakes up again
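To tie these options together, keep --timeout at least several seconds shorter than retry_after so a worker is always killed before its job is retried; for example, with retry_after set to 90 in config/queue.php (values are illustrative):
php artisan queue:work --timeout=60 --sleep=3 --tries=3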
The numprocs directive will instruct Supervisor to run 8 queue:work processes and monitor all of them, automatically restarting them if they fail.
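A minimal Supervisor program sketch, following the shape of the typical Laravel worker config (the paths, user, and connection are assumptions to adapt):

    [program:laravel-worker]
    process_name=%(program_name)s_%(process_num)02d
    command=php /home/forge/app.com/artisan queue:work sqs --sleep=3 --tries=3
    autostart=true
    autorestart=true
    user=forge
    numprocs=8
    redirect_stderr=true
    stdout_logfile=/home/forge/app.com/worker.log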
Laravel includes a convenient way to specify the maximum number of times a job should be attempted.
You may define a failed method directly on your job class, allowing you to perform job-specific clean-up when a failure occurs.
This is a great opportunity to notify your team via email or Slack.
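A sketch of such a failed method on the job class (depending on your Laravel version the exception may be type-hinted as Exception or Throwable):

    // Called when the job has failed after exhausting its attempts.
    public function failed(Exception $exception)
    {
        // Send user notification of failure, alert the team, etc...
    }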
php artisan queue:retry all
php artisan queue:flush
When injecting an Eloquent model into a job, it is automatically serialized before being placed on the queue and restored when the job is processed
Redis replication allows slave Redis instances to be exact copies of master instances.
The slave will automatically reconnect to the master every time the link breaks, and will attempt to be an exact copy of it regardless of what happens to the master.
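Replication is configured on the replica side; a minimal redis.conf sketch (the master address is an example):

    # On the slave/replica instance:
    slaveof 192.0.2.10 6379
    # Newer Redis versions also accept the equivalent replicaof directive.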
A build system, git, and development headers for many popular libraries, so that the most popular Ruby, Python and Node.js native extensions can be compiled without problems.
Nginx 1.18. Disabled by default.
Phusion Passenger adds useful production-grade features, such as process monitoring, administration and status inspection.
Redis 5.0. Not installed by default.
The image has an app user with UID 9999 and home directory /home/app. Your application is supposed to run as this user.
Running applications without root privileges is good security practice.
Your application should be placed inside /home/app.
Use COPY --chown=app:app when copying your application into the image so the files are owned by the app user.
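A Dockerfile sketch (the base image tag and directory are examples):

    FROM phusion/passenger-full:latest
    # Copy the application into the app user's home directory, owned by the app user.
    COPY --chown=app:app . /home/app/webapp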
Passenger works like a mod_ruby, mod_nodejs, etc. It changes Nginx into an application server and runs your app from Nginx.
You can add a virtual host entry by placing a .conf file in the directory /etc/nginx/sites-enabled.
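A sketch of such a virtual host file (the server name and paths are examples):

    # /etc/nginx/sites-enabled/webapp.conf
    server {
        listen 80;
        server_name www.example.com;
        root /home/app/webapp/public;
        passenger_enabled on;
        passenger_user app;
    }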
The best way to configure Nginx is by adding .conf files to /etc/nginx/main.d and /etc/nginx/conf.d
Files in conf.d are included in the Nginx configuration's http context.
Any environment variables you set with docker run -e, Docker linking and /etc/container_environment won't reach Nginx.
To preserve these variables, place an Nginx config file ending with *.conf in the directory /etc/nginx/main.d, in which you tell Nginx to preserve these variables.
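For example, a file such as /etc/nginx/main.d/secret_key.conf (the file and variable names are examples) only needs one directive per variable:

    # Tell Nginx/Passenger to preserve this variable from the container environment.
    env SECRET_KEY;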
By default, Phusion Passenger sets environment variables such as RAILS_ENV, RACK_ENV, and NODE_ENV to the value production.
Setting these environment variables yourself (e.g. using docker run -e RAILS_ENV=...) will not have any effect, because Phusion Passenger overrides all of these environment variables.
You can change this by setting the PASSENGER_APP_ENV environment variable.
passenger-docker autogenerates an Nginx configuration file (/etc/nginx/conf.d/00_app_env.conf) during container boot.
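For example, to run the application in the staging environment (the image name is a placeholder):
docker run -e PASSENGER_APP_ENV=staging your-image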
The configuration file is in /etc/redis/redis.conf. Modify it as you see fit, but make sure daemonize no is set.
You can add additional daemons to the image by creating runit entries.
The shell script must be called run and must be executable.
The script must run the daemon without letting it daemonize/fork itself.
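A sketch of a runit entry for a hypothetical memcached daemon (setuser is provided by the base image):

    #!/bin/sh
    # /etc/service/memcached/run -- must be executable.
    # Run the daemon in the foreground; runit supervises it, so it must not daemonize.
    exec /sbin/setuser app /usr/bin/memcached >> /var/log/memcached.log 2>&1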
We use RVM to install and to manage Ruby interpreters.
In this case, the destination is Elasticsearch. Because Elasticsearch can be down or struggling, or the network can be down, the shipper would ideally be able to buffer and retry.
Logstash is typically used for collecting, parsing, and storing logs for future use as part of log management.
Logstash’s biggest con or “Achilles’ heel” has always been performance and resource consumption (the default heap size is 1GB).
This can be a problem for high-traffic deployments, where Logstash servers would need to be comparable in size with the Elasticsearch ones.
Filebeat was made to be that lightweight log shipper that pushes to Logstash or Elasticsearch.
The main differences between Logstash and Filebeat are that Logstash has more functionality, while Filebeat uses fewer resources.
Filebeat is just a tiny binary with no dependencies.
For example, how aggressive it should be in searching for new files to tail, and when to close file handles for a file that hasn’t changed in a while.
For example, the apache module will point Filebeat to default access.log and error.log paths
Filebeat’s scope is very limited.
Initially it could only send logs to Logstash and Elasticsearch, but now it can send to Kafka and Redis, and in 5.x it also gains filtering capabilities.
Filebeat can parse JSON
you can push directly from Filebeat to Elasticsearch, and have Elasticsearch do both parsing and storing.
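A minimal filebeat.yml sketch for that setup (the paths are examples; option names can vary slightly between Filebeat versions):

    filebeat.inputs:
      - type: log
        paths:
          - /var/log/myapp/*.json
        json.keys_under_root: true   # parse each line as a JSON object

    output.elasticsearch:
      hosts: ["localhost:9200"]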
You shouldn’t need a buffer when tailing files because, just like Logstash, Filebeat remembers where it left off.
For larger deployments, you’d typically use Kafka as a queue instead, because Filebeat can talk to Kafka as well
The default syslog daemon on most Linux distros, rsyslog can do so much more than just picking logs from the syslog socket and writing to /var/log/messages.
It can tail files, parse them, buffer (on disk and in memory) and ship to a number of destinations, including Elasticsearch.
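A sketch of that kind of pipeline in rsyslog's newer config syntax (module and parameter names should be checked against your rsyslog version; the file path is an example):

    module(load="imfile")            # tail files
    module(load="omelasticsearch")   # ship to Elasticsearch

    input(type="imfile" File="/var/log/myapp.log" Tag="myapp:")

    action(type="omelasticsearch"
           server="localhost"
           serverport="9200"
           bulkmode="on"
           queue.type="linkedlist"    # in-memory buffering
           queue.filename="es_queue"  # spill the queue to disk when needed
           action.resumeRetryCount="-1")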
rsyslog is the fastest shipper
Its grammar-based parsing module (mmnormalize) works at constant speed no matter the number of rules (we tested this claim).
If you use it as a simple router/shipper, any decent machine will be limited by network bandwidth.
It’s also one of the lightest parsers you can find, depending on the configured memory buffers.
rsyslog requires more work to get the configuration right
The main difference between Logstash and rsyslog is that Logstash is easier to use, while rsyslog is lighter.
rsyslog fits well in scenarios where you either need something very light yet capable (an appliance, a small VM, collecting syslog from within a Docker container).
rsyslog also works well when you need that ultimate performance.
You can use syslog-ng as an alternative to rsyslog (though historically it was actually the other way around).
It is a modular syslog daemon that can do much more than just syslog.
Unlike rsyslog, it features a clear, consistent configuration format and has nice documentation.
Similarly to rsyslog, you’d probably want to deploy syslog-ng on boxes where resources are tight, yet you do want to perform potentially complex processing.
syslog-ng has an easier, more polished feel than rsyslog, but likely not that ultimate performance
Fluentd was built on the idea of logging in JSON wherever possible (which is a practice we totally agree with) so that log shippers down the line don’t have to guess which substring is which field of which type.
Fluentd plugins are in Ruby and very easy to write.
Though you can get structured data through Fluentd, it’s not made to have the flexibility of other shippers on this list (Filebeat excluded).
There’s also Fluent Bit, which is to Fluentd what Filebeat is to Logstash.
Fluentd is a good fit when you have diverse or exotic sources and destinations for your logs, because of the number of plugins.
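A minimal Fluentd sketch for tailing a JSON log file and sending it to Elasticsearch (requires the fluent-plugin-elasticsearch plugin; the paths and tag are examples):

    <source>
      @type tail
      path /var/log/myapp.log
      pos_file /var/log/td-agent/myapp.pos
      tag myapp
      <parse>
        @type json
      </parse>
    </source>

    <match myapp>
      @type elasticsearch
      host localhost
      port 9200
      logstash_format true
    </match>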
Splunk isn’t a log shipper; it’s a commercial logging solution.
Graylog is another complete logging solution, an open-source alternative to Splunk.
Everything goes through graylog-server, from authentication to queries.
Graylog is nice because you have a complete logging solution, but it’s going to be harder to customize than an ELK stack.