automatically discover any services on the Docker host and let Træfik reconfigure itself automatically when containers get created (or shut down) so HTTP traffic can be routed accordingly.
use Træfik as a layer-7 load balancer with SSL termination for a set of micro-services used to run a web application.
Docker containers can only communicate with each other over TCP when they share at least one network.
Under the hood, Docker creates iptables rules so containers can't reach other containers unless you want them to.
Træfik can listen to Docker events and reconfigure its own internal configuration when containers are created (or shut down).
Enable the Docker provider and listen for container events on the Docker unix socket we mounted earlier.
Enable automatic request and configuration of SSL certificates using Let's Encrypt.
These certificates will be stored in the acme.json file, which you can back up yourself and store off-premises.
apart from Træfik itself, not a single container publishes any ports to the host -- everything is routed through Docker networks.
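As a rough sketch (assuming Træfik 1.7, a pre-created external web network, and placeholder e-mail values), the Træfik service itself could be declared in docker-compose roughly like this. Only this container publishes ports 80/443, and acme.json has to exist with 600 permissions before starting:

    version: "3"

    services:
      traefik:
        image: traefik:1.7
        command:
          - --defaultEntryPoints=http,https
          - "--entryPoints=Name:http Address::80"
          - "--entryPoints=Name:https Address::443 TLS"
          - --docker                          # enable the Docker provider
          - --docker.watch                    # listen for container events
          - --docker.exposedByDefault=false   # only route explicitly enabled containers
          - --acme                            # Let's Encrypt
          - --acme.email=you@example.com
          - --acme.storage=acme.json
          - --acme.entryPoint=https
          - --acme.httpChallenge.entryPoint=http
          - --acme.onHostRule=true
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
          - ./acme.json:/acme.json
        networks:
          - web

    networks:
      web:
        external: true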
Thanks to Docker labels, we can tell Træfik how to create its internal routing configuration.
container labels and service labels
With the traefik.enable label, we tell Træfik to include this container in its internal configuration.
tell Træfik to use the web network to route HTTP traffic to this container.
Service labels allow managing many routes for the same container.
When both container labels and service labels are defined, container labels are only used as default values for missing service labels; no frontend/backend is created from container labels alone.
In the example, two service names are defined: basic and admin.
They allow creating two frontends and two backends.
Always specify the correct port where the container expects HTTP traffic using the traefik.port label.
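For example (a sketch assuming Træfik 1.x service-label syntax, a hypothetical app image, and placeholder hostnames), an application container in the same compose file could carry labels like these:

    app:
      image: my-app                       # hypothetical application image
      networks:
        - web
      labels:
        - traefik.enable=true
        - traefik.docker.network=web
        # "basic" service: the public site served on container port 80
        - traefik.basic.frontend.rule=Host:app.example.com
        - traefik.basic.port=80
        # "admin" service: the back-office served on container port 8080
        - traefik.admin.frontend.rule=Host:admin.app.example.com
        - traefik.admin.port=8080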
all containers that are placed in the same network as Træfik will automatically be reachable from the outside world
With the traefik.frontend.auth.basic label, it's possible for Træfik to provide an HTTP basic-auth challenge for the endpoints you provide the label for.
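The value is the usual htpasswd format; something like htpasswd -nb admin s3cr3t generates it (the hash below is a placeholder). In a docker-compose file the $ characters of the hash have to be escaped as $$, and with service labels the same setting should be scopable per service (e.g. traefik.admin.frontend.auth.basic):

    labels:
      - traefik.frontend.auth.basic=admin:$$apr1$$placeholderhash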
The objective of Let’s Encrypt and the ACME protocol is to make it possible to set up an HTTPS server and have it automatically obtain a browser-trusted certificate, without any human intervention.
First, the agent proves to the CA that the web server controls a domain.
Then, the agent can request, renew, and revoke certificates for that domain.
The first time the agent software interacts with Let’s Encrypt, it generates a new key pair and proves to the Let’s Encrypt CA that the server controls one or more domains.
The Let’s Encrypt CA will look at the domain name being requested and issue one or more sets of challenges
different ways that the agent can prove control of the domain
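For instance, with the HTTP-01 challenge the CA hands the agent a token and then checks that a key authorization derived from it is served over plain HTTP on the domain; a rough illustration with placeholder values:

    # the agent publishes the key authorization under the well-known path
    echo "<TOKEN>.<ACCOUNT_KEY_THUMBPRINT>" > /var/www/html/.well-known/acme-challenge/<TOKEN>

    # the CA then fetches it to verify control of the domain
    curl http://example.com/.well-known/acme-challenge/<TOKEN>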
Once the agent has an authorized key pair, requesting, renewing, and revoking certificates is simple—just send certificate management messages and sign them with the authorized key pair.
A machine image is a single static unit that contains a pre-configured
operating system and installed software which is used to quickly create new
running machines.
"A machine image is a single static unit that contains a pre-configured operating system and installed software which is used to quickly create new running machines."
merged configuration is stored on disk in the .terraform
directory, which should be ignored from version control.
When using partial configuration, Terraform requires at a minimum that
an empty backend configuration is specified in one of the root Terraform
configuration files, to specify the backend type.
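A minimal sketch, assuming an S3 backend and placeholder bucket/key values: the root module only declares the backend type,

    # backend.tf -- only the backend type is declared here
    terraform {
      backend "s3" {}
    }

and the remaining settings are supplied at init time:

    terraform init \
      -backend-config="bucket=my-state-bucket" \
      -backend-config="key=prod/terraform.tfstate" \
      -backend-config="region=eu-west-1"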
User variables are available globally within the rest
of the template.
The env function is available only within the default value of a user
variable, allowing you to default a user variable to an environment variable.
As Packer doesn't run
inside a shell, it won't expand ~
To set user variables from the command line, the -var flag is used as a
parameter to packer build (and some other commands).
Variables can also be set from an external JSON file. The -var-file flag
reads a file containing a key/value mapping of variables to values and sets
those variables.
-var-file=
sensitive variables won't get printed to the logs by adding them to the
"sensitive-variables" list within the Packer template
By default, terraform init downloads plugins into a subdirectory of the
working directory so that each working directory is self-contained.
Terraform optionally allows the
use of a local directory as a shared plugin cache, which then allows each
distinct plugin binary to be downloaded only once.
directory must already exist before Terraform will cache plugins;
Terraform will not create the directory itself.
When a plugin cache directory is enabled, the terraform init command will
still access the plugin distribution server to obtain metadata about which
plugins are available, but once a suitable version has been selected it will
first check to see if the selected plugin is already available in the cache
directory.
When possible, Terraform will use hardlinks or symlinks to avoid storing
a separate copy of a cached plugin in multiple directories.
Terraform will never itself delete a plugin from the
plugin cache once it's been placed there.
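The cache directory is set in the CLI configuration file or via an environment variable and, as noted above, has to be created manually; a sketch:

    # create the cache directory first -- Terraform won't do it for you
    mkdir -p "$HOME/.terraform.d/plugin-cache"

    # either in ~/.terraformrc (terraform.rc on Windows):
    #   plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"
    # or as an environment variable:
    export TF_PLUGIN_CACHE_DIR="$HOME/.terraform.d/plugin-cache"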
The configuration file used to define what image we want built and how is called
a template in Packer terminology.
JSON struck the best balance between
human-editable and machine-editable, allowing both hand-made and
machine-generated templates to easily be made.
validate the
template by running packer validate example.json. This command checks the
syntax as well as the configuration values to verify they look valid.
At the end of running packer build, Packer outputs the artifacts that were
created as part of the build.
Packer only builds images. It does not attempt to manage them in any way.
All strings within templates are processed by a common Packer templating
engine, where variables and functions can be used to modify the value of a
configuration parameter at runtime.
Anything template related happens within double-braces: {{ }}.
Functions are specified directly within the braces, such as
{{timestamp}}
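Putting it together, a minimal template sketch that uses both a user variable and the timestamp function (an amazon-ebs builder is assumed here; the source AMI is a placeholder and AWS credentials are expected to come from the environment):

    {
      "variables": {
        "ami_prefix": "packer-example"
      },
      "builders": [
        {
          "type": "amazon-ebs",
          "region": "eu-west-1",
          "source_ami": "ami-xxxxxxxx",
          "instance_type": "t2.micro",
          "ssh_username": "ubuntu",
          "ami_name": "{{user `ami_prefix`}}-{{timestamp}}"
        }
      ]
    }

packer validate example.json checks it, packer build example.json builds it.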
Packer needs to decide on a port to use for VNC when building remotely.
vnc_disable_password - This must be set to "true" when using VNC with
ESXi 6.5 or 6.7
remote_type (string) - The type of remote machine that will be used to
build this VM rather than a local desktop product. The only value accepted
for this currently is esx5. If this is not set, a desktop product will
be used. By default, this is not set.
docker-compose -f docker-compose.yml -f docker-compose-dev.yml up
add the external volume and the mount here
If the folder we mount to has been declared as a VOLUME during the image build, its content will be merged with the named volume we mount from the host
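A sketch of what the dev override file could contain (the service name, paths, and volume name are made up):

    # docker-compose-dev.yml
    version: "3"

    services:
      app:
        volumes:
          # bind mount the local source over the path declared as a VOLUME in the image
          - ./src:/var/www/html
          # externally managed named volume
          - app-data:/var/lib/app-data

    volumes:
      app-data:
        external: true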
Laravel queues provide a unified API across a variety of different queue backends, such as Beanstalk, Amazon SQS, Redis, or even a relational database.
The queue configuration file is stored in config/queue.php
a synchronous driver that will execute jobs immediately (for local use)
A null queue driver is also included which discards queued jobs.
In your config/queue.php configuration file, there is a connections configuration option.
any given queue connection may have multiple "queues" which may be thought of as different stacks or piles of queued jobs.
each connection configuration example in the queue configuration file contains a queue attribute.
if you dispatch a job without explicitly defining which queue it should be dispatched to, the job will be placed on the queue that is defined in the queue attribute of the connection configuration
pushing jobs to multiple queues can be especially useful for applications that wish to prioritize or segment how jobs are processed
specify which queues it should process by priority.
If your Redis queue connection uses a Redis Cluster, your queue names must contain a key hash tag.
ensure all of the Redis keys for a given queue are placed into the same hash slot
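A sketch of the relevant part of config/queue.php for a Redis connection (values are illustrative):

    // config/queue.php
    'connections' => [
        'redis' => [
            'driver' => 'redis',
            'connection' => 'default',
            // use '{default}' (with braces) against a Redis Cluster so every
            // key of the queue lands in the same hash slot
            'queue' => 'default',
            'retry_after' => 90,
        ],
    ],

And a worker that processes the high queue before falling back to default:

    php artisan queue:work --queue=high,default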
all of the queueable jobs for your application are stored in the app/Jobs directory.
Job classes are very simple, normally containing only a handle method which is called when the job is processed by the queue.
we were able to pass an Eloquent model directly into the queued job's constructor. Because of the SerializesModels trait that the job is using, Eloquent models will be gracefully serialized and unserialized when the job is processing.
When the job is actually handled, the queue system will automatically re-retrieve the full model instance from the database.
The handle method is called when the job is processed by the queue
The arguments passed to the dispatch method will be given to the job's constructor
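A minimal job sketch (App\Podcast is a hypothetical Eloquent model):

    <?php

    namespace App\Jobs;

    use App\Podcast;
    use Illuminate\Bus\Queueable;
    use Illuminate\Contracts\Queue\ShouldQueue;
    use Illuminate\Foundation\Bus\Dispatchable;
    use Illuminate\Queue\InteractsWithQueue;
    use Illuminate\Queue\SerializesModels;

    class ProcessPodcast implements ShouldQueue
    {
        use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

        protected $podcast;

        // the argument passed to dispatch() ends up here
        public function __construct(Podcast $podcast)
        {
            $this->podcast = $podcast;
        }

        // called when a worker picks the job up
        public function handle()
        {
            // process the podcast...
        }
    }

Dispatching it:

    ProcessPodcast::dispatch($podcast);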
delay the execution of a queued job, you may use the delay method when dispatching a job.
dispatch a job immediately (synchronously), you may use the dispatchNow method.
When using this method, the job will not be queued and will be run immediately within the current process
specify a list of queued jobs that should be run in sequence.
Deleting jobs using the $this->delete() method will not prevent chained jobs from being processed. The chain will only stop executing if a job in the chain fails.
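Continuing the hypothetical ProcessPodcast example (OptimizePodcast and ReleasePodcast are made-up chained jobs):

    // delay execution by ten minutes
    ProcessPodcast::dispatch($podcast)->delay(now()->addMinutes(10));

    // run immediately in the current process, bypassing the queue
    ProcessPodcast::dispatchNow($podcast);

    // run jobs in sequence; the chain stops only if one of them fails
    ProcessPodcast::withChain([
        new OptimizePodcast($podcast),
        new ReleasePodcast($podcast),
    ])->dispatch($podcast);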
this does not push jobs to different queue "connections" as defined by your queue configuration file, but only to specific queues within a single connection.
To specify the queue, use the onQueue method when dispatching the job
To specify the connection, use the onConnection method when dispatching the job
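For example (still the hypothetical ProcessPodcast job):

    // a specific queue on the default connection
    ProcessPodcast::dispatch($podcast)->onQueue('processing');

    // a specific connection, optionally combined with a queue on that connection
    ProcessPodcast::dispatch($podcast)->onConnection('sqs')->onQueue('processing');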
defining the maximum number of attempts on the job class itself.
As an alternative to defining how many times a job may be attempted before it fails, you may define a time at which the job should timeout.
using the funnel method, you may limit jobs of a given type to only be processed by one worker at a time
using the throttle method, you may throttle a given type of job to only run 10 times every 60 seconds.
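A sketch of those options on a job class (values are arbitrary; the rate-limiting helpers use the Illuminate\Support\Facades\Redis facade and require the redis queue driver):

    // maximum number of attempts
    public $tries = 5;

    // seconds a single attempt may run before timing out
    public $timeout = 120;

    // or: keep retrying until a point in time instead of a fixed number of attempts
    public function retryUntil()
    {
        return now()->addMinutes(10);
    }

    // inside handle(): let at most 10 jobs of this type run every 60 seconds
    // (Redis::funnel('key')->limit(1)->then(...) works the same way for
    // one-worker-at-a-time processing)
    Redis::throttle('process-podcast')->allow(10)->every(60)->then(function () {
        // job logic...
    }, function () {
        // lock not obtained: release the job back onto the queue for 10 seconds
        return $this->release(10);
    });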
If an exception is thrown while the job is being processed, the job will automatically be released back onto the queue so it may be attempted again.
dispatch a Closure. This is great for quick, simple tasks that need to be executed outside of the current request cycle
When dispatching Closures to the queue, the Closure's code is cryptographically signed so it cannot be modified in transit.
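For example:

    // push a Closure onto the queue; $podcast and publish() are hypothetical
    dispatch(function () use ($podcast) {
        $podcast->publish();
    });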
Laravel includes a queue worker that will process new jobs as they are pushed onto the queue.
once the queue:work command has started, it will continue to run until it is manually stopped or you close your terminal
queue workers are long-lived processes and store the booted application state in memory.
they will not notice changes in your code base after they have been started.
during your deployment process, be sure to restart your queue workers.
customize your queue worker even further by only processing particular queues for a given connection
The --once option may be used to instruct the worker to only process a single job from the queue
The --stop-when-empty option may be used to instruct the worker to process all jobs and then exit gracefully.
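A few worker invocations illustrating these options (connection and queue names are examples):

    # process the high queue before default, on the redis connection
    php artisan queue:work redis --queue=high,default --sleep=3 --tries=3 --timeout=60

    # process a single job, then exit
    php artisan queue:work --once

    # drain the queue, then exit gracefully
    php artisan queue:work --stop-when-empty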
Daemon queue workers do not "reboot" the framework before processing each job.
you should free any heavy resources after each job completes.
Since queue workers are long-lived processes, they will not pick up changes to your code without being restarted.
restart the workers during your deployment process.
php artisan queue:restart
The queue uses the cache to store restart signals
Since the queue workers will die when the queue:restart command is executed, you should be running a process manager such as Supervisor to automatically restart the queue workers.
each queue connection defines a retry_after option. This option specifies how many seconds the queue connection should wait before retrying a job that is being processed.
The --timeout option specifies how long the Laravel queue master process will wait before killing off a child queue worker that is processing a job.
When jobs are available on the queue, the worker will keep processing jobs with no delay in between them.
While sleeping, the worker will not process any new jobs - the jobs will be processed after the worker wakes up again
the numprocs directive will instruct Supervisor to run 8 queue:work processes and monitor all of them, automatically restarting them if they fail.
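A Supervisor program sketch along the lines of the Laravel documentation (paths and the connection name are placeholders):

    [program:laravel-worker]
    process_name=%(program_name)s_%(process_num)02d
    command=php /path/to/app/artisan queue:work redis --sleep=3 --tries=3
    autostart=true
    autorestart=true
    numprocs=8
    redirect_stderr=true
    stdout_logfile=/path/to/app/worker.log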
Laravel includes a convenient way to specify the maximum number of times a job should be attempted.
define a failed method directly on your job class, allowing you to perform job specific clean-up when a failure occurs.
a great opportunity to notify your team via email or Slack.
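A sketch of such a failed method on the job class (the class needs a use Exception; import):

    // receives the exception that caused the job to fail
    public function failed(Exception $exception)
    {
        // notify the team by mail or Slack here...
    }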
php artisan queue:retry all
php artisan queue:flush
When injecting an Eloquent model into a job, it is automatically serialized before being placed on the queue and restored when the job is processed