"tag is a command line tool to manipulate tags on Mac OS X 10.9 Mavericks files, and to query for files with those tags. tag can use the file system's built-in metadata search functionality to rapidly find all files that have been tagged with a given set of tags."
{{ ... }} is used for expressions, to print to the template output
use a dot (.) to access attributes of a variable
the outer double-curly braces are not part of the
variable, but the print statement.
If you access variables inside tags don’t
put the braces around them.
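A minimal sketch of both delimiter styles (the user variable here is a made-up context variable):

    {{ user.name }}                {# expression: prints an attribute #}
    {% if user.name %}             {# statement tag: no braces around the variable #}
        Hello {{ user.name }}!
    {% endif %}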
If a variable or attribute does not exist, you will get back an undefined
value.
the default behavior is to evaluate to an empty string if
printed or iterated over, and to fail for every other operation.
The difference between foo.bar and foo['bar'] matters if an object has an item
and an attribute with the same name. Additionally, the attr() filter only looks up attributes.
Variables can be modified by filters. Filters are separated from the
variable by a pipe symbol (|) and may have optional arguments in
parentheses.
Multiple filters can be chained
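For example, chaining filters (name and list are assumed context variables):

    {{ name|striptags|title }}
    {{ list|join(', ') }}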
Tests can be used
to test a variable against a common expression.
To test a variable or expression, you add is plus the name of the test after the variable.
to find out if a variable is defined, you can do name is defined,
which will then return true or false depending on whether name is defined
in the current template context.
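A quick sketch (name is whatever variable your context may provide):

    {% if name is defined %}
        Hello {{ name }}!
    {% endif %}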
strip whitespace in templates by hand. If you add a minus
sign (-) to the start or end of a block (e.g. a For tag), a
comment, or a variable expression, the whitespaces before or after
that block will be removed
You must not add whitespace between the tag and the minus sign.
You can mark a block raw with {% raw %} ... {% endraw %} so the engine does not evaluate it.
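A sketch of both (seq is an assumed iterable):

    <ul>
    {% for item in seq -%}
        <li>{{ item }}</li>
    {%- endfor %}
    </ul>

    {% raw %}
        {{ this }} will not be evaluated by the template engine
    {% endraw %}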
Template inheritance
allows you to build a base “skeleton” template that contains all the common
elements of your site and defines blocks that child templates can override.
The {% extends %} tag is the key here. It tells the template engine that
this template “extends” another template.
access templates in subdirectories with a slash
can’t define multiple {% block %} tags with the same name in the
same template
use the special
self variable and call the block with that name
self.title()
super()
put the name of the block after the end tag for better
readability
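A small inheritance sketch (base.html and the block names are assumptions); it shows extends, super(), self.title() and named end tags:

    {# base.html #}
    <title>{% block title %}Default title{% endblock title %}</title>
    <div id="content">{% block content %}{% endblock content %}</div>

    {# child.html #}
    {% extends "base.html" %}
    {% block title %}Index - {{ super() }}{% endblock title %}
    {% block content %}
        <h1>{{ self.title() }}</h1>
    {% endblock content %}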
By default, blocks may not access variables from outer scopes: if the block were replaced by
a child template, a variable would appear that was not defined in the block or
passed to the context.
setting the block to “scoped” by adding the scoped
modifier to a block declaration
If you have a variable that may
include any of the following chars (>, <, &, or ") you
SHOULD escape it unless the variable contains well-formed and trusted
HTML.
Jinja2 functions (macros, super, self.BLOCKNAME) always return template
data that is marked as safe.
With the default syntax, control structures appear inside
{% ... %} blocks.
the dictsort filter
loop.cycle
Unlike in Python, it’s not possible to break or continue in a loop
use loops recursively
add the recursive modifier
to the loop definition and call the loop variable with the new iterable
where you want to recurse.
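A recursive loop sketch, loosely following the sitemap example from the docs (sitemap, item.href, item.title and item.children are assumptions):

    <ul class="sitemap">
    {%- for item in sitemap recursive %}
        <li><a href="{{ item.href|e }}">{{ item.title }}</a>
        {%- if item.children -%}
            <ul class="submenu">{{ loop(item.children) }}</ul>
        {%- endif %}</li>
    {%- endfor %}
    </ul>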
The loop variable always refers to the closest (innermost) loop.
The loop object can also report whether the value changed at all.
The if statement can be used to test if a variable is defined, not
empty and not false.
Macros are comparable with functions in regular programming languages.
If a macro name starts with an underscore, it’s not exported and can’t
be imported.
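A basic macro, adapted from the docs' input example:

    {% macro input(name, value='', type='text') -%}
        <input type="{{ type }}" name="{{ name }}" value="{{ value|e }}">
    {%- endmacro %}

    <p>{{ input('username') }}</p>
    <p>{{ input('password', type='password') }}</p>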
pass a macro to another macro
caller()
a single trailing newline is stripped if present
other whitespace (spaces, tabs, newlines etc.) is returned unchanged
a block tag works in “both”
directions. That is, a block tag doesn’t just provide a placeholder to fill
- it also defines the content that fills the placeholder in the parent.
Python dicts are not ordered
A call block can also take arguments: {% call(user) dump_users(list_of_user) %} receives
the value the macro passes back with {{ caller(user) }}.
This is a simple dialog rendered by using a macro and
a call block.
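The dialog example from the docs, roughly; caller() renders the body of the {% call %} block inside the macro:

    {% macro render_dialog(title, class='dialog') -%}
        <div class="{{ class }}">
            <h2>{{ title }}</h2>
            <div class="contents">
                {{ caller() }}
            </div>
        </div>
    {%- endmacro %}

    {% call render_dialog('Hello World') %}
        This is a simple dialog rendered by using a macro and a call block.
    {% endcall %}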
Filter sections allow you to apply regular Jinja2 filters on a block of
template data.
Assignments at
top level (outside of blocks, macros or loops) are exported from the template
like top level macros and can be imported by other templates.
using namespace
objects which allow propagating of changes across scopes
use block assignments to
capture the contents of a block into a variable name.
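A sketch of the assignment forms (items and check_something are taken from the docs' namespace example):

    {% set navigation = [('index.html', 'Index')] %}

    {% set ns = namespace(found=false) %}
    {% for item in items %}
        {% if item.check_something() %}{% set ns.found = true %}{% endif %}
    {% endfor %}
    Found: {{ ns.found }}

    {% set content %}
        <li><a href="/">Index</a></li>
    {% endset %}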
The extends tag can be used to extend one template from another.
Blocks are used for inheritance and act as both placeholders and replacements
at the same time.
The include statement is useful to include a template and return the
rendered contents of that file into the current namespace
Included templates have access to the variables of the active context by
default.
putting often used code into macros
imports are cached
and imported templates don’t have access to the current template variables,
just the globals by default.
Macros and variables starting with one or more underscores are private and
cannot be imported.
By default, included templates are passed the current context and imported
templates are not.
imports are often used just as a module that holds macros.
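A sketch, assuming a header.html partial and a forms.html template that defines an input macro:

    {% include 'header.html' %}

    {% import 'forms.html' as forms %}
    {{ forms.input('username') }}

    {% from 'forms.html' import input as input_field %}
    {{ input_field('password', type='password') }}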
Integers and floating point numbers are created by just writing the
number down
Everything between two brackets is a list.
Tuples are like lists that cannot be modified (“immutable”).
A dict in Python is a structure that combines keys and values.
//
Divide two numbers and return the truncated integer result
The special constants true, false, and none are indeed lowercase
all Jinja identifiers are lowercase
(expr)
group an expression.
The is and in operators support negation using an infix notation
in
Perform a sequence / mapping containment test.
|
Applies a filter.
~
Converts all operands into strings and concatenates them.
use inline if expressions.
The attr() filter (foo|attr('bar')) works like foo.bar, except that always an attribute
is returned and items are not looked up.
default(value, default_value=u'', boolean=False)
If the value is undefined it will return the passed default value,
otherwise the value of the variable.
dictsort(value, case_sensitive=False, by='key', reverse=False)
Sort a dict and yield (key, value) pairs.
format(value, *args, **kwargs)
Apply Python string formatting on an object.
groupby(value, attribute)
Group a sequence of objects by a common attribute. The item they were grouped
by is stored in the grouper attribute and the list contains all the objects
that have this grouper in common.
indent(s, width=4, first=False, blank=False, indentfirst=None)
Return a copy of the string with each line indented by 4 spaces. The
first line and blank lines are not indented by default.
join(value, d=u'', attribute=None)
Return a string which is the concatenation of the strings in the
sequence.
map()
Applies a filter on a sequence of objects or looks up an attribute.
pprint(value, verbose=False)
Pretty print a variable. Useful for debugging.
reject()
Filters a sequence of objects by applying a test to each object,
and rejecting the objects with the test succeeding.
replace(s, old, new, count=None)
Return a copy of the value with all occurrences of a substring
replaced with a new one.
round(value, precision=0, method='common')
Round the number to a given precision;
even if rounded to 0 precision, a float is returned.
select()
Filters a sequence of objects by applying a test to each object,
and only selecting the objects with the test succeeding.
sort(value, reverse=False, case_sensitive=False, attribute=None)
Sort an iterable. Per default it sorts ascending, if you pass it
true as first argument it will reverse the sorting.
striptags(value)
Strip SGML/XML tags and replace adjacent whitespace by one space.
tojson(value, indent=None)
Dumps a structure to JSON so that it’s safe to use in <script>
tags.
trim(value)
Strip leading and trailing whitespace.
unique(value, case_sensitive=False, attribute=None)
Returns a list of unique items from the given iterable.
urlize(value, trim_url_limit=None, nofollow=False, target=None, rel=None)
Converts URLs in plain text into clickable links.
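A few of these filters in use, as a sketch (my_variable, my_dict and users are assumed context variables):

    {{ my_variable|default('my_variable is not defined') }}
    {% for key, value in my_dict|dictsort %}{{ key }}: {{ value }} {% endfor %}
    {{ users|join(', ', attribute='username') }}
    {% for group in users|groupby('city') %}
        {{ group.grouper }}: {{ group.list|join(', ', attribute='username') }}
    {% endfor %}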
defined(value)
Return true if the variable is defined.
in(value, seq)
Check if value is in seq.
mapping(value)
Return true if the object is a mapping (dict etc.).
number(value)
Return true if the variable is a number.
sameas(value, other)
Check if an object points to the same memory address as another
object.
undefined(value)
Like defined() but the other way round.
A joiner is
passed a string and will return that string every time it’s called, except
the first time (in which case it returns an empty string).
namespace(...)
Creates a new container that allows attribute assignment using the
{% set %} tag
The with statement makes it possible to create a new inner scope.
Variables set within this scope are not visible outside of the scope.
activate and deactivate the autoescaping from within
the templates
With both trim_blocks and lstrip_blocks enabled, you can put block tags
on their own lines, and the entire block line will be removed when
rendered, preserving the whitespace of the contents
"TMSU is a tool for tagging your files. It provides a simple command-line tool for applying tags and a virtual filesystem so that you can get a tag-based view of your files from within any other program.
TMSU does not alter your files in any way: they remain unchanged on disk, or on the network, wherever you put them. TMSU maintains its own database and you simply gain an additional view, which you can mount, based upon the tags you set up. The only commitment required is your time and there's absolutely no lock-in."
A chart is a collection of files
that describe a related set of Kubernetes resources.
A single chart
might be used to deploy something simple, like a memcached pod, or
something complex, like a full web app stack with HTTP servers,
databases, caches, and so on.
Charts are created as files laid out in a particular directory tree,
then they can be packaged into versioned archives to be deployed.
A chart is organized as a collection of files inside of a directory.
values.yaml # The default configuration values for this chart
charts/ # A directory containing any charts upon which this chart depends.
templates/ # A directory of templates that, when combined with values,
# will generate valid Kubernetes manifest files.
version: A SemVer 2 version (required)
apiVersion: The chart API version, always "v1" (required)
Every chart must have a version number. A version must follow the
SemVer 2 standard.
non-SemVer names are explicitly
disallowed by the system.
When generating a
package, the helm package command will use the version that it finds
in the Chart.yaml as a token in the package name.
the appVersion field is not related to the version field. It is
a way of specifying the version of the application.
appVersion: The version of the app that this contains (optional). This needn't be SemVer.
If the latest version of a chart in the
repository is marked as deprecated, then the chart as a whole is considered to
be deprecated.
deprecated: Whether this chart is deprecated (optional, boolean)
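A minimal Chart.yaml sketch along these lines (the name, versions and description are made up):

    apiVersion: v1
    name: mychart
    version: 1.2.3
    appVersion: "2.0.1"
    description: An example chart
    deprecated: false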
one chart may depend on any number of other charts.
dependencies can be dynamically linked through the requirements.yaml
file or brought in to the charts/ directory and managed manually.
the preferred method of declaring dependencies is by using a
requirements.yaml file inside of your chart.
A requirements.yaml file is a simple file for listing your
dependencies.
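A requirements.yaml sketch (chart names, versions and repository URLs are placeholders):

    dependencies:
      - name: apache
        version: 1.2.3
        repository: http://example.com/charts
      - name: mysql
        version: 3.2.1
        repository: http://another.example.com/charts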
The repository field is the full URL to the chart repository.
you must also use helm repo add to add that repo locally.
helm dependency update
and it will use your dependency file to download all the specified
charts into your charts/ directory for you.
When helm dependency update retrieves charts, it will store them as
chart archives in the charts/ directory.
Managing charts with requirements.yaml is a good way to easily keep
charts updated, and also share requirements information throughout a
team.
All charts are loaded by default.
The condition field holds one or more YAML paths (delimited by commas).
If this path exists in the top parent’s values and resolves to a boolean value,
the chart will be enabled or disabled based on that boolean value.
The tags field is a YAML list of labels to associate with this chart.
all charts with tags can be enabled or disabled by
specifying the tag and a boolean value.
The --set parameter can be used as usual to alter tag and condition values.
Conditions (when set in values) always override tags.
The first condition path that exists wins and subsequent ones for that chart are ignored.
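A sketch of condition and tags in a parent chart (subchart1, the tag name and the values are hypothetical):

    # parentchart/requirements.yaml
    dependencies:
      - name: subchart1
        repository: https://example.com/charts
        version: 0.1.0
        condition: subchart1.enabled
        tags:
          - front-end

    # parentchart/values.yaml
    subchart1:
      enabled: true
    tags:
      front-end: false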
The keys containing the values to be imported can be specified in the parent chart’s requirements.yaml file
using a YAML list. Each item in the list is a key which is imported from the child chart’s exports field.
specifying the key data in our import list, Helm looks in the exports field of the child
chart for data key and imports its contents.
the parent key data is not contained in the parent’s final values. If you need to specify the
parent key, use the ‘child-parent’ format.
To access values that are not contained in the exports key of the child chart’s values, you will need to
specify the source key of the values to be imported (child) and the destination path in the parent chart’s
values (parent).
To drop a dependency into your charts/ directory, use the
helm fetch command
A dependency can be either a chart archive (foo-1.2.3.tgz) or an
unpacked chart directory.
name cannot start with _ or ..
Such files are ignored by the chart loader.
a single release is created with all the objects for the chart and its dependencies.
Helm Chart templates are written in the
Go template language, with the
addition of 50 or so add-on template
functions from the Sprig library and a
few other specialized functions
When
Helm renders the charts, it will pass every file in that directory
through the template engine.
Chart developers may supply a file called values.yaml inside of a
chart. This file can contain default values.
Chart users may supply a YAML file that contains values. This can be
provided on the command line with helm install.
When a user supplies custom values, these values will override the
values in the chart’s values.yaml file.
Template files follow the standard conventions for writing Go templates
{{default "minio" .Values.storage}}
Values that are supplied via a values.yaml file (or via the --set
flag) are accessible from the .Values object in a template.
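A sketch of the flow with made-up keys, echoing the default-function example above:

    # values.yaml
    storage: "s3"
    imageRegistry: "quay.io/example"

    # templates/deployment.yaml (fragment)
    image: {{ .Values.imageRegistry }}/myapp:latest
    storage: {{ default "minio" .Values.storage }}

A user could then override these with helm install -f myvalues.yaml or --set storage=gcs (the chart path and keys here are assumptions).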
pre-defined, are available to every template, and
cannot be overridden
the names are case
sensitive
Release.Name: The name of the release (not the chart)
Release.IsUpgrade: This is set to true if the current operation is an upgrade or rollback.
Release.Revision: The revision number. It begins at 1, and increments with
each helm upgrade
Chart: The contents of the Chart.yaml
Files: A map-like object containing all non-special files in the chart.
Files can be
accessed using {{index .Files "file.name"}} or using the {{.Files.Get name}} or
{{.Files.GetString name}} functions.
.helmignore
access the contents of the file
as []byte using {{.Files.GetBytes}}
Any unknown Chart.yaml fields will be dropped
Chart.yaml cannot be
used to pass arbitrarily structured data into the template.
A values file is formatted in YAML.
A chart may include a default
values.yaml file
be merged into the default
values file.
The default values file included inside of a chart must be named
values.yaml
accessible inside of templates using the
.Values object
Values files can declare values for the top-level chart, as well as for
any of the charts that are included in that chart’s charts/ directory.
Charts at a higher level have access to all of the variables defined
beneath.
lower level charts cannot access things in
parent charts
Values are namespaced, but namespaces are pruned.
the scope of the values has been reduced and the
namespace prefix removed
Helm supports special “global” value.
a way of sharing one top-level variable with all
subcharts, which is useful for things like setting metadata properties
like labels.
If a subchart declares a global variable, that global will be passed
downward (to the subchart’s subcharts), but not upward to the parent
chart.
global variables of parent charts take precedence over the global variables from subcharts.
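A sketch of a global value shared with subcharts (the app value follows the docs' example):

    # parent chart values.yaml
    global:
      app: MyWordPress

    # any chart or subchart template can then use:
    #   {{ .Values.global.app }}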
helm lint
A chart repository is an HTTP server that houses one or more packaged
charts
Any HTTP server that can serve YAML files and tar files and can answer
GET requests can be used as a repository server.
Helm does not provide tools for uploading charts to
remote repository servers.
the only way to add a chart to $HELM_HOME/starters is to manually
copy it there.
Helm provides a hook mechanism to allow chart developers to intervene
at certain points in a release’s life cycle.
Execute a Job to back up a database before installing a new chart,
and then execute a second job after the upgrade in order to restore
data.
Hooks are declared as an annotation in the metadata section of a manifest
Hooks work like regular templates, but they have special annotations
pre-install
post-install: Executes after all resources are loaded into Kubernetes
pre-delete
post-delete: Executes on a deletion request after all of the release’s
resources have been deleted.
pre-upgrade
post-upgrade
pre-rollback
post-rollback: Executes on a rollback request after all resources
have been modified.
crd-install
test-success: Executes when running helm test and expects the pod to
return successfully (return code == 0).
test-failure: Executes when running helm test and expects the pod to
fail (return code != 0).
Hooks allow you, the chart developer, an opportunity to perform
operations at strategic points in a release lifecycle
Tiller then loads the hook with the lowest weight first (negative to positive)
Tiller returns the release name (and other data) to the client
If the resource is a Job kind, Tiller
will wait until the job successfully runs to completion.
if the job
fails, the release will fail. This is a blocking operation, so the
Helm client will pause while the Job is run.
If they
have hook weights (see below), they are executed in weighted order. Otherwise,
ordering is not guaranteed.
good practice to add a hook weight, and set it
to 0 if weight is not important.
The resources that a hook creates are not tracked or managed as part of the
release.
leave the hook resource alone.
To destroy such
resources, you need to either write code to perform this operation in a pre-delete
or post-delete hook or add "helm.sh/hook-delete-policy" annotation to the hook template file.
Hooks are just Kubernetes manifest files with special annotations in the
metadata section
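A hook sketch close to the Helm docs' post-install Job, also showing a weight and a deletion policy (the image and command are placeholders):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: "{{ .Release.Name }}-post-install-job"
      annotations:
        "helm.sh/hook": post-install
        "helm.sh/hook-weight": "-5"
        "helm.sh/hook-delete-policy": hook-succeeded
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: post-install-job
              image: "alpine:3.9"
              command: ["/bin/sleep", "10"]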
One resource can implement multiple hooks
no limit to the number of different resources that
may implement a given hook.
When subcharts declare hooks, those are also evaluated. There is no way
for a top-level chart to disable the hooks declared by subcharts.
Hook weights can be positive or negative numbers but must be represented as
strings.
sort those hooks in ascending order.
Hook deletion policies
"before-hook-creation" specifies Tiller should delete the previous hook before the new hook is launched.
By default Tiller will wait for 60 seconds for a deleted hook to no longer exist in the API server before timing out.
Custom Resource Definitions (CRDs) are a special kind in Kubernetes.
The crd-install hook is executed very early during an installation, before
the rest of the manifests are verified.
A common reason why the hook resource might already exist is that it was not deleted following use on a previous install/upgrade.
Helm uses Go templates for templating
your resource files.
two special template functions: include and required
include
function allows you to bring in another template, and then pass the results to other
template functions.
The required function allows you to declare a particular
values entry as required for template rendering.
If the value is empty, the template
rendering will fail with a user submitted error message.
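Sketches of both, following the docs' pattern (mytpl and .Values.foo are placeholders):

    value: {{ include "mytpl" . | lower | quote }}
    value: {{ required "A valid foo is required!" .Values.foo }}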
When you are working with string data, you are always safer quoting the
strings than leaving them as bare words
Quote Strings, Don’t Quote Integers
when working with integers do not quote the values
env variables values which are expected to be string
to include a template, and then perform an operation
on that template’s output, Helm has a special include function
The above includes a template called toYaml, passes it $value, and
then passes the output of that template to the nindent function.
Go provides a way for setting template options to control behavior
when a map is indexed with a key that’s not present in the map
The required function gives developers the ability to declare a value entry
as required for template rendering.
The tpl function allows developers to evaluate strings as templates inside a template.
Rendering an external configuration file
(.Files.Get "conf/app.conf")
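A tpl sketch following the docs' pattern:

    # values.yaml
    template: "{{ .Values.name }}"
    name: "Tom"

    # in a template file
    {{ tpl .Values.template . }}
    {{ tpl (.Files.Get "conf/app.conf") . }}

The first line renders Tom; the second evaluates the chart file conf/app.conf as a template.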
Image pull secrets are essentially a combination of registry, username, and password.
Automatically Roll Deployments When ConfigMaps or Secrets change
configmaps or secrets are injected as configuration
files in containers
a restart may be required should those
be updated with a subsequent helm upgrade
The sha256sum function can be used to ensure a deployment’s
annotation section is updated if another file changes
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
helm upgrade --recreate-pods
"helm.sh/resource-policy": keep
resources that should not be deleted when Helm runs a
helm delete
this resource becomes
orphaned. Helm will no longer manage it in any way.
create some reusable parts in your chart
In the templates/ directory, any file that begins with an
underscore(_) is not expected to output a Kubernetes manifest file.
by convention, helper templates and partials are placed in a
_helpers.tpl file.
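A sketch of a named template in _helpers.tpl and how a manifest might include it (mychart.labels is an invented name):

    {{- define "mychart.labels" -}}
    app.kubernetes.io/name: {{ .Chart.Name }}
    helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    {{- end -}}

    # in templates/deployment.yaml
    metadata:
      labels:
        {{- include "mychart.labels" . | nindent 4 }}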
The current best practice for composing a complex application from discrete parts
is to create a top-level umbrella chart that
exposes the global configurations, and then use the charts/ subdirectory to
embed each of the components.
SAP’s Converged charts: These charts
install SAP Converged Cloud, a full OpenStack IaaS, on Kubernetes. All of the charts are collected
together in one GitHub repository, except for a few submodules.
Deis’s Workflow:
This chart exposes the entire Deis PaaS system with one chart. But it’s different
from the SAP chart in that this umbrella chart is built from each component, and
each component is tracked in a different Git repository.
YAML is a superset of JSON
any valid JSON structure ought to be valid in YAML.
As a best practice, templates should follow a YAML-like syntax unless
the JSON syntax substantially reduces the risk of a formatting issue.
There are functions in Helm that allow you to generate random data,
cryptographic keys, and so on.
a chart repository is a location where packaged charts can be
stored and shared.
A chart repository is an HTTP server that houses an index.yaml file and
optionally some packaged charts.
Because a chart repository can be any HTTP server that can serve YAML and tar
files and can answer GET requests, you have a plethora of options when it comes
down to hosting your own chart repository.
It is not required that a chart package be located on the same server as the
index.yaml file.
A valid chart repository must have an index file. The
index file contains information about each chart in the chart repository.
The Helm project provides an open-source Helm repository server called ChartMuseum that you can host yourself.
$ helm repo index fantastic-charts --url https://fantastic-charts.storage.googleapis.com
A repository will not be added if it does not contain a valid
index.yaml
add the repository to their helm client via the helm
repo add [NAME] [URL] command with any name they would like to use to
reference the repository.
Helm has provenance tools which help chart users verify the integrity and origin
of a package.
Integrity is established by comparing a chart to a provenance record
The provenance file contains a chart’s YAML file plus several pieces of
verification information
Chart repositories serve as a centralized collection of Helm charts.
Chart repositories must make it possible to serve provenance files over HTTP via
a specific request, and must make them available at the same URI path as the chart.
We don’t want to be “the certificate authority” for all chart
signers. Instead, we strongly favor a decentralized model, which is part
of the reason we chose OpenPGP as our foundational technology.
The Keybase platform provides a public
centralized repository for trust information.
A chart contains a number of Kubernetes resources and components that work together.
A test in a helm chart lives under the templates/ directory and is a pod definition that specifies a container with a given command to run.
The pod definition must contain one of the helm test hook annotations: helm.sh/hook: test-success or helm.sh/hook: test-failure
helm test
nest your test suite under a tests/ directory like <chart-name>/templates/tests/
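A test pod sketch, similar to what helm create generates (the service name and port are assumptions):

    # templates/tests/test-connection.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: "{{ .Release.Name }}-test-connection"
      annotations:
        "helm.sh/hook": test-success
    spec:
      containers:
        - name: wget
          image: busybox
          command: ['wget']
          args: ['{{ .Release.Name }}-service:80']
      restartPolicy: Never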
Refer to the YAML Anchors/Aliases documentation for information about how to alias and reuse syntax to keep your .circleci/config.yml file small.
workflow orchestration with two parallel jobs
jobs run according to configured requirements, each job waiting to start until the required job finishes successfully
requires: key
fans out to run a set of acceptance test jobs in parallel, and finally fans in to run a common deploy job.
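A fan-out/fan-in sketch using requires (the job names are placeholders):

    workflows:
      version: 2
      build_accept_deploy:
        jobs:
          - build
          - acceptance_test_1:
              requires:
                - build
          - acceptance_test_2:
              requires:
                - build
          - deploy:
              requires:
                - acceptance_test_1
                - acceptance_test_2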
Holding a Workflow for a Manual Approval
Workflows can be configured to wait for manual approval of a job before
continuing to the next job
add a job to the jobs list with the
key type: approval
approval is a special job type that is only available to jobs under the workflow key
The name of the job to hold is arbitrary - it could be wait or pause, for example,
as long as the job has a type: approval key in it.
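A sketch with a hold job (hold and the other job names are arbitrary):

    workflows:
      version: 2
      build-test-hold-deploy:
        jobs:
          - build
          - test:
              requires:
                - build
          - hold:
              type: approval
              requires:
                - test
          - deploy:
              requires:
                - hold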
schedule a workflow
to run at a certain time for specific branches.
The triggers key is only added under your workflows key
using cron syntax to represent Coordinated Universal Time (UTC) for specified branches.
By default,
a workflow is triggered on every git push
the commit workflow has no triggers key
and will run on every git push
The nightly workflow has a triggers key
and will run on the specified schedule
Cron step syntax (for example, */1, */20) is not supported.
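A sketch of both workflows (the cron expression and branch name are examples):

    workflows:
      version: 2
      commit:
        jobs:
          - build
      nightly:
        triggers:
          - schedule:
              cron: "0 0 * * *"
              filters:
                branches:
                  only:
                    - master
        jobs:
          - build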
use a context to share environment variables
use the same shared environment variables when initiated by a user who is part of the organization.
CircleCI does not run workflows for tags
unless you explicitly specify tag filters.
CircleCI branch and tag filters support
the Java variant of regex pattern matching.
Each workflow has an associated workspace which can be used to transfer files to downstream jobs as the workflow progresses.
The workspace is an additive-only store of data.
Jobs can persist data to the workspace
Downstream jobs can attach the workspace to their container filesystem.
Attaching the workspace downloads and unpacks each layer based on the ordering of the upstream jobs in the workflow graph.
Workflows that include jobs running on multiple branches may require data to be shared using workspaces
To persist data from a job and make it available to other jobs, configure the job to use the persist_to_workspace key.
Files and directories named in the paths: property of persist_to_workspace will be uploaded to the workflow’s temporary workspace relative to the directory specified with the root key.
Configure a job to get saved data by configuring the attach_workspace key.
persist_to_workspace
attach_workspace
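A sketch of persisting and attaching a workspace (image, paths and commands are placeholders):

    jobs:
      build:
        docker:
          - image: cimg/base:stable
        steps:
          - run: mkdir -p workspace && echo "built" > workspace/echo-output
          - persist_to_workspace:
              root: workspace
              paths:
                - echo-output
      deploy:
        docker:
          - image: cimg/base:stable
        steps:
          - attach_workspace:
              at: /tmp/workspace
          - run: cat /tmp/workspace/echo-output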
To rerun only a workflow’s failed jobs, click the Workflows icon in the app and select a workflow to see the status of each job, then click the Rerun button and select Rerun from failed.
if you do not see your workflows triggering, a configuration error is preventing the workflow from starting.
check your Workflows page of the CircleCI app (not the Job page)
If your image is available on a private registry which requires login, use the
--with-registry-auth flag
When you update a service, Docker stops its
containers and restarts them with the new configuration.
When updating an existing service, the flag
is --publish-add. There is also a --publish-rm flag to remove a port that
was previously published.
To update the command an existing service runs, you can use the --args flag.
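Sketches of these flags against a hypothetical my_web service:

    docker service update --publish-add published=8080,target=80 my_web
    docker service update --args "ping docker.com" my_web
    docker service update --image nginx:1.14.0 --with-registry-auth my_web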
force the service to use a specific version of the image
If the manager can’t resolve the tag to a digest, each worker
node is responsible for resolving the tag to a digest, and different nodes may
use different versions of the image.
After you create a service, its image is never updated unless you explicitly run
docker service update with the --image flag as described below.
When you run service update with the --image flag, the swarm manager queries
Docker Hub or your private Docker registry for the digest the tag currently
points to and updates the service tasks to use that digest.
You can publish a service task’s port directly on the swarm node
where that service is running.
You can rely on the routing mesh.
When you publish a service port, the swarm makes the service accessible at the
target port on every node, regardless of whether there is a task for the
service running on that node or not.
To publish a service’s ports externally to the swarm, use the
--publish <PUBLISHED-PORT>:<SERVICE-PORT> flag.
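For example, via the routing mesh (my_web is a placeholder name):

    docker service create --name my_web --replicas 3 --publish published=8080,target=80 nginx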
But if you maintain a CHANGELOG in this format, and/or your Git tags are also your Docker tags, you can get the previous version and use that image version as a cache.
"Docker layer caching" is enough to optimize the build time.
Cache in CI/CD is about saving directories or files across pipelines.
We're building a Docker image, so dependencies are installed inside a container. We can't cache a dependencies directory if it doesn't exist in the job workspace.
Dependencies will always be installed from a container but will be extracted by the GitLab Runner into the job workspace. Our goal is to send the cached version in the build context.
We set the directories to cache in the job settings with a key to share the cache per branch and stage.
This avoids old dependencies being mixed with the new ones, at the risk of keeping unused dependencies in the cache, which would make the cache and images heavier.
If you need to cache directories in testing jobs, it's easier: use volumes!
Version your cache keys!
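A sketch of those cache settings, assuming a vendor/ dependencies directory (the -v1 suffix is the versioned part of the key):

    # fragment of .gitlab-ci.yml
    build:
      stage: build
      cache:
        key: "$CI_COMMIT_REF_SLUG-$CI_JOB_STAGE-v1"
        paths:
          - vendor/
      script:
        - docker build -t $DOCKER_CI_IMAGE .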
sharing Docker image between jobs
In every job, we automatically get artifacts from previous stages.
docker save $DOCKER_CI_IMAGE | gzip > app.tar.gz
I personally use the "push / pull" technique:
we docker push after the build, then we docker pull if needed in the next jobs.
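The "push / pull" approach as a sketch, reusing the $DOCKER_CI_IMAGE variable from above (registry login and the test command are assumptions):

    build:
      stage: build
      script:
        - docker build -t $DOCKER_CI_IMAGE .
        - docker push $DOCKER_CI_IMAGE

    test:
      stage: test
      script:
        - docker pull $DOCKER_CI_IMAGE
        - docker run --rm $DOCKER_CI_IMAGE make test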
You only need the single Dockerfile. You don’t need a separate build script.
You don’t need to create any intermediate
images and you don’t need to extract any artifacts to your local system at all.
Debugging a specific build stage
You can use the COPY --from instruction to
copy from a separate image, either using the local image name, a tag available
locally or on a Docker registry, or a tag ID.
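A multi-stage sketch using Go (paths, versions and names are placeholders):

    # build stage
    FROM golang:1.16 AS builder
    WORKDIR /src
    COPY . .
    RUN go build -o /bin/app .

    # final stage copies only the built artifact
    FROM alpine:3.13
    COPY --from=builder /bin/app /bin/app
    ENTRYPOINT ["/bin/app"]

Running docker build --target builder -t app:builder . stops at the first stage, which is handy for debugging it.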
Laravel queues provide a unified API across a variety of different queue backends, such as Beanstalk, Amazon SQS, Redis, or even a relational database.
The queue configuration file is stored in config/queue.php
a synchronous driver that will execute jobs immediately (for local use)
A null queue driver is also included which discards queued jobs.
In your config/queue.php configuration file, there is a connections configuration option.
any given queue connection may have multiple "queues" which may be thought of as different stacks or piles of queued jobs.
each connection configuration example in the queue configuration file contains a queue attribute.
if you dispatch a job without explicitly defining which queue it should be dispatched to, the job will be placed on the queue that is defined in the queue attribute of the connection configuration
pushing jobs to multiple queues can be especially useful for applications that wish to prioritize or segment how jobs are processed
specify which queues it should process by priority.
If your Redis queue connection uses a Redis Cluster, your queue names must contain a key hash tag.
ensure all of the Redis keys for a given queue are placed into the same hash slot
all of the queueable jobs for your application are stored in the app/Jobs directory.
Job classes are very simple, normally containing only a handle method which is called when the job is processed by the queue.
we were able to pass an Eloquent model directly into the queued job's constructor. Because of the SerializesModels trait that the job is using, Eloquent models will be gracefully serialized and unserialized when the job is processing.
When the job is actually handled, the queue system will automatically re-retrieve the full model instance from the database.
The handle method is called when the job is processed by the queue
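A job class sketch along the lines of the docs' ProcessPodcast example (the Podcast model is an assumption):

    <?php

    namespace App\Jobs;

    use App\Podcast;
    use Illuminate\Bus\Queueable;
    use Illuminate\Contracts\Queue\ShouldQueue;
    use Illuminate\Foundation\Bus\Dispatchable;
    use Illuminate\Queue\InteractsWithQueue;
    use Illuminate\Queue\SerializesModels;

    class ProcessPodcast implements ShouldQueue
    {
        use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

        protected $podcast;

        public function __construct(Podcast $podcast)
        {
            // SerializesModels stores only the model identifier on the queue.
            $this->podcast = $podcast;
        }

        // Called when the job is processed by the queue worker.
        public function handle()
        {
            // process the uploaded podcast...
        }
    }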
The arguments passed to the dispatch method will be given to the job's constructor
delay the execution of a queued job, you may use the delay method when dispatching a job.
dispatch a job immediately (synchronously), you may use the dispatchNow method.
When using this method, the job will not be queued and will be run immediately within the current process
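Dispatch sketches (ProcessPodcast as above; the queue and connection names are placeholders):

    ProcessPodcast::dispatch($podcast);
    ProcessPodcast::dispatch($podcast)->delay(now()->addMinutes(10));
    ProcessPodcast::dispatchNow($podcast);
    ProcessPodcast::dispatch($podcast)->onQueue('processing')->onConnection('sqs');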
specify a list of queued jobs that should be run in sequence.
Deleting jobs using the $this->delete() method will not prevent chained jobs from being processed. The chain will only stop executing if a job in the chain fails.
this does not push jobs to different queue "connections" as defined by your queue configuration file, but only to specific queues within a single connection.
To specify the queue, use the onQueue method when dispatching the job
To specify the connection, use the onConnection method when dispatching the job
defining the maximum number of attempts on the job class itself.
As an alternative to defining how many times a job may be attempted before it fails, you may define a time at which the job should timeout.
using the funnel method, you may limit jobs of a given type to only be processed by one worker at a time
using the throttle method, you may throttle a given type of job to only run 10 times every 60 seconds.
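Sketches of both, following the docs' Redis rate-limiting pattern inside a job's handle method (the lock keys are placeholders):

    Redis::funnel('key')->limit(1)->then(function () {
        // only one worker processes this type of job at a time...
    }, function () {
        // could not obtain lock; release the job back onto the queue
        return $this->release(10);
    });

    Redis::throttle('key')->allow(10)->every(60)->then(function () {
        // job logic...
    }, function () {
        return $this->release(10);
    });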
If an exception is thrown while the job is being processed, the job will automatically be released back onto the queue so it may be attempted again.
dispatch a Closure. This is great for quick, simple tasks that need to be executed outside of the current request cycle
When dispatching Closures to the queue, the Closure's code contents is cryptographically signed so it can not be modified in transit.
Laravel includes a queue worker that will process new jobs as they are pushed onto the queue.
once the queue:work command has started, it will continue to run until it is manually stopped or you close your terminal
queue workers are long-lived processes and store the booted application state in memory.
they will not notice changes in your code base after they have been started.
during your deployment process, be sure to restart your queue workers.
customize your queue worker even further by only processing particular queues for a given connection
The --once option may be used to instruct the worker to only process a single job from the queue
The --stop-when-empty option may be used to instruct the worker to process all jobs and then exit gracefully.
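For example (the connection and queue names are placeholders):

    php artisan queue:work redis --queue=high,default
    php artisan queue:work --once
    php artisan queue:work --stop-when-empty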
Daemon queue workers do not "reboot" the framework before processing each job.
you should free any heavy resources after each job completes.
Since queue workers are long-lived processes, they will not pick up changes to your code without being restarted.
restart the workers during your deployment process.
php artisan queue:restart
The queue uses the cache to store restart signals
Because the queue workers will die when the queue:restart command is executed, you should be running a process manager such as Supervisor to automatically restart the queue workers.
each queue connection defines a retry_after option. This option specifies how many seconds the queue connection should wait before retrying a job that is being processed.
The --timeout option specifies how long the Laravel queue master process will wait before killing off a child queue worker that is processing a job.
When jobs are available on the queue, the worker will keep processing jobs with no delay in between them.
While sleeping, the worker will not process any new jobs - the jobs will be processed after the worker wakes up again
the numprocs directive will instruct Supervisor to run 8 queue:work processes and monitor all of them, automatically restarting them if they fail.
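A Supervisor program sketch close to the docs (the paths are placeholders):

    [program:laravel-worker]
    process_name=%(program_name)s_%(process_num)02d
    command=php /home/forge/app.com/artisan queue:work sqs --sleep=3 --tries=3
    autostart=true
    autorestart=true
    numprocs=8
    redirect_stderr=true
    stdout_logfile=/home/forge/app.com/worker.log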
Laravel includes a convenient way to specify the maximum number of times a job should be attempted.
define a failed method directly on your job class, allowing you to perform job specific clean-up when a failure occurs.
a great opportunity to notify your team via email or Slack.
php artisan queue:retry all
php artisan queue:flush
When injecting an Eloquent model into a job, it is automatically serialized before being placed on the queue and restored when the job is processed
Edge router: A router that enforces the firewall policy for your cluster.
Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model.
Service: A Kubernetes Service that identifies a set of Pods using label selectors.
Services are assumed to have virtual IPs only routable within the cluster network.
Ingress exposes HTTP and HTTPS routes from outside the cluster to
services within the cluster.
Traffic routing is controlled by rules defined on the Ingress resource.
An Ingress can be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name based virtual hosting.
Exposing services other than HTTP and HTTPS to the internet typically
uses a service of type Service.Type=NodePort or
Service.Type=LoadBalancer.
You must have an ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
As with all other Kubernetes resources, an Ingress needs apiVersion, kind, and metadata fields
Ingress frequently uses annotations to configure some options depending on the Ingress controller,
Ingress resource only supports rules
for directing HTTP traffic.
An optional host.
A list of paths
A backend is a combination of Service and port names
has an associated backend
Both the host and path must match the content of an incoming request before the
load balancer directs traffic to the referenced Service.
HTTP (and HTTPS) requests to the
Ingress that matches the host and path of the rule are sent to the listed backend.
A default backend is often configured in an Ingress controller to service any requests that do not
match a path in the spec.
An Ingress with no rules sends all traffic to a single default backend.
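A minimal Ingress sketch, close to the Kubernetes docs example (names and path are placeholders; the rewrite annotation assumes an NGINX ingress controller):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: minimal-ingress
      annotations:
        nginx.ingress.kubernetes.io/rewrite-target: /
    spec:
      rules:
        - http:
            paths:
              - path: /testpath
                pathType: Prefix
                backend:
                  service:
                    name: test
                    port:
                      number: 80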
Ingress controllers and load balancers may take a minute or two to allocate an IP address.
A fanout configuration routes traffic from a single IP address to more than one Service,
based on the HTTP URI being requested.
nginx.ingress.kubernetes.io/rewrite-target: /
describe ingress
get ingress
Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address.
route requests based on
the Host header.
If you create an Ingress resource without any hosts defined in the rules, then any
web traffic to the IP address of your Ingress controller can be matched without a name based
virtual host being required.
secure an Ingress by specifying a Secret
that contains a TLS private key and certificate.
Currently the Ingress only
supports a single TLS port, 443, and assumes TLS termination.
An Ingress controller is bootstrapped with some load balancing policy settings
that it applies to all Ingress, such as the load balancing algorithm, backend
weight scheme, and others.
persistent sessions, dynamic weights) are not yet exposed through the
Ingress. You can instead get these features through the load balancer used for
a Service.
review the controller
specific documentation to see how they handle health checks
edit ingress
After you save your changes, kubectl updates the resource in the API server, which tells the
Ingress controller to reconfigure the load balancer.
kubectl replace -f on a modified Ingress YAML file.
Node: A worker machine in Kubernetes, part of a cluster.
in most common Kubernetes deployments, nodes in the cluster are not part of the public internet.
Edge router: A router that enforces the firewall policy for your cluster.
a gateway managed by a cloud provider or a physical piece of hardware.
Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model.
Service: A Kubernetes Service that identifies a set of Pods using label selectors.
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
An Ingress does not expose arbitrary ports or protocols.
You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
The name of an Ingress object must be a valid
DNS subdomain name
The Ingress spec
has all the information needed to configure a load balancer or proxy server.
Ingress resource only supports rules
for directing HTTP(S) traffic.
An Ingress with no rules sends all traffic to a single default backend and .spec.defaultBackend
is the backend that should handle requests in that case.
If defaultBackend is not set, the handling of requests that do not match any of the rules will be up to the
ingress controller
A common
usage for a Resource backend is to ingress data to an object storage backend
with static assets.
Exact: Matches the URL path exactly and with case sensitivity.
Prefix: Matches based on a URL path prefix split by /. Matching is case
sensitive and done on a path element by element basis.
multiple paths within an Ingress will match a request. In those
cases precedence will be given first to the longest matching path.
Hosts can be precise matches (for example “foo.bar.com”) or a wildcard (for
example “*.foo.com”).
No match, wildcard only covers a single DNS label
Each Ingress should specify a class, a reference to an
IngressClass resource that contains additional configuration including the name
of the controller that should implement the class.
secure an Ingress by specifying a Secret
that contains a TLS private key and certificate.
The Ingress resource only
supports a single TLS port, 443, and assumes TLS termination at the ingress point
(traffic to the Service and its Pods is in plaintext).
TLS will not work on the default rule because the
certificates would have to be issued for all the possible sub-domains.
hosts in the tls section need to explicitly match the host in the rules
section.
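A TLS sketch (host, Secret and Service names are placeholders; the Secret must contain tls.crt and tls.key entries):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: tls-example-ingress
    spec:
      tls:
        - hosts:
            - https-example.foo.com
          secretName: testsecret-tls
      rules:
        - host: https-example.foo.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: service1
                    port:
                      number: 80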
use all rules keywords, like if, changes, and exists, in the same
rule. The rule evaluates to true only when all included keywords evaluate to true.
use parentheses with && and || to build more complicated variable expressions.
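A rules sketch combining if, changes and when (the job, variable check and file list are examples):

    docker-build:
      script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
      rules:
        - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
          changes:
            - Dockerfile
          when: manual
          allow_failure: true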
Use workflow to specify which types of pipelines
can run.
every push to an open merge request’s source branch
causes duplicated pipelines.
avoid duplicate pipelines by changing the job rules to avoid either push (branch)
pipelines or merge request pipelines.
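A workflow sketch following the docs' pattern: prefer merge request pipelines and suppress branch pipelines when an open merge request exists:

    workflow:
      rules:
        - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
        - if: '$CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS'
          when: never
        - if: '$CI_COMMIT_BRANCH'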
do not mix only/except jobs with rules jobs in the same pipeline.
For behavior similar to the only/except keywords, you can
check the value of the $CI_PIPELINE_SOURCE variable
commonly used variables for if clauses
rules:changes expressions to determine when
to add jobs to a pipeline
Use !reference tags to reuse rules in different
jobs.
Use except to define when a job does not run.
only or except used without refs is the same as
only:refs / except:refs
If you change multiple files, but only one file ends in .md,
the build job is still skipped.
If you use multiple keywords with only or except, the keywords are evaluated
as a single conjoined expression.
only includes the job if all of the keys have at least one condition that matches.
except excludes the job if any of the keys have at least one condition that matches.
With only, individual keys are logically joined by an AND
With except, individual keys are logically joined by an OR
To specify a job as manual, add when: manual to the job
in the .gitlab-ci.yml file.
Use protected environments
to define a list of users authorized to run a manual job.
Use when: delayed to execute scripts after a waiting period, or if you want to avoid
jobs immediately entering the pending state.
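Sketches of both (job names, commands and the delay are placeholders):

    deploy_prod:
      stage: deploy
      script:
        - echo "Deploy to production"
      when: manual

    rollout_10_percent:
      stage: deploy
      script:
        - echo "Partial rollout"
      when: delayed
      start_in: 30 minutes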
To split a large job into multiple smaller jobs that run in parallel, use the
parallel keyword
run a trigger job multiple times in parallel in a single pipeline,
but with different variable values for each instance of the job.
The @ symbol denotes the beginning of a ref’s repository path.
To match a ref name that contains the @ character in a regular expression,
you must use the hex character code match \x40.
Compare a variable to a string
Check if a variable is undefined
Check if a variable is empty
Check if a variable exists
Matches are found when using =~.
Matches are not found when using !~.
Join variable expressions together with && or ||
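Examples of each check inside an if clause (the variable names are placeholders):

    rules:
      - if: '$VARIABLE == "some value"'           # compare to a string
      - if: '$VARIABLE == null'                    # undefined
      - if: '$VARIABLE == ""'                      # defined but empty
      - if: '$VARIABLE'                            # exists and is not empty
      - if: '$VARIABLE =~ /^content.*/'            # regex match found
      - if: '$VAR1 == "val1" && $VAR2 == "val2"'   # joined with &&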
There’s a lot wrong with this: you could be using the wrong version of code that has exploits, has a bug in it, or worse it could have malware bundled in on purpose—you just don’t know.
Keep Base Images Small
The Node.js base image, for example, includes an extra 600MB of libraries you don’t need.