Use {{ ... }} expressions to print to the template output.
Use a dot (.) to access attributes of a variable.
The outer double curly braces are not part of the
variable, but of the print statement.
If you access variables inside tags, don't
put braces around them.
If a variable or attribute does not exist, you get back an undefined
value.
The default behavior is to evaluate to an empty string if
printed or iterated over, and to fail for every other operation.
Beware if an object has an item and an attribute with the same
name. Additionally, the attr() filter only looks up attributes.
Variables can be modified by filters. Filters are separated from the
variable by a pipe symbol (|) and may have optional arguments in
parentheses.
Multiple filters can be chained
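For example, a filter with an argument, a chained pair, and a fallback for an undefined value might look like (variable names illustrative):

```jinja
{{ name|striptags|title }}          {# chained: strip markup, then title-case #}
{{ items|join(', ') }}              {# filter with an argument #}
{{ missing_value|default('n/a') }}  {# fall back when the value is undefined #}
```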
Tests can be used
to test a variable against a common expression.
To perform a test, add is plus the name of the test after the variable.
to find out if a variable is defined, you can do name is defined,
which will then return true or false depending on whether name is defined
in the current template context.
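A sketch of both forms (variable names illustrative):

```jinja
{% if name is defined %}
    Hello {{ name }}!
{% endif %}

{% if value is not number %}
    not a number
{% endif %}
```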
You can strip whitespace in templates by hand. If you add a minus
sign (-) to the start or end of a block (e.g. a for tag), a
comment, or a variable expression, the whitespace before or after
that block will be removed.
You must not add whitespace between the tag and the minus sign.
To keep the engine from evaluating a section, mark the block as raw.
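A sketch of both features:

```jinja
{% for item in seq -%}
    {{ item }}
{%- endfor %}
{# the minus signs strip the whitespace around each item #}

{% raw %}
    This {{ will not be evaluated }} by the engine.
{% endraw %}
```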
Template inheritance
allows you to build a base “skeleton” template that contains all the common
elements of your site and defines blocks that child templates can override.
The {% extends %} tag is the key here. It tells the template engine that
this template “extends” another template.
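A minimal inheritance sketch (file names illustrative):

```jinja
{# base.html #}
<title>{% block title %}Default title{% endblock %}</title>
<div id="content">{% block content %}{% endblock %}</div>

{# child.html #}
{% extends "base.html" %}
{% block title %}Index{% endblock %}
{% block content %}
    Welcome to my page.
{% endblock content %}  {# block name repeated after the end tag for readability #}
```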
access templates in subdirectories with a slash
You can't define multiple {% block %} tags with the same name in the
same template. To render a block's content more than once, use the special
self variable and call the block with that name, e.g.
self.title()
Calling super() renders the contents of the parent's block.
You can put the name of the block after the end tag for better
readability.
If the block is replaced by
a child template, a variable would appear that was not defined in the block or
passed to the context.
You can avoid this by setting the block to "scoped": add the scoped
modifier to the block declaration.
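For example, without scoped, item would be undefined inside the block if a child template overrides it:

```jinja
{% for item in seq %}
    {% block loop_item scoped %}{{ item }}{% endblock %}
{% endfor %}
```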
If you have a variable that may
include any of the following chars (>, <, &, or ") you
SHOULD escape it unless the variable contains well-formed and trusted
HTML.
Jinja2 functions (macros, super, self.BLOCKNAME) always return template
data that is marked as safe.
With the default syntax, control structures appear inside
{% ... %} blocks.
the dictsort filter
loop.cycle
Unlike in Python, it’s not possible to break or continue in a loop
To use loops recursively, add the recursive modifier
to the loop definition and call the loop variable with the new iterable
where you want to recurse.
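A recursive loop over a nested sitemap might look like (data structure illustrative):

```jinja
<ul class="sitemap">
{%- for item in sitemap recursive %}
    <li><a href="{{ item.href }}">{{ item.title }}</a>
    {%- if item.children -%}
        <ul class="submenu">{{ loop(item.children) }}</ul>
    {%- endif %}</li>
{%- endfor %}
</ul>
```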
The loop variable always refers to the closest (innermost) loop.
loop.changed tells you whether the value changed at all.
The if statement can be used to test if a variable is defined, not
empty and not false.
Macros are comparable with functions in regular programming languages.
If a macro name starts with an underscore, it’s not exported and can’t
be imported.
You can pass a macro to another macro with a call block; the called
macro renders the block's contents via the special caller() function.
a single trailing newline is stripped if present
other whitespace (spaces, tabs, newlines etc.) is returned unchanged
a block tag works in “both”
directions. That is, a block tag doesn’t just provide a placeholder to fill
- it also defines the content that fills the placeholder in the parent.
Python dicts were traditionally not ordered (insertion order is only guaranteed since Python 3.7)
caller(user)
call(user)
This is a simple dialog rendered by using a macro and
a call block.
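A sketch of a macro combined with a call block (macro name and markup illustrative):

```jinja
{% macro render_dialog(title, class='dialog') -%}
    <div class="{{ class }}">
        <h2>{{ title }}</h2>
        <div class="contents">
            {{ caller() }}
        </div>
    </div>
{%- endmacro %}

{% call render_dialog('Hello World') %}
    This is a simple dialog rendered by using a macro and a call block.
{% endcall %}
```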
Filter sections allow you to apply regular Jinja2 filters on a block of
template data.
Assignments at
top level (outside of blocks, macros or loops) are exported from the template
like top level macros and can be imported by other templates.
You can work around scoping limits by using namespace
objects, which allow propagating changes across scopes.
use block assignments to
capture the contents of a block into a variable name.
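A sketch of both techniques (item and method names illustrative):

```jinja
{% set ns = namespace(found=false) %}
{% for item in items %}
    {% if item.flagged %}{% set ns.found = true %}{% endif %}
{% endfor %}
Found a flagged item: {{ ns.found }}

{% set navigation %}
    <li><a href="/">Index</a></li>
{% endset %}
```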
The extends tag can be used to extend one template from another.
Blocks are used for inheritance and act as both placeholders and replacements
at the same time.
The include statement is useful to include a template and return the
rendered contents of that file into the current namespace
Included templates have access to the variables of the active context by
default.
putting often used code into macros
imports are cached
and imported templates don’t have access to the current template variables,
just the globals by default.
Macros and variables starting with one or more underscores are private and
cannot be imported.
By default, included templates are passed the current context and imported
templates are not.
imports are often used just as a module that holds macros.
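A sketch of the difference (template and macro names illustrative):

```jinja
{# include: rendered with the current context by default #}
{% include 'header.html' %}

{# import: cached, and by default no access to the current context #}
{% import 'forms.html' as forms %}
{{ forms.input('username') }}

{% from 'forms.html' import input as input_field %}
{{ input_field('password', type='password') }}
```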
Integers and floating point numbers are created by just writing the
number down
Everything between two square brackets ([]) is a list.
Tuples are like lists that cannot be modified (“immutable”).
A dict in Python is a structure that combines keys and values.
//: Divide two numbers and return the truncated integer result.
The special constants true, false, and none are indeed lowercase
all Jinja identifiers are lowercase
(expr): group an expression.
The is and in operators support negation using an infix notation.
in: perform a sequence / mapping containment test.
|: applies a filter.
~: converts all operands into strings and concatenates them.
use inline if expressions.
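For example (names illustrative):

```jinja
{{ "Hello " ~ name ~ "!" }}               {# ~ concatenates as strings #}
{{ 'active' if item.active else 'idle' }} {# inline if expression #}
```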
With the attr() filter, always an attribute is returned and items are not
looked up.
default(value, default_value=u'', boolean=False)
If the value is undefined it will return the passed default value, otherwise the value of the variable.
dictsort(value, case_sensitive=False, by='key', reverse=False)
Sort a dict and yield (key, value) pairs.
format(value, *args, **kwargs)
Apply Python string formatting to an object.
groupby(value, attribute)
Group a sequence of objects by a common attribute. The attribute being grouped by is stored in the grouper attribute, and the list contains all the objects that have this grouper in common.
indent(s, width=4, first=False, blank=False, indentfirst=None)
Return a copy of the string with each line indented by 4 spaces. The first line and blank lines are not indented by default.
join(value, d=u'', attribute=None)
Return a string which is the concatenation of the strings in the sequence.
map()
Applies a filter on a sequence of objects or looks up an attribute.
pprint(value, verbose=False)
Pretty print a variable. Useful for debugging.
reject()
Filters a sequence of objects by applying a test to each object, rejecting the objects for which the test succeeds.
replace(s, old, new, count=None)
Return a copy of the value with all occurrences of a substring replaced with a new one.
round(value, precision=0, method='common')
Round the number to a given precision. Even when rounded to 0 precision, a float is returned.
select()
Filters a sequence of objects by applying a test to each object, selecting only the objects for which the test succeeds.
sort(value, reverse=False, case_sensitive=False, attribute=None)
Sort an iterable. By default it sorts ascending; if you pass it true as the first argument it will reverse the sorting.
striptags(value)
Strip SGML/XML tags and replace adjacent whitespace with one space.
tojson(value, indent=None)
Dumps a structure to JSON so that it's safe to use in <script> tags.
trim(value)
Strip leading and trailing whitespace.
unique(value, case_sensitive=False, attribute=None)
Returns a list of unique items from the given iterable.
urlize(value, trim_url_limit=None, nofollow=False, target=None, rel=None)
Converts URLs in plain text into clickable links.
defined(value)
Return true if the variable is defined.
in(value, seq)
Check if value is in seq.
mapping(value)
Return true if the object is a mapping (dict etc.).
number(value)
Return true if the variable is a number.
sameas(value, other)
Check if an object points to the same memory address as another object.
undefined(value)
Like defined() but the other way round.
A joiner is passed a string and will return that string every time it's called, except the first time (in which case it returns an empty string).
namespace(...)
Creates a new container that allows attribute assignment using the {% set %} tag.
The with statement makes it possible to create a new inner scope.
Variables set within this scope are not visible outside of the scope.
You can activate and deactivate autoescaping from within
the templates.
With both trim_blocks and lstrip_blocks enabled, you can put block tags
on their own lines, and the entire block line will be removed when
rendered, preserving the whitespace of the contents
the browser has all of the certificates in the chain to link it up to a trusted root certificate.
Any certificate in between your certificate and the root certificate is called a chain or intermediate certificate.
These must be installed on the web server with the primary certificate for your web site so that users' browsers can link your certificate to a trusted authority.
update-policy only applies to, and may only appear in, zone clauses. This statement defines the rules by which DDNS updates may be carried out. It may only be used with a key (TSIG or SIG(0)) which is used to cryptographically sign each update request. It is mutually exclusive with allow-update in any single zone clause. The statement may take the keyword local or an update-policy-rule structure. The keyword local is designed to simplify configuration of secure updates using a TSIG key and limits the update source only to localhost (loopback address, 127.0.0.1 or ::1), thus both nsupdate (or any other application using DDNS) and the name server being updated must reside on the same host.
discussing the basic structure of an Nginx configuration file along with some guidelines on how to design your files
/etc/nginx/nginx.conf
In Nginx parlance, the areas that these brackets define are called "contexts" because they contain configuration details that are separated according to their area of concern
if a directive is valid in multiple nested scopes, a declaration in a broader context will be passed on to any child contexts as default values.
The children contexts can override these values at will
Nginx will error out on reading a configuration file with directives that are declared in the wrong context.
The most general context is the "main" or "global" context
Any directive that exists entirely outside of these blocks is said to inhabit the "main" context
The main context represents the broadest environment for Nginx configuration.
The "events" context is contained within the "main" context. It is used to set global options that affect how Nginx handles connections at a general level.
Nginx uses an event-based connection processing model, so the directives defined within this context determine how worker processes should handle connections.
the connection processing method is automatically selected based on the most efficient choice that the platform has available
a worker will only take a single connection at a time
When configuring Nginx as a web server or reverse proxy, the "http" context will hold the majority of the configuration.
The http context is a sibling of the events context, so they should be listed side-by-side, rather than nested
fine-tune the TCP keep alive settings (keepalive_disable, keepalive_requests, and keepalive_timeout)
The "server" context is declared within the "http" context.
The server context can be declared multiple times;
each instance defines a specific virtual server to handle client requests
Each client request will be handled according to the configuration defined in a single server context, so Nginx must decide which server context is most appropriate based on details of the request.
listen: The ip address / port combination that this server block is designed to respond to.
server_name: This directive is the other component used to select a server block for processing.
"Host" header
configure files to try to respond to requests (try_files)
issue redirects and rewrites (return and rewrite)
set arbitrary variables (set)
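A sketch of a server block using these directives (addresses and names illustrative):

```nginx
server {
    listen 80;                     # ip address / port combination to respond to
    server_name example.com;       # matched against the request's "Host" header

    set $maintenance 0;            # arbitrary variable
    try_files $uri $uri/ =404;     # files to try in order to answer the request
    # return 301 https://example.com$request_uri;   # redirect example
}
```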
Location contexts share many relational qualities with server contexts
multiple location contexts can be defined, each location is used to handle a certain type of client request, and each location is selected by virtue of matching the location definition against the client request through a selection algorithm
Location blocks live within server contexts and, unlike server blocks, can be nested inside one another.
While server contexts are selected based on the requested IP address/port combination and the host name in the "Host" header, location blocks further divide up the request handling within a server block by looking at the request URI
The request URI is the portion of the request that comes after the domain name or IP address/port combination.
New directives at this level allow you to reach locations outside of the document root (alias), mark the location as only internally accessible (internal), and proxy to other servers or locations (using http, fastcgi, scgi, and uwsgi proxying).
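Sketches of those location-level directives (paths and upstream address illustrative):

```nginx
location /images/ {
    alias /data/static/images/;        # reach a location outside the document root
}

location /protected/ {
    internal;                          # only reachable via internal redirects
}

location /app/ {
    proxy_pass http://127.0.0.1:8080;  # proxy the request to another server
}
```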
These can then be used to do A/B testing by providing different content to different hosts.
configures Perl handlers for the location they appear in
set the value of a variable depending on the value of another variable
used to map MIME types to the file extensions that should be associated with them.
this context defines a named pool of servers that Nginx can then proxy requests to
The upstream context should be placed within the http context, outside of any specific server contexts.
The upstream context can then be referenced by name within server or location blocks to pass requests of a certain type to the pool of servers that have been defined.
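A sketch of an upstream pool and a location that references it (names and addresses illustrative):

```nginx
http {
    upstream backend_pool {                  # named pool of servers
        server 10.0.0.10:8080;
        server 10.0.0.11:8080;
    }

    server {
        location /app/ {
            proxy_pass http://backend_pool;  # pass requests to the pool by name
        }
    }
}
```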
function as a high performance mail proxy server
The mail context is defined within the "main" or "global" context (outside of the http context).
Nginx has the ability to redirect authentication requests to an external authentication server
the if directive in Nginx will execute the instructions contained if a given test returns "true".
Since Nginx will test conditions of a request with many other purpose-made directives, if should not be used for most forms of conditional execution.
The limit_except context is used to restrict the use of certain HTTP methods within a location context.
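A sketch of such a block (path and subnet illustrative):

```nginx
location /restricted-write {
    limit_except GET HEAD {
        allow 192.168.1.1/24;
        deny all;
    }
}
```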
The result of the above example is that any client can use the GET and HEAD verbs, but only clients coming from the 192.168.1.1/24 subnet are allowed to use other methods.
Many directives are valid in more than one context
it is usually best to declare directives in the highest context to which they are applicable, and to override them in lower contexts as necessary.
Declaring at higher levels provides you with a sane default
Nginx already engages in a well-documented selection algorithm for things like selecting server blocks and location blocks.
instead of relying on rewrites to get a user supplied request into the format that you would like to work with, you should try to set up two blocks for the request, one of which represents the desired method, and the other that catches messy requests and redirects (and possibly rewrites) them to your correct block.
incorrect requests can get by with a redirect rather than a rewrite, which should execute with lower overhead.
Control structures (called "actions" in template parlance) provide you, the
template author, with the ability to control the flow of a template's
generation
Data Manipulation Language (DML) commands are used to modify data in a database. DML statements control access to the database data.
DDL commands are used to create, delete or alter the structure of objects in a database but not its data.
DDL deals with descriptions of the database schema and is useful for creating new tables, indexes, sequences, stogroups, etc. and to define the attributes of these objects, such as data type, field length and alternate table names (aliases).
Data Query Language (DQL) is used to get data within the schema objects of a database and also to query it and impose order upon it.
DQL is also a subset of SQL. One of the most common commands in DQL is SELECT.
The most common command types in DDL are CREATE, ALTER and DROP.
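The three categories can be sketched with Python's built-in sqlite3 module (table and data are illustrative):

```python
import sqlite3

# In-memory database for demonstration.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: define the structure of an object (a table), not its data.
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# DML: modify the data held in that structure.
cur.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
conn.commit()

# DQL: query the data and impose an order upon it.
rows = cur.execute("SELECT name FROM users ORDER BY name").fetchall()
print(rows)  # → [('alice',)]
```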
"I wanted to generate interesting game maps that weren't constrained to be realistic, and I wanted to try some techniques I hadn't tried before. I usually make tile maps but instead used a different structure. What could I do with 1,000 polygons instead of 1,000,000 tiles? The distinct player-recognizable areas might be useful for gameplay: locations of towns, places to quest, territory to conquer or settle, landmarks, pathfinding waypoints, difficulty zones, etc. I generated maps with polygons, then rasterized them into tile maps that looked like this:"
"jq is like sed for JSON data - you can use it to slice and filter and map and transform structured data with the same ease that sed, awk, grep and friends let you play with text."
db:setup task will create the database, load the schema and initialize
it with the seed data
db:reset task will drop the database and set it up again. This is
functionally equivalent to rails db:drop db:setup.
run a specific migration up or down, the db:migrate:up and
db:migrate:down
Tasks such as db:migrate run against the environment given by the
RAILS_ENV environment variable (for example, RAILS_ENV=test bin/rails db:migrate).
VERBOSE=false will suppress all output.
If you have
already run the migration, then you cannot just edit the migration and run the
migration again: Rails thinks it has already run the migration and so will do
nothing when you run rails db:migrate.
You must roll back the migration (for
example with bin/rails db:rollback), edit your migration, and then run
rails db:migrate to run the corrected version.
In general, editing existing migrations is not a good idea.
Instead, you should write a new migration that performs the changes
you require.
revert method can be helpful when writing a new migration to undo
previous migrations in whole or in part
require_relative
revert
They are not designed to be
edited; they just represent the current state of the database.
Schema files are also useful if you want a quick look at what attributes an
Active Record object has
annotate_models gem automatically
adds and updates comments at the top of each model summarizing the schema if
you desire that functionality.
database-independent
multiple databases
db/schema.rb cannot express database specific
items such as triggers, stored procedures or check constraints
Although you can execute custom SQL statements in migrations, the schema dumper cannot
reconstitute those statements from the database
The Laravel application lives inside worker processes, which
means it can be stored and kept in memory.
The application is initialized only once, on the first request; any changes you make to its state persist across requests unless you reset them yourself.
A chart is a collection of files
that describe a related set of Kubernetes resources.
A single chart
might be used to deploy something simple, like a memcached pod, or
something complex, like a full web app stack with HTTP servers,
databases, caches, and so on.
Charts are created as files laid out in a particular directory tree,
then they can be packaged into versioned archives to be deployed.
A chart is organized as a collection of files inside of a directory.
values.yaml # The default configuration values for this chart
charts/ # A directory containing any charts upon which this chart depends.
templates/ # A directory of templates that, when combined with values,
# will generate valid Kubernetes manifest files.
version: A SemVer 2 version (required)
apiVersion: The chart API version, always "v1" (required)
Every chart must have a version number. A version must follow the
SemVer 2 standard.
non-SemVer names are explicitly
disallowed by the system.
When generating a
package, the helm package command will use the version that it finds
in the Chart.yaml as a token in the package name.
the appVersion field is not related to the version field. It is
a way of specifying the version of the application.
appVersion: The version of the app that this contains (optional). This needn't be SemVer.
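A Chart.yaml sketch with these fields (names and versions illustrative):

```yaml
apiVersion: v1
name: mychart
version: 1.2.3        # must follow SemVer 2; helm package names the archive mychart-1.2.3.tgz
appVersion: "9.6"     # version of the packaged application; need not be SemVer
description: An example chart
deprecated: false
```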
If the latest version of a chart in the
repository is marked as deprecated, then the chart as a whole is considered to
be deprecated.
deprecated: Whether this chart is deprecated (optional, boolean)
one chart may depend on any number of other charts.
dependencies can be dynamically linked through the requirements.yaml
file or brought in to the charts/ directory and managed manually.
the preferred method of declaring dependencies is by using a
requirements.yaml file inside of your chart.
A requirements.yaml file is a simple file for listing your
dependencies.
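A requirements.yaml sketch (chart names and repository URLs illustrative):

```yaml
dependencies:
  - name: apache
    version: 1.2.3
    repository: http://example.com/charts
  - name: mysql
    version: 3.2.1
    repository: http://another.example.com/charts
```

After writing this file, helm dependency update downloads the listed charts into charts/.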
The repository field is the full URL to the chart repository.
you must also use helm repo add to add that repo locally.
helm dependency update
and it will use your dependency file to download all the specified
charts into your charts/ directory for you.
When helm dependency update retrieves charts, it will store them as
chart archives in the charts/ directory.
Managing charts with requirements.yaml is a good way to easily keep
charts updated, and also share requirements information throughout a
team.
All charts are loaded by default.
The condition field holds one or more YAML paths (delimited by commas).
If this path exists in the top parent’s values and resolves to a boolean value,
the chart will be enabled or disabled based on that boolean value.
The tags field is a YAML list of labels to associate with this chart.
all charts with tags can be enabled or disabled by
specifying the tag and a boolean value.
The --set parameter can be used as usual to alter tag and condition values.
Conditions (when set in values) always override tags.
The first condition path that exists wins and subsequent ones for that chart are ignored.
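A sketch of condition and tags in a parent chart (chart names, URL, and values illustrative):

```yaml
# parent's requirements.yaml
dependencies:
  - name: subchart1
    version: 0.1.0
    repository: http://localhost:10191
    condition: subchart1.enabled
    tags:
      - front-end

# parent's values.yaml
subchart1:
  enabled: true
tags:
  front-end: false
```

Here the condition path subchart1.enabled exists and resolves to true, so it wins over the front-end tag; --set tags.front-end=true could flip the tag for other charts.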
The keys containing the values to be imported can be specified in the parent chart’s requirements.yaml file
using a YAML list. Each item in the list is a key which is imported from the child chart’s exports field.
specifying the key data in our import list, Helm looks in the exports field of the child
chart for data key and imports its contents.
the parent key data is not contained in the parent’s final values. If you need to specify the
parent key, use the ‘child-parent’ format.
To access values that are not contained in the exports key of the child chart’s values, you will need to
specify the source key of the values to be imported (child) and the destination path in the parent chart’s
values (parent).
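A sketch of both import styles (chart name, URL, and keys illustrative):

```yaml
# child chart's values.yaml
exports:
  data:
    myint: 99

# parent's requirements.yaml: import the 'data' key from the child's exports
dependencies:
  - name: subchart
    repository: http://localhost:10191
    version: 0.1.0
    import-values:
      - data

# child-parent format, for values not under the child's exports key:
#   import-values:
#     - child: default.data
#       parent: myimports
```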
To drop a dependency into your charts/ directory, use the
helm fetch command
A dependency can be either a chart archive (foo-1.2.3.tgz) or an
unpacked chart directory.
name cannot start with _ or ..
Such files are ignored by the chart loader.
a single release is created with all the objects for the chart and its dependencies.
Helm Chart templates are written in the
Go template language, with the
addition of 50 or so add-on template
functions from the Sprig library and a
few other specialized functions
When
Helm renders the charts, it will pass every file in that directory
through the template engine.
Chart developers may supply a file called values.yaml inside of a
chart. This file can contain default values.
Chart users may supply a YAML file that contains values. This can be
provided on the command line with helm install.
When a user supplies custom values, these values will override the
values in the chart’s values.yaml file.
Template files follow the standard conventions for writing Go templates
{{default "minio" .Values.storage}}
Values that are supplied via a values.yaml file (or via the --set
flag) are accessible from the .Values object in a template.
pre-defined, are available to every template, and
cannot be overridden
the names are case
sensitive
Release.Name: The name of the release (not the chart)
Release.IsUpgrade: This is set to true if the current operation is an upgrade or rollback.
Release.Revision: The revision number. It begins at 1, and increments with
each helm upgrade
Chart: The contents of the Chart.yaml
Files: A map-like object containing all non-special files in the chart.
Files can be
accessed using {{index .Files "file.name"}} or using the {{.Files.Get name}} or
{{.Files.GetString name}} functions.
.helmignore
access the contents of the file
as []byte using {{.Files.GetBytes}}
Any unknown Chart.yaml fields will be dropped
Chart.yaml cannot be
used to pass arbitrarily structured data into the template.
A values file is formatted in YAML.
A chart may include a default
values.yaml file
be merged into the default
values file.
The default values file included inside of a chart must be named
values.yaml
accessible inside of templates using the
.Values object
Values files can declare values for the top-level chart, as well as for
any of the charts that are included in that chart’s charts/ directory.
Charts at a higher level have access to all of the variables defined
beneath.
lower level charts cannot access things in
parent charts
Values are namespaced, but namespaces are pruned.
the scope of the values has been reduced and the
namespace prefix removed
Helm supports special “global” value.
a way of sharing one top-level variable with all
subcharts, which is useful for things like setting metadata properties
like labels.
If a subchart declares a global variable, that global will be passed
downward (to the subchart’s subcharts), but not upward to the parent
chart.
global variables of parent charts take precedence over the global variables from subcharts.
helm lint
A chart repository is an HTTP server that houses one or more packaged
charts
Any HTTP server that can serve YAML files and tar files and can answer
GET requests can be used as a repository server.
Helm does not provide tools for uploading charts to
remote repository servers.
the only way to add a chart to $HELM_HOME/starters is to manually
copy it there.
Helm provides a hook mechanism to allow chart developers to intervene
at certain points in a release’s life cycle.
Execute a Job to back up a database before installing a new chart,
and then execute a second job after the upgrade in order to restore
data.
Hooks are declared as an annotation in the metadata section of a manifest
Hooks work like regular templates, but they have special annotations
pre-install
post-install: Executes after all resources are loaded into Kubernetes
pre-delete
post-delete: Executes on a deletion request after all of the release’s
resources have been deleted.
pre-upgrade
post-upgrade
pre-rollback
post-rollback: Executes on a rollback request after all resources
have been modified.
crd-install
test-success: Executes when running helm test and expects the pod to
return successfully (return code == 0).
test-failure: Executes when running helm test and expects the pod to
fail (return code != 0).
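A sketch of a hook declared via annotations on a manifest (job name illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-post-install-job"
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"             # weights must be quoted strings
    "helm.sh/hook-delete-policy": hook-succeeded
```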
Hooks allow you, the chart developer, an opportunity to perform
operations at strategic points in a release lifecycle
Tiller then loads the hook with the lowest weight first (negative to positive)
Tiller returns the release name (and other data) to the client
If the resource is a Job kind, Tiller
will wait until the job successfully runs to completion.
if the job
fails, the release will fail. This is a blocking operation, so the
Helm client will pause while the Job is run.
If they
have hook weights (see below), they are executed in weighted order. Otherwise,
ordering is not guaranteed.
good practice to add a hook weight, and set it
to 0 if weight is not important.
The resources that a hook creates are not tracked or managed as part of the
release.
Once Tiller verifies that the hook has reached its ready state, it will leave the hook resource alone.
To destroy such
resources, you need to either write code to perform this operation in a pre-delete
or post-delete hook or add "helm.sh/hook-delete-policy" annotation to the hook template file.
Hooks are just Kubernetes manifest files with special annotations in the
metadata section
One resource can implement multiple hooks
no limit to the number of different resources that
may implement a given hook.
When subcharts declare hooks, those are also evaluated. There is no way
for a top-level chart to disable the hooks declared by subcharts.
Hook weights can be positive or negative numbers but must be represented as
strings.
sort those hooks in ascending order.
Hook deletion policies
"before-hook-creation" specifies Tiller should delete the previous hook before the new hook is launched.
By default Tiller will wait for 60 seconds for a deleted hook to no longer exist in the API server before timing out.
Custom Resource Definitions (CRDs) are a special kind in Kubernetes.
The crd-install hook is executed very early during an installation, before
the rest of the manifests are verified.
A common reason why the hook resource might already exist is that it was not deleted following use on a previous install/upgrade.
Helm uses Go templates for templating
your resource files.
two special template functions: include and required
include
function allows you to bring in another template, and then pass the results to other
template functions.
The required function allows you to declare a particular
values entry as required for template rendering.
If the value is empty, the template
rendering will fail with a user submitted error message.
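For example (the .Values.who entry and the message are illustrative):

```
{{ required "A valid .Values.who entry is required!" .Values.who }}
```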
When you are working with string data, you are always safer quoting the
strings than leaving them as bare words
Quote strings, don't quote integers: when working with integers, do not quote the values.
This does not apply to env variable values, which are expected to be strings
even when they represent numbers.
to include a template, and then perform an operation
on that template’s output, Helm has a special include function
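For example:

```
{{ include "toYaml" $value | nindent 2 }}
```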
The above includes a template called toYaml, passes it $value, and
then passes the output of that template to the nindent function.
Go provides a way for setting template options to control behavior
when a map is indexed with a key that’s not present in the map
The required function gives developers the ability to declare a value entry
as required for template rendering.
The tpl function allows developers to evaluate strings as templates inside a template.
Rendering a external configuration file
(.Files.Get "conf/app.conf")
Image pull secrets are essentially a combination of registry, username, and password.
Automatically Roll Deployments When ConfigMaps or Secrets change
configmaps or secrets are injected as configuration
files in containers
a restart may be required should those
be updated with a subsequent helm upgrade
The sha256sum function can be used to ensure a deployment’s
annotation section is updated if another file changes
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
helm upgrade --recreate-pods
"helm.sh/resource-policy": keep
resources that should not be deleted when Helm runs a
helm delete
this resource becomes
orphaned. Helm will no longer manage it in any way.
create some reusable parts in your chart
In the templates/ directory, any file that begins with an
underscore(_) is not expected to output a Kubernetes manifest file.
by convention, helper templates and partials are placed in a
_helpers.tpl file.
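A sketch of a named partial defined in _helpers.tpl and included elsewhere (template and label names illustrative):

```
{{/* templates/_helpers.tpl — leading underscore: not rendered as a manifest */}}
{{- define "mychart.labels" -}}
app: {{ .Chart.Name }}
release: {{ .Release.Name }}
{{- end -}}

{{/* in another template file: */}}
metadata:
  labels:
{{ include "mychart.labels" . | indent 4 }}
```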
The current best practice for composing a complex application from discrete parts
is to create a top-level umbrella chart that
exposes the global configurations, and then use the charts/ subdirectory to
embed each of the components.
SAP’s Converged charts: These charts
install SAP Converged Cloud, a full OpenStack IaaS, on Kubernetes. All of the charts are collected
together in one GitHub repository, except for a few submodules.
Deis’s Workflow:
This chart exposes the entire Deis PaaS system with one chart. But it’s different
from the SAP chart in that this umbrella chart is built from each component, and
each component is tracked in a different Git repository.
YAML is a superset of JSON
any valid JSON structure ought to be valid in YAML.
As a best practice, templates should follow a YAML-like syntax unless
the JSON syntax substantially reduces the risk of a formatting issue.
There are functions in Helm that allow you to generate random data,
cryptographic keys, and so on.
a chart repository is a location where packaged charts can be
stored and shared.
A chart repository is an HTTP server that houses an index.yaml file and
optionally some packaged charts.
Because a chart repository can be any HTTP server that can serve YAML and tar
files and can answer GET requests, you have a plethora of options when it comes
down to hosting your own chart repository.
It is not required that a chart package be located on the same server as the
index.yaml file.
A valid chart repository must have an index file. The
index file contains information about each chart in the chart repository.
The Helm project provides an open-source Helm repository server called ChartMuseum that you can host yourself.
$ helm repo index fantastic-charts --url https://fantastic-charts.storage.googleapis.com
A repository will not be added if it does not contain a valid
index.yaml
add the repository to their helm client via the helm
repo add [NAME] [URL] command with any name they would like to use to
reference the repository.
Helm has provenance tools which help chart users verify the integrity and origin
of a package.
Integrity is established by comparing a chart to a provenance record
The provenance file contains a chart’s YAML file plus several pieces of
verification information
Chart repositories serve as a centralized collection of Helm charts.
Chart repositories must make it possible to serve provenance files over HTTP via
a specific request, and must make them available at the same URI path as the chart.
We don’t want to be “the certificate authority” for all chart
signers. Instead, we strongly favor a decentralized model, which is part
of the reason we chose OpenPGP as our foundational technology.
The Keybase platform provides a public
centralized repository for trust information.
A chart contains a number of Kubernetes resources and components that work together.
A test in a helm chart lives under the templates/ directory and is a pod definition that specifies a container with a given command to run.
The pod definition must contain one of the helm test hook annotations: helm.sh/hook: test-success or helm.sh/hook: test-failure
helm test
nest your test suite under a tests/ directory like <chart-name>/templates/tests/
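A sketch of such a test pod, assuming a chart that exposes a service on .Values.service.port (image and command are assumptions, not from the source):

```yaml
# templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-test-connection"
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['{{ .Release.Name }}:{{ .Values.service.port }}']
  restartPolicy: Never
```

Run it with helm test after installing the release.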
"Cello is a library that brings higher level programming to C.
By acting as a modern, powerful runtime system Cello makes many things easy that were previously impractical or awkward in C such as:
Generic Data Structures
Polymorphic Functions
Interfaces / Type Classes
Constructors / Destructors
Optional Garbage Collection
Exceptions
Reflection
And because Cello works seamlessly alongside standard C you get all the other benefits such as great performance, powerful tooling, and extensive libraries."
If a database does not exist, MongoDB creates the database when you
first store data for that database.
The insertOne() operation creates both the
database myNewDB and the collection myNewCollection1 if they do
not already exist.
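A mongosh session sketch of that implicit creation (database and collection names mirror the note above):

```
// mongosh: neither myNewDB nor myNewCollection1 need exist beforehand;
// both are created on the first insert
use myNewDB
db.myNewCollection1.insertOne({ x: 1 })
```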
MongoDB stores documents in collections.
If a collection does not exist, MongoDB creates the collection when you
first store data for that collection.
MongoDB provides the db.createCollection() method to
explicitly create a collection with various options, such as setting
the maximum size or the document validation rules.
By default, a collection does not require its documents to have the
same schema;
To change the structure of the documents in a collection, such as add
new fields, remove existing fields, or change the field values to a new
type, update the documents to the new structure.
Collections are assigned an immutable UUID.
To retrieve the UUID for a collection, run either the
listCollections command
or the db.getCollectionInfos() method.
deployment.yaml: A basic manifest for creating a Kubernetes deployment
using the suffix .yaml for YAML files and .tpl for helpers.
It is just fine to put a plain YAML file like this in the templates/ directory.
helm get manifest
The helm get manifest command takes a release name (full-coral) and prints
out all of the Kubernetes resources that were uploaded to the server. Each file
begins with --- to indicate the start of a YAML document
Names should be unique to a release
The name: field is limited to 63 characters because of limitations to
the DNS system.
release names are limited to 53 characters
{{ .Release.Name }}
A template directive is enclosed in {{ and }} blocks.
The values that are passed into a template can be thought of as namespaced objects, where a dot (.) separates each namespaced element.
The leading dot before Release indicates that we start with the top-most namespace for this scope
The Release object is one of the built-in objects for Helm
When you want to test the template rendering, but not actually install anything, you can use helm install ./mychart --debug --dry-run
Using --dry-run will make it easier to test your code, but it won’t ensure that Kubernetes itself will accept the templates you generate.
Objects are passed into a template from the template engine.
create new objects within your templates
Objects can be simple, and have just one value. Or they can contain other objects or functions.
Release is one of the top-level objects that you can access in your templates.
Release.Namespace: The namespace to be released into (if the manifest doesn’t override)
Values: Values passed into the template from the values.yaml file and from user-supplied files. By default, Values is empty.
Chart: The contents of the Chart.yaml file.
Files: This provides access to all non-special files in a chart.
Files.Get is a function for getting a file by name
Files.GetBytes is a function for getting the contents of a file as an array of bytes instead of as a string. This is useful for things like images.
Template: Contains information about the current template that is being executed
BasePath: The namespaced path to the templates directory of the current chart
The built-in values always begin with a capital letter.
Go’s naming convention
use only initial lower case letters in order to distinguish local names from those built-in.
If this is a subchart, the values.yaml file of a parent chart
Individual parameters passed with --set
values.yaml is the default, which can be overridden by a parent chart’s values.yaml, which can in turn be overridden by a user-supplied values file, which can in turn be overridden by --set parameters.
While structuring data this way is possible, the recommendation is that you keep your values trees shallow, favoring flatness.
If you need to delete a key from the default values, you may override the value of the key to be null, in which case Helm will remove the key from the overridden values merge.
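A hedged sketch of that override, matching the livenessProbe example in the next note (chart name and probe fields are illustrative):

```
# setting livenessProbe.httpGet to null removes it from the merged
# values, so only the newly supplied exec handler remains
helm install stable/drupal \
  --set livenessProbe.exec.command=[cat,docroot/CHANGELOG.txt] \
  --set livenessProbe.httpGet=null
```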
Kubernetes would then fail because you cannot declare more than one livenessProbe handler.
When injecting strings from the .Values object into the template, we ought to quote these strings.
quote
Template functions follow the syntax functionName arg1 arg2...
While we talk about the “Helm template language” as if it is Helm-specific, it is actually a combination of the Go template language, some extra functions, and a variety of wrappers to expose certain objects to the templates.
Drawing on a concept from UNIX, pipelines are a tool for chaining together a series of template commands to compactly express a series of transformations.
pipelines are an efficient way of getting several things done in sequence
The repeat function will echo the given string the given number of times
default DEFAULT_VALUE GIVEN_VALUE. This function allows you to specify a default value inside of the template, in case the value is omitted.
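A one-line sketch combining default with quote (the favorite.drink key is illustrative):

```yaml
# if .Values.favorite.drink is omitted, "tea" is used instead
drink: {{ .Values.favorite.drink | default "tea" | quote }}
```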
all static default values should live in the values.yaml, and should not be repeated using the default command
Operators are implemented as functions that return a boolean value.
To use eq, ne, lt, gt, and, or, not, etc., place the operator at the front of the statement, followed by its parameters, just as you would a function.
if and
if or
with to specify a scope
range, which provides a “for each”-style loop
block declares a special kind of fillable template area
A pipeline is evaluated as false if the value is:
a boolean false
a numeric zero
an empty string
a nil (empty or null)
an empty collection (map, slice, tuple, dict, array)
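A sketch of operators in an if block (the values keys are illustrative; the whole block is omitted when the condition is falsey):

```yaml
{{- if and .Values.enabled (eq .Values.favorite.drink "coffee") }}
mug: "true"
{{- end }}
```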
incorrect YAML because of the whitespacing
When the template engine runs, it removes the contents inside of {{ and }}, but it leaves the remaining whitespace exactly as is.
{{- (with the dash and space added) indicates that whitespace should be chomped left, while -}} means whitespace to the right should be consumed.
Newlines are whitespace!
an * at the end of the line indicates a newline character that would be removed
Be careful with the chomping modifiers.
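A sketch of the chomping in practice (values keys illustrative): the {{- on each control line consumes the preceding newline, so the if/end lines leave no blank lines in the output.

```yaml
food: {{ .Values.favorite.food }}
{{- if eq .Values.favorite.drink "coffee" }}
mug: "true"
{{- end }}
```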
the indent function
Scopes can be changed. with can allow you to set the current scope (.) to a particular object.
Inside of the restricted scope, you will not be able to access the other objects from the parent scope.
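A sketch of with (the favorite object is illustrative); inside the block, . refers to .Values.favorite, and objects like .Release are unreachable without $:

```yaml
{{- with .Values.favorite }}
drink: {{ .drink | quote }}
food: {{ .food | quote }}
{{- end }}
```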
range
The range function will “range over” (iterate through) the pizzaToppings list.
Just as the "with" action sets the scope of ., so does the "range" operator.
The toppings: |- line is declaring a multi-line string.
not a YAML list. It’s a big string.
the data in ConfigMaps data is composed of key/value pairs, where both the key and the value are simple strings.
The |- marker in YAML takes a multi-line string.
range can be used to iterate over collections that have a key and a value (like a map or dict).
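A sketch of both forms of range (pizzaToppings and favorite are illustrative values keys):

```yaml
# over a list: . becomes each element in turn
toppings: |-
  {{- range .Values.pizzaToppings }}
  - {{ . | title | quote }}
  {{- end }}
# over a map/dict: capture key and value
{{- range $key, $val := .Values.favorite }}
{{ $key }}: {{ $val | quote }}
{{- end }}
```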
In Helm templates, a variable is a named reference to another object. It follows the form $name
Variables are assigned with a special assignment operator: :=
{{- $relname := .Release.Name -}}
capture both the index and the value
the integer index (starting from zero) to $index and the value to $topping
For data structures that have both a key and a value, we can use range to get both
Variables are normally not “global”. They are scoped to the block in which they are declared.
one variable that is always global - $ - this variable will always point to the root context.
$.
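A sketch of reaching the root context from inside a range (tlsSecrets is an illustrative values key):

```yaml
{{- range .Values.tlsSecrets }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .name }}
  labels:
    # inside range, . is the list element; $ still points at the root
    app.kubernetes.io/instance: {{ $.Release.Name }}
{{- end }}
```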
Helm template language is its ability to declare multiple templates and use them together.
A named template (sometimes called a partial or a subtemplate) is simply a template defined inside of a file, and given a name.
when naming templates: template names are global.
If you declare two templates with the same name, whichever one is loaded last will be the one used.
you should be careful to name your templates with chart-specific names.
templates in subcharts are compiled together with top-level templates
naming convention is to prefix each defined template with the name of the chart: {{ define "mychart.labels" }}
each.value — The map value corresponding to this instance. (If a set was
provided, this is the same as each.key.)
for_each keys cannot be the result of (or rely on the result of) impure functions,
including uuid, bcrypt, or timestamp, as their evaluation is deferred during the
main evaluation step.
The value used in for_each is used
to identify the resource instance and will always be disclosed in UI output,
which is why sensitive values are not allowed.
if you would like to call keys(local.map), where
local.map is an object with sensitive values (but non-sensitive keys), you can create a
value to pass to for_each with toset([for k,v in local.map : k]).
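A sketch of that workaround (the resource type and local.map are illustrative; each.value equals each.key for a set, so the map is indexed explicitly):

```hcl
resource "aws_ssm_parameter" "example" {
  # iterate over the non-sensitive keys only
  for_each = toset([for k, v in local.map : k])

  name  = each.key
  value = local.map[each.key]
}
```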
for_each
can't refer to any resource attributes that aren't known until after a
configuration is applied (such as a unique ID generated by the remote API when
an object is created).
The for_each argument
does not implicitly convert lists or tuples to sets.
Transform a multi-level nested structure into a flat list by
using nested for expressions with the flatten function.
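A hedged sketch of that pattern, assuming a var.networks map whose values each carry a subnets map (resource type and attribute names are illustrative):

```hcl
locals {
  # nested for expressions produce a list of lists;
  # flatten collapses it into one element per subnet
  network_subnets = flatten([
    for network_key, network in var.networks : [
      for subnet_key, subnet in network.subnets : {
        network_key = network_key
        subnet_key  = subnet_key
        cidr_block  = subnet.cidr_block
      }
    ]
  ])
}

resource "aws_subnet" "example" {
  # for_each needs a map, so project the flat list into one with unique keys
  for_each = {
    for s in local.network_subnets : "${s.network_key}.${s.subnet_key}" => s
  }

  cidr_block = each.value.cidr_block
}
```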
Instances are
identified by a map key (or set member) from the value provided to for_each
Within nested provisioner or connection blocks, the special
self object refers to the current resource instance, not the resource block
as a whole.
Conversion from list to set discards the ordering of the items in the list and
removes any duplicate elements.