"The YubiKey 4 is the strong authentication bullseye the industry has been aiming at for years, enabling one single key to secure an unlimited number of applications.
Yubico's 4th generation YubiKey is built on high-performance secure elements. It includes the same range of one-time password and public key authentication protocols as in the YubiKey NEO, excluding NFC, but with stronger public/private keys, faster crypto operations and the world's first touch-to-sign feature.
With the YubiKey 4 platform, we have further improved our manufacturing and ordering process, enabling customers to order exactly what functions they want in 500+ unit volumes, with no secrets stored at Yubico or shared with a third-party organization. The best part? An organization can securely customize 1,000 YubiKeys in less than 10 minutes.
For customers who require NFC, the YubiKey NEO is our full-featured key with both contact (USB) and contactless (NFC, MIFARE) communications."
"re:Work is organized around some of the biggest ways you can make an impact in your workplace. Each subject contains guides, with tools and insights, for addressing specific challenges."
"
Slack is a popular team communications application for organizations that offers group chat and direct messaging for mobile, web, and desktop platforms. While Slack offers many benefits to customers, there are also downsides to using the platform, including high subscription fees and the risk of a massive leak of private data if Slack's servers are ever breached (again)."
A PKI allows you to bind public keys (contained in SSL certificates) with a person in a way that allows you to trust the certificate.
Public Key Infrastructures, like the one used to secure the Internet, most commonly use a Certificate Authority (CA), sometimes supported by a separate Registration Authority (RA) that verifies identities on its behalf, to verify the identity of an entity and create unforgeable certificates.
An SSL Certificate Authority (also called a trusted third party or CA) is an organization that issues digital certificates to organizations or individuals after verifying their identity.
Commands are reusable sets of steps that you can invoke with specific parameters within an existing job.
you can pass my-executor as the value of a name key under executor. This method is primarily employed when passing parameters to executor invocations.
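A minimal sketch of both ideas in a version 2.1 config; the executor name my-executor comes from the note above, while the command name, job name, parameter, and Docker image are illustrative assumptions:

version: 2.1
commands:
  greet:                          # a reusable set of steps (name assumed)
    parameters:
      to:
        type: string
        default: "World"
    steps:
      - run: echo "Hello << parameters.to >>"
executors:
  my-executor:                    # declared once, outside of any job
    docker:
      - image: cimg/base:stable   # image is an assumption
jobs:
  build:
    executor:
      name: my-executor           # my-executor passed as the value of the name key
    steps:
      - greet:                    # invoke the command with a specific parameter
          to: "CircleCI"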
Development orbs are mutable and expire after 90 days.
Production Orbs are immutable and durable.
CircleCI allows development orbs that have versions that start with dev:
Production orbs are immutable
Each installation of CircleCI, including circleci.com, has only one registry where orbs can be kept.
Organization Admins publish production orbs.
Organization members publish development orbs
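A hedged example of how the two kinds of orb versions are referenced from a config; the namespace and orb names are made up:

version: 2.1
orbs:
  my-orb: my-namespace/my-orb@1.2.3          # production orb: immutable semver version
  my-dev-orb: my-namespace/my-orb@dev:first  # development orb: mutable, expires after 90 days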
You must invoke jobs in the workflow stanza of the config.yml file, making sure to pass any necessary parameters as subkeys to the job.
When you declare an executor in a configuration outside of jobs, you can use these declarations for all jobs in the scope of that declaration, enabling you to reuse a single executor definition across multiple jobs.
Orbs are transparent - If you can execute an orb, you and anyone else can view the source of that orb.
models directory is meant to hold tests for your models
controllers directory is meant to hold tests for your controllers
integration directory is meant to hold tests that involve any number of controllers interacting
Fixtures are a way of organizing test data; they reside in the fixtures folder
The test_helper.rb file holds the default configuration for your tests
Fixtures allow you to populate your testing database with predefined data before your tests run
Fixtures are database-independent and written in YAML.
one file per model.
Each fixture is given a name followed by an indented list of colon-separated key/value pairs.
Keys which resemble YAML keywords such as 'yes' and 'no' are quoted so that the YAML Parser correctly interprets them.
define a reference node between two different fixtures.
ERB allows you to embed Ruby code within templates
The YAML fixture format is pre-processed with ERB when Rails loads fixtures.
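A small sketch of what that looks like; the model, fixture, and column names are invented for illustration:

# test/fixtures/users.yml -- each fixture has a name followed by indented key/value pairs
david:
  name: David
  profession: Systems development

# test/fixtures/articles.yml -- "user: david" is a reference node to the fixture above
first_article:
  title: Welcome
  user: david

# because fixtures are pre-processed with ERB, they can also be generated dynamically,
# for example in users.yml:
<% 1.upto(3) do |i| %>
user_<%= i %>:
  name: User <%= i %>
<% end %>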
Rails by default automatically loads all fixtures from the test/fixtures folder for your model and controller tests.
Fixtures are instances of Active Record.
access the object directly
test_helper.rb specifies the default configuration to run our tests. This is included with all the tests, so any methods added to this file are available to all your tests.
test with method names prefixed with test_.
An assertion is a line of code that evaluates an object (or expression) for expected results.
bin/rake db:test:prepare
Every test contains one or more assertions. Only when all the assertions are successful will the test pass.
rake test command
run a particular test method from the test case by running the test and providing the test method name.
The . (dot) above indicates a passing test. When a test fails you see an F; when a test throws an error you see an E in its place.
we first wrote a test which fails for a desired functionality, then we wrote some code which adds the functionality and finally we ensured that our test passes. This approach to software development is referred to as Test-Driven Development (TDD).
each action also maps to particular CRUD operations in a database
resource :photo and resources :photos create both singular and plural routes that map to the same controller (PhotosController).
One way to avoid deep nesting (as recommended above) is to generate the collection actions scoped under the parent, so as to get a sense of the hierarchy, but to not nest the member actions.
to only build routes with the minimal amount of information to uniquely identify the resource
The shallow method of the DSL creates a scope inside of which every nesting is shallow
These concerns can be used in resources to avoid code duplication and share behavior across routes
add a member route, just add a member block into the resource block
You can leave out the :on option; this will create the same member route, except that the resource id value will be available in params[:photo_id] instead of params[:id].
Singular Resources
use a singular resource to map /profile (rather than /profile/:id) to the show action
Passing a String to get will expect a controller#action format
workaround
organize groups of controllers under a namespace
route /articles (without the prefix /admin) to Admin::ArticlesController
route /admin/articles to ArticlesController (without the Admin:: module prefix)
Nested routes allow you to capture this relationship in your routing.
helpers take an instance of Magazine as the first parameter (magazine_ads_url(@magazine)).
Resources should never be nested more than 1 level deep.
via the :shallow option
a balance between descriptive routes and deep nesting
:shallow_path prefixes member paths with the specified parameter
Routing Concerns allows you to declare common routes that can be reused inside other resources and routes
Rails can also create paths and URLs from an array of parameters.
use url_for with a set of objects
In helpers like link_to, you can specify just the object in place of the full url_for call
insert the action name as the first element of the array
This will recognize /photos/1/preview with GET, and route to the preview action of PhotosController, with the resource id value passed in params[:id]. It will also create the preview_photo_url and preview_photo_path helpers.
pass :on to a route, eliminating the block:
Collection Routes
This will enable Rails to recognize paths such as /photos/search with GET, and route to the search action of PhotosController. It will also create the search_photos_url and search_photos_path route helpers.
simple routing makes it very easy to map legacy URLs to new Rails actions
add an alternate new action using the :on shortcut
When you set up a regular route, you supply a series of symbols that Rails maps to parts of an incoming HTTP request.
:controller maps to the name of a controller in your application
:action maps to the name of an action within that controller
This route will also route the incoming request of /photos to PhotosController#index, since :action and :id are optional parameters, denoted by parentheses.
use a constraint on :controller that matches the namespace you require
dynamic segments don't accept dots
The params will also include any parameters from the query string
:defaults option.
set params[:format] to "jpg"
cannot override defaults via query parameters
specify a name for any route using the :as option
create logout_path and logout_url as named helpers in your application.
Inside the show action of UsersController, params[:username] will contain the username for the user.
should use the get, post, put, patch and delete methods to constrain a route to a particular verb.
use the match method with the :via option to match multiple verbs at once
Routing both GET and POST requests to a single action has security implications
'GET' in Rails won't check for CSRF token. You should never write to the database from 'GET' requests
use the :constraints option to enforce a format for a dynamic segment
constraints
don't need to use anchors
Request-Based Constraints
Request-based constraints call a method on the Request object with the same name as the hash key and then compare the return value with the hash value.
constraint values should match the corresponding Request object method return type
reuse dynamic segments from the match in the path to redirect
this redirection is a 301 "Moved Permanently" redirect.
root method
put the root route at the top of the file
The root route only routes GET requests to the action.
root inside namespaces and scopes
For namespaced controllers you can use the directory notation
Only the directory notation is supported
use the :constraints option to specify a required format on the implicit id
specify a single constraint to apply to a number of routes by using the block
non-resourceful routes
:id parameter doesn't accept dots
:as option lets you override the normal naming for the named route helpers
use the :as option to prefix the named route helpers that Rails generates for a route
prevent name collisions
prefix routes with a named parameter
This will provide you with URLs such as /bob/articles/1 and will allow you to reference the username part of the path as params[:username] in controllers, helpers and views
:only option
:except option
Generating only the routes that you actually need can cut down on memory use and speed up the routing process.
alter path names
http://localhost:3000/rails/info/routes
rake routes
setting the CONTROLLER environment variable
Routes should be included in your testing strategy
DI means that you can declare components very freely and then from any other component, just ask for an instance of it and it will be granted
do test-driven development iteratively in AngularJS!
only do DOM manipulation in a directive
with ngClass we can dynamically update the class;
ngBind gives one-way data binding (ngModel gives two-way);
ngShow and ngHide programmatically show or hide an element;
The less DOM manipulation, the easier directives are to test, the easier they are to style, the easier they are to change in the future, and the more re-usable and distributable they are.
still wrong.
Before doing DOM manipulation anywhere in your application, ask yourself if you really need to.
a few things wrong with this
jQuery was never necessary
use angular.element and our component will still work when dropped into a project that doesn't have jQuery.
just use angular.element
the element that is passed to the link function would already be a jQuery element!
directives aren't just collections of jQuery-like functions
Directives are actually extensions of HTML
If HTML doesn't do something you need it to do, you write a directive to do it for you, and then use it just as if it was part of HTML.
think how the team would accomplish it to fit right in with ngClick, ngClass, et al.
Don't even use jQuery. Don't even include it.
Try to think about how to do it within the confines of AngularJS.
In jQuery, selectors are used to find DOM elements and then bind/register event handlers to them.
Views are (declarative) HTML that contain AngularJS directives
Directives set up the event handlers behind the scenes for us and give us dynamic databinding.
Views are tied to models (via scopes). Views are a projection of the model
In AngularJS, think about models, rather than jQuery-selected DOM elements that hold your data.
AngularJS uses controllers and directives (each of which can have their own controller, and/or compile and linking functions) to remove behavior from the view/structure (HTML). Angular also has services and filters to help separate/organize your application.
Think about your models
Think about how you want to present your models -- your views.
using the necessary directives to get dynamic databinding.
Attach a controller to each view (using ng-view and routing, or ng-controller)
Make controllers as thin as possible.
You can do a lot with jQuery without knowing about how JavaScript prototypal inheritance works.
Refer to the YAML Anchors/Aliases documentation for information about how to alias and reuse syntax to keep your .circleci/config.yml file small.
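One common pattern, as a hedged sketch; the defaults key, image, and job names are assumptions:

defaults: &defaults                # define a block once with an anchor
  docker:
    - image: cimg/base:stable
  working_directory: ~/repo

jobs:
  build:
    <<: *defaults                  # reuse it with an alias plus the YAML merge key
    steps:
      - checkout
  test:
    <<: *defaults
    steps:
      - run: make test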
workflow orchestration with two parallel jobs
jobs run according to configured requirements, each job waiting to start until the required job finishes successfully
requires: key
fans-out to run a set of acceptance test jobs in parallel, and finally fans-in to run a common deploy job.
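A sketch of such a workflow; the job names (build, the acceptance tests, deploy) are assumptions:

workflows:
  build_accept_deploy:
    jobs:
      - build
      - acceptance_test_1:
          requires:                # waits for build to finish successfully
            - build
      - acceptance_test_2:
          requires:
            - build
      - deploy:
          requires:                # fans back in: waits for both acceptance jobs
            - acceptance_test_1
            - acceptance_test_2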
Holding a Workflow for a Manual Approval
Workflows can be configured to wait for manual approval of a job before
continuing to the next job
add a job to the jobs list with the
key type: approval
approval is a special job type that is only available to jobs under the workflow key
The name of the job to hold is arbitrary - it could be wait or pause, for example,
as long as the job has a type: approval key in it.
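For example (job names assumed, per the note that the name of the hold job is arbitrary):

workflows:
  build_with_approval:
    jobs:
      - build
      - hold:                      # could equally be named wait or pause
          type: approval           # the special job type, only valid under workflows
          requires:
            - build
      - deploy:
          requires:
            - hold                 # runs only after someone approves the hold job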
schedule a workflow
to run at a certain time for specific branches.
The triggers key is only added under your workflows key
using cron syntax to represent Coordinated Universal Time (UTC) for specified branches.
By default,
a workflow is triggered on every git push
the commit workflow has no triggers key
and will run on every git push
The nightly workflow has a triggers key
and will run on the specified schedule
Cron step syntax (for example, */1, */20) is not supported.
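A sketch of the two styles side by side; the job names and the schedule itself are assumptions:

workflows:
  commit:                          # no triggers key, so it runs on every git push
    jobs:
      - test
  nightly:                         # triggers key, so it runs only on the schedule
    triggers:
      - schedule:
          cron: "0 0 * * *"        # midnight UTC; step syntax such as */20 is not supported
          filters:
            branches:
              only:
                - master
    jobs:
      - coverage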
use a context to share environment variables
use the same shared environment variables when initiated by a user who is part of the organization.
CircleCI does not run workflows for tags
unless you explicitly specify tag filters.
CircleCI branch and tag filters support
the Java variant of regex pattern matching.
Each workflow has an associated workspace which can be used to transfer files to downstream jobs as the workflow progresses.
The workspace is an additive-only store of data.
Jobs can persist data to the workspace
Downstream jobs can attach the workspace to their container filesystem.
Attaching the workspace downloads and unpacks each layer based on the ordering of the upstream jobs in the workflow graph.
Workflows that include jobs running on multiple branches may require data to be shared using workspaces
To persist data from a job and make it available to other jobs, configure the job to use the persist_to_workspace key.
Files and directories named in the paths: property of persist_to_workspace will be uploaded to the workflow’s temporary workspace relative to the directory specified with the root key.
Configure a job to get saved data by configuring the attach_workspace key.
persist_to_workspace
attach_workspace
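A minimal sketch of the two keys working together; the job names, image, and file paths are illustrative:

jobs:
  build:
    docker:
      - image: cimg/base:stable
    steps:
      - run: mkdir -p workspace && echo "built artifact" > workspace/artifact.txt
      - persist_to_workspace:
          root: workspace          # paths below are relative to this root
          paths:
            - artifact.txt
  deploy:
    docker:
      - image: cimg/base:stable
    steps:
      - attach_workspace:
          at: /tmp/workspace       # downloads and unpacks the upstream layers here
      - run: cat /tmp/workspace/artifact.txt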
To rerun only a workflow’s failed jobs, click the Workflows icon in the app and select a workflow to see the status of each job, then click the Rerun button and select Rerun from failed.
if you do not see your workflows triggering, a configuration error is preventing the workflow from starting.
check your Workflows page of the CircleCI app (not the Job page)
A single Google Cloud VPC can span multiple regions without communicating across the public Internet.
Google Cloud VPCs let you increase the IP space of any subnets without any workload shutdown or downtime.
Get private access to Google services, such as storage, big data, analytics, or machine learning, without having to give your service a public IP address.
A CSR is usually generated on the server where the certificate will be installed and contains information that will be included in the certificate, such as the organization name, common name (domain name), locality, and country.
A private key is usually created at the same time that you create the CSR, making a key pair.
A CSR, or Certificate Signing Request, is a block of encoded text that is given to a Certificate Authority when applying for an SSL Certificate
most organizations practice continuous delivery, which means that your default branch can be deployed.
Merging everything into the master branch and frequently deploying means you minimize the amount of unreleased code, which is in line with lean and continuous delivery best practices.
you can deploy to production every time you merge a feature branch.
deploy a new version by merging master into the production branch.
you can have your deployment script create a tag on each deployment.
to have an environment that is automatically updated to the master branch
commits only flow downstream, ensures that everything is tested in all environments.
first merge these bug fixes into master, and then cherry-pick them into the release branch.
Merging into master and then cherry-picking into release is called an “upstream first” policy
“merge request” since the final action is to merge the feature branch.
“pull request” since the first manual action is to pull the feature branch
it is common to protect the long-lived branches
After you merge a feature branch, you should remove it from the source control software
When you are ready to code, create a branch for the issue from the master branch.
This branch is the place for any work related to this change.
A merge request is an online place to discuss the change and review the code.
If you open the merge request but do not assign it to anyone, it is a “Work In Progress” merge request.
Start the title of the merge request with “[WIP]” or “WIP:” to prevent it from being merged before it’s ready.
To automatically close linked issues, mention them with the words “fixes” or “closes,” for example, “fixes #14” or “closes #67.” GitLab closes these issues when the code is merged into the default branch.
If you have an issue that spans across multiple repositories, create an issue for each repository and link all issues to a parent issue.
With Git, you can use an interactive rebase (rebase -i) to squash multiple commits into one or reorder them.
you should never rebase commits you have pushed to a remote server.
Rebasing creates new commits for all your changes, which can cause confusion because the same change would have multiple identifiers.
if someone has already reviewed your code, rebasing makes it hard to tell what changed since the last review.
never rebase commits authored by other people.
it is a bad idea to rebase commits that you have already pushed.
always use the “no fast-forward” (--no-ff) strategy when you merge manually.
you should try to avoid merge commits in feature branches
people avoid merge commits by just using rebase to reorder their commits after the commits on the master branch.
Using rebase prevents a merge commit when merging master into your feature branch, and it creates a neat linear history.
you should never rebase commits you have pushed to a remote server
Sometimes you can reuse recorded resolutions (rerere), but merging is better since you only have to resolve conflicts once.
not frequently merge master into the feature branch.
utilizing new code, resolving merge conflicts, and updating long-running branches.
just cherry-picking a commit.
If your feature branch has a merge conflict, creating a merge commit is a standard way of solving this.
keep your feature branches short-lived.
split your features into smaller units of work
you should try to prevent merge commits, but not eliminate them.
Your codebase should be clean, but your history should represent what actually happened.
Splitting up work into individual commits provides context for developers looking at your code later.
push your feature branch frequently, even when it is not yet ready for review.
Commit often and push frequently
A commit message should reflect your intention, not just the contents of the commit.
Testing before merging
When using GitLab flow, developers create their branches from this master branch, so it is essential that it never breaks.
Therefore, each merge request must be tested before it is accepted.
When creating a feature branch, always branch from an up-to-date master
A chart is a collection of files
that describe a related set of Kubernetes resources.
A single chart
might be used to deploy something simple, like a memcached pod, or
something complex, like a full web app stack with HTTP servers,
databases, caches, and so on.
Charts are created as files laid out in a particular directory tree,
then they can be packaged into versioned archives to be deployed.
A chart is organized as a collection of files inside of a directory.
values.yaml # The default configuration values for this chart
charts/ # A directory containing any charts upon which this chart depends.
templates/ # A directory of templates that, when combined with values,
# will generate valid Kubernetes manifest files.
version: A SemVer 2 version (required)
apiVersion: The chart API version, always "v1" (required)
Every chart must have a version number. A version must follow the
SemVer 2 standard.
non-SemVer names are explicitly
disallowed by the system.
When generating a
package, the helm package command will use the version that it finds
in the Chart.yaml as a token in the package name.
the appVersion field is not related to the version field. It is
a way of specifying the version of the application.
appVersion: The version of the app that this contains (optional). This needn't be SemVer.
If the latest version of a chart in the
repository is marked as deprecated, then the chart as a whole is considered to
be deprecated.
deprecated: Whether this chart is deprecated (optional, boolean)
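Putting those fields together, a Chart.yaml might look like the following; the name, versions, and description are invented:

apiVersion: v1
name: my-app
version: 1.2.3           # must follow SemVer 2; helm package names the archive my-app-1.2.3.tgz
appVersion: "4.5.6"      # version of the application inside; unrelated to version, need not be SemVer
description: An example application chart
deprecated: false        # optional boolean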
one chart may depend on any number of other charts.
dependencies can be dynamically linked through the requirements.yaml
file or brought in to the charts/ directory and managed manually.
the preferred method of declaring dependencies is by using a
requirements.yaml file inside of your chart.
A requirements.yaml file is a simple file for listing your
dependencies.
The repository field is the full URL to the chart repository.
you must also use helm repo add to add that repo locally.
helm dependency update
and it will use your dependency file to download all the specified
charts into your charts/ directory for you.
When helm dependency update retrieves charts, it will store them as
chart archives in the charts/ directory.
Managing charts with requirements.yaml is a good way to easily keep
charts updated, and also share requirements information throughout a
team.
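A requirements.yaml sketch; the dependency names and repository URLs are placeholders:

dependencies:
  - name: apache
    version: 1.2.3
    repository: https://example.com/charts    # full URL to the chart repository
  - name: mysql
    version: 3.2.1
    repository: https://another.example.com/charts
# after "helm repo add" for each repository, "helm dependency update" downloads
# these charts into the charts/ directory as chart archives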
All charts are loaded by default.
The condition field holds one or more YAML paths (delimited by commas).
If this path exists in the top parent’s values and resolves to a boolean value,
the chart will be enabled or disabled based on that boolean value.
The tags field is a YAML list of labels to associate with this chart.
all charts with tags can be enabled or disabled by
specifying the tag and a boolean value.
The --set parameter can be used as usual to alter tag and condition values.
Conditions (when set in values) always override tags.
The first condition path that exists wins and subsequent ones for that chart are ignored.
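A hedged example of both fields; the subchart name, repository, and values are assumptions:

# parent chart's requirements.yaml
dependencies:
  - name: subchart1
    repository: https://example.com/charts
    version: 0.1.0
    condition: subchart1.enabled      # YAML path resolved against the parent's values
    tags:
      - front-end

# parent chart's values.yaml
subchart1:
  enabled: true                       # the condition resolves to this boolean
tags:
  front-end: false                    # a set condition overrides the tag
# tags and conditions can also be flipped at install time, e.g. --set tags.front-end=true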
The keys containing the values to be imported can be specified in the parent chart’s requirements.yaml file
using a YAML list. Each item in the list is a key which is imported from the child chart’s exports field.
specifying the key data in our import list, Helm looks in the exports field of the child
chart for data key and imports its contents.
the parent key data is not contained in the parent’s final values. If you need to specify the
parent key, use the ‘child-parent’ format.
To access values that are not contained in the exports key of the child chart’s values, you will need to
specify the source key of the values to be imported (child) and the destination path in the parent chart’s
values (parent).
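A sketch of both forms; the chart names and keys are placeholders:

# child chart's values.yaml
exports:
  data:
    myint: 99

# parent chart's requirements.yaml
dependencies:
  - name: subchart
    repository: https://example.com/charts
    version: 0.1.0
    import-values:
      - data                  # exports form: pulls in the contents of exports.data
      - child: default.data   # child-parent form: for values outside the exports key
        parent: myimports     # destination path in the parent's values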
To drop a dependency into your charts/ directory, use the
helm fetch command
A dependency can be either a chart archive (foo-1.2.3.tgz) or an
unpacked chart directory.
the name cannot start with _ or .; such files are ignored by the chart loader.
a single release is created with all the objects for the chart and its dependencies.
Helm Chart templates are written in the
Go template language, with the
addition of 50 or so add-on template
functions from the Sprig library and a
few other specialized functions
When
Helm renders the charts, it will pass every file in that directory
through the template engine.
Chart developers may supply a file called values.yaml inside of a
chart. This file can contain default values.
Chart users may supply a YAML file that contains values. This can be
provided on the command line with helm install.
When a user supplies custom values, these values will override the
values in the chart’s values.yaml file.
Template files follow the standard conventions for writing Go templates
{{default "minio" .Values.storage}}
Values that are supplied via a values.yaml file (or via the --set
flag) are accessible from the .Values object in a template.
pre-defined, are available to every template, and
cannot be overridden
the names are case
sensitive
Release.Name: The name of the release (not the chart)
Release.IsUpgrade: This is set to true if the current operation is an upgrade or rollback.
Release.Revision: The revision number. It begins at 1, and increments with
each helm upgrade
Chart: The contents of the Chart.yaml
Files: A map-like object containing all non-special files in the chart.
Files can be
accessed using {{index .Files "file.name"}} or using the {{.Files.Get name}} or
{{.Files.GetString name}} functions.
.helmignore
access the contents of the file
as []byte using {{.Files.GetBytes}}
Any unknown Chart.yaml fields will be dropped
Chart.yaml cannot be
used to pass arbitrarily structured data into the template.
A values file is formatted in YAML.
A chart may include a default
values.yaml file
be merged into the default
values file.
The default values file included inside of a chart must be named
values.yaml
accessible inside of templates using the
.Values object
Values files can declare values for the top-level chart, as well as for
any of the charts that are included in that chart’s charts/ directory.
Charts at a higher level have access to all of the variables defined
beneath.
lower level charts cannot access things in
parent charts
Values are namespaced, but namespaces are pruned.
the scope of the values has been reduced and the
namespace prefix removed
Helm supports a special “global” value.
a way of sharing one top-level variable with all
subcharts, which is useful for things like setting metadata properties
like labels.
If a subchart declares a global variable, that global will be passed
downward (to the subchart’s subcharts), but not upward to the parent
chart.
global variables of parent charts take precedence over the global variables from subcharts.
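A values.yaml sketch showing namespacing and a global value; the chart and key names are invented:

# parent chart's values.yaml
global:
  app: MyApp              # visible to the parent and to every subchart as .Values.global.app
title: "My Site"
mysql:                    # values namespaced for the mysql subchart
  max_connections: 100
# inside the mysql subchart's templates the namespace prefix is pruned:
# {{ .Values.max_connections }} and {{ .Values.global.app }} resolve there,
# but the parent's {{ .Values.title }} is not reachable from the subchart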
helm lint
A chart repository is an HTTP server that houses one or more packaged
charts
Any HTTP server that can serve YAML files and tar files and can answer
GET requests can be used as a repository server.
Helm does not provide tools for uploading charts to
remote repository servers.
the only way to add a chart to $HELM_HOME/starters is to manually
copy it there.
Helm provides a hook mechanism to allow chart developers to intervene
at certain points in a release’s life cycle.
Execute a Job to back up a database before installing a new chart,
and then execute a second job after the upgrade in order to restore
data.
Hooks are declared as an annotation in the metadata section of a manifest
Hooks work like regular templates, but they have special annotations
pre-install
post-install: Executes after all resources are loaded into Kubernetes
pre-delete
post-delete: Executes on a deletion request after all of the release’s
resources have been deleted.
pre-upgrade
post-upgrade
pre-rollback
post-rollback: Executes on a rollback request after all resources
have been modified.
crd-install
test-success: Executes when running helm test and expects the pod to
return successfully (return code == 0).
test-failure: Executes when running helm test and expects the pod to
fail (return code != 0).
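A hedged sketch of a hook manifest, here a post-install Job; the names, image, and command are placeholders:

# templates/post-install-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-post-install"
  annotations:
    "helm.sh/hook": post-install     # the annotation is what makes this manifest a hook
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: post-install-job
          image: alpine:3.9
          command: ["/bin/sh", "-c", "echo the release is installed"]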
Hooks allow you, the chart developer, an opportunity to perform
operations at strategic points in a release lifecycle
Tiller then loads the hook with the lowest weight first (negative to positive)
Tiller returns the release name (and other data) to the client
If the resource is a Job kind, Tiller
will wait until the job successfully runs to completion.
if the job
fails, the release will fail. This is a blocking operation, so the
Helm client will pause while the Job is run.
If they
have hook weights (see below), they are executed in weighted order. Otherwise,
ordering is not guaranteed.
good practice to add a hook weight, and set it
to 0 if weight is not important.
The resources that a hook creates are not tracked or managed as part of the
release.
leave the hook resource alone.
To destroy such
resources, you need to either write code to perform this operation in a pre-delete
or post-delete hook or add "helm.sh/hook-delete-policy" annotation to the hook template file.
Hooks are just Kubernetes manifest files with special annotations in the
metadata section
One resource can implement multiple hooks
no limit to the number of different resources that
may implement a given hook.
When subcharts declare hooks, those are also evaluated. There is no way
for a top-level chart to disable the hooks declared by subcharts.
Hook weights can be positive or negative numbers but must be represented as
strings.
sort those hooks in ascending order.
Hook deletion policies
"before-hook-creation" specifies Tiller should delete the previous hook before the new hook is launched.
By default Tiller will wait for 60 seconds for a deleted hook to no longer exist in the API server before timing out.
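The weight and deletion-policy annotations sit alongside the hook annotation; the values here are illustrative:

metadata:
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-5"                         # must be a string; lower weights run first
    "helm.sh/hook-delete-policy": before-hook-creation  # delete the previous hook before launching the new one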
Custom Resource Definitions (CRDs) are a special kind in Kubernetes.
The crd-install hook is executed very early during an installation, before
the rest of the manifests are verified.
A common reason why the hook resource might already exist is that it was not deleted following use on a previous install/upgrade.
Helm uses Go templates for templating
your resource files.
two special template functions: include and required
include
function allows you to bring in another template, and then pass the results to other
template functions.
The required function allows you to declare a particular
values entry as required for template rendering.
If the value is empty, the template
rendering will fail with a user submitted error message.
When you are working with string data, you are always safer quoting the
strings than leaving them as bare words
Quote Strings, Don’t Quote Integers
when working with integers do not quote the values
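For example, in a template (the value names are assumptions):

name: {{ .Values.MyName | quote }}   # string values: quote them
port: {{ .Values.Port }}             # integer values: leave them bare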
env variable values, which are expected to be strings, should be quoted even if they look like integers
to include a template, and then perform an operation
on that template’s output, Helm has a special include function
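A one-line sketch of that pattern, using the toYaml template and $value variable that the following note describes:

{{ include "toYaml" $value | nindent 2 }}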
The above includes a template called toYaml, passes it $value, and
then passes the output of that template to the nindent function.
Go provides a way for setting template options to control behavior
when a map is indexed with a key that’s not present in the map
The required function gives developers the ability to declare a value entry
as required for template rendering.
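For instance (the value name and error message are assumptions):

apiKey: {{ required "A valid .Values.apiKey entry is required!" .Values.apiKey }}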
The tpl function allows developers to evaluate strings as templates inside a template.
Rendering an external configuration file
(.Files.Get "conf/app.conf")
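Combining the two, a template can render an external file that itself contains template directives; the path conf/app.conf comes from the note above:

{{ tpl (.Files.Get "conf/app.conf") . }}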
Image pull secrets are essentially a combination of registry, username, and password.
Automatically Roll Deployments When ConfigMaps or Secrets change
configmaps or secrets are injected as configuration
files in containers
a restart may be required should those
be updated with a subsequent helm upgrade
The sha256sum function can be used to ensure a deployment’s
annotation section is updated if another file changes
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
helm upgrade --recreate-pods
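A sketch of where that annotation sits inside a Deployment template; everything apart from the checksum/config line is assumed scaffolding:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-app
spec:
  selector:
    matchLabels:
      app: {{ .Release.Name }}-app
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-app
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
    spec:
      containers:
        - name: app
          image: nginx:1.17        # image is an assumption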
"helm.sh/resource-policy": keep
resources that should not be deleted when Helm runs a
helm delete
this resource becomes
orphaned. Helm will no longer manage it in any way.
create some reusable parts in your chart
In the templates/ directory, any file that begins with an
underscore (_) is not expected to output a Kubernetes manifest file.
by convention, helper templates and partials are placed in a
_helpers.tpl file.
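A hedged example of such a partial; the template name and labels are invented:

{{/* templates/_helpers.tpl -- this file produces no manifest of its own */}}
{{- define "mychart.labels" -}}
app: {{ .Chart.Name }}
release: {{ .Release.Name }}
{{- end -}}

Other templates could then pull it in with something like {{ include "mychart.labels" . | nindent 4 }}.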
The current best practice for composing a complex application from discrete parts
is to create a top-level umbrella chart that
exposes the global configurations, and then use the charts/ subdirectory to
embed each of the components.
SAP’s Converged charts: These charts
install SAP Converged Cloud, a full OpenStack IaaS, on Kubernetes. All of the charts are collected
together in one GitHub repository, except for a few submodules.
Deis’s Workflow:
This chart exposes the entire Deis PaaS system with one chart. But it’s different
from the SAP chart in that this umbrella chart is built from each component, and
each component is tracked in a different Git repository.
YAML is a superset of JSON
any valid JSON structure ought to be valid in YAML.
As a best practice, templates should follow a YAML-like syntax unless
the JSON syntax substantially reduces the risk of a formatting issue.
There are functions in Helm that allow you to generate random data,
cryptographic keys, and so on.
a chart repository is a location where packaged charts can be
stored and shared.
A chart repository is an HTTP server that houses an index.yaml file and
optionally some packaged charts.
Because a chart repository can be any HTTP server that can serve YAML and tar
files and can answer GET requests, you have a plethora of options when it comes
down to hosting your own chart repository.
It is not required that a chart package be located on the same server as the
index.yaml file.
A valid chart repository must have an index file. The
index file contains information about each chart in the chart repository.
The Helm project provides an open-source Helm repository server called ChartMuseum that you can host yourself.
$ helm repo index fantastic-charts --url https://fantastic-charts.storage.googleapis.com
A repository will not be added if it does not contain a valid
index.yaml
add the repository to their helm client via the helm
repo add [NAME] [URL] command with any name they would like to use to
reference the repository.
Helm has provenance tools which help chart users verify the integrity and origin
of a package.
Integrity is established by comparing a chart to a provenance record
The provenance file contains a chart’s YAML file plus several pieces of
verification information
Chart repositories serve as a centralized collection of Helm charts.
Chart repositories must make it possible to serve provenance files over HTTP via
a specific request, and must make them available at the same URI path as the chart.
We don’t want to be “the certificate authority” for all chart
signers. Instead, we strongly favor a decentralized model, which is part
of the reason we chose OpenPGP as our foundational technology.
The Keybase platform provides a public
centralized repository for trust information.
A chart contains a number of Kubernetes resources and components that work together.
A test in a helm chart lives under the templates/ directory and is a pod definition that specifies a container with a given command to run.
The pod definition must contain one of the helm test hook annotations: helm.sh/hook: test-success or helm.sh/hook: test-failure
helm test
nest your test suite under a tests/ directory like <chart-name>/templates/tests/
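A minimal test pod sketch; the chart, container, and service names are placeholders:

# <chart-name>/templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-test-connection"
  annotations:
    "helm.sh/hook": test-success     # helm test expects this pod to exit with code 0
spec:
  restartPolicy: Never
  containers:
    - name: wget
      image: busybox
      command: ["wget"]
      args: ["{{ .Release.Name }}-my-service:80"]   # assumed service name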
The persistent data stored in the backend belongs to a workspace.
Certain backends support multiple named workspaces, allowing multiple states
to be associated with a single configuration.
Terraform starts with a single workspace named "default". This
workspace is special both because it is the default and also because
it cannot ever be deleted.
Within your Terraform configuration, you may include the name of the current
workspace using the ${terraform.workspace} interpolation sequence.
changing behavior based
on the workspace.
Named workspaces allow conveniently switching between multiple instances of
a single configuration within its single backend.
A common use for multiple workspaces is to create a parallel, distinct copy of
a set of infrastructure in order to test a set of changes before modifying the
main production infrastructure.
Non-default workspaces are often related to feature branches in version control.
Workspaces alone
are not a suitable tool for system decomposition, because each subsystem should
have its own separate configuration and backend, and will thus have its own
distinct set of workspaces.
In particular, organizations commonly want to create a strong separation
between multiple deployments of the same infrastructure serving different
development stages (e.g. staging vs. production) or different internal teams.
use one or more re-usable modules to
represent the common elements, and then represent each instance as a separate
configuration that instantiates those common elements in the context of a
different backend.
If a Terraform state for one configuration is stored in a remote backend
that is accessible to other configurations then
terraform_remote_state
can be used to directly consume its root module outputs from those other
configurations.
For server addresses, use a provider-specific resource to create a DNS
record with a predictable name and then either use that name directly or
use the dns provider to retrieve
the published addresses in other configurations.
Workspaces are technically equivalent to renaming your state file.
using a remote backend instead is recommended when there are
multiple collaborators.
GitLab flow is a clearly defined set of best practices.
It combines feature-driven development and feature branches with issue tracking.
In Git, you add files from the working copy to the staging area. After that, you commit them to your local repo.
The third step is pushing to a shared remote repository.
The biggest problem is that many long-running branches emerge that all contain part of the changes.
It is a convention to call your default branch master and to mostly branch from and merge to this.
Nowadays, most organizations practice continuous delivery, which means that your default branch can be deployed.
Continuous delivery removes the need for hotfix and release branches, including all the ceremony they introduce.
Merging everything into the master branch and frequently deploying means you minimize the amount of unreleased code, which is in line with lean and continuous delivery best practices.
GitHub flow assumes you can deploy to production every time you merge a feature branch.
You can deploy a new version by merging master into the production branch.
If you need to know what code is in production, you can just checkout the production branch to see.
Production branch
Environment branches
have an environment that is automatically updated to the master branch.
deploy the master branch to staging.
To deploy to pre-production, create a merge request from the master branch to the pre-production branch.
Go live by merging the pre-production branch into the production branch.
Release branches
work with release branches if you need to release software to the outside world.
each branch contains a minor version
After announcing a release branch, only add serious bug fixes to the branch.
merge these bug fixes into master, and then cherry-pick them into the release branch.
Merging into master and then cherry-picking into release is called an “upstream first” policy
Tools such as GitHub and Bitbucket choose the name “pull request” since the first manual action is to pull the feature branch.
Tools such as GitLab and others choose the name “merge request” since the final action is to merge the feature branch.
If you work on a feature branch for more than a few hours, it is good to share the intermediate result with the rest of the team.
the merge request automatically updates when new commits are pushed to the branch.
If the assigned person does not feel comfortable, they can request more changes or close the merge request without merging.
In GitLab, it is common to protect the long-lived branches, e.g., the master branch, so that most developers can’t modify them.
if you want to merge into a protected branch, assign your merge request to someone with maintainer permissions.
After you merge a feature branch, you should remove it from the source control software.
Having a reason for every code change helps to inform the rest of the team and to keep the scope of a feature branch small.
If there is no issue yet, create the issue
The issue title should describe the desired state of the system.
For example, the issue title “As an administrator, I want to remove users without receiving an error” is better than “Admin can’t remove users.”
create a branch for the issue from the master branch
If you open the merge request but do not assign it to anyone, it is a “Work In Progress” merge request.
Start the title of the merge request with [WIP] or WIP: to prevent it from being merged before it’s ready.
When they press the merge button, GitLab merges the code and creates a merge commit that makes this event easily visible later on.
Merge requests always create a merge commit, even when the branch could be merged without one.
This merge strategy is called “no fast-forward” in Git.
Suppose that a branch is merged but a problem occurs and the issue is reopened.
In this case, it is no problem to reuse the same branch name since the first branch was deleted when it was merged.
At any time, there is at most one branch for every issue.
It is possible that one feature branch solves more than one issue.
GitLab closes these issues when the code is merged into the default branch.
If you have an issue that spans across multiple repositories, create an issue for each repository and link all issues to a parent issue.
use an interactive rebase (rebase -i) to squash multiple commits into one or reorder them.
you should never rebase commits you have pushed to a remote server.
Rebasing creates new commits for all your changes, which can cause confusion because the same change would have multiple identifiers.
if someone has already reviewed your code, rebasing makes it hard to tell what changed since the last review.
never rebase commits authored by other people.
it is a bad idea to rebase commits that you have already pushed.
If you revert a merge commit and then change your mind, revert the revert commit to redo the merge.
Often, people avoid merge commits by just using rebase to reorder their commits after the commits on the master branch.
Using rebase prevents a merge commit when merging master into your feature branch, and it creates a neat linear history.
every time you rebase, you have to resolve similar conflicts.
Sometimes you can reuse recorded resolutions (rerere), but merging is better since you only have to resolve conflicts once.
A good way to prevent creating many merge commits is to not frequently merge master into the feature branch.
keep your feature branches short-lived.
Most feature branches should take less than one day of work.
If your feature branches often take more than a day of work, try to split your features into smaller units of work.
You could also use feature toggles to hide incomplete features so you can still merge back into master every day.
you should try to prevent merge commits, but not eliminate them.
Your codebase should be clean, but your history should represent what actually happened.
If you rebase code, the history is incorrect, and there is no way for tools to remedy this because they can’t deal with changing commit identifiers
Commit often and push frequently
You should push your feature branch frequently, even when it is not yet ready for review.
A commit message should reflect your intention, not just the contents of the commit.
each merge request must be tested before it is accepted.
test the master branch after each change.
If new commits in master cause merge conflicts with the feature branch, merge master back into the branch to make the CI server re-run the tests.
When creating a feature branch, always branch from an up-to-date master.
Do not merge from upstream again if your code can work and merge cleanly without doing so.