Helm will figure out where to install Tiller by reading your Kubernetes
configuration file (usually $HOME/.kube/config). This is the same file
that kubectl uses.
By default, when Tiller is installed, it does not have authentication enabled.
helm repo update
Without a max history set, the history is kept indefinitely, leaving a large number of records for helm and tiller to maintain.
helm init --upgrade
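These two notes combine naturally; --history-max caps how many release revisions Tiller keeps (the limit of 200 below is illustrative):

```shell
# Upgrade Tiller and cap the number of release revisions kept.
# Without --history-max, release records accumulate indefinitely.
helm init --upgrade --history-max 200
```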
Whenever you install a chart, a new release is created. One chart can be installed multiple times into the same cluster, and each release can be independently managed and upgraded.
The helm list function will show you a list of all deployed releases.
helm delete
helm status
You can audit a cluster’s history, and even undelete a release (with helm rollback).
Helm has two parts: the Helm client (helm) and the Helm server (Tiller).
brew install kubernetes-helm
Tiller, the server portion of Helm, typically runs inside of your
Kubernetes cluster.
It can also be run locally, and configured to talk to a remote Kubernetes cluster.
To run Tiller in an RBAC-enabled Kubernetes cluster (Role-Based Access Control, RBAC for short), create a service account for Tiller with the right roles and permissions to access resources.
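A minimal sketch of that setup, assuming the common pattern of binding the Tiller service account to the built-in cluster-admin role (tighter, namespace-scoped roles are preferable in shared clusters):

```shell
# Create a service account for Tiller, grant it permissions,
# then tell helm init to deploy Tiller under that account.
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
helm init --service-account tiller
```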
Run kubectl get pods --namespace kube-system and you should see Tiller running.
helm inspect
Helm will look for Tiller in the kube-system namespace unless
--tiller-namespace or TILLER_NAMESPACE is set.
For development, it is sometimes easier to work on Tiller locally, and
configure it to connect to a remote Kubernetes cluster.
Even when running locally, Tiller will store release configuration in ConfigMaps inside of Kubernetes.
helm version should show you both
the client and server version.
Because Tiller stores its data in Kubernetes ConfigMaps, you can safely delete and re-install Tiller without worrying about losing any data.
helm reset
The --node-selectors flag allows us to specify the node labels required
for scheduling the Tiller pod.
--override allows you to specify properties of Tiller’s
deployment manifest.
helm init --override manipulates the specified properties of the final
manifest (there is no “values” file).
The --output flag allows us skip the installation of Tiller’s deployment
manifest and simply output the deployment manifest to stdout in either
JSON or YAML format.
By default, Tiller stores release information in ConfigMaps in the namespace where it is running. If you switch from the default backend to the Secrets backend, you’ll have to do the migration on your own.
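Switching the backend is done at install time by overriding Tiller’s container command; a sketch:

```shell
# Start Tiller with the Secrets storage backend instead of ConfigMaps.
# Existing ConfigMap-based release records are NOT migrated for you.
helm init --override 'spec.template.spec.containers[0].command'='{/tiller,--storage=secret}'
```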
There is also a beta SQL storage backend that stores release information in a SQL database (only PostgreSQL has been tested so far).
Once you have the Helm Client and Tiller successfully installed, you can
move on to using Helm to manage charts.
Helm requires that kubelet have access to a copy of the socat program to proxy connections to the Tiller API.
A Release is an instance of a chart running in a Kubernetes cluster.
One chart can often be installed many times into the same cluster.
helm init --client-only
helm init --dry-run --debug
A panic in Tiller is almost always the result of a failure to negotiate with the
Kubernetes API server
Tiller and Helm have to negotiate a common version to make sure that they can safely
communicate without breaking API assumptions
helm delete --purge
Helm stores some files in $HELM_HOME, which is
located by default in ~/.helm
A Chart is a Helm package. It contains all of the resource definitions
necessary to run an application, tool, or service inside of a Kubernetes
cluster.
Think of it like the Kubernetes equivalent of a Homebrew formula, an Apt dpkg, or a Yum RPM file.
A Repository is the place where charts can be collected and shared.
Set the $HELM_HOME environment variable
Each time it is installed, a new release is created.
Helm installs charts into Kubernetes, creating a new release for
each installation. And to find new charts, you can search Helm chart
repositories.
The chart repository is named stable by default.
helm search shows you all of the available charts
helm inspect
To install a new package, use the helm install command. At its
simplest, it takes only one argument: The name of the chart.
If you want to use your own release name, simply use the
--name flag on helm install
There may be additional configuration steps you can or should take.
Helm does not wait until all of the resources are running before it
exits. Many charts require Docker images that are over 600M in size, and
may take a long time to install into the cluster.
helm status
helm inspect values stable/mariadb
You can override any of these settings in a YAML-formatted file, and then pass that file during installation.
helm install -f config.yaml stable/mariadb
--values (or -f): Specify a YAML file with overrides.
--set (and its variants --set-string and --set-file): Specify overrides on the command line.
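The two override styles side by side (release names are illustrative; mariadbUser and mariadbDatabase are values defined by stable/mariadb):

```shell
# Override chart defaults with a YAML file...
cat > config.yaml <<'EOF'
mariadbUser: user0
mariadbDatabase: user0db
EOF
helm install -f config.yaml --name my-db stable/mariadb

# ...or express the same overrides on the command line:
helm install --set mariadbUser=user0,mariadbDatabase=user0db --name my-db-2 stable/mariadb
```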
Values that have been --set can be cleared by running helm upgrade with --reset-values
specified.
Chart
designers are encouraged to consider the --set usage when designing the format
of a values.yaml file.
--set-file key=filepath is another variant of --set.
It reads the file and uses its content as a value.
This is useful to inject multi-line text into values without dealing with indentation in YAML.
You can also install from an unpacked chart directory.
When a new version of a chart is released, or when you want to change
the configuration of your release, you can use the helm upgrade
command.
Because Kubernetes charts can be large and complex, Helm tries to perform the least invasive upgrade. It will only update things that have changed since the last release.
If both are used, --set values are merged into --values with higher precedence.
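A typical upgrade, passing a new values file (release name and file name are illustrative):

```shell
# Apply a new configuration to an existing release; Helm only
# updates what changed since the last release.
helm upgrade -f panda.yaml happy-panda stable/mariadb
```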
The helm get command is a useful tool for looking at a release in the
cluster.
helm rollback
A release version is an incremental revision. Every time an install,
upgrade, or rollback happens, the revision number is incremented by 1.
helm history
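Revisions and rollback in practice (the release name is illustrative):

```shell
# List the revisions of a release, then roll back to revision 1.
helm history happy-panda
helm rollback happy-panda 1
```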
A release name cannot be re-used. But you can roll back a deleted release, and have it re-activate.
helm repo list
helm repo add
helm repo update
The Chart Development Guide explains how to develop your own
charts.
helm create
helm lint
helm package
Charts that are archived can be loaded into chart repositories.
chart repository server
Tiller can be installed into any namespace.
Limiting Tiller to only be able to install into specific namespaces and/or resource types is controlled by Kubernetes RBAC roles and rolebindings
Release names are unique PER TILLER INSTANCE
Charts should only contain resources that exist in a single namespace.
not recommended to have multiple Tillers configured to manage resources in the same namespace.
Helm supports client-side plugins. A plugin is a tool that can be accessed through the helm CLI, but which is not part of the built-in Helm codebase.
Helm plugins are add-on tools that integrate seamlessly with Helm. They provide
a way to extend the core feature set of Helm, but without requiring every new
feature to be written in Go and added to the core tool.
Helm plugins live in $(helm home)/plugins
The Helm plugin model is partially modeled on Git’s plugin model
helm is referred to as the porcelain layer, with plugins being the plumbing.
In plugin.yaml, command is the command that this plugin will execute when it is called.
Environment variables are interpolated before the plugin
is executed.
The command itself is not executed in a shell. So you can’t oneline a shell script.
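A minimal plugin.yaml, modeled on the shape used in the Helm docs (name, usage, and script path are illustrative; note that the command is not run through a shell, and environment variables like $HELM_PLUGIN_DIR are interpolated before execution):

```yaml
name: "keybase"
version: "0.1.0"
usage: "Integrate Keybase tools with Helm"
description: "This plugin provides Keybase services to Helm."
command: "$HELM_PLUGIN_DIR/keybase.sh"
```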
Helm is able to fetch Charts using HTTP/S
Variables like KUBECONFIG are set for the plugin if they are set in the
outer environment.
In Kubernetes, granting a role to an application-specific service account is a best practice to ensure that your application is operating in the scope that you have specified.
This can be used to restrict Tiller’s capabilities to install resources to certain namespaces, or to grant a Helm client access to a Tiller instance.
Service account with cluster-admin role
The cluster-admin role is created by default in a Kubernetes cluster
Deploy Tiller in a namespace, restricted to deploying resources only in that namespace
Deploy Tiller in a namespace, restricted to deploying resources in another namespace
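A sketch of the namespace-restricted scenario, binding the built-in admin ClusterRole in a namespaced RoleBinding (names are illustrative; a hand-written Role with a narrower rule set is the stricter option):

```shell
kubectl create namespace tiller-world
kubectl create serviceaccount tiller --namespace tiller-world
# A RoleBinding scopes the permissions to the tiller-world namespace only.
kubectl create rolebinding tiller-binding \
  --clusterrole=admin \
  --serviceaccount=tiller-world:tiller \
  --namespace tiller-world
helm init --service-account tiller --tiller-namespace tiller-world
```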
When running a Helm client in a pod, in order for the Helm client to talk to a Tiller instance, it will need certain privileges to be granted.
SSL Between Helm and Tiller
The Tiller authentication model uses client-side SSL certificates.
This involves creating an internal CA, and using both the cryptographic and identity functions of SSL.
Helm is a powerful and flexible package-management and operations tool for Kubernetes.
The default installation applies no security configurations. This is appropriate for a cluster that is well-secured in a private network with no data-sharing and no other users or teams.
With great power comes great responsibility.
Choose the best practices you should apply to your Helm installation.
Role-based access control, or RBAC
Tiller’s gRPC endpoint and its usage by Helm
Kubernetes employs a role-based access control (or RBAC) system (as do modern operating systems) to help mitigate the damage that can be done if credentials are misused or bugs exist.
In the default installation the gRPC endpoint that Tiller offers is available inside the cluster (not external to the cluster) without authentication configuration applied.
Tiller stores its release information in ConfigMaps. We suggest changing the default to Secrets.
release information
charts
Charts are a kind of package that not only installs containers you may or may not have validated yourself, but may also install into more than one namespace.
As with all shared software, in a controlled or shared environment you must validate all software you install yourself before you install it.
Use Helm’s provenance tools to ensure the provenance and integrity of charts.
GitLab flow is a clearly defined set of best practices.
It combines feature-driven development and feature branches with issue tracking.
In Git, you add files from the working copy to the staging area. After that, you commit them to your local repo.
The third step is pushing to a shared remote repository.
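The three steps can be sketched end to end with a local bare repository standing in for the shared remote (all paths and names are illustrative):

```shell
# Stage, commit, push -- using a bare repo as the "shared remote".
rm -rf /tmp/flow-demo && mkdir /tmp/flow-demo && cd /tmp/flow-demo
git init --bare remote.git
git init work && cd work
git config user.email "dev@example.com" && git config user.name "Dev"
git remote add origin ../remote.git

echo "hello" > feature.txt
git add feature.txt              # working copy -> staging area
git commit -m "Add feature.txt"  # staging area -> local repo
git push -u origin HEAD          # local repo -> shared remote
```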
The biggest problem is that many long-running branches emerge that all contain part of the changes.
It is a convention to call your default branch master and to mostly branch from and merge to this.
Nowadays, most organizations practice continuous delivery, which means that your default branch can be deployed.
Continuous delivery removes the need for hotfix and release branches, including all the ceremony they introduce.
Merging everything into the master branch and frequently deploying means you minimize the amount of unreleased code, which is in line with lean and continuous delivery best practices.
GitHub flow assumes you can deploy to production every time you merge a feature branch.
You can deploy a new version by merging master into the production branch.
If you need to know what code is in production, you can just checkout the production branch to see.
Production branch
Environment branches
With environment branches, you have an environment that is automatically updated to the master branch: deploy the master branch to staging. To deploy to pre-production, create a merge request from the master branch to the pre-production branch. Go live by merging the pre-production branch into the production branch.
Release branches
work with release branches if you need to release software to the outside world.
Each release branch contains a minor version.
After announcing a release branch, only add serious bug fixes to the branch.
First merge these bug fixes into master, and then cherry-pick them into the release branch.
Merging into master and then cherry-picking into release is called an “upstream first” policy
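“Upstream first” in commands (branch names and messages are illustrative):

```shell
rm -rf /tmp/upstream-demo && git init /tmp/upstream-demo && cd /tmp/upstream-demo
git config user.email "dev@example.com" && git config user.name "Dev"
echo "2.3.0" > VERSION && git add VERSION && git commit -m "Release 2.3"
git branch 2-3-stable                  # announce the release branch

# A serious bug fix lands on master (or main) first...
echo "fix" > fix.txt && git add fix.txt && git commit -m "Fix crash on startup"
fix_sha=$(git rev-parse HEAD)

# ...and is then cherry-picked into the release branch.
git checkout 2-3-stable
git cherry-pick "$fix_sha"
```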
Tools such as GitHub and Bitbucket choose the name “pull request” since the first manual action is to pull the feature branch.
Tools such as GitLab and others choose the name “merge request” since the final action is to merge the feature branch.
If you work on a feature branch for more than a few hours, it is good to share the intermediate result with the rest of the team.
the merge request automatically updates when new commits are pushed to the branch.
If the assigned person does not feel comfortable, they can request more changes or close the merge request without merging.
In GitLab, it is common to protect the long-lived branches, e.g., the master branch, so that most developers can’t modify them.
if you want to merge into a protected branch, assign your merge request to someone with maintainer permissions.
After you merge a feature branch, you should remove it from the source control software.
Having a reason for every code change helps to inform the rest of the team and to keep the scope of a feature branch small.
If there is no issue yet, create the issue
The issue title should describe the desired state of the system.
For example, the issue title “As an administrator, I want to remove users without receiving an error” is better than “Admin can’t remove users.”
create a branch for the issue from the master branch
If you open the merge request but do not assign it to anyone, it is a “Work In Progress” merge request.
Start the title of the merge request with [WIP] or WIP: to prevent it from being merged before it’s ready.
When they press the merge button, GitLab merges the code and creates a merge commit that makes this event easily visible later on.
Merge requests always create a merge commit, even when the branch could be merged without one.
This merge strategy is called “no fast-forward” in Git.
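The same strategy can be applied manually with --no-ff, which forces a merge commit even when a fast-forward would be possible (repo path and branch name are illustrative):

```shell
rm -rf /tmp/noff-demo && git init /tmp/noff-demo && cd /tmp/noff-demo
git config user.email "dev@example.com" && git config user.name "Dev"
echo base > base.txt && git add base.txt && git commit -m "Initial commit"

git checkout -b feature/add-greeting
echo hello > greeting.txt && git add greeting.txt && git commit -m "Add greeting"

git checkout -                          # back to the default branch
# Without --no-ff this merge would fast-forward and leave no trace
# of the merge event in the history.
git merge --no-ff --no-edit feature/add-greeting
```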
Suppose that a branch is merged but a problem occurs and the issue is reopened.
In this case, it is no problem to reuse the same branch name since the first branch was deleted when it was merged.
At any time, there is at most one branch for every issue.
It is possible that one feature branch solves more than one issue.
GitLab closes these issues when the code is merged into the default branch.
If you have an issue that spans across multiple repositories, create an issue for each repository and link all issues to a parent issue.
use an interactive rebase (rebase -i) to squash multiple commits into one or reorder them.
you should never rebase commits you have pushed to a remote server.
Rebasing creates new commits for all your changes, which can cause confusion because the same change would have multiple identifiers.
if someone has already reviewed your code, rebasing makes it hard to tell what changed since the last review.
never rebase commits authored by other people.
it is a bad idea to rebase commits that you have already pushed.
If you revert a merge commit and then change your mind, revert the revert commit to redo the merge.
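A sketch of reverting a merge and then redoing it by reverting the revert (paths and names are illustrative):

```shell
rm -rf /tmp/revert-demo && git init /tmp/revert-demo && cd /tmp/revert-demo
git config user.email "dev@example.com" && git config user.name "Dev"
echo base > base.txt && git add base.txt && git commit -m "Initial commit"
git checkout -b feature
echo f > f.txt && git add f.txt && git commit -m "Add f"
git checkout - && git merge --no-ff --no-edit feature

# Un-merge: -m 1 keeps the first-parent (mainline) side of the merge.
git revert -m 1 --no-edit HEAD
revert_sha=$(git rev-parse HEAD)

# Changed your mind: revert the revert commit to redo the merge.
git revert --no-edit "$revert_sha"
```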
Often, people avoid merge commits by just using rebase to reorder their commits after the commits on the master branch.
Using rebase prevents a merge commit when merging master into your feature branch, and it creates a neat linear history.
every time you rebase, you have to resolve similar conflicts.
Sometimes you can reuse recorded resolutions (rerere), but merging is better since you only have to resolve conflicts once.
A good way to prevent creating many merge commits is to not frequently merge master into the feature branch.
keep your feature branches short-lived.
Most feature branches should take less than one day of work.
If your feature branches often take more than a day of work, try to split your features into smaller units of work.
You could also use feature toggles to hide incomplete features so you can still merge back into master every day.
you should try to prevent merge commits, but not eliminate them.
Your codebase should be clean, but your history should represent what actually happened.
If you rebase code, the history is incorrect, and there is no way for tools to remedy this because they can’t deal with changing commit identifiers
Commit often and push frequently
You should push your feature branch frequently, even when it is not yet ready for review.
A commit message should reflect your intention, not just the contents of the commit.
each merge request must be tested before it is accepted.
test the master branch after each change.
If new commits in master cause merge conflicts with the feature branch, merge master back into the branch to make the CI server re-run the tests.
When creating a feature branch, always branch from an up-to-date master.
Do not merge from upstream again if your code can work and merge cleanly without doing so.
"MAME originally stood for Multiple Arcade Machine Emulator.
MAME's purpose is to preserve decades of software history. As electronic technology continues to rush forward, MAME prevents this important "vintage" software from being lost and forgotten. This is achieved by documenting the hardware and how it functions. The source code to MAME serves as this documentation. The fact that the software is usable serves primarily to validate the accuracy of the documentation (how else can you prove that you have recreated the hardware faithfully?). Over time, MAME absorbed the sister-project MESS (Multi Emulator Super System), so MAME now documents a wide variety of (mostly vintage) computers, video game consoles and calculators, in addition to the arcade video games that were its initial focus."
You can have your deployment script create a tag on each deployment.
Commits only flow downstream, which ensures that everything is tested in all environments.
The issue branch is the place for any work related to this change.
A merge request is an online place to discuss the change and review the code.
To automatically close linked issues, mention them with the words “fixes” or “closes,” for example, “fixes #14” or “closes #67.”
You should try to avoid merge commits in feature branches, but when you do merge manually, always use the “no fast-forward” (--no-ff) strategy.
The only reasons to merge master into a feature branch are utilizing new code, resolving merge conflicts, and updating long-running branches. If you only need one code change, consider just cherry-picking a commit. If your feature branch has a merge conflict, creating a merge commit is a standard way of solving this.
Splitting up work into individual commits provides context for developers looking at your code later.
Testing before merging
When using GitLab flow, developers create their branches from the master branch, so it is essential that it never breaks.
A git rebase copies the commits from the current branch, and puts these copied commits on top of the specified branch.
The branch that we're rebasing always has the latest changes that we want to keep!
A git rebase changes the history of the project as new hashes are created for the copied commits!
Rebasing is great whenever you're working on a feature branch, and the master branch has been updated.
An interactive rebase can also be useful on the branch you're currently working on, and want to modify some commits.
A git reset gets rid of all the currently staged files and gives us control over where HEAD should point to.
A soft reset moves HEAD to the specified commit (or the index of the commit compared to HEAD)
A hard reset tells Git it should reset its state back to where it was on the specified commit: this even includes discarding the changes in your working directory and staged files!
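The difference between a soft and a hard reset, in a throwaway repo (paths are illustrative):

```shell
rm -rf /tmp/reset-demo && git init /tmp/reset-demo && cd /tmp/reset-demo
git config user.email "dev@example.com" && git config user.name "Dev"
echo one > a.txt && git add a.txt && git commit -m "First"
echo two >> a.txt && git add a.txt && git commit -m "Second"

# Soft reset: HEAD moves back one commit, but the changes from
# "Second" remain staged in the index and in the working directory.
git reset --soft HEAD~1

# Hard reset: the index and working directory are reset as well,
# so the staged changes from "Second" are discarded.
git reset --hard HEAD
```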
By reverting a certain commit, we create a new commit that contains the reverted changes!
Performing a git revert is very useful in order to undo a certain commit, without modifying the history of the branch.
By cherry-picking a commit, we create a new commit on our active branch that contains the changes that were introduced by the cherry-picked commit.
A git fetch simply downloads new data.
A git pull is actually two commands in one: a git fetch, and a git merge
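The equivalence can be seen with two clones of the same remote (paths are illustrative):

```shell
rm -rf /tmp/pull-demo && mkdir /tmp/pull-demo && cd /tmp/pull-demo
git init --bare remote.git
git clone remote.git a && cd a
git config user.email "dev@example.com" && git config user.name "Dev"
echo one > f.txt && git add f.txt && git commit -m "First"
git push -u origin HEAD
cd .. && git clone remote.git b        # second clone, about to fall behind
cd a && echo two >> f.txt && git commit -am "Second" && git push
cd ../b
branch=$(git symbolic-ref --short HEAD)
git fetch origin                 # step 1: download the new data only
git merge "origin/$branch"       # step 2: integrate it (together: git pull)
```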
git reflog is a very useful command in order to show a log of all the actions that have been taken
In my opinion, understanding how a technology works under the hood is the best way to achieve learning speed and to build confidence that you are using the tool in the correct way.
The top-level layer may be read by a union-ing file system (AUFS on my docker implementation) to present a single cohesive view of all the changes as one read-only file system
Please keep in mind that --amend actually will create a new commit which replaces the previous one, so don’t use it for modifying commits which already have been pushed to a central repository.
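A quick sketch of --amend replacing the tip commit (repo path and messages are illustrative):

```shell
rm -rf /tmp/amend-demo && git init /tmp/amend-demo && cd /tmp/amend-demo
git config user.email "dev@example.com" && git config user.name "Dev"
echo one > a.txt && git add a.txt && git commit -m "Add a.tx"   # typo in message

# Fix the message and sneak in a forgotten file. This REPLACES the
# previous commit with a new one (new hash) -- hence: never amend
# commits that have already been pushed.
echo two > b.txt && git add b.txt
git commit --amend -m "Add a.txt and b.txt"
```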
git rebase --interactive
Just pick the commit(s) you want to update, change pick to reword (or r for short), and you will be taken to a new view where you can edit the message.
you can completely remove commits by deleting them from the list, as well as edit, reorder, and squash them.
Squashing allows you to merge several commits into one
In case you don’t want to create additional revert commits but only apply the necessary changes to your working tree, you can use the --no-commit/-n option.
reuse recorded resolution
Unfortunately it turns out that one of the branches isn’t quite there yet, so you decide to un-merge it again. Several days (or weeks) later when the branch is finally ready you merge it again, but thanks to the recorded resolutions, you won’t have to resolve the same merge conflicts again.
You can also define global hooks to use in all your projects by creating a template directory that git will use when initializing a new repository
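A sketch using git's --template flag (the global variant sets init.templateDir in your global config instead; paths and the hook body are illustrative):

```shell
# Build a template directory containing a pre-commit hook.
rm -rf /tmp/git-template /tmp/tmpl-demo
mkdir -p /tmp/git-template/hooks
cat > /tmp/git-template/hooks/pre-commit <<'EOF'
#!/bin/sh
echo "running project checks"
EOF
chmod +x /tmp/git-template/hooks/pre-commit

# Every repository initialized from this template gets the hook.
git init --template=/tmp/git-template /tmp/tmpl-demo
```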
removing sensitive data
Keep in mind that this will rewrite your project’s entire history, which can be very disruptive in a distributed workflow.