A chart is a collection of files
that describe a related set of Kubernetes resources.
A single chart
might be used to deploy something simple, like a memcached pod, or
something complex, like a full web app stack with HTTP servers,
databases, caches, and so on.
Charts are created as files laid out in a particular directory tree,
then they can be packaged into versioned archives to be deployed.
A chart is organized as a collection of files inside of a directory, for example:

    values.yaml   # The default configuration values for this chart
    charts/       # A directory containing any charts upon which this chart depends
    templates/    # A directory of templates that, when combined with values,
                  # will generate valid Kubernetes manifest files
The Chart.yaml file contains fields such as:

    apiVersion: The chart API version, always "v1" (required)
    version: A SemVer 2 version (required)

Every chart must have a version number, and it must follow the
SemVer 2 standard; non-SemVer names are explicitly
disallowed by the system.
When generating a
package, the helm package command will use the version that it finds
in the Chart.yaml as a token in the package name.
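For example, a chart named nginx at version 1.2.3 (both hypothetical) would be packaged roughly like this:

    $ helm package nginx
    Successfully packaged chart and saved it to: ./nginx-1.2.3.tgz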
The appVersion field is not related to the version field; it is
a way of specifying the version of the application:

    appVersion: The version of the app that this contains (optional). This needn't be SemVer.
A chart can also be marked as deprecated:

    deprecated: Whether this chart is deprecated (optional, boolean)

If the latest version of a chart in the repository is marked as deprecated, then the chart as a whole is considered to be deprecated.
One chart may depend on any number of other charts.
Dependencies can be dynamically linked through the requirements.yaml
file or brought in to the charts/ directory and managed manually.
The preferred method of declaring dependencies is by using a
requirements.yaml file inside of your chart.
A requirements.yaml file is a simple file for listing your
dependencies.
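A minimal sketch, with hypothetical chart names and repository URLs:

    dependencies:
      - name: apache
        version: 1.2.3
        repository: http://example.com/charts
      - name: mysql
        version: 3.2.1
        repository: http://another.example.com/charts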
The repository field is the full URL to the chart repository.
Note that you must also use helm repo add to add that repo locally.
Once you have a dependencies file, you can run helm dependency update
and it will use your dependency file to download all the specified
charts into your charts/ directory for you.
When helm dependency update retrieves charts, it will store them as
chart archives in the charts/ directory.
Managing charts with requirements.yaml is a good way to easily keep
charts updated, and also share requirements information throughout a
team.
All charts are loaded by default.
The condition field holds one or more YAML paths (delimited by commas).
If this path exists in the top parent’s values and resolves to a boolean value,
the chart will be enabled or disabled based on that boolean value.
The tags field is a YAML list of labels to associate with this chart.
All charts with tags can be enabled or disabled by
specifying the tag and a boolean value.
The --set parameter can be used as usual to alter tag and condition values.
Conditions (when set in values) always override tags.
The first condition path that exists wins and subsequent ones for that chart are ignored.
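A sketch of both mechanisms, using a hypothetical subchart name and repository:

    # parentchart/requirements.yaml
    dependencies:
      - name: subchart1
        repository: http://localhost:10191
        version: 0.1.0
        condition: subchart1.enabled
        tags:
          - front-end

    # parentchart/values.yaml
    subchart1:
      enabled: true      # condition path; overrides the tag below
    tags:
      front-end: false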
The keys containing the values to be imported can be specified in the parent chart’s requirements.yaml file
using a YAML list. Each item in the list is a key which is imported from the child chart’s exports field.
By specifying the key data in our import list, Helm looks in the exports field of the child
chart for the data key and imports its contents.
Note that the parent key data is not contained in the parent's final values. If you need to specify the
parent key, use the 'child-parent' format.
To access values that are not contained in the exports key of the child chart’s values, you will need to
specify the source key of the values to be imported (child) and the destination path in the parent chart’s
values (parent).
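A sketch of both forms in a parent chart's requirements.yaml (names hypothetical):

    dependencies:
      - name: subchart
        repository: http://localhost:10191
        version: 0.1.0
        import-values:
          - data                   # imported from the child's exports.data
          - child: default.data    # child-parent format: source key in the child's values
            parent: myimports      # destination path in the parent's values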
To drop a dependency into your charts/ directory, use the
helm fetch command
A dependency can be either a chart archive (foo-1.2.3.tgz) or an
unpacked chart directory.
Note that a file in charts/ cannot start with _ or .; such files are ignored by the chart loader.
When Helm installs or upgrades a chart, a single release is created with all the objects for the chart and its dependencies.
Helm Chart templates are written in the
Go template language, with the
addition of 50 or so add-on template
functions from the Sprig library and a
few other specialized functions. When
Helm renders the charts, it will pass every file in the templates/ directory
through the template engine.
Chart developers may supply a file called values.yaml inside of a
chart. This file can contain default values.
Chart users may supply a YAML file that contains values. This can be
provided on the command line with helm install.
When a user supplies custom values, these values will override the
values in the chart’s values.yaml file.
Template files follow the standard conventions for writing Go templates
{{default "minio" .Values.storage}}
Values that are supplied via a values.yaml file (or via the --set
flag) are accessible from the .Values object in a template.
A handful of values are pre-defined, are available to every template, and
cannot be overridden. As with all values, the names are case
sensitive.
    Release.Name: The name of the release (not the chart)
    Release.IsUpgrade: This is set to true if the current operation is an upgrade or rollback
    Release.Revision: The revision number. It begins at 1, and increments with each helm upgrade
    Chart: The contents of the Chart.yaml
    Files: A map-like object containing all non-special files in the chart

Files can be accessed using {{index .Files "file.name"}} or using the {{.Files.Get name}} or
{{.Files.GetString name}} functions. Files excluded via .helmignore are not accessible.
You can also access the contents of a file
as []byte using {{.Files.GetBytes}}.
Any unknown Chart.yaml fields will be dropped, so Chart.yaml cannot be
used to pass arbitrarily structured data into the template.
A values file is formatted in YAML. A chart may include a default
values.yaml file, and values supplied by the user are merged into this default
values file. The default values file included inside of a chart must be named
values.yaml, and its contents are accessible inside of templates using the
.Values object.
Values files can declare values for the top-level chart, as well as for
any of the charts that are included in that chart’s charts/ directory.
Charts at a higher level have access to all of the variables defined
beneath them, but lower-level charts cannot access things in
parent charts. Values are namespaced, but namespaces are pruned:
for a subchart, the scope of the values is reduced and the
namespace prefix removed.
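For example, a parent chart containing a mysql subchart might have values like these (names hypothetical):

    title: "My WordPress Site"    # the parent sees this as .Values.title

    mysql:
      max: 100                    # the mysql subchart sees this as .Values.max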
Helm supports a special "global" value:
a way of sharing one top-level variable with all
subcharts, which is useful for things like setting metadata properties
like labels.
If a subchart declares a global variable, that global will be passed
downward (to the subchart’s subcharts), but not upward to the parent
chart.
global variables of parent charts take precedence over the global variables from subcharts.
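A sketch of a global in the parent's values.yaml (the value is hypothetical):

    global:
      app: MyWordPress    # every chart, including subcharts, can read this
                          # as {{ .Values.global.app }}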
Use helm lint to verify that your chart follows best practices.
A chart repository is an HTTP server that houses one or more packaged
charts
Any HTTP server that can serve YAML files and tar files and can answer
GET requests can be used as a repository server.
Helm does not provide tools for uploading charts to
remote repository servers.
The only way to add a chart to $HELM_HOME/starters is to manually
copy it there.
Helm provides a hook mechanism to allow chart developers to intervene
at certain points in a release’s life cycle.
For example, you can execute a Job to back up a database before installing a new chart,
and then execute a second job after the upgrade in order to restore
data.
Hooks are declared as an annotation in the metadata section of a manifest
Hooks work like regular templates, but they have special annotations
    pre-install: Executes after templates are rendered, but before any resources are created in Kubernetes
    post-install: Executes after all resources are loaded into Kubernetes
    pre-delete: Executes on a deletion request before any resources are deleted from Kubernetes
    post-delete: Executes on a deletion request after all of the release's resources have been deleted
    pre-upgrade: Executes on an upgrade request after templates are rendered, but before any resources are loaded into Kubernetes
    post-upgrade: Executes on an upgrade after all resources have been upgraded
    pre-rollback: Executes on a rollback request after templates are rendered, but before any resources have been rolled back
    post-rollback: Executes on a rollback request after all resources have been modified
    crd-install: Adds CRD resources before any other checks are run
    test-success: Executes when running helm test and expects the pod to return successfully (return code == 0)
    test-failure: Executes when running helm test and expects the pod to fail (return code != 0)
Hooks allow you, the chart developer, an opportunity to perform
operations at strategic points in a release lifecycle.
Tiller then loads the hook with the lowest weight first (negative to positive).
After the hooks complete, Tiller returns the release name (and other data) to the client.
If the resource is a Job kind, Tiller
will wait until the job successfully runs to completion. If the job
fails, the release will fail. This is a blocking operation, so the
Helm client will pause while the Job is run.
If the hooks have hook weights (see below), they are executed in weighted order;
otherwise, ordering is not guaranteed. It is good practice to add a hook weight, and set it
to 0 if the weight is not important.
The resources that a hook creates are not tracked or managed as part of the
release; a helm delete will leave the hook resource alone. To destroy such
resources, you need to either write code to perform this operation in a pre-delete
or post-delete hook or add the "helm.sh/hook-delete-policy" annotation to the hook template file.
Hooks are just Kubernetes manifest files with special annotations in the
metadata section. One resource can implement multiple hooks, and there is
no limit to the number of different resources that may implement a given hook.
When subcharts declare hooks, those are also evaluated. There is no way
for a top-level chart to disable the hooks declared by subcharts.
Hook weights can be positive or negative numbers but must be represented as
strings; Tiller sorts the hooks for execution in ascending order.
Hook deletion policies: "before-hook-creation" specifies that Tiller should delete the previous hook before the new hook is launched.
By default, Tiller will wait for 60 seconds for a deleted hook to no longer exist in the API server before timing out.
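A sketch of a hook manifest combining these annotations (the Job name, image, and command are hypothetical):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: "{{ .Release.Name }}-post-install-job"
      annotations:
        "helm.sh/hook": post-install
        "helm.sh/hook-weight": "-5"                  # weights must be strings
        "helm.sh/hook-delete-policy": hook-succeeded
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: post-install-job
              image: alpine:3.9
              command: ["/bin/sleep", "10"]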
Custom Resource Definitions (CRDs) are a special kind in Kubernetes.
The crd-install hook is executed very early during an installation, before
the rest of the manifests are verified.
A common reason why the hook resource might already exist is that it was not deleted following use on a previous install/upgrade.
Helm uses Go templates for templating
your resource files.
Helm adds two special template functions: include and required. The include
function allows you to bring in another template, and then pass the results to other
template functions.
The required function allows you to declare a particular
values entry as required for template rendering.
If the value is empty, the template
rendering will fail with a user submitted error message.
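For example (mytpl and the who value are hypothetical names):

    value: {{ include "mytpl" . | lower | quote }}
    value: {{ required "A valid .Values.who entry required!" .Values.who }}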
When you are working with string data, you are always safer quoting the
strings than leaving them as bare words.
Quote Strings, Don't Quote Integers
When working with integers, do not quote the values. (This does not apply to
env variable values, which are expected to be strings even when they represent integers.)
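For example (value names hypothetical):

    name: {{ .Values.MyName | quote }}   # strings: quote
    port: {{ .Values.Port }}             # integers: don't quote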
To include a template, and then perform an operation
on that template's output, Helm has a special include function.
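For instance (a snippet matching the description below):

    {{ include "toYaml" $value | nindent 2 }}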
This includes a template called toYaml, passes it $value, and
then passes the output of that template to the nindent function.
Go provides a way for setting template options to control behavior
when a map is indexed with a key that's not present in the map.
The required function gives developers the ability to declare a value entry
as required for template rendering.
The tpl function allows developers to evaluate strings as templates inside a template.
For example, rendering an external configuration file:

    {{ tpl (.Files.Get "conf/app.conf") . }}
Image pull secrets are essentially a combination of registry, username, and password.
Automatically Roll Deployments When ConfigMaps or Secrets change
Config maps or secrets are often injected as configuration
files in containers, and a restart may be required should those
be updated with a subsequent helm upgrade.
The sha256sum function can be used to ensure a deployment’s
annotation section is updated if another file changes
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
Alternatively, the --recreate-pods flag of helm upgrade can be used to force pods to be recreated.
"helm.sh/resource-policy": keep
resources that should not be deleted when Helm runs a
helm delete
this resource becomes
orphaned. Helm will no longer manage it in any way.
You can create reusable parts in your chart.
In the templates/ directory, any file that begins with an
underscore (_) is not expected to output a Kubernetes manifest file.
By convention, helper templates and partials are placed in a
_helpers.tpl file.
The current best practice for composing a complex application from discrete parts
is to create a top-level umbrella chart that
exposes the global configurations, and then use the charts/ subdirectory to
embed each of the components.
SAP's Converged charts: these charts
install SAP Converged Cloud, a full OpenStack IaaS, on Kubernetes. All of the charts are collected
together in one GitHub repository, except for a few submodules.
Deis’s Workflow:
This chart exposes the entire Deis PaaS system with one chart. But it’s different
from the SAP chart in that this umbrella chart is built from each component, and
each component is tracked in a different Git repository.
YAML is a superset of JSON, so any valid JSON structure ought to be valid in YAML.
As a best practice, templates should follow a YAML-like syntax unless
the JSON syntax substantially reduces the risk of a formatting issue.
There are functions in Helm that allow you to generate random data,
cryptographic keys, and so on.
A chart repository is a location where packaged charts can be
stored and shared.
A chart repository is an HTTP server that houses an index.yaml file and
optionally some packaged charts.
Because a chart repository can be any HTTP server that can serve YAML and tar
files and can answer GET requests, you have a plethora of options when it comes
down to hosting your own chart repository.
It is not required that a chart package be located on the same server as the
index.yaml file.
A valid chart repository must have an index file. The
index file contains information about each chart in the chart repository.
The Helm project provides an open-source Helm repository server called ChartMuseum that you can host yourself.
$ helm repo index fantastic-charts --url https://fantastic-charts.storage.googleapis.com
A repository will not be added if it does not contain a valid
index.yaml. Users can then add the repository to their helm client via the helm
repo add [NAME] [URL] command, with any name they would like to use to
reference the repository.
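For example:

    $ helm repo add fantastic-charts https://fantastic-charts.storage.googleapis.com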
Helm has provenance tools which help chart users verify the integrity and origin
of a package.
Integrity is established by comparing a chart to a provenance record.
The provenance file contains a chart's YAML file plus several pieces of
verification information.
Chart repositories serve as a centralized collection of Helm charts.
Chart repositories must make it possible to serve provenance files over HTTP via
a specific request, and must make them available at the same URI path as the chart.
We don’t want to be “the certificate authority” for all chart
signers. Instead, we strongly favor a decentralized model, which is part
of the reason we chose OpenPGP as our foundational technology.
The Keybase platform provides a public
centralized repository for trust information.
A chart contains a number of Kubernetes resources and components that work together.
A test in a helm chart lives under the templates/ directory and is a pod definition that specifies a container with a given command to run.
The pod definition must contain one of the helm test hook annotations: helm.sh/hook: test-success or helm.sh/hook: test-failure
The tests are run with the helm test command.
You can nest your test suite under a tests/ directory like <chart-name>/templates/tests/.
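A minimal sketch of a test pod (the name, image, and command are hypothetical):

    apiVersion: v1
    kind: Pod
    metadata:
      name: "{{ .Release.Name }}-smoke-test"
      annotations:
        "helm.sh/hook": test-success
    spec:
      restartPolicy: Never
      containers:
        - name: smoke-test
          image: busybox
          command: ["sh", "-c", "exit 0"]   # replace with a real check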
Create an additional staging environment that closely resembles the
production one
Keep any additional configuration in YAML files under the config/ directory
and load it with Rails::Application.config_for(:yaml_file).
Use nested routes to express better the relationship between ActiveRecord
models. If you need to nest routes more than 1 level deep, then use the shallow: true option.
Use namespaced routes to group related actions.
Don't use match to define any routes unless there is a need to map multiple request types
among [:get, :post, :patch, :put, :delete] to a single action using the :via option.
Keep the controllers skinny; all the business logic
should naturally reside in the model.
Share no more than two instance variables between a controller and a view.
Prefer using a template over inline rendering.
Prefer render plain: over render text.
Prefer corresponding symbols to numeric HTTP status codes.
Name the models with meaningful (but short) names without abbreviations.
Keep your models for business logic and data-persistence only.
Group macro-style methods (has_many, validates, etc) in the beginning of
the class definition
Prefer has_many :through to has_and_belongs_to_many
Prefer self[:attribute] over read_attribute(:attribute), and
self[:attribute] = value over write_attribute(:attribute, value).
Use the new-style validations macro, e.g. validates :email, presence: true.
Keep custom validators under app/validators
Consider extracting custom validators to a shared gem
When a named scope becomes too complicated, it is preferable to make a class method
instead, which serves the same purpose of the named scope and returns an
ActiveRecord::Relation object.
Override the to_param method of the model
Use the friendly_id gem. It allows creation of human-readable URLs by
using some descriptive attribute of the model instead of its id
Use find_each to iterate over a collection of AR objects. Looping through a
collection of records from the database (using the all method, for example)
is very inefficient since it will try to instantiate all the objects at once.
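For example (Person and do_awesome_stuff are placeholder names):

    # bad - loads all records at once
    Person.all.each do |person|
      person.do_awesome_stuff
    end

    # good - loads records in batches
    Person.find_each do |person|
      person.do_awesome_stuff
    end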
Since Rails creates callbacks for dependent associations, always call
before_destroy callbacks that perform validation with prepend: true.
Define the dependent option on has_many and has_one associations.
When persisting AR objects, always use the exception-raising bang! method or
handle the method return value.
Avoid string interpolation in queries; use parameterized queries, where the
param will be properly escaped.
Consider using named placeholders instead of positional placeholders
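For example:

    # bad - param is interpolated unescaped
    Client.where("orders_count = #{params[:orders]}")

    # good - param is properly escaped
    Client.where('orders_count = ?', params[:orders])

    # good - named placeholders
    Client.where('created_at >= :start_date', start_date: params[:start_date])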
Favor the use of find over where when you need to retrieve a single record by id.
Favor the use of find_by over where and find_by_attribute.
Favor the use of where.not over writing SQL by hand.
When writing an explicit query, use heredocs with squish.
Keep the schema.rb (or structure.sql) under version control.
Use rake db:schema:load instead of rake db:migrate to initialize an empty
database
Enforce default values in the migrations themselves (with change_column_default)
instead of in the application layer; most non-trivial apps share a database with other
applications, so imposing data integrity from the Rails app alone is impossible.
For constructive migrations (adding tables or columns), use the change method
instead of up and down methods. If you have to use models in migrations, make sure
you define them so that you don't end up with broken migrations in the future.
Don't use non-reversible migration commands in the change method; drop_table, for
example, is only reversible when given the original table definition as a block,
in which case the block will be used by create_table in rollback.
Never call the model layer directly from a view
Never make complex formatting in the views, export the formatting to a method
in the view helper or the model.
When the labels of an ActiveRecord model need to be translated, use the
activerecord scope
Separate the texts used in the views from translations of ActiveRecord
attributes. Place the locale files for the models in a folder locales/models and the
texts used in the views in a folder locales/views.
Use the dot-separated keys in the controllers and models.
Reserve app/assets for custom stylesheets, javascripts, or images.
Third-party code such as jQuery or bootstrap should be placed in
vendor/assets.
Provide both HTML and plain-text view templates
config.action_mailer.raise_delivery_errors = true
Use a local SMTP server like
Mailcatcher in the development
environment
Provide default settings for the host name. The _url methods include the host name
and the _path methods don't, so use the _url methods in mailer views.
Format the from and to addresses properly, e.g. default from: 'Your Name <info@your_site.com>'.
When sending HTML emails, all styles should be inline.
Sending emails while generating the page response should be avoided: it causes
delays in loading of the page, and the request can time out if multiple emails are
sent.
Prefer String#start_with? and String#end_with? over regular expressions for
checking string prefixes and suffixes, and prefer the safe-navigation operator &. over try!.
Configure your timezone accordingly in application.rb;
config.active_record.default_timezone accepts only :utc or :local.
Don't use Time.parse; use Time.zone.parse, which respects the configured time zone.
Don't use Time.now; use Time.zone.now.
Put gems used only for development or testing in the appropriate group in the
Gemfile
Add all OS X specific gems to a darwin group in the Gemfile, and
all Linux specific gems to a linux group
Do not remove the Gemfile.lock from version control.
An application's logic is a great example of a component.
Aspects cross-cut our application: when we use some kind of persistence (e.g. a database)
or network communication (such as ZMQ sockets), our components need to know about it.
Aspect-oriented programming aims to get rid of cross-cuts by separating
aspect code from component code using injections of our aspects in certain join points
in our component code.
In most cases after and before advice are sufficient.
What does it mean to "evaluate code around" something?
In our case it means: don't run this method; take it, push it to my advice as an argument,
and evaluate that advice. You'll often see empty methods in code written in the AOP
paradigm; they exist to provide a join point. We then provide aspect code to link with
our use case, and the use case remains a pure domain object, without even knowing
it's connected with some kind of persistence and logging layer.
Aspect-oriented programming fixes the problem of polluting pure logic objects with the
technical context of our applications; we treat our glues as a configuration part, not the
logic part of our apps.
The single responsibility principle asserts that every class should have exactly one responsibility. In other words, each class should be concerned about one unique nugget of functionality
fat models are a little better than fat controllers
when every bit of functionality has been encapsulated into its own object, you find yourself repeating code a lot less.
Certain calls trigger the query immediately, and we thereby lose our Relation;
consider leaving trivial ordering out of scopes altogether.
.merge() makes it easy to use scopes from other models that have been joined into the query, reducing potential duplication.
ActiveRecord provides an easy API for doing many things with our database, but it also makes it pretty easy to do things inefficiently. The layer of abstraction hides what’s really happening.
It helps to write a query first in pure SQL, then translate it to ActiveRecord.
Databases can only do fast lookups for columns with indexes, otherwise it’s doing a sequential scan
Add an index on every id column as well as any column that is used in a where clause.
Use a Query class to encapsulate the potentially gnarly query, including any
subqueries; such a Query object returns an ActiveRecord::Relation.
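A minimal sketch of such a Query object (model and column names are hypothetical):

    class PopularPostsQuery
      def initialize(relation = Post.all)
        @relation = relation
      end

      # Returns an ActiveRecord::Relation, so callers can keep chaining.
      def call(minimum_comments: 5)
        @relation.where('comments_count >= ?', minimum_comments)
      end
    end

    PopularPostsQuery.new.call(minimum_comments: 10).order(:title)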
Following the Single Responsibility Principle, avoid ad-hoc queries outside of scopes
and Query objects; encapsulate data access into scopes and Query objects. An ad-hoc
query embedded in a controller (or view, task, etc.) is harder to test in isolation
and cannot be reused.
each action also maps to particular CRUD operations in a database
resource :photo and resources :photos creates both singular and plural routes that map to the same controller (PhotosController).
One way to avoid deep nesting (as recommended above) is to generate the collection actions scoped under the parent, so as to get a sense of the hierarchy, but to not nest the member actions.
to only build routes with the minimal amount of information to uniquely identify the resource
The shallow method of the DSL creates a scope inside of which every nesting is shallow
These concerns can be used in resources to avoid code duplication and share behavior across routes
To add a member route, just add a member block into the resource block, as in the
sketch below. You can leave out the :on option; this will create the same member
route, except that the resource id value will be available in params[:photo_id]
instead of params[:id].
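For example:

    resources :photos do
      member do
        get 'preview'
      end
    end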
Singular Resources
Use a singular resource to map /profile (rather than /profile/:id) to the show action.
Passing a String to get will expect a controller#action format. A long-standing bug
prevents form_for from working automatically with singular resources; as a workaround,
specify the URL for the form directly.
You can organize groups of controllers under a namespace. To route /articles
(without the prefix /admin) to Admin::ArticlesController, use scope module: 'admin';
to route /admin/articles to ArticlesController (without the Admin:: module prefix),
use scope '/admin'.
Nested routes allow you to capture parent-child relationships in your routing.
Nested route helpers take an instance of Magazine as the first parameter (magazine_ads_url(@magazine)).
Resources should never be nested more than 1 level deep.
Shallow nesting, enabled via the :shallow option, strikes
a balance between descriptive routes and deep nesting.
The :shallow_path option prefixes member paths with the specified parameter, as sketched below.
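For example:

    scope shallow_path: "sekret" do
      resources :articles do
        resources :comments, shallow: true
      end
    end
    # comment member paths become /sekret/comments/:id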
Routing Concerns allows you to declare common routes that can be reused inside other resources and routes
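For example:

    concern :commentable do
      resources :comments
    end

    resources :messages, concerns: :commentable
    resources :articles, concerns: :commentable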
Rails can also create paths and URLs from an array of parameters: you can use
url_for with a set of objects, and in helpers like link_to you can specify just the
object in place of the full url_for call. For other actions, insert the action name
as the first element of the array.
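For example (@magazine and @ad are hypothetical instances):

    <%= link_to 'Ad details', url_for([@magazine, @ad]) %>
    <%= link_to 'Ad details', [@magazine, @ad] %>       <%# same, shorter %>
    <%= link_to 'Edit Ad', [:edit, @magazine, @ad] %>   <%# action name first %>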
This will recognize /photos/1/preview with GET, and route to the preview action of PhotosController, with the resource id value passed in params[:id]. It will also create the preview_photo_url and preview_photo_path helpers.
If you don't need multiple member routes, you can also pass :on to a
single route, eliminating the block.
Collection Routes
This will enable Rails to recognize paths such as /photos/search with GET, and route to the search action of PhotosController. It will also create the search_photos_url and search_photos_path route helpers.
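For example:

    resources :photos do
      collection do
        get 'search'
      end
    end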
This simple routing makes it very easy to map legacy URLs to new Rails actions.
You can also add an alternate new action using the :on shortcut.
When you set up a regular route, you supply a series of symbols that Rails maps to parts of an incoming HTTP request.
:controller maps to the name of a controller in your application
:action maps to the name of an action within that controller
This route will also route the incoming request of /photos to PhotosController#index,
since :action and :id are optional parameters, denoted by parentheses. If you use a
dynamic :controller segment with namespaced controllers, use a constraint on :controller
that matches the namespace you require.
By default, dynamic segments don't accept dots, because the dot is used as a
separator for formatted routes. The params will also include any parameters from the query string.
You can define defaults in a route by supplying a hash for the :defaults option.
For example, defaults: { format: 'jpg' } will set params[:format] to "jpg".
You cannot override defaults via query parameters.
You can specify a name for any route using the :as option, e.g.
get 'exit', to: 'sessions#destroy', as: :logout, which will
create logout_path and logout_url as named helpers in your application.
Inside the show action of UsersController, params[:username] will contain the username for the user.
You should use the get, post, put, patch and delete methods to constrain a route to a
particular verb, or use the match method with the :via option to match multiple verbs at once.
Routing both GET and POST requests to a single action has security implications:
GET requests in Rails won't check for the CSRF token, so you should never write to
the database from GET requests.
Use the :constraints option to enforce a format for a dynamic segment. In
constraints you don't need to use anchors, because all routes are anchored at the beginning.
Request-Based Constraints
A request-based constraint calls a method on the Request object with the same name as
the hash key and then compares the return value with the hash value; constraint values
should match the corresponding Request object method return type.
You can reuse dynamic segments from the match in the path to redirect to.
By default, this redirection is a 301 "Moved Permanently" redirect.
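For example:

    get '/stories/:name', to: redirect('/articles/%{name}')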
You can specify what Rails should route '/' to with the root method.
Put the root route at the top of the file.
The root route only routes GET requests to the action.
You can also use root inside namespaces and scopes.
For namespaced controllers you can use the directory notation; only the directory
notation is supported.
You can use the :constraints option to specify a required format on the implicit id,
and you can specify a single constraint to apply to a number of routes by using the block form.
The more advanced constraints available for non-resourceful routes can also be used in this context.
Note that by default the :id parameter doesn't accept dots, because the dot is used as a
separator for formatted routes.
The :as option lets you override the normal naming for the named route helpers;
you can also use the :as option to prefix the named route helpers that Rails
generates for a route, which prevents name collisions between routes using a path
scope. Likewise, you can prefix routes with a named parameter: this will provide
you with URLs such as /bob/articles/1 and will allow you to reference the username
part of the path as params[:username] in controllers, helpers and views.
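For example:

    scope ':username' do
      resources :articles
    end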
The :only option tells Rails to create only the specified routes, while the :except
option lists routes that Rails should not create; generating only the routes that you
actually need can cut down on memory use and speed up the routing process. You can
also alter the path names generated by resources (e.g. via scope path_names:).
To get a complete list of the available routes, visit
http://localhost:3000/rails/info/routes in your browser or run the rake routes
command; you can restrict the listing to a single controller by setting the
CONTROLLER environment variable. Routes should be included in your testing strategy.
Communication between pods is more complicated, however, and requires a separate networking component that can transparently route traffic from a pod on one node to a pod on another.
Kubernetes supports a number of pod network plugins. For this cluster, you will use Flannel, a stable and performant option.
Passing the argument --pod-network-cidr=10.244.0.0/16 specifies the private subnet that the pod IPs will be assigned from.
kubectl apply -f descriptor.[yml|json] is the syntax for telling kubectl to create the objects described in the descriptor.[yml|json] file.
You will deploy Nginx using Deployments and Services.
A deployment is a type of Kubernetes object that ensures there's always a specified
number of pods running based on a defined template, even if a pod crashes during the
cluster's lifetime. Services are another type of Kubernetes object that expose
cluster-internal services to clients, both internal and external, for example by
load balancing requests to multiple pods. The service type used here is NodePort,
a scheme that makes the pod accessible through an arbitrary port opened on each node
of the cluster.
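A sketch of those two steps (the exact commands in a given tutorial may differ):

    $ kubectl create deployment nginx --image=nginx
    $ kubectl expose deployment nginx --port 80 --type NodePort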
Pods are ubiquitous in Kubernetes, so understanding them will facilitate your work.
It also helps to understand how controllers such as deployments work, since they are
used frequently in stateless applications for scaling and the automated healing of
unhealthy applications. Understanding the types of services and the options they have
is essential for running both stateless and stateful applications.
Grape is a REST-like API framework for Ruby, designed to run on Rack
or complement existing web application frameworks such as Rails and Sinatra by
providing a simple DSL to easily develop RESTful APIs.
Grape APIs are Rack applications that are created by subclassing Grape::API
Rails expects a subdirectory that matches the name of the Ruby module and a file name that matches the name of the class
You can mount multiple API implementations inside another one, or mount an API
on a path, which is similar to using prefix inside the mounted API itself.
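For example (APIv1 and APIv2 are hypothetical classes):

    class Twitter::API < Grape::API
      mount Twitter::APIv1
      mount Twitter::APIv2 => '/v2'   # mounted on a path
    end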
There are four strategies in which clients can reach your API's endpoints: :path,
:header, :accept_version_header and :param.

    :path - clients should pass the desired version in the URL.
    :header - clients should pass the desired version in the HTTP Accept header.
    :accept_version_header - clients should pass the desired version in the HTTP Accept-Version header.
    :param - clients should pass the desired version as a request parameter, either in the URL query string or in the request body.
You can add a description to API methods and namespaces.
Request parameters are available through the params hash object.
Parameters are automatically populated from the request body on POST and PUT;
route string parameters will have precedence.
Grape allows you to access only the parameters that have been declared by your
params block, via declared(params). By default, declared(params) includes parameters
that have nil values. All valid parameter types are supported, including type: File,
and JSON objects and arrays of objects are accepted equally.
Additionally, any class can be used as a type so long as an explicit coercion method is
supplied. As a special case, variant-member-type collections may also be declared, by
passing a Set or Array with more than one member to type.
Parameters can be nested using group or by calling requires or optional with a block.
Some parameters are only relevant if another parameter is given; Grape expresses this
relationship with the given keyword.
Parameter options can be grouped, which is useful for extracting common validations or types.
allow_blank can be combined with both requires and optional.
Parameters can be restricted to a specific set of values, or restricted to match a
specific regular expression. Never define mutually exclusive sets with any required
params: two mutually exclusive required params would never be valid.
Namespaces allow parameter definitions and apply to every method within the namespace.
You can define a route parameter as a namespace using route_param.
You can create a custom validation that uses the request to validate the attribute.
You can also rescue a Grape::Exceptions::ValidationErrors and respond with a custom
response, or turn the response into well-formatted JSON for a JSON API that separates
individual parameters and the corresponding error messages. Custom validation messages
are supported as well.
Request headers are available through the headers helper or from env in their original form
You can define requirements for your named route parameters using regular
expressions on namespace or endpoint; the route will match only if all requirements are met.
You can make code available to multiple endpoints by mixing in a module via helpers,
and you can define reusable params there as well.
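A sketch (module, params, and endpoint names are hypothetical):

    module SharedParams
      extend Grape::API::Helpers

      params :pagination do
        optional :page, type: Integer
        optional :per_page, type: Integer
      end
    end

    class API < Grape::API
      helpers SharedParams

      params do
        use :pagination
      end
      get do
        # ...
      end
    end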
Cookies can be set and read using the cookies method.
By default, Grape returns a 201 for POST requests, a 204 for DELETE requests, and a
200 status code for all other requests; use status to query and set the actual HTTP
status code.
You can abort the execution of an API method by raising errors with error!.
If you add a catch-all route, it is very crucial to define this endpoint at the very
end of your API, as it literally accepts every request.
rescue_from will rescue the exceptions listed and all their subclasses.
Grape::API provides a logger method which by default will return an instance of the Logger
class from Ruby's standard library.
Grape supports a range of ways to present your data
Grape has built-in Basic and Digest authentication (the given block
is executed in the context of the current Endpoint). Authentication
applies to the current namespace and any children, but not parents.
Blocks can be executed before or after every API call, using before, after,
before_validation and after_validation. The callbacks execute in the following order:
before, before_validation, (validations), after_validation, the API call, after.
Grape by default anchors all request paths, which means that the request URL
should match from start to end. The namespace method has a number of aliases,
including: group, resource, resources, and segment; use whichever reads best for your API.
You can test a Grape API with RSpec by making HTTP requests and examining the response.
You can also POST JSON data and specify the correct content-type.
While the *_tag helpers can certainly be used for this task, they are somewhat verbose,
as for each tag you would have to ensure the correct parameter name is used and set the
default value of the input appropriately. For the model object helpers, the first argument
is the name of an instance variable and the second is the name of a method (usually an
attribute) to call on that object. You must pass the name of an instance variable,
i.e. :person or "person", not an actual instance of your model object.
In both the required_version and required_providers settings, each override
constraint entirely replaces the constraints for the same component in the
original block.
If the base block and the override block both set
required_version, then the constraints in the base block are entirely ignored.
Terraform normally loads all of the .tf and .tf.json files within a
directory and expects each one to define a distinct set of configuration
objects.
If two files attempt to define the same object, Terraform returns
an error.
For example, a human-edited configuration file in the Terraform language native syntax
could be partially overridden using a programmatically-generated file
in JSON syntax.
Terraform has special handling of any configuration
file whose name ends in _override.tf or _override.tf.json
Terraform initially skips these override files when loading configuration,
and then afterwards processes each one in turn (in lexicographical order).
For each top-level block in an override file, Terraform merges the
override block contents into the existing object.
Over-use of override files
hurts readability, since a reader looking only at the original files cannot
easily see that some portions of those files have been overridden without
consulting all of the override files that are present.
When using override
files, use comments in the original files to warn future readers about which
override files apply changes to each block.
A top-level block in an override file merges with a block in a normal
configuration file that has the same block header.
Within a top-level block, an attribute argument within an override block
replaces any argument of the same name in the original block.
Within a top-level block, any nested blocks within an override block replace
all blocks of the same type in the original block.
The contents of nested configuration blocks are not merged.
If more than one override file defines the same top-level block, the overriding
effect is compounded, with later blocks taking precedence over earlier blocks
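A sketch of how the merging plays out (resource and AMI ids are hypothetical): an original file and an override file such as

    # example.tf
    resource "aws_instance" "web" {
      instance_type = "t2.micro"
      ami           = "ami-408c7f28"
    }

    # example_override.tf
    resource "aws_instance" "web" {
      ami = "ami-e7527ed7"
    }

yield an effective configuration in which ami is replaced but instance_type is preserved.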
The settings within terraform blocks are considered individually when
merging.
If the required_providers argument is set, its value is merged on an
element-by-element basis, which allows an override block to adjust the
constraint for a single provider without affecting the constraints for
other providers.
"In both the required_version and required_providers settings, each override constraint entirely replaces the constraints for the same component in the original block. "
A controller tracks at least one Kubernetes resource type.
The
controller(s) for that resource are responsible for making the current
state come closer to that desired state.
In Kubernetes, a controller will send messages to the
API server that have useful side effects.
Built-in controllers manage state by
interacting with the cluster API server.
By contrast with Job, some controllers need to make changes to
things outside of your cluster.
the controller makes some change to bring about
your desired state, and then reports current state back to your cluster's API server.
Other control loops can observe that reported data and take their own actions.
As long as the controllers for your cluster are running and able to make
useful changes, it doesn't matter if the overall state is stable or not.
Kubernetes uses lots of controllers that each manage
a particular aspect of cluster state.
a particular control loop
(controller) uses one kind of resource as its desired state, and has a different
kind of resource that it manages to make that desired state happen.
There can be several controllers that create or update the same kind of object.
you can have Deployments and Jobs; these both create Pods.
The Job controller does not delete the Pods that your Deployment created,
because there is information (labels)
the controllers can use to tell those Pods apart.
Kubernetes comes with a set of built-in controllers that run inside
the kube-controller-manager.
This is the native syntax of the Terraform language, which is
a rich language designed to be relatively easy for humans to read and write.
Terraform's configuration language is based on a more general
language called HCL, and HCL's documentation usually uses the word "attribute"
instead of "argument."
A particular block type may have any number of required labels, or it may
require none. After the block type keyword and any labels, the block body is delimited
by the { and } characters.
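For example (resource type and arguments are illustrative):

    resource "aws_instance" "example" {   # block type with two labels
      ami = "abc123"                      # argument

      network_interface {                 # nested block
        # ...
      }
    }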
Identifiers can contain letters, digits, underscores (_), and hyphens (-).
The first character of an identifier must not be a digit, to avoid ambiguity
with literal numbers.
The # single-line comment style is the default comment style and should be
used in most cases. For line endings, the idiomatic style
is to use the Unix convention.
Indent two spaces for each nesting level. When multiple arguments with single-line
values appear on consecutive lines at the same nesting level, align their equals signs.
Use empty lines to separate logical groups of arguments within a block.
When both arguments and blocks appear inside a block body, use one blank line to
separate the arguments from the blocks.
"meta-arguments" (as defined by
the Terraform language semantics)
Avoid separating multiple blocks of the same type with other blocks of
a different type, unless the block types are defined by semantics to
form a family.
Resource names must start with a letter or underscore, and may
contain only letters, digits, underscores, and dashes.
Each resource is associated with a single resource type, which determines
the kind of infrastructure object it manages and what arguments and other
attributes the resource supports.
Each resource type is implemented by a provider,
which is a plugin for Terraform that offers a collection of resource types.
By convention, resource type names start with their
provider's preferred local name.
Most publicly available providers are distributed on the
Terraform Registry, which also
hosts their documentation.
The Terraform language defines several meta-arguments, which can be used with
any resource type to change the behavior of resources.
You can use precondition and postcondition blocks to specify assumptions and guarantees about how the resource operates.
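A sketch (Terraform v1.2+ syntax; the data source and architecture check are hypothetical):

    resource "aws_instance" "example" {
      instance_type = "t2.micro"
      ami           = data.aws_ami.example.id

      lifecycle {
        precondition {
          condition     = data.aws_ami.example.architecture == "x86_64"
          error_message = "The selected AMI must be for the x86_64 architecture."
        }
      }
    }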
Some resource types provide a special timeouts nested block argument that
allows you to customize how long certain operations are allowed to take
before being considered to have failed.
Timeouts are handled entirely by the resource type implementation in the
provider
Most
resource types do not support the timeouts block at all.
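A sketch for a resource type that does support it (durations are illustrative):

    resource "aws_db_instance" "example" {
      # ...

      timeouts {
        create = "60m"
        delete = "2h"
      }
    }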
A resource block declares that you want a particular infrastructure object
to exist with the given settings.
Destroy resources that exist in the state but no longer exist in the configuration.
Destroy and re-create resources whose arguments have changed but which cannot be updated in-place due to remote API limitations.
Expressions within a Terraform module can access
information about resources in the same module, and you can use that information
to help configure other resources. Use the <RESOURCE TYPE>.<NAME>.<ATTRIBUTE>
syntax to reference a resource attribute in an expression.
Resources often provide
read-only attributes with information obtained from the remote API; this often
includes things that can't be known until the resource is created, like the
resource's unique random ID.
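A sketch, assuming an aws_instance named web exists elsewhere in the module:

    # Configure one resource from another's exported attribute; the id of
    # aws_instance.web is only known after the instance is created.
    resource "aws_eip" "web_ip" {
      instance = aws_instance.web.id
    }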
Terraform also provides data sources,
which are a special type of resource used only for looking up information.
However, some dependencies cannot be recognized implicitly in configuration; the depends_on meta-argument exists for such hidden dependencies.
Local-only resource types exist for
generating private keys,
issuing self-signed TLS certificates,
and even generating random ids.
The behavior of local-only resources is the same as all other resources, but
their result data exists only within the Terraform state.
The count meta-argument accepts a whole number, and creates that many
instances of the resource or module.
count.index — The distinct index number (starting with 0) corresponding
to this instance.
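For example (AMI id and tags are illustrative):

    resource "aws_instance" "server" {
      count = 4                          # create four similar EC2 instances

      ami           = "ami-a1b2c3d4"
      instance_type = "t2.micro"

      tags = {
        Name = "Server ${count.index}"   # 0, 1, 2, 3
      }
    }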
The count value must be known
before Terraform performs any remote resource actions. This means count
can't refer to any resource attributes that aren't known until after a
configuration is applied.
Within nested provisioner or connection blocks, the special
self object refers to the current resource instance, not the resource block
as a whole.
Using count over a list was fragile, because the resource instances were still identified by their
index instead of the string values in the list.
each.value — The map value corresponding to this instance. (If a set was
provided, this is the same as each.key.)
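For example (resource group names and locations are illustrative):

    resource "azurerm_resource_group" "rg" {
      for_each = {
        a_group       = "eastus"
        another_group = "westus2"
      }
      name     = each.key
      location = each.value
    }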
for_each keys cannot be the result (or rely on the result of) of impure functions,
including uuid, bcrypt, or timestamp, as their evaluation is deferred during the
main evaluation step.
The value used in for_each is used
to identify the resource instance and will always be disclosed in UI output,
which is why sensitive values are not allowed.
if you would like to call keys(local.map), where
local.map is an object with sensitive values (but non-sensitive keys), you can create a
value to pass to for_each with toset([for k,v in local.map : k]).
Like count, for_each
can't refer to any resource attributes that aren't known until after a
configuration is applied (such as a unique ID generated by the remote API when
an object is created).
The for_each argument
does not implicitly convert lists or tuples to sets.
Transform a multi-level nested structure into a flat list by
using nested for expressions with the flatten function.
Instances are
identified by a map key (or set member) from the value provided to for_each
Within nested provisioner or connection blocks, the special
self object refers to the current resource instance, not the resource block
as a whole.
Conversion from list to set discards the ordering of the items in the list and
removes any duplicate elements.
The models directory is meant to hold tests for your models, the controllers
directory is meant to hold tests for your controllers, and the integration
directory is meant to hold tests that involve any number of controllers interacting.
Fixtures are a way of organizing test data; they reside in the fixtures folder
The test_helper.rb file holds the default configuration for your tests
Fixtures allow you to populate your testing database with predefined data before your tests run
Fixtures are database independent written in YAML.
one file per model.
Each fixture is given a name followed by an indented list of colon-separated key/value pairs.
Keys which resemble YAML keywords such as 'yes' and 'no' are quoted so that the YAML Parser correctly interprets them.
If you're working with associations, you can define a reference node between two different fixtures.
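For example (models and attributes hypothetical):

    # users.yml
    david:
      name: David

    # articles.yml
    first_article:
      title: Welcome
      user: david      # reference node pointing at the users fixture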
ERB allows you to embed Ruby code within templates
The YAML fixture format is pre-processed with ERB when Rails loads fixtures.
Rails by default automatically loads all fixtures from the test/fixtures folder for your models and controllers test.
Fixtures are instances of Active Record; you can access the object directly.
test_helper.rb specifies the default configuration to run our tests. This is included with all the tests, so any methods added to this file are available to all your tests.
A test is any method whose name is prefixed with test_.
An assertion is a line of code that evaluates an object (or expression) for expected results.
Run bin/rake db:test:prepare to bring the test database up to date with the current schema.
Every test contains one or more assertions. Only when all the assertions are successful will the test pass.
Tests are run with the rake test command; you can also run a particular test method
from the test case by running the test and providing the test method name.
The . (dot) above indicates a passing test. When a test fails you see an F; when a test throws an error you see an E in its place.
we first wrote a test which fails for a desired functionality, then we wrote some code which adds the functionality and finally we ensured that our test passes. This approach to software development is referred to as Test-Driven Development (TDD).
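A minimal sketch of such a test (Article and the validation are hypothetical):

    require 'test_helper'

    class ArticleTest < ActiveSupport::TestCase
      test "should not save article without title" do
        article = Article.new
        assert_not article.save, "Saved the article without a title"
      end
    end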