Database constraints and/or stored procedures make the validation mechanisms
database-dependent and can make testing and maintenance more difficult
Client-side validations can be useful, but are generally unreliable on their own. Combined with
other techniques, client-side validation can be a convenient way to provide
users with immediate feedback
It's a good idea to keep your controllers skinny;
model-level validations are the most appropriate in most circumstances.
Active Record uses the new_record? instance
method to determine whether an object is already in the database or not.
Creating and saving a new record will send an SQL INSERT operation to the
database. Updating an existing record will send an SQL UPDATE operation
instead. Validations are typically run before these commands are sent to the
database
The bang versions (e.g. save!) raise an exception if the record is invalid.
save and update return false; create just returns the object
Some methods skip validations, and will save the object to the
database regardless of its validity; they should
be used with caution. update_all is one such method.
save also has the ability to skip validations if passed validate:
false as an argument: save(validate: false)
valid? triggers your validations
and returns true if no errors were found on the object
After Active Record has performed validations, any errors found can be accessed
through the errors.messages instance method
By definition, an object is valid if this collection is empty after running
validations.
validations are not run when using new.
invalid? is simply the inverse of valid?.
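As a minimal sketch of this flow (the Person model and its name attribute are illustrative, along the lines of the Rails guides):

class Person < ApplicationRecord
  validates :name, presence: true
end

Person.create(name: "John Doe").valid?   # => true
person = Person.new                      # new does not run validations
person.valid?                            # => false; running validations populates errors
person.errors[:name]                     # => ["can't be blank"]
person.invalid?                          # => true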
To verify whether or not a particular attribute of an object is valid, you can
use errors[:attribute]. It is
only useful after validations have been run
Every time a validation fails, an error message is added to the object's
errors collection,
All of them accept the :on and :message options, which define when the
validation should be run and what message should be added to the errors
collection if it fails, respectively.
validates that a checkbox on the user interface was checked when a
form was submitted. This is typically used when the user needs to agree to your
application's terms of service
'acceptance' does not need to be recorded anywhere in your database (if you
don't have a field for it, the helper will just create a virtual attribute).
It defaults to "1" and can be easily changed.
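A short sketch (Person and the attribute names are illustrative):

class Person < ApplicationRecord
  validates :terms_of_service, acceptance: true
  # The accepted value defaults to "1"; it can be changed with the :accept option.
  validates :eula, acceptance: { accept: 'yes' }
end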
Use this helper when your model has associations with other models
and they also need to be validated. When you try to save your object, valid?
will be called upon each one of the associated objects. It will
work with all of the association types
Don't use validates_associated on both ends of your associations.
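For example (Library and Book are hypothetical models):

class Library < ApplicationRecord
  has_many :books
  validates_associated :books   # valid? is called on each associated book when a library is saved
end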
validates that your attributes have only numeric values
By
default, it will match an optional sign followed by an integral or floating
point number.
To specify that only integral numbers are allowed, set
:only_integer to true. The regular expression used for this check allows a trailing newline
character.
:greater_than
:greater_than_or_equal_to
:equal_to
:less_than
:less_than_or_equal_to
:odd - Specifies the value must be an odd number if set to true.
:even - Specifies the value must be an even number if set to true.
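A sketch of these options (Player and its attributes are illustrative):

class Player < ApplicationRecord
  validates :points, numericality: true
  validates :games_played, numericality: { only_integer: true,
                                           greater_than_or_equal_to: 0 }
end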
validates that the specified attributes are not empty. It uses the blank?
method to check if the value is either nil or a blank string
To validate associated records whose presence is required, you must
specify the :inverse_of option for the association.
If you want to be sure that
an association is present, you'll need to test
whether the associated object itself is present, and not the foreign key used
to map the association
Since false.blank? is true, if you want to
validate the presence of a boolean
field you should use an inclusion validation to
ensure the value will NOT be nil, as in the sketch below.
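A sketch of both cases (model and attribute names are assumptions):

class LineItem < ApplicationRecord
  belongs_to :order
  validates :order, presence: true   # tests the associated object, not the order_id foreign key
end

class Person < ApplicationRecord
  # presence of a boolean field: inclusion instead of presence, because false.blank? is true
  validates :boolean_field_name, inclusion: { in: [true, false] }
end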
validates that the specified attributes are absent. It uses the present? method
to check whether the value is not either nil or a blank string.
If you want to be sure that an association is absent, you'll need to test
whether the associated object itself is absent.
Since false.present? is false, if you want to validate the absence of a boolean
field you should use validates :field_name, exclusion: { in: [true, false] }.
validates that the attribute's value is unique right before the
object gets saved.
It accepts a :scope option that you can use to specify other attributes that
are used to limit the uniqueness check,
and a :case_sensitive option that you can use to define whether the
uniqueness constraint will be case sensitive or not.
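For instance (Holiday is an illustrative model):

class Holiday < ApplicationRecord
  validates :name, uniqueness: { scope: :year, case_sensitive: false,
                                 message: "should happen once per year" }
end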
There is no default error message for validates_with.
To implement the validate method, you must have a record parameter defined,
which is the record to be validated.
the validator will be initialized only once for the whole application
life cycle, and not on each validation run, so be careful about using instance
variables inside it.
validates_with passes the record to a separate class for validation;
you can also use a plain old Ruby object for custom validation logic
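A sketch of such a validator class (names follow the Rails guides' example):

class GoodnessValidator < ActiveModel::Validator
  def validate(record)
    if record.first_name == "Evil"
      record.errors.add :base, "This person is evil"
    end
  end
end

class Person < ApplicationRecord
  validates_with GoodnessValidator
end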
validates attributes against a block
The block receives the record, the attribute's name and the attribute's value.
You can do anything you like to check for valid data within the block
will let validation pass if the attribute's value is blank?, like nil or an
empty string
the :message option lets you specify the message that
will be added to the errors collection when validation fails
skips the validation when the value being validated is
nil
specify when the validation should happen
Validations can also be specified as strict and will raise
ActiveModel::StrictValidationFailed when the object is invalid
You can do that by using the :if and :unless options, which
can take a symbol, a string, a Proc or an Array.
use the :if
option when you want to specify when the validation should happen
A string condition is evaluated using eval and needs to
contain valid Ruby code.
Using a Proc object gives you the ability to write an
inline condition instead of a separate method
When you need multiple validations to use one condition, it can
be easily achieved using with_options.
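A sketch combining :if with a symbol, :unless with a Proc, and with_options (model, attribute, and method names are illustrative):

class Order < ApplicationRecord
  validates :card_number, presence: true, if: :paid_with_card?

  def paid_with_card?
    payment_type == "card"
  end
end

class Account < ApplicationRecord
  validates :password, confirmation: true,
    unless: Proc.new { |a| a.password.blank? }
end

class User < ApplicationRecord
  with_options if: :is_admin? do |admin|
    admin.validates :password, length: { minimum: 10 }
    admin.validates :email, presence: true
  end
end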
implement a validate method which takes a record as an argument
and performs the validation on it
validates_with method
implement a validate_each method which takes three
arguments: record, attribute, and value
combine standard validations with your
own custom validators.
By default such validations will run every time you call valid?
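A sketch of a custom EachValidator combined with a standard helper (the EmailValidator follows the guides' example and is illustrative):

class EmailValidator < ActiveModel::EachValidator
  def validate_each(record, attribute, value)
    unless value =~ /\A([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})\z/i
      record.errors.add(attribute, (options[:message] || "is not an email"))
    end
  end
end

class Person < ApplicationRecord
  validates :email, presence: true, email: true
end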
errors[] is used when you want to check the error messages for a specific attribute.
Returns an instance of the class ActiveModel::Errors containing all errors.
The add method lets you manually add messages that are related to particular attributes;
the []= setter can be used as well
errors[:base] is an array, you can simply add a string to it and it will be used as an error message.
use this method when you want to say that the object is invalid, no matter the values of its attributes.
clear all the messages in the errors collection
calling errors.clear upon an invalid object won't actually make it valid: the errors collection will now be empty, but the next time you call valid? or any method that tries to save this object to the database, the validations will run again.
The size method returns the total number of error messages for the object.
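A short sketch of working with the errors collection using the API described above (Person and the messages are illustrative; newer Rails versions expose a slightly different errors API):

person = Person.new
person.valid?                                          # => false, validations populate errors
person.errors.add(:name, "cannot contain the characters !@#%*()_-+=")
person.errors[:base] << "This person is invalid because ..."
person.errors[:name]                                   # messages for the name attribute only
person.errors.size                                     # total number of error messages
person.errors.clear                                    # empties the collection; valid?/save will re-run validations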
The parameter name
or symbol to be expanded may be enclosed in braces, which
are optional but serve to protect the variable to be expanded from
characters immediately following it which could be
interpreted as part of the name.
When braces are used, the matching ending brace is the first ‘}’
not escaped by a backslash or within a quoted string, and not within an
embedded arithmetic expansion, command substitution, or parameter
expansion.
${parameter}
The value of parameter is substituted. The braces are required when parameter
is a positional parameter with more than one digit, or when parameter is
followed by a character which is not to be interpreted as part of its name.
If the first character of parameter is an exclamation point (!),
and parameter is not a nameref,
it introduces a level of variable indirection.
${parameter:-word}
If parameter is unset or null, the expansion of
word is substituted. Otherwise, the value of
parameter is substituted.
${parameter:=word}
If parameter
is unset or null, the expansion of word
is assigned to parameter.
${parameter:?word}
If parameter
is null or unset, the expansion of word (or a message
to that effect if word
is not present) is written to the standard error and the shell, if it
is not interactive, exits.
${parameter:+word}
If parameter
is null or unset, nothing is substituted, otherwise the expansion of
word is substituted.
${parameter:offset}
${parameter:offset:length}
Substring expansion applied to an associative array produces undefined
results.
${parameter/pattern/string}
The pattern is expanded to produce a pattern just as in
filename expansion.
If pattern begins with ‘/’, all matches of pattern are
replaced with string.
Normally only the first match is replaced
The ‘^’ operator converts lowercase letters matching pattern
to uppercase
the ‘,’ operator converts matching uppercase letters
to lowercase.
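A few illustrative shell lines for the expansions above (variable names are arbitrary):

name="world"
echo "${name}ly"                # braces protect the name from the trailing characters
echo "${unset_var:-default}"    # prints "default" because unset_var is unset
: "${dir:=/tmp}"                # assigns /tmp to dir if it was unset or null
file="archive.tar.gz"
echo "${file:0:7}"              # substring expansion -> archive
echo "${file/.tar.gz/.zip}"     # first match of the pattern replaced -> archive.zip
echo "${name^}"                 # ^ converts the matching lowercase letter to uppercase -> World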
Do not use directories as a dependency for generated targets, ever.
Parallel make: add an explicit timestamp dependency (.done) that make can synchronize threaded calls on to avoid a race condition.
Maintain clean targets - makefiles should be able to remove all content that is generated so "make clean" will return the sandbox/directory back to a clean state.
Wrap check/unit tests with an ENABLE_TESTS conditional (see the sketch below)
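A minimal Makefile sketch of these guidelines (target, file, and script names are hypothetical; recipe lines must be tab-indented):

GEN_DIR := generated

all: $(GEN_DIR)/.done

# Depend on a .done timestamp file rather than on the generated directory itself.
$(GEN_DIR)/.done: schema.xml
	mkdir -p $(GEN_DIR)
	./generate-code --out $(GEN_DIR) schema.xml
	touch $@

ifdef ENABLE_TESTS
check: all
	./run-unit-tests
endif

# "make clean" returns the directory to a clean state.
clean:
	rm -rf $(GEN_DIR)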
{{ ... }} for Expressions to print to the template output
use a dot (.) to access attributes of a variable
the outer double-curly braces are not part of the
variable, but the print statement.
If you access variables inside tags don’t
put the braces around them.
If a variable or attribute does not exist, you will get back an undefined
value.
the default behavior is to evaluate to an empty string if
printed or iterated over, and to fail for every other operation.
This is important if an object has an item and an attribute with the same
name. Additionally, the attr() filter only looks up attributes.
Variables can be modified by filters. Filters are separated from the
variable by a pipe symbol (|) and may have optional arguments in
parentheses.
Multiple filters can be chained
Tests can be used
to test a variable against a common expression.
To perform a test, add is plus the name of the test after the variable.
to find out if a variable is defined, you can do name is defined,
which will then return true or false depending on whether name is defined
in the current template context.
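A small illustrative template; the page, user, and users variables are assumed to come from the rendering context:

<h1>{{ page.title|title }}</h1>
{% if user is defined and user.name %}
  <p>Hello {{ user.name|e }}!</p>
{% endif %}
<p>{{ users|length }} users: {{ users|map(attribute='name')|join(', ') }}</p>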
You can also strip whitespace in templates by hand. If you add a minus
sign (-) to the start or end of a block (e.g. a For tag), a
comment, or a variable expression, the whitespace before or after
that block will be removed. You must
not add whitespace between the tag and the minus sign
mark a block raw
Template inheritance
allows you to build a base “skeleton” template that contains all the common
elements of your site and defines blocks that child templates can override.
The {% extends %} tag is the key here. It tells the template engine that
this template “extends” another template.
access templates in subdirectories with a slash
can’t define multiple {% block %} tags with the same name in the
same template
To render a block multiple times, use the special
self variable and call the block with that name, e.g.
self.title(). To render the contents of the parent block, call
super()
put the name of the block after the end tag for better
readability
if the block is replaced by
a child template, a variable would appear that was not defined in the block or
passed to the context.
setting the block to “scoped” by adding the scoped
modifier to a block declaration
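A hedged inheritance sketch (file names and block contents are illustrative):

{# base.html #}
<title>{% block title %}{% endblock %} - My Webpage</title>
<div id="content">{% block content %}{% endblock %}</div>

{# child.html #}
{% extends "base.html" %}
{% block title %}Index{% endblock %}
{% block content %}
  <h1>{{ self.title() }}</h1>   {# re-render the title block #}
  {{ super() }}                 {# render the parent block's contents #}
{% endblock content %}          {# block name repeated after the end tag for readability #}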
If you have a variable that may
include any of the following chars (>, <, &, or ") you
SHOULD escape it unless the variable contains well-formed and trusted
HTML.
Jinja2 functions (macros, super, self.BLOCKNAME) always return template
data that is marked as safe.
With the default syntax, control structures appear inside
{% ... %} blocks.
the dictsort filter
loop.cycle
Unlike in Python, it’s not possible to break or continue in a loop
use loops recursively
add the recursive modifier
to the loop definition and call the loop variable with the new iterable
where you want to recurse.
The loop variable always refers to the closest (innermost) loop.
whether the value changed at all,
In its simplest form, you can use it to test if a variable is defined, not
empty and not false
Macros are comparable with functions in regular programming languages.
If a macro name starts with an underscore, it’s not exported and can’t
be imported.
In some cases it can be useful to pass a macro to another macro; for this purpose
you can use the call block and the special caller() function
a single trailing newline is stripped if present
other whitespace (spaces, tabs, newlines etc.) is returned unchanged
a block tag works in “both”
directions. That is, a block tag doesn’t just provide a placeholder to fill
- it also defines the content that fills the placeholder in the parent.
Python dicts are not ordered
caller(user)
call(user)
This is a simple dialog rendered by using a macro and
a call block.
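The dialog example referred to above can be sketched roughly as in the Jinja documentation:

{% macro render_dialog(title, class='dialog') -%}
  <div class="{{ class }}">
    <h2>{{ title }}</h2>
    <div class="contents">
      {{ caller() }}
    </div>
  </div>
{%- endmacro %}

{% call render_dialog('Hello World') %}
  This is a simple dialog rendered by using a macro and
  a call block.
{% endcall %}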
Filter sections allow you to apply regular Jinja2 filters on a block of
template data.
Assignments at
top level (outside of blocks, macros or loops) are exported from the template
like top level macros and can be imported by other templates.
using namespace
objects, which allow propagating changes across scopes
use block assignments to
capture the contents of a block into a variable name.
The extends tag can be used to extend one template from another.
Blocks are used for inheritance and act as both placeholders and replacements
at the same time.
The include statement is useful to include a template and return the
rendered contents of that file into the current namespace
Included templates have access to the variables of the active context by
default.
putting often used code into macros
imports are cached
and imported templates don’t have access to the current template variables,
just the globals by default.
Macros and variables starting with one or more underscores are private and
cannot be imported.
By default, included templates are passed the current context and imported
templates are not.
imports are often used just as a module that holds macros.
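A sketch of import versus include (file and macro names are illustrative):

{# forms.html: a module of macros #}
{% macro input(name, value='', type='text') -%}
  <input type="{{ type }}" name="{{ name }}" value="{{ value|e }}">
{%- endmacro %}

{# another template #}
{% import 'forms.html' as forms %}     {# cached; no access to the current template variables #}
<p>{{ forms.input('username') }}</p>
{% include 'footer.html' %}            {# rendered with access to the active context #}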
Integers and floating point numbers are created by just writing the
number down
Everything between two brackets is a list.
Tuples are like lists that cannot be modified (“immutable”).
A dict in Python is a structure that combines keys and values.
//
Divide two numbers and return the truncated integer result
The special constants true, false, and none are indeed lowercase
all Jinja identifiers are lowercase
(expr)
group an expression.
The is and in operators support negation using an infix notation
in
Perform a sequence / mapping containment test.
|
Applies a filter.
~
Converts all operands into strings and concatenates them.
use inline if expressions.
With the attr() filter, an attribute is always returned and items are not
looked up.
default(value, default_value=u'', boolean=False)
If the value is undefined it will return the passed default value,
otherwise the value of the variable.
dictsort(value, case_sensitive=False, by='key', reverse=False)
Sort a dict and yield (key, value) pairs.
format(value, *args, **kwargs)
Apply python string formatting on an object.
groupby(value, attribute)
Group a sequence of objects by a common attribute.
The value they are grouped by is stored in the grouper
attribute and the list contains all the objects that have this grouper
in common.
indent(s, width=4, first=False, blank=False, indentfirst=None)
Return a copy of the string with each line indented by 4 spaces. The
first line and blank lines are not indented by default.
join(value, d=u'', attribute=None)
Return a string which is the concatenation of the strings in the
sequence.
map()
Applies a filter on a sequence of objects or looks up an attribute.
pprint(value, verbose=False)
Pretty print a variable. Useful for debugging.
reject()
Filters a sequence of objects by applying a test to each object,
and rejecting the objects with the test succeeding.
replace(s, old, new, count=None)
Return a copy of the value with all occurrences of a substring
replaced with a new one.
round(value, precision=0, method='common')
Round the number to a given precision
even if rounded to 0 precision, a float is returned.
select()
Filters a sequence of objects by applying a test to each object,
and only selecting the objects with the test succeeding.
sort(value, reverse=False, case_sensitive=False, attribute=None)
Sort an iterable. Per default it sorts ascending; if you pass it
true as the first argument it will reverse the sorting.
striptags(value)
Strip SGML/XML tags and replace adjacent whitespace by one space.
tojson(value, indent=None)
Dumps a structure to JSON so that it's safe to use in <script>
tags.
trim(value)
Strip leading and trailing whitespace.
unique(value, case_sensitive=False, attribute=None)
Returns a list of unique items from the given iterable.
urlize(value, trim_url_limit=None, nofollow=False, target=None, rel=None)
Converts URLs in plain text into clickable links.
defined(value)
Return true if the variable is defined.
in(value, seq)
Check if value is in seq.
mapping(value)
Return true if the object is a mapping (dict etc.).
number(value)
Return true if the variable is a number.
sameas(value, other)
Check if an object points to the same memory address as another
object.
undefined(value)
Like defined() but the other way round.
A joiner is
passed a string and will return that string every time it's called, except
the first time (in which case it returns an empty string).
namespace(...)
Creates a new container that allows attribute assignment using the
{% set %} tag.
The with statement makes it possible to create a new inner scope.
Variables set within this scope are not visible outside of the scope.
activate and deactivate the autoescaping from within
the templates
With both trim_blocks and lstrip_blocks enabled, you can put block tags
on their own lines, and the entire block line will be removed when
rendered, preserving the whitespace of the contents
the abstraction of the infrastructure layer, which is now considered code. Deployment of a new application may require the deployment of new infrastructure code as well.
"big bang" deployments update whole or large parts of an application in one fell swoop.
Big bang deployments required the business to conduct extensive development and testing before release, often associated with the "waterfall model" of large sequential releases.
Rollbacks are often costly, time-consuming, or even impossible.
In a rolling deployment, an application’s new version gradually replaces the old one.
new and old versions will coexist without affecting functionality or user experience.
Each container is modified to download the latest image from the app vendor’s site.
two identical production environments work in parallel.
Once the testing results are successful, application traffic is routed from blue to green.
In a blue-green deployment, both systems use the same persistence layer or database back end.
You can use the primary database (used by blue) for write operations and the secondary (used by green) for read operations.
Blue-green deployments rely on traffic routing.
long TTL values can delay these changes.
The main challenge of canary deployment is to devise a way to route some users to the new application.
Using application logic to unlock new features for specific users and groups.
With CD, the CI-built code artifact is packaged and always ready to be deployed in one or more environments.
Use Build Automation tools to automate environment builds
Use configuration management tools
Enable automated rollbacks for deployments
An application performance monitoring (APM) tool can help your team monitor critical performance metrics including server response times after deployments.
If a resolver requests records that are already in the name
server's authoritative data or cached data, the name server
answers with that information
if the records aren't in its
database, the name server sends the query to a forwarder and waits a
short period for an answer before resuming normal operation and
contacting the remote name servers itself. What the name server is
doing differently here is sending a
recursive query to the
forwarder, expecting it to find the answer.
the ultimate challenge is to fuse the logic of the
database and data replication with the logic of having several
servers coordinated in a consistent and simple way
MySQL Group Replication provides distributed state machine
replication with strong coordination between servers.
Servers
coordinate themselves automatically when they are part of the same
group
The group can operate in a single-primary mode with automatic
primary election, where only one server accepts updates at a time.
For a transaction to commit, the majority of the group have to agree
on the order of a given transaction in the global sequence of
transactions
Deciding to commit or abort a transaction is done by
each server individually, but all servers make the same decision
The group communication protocols are based on an implementation of
the Paxos algorithm, which acts as the group communication systems
engine.
If you use AWS, you have two load-balancing options: ELB and ALB.
An ELB is a software-based load balancer which can be set up and configured in front of a collection of AWS Elastic Compute (EC2) instances.
The load balancer serves as a single entry point for consumers of the EC2 instances and distributes incoming traffic across all machines available to receive requests.
the ELB also performs a vital role in improving the fault tolerance of the services which it fronts.
The Open Systems Interconnection Model, or OSI Model, is a conceptual model which is used to facilitate communications between different computing systems.
Layer 1 is the physical layer, and represents the physical medium across which the request is sent.
Layer 2 describes the data link layer
Layer 3 (the network layer)
Layer 7, which serves the application layer.
The Classic ELB operates at Layer 4. Layer 4 represents the transport layer, and is controlled by the protocol being used to transmit the request.
A network device, of which the Classic ELB is an example, reads the protocol and port of the incoming request, and then routes it to one or more backend servers.
the ALB operates at Layer 7. Layer 7 represents the application layer, and as such allows for the redirection of traffic based on the content of the request.
Whereas a request to a specific URL backed by a Classic ELB would only enable routing to a particular pool of homogeneous servers, the ALB can route based on the content of the URL, and direct to a specific subgroup of backing servers existing in a heterogeneous collection registered with the load balancer.
The Classic ELB is a simple load balancer and is easy to configure
As organizations move towards microservice architecture or adopt a container-based infrastructure, the ability to merely map a single address to a specific service becomes more complicated and harder to maintain.
the ALB manages routing based on user-defined rules, and can
route traffic to different services based on either the host or the content of the path contained within that URL.
The control plane's components make global decisions about the cluster
Control plane components can be run on any machine in the cluster.
for simplicity, set up scripts typically start all control plane components on
the same machine, and do not run user containers on this machine
The API server is the front end for the Kubernetes control plane.
kube-apiserver is designed to scale horizontally—that is, it scales by deploying more instances.
You can run several instances of kube-apiserver and balance traffic between those instances.
If your Kubernetes cluster uses etcd as its backing store, make sure you have a
backup plan
for that data.
kube-scheduler watches for newly created
Pods with no assigned
node, and selects a node for them
to run on.
Factors taken into account for scheduling decisions include:
individual and collective resource requirements, hardware/software/policy
constraints, affinity and anti-affinity specifications, data locality,
inter-workload interference, and deadlines.
each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
Node controller
Job controller
Endpoints controller
Service Account & Token controllers
The cloud controller manager lets you link your
cluster into your cloud provider's API, and separates out the components that interact
with that cloud platform from components that only interact with your cluster.
If you are running Kubernetes on your own premises, or in a learning environment inside your
own PC, the cluster does not have a cloud controller manager.
The kubelet is an agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.
The kubelet doesn't manage containers which were not created by Kubernetes.
kube-proxy is a network proxy that runs on each
node in your cluster,
implementing part of the Kubernetes
Service concept.
kube-proxy
maintains network rules on nodes. These network rules allow network
communication to your Pods from network sessions inside or outside of
your cluster.
kube-proxy uses the operating system packet filtering layer if there is one
and it's available.
Kubernetes supports several container runtimes: Docker,
containerd, CRI-O,
and any implementation of the Kubernetes CRI (Container Runtime
Interface).
Addons use Kubernetes resources (DaemonSet,
Deployment, etc)
to implement cluster features
namespaced resources
for addons belong within the kube-system namespace.
all Kubernetes clusters should have cluster DNS,
Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services.
Containers started by Kubernetes automatically include this DNS server in their DNS searches.
Container Resource Monitoring records generic time-series metrics
about containers in a central database, and provides a UI for browsing that data.
A cluster-level logging mechanism is responsible for
saving container logs to a central log store with search/browsing interface.
When InnoDB
starts, it inspects the data files and the transaction log, and performs two
steps. It applies committed transaction log entries to the data files, and it
performs an undo operation on any transactions that modified data but did not
commit.
Percona XtraBackup works by remembering the log sequence number (LSN)
when it starts, and then copying away the data files.
Percona XtraBackup runs a
background process that watches the transaction log files, and copies changes
from it.
Percona XtraBackup needs to do this continually
Percona XtraBackup needs the transaction log records for every change
to the data files since it began execution.
Percona XtraBackup uses Backup locks
where available as a lightweight alternative to FLUSH TABLES WITH READ
LOCK.
Locking is only done for MyISAM and other non-InnoDB tables
after Percona XtraBackup finishes backing up all InnoDB/XtraDB data and
logs.
xtrabackup tries to avoid backup locks and FLUSH TABLES WITH READ LOCK
when the instance contains only InnoDB tables. In this case, xtrabackup
obtains binary log coordinates from performance_schema.log_status
When backup locks are supported by the server, xtrabackup first copies
InnoDB data, runs the LOCK TABLES FOR BACKUP and then copies the MyISAM
tables.
the STDERR of xtrabackup is not written to any file. You will
have to redirect it to a file, e.g., xtrabackup OPTIONS 2> backupout.log
During the prepare phase, Percona XtraBackup performs crash recovery against
the copied data files, using the copied transaction log file. After this is
done, the database is ready to restore and use.
the tools enable you
to do operations such as streaming and incremental backups with
various combinations of copying the data files, copying the log files,
and applying the logs to the data.
To restore a backup with xtrabackup you can use the --copy-back or
--move-back options.
you may have to
change the files’ ownership to mysql before starting the database server, as
they will be owned by the user who created the backup.
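A hedged command-line sketch of the backup/prepare/restore cycle (paths are illustrative):

# Backup: copy the data files while a background thread follows the transaction log.
xtrabackup --backup --target-dir=/data/backups/full 2> backupout.log

# Prepare: perform crash recovery against the copied files so they are consistent.
xtrabackup --prepare --target-dir=/data/backups/full

# Restore into an empty datadir with the server stopped, then fix ownership.
xtrabackup --copy-back --target-dir=/data/backups/full
chown -R mysql:mysql /var/lib/mysql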
When injecting strings from the .Values
object into the template, we ought to quote these strings.
Helm has over 60 available functions. Some of them are defined by the
Go
template language itself. Most of the others
are part of the
Sprig template library
the "Helm template language" as if it is Helm-specific, it
is actually a combination of the Go template language, some extra functions,
and a variety of wrappers to expose certain objects to the templates.
Drawing on a concept from UNIX, pipelines are a tool for chaining
together a series of template commands to compactly express a series of
transformations.
the default function: default DEFAULT_VALUE GIVEN_VALUE
all static default values should live in the values.yaml,
and should not be repeated using the default command (otherwise they would be
redundant).
the default command is perfect for computed values, which
can not be declared inside values.yaml.
When lookup returns an object, it will return a dictionary.
The synopsis of the lookup function is lookup apiVersion, kind, namespace, name -> resource or resource list
When no object is found, an empty value is returned. This can be used to check
for the existence of an object.
The lookup function uses Helm's existing Kubernetes connection configuration
to query Kubernetes.
Helm is not supposed to contact the Kubernetes API Server
during a helm template or a helm install|upgrade|delete|rollback --dry-run,
so the lookup function will return an empty list (i.e. dict) in such a case.
the operators (eq, ne, lt, gt, and, or and so on) are
all implemented as functions. In pipelines, operations can be grouped with
parentheses ((, and )).
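A small template sketch of quoting, pipelines, default, and an operator used as a function (the .Values keys are assumed to exist in values.yaml):

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  drink: {{ .Values.favorite.drink | default "tea" | quote }}
  food: {{ .Values.favorite.food | upper | quote }}
  {{- if eq .Values.favorite.drink "coffee" }}
  mug: "true"
  {{- end }}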
the native syntax of the Terraform language, which is
a rich language designed to be relatively easy for humans to read and write.
Terraform's configuration language is based on a more general
language called HCL, and HCL's documentation usually uses the word "attribute"
instead of "argument."
A particular block type may have any number of required labels, or it may
require none
After the block type keyword and any labels, the block body is delimited
by the { and } characters
Identifiers can contain letters, digits, underscores (_), and hyphens (-).
The first character of an identifier must not be a digit, to avoid ambiguity
with literal numbers.
The # single-line comment style is the default comment style and should be
used in most cases.
The idiomatic style
is to use the Unix convention
When multiple arguments with single-line values appear on consecutive lines at the same nesting level, align their equals signs.
Use empty lines to separate logical groups of arguments within a block.
Use one blank line to separate the arguments from
the blocks.
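An illustrative block following these conventions (the resource type and values are placeholders):

resource "aws_instance" "example" {   # block type keyword followed by two labels
  ami           = "abc123"            # equals signs aligned, two-space indentation
  instance_type = "t2.micro"

  network_interface {
    # nested block body
  }
}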
"meta-arguments" (as defined by
the Terraform language semantics)
Avoid separating multiple blocks of the same type with other blocks of
a different type, unless the block types are defined by semantics to
form a family.
Resource names must start with a letter or underscore, and may
contain only letters, digits, underscores, and dashes.
Each resource is associated with a single resource type, which determines
the kind of infrastructure object it manages and what arguments and other
attributes the resource supports.
Each resource type is implemented by a provider,
which is a plugin for Terraform that offers a collection of resource types.
By convention, resource type names start with their
provider's preferred local name.
Most publicly available providers are distributed on the
Terraform Registry, which also
hosts their documentation.
The Terraform language defines several meta-arguments, which can be used with
any resource type to change the behavior of resources.
use precondition and postcondition blocks to specify assumptions and guarantees about how the resource operates.
Some resource types provide a special timeouts nested block argument that
allows you to customize how long certain operations are allowed to take
before being considered to have failed.
Timeouts are handled entirely by the resource type implementation in the
provider
Most
resource types do not support the timeouts block at all.
A resource block declares that you want a particular infrastructure object
to exist with the given settings.
Destroy resources that exist in the state but no longer exist in the configuration.
Destroy and re-create resources whose arguments have changed but which cannot be updated in-place due to remote API limitations.
Expressions within a Terraform module can access
information about resources in the same module, and you can use that information
to help configure other resources. Use the <RESOURCE TYPE>.<NAME>.<ATTRIBUTE>
syntax to reference a resource attribute in an expression.
resources often provide
read-only attributes with information obtained from the remote API; this often
includes things that can't be known until the resource is created, like the
resource's unique random ID.
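For example, one resource can reference another's exported attribute (the resource names here are assumptions):

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "a" {
  vpc_id     = aws_vpc.main.id      # <RESOURCE TYPE>.<NAME>.<ATTRIBUTE>, known only after creation
  cidr_block = "10.0.1.0/24"
}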
data sources,
which are a special type of resource used only for looking up information.
some dependencies cannot be recognized implicitly in configuration.
local-only resource types exist for
generating private keys,
issuing self-signed TLS certificates,
and even generating random ids.
The behavior of local-only resources is the same as all other resources, but
their result data exists only within the Terraform state.
The count meta-argument accepts a whole number, and creates that many
instances of the resource or module.
count.index — The distinct index number (starting with 0) corresponding
to this instance.
the count value must be known
before Terraform performs any remote resource actions. This means count
can't refer to any resource attributes that aren't known until after a
configuration is applied
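A count sketch; the literal value is known before apply (the resource and AMI are placeholders):

resource "aws_instance" "server" {
  count         = 4                      # creates four instances
  ami           = "abc123"
  instance_type = "t2.micro"

  tags = {
    Name = "server-${count.index}"       # distinct index, starting at 0
  }
}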
Within nested provisioner or connection blocks, the special
self object refers to the current resource instance, not the resource block
as a whole.
This was fragile, because the resource instances were still identified by their
index instead of the string values in the list.
each action also maps to particular CRUD operations in a database
resource :photo and resources :photos create both singular and plural routes that map to the same controller (PhotosController).
One way to avoid deep nesting (as recommended above) is to generate the collection actions scoped under the parent, so as to get a sense of the hierarchy, but to not nest the member actions.
to only build routes with the minimal amount of information to uniquely identify the resource
The shallow method of the DSL creates a scope inside of which every nesting is shallow
These concerns can be used in resources to avoid code duplication and share behavior across routes
add a member route, just add a member block into the resource block
You can leave out the :on option; this will create the same member route, except that the resource id value will be available in params[:photo_id] instead of params[:id].
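A config/routes.rb sketch of member routes (the preview action is illustrative):

resources :photos do
  member do
    get 'preview'              # GET /photos/1/preview, id available in params[:id]
  end
end

# Equivalent without the block, by passing :on:
resources :photos do
  get 'preview', on: :member
end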
Singular Resources
use a singular resource to map /profile (rather than /profile/:id) to the show action
Passing a String to get will expect a controller#action format
workaround
organize groups of controllers under a namespace
route /articles (without the prefix /admin) to Admin::ArticlesController
route /admin/articles to ArticlesController (without the Admin:: module prefix)
Nested routes allow you to capture this relationship in your routing.
helpers take an instance of Magazine as the first parameter (magazine_ads_url(@magazine)).
Resources should never be nested more than 1 level deep.
via the :shallow option
a balance between descriptive routes and deep nesting
:shallow_path prefixes member paths with the specified parameter
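A nesting sketch using the magazine/ads example with :shallow, as described above:

resources :magazines do
  resources :ads, shallow: true
end
# Collection routes stay nested:              /magazines/:magazine_id/ads
# Member routes drop the parent prefix:       /ads/:id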
Routing Concerns allows you to declare common routes that can be reused inside other resources and routes
Rails can also create paths and URLs from an array of parameters.
use url_for with a set of objects
In helpers like link_to, you can specify just the object in place of the full url_for call
insert the action name as the first element of the array
This will recognize /photos/1/preview with GET, and route to the preview action of PhotosController, with the resource id value passed in params[:id]. It will also create the preview_photo_url and preview_photo_path helpers.
pass :on to a
route, eliminating the block:
Collection Routes
This will enable Rails to recognize paths such as /photos/search with GET, and route to the search action of PhotosController. It will also create the search_photos_url and search_photos_path route helpers.
simple routing makes it very easy to map legacy URLs to new Rails actions
add an alternate new action using the :on shortcut
When you set up a regular route, you supply a series of symbols that Rails maps to parts of an incoming HTTP request.
:controller maps to the name of a controller in your application
:action maps to the name of an action within that controller
This route will also route the incoming request of /photos to PhotosController#index, since :action and :id are optional parameters, denoted by parentheses.
use a constraint on :controller that matches the namespace you require
dynamic segments don't accept dots
The params will also include any parameters from the query string
:defaults option.
set params[:format] to "jpg"
cannot override defaults via query parameters
specify a name for any route using the :as option
create logout_path and logout_url as named helpers in your application.
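A one-line sketch consistent with the helpers mentioned above:

get 'exit', to: 'sessions#destroy', as: :logout   # creates logout_path and logout_url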
Inside the show action of UsersController, params[:username] will contain the username for the user.
should use the get, post, put, patch and delete methods to constrain a route to a particular verb.
use the match method with the :via option to match multiple verbs at once
Routing both GET and POST requests to a single action has security implications
'GET' in Rails won't check for CSRF token. You should never write to the database from 'GET' requests
use the :constraints option to enforce a format for a dynamic segment
constraints
don't need to use anchors
Request-Based Constraints
Rails will call a method on the Request object with the same name as the hash key and then compare the return value with the hash value.
constraint values should match the corresponding Request object method return type
reuse dynamic segments from the match in the path to redirect
this redirection is a 301 "Moved Permanently" redirect.
root method
put the root route at the top of the file
The root route only routes GET requests to the action.
root inside namespaces and scopes
For namespaced controllers you can use the directory notation
Only the directory notation is supported
use the :constraints option to specify a required format on the implicit id
specify a single constraint to apply to a number of routes by using the block
non-resourceful routes
:id parameter doesn't accept dots
:as option lets you override the normal naming for the named route helpers
use the :as option to prefix the named route helpers that Rails generates for a route
prevent name collisions
prefix routes with a named parameter
This will provide you with URLs such as /bob/articles/1 and will allow you to reference the username part of the path as params[:username] in controllers, helpers and views
:only option
:except option
Generating only the routes that you actually need can cut down on memory use and speed up the routing process.
alter path names
http://localhost:3000/rails/info/routes
rake routes
setting the CONTROLLER environment variable
Routes should be included in your testing strategy
Docker builds images automatically by reading the instructions from a
Dockerfile -- a text file that contains all commands, in order, needed to
build a given image.
A Docker image consists of read-only layers each of which represents a
Dockerfile instruction.
The layers are stacked and each one is a delta of the
changes from the previous layer
When you run an image and generate a container, you add a new writable layer
(the “container layer”) on top of the underlying layers.
By “ephemeral,” we mean that the container can be stopped
and destroyed, then rebuilt and replaced with an absolute minimum set up and
configuration.
Inadvertently including files that are not necessary for building an image
results in a larger build context and larger image size.
To exclude files not relevant to the build (without restructuring your source
repository) use a .dockerignore file. This file supports exclusion patterns
similar to .gitignore files.
minimize image layers by leveraging build cache.
if your build contains several layers, you can order them from the
less frequently changed (to ensure the build cache is reusable) to the more
frequently changed
avoid
installing extra or unnecessary packages just because they might be “nice to
have.”
Each container should have only one concern.
Decoupling applications into
multiple containers makes it easier to scale horizontally and reuse containers
Limiting each container to one process is a good rule of thumb, but it is not a
hard and fast rule.
Use your best judgment to keep containers as clean and modular as possible.
do multi-stage builds
and only copy the artifacts you need into the final image. This allows you to
include tools and debug information in your intermediate build stages without
increasing the size of the final image.
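A hedged multi-stage sketch for a small Go service (base images, module layout, and binary name are assumptions):

FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app ./cmd/app

FROM alpine:3.19
COPY --from=build /bin/app /usr/local/bin/app
ENTRYPOINT ["app"]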
avoid duplication of packages and make the
list much easier to update.
When building an image, Docker steps through the instructions in your
Dockerfile, executing each in the order specified.
the next
instruction is compared against all child images derived from that base
image to see if one of them was built using the exact same instruction. If
not, the cache is invalidated.
simply comparing the instruction in the Dockerfile with one
of the child images is sufficient.
For the ADD and COPY instructions, the contents of the file(s)
in the image are examined and a checksum is calculated for each file.
If anything has changed in the file(s), such
as the contents and metadata, then the cache is invalidated.
cache checking does not look at the
files in the container to determine a cache match.
In that case just
the command string itself is used to find a match.
Whenever possible, use current official repositories as the basis for your
images.
Using RUN apt-get update && apt-get install -y ensures your Dockerfile
installs the latest package versions with no further coding or manual
intervention.
cache busting
Docker executes these commands using the /bin/sh -c interpreter, which only
evaluates the exit code of the last operation in the pipe to determine success.
set -o pipefail && to ensure that an unexpected error prevents the
build from inadvertently succeeding.
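Illustrative RUN instructions for both points (package names and the URL are placeholders; the exec form is used so that bash understands pipefail):

RUN apt-get update && apt-get install -y \
    curl \
    nginx \
 && rm -rf /var/lib/apt/lists/*

RUN ["/bin/bash", "-c", "set -o pipefail && wget -O - https://example.com/data | wc -l > /number"]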
The CMD instruction should be used to run the software contained by your
image, along with any arguments.
CMD should almost always be used in the form
of CMD [“executable”, “param1”, “param2”…]
CMD should rarely be used in the manner of CMD [“param”, “param”] in
conjunction with ENTRYPOINT
The ENV instruction is also useful for providing required environment
variables specific to services you wish to containerize,
Each ENV line creates a new intermediate layer, just like RUN commands
COPY
is preferred
COPY only
supports the basic copying of local files into the container
the best use for ADD is local tar file
auto-extraction into the image, as in ADD rootfs.tar.xz /
If you have multiple Dockerfile steps that use different files from your
context, COPY them individually, rather than all at once.
using ADD to fetch packages from remote URLs is
strongly discouraged; you should use curl or wget instead
The best use for ENTRYPOINT is to set the image’s main command, allowing that
image to be run as though it was that command (and then use CMD as the
default flags).
the image name can double as a reference to the binary as
shown in the command
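For instance, following the s3cmd example from the documentation, ENTRYPOINT provides the command and CMD the default flags:

ENTRYPOINT ["s3cmd"]
CMD ["--help"]
# docker run <image>                  -> shows the s3cmd help
# docker run <image> ls s3://bucket   -> runs that command with different flags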
The VOLUME instruction should be used to expose any database storage area,
configuration storage, or files/folders created by your docker container.
use VOLUME for any mutable and/or user-serviceable
parts of your image
If you absolutely need
functionality similar to sudo, such as initializing the daemon as root but
running it as non-root, consider using "gosu".
always use absolute paths for your
WORKDIR
An ONBUILD command executes after the current Dockerfile build completes.
Think
of the ONBUILD command as an instruction the parent Dockerfile gives
to the child Dockerfile
A Docker build executes ONBUILD commands before any command in a child
Dockerfile.
Be careful when putting ADD or COPY in ONBUILD. The “onbuild” image
fails catastrophically if the new build’s context is missing the resource being
added.
Infrastructure as code is at the heart of provisioning for cloud infrastructure marking a significant shift away from monolithic point-and-click management tools.
infrastructure as code enables operators to take a programmatic approach to provisioning.
provides a single workflow to provision and maintain infrastructure and services from all of your vendors, making it not only easier to switch providers
A Terraform Provider is responsible for understanding API interactions with, and exposing the resources from, a given Infrastructure, Platform, or SaaS offering to Terraform.
write a Terraform file that describes the Virtual Machine that you want, apply that file with Terraform and create that VM as you described without ever needing to log into the vSphere dashboard.
HashiCorp Configuration Language (HCL)
the provider credentials are passed in at the top of the script to connect to the vSphere account.
modules— a way to encapsulate infrastructure resources into a reusable format.
All strings within templates are processed by a common Packer templating
engine, where variables and functions can be used to modify the value of a
configuration parameter at runtime.
Anything template related happens within double-braces: {{ }}.
Functions are specified directly within the braces, such as
{{timestamp}}
A backing service is any service the app consumes over the network as part of its normal operation.
A deploy of the twelve-factor app should be able to swap out a local MySQL database with one managed by a third party (such as Amazon RDS) without any changes to the app’s code.
only the resource handle in the config needs to change
long-term archival. These archival destinations are not visible to or configurable by the app, and instead are completely managed by the execution environment.
Most significantly, the stream can be sent to a log indexing and analysis system such as Splunk, or a general-purpose data warehousing system such as Hadoop/Hive.