When an instance initiates an outbound flow to a destination in the public IP address space, Azure dynamically maps the private IP address to a public IP address.
After this mapping is created, return traffic for this outbound originated flow can also reach the private IP address where the flow originated.
Azure uses source network address translation (SNAT) to perform this function
When multiple private IP addresses are masquerading behind a single public IP address, Azure uses port address translation (PAT) to masquerade private IP addresses.
If you want outbound connectivity when working with Standard SKUs, you must explicitly define it, either with Standard public IP addresses or a Standard public Load Balancer.
the VM is part of a public Load Balancer backend pool. The VM does not have a public IP address assigned to it.
The Load Balancer resource must be configured with a load balancer rule to create a link between the public IP frontend with the backend pool.
VM has an Instance Level Public IP (ILPIP) assigned to it. As far as outbound connections are concerned, it doesn't matter whether the VM is load balanced or not.
When an ILPIP is used, the VM uses the ILPIP for all outbound flows.
A public IP assigned to a VM is a 1:1 relationship (rather than 1:many) and implemented as a stateless 1:1 NAT.
Port masquerading (PAT) is not used, and the VM has all ephemeral ports available for use.
When the load-balanced VM creates an outbound flow, Azure translates the private source IP address of the outbound flow to the public IP address of the public Load Balancer frontend.
Azure uses SNAT to perform this function. Azure also uses PAT to masquerade multiple private IP addresses behind a public IP address.
Ephemeral ports of the load balancer's public IP address frontend are used to distinguish individual flows originated by the VM.
When multiple public IP addresses are associated with Load Balancer Basic, any of these public IP addresses is a candidate for outbound flows, and one is selected at random.
the VM is not part of a public Load Balancer pool (and not part of an internal Standard Load Balancer pool) and does not have an ILPIP address assigned to it.
The public IP address used for this outbound flow is not configurable and does not count against the subscription's public IP resource limit.
Do not use this scenario for whitelisting IP addresses.
This public IP address does not belong to you and cannot be reserved.
Standard Load Balancer uses all candidates for outbound flows at the same time when multiple (public) IP frontends are present.
Load Balancer Basic chooses a single frontend to be used for outbound flows when multiple (public) IP frontends are candidates for outbound flows.
the disableOutboundSnat option defaults to false and signifies that this rule programs outbound SNAT for the associated VMs in the backend pool of the load balancing rule.
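A rough Azure CLI sketch of a load-balancing rule that programs outbound SNAT for the backend pool; the resource names are illustrative and the exact flag names should be checked against current az documentation:

    az network lb rule create \
      --resource-group MyResourceGroup \
      --lb-name MyStandardLB \
      --name MyHTTPRule \
      --protocol Tcp \
      --frontend-port 80 \
      --backend-port 80 \
      --frontend-ip-name MyFrontend \
      --backend-pool-name MyBackendPool \
      --disable-outbound-snat false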
Port masquerading SNAT (PAT)
Ephemeral port preallocation for port masquerading SNAT (PAT)
determine the public source IP address of an outbound connection.
each action also maps to particular CRUD operations in a database
resource :photo and resources :photos create both singular and plural routes that map to the same controller (PhotosController).
One way to avoid deep nesting (as recommended above) is to generate the collection actions scoped under the parent, so as to get a sense of the hierarchy, but to not nest the member actions.
to only build routes with the minimal amount of information to uniquely identify the resource
The shallow method of the DSL creates a scope inside of which every nesting is shallow
These concerns can be used in resources to avoid code duplication and share behavior across routes
add a member route, just add a member block into the resource block
You can leave out the :on option; this will create the same member route, except that the resource id value will be available in params[:photo_id] instead of params[:id].
Singular Resources
use a singular resource to map /profile (rather than /profile/:id) to the show action
Passing a String to get will expect a controller#action format
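Sketches of both ideas (the profile path and users controller are illustrative):

    # String passed to get is interpreted as controller#action
    get 'profile', to: 'users#show'

    # Singular resource: /profile (no :id segment) -> ProfilesController#show
    resource :profile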
workaround
organize groups of controllers under a namespace
route /articles (without the prefix /admin) to Admin::ArticlesController
route /admin/articles to ArticlesController (without the Admin:: module prefix)
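Sketches of the three variants above (the Admin module and articles resource are illustrative):

    # /admin/articles -> Admin::ArticlesController
    namespace :admin do
      resources :articles
    end

    # /articles (no /admin prefix) -> Admin::ArticlesController
    scope module: 'admin' do
      resources :articles
    end

    # /admin/articles -> ArticlesController (no Admin:: prefix)
    scope '/admin' do
      resources :articles
    end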
Nested routes allow you to capture this relationship in your routing.
helpers take an instance of Magazine as the first parameter (magazine_ads_url(@magazine)).
Resources should never be nested more than 1 level deep.
via the :shallow option
a balance between descriptive routes and deep nesting
:shallow_path prefixes member paths with the specified parameter
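A small sketch of nesting and the shallow options (the magazine/ad and article/comment resources are illustrative):

    # One level of nesting: /magazines/:magazine_id/ads/:id
    resources :magazines do
      resources :ads
    end

    # Shallow nesting: collection actions stay nested, member actions do not
    resources :articles do
      resources :comments, shallow: true
    end

    # :shallow_path prefixes the shallow member paths, e.g. /sekret/comments/:id
    scope shallow_path: 'sekret' do
      resources :articles do
        resources :comments, shallow: true
      end
    end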
Routing Concerns allows you to declare common routes that can be reused inside other resources and routes
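A sketch of a routing concern (the commentable concern and the resources that use it are illustrative):

    concern :commentable do
      resources :comments
    end

    # Equivalent to nesting resources :comments inside each block
    resources :articles, concerns: :commentable
    resources :messages, concerns: :commentable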
Rails can also create paths and URLs from an array of parameters.
use url_for with a set of objects
In helpers like link_to, you can specify just the object in place of the full url_for call
insert the action name as the first element of the array
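For example, assuming nested magazine/ad resources as above:

    url_for([@magazine, @ad])                     # e.g. /magazines/5/ads/42, ids depend on the records
    link_to 'Ad details', [@magazine, @ad]
    link_to 'Edit Ad', [:edit, @magazine, @ad]    # action name as the first array element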
This will recognize /photos/1/preview with GET, and route to the preview action of PhotosController, with the resource id value passed in params[:id]. It will also create the preview_photo_url and preview_photo_path helpers.
pass :on to a
route, eliminating the block:
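A sketch of both forms, following the photos/preview example above:

    # Block form
    resources :photos do
      member do
        get 'preview'
      end
    end

    # :on shortcut, no block needed
    resources :photos do
      get 'preview', on: :member
    end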
Collection Routes
This will enable Rails to recognize paths such as /photos/search with GET, and route to the search action of PhotosController. It will also create the search_photos_url and search_photos_path route helpers.
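A sketch of the search collection route described above:

    # /photos/search -> PhotosController#search; search_photos_path / search_photos_url helpers
    resources :photos do
      collection do
        get 'search'
      end
    end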
simple routing makes it very easy to map legacy URLs to new Rails actions
add an alternate new action using the :on shortcut
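A sketch of the :on shortcut for an extra new action (the preview action is illustrative):

    # GET /comments/new/preview -> CommentsController#preview
    resources :comments do
      get 'preview', on: :new
    end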
When you set up a regular route, you supply a series of symbols that Rails maps to parts of an incoming HTTP request.
:controller maps to the name of a controller in your application
:action maps to the name of an action within that controller
optional parameters, denoted by parentheses
This route will also route the incoming request of /photos to PhotosController#index, since :action and :id are optional.
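The wildcard form from the guide, shown as a sketch (legacy-style routing, not recommended for new code):

    get ':controller(/:action(/:id))'

    # /photos        -> PhotosController#index
    # /photos/show/1 -> PhotosController#show, params[:id] == "1"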
use a constraint on :controller that matches the namespace you require
dynamic segments don't accept dots
The params will also include any parameters from the query string
:defaults option.
set params[:format] to "jpg"
cannot override defaults via query parameters
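A sketch of the :defaults option described above:

    get 'photos/:id', to: 'photos#show', defaults: { format: 'jpg' }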
specify a name for any route using the :as option
create logout_path and logout_url as named helpers in your application.
Inside the show action of UsersController, params[:username] will contain the username for the user.
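Sketches of the :as option and a named dynamic segment (paths are illustrative):

    get 'exit', to: 'sessions#destroy', as: :logout   # creates logout_path / logout_url
    get ':username', to: 'users#show', as: :user      # params[:username] inside the action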
should use the get, post, put, patch and delete methods to constrain a route to a particular verb.
use the match method with the :via option to match multiple verbs at once
Routing both GET and POST requests to a single action has security implications
'GET' in Rails won't check for CSRF token. You should never write to the database from 'GET' requests
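For reference, the verb-constrained forms look roughly like this:

    get 'photos/:id', to: 'photos#show'
    match 'photos', to: 'photos#show', via: [:get, :post]
    match 'photos', to: 'photos#show', via: :all   # all verbs; use with care per the note above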
use the :constraints option to enforce a format for a dynamic segment
constraints
don't need to use anchors
Request-Based Constraints
the same name as the hash key and then compare the return value with the hash value.
constraint values should match the corresponding Request object method return type
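Sketches of a segment constraint and a request-based constraint (the id format and subdomain are illustrative):

    # Segment constraint on the dynamic :id (no anchors needed)
    get 'photos/:id', to: 'photos#show', constraints: { id: /[A-Z]\d{5}/ }

    # Request-based constraint: the request's subdomain method must return 'admin'
    get 'photos', to: 'photos#index', constraints: { subdomain: 'admin' }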
reuse dynamic segments from the match in the path to redirect
this redirection is a 301 "Moved Permanently" redirect.
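For example, reusing the :name segment in the redirect target:

    get '/stories/:name', to: redirect('/articles/%{name}')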
root method
put the root route at the top of the file
The root route only routes GET requests to the action.
root inside namespaces and scopes
For namespaced controllers you can use the directory notation
Only the directory notation is supported
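Sketches of root routes, including the namespaced directory notation:

    root to: 'pages#main'
    root 'pages#main'            # shortcut for the above

    namespace :admin do
      root to: 'admin#index'     # /admin -> Admin::AdminController#index
    end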
use the :constraints option to specify a required format on the implicit id
specify a single constraint to apply to a number of routes by using the block
non-resourceful routes
:id parameter doesn't accept dots
:as option lets you override the normal naming for the named route helpers
use the :as option to prefix the named route helpers that Rails generates for a route
prevent name collisions
prefix routes with a named parameter
This will provide you with URLs such as /bob/articles/1 and will allow you to reference the username part of the path as params[:username] in controllers, helpers and views
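A sketch of the named-parameter prefix described above:

    scope ':username' do
      resources :articles
    end
    # /bob/articles/1 -> params[:username] == "bob"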
:only option
:except option
generate only the routes that you actually need can cut down on memory use and speed up the routing process.
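Sketches of both options:

    resources :photos, only: [:index, :show]
    resources :photos, except: :destroy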
alter path names
http://localhost:3000/rails/info/routes
rake routes
setting the CONTROLLER environment variable
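For example (rake-era Rails; the controller name is illustrative):

    $ CONTROLLER=users rake routes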
Routes should be included in your testing strategy
Refer to the YAML Anchors/Aliases documentation for information about how to alias and reuse syntax to keep your .circleci/config.yml file small.
workflow orchestration with two parallel jobs
jobs run according to configured requirements, each job waiting to start until the required job finishes successfully
requires: key
fans-out to run a set of acceptance test jobs in parallel, and finally fans-in to run a common deploy job.
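A minimal sketch of a fan-out/fan-in workflow (job names are illustrative):

    workflows:
      version: 2
      build_accept_deploy:
        jobs:
          - build
          - acceptance_test_1:
              requires:
                - build
          - acceptance_test_2:
              requires:
                - build
          - deploy:
              requires:
                - acceptance_test_1
                - acceptance_test_2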
Holding a Workflow for a Manual Approval
Workflows can be configured to wait for manual approval of a job before
continuing to the next job
add a job to the jobs list with the
key type: approval
approval is a special job type that is only available to jobs under the workflow key
The name of the job to hold is arbitrary - it could be wait or pause, for example,
as long as the job has a type: approval key in it.
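A sketch of a hold job (the name hold is arbitrary, as noted above):

    workflows:
      version: 2
      build-with-approval:
        jobs:
          - build
          - hold:
              type: approval
              requires:
                - build
          - deploy:
              requires:
                - hold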
schedule a workflow
to run at a certain time for specific branches.
The triggers key is only added under your workflows key
using cron syntax to represent Coordinated Universal Time (UTC) for specified branches.
By default,
a workflow is triggered on every git push
the commit workflow has no triggers key
and will run on every git push
The nightly workflow has a triggers key
and will run on the specified schedule
Cron step syntax (for example, */1, */20) is not supported.
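A sketch of a scheduled nightly workflow; the cron expression and branch are illustrative:

    workflows:
      version: 2
      nightly:
        triggers:
          - schedule:
              cron: "0 0 * * *"
              filters:
                branches:
                  only:
                    - master
        jobs:
          - build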
use a context to share environment variables
use the same shared environment variables when initiated by a user who is part of the organization.
CircleCI does not run workflows for tags
unless you explicitly specify tag filters.
CircleCI branch and tag filters support
the Java variant of regex pattern matching.
Each workflow has an associated workspace which can be used to transfer files to downstream jobs as the workflow progresses.
The workspace is an additive-only store of data.
Jobs can persist data to the workspace
Downstream jobs can attach the workspace to their container filesystem.
Attaching the workspace downloads and unpacks each layer based on the ordering of the upstream jobs in the workflow graph.
Workflows that include jobs running on multiple branches may require data to be shared using workspaces
To persist data from a job and make it available to other jobs, configure the job to use the persist_to_workspace key.
Files and directories named in the paths: property of persist_to_workspace will be uploaded to the workflow’s temporary workspace relative to the directory specified with the root key.
Configure a job to get saved data by configuring the attach_workspace key.
persist_to_workspace
attach_workspace
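A minimal sketch of persisting and attaching a workspace (the image, paths, and job names are illustrative):

    jobs:
      build:
        docker:
          - image: cimg/base:stable
        steps:
          - run: mkdir -p workspace && echo "hello" > workspace/echo-output
          - persist_to_workspace:
              root: workspace
              paths:
                - echo-output
      downstream:
        docker:
          - image: cimg/base:stable
        steps:
          - attach_workspace:
              at: /tmp/workspace
          - run: cat /tmp/workspace/echo-output
    workflows:
      version: 2
      btd:
        jobs:
          - build
          - downstream:
              requires:
                - build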
To rerun only a workflow’s failed jobs, click the Workflows icon in the app and select a workflow to see the status of each job, then click the Rerun button and select Rerun from failed.
if you do not see your workflows triggering, a configuration error is preventing the workflow from starting.
check your Workflows page of the CircleCI app (not the Job page)
"[" is a command. It's actually syntactic sugar for the built-in command test which checks and compares its arguments. The "]" is actually an argument to the [ command that tells it to stop checking for arguments!
why > and < get weird inside single square brackets -- Bash actually thinks you're trying to do an input or output redirect inside a command!
the [[ double square brackets ]] and (( double parens )) are not exactly commands. They're actually Bash language keywords, which is what makes them behave a little more predictably.
The [[ double square brackets ]] work essentially the same as [ single square brackets ], albeit with some more superpowers like more powerful regex support.
The (( double parentheses )) are actually a construct that allow arithmetic inside Bash.
If the results inside are zero, it returns an exit code of 1. (Essentially, zero is "falsey.")
the greater and less-than symbols work just fine inside arithmetic parens.
exit code 0 for success.
exit code 1 for failure.
If the regex works out, the return code of the double square brackets is 0, and thus the function returns 0. If not, everything returns 1. This is a really great way to name regexes.
the stuff immediately after the if can be any command in the whole wide world, as long as it provides an exit code, which is pretty much always.
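A short sketch pulling these pieces together (the file name and regex are illustrative):

    #!/usr/bin/env bash

    file="notes.txt"

    # [ is the test command; the closing ] is just its last argument
    if [ -f "$file" ]; then
      echo "file exists"
    fi

    # [[ ]] is a keyword, so =~ regex matching and < > comparisons behave sanely
    is_number() {
      [[ "$1" =~ ^[0-9]+$ ]]   # the function returns the exit code of [[ ]]
    }

    if is_number "42"; then
      echo "looks numeric"
    fi

    # (( )) is arithmetic: nonzero result -> exit code 0, zero result -> exit code 1
    if (( 5 > 3 )); then
      echo "math works"
    fi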
""[" is a command. It's actually syntactic sugar for the built-in command test which checks and compares its arguments. The "]" is actually an argument to the [ command that tells it to stop checking for arguments!"
Baseimage-docker only advocates running multiple OS processes inside a single container.
Password and challenge-response authentication are disabled by default. Only key authentication is allowed.
A tool for running a command as another user
The Docker developers advocate the philosophy of running a single logical service per container. A logical service can consist of multiple OS processes.
All syslog messages are forwarded to "docker logs".
Splitting your logical service into multiple OS processes also makes sense from a security standpoint.
Baseimage-docker provides tools to encourage running processes as different users
sometimes it makes sense to run multiple services in a single container, and sometimes it doesn't.
Baseimage-docker advocates running multiple OS processes inside a single container, and a single logical service can consist of multiple OS processes.
using environment variables to pass parameters to containers is very much the "Docker way"
add additional daemons (e.g. your own app) to the image by creating runit entries.
the shell script must run the daemon without letting it daemonize/fork it.
All executable scripts in /etc/my_init.d, if this directory exists. The scripts are run in lexicographic order.
variables will also be passed to all child processes
Environment variables on Unix are inherited on a per-process basis
there is no good central place for defining environment variables for all applications and services
centrally defining environment variables
One of the ideas behind Docker is that containers should be stateless, easily restartable, and behave like a black box.
a one-shot command in a new container
immediately exit after the command exits,
However the downside of this approach is that the init system is not started. That is, while invoking COMMAND, important daemons such as cron and syslog are not running. Also, orphaned child processes are not properly reaped, because COMMAND is PID 1.
Baseimage-docker provides a facility to run a single one-shot command, while solving all of the aforementioned problems
Nginx is one such example: it removes all environment variables unless you explicitly instruct it to retain them through the env configuration option.
Mechanisms for easily running multiple processes, without violating the Docker philosophy
Ubuntu is not designed to be run inside Docker
According to the Unix process model, the init process -- PID 1 -- inherits all orphaned child processes and must reap them
Syslog-ng seems to be much more stable
cron daemon
Rotates and compresses logs
/sbin/setuser
A tool for installing apt packages that automatically cleans up after itself.
a single logical service inside a single container
A daemon is a program which runs in the background of its system, such
as a web server.
The shell script must be called run, must be executable, and is to be
placed in the directory /etc/service/<NAME>. runsv will switch to
the directory and invoke ./run after your container starts.
If any script exits with a non-zero exit code, the booting will fail.
If your process is started with
a shell script, make sure you exec the actual process, otherwise the shell will receive the signal
and not your process.
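A sketch based on the README's pattern; the memcached service name, user, and paths are illustrative:

    # In the Dockerfile:
    RUN mkdir /etc/service/memcached
    COPY memcached.sh /etc/service/memcached/run
    RUN chmod +x /etc/service/memcached/run

    # memcached.sh -- note the exec, so runit supervises memcached itself, not a wrapper shell:
    #!/bin/sh
    exec /sbin/setuser memcache /usr/bin/memcached >> /var/log/memcached.log 2>&1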
any environment variables set with docker run --env or with the ENV command in the Dockerfile, will be picked up by my_init
not possible for a child process to change the environment variables of other processes
they will not see the environment variables that were originally passed by Docker.
We ignore HOME, SHELL, USER and a bunch of other environment variables on purpose, because not ignoring them will break multi-user containers.
my_init imports environment variables from the directory /etc/container_environment
/etc/container_environment.sh - a dump of the environment variables in Bash format.
modify the environment variables in my_init (and therefore the environment variables in all child processes that are spawned after that point in time), by altering the files in /etc/container_environment
my_init only activates changes in /etc/container_environment when running startup scripts
environment variables don't contain sensitive data, then you can also relax the permissions
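For example, a Dockerfile line (the variable name and value are illustrative) that my_init picks up and passes to later child processes:

    RUN echo db.internal.example.com > /etc/container_environment/DATABASE_HOST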
Syslog messages are forwarded to the console
syslog-ng is started separately before the runit supervisor process, and shutdown after runit exits.
RUN apt-get update && apt-get upgrade -y -o Dpkg::Options::="--force-confold"
/sbin/my_init --skip-startup-files --quiet --
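For example, a one-shot command run through my_init (the trailing command is illustrative):

    docker run YOUR_IMAGE /sbin/my_init --skip-startup-files --quiet -- ls -al /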
By default, no keys are installed, so nobody can login
provide a pregenerated, insecure key (PuTTY format)
RUN /usr/sbin/enable_insecure_key
docker run YOUR_IMAGE /sbin/my_init --enable-insecure-key
RUN cat /tmp/your_key.pub >> /root/.ssh/authorized_keys && rm -f /tmp/your_key.pub
The default baseimage-docker installs syslog-ng, cron and sshd services during the build process
designed to run on Rack
or complement existing web application frameworks such as Rails and Sinatra by
providing a simple DSL to easily develop RESTful APIs
Grape APIs are Rack applications that are created by subclassing Grape::API
Rails expects a subdirectory that matches the name of the Ruby module and a file name that matches the name of the class
mount multiple API implementations inside another one
mount on a path, which is similar to using prefix inside the mounted API itself.
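A sketch of mounting (the Twitter API class names are illustrative):

    class Twitter::API < Grape::API
      mount Twitter::APIv1
      mount Twitter::APIv2 => '/v2'   # mount on a path, similar to a prefix
    end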
four strategies in which clients can reach your API's endpoints: :path,
:header, :accept_version_header and :param
clients should pass the desired version as a request parameter,
either in the URL query string or in the request body.
clients should pass the desired version in the HTTP Accept header
clients should pass the desired version in the URL
clients should pass the desired version in the HTTP Accept-Version header.
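Sketches of the four strategies (the version string and vendor are illustrative):

    version 'v1', using: :path                          # /v1/statuses/...
    version 'v1', using: :header, vendor: 'twitter'     # Accept: application/vnd.twitter-v1+json
    version 'v1', using: :accept_version_header         # Accept-Version: v1
    version 'v1', using: :param, parameter: 'v'         # ?v=v1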
add a description to API methods and namespaces
Request parameters are available through the params hash object
Parameters are automatically populated from the request body on POST and PUT
route string parameters will have precedence.
Grape allows you to access only the parameters that have been declared by your params block
By default declared(params) includes parameters that have nil values
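A sketch of declared params (the field names and route are illustrative):

    params do
      requires :first_name, type: String
      optional :last_name, type: String
    end
    post 'users/signup' do
      # Only the declared parameters, ignoring anything else the client sent;
      # include_missing: false also drops declared params that were not provided.
      { declared_params: declared(params, include_missing: false) }
    end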
all valid types
type: File
JSON objects and arrays of objects are accepted equally
any class can be
used as a type so long as an explicit coercion method is supplied
As a special case, variant-member-type collections may also be declared, by
passing a Set or Array with more than one member to type
Parameters can be nested using group or by calling requires or optional with a block
relevant if another parameter is given
Parameters options can be grouped
allow_blank can be combined with both requires and optional
Parameters can be restricted to a specific set of values
Parameters can be restricted to match a specific regular expression
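Sketches of these parameter validations (the attribute names are illustrative):

    params do
      requires :name, allow_blank: false
      requires :status, type: Symbol, values: [:not_started, :processing, :done]
      requires :email, regexp: /.+@.+/
    end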
Never define mutually exclusive sets with any required params
Namespaces allow parameter definitions and apply to every method within the namespace
define a route parameter as a namespace using route_param
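A sketch of a namespaced route parameter (the Status model is illustrative):

    namespace :statuses do
      route_param :id do
        get 'replies' do
          Status.find(params[:id]).replies
        end
      end
    end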
create a custom validation that uses the request to validate the attribute
rescue a Grape::Exceptions::ValidationErrors and respond with a custom response or turn the response into well-formatted JSON for a JSON API that separates individual parameters and the corresponding error messages
custom validation messages
Request headers are available through the headers helper or from env in their original form
define requirements for your named route parameters using regular
expressions on namespace or endpoint
route will match only if all requirements are met
mix in a module
define reusable params
using cookies method
a 201 for POST-Requests
204 for DELETE-Requests
200 status code for all other Requests
use status to query and set the actual HTTP Status Code
raising errors with error!
It is crucial to define this endpoint at the very end of your API, as it
literally accepts every request.
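The catch-all looks roughly like this (placed last, as noted):

    route :any, '*path' do
      error! # or respond with a 404, etc.
    end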
rescue_from will rescue the exceptions listed and all their subclasses.
Grape::API provides a logger method which by default will return an instance of the Logger
class from Ruby's standard library.
Grape supports a range of ways to present your data
Grape has built-in Basic and Digest authentication (the given block
is executed in the context of the current Endpoint).
Authentication
applies to the current namespace and any children, but not parents.
Blocks can be executed before or after every API call, using before, after,
before_validation and after_validation
Before and after callbacks execute in the following order
Grape by default anchors all request paths, which means that the request URL
should match from start to end to match
The namespace method has a number of aliases, including: group, resource,
resources, and segment. Use whichever reads the best for your API.
test a Grape API with RSpec by making HTTP requests and examining the response
POST JSON data and specify the correct content-type.
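A sketch of a Rack::Test-based spec; the API class, paths, and expected bodies are illustrative:

    describe Twitter::API do
      include Rack::Test::Methods

      def app
        Twitter::API
      end

      it 'returns an empty timeline' do
        get '/api/statuses/public_timeline'
        expect(last_response.status).to eq(200)
        expect(JSON.parse(last_response.body)).to eq []
      end

      it 'creates a status from JSON' do
        post '/api/statuses', { status: 'hello' }.to_json, 'CONTENT_TYPE' => 'application/json'
        expect(last_response.status).to eq(201)
      end
    end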
A build system, git, and development headers for many popular libraries, so that the most popular Ruby, Python and Node.js native extensions can be compiled without problems.
Nginx 1.18. Disabled by default
production-grade features, such as process monitoring, administration and status inspection.
Redis 5.0. Not installed by default.
The image has an app user with UID 9999 and home directory /home/app. Your application is supposed to run as this user.
running applications without root privileges is good security practice.
Your application should be placed inside /home/app.
COPY --chown=app:app
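For example, in the Dockerfile (the webapp directory name is illustrative):

    RUN mkdir -p /home/app/webapp
    COPY --chown=app:app . /home/app/webapp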
Passenger works like a mod_ruby, mod_nodejs, etc. It changes Nginx into an application server and runs your app from Nginx.
placing a .conf file in the directory /etc/nginx/sites-enabled
The best way to configure Nginx is by adding .conf files to /etc/nginx/main.d and /etc/nginx/conf.d
files in conf.d are included in the Nginx configuration's http context.
any environment variables you set with docker run -e, Docker linking and /etc/container_environment, won't reach Nginx.
To preserve these variables, place an Nginx config file ending with *.conf in the directory /etc/nginx/main.d, in which you tell Nginx to preserve these variables.
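For example, a file such as /etc/nginx/main.d/secret_key.conf (the file and variable names are illustrative):

    env SECRET_KEY_BASE;
    env DATABASE_URL;
    env DATABASE_PASSWORD;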
By default, Phusion Passenger sets all of the following environment variables to the value production
Setting these environment variables yourself (e.g. using docker run -e RAILS_ENV=...) will not have any effect, because Phusion Passenger overrides all of these environment variables.
PASSENGER_APP_ENV environment variable
passenger-docker autogenerates an Nginx configuration file (/etc/nginx/conf.d/00_app_env.conf) during container boot.
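So to change the app environment, set PASSENGER_APP_ENV instead (the value is illustrative):

    docker run -e PASSENGER_APP_ENV=staging YOUR_IMAGE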
The configuration file is in /etc/redis/redis.conf. Modify it as you see fit, but make sure daemonize no is set.
You can add additional daemons to the image by creating runit entries.
The shell script must be called run, must be executable
the shell script must run the daemon without letting it daemonize/fork it.
We use RVM to install and to manage Ruby interpreters.
use all rules keywords, like if, changes, and exists, in the same
rule. The rule evaluates to true only when all included keywords evaluate to true.
use parentheses with && and || to build more complicated variable expressions.
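A sketch combining several rules keywords in one job (variable values and file paths are illustrative):

    build:
      script: make build
      rules:
        - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
          changes:
            - Dockerfile
          when: manual
        - if: '($CI_COMMIT_BRANCH == "main" || $NIGHTLY == "true") && $CI_PROJECT_PATH == "group/project"'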
Use workflow to specify which types of pipelines
can run.
every push to an open merge request’s source branch
causes duplicated pipelines.
avoid duplicate pipelines by changing the job rules to avoid either push (branch)
pipelines or merge request pipelines.
do not mix only/except jobs with rules jobs in the same pipeline.
For behavior similar to the only/except keywords, you can
check the value of the $CI_PIPELINE_SOURCE variable
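A sketch of a workflow block that prefers merge request pipelines and suppresses the duplicate branch pipeline:

    workflow:
      rules:
        - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
        - if: '$CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS'
          when: never
        - if: '$CI_COMMIT_BRANCH'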
commonly used variables for if clauses
rules:changes expressions to determine when
to add jobs to a pipeline
Use !reference tags to reuse rules in different
jobs.
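A sketch of reusing rules with !reference (the hidden job name is illustrative):

    .default_rules:
      rules:
        - if: '$CI_PIPELINE_SOURCE == "schedule"'
          when: never
        - if: '$CI_MERGE_REQUEST_IID'

    job1:
      script: echo "job1"
      rules:
        - !reference [.default_rules, rules]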
Use except to define when a job does not run.
only or except used without refs is the same as
only:refs / except:refs
If you change multiple files, but only one file ends in .md,
the build job is still skipped.
If you use multiple keywords with only or except, the keywords are evaluated
as a single conjoined expression.
only includes the job if all of the keys have at least one condition that matches.
except excludes the job if any of the keys have at least one condition that matches.
With only, individual keys are logically joined by an AND
With except, individual keys are logically joined by an OR
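For example (the file patterns and refs are illustrative):

    # Skipped whenever any changed file matches *.md
    build:
      script: make build
      except:
        changes:
          - "*.md"

    # With only, refs AND changes must both match (keys joined by AND)
    test:
      script: make test
      only:
        refs:
          - main
        changes:
          - "src/**/*"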
To specify a job as manual, add when: manual to the job
in the .gitlab-ci.yml file.
Use protected environments
to define a list of users authorized to run a manual job.
Use when: delayed to execute scripts after a waiting period, or if you want to avoid
jobs immediately entering the pending state.
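Sketches of a manual and a delayed job (names and timings are illustrative):

    deploy_prod:
      script: ./deploy.sh
      when: manual

    rollout:
      script: ./rollout.sh
      when: delayed
      start_in: 30 minutes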
To split a large job into multiple smaller jobs that run in parallel, use the
parallel keyword
run a trigger job multiple times in parallel in a single pipeline,
but with different variable values for each instance of the job.
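Sketches of both forms of parallel (the counts, variables, and child pipeline file are illustrative):

    # Split one job into 5 parallel instances
    rspec:
      script: bundle exec rspec
      parallel: 5

    # Run a trigger job several times with different variables
    deploy-stacks:
      trigger:
        include: path/to/child-pipeline.yml
      parallel:
        matrix:
          - PROVIDER: aws
            STACK: [monitoring, app1, app2]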
The @ symbol denotes the beginning of a ref’s repository path.
To match a ref name that contains the @ character in a regular expression,
you must use the hex character code match \x40.
Compare a variable to a string
Check if a variable is undefined
Check if a variable is empty
Check if a variable exists
Matches are found when using =~.
Matches are not found when using !~.
Join variable expressions together with && or ||
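Sketches of the expression forms above, shown as if clauses (variable names are illustrative):

    job:
      script: echo "run"
      rules:
        - if: '$CI_COMMIT_BRANCH == "main"'           # compare to a string
        - if: '$CUSTOM_VAR == null'                    # undefined
        - if: '$CUSTOM_VAR == ""'                      # defined but empty
        - if: '$CUSTOM_VAR'                            # exists and is not empty
        - if: '$CI_COMMIT_REF_NAME =~ /^feature-/'     # match found with =~
        - if: '$VAR1 == "A" && $VAR2 == "B"'           # joined with &&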