"Delivering production software can often be a painful task. Long test periods and the integration between operations and development can ruin or delay a promising delivery. That's what DevOps can fix. DevOps is a cultural change that aims to smoothly integrate development and operations procedures, breaking the barriers between them and focusing on automation, collaboration, and sharing of knowledge and tools. This book shows you how to implement DevOps and Continuous Delivery practices to raise your system's deployment frequency, increasing your production application's stability and robustness."
"The YubiKey 4 is the strong authentication bullseye the industry has been aiming at for years, enabling one single key to secure an unlimited number of applications.
Yubico's 4th generation YubiKey is built on high-performance secure elements. It includes the same range of one-time password and public key authentication protocols as in the YubiKey NEO, excluding NFC, but with stronger public/private keys, faster crypto operations and the world's first touch-to-sign feature.
With the YubiKey 4 platform, we have further improved our manufacturing and ordering process, enabling customers to order exactly what functions they want in 500+ unit volumes, with no secrets stored at Yubico or shared with a third-party organization. The best part? An organization can securely customize 1,000 YubiKeys in less than 10 minutes.
For customers who require NFC, the YubiKey NEO is our full-featured key with both contact (USB) and contactless (NFC, MIFARE) communications."
Supervisor is a client/server system that allows its users to monitor and control a number of processes on UNIX-like operating systems.
It shares some of the same goals as programs like launchd, daemontools, and runit. Unlike some of these programs, it is not meant to be run as a substitute for init as "process id 1". Instead it is meant to be used to control processes related to a project or a customer, and is meant to start like any other program at boot time.
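A minimal supervisord program stanza, with the program name and all paths assumed, might look like this:

[program:myapp]
command=/usr/local/bin/myapp --port 8080
autostart=true
autorestart=true
stdout_logfile=/var/log/myapp.log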
"Vega is a declarative format for creating, saving, and sharing visualization designs. With Vega, visualizations are described in JSON, and generate interactive views using either HTML5 Canvas or SVG."
DevOps is a set of practices that automates the processes between software development and IT teams, so that they can build, test, and release software faster and more reliably.
increased trust, faster software releases, ability to solve critical issues quickly, and better management of unplanned work.
bringing together the best of software development and IT operations.
a firm handshake between development and operations
DevOps isn’t magic, and transformations don’t happen overnight.
Infrastructure as code
Culture is the #1 success factor in DevOps.
Building a culture of shared responsibility, transparency and faster feedback is the foundation of every high performing DevOps team.
'not our problem' mentality
DevOps is that change in mindset of looking at the development process holistically and breaking down the barrier between Dev and Ops.
Speed is everything.
Lack of automated test and review cycles blocks the release to production, and poor incident response time kills velocity and team confidence
Open communication helps Dev and Ops teams swarm on issues, fix incidents, and unblock the release pipeline faster.
Unplanned work is a reality that every team faces–a reality that most often impacts team productivity.
“cross-functional collaboration.”
All the tooling and automation in the world are useless if they aren’t accompanied by a genuine desire on the part of development and IT/Ops professionals to work together.
DevOps doesn’t solve tooling problems. It solves human problems.
Forming project- or product-oriented teams to replace function-based teams is a step in the right direction.
sharing a common goal and having a plan to reach it together
join sprint planning sessions, daily stand-ups, and sprint demos.
DevOps culture across every department
open channels of communication, and talk regularly
continuous delivery: the practice of running each code change through a gauntlet of automated tests, often facilitated by cloud-based infrastructure, then packaging up successful builds and promoting them up toward production using automated deploys.
automated deploys alert IT/Ops to server “drift” between environments, which reduces or eliminates surprises when it’s time to release.
“configuration as code.”
when DevOps uses automated deploys to send thoroughly tested code to identically provisioned environments, “Works on my machine!” becomes irrelevant.
A DevOps mindset sees opportunities for continuous improvement everywhere.
regular retrospectives
A/B testing
failure is inevitable. So you might as well set up your team to absorb it, recover, and learn from it (some call this “being anti-fragile”).
Postmortems focus on where processes fell down and how to strengthen them – not on which team member f'ed up the code.
Our engineers are responsible for QA, writing, and running their own tests to get the software out to customers.
How long did it take to go from development to deployment?
How long does it take to recover after a system failure?
service level agreements (SLAs)
DevOps isn't any single person's job. It's everyone's job.
DevOps is big on the idea that the same people who build an application should be involved in shipping and running it.
developers and operators pair with each other in each phase of the application’s lifecycle.
By default, secrets are mounted into a service at /run/secrets/<secret-name>
docker secret create
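A sketch of how these two pieces fit together; the secret name, service name, and image are illustrative:

printf 'S3cr3t' | docker secret create db_password -
docker service create --name app --secret db_password myorg/app:latest
# inside the service's containers, the value is readable at /run/secrets/db_password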
If you use a distributed storage driver, such as Amazon S3, you can use a fully replicated service. Each worker can write to the storage back-end without causing write conflicts.
You can access the service on port 443 of any swarm node. Docker sends the requests to the node which is running the service.
--publish published=443,target=443
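A sketch of the full command this flag belongs to; the service name, replica count, and image are assumed:

docker service create --name registry --replicas 3 --publish published=443,target=443 registry:2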
The most important aspect is that a load balanced cluster of registries must share the same resources. If the registries use a shared storage backend such as S3 or Azure, they should be accessing the same resource and share an identical configuration.
you must make sure you are properly sending the X-Forwarded-Proto, X-Forwarded-For, and Host headers to their “client-side” values. Failure to do so usually makes the registry issue redirects to internal hostnames or downgrade from https to http.
A properly secured registry should return 401 when the “/v2/” endpoint is hit without credentials
registries should always implement access restrictions.
REGISTRY_AUTH=htpasswd
REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd
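Putting the two variables into a docker run invocation; paths, realm, and credentials are illustrative:

htpasswd -Bbn testuser testpassword > auth/htpasswd   # the registry requires bcrypt entries
docker run -d -p 5000:5000 --restart=always --name registry \
  -v "$(pwd)/auth:/auth" \
  -e REGISTRY_AUTH=htpasswd \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  registry:2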
The registry also supports delegated authentication which redirects users to a specific trusted token server. This approach is more complicated to set up, and only makes sense if you need to fully configure ACLs and need more control over the registry’s integration into your global authorization and authentication systems.
Rails 4 automatically adds the sass-rails, coffee-rails and uglifier gems to your Gemfile
reduce the number of requests that a browser makes to render a web page
Starting with version 3.1, Rails defaults to concatenating all JavaScript files into one master .js file and all CSS files into one master .css file
In production, Rails inserts an MD5 fingerprint into each filename so that the file is cached by the web browser
The technique sprockets uses for fingerprinting is to insert a hash of the content into the name, usually at the end.
asset minification or compression
The sass-rails gem is automatically used for CSS compression if included in Gemfile and no config.assets.css_compressor option is set.
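A sketch of the related compressor settings; whether you set them explicitly depends on your app:

# config/environments/production.rb
config.assets.js_compressor = :uglifier
# leave config.assets.css_compressor unset to let sass-rails compress CSS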
Supported languages include Sass for CSS, CoffeeScript for JavaScript, and ERB for both by default.
When a filename is unique and based on its content, HTTP headers can be set to encourage caches everywhere (whether at CDNs, at ISPs, in networking equipment, or in web browsers) to keep their own copy of the content
asset pipeline is technically no longer a core feature of Rails 4
The technique Rails uses for fingerprinting is to insert a hash of the content into the name, usually at the end
With the asset pipeline, the preferred location for these assets is now the app/assets directory.
Fingerprinting is enabled by default for production and disabled for all other environments
The files in app/assets are never served directly in production.
Paths are traversed in the order that they occur in the search path
You should use app/assets for files that must undergo some pre-processing before they are served.
By default .coffee and .scss files will not be precompiled on their own
app/assets is for assets that are owned by the application, such as custom images, JavaScript files or stylesheets.
lib/assets is for your own libraries' code that doesn't really fit into the scope of the application or those libraries which are shared across applications.
vendor/assets is for assets that are owned by outside entities, such as code for JavaScript plugins and CSS frameworks.
Any path under assets/* will be searched
By default these files will be ready to use by your application immediately using the require_tree directive.
By default, this means the files in app/assets take precedence, and will mask corresponding paths in lib and vendor
Sprockets uses files named index (with the relevant extensions) for a special purpose
Rails.application.config.assets.paths
causes turbolinks to check if an asset has been updated and if so loads it into the page
if you add an erb extension to a CSS asset (for example, application.css.erb), then helpers like asset_path are available in your CSS rules
If you add an erb extension to a JavaScript asset, making it something such as application.js.erb, then you can use the asset_path helper in your JavaScript code
The asset pipeline automatically evaluates ERB
data URI — a method of embedding the image data directly into the CSS file — you can use the asset_data_uri helper.
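A minimal sketch; the file and image names are assumed:

/* app/assets/stylesheets/logo.css.erb */
.logo { background: url(<%= asset_data_uri 'logo.png' %>); }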
Sprockets will also look through the paths specified in config.assets.paths, which includes the standard application paths and any paths added by Rails engines.
image_tag
the closing tag cannot be of the style -%>
asset_data_uri
app/assets/javascripts/application.js
sass-rails provides -url and -path helpers (hyphenated in Sass, underscored in Ruby) for the following asset classes: image, font, video, audio, JavaScript and stylesheet.
Rails.application.config.assets.compress
In JavaScript files, the directives begin with //=
The require_tree directive tells Sprockets to recursively include all JavaScript files in the specified directory into the output.
manifest files contain directives — instructions that tell Sprockets which files to require in order to build a single CSS or JavaScript file.
You should not rely on any particular order among those
Sprockets uses manifest files to determine which assets to include and serve.
the family of require directives prevents files from being included twice in the output
which files to require in order to build a single CSS or JavaScript file
Directives are processed top to bottom, but the order in which files are included by require_tree is unspecified.
In JavaScript files, Sprockets directives begin with //=
If require_self is called more than once, only the last call is respected.
The require directive is used to tell Sprockets the files you wish to require.
You need not supply the extensions explicitly.
Sprockets assumes you are requiring a .js file when done from within a .js file
paths must be specified relative to the manifest file
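Putting the directive clips above together, a typical Rails 4 JavaScript manifest looks roughly like this:

// app/assets/javascripts/application.js
//= require jquery
//= require jquery_ujs
//= require_tree .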
require_directory
Rails 4 creates both app/assets/javascripts/application.js and app/assets/stylesheets/application.css regardless of whether the --skip-sprockets option is used when creating a new rails application.
The file extensions used on an asset determine what preprocessing is applied.
app/assets/stylesheets/application.css
Additional layers of preprocessing can be requested by adding other extensions, where each extension is processed in a right-to-left manner
require_self
use the Sass @import rule instead of these Sprockets directives.
Keep in mind that the order of these preprocessors is important
In development mode, assets are served as separate files in the order they are specified in the manifest file.
when these files are requested they are processed by the processors provided by the coffee-script and sass gems and then sent back to the browser as JavaScript and CSS respectively.
css.scss.erb
js.coffee.erb
Keep in mind the order of these preprocessors is important.
By default Rails assumes that assets have been precompiled and will be served as static assets by your web server
with the Asset Pipeline the :cache and :concat options aren't used anymore
Assets are compiled and cached on the first request after the server is started
GitLab flow is a clearly defined set of best practices.
It combines feature-driven development and feature branches with issue tracking.
In Git, you add files from the working copy to the staging area. After that, you commit them to your local repo.
The third step is pushing to a shared remote repository.
The biggest problem is that many long-running branches emerge that all contain part of the changes.
It is a convention to call your default branch master and to mostly branch from and merge to this.
Nowadays, most organizations practice continuous delivery, which means that your default branch can be deployed.
Continuous delivery removes the need for hotfix and release branches, including all the ceremony they introduce.
Merging everything into the master branch and frequently deploying means you minimize the amount of unreleased code, which is in line with lean and continuous delivery best practices.
GitHub flow assumes you can deploy to production every time you merge a feature branch.
You can deploy a new version by merging master into the production branch.
If you need to know what code is in production, you can just checkout the production branch to see.
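With a production branch, a release then reduces to a merge; a sketch:

git checkout production
git merge master
git push origin production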
Production branch
Environment branches
have an environment that is automatically updated to the master branch.
deploy the master branch to staging.
To deploy to pre-production, create a merge request from the master branch to the pre-production branch.
Go live by merging the pre-production branch into the production branch.
Release branches
work with release branches if you need to release software to the outside world.
each branch contains a minor version
After announcing a release branch, only add serious bug fixes to the branch.
merge these bug fixes into master, and then cherry-pick them into the release branch.
Merging into master and then cherry-picking into release is called an “upstream first” policy
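A sketch of the upstream-first flow; the release branch name and commit sha are placeholders:

# the fix is already merged into master
git checkout 2-3-stable
git cherry-pick 9c3f2b1
git push origin 2-3-stable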
Tools such as GitHub and Bitbucket choose the name “pull request” since the first manual action is to pull the feature branch.
Tools such as GitLab and others choose the name “merge request” since the final action is to merge the feature branch.
If you work on a feature branch for more than a few hours, it is good to share the intermediate result with the rest of the team.
the merge request automatically updates when new commits are pushed to the branch.
If the assigned person does not feel comfortable, they can request more changes or close the merge request without merging.
In GitLab, it is common to protect the long-lived branches, e.g., the master branch, so that most developers can’t modify them.
if you want to merge into a protected branch, assign your merge request to someone with maintainer permissions.
After you merge a feature branch, you should remove it from the source control software.
Having a reason for every code change helps to inform the rest of the team and to keep the scope of a feature branch small.
If there is no issue yet, create the issue
The issue title should describe the desired state of the system.
For example, the issue title “As an administrator, I want to remove users without receiving an error” is better than “Admin can’t remove users.”
create a branch for the issue from the master branch
If you open the merge request but do not assign it to anyone, it is a “Work In Progress” merge request.
Start the title of the merge request with [WIP] or WIP: to prevent it from being merged before it’s ready.
When they press the merge button, GitLab merges the code and creates a merge commit that makes this event easily visible later on.
Merge requests always create a merge commit, even when the branch could be merged without one.
This merge strategy is called “no fast-forward” in Git.
Suppose that a branch is merged but a problem occurs and the issue is reopened.
In this case, it is no problem to reuse the same branch name since the first branch was deleted when it was merged.
At any time, there is at most one branch for every issue.
It is possible that one feature branch solves more than one issue.
GitLab closes these issues when the code is merged into the default branch.
If you have an issue that spans across multiple repositories, create an issue for each repository and link all issues to a parent issue.
use an interactive rebase (rebase -i) to squash multiple commits into one or reorder them.
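For example, to squash the last three commits on a local, not-yet-pushed feature branch:

git rebase -i HEAD~3
# in the todo list, keep "pick" on the first commit and change the others to "squash"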
you should never rebase commits you have pushed to a remote server.
Rebasing creates new commits for all your changes, which can cause confusion because the same change would have multiple identifiers.
if someone has already reviewed your code, rebasing makes it hard to tell what changed since the last review.
never rebase commits authored by other people.
it is a bad idea to rebase commits that you have already pushed.
If you revert a merge commit and then change your mind, revert the revert commit to redo the merge.
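A sketch with placeholder shas; -m 1 tells Git which parent is the mainline:

git revert -m 1 <merge-commit-sha>   # undo the merge
git revert <revert-commit-sha>       # later: revert the revert to redo the merge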
Often, people avoid merge commits by just using rebase to reorder their commits after the commits on the master branch.
Using rebase prevents a merge commit when merging master into your feature branch, and it creates a neat linear history.
every time you rebase, you have to resolve similar conflicts.
Sometimes you can reuse recorded resolutions (rerere), but merging is better since you only have to resolve conflicts once.
A good way to prevent creating many merge commits is to not frequently merge master into the feature branch.
keep your feature branches short-lived.
Most feature branches should take less than one day of work.
If your feature branches often take more than a day of work, try to split your features into smaller units of work.
You could also use feature toggles to hide incomplete features so you can still merge back into master every day.
you should try to prevent merge commits, but not eliminate them.
Your codebase should be clean, but your history should represent what actually happened.
If you rebase code, the history is incorrect, and there is no way for tools to remedy this because they can’t deal with changing commit identifiers
Commit often and push frequently
You should push your feature branch frequently, even when it is not yet ready for review.
A commit message should reflect your intention, not just the contents of the commit.
each merge request must be tested before it is accepted.
test the master branch after each change.
If new commits in master cause merge conflicts with the feature branch, merge master back into the branch to make the CI server re-run the tests.
When creating a feature branch, always branch from an up-to-date master.
Do not merge from upstream again if your code can work and merge cleanly without doing so.
Because AS3 manages topology records globally in /Common, records must only be managed through AS3, as it will treat the records declaratively.
If a record is added outside of AS3, it will be removed if it is not included in the next AS3 declaration for topology records (AS3 completely overwrites non-AS3 topologies when a declaration is submitted).
using AS3 to delete a tenant (for example, sending DELETE to the /declare/<TENANT> endpoint) that contains GSLB topologies will completely remove ALL GSLB topologies from the BIG-IP.
When posting a large declaration (hundreds of application services in a single declaration), you may experience a 500 error stating that the save sys config operation failed.
Even if you have asynchronous mode set to false, after 45 seconds AS3 sets asynchronous mode to true (API swap), and returns an async response.
When creating a new tenant using AS3, it must not use the same name as a partition you separately create on the target BIG-IP system.
If you use the same name and then post the declaration, AS3 overwrites (or removes) the existing partition completely, including all configuration objects in that partition.
If you use AS3 to create a tenant (which creates a BIG-IP partition), manually adding configuration objects to the partition created by AS3 can have unexpected results
When you delete the Tenant using AS3, the system deletes both virtual servers.
if a Firewall_Address_List contains zero addresses, a dummy IPv6 address of ::1:5ee:bad:c0de is added in order to maintain a valid Firewall_Address_List. If an address is added to the list, the dummy address is removed.
use /mgmt/shared/appsvcs/declare?async=true if you have a particularly large declaration which will take a long time to process.
reviewing the Sizing BIG-IP Virtual Editions section (page 7) of Deploying BIG-IP VEs in a Hyper-Converged Infrastructure
To test whether your system has AS3 installed or not, use GET with the /mgmt/shared/appsvcs/info URI.
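For example, with curl; the address and credentials are placeholders:

curl -k -u admin:password https://<big-ip-address>/mgmt/shared/appsvcs/info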
You may find it more convenient to put multi-line texts such as iRules into AS3 declarations by first encoding them in Base64.
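One way to produce a single-line Base64 string to paste into the declaration; the file name is assumed:

openssl base64 -A -in my_irule.tcl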
no matter your BIG-IP user account name, audit logs show all messages from admin and not the specific user name.
Create an additional staging environment that closely resembles the production one
Keep any additional configuration in YAML files under the config/ directory
Rails::Application.config_for(:yaml_file)
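A sketch of config_for usage; the file name and key are assumed:

# given a config/app_settings.yml with an api_endpoint key per environment:
settings = Rails.application.config_for(:app_settings)
settings['api_endpoint']   # => "https://api.example.com"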
Use nested routes to better express the relationship between ActiveRecord models
If you need to nest routes more than 1 level deep, use the shallow: true option
namespaced routes to group related actions
Don't use match to define any routes unless there is a need to map multiple request types among [:get, :post, :patch, :put, :delete] to a single action using the :via option.
Keep the controllers skinny
all the business logic should naturally reside in the model
Share no more than two instance variables between a controller and a view.
using a template
Prefer render plain: over render text
Prefer corresponding symbols to numeric HTTP status codes
without abbreviations
Keep your models for business logic and data-persistence only
Group macro-style methods (has_many, validates, etc) in the beginning of the class definition
Prefer has_many :through to has_and_belongs_to_many
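The canonical shape, using the illustrative models from the Rails guides:

class Physician < ActiveRecord::Base
  has_many :appointments
  has_many :patients, through: :appointments
end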
self[:attribute]
self[:attribute] = value
validates
Keep custom validators under app/validators
Consider extracting custom validators to a shared gem
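A sketch of a custom validator living under app/validators; the regexp is deliberately simplistic:

# app/validators/email_validator.rb
class EmailValidator < ActiveModel::EachValidator
  def validate_each(record, attribute, value)
    unless value =~ /\A[^@\s]+@[^@\s]+\z/
      record.errors[attribute] << (options[:message] || 'is not a valid email')
    end
  end
end

# in the model:
# validates :email, email: true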
it is preferable to make a class method instead which serves the same purpose as the named scope
it returns an ActiveRecord::Relation object
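A sketch of such a class method; the model and association are assumed:

class User < ActiveRecord::Base
  def self.with_orders
    joins(:orders).distinct   # returns an ActiveRecord::Relation, so it chains like a scope
  end
end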
.update_attributes
Override the to_param method of the model
Use the friendly_id gem. It allows creation of human-readable URLs by using some descriptive attribute of the model instead of its id
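Sketches of both options; the model and attribute names are assumed:

# overriding to_param by hand:
class Person < ActiveRecord::Base
  def to_param
    "#{id} #{name}".parameterize   # e.g. "3-jane-doe"
  end
end

# or with friendly_id:
class Person < ActiveRecord::Base
  extend FriendlyId
  friendly_id :name, use: :slugged
end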
Use find_each to iterate over a collection of AR objects
.find_each
Looping through a collection of records from the database (using the all method, for example) is very inefficient since it will try to instantiate all the objects at once
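For example, assuming a User model and a hypothetical mailer:

User.find_each(batch_size: 1000) do |user|
  NewsMailer.weekly(user).deliver_now
end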
always call before_destroy callbacks that perform validation with prepend: true
Define the dependent option to the has_many and has_one associations
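Both points in one sketch; the model, association, and callback name are assumed:

class User < ActiveRecord::Base
  has_many :posts, dependent: :destroy
  before_destroy :ensure_not_last_admin, prepend: true
end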
When persisting AR objects, always use the exception-raising bang! method or handle the method return value.
Avoid string interpolation in queries
this way the param will be properly escaped
Consider using named placeholders instead of positional placeholders
prefer the use of find over where when you need to retrieve a single record by id
prefer the use of find_by over where and find_by_attribute
prefer the use of where.not over SQL
use heredocs with squish
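Sketches of both; table and column names are assumed:

# named placeholders:
Client.where('created_at >= :start_date AND created_at <= :end_date',
             start_date: params[:start_date], end_date: params[:end_date])

# heredoc with squish collapses the SQL onto one line:
User.find_by_sql(<<-SQL.squish)
  SELECT users.id, accounts.plan
  FROM users
  INNER JOIN accounts ON users.account_id = accounts.id
SQL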
Keep the schema.rb (or structure.sql) under version control.
Use rake db:schema:load instead of rake db:migrate to initialize an empty database
Enforce default values in the migrations themselves instead of in the application layer
change_column_default
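A sketch; the table, column, and default are assumed, and the from:/to: form (Rails 5+) keeps the migration reversible — older Rails needs the three-argument form in up/down:

class AddDefaultStatusToOrders < ActiveRecord::Migration[5.0]
  def change
    change_column_default :orders, :status, from: nil, to: 'pending'
  end
end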
imposing data integrity from the Rails app is impossible
use the change method instead of up and down methods.
constructive migrations
If you use models in migrations, make sure you define them so that you don't end up with broken migrations in the future
Don't use non-reversible migration commands in the change method.
In this case, the block will be used by create_table in rollback
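For example, passing the table definition block to drop_table keeps it reversible (table and columns assumed):

class DropUsers < ActiveRecord::Migration
  def change
    drop_table :users do |t|
      t.string :name
      t.timestamps
    end
  end
end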
Never call the model layer directly from a view
Never do complex formatting in the views; export the formatting to a method in the view helper or the model.
When the labels of an ActiveRecord model need to be translated, use the activerecord scope
Separate the texts used in the views from translations of ActiveRecord attributes
Place the locale files for the models in a folder locales/models and the texts used in the views in a folder locales/views
Use the dot-separated keys in the controllers and models
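A sketch of the resulting layout; model and attribute names are assumed:

# config/locales/models/user.en.yml
en:
  activerecord:
    attributes:
      user:
        name: 'Full name'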
Reserve app/assets for custom stylesheets, javascripts, or images
Third party code such as jQuery or bootstrap should be placed in vendor/assets
Provide both HTML and plain-text view templates
config.action_mailer.raise_delivery_errors = true
Use a local SMTP server like Mailcatcher in the development environment
Provide default settings for the host name
The _url methods include the host name and the _path methods don't
_url
Format the from and to addresses properly
default from:
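Sketches of both settings; the host and addresses are placeholders:

# config/environments/production.rb
config.action_mailer.default_url_options = { host: 'your_site.com' }

# app/mailers/user_mailer.rb
class UserMailer < ActionMailer::Base
  default from: 'Your Name <info@your_site.com>'
end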
When sending HTML emails, all styles should be inline
Sending emails while generating the page response should be avoided. It causes delays in loading of the page, and the request can time out if multiple emails are sent.
.start_with?
.end_with?
&.
Configure your timezone accordingly in application.rb
config.active_record.default_timezone = :local
it can only be :utc or :local
Don't use Time.parse
Time.zone.parse
Don't use Time.now
Time.zone.now
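Side by side:

Time.zone.now                           # instead of Time.now
Time.zone.parse('2015-03-02 19:05:37')  # instead of Time.parse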
Put gems used only for development or testing in the appropriate group in the Gemfile
Add all OS X specific gems to a darwin group in the Gemfile, and all Linux specific gems to a linux group
Do not remove the Gemfile.lock from version control.
an example group is a class in which the block passed to describe is evaluated
The blocks passed to it are evaluated in the context of an instance of that class
nested groups using the describe or context methods
can declare example groups using either describe or context
can declare examples within a group using any of it, specify, or example
Declare a shared example group using shared_examples, and then include it in any group using include_examples.
Nearly anything that can be declared within an example group can be declared within a shared example group.
shared_context and include_context.
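A minimal sketch of a shared group and its inclusion; the names are illustrative:

RSpec.shared_examples 'a collection' do
  it 'responds to #size' do
    expect(subject).to respond_to(:size)
  end
end

RSpec.describe Array do
  include_examples 'a collection'
end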
When a class is passed to describe, you can access it from an example using the described_class method
rspec-core stores a metadata hash with every example and group
Example groups are defined by a describe or context block, which is eagerly evaluated when the spec file is loaded
Examples -- typically defined by an it block -- and any other blocks with per-example semantics -- such as a before(:example) hook -- are evaluated in the context of an instance of the example group class to which the example belongs.
Examples are not executed when the spec file is loaded; RSpec does not run any examples until all spec files have been loaded.
Images are saved in the host registry, so we can benefit from Docker layer caching
All jobs will share the same environment; if many of them run simultaneously, they might get into conflicts.
storage management (accumulating images)
The Docker socket binding technique means mounting /var/run/docker.sock as a volume between the host and containers.
all containers would share the same Docker daemon.
Add privileged = true in the [runners.docker] section; privileged mode is mandatory to use DinD.
To allow the runner to run more than one job at a time, change the concurrent value on the first line.
To avoid building a Docker image at each job, it can be built in a first job, pushed to the image registry provided by GitLab, and pulled in the next jobs.
functional tests depending on a database.
Docker Compose allows you to easily start multiple containers, but it has no more features than Docker itself
Docker in Docker works well but has its drawbacks; for example, Docker layer caching requires some extra commands to use.
But if you maintain a CHANGELOG in this format, and/or your Git tags are also your Docker tags, you can get the previous version and use that image version as a cache source.
“Docker layer caching” is enough to optimize the build time.
Cache in CI/CD is about saving directories or files across pipelines.
We're building a Docker image, so dependencies are installed inside a container. We can't cache a dependencies directory if it doesn't exist in the job workspace.
Dependencies will always be installed from a container but will be extracted by the GitLab Runner in the job workspace. Our goal is to send the cached version in the build context.
We set the directories to cache in the job settings with a key to share the cache per branch and stage.
This avoids old dependencies being mixed with the new ones, at the risk of keeping unused dependencies in the cache, which would make the cache and images heavier.
If you need to cache directories in testing jobs, it's easier: use volumes!
version your cache keys!
sharing Docker images between jobs
In every job, we automatically get artifacts from previous stages.
docker save $DOCKER_CI_IMAGE | gzip > app.tar.gz
I personally use the “push / pull” technique: we docker push after the build, then we docker pull if needed in the next jobs.
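A sketch of the push/pull technique in .gitlab-ci.yml; the test command is assumed, and the CI_* variables are GitLab's predefined ones:

stages:
  - build
  - test

build:
  stage: build
  script:
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

test:
  stage: test
  script:
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    - docker pull "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
    - docker run --rm "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" bin/test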
(Recommended) If you have plans to upgrade this single control-plane kubeadm cluster to high availability you should specify the --control-plane-endpoint to set the shared endpoint for all control-plane nodes
set the --pod-network-cidr to a provider-specific value.
kubeadm tries to detect the container runtime by using a list of well known endpoints.
kubeadm uses the network interface associated with the default gateway to set the advertise address for this particular control-plane node's API server.
To use a different network interface, specify the --apiserver-advertise-address=<ip-address> argument to kubeadm init
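Putting those flags together; the endpoint, CIDR, and address are placeholders:

sudo kubeadm init \
  --control-plane-endpoint "k8s-api.example.com:6443" \
  --pod-network-cidr "10.244.0.0/16" \
  --apiserver-advertise-address "192.168.0.10"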
Do not share the admin.conf file with anyone and instead grant users custom permissions by generating them a kubeconfig file using the kubeadm kubeconfig user command.
The token is used for mutual authentication between the control-plane node and the joining nodes. The token included here is secret. Keep it safe, because anyone with this token can add authenticated nodes to your cluster.
You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other.
Cluster DNS (CoreDNS) will not start up before a network is installed.
Take care that your Pod network does not overlap with any of the host networks
Make sure that your Pod network plugin supports RBAC, and so do any manifests that you use to deploy it.
You can install only one Pod network per cluster.
The cluster created here has a single control-plane node, with a single etcd database running on it.
The node-role.kubernetes.io/control-plane label is such a restricted label and kubeadm manually applies it using a privileged client after a node has been created.
By default, your cluster will not schedule Pods on the control plane nodes for security reasons.
remove the node-role.kubernetes.io/control-plane:NoSchedule taint from any nodes that have it, including the control plane nodes, meaning that the scheduler will then be able to schedule Pods everywhere.
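The command from the kubeadm docs (the trailing minus removes the taint):

kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule-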
Ephemeral containers differ from other containers in that they lack guarantees for resources or execution, and they will never be automatically restarted, so they are not appropriate for building applications.
Ephemeral containers are created using a special ephemeralcontainers handler in the API rather than by adding them directly to pod.spec, so it's not possible to add an ephemeral container using kubectl edit
distroless images enable you to deploy minimal container images that reduce attack surface and exposure to bugs and vulnerabilities.
enable process namespace sharing so you can view processes in other containers.
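For example, kubectl debug can attach an ephemeral container to a running pod; the pod, image, and target names are illustrative:

kubectl debug -it my-pod --image=busybox:1.28 --target=my-app-container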
each action also maps to particular CRUD operations in a database
resource :photo and resources :photos create both singular and plural routes that map to the same controller (PhotosController).
One way to avoid deep nesting (as recommended above) is to generate the collection actions scoped under the parent, so as to get a sense of the hierarchy, but to not nest the member actions.
to only build routes with the minimal amount of information to uniquely identify the resource
The shallow method of the DSL creates a scope inside of which every nesting is shallow
These concerns can be used in resources to avoid code duplication and share behavior across routes
add a member route, just add a member block into the resource block
You can leave out the :on option; this will create the same member route except that the resource id value will be available in params[:photo_id] instead of params[:id].
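The member-block form these clips describe, as a sketch:

resources :photos do
  member do
    get 'preview'
  end
end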
Singular Resources
use a singular resource to map /profile (rather than /profile/:id) to the show action
Passing a String to get will expect a controller#action format
workaround
organize groups of controllers under a namespace
route /articles (without the prefix /admin) to Admin::ArticlesController
route /admin/articles to ArticlesController (without the Admin:: module prefix)
Nested routes allow you to capture this relationship in your routing.
helpers take an instance of Magazine as the first parameter (magazine_ads_url(@magazine)).
Resources should never be nested more than 1 level deep.
via the :shallow option
a balance between descriptive routes and deep nesting
:shallow_path prefixes member paths with the specified parameter
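A sketch of shallow nesting:

resources :articles do
  resources :comments, shallow: true
end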
Routing Concerns allows you to declare common routes that can be reused inside other resources and routes
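A sketch following the guide's pattern; the resource names are illustrative:

concern :commentable do
  resources :comments
end

resources :articles, concerns: :commentable
resources :photos, concerns: :commentable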
Rails can also create paths and URLs from an array of parameters.
use url_for with a set of objects
In helpers like link_to, you can specify just the object in place of the full url_for call
insert the action name as the first element of the array
This will recognize /photos/1/preview with GET, and route to the preview action of PhotosController, with the resource id value passed in params[:id]. It will also create the preview_photo_url and preview_photo_path helpers.
pass :on to a route, eliminating the block.
Collection Routes
This will enable Rails to recognize paths such as /photos/search with GET, and route to the search action of PhotosController. It will also create the search_photos_url and search_photos_path route helpers.
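Using the :on shortcut mentioned above:

resources :photos do
  get 'search', on: :collection
end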
simple routing makes it very easy to map legacy URLs to new Rails actions
add an alternate new action using the :on shortcut
When you set up a regular route, you supply a series of symbols that Rails maps to parts of an incoming HTTP request.
:controller maps to the name of a controller in your application
:action maps to the name of an action within that controller
optional parameters, denoted by parentheses
This route will also route the incoming request of /photos to PhotosController#index, since :action and :id are optional parameters.
use a constraint on :controller that matches the namespace you require
dynamic segments don't accept dots
The params will also include any parameters from the query string
:defaults option.
set params[:format] to "jpg"
cannot override defaults via query parameters
specify a name for any route using the :as option
create logout_path and logout_url as named helpers in your application.
Inside the show action of UsersController, params[:username] will contain the username for the user.
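Sketches of both named-route clips:

get 'exit', to: 'sessions#destroy', as: :logout   # creates logout_path / logout_url
get ':username', to: 'users#show', as: :user      # the dynamic segment ends up in params[:username]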
You should use the get, post, put, patch and delete methods to constrain a route to a particular verb.
use the match method with the :via option to match multiple verbs at once
Routing both GET and POST requests to a single action has security implications
'GET' in Rails won't check for CSRF token. You should never write to the database from 'GET' requests
use the :constraints option to enforce a format for a dynamic segment
constraints
don't need to use anchors
Request-Based Constraints
Rails will call the Request object method with the same name as the hash key and then compare the return value with the hash value.
constraint values should match the corresponding Request object method return type
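A one-line sketch of a request-based constraint:

get 'photos', to: 'photos#index', constraints: { subdomain: 'admin' }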
reuse dynamic segments from the match in the path to redirect
this redirection is a 301 "Moved Permanently" redirect.
root method
put the root route at the top of the file
The root route only routes GET requests to the action.
root inside namespaces and scopes
For namespaced controllers you can use the directory notation
Only the directory notation is supported
use the :constraints option to specify a required format on the implicit id
specify a single constraint to apply to a number of routes by using the block
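The block form, using the guide's illustrative pattern:

constraints(id: /[A-Z][A-Z][0-9]+/) do
  resources :photos
  resources :accounts
end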
non-resourceful routes
:id parameter doesn't accept dots
:as option lets you override the normal naming for the named route helpers
use the :as option to prefix the named route helpers that Rails generates for a route
prevent name collisions
prefix routes with a named parameter
This will provide you with URLs such as /bob/articles/1 and will allow you to reference the username part of the path as params[:username] in controllers, helpers and views
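The scope this clip describes:

scope ':username' do
  resources :articles
end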
:only option
:except option
generate only the routes that you actually need can cut down on memory use and speed up the routing process.
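Sketches of both options:

resources :photos, only: [:index, :show]
resources :users, except: :destroy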
alter path names
http://localhost:3000/rails/info/routes
rake routes
setting the CONTROLLER environment variable
Routes should be included in your testing strategy