Refer to the YAML Anchors/Aliases documentation for information about how to alias and reuse syntax to keep your .circleci/config.yml file small.
workflow orchestration with two parallel jobs
jobs run according to configured requirements, each job waiting to start until the required job finishes successfully
requires: key
fans-out to run a set of acceptance test jobs in parallel, and finally fans-in to run a common deploy job.
Holding a Workflow for a Manual Approval
Workflows can be configured to wait for manual approval of a job before
continuing to the next job
add a job to the jobs list with the
key type: approval
approval is a special job type that is only available to jobs under the workflow key
The name of the job to hold is arbitrary - it could be wait or pause, for example,
as long as the job has a type: approval key in it.
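A minimal sketch of such a hold step, assuming a 2.x workflows config; the job names (build, hold, deploy) are illustrative:

```yaml
workflows:
  version: 2
  build-with-approval:
    jobs:
      - build
      - hold:                 # name is arbitrary; could be wait or pause
          type: approval      # pauses the workflow until approved in the app
          requires:
            - build
      - deploy:
          requires:
            - hold
```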
schedule a workflow
to run at a certain time for specific branches.
The triggers key is only added under your workflows key
using cron syntax to represent Coordinated Universal Time (UTC) for specified branches.
By default,
a workflow is triggered on every git push
the commit workflow has no triggers key
and will run on every git push
The nightly workflow has a triggers key
and will run on the specified schedule
Cron step syntax (for example, */1, */20) is not supported.
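A sketch of the commit/nightly pattern described above; the cron string, job names, and branch filter are placeholders:

```yaml
workflows:
  version: 2
  commit:                      # no triggers key: runs on every git push
    jobs:
      - build
  nightly:                     # triggers key: runs only on the schedule
    triggers:
      - schedule:
          cron: "0 0 * * *"    # midnight UTC; step syntax like */20 is not supported
          filters:
            branches:
              only:
                - master
    jobs:
      - build
```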
use a context to share environment variables
use the same shared environment variables when initiated by a user who is part of the organization.
CircleCI does not run workflows for tags
unless you explicitly specify tag filters.
CircleCI branch and tag filters support
the Java variant of regex pattern matching.
Each workflow has an associated workspace which can be used to transfer files to downstream jobs as the workflow progresses.
The workspace is an additive-only store of data.
Jobs can persist data to the workspace
Downstream jobs can attach the workspace to their container filesystem.
Attaching the workspace downloads and unpacks each layer based on the ordering of the upstream jobs in the workflow graph.
Workflows that include jobs running on multiple branches may require data to be shared using workspaces
To persist data from a job and make it available to other jobs, configure the job to use the persist_to_workspace key.
Files and directories named in the paths: property of persist_to_workspace will be uploaded to the workflow’s temporary workspace relative to the directory specified with the root key.
Configure a job to get saved data by configuring the attach_workspace key.
persist_to_workspace
attach_workspace
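A sketch of the persist/attach pair; the image, directory, and file names are placeholders:

```yaml
jobs:
  build:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run: mkdir -p workspace && echo "built" > workspace/artifact.txt
      - persist_to_workspace:
          root: workspace          # paths below are relative to this root
          paths:
            - artifact.txt
  deploy:
    docker:
      - image: cimg/base:stable
    steps:
      - attach_workspace:
          at: /tmp/workspace       # layers are unpacked here in upstream-job order
      - run: cat /tmp/workspace/artifact.txt
```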
To rerun only a workflow’s failed jobs, click the Workflows icon in the app and select a workflow to see the status of each job, then click the Rerun button and select Rerun from failed.
if you do not see your workflows triggering, a configuration error is preventing the workflow from starting.
check your Workflows page of the CircleCI app (not the Job page)
Orbs are packages of config that you either import by name or configure inline to simplify your config, share, and reuse config within and across projects.
Jobs are a collection of Steps.
All of the steps in the job are executed in a single unit which consumes a CircleCI container from your plan while it’s running.
Workspaces persist data between jobs in a single Workflow.
Caching persists data between the same job in different Workflow builds.
Artifacts persist data after a Workflow has finished.
run using the machine executor which enables reuse of recently used machine executor runs,
docker executor which can compose Docker containers to run your tests and any services they require
macos executor
Steps are a collection of executable commands which are run during a job
In addition to the run: key, keys for save_cache:, restore_cache:, deploy:, store_artifacts:, store_test_results: and add_ssh_keys are nested under Steps.
checkout: key is required to checkout your code
run: enables addition of arbitrary, multi-line shell command scripting
orchestrating job runs with parallel, sequential, and manual approval workflows.
Do not use directories as a dependency for generated targets, ever.
Parallel make: add an explicit timestamp dependency (.done) that make can synchronize threaded calls on to avoid a race condition.
Maintain clean targets - makefiles should be able to remove all content that is generated so "make clean" will return the sandbox/directory back to a clean state.
Wrap check/unit tests with an ENABLE_TESTS conditional
Before accessing variables within objects and collections, make sure they are there! PLEASE!
If that variable is a constant or won't be changed then use the Const keyword in applicable languages and the CAPITALISATION convention to make users aware of your decisions about them.
The name of a method is more important than we give it credit for; when a method changes, so should its name.
Make sure you are returning the right thing, and try to make it as generic as possible.
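A small Makefile sketch pulling these conventions together; the codegen step, file names, and variable names are made up for illustration:

```make
ENABLE_TESTS ?= 1
GEN_DIR := generated

# Depend on a timestamp file, not the directory itself, so parallel make (-j)
# has a single target to synchronize on.
$(GEN_DIR)/.done: schema.def
	mkdir -p $(GEN_DIR)
	./codegen schema.def -o $(GEN_DIR)
	touch $@

all: $(GEN_DIR)/.done

# Unit tests wrapped in a conditional so they can be switched off.
ifeq ($(ENABLE_TESTS),1)
check: all
	./run_unit_tests
endif

# "make clean" returns the sandbox/directory to a clean state.
clean:
	rm -rf $(GEN_DIR)

.PHONY: all check clean
```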
Void should do something, not change something!
Private vs Public, this is a big topic
keeping an eye on the access level of a method can stop issues further down the line
Gherkin is a Business Readable, Domain Specific Language created especially for behavior descriptions.
specify the 3 main points of a test, including what you expect to happen, using the keywords GIVEN, WHEN / AND, THEN.
look at how the code is structured; make sure methods aren't too long, don't have too many branches, and check whether for and if statements could be simplified.
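A short illustrative Gherkin scenario (the feature and steps are invented) showing the GIVEN / WHEN / AND / THEN structure:

```gherkin
Feature: User login

  Scenario: Successful login
    Given a registered user with a valid password
    When the user submits the login form
    And the credentials are correct
    Then the user is redirected to the dashboard
```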
Use your initiative and discuss if a rewrite would benefit maintainability for the future.
it's unnecessary to leave commented-out code in place when working in and around areas that contain it.
Views are Python functions that receive an HttpRequest object and return an HttpResponse object.
They receive a request as a parameter and return a response as a result.
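A minimal sketch of such a view; the function name and greeting are illustrative:

```python
from django.http import HttpRequest, HttpResponse

def hello(request: HttpRequest) -> HttpResponse:
    # receives the request as a parameter and returns a response as the result
    return HttpResponse("Hello, world")
```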
An app is a Web application that does something. An app is usually composed of a set of models (database tables), views, templates, and tests.
One project can be composed of multiple apps, or a single app.
deploying code changes at every small iteration, reducing the chance of developing new code based on buggy or failed previous versions
based on automating the execution of scripts to minimize the chance of introducing errors while developing applications.
For every push to the repository, you
can create a set of scripts to build and test your application
automatically, decreasing the chance of introducing errors to your app.
Executors define the environment in which the steps of a job will be run.
Executor declarations in config outside of jobs can be used by all jobs in the scope of that declaration, allowing you to reuse a single executor definition across multiple jobs.
It is also possible to allow an orb to define the executor used by all of its commands.
When invoking an executor in a job, any keys in the job itself will override those of the executor invoked.
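A sketch of a reusable executor shared by two jobs, assuming 2.1 config; the image and job names are placeholders:

```yaml
version: 2.1

executors:
  ruby-executor:               # defined once, outside any job
    docker:
      - image: cimg/ruby:3.0

jobs:
  lint:
    executor: ruby-executor
    steps:
      - checkout
      - run: bundle exec rubocop
  test:
    executor: ruby-executor
    steps:
      - checkout
      - run: bundle exec rspec
```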
Steps are used when you have a job or command that needs to mix predefined and user-defined steps.
Use the enum parameter type when you want to enforce that the value must be one from a specific set of string values.
Use an executor parameter type to allow the invoker of a job to decide what
executor it will run on
invoke the same job more than once in the workflows stanza of config.yml, passing any necessary parameters as subkeys to the job.
If a job is declared inside an orb it can use commands in that orb or the global commands.
To use parameters in executors, define the parameters under the given executor.
Parameters are in-scope only within the job or command that defined them.
A single configuration may invoke a job multiple times.
Every job invocation may optionally accept two special arguments: pre-steps and post-steps.
Pre and post steps allow you to execute steps in a given job
without modifying the job.
conditions are checked before a workflow is actually run
you cannot use a condition to check an environment
variable.
Conditional steps may be located anywhere a regular step could and may only use parameter values as inputs.
A conditional step consists of a step with the key when or unless. Under this conditional key are the subkeys steps and condition
A condition is a single value that evaluates to true or false at the time the config is processed, so you cannot use environment variables as conditions
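A sketch of when/unless driven by a declared parameter, assuming 2.1 config; the parameter and commands are illustrative:

```yaml
version: 2.1

jobs:
  build:
    parameters:
      run_lint:
        type: boolean
        default: false
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - when:                                   # runs only if the condition is true
          condition: << parameters.run_lint >>
          steps:
            - run: make lint
      - unless:                                 # runs only if the condition is false
          condition: << parameters.run_lint >>
          steps:
            - run: echo "lint skipped"
```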
I think the general idea is: if you're an admin you can see all posts, and if not you can only see posts where published = true
use this class from your controller via the policy_scope method:
PostPolicy::Scope.new(current_user, Post).resolve
policy_scope(@user.posts).each
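A sketch of a scope implementing that admin/published rule; the admin? predicate and published column are assumptions about the model:

```ruby
class PostPolicy < ApplicationPolicy
  class Scope < Scope
    def resolve
      if user.admin?                  # admins see every post
        scope.all
      else
        scope.where(published: true)  # everyone else only sees published posts
      end
    end
  end
end
```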
This
method will raise an exception if authorize has not yet been called.
Pundit also adds verify_policy_scoped to your controller. This will raise an exception in the vein of verify_authorized. However, it tracks if policy_scope is used instead of authorize
If you need to conditionally bypass verification, you can use skip_authorization or skip_policy_scope
Having a mechanism that ensures authorization happens allows developers to
thoroughly test authorization scenarios as units on the policy objects
themselves.
Pundit doesn't do anything you couldn't have easily done
yourself. It's a very small library, it just provides a few neat helpers.
all of the policy and scope classes are just plain Ruby classes
rails g pundit:policy post
define a filter that redirects unauthenticated users to the
login page
fail more gracefully
raise Pundit::NotAuthorizedError, "must be logged in" unless user
having Rails handle them as a 403 error and serve a 403 error page.
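One possible way to wire that up, sketched as an ApplicationController rescue; the flash message and redirect target are illustrative:

```ruby
class ApplicationController < ActionController::Base
  include Pundit

  rescue_from Pundit::NotAuthorizedError, with: :user_not_authorized

  private

  def user_not_authorized
    flash[:alert] = "You are not authorized to perform this action."
    redirect_back(fallback_location: root_path)
  end
end
```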
retrieve a policy for a record outside the controller or
view
define a method in your controller called pundit_user
Pundit strongly encourages you to model your application in such a way that the
only context you need for authorization is a user object and a domain model that
you want to check authorization for.
Pundit does not allow you to pass additional arguments to policies
authorization is dependent
on IP address in addition to the authenticated user
create a special class which wraps up both user and IP and passes it to the policy.
set up a permitted_attributes method in your policy
policy(@post).permitted_attributes
permitted_attributes(@post)
Pundit provides a convenient helper method
permit different attributes based on the current action,
If you have defined an action-specific method on your policy for the current action, the permitted_attributes helper will call it instead of calling permitted_attributes on your controller
If you don't have an instance for the first argument to authorize, then you can pass
the class
restart the Rails server
Given there is a policy without a corresponding model / ruby class,
you can retrieve it by passing a symbol
after_action :verify_authorized
It is not some kind of
failsafe mechanism or authorization mechanism.
Pundit will work just fine without
using verify_authorized and verify_policy_scoped
Rails 4 automatically adds the sass-rails, coffee-rails and uglifier
gems to your Gemfile
reduce the number of requests that a browser makes to render a web page
Starting with version 3.1, Rails defaults to concatenating all JavaScript files into one master .js file and all CSS files into one master .css file
In production, Rails inserts an MD5 fingerprint into each filename so that the file is cached by the web browser
The technique sprockets uses for fingerprinting is to insert a hash of the
content into the name, usually at the end.
asset minification or compression
The sass-rails gem is automatically used for CSS compression if included
in Gemfile and no config.assets.css_compressor option is set.
Supported languages include Sass for CSS, CoffeeScript for JavaScript, and ERB for both by default.
When a filename is unique and based on its content, HTTP headers can be set to encourage caches everywhere (whether at CDNs, at ISPs, in networking equipment, or in web browsers) to keep their own copy of the content
asset pipeline is technically no longer a core feature of Rails 4
Rails uses for fingerprinting is to insert a hash of the content into the name, usually at the end
With the asset pipeline, the preferred location for these assets is now the app/assets directory.
Fingerprinting is enabled by default for production and disabled for all other
environments
The files in app/assets are never served directly in production.
Paths are traversed in the order that they occur in the search path
You should use app/assets for
files that must undergo some pre-processing before they are served.
By default .coffee and .scss files will not be precompiled on their own
app/assets is for assets that are owned by the application, such as custom images, JavaScript files or stylesheets.
lib/assets is for your own libraries' code that doesn't really fit into the scope of the application or those libraries which are shared across applications.
vendor/assets is for assets that are owned by outside entities, such as code for JavaScript plugins and CSS frameworks.
Any path under assets/* will be searched
By default these files will be ready to use by your application immediately using the require_tree directive.
By default, this means the files in app/assets take precedence, and will mask corresponding paths in lib and vendor
Sprockets uses files named index (with the relevant extensions) for a special purpose
Rails.application.config.assets.paths
causes turbolinks to check if
an asset has been updated and if so loads it into the page
if you add an erb extension to a CSS asset (for example, application.css.erb), then helpers like asset_path are available in your CSS rules
If you add an erb extension to a JavaScript asset, making it something such as application.js.erb, then you can use the asset_path helper in your JavaScript code
The asset pipeline automatically evaluates ERB
data URI — a method of embedding the image data directly into the CSS file — you can use the asset_data_uri helper.
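A sketch of an ERB-processed stylesheet using those helpers; the file and image names are placeholders:

```erb
/* app/assets/stylesheets/application.css.erb */
.hero {
  background-image: url(<%= asset_path 'hero.png' %>);
}
.icon {
  /* embeds the image data directly into the CSS */
  background: url(<%= asset_data_uri 'icon.png' %>);
}
```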
Sprockets will also look through the paths specified in config.assets.paths,
which includes the standard application paths and any paths added by Rails
engines.
image_tag
the closing tag cannot be of the style -%>
asset_data_uri
app/assets/javascripts/application.js
sass-rails provides -url and -path helpers (hyphenated in Sass,
underscored in Ruby) for the following asset classes: image, font, video, audio,
JavaScript and stylesheet.
Rails.application.config.assets.compress
In JavaScript files, the directives begin with //=
The require_tree directive tells Sprockets to recursively include all JavaScript files in the specified directory into the output.
manifest files contain directives — instructions that tell Sprockets which files to require in order to build a single CSS or JavaScript file.
You should not rely on any particular order among those
Sprockets uses manifest files to determine which assets to include and serve.
the family of require directives prevents files from being included twice in the output
which files to require in order to build a single CSS or JavaScript file
Directives are processed top to bottom, but the order in which files are included by require_tree is unspecified.
In JavaScript files, Sprockets directives begin with //=
If require_self is called more than once, only the last call is respected.
require
directive is used to tell Sprockets the files you wish to require.
You need not supply the extensions explicitly.
Sprockets assumes you are requiring a .js file when done from within a .js
file
paths must be
specified relative to the manifest file
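A typical manifest sketch; the required libraries are placeholders:

```javascript
// app/assets/javascripts/application.js
//= require jquery
//= require jquery_ujs
//= require_tree .
```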
require_directory
Rails 4 creates both app/assets/javascripts/application.js and
app/assets/stylesheets/application.css regardless of whether the
--skip-sprockets option is used when creating a new rails application.
The file extensions used on an asset determine what preprocessing is applied.
app/assets/stylesheets/application.css
Additional layers of preprocessing can be requested by adding other extensions, where each extension is processed in a right-to-left manner
require_self
use the Sass @import rule
instead of these Sprockets directives.
Keep in mind that the order of these preprocessors is important
In development mode, assets are served as separate files in the order they are specified in the manifest file.
when these files are
requested they are processed by the processors provided by the coffee-script
and sass gems and then sent back to the browser as JavaScript and CSS
respectively.
css.scss.erb
js.coffee.erb
Keep in mind the order of these preprocessors is important.
By default Rails assumes that assets have been precompiled and will be served as static assets by your web server
with the Asset Pipeline the :cache and :concat options aren't used anymore
Assets are compiled and cached on the first request after the server is started
Detection criteria are specific to each buildpack – for
instance, an NPM buildpack might look for a package.json, and a Go buildpack might look for Go source files.
A builder is essentially an image containing buildpacks.
Helm will figure out where to install Tiller by reading your Kubernetes
configuration file (usually $HOME/.kube/config). This is the same file
that kubectl uses.
By default, when Tiller is installed, it does not have authentication enabled.
helm repo update
Without a max history set, the history is kept indefinitely, leaving a large number of records for helm and tiller to maintain.
helm init --upgrade
Whenever you install a chart, a new release is created.
one chart can
be installed multiple times into the same cluster. And each can be
independently managed and upgraded.
helm list function will show you a list of all deployed releases.
helm delete
helm status
you
can audit a cluster’s history, and even undelete a release (with helm
rollback).
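A sketch of that release lifecycle on the command line; the chart and release names are placeholders:

```console
$ helm install stable/mariadb        # each install creates a new release
$ helm list                          # all deployed releases
$ helm status happy-panda            # state of one release
$ helm delete happy-panda            # delete it (Helm keeps the record)
$ helm rollback happy-panda 1        # undelete / roll back to revision 1
```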
the Helm
server (Tiller).
The Helm client (helm)
brew install kubernetes-helm
Tiller, the server portion of Helm, typically runs inside of your
Kubernetes cluster.
it can also be run locally, and
configured to talk to a remote Kubernetes cluster.
Role-Based Access Control - RBAC for short
create a service account for Tiller with the right roles and permissions to access resources.
run Tiller in an RBAC-enabled Kubernetes cluster.
run kubectl get pods --namespace
kube-system and see Tiller running.
helm inspect
Helm will look for Tiller in the kube-system namespace unless
--tiller-namespace or TILLER_NAMESPACE is set.
For development, it is sometimes easier to work on Tiller locally, and
configure it to connect to a remote Kubernetes cluster.
even when running locally, Tiller will store release
configuration in ConfigMaps inside of Kubernetes.
helm version should show you both
the client and server version.
Tiller stores its data in Kubernetes ConfigMaps, you can safely
delete and re-install Tiller without worrying about losing any data.
helm reset
The --node-selectors flag allows us to specify the node labels required
for scheduling the Tiller pod.
--override allows you to specify properties of Tiller’s
deployment manifest.
helm init --override manipulates the specified properties of the final
manifest (there is no “values” file).
The --output flag allows us to skip the installation of Tiller’s deployment
manifest and simply output the deployment manifest to stdout in either
JSON or YAML format.
By default, tiller stores release information in ConfigMaps in the namespace
where it is running.
To switch from the default backend to the secrets backend, you’ll have to do the migration for this on your own.
a beta SQL storage backend that stores release
information in an SQL database (only postgres has been tested so far).
Once you have the Helm Client and Tiller successfully installed, you can
move on to using Helm to manage charts.
Helm requires that kubelet have access to a copy of the socat program to proxy connections to the Tiller API.
A Release is an instance of a chart running in a Kubernetes cluster.
One chart can often be installed many times into the same cluster.
helm init --client-only
helm init --dry-run --debug
A panic in Tiller is almost always the result of a failure to negotiate with the
Kubernetes API server
Tiller and Helm have to negotiate a common version to make sure that they can safely
communicate without breaking API assumptions
helm delete --purge
Helm stores some files in $HELM_HOME, which is
located by default in ~/.helm
A Chart is a Helm package. It contains all of the resource definitions
necessary to run an application, tool, or service inside of a Kubernetes
cluster.
Think of it like the Kubernetes equivalent of a Homebrew formula, an Apt dpkg, or a Yum RPM file.
A Repository is the place where charts can be collected and shared.
Set the $HELM_HOME environment variable
each time it is installed, a new release is created.
Helm installs charts into Kubernetes, creating a new release for
each installation. And to find new charts, you can search Helm chart
repositories.
chart repository is named
stable by default
helm search shows you all of the available charts
helm inspect
To install a new package, use the helm install command. At its
simplest, it takes only one argument: The name of the chart.
If you want to use your own release name, simply use the
--name flag on helm install
additional configuration steps you can or
should take.
Helm does not wait until all of the resources are running before it
exits. Many charts require Docker images that are over 600M in size, and
may take a long time to install into the cluster.
helm status
helm inspect
values
helm inspect values stable/mariadb
override any of these settings in a YAML formatted file,
and then pass that file during installation.
helm install -f config.yaml stable/mariadb
--values (or -f): Specify a YAML file with overrides.
--set (and its variants --set-string and --set-file): Specify overrides on the command line.
Values that have been --set can be cleared by running helm upgrade with --reset-values
specified.
Chart
designers are encouraged to consider the --set usage when designing the format
of a values.yaml file.
--set-file key=filepath is another variant of --set.
It reads the file and uses its content as a value.
inject a multi-line text into values without dealing with indentation in YAML.
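A sketch of those override mechanisms; the chart, value keys, and file names are placeholders:

```console
$ helm inspect values stable/mariadb                 # list configurable values
$ helm install -f config.yaml stable/mariadb         # --values: overrides from a YAML file
$ helm install --set mariadbUser=app stable/mariadb  # override on the command line
$ helm install --set-file my.script=script.sh stable/mariadb  # file content becomes the value
```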
An unpacked chart directory
When a new version of a chart is released, or when you want to change
the configuration of your release, you can use the helm upgrade
command.
Kubernetes charts can be large and
complex, Helm tries to perform the least invasive upgrade.
It will only
update things that have changed since the last release
If both are used, --set values are merged into --values with higher precedence.
The helm get command is a useful tool for looking at a release in the
cluster.
helm rollback
A release version is an incremental revision. Every time an install,
upgrade, or rollback happens, the revision number is incremented by 1.
helm history
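A sketch of the upgrade/rollback flow; the release name and values file are placeholders:

```console
$ helm upgrade -f new-values.yaml happy-panda stable/mariadb
$ helm get values happy-panda        # what the release is currently running with
$ helm history happy-panda           # revisions increment on install/upgrade/rollback
$ helm rollback happy-panda 1        # back to revision 1
```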
a release name cannot be
re-used.
you can rollback a
deleted resource, and have it re-activate.
helm repo list
helm repo add
helm repo update
The Chart Development Guide explains how to develop your own
charts.
helm create
helm lint
helm package
Charts that are archived can be loaded into chart repositories.
chart repository server
Tiller can be installed into any namespace.
Limiting Tiller to only be able to install into specific namespaces and/or resource types is controlled by Kubernetes RBAC roles and rolebindings
Release names are unique PER TILLER INSTANCE
Charts should only contain resources that exist in a single namespace.
not recommended to have multiple Tillers configured to manage resources in the same namespace.
a client-side Helm plugin. A plugin is a
tool that can be accessed through the helm CLI, but which is not part of the
built-in Helm codebase.
Helm plugins are add-on tools that integrate seamlessly with Helm. They provide
a way to extend the core feature set of Helm, but without requiring every new
feature to be written in Go and added to the core tool.
Helm plugins live in $(helm home)/plugins
The Helm plugin model is partially modeled on Git’s plugin model
helm is referred to as the porcelain layer, with plugins being the plumbing.
command is the command that this plugin will
execute when it is called.
Environment variables are interpolated before the plugin
is executed.
The command itself is not executed in a shell. So you can’t oneline a shell script.
Helm is able to fetch Charts using HTTP/S
Variables like KUBECONFIG are set for the plugin if they are set in the
outer environment.
In Kubernetes, granting a role to an application-specific service account is a best practice to ensure that your application is operating in the scope that you have specified.
restrict Tiller’s capabilities to install resources to certain namespaces, or to grant a Helm client running access to a Tiller instance.
Service account with cluster-admin role
The cluster-admin role is created by default in a Kubernetes cluster
Deploy Tiller in a namespace, restricted to deploying resources only in that namespace
Deploy Tiller in a namespace, restricted to deploying resources in another namespace
When running a Helm client in a pod, in order for the Helm client to talk to a Tiller instance, it will need certain privileges to be granted.
SSL Between Helm and Tiller
The Tiller authentication model uses client-side SSL certificates.
creating an internal CA, and using both the
cryptographic and identity functions of SSL.
Helm is a powerful and flexible package-management and operations tool for Kubernetes.
default installation applies no security configurations
with a cluster that is well-secured in a private network with no data-sharing or no other users or teams.
With great power comes great responsibility.
Choose the Best Practices you should apply to your helm installation
Role-based access control, or RBAC
Tiller’s gRPC endpoint and its usage by Helm
Kubernetes employs a role-based access control (or RBAC) system (as do modern operating systems) to help mitigate the damage that can be done if credentials are misused or bugs exist.
In the default installation the gRPC endpoint that Tiller offers is available inside the cluster (not external to the cluster) without authentication configuration applied.
Tiller stores its release information in ConfigMaps. We suggest changing the default to Secrets.
release information
charts
charts are a kind of package that not only installs containers you may or may not have validated yourself, but it may also install into more than one namespace.
As with all shared software, in a controlled or shared environment you must validate all software you install yourself before you install it.
Helm’s provenance tools to ensure the provenance and integrity of charts
"Helm will figure out where to install Tiller by reading your Kubernetes configuration file (usually $HOME/.kube/config). This is the same file that kubectl uses."
The persistent data stored in the backend belongs to a workspace.
Certain backends support multiple named workspaces, allowing multiple states
to be associated with a single configuration.
Terraform starts with a single workspace named "default". This
workspace is special both because it is the default and also because
it cannot ever be deleted.
Within your Terraform configuration, you may include the name of the current
workspace using the ${terraform.workspace} interpolation sequence.
changing behavior based
on the workspace.
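A sketch of branching on the workspace name, assuming Terraform 0.12+ expression syntax; the resource, AMI, and sizes are illustrative:

```hcl
resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0"   # placeholder
  # scale everything down outside the default (production) workspace
  count         = terraform.workspace == "default" ? 5 : 1
  instance_type = terraform.workspace == "default" ? "m5.large" : "t2.micro"

  tags = {
    Name = "web-${terraform.workspace}"
  }
}
```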
Named workspaces allow conveniently switching between multiple instances of
a single configuration within its single backend.
A common use for multiple workspaces is to create a parallel, distinct copy of
a set of infrastructure in order to test a set of changes before modifying the
main production infrastructure.
Non-default workspaces are often related to feature branches in version control.
Workspaces alone
are not a suitable tool for system decomposition, because each subsystem should
have its own separate configuration and backend, and will thus have its own
distinct set of workspaces.
In particular, organizations commonly want to create a strong separation
between multiple deployments of the same infrastructure serving different
development stages (e.g. staging vs. production) or different internal teams.
use one or more re-usable modules to
represent the common elements, and then represent each instance as a separate
configuration that instantiates those common elements in the context of a
different backend.
If a Terraform state for one configuration is stored in a remote backend
that is accessible to other configurations then
terraform_remote_state
can be used to directly consume its root module outputs from those other
configurations.
For server addresses, use a provider-specific resource to create a DNS
record with a predictable name and then either use that name directly or
use the dns provider to retrieve
the published addresses in other configurations.
Workspaces are technically equivalent to renaming your state file.
using a remote backend instead is recommended when there are
multiple collaborators.
Because AS3 manages topology records globally in /Common, it is required that records only be managed through AS3, as it will treat the records declaratively.
If a record is added outside of AS3, it will be removed if it is not included in the next AS3 declaration for topology records (AS3 completely overwrites non-AS3 topologies when a declaration is submitted).
using AS3 to delete a tenant (for example, sending DELETE to the /declare/<TENANT> endpoint) that contains GSLB topologies will completely remove ALL GSLB topologies from the BIG-IP.
When posting a large declaration (hundreds of application services in a single declaration), you may experience a 500 error stating that the save sys config operation failed.
Even if you have asynchronous mode set to false, after 45 seconds AS3 sets asynchronous mode to true (API swap), and returns an async response.
When creating a new tenant using AS3, it must not use the same name as a
partition you separately create on the target BIG-IP system.
If you use the
same name and then post the declaration, AS3 overwrites (or removes) the
existing partition completely, including all configuration objects in that
partition.
use AS3 to create a tenant (which creates a BIG-IP partition),
manually adding configuration objects to the partition created by AS3 can
have unexpected results
When you delete the
Tenant using AS3, the system deletes both virtual servers.
if a Firewall_Address_List contains zero addresses, a dummy IPv6 address of ::1:5ee:bad:c0de is added in order to maintain a valid Firewall_Address_List. If an address is added to the list, the dummy address is removed.
use /mgmt/shared/appsvcs/declare?async=true if you have a particularly large declaration which will take a long time to process.
reviewing the Sizing BIG-IP Virtual Editions section (page 7) of Deploying BIG-IP VEs in a Hyper-Converged Infrastructure
To test whether your system has AS3 installed or not, use GET with the /mgmt/shared/appsvcs/info URI.
You may find it more convenient to put multi-line texts such as iRules into
AS3 declarations by first encoding them in Base64.
no matter your BIG-IP user account name, audit logs show all
messages from admin and not the specific user name.
Grape is designed to run on Rack or complement existing web application frameworks such as Rails and Sinatra by providing a simple DSL to easily develop RESTful APIs
Grape APIs are Rack applications that are created by subclassing Grape::API
Rails expects a subdirectory that matches the name of the Ruby module and a file name that matches the name of the class
mount multiple API implementations inside another one
mount on a path, which is similar to using prefix inside the mounted API itself.
four strategies in which clients can reach your API's endpoints: :path,
:header, :accept_version_header and :param
clients should pass the desired version as a request parameter,
either in the URL query string or in the request body.
clients should pass the desired version in the HTTP Accept header
clients should pass the desired version in the URL
clients should pass the desired version in the HTTP Accept-Version header.
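A sketch in the spirit of the Grape README's example, using header versioning; the class, resource, and Status model are assumptions:

```ruby
module Twitter
  class API < Grape::API
    version 'v1', using: :header, vendor: 'twitter'
    format :json

    resource :statuses do
      desc 'Return a public timeline.'
      get :public_timeline do
        Status.limit(20)      # Status is an assumed model
      end
    end
  end
end
```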
add a description to API methods and namespaces
Request parameters are available through the params hash object
Parameters are automatically populated from the request body on POST and PUT
route string parameters will have precedence.
Grape allows you to access only the parameters that have been declared by your params block
By default declared(params) includes parameters that have nil values
all valid types
type: File
JSON objects and arrays of objects are accepted equally
any class can be
used as a type so long as an explicit coercion method is supplied
As a special case, variant-member-type collections may also be declared, by
passing a Set or Array with more than one member to type
Parameters can be nested using group or by calling requires or optional with a block
parameters can be made relevant only if another parameter is given
Parameters options can be grouped
allow_blank can be combined with both requires and optional
Parameters can be restricted to a specific set of values
Parameters can be restricted to match a specific regular expression
Never define mutually exclusive sets with any required params
Namespaces allow parameter definitions and apply to every method within the namespace
define a route parameter as a namespace using route_param
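A sketch combining declared params, a nested group, and a route_param namespace; the API class and field names are invented:

```ruby
class UsersAPI < Grape::API
  format :json

  params do
    requires :name, type: String
    optional :address, type: Hash do      # nested parameters via a block
      requires :city, type: String
    end
  end
  post '/users' do
    declared(params, include_missing: false)   # only declared params come back
  end

  route_param :id, type: Integer do       # route parameter as a namespace
    get do
      { id: params[:id] }
    end
  end
end
```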
create custom validation that use request to validate the attribute
rescue a Grape::Exceptions::ValidationErrors and respond with a custom response or turn the response into well-formatted JSON for a JSON API that separates individual parameters and the corresponding error messages
custom validation messages
Request headers are available through the headers helper or from env in their original form
define requirements for your named route parameters using regular
expressions on namespace or endpoint
route will match only if all requirements are met
mix in a module
define reusable params
using cookies method
a 201 for POST-Requests
204 for DELETE-Requests
200 status code for all other Requests
use status to query and set the actual HTTP Status Code
raising errors with error!
It is crucial to define this endpoint at the very end of your API, as it
literally accepts every request.
rescue_from will rescue the exceptions listed and all their subclasses.
Grape::API provides a logger method which by default will return an instance of the Logger
class from Ruby's standard library.
Grape supports a range of ways to present your data
Grape has built-in Basic and Digest authentication (the given block
is executed in the context of the current Endpoint).
Authentication
applies to the current namespace and any children, but not parents.
Blocks can be executed before or after every API call, using before, after,
before_validation and after_validation
Before and after callbacks execute in the following order
Grape by default anchors all request paths, which means that the request URL
should match from start to end to match
The namespace method has a number of aliases, including: group, resource,
resources, and segment. Use whichever reads the best for your API.
test a Grape API with RSpec by making HTTP requests and examining the response
POST JSON data and specify the correct content-type.
Create an additional staging environment that closely resembles the
production one
Keep any additional configuration in YAML files under the config/ directory
Rails::Application.config_for(:yaml_file)
Use nested routes to express better the relationship between ActiveRecord
models
nest routes more than 1 level deep then use the shallow: true option
namespaced routes to group related actions
Don't use match to define any routes unless there is need to map multiple request types among [:get, :post, :patch, :put, :delete] to a single action using :via option.
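A routes sketch showing shallow nesting and a namespaced group; the resource names are illustrative:

```ruby
Rails.application.routes.draw do
  resources :posts, shallow: true do
    resources :comments                 # avoids deeply nested comment URLs
  end

  namespace :admin do                   # groups related actions
    resources :users
  end
end
```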
Keep the controllers skinny
all the business logic
should naturally reside in the model
Share no more than two instance variables between a controller and a view.
using a template
Prefer render plain: over render text
Prefer corresponding symbols to numeric HTTP status codes
without abbreviations
Keep your models for business logic and data-persistence
only
Group macro-style methods (has_many, validates, etc) in the beginning of
the class definition
Prefer has_many :through to has_and_belongs_to_many
self[:attribute]
self[:attribute] = value
validates
Keep custom validators under app/validators
Consider extracting custom validators to a shared gem
preferable to make a class method instead which serves the
same purpose as the named scope
returns an ActiveRecord::Relation
object
.update_attributes
Override the to_param method of the model
Use the friendly_id gem. It allows creation of human-readable URLs by
using some descriptive attribute of the model instead of its id
find_each to iterate over a collection of AR objects
.find_each
.find_each
Looping through a
collection of records from the database (using the all method, for example)
is very inefficient since it will try to instantiate all the objects at once
always call
before_destroy callbacks that perform validation with prepend: true
Define the dependent option to the has_many and has_one associations
always use the exception raising bang! method or handle the method return value.
When persisting AR objects
Avoid string interpolation in
queries
param will be properly escaped
Consider using named placeholders instead of positional placeholders
use of find over where
when you need to retrieve a single record by id
use of find_by over where and find_by_attribute
use of where.not over SQL
use
heredocs with squish
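A sketch of the progression from interpolation to placeholders; the model and column names are illustrative:

```ruby
# bad - the param is interpolated unescaped
Client.where("orders_count = #{params[:orders]}")

# good - the param is properly escaped
Client.where('orders_count = ?', params[:orders])

# better - named placeholders instead of positional ones
Client.where('created_at >= :start_date AND created_at <= :end_date',
             start_date: params[:start_date], end_date: params[:end_date])
```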
Keep the schema.rb (or structure.sql) under version control.
Use rake db:schema:load instead of rake db:migrate to initialize an empty
database
Enforce default values in the migrations themselves instead of in the
application layer
change_column_default
imposing data integrity from
the Rails app is impossible
use the change method instead of up and down methods.
constructive migrations
use models in migrations, make sure you define them
so that you don't end up with broken migrations in the future
Don't use non-reversible migration commands in the change method.
In this case, the block will be used by create_table on rollback
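Two migration sketches: a constructive change, and a drop_table with a block so it stays reversible; the migration version, table, and column names are invented:

```ruby
# constructive migration: change is automatically reversible
class AddStatusToOrders < ActiveRecord::Migration[5.2]
  def change
    add_column :orders, :status, :string, default: 'pending', null: false
  end
end

# drop_table with a block stays reversible: the block is used by create_table on rollback
class DropLegacyOrders < ActiveRecord::Migration[5.2]
  def change
    drop_table :legacy_orders do |t|
      t.string :code
      t.timestamps
    end
  end
end
```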
Never call the model layer directly from a view
Never make complex formatting in the views, export the formatting to a method
in the view helper or the model.
When the labels of an ActiveRecord model need to be translated, use the
activerecord scope
Separate the texts used in the views from translations of ActiveRecord
attributes
Place the locale files for the models in a folder locales/models
the
texts used in the views in folder locales/views
Use the dot-separated keys in the controllers and models
Reserve app/assets for custom stylesheets, javascripts, or images
Third party code such as jQuery or
bootstrap should be placed in
vendor/assets
Provide both HTML and plain-text view templates
config.action_mailer.raise_delivery_errors = true
Use a local SMTP server like
Mailcatcher in the development
environment
Provide default settings for the host name
The _url methods include the host name and the _path
methods don't
_url
Format the from and to addresses properly
default from:
When sending HTML emails, all styles should be inline
Sending emails while generating the page response should be avoided. It causes
delays in loading of the page and the request can time out if multiple emails are
sent.
.start_with?
.end_with?
&.
Configure your timezone accordingly in application.rb
config.active_record.default_timezone = :local
it can be only :utc or :local
Don't use Time.parse
Time.zone.parse
Don't use Time.now
Time.zone.now
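A few one-line sketches of these preferences; the string and object values are illustrative:

```ruby
Time.zone.parse('2015-03-02 19:05:37')   # instead of Time.parse
Time.zone.now                            # instead of Time.now

'project_lead'.start_with?('project')    # instead of a regex match
'project_lead'.end_with?('lead')
user&.address&.city                      # safe navigation (&.)
```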
Put gems used only for development or testing in the appropriate group in the
Gemfile
Add all OS X specific gems to a darwin group in the Gemfile, and
all Linux specific gems to a linux group
Do not remove the Gemfile.lock from version control.