A machine image is a single static unit that contains a pre-configured
operating system and installed software which is used to quickly create new
running machines.
"A machine image is a single static unit that contains a pre-configured operating system and installed software which is used to quickly create new running machines."
"Celery is an asynchronous task queue/job queue based on distributed message passing. It is focused on real-time operation, but supports scheduling as well.
The execution units, called tasks, are executed concurrently on a single or more worker servers using multiprocessing, Eventlet, or gevent. Tasks can execute asynchronously (in the background) or synchronously (wait until ready)."
"The YubiKey 4 is the strong authentication bullseye the industry has been aiming at for years, enabling one single key to secure an unlimited number of applications.
Yubico's 4th generation YubiKey is built on high-performance secure elements. It includes the same range of one-time password and public key authentication protocols as in the YubiKey NEO, excluding NFC, but with stronger public/private keys, faster crypto operations and the world's first touch-to-sign feature.
With the YubiKey 4 platform, we have further improved our manufacturing and ordering process, enabling customers to order exactly what functions they want in 500+ unit volumes, with no secrets stored at Yubico or shared with a third-party organization. The best part? An organization can securely customize 1,000 YubiKeys in less than 10 minutes.
For customers who require NFC, the YubiKey NEO is our full-featured key with both contact (USB) and contactless (NFC, MIFARE) communications."
rails dbconsole figures out which database you're using and drops you into whichever command line interface you would use with it
The console command lets you interact with your Rails application from the command line. Under the hood, rails console uses IRB
rake about gives information about version numbers for Ruby, RubyGems, Rails, the Rails subcomponents, your application's folder, the current Rails environment name, your app's database adapter, and schema version
You can precompile the assets in app/assets using rake assets:precompile and remove those compiled assets using rake assets:clean.
rake db:version is useful when troubleshooting
The doc: namespace has the tools to generate documentation for your app: API documentation and guides.
rake notes will search through your code for comments beginning with FIXME, OPTIMIZE or TODO.
You can also use custom annotations in your code and list them using rake notes:custom by specifying the annotation using an environment variable ANNOTATION.
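For example (the annotation name is illustrative): rake notes:custom ANNOTATION=REVIEW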
rake routes will list all of your defined routes, which is useful for tracking down routing problems in your app, or giving you a good overview of the URLs in an app you're trying to get familiar with.
rake secret will give you a pseudo-random key to use for your session secret.
Custom rake tasks have a .rake extension and are placed in
Rails.root/lib/tasks.
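A minimal sketch of such a task (the file name, namespace, and User model are illustrative):
# lib/tasks/report.rake
namespace :report do
  desc "Print a one-line summary of the users table"
  task users: :environment do
    puts "#{User.count} users"
  end
end
Run it with rake report:users; the desc line is what shows up when you list tasks.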
rails new . --git --database=postgresql
All commands can run with -h or --help to list more information
The rails server command launches a small web server named WEBrick which comes bundled with Ruby
rails server -e production -p 4000
You can run a server as a daemon by passing a -d option
The rails generate command uses templates to create a whole lot of things.
Using generators will save you a large amount of time by writing boilerplate code, code that is necessary for the app to work.
With a normal, plain-old Rails application, your URLs will generally follow the pattern of http://(host)/(controller)/(action), and a URL like http://(host)/(controller) will hit the index action of that controller.
A scaffold in Rails is a full set of model, database migration for that model, controller to manipulate it, views to view and manipulate the data, and a test suite for each of the above.
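For example (model name and fields are illustrative): rails generate scaffold Post title:string body:text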
Unit tests are code that tests and makes assertions about code.
Unit tests are your friend.
rails console --sandbox
rails db
Each task has a description, and should help you find the thing you need.
rake tmp:clear clears all three: cache, sessions, and sockets.
Communication between pods is more complicated, however, and requires a separate networking component that can transparently route traffic from a pod on one node to a pod on another.
Pod network plugins: for this cluster, you will use Flannel, a stable and performant option.
Passing the argument --pod-network-cidr=10.244.0.0/16 specifies the private subnet that the pod IPs will be assigned from.
kubectl apply -f descriptor.[yml|json] is the syntax for telling kubectl to create the objects described in the descriptor.[yml|json] file.
deploy Nginx using Deployments and Services
A deployment is a type of Kubernetes object that ensures there's always a specified number of pods running based on a defined template, even if the pod crashes during the cluster's lifetime.
NodePort, a scheme that will make the pod accessible through an arbitrary port opened on each node of the cluster
Services are another type of Kubernetes object that expose cluster internal services to clients, both internal and external.
load balancing requests to multiple pods
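A minimal sketch of such a descriptor (names, image tag, replica count, and ports are illustrative) that could be applied with kubectl apply -f nginx.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
The Service selects the Deployment's pods by label and load-balances requests across them on a node port.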
Pods are ubiquitous in Kubernetes, so understanding them will facilitate your work
It is also important to understand how controllers such as Deployments work, since they are used frequently in stateless applications for scaling and the automated healing of unhealthy applications.
Understanding the types of services and the options they have is essential for running both stateless and stateful applications.
Pods are the smallest deployable units of computing
A Pod (as in a pod of whales or pea pod) is a group of one or more containers (a lightweight and portable executable image that contains software and all of its dependencies), such as Docker containers, with shared storage/network, and a specification for how to run the containers.
A Pod’s contents are always co-located and
co-scheduled, and run in a shared context.
A Pod models an application-specific “logical host”: it contains one or more application containers which are relatively tightly coupled. In a pre-container world, being executed on the same physical or virtual machine would mean being executed on the same logical host.
The shared context of a Pod is a set of Linux namespaces, cgroups, and
potentially other facets of isolation
Containers within a Pod share an IP address and port space, and
can find each other via localhost
Containers in different Pods have distinct IP addresses
and can not communicate by IPC without
special configuration.
These containers usually communicate with each other via Pod IP addresses.
Applications within a Pod also have access to shared volumes (a directory containing data, accessible to the containers in a pod), which are defined as part of a Pod and are made available to be mounted into each application’s filesystem.
a Pod is modelled as
a group of Docker containers with shared namespaces and shared filesystem
volumes
Pods are considered to be relatively
ephemeral (rather than durable) entities.
Pods are created, assigned a unique ID (UID), and
scheduled to nodes where they remain until termination (according to restart
policy) or deletion.
it can be replaced by an identical Pod
When something is said to have the same lifetime as a Pod, such as a volume,
that means that it exists as long as that Pod (with that UID) exists.
uses a persistent volume for shared storage between the containers
Pods serve as units of deployment, horizontal scaling, and replication
The applications in a Pod all use the same network namespace (same IP and port
space), and can thus “find” each other and communicate using localhost
flat shared networking space
Containers within the Pod see the system hostname as being the same as the configured
name for the Pod.
Volumes enable data to survive
container restarts and to be shared among the applications within the Pod.
Individual Pods are not intended to run multiple instances of the same
application
The individual containers may be
versioned, rebuilt and redeployed independently.
Pods aren’t intended to be treated as durable entities.
Controllers like StatefulSet
can also provide support to stateful Pods.
When a user requests deletion of a Pod, the system records the intended grace period before the Pod is allowed to be forcefully killed, and a TERM signal is sent to the main process in each container.
Once the grace period has expired, the KILL signal is sent to those processes, and the Pod is then deleted from the API server.
grace period
The Pod is removed from the endpoints list for the service, and is no longer considered part of the set of running Pods for replication controllers.
When the grace period expires, any processes still running in the Pod are killed with SIGKILL.
By default, all deletes are graceful within 30 seconds.
You must specify an additional flag --force along with --grace-period=0 in order to perform force deletions.
Force deletion of a Pod is defined as deletion of a Pod from the cluster state and etcd immediately.
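For example (the pod name is illustrative): kubectl delete pod my-pod --grace-period=0 --force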
StatefulSet Pods
Processes within the container get almost the same privileges that are available to processes outside a container.
Microservices also bring a set of additional benefits, such as easier scaling, the possibility to use multiple programming languages and technologies, and others.
Java is a frequent choice for building a microservices architecture as it is a mature language tested over decades and has a multitude of microservices-favorable frameworks, such as legendary Spring, Jersey, Play, and others.
A monolithic architecture keeps it all simple. An app has just one server and one database.
All the connections between units are inside-code calls.
We split our application into microservices and got a set of units that are completely independent for deployment and maintenance.
Each of microservices responsible for a certain business function communicates either via sync HTTP/REST or async AMQP protocols.
ensure seamless communication between newly created distributed components.
The gateway became an entry point for all clients’ requests.
We also set up the Zuul 2 framework for our gateway service so that the application could leverage the benefits of non-blocking HTTP calls.
We've also implemented the Eureka server as our service discovery; it keeps a list of the utilized user profile and order servers to help them discover each other.
We also have a message broker (RabbitMQ) as an intermediary between the notification server and the rest of the servers to allow async messaging in-between.
microservices can definitely help when it comes to creating complex applications that deal with huge loads and need continuous improvement and scaling.
models directory is meant to hold tests for your models
controllers directory is meant to hold tests for your controllers
integration directory is meant to hold tests that involve any number of controllers interacting
Fixtures are a way of organizing test data; they reside in the fixtures folder
The test_helper.rb file holds the default configuration for your tests
Fixtures allow you to populate your testing database with predefined data before your tests run
Fixtures are database independent and written in YAML.
one file per model.
Each fixture is given a name followed by an indented list of colon-separated key/value pairs.
Keys which resemble YAML keywords such as 'yes' and 'no' are quoted so that the YAML Parser correctly interprets them.
define a reference node between two different fixtures.
ERB allows you to embed Ruby code within templates
The YAML fixture format is pre-processed with ERB when Rails loads fixtures.
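A sketch of a fixture file (test/fixtures/users.yml; the model and attributes are illustrative) showing a quoted keyword-like value and an ERB loop:
david:
  name: David
  admin: 'no'

<% 3.times do |i| %>
user_<%= i %>:
  name: User <%= i %>
  admin: 'no'
<% end %>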
Rails by default automatically loads all fixtures from the test/fixtures folder for your model and controller tests.
Fixtures are instances of Active Record.
access the object directly
test_helper.rb specifies the default configuration to run our tests. This is included with all the tests, so any methods added to this file are available to all your tests.
test with method names prefixed with test_.
An assertion is a line of code that evaluates an object (or expression) for expected results.
bin/rake db:test:prepare
Every test contains one or more assertions. Only when all the assertions are successful will the test pass.
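A minimal sketch of such a test (assumes a User model that validates the presence of name):
require 'test_helper'

class UserTest < ActiveSupport::TestCase
  # the test macro defines a method named test_user_requires_a_name
  test "user requires a name" do
    user = User.new
    assert_not user.save, "Saved a user without a name"
  end
end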
rake test command
run a particular test method from the test case by running the test and providing the test method name.
The . (dot) above indicates a passing test. When a test fails you see an F; when a test throws an error you see an E in its place.
we first wrote a test which fails for a desired functionality, then we wrote some code which adds the functionality and finally we ensured that our test passes. This approach to software development is referred to as Test-Driven Development (TDD).
The single responsibility principle asserts that every class should have exactly one responsibility. In other words, each class should be concerned about one unique nugget of functionality
fat models are a little better than fat controllers
when every bit of functionality has been encapsulated into its own object, you find yourself repeating code a lot less.
"LESS TIME TESTING.
MORE TIME INNOVATING.
Accelerate your software development process using the world's
largest automated testing cloud for web and mobile applications
FREE TRIAL "
Serverless was first used to describe applications that significantly or fully
depend on 3rd party applications / services (‘in the cloud’) to manage server-side
logic and state.
‘rich client’ applications (think single page
web apps, or mobile apps) that use the vast ecosystem of cloud accessible
databases (like Parse, Firebase), authentication services (Auth0, AWS Cognito),
etc.
Serverless can also mean applications where some amount of server-side logic
is still written by the application developer but unlike traditional architectures
is run in stateless compute containers that are event-triggered, ephemeral (may
only last for one invocation), and fully managed by a 3rd party.
‘Functions as a Service’
AWS Lambda is one of the most popular implementations of FaaS at present,
A good example is
Auth0 - they started initially with BaaS ‘Authentication
as a Service’, but with Auth0 Webtask they are entering the
FaaS space.
a typical ecommerce app
a backend data-processing service
with zero administration.
FaaS offerings do not require coding to a specific framework or
library.
Horizontal scaling is completely automatic, elastic, and managed by the
provider
Functions in FaaS are triggered by event types defined by the provider.
a FaaS-supported message broker
from a
deployment-unit point of view FaaS functions are stateless.
allowed the client direct access to a
subset of our database
deleted the authentication logic in the original application and have
replaced it with a third party BaaS service
The client is in fact well on its way to becoming a Single Page Application.
implement a FaaS function that responds to http requests via an
API Gateway
port the search code from the Pet Store server to the Pet Store Search
function
replaced a long lived consumer application with a
FaaS function that runs within the event driven context
The lack of self-managed, long-lived server applications is a key difference when comparing with other modern architectural trends like containers and PaaS
the only code that needs to
change when moving to FaaS is the ‘main method / startup’ code, in that it is
deleted, and likely the specific code that is the top-level message handler
(the ‘message listener interface’ implementation), but this might only be a change
in method signature
With FaaS you need to write the function ahead of time to assume parallelism
Most providers also allow functions to be triggered as a response to inbound
http requests, typically in some kind of API gateway
you should assume that for any given
invocation of a function none of the in-process or host state that you create
will be available to any subsequent invocation.
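As an illustration only (this sketch assumes AWS Lambda's Ruby runtime, whose handler receives event: and context: keyword arguments), in-process state such as the counter below may survive on a warm container but must be treated as a disposable cache, never as durable state:
require 'json'

$invocations = 0  # may be reused on a warm container, may be reset at any time

def handler(event:, context:)
  $invocations += 1
  { statusCode: 200, body: JSON.generate(invocations_seen_by_this_container: $invocations) }
end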
FaaS functions are either naturally stateless, or they use a database, a cross-application cache, or network file storage to store state across requests or to provide further input needed to handle a request.
certain classes of long lived task are not suited to FaaS
functions without re-architecture
if you were writing a
low-latency trading application you probably wouldn’t want to use FaaS systems
at this time
An
API Gateway is an HTTP server where routes / endpoints are defined in
configuration and each route is associated with a FaaS function.
API
Gateway will allow mapping from http request parameters to inputs arguments
for the FaaS function
API Gateways may also perform authentication, input validation,
response code mapping, etc.
the Serverless Framework makes working
with API Gateway + Lambda significantly easier than using the first principles
provided by AWS.
Apex - a project to
‘Build, deploy, and manage AWS Lambda functions with ease.'
'Serverless'
to mean the union of a couple of other ideas - 'Backend as a Service' and
'Functions as a Service'.
Orbs are packages of config that you either import by name or configure inline, in order to simplify, share, and reuse config within and across projects.
Jobs are a collection of Steps.
All of the steps in the job are executed in a single unit which consumes a CircleCI container from your plan while it’s running.
Workspaces persist data between jobs in a single Workflow.
Caching persists data between the same job in different Workflow builds.
Artifacts persist data after a Workflow has finished.
Jobs can run using the machine executor, which enables reuse of recently used machine executor runs; the docker executor, which can compose Docker containers to run your tests and any services they require; or the macos executor.
Steps are a collection of executable commands which are run during a job
In addition to the run: key, keys for save_cache:, restore_cache:, deploy:, store_artifacts:, store_test_results: and add_ssh_keys are nested under Steps.
checkout: key is required to checkout your code
run: enables addition of arbitrary, multi-line shell command scripting
orchestrating job runs with parallel, sequential, and manual approval workflows.
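A minimal sketch of a .circleci/config.yml using these keys (image, cache key, and commands are illustrative):
version: 2.1
jobs:
  build:
    docker:
      - image: circleci/ruby:2.6
    steps:
      - checkout
      - restore_cache:
          keys:
            - gems-{{ checksum "Gemfile.lock" }}
      - run: bundle install --path vendor/bundle
      - save_cache:
          key: gems-{{ checksum "Gemfile.lock" }}
          paths:
            - vendor/bundle
      - run: bundle exec rake test
      - store_test_results:
          path: test/reports
workflows:
  build_and_test:
    jobs:
      - build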
Do not use directories as a dependency for generated targets, ever.
Parallel make: add an explicit timestamp dependency (.done) that make can synchronize threaded calls on to avoid a race condition.
Maintain clean targets - makefiles should be able to remove all content that is generated so "make clean" will return the sandbox/directory back to a clean state.
Wrap check/unit tests with an ENABLE_TESTS conditional
In the swarm mode model, each task invokes
exactly one container
A task is analogous to a “slot” where the scheduler
places a container.
A task is the atomic unit of scheduling within a swarm.
A task is a one-directional mechanism. It progresses monotonically through a
series of states: assigned, prepared, running, etc.
Docker swarm mode is a general purpose scheduler and
orchestrator.
Hypothetically, you could implement other types of
tasks such as virtual machine tasks or non-containerized process tasks.
If all nodes are paused or drained, and you create a service, it is
pending until a node becomes available.
reserve a specific amount of memory for a service.
impose placement constraints on the service
As the administrator of
a swarm, you declare the desired state of your swarm, and the manager works with
the nodes in the swarm to create that state.
two types of service deployments, replicated and global.
A global service is a service that runs one task on every node.
Good candidates for global services are monitoring agents, anti-virus scanners, or other types of containers that you want to run on every node in the swarm.
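For example (service and image names are illustrative): docker service create --mode global --name log-agent my-logging-image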
I think the general idea is: if the user is an admin they can see all posts; if not, they can only see posts with published = true.
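A sketch of a policy scope that matches this, following the example in Pundit's own README (the admin? method and published column are assumptions about the models):
class PostPolicy
  class Scope
    attr_reader :user, :scope

    def initialize(user, scope)
      @user  = user
      @scope = scope
    end

    def resolve
      if user.admin?
        scope.all                      # admins see every post
      else
        scope.where(published: true)   # everyone else sees only published posts
      end
    end
  end
end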
use this class from your controller via the policy_scope method:
PostPolicy::Scope.new(current_user, Post).resolve
policy_scope(@user.posts).each
The verify_authorized method will raise an exception if authorize has not yet been called.
verify_policy_scoped to your controller. This
will raise an exception in the vein of verify_authorized. However, it tracks
if policy_scope is used instead of authorize
If you need to conditionally bypass verification, you can use skip_authorization
skip_policy_scope
Having a mechanism that ensures authorization happens allows developers to
thoroughly test authorization scenarios as units on the policy objects
themselves.
Pundit doesn't do anything you couldn't have easily done
yourself. It's a very small library, it just provides a few neat helpers.
all of the policy and scope classes are just plain Ruby classes
rails g pundit:policy post
define a filter that redirects unauthenticated users to the
login page
fail more gracefully
raise Pundit::NotAuthorizedError, "must be logged in" unless user
having Rails handle them as a 403 error and serve a 403 error page.
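One common way to fail gracefully (a sketch; the flash message and redirect target are illustrative):
class ApplicationController < ActionController::Base
  include Pundit
  rescue_from Pundit::NotAuthorizedError, with: :user_not_authorized

  private

  def user_not_authorized
    flash[:alert] = "You are not authorized to perform this action."
    redirect_to(request.referrer || root_path)
  end
end
Alternatively, Pundit's documentation suggests mapping the error to a 403 with config.action_dispatch.rescue_responses["Pundit::NotAuthorizedError"] = :forbidden in the application config.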
retrieve a policy for a record outside the controller or
view
define a method in your controller called pundit_user
Pundit strongly encourages you to model your application in such a way that the
only context you need for authorization is a user object and a domain model that
you want to check authorization for.
Pundit does not allow you to pass additional arguments to policies
authorization is dependent
on IP address in addition to the authenticated user
create a special class which wraps up both user and IP and passes it to the policy.
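A sketch of such a wrapper, modelled on the example in Pundit's documentation (class and attribute names are illustrative):
class UserContext
  attr_reader :user, :ip

  def initialize(user, ip)
    @user = user
    @ip   = ip
  end
end

# In the controller, hand the wrapper to Pundit instead of the bare user:
def pundit_user
  UserContext.new(current_user, request.ip)
end
Policies then receive the UserContext instance as their user argument.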
set up a permitted_attributes method in your policy
policy(@post).permitted_attributes
permitted_attributes(@post)
Pundit provides a convenient helper method
permit different attributes based on the current action,
If you have defined an action-specific method on your policy for the current action, the permitted_attributes helper will call it instead of calling permitted_attributes on your controller
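A sketch of how the policy and controller fit together (attribute names and the admin? check are assumptions; ApplicationPolicy is the base class the Pundit generator creates):
class PostPolicy < ApplicationPolicy
  def permitted_attributes
    if user.admin?
      [:title, :body, :tag_list]
    else
      [:title, :body]
    end
  end
end

class PostsController < ApplicationController
  def update
    @post = Post.find(params[:id])
    authorize @post
    @post.update(permitted_attributes(@post))  # helper delegates to the policy
    redirect_to @post
  end
end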
If you don't have an instance for the first argument to authorize, then you can pass
the class
restart the Rails server
Given there is a policy without a corresponding model / ruby class,
you can retrieve it by passing a symbol
after_action :verify_authorized
It is not some kind of
failsafe mechanism or authorization mechanism.
Pundit will work just fine without
using verify_authorized and verify_policy_scoped
you can have your deployment script create a tag on each deployment.
Commits only flow downstream, which ensures that everything is tested in all environments.
When you are ready to code, create a branch for the issue from the master branch.
This branch is the place for any work related to this change.
A merge request is an online place to discuss the change and review the code.
To automatically close linked issues, mention them with the words “fixes” or “closes,” for example, “fixes #14” or “closes #67.” GitLab closes these issues when the code is merged into the default branch.
always use the “no fast-forward” (--no-ff) strategy when you merge manually.
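For example (the branch name is illustrative): git merge --no-ff feature/invite-users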
you should try to avoid merge commits in feature branches
The reasons for merging master into a feature branch are utilizing new code, resolving merge conflicts, and updating long-running branches. If you only need one specific change from master, you can often solve it by just cherry-picking a commit.
If your feature branch has a merge conflict, creating a merge commit is a standard way of solving this.
Splitting up work into individual commits provides context for developers looking at your code later.
Testing before merging
When using GitLab flow, developers create their branches from this master branch, so it is essential that it never breaks.
GitLab flow is a clearly defined set of best practices.
It combines feature-driven development and feature branches with issue tracking.
In Git, you add files from the working copy to the staging area. After that, you commit them to your local repo.
The third step is pushing to a shared remote repository.
The biggest problem is that many long-running branches emerge that all contain part of the changes.
It is a convention to call your default branch master and to mostly branch from and merge to this.
Nowadays, most organizations practice continuous delivery, which means that your default branch can be deployed.
Continuous delivery removes the need for hotfix and release branches, including all the ceremony they introduce.
Merging everything into the master branch and frequently deploying means you minimize the amount of unreleased code, which is in line with lean and continuous delivery best practices.
GitHub flow assumes you can deploy to production every time you merge a feature branch.
You can deploy a new version by merging master into the production branch.
If you need to know what code is in production, you can just checkout the production branch to see.
Production branch
Environment branches
have an environment that is automatically updated to the master branch.
deploy the master branch to staging.
To deploy to pre-production, create a merge request from the master branch to the pre-production branch.
Go live by merging the pre-production branch into the production branch.
Release branches
work with release branches if you need to release software to the outside world.
each branch contains a minor version
After announcing a release branch, only add serious bug fixes to the branch.
merge these bug fixes into master, and then cherry-pick them into the release branch.
Merging into master and then cherry-picking into release is called an “upstream first” policy
Tools such as GitHub and Bitbucket choose the name “pull request” since the first manual action is to pull the feature branch.
Tools such as GitLab and others choose the name “merge request” since the final action is to merge the feature branch.
If you work on a feature branch for more than a few hours, it is good to share the intermediate result with the rest of the team.
the merge request automatically updates when new commits are pushed to the branch.
If the assigned person does not feel comfortable, they can request more changes or close the merge request without merging.
In GitLab, it is common to protect the long-lived branches, e.g., the master branch, so that most developers can’t modify them.
if you want to merge into a protected branch, assign your merge request to someone with maintainer permissions.
After you merge a feature branch, you should remove it from the source control software.
Having a reason for every code change helps to inform the rest of the team and to keep the scope of a feature branch small.
If there is no issue yet, create the issue
The issue title should describe the desired state of the system.
For example, the issue title “As an administrator, I want to remove users without receiving an error” is better than “Admin can’t remove users.”
create a branch for the issue from the master branch
If you open the merge request but do not assign it to anyone, it is a “Work In Progress” merge request.
Start the title of the merge request with [WIP] or WIP: to prevent it from being merged before it’s ready.
When they press the merge button, GitLab merges the code and creates a merge commit that makes this event easily visible later on.
Merge requests always create a merge commit, even when the branch could be merged without one.
This merge strategy is called “no fast-forward” in Git.
Suppose that a branch is merged but a problem occurs and the issue is reopened.
In this case, it is no problem to reuse the same branch name since the first branch was deleted when it was merged.
At any time, there is at most one branch for every issue.
It is possible that one feature branch solves more than one issue.
GitLab closes these issues when the code is merged into the default branch.
If you have an issue that spans across multiple repositories, create an issue for each repository and link all issues to a parent issue.
use an interactive rebase (rebase -i) to squash multiple commits into one or reorder them.
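For example, to squash or reorder the last three commits (the count is illustrative): git rebase -i HEAD~3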
you should never rebase commits you have pushed to a remote server.
Rebasing creates new commits for all your changes, which can cause confusion because the same change would have multiple identifiers.
if someone has already reviewed your code, rebasing makes it hard to tell what changed since the last review.
never rebase commits authored by other people.
it is a bad idea to rebase commits that you have already pushed.
If you revert a merge commit and then change your mind, revert the revert commit to redo the merge.
Often, people avoid merge commits by just using rebase to reorder their commits after the commits on the master branch.
Using rebase prevents a merge commit when merging master into your feature branch, and it creates a neat linear history.
every time you rebase, you have to resolve similar conflicts.
Sometimes you can reuse recorded resolutions (rerere), but merging is better since you only have to resolve conflicts once.
A good way to prevent creating many merge commits is to not frequently merge master into the feature branch.
keep your feature branches short-lived.
Most feature branches should take less than one day of work.
If your feature branches often take more than a day of work, try to split your features into smaller units of work.
You could also use feature toggles to hide incomplete features so you can still merge back into master every day.
you should try to prevent merge commits, but not eliminate them.
Your codebase should be clean, but your history should represent what actually happened.
If you rebase code, the history is incorrect, and there is no way for tools to remedy this because they can’t deal with changing commit identifiers
Commit often and push frequently
You should push your feature branch frequently, even when it is not yet ready for review.
A commit message should reflect your intention, not just the contents of the commit.
each merge request must be tested before it is accepted.
test the master branch after each change.
If new commits in master cause merge conflicts with the feature branch, merge master back into the branch to make the CI server re-run the tests.
When creating a feature branch, always branch from an up-to-date master.
Do not merge from upstream again if your code can work and merge cleanly without doing so.