The returned certificate is the public certificate (which includes the public key but not the private key), which itself can be in a couple of formats.
Privacy Enhanced Mail (PEM): a failed method for secure email, but the container format it used lives on. It may include just the public certificate (such as with Apache installs, and the CA certificate files in /etc/ssl/certs), or may include an entire certificate chain including public key, private key, and root certificates.
A .key file is a PEM-formatted file containing just the private key of a specific certificate; the extension is merely a conventional name, not a standardized one.
The permissions on these files are very important; they conventionally live in /etc/ssl/private.
OpenSSL can convert these to .pem
.cert / .cer / .crt: a .pem (or rarely .der) formatted file with a different extension
there are four different ways to present certificates and their components
PEM is used preferentially by open-source software and can have a variety of extensions (.pem, .key, .cer, .cert, and more).
DER, the parent format of PEM, is a binary version of the base64-encoded PEM file.
PEM on its own isn't a certificate, it's just a way of encoding data.
X.509 certificates are one type of data that is commonly encoded using PEM.
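As noted above, OpenSSL can convert between the two encodings; for example (file names here are placeholders):

    openssl x509 -in cert.pem -outform der -out cert.der    # PEM to DER
    openssl x509 -in cert.der -inform der -out cert.pem     # DER to PEM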
nginx "fails fast" when the client informs it that it's going to send a body larger than the client_max_body_size by sending a 413 response and closing the connection.
Because nginx closes the connection, the client sends data to the closed socket, causing a TCP RST.
Most clients don't read responses until the entire request body is sent.
The client body and buffer settings are key because nginx must buffer incoming request data.
The clean setting (client_body_in_file_only clean) limits memory consumption by instructing nginx to write the incoming body buffer to a file on disk, then delete that file once it is no longer needed.
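A minimal sketch of the directives involved (values are illustrative, not recommendations):

    client_max_body_size 8m;         # reject declared bodies larger than this with 413
    client_body_buffer_size 16k;     # in-memory buffer before spilling to disk
    client_body_in_file_only clean;  # buffer the body to a temp file, delete it afterwards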
rails dbconsole figures out which database you're using and drops you into whichever command line interface you would use with it
The console command lets you interact with your Rails application from the command line. Under the hood, rails console uses IRB
rake about gives information about version numbers for Ruby, RubyGems, Rails, the Rails subcomponents, your application's folder, the current Rails environment name, your app's database adapter, and schema version
You can precompile the assets in app/assets using rake assets:precompile and remove those compiled assets using rake assets:clean.
rake db:version prints the current schema version, which is useful when troubleshooting
The doc: namespace has the tools to generate documentation for your app, API documentation, and guides.
rake notes will search through your code for comments beginning with FIXME, OPTIMIZE or TODO.
You can also use custom annotations in your code and list them using rake notes:custom by specifying the annotation using an environment variable ANNOTATION.
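For example, to list BUG annotations:

    rake notes:custom ANNOTATION=BUG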
rake routes will list all of your defined routes, which is useful for tracking down routing problems in your app, or giving you a good overview of the URLs in an app you're trying to get familiar with.
rake secret will give you a pseudo-random key to use for your session secret.
Custom rake tasks have a .rake extension and are placed in Rails.root/lib/tasks.
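A minimal, hypothetical custom task sketch:

    # lib/tasks/greet.rake
    desc "Say hello"
    task :greet do
      puts "Hello!"
    end

Run it with rake greet.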
rails new . --git --database=postgresql
All commands can run with -h or --help to list more information
The rails server command launches a small web server named WEBrick which comes bundled with Ruby
rails server -e production -p 4000
You can run a server as a daemon by passing a -d option
The rails generate command uses templates to create a whole lot of things.
Using generators will save you a large amount of time by writing boilerplate code, code that is necessary for the app to work.
With a normal, plain-old Rails application, your URLs will generally follow the pattern of http://(host)/(controller)/(action), and a URL like http://(host)/(controller) will hit the index action of that controller.
A scaffold in Rails is a full set of model, database migration for that model, controller to manipulate it, views to view and manipulate the data, and a test suite for each of the above.
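For instance (model and field names are illustrative):

    rails generate scaffold Post title:string body:text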
Unit tests are code that tests and makes assertions about code.
Unit tests are your friend.
rails console --sandbox lets you test code without changing any data; modifications are rolled back on exit
rails db is an alias for rails dbconsole
Each task has a description, and should help you find the thing you need.
rake tmp:clear clears all three: cached files, sessions, and sockets.
view templates are written in a language called ERB (Embedded Ruby) which is converted by the request cycle in Rails before being sent to the user.
Each action's purpose is to collect information to provide it to a view.
A view's purpose is to display this information in a human readable format.
the routing file (config/routes.rb) holds entries in a special DSL (domain-specific language) that tells Rails how to connect incoming requests to controllers and actions.
You can create, read, update and destroy items for a resource and these operations are referred to as CRUD operations
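Declaring a resource in the routing file is what wires up those CRUD routes; a minimal sketch:

    # config/routes.rb -- one line gives you the standard CRUD routes
    resources :posts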
A controller is simply a class that is defined to inherit from ApplicationController.
If not found, then it will attempt to load a template called application/new. It looks for one here because the PostsController inherits from ApplicationController
:formats specifies the format of template to be served in response. The default format is :html, and so Rails is looking for an HTML template.
:handlers tells us which template handlers could be used to render our template.
When you call form_for, you pass it an identifying object for this form. In this case, it's the symbol :post. This tells the form_for helper what this form is for.
Note that the action attribute for the form is pointing at /posts/new.
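A minimal form_for sketch in this spirit (field names are illustrative; passing url: posts_path instead points the form at the create action):

    <%= form_for :post, url: posts_path do |f| %>
      <%= f.text_field :title %>
      <%= f.submit %>
    <% end %>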
When a form is submitted, the fields of the form are sent to Rails as parameters.
parameters can then be referenced inside the controller actions, typically to perform a particular task
The params method returns the object that represents the parameters (or fields) coming in from the form.
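For instance, inside a controller action (names illustrative):

    @post = Post.new(title: params[:post][:title])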
Active Record is smart enough to automatically map column names to model attributes.
Rails uses rake commands to run migrations, and it's possible to undo a migration after it's been applied to your database.
Every Rails model can be initialized with its respective attributes, which are automatically mapped to the respective database columns.
A migration creates a method named change which will be called when you run this migration.
The action defined in this method is also reversible, which means Rails knows how to reverse the change made by this migration, in case you want to reverse it later.
Migration filenames include a timestamp to ensure that they're processed in the order that they were created.
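A sketch of what such a generated migration looks like (class and column names illustrative):

    class CreatePosts < ActiveRecord::Migration
      def change
        create_table :posts do |t|
          t.string :title
          t.text :text

          t.timestamps
        end
      end
    end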
@post.save returns a boolean indicating whether the model was saved or not.
Strong parameters prevent an attacker from setting the model's attributes by manipulating the hash passed to the model.
If you want to link to an action in the same controller, you don't need to specify the :controller option, as Rails will use the current controller by default.
Models inherit from ActiveRecord::Base. Active Record supplies a great deal of functionality to your Rails models for free, including basic database CRUD (Create, Read, Update, Destroy) operations, data validation, as well as sophisticated search support and the ability to relate multiple models to one another.
Rails includes methods to help you validate the data that you send to models. Rails can validate a variety of conditions in a model, including the presence or uniqueness of columns, their format, and the existence of associated objects.
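A minimal validation sketch in that vein (attribute and constraints illustrative):

    class Post < ActiveRecord::Base
      validates :title, presence: true, length: { minimum: 5 }
    end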
redirect_to will tell the browser to issue another request.
rendering is done within the same request as the form submission
Each request for a comment has to keep track of the post to which the comment is attached, thus the initial call to the find method of the Post model to get the post in question.
pluralize is a Rails helper that takes a number and a string as its arguments. If the number is greater than one, the string will be automatically pluralized.
The render method is used so that the @post object is passed back to the new template when it is rendered.
The method: :patch option tells Rails that we want this form to be submitted via the PATCH HTTP method, which is the HTTP method you're expected to use to update resources according to the REST protocol.
The model's update method accepts a hash containing the attributes that you want to update.
Fields containing errors are wrapped in a div with class field_with_errors; you can define a CSS rule to make them stand out.
belongs_to :post, which sets up an Active Record association
creates comments as a nested resource within posts
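A sketch of both sides of that relationship (model and route names illustrative):

    # app/models/comment.rb
    class Comment < ActiveRecord::Base
      belongs_to :post
    end

    # config/routes.rb -- comments nested within posts
    resources :posts do
      resources :comments
    end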
call destroy on Active Record objects when you want to delete them from the database.
Rails allows you to use the dependent option of an association to achieve this.
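For example, deleting a post can also delete its comments:

    class Post < ActiveRecord::Base
      has_many :comments, dependent: :destroy
    end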
store all external data as UTF-8
you're better off ensuring that all external data is UTF-8
use UTF-8 as the internal storage of your database
Rails defaults to converting data from your database into UTF-8 at the boundary.
By default, forms built with the form_for helper are sent via POST; updates are submitted via :patch instead.
The :method and :'data-confirm' options are used as HTML5 attributes so that when the link is clicked, Rails will first show a confirm dialog to the user, and then submit the link with method delete.
This is done via the JavaScript file jquery_ujs, which is automatically included into your application's layout (app/views/layouts/application.html.erb) when you generated the application.
Without this file, the confirmation dialog box wouldn't appear.
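A typical delete link sketch:

    <%= link_to 'Destroy', post_path(post), method: :delete,
                data: { confirm: 'Are you sure?' } %>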
The render call here just defines the partial template we want to render. As the render method iterates over the @post.comments collection, it assigns each comment to a local variable named the same as the partial.
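A sketch of rendering the collection (each comment becomes the local variable comment inside _comment.html.erb):

    <%= render @post.comments %>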
You can use Rails' built-in HTTP basic authentication system to restrict access to actions.
Strong parameters use require and permit; the permitting method is often made private to make sure it can't be called outside its intended context.
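A typical strong-parameters sketch (attribute names illustrative):

    private

      def post_params
        params.require(:post).permit(:title, :text)
      end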
Define the standard CRUD actions in each controller in the following order: index, show, new, edit, create, update and destroy. These public actions must be placed before any private or protected method in the controller in order to work.
understand and use Docker build-time variables, environment variables and docker-compose templating the right way.
ARG is only available during the build of a Docker image (RUN etc), not after the image is created and containers are started from it (ENTRYPOINT, CMD).
ENV values are available to containers, and also to RUN-style commands during the Docker build, starting with the line where they are introduced.
If you set an environment variable in an intermediate container using bash (RUN export VARI=5 && …), it will not persist in the next command.
An env_file is a convenient way to pass many environment variables to a single command in one batch. It is not to be confused with a .env file (note the dot in front: .env, not env_file).
If you have a file named .env in your project, it's only used to put values into the docker-compose.yml file which is in the same folder. Those are used with Docker Compose and Docker Stack.
Just type docker-compose config. This way you’ll see how the docker-compose.yml file content looks after the substitution step has been performed without running anything else.
ARG values are also known as build-time variables. They are only available from the moment they are 'announced' in the Dockerfile with an ARG instruction, up to the moment when the image is built. Running containers can't access values of ARG variables.
ENV variables are also available during the build, as soon as you introduce them with an ENV instruction. However, unlike ARG, they are also accessible by containers started from the final image. ENV values can be overridden when starting a container.
If you don’t provide a value to expected ARG variables which don’t
have a default, you’ll get an error message.
In docker-compose, build-time ARG values are provided via the args block of the build section.
You can use ARG to set the default values of ENV vars, a handy pattern for dynamic on-build env values; see the sketch below.
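A minimal Dockerfile sketch (variable name and value are illustrative):

    ARG APP_VERSION=0.1.0            # build-time only; override with --build-arg APP_VERSION=...
    ENV APP_VERSION=${APP_VERSION}   # promoted to an ENV so containers see it at runtime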
Three ways to provide ENV values when starting a container (flags sketched below):
1. Provide values one by one
2. Pass environment variable values from your host
3. Take values from a file (env_file)
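The corresponding docker run flags might look like this (image and variable names are illustrative):

    docker run -e VAR=value myimage              # 1. one by one
    docker run -e VAR myimage                    # 2. take VAR's value from the host
    docker run --env-file=settings.env myimage   # 3. many values from a file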
For each RUN statement, a new container is launched from an intermediate image. An image is saved by the end of the command, but environment variables do not persist that way.
The precedence is, from strongest to weakest: values the containerized application sets itself, values from single environment entries, values from the env_file(s), and finally Dockerfile defaults.
"KeyBox is a web-based SSH console that centrally manages administrative access to systems. Web-based administration is combined with management and distribution of user's public SSH keys. https://www.sshkeybox.com"
Matcher rules determine if a particular request should be forwarded to a backend
Depending on how the rules are combined, a request is forwarded either if any rule matches (OR semantics) or only if all rules match (AND semantics).
In order to use regular expressions with Host and Path matchers, you must declare an arbitrarily named variable followed by the colon-separated regular expression, all enclosed in curly braces.
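For example (the variable name is arbitrary; the pattern follows Go's regexp syntax):

    rule = "Path: /posts/{id:[0-9]+}"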
Use a *Prefix* matcher if your backend listens on a particular base path but also serves requests on sub-paths.
For instance, PathPrefix: /products would match /products but also /products/shoes and /products/shirts.
Since the path is forwarded as-is, your backend is expected to listen on /products
Use Path if your backend listens on the exact path only. For instance, Path: /products would match /products but not /products/shoes.
Modifier rules ALWAYS apply after the Matcher rules.
A backend is responsible for load-balancing the traffic coming from one or more frontends to a set of http servers
wrr: Weighted Round Robin
drr: Dynamic Round Robin: increases weights on servers that perform better than others.
A circuit breaker can also be applied to a backend, preventing high loads on failing servers.
To proactively prevent backends from being overwhelmed with high load, a maximum connection limit can also be applied to each backend.
Sticky sessions are supported with both load balancers.
When sticky sessions are enabled, a cookie is set on the initial request.
The check is defined by a path appended to the backend URL and an interval (given in a format understood by time.ParseDuration) specifying how often the health check should be executed (the default being 30 seconds).
Each backend must respond to the health check within 5 seconds.
The static configuration is the global configuration which sets up connections to configuration backends and entrypoints.
We only need to enable the watch option to make Træfik watch configuration backend changes and generate its configuration automatically.
Separate the regular expression and the replacement by a space.
a comma-separated key/value pair where both key and value must be literals.
namespacing of your backends happens on the basis of hosts in addition to paths
Modifiers will be applied in a pre-determined order regardless of their order in the rule configuration section.
You can customize each frontend's priority to override the default ordering of rule evaluation.
Custom headers can be configured through the frontends, to add headers to either requests or responses that match the frontend's rules.
Security related headers (HSTS headers, SSL redirection, Browser XSS filter, etc) can be added and configured per frontend in a similar manner to the custom headers above.
Servers are simply defined using a url. You can also apply a custom weight to each server (this will be used by load-balancing).
Maximum connections can be configured by specifying an integer value for maxconn.amount and maxconn.extractorfunc which is a strategy used to determine how to categorize requests in order to evaluate the maximum connections.
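A TOML sketch combining these backend options (names, URL, and values are illustrative):

    [backends.backend1]
      [backends.backend1.maxconn]
        amount = 10
        extractorfunc = "request.host"
      [backends.backend1.loadbalancer]
        method = "drr"
      [backends.backend1.servers.server1]
        url = "http://172.17.0.2:80"
        weight = 10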
By default, secrets are mounted into a service at /run/secrets/<secret-name>
docker secret create
If you use a distributed storage driver, such as Amazon S3, you can use a fully replicated service. Each worker can write to the storage back-end without causing write conflicts.
You can access the service on port 443 of any swarm node. Docker sends the requests to the node which is running the service.
--publish published=443,target=443
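A fuller sketch of such a service in the style of the registry docs (secret names and TLS settings are assumptions; adjust for your setup):

    docker service create \
      --name registry \
      --secret domain.crt \
      --secret domain.key \
      -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
      -e REGISTRY_HTTP_TLS_CERTIFICATE=/run/secrets/domain.crt \
      -e REGISTRY_HTTP_TLS_KEY=/run/secrets/domain.key \
      --publish published=443,target=443 \
      registry:2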
The most important aspect is that a load-balanced cluster of registries must share the same resources. For S3 or Azure, they should be accessing the same resource and share an identical configuration.
You must make sure you are properly sending the X-Forwarded-Proto, X-Forwarded-For, and Host headers to their "client-side" values. Failure to do so usually makes the registry issue redirects to internal hostnames or downgrade from https to http.
A properly secured registry should return 401 when the "/v2/" endpoint is hit without credentials.
Registries should always implement access restrictions.
REGISTRY_AUTH=htpasswd
REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd
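Putting those variables together, a sketch of running a registry with htpasswd authentication (paths are illustrative):

    docker run -d -p 5000:5000 --name registry \
      -v "$(pwd)"/auth:/auth \
      -e REGISTRY_AUTH=htpasswd \
      -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
      -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
      registry:2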
The registry also supports delegated authentication, which redirects users to a specific trusted token server. This approach is more complicated to set up, and only makes sense if you need to fully configure ACLs and need more control over the registry's integration into your global authorization and authentication systems.
Any data that needs to persist must be stored in a stateful backing service, typically a database.
The memory space or filesystem of the process can be used as a brief, single-transaction cache.
A restart or relocation of the process will wipe out all local (e.g., memory and filesystem) state.
A twelve-factor app prefers to do compiling during the build stage rather than at runtime.
“sticky sessions” – that is, caching user session data in memory of the app’s process and expecting future requests from the same visitor to be routed to the same process.
Sticky sessions are a violation of twelve-factor and should never be used or relied upon
PHP processes run as child processes of Apache, started on demand as needed by request volume.
Java processes take the opposite approach, with the JVM providing one massive uberprocess that reserves a large block of system resources (CPU and memory) on startup, with concurrency managed internally via threads
Processes in the twelve-factor app take strong cues from the unix process model for running service daemons.
Laravel queues provide a unified API across a variety of different queue backends, such as Beanstalk, Amazon SQS, Redis, or even a relational database.
The queue configuration file is stored in config/queue.php
a synchronous driver that will execute jobs immediately (for local use)
A null queue driver is also included which discards queued jobs.
In your config/queue.php configuration file, there is a connections configuration option.
any given queue connection may have multiple "queues" which may be thought of as different stacks or piles of queued jobs.
each connection configuration example in the queue configuration file contains a queue attribute.
if you dispatch a job without explicitly defining which queue it should be dispatched to, the job will be placed on the queue that is defined in the queue attribute of the connection configuration
pushing jobs to multiple queues can be especially useful for applications that wish to prioritize or segment how jobs are processed
When starting a worker, you may specify which queues it should process by priority, as shown below.
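For example, jobs on high are processed before anything on low:

    php artisan queue:work --queue=high,low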
If your Redis queue connection uses a Redis Cluster, your queue names must contain a key hash tag (e.g. {default}) to ensure all of the Redis keys for a given queue are placed into the same hash slot.
all of the queueable jobs for your application are stored in the app/Jobs directory.
Job classes are very simple, normally containing only a handle method which is called when the job is processed by the queue.
we were able to pass an Eloquent model directly into the queued job's constructor. Because of the SerializesModels trait that the job is using, Eloquent models will be gracefully serialized and unserialized when the job is processing.
When the job is actually handled, the queue system will automatically re-retrieve the full model instance from the database.
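A minimal job class sketch (ProcessPodcast and Podcast are illustrative names in the style of the docs):

    <?php

    namespace App\Jobs;

    use App\Podcast;
    use Illuminate\Bus\Queueable;
    use Illuminate\Contracts\Queue\ShouldQueue;
    use Illuminate\Foundation\Bus\Dispatchable;
    use Illuminate\Queue\InteractsWithQueue;
    use Illuminate\Queue\SerializesModels;

    class ProcessPodcast implements ShouldQueue
    {
        use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

        protected $podcast;

        // the Eloquent model is serialized gracefully thanks to SerializesModels
        public function __construct(Podcast $podcast)
        {
            $this->podcast = $podcast;
        }

        // called when the job is processed by the queue
        public function handle()
        {
            // process the podcast...
        }
    }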
The handle method is called when the job is processed by the queue
The arguments passed to the dispatch method will be given to the job's constructor
To delay the execution of a queued job, you may use the delay method when dispatching a job.
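For example (reusing the hypothetical ProcessPodcast job from above):

    ProcessPodcast::dispatch($podcast)->delay(now()->addMinutes(10));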
To dispatch a job immediately (synchronously), you may use the dispatchNow method.
When using this method, the job will not be queued and will be run immediately within the current process
Job chaining lets you specify a list of queued jobs that should be run in sequence.
Deleting jobs using the $this->delete() method will not prevent chained jobs from being processed. The chain will only stop executing if a job in the chain fails.
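A chaining sketch (job names illustrative):

    ProcessPodcast::withChain([
        new OptimizePodcast,
        new ReleasePodcast,
    ])->dispatch();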
this does not push jobs to different queue "connections" as defined by your queue configuration file, but only to specific queues within a single connection.
To specify the queue, use the onQueue method when dispatching the job
To specify the connection, use the onConnection method when dispatching the job
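For example (queue and connection names illustrative):

    ProcessPodcast::dispatch($podcast)->onQueue('processing');
    ProcessPodcast::dispatch($podcast)->onConnection('sqs');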
You may define the maximum number of attempts on the job class itself.
As an alternative to defining how many times a job may be attempted before it fails, you may define a time at which the job should timeout.
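On the job class itself, that looks roughly like this (values illustrative):

    public $tries = 5;   // maximum number of attempts

    // keep retrying only until this time is reached
    public function retryUntil()
    {
        return now()->addSeconds(5);
    }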
using the funnel method, you may limit jobs of a given type to only be processed by one worker at a time
using the throttle method, you may throttle a given type of job to only run 10 times every 60 seconds.
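A rate-limiting sketch inside a job's handle method using funnel (key name and limits illustrative; uses the Redis facade):

    Redis::funnel('key')->limit(1)->then(function () {
        // job logic...
    }, function () {
        // could not obtain the lock; release the job back onto the queue
        return $this->release(10);
    });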
If an exception is thrown while the job is being processed, the job will automatically be released back onto the queue so it may be attempted again.
You may also dispatch a Closure. This is great for quick, simple tasks that need to be executed outside of the current request cycle.
When dispatching Closures to the queue, the Closure's code contents is cryptographically signed so it can not be modified in transit.
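For example:

    $podcast = App\Podcast::find(1);

    dispatch(function () use ($podcast) {
        $podcast->publish();
    });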
Laravel includes a queue worker that will process new jobs as they are pushed onto the queue.
once the queue:work command has started, it will continue to run until it is manually stopped or you close your terminal
queue workers are long-lived processes and store the booted application state in memory.
they will not notice changes in your code base after they have been started.
during your deployment process, be sure to restart your queue workers.
customize your queue worker even further by only processing particular queues for a given connection
The --once option may be used to instruct the worker to only process a single job from the queue
The --stop-when-empty option may be used to instruct the worker to process all jobs and then exit gracefully.
Daemon queue workers do not "reboot" the framework before processing each job.
you should free any heavy resources after each job completes.
php artisan queue:restart
The queue uses the cache to store restart signals
Since the queue workers will die when the queue:restart command is executed, you should be running a process manager such as Supervisor to automatically restart them.
each queue connection defines a retry_after option. This option specifies how many seconds the queue connection should wait before retrying a job that is being processed.
The --timeout option specifies how long the Laravel queue master process will wait before killing off a child queue worker that is processing a job.
When jobs are available on the queue, the worker will keep processing jobs with no delay in between them.
While sleeping (for the duration given by the --sleep option), the worker will not process any new jobs; they will be processed after the worker wakes up again.
the numprocs directive will instruct Supervisor to run 8 queue:work processes and monitor all of them, automatically restarting them if they fail.
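A Supervisor program sketch along those lines (paths and connection name illustrative):

    [program:laravel-worker]
    process_name=%(program_name)s_%(process_num)02d
    command=php /home/forge/app.com/artisan queue:work sqs --sleep=3 --tries=3
    autostart=true
    autorestart=true
    numprocs=8
    redirect_stderr=true
    stdout_logfile=/home/forge/app.com/worker.log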
Laravel includes a convenient way to specify the maximum number of times a job should be attempted.
You may define a failed method directly on your job class, allowing you to perform job-specific clean-up when a failure occurs.
This is a great opportunity to notify your team via email or Slack.
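A failed-method sketch on the job class:

    // called when the job fails, receiving the exception that caused the failure
    public function failed(Exception $exception)
    {
        // notify the team, clean up, etc...
    }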
php artisan queue:retry all retries all of your failed jobs.
php artisan queue:flush deletes all of your failed jobs.
When injecting an Eloquent model into a job, it is automatically serialized before being placed on the queue and restored when the job is processed
"Pinpoint is an open source APM (Application Performance Management) tool for large-scale distributed systems written in Java. http://naver.github.io/pinpoint/"