Serverless was first used to describe applications that significantly or fully
depend on 3rd party applications / services (‘in the cloud’) to manage server-side
logic and state. These are typically ‘rich client’ applications (think single page
web apps, or mobile apps) that use the vast ecosystem of cloud-accessible
databases (like Parse, Firebase), authentication services (Auth0, AWS Cognito),
etc.
Serverless can also mean applications where some amount of server-side logic
is still written by the application developer but unlike traditional architectures
is run in stateless compute containers that are event-triggered, ephemeral (may
only last for one invocation), and fully managed by a 3rd party.
This style is often called ‘Functions as a Service’ (FaaS). AWS Lambda is one of
the most popular implementations of FaaS at present.
A good example is
Auth0 - they started initially with BaaS ‘Authentication
as a Service’, but with Auth0 Webtask they are entering the
FaaS space.
Two worked examples are a typical ecommerce app (UI-driven) and a backend
data-processing service (message-driven), both of which can be rebuilt so that the
server-side pieces run with zero administration.
FaaS offerings do not require coding to a specific framework or
library.
Horizontal scaling is completely automatic, elastic, and managed by the
provider
Functions in FaaS are triggered by event types defined by the provider.
Functions can be triggered by, for example, a FaaS-supported message broker. From a
deployment-unit point of view, FaaS functions are stateless.
In the ecommerce example we allowed the client direct access to a subset of our
database, deleted the authentication logic in the original application, and
replaced it with a third party BaaS service.
The client is in fact well on its way to becoming a Single Page Application.
We then implement a FaaS function that responds to HTTP requests via an
API Gateway, and port the search code from the Pet Store server to the Pet Store
Search function. In the data-processing example, we replaced a long-lived consumer
application with a FaaS function that runs within the event-driven context.
Running back-end code without managing your own server applications - not merely
without managing your own servers - is a key difference when comparing with other
modern architectural trends like containers and PaaS. The only code that needs to
change when moving to FaaS is the ‘main method / startup’ code, in that it is
deleted, and likely the specific code that is the top-level message handler
(the ‘message listener interface’ implementation), but this might only be a change
in method signature.
With FaaS you need to write the function ahead of time to assume parallelism
Most providers also allow functions to be triggered as a response to inbound
http requests, typically in some kind of API gateway
you should assume that for any given
invocation of a function none of the in-process or host state that you create
will be available to any subsequent invocation.
FaaS functions are either naturally stateless, or they use a database, a
cross-application cache, or a network file store to store
state across requests or for further input to handle a request.
Certain classes of long-lived task are not suited to FaaS functions without
re-architecture; for example, if you were writing a low-latency trading application
you probably wouldn’t want to use FaaS systems at this time.
An
API Gateway is an HTTP server where routes / endpoints are defined in
configuration and each route is associated with a FaaS function.
An API Gateway will allow mapping from HTTP request parameters to input arguments
for the FaaS function.
API Gateways may also perform authentication, input validation,
response code mapping, etc.
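As a rough sketch of how these pieces fit together - assuming the AWS Lambda Ruby runtime and an API Gateway proxy integration; the route, parameter name, and search stub below are invented for illustration - a gateway route mapped onto a function could be handled like this:

    require 'json'

    # Hypothetical handler for a route such as GET /search?q=... behind an API Gateway.
    # API Gateway delivers the HTTP request as the `event` hash; the returned hash is
    # mapped back onto an HTTP response (status code, headers, body).
    def handler(event:, context:)
      query = (event['queryStringParameters'] || {})['q']
      results = [] # a real function would call the search backend here
      {
        statusCode: 200,
        headers: { 'Content-Type' => 'application/json' },
        body: JSON.generate(query: query, results: results)
      }
    end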
the Serverless Framework makes working
with API Gateway + Lambda significantly easier than using the first principles
provided by AWS.
Apex is a project to ‘Build, deploy, and manage AWS Lambda functions with ease.’
Here 'Serverless' means the union of a couple of other ideas - 'Backend as a
Service' and 'Functions as a Service'.
before(:all) hooks are invoked before the transaction is opened. You can use
this to speed things up by creating data once before any example in a group is
run.
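A minimal sketch of that pattern (the Article model and its title attribute are assumed for illustration, not taken from the source):

    RSpec.describe Article do
      before(:all) do
        # Runs once, before the per-example transaction is opened, so these
        # rows are created a single time for the whole group.
        @articles = Array.new(3) { Article.create!(title: 'shared fixture') }
      end

      after(:all) do
        # Data created outside the transaction is not rolled back, so clean it
        # up explicitly.
        Article.delete_all
      end

      it 'sees the shared records in every example' do
        expect(Article.count).to eq(@articles.size)
      end
    end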
Microservices are a useful architecture, but even their advocates
say that using them incurs a significant
MicroservicePremium, which means they are only useful
with more complex systems.
you should build a new
application as a monolith initially, even if you think it's likely
that it will benefit from a microservices architecture later on.
Any refactoring of functionality
between services is much harder than it is in a monolith.
By building a
monolith first, you can figure out what the right boundaries are,
before a microservices design brushes a layer of treacle over them.
The logical way is to design a monolith carefully,
paying attention to modularity within the software, both at the API
boundaries and how the data is stored.
start with a monolith and gradually
peel off microservices at the edges
Don't be afraid of building a
monolith that you will discard, particularly if a monolith can get
you to market quickly
rails dbconsole figures out which database you're using and drops you into whichever command line interface you would use with it
The console command lets you interact with your Rails application from the command line. Under the hood, rails console uses IRB
rake about gives information about version numbers for Ruby, RubyGems, Rails, the Rails subcomponents, your application's folder, the current Rails environment name, your app's database adapter, and schema version
You can precompile the assets in app/assets using rake assets:precompile and remove those compiled assets using rake assets:clean.
rake db:version is useful when troubleshooting: it tells you the current schema version of the database.
The doc: namespace has the tools to generate documentation for your app, API documentation, and guides.
rake notes will search through your code for comments beginning with FIXME, OPTIMIZE or TODO.
You can also use custom annotations in your code and list them using rake notes:custom by specifying the annotation using an environment variable ANNOTATION.
rake routes will list all of your defined routes, which is useful for tracking down routing problems in your app, or giving you a good overview of the URLs in an app you're trying to get familiar with.
rake secret will give you a pseudo-random key to use for your session secret.
Custom rake tasks have a .rake extension and are placed in
Rails.root/lib/tasks.
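For example, a custom task might look like the following (the file name, task name, and Session model are hypothetical):

    # lib/tasks/cleanup.rake
    namespace :cleanup do
      desc 'Remove sessions that have not been touched in 30 days'
      task stale_sessions: :environment do
        deleted = Session.where('updated_at < ?', 30.days.ago).delete_all
        puts "Deleted #{deleted} stale sessions"
      end
    end

Run it with rake cleanup:stale_sessions; its description shows up in rake -T like any built-in task.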
rails new . --git --database=postgresql
All commands can run with -h or --help to list more information
The rails server command launches a small web server named WEBrick which comes bundled with Ruby
rails server -e production -p 4000
You can run a server as a daemon by passing a -d option
The rails generate command uses templates to create a whole lot of things.
Using generators will save you a large amount of time by writing boilerplate code, code that is necessary for the app to work.
With a normal, plain-old Rails application, your URLs will generally follow the pattern of http://(host)/(controller)/(action), and a URL like http://(host)/(controller) will hit the index action of that controller.
A scaffold in Rails is a full set of model, database migration for that model, controller to manipulate it, views to view and manipulate the data, and a test suite for each of the above.
Unit tests are code that tests and makes assertions about code.
Unit tests are your friend.
rails console --sandbox
rails db
Each task has a description, and should help you find the thing you need.
rake tmp:clear clears all the three: cache, sessions and sockets.
Before moving our production infrastructure over however, we decided that we wanted to start developing with them locally first. We could shake out any issues with our applications before risking the production environment.
using Chef and Vagrant to provision local VMs
Engineers at IFTTT currently all use Apple computers
You can implement the callback body as an ordinary method and use a macro-style
class method to register it as a callback, or register it inline with a block
containing a line such as:

    self.name = login.capitalize if name.blank?

Callbacks can also be registered to only fire on certain life cycle events, and it
is considered good practice to declare callback methods as protected or private.
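Pulled together into a model, the registration might look like this (close to the Rails guide's own example; the User model and its attributes are illustrative):

    class User < ActiveRecord::Base
      validates :login, :email, presence: true

      # Macro-style class method registering a callback for the create event only.
      before_create do
        self.name = login.capitalize if name.blank?
      end

      before_validation :ensure_login_has_a_value

      protected
        # Declared protected, per the guidance above, so it cannot be called
        # from outside the model's life cycle.
        def ensure_login_has_a_value
          self.login = email unless email.blank?
        end
    end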
Active Record provides a long list of available callbacks, invoked in a fixed order during the respective operations.
after_initialize callback will be called whenever an Active Record object is instantiated, either by directly using new or when a record is loaded from the database
after_find callback will be called whenever Active Record loads a record from the database.
after_find is called before after_initialize if both are defined
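A small sketch showing both callbacks and their ordering (the User model is illustrative):

    class User < ActiveRecord::Base
      after_initialize do |user|
        puts 'You have initialized an object!'
      end

      after_find do |user|
        puts 'You have found an object!'
      end
    end

    # User.new    prints "You have initialized an object!"
    # User.first  prints "You have found an object!" first,
    #             then "You have initialized an object!"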
after_touch callback will be called whenever an Active Record object is touched.
belongs_to :company, touch: true
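For example (model names illustrative), touching an Employee also touches, and notifies, its Company:

    class Employee < ActiveRecord::Base
      belongs_to :company, touch: true

      after_touch do
        puts 'An Employee was touched'
      end
    end

    class Company < ActiveRecord::Base
      has_many :employees

      after_touch do
        puts 'A Company was touched'
      end
    end

    # employee.touch fires Employee's after_touch and, because of touch: true,
    # also touches the associated company, firing Company's after_touch.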
Only certain methods trigger callbacks. The after_find callback is triggered by the finder methods, and the after_initialize callback is triggered every time a new object of the class is initialized.
The methods that skip callbacks should be used with caution, however, because important business rules and application logic may be kept in callbacks.
As you start registering new callbacks for your models, they will be queued for execution.
The whole callback chain is wrapped in a transaction.
Callbacks work through model relationships, and can even be defined by them.
As with validations, we can also make the calling of a callback method conditional on the satisfaction of a given predicate
When using the :if option, the callback won't be executed if the predicate method returns false; when using the :unless option, the callback won't be executed if the predicate method returns true.
The :if and :unless options can be used with a Symbol, a String, or a Proc. The
String option is evaluated using eval and hence needs to contain valid Ruby code.
It is also possible to mix both :if and :unless in the same callback declaration.
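Sketches of both forms, close to the guide's examples (the models and predicate methods are assumed to exist):

    class Order < ActiveRecord::Base
      # Symbol form: the predicate method is called just before the callback runs.
      before_save :normalize_card_number, if: :paid_with_card?
    end

    class Comment < ActiveRecord::Base
      # Mixing :if and :unless in the same declaration.
      after_create :send_email_to_author,
        if: :author_wants_emails?,
        unless: Proc.new { |comment| comment.article.ignore_comments? }
    end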
Active Record makes it possible to create classes that encapsulate the callback
methods, so it becomes very easy to reuse them. If the callback is declared as an
instance method, we need to instantiate a new PictureFileCallbacks object when
registering it; declared as a class method, it won't be necessary to instantiate
anything.
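Both styles, roughly as in the guide (PictureFile and its filepath attribute are part of the guide's running example):

    class PictureFileCallbacks
      # Instance-method style: register with PictureFileCallbacks.new.
      def after_destroy(picture_file)
        if File.exist?(picture_file.filepath)
          File.delete(picture_file.filepath)
        end
      end
    end

    class PictureFile < ActiveRecord::Base
      after_destroy PictureFileCallbacks.new
    end

    # Class-method style: declare `def self.after_destroy(picture_file)` instead
    # and register the class itself, with no instantiation:
    #   after_destroy PictureFileCallbacks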
after_commit and after_rollback are very similar to the after_save callback, except
that they don't execute until after database changes have either been committed or
rolled back. This matters for delete_picture_file_from_disk: if the file is removed
in an after_destroy callback and anything raises an exception afterwards so that
the transaction rolls back, the file will have been deleted and the model will be
left in an inconsistent state.
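Registering the deletion on after_commit, scoped to destroy, sidesteps that problem; roughly as in the guide:

    class PictureFile < ActiveRecord::Base
      after_commit :delete_picture_file_from_disk, on: [:destroy]

      def delete_picture_file_from_disk
        # Runs only after the destroy has actually been committed.
        if File.exist?(filepath)
          File.delete(filepath)
        end
      end
    end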
A single Google Cloud VPC can span multiple regions without communicating across the public Internet.
Google Cloud VPCs let you increase the IP space of any subnets without any workload shutdown or downtime.
Get private access to Google services, such as storage, big data, analytics, or machine learning, without having to give your service a public IP address.
Bear in mind that the best way to configure ProxySQL is through its admin interface.
Its configuration tables allow you to control the list of backend servers, how traffic is routed to them, and other important settings (such as caching, access control, etc.).
Once you've made modifications to the in-memory data structure, you must load the new configuration to the runtime, or persist the new settings to disk.
mysql_variables: contains global variables that control the functionality for handling the incoming MySQL traffic.
mysql_users: contains rows for the mysql_users table from the admin interface. Basically, these define the users which can connect to the proxy, and the users with which the proxy can connect to the backend servers.
mysql_servers: contains rows for the mysql_servers table from the admin interface. Basically, these define the backend servers towards which the incoming MySQL traffic is routed.
mysql_query_rules: contains rows for the mysql_query_rules table from the admin interface. Basically, these define the rules used to classify and route the incoming MySQL traffic, according to various criteria (patterns matched, user used to run the query, etc.).
The default for --listen-addr is to listen on all interfaces on TCP port 2377 (0.0.0.0:2377)
Depending on your network architecture, you may want your swarm management interface only accessible on a management network that could be separate from a data and/or public network that are each attached to a physical server.
Please keep in mind that --amend actually will create a new commit which replaces the previous one, so don’t use it for modifying commits which already have been pushed to a central repository.
git rebase --interactive
Just pick the commit(s) you want to update, change pick to reword (or r for short), and you will be taken to a new view where you can edit the message.
you can completely remove commits by deleting them from the list, as well as edit, reorder, and squash them.
Squashing allows you to merge several commits into one
In case you don’t want to create additional revert commits but only apply the necessary changes to your working tree, you can use the --no-commit/-n option.
git rerere stands for 'reuse recorded resolution'. Unfortunately it turns out that one of the branches isn’t quite there yet, so you decide to un-merge it again. Several days (or weeks) later, when the branch is finally ready, you merge it again, but thanks to the recorded resolutions you won’t have to resolve the same merge conflicts again.
You can also define global hooks to use in all your projects by creating a template directory that git will use when initializing a new repository
Removing sensitive data from history: keep in mind that this will rewrite your project’s entire history, which can be very disruptive in a distributed workflow.
One use case is converting an existing Rails helper to a decorator method. That
method is presentation-centric, and thus does not belong in a model.
Where
does that come from? It's a method of the source Article, whose methods have
been made available on the decorator by the delegate_all call above.
Decorators are a great way to replace procedural helpers like the one above with
"real" object-oriented programming.
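A minimal sketch of such a decorator (assumes the Draper gem; the decorator, method, and published_at attribute are illustrative):

    class ArticleDecorator < Draper::Decorator
      delegate_all  # unknown methods fall through to the wrapped Article

      # Presentation-centric formatting that used to live in a helper.
      def publication_status
        if object.published_at?
          "Published at #{object.published_at.strftime('%A, %B %e')}"
        else
          'Unpublished'
        end
      end
    end

    # In the controller:  @article = Article.find(params[:id]).decorate
    # In the view:        @article.publication_status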
Baseimage-docker only advocates running multiple OS processes inside a single container.
Password and challenge-response authentication are disabled by default. Only key authentication is allowed.
A tool for running a command as another user
The Docker developers advocate the philosophy of running a single logical service per container. A logical service can consist of multiple OS processes.
All syslog messages are forwarded to "docker logs".
Baseimage-docker advocates running multiple OS processes inside a single container, and a single logical service can consist of multiple OS processes.
Baseimage-docker provides tools to encourage running processes as different users
sometimes it makes sense to run multiple services in a single container, and sometimes it doesn't.
Splitting your logical service into multiple OS processes also makes sense from a security standpoint.
using environment variables to pass parameters to containers is very much the "Docker way"
Baseimage-docker provides a facility to run a single one-shot command, while solving all of the aforementioned problems
The shell script must run the daemon without letting it daemonize or fork.
All executable scripts in /etc/my_init.d, if this directory exists. The scripts are run in lexicographic order.
Environment variables passed to my_init will also be passed to all child processes.
Because environment variables on Unix are inherited on a per-process basis, there
is no good central place for defining environment variables for all applications
and services; baseimage-docker provides a mechanism (via /etc/container_environment)
for centrally defining environment variables.
One of the ideas behind Docker is that containers should be stateless, easily restartable, and behave like a black box.
a one-shot command in a new container
immediately exit after the command exits,
However the downside of this approach is that the init system is not started. That is, while invoking COMMAND, important daemons such as cron and syslog are not running. Also, orphaned child processes are not properly reaped, because COMMAND is PID 1.
add additional daemons (e.g. your own app) to the image by creating runit entries.
Nginx is one such example: it removes all environment variables unless you explicitly instruct it to retain them through the env configuration option.
Mechanisms for easily running multiple processes, without violating the Docker philosophy
Ubuntu is not designed to be run inside Docker
According to the Unix process model, the init process -- PID 1 -- inherits all orphaned child processes and must reap them
Syslog-ng seems to be much more stable
cron daemon
Rotates and compresses logs
/sbin/setuser
A tool for installing apt packages that automatically cleans up after itself.
a single logical service inside a single container
A daemon is a program which runs in the background of its system, such
as a web server.
The shell script must be called run, must be executable, and is to be
placed in the directory /etc/service/<NAME>. runsv will switch to
the directory and invoke ./run after your container starts.
If any script exits with a non-zero exit code, the booting will fail.
If your process is started with
a shell script, make sure you exec the actual process, otherwise the shell will receive the signal
and not your process.
any environment variables set with docker run --env or with the ENV command in the Dockerfile, will be picked up by my_init
It is not possible for a child process to change the environment variables of other
processes, so processes that are not started through my_init (for example a shell
obtained later) will not see the environment variables that were originally passed
by Docker.
We ignore HOME, SHELL, USER and a bunch of other environment variables on purpose, because not ignoring them will break multi-user containers.
my_init imports environment variables from the directory /etc/container_environment
/etc/container_environment.sh - a dump of the environment variables in Bash format.
modify the environment variables in my_init (and therefore the environment variables in all child processes that are spawned after that point in time), by altering the files in /etc/container_environment
my_init only activates changes in /etc/container_environment when running startup scripts
If your environment variables don't contain sensitive data, then you can also relax the permissions on /etc/container_environment
Syslog messages are forwarded to the console
syslog-ng is started separately before the runit supervisor process, and shutdown after runit exits.
RUN apt-get update && apt-get upgrade -y -o Dpkg::Options::="--force-confold"
/sbin/my_init --skip-startup-files --quiet --
By default, no keys are installed, so nobody can login
provide a pregenerated, insecure key (PuTTY format)
RUN /usr/sbin/enable_insecure_key
docker run YOUR_IMAGE /sbin/my_init --enable-insecure-key
RUN cat /tmp/your_key.pub >> /root/.ssh/authorized_keys && rm -f /tmp/your_key.pub
The default baseimage-docker installs syslog-ng, cron and sshd services during the build process
In production deploys, each process's output stream is captured by the execution environment and routed to one or more final destinations for viewing and long-term archival. These archival destinations are not visible to or configurable by the app, and instead are completely managed by the execution environment.
Most significantly, the stream can be sent to a log indexing and analysis system such as Splunk, or a general-purpose data warehousing system such as Hadoop/Hive.
PHP provides a function (session_set_save_handler()) that lets you override the default session mechanism by specifying the names of your own functions for taking care of the distinct tasks
The handler PHP uses to handle data serialization is defined by the session.serialize_handler configuration directive. It is set to php by default.
The storage query uses REPLACE, which behaves exactly like INSERT, except that it handles cases where a record already exists with the same session identifier by first deleting that record.
Because the _write() function keeps the timestamp of the last access in the access column for each record, this can be used to determine which records to delete.
ProxySQL Cluster lets several ProxySQL instances communicate with and share configuration updates with each other.
There are 4 tables where you can make changes and propagate the configuration.
When you make a change like INSERT/DELETE/UPDATE on any of these tables, after running the command LOAD … TO RUNTIME, ProxySQL creates a new checksum of the table’s data and increments the version number in the table runtime_checksums_values.
All nodes monitor and communicate with all the other ProxySQL nodes. When a node detects a change in the checksum and version (both at the same time), it gets a copy of the table that was modified, makes the same changes locally, applies the new config to RUNTIME to make it visible to the connected applications, and automatically saves it to DISK for persistence.
The result is a “synchronous cluster”: any changes to these 4 tables on any ProxySQL server will be replicated to all other ProxySQL nodes.