"The YubiKey 4 is the strong authentication bullseye the industry has been aiming at for years, enabling one single key to secure an unlimited number of applications.
Yubico's 4th generation YubiKey is built on high-performance secure elements. It includes the same range of one-time password and public key authentication protocols as in the YubiKey NEO, excluding NFC, but with stronger public/private keys, faster crypto operations and the world's first touch-to-sign feature.
With the YubiKey 4 platform, we have further improved our manufacturing and ordering process, enabling customers to order exactly what functions they want in 500+ unit volumes, with no secrets stored at Yubico or shared with a third-party organization. The best part? An organization can securely customize 1,000 YubiKeys in less than 10 minutes.
For customers who require NFC, the YubiKey NEO is our full-featured key with both contact (USB) and contactless (NFC, MIFARE) communications."
"TMSU is a tool for tagging your files. It provides a simple command-line tool for applying tags and a virtual filesystem so that you can get a tag-based view of your files from within any other program.
TMSU does not alter your files in any way: they remain unchanged on disk, or on the network, wherever you put them. TMSU maintains its own database and you simply gain an additional view, which you can mount, based upon the tags you set up. The only commitment required is your time and there's absolutely no lock-in."
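For a feel of the workflow, a minimal sketch using TMSU's documented commands (the file name and mount point are placeholders):

    tmsu tag banana.jpg fruit yellow   # apply the tags 'fruit' and 'yellow' to a file
    tmsu files fruit                   # list all files tagged 'fruit'
    tmsu mount ~/mnt                   # mount the tag-based virtual filesystem view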
"My First 5 Minutes on a Server, by Bryan Kennedy, is an excellent intro into securing a server against most attacks. We have a few modifications to his approach that we wanted to document as part of our efforts of externalizing our processes and best practices. We also wanted to spend a bit more time explaining a few things that younger engineers may benefit from."
Gobot is a set of libraries in the Go programming language for robotics and physical computing.
It provides a simple yet powerful way to create solutions that incorporate multiple different hardware devices at the same time.
Want to use Ruby on robots? Check out our sister project Artoo (http://artoo.io).
Want to use Node.js? Check out our sister project Cylon (http://cylonjs.com).
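As a sketch of what that looks like in Go, here is a minimal program in the style of Gobot's published examples; the pre-v2 import paths, the Firmata serial port (/dev/ttyACM0), and the LED pin are assumptions for an Arduino-class board:

    package main

    import (
        "time"

        "gobot.io/x/gobot"
        "gobot.io/x/gobot/drivers/gpio"
        "gobot.io/x/gobot/platforms/firmata"
    )

    func main() {
        // Assumed: an Arduino-compatible board speaking Firmata on /dev/ttyACM0.
        adaptor := firmata.NewAdaptor("/dev/ttyACM0")
        led := gpio.NewLedDriver(adaptor, "13")

        work := func() {
            // Toggle the LED once per second.
            gobot.Every(1*time.Second, func() {
                led.Toggle()
            })
        }

        robot := gobot.NewRobot("blinker",
            []gobot.Connection{adaptor},
            []gobot.Device{led},
            work,
        )

        robot.Start()
    }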
Supervisor is a client/server system that allows its users to monitor and control a number of processes on UNIX-like operating systems.
It shares some of the same goals as programs like launchd, daemontools, and runit. Unlike some of these programs, it is not meant to be run as a substitute for init as "process id 1". Instead, it is meant to be used to control processes related to a project or a customer, and is meant to start like any other program at boot time.
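A minimal sketch of a supervisord.conf program entry; the program name and command are hypothetical:

    [program:myapp]
    command=/usr/local/bin/myapp --port 8080  ; hypothetical long-running process
    autostart=true                            ; start when supervisord starts (e.g. at boot)
    autorestart=true                          ; restart the process if it exits unexpectedly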
"A bootloader is a special program that is executed each time a bootable device is initialized by the computer during its power on or reset that will load the kernel image into the memory. This application is very close to hardware and to the architecture of the CPU. All x86 PCs boot in Real Mode. In this mode you have only 16-bit instructions. Our bootloader runs in Real Mode and our bootloader is a 16-bit program."
Understand and use Docker build-time variables, environment variables, and docker-compose templating the right way.
ARG is only available during the build of a Docker image (RUN etc), not after the image is created and containers are started from it (ENTRYPOINT, CMD).
ENV values are available to containers, and also to RUN-style commands during the Docker build, starting with the line where they are introduced.
If you set an environment variable in an intermediate container using bash (RUN export VARI=5 && …), it will not persist in the next command.
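A small Dockerfile sketch of these three behaviors (the variable names are made up; build with docker build --build-arg SOME_ARG=hello .):

    FROM alpine
    ARG SOME_ARG                        # build-time only; running containers never see it
    ENV SOME_ENV=world                  # available to RUN from here on, and at runtime
    RUN echo "$SOME_ARG $SOME_ENV"      # both values are visible during the build
    RUN export VARI=5 && echo "$VARI"   # VARI exists only within this single RUN
    RUN echo "VARI is now '$VARI'"      # empty: each RUN starts from a fresh container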
An env_file is a convenient way to pass many environment variables to a single command in one batch. It is not to be confused with a .env file; note the dot in front of env: .env, not an env_file.
If you have a file named .env in your project, it's only used to put values into the docker-compose.yml file which is in the same folder. Those are used with Docker Compose and Docker Stack.
Just type docker-compose config. This way you’ll see how the docker-compose.yml file content looks after the substitution step has been performed without running anything else.
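A sketch of the substitution mechanism (the variable and image names are made up):

    # .env (must sit next to docker-compose.yml)
    TAG=1.2.3

    # docker-compose.yml
    version: "3"
    services:
      app:
        image: myorg/myapp:${TAG}   # replaced with 1.2.3 during the substitution step

    # preview the substituted file without running anything:
    #   docker-compose config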
ARG values are also known as build-time variables. They are only available from the moment they are 'announced' in the Dockerfile with an ARG instruction up to the moment when the image is built. Running containers can't access values of ARG variables.
ENV variables are also available during the build, as soon as you introduce them with an ENV instruction. However, unlike ARG, they are also accessible by containers started from the final image. ENV values can be overridden when starting a container.
If you don’t provide a value to expected ARG variables which don’t have a default, you’ll get an error message.
In docker-compose, build-time values for ARG variables are supplied in the args block of a service's build section.
You can use ARG to set the default values of ENV vars, which gives you dynamic on-build env values.
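A sketch of that pattern (names made up; override per build with docker build --build-arg APP_VERSION=2.0.0 .):

    FROM alpine
    ARG APP_VERSION=latest          # build-time default, overridable per build
    ENV APP_VERSION=$APP_VERSION    # baked into the image as the runtime value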
There are three ways to provide environment variable values to a container, sketched below:
1. Provide values one by one
2. Pass environment variable values from your host
3. Take values from a file (env_file)
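Illustrated with docker run (the variable, image, and file names are placeholders):

    docker run -e VAR=value myimage                 # 1. provide values one by one
    docker run -e VAR myimage                       # 2. pass VAR through from the host
    docker run --env-file ./settings.env myimage    # 3. take values from a file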
For each RUN statement, a new container is launched from an intermediate image. An image is saved by the end of the command, but environment variables do not persist that way.
The precedence, from strongest to weakest, is: values the containerized application sets itself, values from single environment entries, values from the env_file(s), and finally Dockerfile defaults.
Refer to the YAML Anchors/Aliases documentation for information about how to alias and reuse syntax to keep your .circleci/config.yml file small.
workflow orchestration with two parallel jobs
jobs run according to configured requirements, each job waiting to start until the required job finishes successfully
requires: key
fans out to run a set of acceptance test jobs in parallel, and finally fans in to run a common deploy job.
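A sketch of such a workflow in .circleci/config.yml; the job names are hypothetical:

    workflows:
      version: 2
      build_accept_deploy:
        jobs:
          - build
          - acceptance_test_1:    # fan-out: both tests start once build succeeds
              requires:
                - build
          - acceptance_test_2:
              requires:
                - build
          - deploy:               # fan-in: waits for every acceptance test
              requires:
                - acceptance_test_1
                - acceptance_test_2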
Holding a Workflow for a Manual Approval
Workflows can be configured to wait for manual approval of a job before continuing to the next job.
add a job to the jobs list with the key type: approval
approval is a special job type that is only available to jobs under the workflow key
The name of the job to hold is arbitrary; it could be wait or pause, for example, as long as the job has a type: approval key in it.
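For example (the job names are hypothetical):

    workflows:
      version: 2
      build_with_approval:
        jobs:
          - build
          - hold:                 # arbitrary name; the type is what matters
              type: approval      # pauses the workflow until approved in the app
              requires:
                - build
          - deploy:
              requires:
                - hold            # runs only after the manual approval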
schedule a workflow to run at a certain time for specific branches.
The triggers key is only added under your workflows key
using cron syntax to represent Coordinated Universal Time (UTC) for specified branches.
By default, a workflow is triggered on every git push. The commit workflow has no triggers key and will run on every git push.
The nightly workflow has a triggers key and will run on the specified schedule.
Cron step syntax (for example, */1, */20) is not supported.
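A sketch of the two workflows described above (the branch and time are assumptions):

    workflows:
      version: 2
      commit:                     # no triggers key: runs on every git push
        jobs:
          - build
      nightly:
        triggers:
          - schedule:
              cron: "0 0 * * *"   # midnight UTC; step syntax such as */20 is unsupported
              filters:
                branches:
                  only:
                    - master
        jobs:
          - build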
Use a context to share environment variables: the workflow will use the same shared environment variables when initiated by a user who is part of the organization.
CircleCI does not run workflows for tags unless you explicitly specify tag filters.
CircleCI branch and tag filters support the Java variant of regex pattern matching.
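For instance, to run a job only for release-style tags (the pattern is an assumption):

    workflows:
      version: 2
      tagged_release:
        jobs:
          - deploy:
              filters:
                tags:
                  only: /^v.*/    # Java-style regex, e.g. matches v1.0.0
                branches:
                  ignore: /.*/    # do not run this job for branch pushes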
Each workflow has an associated workspace which can be used to transfer files to downstream jobs as the workflow progresses.
The workspace is an additive-only store of data.
Jobs can persist data to the workspace
Downstream jobs can attach the workspace to their container filesystem.
Attaching the workspace downloads and unpacks each layer based on the ordering of the upstream jobs in the workflow graph.
Workflows that include jobs running on multiple branches may require data to be shared using workspaces
To persist data from a job and make it available to other jobs, configure the job to use the persist_to_workspace key.
Files and directories named in the paths: property of persist_to_workspace will be uploaded to the workflow’s temporary workspace relative to the directory specified with the root key.
Configure a job to get saved data by configuring the attach_workspace key.
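A sketch of both halves (the image and paths are made up):

    jobs:
      build:
        docker:
          - image: cimg/base:stable
        steps:
          - run: mkdir -p artifacts && echo "built" > artifacts/out.txt
          - persist_to_workspace:
              root: .             # paths below are relative to this root
              paths:
                - artifacts
      deploy:
        docker:
          - image: cimg/base:stable
        steps:
          - attach_workspace:
              at: /tmp/workspace  # upstream layers are unpacked here, in graph order
          - run: cat /tmp/workspace/artifacts/out.txt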
To rerun only a workflow’s failed jobs, click the Workflows icon in the app and select a workflow to see the status of each job, then click the Rerun button and select Rerun from failed.
If you do not see your workflows triggering, a configuration error is likely preventing the workflow from starting; check the Workflows page of the CircleCI app (not the Job page).
DevOps is a set of practices that automates the processes between software development and IT teams, in order that they can build, test, and release software faster and more reliably.
The benefits: increased trust, faster software releases, the ability to solve critical issues quickly, and better management of unplanned work.
bringing together the best of software development and IT operations.
a firm handshake between development and operations
DevOps isn’t magic, and transformations don’t happen overnight.
Infrastructure as code
Culture is the #1 success factor in DevOps.
Building a culture of shared responsibility, transparency and faster feedback is the foundation of every high performing DevOps team.
'not our problem' mentality
DevOps is that change in mindset of looking at the development process holistically and breaking down the barrier between Dev and Ops.
Speed is everything.
Lack of automated test and review cycles blocks the release to production, and poor incident response time kills velocity and team confidence.
Open communication helps Dev and Ops teams swarm on issues, fix incidents, and unblock the release pipeline faster.
Unplanned work is a reality that every team faces, and one that most often impacts team productivity.
“cross-functional collaboration.”
All the tooling and automation in the world are useless if they aren’t accompanied by a genuine desire on the part of development and IT/Ops professionals to work together.
DevOps doesn’t solve tooling problems. It solves human problems.
Forming project- or product-oriented teams to replace function-based teams is a step in the right direction.
sharing a common goal and having a plan to reach it together
join sprint planning sessions, daily stand-ups, and sprint demos.
DevOps culture across every department
open channels of communication, and talk regularly
continuous delivery: the practice of running each code change through a gauntlet of automated tests, often facilitated by cloud-based infrastructure, then packaging up successful builds and promoting them up toward production using automated deploys.
automated deploys alert IT/Ops to server “drift” between environments, which reduces or eliminates surprises when it’s time to release.
“configuration as code.”
when DevOps uses automated deploys to send thoroughly tested code to identically provisioned environments, “Works on my machine!” becomes irrelevant.
A DevOps mindset sees opportunities for continuous improvement everywhere.
regular retrospectives
A/B testing
failure is inevitable. So you might as well set up your team to absorb it, recover, and learn from it (some call this “being anti-fragile”).
Postmortems focus on where processes fell down and how to strengthen them – not on which team member f'ed up the code.
Our engineers are responsible for QA, writing and running their own tests to get the software out to customers.
How long did it take to go from development to deployment?
How long does it take to recover after a system failure?
service level agreements (SLAs)
DevOps isn't any single person's job. It's everyone's job.
DevOps is big on the idea that the same people who build an application should be involved in shipping and running it.
developers and operators pair with each other in each phase of the application’s lifecycle.
Matcher rules determine if a particular request should be forwarded to a backend. Separate multiple rule values by , (comma) to forward if any rule matches; separate them by ; (semicolon) to forward only if all rules match.
In order to use regular expressions with Host and Path matchers, you must declare an arbitrarily named variable followed by the colon-separated regular expression, all enclosed in curly braces.
Use a *Prefix* matcher if your backend listens on a particular base path but also serves requests on sub-paths.
For instance, PathPrefix: /products would match /products but also /products/shoes and /products/shirts.
Since the path is forwarded as-is, your backend is expected to listen on /products
Use Path if your backend listens on the exact path only. For instance, Path: /products would match /products but not /products/shoes.
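In Traefik 1.x TOML, frontends using these matchers might look like this (the hostnames and backend names are assumptions):

    [frontends]
      [frontends.products]
        backend = "products"
        [frontends.products.routes.catalog]
          # ';' gives ALL semantics: host AND path prefix must both match
          rule = "Host:shop.example.com;PathPrefix:/products"
      [frontends.posts]
        backend = "blog"
        [frontends.posts.routes.byid]
          # regex matcher: named variable, colon, regex, enclosed in curly braces
          rule = "Path:/posts/{id:[0-9]+}"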
Modifier rules ALWAYS apply after the Matcher rules.
A backend is responsible for load-balancing the traffic coming from one or more frontends to a set of HTTP servers.
wrr: Weighted Round Robin
drr: Dynamic Round Robin: increases weights on servers that perform better than others.
A circuit breaker can also be applied to a backend, preventing high loads on failing servers.
To proactively prevent backends from being overwhelmed with high load, a maximum connection limit can also be applied to each backend.
Sticky sessions are supported with both load balancers.
When sticky sessions are enabled, a cookie is set on the initial request.
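A hedged sketch of a Traefik 1.x backend combining these options (the backend name is arbitrary and cookieName is optional; a default is generated otherwise):

    [backends.products.loadbalancer]
      method = "drr"                             # or "wrr" (the default)
    [backends.products.loadbalancer.stickiness]
      cookieName = "my_sticky_cookie"            # cookie set on the initial request
    [backends.products.circuitbreaker]
      expression = "NetworkErrorRatio() > 0.5"   # trips to protect a failing backend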
The check is defined by a path appended to the backend URL and an interval (given in a format understood by time.ParseDuration) specifying how often the health check should be executed (the default being 30 seconds).
Each backend must respond to the health check within 5 seconds.
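For example (the path and interval are assumptions):

    [backends.products.healthcheck]
      path = "/health"    # appended to each server URL
      interval = "10s"    # any duration time.ParseDuration accepts; default is 30s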
The static configuration is the global configuration which sets up connections to configuration backends and entrypoints.
We only need to enable the watch option to make Træfik watch configuration backend changes and generate its configuration automatically.
Separate the regular expression and the replacement by a space.
a comma-separated key/value pair where both key and value must be literals.
namespacing of your backends happens on the basis of hosts in addition to paths
Modifiers will be applied in a pre-determined order regardless of their order in the rule configuration section.
customize priority
Custom headers can be configured through the frontends to add headers to either requests or responses that match the frontend's rules.
Security related headers (HSTS headers, SSL redirection, Browser XSS filter, etc) can be added and configured per frontend in a similar manner to the custom headers above.
Servers are simply defined using a URL. You can also apply a custom weight to each server (this will be used by load-balancing).
Maximum connections can be configured by specifying an integer value for maxconn.amount and maxconn.extractorfunc which is a strategy used to determine how to categorize requests in order to evaluate the maximum connections.
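A sketch of servers with weights plus a connection limit (the URLs and numbers are made up):

    [backends.products.maxconn]
      amount = 10                       # at most 10 simultaneous connections...
      extractorfunc = "request.host"    # ...per category extracted from each request
    [backends.products.servers.server1]
      url = "http://10.0.0.10:80"
      weight = 10                       # used by the load balancer
    [backends.products.servers.server2]
      url = "http://10.0.0.11:80"
      weight = 1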
"Zipkin is a distributed tracing system. It helps gather timing data needed to troubleshoot latency problems in microservice architectures. It manages both the collection and lookup of this data. Zipkin's design is based on the Google Dapper paper."
"In computing, the Two Generals Problem is a thought experiment meant to illustrate the pitfalls and design challenges of attempting to coordinate an action by communicating over an unreliable link. In the experiment, two generals are only able to communicate with one another by sending a messenger through enemy territory. The experiment asks how they might reach an agreement on the time to launch an attack, while knowing that any messenger they send could be captured."
the abstraction of the infrastructure layer, which is now considered code. Deployment of a new application may require the deployment of new infrastructure code as well.
"big bang" deployments update whole or large parts of an application in one fell swoop.
Big bang deployments required the business to conduct extensive development and testing before release, often associated with the "waterfall model" of large sequential releases.
Rollbacks are often costly, time-consuming, or even impossible.
In a rolling deployment, an application’s new version gradually replaces the old one.
new and old versions will coexist without affecting functionality or user experience.
Each container is modified to download the latest image from the app vendor’s site.
two identical production environments work in parallel.
Once the testing results are successful, application traffic is routed from blue to green.
In a blue-green deployment, both systems use the same persistence layer or database back end.
You can use the primary database (blue) for write operations and the secondary (green) for read operations.
Blue-green deployments rely on traffic routing.
long DNS TTL values can delay these changes.
The main challenge of canary deployment is to devise a way to route some users to the new application.
Using application logic to unlock new features to specific users and groups.
With CD, the CI-built code artifact is packaged and always ready to be deployed in one or more environments.
Use Build Automation tools to automate environment builds
Use configuration management tools
Enable automated rollbacks for deployments
An application performance monitoring (APM) tool can help your team monitor critical performance metrics including server response times after deployments.
Executors define the environment in which the steps of a job will be run.
Executor declarations in config outside of jobs can be used by all jobs in the scope of that declaration, allowing you to reuse a single executor definition across multiple jobs.
It is also possible to allow an orb to define the executor used by all of its commands.
When invoking an executor in a job, any keys in the job itself will override those of the invoked executor.
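A sketch in CircleCI 2.1 syntax (the executor name and image are assumptions):

    executors:
      my-executor:              # declared once, outside any job
        docker:
          - image: cimg/ruby:3.2
        environment:
          LOG_LEVEL: debug

    jobs:
      my-job:
        executor: my-executor   # reused here; keys set in the job would win
        steps:
          - run: echo "running in the shared executor"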
Steps are used when you have a job or command that needs to mix predefined and user-defined steps.
Use the enum parameter type when you want to enforce that the value must be one from a specific set of string values.
Use an executor parameter type to allow the invoker of a job to decide what executor it will run on.
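For example, an enum parameter (the names are hypothetical):

    jobs:
      test:
        parameters:
          log-level:
            type: enum
            default: "info"
            enum: ["debug", "info", "warn"]   # only these values are accepted
        docker:
          - image: cimg/base:stable
        steps:
          - run: echo "log level is << parameters.log-level >>"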
invoke the same job more than once in the workflows stanza of config.yml, passing any necessary parameters as subkeys to the job.
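Reusing the hypothetical test job above, the same job invoked twice with different parameter values:

    workflows:
      version: 2
      tests:
        jobs:
          - test:
              name: test-debug    # distinct names for the two invocations
              log-level: "debug"  # parameter passed as a subkey
          - test:
              name: test-warn
              log-level: "warn"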
If a job is declared inside an orb it can use commands in that orb or the global commands.
To use parameters in executors, define the parameters under the given executor.
Parameters are in-scope only within the job or command that defined them.
A single configuration may invoke a job multiple times.
Every job invocation may optionally accept two special arguments: pre-steps and post-steps.
Pre and post steps allow you to execute steps in a given job without modifying the job.
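For example (the echo steps are placeholders):

    workflows:
      version: 2
      build:
        jobs:
          - test:
              pre-steps:          # executed before the job's own steps
                - run: echo "preparing"
              post-steps:         # executed after the job's own steps
                - run: echo "cleaning up"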
Conditions are checked before a workflow is actually run, so you cannot use a condition to check an environment variable.
Conditional steps may be located anywhere a regular step could and may only use parameter values as inputs.
A conditional step consists of a step with the key when or unless. Under this conditional key are the subkeys steps and condition
A condition is a single value that evaluates to true or false at the time the config is processed, so you cannot use environment variables as conditions
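A sketch of a conditional step (the parameter name is hypothetical):

    jobs:
      deploy:
        parameters:
          dry-run:
            type: boolean
            default: true
        docker:
          - image: cimg/base:stable
        steps:
          - when:                 # 'unless' runs the steps when the condition is false
              condition: << parameters.dry-run >>
              steps:
                - run: echo "dry run only, skipping the real deploy"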
"Cartographer is a system that provides real-time simultaneous localization and mapping (SLAM) in 2D and 3D across multiple platforms and sensor configurations."