"
Build and connect intelligent bots to interact with your users naturally wherever they are, from text/sms to Skype, Slack, Office 365 mail and other popular services."
"DistributedLog (DL) is a high-performance, replicated log service, offering durability, replication and strong consistency as essentials for building reliable distributed systems.
"
"ImagePlay is a rapid prototyping tool for building and testing image processing algorithms.
It comes with more than 70 individual image processors which can be combined into complex process chains.
ImagePlay is completely open source and can be built for Windows, Mac and Linux."
Docker as a project offers you a complete set of higher-level tools to carry everything that forms an application across systems and machines, virtual or physical, and brings many more benefits along with it.
docker daemon: used to manage docker (LXC) containers on the host it runs on
docker CLI: used to command and communicate with the docker daemon
images: snapshots of containers or base OS (e.g. Ubuntu) images
Dockerfiles: scripts automating the building process of images
Docker containers are basically directories which can be packed (e.g. tar-archived) like any other, then shared and run across various different machines and platforms (hosts).
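A quick sketch of that portability using the stock docker CLI (the container and image names here are made-up examples):

```sh
# Export a stopped container's filesystem as a plain tar archive.
docker export my_container > my_container.tar

# On another machine, import the archive as a fresh image.
docker import my_container.tar my_image:latest
```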
Linux Containers can be defined as a combination of various kernel-level features (i.e. things the Linux kernel can do) which allow applications, and the resources they use, to be managed within their own isolated environment.
Each container is layered like an onion and each action taken within a container consists of putting another block (which actually translates to a simple change within the file system) on top of the previous one.
Each docker container starts from a docker image which forms the base for other applications and layers to come.
Docker images constitute the base of docker containers from which everything starts to form: a solid, consistent and dependable base with everything that is needed to run the applications.
As more layers (tools, applications etc.) are added on top of the base, new images can be formed by committing these changes.
a Dockerfile for automated image building
Dockerfiles are scripts containing a successive series of instructions, directions, and commands which are to be executed to form a new docker image.
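A minimal sketch of such a script (the base image, package, and paths are illustrative choices, not from the source):

```dockerfile
# Start from a base image; everything else layers on top of it.
FROM ubuntu:22.04
# Each RUN instruction adds a new layer (a tool/application on top of the base).
RUN apt-get update && apt-get install -y nginx
# Copy application files into the image (another layer).
COPY ./app /srv/app
# Default command for containers started from this image.
CMD ["nginx", "-g", "daemon off;"]
```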
As you work with a container and continue to perform actions on it (e.g. downloading and installing software, configuring files), you need to “commit” to have it keep its state.
Please remember to “commit” all your changes.
When you "run" any process using an image, in return, you will have a container.
When the process is not actively running, this container will be a non-running container. Nonetheless, all containers will reside on your system until you remove them via the rm command.
To create a new container, you need to use a base image and specify a command to run.
you cannot change the command you run after having created a container (hence the need to specify one during "creation")
If you would like to save the progress and changes you made with a container, you can use “commit”
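Putting run, commit and rm together, a sketch of the lifecycle described above (container and image names are arbitrary):

```sh
# Create a container from a base image; the command is fixed at creation time.
docker run -it --name mycontainer ubuntu /bin/bash
# ...inside the container: install software, configure files, then exit...

# Commit the container's changes so they persist as a new image.
docker commit mycontainer myrepo/myimage:v1

# Non-running containers stay on the system until explicitly removed.
docker rm mycontainer
```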
"Security Onion is a Linux distro for intrusion detection, network security monitoring, and log management. It's based on Ubuntu and contains Snort, Suricata, Bro, OSSEC, Sguil, Squert, ELSA, Xplico, NetworkMiner, and many other security tools. The easy-to-use Setup wizard allows you to build an army of distributed sensors for your enterprise in minutes!"
"The DHCP bootloaders will automatically get a network address if you have DHCP on your network while the static bootloaders will prompt you for network information.
SHA256 checksums are generated during each build of iPXE and are located here. You can also view the scripts that are embedded into the images here.
"
Creates executables like those on Windows or Mac.
""As a user, I want to download an application from the original author, and run it on my Linux desktop system just like I would do with a Windows or Mac application."
"As an application author, I want to provide packages for Linux desktop systems, without the need to get it 'into' a distribution and without having to build for gazillions of different distributions.""
Refer to the YAML Anchors/Aliases documentation for information about how to alias and reuse syntax to keep your .circleci/config.yml file small.
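A sketch of that pattern, assuming a config with two jobs that share the same defaults (job names and the image are illustrative):

```yaml
defaults: &defaults            # anchor: define the reusable block once
  docker:
    - image: cimg/node:18.0
  working_directory: ~/repo

jobs:
  build:
    <<: *defaults              # alias: merge the shared defaults in
    steps:
      - checkout
      - run: npm run build
  test:
    <<: *defaults              # reused again, keeping the file small
    steps:
      - checkout
      - run: npm test
```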
workflow orchestration with two parallel jobs
jobs run according to requirements configured with the requires: key, each job waiting to start until the required job finishes successfully
the workflow fans out to run a set of acceptance test jobs in parallel, and finally fans in to run a common deploy job.
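A sketch of that fan-out/fan-in shape in .circleci/config.yml (job names are illustrative):

```yaml
workflows:
  version: 2
  build_accept_deploy:
    jobs:
      - build
      - acceptance_test_1:     # fan-out: each test only requires build
          requires:
            - build
      - acceptance_test_2:
          requires:
            - build
      - deploy:                # fan-in: waits for every acceptance test
          requires:
            - acceptance_test_1
            - acceptance_test_2
```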
Holding a Workflow for a Manual Approval
Workflows can be configured to wait for manual approval of a job before continuing to the next job: add a job to the jobs list with the key type: approval.
approval is a special job type that is only available to jobs under the workflow key
The name of the job to hold is arbitrary - it could be wait or pause, for example - as long as the job has a type: approval key in it.
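For example, a sketch using hold as the arbitrary approval job name:

```yaml
workflows:
  version: 2
  build_with_approval:
    jobs:
      - build
      - hold:                  # pauses here until someone approves in the UI
          type: approval
          requires:
            - build
      - deploy:
          requires:
            - hold             # runs only after manual approval
```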
schedule a workflow to run at a certain time for specific branches
The triggers key is only added under your workflows key, using cron syntax to represent Coordinated Universal Time (UTC) for specified branches.
By default, a workflow is triggered on every git push. The commit workflow has no triggers key and will run on every git push.
The nightly workflow has a triggers key and will run on the specified schedule.
Cron step syntax (for example, */1, */20) is not supported.
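A sketch of both workflow kinds side by side (job names are illustrative; the cron string means midnight UTC):

```yaml
workflows:
  version: 2
  commit:                      # no triggers key: runs on every git push
    jobs:
      - test
  nightly:                     # triggers key: runs on the schedule instead
    triggers:
      - schedule:
          cron: "0 0 * * *"    # 00:00 UTC; step syntax like */20 is not supported
          filters:
            branches:
              only:
                - master
    jobs:
      - coverage
```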
use a context to share environment variables: the workflow uses the same shared environment variables when initiated by a user who is part of the organization
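A sketch of attaching a context to a workflow job (org-global is CircleCI's default context name; the job name is illustrative):

```yaml
workflows:
  version: 2
  build_and_test:
    jobs:
      - test:
          context: org-global  # the job inherits the context's environment variables
```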
CircleCI does not run workflows for tags unless you explicitly specify tag filters.
CircleCI branch and tag filters support the Java variant of regex pattern matching.
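A sketch of a tag-filtered deploy job; the pattern is a Java-style regex matching tags like v1.2.3:

```yaml
workflows:
  version: 2
  release:
    jobs:
      - deploy:
          filters:
            tags:
              only: /^v\d+\.\d+\.\d+$/   # run for semver-style tags only
            branches:
              ignore: /.*/               # and for no branches
```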
Each workflow has an associated workspace which can be used to transfer files to downstream jobs as the workflow progresses.
The workspace is an additive-only store of data.
Jobs can persist data to the workspace
Downstream jobs can attach the workspace to their container filesystem.
Attaching the workspace downloads and unpacks each layer based on the ordering of the upstream jobs in the workflow graph.
Workflows that include jobs running on multiple branches may require data to be shared using workspaces
To persist data from a job and make it available to other jobs, configure the job to use the persist_to_workspace key.
Files and directories named in the paths: property of persist_to_workspace will be uploaded to the workflow’s temporary workspace relative to the directory specified with the root key.
Configure a job to get saved data by configuring the attach_workspace key.
persist_to_workspace
attach_workspace
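A sketch of both keys in use: the build job persists a file, and the downstream deploy job attaches the workspace to read it (images, paths, and file names are illustrative):

```yaml
jobs:
  build:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run: mkdir -p workspace && echo built > workspace/app.txt
      - persist_to_workspace:
          root: workspace      # paths below are relative to this root
          paths:
            - app.txt          # uploaded to the workflow's temporary workspace
  deploy:
    docker:
      - image: cimg/base:stable
    steps:
      - attach_workspace:
          at: /tmp/workspace   # layers unpack in upstream-job order
      - run: cat /tmp/workspace/app.txt
```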
To rerun only a workflow’s failed jobs, click the Workflows icon in the app and select a workflow to see the status of each job, then click the Rerun button and select Rerun from failed.
If you do not see your workflows triggering, a configuration error is likely preventing the workflow from starting; check the Workflows page of the CircleCI app (not the Job page).
All strings within templates are processed by a common Packer templating
engine, where variables and functions can be used to modify the value of a
configuration parameter at runtime.
Anything template related happens within double-braces: {{ }}.
Functions are specified directly within the braces, such as {{timestamp}}.
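A sketch of a JSON-format Packer template using those braces; the region, AMI ID, and variable name are placeholders, while {{timestamp}} and {{user ...}} are standard template functions:

```json
{
  "variables": {
    "name_prefix": "myapp"
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-0123456789abcdef0",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "{{user `name_prefix`}}-{{timestamp}}"
    }
  ]
}
```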
docker-compose -f docker-compose.yml -f docker-compose-dev.yml up
add the external volume and the mount in the override file (see the sketch below)
If the folder we mount to has been declared as a VOLUME during image build, its content will be merged with the named volume we mount from the host.
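A sketch of the override file (docker-compose-dev.yml) adding an external named volume and its mount; service, volume, and path names are illustrative:

```yaml
version: "3"
services:
  app:
    volumes:
      - app_data:/var/lib/app   # if /var/lib/app is a VOLUME in the image,
                                # its content is merged into app_data
volumes:
  app_data:
    external: true              # the volume is created outside this compose file
```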
Any data that needs to persist must be stored in a stateful backing service, typically a database.
The memory space or filesystem of the process can be used as a brief, single-transaction cache.
wipe out all local (e.g., memory and filesystem) state
compiling during the build stage
“sticky sessions” – that is, caching user session data in memory of the app’s process and expecting future requests from the same visitor to be routed to the same process.
Sticky sessions are a violation of twelve-factor and should never be used or relied upon
the abstraction of the infrastructure layer, which is now considered code. Deployment of a new application may require the deployment of new infrastructure code as well.
"big bang" deployments update whole or large parts of an application in one fell swoop.
Big bang deployments required the business to conduct extensive development and testing before release, often associated with the "waterfall model" of large sequential releases.
Rollbacks are often costly, time-consuming, or even impossible.
In a rolling deployment, an application’s new version gradually replaces the old one.
new and old versions will coexist without affecting functionality or user experience.
Each container is modified to download the latest image from the app vendor’s site.
two identical production environments work in parallel.
Once the testing results are successful, application traffic is routed from blue to green.
In a blue-green deployment, both systems use the same persistence layer or database back end.
You can have blue use the primary database for write operations while green uses the secondary for read operations.
Blue-green deployments rely on traffic routing; when routing is handled via DNS records, long TTL values can delay these changes.
The main challenge of canary deployment is to devise a way to route some users to the new application.
Using application logic to unlock new features for specific users and groups.
With CD, the CI-built code artifact is packaged and always ready to be deployed in one or more environments.
Use Build Automation tools to automate environment builds
Use configuration management tools
Enable automated rollbacks for deployments
An application performance monitoring (APM) tool can help your team monitor critical performance metrics including server response times after deployments.