every Kubernetes operation is exposed as an API endpoint and can be executed by an HTTP request to this endpoint.
the main job of kubectl is to carry out HTTP requests to the Kubernetes API
Kubernetes maintains an internal state of resources, and all Kubernetes operations are CRUD operations on these resources.
Kubernetes is a fully resource-centred system
Kubernetes API reference is organised as a list of resource types with their associated operations.
This is how kubectl works for all commands that interact with the Kubernetes cluster.
kubectl simply makes HTTP requests to the appropriate Kubernetes API endpoints.
it's totally possible to control Kubernetes with a tool like curl by manually issuing HTTP requests to the Kubernetes API.
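curl works, and so does any other HTTP client. A minimal sketch in Python (assuming a local kubectl proxy running on its default port 8001, which handles authentication against your kubeconfig):

import requests

# kubectl proxy listens on 127.0.0.1:8001 by default and forwards
# authenticated requests to the API server
resp = requests.get("http://127.0.0.1:8001/api/v1/namespaces/default/pods")
resp.raise_for_status()

for pod in resp.json()["items"]:
    print(pod["metadata"]["name"], pod["status"]["phase"])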
Kubernetes consists of a set of independent components that run as separate processes on the nodes of a cluster.
components on the master nodes
Storage backend: stores resource definitions (usually etcd is used)
API server: provides Kubernetes API and manages storage backend
Controller manager: ensures resource statuses match specifications
Scheduler: schedules Pods to worker nodes
component on the worker nodes
Kubelet: manages execution of containers on a worker node
triggers the ReplicaSet controller, which is a sub-process of the controller manager.
the scheduler, which watches for Pod definitions that are not yet scheduled to a worker node.
creating and updating resources in the storage backend on the master node.
The kubelet of the worker node your ReplicaSet Pods have been scheduled to instructs the configured container runtime (which may be Docker) to download the required container images and run the containers.
Kubernetes components (except the API server and the storage backend) work by watching for resource changes in the storage backend and manipulating resources in the storage backend.
However, these components do not access the storage backend directly, but only through the Kubernetes API.
double usage of the Kubernetes API for internal components as well as for external users is a fundamental design concept of Kubernetes.
All other Kubernetes components and users read, watch, and manipulate the state (i.e. resources) of Kubernetes through the Kubernetes API
The storage backend stores the state (i.e. resources) of Kubernetes.
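the watch mechanism is plain HTTP as well: a GET with watch=true keeps the connection open and streams one JSON event per line. A rough sketch of a controller-style loop (again going through a local kubectl proxy):

import json
import requests

# stream ADDED/MODIFIED/DELETED events for all Pods in the cluster
with requests.get(
    "http://127.0.0.1:8001/api/v1/pods",
    params={"watch": "true"},
    stream=True,
) as resp:
    for line in resp.iter_lines():
        if not line:
            continue
        event = json.loads(line)
        print(event["type"], event["object"]["metadata"]["name"])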
command completion is a shell feature that works by the means of a completion script.
A completion script is a shell script that defines the completion behaviour for a specific command. Sourcing a completion script enables completion for the corresponding command.
kubectl completion zsh
/etc/bash_completion.d directory (create it, if it doesn't exist)
source <(kubectl completion bash)
source <(kubectl completion zsh)
autoload -Uz compinit
compinit
the API reference, which contains the full specifications of all resources.
kubectl api-resources
displays the resource names in their plural form (e.g. deployments instead of deployment). It also displays the shortname (e.g. deploy) for those resources that have one. Don't worry about these differences. All of these name variants are equivalent for kubectl.
.spec
custom columns output format comes in. It lets you freely define the columns and the data to display in them. You can choose any field of a resource to be displayed as a separate column in the output
kubectl get pods -o custom-columns='NAME:metadata.name,NODE:spec.nodeName'
kubectl explain pod.spec.
kubectl explain pod.metadata.
browse the resource specifications and try it out with any fields you like!
JSONPath is a language to extract data from JSON documents (it is similar to XPath for XML).
with the custom columns output format, only a subset of the JSONPath capabilities is supported
Many fields of Kubernetes resources are lists, and this operator allows you to select items of these lists. It is often used with a wildcard as [*] to select all items of the list.
kubectl get pods -o custom-columns='NAME:metadata.name,IMAGES:spec.containers[*].image'
a Pod may contain more than one container.
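to see what the [*] wildcard expands to, the same extraction can be done by hand on the JSON output; a small sketch:

import json
import subprocess

# fetch the same data the custom-columns JSONPath walks, as raw JSON
out = subprocess.run(
    ["kubectl", "get", "pods", "-o", "json"],
    capture_output=True, check=True, text=True,
).stdout

for pod in json.loads(out)["items"]:
    # spec.containers[*].image: one image per container in the Pod
    images = [c["image"] for c in pod["spec"]["containers"]]
    print(pod["metadata"]["name"], ",".join(images))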
The availability zones for each node are obtained through the special failure-domain.beta.kubernetes.io/zone label.
kubectl get nodes -o yaml
kubectl get nodes -o json
The default kubeconfig file is ~/.kube/config
with multiple clusters, then you have connection parameters for multiple clusters configured in your kubeconfig file.
Within a cluster, you can set up multiple namespaces (a namespace is a kind of "virtual cluster" within a physical cluster)
override the default kubeconfig file with the --kubeconfig option for every kubectl command.
Namespace: the namespace to use when connecting to the cluster
a one-to-one mapping between clusters and contexts.
When kubectl reads a kubeconfig file, it always uses the information from the current context.
just change the current context in the kubeconfig file
to switch to another namespace in the same cluster, you can change the value of the namespace element of the current context
kubectl also provides the --cluster, --user, --namespace, and --context options that allow you to override individual elements and the current context itself, regardless of what is set in the kubeconfig file.
a popular tool for switching between clusters and namespaces is kubectx.
kubectl config get-contexts
just have to download the shell scripts named kubectl-ctx and kubectl-ns to any directory in your PATH and make them executable (for example, with chmod +x)
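the contexts that kubectl config get-contexts prints come straight from the kubeconfig YAML; a sketch that reads it directly (assuming the default ~/.kube/config location and PyYAML installed):

import os
import yaml

with open(os.path.expanduser("~/.kube/config")) as f:
    cfg = yaml.safe_load(f)

print("current context:", cfg.get("current-context"))
for ctx in cfg.get("contexts", []):
    c = ctx["context"]
    # each context bundles a cluster, a user, and (optionally) a namespace
    print(ctx["name"], "->", c.get("cluster"), c.get("user"), c.get("namespace", "default"))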
kubectl proxy
kubectl get roles
kubectl get pod
Kubectl plugins are distributed as simple executable files with a name of the form kubectl-x. The prefix kubectl- is mandatory.
To install a plugin, you just have to copy the kubectl-x file to any directory in your PATH and make it executable (for example, with chmod +x)
krew itself is a kubectl plugin
check out the kubectl-plugins GitHub topic
The executable can be of any type, a Bash script, a compiled Go program, a Python script, it really doesn't matter. The only requirement is that it can be directly executed by the operating system.
kubectl plugins can be written in any programming or scripting language.
you can write more sophisticated plugins with real programming languages, for example, using a Kubernetes client library. If you use Go, you can also use the cli-runtime library, which exists specifically for writing kubectl plugins.
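a toy plugin sketch in Python (hypothetical name kubectl-hello; save it as kubectl-hello somewhere on your PATH and chmod +x it, then run kubectl hello):

#!/usr/bin/env python3
# kubectl invokes this file when you run `kubectl hello`,
# passing any remaining arguments through
import subprocess
import sys

print("hello from a kubectl plugin! args:", sys.argv[1:])

# plugins are free to shell out to kubectl itself
subprocess.run(["kubectl", "config", "current-context"])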
a kubeconfig file consists of a set of contexts
changing the current context means changing the cluster, if you have only a single context per cluster.
Keyfiles are bare-minimum forms of security and are best suited for testing or
development environments.
With keyfile authentication, each mongod instance in the replica set uses the contents of the keyfile as the shared password for authenticating the other members in the deployment.
On UNIX systems, the keyfile must not have group or world
permissions.
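a keyfile is commonly generated with openssl rand -base64 756 followed by chmod 400; a Python sketch of the same recipe (hypothetical file name mongo-keyfile):

import base64
import os

# 756 random bytes, base64-encoded, mirroring the usual openssl recipe
with open("mongo-keyfile", "wb") as f:
    f.write(base64.b64encode(os.urandom(756)))

# strip group and world permissions, as mongod requires on UNIX systems
os.chmod("mongo-keyfile", 0o400)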
The initial idea is to have the application invoke the destructor of each component as soon as it receives specific signals such as SIGTERM and SIGINT
When you run a Docker container, by default it gets its own PID namespace, which means its processes are isolated from the other processes on your host.
PID 1 inside a PID namespace has an important task: reaping zombie processes.
This uses /bin/bash as PID1 and runs your program as the subprocess.
When a signal is sent to a shell, the signal actually won’t be forwarded to subprocesses.
By using the exec form, we can run our program as PID1
if you use the exec form to run a shell script that spawns your application, remember to use exec to replace the shell process, otherwise it will behave like scenario 1
/bin/bash can handle reaping zombie processes
with Tini, SIGTERM properly terminates your process even if you didn’t explicitly install a signal handler for it.
run tini as PID1 and it will forward the signal for subprocesses.
tini is a signal proxy, and it can also deal with the zombie process issue automatically.
run your program with tini by passing --init flag to docker run
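what tini does can be sketched as a tiny Python init process that forwards termination signals to its child and reaps zombies (a simplified illustration, not a real init):

import os
import signal
import subprocess
import sys

# run the real workload as a child; this script plays PID 1
child = subprocess.Popen(sys.argv[1:])

def forward(signum, frame):
    # forward SIGTERM/SIGINT to the child instead of dying silently
    child.send_signal(signum)

signal.signal(signal.SIGTERM, forward)
signal.signal(signal.SIGINT, forward)

while True:
    # reap any terminated child, including orphans re-parented to PID 1
    pid, status = os.wait()
    if pid == child.pid:
        sys.exit(os.waitstatus_to_exitcode(status))  # Python 3.9+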
with docker stop, Docker waits 10 seconds (by default) before killing the container's process: the main process inside the container receives SIGTERM, and after the grace period the Docker daemon sends SIGKILL to terminate it.
docker kill kills running containers immediately; it's more like kill -9 or kill -SIGKILL
Co stands for cooperation. A co routine is asked to (or better expected to) willingly suspend its execution to give other co-routines a chance to execute too. So a co-routine is about sharing CPU resources (willingly) so others can use the same resource as oneself is using.
A thread on the other hand does not need to suspend its execution. Being suspended is completely transparent to the thread and the thread is forced by underlying hardware to suspend itself.
co-routines cannot be executed concurrently, so race conditions cannot occur.
Concurrency is the separation of tasks to provide interleaved
execution.
Parallelism is the simultaneous execution of multiple
pieces of work in order to increase speed.
With threads, the operating system switches running threads preemptively according to its scheduler, which is an algorithm in the operating system kernel.
With coroutines, the programmer and programming language determine when to switch coroutines
In contrast to threads, which are pre-emptively scheduled by the operating system, coroutine switches are cooperative, meaning the programmer (and possibly the programming language and its runtime) controls when a switch will happen.
preemption
Coroutines are a form of sequential processing: only one is executing at any given time
Threads are (at least conceptually) a form of concurrent processing: multiple threads may be executing at any given time.
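a minimal Python asyncio illustration of the cooperative behaviour: each coroutine chooses its own suspension points, and only one runs at a time:

import asyncio

async def worker(name):
    for i in range(3):
        print(name, i)
        # the explicit await is the cooperative suspension point
        await asyncio.sleep(0)

async def main():
    # the two coroutines interleave, but never run simultaneously
    await asyncio.gather(worker("a"), worker("b"))

asyncio.run(main())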
"Co stands for cooperation. A co routine is asked to (or better expected to) willingly suspend its execution to give other co-routines a chance to execute too. So a co-routine is about sharing CPU resources (willingly) so others can use the same resource as oneself is using."
"The ground-breaking bluetooth beacon - An Open Source JavaScript microcontroller you can program wirelessly. No software needed so get started in seconds."
"TMSU is a tool for tagging your files. It provides a simple command-line tool for applying tags and a virtual filesystem so that you can get a tag-based view of your files from within any other program.
TMSU does not alter your files in any way: they remain unchanged on disk, or on the network, wherever you put them. TMSU maintains its own database and you simply gain an additional view, which you can mount, based upon the tags you set up. The only commitment required is your time and there's absolutely no lock-in."
Gobot is set of libraries in the Go programming language for robotics and physical computing.
It provides a simple, yet powerful way to create solutions that incorporate multiple, different hardware devices at the same time.
Want to use Ruby on robots? Check out our sister project Artoo (http://artoo.io).
Want to use Node.js? Check out our sister project Cylon (http://cylonjs.com).
convert an existing Rails helper to a decorator method
That method is presentation-centric, and thus does not belong in a model.
Where does that come from? It's a method of the source Article, whose methods have been made available on the decorator by the delegate_all call above.
a great way to replace procedural helpers like the one above with "real" object-oriented programming
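Draper's example is Ruby; a rough Python analogue of what delegate_all does (hypothetical Article/ArticleDecorator names) is a decorator that falls back to the wrapped model for unknown attributes:

import datetime

class Article:
    def __init__(self, published_at):
        self.published_at = published_at

class ArticleDecorator:
    def __init__(self, source):
        self.source = source

    def __getattr__(self, name):
        # like delegate_all: anything not defined here falls through to the source
        return getattr(self.source, name)

    def formatted_published_at(self):
        # presentation-centric logic lives on the decorator, not the model
        return self.published_at.strftime("%d %b %Y")

article = ArticleDecorator(Article(datetime.datetime(2020, 1, 2)))
print(article.formatted_published_at())  # -> 02 Jan 2020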
{{ ... }} for Expressions to print to the template output
use a dot (.) to access attributes of a variable
the outer double-curly braces are not part of the
variable, but the print statement.
If you access variables inside tags don’t
put the braces around them.
If a variable or attribute does not exist, you will get back an undefined
value.
the default behavior is to evaluate to an empty string if
printed or iterated over, and to fail for every other operation.
the subscript syntax ([]) is useful if an object has an item and an attribute with the same name. Additionally, the attr() filter only looks up attributes.
Variables can be modified by filters. Filters are separated from the
variable by a pipe symbol (|) and may have optional arguments in
parentheses.
Multiple filters can be chained
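a minimal sketch of dot access and chained filters:

from jinja2 import Template

# dot access on a dict, a filter with an argument, then a chained filter
t = Template("{{ user.name | default('anonymous') | upper }} / {{ items | join(', ') }}")
print(t.render(user={"name": "alice"}, items=["a", "b"]))
# -> ALICE / a, b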
Tests can be used
to test a variable against a common expression.
to test a variable, add is plus the name of the test after the variable.
to find out if a variable is defined, you can do name is defined,
which will then return true or false depending on whether name is defined
in the current template context.
strip whitespace in templates by hand. If you add a minus
sign (-) to the start or end of a block (e.g. a For tag), a
comment, or a variable expression, the whitespaces before or after
that block will be removed
do not add whitespace between the tag and the minus sign
mark a block raw
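a small sketch of manual whitespace control and a raw block:

from jinja2 import Template

# the minus signs strip the newlines around the loop body;
# the raw block leaves the inner {{ ... }} unrendered
t = Template(
    "{% for n in [1, 2, 3] -%}\n"
    "{{ n }}\n"
    "{%- endfor %}\n"
    "{% raw %}{{ not rendered }}{% endraw %}"
)
print(t.render())
# -> 123
#    {{ not rendered }}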
Template inheritance
allows you to build a base “skeleton” template that contains all the common
elements of your site and defines blocks that child templates can override.
The {% extends %} tag is the key here. It tells the template engine that
this template “extends” another template.
access templates in subdirectories with a slash
can’t define multiple {% block %} tags with the same name in the
same template
use the special
self variable and call the block with that name
self.title()
super()
put the name of the block after the end tag for better
readability
if the block is replaced by
a child template, a variable would appear that was not defined in the block or
passed to the context.
setting the block to “scoped” by adding the scoped
modifier to a block declaration
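a compact inheritance sketch (templates held in a DictLoader so the example is self-contained):

from jinja2 import DictLoader, Environment

env = Environment(loader=DictLoader({
    "base.html": "<title>{% block title %}Home{% endblock %}</title>"
                 "<body>{% block content %}{% endblock %}</body>",
    "child.html": "{% extends 'base.html' %}"
                  "{% block title %}{{ super() }} - Posts{% endblock %}"
                  "{% block content %}Hello{% endblock %}",
}))
print(env.get_template("child.html").render())
# -> <title>Home - Posts</title><body>Hello</body>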
If you have a variable that may
include any of the following chars (>, <, &, or ") you
SHOULD escape it unless the variable contains well-formed and trusted
HTML.
Jinja2 functions (macros, super, self.BLOCKNAME) always return template
data that is marked as safe.
With the default syntax, control structures appear inside
{% ... %} blocks.
the dictsort filter
loop.cycle
Unlike in Python, it’s not possible to break or continue in a loop
use loops recursively
add the recursive modifier
to the loop definition and call the loop variable with the new iterable
where you want to recurse.
The loop variable always refers to the closest (innermost) loop.
loop.changed() can be used to check whether the value changed at all
use it to test if a variable is defined, not
empty and not false
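a small loop sketch using the loop variable and loop.cycle:

from jinja2 import Template

t = Template(
    "{% for user in users %}"
    "{{ loop.index }}:{{ loop.cycle('odd', 'even') }}:{{ user }} "
    "{% endfor %}"
)
print(t.render(users=["ann", "bob", "cid"]))
# -> 1:odd:ann 2:even:bob 3:odd:cid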
Macros are comparable with functions in regular programming languages.
If a macro name starts with an underscore, it’s not exported and can’t
be imported.
pass a macro to another macro
caller()
a single trailing newline is stripped if present
other whitespace (spaces, tabs, newlines etc.) is returned unchanged
a block tag works in “both”
directions. That is, a block tag doesn’t just provide a placeholder to fill
- it also defines the content that fills the placeholder in the parent.
Python dicts are not ordered (insertion order is only guaranteed since Python 3.7)
caller(user)
call(user)
This is a simple dialog rendered by using a macro and
a call block.
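a minimal macro plus call-block sketch:

from jinja2 import Template

t = Template(
    "{% macro dialog(title) %}"
    "<div><h1>{{ title }}</h1>{{ caller() }}</div>"
    "{% endmacro %}"
    "{% call dialog('Hello') %}body text{% endcall %}"
)
print(t.render())
# -> <div><h1>Hello</h1>body text</div>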
Filter sections allow you to apply regular Jinja2 filters on a block of
template data.
Assignments at
top level (outside of blocks, macros or loops) are exported from the template
like top level macros and can be imported by other templates.
using namespace objects, which allow propagating changes across scopes
use block assignments to
capture the contents of a block into a variable name.
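a sketch combining a namespace assignment, a block assignment, and a filter section:

from jinja2 import Template

t = Template(
    "{% set ns = namespace(found=false) %}"
    "{% for item in items %}{% if item == 'b' %}{% set ns.found = true %}{% endif %}{% endfor %}"
    "found={{ ns.found }} "
    "{% set heading %}captured block{% endset %}"
    "{{ heading }} "
    "{% filter upper %}filtered section{% endfilter %}"
)
print(t.render(items=["a", "b"]))
# -> found=True captured block FILTERED SECTION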
The extends tag can be used to extend one template from another.
Blocks are used for inheritance and act as both placeholders and replacements
at the same time.
The include statement is useful to include a template and return the
rendered contents of that file into the current namespace
Included templates have access to the variables of the active context by
default.
putting often used code into macros
imports are cached
and imported templates don’t have access to the current template variables,
just the globals by default.
Macros and variables starting with one or more underscores are private and
cannot be imported.
By default, included templates are passed the current context and imported
templates are not.
imports are often used just as a module that holds macros.
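a small import sketch where forms.html is just a module of macros (hypothetical template names):

from jinja2 import DictLoader, Environment

env = Environment(loader=DictLoader({
    "forms.html": "{% macro input(name) %}<input name='{{ name }}'>{% endmacro %}",
    "page.html": "{% import 'forms.html' as forms %}{{ forms.input('username') }}",
}))
print(env.get_template("page.html").render())
# -> <input name='username'>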
Integers and floating point numbers are created by just writing the
number down
Everything between two brackets is a list.
Tuples are like lists that cannot be modified (“immutable”).
A dict in Python is a structure that combines keys and values.
//: Divide two numbers and return the truncated integer result
The special constants true, false, and none are indeed lowercase
all Jinja identifiers are lowercase
(expr): group an expression.
The is and in operators support negation using an infix notation
in: Perform a sequence / mapping containment test.
|: Applies a filter.
~: Converts all operands into strings and concatenates them.
use inline if expressions.
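a few of these expressions in one sketch:

from jinja2 import Template

t = Template(
    "{{ 7 // 2 }} "                # truncated integer division -> 3
    "{{ 'a' ~ 1 ~ 'b' }} "         # ~ stringifies and concatenates -> a1b
    "{{ 1 in [1, 2] }} "           # containment test -> True
    "{{ 'yes' if ok else 'no' }}"  # inline if expression
)
print(t.render(ok=False))
# -> 3 a1b True no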
with the attr() filter, always an attribute is returned and items are not looked up.
default(value, default_value=u'', boolean=False): If the value is undefined it will return the passed default value, otherwise the value of the variable
dictsort(value, case_sensitive=False, by='key', reverse=False): Sort a dict and yield (key, value) pairs.
format(value, *args, **kwargs): Apply python string formatting on an object
groupby(value, attribute): Group a sequence of objects by a common attribute. The value you grouped by is stored in the grouper attribute and the list contains all the objects that have this grouper in common.
indent(s, width=4, first=False, blank=False, indentfirst=None): Return a copy of the string with each line indented by 4 spaces. The first line and blank lines are not indented by default.
join(value, d=u'', attribute=None): Return a string which is the concatenation of the strings in the sequence.
map(): Applies a filter on a sequence of objects or looks up an attribute.
pprint(value, verbose=False): Pretty print a variable. Useful for debugging.
reject(): Filters a sequence of objects by applying a test to each object, and rejecting the objects with the test succeeding.
replace(s, old, new, count=None): Return a copy of the value with all occurrences of a substring replaced with a new one.
round(value, precision=0, method='common'): Round the number to a given precision; even if rounded to 0 precision, a float is returned.
select(): Filters a sequence of objects by applying a test to each object, and only selecting the objects with the test succeeding.
sort(value, reverse=False, case_sensitive=False, attribute=None): Sort an iterable. Per default it sorts ascending; if you pass it true as first argument it will reverse the sorting.
striptags(value): Strip SGML/XML tags and replace adjacent whitespace by one space.
tojson(value, indent=None): Dumps a structure to JSON so that it's safe to use in <script> tags.
trim(value): Strip leading and trailing whitespace.
unique(value, case_sensitive=False, attribute=None): Returns a list of unique items from the given iterable
urlize(value, trim_url_limit=None, nofollow=False, target=None, rel=None): Converts URLs in plain text into clickable links.
defined(value): Return true if the variable is defined
in(value, seq): Check if value is in seq.
mapping(value): Return true if the object is a mapping (dict etc.).
number(value): Return true if the variable is a number.
sameas(value, other): Check if an object points to the same memory address as another object
undefined(value): Like defined() but the other way round.
A joiner is passed a string and will return that string every time it's called, except the first time (in which case it returns an empty string).
namespace(...): Creates a new container that allows attribute assignment using the {% set %} tag
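a sketch exercising a few of the filters and tests above (groupby, map, join, default, number):

from jinja2 import Template

t = Template(
    "{{ users | groupby('city') | map(attribute='grouper') | join(', ') }} / "
    "{{ missing | default('n/a') }} / "
    "{{ 3 is number }}"
)
print(t.render(users=[
    {"city": "Oslo", "name": "a"},
    {"city": "Bern", "name": "b"},
    {"city": "Oslo", "name": "c"},
]))
# -> Bern, Oslo / n/a / True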
The with statement makes it possible to create a new inner scope.
Variables set within this scope are not visible outside of the scope.
activate and deactivate the autoescaping from within
the templates
With both trim_blocks and lstrip_blocks enabled, you can put block tags
on their own lines, and the entire block line will be removed when
rendered, preserving the whitespace of the contents
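a sketch of a with block plus in-template autoescaping, with trim_blocks and lstrip_blocks enabled (both tags are built into recent Jinja2 versions):

from jinja2 import Environment

env = Environment(trim_blocks=True, lstrip_blocks=True, autoescape=False)
t = env.from_string(
    "{% with greeting = 'hi' %}\n"
    "{{ greeting }}\n"
    "{% endwith %}\n"
    "{% autoescape true %}{{ '<b>' }}{% endautoescape %}"
)
print(t.render())
# -> hi
#    &lt;b&gt;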
"Cello is a library that brings higher level programming to C.
By acting as a modern, powerful runtime system Cello makes many things easy that were previously impractical or awkward in C such as:
Generic Data Structures
Polymorphic Functions
Interfaces / Type Classes
Constructors / Destructors
Optional Garbage Collection
Exceptions
Reflection
And because Cello works seamlessly alongside standard C you get all the other benefits such as great performance, powerful tooling, and extensive libraries."
MongoDB uses a locking system to ensure data set consistency. If
certain operations are long-running or a queue forms, performance
will degrade as requests and operations wait for the lock.
performance limitations as a result of inadequate
or inappropriate indexing strategies, or as a consequence of poor schema
design patterns.
performance issues may be temporary and related to
abnormal traffic load.
If globalLock.currentQueue.total is consistently high,
then there is a chance that a large number of requests are waiting for
a lock.
If globalLock.totalTime is
high relative to uptime, the database has
existed in a lock state for a significant amount of time.
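these counters live in serverStatus; a quick pymongo sketch against a local mongod:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
status = client.admin.command("serverStatus")

glock = status["globalLock"]
print("queued operations:", glock["currentQueue"]["total"])
print("totalTime (us):", glock["totalTime"])
print("uptime (s):", status["uptime"])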
For write-heavy applications, deploy sharding and add one or more
shards to a sharded cluster to distribute load among
mongod instances.
Unless constrained by system-wide limits, the maximum number of
incoming connections supported by MongoDB is configured with the
maxIncomingConnections setting.
When logLevel is set to 0, MongoDB records slow
operations to the diagnostic log at a rate determined by
slowOpSampleRate.
At higher logLevel settings, all operations appear in
the diagnostic log regardless of their latency with the following
exception
Full Time Diagnostic Data Collection (FTDC) mechanism. FTDC data files
are compressed, are not human-readable, and inherit the same file access
permissions as the MongoDB data files.
mongod processes store FTDC data files in a diagnostic.data directory under the instance's storage.dbPath.
"MongoDB uses a locking system to ensure data set consistency. If certain operations are long-running or a queue forms, performance will degrade as requests and operations wait for the lock."