"Crystal is a programming language with the following goals:
Have a syntax similar to Ruby (but compatibility with it is not a goal)
Statically type-checked but without having to specify the type of variables or method arguments.
Be able to call C code by writing bindings to it in Crystal.
Have compile-time evaluation and generation of code, to avoid boilerplate code.
Compile to efficient native code."
Every time you run a PHP script, PHP needs to initialize modules and launch the Zend Engine for your running environment, and your script has to be compiled to opcodes before the Zend Engine can finally execute them.
In the traditional PHP lifecycle, a lot of time is wasted building and destroying these resources for every script execution.
With a built-in server on top of Swoole, all the scripts can be kept in memory after the first load.
"when you run PHP script every time, PHP needs to initialize modules and launch Zend Engine for your running environment. And your PHP script needs to be compiled to OpCodes and then Zend Engine can finally execute them."
An equivalent in other languages would be JavaScript’s npm, Ruby’s gems or PHP’s Composer.
Maven expects a certain directory structure for your Java source code to live in, and when you later run mvn clean install, the whole compilation and packaging work will be done for you.
any directory that contains a pom.xml file is also a valid Maven project.
A pom.xml file contains everything needed to describe your Java project.
Java source code is meant to live in the "/src/main/java" folder
Maven will put compiled Java classes into the "target/classes" folder
Maven will also build a .jar or .war file, depending on your project, that lives in the "target" folder.
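A minimal pom.xml sketch (the com.example/my-app coordinates are made up for illustration):

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <!-- made-up coordinates identifying this project -->
  <groupId>com.example</groupId>
  <artifactId>my-app</artifactId>
  <version>1.0-SNAPSHOT</version>
  <!-- tells Maven to build a .jar into the target/ folder -->
  <packaging>jar</packaging>
</project>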
Maven has the concept of a build lifecycle, which is made up of different phases.
Because clean is not part of Maven’s default lifecycle, you end up with commands like mvn clean install or mvn clean package. Running install or package will trigger all preceding phases, but you need to specify clean in addition.
Maven will always download your project dependencies into your local maven repository first and then reference them for your build.
local repositories (in your user’s home directory: ~/.m2/)
clean: deletes the /target folder.
mvn clean package
mvn clean install
package: Converts your .java source code into a .jar/.war file and puts it into the /target folder.
install: First, it does a package(!). Then it takes that .jar/.war file and puts it into your local Maven repository, which lives in ~/.m2/repository.
Calling 'mvn install' would be enough if Maven were smart enough to do reliable, incremental builds,
i.e., figuring out which Java source files/modules changed and only compiling those.
Hence, developers have it ingrained to always call 'mvn clean install' (even though this increases build time a lot in bigger projects).
The -U flag makes sure that Maven always tries to download the latest snapshot dependency versions.
Modern IDEs provide developers with sophisticated features like code completion, refactoring, navigating to a symbol's definition, syntax highlighting, and error and warning markers.
an IDE needs a sophisticated understanding of the programming language that the program's source is written in.
Conventional compilers or interpreters for a specific programming language are typically unable to provide these language services, because they are written with the goal of either transforming the source code into object code or immediately executing the code.
Prior to the design and implementation of the Language Server Protocol for the development of Visual Studio Code, most language services were generally tied to a given IDE or other editor.
The Language Server Protocol allows for decoupling language services from the editor so that the services may be contained within a general purpose language server.
LSP is not restricted to programming languages. It can be used for any kind of text-based language, like specifications[7] or domain-specific languages (DSL).
When a user edits one or more source code files using a language server protocol-enabled tool, the tool acts as a client that consumes the language services provided by a language server.
The protocol does not make any provisions about how requests, responses and notifications are transferred between client and server.
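For illustration only, a client request roughly looks like a JSON-RPC 2.0 message framed by a Content-Length header (the file URI and position are made up; the transport itself, e.g. stdio or sockets, is left to the implementation):

Content-Length: ...   (header value elided)

{"jsonrpc": "2.0", "id": 1, "method": "textDocument/definition",
 "params": {"textDocument": {"uri": "file:///project/app.ts"},
            "position": {"line": 3, "character": 12}}}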
"BearSSL is an implementation of the SSL/TLS protocol (RFC 5246) written in C. It aims at offering the following features:
Be correct and secure. In particular, insecure protocol versions and choices of algorithms are not supported, by design; cryptographic algorithm implementations are constant-time by default.
Be small, both in RAM and code footprint. For instance, a minimal server implementation may fit in about 20 kilobytes of compiled code and 25 kilobytes of RAM.
Be highly portable. BearSSL targets not only "big" operating systems like Linux and Windows, but also small embedded systems and even special contexts like bootstrap code.
Be feature-rich and extensible. SSL/TLS has many defined cipher suites and extensions; BearSSL should implement most of them, and allow extra algorithm implementations to be added afterwards, possibly from third parties."
Rails 4 automatically adds the sass-rails, coffee-rails and uglifier
gems to your Gemfile
reduce the number of requests that a browser makes to render a web page
Starting with version 3.1, Rails defaults to concatenating all JavaScript files into one master .js file and all CSS files into one master .css file
In production, Rails inserts an MD5 fingerprint into each filename so that the file is cached by the web browser
The technique sprockets uses for fingerprinting is to insert a hash of the
content into the name, usually at the end.
asset minification or compression
The sass-rails gem is automatically used for CSS compression if included
in Gemfile and no config.assets.css_compressor option is set.
Supported languages include Sass for CSS, CoffeeScript for JavaScript, and ERB for both by default.
When a filename is unique and based on its content, HTTP headers can be set to encourage caches everywhere (whether at CDNs, at ISPs, in networking equipment, or in web browsers) to keep their own copy of the content
asset pipeline is technically no longer a core feature of Rails 4
The technique Rails uses for fingerprinting is to insert a hash of the content into the name, usually at the end.
With the asset pipeline, the preferred location for these assets is now the app/assets directory.
Fingerprinting is enabled by default for production and disabled for all other
environments
The files in app/assets are never served directly in production.
Paths are traversed in the order that they occur in the search path
You should use app/assets for
files that must undergo some pre-processing before they are served.
By default .coffee and .scss files will not be precompiled on their own
app/assets is for assets that are owned by the application, such as custom images, JavaScript files or stylesheets.
lib/assets is for your own libraries' code that doesn't really fit into the scope of the application or those libraries which are shared across applications.
vendor/assets is for assets that are owned by outside entities, such as code for JavaScript plugins and CSS frameworks.
Any path under assets/* will be searched
By default these files will be ready to use by your application immediately using the require_tree directive.
By default, this means the files in app/assets take precedence, and will mask corresponding paths in lib and vendor
Sprockets uses files named index (with the relevant extensions) for a special purpose
Rails.application.config.assets.paths
The 'data-turbolinks-track' => true option causes Turbolinks to check if an asset has been updated and, if so, loads it into the page.
if you add an erb extension to a CSS asset (for example, application.css.erb), then helpers like asset_path are available in your CSS rules
If you add an erb extension to a JavaScript asset, making it something such as application.js.erb, then you can use the asset_path helper in your JavaScript code
The asset pipeline automatically evaluates ERB
To use a data URI (a method of embedding the image data directly into the CSS file) you can use the asset_data_uri helper.
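For example, in a stylesheet named something like application.css.erb (the image names are placeholders):

.class { background-image: url(<%= asset_path 'image.png' %>) }
#logo  { background: url(<%= asset_data_uri 'logo.png' %>) }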
Sprockets will also look through the paths specified in config.assets.paths,
which includes the standard application paths and any paths added by Rails
engines.
image_tag
the closing tag cannot be of the style -%>
asset_data_uri
app/assets/javascripts/application.js
sass-rails provides -url and -path helpers (hyphenated in Sass,
underscored in Ruby) for the following asset classes: image, font, video, audio,
JavaScript and stylesheet.
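For instance, in a .scss file (rails.png is just an example asset):

.class { background-image: image-url("rails.png"); }
/* image-path("rails.png") would return "/assets/rails.png" */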
Rails.application.config.assets.compress
In JavaScript files, the directives begin with //=
The require_tree directive tells Sprockets to recursively include all JavaScript files in the specified directory into the output.
manifest files contain directives — instructions that tell Sprockets which files to require in order to build a single CSS or JavaScript file.
You should not rely on any particular order among those
Sprockets uses manifest files to determine which assets to include and serve.
the family of require directives prevents files from being included twice in the output
which files to require in order to build a single CSS or JavaScript file
Directives are processed top to bottom, but the order in which files are included by require_tree is unspecified.
In JavaScript files, Sprockets directives begin with //=
If require_self is called more than once, only the last call is respected.
The require directive is used to tell Sprockets the files you wish to require.
You need not supply the extensions explicitly.
Sprockets assumes you are requiring a .js file when done from within a .js
file
paths must be
specified relative to the manifest file
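A typical Rails 4 app/assets/javascripts/application.js manifest looks roughly like this:

//= require jquery
//= require jquery_ujs
//= require turbolinks
//= require_tree .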
require_directory
Rails 4 creates both app/assets/javascripts/application.js and
app/assets/stylesheets/application.css regardless of whether the
--skip-sprockets option is used when creating a new rails application.
The file extensions used on an asset determine what preprocessing is applied.
app/assets/stylesheets/application.css
Additional layers of preprocessing can be requested by adding other extensions, where each extension is processed in a right-to-left manner
require_self
If you want to use multiple Sass files, you should generally use the Sass @import rule instead of these Sprockets directives.
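The stylesheet manifest (app/assets/stylesheets/application.css) uses the same directives inside a comment block, roughly:

/*
 *= require_self
 *= require_tree .
 */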
Keep in mind that the order of these preprocessors is important
In development mode, assets are served as separate files in the order they are specified in the manifest file.
When these files are requested, they are processed by the processors provided by the coffee-script and sass gems and then sent back to the browser as JavaScript and CSS respectively.
css.scss.erb
js.coffee.erb
Keep in mind the order of these preprocessors is important.
By default Rails assumes that assets have been precompiled and will be served as static assets by your web server
with the Asset Pipeline the :cache and :concat options aren't used anymore
Assets are compiled and cached on the first request after the server is started
DI means that you can declare components very freely and then from any other component, just ask for an instance of it and it will be granted
do test-driven development iteratively in AngularJS!
only do DOM manipulation in a directive
with ngClass we can dynamically update the class;
ngBind allows one-way data binding (ngModel is the directive for two-way binding);
ngShow and ngHide programmatically show or hide an element;
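A small view sketch using these directives (the isError and errorMessage scope properties are invented for the example):

<div ng-show="isError" ng-class="{ 'alert-danger': isError }">
  <span ng-bind="errorMessage"></span>
</div>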
The less DOM manipulation, the easier directives are to test, the easier they are to style, the easier they are to change in the future, and the more re-usable and distributable they are.
But it's still wrong.
Before doing DOM manipulation anywhere in your application, ask yourself if you really need to.
a few things wrong with this
jQuery was never necessary
use angular.element and our component will still work when dropped into a project that doesn't have jQuery.
just use angular.element
the element that is passed to the link function would already be a jQuery element!
directives aren't just collections of jQuery-like functions
Directives are actually extensions of HTML
If HTML doesn't do something you need it to do, you write a directive to do it for you, and then use it just as if it was part of HTML.
think how the team would accomplish it to fit right in with ngClick, ngClass, et al.
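A minimal directive sketch in that spirit (the myHighlight name and behaviour are invented, and an existing module named 'app' is assumed; the DOM work stays inside the link function and uses only angular.element/jqLite):

// usage in the view: <div my-highlight="isActive">...</div>
angular.module('app').directive('myHighlight', function() {
  return {
    restrict: 'A',
    link: function(scope, element, attrs) {
      // element is already wrapped by angular.element (jqLite)
      scope.$watch(attrs.myHighlight, function(on) {
        if (on) {
          element.addClass('highlight');
        } else {
          element.removeClass('highlight');
        }
      });
    }
  };
});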
Don't even use jQuery. Don't even include it.
Try to think about how to do it within the confines of AngularJS.
In jQuery, selectors are used to find DOM elements and then bind/register event handlers to them.
Views are (declarative) HTML that contain AngularJS directives
Directives set up the event handlers behind the scenes for us and give us dynamic databinding.
Views are tied to models (via scopes). Views are a projection of the model
In AngularJS, think about models, rather than jQuery-selected DOM elements that hold your data.
AngularJS uses controllers and directives (each of which can have their own controller, and/or compile and linking functions) to remove behavior from the view/structure (HTML). Angular also has services and filters to help separate/organize your application.
Think about your models
Think about how you want to present your models -- your views.
using the necessary directives to get dynamic databinding.
Attach a controller to each view (using ng-view and routing, or ng-controller)
Make controllers as thin as possible.
You can do a lot with jQuery without knowing about how JavaScript prototypal inheritance works.
rails dbconsole figures out which database you're using and drops you into whichever command line interface you would use with it
The console command lets you interact with your Rails application from the command line. Under the hood, rails console uses IRB.
rake about gives information about version numbers for Ruby, RubyGems, Rails, the Rails subcomponents, your application's folder, the current Rails environment name, your app's database adapter, and schema version
You can precompile the assets in app/assets using rake assets:precompile and remove those compiled assets using rake assets:clean.
rake db:version is useful when troubleshooting
The doc: namespace has the tools to generate documentation for your app, API documentation, guides.
rake notes will search through your code for comments beginning with FIXME, OPTIMIZE or TODO.
You can also use custom annotations in your code and list them using rake notes:custom by specifying the annotation using an environment variable ANNOTATION.
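For example (the annotation name is arbitrary):

rake notes:custom ANNOTATION=REVIEWME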
rake routes will list all of your defined routes, which is useful for tracking down routing problems in your app, or giving you a good overview of the URLs in an app you're trying to get familiar with.
rake secret will give you a pseudo-random key to use for your session secret.
Custom rake tasks have a .rake extension and are placed in
Rails.root/lib/tasks.
rails new . --git --database=postgresql
All commands can run with -h or --help to list more information
The rails server command launches a small web server named WEBrick which comes bundled with Ruby
rails server -e production -p 4000
You can run a server as a daemon by passing a -d option
The rails generate command uses templates to create a whole lot of things.
Using generators will save you a large amount of time by writing boilerplate code, code that is necessary for the app to work.
With a normal, plain-old Rails application, your URLs will generally follow the pattern of http://(host)/(controller)/(action), and a URL like http://(host)/(controller) will hit the index action of that controller.
A scaffold in Rails is a full set of model, database migration for that model, controller to manipulate it, views to view and manipulate the data, and a test suite for each of the above.
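For example (the model name and attributes are made up):

rails generate scaffold Post title:string body:text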
Unit tests are code that tests and makes assertions about code.
Unit tests are your friend.
rails console --sandbox
rails db
Each task has a description, and should help you find the thing you need.
rake tmp:clear clears all three: cache, sessions, and sockets.
every Kubernetes operation is exposed as an API endpoint and can be executed by an HTTP request to this endpoint.
the main job of kubectl is to carry out HTTP requests to the Kubernetes API
Kubernetes maintains an internal state of resources, and all Kubernetes operations are CRUD operations on these resources.
Kubernetes is a fully resource-centred system
Kubernetes API reference is organised as a list of resource types with their associated operations.
This is how kubectl works for all commands that interact with the Kubernetes cluster.
kubectl simply makes HTTP requests to the appropriate Kubernetes API endpoints.
it's totally possible to control Kubernetes with a tool like curl by manually issuing HTTP requests to the Kubernetes API.
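One easy way to try this is to let kubectl proxy handle authentication and then issue requests with curl (the port and namespace below are just examples):

kubectl proxy --port=8080 &
curl http://127.0.0.1:8080/api/v1/namespaces/default/pods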
Kubernetes consists of a set of independent components that run as separate processes on the nodes of a cluster.
components on the master nodes
Storage backend: stores resource definitions (usually etcd is used)
API server: provides Kubernetes API and manages storage backend
Controller manager: ensures resource statuses match specifications
Scheduler: schedules Pods to worker nodes
component on the worker nodes
Kubelet: manages execution of containers on a worker node
Creating the ReplicaSet resource triggers the ReplicaSet controller, which is a sub-process of the controller manager.
The scheduler watches for Pod definitions that are not yet scheduled to a worker node.
Up to this point, everything happens by creating and updating resources in the storage backend on the master node.
The kubelet of the worker node your ReplicaSet Pods have been scheduled to instructs the configured container runtime (which may be Docker) to download the required container images and run the containers.
Kubernetes components (except the API server and the storage backend) work by watching for resource changes in the storage backend and manipulating resources in the storage backend.
However, these components do not access the storage backend directly, but only through the Kubernetes API.
This double usage of the Kubernetes API by internal components as well as external users is a fundamental design concept of Kubernetes.
All other Kubernetes components and users read, watch, and manipulate the state (i.e. resources) of Kubernetes through the Kubernetes API
The storage backend stores the state (i.e. resources) of Kubernetes.
Command completion is a shell feature that works by means of a completion script.
A completion script is a shell script that defines the completion behaviour for a specific command. Sourcing a completion script enables completion for the corresponding command.
kubectl completion zsh
/etc/bash_completion.d directory (create it, if it doesn't exist)
source <(kubectl completion bash)
source <(kubectl completion zsh)
autoload -Uz compinit
compinit
the API reference, which contains the full specifications of all resources.
kubectl api-resources
displays the resource names in their plural form (e.g. deployments instead of deployment). It also displays the shortname (e.g. deploy) for those resources that have one. Don't worry about these differences. All of these name variants are equivalent for kubectl.
.spec
custom columns output format comes in. It lets you freely define the columns and the data to display in them. You can choose any field of a resource to be displayed as a separate column in the output
kubectl get pods -o custom-columns='NAME:metadata.name,NODE:spec.nodeName'
kubectl explain pod.spec.
kubectl explain pod.metadata.
browse the resource specifications and try it out with any fields you like!
JSONPath is a language to extract data from JSON documents (it is similar to XPath for XML).
with kubectl explain, only a subset of the JSONPath capabilities is supported
Many fields of Kubernetes resources are lists, and this operator allows you to select items of these lists. It is often used with a wildcard as [*] to select all items of the list.
kubectl get pods -o custom-columns='NAME:metadata.name,IMAGES:spec.containers[*].image'
a Pod may contain more than one container.
The availability zones for each node are obtained through the special failure-domain.beta.kubernetes.io/zone label.
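For example, a sketch combining node names with that label (note the escaped dots in the label key):

kubectl get nodes -o custom-columns='NODE:metadata.name,ZONE:metadata.labels.failure-domain\.beta\.kubernetes\.io/zone'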
kubectl get nodes -o yaml
kubectl get nodes -o json
The default kubeconfig file is ~/.kube/config
If you work with multiple clusters, you have connection parameters for each of them configured in your kubeconfig file.
Within a cluster, you can set up multiple namespaces (a namespace is a kind of "virtual" cluster within a physical cluster).
You can override the default kubeconfig file with the --kubeconfig option for every kubectl command.
Namespace: the namespace to use when connecting to the cluster
a one-to-one mapping between clusters and contexts.
When kubectl reads a kubeconfig file, it always uses the information from the current context.
just change the current context in the kubeconfig file
to switch to another namespace in the same cluster, you can change the value of the namespace element of the current context
kubectl also provides the --cluster, --user, --namespace, and --context options that allow you to overwrite individual elements and the current context itself, regardless of what is set in the kubeconfig file.
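For example (context and namespace names are placeholders):

kubectl config use-context my-context
kubectl config set-context --current --namespace=my-namespace
kubectl get pods --context=other-context --namespace=other-namespace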
A popular tool for switching between clusters and namespaces is kubectx.
kubectl config get-contexts
just have to download the shell scripts named kubectl-ctx and kubectl-ns to any directory in your PATH and make them executable (for example, with chmod +x)
kubectl proxy
kubectl get roles
kubectl get pod
Kubectl plugins are distributed as simple executable files with a name of the form kubectl-x. The prefix kubectl- is mandatory,
To install a plugin, you just have to copy the kubectl-x file to any directory in your PATH and make it executable (for example, with chmod +x)
krew itself is a kubectl plugin
check out the kubectl-plugins GitHub topic
The executable can be of any type, a Bash script, a compiled Go program, a Python script, it really doesn't matter. The only requirement is that it can be directly executed by the operating system.
kubectl plugins can be written in any programming or scripting language.
you can write more sophisticated plugins with real programming languages, for example, using a Kubernetes client library. If you use Go, you can also use the cli-runtime library, which exists specifically for writing kubectl plugins.
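A minimal plugin sketch in Bash (the name kubectl-hello is made up; save it somewhere in your PATH and chmod +x it, then run it as kubectl hello):

#!/bin/bash
# prints the currently active context
echo "Current context: $(kubectl config current-context)"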
a kubeconfig file consists of a set of contexts
changing the current context means changing the cluster, if you have only a single context per cluster.
The release stage takes the build produced by the build stage and combines it with the deploy’s current config.
The resulting release is ready for immediate execution in the execution environment.
The run stage (also known as “runtime”) runs the app in the execution environment
strict separation between the build, release, and run stages.
the Capistrano deployment tool stores releases in a subdirectory named releases, where the current release is a symlink to the current release directory.
Every release should always have a unique release ID
Releases are an append-only ledger and a release cannot be mutated once it is created.
Any data that needs to persist must be stored in a stateful backing service, typically a database.
The memory space or filesystem of the process can be used as a brief, single-transaction cache.
wipe out all local (e.g., memory and filesystem) state
compiling during the build stage
“sticky sessions” – that is, caching user session data in memory of the app’s process and expecting future requests from the same visitor to be routed to the same process.
Sticky sessions are a violation of twelve-factor and should never be used or relied upon
Kubernetes supports many types of volumes, and a Pod can
use any number of them simultaneously.
To use a volume, a Pod specifies what volumes to provide for the Pod (the
.spec.volumes
field) and where to mount those into Containers (the
.spec.containers.volumeMounts
field).
A process in a container sees a filesystem view composed from its Docker image and volumes.
Volumes can not mount onto other volumes or have hard links to
other volumes.
Each Container in the Pod must independently specify where to
mount each volume
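A minimal sketch of these two fields using an emptyDir volume (all names here are arbitrary):

apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: cache-volume
      mountPath: /cache        # where the volume appears inside this Container
  volumes:
  - name: cache-volume         # declared once at the Pod level (.spec.volumes)
    emptyDir: {}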
local
nfs
cephfs
awsElasticBlockStore
glusterfs
vsphereVolume
An awsElasticBlockStore volume mounts an Amazon Web Services (AWS) EBS
Volume into your Pod.
the contents of an EBS
volume are preserved and the volume is merely unmounted.
an
EBS volume can be pre-populated with data, and that data can be “handed off”
between Pods.
create an EBS volume using aws ec2 create-volume
the nodes on which Pods are running must be AWS EC2 instances
EBS only supports a single EC2 instance mounting a volume
check that the size and EBS volume
type are suitable for your use!
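The volume definition in the Pod spec then looks roughly like this (the volume ID is a placeholder):

volumes:
- name: ebs-volume
  awsElasticBlockStore:
    volumeID: "<volume-id>"    # ID returned by aws ec2 create-volume
    fsType: ext4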
A cephfs volume allows an existing CephFS volume to be
mounted into your Pod.
the contents of a cephfs volume are preserved and the volume is merely
unmounted.
A Container using a ConfigMap as a subPath volume mount will not
receive ConfigMap updates.
An emptyDir volume is first created when a Pod is assigned to a Node, and
exists as long as that Pod is running on that node.
When a Pod is removed from a node for
any reason, the data in the emptyDir is deleted forever.
By default, emptyDir volumes are stored on whatever medium is backing the
node - that might be disk or SSD or network storage, depending on your
environment.
you can set the emptyDir.medium field to "Memory"
to tell Kubernetes to mount a tmpfs (RAM-backed filesystem)
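For example (the volume name is arbitrary):

volumes:
- name: cache-volume
  emptyDir:
    medium: Memory    # tmpfs, RAM-backed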
An fc volume allows an existing fibre channel volume to be mounted in a Pod.
configure FC SAN Zoning to allocate and mask those LUNs (volumes) to the target WWNs beforehand so that Kubernetes hosts can access them.
Flocker is an open-source clustered Container data volume manager. It provides management
and orchestration of data volumes backed by a variety of storage backends.
emptyDir
flocker
A flocker volume allows a Flocker dataset to be mounted into a Pod
have your own Flocker installation running
A gcePersistentDisk volume mounts a Google Compute Engine (GCE) Persistent
Disk into your Pod.
Using a PD on a Pod controlled by a ReplicationController will fail unless
the PD is read-only or the replica count is 0 or 1
A glusterfs volume allows a Glusterfs (an open
source networked filesystem) volume to be mounted into your Pod.
have your own GlusterFS installation running
A hostPath volume mounts a file or directory from the host node’s filesystem
into your Pod.
a
powerful escape hatch for some applications
access to Docker internals; use a hostPath
of /var/lib/docker
allowing a Pod to specify whether a given hostPath should exist prior to the
Pod running, whether it should be created, and what it should exist as
specify a type for a hostPath volume
the files or directories created on the underlying hosts are only writable by root.
hostPath:
  # directory location on host
  path: /data
  # this field is optional
  type: Directory
An iscsi volume allows an existing iSCSI (SCSI over IP) volume to be mounted
into your Pod.
have your own iSCSI server running
A feature of iSCSI is that it can be mounted as read-only by multiple consumers
simultaneously.
A local volume represents a mounted local storage device such as a disk,
partition or directory.
Local volumes can only be used as a statically created PersistentVolume.
Compared to hostPath volumes, local volumes can be used in a durable and
portable manner without manually scheduling Pods to nodes, as the system is aware
of the volume’s node constraints by looking at the node affinity on the PersistentVolume.
If a node becomes unhealthy,
then the local volume will also become inaccessible, and a Pod using it will not
be able to run.
PersistentVolume spec using a local volume and
nodeAffinity
PersistentVolume nodeAffinity is required when using local volumes. It enables
the Kubernetes scheduler to correctly schedule Pods using local volumes to the
correct node.
PersistentVolume volumeMode can now be set to “Block” (instead of the default
value “Filesystem”) to expose the local volume as a raw block device.
When using local volumes, it is recommended to create a StorageClass with
volumeBindingMode set to WaitForFirstConsumer
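A sketch of a local PersistentVolume plus such a StorageClass (path, size, and node name are examples):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:                      # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer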
An nfs volume allows an existing NFS (Network File System) share to be
mounted into your Pod.
NFS can be mounted by multiple
writers simultaneously.
have your own NFS server running with the share exported
A persistentVolumeClaim volume is used to mount a
PersistentVolume into a Pod.
PersistentVolumes are a
way for users to “claim” durable storage (such as a GCE PersistentDisk or an
iSCSI volume) without knowing the details of the particular cloud environment.
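In the Pod spec, the claim is referenced by name (claim and volume names are examples):

volumes:
- name: data
  persistentVolumeClaim:
    claimName: my-pvc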
A projected volume maps several existing volume sources into the same directory.
All sources are required to be in the same namespace as the Pod. For more details,
see the all-in-one volume design document.
Each projected volume source is listed in the spec under sources
A Container using a projected volume source as a subPath volume mount will not
receive updates for those volume sources.
RBD volumes can only be mounted by a single consumer in read-write mode - no
simultaneous writers allowed
A secret volume is used to pass sensitive information, such as passwords, to
Pods
store secrets in the Kubernetes API and mount them as files for
use by Pods
secret volumes are
backed by tmpfs (a RAM-backed filesystem) so they are never written to
non-volatile storage.
create a secret in the Kubernetes API before you can use it
A Container using a Secret as a subPath volume mount will not
receive Secret updates.
StorageOS runs as a Container within your Kubernetes environment, making local
or attached storage accessible from any node within the Kubernetes cluster.
Data can be replicated to protect against node failure. Thin provisioning and
compression can improve utilization and reduce cost.
StorageOS provides block storage to Containers, accessible via a file system.
A vsphereVolume is used to mount a vSphere VMDK Volume into your Pod.
supports both VMFS and VSAN datastore.
Create the VMDK (using one of the supported methods) before using it with a Pod.
share one volume for multiple uses in a single Pod.
The volumeMounts.subPath
property can be used to specify a sub-path inside the referenced volume instead of its root.
Use the subPathExpr field to construct subPath directory names from Downward API environment variables
enable the VolumeSubpathEnvExpansion feature gate
The subPath and subPathExpr properties are mutually exclusive.
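For example (names are made up), mounting only one sub-directory of a shared volume into a Container:

volumeMounts:
- name: site-data
  mountPath: /var/www/html
  subPath: html        # mount only the "html" sub-directory of the volume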
There is no limit on how much space an emptyDir or
hostPath volume can consume, and no isolation between Containers or between
Pods.
emptyDir and hostPath volumes will be able to
request a certain amount of space using a resource
specification, and to select the type of media to use, for clusters that have
several media types.
the Container Storage Interface (CSI)
and Flexvolume. They enable storage vendors to create custom storage plugins
without adding them to the Kubernetes repository.
all volume plugins (like
volume types listed above) were “in-tree” meaning they were built, linked,
compiled, and shipped with the core Kubernetes binaries and extend the core
Kubernetes API.
Container Storage Interface (CSI)
defines a standard interface for container orchestration systems (like
Kubernetes) to expose arbitrary storage systems to their container workloads.
Once a CSI compatible volume driver is deployed on a Kubernetes cluster, users
may use the csi volume type to attach, mount, etc. the volumes exposed by the
CSI driver.
The csi volume type does not support direct reference from Pod and may only be
referenced in a Pod via a PersistentVolumeClaim object.
This feature requires the CSIInlineVolume feature gate to be enabled: --feature-gates=CSIInlineVolume=true
In-tree plugins that support CSI Migration and have a corresponding CSI driver implemented
are listed in the “Types of Volumes” section above.
Mount propagation allows for sharing volumes mounted by a Container to
other Containers in the same Pod, or even to other Pods on the same node.
Mount propagation of a volume is controlled by mountPropagation field in Container.volumeMounts.
HostToContainer - This volume mount will receive all subsequent mounts
that are mounted to this volume or any of its subdirectories.
Bidirectional - This volume mount behaves the same as the HostToContainer mount.
In addition, all volume mounts created by the Container will be propagated
back to the host and to all Containers of all Pods that use the same volume.
Edit your Docker’s systemd service file and set MountFlags as follows: MountFlags=shared
The control plane's components make global decisions about the cluster
Control plane components can be run on any machine in the cluster.
for simplicity, set up scripts typically start all control plane components on
the same machine, and do not run user containers on this machine
The API server is the front end for the Kubernetes control plane.
kube-apiserver is designed to scale horizontally—that is, it scales by deploying more instances.
You can run several instances of kube-apiserver and balance traffic between those instances.
If your Kubernetes cluster uses etcd as its backing store, make sure you have a backup plan for that data.
kube-scheduler watches for newly created Pods with no assigned node, and selects a node for them to run on.
Factors taken into account for scheduling decisions include:
individual and collective resource requirements, hardware/software/policy
constraints, affinity and anti-affinity specifications, data locality,
inter-workload interference, and deadlines.
each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
Node controller
Job controller
Endpoints controller
Service Account & Token controllers
The cloud controller manager lets you link your
cluster into your cloud provider's API, and separates out the components that interact
with that cloud platform from components that only interact with your cluster.
If you are running Kubernetes on your own premises, or in a learning environment inside your
own PC, the cluster does not have a cloud controller manager.
An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.
The kubelet doesn't manage containers which were not created by Kubernetes.
kube-proxy is a network proxy that runs on each
node in your cluster,
implementing part of the Kubernetes
Service concept.
kube-proxy
maintains network rules on nodes. These network rules allow network
communication to your Pods from network sessions inside or outside of
your cluster.
kube-proxy uses the operating system packet filtering layer if there is one
and it's available.
Kubernetes supports several container runtimes: Docker,
containerd, CRI-O,
and any implementation of the Kubernetes CRI (Container Runtime
Interface).
Addons use Kubernetes resources (DaemonSet,
Deployment, etc)
to implement cluster features
namespaced resources
for addons belong within the kube-system namespace.
While the other addons are not strictly required, all Kubernetes clusters should have cluster DNS.
Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services.
Containers started by Kubernetes automatically include this DNS server in their DNS searches.
Container Resource Monitoring records generic time-series metrics
about containers in a central database, and provides a UI for browsing that data.
A cluster-level logging mechanism is responsible for
saving container logs to a central log store with search/browsing interface.
If he could write a decent compiler, wouldn't that be software that everyone needs?
So he began writing a C compiler, which is the now rather famous GNU C Compiler (gcc)!
He also wrote more C libraries that can be called (the GNU C library), as well as the BASH shell, a basic interface that can be used to operate the operating system!
All of these were completed around 1990!
As the demand for graphical user interfaces (Graphical User Interface, GUI) grew ever heavier, MIT and other partner vendors first released the X Window System in 1984, and in 1988 the non-profit organization XFree86 was founded. The name XFree86 is actually a combination of X Window System + Free + x86!
deployment.yaml: A basic manifest for creating a Kubernetes deployment
using the suffix .yaml for YAML files and .tpl for helpers.
It is just fine to put a plain YAML file like this in the templates/ directory.
helm get manifest
The helm get manifest command takes a release name (full-coral) and prints
out all of the Kubernetes resources that were uploaded to the server. Each file
begins with --- to indicate the start of a YAML document
Names should be unique to a release
The name: field is limited to 63 characters because of limitations to
the DNS system.
release names are limited to 53 characters
{{ .Release.Name }}
A template directive is enclosed in {{ and }} blocks.
The values that are passed into a template can be thought of as namespaced objects, where a dot (.) separates each namespaced element.
The leading dot before Release indicates that we start with the top-most namespace for this scope
The Release object is one of the built-in objects for Helm
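A minimal template sketch along these lines (mychart and the myvalue key are examples):

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"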
When you want to test the template rendering, but not actually install anything, you can use helm install ./mychart --debug --dry-run
Using --dry-run will make it easier to test your code, but it won’t ensure that Kubernetes itself will accept the templates you generate.
Objects are passed into a template from the template engine.
create new objects within your templates
Objects can be simple, and have just one value. Or they can contain other objects or functions.
Release is one of the top-level objects that you can access in your templates.
Release.Namespace: The namespace to be released into (if the manifest doesn’t override)
Values: Values passed into the template from the values.yaml file and from user-supplied files. By default, Values is empty.
Chart: The contents of the Chart.yaml file.
Files: This provides access to all non-special files in a chart.
Files.Get is a function for getting a file by name
Files.GetBytes is a function for getting the contents of a file as an array of bytes instead of as a string. This is useful for things like images.
Template: Contains information about the current template that is being executed
BasePath: The namespaced path to the templates directory of the current chart
The built-in values always begin with a capital letter.
This follows Go’s naming convention; many charts use only initial lower case letters for their own names in order to distinguish local names from the built-in ones.
If this is a subchart, the values.yaml file of a parent chart
Individual parameters passed with --set
values.yaml is the default, which can be overridden by a parent chart’s values.yaml, which can in turn be overridden by a user-supplied values file, which can in turn be overridden by --set parameters.
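For example, overriding a single value at install time (favoriteDrink is a made-up key):

helm install ./mychart --set favoriteDrink=slurm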
While structuring data this way is possible, the recommendation is that you keep your values trees shallow, favoring flatness.
If you need to delete a key from the default values, you may override the value of the key to be null, in which case Helm will remove the key from the overridden values merge.
Kubernetes would then fail because you can not declare more than one livenessProbe handler.
When injecting strings from the .Values object into the template, we ought to quote these strings.
quote
Template functions follow the syntax functionName arg1 arg2...
While we talk about the “Helm template language” as if it is Helm-specific, it is actually a combination of the Go template language, some extra functions, and a variety of wrappers to expose certain objects to the templates.
Drawing on a concept from UNIX, pipelines are a tool for chaining together a series of template commands to compactly express a series of transformations.
pipelines are an efficient way of getting several things done in sequence
The repeat function will echo the given string the given number of times
default DEFAULT_VALUE GIVEN_VALUE. This function allows you to specify a default value inside of the template, in case the value is omitted.
all static default values should live in the values.yaml, and should not be repeated using the default command
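For example, assuming values.yaml defines favorite.drink and favorite.food:

data:
  drink: {{ .Values.favorite.drink | default "tea" | quote }}
  food: {{ .Values.favorite.food | upper | quote }}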
Operators are implemented as functions that return a boolean value.
To use eq, ne, lt, gt, and, or, not etcetera place the operator at the front of the statement followed by its parameters just as you would a function.
if and
if or
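For example (again assuming a favorite.drink value):

{{ if eq .Values.favorite.drink "coffee" }}mug: true{{ end }}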
with to specify a scope
range, which provides a “for each”-style loop
block declares a special kind of fillable template area
A pipeline is evaluated as false if the value is:
a boolean false
a numeric zero
an empty string
a nil (empty or null)
an empty collection (map, slice, tuple, dict, array)
incorrect YAML because of the whitespacing
When the template engine runs, it removes the contents inside of {{ and }}, but it leaves the remaining whitespace exactly as is.
{{- (with the dash and space added) indicates that whitespace should be chomped left, while -}} means whitespace to the right should be consumed.
Newlines are whitespace!
an * at the end of the line indicates a newline character that would be removed
Be careful with the chomping modifiers.
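For example, the same conditional written with chomping so it emits clean YAML:

{{- if eq .Values.favorite.drink "coffee" }}
mug: true
{{- end }}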
the indent function
Scopes can be changed. with can allow you to set the current scope (.) to a particular object.
Inside of the restricted scope, you will not be able to access the other objects from the parent scope.
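For example, putting .Values.favorite in scope so the prefix need not be repeated:

{{- with .Values.favorite }}
drink: {{ .drink | quote }}
food: {{ .food | quote }}
{{- end }}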
range
The range function will “range over” (iterate through) the pizzaToppings list.
Just as the with action sets the scope of ., so does a range operator.
The toppings: |- line is declaring a multi-line string.
not a YAML list. It’s a big string.
the data in ConfigMaps data is composed of key/value pairs, where both the key and the value are simple strings.
The |- marker in YAML takes a multi-line string.
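A sketch of that toppings example (assuming a pizzaToppings list in values.yaml):

toppings: |-
  {{- range .Values.pizzaToppings }}
  - {{ . | title | quote }}
  {{- end }}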
range can be used to iterate over collections that have a key and a value (like a map or dict).
In Helm templates, a variable is a named reference to another object. It follows the form $name
Variables are assigned with a special assignment operator: :=
{{- $relname := .Release.Name -}}
capture both the index and the value
the integer index (starting from zero) to $index and the value to $topping
For data structures that have both a key and a value, we can use range to get both
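For example, iterating over a map such as .Values.favorite:

{{- range $key, $val := .Values.favorite }}
{{ $key }}: {{ $val | quote }}
{{- end }}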
Variables are normally not “global”. They are scoped to the block in which they are declared.
one variable that is always global - $ - this variable will always point to the root context.
$.
One powerful feature of the Helm template language is its ability to declare multiple templates and use them together.
A named template (sometimes called a partial or a subtemplate) is simply a template defined inside of a file, and given a name.
One thing to keep in mind when naming templates: template names are global.
If you declare two templates with the same name, whichever one is loaded last will be the one used.
you should be careful to name your templates with chart-specific names.
templates in subcharts are compiled together with top-level templates
One popular naming convention is to prefix each defined template with the name
of the chart: {{ define "mychart.labels" }}
By using the specific chart name as a prefix, we can avoid any conflicts.
But files whose name begins with an underscore (_) are assumed to not have
a manifest inside.
The define action allows us to create a named template inside of a template
file.
include it with the template action
a define does not produce output unless it is called with a template
define functions should have a simple documentation block
({{/* ... */}}) describing what they do.
When a named template (created with define) is rendered, it will
receive the scope passed in by the template call.
No scope was passed in, so within the template we cannot access anything in .
Note that we pass . at the end of the template call. We could just as easily
pass .Values or .Values.favorite or whatever scope we want
the template that is substituted in has the text aligned to the left. Because
template is an action, and not a function, there is no way to pass the output
of a template call to other functions; the data is simply inserted inline.
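Because Helm's include function, unlike the template action, returns its output, that output can be piped onward to other functions; a sketch (the template name is made up):

{{- define "mychart.app_name" -}}
{{ .Chart.Name }}-{{ .Release.Name }}
{{- end -}}

metadata:
  name: {{ include "mychart.app_name" . | quote }}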