The Twelve-Factor App - 0 views
-
Libraries installed through a packaging system can be installed system-wide (known as “site packages”) or scoped into the directory containing the app (known as “vendoring” or “bundling”).
- ...8 more annotations...
-
The full and explicit dependency specification is applied uniformly to both production and development.
-
Bundler for Ruby offers the Gemfile manifest format for dependency declaration and bundle exec for dependency isolation.
5 Lessons Learned From Writing Over 300,000 Lines of Infrastructure Code - 0 views
DevOps Resources - 0 views
Spinnaker - 0 views
Viral JS - 0 views
There's a fiddle for that! - 0 views
Can you really run MySQL in Docker? - 0 views
- ...12 more annotations...
-
Based on the request, available resources are matched from the existing resource pool according to our resource-filtering rules (for example, master and slave must not be on the same machine, and memory must not be oversubscribed). Then, in order: create the master-slave relationship, set up high-availability management, check the cluster replication status, push the cluster information to the middleware control center (if middleware is used), and finally sync all of the above information to the CMDB.
Sandstorm - 0 views
Replication - Redis - 0 views
-
The slave will automatically reconnect to the master every time the link breaks, and will attempt to be an exact copy of it regardless of what happens to the master.
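Replication is configured on the replica side with a single directive; a minimal redis.conf sketch (the master address is hypothetical):

```
# redis.conf on the replica (10.0.0.5:6379 is a hypothetical master)
slaveof 10.0.0.5 6379
# Redis 5+ prefers the equivalent directive: replicaof 10.0.0.5 6379
```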
- ...2 more annotations...
Volumes - Kubernetes - 0 views
-
when a Container crashes, kubelet will restart it, but the files will be lost - the Container starts with a clean state
- ...105 more annotations...
-
A Kubernetes volume, on the other hand, has an explicit lifetime - the same as the Pod that encloses it.
-
a volume outlives any Containers that run within the Pod, and data is preserved across Container restarts.
-
To use a volume, a Pod specifies what volumes to provide for the Pod (the .spec.volumes field) and where to mount those into Containers (the .spec.containers.volumeMounts field).
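A minimal Pod sketch showing the two fields the annotation names (the names and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
  - name: app
    image: busybox
    volumeMounts:
    - name: cache            # must match a .spec.volumes entry by name
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}
```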
-
the contents of a cephfs volume are preserved and the volume is merely unmounted.
-
volumeMounts:
  - name: config-vol
    mountPath: /etc/config
volumes:
  - name: config-vol
    configMap:
      name: log-config
      items:
        - key: log_level
          path: log_level
-
An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node.
-
By default, emptyDir volumes are stored on whatever medium is backing the node - that might be disk or SSD or network storage, depending on your environment.
-
you can set the emptyDir.medium field to "Memory" to tell Kubernetes to mount a tmpfs (RAM-backed filesystem)
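A sketch of the tmpfs-backed variant (the volume name is hypothetical):

```yaml
  volumes:
  - name: tmpfs-scratch
    emptyDir:
      medium: Memory   # mounted as tmpfs; contents count against container memory limits
```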
-
configure FC SAN Zoning to allocate and mask those LUNs (volumes) to the target WWNs beforehand so that Kubernetes hosts can access them.
-
Flocker is an open-source clustered Container data volume manager. It provides management and orchestration of data volumes backed by a variety of storage backends.
-
Using a PD on a Pod controlled by a ReplicationController will fail unless the PD is read-only or the replica count is 0 or 1
-
A glusterfs volume allows a Glusterfs (an open source networked filesystem) volume to be mounted into your Pod.
-
allowing a Pod to specify whether a given hostPath should exist prior to the Pod running, whether it should be created, and what it should exist as
-
Compared to hostPath volumes, local volumes can be used in a durable and portable manner without manually scheduling Pods to nodes, as the system is aware of the volume’s node constraints by looking at the node affinity on the PersistentVolume.
-
If a node becomes unhealthy, then the local volume will also become inaccessible, and a Pod using it will not be able to run.
-
PersistentVolume nodeAffinity is required when using local volumes. It enables the Kubernetes scheduler to correctly schedule Pods using local volumes to the correct node.
-
PersistentVolume volumeMode can now be set to “Block” (instead of the default value “Filesystem”) to expose the local volume as a raw block device.
-
When using local volumes, it is recommended to create a StorageClass with volumeBindingMode set to WaitForFirstConsumer
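A sketch combining the two recommendations above — a PersistentVolume with nodeAffinity plus a WaitForFirstConsumer StorageClass (node name, path, and size are hypothetical):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # local volumes have no dynamic provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:              # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1
```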
-
PersistentVolumes are a way for users to “claim” durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment.
-
All sources are required to be in the same namespace as the Pod. For more details, see the all-in-one volume design document.
-
A Container using a projected volume source as a subPath volume mount will not receive updates for those volume sources.
-
RBD volumes can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed
-
secret volumes are backed by tmpfs (a RAM-backed filesystem) so they are never written to non-volatile storage.
-
StorageOS runs as a Container within your Kubernetes environment, making local or attached storage accessible from any node within the Kubernetes cluster.
-
Data can be replicated to protect against node failure. Thin provisioning and compression can improve utilization and reduce cost.
-
The volumeMounts.subPath property can be used to specify a sub-path inside the referenced volume instead of its root.
-
Use the subPathExpr field to construct subPath directory names from Downward API environment variables
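A sketch of subPathExpr fed from a Downward API environment variable (the paths are hypothetical):

```yaml
  containers:
  - name: app
    image: busybox
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /logs
      subPathExpr: $(POD_NAME)   # each Pod writes under its own subdirectory
  volumes:
  - name: workdir
    hostPath:
      path: /var/log/pods
```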
-
There is no limit on how much space an emptyDir or hostPath volume can consume, and no isolation between Containers or between Pods.
-
emptyDir and hostPath volumes will be able to request a certain amount of space using a resource specification, and to select the type of media to use, for clusters that have several media types.
-
the Container Storage Interface (CSI) and Flexvolume. They enable storage vendors to create custom storage plugins without adding them to the Kubernetes repository.
-
all volume plugins (like volume types listed above) were “in-tree” meaning they were built, linked, compiled, and shipped with the core Kubernetes binaries and extend the core Kubernetes API.
-
Container Storage Interface (CSI) defines a standard interface for container orchestration systems (like Kubernetes) to expose arbitrary storage systems to their container workloads.
-
Once a CSI compatible volume driver is deployed on a Kubernetes cluster, users may use the csi volume type to attach, mount, etc. the volumes exposed by the CSI driver.
-
The csi volume type does not support direct reference from Pod and may only be referenced in a Pod via a PersistentVolumeClaim object.
-
This feature requires the CSIInlineVolume feature gate to be enabled: --feature-gates=CSIInlineVolume=true
-
In-tree plugins that support CSI Migration and have a corresponding CSI driver implemented are listed in the “Types of Volumes” section above.
-
Mount propagation allows for sharing volumes mounted by a Container to other Containers in the same Pod, or even to other Pods on the same node.
-
HostToContainer - This volume mount will receive all subsequent mounts that are mounted to this volume or any of its subdirectories.
-
Bidirectional - This volume mount behaves the same as the HostToContainer mount. In addition, all volume mounts created by the Container will be propagated back to the host and to all Containers of all Pods that use the same volume.
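The propagation mode is set per volumeMount; a sketch (names are hypothetical):

```yaml
    volumeMounts:
    - name: shared-data
      mountPath: /data
      mountPropagation: HostToContainer   # Bidirectional additionally requires a privileged container
```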
wrk -- an HTTP load-testing tool - 0 views
Secrets - Kubernetes - 0 views
-
Putting this information in a secret is safer and more flexible than putting it verbatim in a Pod definition or in a container image.
-
A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key.
- ...63 more annotations...
-
A secret can be used with a pod in two ways: as files in a volume mounted on one or more of its containers, or used by kubelet when pulling images for the pod.
-
Kubernetes automatically creates secrets which contain credentials for accessing the API and it automatically modifies your pods to use this type of secret.
-
stringData field is provided for convenience, and allows you to provide secret data as unencoded strings.
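A sketch of a Secret using stringData, so the values need no base64 encoding by hand (the credentials are hypothetical):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-creds
type: Opaque
stringData:          # plain strings; the API server encodes them into .data
  username: admin
  password: s3cr3t
```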
-
where you are deploying an application that uses a Secret to store a configuration file, and you want to populate parts of that configuration file during your deployment process.
-
When using the base64 utility on Darwin/macOS, users should avoid using the -b option to split long lines.
-
Secrets can be mounted as data volumes or be exposed as environment variables to be used by a container in a pod.
-
If .spec.volumes[].secret.items is used, only keys specified in items are projected. To consume all keys from the secret, all of them must be listed in the items field.
-
You can also specify the permission mode bits for the files that are part of a secret. If you don’t specify any, 0644 is used by default.
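A volume sketch combining key projection with a permission mode (names are hypothetical):

```yaml
  volumes:
  - name: secret-vol
    secret:
      secretName: db-creds
      defaultMode: 0400   # overrides the 0644 default
      items:              # only the listed keys are projected
      - key: password
        path: db/password
```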
-
Inside the container that mounts a secret volume, the secret keys appear as files and the secret values are base-64 decoded and stored inside these files.
-
Inside a container that consumes a secret via environment variables, the secret keys appear as normal environment variables containing the base-64 decoded values of the secret data.
-
An imagePullSecret is a way to pass a secret that contains a Docker (or other) image registry password to the Kubelet so it can pull a private image on behalf of your Pod.
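A sketch referencing such a secret from a Pod (the secret and registry names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-demo
spec:
  imagePullSecrets:
  - name: regcred          # a docker-registry type secret created beforehand
  containers:
  - name: app
    image: registry.example.com/team/app:1.0
```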
-
Secret API objects reside in a namespace. They can only be referenced by pods in that same namespace.
-
Secrets must be created before they are consumed in pods as environment variables unless they are marked as optional.
-
References via secretKeyRef to keys that do not exist in a named Secret will prevent the pod from starting.
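An env-var sketch; marking the reference optional avoids the startup failure described above (names are hypothetical):

```yaml
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-creds
          key: password
          optional: true   # without this, a missing key prevents the Pod from starting
```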
-
Think carefully before sending your own ssh keys: other users of the cluster may have access to the secret.
-
Special characters such as $, *, and ! require escaping. If the password you are using has special characters, you need to escape them using the \ character.
-
a frontend container which handles user interaction and business logic, but which cannot see the private key;
-
a signer container that can see the private key, and responds to simple signing requests from the frontend
-
When deploying applications that interact with the secrets API, access should be limited using authorization policies such as RBAC
-
watch and list requests for secrets within a namespace are extremely powerful capabilities and should be avoided
-
watch and list all secrets in a cluster should be reserved for only the most privileged, system-level components.
-
each container in a pod has to request the secret volume in its volumeMounts for it to be visible within the container.
-
In the API server, secret data is stored in etcd.
Pods - Kubernetes - 0 views
-
A Pod (as in a pod of whales or pea pod) is a group of one or more containers (such as Docker containers), with shared storage/network, and a specification for how to run the containers.
- ...32 more annotations...
-
being executed on the same physical or virtual machine would mean being executed on the same logical host.
-
The shared context of a Pod is a set of Linux namespaces, cgroups, and potentially other facets of isolation
-
Containers in different Pods have distinct IP addresses and can not communicate by IPC without special configuration. These containers usually communicate with each other via Pod IP addresses.
-
Applications within a Pod also have access to shared volumes, which are defined as part of a Pod and are made available to be mounted into each application’s filesystem.
-
a Pod is modelled as a group of Docker containers with shared namespaces and shared filesystem volumes
-
Pods are created, assigned a unique ID (UID), and scheduled to nodes where they remain until termination (according to restart policy) or deletion.
-
When something is said to have the same lifetime as a Pod, such as a volume, that means that it exists as long as that Pod (with that UID) exists.
-
The applications in a Pod all use the same network namespace (same IP and port space), and can thus “find” each other and communicate using localhost
-
Containers within the Pod see the system hostname as being the same as the configured name for the Pod.
-
Volumes enable data to survive container restarts and to be shared among the applications within the Pod.
-
When a user requests deletion of a Pod, the system records the intended grace period before the Pod is allowed to be forcefully killed, and a TERM signal is sent to the main process in each container.
-
Once the grace period has expired, the KILL signal is sent to those processes, and the Pod is then deleted from the API server.
-
The Pod is removed from the endpoints list for services, and is no longer considered part of the set of running Pods for replication controllers.
-
You must specify an additional flag --force along with --grace-period=0 in order to perform force deletions.
-
Force deletion of a Pod is defined as deletion of a Pod from the cluster state and etcd immediately.
-
Processes within the container get almost the same privileges that are available to processes outside a container.
vSphere Storage for Kubernetes | vSphere Storage for Kubernetes - 0 views
-
When containers are re-scheduled, they can die on one host and might get scheduled on a different host.
- ...3 more annotations...
-
the storage should also be shifted and made available on the new host for the container to start gracefully.
-
Kubernetes provides abstractions to ensure that the storage details are separated from allocation and usage of storage.
Helm | - 0 views
-
A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
-
Charts are created as files laid out in a particular directory tree, then they can be packaged into versioned archives to be deployed.
- ...170 more annotations...
-
templates/ # A directory of templates that, when combined with values, # will generate valid Kubernetes manifest files.
-
When generating a package, the helm package command will use the version that it finds in the Chart.yaml as a token in the package name.
-
the appVersion field is not related to the version field. It is a way of specifying the version of the application.
-
If the latest version of a chart in the repository is marked as deprecated, then the chart as a whole is considered to be deprecated.
-
dependencies can be dynamically linked through the requirements.yaml file or brought in to the charts/ directory and managed manually.
-
the preferred method of declaring dependencies is by using a requirements.yaml file inside of your chart.
-
helm dependency update and it will use your dependency file to download all the specified charts into your charts/ directory for you.
-
When helm dependency update retrieves charts, it will store them as chart archives in the charts/ directory.
-
Managing charts with requirements.yaml is a good way to easily keep charts updated, and also share requirements information throughout a team.
-
The condition field holds one or more YAML paths (delimited by commas). If this path exists in the top parent’s values and resolves to a boolean value, the chart will be enabled or disabled based on that boolean value.
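A requirements.yaml sketch (Helm 2 layout, as these annotations describe; the chart name and repository are hypothetical):

```yaml
# requirements.yaml in the parent chart
dependencies:
- name: redis
  version: 1.0.0
  repository: https://charts.example.com
  condition: redis.enabled   # redis.enabled: false in the parent's values disables the subchart
```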
-
The keys containing the values to be imported can be specified in the parent chart’s requirements.yaml file using a YAML list. Each item in the list is a key which is imported from the child chart’s exports field.
-
specifying the key data in our import list, Helm looks in the exports field of the child chart for data key and imports its contents.
-
the parent key data is not contained in the parent’s final values. If you need to specify the parent key, use the ‘child-parent’ format.
-
To access values that are not contained in the exports key of the child chart’s values, you will need to specify the source key of the values to be imported (child) and the destination path in the parent chart’s values (parent).
-
Helm Chart templates are written in the Go template language, with the addition of 50 or so add-on template functions from the Sprig library and a few other specialized functions
-
When Helm renders the charts, it will pass every file in that directory through the template engine.
-
Chart developers may supply a file called values.yaml inside of a chart. This file can contain default values.
-
Chart users may supply a YAML file that contains values. This can be provided on the command line with helm install.
-
When a user supplies custom values, these values will override the values in the chart’s values.yaml file.
-
Values that are supplied via a values.yaml file (or via the --set flag) are accessible from the .Values object in a template.
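A sketch of a default values file (the names are hypothetical):

```yaml
# values.yaml
image:
  repository: nginx
  tag: "1.21"
```

In a template this surfaces as `image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"`, and can be overridden at install time, e.g. `helm install --set image.tag=1.22 ./mychart`.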
-
Files can be accessed using {{index .Files "file.name"}} or using the {{.Files.Get name}} or {{.Files.GetString name}} functions.
-
Values files can declare values for the top-level chart, as well as for any of the charts that are included in that chart’s charts/ directory.
-
a way of sharing one top-level variable with all subcharts, which is useful for things like setting metadata properties like labels.
-
If a subchart declares a global variable, that global will be passed downward (to the subchart’s subcharts), but not upward to the parent chart.
-
Any HTTP server that can serve YAML files and tar files and can answer GET requests can be used as a repository server.
-
Helm provides a hook mechanism to allow chart developers to intervene at certain points in a release’s life cycle.
-
Execute a Job to back up a database before installing a new chart, and then execute a second job after the upgrade in order to restore data.
-
test-success: Executes when running helm test and expects the pod to return successfully (return code == 0).
-
Hooks allow you, the chart developer, an opportunity to perform operations at strategic points in a release lifecycle
-
if the job fails, the release will fail. This is a blocking operation, so the Helm client will pause while the Job is run.
-
If they have hook weights (see below), they are executed in weighted order. Otherwise, ordering is not guaranteed.
-
To destroy such resources, you need to either write code to perform this operation in a pre-delete or post-delete hook or add "helm.sh/hook-delete-policy" annotation to the hook template file.
-
When subcharts declare hooks, those are also evaluated. There is no way for a top-level chart to disable the hooks declared by subcharts.
-
"before-hook-creation" specifies Tiller should delete the previous hook before the new hook is launched.
-
By default Tiller will wait for 60 seconds for a deleted hook to no longer exist in the API server before timing out.
-
The crd-install hook is executed very early during an installation, before the rest of the manifests are verified.
-
A common reason why the hook resource might already exist is that it was not deleted following use on a previous install/upgrade.
-
include function allows you to bring in another template, and then pass the results to other template functions.
-
The required function allows you to declare a particular values entry as required for template rendering.
-
When you are working with string data, you are always safer quoting the strings than leaving them as bare words
-
to include a template, and then perform an operation on that template’s output, Helm has a special include function
-
The above includes a template called toYaml, passes it $value, and then passes the output of that template to the nindent function.
-
Go provides a way for setting template options to control behavior when a map is indexed with a key that’s not present in the map
-
The required function gives developers the ability to declare a value entry as required for template rendering.
-
The sha256sum function can be used to ensure a deployment’s annotation section is updated if another file changes
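The usual pattern is a checksum annotation on the Deployment’s pod template, so Pods roll when the ConfigMap changes (the file name is hypothetical):

```yaml
  annotations:
    checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```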
-
In the templates/ directory, any file that begins with an underscore(_) is not expected to output a Kubernetes manifest file.
-
The current best practice for composing a complex application from discrete parts is to create a top-level umbrella chart that exposes the global configurations, and then use the charts/ subdirectory to embed each of the components.
-
SAP’s Converged charts: These charts install SAP Converged Cloud, a full OpenStack IaaS, on Kubernetes. All of the charts are collected together in one GitHub repository, except for a few submodules.
-
Deis’s Workflow: This chart exposes the entire Deis PaaS system with one chart. But it’s different from the SAP chart in that this umbrella chart is built from each component, and each component is tracked in a different Git repository.
-
As a best practice, templates should follow a YAML-like syntax unless the JSON syntax substantially reduces the risk of a formatting issue.
-
A chart repository is an HTTP server that houses an index.yaml file and optionally some packaged charts.
-
Because a chart repository can be any HTTP server that can serve YAML and tar files and can answer GET requests, you have a plethora of options when it comes down to hosting your own chart repository.
-
A valid chart repository must have an index file. The index file contains information about each chart in the chart repository.
-
The Helm project provides an open-source Helm repository server called ChartMuseum that you can host yourself.
-
add the repository to their helm client via the helm repo add [NAME] [URL] command with any name they would like to use to reference the repository.
-
Chart repositories must make it possible to serve provenance files over HTTP via a specific request, and must make them available at the same URI path as the chart.
-
We don’t want to be “the certificate authority” for all chart signers. Instead, we strongly favor a decentralized model, which is part of the reason we chose OpenPGP as our foundational technology.
-
A test in a helm chart lives under the templates/ directory and is a pod definition that specifies a container with a given command to run.
-
The pod definition must contain one of the helm test hook annotations: helm.sh/hook: test-success or helm.sh/hook: test-failure
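A test-pod sketch (the release-scoped service name is hypothetical):

```yaml
# templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-connection-test"
  annotations:
    "helm.sh/hook": test-success   # helm test expects exit code 0
spec:
  restartPolicy: Never
  containers:
  - name: wget
    image: busybox
    command: ['wget']
    args: ['{{ .Release.Name }}-svc:80']
```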
Dynamic Provisioning | vSphere Storage for Kubernetes - 0 views
-
Storage Policy based Management (SPBM). SPBM provides a single unified control plane across a broad range of data services and storage solutions
-
Kubernetes StorageClasses allow the creation of PersistentVolumes on-demand without having to create storage and mount it into K8s nodes upfront
-
When a PVC is created, the PersistentVolume will be provisioned on a compatible datastore with the most free space that satisfies the gold storage policy requirements.
- ...2 more annotations...
-
When a PVC is created, the vSphere Cloud Provider checks if the user specified datastore satisfies the gold storage policy requirements. If it does, the vSphere Cloud Provider will provision the PersistentVolume on the user specified datastore. If not, it will create an error telling the user that the specified datastore is not compatible with gold storage policy requirements.
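A StorageClass sketch for the gold policy described above (the datastore name is hypothetical):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold-policy
provisioner: kubernetes.io/vsphere-volume
parameters:
  storagePolicyName: gold      # the SPBM policy the datastore must satisfy
  datastore: VSANDatastore     # optional; must be compatible with the policy
```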
-
The Kubernetes user will have the ability to specify custom vSAN Storage Capabilities during dynamic volume provisioning.