Larvata / Group items tagged: namespace

張 旭

Helm

  • Helm will figure out where to install Tiller by reading your Kubernetes configuration file (usually $HOME/.kube/config). This is the same file that kubectl uses.
  • kubectl cluster-info
  • Role-Based Access Control (RBAC) enabled
  • initialize the local CLI
  • install Tiller into your Kubernetes cluster
  • helm install
  • helm init --upgrade
  • By default, when Tiller is installed, it does not have authentication enabled.
  • helm repo update
  • Without a max history set the history is kept indefinitely, leaving a large number of records for helm and tiller to maintain.
  • helm init --upgrade
  • Whenever you install a chart, a new release is created.
  • one chart can be installed multiple times into the same cluster. And each can be independently managed and upgraded.
  • The helm list command will show you a list of all deployed releases.
  • helm delete
  • helm status
  • you can audit a cluster’s history, and even undelete a release (with helm rollback).
  • the Helm server (Tiller).
  • The Helm client (helm)
  • brew install kubernetes-helm
  • Tiller, the server portion of Helm, typically runs inside of your Kubernetes cluster.
  • it can also be run locally, and configured to talk to a remote Kubernetes cluster.
  • Role-Based Access Control - RBAC for short
  • create a service account for Tiller with the right roles and permissions to access resources (see the sketch at the end of this list)
  • run Tiller in an RBAC-enabled Kubernetes cluster.
  • run kubectl get pods --namespace kube-system and see Tiller running.
  • helm inspect
  • Helm will look for Tiller in the kube-system namespace unless --tiller-namespace or TILLER_NAMESPACE is set.
  • For development, it is sometimes easier to work on Tiller locally, and configure it to connect to a remote Kubernetes cluster.
  • even when running locally, Tiller will store release configuration in ConfigMaps inside of Kubernetes.
  • helm version should show you both the client and server version.
  • Because Tiller stores its data in Kubernetes ConfigMaps, you can safely delete and re-install Tiller without worrying about losing any data.
  • helm reset
  • The --node-selectors flag allows us to specify the node labels required for scheduling the Tiller pod.
  • --override allows you to specify properties of Tiller’s deployment manifest.
  • helm init --override manipulates the specified properties of the final manifest (there is no “values” file).
  • The --output flag allows us to skip the installation of Tiller’s deployment manifest and simply output the deployment manifest to stdout in either JSON or YAML format.
  • By default, tiller stores release information in ConfigMaps in the namespace where it is running.
  • To switch from the default backend to the secrets backend, you’ll have to do the migration on your own.
  • a beta SQL storage backend that stores release information in an SQL database (only postgres has been tested so far).
  • Once you have the Helm Client and Tiller successfully installed, you can move on to using Helm to manage charts.
  • Helm requires that kubelet have access to a copy of the socat program to proxy connections to the Tiller API.
  • A Release is an instance of a chart running in a Kubernetes cluster. One chart can often be installed many times into the same cluster.
  • helm init --client-only
  • helm init --dry-run --debug
  • A panic in Tiller is almost always the result of a failure to negotiate with the Kubernetes API server
  • Tiller and Helm have to negotiate a common version to make sure that they can safely communicate without breaking API assumptions
  • helm delete --purge
  • Helm stores some files in $HELM_HOME, which is located by default in ~/.helm
  • A Chart is a Helm package. It contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster.
  • Think of it like the Kubernetes equivalent of a Homebrew formula, an Apt dpkg, or a Yum RPM file.
  • A Repository is the place where charts can be collected and shared.
  • Set the $HELM_HOME environment variable
  • each time it is installed, a new release is created.
  • Helm installs charts into Kubernetes, creating a new release for each installation. And to find new charts, you can search Helm chart repositories.
  • chart repository is named stable by default
  • helm search shows you all of the available charts
  • helm inspect
  • To install a new package, use the helm install command. At its simplest, it takes only one argument: The name of the chart.
  • If you want to use your own release name, simply use the --name flag on helm install
  • additional configuration steps you can or should take.
  • Helm does not wait until all of the resources are running before it exits. Many charts require Docker images that are over 600M in size, and may take a long time to install into the cluster.
  • helm status
  • helm inspect values
  • helm inspect values stable/mariadb
  • override any of these settings in a YAML formatted file, and then pass that file during installation.
  • helm install -f config.yaml stable/mariadb
  • --values (or -f): Specify a YAML file with overrides.
  • --set (and its variants --set-string and --set-file): Specify overrides on the command line.
  • Values that have been --set can be cleared by running helm upgrade with --reset-values specified.
  • Chart designers are encouraged to consider the --set usage when designing the format of a values.yaml file.
  • --set-file key=filepath is another variant of --set. It reads the file and uses its content as a value.
  • inject a multi-line text into values without dealing with indentation in YAML.
  • An unpacked chart directory
  • When a new version of a chart is released, or when you want to change the configuration of your release, you can use the helm upgrade command.
  • Since Kubernetes charts can be large and complex, Helm tries to perform the least invasive upgrade.
  • It will only update things that have changed since the last release
  • $ helm upgrade -f panda.yaml happy-panda stable/mariadb
  • deployment
  • If both are used, --set values are merged into --values with higher precedence.
  • The helm get command is a useful tool for looking at a release in the cluster.
  • helm rollback
  • A release version is an incremental revision. Every time an install, upgrade, or rollback happens, the revision number is incremented by 1.
  • helm history
  • a release name cannot be re-used.
  • you can roll back a deleted release and have it re-activate.
  • helm repo list
  • helm repo add
  • helm repo update
  • The Chart Development Guide explains how to develop your own charts.
  • helm create
  • helm lint
  • helm package
  • Charts that are archived can be loaded into chart repositories.
  • chart repository server
  • Tiller can be installed into any namespace.
  • Limiting Tiller to only be able to install into specific namespaces and/or resource types is controlled by Kubernetes RBAC roles and rolebindings
  • Release names are unique PER TILLER INSTANCE
  • Charts should only contain resources that exist in a single namespace.
  • It is not recommended to have multiple Tillers configured to manage resources in the same namespace.
  • a client-side Helm plugin. A plugin is a tool that can be accessed through the helm CLI, but which is not part of the built-in Helm codebase.
  • Helm plugins are add-on tools that integrate seamlessly with Helm. They provide a way to extend the core feature set of Helm, but without requiring every new feature to be written in Go and added to the core tool.
  • Helm plugins live in $(helm home)/plugins
  • The Helm plugin model is partially modeled on Git’s plugin model
  • helm is referred to as the porcelain layer, with plugins being the plumbing.
  • helm plugin install https://github.com/technosophos/helm-template
  • command is the command that this plugin will execute when it is called.
  • Environment variables are interpolated before the plugin is executed.
  • The command itself is not executed in a shell. So you can’t oneline a shell script.
  • Helm is able to fetch Charts using HTTP/S
  • Variables like KUBECONFIG are set for the plugin if they are set in the outer environment.
  • In Kubernetes, granting a role to an application-specific service account is a best practice to ensure that your application is operating in the scope that you have specified.
  • restrict Tiller’s ability to install resources to certain namespaces, or grant a Helm client access to a Tiller instance.
  • Service account with cluster-admin role
  • The cluster-admin role is created by default in a Kubernetes cluster
  • Deploy Tiller in a namespace, restricted to deploying resources only in that namespace
  • Deploy Tiller in a namespace, restricted to deploying resources in another namespace
  • When running a Helm client in a pod, in order for the Helm client to talk to a Tiller instance, it will need certain privileges to be granted.
  • SSL Between Helm and Tiller
  • The Tiller authentication model uses client-side SSL certificates.
  • creating an internal CA, and using both the cryptographic and identity functions of SSL.
  • Helm is a powerful and flexible package-management and operations tool for Kubernetes.
  • default installation applies no security configurations
  • unless you are working with a cluster that is well-secured in a private network, with no data-sharing and no other users or teams.
  • With great power comes great responsibility.
  • Choose the Best Practices you should apply to your helm installation
  • Role-based access control, or RBAC
  • Tiller’s gRPC endpoint and its usage by Helm
  • Kubernetes employs a role-based access control (RBAC) system (as do modern operating systems) to help mitigate the damage that can be done if credentials are misused or bugs exist.
  • In the default installation the gRPC endpoint that Tiller offers is available inside the cluster (not external to the cluster) without authentication configuration applied.
  • Tiller stores its release information in ConfigMaps. We suggest changing the default to Secrets.
  • release information
  • charts
  • charts are a kind of package that not only installs containers you may or may not have validated yourself, but may also install into more than one namespace.
  • As with all shared software, in a controlled or shared environment you must validate all software yourself before you install it.
  • use Helm’s provenance tools to verify the provenance and integrity of charts
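
A minimal sketch of the Tiller service-account setup described in the RBAC notes above, assuming the broad cluster-admin role is acceptable (a production setup would scope the role down to specific namespaces):

    # create a service account for Tiller and bind it to a role
    kubectl --namespace kube-system create serviceaccount tiller
    kubectl create clusterrolebinding tiller \
        --clusterrole=cluster-admin \
        --serviceaccount=kube-system:tiller

    # install Tiller using that service account
    helm init --service-account tiller

After this, kubectl get pods --namespace kube-system should show Tiller running.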
張 旭

Boosting your kubectl productivity · Learnk8s

  • kubectl is your cockpit to control Kubernetes.
  • kubectl is a client for the Kubernetes API
  • Kubernetes API is an HTTP REST API.
  • This API is the real Kubernetes user interface.
  • Kubernetes is fully controlled through this API
  • every Kubernetes operation is exposed as an API endpoint and can be executed by an HTTP request to this endpoint.
  • the main job of kubectl is to carry out HTTP requests to the Kubernetes API
  • Kubernetes maintains an internal state of resources, and all Kubernetes operations are CRUD operations on these resources.
  • Kubernetes is a fully resource-centred system
  • Kubernetes API reference is organised as a list of resource types with their associated operations.
  • This is how kubectl works for all commands that interact with the Kubernetes cluster.
  • kubectl simply makes HTTP requests to the appropriate Kubernetes API endpoints.
  • it's totally possible to control Kubernetes with a tool like curl by manually issuing HTTP requests to the Kubernetes API.
  • Kubernetes consists of a set of independent components that run as separate processes on the nodes of a cluster.
  • components on the master nodes
  • Storage backend: stores resource definitions (usually etcd is used)
  • API server: provides Kubernetes API and manages storage backend
  • Controller manager: ensures resource statuses match specifications
  • Scheduler: schedules Pods to worker nodes
  • component on the worker nodes
  • Kubelet: manages execution of containers on a worker node
  • triggers the ReplicaSet controller, which is a sub-process of the controller manager.
  • the scheduler, which watches for Pod definitions that are not yet scheduled to a worker node.
  • creating and updating resources in the storage backend on the master node.
  • The kubelet of the worker node your ReplicaSet Pods have been scheduled to instructs the configured container runtime (which may be Docker) to download the required container images and run the containers.
  • Kubernetes components (except the API server and the storage backend) work by watching for resource changes in the storage backend and manipulating resources in the storage backend.
  • However, these components do not access the storage backend directly, but only through the Kubernetes API.
    • 張 旭
       
      Impressive: the components all communicate with each other through API calls, which is good microservice behavior.
  • double usage of the Kubernetes API for internal components as well as for external users is a fundamental design concept of Kubernetes.
  • All other Kubernetes components and users read, watch, and manipulate the state (i.e. resources) of Kubernetes through the Kubernetes API
  • The storage backend stores the state (i.e. resources) of Kubernetes.
  • command completion is a shell feature that works by means of a completion script.
  • A completion script is a shell script that defines the completion behaviour for a specific command. Sourcing a completion script enables completion for the corresponding command.
  • kubectl completion zsh
  • /etc/bash_completion.d directory (create it, if it doesn't exist)
  • source <(kubectl completion bash)
  • source <(kubectl completion zsh)
  • autoload -Uz compinit; compinit
  • the API reference, which contains the full specifications of all resources.
  • kubectl api-resources
  • displays the resource names in their plural form (e.g. deployments instead of deployment). It also displays the shortname (e.g. deploy) for those resources that have one. Don't worry about these differences. All of these name variants are equivalent for kubectl.
  • .spec
  • This is where the custom columns output format comes in. It lets you freely define the columns and the data to display in them. You can choose any field of a resource to be displayed as a separate column in the output
  • kubectl get pods -o custom-columns='NAME:metadata.name,NODE:spec.nodeName'
  • kubectl explain pod.spec.
  • kubectl explain pod.metadata.
  • browse the resource specifications and try it out with any fields you like!
  • JSONPath is a language to extract data from JSON documents (it is similar to XPath for XML).
  • with kubectl explain, only a subset of the JSONPath capabilities is supported
  • Many fields of Kubernetes resources are lists, and this operator allows you to select items of these lists. It is often used with a wildcard as [*] to select all items of the list.
  • kubectl get pods -o custom-columns='NAME:metadata.name,IMAGES:spec.containers[*].image'
  • a Pod may contain more than one container.
  • The availability zones for each node are obtained through the special failure-domain.beta.kubernetes.io/zone label.
  • kubectl get nodes -o yaml and kubectl get nodes -o json
  • The default kubeconfig file is ~/.kube/config
  • with multiple clusters, then you have connection parameters for multiple clusters configured in your kubeconfig file.
  • Within a cluster, you can set up multiple namespaces (a namespace is a kind of "virtual" cluster within a physical cluster)
  • overwrite the default kubeconfig file with the --kubeconfig option for every kubectl command.
  • Namespace: the namespace to use when connecting to the cluster
  • a one-to-one mapping between clusters and contexts.
  • When kubectl reads a kubeconfig file, it always uses the information from the current context.
  • just change the current context in the kubeconfig file
  • to switch to another namespace in the same cluster, you can change the value of the namespace element of the current context
  • kubectl also provides the --cluster, --user, --namespace, and --context options that allow you to overwrite individual elements and the current context itself, regardless of what is set in the kubeconfig file.
  • A popular tool for switching between clusters and namespaces is kubectx.
  • kubectl config get-contexts
  • just have to download the shell scripts named kubectl-ctx and kubectl-ns to any directory in your PATH and make them executable (for example, with chmod +x)
  • kubectl proxy
  • kubectl get roles
  • kubectl get pod
  • Kubectl plugins are distributed as simple executable files with a name of the form kubectl-x. The prefix kubectl- is mandatory.
  • To install a plugin, you just have to copy the kubectl-x file to any directory in your PATH and make it executable (for example, with chmod +x); a minimal plugin is sketched after this list.
  • krew itself is a kubectl plugin
  • check out the kubectl-plugins GitHub topic
  • The executable can be of any type, a Bash script, a compiled Go program, a Python script, it really doesn't matter. The only requirement is that it can be directly executed by the operating system.
  • kubectl plugins can be written in any programming or scripting language.
  • you can write more sophisticated plugins with real programming languages, for example, using a Kubernetes client library. If you use Go, you can also use the cli-runtime library, which exists specifically for writing kubectl plugins.
  • a kubeconfig file consists of a set of contexts
  • changing the current context means changing the cluster, if you have only a single context per cluster.
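
To make the plugin mechanism above concrete, here is a toy plugin sketch; the name kubectl-hello and its behavior are invented for illustration. Save it anywhere on your PATH, make it executable, and run it as kubectl hello:

    #!/usr/bin/env bash
    # kubectl-hello: a minimal kubectl plugin.
    # kubectl discovers plugins purely by the kubectl- prefix on the file name.
    echo "Current context: $(kubectl config current-context)"

Because any directly executable file qualifies, the same plugin could just as well be a compiled Go program or a Python script.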
張 旭

Kubernetes Basic Concepts (Kubernetes 基本概念) · Kubernetes指南

  • A Container is a portable, lightweight operating-system-level virtualization technology. It uses namespaces to isolate different software runtime environments, and because images make the runtime environment self-contained, containers can easily run anywhere.
  • When every application is packaged in a container, managing container deployment becomes equivalent to managing application deployment.
  • A Pod is a set of tightly coupled containers that share the PID, IPC, Network, and UTS namespaces; it is the basic unit of scheduling in Kubernetes.
  • inter-process communication and file sharing
  • In Kubernetes, all objects are defined using manifests (YAML or JSON); a sample manifest follows this list
  • A Node is the host where Pods actually run; it can be a physical or a virtual machine.
  • Every Node must run at least a container runtime (such as docker or rkt), the kubelet, and kube-proxy.
  • Common resources such as pods, services, replication controllers, and deployments belong to some namespace (default by default)
  • whereas node, persistentVolumes, and the like do not belong to any namespace
  • A Service is an abstraction of an application service; it provides load balancing and service discovery for applications via labels.
  • The Pod IPs and ports that match the labels form the endpoints, and kube-proxy is responsible for load-balancing the service IP across these endpoints.
  • Every Service is automatically assigned a cluster IP (a virtual address reachable only inside the cluster) and a DNS name
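
As a sketch of the manifest idea above, here is a minimal Pod definition in YAML (the name and image are arbitrary examples):

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx            # example name
      namespace: default     # Pods belong to a namespace
    spec:
      containers:
        - name: nginx
          image: nginx:1.25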
張 旭

ruby-grape/grape: An opinionated framework for creating REST-like APIs in Ruby.

shared by 張 旭 on 17 Dec 16
  • Grape is a REST-like API framework for Ruby.
  • designed to run on Rack or complement existing web application frameworks such as Rails and Sinatra by providing a simple DSL to easily develop RESTful APIs
  • Grape APIs are Rack applications that are created by subclassing Grape::API (a minimal API is sketched after this list)
  • Rails expects a subdirectory that matches the name of the Ruby module and a file name that matches the name of the class
  • mount multiple API implementations inside another one
  • mount on a path, which is similar to using prefix inside the mounted API itself.
  • four strategies in which clients can reach your API's endpoints: :path, :header, :accept_version_header and :param
  • clients should pass the desired version as a request parameter, either in the URL query string or in the request body.
  • clients should pass the desired version in the HTTP Accept header
  • clients should pass the desired version in the URL
  • clients should pass the desired version in the HTTP Accept-Version header.
  • add a description to API methods and namespaces
  • Request parameters are available through the params hash object
  • Parameters are automatically populated from the request body on POST and PUT
  • route string parameters will have precedence.
  • Grape allows you to access only the parameters that have been declared by your params block
  • By default declared(params) includes parameters that have nil values
  • all valid types
  • type: File
  • JSON objects and arrays of objects are accepted equally
  • any class can be used as a type so long as an explicit coercion method is supplied
  • As a special case, variant-member-type collections may also be declared, by passing a Set or Array with more than one member to type
  • Parameters can be nested using group or by calling requires or optional with a block
  • relevant if another parameter is given
  • Parameters options can be grouped
  • allow_blank can be combined with both requires and optional
  • Parameters can be restricted to a specific set of values
  • Parameters can be restricted to match a specific regular expression
  • Never define mutually exclusive sets with any required params
  • Namespaces allow parameter definitions and apply to every method within the namespace
  • define a route parameter as a namespace using route_param
  • create custom validation that use request to validate the attribute
  • rescue a Grape::Exceptions::ValidationErrors and respond with a custom response or turn the response into well-formatted JSON for a JSON API that separates individual parameters and the corresponding error messages
  • custom validation messages
  • Request headers are available through the headers helper or from env in their original form
  • define requirements for your named route parameters using regular expressions on namespace or endpoint
  • route will match only if all requirements are met
  • mix in a module
  • define reusable params
  • using cookies method
  • a 201 for POST-Requests
  • 204 for DELETE-Requests
  • 200 status code for all other Requests
  • use status to query and set the actual HTTP Status Code
  • raising errors with error!
  • It is crucial to define this endpoint at the very end of your API, as it literally accepts every request.
  • rescue_from will rescue the exceptions listed and all their subclasses.
  • Grape::API provides a logger method which by default will return an instance of the Logger class from Ruby's standard library.
  • Grape supports a range of ways to present your data
  • Grape has built-in Basic and Digest authentication (the given block is executed in the context of the current Endpoint).
  • Authentication applies to the current namespace and any children, but not parents.
  • Blocks can be executed before or after every API call, using before, after, before_validation and after_validation
  • Before and after callbacks execute in the following order
  • Grape by default anchors all request paths, which means that the request URL must match from start to end.
  • The namespace method has a number of aliases, including: group, resource, resources, and segment. Use whichever reads the best for your API.
  • test a Grape API with RSpec by making HTTP requests and examining the response
  • POST JSON data and specify the correct content-type.
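
A minimal sketch of the Grape DSL covered above; the endpoint, versioning strategy, and parameter names are invented for illustration:

    require 'grape'

    class API < Grape::API
      version 'v1', using: :path   # clients reach endpoints under /v1/...
      format :json

      params do
        requires :id, type: Integer, desc: 'Status ID'
      end
      get 'statuses/:id' do
        { id: params[:id] }        # declared params are coerced to Integer
      end
    end

Being a Rack application, the class can be mounted with run API in a config.ru file.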
張 旭

Share Process Namespace between Containers in a Pod | Kubernetes

  • When process namespace sharing is enabled, processes in a container are visible to all other containers in the same pod.
  • It's even possible to access the file system of another container using the /proc/$pid/root link.
  • Pods share many resources, so it makes sense they would also share a process namespace (a pod spec enabling this is sketched after this list).
  • Processes are visible to other containers in the pod. This includes all information visible in /proc, such as passwords that were passed as arguments or environment variables. These are protected only by regular Unix permissions.
  • Container filesystems are visible to other containers in the pod through the /proc/$pid/root link. This makes debugging easier, but it also means that filesystem secrets are protected only by filesystem permissions.
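
Process namespace sharing is enabled per pod through the shareProcessNamespace field of the pod spec; a minimal sketch (container names and images are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pid-sharing-demo
    spec:
      shareProcessNamespace: true    # every container below sees the others' processes
      containers:
        - name: app
          image: nginx
        - name: debugger             # e.g. run `ps ax` here to see the nginx processes
          image: busybox
          stdin: true
          tty: true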
張 旭

Helm

  • Templates generate manifest files, which are YAML-formatted resource descriptions that Kubernetes can understand.
  • service.yaml: A basic manifest for creating a service endpoint for your deployment
  • In Kubernetes, a ConfigMap is simply a container for storing configuration data.
  • deployment.yaml: A basic manifest for creating a Kubernetes deployment
  • using the suffix .yaml for YAML files and .tpl for helpers.
  • It is just fine to put a plain YAML file like this in the templates/ directory.
  • helm get manifest
  • The helm get manifest command takes a release name (full-coral) and prints out all of the Kubernetes resources that were uploaded to the server. Each file begins with --- to indicate the start of a YAML document
  • Names should be unique to a release
  • The name: field is limited to 63 characters because of limitations to the DNS system.
  • release names are limited to 53 characters
  • {{ .Release.Name }}
  • A template directive is enclosed in {{ and }} blocks.
  • The values that are passed into a template can be thought of as namespaced objects, where a dot (.) separates each namespaced element.
  • The leading dot before Release indicates that we start with the top-most namespace for this scope
  • The Release object is one of the built-in objects for Helm
  • When you want to test the template rendering, but not actually install anything, you can use helm install ./mychart --debug --dry-run
  • Using --dry-run will make it easier to test your code, but it won’t ensure that Kubernetes itself will accept the templates you generate.
  • Objects are passed into a template from the template engine.
  • create new objects within your templates
  • Objects can be simple, and have just one value. Or they can contain other objects or functions.
  • Release is one of the top-level objects that you can access in your templates.
  • Release.Namespace: The namespace to be released into (if the manifest doesn’t override)
  • Values: Values passed into the template from the values.yaml file and from user-supplied files. By default, Values is empty.
  • Chart: The contents of the Chart.yaml file.
  • Files: This provides access to all non-special files in a chart.
  • Files.Get is a function for getting a file by name
  • Files.GetBytes is a function for getting the contents of a file as an array of bytes instead of as a string. This is useful for things like images.
  • Template: Contains information about the current template that is being executed
  • BasePath: The namespaced path to the templates directory of the current chart
  • The built-in values always begin with a capital letter.
  • Go’s naming convention
  • use only initial lower case letters in order to distinguish local names from those built-in.
  • If this is a subchart, the values.yaml file of a parent chart
  • Individual parameters passed with --set
  • values.yaml is the default, which can be overridden by a parent chart’s values.yaml, which can in turn be overridden by a user-supplied values file, which can in turn be overridden by --set parameters.
  • While structuring data this way is possible, the recommendation is that you keep your values trees shallow, favoring flatness.
  • If you need to delete a key from the default values, you may override the value of the key to be null, in which case Helm will remove the key from the overridden values merge.
  • Kubernetes would then fail because you cannot declare more than one livenessProbe handler.
  • When injecting strings from the .Values object into the template, we ought to quote these strings.
  • quote
  • Template functions follow the syntax functionName arg1 arg2...
  • While we talk about the “Helm template language” as if it is Helm-specific, it is actually a combination of the Go template language, some extra functions, and a variety of wrappers to expose certain objects to the templates.
  • Drawing on a concept from UNIX, pipelines are a tool for chaining together a series of template commands to compactly express a series of transformations.
  • pipelines are an efficient way of getting several things done in sequence
  • The repeat function will echo the given string the given number of times
  • default DEFAULT_VALUE GIVEN_VALUE. This function allows you to specify a default value inside of the template, in case the value is omitted.
  • all static default values should live in the values.yaml, and should not be repeated using the default command
  • Operators are implemented as functions that return a boolean value.
  • To use eq, ne, lt, gt, and, or, not etcetera place the operator at the front of the statement followed by its parameters just as you would a function.
  • if and
  • if or
  • with to specify a scope
  • range, which provides a “for each”-style loop
  • block declares a special kind of fillable template area
  • A pipeline is evaluated as false if the value is: a boolean false, a numeric zero, an empty string, a nil (empty or null), or an empty collection (map, slice, tuple, dict, array)
  • incorrect YAML because of the whitespacing
  • When the template engine runs, it removes the contents inside of {{ and }}, but it leaves the remaining whitespace exactly as is.
  • {{- (with the dash and space added) indicates that whitespace should be chomped left, while -}} means whitespace to the right should be consumed.
  • Newlines are whitespace!
  • an * at the end of the line indicates a newline character that would be removed
  • Be careful with the chomping modifiers.
  • the indent function
  • Scopes can be changed. with can allow you to set the current scope (.) to a particular object.
  • Inside of the restricted scope, you will not be able to access the other objects from the parent scope.
  • range
  • The range function will “range over” (iterate through) the pizzaToppings list.
  • Just as with sets the scope of ., so does a range operator.
  • The toppings: |- line is declaring a multi-line string.
  • not a YAML list. It’s a big string.
  • the data in ConfigMaps data is composed of key/value pairs, where both the key and the value are simple strings.
  • The |- marker in YAML takes a multi-line string.
  • range can be used to iterate over collections that have a key and a value (like a map or dict).
  • In Helm templates, a variable is a named reference to another object. It follows the form $name
  • Variables are assigned with a special assignment operator: :=
  • {{- $relname := .Release.Name -}}
  • capture both the index and the value
  • the integer index (starting from zero) to $index and the value to $topping
  • For data structures that have both a key and a value, we can use range to get both
  • Variables are normally not “global”. They are scoped to the block in which they are declared.
  • one variable that is always global - $ - this variable will always point to the root context.
  • $.
  • $.
  • Helm template language is its ability to declare multiple templates and use them together.
  • A named template (sometimes called a partial or a subtemplate) is simply a template defined inside of a file, and given a name.
  • when naming templates: template names are global.
  • If you declare two templates with the same name, whichever one is loaded last will be the one used.
  • you should be careful to name your templates with chart-specific names.
  • templates in subcharts are compiled together with top-level templates
  • The naming convention is to prefix each defined template with the name of the chart: {{ define "mychart.labels" }} (a sketch combining several template constructs follows this list)
  • Helm has over 60 available functions.
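
A small template sketch combining several of the constructs above (built-in objects, pipelines, default, quote, range, and whitespace chomping); the favoriteDrink and pizzaToppings values are assumed to be defined in values.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: {{ .Release.Name }}-configmap
    data:
      drink: {{ .Values.favoriteDrink | default "tea" | quote }}
      toppings: |-
        {{- range .Values.pizzaToppings }}
        - {{ . | title | quote }}
        {{- end }}

Rendering can be checked without installing anything via helm install ./mychart --debug --dry-run.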
張 旭

Rails Routing from the Outside In - Ruby on Rails Guides

  • Resource routing allows you to quickly declare all of the common routes for a given resourceful controller.
  • Rails would dispatch that request to the destroy method on the photos controller with { id: '17' } in params.
  • a resourceful route provides a mapping between HTTP verbs and URLs to controller actions.
  • each action also maps to particular CRUD operations in a database
  • resource :photo and resources :photos creates both singular and plural routes that map to the same controller (PhotosController).
  • One way to avoid deep nesting (as recommended above) is to generate the collection actions scoped under the parent, so as to get a sense of the hierarchy, but to not nest the member actions.
  • to only build routes with the minimal amount of information to uniquely identify the resource
  • The shallow method of the DSL creates a scope inside of which every nesting is shallow
  • These concerns can be used in resources to avoid code duplication and share behavior across routes
  • To add a member route, add a member block into the resource block (see the routes sketch after this list)
  • You can leave out the :on option, this will create the same member route except that the resource id value will be available in params[:photo_id] instead of params[:id].
  • Singular Resources
  • use a singular resource to map /profile (rather than /profile/:id) to the show action
  • Passing a String to get will expect a controller#action format
  • workaround
  • organize groups of controllers under a namespace
  • route /articles (without the prefix /admin) to Admin::ArticlesController
  • route /admin/articles to ArticlesController (without the Admin:: module prefix)
  • Nested routes allow you to capture this relationship in your routing.
  • helpers take an instance of Magazine as the first parameter (magazine_ads_url(@magazine)).
  • Resources should never be nested more than 1 level deep.
  • via the :shallow option
  • a balance between descriptive routes and deep nesting
  • :shallow_path prefixes member paths with the specified parameter
  • Routing Concerns allows you to declare common routes that can be reused inside other resources and routes
  • Rails can also create paths and URLs from an array of parameters.
  • use url_for with a set of objects
  • In helpers like link_to, you can specify just the object in place of the full url_for call
  • insert the action name as the first element of the array
  • This will recognize /photos/1/preview with GET, and route to the preview action of PhotosController, with the resource id value passed in params[:id]. It will also create the preview_photo_url and preview_photo_path helpers.
  • pass :on to a route, eliminating the block:
  • Collection Routes
  • This will enable Rails to recognize paths such as /photos/search with GET, and route to the search action of PhotosController. It will also create the search_photos_url and search_photos_path route helpers.
  • simple routing makes it very easy to map legacy URLs to new Rails actions
  • add an alternate new action using the :on shortcut
  • When you set up a regular route, you supply a series of symbols that Rails maps to parts of an incoming HTTP request.
  • :controller maps to the name of a controller in your application
  • :action maps to the name of an action within that controller
  • optional parameters, denoted by parentheses
  • This route will also route the incoming request of /photos to PhotosController#index, since :action and :id are optional parameters.
  • use a constraint on :controller that matches the namespace you require
  • dynamic segments don't accept dots
  • The params will also include any parameters from the query string
  • :defaults option.
  • set params[:format] to "jpg"
  • cannot override defaults via query parameters
  • specify a name for any route using the :as option
  • create logout_path and logout_url as named helpers in your application.
  • Inside the show action of UsersController, params[:username] will contain the username for the user.
  • should use the get, post, put, patch and delete methods to constrain a route to a particular verb.
  • use the match method with the :via option to match multiple verbs at once
  • Routing both GET and POST requests to a single action has security implications
  • 'GET' in Rails won't check for the CSRF token. You should never write to the database from 'GET' requests.
  • use the :constraints option to enforce a format for a dynamic segment
  • constraints
  • don't need to use anchors
  • Request-Based Constraints
  • the same name as the hash key and then compare the return value with the hash value.
  • constraint values should match the corresponding Request object method return type
    • 張 旭
       
      This seems to check the incoming request: if the access comes from a particular kind of request, it is allowed through.
  • blacklist
    • 張 旭
       
      This part is a bit complicated ...
  • redirect helper
  • reuse dynamic segments from the match in the path to redirect
  • this redirection is a 301 "Moved Permanently" redirect.
  • root method
  • put the root route at the top of the file
  • The root route only routes GET requests to the action.
  • root inside namespaces and scopes
  • For namespaced controllers you can use the directory notation
  • Only the directory notation is supported
  • use the :constraints option to specify a required format on the implicit id
  • specify a single constraint to apply to a number of routes by using the block
  • non-resourceful routes
  • :id parameter doesn't accept dots
  • :as option lets you override the normal naming for the named route helpers
  • use the :as option to prefix the named route helpers that Rails generates for a route
  • prevent name collisions
  • prefix routes with a named parameter
  • This will provide you with URLs such as /bob/articles/1 and will allow you to reference the username part of the path as params[:username] in controllers, helpers and views
  • :only option
  • :except option
  • Generating only the routes that you actually need can cut down on memory use and speed up the routing process.
  • alter path names
  • http://localhost:3000/rails/info/routes
  • rake routes
  • setting the CONTROLLER environment variable
  • Routes should be included in your testing strategy
  • assert_generates assert_recognizes assert_routing
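
A config/routes.rb sketch combining the routing features annotated above; the controller and path names are invented:

    Rails.application.routes.draw do
      resources :photos do
        member do
          get 'preview'      # GET /photos/:id/preview -> photos#preview, preview_photo_path
        end
        collection do
          get 'search'       # GET /photos/search -> photos#search, search_photos_path
        end
      end

      namespace :admin do
        resources :articles  # /admin/articles -> Admin::ArticlesController
      end

      get 'exit', to: 'sessions#destroy', as: :logout  # creates logout_path and logout_url

      root 'pages#index'     # GET / -> pages#index
    end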
張 旭

Template Designer Documentation - Jinja2 Documentation (2.10)

  • A Jinja template doesn’t need to have a specific extension
  • A Jinja template is simply a text file
  • tags, which control the logic of the template
  • {% ... %} for Statements
  • {{ ... }} for Expressions to print to the template output
  • use a dot (.) to access attributes of a variable
  • the outer double-curly braces are not part of the variable, but the print statement.
  • If you access variables inside tags don’t put the braces around them.
  • If a variable or attribute does not exist, you will get back an undefined value.
  • the default behavior is to evaluate to an empty string if printed or iterated over, and to fail for every other operation.
  • This matters if an object has an item and an attribute with the same name. Additionally, the attr() filter only looks up attributes.
  • Variables can be modified by filters. Filters are separated from the variable by a pipe symbol (|) and may have optional arguments in parentheses.
  • Multiple filters can be chained
  • Tests can be used to test a variable against a common expression.
  • To test a variable, add is plus the name of the test after the variable.
  • to find out if a variable is defined, you can do name is defined, which will then return true or false depending on whether name is defined in the current template context.
  • strip whitespace in templates by hand. If you add a minus sign (-) to the start or end of a block (e.g. a For tag), a comment, or a variable expression, the whitespaces before or after that block will be removed
  • not add whitespace between the tag and the minus sign
  • mark a block raw
  • Template inheritance allows you to build a base “skeleton” template that contains all the common elements of your site and defines blocks that child templates can override (a child-template sketch follows this list).
  • The {% extends %} tag is the key here. It tells the template engine that this template “extends” another template.
  • access templates in subdirectories with a slash
  • can’t define multiple {% block %} tags with the same name in the same template
  • use the special self variable and call the block with that name
  • self.title()
  • super()
  • put the name of the block after the end tag for better readability
  • if the block is replaced by a child template, a variable would appear that was not defined in the block or passed to the context.
  • setting the block to “scoped” by adding the scoped modifier to a block declaration
  • If you have a variable that may include any of the following chars (>, <, &, or ") you SHOULD escape it unless the variable contains well-formed and trusted HTML.
  • Jinja2 functions (macros, super, self.BLOCKNAME) always return template data that is marked as safe.
  • With the default syntax, control structures appear inside {% ... %} blocks.
  • the dictsort filter
  • loop.cycle
  • Unlike in Python, it’s not possible to break or continue in a loop
  • use loops recursively
  • add the recursive modifier to the loop definition and call the loop variable with the new iterable where you want to recurse.
  • The loop variable always refers to the closest (innermost) loop.
  • whether the value changed at all,
  • use it to test if a variable is defined, not empty and not false
  • Macros are comparable with functions in regular programming languages.
  • If a macro name starts with an underscore, it’s not exported and can’t be imported.
  • pass a macro to another macro
  • caller()
  • a single trailing newline is stripped if present
  • other whitespace (spaces, tabs, newlines etc.) is returned unchanged
  • a block tag works in “both” directions. That is, a block tag doesn’t just provide a placeholder to fill - it also defines the content that fills the placeholder in the parent.
  • Python dicts are not ordered
  • caller(user)
  • call(user)
  • This is a simple dialog rendered by using a macro and a call block.
  • Filter sections allow you to apply regular Jinja2 filters on a block of template data.
  • Assignments at top level (outside of blocks, macros or loops) are exported from the template like top level macros and can be imported by other templates.
  • using namespace objects which allow propagating of changes across scopes
  • use block assignments to capture the contents of a block into a variable name.
  • The extends tag can be used to extend one template from another.
  • Blocks are used for inheritance and act as both placeholders and replacements at the same time.
  • The include statement is useful to include a template and return the rendered contents of that file into the current namespace
  • Included templates have access to the variables of the active context by default.
  • putting often used code into macros
  • imports are cached and imported templates don’t have access to the current template variables, just the globals by default.
  • Macros and variables starting with one or more underscores are private and cannot be imported.
  • By default, included templates are passed the current context and imported templates are not.
  • imports are often used just as a module that holds macros.
  • Integers and floating point numbers are created by just writing the number down
  • Everything between two brackets is a list.
  • Tuples are like lists that cannot be modified (“immutable”).
  • A dict in Python is a structure that combines keys and values.
  • // Divide two numbers and return the truncated integer result
  • The special constants true, false, and none are indeed lowercase
  • all Jinja identifiers are lowercase
  • (expr) group an expression.
  • The is and in operators support negation using an infix notation
  • in Perform a sequence / mapping containment test.
  • | Applies a filter.
  • ~ Converts all operands into strings and concatenates them.
  • use inline if expressions.
  • always an attribute is returned and items are not looked up.
  • default(value, default_value=u'', boolean=False): If the value is undefined it will return the passed default value, otherwise the value of the variable
  • dictsort(value, case_sensitive=False, by='key', reverse=False): Sort a dict and yield (key, value) pairs.
  • format(value, *args, **kwargs): Apply python string formatting on an object
  • groupby(value, attribute): Group a sequence of objects by a common attribute.
  • the attribute being grouped by is stored in the grouper attribute and the list contains all the objects that have this grouper in common.
  • indent(s, width=4, first=False, blank=False, indentfirst=None): Return a copy of the string with each line indented by 4 spaces. The first line and blank lines are not indented by default.
  • join(value, d=u'', attribute=None): Return a string which is the concatenation of the strings in the sequence.
  • map(): Applies a filter on a sequence of objects or looks up an attribute.
  • pprint(value, verbose=False): Pretty print a variable. Useful for debugging.
  • reject(): Filters a sequence of objects by applying a test to each object, and rejecting the objects with the test succeeding.
  • replace(s, old, new, count=None): Return a copy of the value with all occurrences of a substring replaced with a new one.
  • round(value, precision=0, method='common'): Round the number to a given precision
  • even if rounded to 0 precision, a float is returned.
  • select(): Filters a sequence of objects by applying a test to each object, and only selecting the objects with the test succeeding.
  • sort(value, reverse=False, case_sensitive=False, attribute=None): Sort an iterable. By default it sorts ascending; if you pass it true as the first argument it will reverse the sorting.
  • striptags(value): Strip SGML/XML tags and replace adjacent whitespace by one space.
  • tojson(value, indent=None): Dumps a structure to JSON so that it’s safe to use in <script> tags.
  • trim(value): Strip leading and trailing whitespace.
  • unique(value, case_sensitive=False, attribute=None): Returns a list of unique items from the given iterable
  • urlize(value, trim_url_limit=None, nofollow=False, target=None, rel=None): Converts URLs in plain text into clickable links.
  • defined(value): Return true if the variable is defined
  • in(value, seq): Check if value is in seq.
  • mapping(value): Return true if the object is a mapping (dict etc.).
  • number(value): Return true if the variable is a number.
  • sameas(value, other): Check if an object points to the same memory address as another object
  • undefined(value): Like defined() but the other way round.
  • A joiner is passed a string and will return that string every time it’s called, except the first time (in which case it returns an empty string).
  • namespace(...): Creates a new container that allows attribute assignment using the {% set %} tag
  • The with statement makes it possible to create a new inner scope. Variables set within this scope are not visible outside of the scope.
  • activate and deactivate the autoescaping from within the templates
  • With both trim_blocks and lstrip_blocks enabled, you can put block tags on their own lines, and the entire block line will be removed when rendered, preserving the whitespace of the contents
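
A child-template sketch pulling together inheritance, filters, tests, and the loop variable from the notes above; base.html and the users variable are assumed to exist:

    {% extends "base.html" %}

    {% block content %}
      <ul>
      {%- for user in users if user.name is defined %}
        <li>{{ loop.index }}. {{ user.name | default('anonymous') | title }}</li>
      {%- endfor %}
      </ul>
    {% endblock content %}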
張 旭

Extend the Kubernetes API with CustomResourceDefinitions | Kubernetes

  • When you create a new CustomResourceDefinition (CRD), the Kubernetes API Server creates a new RESTful resource path for each version you specify (a minimal CRD is sketched after this list).
  • The CRD can be either namespaced or cluster-scoped, as specified in the CRD's scope field
  • deleting a namespace deletes all custom objects in that namespace.
  • CustomResourceDefinitions themselves are non-namespaced and are available to all namespaces.
  • Custom objects can contain custom fields. These fields can contain arbitrary JSON.
  • When you delete a CustomResourceDefinition, the server will uninstall the RESTful API endpoint and delete all custom objects stored in it
  • CustomResourceDefinitions store validated resource data in the cluster's persistence store, etcd.
  • By default, all unspecified fields for a custom resource, across all versions, are pruned.
  • The field json can store any JSON value, without anything being pruned.
  • Finalizers allow controllers to implement asynchronous pre-delete hooks.
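
A minimal CRD sketch for the behavior described above, using the apiextensions.k8s.io/v1 API; the group and kind names are invented examples:

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: crontabs.stable.example.com    # must be <plural>.<group>
    spec:
      group: stable.example.com
      scope: Namespaced                    # or Cluster
      names:
        plural: crontabs
        singular: crontab
        kind: CronTab
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
              # opt out of pruning, as with the json field noted above
              x-kubernetes-preserve-unknown-fields: true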
張 旭

Run the Docker daemon as a non-root user (Rootless mode) | Docker Documentation

  • running the Docker daemon and containers as a non-root user
  • Rootless mode does not require root privileges even during the installation of the Docker daemon
  • Rootless mode executes the Docker daemon and containers inside a user namespace.
  • in rootless mode, both the daemon and the container are running without root privileges.
  • Rootless mode does not use binaries with SETUID bits or file capabilities, except newuidmap and newgidmap, which are needed to allow multiple UIDs/GIDs to be used in the user namespace.
  • expose privileged ports (< 1024)
  • add net.ipv4.ip_unprivileged_port_start=0 to /etc/sysctl.conf (or /etc/sysctl.d) and run sudo sysctl --system
  • dockerd-rootless.sh uses slirp4netns (if installed) or VPNKit as the network stack by default.
  • These network stacks run in userspace and might have performance overhead
  • This error occurs when the number of available entries in /etc/subuid or /etc/subgid is not sufficient.
  • This error occurs mostly when the host is running in cgroup v2. See the section Fedora 31 or later for information on switching the host to use cgroup v1.
  • --net=host doesn’t listen on ports in the host network namespace. This is expected behavior, as the daemon is namespaced inside RootlessKit’s network namespace. Use docker run -p instead (see the sketch after this list).
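
A sketch of bringing up rootless mode, assuming Docker's rootless install script; exact paths depend on the distribution:

    # install as the non-root user
    curl -fsSL https://get.docker.com/rootless | sh

    # point the client at the rootless daemon's socket
    export PATH=$HOME/bin:$PATH
    export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock

    dockerd-rootless.sh &           # daemon runs inside the user namespace
    docker run -d -p 8080:80 nginx  # publish ports with -p; --net=host stays namespaced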
張 旭

Secrets - Kubernetes

  • Putting this information in a secret is safer and more flexible than putting it verbatim in a Pod definition or in a container image.
  • A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key.
  • Users can create secrets, and the system also creates some secrets.
  • To use a secret, a pod needs to reference the secret.
  • A secret can be used with a pod in two ways: as files in a volume mounted on one or more of its containers, or used by kubelet when pulling images for the pod (both uses are sketched after this list).
  • --from-file
  • You can also create a Secret in a file first, in json or yaml format, and then create that object.
  • The Secret contains two maps: data and stringData.
  • The data field is used to store arbitrary data, encoded using base64.
  • Kubernetes automatically creates secrets which contain credentials for accessing the API and it automatically modifies your pods to use this type of secret.
  • kubectl get and kubectl describe avoid showing the contents of a secret by default.
  • stringData field is provided for convenience, and allows you to provide secret data as unencoded strings.
  • This is useful where you are deploying an application that uses a Secret to store a configuration file, and you want to populate parts of that configuration file during your deployment process.
  • If a field is specified in both data and stringData, the value from stringData is used.
  • The keys of data and stringData must consist of alphanumeric characters, ‘-’, ‘_’ or ‘.’.
  • Newlines are not valid within these strings and must be omitted.
  • When using the base64 utility on Darwin/macOS users should avoid using the -b option to split long lines.
  • create a Secret from generators and then apply it to create the object on the Apiserver.
  • The generated Secrets name has a suffix appended by hashing the contents.
  • base64 --decode
  • Secrets can be mounted as data volumes or be exposed as environment variables to be used by a container in a pod.
  • Multiple pods can reference the same secret.
  • Each key in the secret data map becomes the filename under mountPath
  • each container needs its own volumeMounts block, but only one .spec.volumes is needed per secret
  • use .spec.volumes[].secret.items field to change target path of each key:
  • If .spec.volumes[].secret.items is used, only keys specified in items are projected. To consume all keys from the secret, all of them must be listed in the items field.
  • You can also specify the permission mode bits files part of a secret will have. If you don’t specify any, 0644 is used by default.
  • JSON spec doesn’t support octal notation, so use the value 256 for 0400 permissions.
  • Inside the container that mounts a secret volume, the secret keys appear as files and the secret values are base-64 decoded and stored inside these files.
  • Mounted Secrets are updated automatically
  • Kubelet is checking whether the mounted secret is fresh on every periodic sync.
  • cache propagation delay depends on the chosen cache type
  • A container using a Secret as a subPath volume mount will not receive Secret updates.
  • env:
      - name: SECRET_USERNAME
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: username
  • Inside a container that consumes a secret in an environment variables, the secret keys appear as normal environment variables containing the base-64 decoded values of the secret data.
  • An imagePullSecret is a way to pass a secret that contains a Docker (or other) image registry password to the Kubelet so it can pull a private image on behalf of your Pod.
  • a secret needs to be created before any pods that depend on it.
  • Secret API objects reside in a namespace. They can only be referenced by pods in that same namespace.
  • Individual secrets are limited to 1MiB in size.
  • Kubelet only supports use of secrets for Pods it gets from the API server.
  • Secrets must be created before they are consumed in pods as environment variables unless they are marked as optional.
  • References to Secrets that do not exist will prevent the pod from starting.
  • References via secretKeyRef to keys that do not exist in a named Secret will prevent the pod from starting.
  • Once a pod is scheduled, the kubelet will try to fetch the secret value.
  • Think carefully before sending your own ssh keys: other users of the cluster may have access to the secret.
  • volumes:
      - name: secret-volume
        secret:
          secretName: ssh-key-secret
  • Special characters such as $, \*, and ! require escaping. If the password you are using has special characters, you need to escape them using the \\ character.
  • You do not need to escape special characters in passwords from files
  • make that key begin with a dot
  • Dotfiles in secret volume
  • .secret-file
  • a frontend container which handles user interaction and business logic, but which cannot see the private key;
  • a signer container that can see the private key, and responds to simple signing requests from the frontend
  • When deploying applications that interact with the secrets API, access should be limited using authorization policies such as RBAC
  • watch and list requests for secrets within a namespace are extremely powerful capabilities and should be avoided
  • watch and list all secrets in a cluster should be reserved for only the most privileged, system-level components.
  • additional precautions with secret objects, such as avoiding writing them to disk where possible.
  • A secret is only sent to a node if a pod on that node requires it
  • only the secrets that a pod requests are potentially visible within its containers
  • each container in a pod has to request the secret volume in its volumeMounts for it to be visible within the container.
  • In the API server, secret data is stored in etcd.
  • limit access to etcd to admin users
  • Base64 encoding is not an encryption method and is considered the same as plain text.
  • A user who can create a pod that uses a secret can also see the value of that secret.
  • anyone with root on any node can read any secret from the apiserver, by impersonating the kubelet.
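
  Pulling several of the highlights above together, here is a minimal sketch of a Secret consumed both as environment variables and as a mounted volume. Names such as mysecret, SECRET_USERNAME, and secret-demo are illustrative, not from the source article:

      apiVersion: v1
      kind: Secret
      metadata:
        name: mysecret              # must exist before any pod that depends on it
      type: Opaque
      data:
        username: YWRtaW4=          # base64("admin") - encoding, not encryption
      ---
      apiVersion: v1
      kind: Pod
      metadata:
        name: secret-demo
      spec:
        containers:
          - name: app
            image: nginx
            env:
              - name: SECRET_USERNAME          # appears as a normal env var,
                valueFrom:                     # base-64 decoded
                  secretKeyRef:
                    name: mysecret
                    key: username
            volumeMounts:
              - name: secret-volume            # each container must request the
                mountPath: /etc/secret         # mount for it to be visible
                readOnly: true
        volumes:
          - name: secret-volume
            secret:
              secretName: mysecret
              defaultMode: 256                 # 0400; JSON has no octal notation
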
張 旭

Kubernetes - Traefik - 0 views

  • allow fine-grained control of Kubernetes resources and API.
  • authorize Traefik to use the Kubernetes API.
  • namespace-specific RoleBindings
  • a single, global ClusterRoleBinding.
  • RoleBindings per namespace make it possible to restrict granted permissions to only those namespaces that Traefik is watching, thereby following the least-privileges principle.
  • The scalability can be much better when using a Deployment
  • you will have a Single-Pod-per-Node model when using a DaemonSet.
  • DaemonSets automatically scale to new nodes, when the nodes join the cluster
  • DaemonSets ensure that only one replica of pods run on any single node.
  • DaemonSets can be run with the NET_BIND_SERVICE capability, which will allow it to bind to port 80/443/etc on each host. This will allow bypassing the kube-proxy, and reduce traffic hops.
  • start with the Daemonset
  • The Deployment has easier up and down scaling possibilities.
  • The DaemonSet automatically scales to all nodes that meet a specific selector and guarantees to fill nodes one at a time.
  • Rolling updates are fully supported from Kubernetes 1.7 for DaemonSets as well.
  • provide the TLS certificate via a Kubernetes secret in the same namespace as the ingress.
  • If there are any errors while loading the TLS section of an ingress, the whole ingress will be skipped.
  • create secret generic
  • Name-based Routing
  • Path-based Routing
  • Traefik will merge multiple Ingress definitions for the same host/path pair into one definition.
  • specify priority for ingress routes
  • traefik.frontend.priority
  • When specifying an ExternalName, Traefik will forward requests to the given host accordingly and use HTTPS when the Service port matches 443.
  • By default Traefik will pass the incoming Host header to the upstream resource.
  • traefik.frontend.passHostHeader: "false"
  • type: ExternalName
  • By default, Traefik processes every Ingress objects it observes.
  • It is also possible to set the ingressClass option in Traefik to a particular value. Traefik will only process matching Ingress objects.
  • It is possible to split Ingress traffic in a fine-grained manner between multiple deployments using service weights.
  • use case is canary releases where a deployment representing a newer release is to receive an initially small but ever-increasing fraction of the requests over time.
  • The canary weights annotation, reconstructed (see the sketch after this list):
      annotations:
        traefik.ingress.kubernetes.io/service-weights: |
          my-app: 99%
          my-app-canary: 1%
  • Over time, the ratio may slowly shift towards the canary deployment until it is deemed to replace the previous main application, in steps such as 5%/95%, 10%/90%, 50%/50%, and finally 100%/0%.
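
  A hedged sketch of the canary pattern above, assuming two Services named my-app and my-app-canary already exist in the same namespace (the annotation names are the Traefik 1.x ones quoted above; the hostname is illustrative):

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: my-app
        annotations:
          kubernetes.io/ingress.class: traefik        # match Traefik's ingressClass
          traefik.ingress.kubernetes.io/service-weights: |
            my-app: 99%
            my-app-canary: 1%
      spec:
        rules:
          - host: app.example.com                     # name-based routing
            http:
              paths:
                - path: /                             # path-based routing
                  backend:
                    serviceName: my-app
                    servicePort: 80
                - path: /
                  backend:
                    serviceName: my-app-canary
                    servicePort: 80
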
張 旭

Running rootless Podman as a non-root user | Enable Sysadmin - 0 views

  • By default, rootless Podman runs as root within the container.
  • the processes in the container have the default list of namespaced capabilities which allow the processes to act like root inside of the user namespace
  • the directory is owned by UID 26, but UID 26 is not mapped into the container and is not the same UID that Postgres runs with while in the container.
  • Podman launches a container inside of the user namespace, which is mapped with the range of UIDs defined for the user in /etc/subuid and /etc/subgid
  • The easy solution to this problem is to chown the html directory to match the UID that Postgres runs with inside of the container (see the shell sketch after this list).
  • use the podman unshare command, which drops you into the same user namespace that rootless Podman uses
  • This setup also means that the processes inside of the container are running as the user’s UID. If the container process escaped the container, the process would have full access to files in your home directory based on UID separation.
  • SELinux would still block the access, but I have heard that some people disable SELinux.
  • If you run the processes within the container as a different non-root UID, however, then those processes will run as that UID. If they escape the container, they would only have world access to content in your home directory.
  • run a podman unshare command, or set up the directories so their group ownership is your UID (root inside of the container).
  • running containers as non-root should always be your top priority for security reasons.
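
  A short shell sketch of the chown fix described above. UID 26 and the html directory come from the article's Postgres example; the paths are illustrative:

      # Enter the same user namespace that rootless Podman uses;
      # "root" inside it maps to your own UID on the host.
      $ podman unshare chown 26:26 ./html

      # Verify: inside the namespace the directory is now owned by UID 26,
      # the UID Postgres runs with inside the container.
      $ podman unshare ls -ldn ./html
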
張 旭

Kubernetes 架构浅析 - 0 views

  • Turn the Loadbalancer into a Smart Loadbalancer: through a service-discovery mechanism, application instances automatically register themselves in a configuration center (etcd/zookeeper) when they start or are destroyed, and the Loadbalancer watches the application configuration and updates its own configuration automatically.
  • MySQL is planned to move to domain-name access instead of IPs. To avoid delays when DNS records change, a private DNS needs to be set up on the internal network.
  • Work with the service-discovery mechanism to update DNS automatically.
  • Implemented by adding an extra proxy layer.
  • Applications are allowed to customize their operating-system and base-library dependencies.
  • Dependencies on disk paths and ports are injected dynamically through Docker runtime parameters.
  • Docker's custom variables and parameters need a standardized configuration file.
  • Each server node needs an agent to carry out concrete operations and to monitor the applications on that node.
  • Interfaces and tools to operate all of this are also needed.
  • Decoupling application processes from resources (including CPU, memory, disk, and network).
  • Decoupling service dependency relationships.
  • The scheduler in Kubernetes is a plugin and can be replaced by other implementations (such as Mesos).
  • Most interfaces simply read and write data in etcd.
  • etcd serves as the configuration center and storage service.
  • kubelet mainly covers container management, image management, volume management, and so on. kubelet is also a REST service: pod-related command operations are implemented by calling its interfaces.
  • kube-proxy mainly implements Kubernetes' service mechanism, providing part of the SDN functionality and the smart LoadBalancer inside the cluster.
  • Pods: Kubernetes abstracts a concrete application instance as a pod. Each pod first starts a google_containers/pause Docker container, then starts the application's real Docker containers. This makes it possible to package multiple Docker containers into one pod, sharing a network address (see the sketch after this list).
  • Replication Controller: controls the number of replicas of a pod.
  • Services: a service is an abstraction over a group of pods; thanks to kube-proxy's smart LoadBalancer mechanism, destroying or migrating pods does not affect the service's functionality or its upstream callers.
  • Namespace: a namespace in Kubernetes is mainly used to avoid name conflicts between pods and services. Pod and service names must be unique within the same namespace.
  • In Kubernetes' philosophy, pods can communicate with each other directly.
  • Users need to pick their own networking solution: Flannel, OpenVSwitch, Weave, etc.
  • Hypernetes is a Kubernetes distribution that implements multi-tenancy.
  • If the operations tooling cannot keep up and services are split too finely, it is easy to end up with an ancient, rarely updated service deployed in some corner of a server; eventually everyone forgets it exists, it gets lost during a server migration, and the loss is only discovered when users complain.
  • A microservice governance framework on top of Kubernetes can solve microservices' RPC, monitoring, and disaster-recovery problems in one stroke.
  • Within a single pod's definition, multiple containers have no priority, and their startup order is not guaranteed.
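
  A minimal sketch of the pause-container idea: two containers declared in one pod share a single network address, and there is no startup ordering between them (all names are illustrative):

      apiVersion: v1
      kind: Pod
      metadata:
        name: web-with-sidecar
      spec:
        containers:
          - name: app                # both containers join the network namespace
            image: nginx             # held open by the hidden pause container
            ports:
              - containerPort: 80
          - name: probe              # reaches the app via localhost; may start
            image: busybox           # before or after it - order is not guaranteed
            command: ["sh", "-c",
                      "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 60; done"]
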
張 旭

Pods - Kubernetes - 0 views

  • Pods are the smallest deployable units of computing
  • A Pod (as in a pod of whales or pea pod) is a group of one or more containers (a lightweight and portable executable image that contains software and all of its dependencies), such as Docker containers, with shared storage/network, and a specification for how to run the containers.
  • A Pod’s contents are always co-located and co-scheduled, and run in a shared context.
  • A Pod models an application-specific “logical host”
  • application containers which are relatively tightly coupled
  • being executed on the same physical or virtual machine would mean being executed on the same logical host.
  • The shared context of a Pod is a set of Linux namespaces, cgroups, and potentially other facets of isolation
  • Containers within a Pod share an IP address and port space, and can find each other via localhost
  • Containers in different Pods have distinct IP addresses and can not communicate by IPC without special configuration. These containers usually communicate with each other via Pod IP addresses.
  • Applications within a Pod also have access to shared volumes (a directory containing data, accessible to the containers in a pod), which are defined as part of a Pod and are made available to be mounted into each application’s filesystem.
  • a Pod is modelled as a group of Docker containers with shared namespaces and shared filesystem volumes
    • 張 旭
       
      Like the same bunch of containers declared together in a docker-compose file?
  • Pods are considered to be relatively ephemeral (rather than durable) entities.
  • Pods are created, assigned a unique ID (UID), and scheduled to nodes where they remain until termination (according to restart policy) or deletion.
  • it can be replaced by an identical Pod
  • When something is said to have the same lifetime as a Pod, such as a volume, that means that it exists as long as that Pod (with that UID) exists.
  • uses a persistent volume for shared storage between the containers
  • Pods serve as unit of deployment, horizontal scaling, and replication
  • The applications in a Pod all use the same network namespace (same IP and port space), and can thus “find” each other and communicate using localhost
  • flat shared networking space
  • Containers within the Pod see the system hostname as being the same as the configured name for the Pod.
  • Volumes enable data to survive container restarts and to be shared among the applications within the Pod (see the sketch after this list).
  • Individual Pods are not intended to run multiple instances of the same application
  • The individual containers may be versioned, rebuilt and redeployed independently.
  • Pods aren’t intended to be treated as durable entities.
  • Controllers like StatefulSet can also provide support to stateful Pods.
  • When a user requests deletion of a Pod, the system records the intended grace period before the Pod is allowed to be forcefully killed, and a TERM signal is sent to the main process in each container.
  • Once the grace period has expired, the KILL signal is sent to those processes, and the Pod is then deleted from the API server.
  • grace period
  • Pod is removed from endpoints list for service, and are no longer considered part of the set of running Pods for replication controllers.
  • When the grace period expires, any processes still running in the Pod are killed with SIGKILL.
  • By default, all deletes are graceful within 30 seconds.
  • You must specify an additional flag --force along with --grace-period=0 in order to perform force deletions.
  • Force deletion of a Pod is defined as deletion of a Pod from the cluster state and etcd immediately.
  • StatefulSet Pods
  • Processes within the container get almost the same privileges that are available to processes outside a container.
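
  A hedged sketch combining the shared-volume and graceful-termination highlights above (all names are illustrative):

      apiVersion: v1
      kind: Pod
      metadata:
        name: shared-volume-demo
      spec:
        terminationGracePeriodSeconds: 30   # TERM first; KILL once this expires
        volumes:
          - name: shared-data               # lives as long as this Pod (UID) does
            emptyDir: {}
        containers:
          - name: producer
            image: busybox
            command: ["sh", "-c", "while true; do date >> /data/out.log; sleep 5; done"]
            volumeMounts:
              - name: shared-data
                mountPath: /data
          - name: consumer                  # same IP and port space as producer
            image: busybox
            command: ["sh", "-c", "tail -F /data/out.log"]
            volumeMounts:
              - name: shared-data
                mountPath: /data

  Force deletion, which removes the Pod from cluster state and etcd immediately, would be: kubectl delete pod shared-volume-demo --grace-period=0 --force
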
張 旭

Helm | - 0 views

  • A chart is a collection of files that describe a related set of Kubernetes resources.
  • A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
  • Charts are created as files laid out in a particular directory tree, then they can be packaged into versioned archives to be deployed.
  • A chart is organized as a collection of files inside of a directory.
  • values.yaml # The default configuration values for this chart
  • charts/ # A directory containing any charts upon which this chart depends.
  • templates/ # A directory of templates that, when combined with values, # will generate valid Kubernetes manifest files.
  • version: A SemVer 2 version (required)
  • apiVersion: The chart API version, always "v1" (required)
  • Every chart must have a version number. A version must follow the SemVer 2 standard.
  • non-SemVer names are explicitly disallowed by the system.
  • When generating a package, the helm package command will use the version that it finds in the Chart.yaml as a token in the package name.
  • the appVersion field is not related to the version field. It is a way of specifying the version of the application.
  • appVersion: The version of the app that this contains (optional). This needn't be SemVer.
  • If the latest version of a chart in the repository is marked as deprecated, then the chart as a whole is considered to be deprecated.
  • deprecated: Whether this chart is deprecated (optional, boolean)
  • one chart may depend on any number of other charts.
  • dependencies can be dynamically linked through the requirements.yaml file or brought into the charts/ directory and managed manually.
  • the preferred method of declaring dependencies is by using a requirements.yaml file inside of your chart.
  • A requirements.yaml file is a simple file for listing your dependencies (see the requirements.yaml sketch after this list).
  • The repository field is the full URL to the chart repository.
  • you must also use helm repo add to add that repo locally.
  • helm dependency update and it will use your dependency file to download all the specified charts into your charts/ directory for you.
  • When helm dependency update retrieves charts, it will store them as chart archives in the charts/ directory.
  • Managing charts with requirements.yaml is a good way to easily keep charts updated, and also share requirements information throughout a team.
  • All charts are loaded by default.
  • The condition field holds one or more YAML paths (delimited by commas). If this path exists in the top parent’s values and resolves to a boolean value, the chart will be enabled or disabled based on that boolean value.
  • The tags field is a YAML list of labels to associate with this chart.
  • all charts with tags can be enabled or disabled by specifying the tag and a boolean value.
  • The --set parameter can be used as usual to alter tag and condition values.
  • Conditions (when set in values) always override tags.
  • The first condition path that exists wins and subsequent ones for that chart are ignored.
  • The keys containing the values to be imported can be specified in the parent chart’s requirements.yaml file using a YAML list. Each item in the list is a key which is imported from the child chart’s exports field.
  • specifying the key data in our import list, Helm looks in the exports field of the child chart for data key and imports its contents.
  • the parent key data is not contained in the parent’s final values. If you need to specify the parent key, use the ‘child-parent’ format.
  • To access values that are not contained in the exports key of the child chart’s values, you will need to specify the source key of the values to be imported (child) and the destination path in the parent chart’s values (parent).
  • To drop a dependency into your charts/ directory, use the helm fetch command
  • A dependency can be either a chart archive (foo-1.2.3.tgz) or an unpacked chart directory.
  • a file name cannot start with _ or . (such files are ignored by the chart loader).
  • a single release is created with all the objects for the chart and its dependencies.
  • Helm Chart templates are written in the Go template language, with the addition of 50 or so add-on template functions from the Sprig library and a few other specialized functions
  • When Helm renders the charts, it will pass every file in that directory through the template engine.
  • Chart developers may supply a file called values.yaml inside of a chart. This file can contain default values.
  • Chart users may supply a YAML file that contains values. This can be provided on the command line with helm install.
  • When a user supplies custom values, these values will override the values in the chart’s values.yaml file.
  • Template files follow the standard conventions for writing Go templates
  • {{default "minio" .Values.storage}}
  • Values that are supplied via a values.yaml file (or via the --set flag) are accessible from the .Values object in a template.
  • pre-defined, are available to every template, and cannot be overridden
  • the names are case sensitive
  • Release.Name: The name of the release (not the chart)
  • Release.IsUpgrade: This is set to true if the current operation is an upgrade or rollback.
  • Release.Revision: The revision number. It begins at 1, and increments with each helm upgrade
  • Chart: The contents of the Chart.yaml
  • Files: A map-like object containing all non-special files in the chart.
  • Files can be accessed using {{index .Files "file.name"}} or using the {{.Files.Get name}} or {{.Files.GetString name}} functions.
  • .helmignore
  • access the contents of the file as []byte using {{.Files.GetBytes}}
  • Any unknown Chart.yaml fields will be dropped
  • Chart.yaml cannot be used to pass arbitrarily structured data into the template.
  • A values file is formatted in YAML.
  • A chart may include a default values.yaml file
  • be merged into the default values file.
  • The default values file included inside of a chart must be named values.yaml
  • accessible inside of templates using the .Values object
  • Values files can declare values for the top-level chart, as well as for any of the charts that are included in that chart’s charts/ directory.
  • Charts at a higher level have access to all of the variables defined beneath.
  • lower level charts cannot access things in parent charts
  • Values are namespaced, but namespaces are pruned.
  • the scope of the values has been reduced and the namespace prefix removed
  • Helm supports special “global” value.
  • a way of sharing one top-level variable with all subcharts, which is useful for things like setting metadata properties like labels.
  • If a subchart declares a global variable, that global will be passed downward (to the subchart’s subcharts), but not upward to the parent chart.
  • global variables of parent charts take precedence over the global variables from subcharts.
  • helm lint
  • A chart repository is an HTTP server that houses one or more packaged charts
  • Any HTTP server that can serve YAML files and tar files and can answer GET requests can be used as a repository server.
  • Helm does not provide tools for uploading charts to remote repository servers.
  • the only way to add a chart to $HELM_HOME/starters is to manually copy it there.
  • Helm provides a hook mechanism to allow chart developers to intervene at certain points in a release’s life cycle.
  • Execute a Job to back up a database before installing a new chart, and then execute a second job after the upgrade in order to restore data.
  • Hooks are declared as an annotation in the metadata section of a manifest
  • Hooks work like regular templates, but they have special annotations
  • pre-install
  • post-install: Executes after all resources are loaded into Kubernetes
  • pre-delete
  • post-delete: Executes on a deletion request after all of the release’s resources have been deleted.
  • pre-upgrade
  • post-upgrade
  • pre-rollback
  • post-rollback: Executes on a rollback request after all resources have been modified.
  • crd-install
  • test-success: Executes when running helm test and expects the pod to return successfully (return code == 0).
  • test-failure: Executes when running helm test and expects the pod to fail (return code != 0).
  • Hooks allow you, the chart developer, an opportunity to perform operations at strategic points in a release lifecycle
  • Tiller then loads the hook with the lowest weight first (negative to positive)
  • Tiller returns the release name (and other data) to the client
  • If the resource is of kind Job, Tiller will wait until the job successfully runs to completion.
  • if the job fails, the release will fail. This is a blocking operation, so the Helm client will pause while the Job is run.
  • If they have hook weights (see below), they are executed in weighted order. Otherwise, ordering is not guaranteed.
  • good practice to add a hook weight, and set it to 0 if weight is not important.
  • The resources that a hook creates are not tracked or managed as part of the release.
  • leave the hook resource alone.
  • To destroy such resources, you need to either write code to perform this operation in a pre-delete or post-delete hook or add "helm.sh/hook-delete-policy" annotation to the hook template file.
  • Hooks are just Kubernetes manifest files with special annotations in the metadata section
  • One resource can implement multiple hooks
  • no limit to the number of different resources that may implement a given hook.
  • When subcharts declare hooks, those are also evaluated. There is no way for a top-level chart to disable the hooks declared by subcharts.
  • Hook weights can be positive or negative numbers but must be represented as strings.
  • sort those hooks in ascending order.
  • Hook deletion policies
  • "before-hook-creation" specifies Tiller should delete the previous hook before the new hook is launched.
  • By default Tiller will wait for 60 seconds for a deleted hook to no longer exist in the API server before timing out.
  • Custom Resource Definitions (CRDs) are a special kind in Kubernetes.
  • The crd-install hook is executed very early during an installation, before the rest of the manifests are verified.
  • A common reason why the hook resource might already exist is that it was not deleted following use on a previous install/upgrade.
  • Helm uses Go templates for templating your resource files.
  • two special template functions: include and required
  • include function allows you to bring in another template, and then pass the results to other template functions.
  • The required function allows you to declare a particular values entry as required for template rendering.
  • If the value is empty, the template rendering will fail with a user submitted error message.
  • When you are working with string data, you are always safer quoting the strings than leaving them as bare words
  • Quote Strings, Don’t Quote Integers
  • when working with integers do not quote the values
  • env variable values which are expected to be strings
  • to include a template, and then perform an operation on that template’s output, Helm has a special include function
  • The above includes a template called toYaml, passes it $value, and then passes the output of that template to the nindent function.
  • Go provides a way for setting template options to control behavior when a map is indexed with a key that’s not present in the map
  • The required function gives developers the ability to declare a value entry as required for template rendering.
  • The tpl function allows developers to evaluate strings as templates inside a template.
  • Rendering a external configuration file
  • (.Files.Get "conf/app.conf")
  • Image pull secrets are essentially a combination of registry, username, and password.
  • Automatically Roll Deployments When ConfigMaps or Secrets change
  • configmaps or secrets are injected as configuration files in containers
  • a restart may be required should those be updated with a subsequent helm upgrade
  • The sha256sum function can be used to ensure a deployment’s annotation section is updated if another file changes (see the checksum sketch after this list)
  • checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
  • helm upgrade --recreate-pods
  • "helm.sh/resource-policy": keep
  • resources that should not be deleted when Helm runs a helm delete
  • this resource becomes orphaned. Helm will no longer manage it in any way.
  • create some reusable parts in your chart
  • In the templates/ directory, any file that begins with an underscore(_) is not expected to output a Kubernetes manifest file.
  • by convention, helper templates and partials are placed in a _helpers.tpl file.
  • The current best practice for composing a complex application from discrete parts is to create a top-level umbrella chart that exposes the global configurations, and then use the charts/ subdirectory to embed each of the components.
  • SAP’s Converged charts: These charts install SAP Converged Cloud a full OpenStack IaaS on Kubernetes. All of the charts are collected together in one GitHub repository, except for a few submodules.
  • Deis’s Workflow: This chart exposes the entire Deis PaaS system with one chart. But it’s different from the SAP chart in that this umbrella chart is built from each component, and each component is tracked in a different Git repository.
  • YAML is a superset of JSON
  • any valid JSON structure ought to be valid in YAML.
  • As a best practice, templates should follow a YAML-like syntax unless the JSON syntax substantially reduces the risk of a formatting issue.
  • There are functions in Helm that allow you to generate random data, cryptographic keys, and so on.
  • a chart repository is a location where packaged charts can be stored and shared.
  • A chart repository is an HTTP server that houses an index.yaml file and optionally some packaged charts.
  • Because a chart repository can be any HTTP server that can serve YAML and tar files and can answer GET requests, you have a plethora of options when it comes down to hosting your own chart repository.
  • It is not required that a chart package be located on the same server as the index.yaml file.
  • A valid chart repository must have an index file. The index file contains information about each chart in the chart repository.
  • The Helm project provides an open-source Helm repository server called ChartMuseum that you can host yourself.
  • $ helm repo index fantastic-charts --url https://fantastic-charts.storage.googleapis.com
  • A repository will not be added if it does not contain a valid index.yaml
  • add the repository to their helm client via the helm repo add [NAME] [URL] command with any name they would like to use to reference the repository.
  • Helm has provenance tools which help chart users verify the integrity and origin of a package.
  • Integrity is established by comparing a chart to a provenance record
  • The provenance file contains a chart’s YAML file plus several pieces of verification information
  • Chart repositories serve as a centralized collection of Helm charts.
  • Chart repositories must make it possible to serve provenance files over HTTP via a specific request, and must make them available at the same URI path as the chart.
  • We don’t want to be “the certificate authority” for all chart signers. Instead, we strongly favor a decentralized model, which is part of the reason we chose OpenPGP as our foundational technology.
  • The Keybase platform provides a public centralized repository for trust information.
  • A chart contains a number of Kubernetes resources and components that work together.
  • A test in a helm chart lives under the templates/ directory and is a pod definition that specifies a container with a given command to run.
  • The pod definition must contain one of the helm test hook annotations: helm.sh/hook: test-success or helm.sh/hook: test-failure
  • helm test
  • nest your test suite under a tests/ directory like <chart-name>/templates/tests/
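
  Two short sketches for the highlights above. First, a hypothetical requirements.yaml showing dynamically linked dependencies gated by condition and tags (the chart names and repository URLs are made up):

      dependencies:
        - name: apache
          version: 1.2.3
          repository: http://example.com/charts   # must also be helm repo add-ed
          condition: apache.enabled                # conditions always override tags
          tags:
            - frontend
        - name: mysql
          version: 3.2.1
          repository: http://another.example.com/charts
          tags:
            - backend

  Second, the checksum trick quoted above in context: annotating a Deployment's pod template so pods roll whenever the ConfigMap rendered from configmap.yaml changes:

      kind: Deployment
      spec:
        template:
          metadata:
            annotations:
              checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
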
張 旭

探索 Docker bridge 的正确姿势,亲测有效! | DaoCloud - 1 views

  • Docker bridge and Linux bridge look identical at first glance, but on closer inspection they are far apart.
  • In Linux bridge mode, the Linux kernel creates a virtual bridge to enable communication between the host's network interface and virtual network interfaces.
  • A Linux bridge behaves like a virtual switch.
  • The Docker daemon creates a virtual bridge named docker0, used to connect the host with containers, or to connect containers with each other.
  • The veth pair technique guarantees that whichever veth end receives a network packet, the packet is unconditionally passed to the other end.
  • In bridge mode, the Docker daemon attaches veth0 to the docker0 bridge, ensuring that packets from the host can reach veth0.
  • veth1 is added to the network namespace the Docker container belongs to [note 2], ensuring that packets the host sends to veth0 are immediately received by veth1.
  • NATP includes two translation modes: SNAT and DNAT.
  • Destination NAT (DNAT): rewrites a packet's destination address.
  • A container's IP and ports are invisible to the outside world.
  • The packet's destination address is the host's IP and port.
  • The packet is sent to the veth0 interface attached to the docker0 bridge; veth0 then passes it to the veth1 interface inside the container, and the container receives the packet and responds.
  • Source NAT (SNAT): rewrites a packet's source address.
  • The docker0 bridge on the host sees that a packet's destination address is an external IP and port, so it forwards the packet to eth0 and sends it out through eth0. Because an SNAT rule exists, the packet's source address is rewritten to the host's IP and port (see the sketch after this list).
  • Docker containers are invisible to the outside world.
  • A veth pair is a mechanism for communication between different network namespaces: it sends data from one network namespace to the veth in the other network namespace.
  • A network namespace isolates network resources (/proc/net, IP addresses, network interfaces, routes, and so on).
  • NAT stands for Network Address Translation.
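
  A sketch of how the DNAT/SNAT rules described above can be observed on a Docker host (the exact output and the 172.17.x.x addresses vary by host; these are typical defaults):

      # Publish container port 80 on host port 8080
      $ docker run -d -p 8080:80 nginx

      # DNAT: traffic to the host's :8080 is rewritten to the container's IP
      $ sudo iptables -t nat -L DOCKER -n
      # ... DNAT tcp dpt:8080 to:172.17.0.2:80

      # SNAT (MASQUERADE): outbound container traffic takes the host's address
      $ sudo iptables -t nat -L POSTROUTING -n
      # ... MASQUERADE all -- 172.17.0.0/16 0.0.0.0/0
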
張 旭

Deploying Rails Apps, Part 6: Writing Capistrano Tasks - Vladi Gleba - 0 views

  • we can write our own tasks to help us automate various things.
  • organizing all of the tasks here under a namespace
  • upload a file from our local computer.
  • learn about is SSHKit and the various methods it provides
  • SSHKit was actually developed and released with Capistrano 3, and it’s basically a lower-level tool that provides methods for connecting and interacting with remote servers
  • on(): specifies the server to run on
  • within(): specifies the directory path to run in
  • with(): specifies the environment variables to run with
  • run on the application server
  • within the path specified
  • with certain environment variables set
  • execute(): the workhorse that runs the commands on your server
  • upload(): uploads a file from your local computer to your remote server
  • capture(): executes a command and returns its output as a string
    • 張 旭
       
      capture runs on the remote server
  • upload() has the bang symbol (!) because that’s how it’s defined in SSHKit, and it’s just a convention letting us know that the method will block until it finishes.
  • But in order to ensure rake runs with the proper environment variables set, we have to use rake as a symbol and pass db:seed as a string (see the Ruby sketch after this list)
  • This format will also be necessary whenever you’re running any other Rails-specific commands that rely on certain environment variables being set
  • I recommend you take a look at SSHKit’s example page to learn more
  • make sure we pushed all our local changes to the remote master branch
  • run this task before Capistrano runs its own deploy task
  • actually creates three separate tasks
  • I created a namespace called deploy to contain these tasks since that’s what they’re related to.
  • we’re using the callbacks inside a namespace to make sure Capistrano knows which tasks the callbacks are referencing.
  • custom recipe (a Capistrano term meaning a series of tasks)
  • /shared: holds files and directories that persist throughout deploys
  • When you run cap production deploy, you’re actually calling a Capistrano task called deploy, which then sequentially invokes other tasks
  • your favorite browser (I hope it’s not Internet Explorer)
  • Deployment is hard and takes a while to sink in.
  • the most important thing is to not get discouraged
  • I didn’t want other people going through the same thing
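
  A hedged Ruby sketch of the SSHKit methods quoted above, written as a custom Capistrano task; the namespace, file path, and task names are illustrative, not from the original post:

      # lib/capistrano/tasks/deploy_extras.rake
      namespace :deploy do
        desc "Seed the database on the application server"
        task :seed do
          on roles(:app) do                         # on(): which server to run on
            within release_path do                  # within(): directory to run in
              with rails_env: fetch(:rails_env) do  # with(): env vars to run with
                execute :rake, "db:seed"            # rake as symbol, task as string
              end
            end
          end
        end

        # Runs before Capistrano's own deploy task starts
        before :starting, :log_remote_head do
          on roles(:app) do
            # capture() runs the command on the remote server and
            # returns its output as a string
            head = capture(:git, "ls-remote", repo_url, "master").split.first
            info "Remote master is at #{head}"
          end
        end
      end
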
張 旭

Helm | Getting Started - 0 views

  • The templates/ directory is for template files. When Helm evaluates a chart, it will send all of the files in the templates/ directory through the template rendering engine. It then collects the results of those templates and sends them on to Kubernetes.
  • The charts/ directory may contain other charts (which we call subcharts).
  • we recommend using the suffix .yaml for YAML files and .tpl for helpers.
  • The helm get manifest command takes a release name (full-coral) and prints out all of the Kubernetes resources that were uploaded to the server.
  • Each file begins with --- to indicate the start of a YAML document, and then is followed by an automatically generated comment line that tells us what template file generated this YAML document.
  • name: field is limited to 63 characters because of limitations to the DNS system.
  • The template directive {{ .Release.Name }} injects the release name into the template. The values that are passed into a template can be thought of as namespaced objects, where a dot (.) separates each namespaced element.
  • The leading dot before Release indicates that we start with the top-most namespace for this scope
  • helm install --debug --dry-run goodly-guppy ./mychart will render the templates; instead of installing the chart, it returns the rendered template to you (see the sketch after this list)
  • Using --dry-run will make it easier to test your code, but it won't ensure that Kubernetes itself will accept the templates you generate.
  • It's best not to assume that your chart will install just because --dry-run works.
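
  A minimal sketch of the template directive and dry-run workflow described above; the ConfigMap content is illustrative, while goodly-guppy and mychart are the doc's own example names:

      # mychart/templates/configmap.yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        # the leading dot starts at the top-most namespace for this scope;
        # the release name is injected at render time
        name: {{ .Release.Name }}-configmap
      data:
        myvalue: "Hello World"

  Rendering without installing (remember: a successful --dry-run does not guarantee Kubernetes will accept the manifests):

      $ helm install --debug --dry-run goodly-guppy ./mychart
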
張 旭

Production environment | Kubernetes - 0 views

  • to promote an existing cluster for production use
  • Separating the control plane from the worker nodes.
  • Having enough worker nodes available
  • You can use role-based access control (RBAC) and other security mechanisms to make sure that users and workloads can get access to the resources they need, while keeping workloads, and the cluster itself, secure. You can set limits on the resources that users and workloads can access by managing policies and container resources.
  • you need to plan how to scale to relieve increased pressure from more requests to the control plane and worker nodes or scale down to reduce unused resources.
  • Managed control plane: Let the provider manage the scale and availability of the cluster's control plane, as well as handle patches and upgrades.
  • The simplest Kubernetes cluster has the entire control plane and worker node services running on the same machine.
  • You can deploy a control plane using tools such as kubeadm, kops, and kubespray.
  • Secure communications between control plane services are implemented using certificates.
  • Certificates are automatically generated during deployment or you can generate them using your own certificate authority.
  • Separate and backup etcd service: The etcd services can either run on the same machines as other control plane services or run on separate machines
  • Create multiple control plane systems: For high availability, the control plane should not be limited to a single machine
  • Some deployment tools set up the Raft consensus algorithm to do leader election of Kubernetes services. If the primary goes away, another service elects itself and takes over.
  • Groups of zones are referred to as regions.
  • if you installed with kubeadm, there are instructions to help you with Certificate Management and Upgrading kubeadm clusters.
  • Production-quality workloads need to be resilient and anything they rely on needs to be resilient (such as CoreDNS).
  • Add nodes to the cluster: If you are managing your own cluster you can add nodes by setting up your own machines and either adding them manually or having them register themselves to the cluster’s apiserver.
  • Set up node health checks: For important workloads, you want to make sure that the nodes and pods running on those nodes are healthy.
  • Authentication: The apiserver can authenticate users using client certificates, bearer tokens, an authenticating proxy, or HTTP basic auth.
  • Authorization: When you set out to authorize your regular users, you will probably choose between RBAC and ABAC authorization.
  • Role-based access control (RBAC): Lets you assign access to your cluster by allowing specific sets of permissions to authenticated users. Permissions can be assigned for a specific namespace (Role) or across the entire cluster (ClusterRole).
  • Attribute-based access control (ABAC): Lets you create policies based on resource attributes in the cluster and will allow or deny access based on those attributes.
  • Set limits on workload resources
  • Set namespace limits: Set per-namespace quotas on things like memory and CPU (see the ResourceQuota sketch after this list)
  • Prepare for DNS demand: If you expect workloads to massively scale up, your DNS service must be ready to scale up as well.
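
  One concrete way to "set namespace limits" as suggested above is a per-namespace ResourceQuota; the numbers and the team-a namespace below are illustrative:

      apiVersion: v1
      kind: ResourceQuota
      metadata:
        name: team-a-quota
        namespace: team-a        # quotas are scoped to a single namespace
      spec:
        hard:
          requests.cpu: "10"     # total CPU requests across all pods in team-a
          requests.memory: 20Gi
          limits.cpu: "20"
          limits.memory: 40Gi
          pods: "50"
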