
Larvata / Group items tagged: computer


張 旭

Swarm mode key concepts | Docker Documentation

  • The cluster management and orchestration features embedded in the Docker Engine are built using SwarmKit.
  • Docker engines participating in a cluster are running in swarm mode
  • A swarm is a cluster of Docker engines, or nodes, where you deploy services
  • When you run Docker without using swarm mode, you execute container commands.
  • When you run Docker in swarm mode, you orchestrate services.
  • You can run swarm services and standalone containers on the same Docker instances.
  • A node is an instance of the Docker engine participating in the swarm
  • You can run one or more nodes on a single physical computer or cloud server
  • To deploy your application to a swarm, you submit a service definition to a manager node.
  • Manager nodes also perform the orchestration and cluster management functions required to maintain the desired state of the swarm.
  • Manager nodes elect a single leader to conduct orchestration tasks.
  • Worker nodes receive and execute tasks dispatched from manager nodes.
  • A service is the definition of the tasks to execute on the worker nodes.
  • When you create a service, you specify which container image to use and which commands to execute inside running containers.
  • In the replicated services model, the swarm manager distributes a specific number of replica tasks among the nodes, based on the scale you set in the desired state.
  • For global services, the swarm runs one task for the service on every available node in the cluster (see the example commands after this list).
  • A task carries a Docker container and the commands to run inside the container
  • Manager nodes assign tasks to worker nodes according to the number of replicas set in the service scale.
  • Once a task is assigned to a node, it cannot move to another node
  • If you do not specify a port, the swarm manager assigns the service a port in the 30000-32767 range.
  • External components, such as cloud load balancers, can access the service on the PublishedPort of any node in the cluster whether or not the node is currently running the task for the service.
  • Swarm mode has an internal DNS component that automatically assigns each service in the swarm a DNS entry.
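
A minimal sketch of these concepts, assuming a swarm has already been initialized with docker swarm init; the service names, images, and port numbers below are placeholders for illustration, not taken from the annotated page:

    # Replicated service: the manager schedules 3 replica tasks across the nodes.
    docker service create --name web --replicas 3 \
      --publish published=8080,target=80 nginx

    # Global service: one task for the service on every available node.
    docker service create --name node-agent --mode global prom/node-exporter

    # If no published port is given, the manager assigns one from 30000-32767;
    # inspect the service to see which PublishedPort was chosen.
    docker service inspect web --format '{{json .Endpoint.Ports}}'

Because of the routing mesh, the PublishedPort is reachable on every node in the swarm, whether or not that node is currently running a task for the service.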
張 旭

Introducing the MinIO Operator and Operator Console

  • Object-storage-as-a-service is a game changer for IT.
  • provision multi-tenant object storage as a service.
  • have the skill set to create, deploy, tune, scale and manage modern, application oriented object storage using Kubernetes
  • MinIO is purpose-built to take full advantage of the Kubernetes architecture.
  • MinIO and Kubernetes work together to simplify infrastructure management, providing a way to manage object storage infrastructure within the Kubernetes toolset.  
  • The operator pattern extends Kubernetes' familiar declarative API model with custom resource definitions (CRDs) to perform common operations like resource orchestration, non-disruptive upgrades, and cluster expansion, and to maintain high availability.
  • The Operator uses the kubectl command set that the Kubernetes community is already familiar with and adds the kubectl minio plugin. The MinIO Operator and the MinIO kubectl plugin facilitate the deployment and management of MinIO Object Storage on Kubernetes, which is how multi-tenant object storage as a service is delivered (see the example commands after this list).
  • choosing a leader for a distributed application without an internal member election process
  • The Operator Console makes Kubernetes object storage easier still. In this graphical user interface, MinIO created something so simple that anyone in the organization can create, deploy and manage object storage as a service.
  • The primary unit of managing MinIO on Kubernetes is the tenant.
  • The MinIO Operator can allocate multiple tenants within the same Kubernetes cluster.
  • Each tenant, in turn, can have different capacity (e.g. a small 500GB tenant vs. a 100TB tenant), resources (1000m CPU and 4Gi RAM vs. 4000m CPU and 16Gi RAM) and servers (4 pods vs. 16 pods), as well as separate configurations regarding Identity Providers, Encryption and versions.
  • Each tenant is a cluster of server pools (independent sets of nodes with their own compute, network, and storage resources) that, while sharing the same physical infrastructure, are fully isolated from each other in their own namespaces.
  • Each tenant runs their own MinIO cluster, fully isolated from other tenants
  • Each tenant scales independently by federating clusters across geographies.
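
A rough sketch of the plugin-driven workflow described above, assuming the kubectl minio plugin is installed; the tenant name, namespace, and sizing flags are illustrative and may differ between Operator versions:

    # Install the MinIO Operator into the cluster.
    kubectl minio init

    # Create an isolated tenant in its own namespace:
    # 4 server pods, 4 volumes per server, 1Ti of total raw capacity.
    kubectl create namespace tenant-small
    kubectl minio tenant create tenant-small \
      --servers 4 --volumes 16 --capacity 1Ti \
      --namespace tenant-small

    # Port-forward the Operator Console to manage tenants graphically.
    kubectl minio proxy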
張 旭

Helm | Template Functions and Pipelines

  • When injecting strings from the .Values object into the template, we ought to quote these strings.
  • Helm has over 60 available functions. Some of them are defined by the Go template language itself. Most of the others are part of the Sprig template library
  • Although we talk about the "Helm template language" as if it is Helm-specific, it is actually a combination of the Go template language, some extra functions, and a variety of wrappers to expose certain objects to the templates.
  • Drawing on a concept from UNIX, pipelines are a tool for chaining together a series of template commands to compactly express a series of transformations.
  • The default function: default DEFAULT_VALUE GIVEN_VALUE (see the template snippet after this list).
  • all static default values should live in the values.yaml, and should not be repeated using the default command (otherwise they would be redundant).
  • The default command is perfect for computed values, which cannot be declared inside values.yaml.
  • When lookup returns an object, it will return a dictionary.
  • The synopsis of the lookup function is lookup apiVersion, kind, namespace, name -> resource or resource list
  • When no object is found, an empty value is returned. This can be used to check for the existence of an object.
  • The lookup function uses Helm's existing Kubernetes connection configuration to query Kubernetes.
  • Helm is not supposed to contact the Kubernetes API Server during a helm template or a helm install|upgrade|delete|rollback --dry-run, so the lookup function will return an empty list (i.e. dict) in such a case.
  • the operators (eq, ne, lt, gt, and, or and so on) are all implemented as functions. In pipelines, operations can be grouped with parentheses ((, and )).
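
A short template sketch showing quoting, pipelines, default, and lookup together; the .Values paths and the ConfigMap name queried by lookup are made up for illustration:

    # templates/configmap.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: {{ .Release.Name }}-configmap
    data:
      # Quote injected strings; chain transformations with pipelines.
      drink: {{ .Values.favorite.drink | default "tea" | quote }}
      food: {{ .Values.favorite.food | upper | quote }}
      # lookup queries the live cluster; during helm template or --dry-run it
      # returns an empty dict, so the if below falls through to "false".
      {{- $cm := lookup "v1" "ConfigMap" "default" "existing-config" }}
      hasExisting: {{ if $cm }}"true"{{ else }}"false"{{ end }}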
張 旭

Creating a cluster with kubeadm | Kubernetes

  • (Recommended) If you have plans to upgrade this single control-plane kubeadm cluster to high availability, you should specify the --control-plane-endpoint to set the shared endpoint for all control-plane nodes (see the example command after this list).
  • set the --pod-network-cidr to a provider-specific value.
  • kubeadm tries to detect the container runtime by using a list of well known endpoints.
  • kubeadm uses the network interface associated with the default gateway to set the advertise address for this particular control-plane node's API server. To use a different network interface, specify the --apiserver-advertise-address=<ip-address> argument to kubeadm init
  • Do not share the admin.conf file with anyone and instead grant users custom permissions by generating them a kubeconfig file using the kubeadm kubeconfig user command.
  • The token is used for mutual authentication between the control-plane node and the joining nodes. The token included here is secret. Keep it safe, because anyone with this token can add authenticated nodes to your cluster.
  • You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.
  • Take care that your Pod network must not overlap with any of the host networks
  • Make sure that your Pod network plugin supports RBAC, and so do any manifests that you use to deploy it.
  • You can install only one Pod network per cluster.
  • The cluster created here has a single control-plane node, with a single etcd database running on it.
  • The node-role.kubernetes.io/control-plane label is such a restricted label and kubeadm manually applies it using a privileged client after a node has been created.
  • By default, your cluster will not schedule Pods on the control plane nodes for security reasons.
  • kubectl taint nodes --all node-role.kubernetes.io/control-plane-
  • This removes the node-role.kubernetes.io/control-plane:NoSchedule taint from any nodes that have it, including the control plane nodes, meaning that the scheduler will then be able to schedule Pods everywhere.
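
Putting the flags above together, a hedged sketch of bootstrapping a single control-plane cluster; the endpoint, advertise address, and Pod CIDR are placeholders that depend on your environment and chosen network add-on:

    # Initialize the first control-plane node; setting --control-plane-endpoint
    # now keeps the door open for a later high-availability upgrade.
    kubeadm init \
      --control-plane-endpoint "cluster-endpoint.example.com:6443" \
      --pod-network-cidr 10.244.0.0/16 \
      --apiserver-advertise-address 192.168.1.10

    # Next, install a CNI Pod network add-on and join workers with the
    # "kubeadm join ... --token ..." command printed by kubeadm init.

    # For a single-node lab only: allow Pods to schedule on the control plane.
    kubectl taint nodes --all node-role.kubernetes.io/control-plane-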
張 旭

Data Sources - Configuration Language | Terraform | HashiCorp Developer

  • Each provider may offer data sources alongside its set of resource types.
  • When distinguishing from data resources, the primary kind of resource (as declared by a resource block) is known as a managed resource.
  • Each data resource is associated with a single data source, which determines the kind of object (or objects) it reads and what query constraint arguments are available.
  • Terraform reads data resources during the planning phase when possible, but announces in the plan when it must defer reading resources until the apply phase to preserve the order of operations.
  • local-only data sources exist for rendering templates, reading local files, and rendering AWS IAM policies.
  • As with managed resources, when count or for_each is present it is important to distinguish the resource itself from the multiple resource instances it creates. Each instance will separately read from its data source with its own variant of the constraint arguments, producing an indexed result.
  • Data instance arguments may refer to computed values, in which case the attributes of the instance itself cannot be resolved until all of its arguments are defined (see the example configuration after this list).
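
A minimal sketch of a data resource next to a managed resource; the aws_ami filters, owner ID, and instance type are placeholders for illustration:

    # Data resource: read during the planning phase when possible, deferred to
    # the apply phase if its arguments depend on values not yet known.
    data "aws_ami" "ubuntu" {
      most_recent = true
      owners      = ["099720109477"]

      filter {
        name   = "name"
        values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
      }
    }

    # Managed resource: declared with a resource block and referencing the data
    # resource's attributes once they are resolved.
    resource "aws_instance" "web" {
      ami           = data.aws_ami.ubuntu.id
      instance_type = "t3.micro"
    }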