
Larvata / Group items tagged: replication


張 旭

MySQL :: MySQL 5.7 Reference Manual :: 19.1.1.2 Group Replication - 0 views

  • The replication group is a set of servers that interact with each other through message passing.
  • The communication layer provides a set of guarantees such as atomic message and total order message delivery.
  • a multi-master update everywhere replication protocol
  • a replication group is formed by multiple servers and each server in the group may execute transactions independently
  • Read-only (RO) transactions need no coordination within the group and thus commit immediately
  • For any RW transaction, the group needs to decide whether it commits or not; thus the commit operation is not a unilateral decision of the originating server
  • when a transaction is ready to commit at the originating server, the server atomically broadcasts the write values (rows changed) and the corresponding write set (unique identifiers of the rows that were updated). Then a global total order is established for that transaction.
  • all servers receive the same set of transactions in the same order
  • The resolution procedure states that the transaction that was ordered first commits on all servers, whereas the transaction ordered second aborts, and is therefore rolled back on the originating server and dropped by the other servers in the group. This is in fact a distributed first-commit-wins rule
  • Group Replication is a shared-nothing replication scheme where each server has its own entire copy of the data
  • MySQL Group Replication protocol
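The first-commit-wins certification described above can be illustrated with a small sketch. This is not the actual MySQL implementation; the row identifiers and the certify helper are invented for illustration, and conflicts are detected purely by intersecting write sets in the agreed total order.

```python
# Hypothetical sketch of a first-commit-wins certification pass.
# Each transaction arrives in the global total order together with its
# write set (unique identifiers of the rows it updated); a transaction
# aborts if an earlier-ordered transaction already claimed one of its rows.

def certify(transactions_in_total_order):
    """transactions_in_total_order: iterable of (txn_id, write_set) pairs."""
    claimed_rows = set()          # rows written by already-committed transactions
    decisions = {}
    for txn_id, write_set in transactions_in_total_order:
        if claimed_rows & set(write_set):
            decisions[txn_id] = "abort"    # conflicts with an earlier commit
        else:
            decisions[txn_id] = "commit"
            claimed_rows |= set(write_set)
    return decisions

# Two concurrent transactions touching the same row: the one ordered first wins.
print(certify([("t1", {"accounts:42"}), ("t2", {"accounts:42", "accounts:7"})]))
# {'t1': 'commit', 't2': 'abort'}
```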
crazylion lee

Overview of Different MySQL Replication Solutions - Percona Database Performance Blog - 0 views

  •  
    "In this blog post, I will review some of the MySQL replication concepts that are part of the MySQL environment (and Percona Server for MySQL specifically). I will also try to clarify some of the misconceptions people have about replication. Since I've been working on the Solution Engineering team, I've noticed that - although information is plentiful - replication is often misunderstood or incompletely understood."
張 旭

Replication - MongoDB Manual - 0 views

  • A replica set in MongoDB is a group of mongod processes that maintain the same data set.
  • Replica sets provide redundancy and high availability, and are the basis for all production deployments.
  • With multiple copies of data on different database servers, replication provides a level of fault tolerance against the loss of a single database server.
  • replication can provide increased read capacity as clients can send read operations to different servers.
  • A replica set is a group of mongod instances that maintain the same data set.
  • A replica set contains several data bearing nodes and optionally one arbiter node.
  • one and only one member is deemed the primary node, while the other nodes are deemed secondary nodes.
  • A replica set can have only one primary capable of confirming writes with { w: "majority" } write concern; although in some circumstances, another mongod instance may transiently believe itself to also be primary.
  • The secondaries replicate the primary’s oplog and apply the operations to their data sets such that the secondaries’ data sets reflect the primary’s data set
  • add a mongod instance to a replica set as an arbiter. An arbiter participates in elections but does not hold data
  • An arbiter will always be an arbiter whereas a primary may step down and become a secondary and a secondary may become the primary during an election.
  • Secondaries replicate the primary’s oplog and apply the operations to their data sets asynchronously.
  • These slow oplog messages are logged for the secondaries in the diagnostic log under the REPL component with the text applied op: <oplog entry> took <num>ms.
  • Replication lag refers to the amount of time that it takes to copy (i.e. replicate) a write operation on the primary to a secondary.
  • When a primary does not communicate with the other members of the set for more than the configured electionTimeoutMillis period (10 seconds by default), an eligible secondary calls for an election to nominate itself as the new primary.
  • The replica set cannot process write operations until the election completes successfully.
  • The median time before a cluster elects a new primary should not typically exceed 12 seconds, assuming default replica configuration settings.
  • Factors such as network latency may extend the time required for replica set elections to complete, which in turn affects the amount of time your cluster may operate without a primary.
  • Your application connection logic should include tolerance for automatic failovers and the subsequent elections.
  • MongoDB drivers can detect the loss of the primary and automatically retry certain write operations a single time, providing additional built-in handling of automatic failovers and elections
  • By default, clients read from the primary [1]; however, clients can specify a read preference to send read operations to secondaries.
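The failover tolerance and read-preference behaviour above can be exercised from a driver. Below is a hedged PyMongo sketch; the hosts, the replica set name rs0, and the app/events names are placeholders, not taken from the manual.

```python
# Sketch: majority writes, retryable writes, and secondary reads with PyMongo.
from pymongo import MongoClient, ReadPreference
from pymongo.write_concern import WriteConcern

client = MongoClient(
    "mongodb://localhost:27017,localhost:27018,localhost:27019/"
    "?replicaSet=rs0&retryWrites=true"   # driver retries certain writes once after a failover
)

events = client.get_database("app").get_collection(
    "events", write_concern=WriteConcern(w="majority")
)
events.insert_one({"type": "login", "user": "alice"})   # acknowledged by a majority

# Route reads to secondaries to use the extra read capacity mentioned above.
ro_events = client.get_database(
    "app", read_preference=ReadPreference.SECONDARY_PREFERRED
).get_collection("events")
print(ro_events.count_documents({"type": "login"}))
```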
張 旭

MySQL :: MySQL 5.7 Reference Manual :: 19.1 Group Replication Background - 0 views

  • the component can be removed and the system should continue to operate as expected
  • network partitioning
  • split brain scenarios
  • the ultimate challenge is to fuse the logic of the database and data replication with the logic of having several servers coordinated in a consistent and simple way
  • MySQL Group Replication provides distributed state machine replication with strong coordination between servers.
  • Servers coordinate themselves automatically when they are part of the same group
  • The group can operate in a single-primary mode with automatic primary election, where only one server accepts updates at a time.
  • For a transaction to commit, the majority of the group have to agree on the order of a given transaction in the global sequence of transactions
  • Deciding to commit or abort a transaction is done by each server individually, but all servers make the same decision
  • group communication protocols
  • the Paxos algorithm. It acts as the group communication systems engine.
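The majority requirement for ordering and committing transactions boils down to a simple quorum rule. The helper below is only a toy illustration of that rule, not the Paxos-based group communication engine itself.

```python
# Toy quorum check: a decision is accepted only if a majority of the
# group members acknowledge it, so a 5-member group tolerates 2 failures.

def majority_agrees(acks: int, group_size: int) -> bool:
    return acks > group_size // 2

print(majority_agrees(3, 5))   # True: 3 of 5 members agreed
print(majority_agrees(2, 5))   # False: no majority, the decision cannot proceed
```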
張 旭

MySQL :: MySQL 5.7 Reference Manual :: 19.2.1.2 Configuring an Instance for Group Repli... - 0 views

  • store replication metadata in system tables instead of files
  • collect the write set and encode it as a hash using the XXHASH64 hashing algorithm
  • not start operations automatically when the server starts
  • for incoming connections from other members in the group
  • The server listens on this port for member-to-member connections. This port must not be used for user applications at all
  • The loose- prefix used for the group_replication variables above instructs the server to continue to start if the Group Replication plugin has not been loaded at the time the server is started.
  • For example, if each server instance is on a different machine, use the IP address and port of that machine, such as 10.0.0.1:33061. The recommended port for group_replication_local_address is 33061
  • does not need to list all members in the group
  • The server that starts the group does not make use of this option, since it is the initial server and as such, it is in charge of bootstrapping the group
  • start the bootstrap member first, and let it create the group
  • Creating a group and joining multiple members at the same time is not supported.
  • must only be used on one server instance at any time
  • Disable this option after the first server instance comes online
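The bootstrap sequence above can be driven from any MySQL client. Here is a hedged sketch using mysql-connector-python: the host and credentials are placeholders, while the SQL statements follow the documented rule of enabling group_replication_bootstrap_group on exactly one server, starting Group Replication, then disabling the option again.

```python
# Sketch: bootstrapping the first member of a replication group.
import mysql.connector

conn = mysql.connector.connect(host="10.0.0.1", user="root", password="secret")
cur = conn.cursor()

cur.execute("SET GLOBAL group_replication_bootstrap_group = ON")
cur.execute("START GROUP_REPLICATION")
cur.execute("SET GLOBAL group_replication_bootstrap_group = OFF")  # never leave this enabled

# Verify that the member came online.
cur.execute(
    "SELECT MEMBER_HOST, MEMBER_STATE FROM performance_schema.replication_group_members"
)
for host, state in cur.fetchall():
    print(host, state)

cur.close()
conn.close()
```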
張 旭

MongoDB Performance Tuning: Everything You Need to Know - Stackify - 0 views

  • db.serverStatus().globalLock
  • db.serverStatus().locks
  • globalLock.currentQueue.total: This number can indicate a possible concurrency issue if it’s consistently high. This can happen if a lot of requests are waiting for a lock to be released.
  • globalLock.totalTime: If this is higher than the total database uptime, the database has been in a lock state for too long.
  • Unlike relational databases such as MySQL or PostgreSQL, MongoDB uses JSON-like documents for storing data.
  • Databases operate in an environment that consists of numerous reads, writes, and updates.
  • When a lock occurs, no other operation can read or modify the data until the operation that initiated the lock is finished.
  • locks.deadlockCount: Number of times the lock acquisitions have encountered deadlocks
  • Is the database frequently locking from queries? This might indicate issues with the schema design, query structure, or system architecture.
  • From version 3.2 on, WiredTiger is the default storage engine.
  • MMAPv1 locks whole collections, not individual documents.
  • WiredTiger performs locking at the document level.
  • When the MMAPv1 storage engine is in use, MongoDB will use memory-mapped files to store data.
  • All available memory will be allocated for this usage if the data set is large enough.
  • db.serverStatus().mem
  • mem.resident: Roughly equivalent to the amount of RAM in megabytes that the database process uses
  • If mem.resident exceeds the value of system memory and there’s a large amount of unmapped data on disk, we’ve most likely exceeded system capacity.
  • If the value of mem.mapped is greater than the amount of system memory, some operations will experience page faults.
  • The WiredTiger storage engine is a significant improvement over MMAPv1 in performance and concurrency.
  • By default, MongoDB will reserve 50 percent of the available memory for the WiredTiger data cache.
  • wiredTiger.cache.bytes currently in the cache – This is the size of the data currently in the cache.
  • wiredTiger.cache.tracked dirty bytes in the cache – This is the size of the dirty data in the cache.
  • we can look at the wiredTiger.cache.bytes read into cache value for read-heavy applications. If this value is consistently high, increasing the cache size may improve overall read performance.
  • check whether the application is read-heavy. If it is, increase the size of the replica set and distribute the read operations to secondary members of the set.
  • write-heavy, use sharding within a sharded cluster to distribute the load.
  • Replication is the propagation of data from one node to another
  • Replication sets handle this replication.
  • Sometimes, data isn’t replicated as quickly as we’d like.
  • a particularly thorny problem if the lag between a primary and secondary node is high and the secondary becomes the primary
  • use the db.printSlaveReplicationInfo() or the rs.printSlaveReplicationInfo() command to see the status of a replica set from the perspective of the secondary member of the set.
  • shows how far behind the secondary members are from the primary. This number should be as low as possible.
  • monitor this metric closely.
  • watch for any spikes in replication delay.
  • Always investigate these issues to understand the reasons for the lag.
  • One member of a replica set is primary. All others are secondary.
  • it’s not normal for nodes to change back and forth between primary and secondary.
  • use the profiler to gain a deeper understanding of the database’s behavior.
  • Enabling the profiler can affect system performance, due to the additional activity.
crazylion lee

Overview - DistributedLog 1.0 documentation - 0 views

  •  
    "DistributedLog (DL) is a high-performance, replicated log service, offering durability, replication and strong consistency as essentials for building reliable distributed systems. "
張 旭

Replication - Redis - 0 views

  • leader follower (master-slave) replication
  • slave Redis instances to be exact copies of master instances.
  • The slave will automatically reconnect to the master every time the link breaks, and will attempt to be an exact copy of it regardless of what happens to the master.
  • the master keeps the slave updated by sending a stream of commands to the slave
  • When a partial resynchronization is not possible, the slave will ask for a full resynchronization.
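The same master/slave (leader/follower) state can be inspected from a client. A hedged redis-py sketch, with the host addresses as placeholders:

```python
# Sketch: inspecting Redis replication state and attaching a replica.
import redis

master = redis.Redis(host="10.0.0.1", port=6379)
repl = master.info("replication")          # the INFO replication section
print(repl["role"])                        # 'master' or 'slave'
print(repl.get("connected_slaves", 0))     # replicas currently streaming commands

# Point another instance at the master at runtime (the SLAVEOF/REPLICAOF command).
replica = redis.Redis(host="10.0.0.2", port=6379)
replica.slaveof("10.0.0.1", 6379)
```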
crazylion lee

twitter/distributedlog: A high performance replicated log service. - 0 views

  •  
    "A high performance replicated log service. http://distributedlog.io"
張 旭

Introduction to MongoDB - MongoDB Manual - 0 views

  • MongoDB is a document database designed for ease of development and scaling
  • MongoDB offers both a Community and an Enterprise version
  • A record in MongoDB is a document, which is a data structure composed of field and value pairs.
  • MongoDB documents are similar to JSON objects.
  • The values of fields may include other documents, arrays, and arrays of documents.
  • reduce need for expensive joins
  • MongoDB stores documents in collections.
  • Collections are analogous to tables in relational databases.
  • Read-only Views
  • Indexes support faster queries and can include keys from embedded documents and arrays.
  • MongoDB's replication facility, called replica set
  • A replica set is a group of MongoDB servers that maintain the same data set, providing redundancy and increasing data availability.
  • Sharding distributes data across a cluster of machines.
  • MongoDB supports creating zones of data based on the shard key.
  • MongoDB provides a pluggable storage engine API
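To make the document/collection terminology above concrete, here is a hedged PyMongo sketch; the library database, books collection, and the sample document are invented for illustration.

```python
# Sketch: embedded documents, arrays, and an index on an array field.
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")
books = client["library"]["books"]          # a collection is analogous to a table

# Embedding authors and tags in the document avoids a join at read time.
books.insert_one({
    "title": "Designing Data-Intensive Applications",
    "authors": [{"name": "Martin Kleppmann"}],
    "tags": ["databases", "replication"],
})

books.create_index([("tags", ASCENDING)])   # indexes can include array fields
for doc in books.find({"tags": "replication"}):
    print(doc["title"])
```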
張 旭

Kubernetes 基本概念 · Kubernetes指南 - 0 views

  • A container is a portable, lightweight operating-system-level virtualization technology. It isolates different software runtime environments using namespaces and packages the runtime environment into a self-contained image, so a container can easily run anywhere.
  • Each application is packaged as a container, so managing container deployments becomes equivalent to managing application deployments.
  • A Pod is a set of tightly coupled containers that share the PID, IPC, Network, and UTS namespaces; it is the basic scheduling unit in Kubernetes.
  • inter-process communication and file sharing
  • In Kubernetes, all objects are defined using manifests (YAML or JSON)
  • A Node is the host on which Pods actually run; it can be a physical machine or a virtual machine.
  • Every Node must run at least a container runtime (such as docker or rkt), kubelet, and kube-proxy.
  • Common objects such as pods, services, replication controllers, and deployments all belong to some namespace (default by default)
  • while nodes, persistentVolumes, and the like do not belong to any namespace
  • A Service is an abstraction over an application, providing load balancing and service discovery for it through labels.
  • The IPs and ports of the Pods matching the labels form the endpoints, and kube-proxy is responsible for load-balancing the service IP across these endpoints.
  • Every Service is automatically assigned a cluster IP (a virtual address reachable only inside the cluster) and a DNS name
張 旭

MySQL cluster vs Galera - How to make the right choice - 0 views

  • there is no “one size fits all” solution when it comes to database clustering.
  • MySQL Cluster contains data nodes that store the cluster data and a management node that stores the cluster’s configuration.
  • MySQL clients first communicate with the management node and then connect directly to these data nodes.
  • For synchronization of data in the data nodes, MySQL cluster uses a special data engine called NDB (Network Database).
  • it uses automatic sharding, i.e. splitting a large database into smaller units.
  • MySQL Cluster avoids a single point of failure and ensures 99.99% availability.
  • MySQL Cluster can provide response times of less than 3 ms.
  • Galera Cluster consists of a database server and uses the Galera Replication Plugin to manage replication.
  • a multi-master database cluster that supports synchronous replication.
  • it provides multiple, up-to-date copies of the data.
  • there is a need for instant fail-over.
  • Galera cluster allows the read and write of data in any node.
  • Benefits of Galera Cluster include guaranteed write consistency, automatic node provisioning, etc.
  • Upon restoring the connection, the separated nodes will sync back and rejoin the cluster automatically.
  • there is no need for a management node as in MySQL Cluster.
  • it gives the best results with the InnoDB storage engine.
crazylion lee

Apache Helix - Near-Realtime Rsync Replicated File System - 1 views

  •  
    "Near-Realtime Rsync Replicated File System "
張 旭

The package-lock.json file - 0 views

  • You don't commit to Git your node_modules folder, which is generally huge, and when you try to replicate the project on another machine by using the npm install command,
  • Even if a patch or minor release should not introduce breaking changes
  • The package-lock.json sets your currently installed version of each package in stone, and npm will use those exact versions when running npm ci
  • The package-lock.json file needs to be committed to your Git repository
張 旭

DNS Records: an Introduction - 0 views

  • reading from right to left
  • top-level domain, or TLD
  • first-level subdomains plus their TLDs (example.com) are referred to as “domains.”
  • Name servers host a domain’s DNS information in a text file called the zone file
  • Start of Authority (SOA) records
  • You’ll want to specify at least two name servers. That way, if one of them is down, the next one can continue to serve your DNS information.
  • Every domain’s zone file contains the admin’s email address, the name servers, and the DNS records.
  • a zone file, which lists domains and their corresponding IP addresses (and a few other things)
  • TLD nameserver
  • ISPs cache a lot of DNS information after they’ve looked it up the first time
  • Usually caching is a good thing, but it can be a problem if you’ve recently made a change to your DNS information
  • An A record matches up a domain (or subdomain) to an IP address
  • point different subdomains to different IP addresses
  • An AAAA record is just like an A record, but for IPv6 IP addresses.
  • An AXFR record is a type of DNS record used for DNS replication
  • used on a slave DNS server to replicate the zone file from a master DNS server
  • DNS Certification Authority Authorization uses DNS to allow the holder of a domain to specify which certificate authorities are allowed to issue certificates for that domain.
  • A CNAME record or Canonical Name record matches up a domain (or subdomain) to a different domain.
  • You should not use a CNAME record for a domain that gets email, because some mail servers handle mail oddly for domains with CNAME records
  • the target domain for a CNAME record should have a normal A-record resolution
  • a CNAME record does not function the same way as a URL redirect
  • A DKIM record or domain keys identified mail record displays the public key for authenticating messages that have been signed with the DKIM protocol
  • An MX record or mail exchange record sets the mail delivery destination for a domain (or subdomain).
  • Ideally, an MX record should point to a domain that is also the hostname for its server.
  • Your MX records don’t necessarily have to point to your Linode. If you’re using a third-party mail service, like Google Apps, you should use the MX records they provide.
  • Lower numbers have a higher priority
  • NS records or name server records set the nameservers for a domain (or subdomain).
  • You can also set up different nameservers for any of your subdomains.
  • The order of NS records does not matter; DNS requests are sent randomly to the different servers, and if one host fails to respond, another one will be queried.
  • A PTR record or pointer record matches up an IP address to a domain (or subdomain), allowing reverse DNS queries to function.
  • PTR records are usually set with your hosting provider. They are not part of your domain’s zone file.
  • An SOA record or Start of Authority record labels a zone file with the name of the host where it was originally created.
  • The administrative email address is written with a period (.) instead of an at symbol (@).
  • The single nameserver mentioned in the SOA record is considered the primary master for the purposes of Dynamic DNS and is the server where zone file changes get made before they are propagated to all other nameservers.
  • An SPF record or Sender Policy Framework record lists the designated mail servers for a domain (or subdomain).
  • An SPF record for your domain tells other receiving mail servers which outgoing server(s) are valid sources of email, so they can reject spoofed email from your domain that has originated from unauthorized servers.
  • Your SPF record will have a domain or subdomain, type (which is TXT, or SPF if your name server supports it), and text (which starts with “v=spf1” and contains the SPF record settings).
  • An SRV record or service record matches up a specific service that runs on your domain (or subdomain) to a target domain.
  • A TXT record or text record provides information about the domain in question to other resources on the Internet.
  • One common use of the TXT record is to create an SPF record on nameservers that don’t natively support SPF.
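Most of the record types above can be queried directly. A hedged sketch with dnspython; the example.com domain is a placeholder:

```python
# Sketch: looking up A/AAAA/MX/TXT/NS records for a domain.
import dns.resolver

for rrtype in ("A", "AAAA", "MX", "TXT", "NS"):
    try:
        answers = dns.resolver.resolve("example.com", rrtype)
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        continue
    for rdata in answers:
        print(rrtype, rdata.to_text())

# MX answers carry a priority; lower numbers are tried first.
for mx in dns.resolver.resolve("example.com", "MX"):
    print(mx.preference, mx.exchange)
```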
張 旭

Pods - Kubernetes - 0 views

  • Pods are the smallest deployable units of computing
  • A Pod (as in a pod of whales or pea pod) is a group of one or more containers (such as Docker containers), with shared storage/network, and a specification for how to run the containers. A container is a lightweight and portable executable image that contains software and all of its dependencies.
  • A Pod’s contents are always co-located and co-scheduled, and run in a shared context.
  • A Pod models an application-specific “logical host”
  • application containers which are relatively tightly coupled
  • being executed on the same physical or virtual machine would mean being executed on the same logical host.
  • The shared context of a Pod is a set of Linux namespaces, cgroups, and potentially other facets of isolation
  • Containers within a Pod share an IP address and port space, and can find each other via localhost
  • Containers in different Pods have distinct IP addresses and can not communicate by IPC without special configuration. These containers usually communicate with each other via Pod IP addresses.
  • Applications within a Pod also have access to shared volumes (a volume is a directory containing data, accessible to the containers in a pod), which are defined as part of a Pod and are made available to be mounted into each application’s filesystem.
  • a Pod is modelled as a group of Docker containers with shared namespaces and shared filesystem volumes
    • 張 旭
       
      Similar to the bunch of containers declared together in one docker-compose file?
  • Pods are considered to be relatively ephemeral (rather than durable) entities.
  • Pods are created, assigned a unique ID (UID), and scheduled to nodes where they remain until termination (according to restart policy) or deletion.
  • it can be replaced by an identical Pod
  • When something is said to have the same lifetime as a Pod, such as a volume, that means that it exists as long as that Pod (with that UID) exists.
  • uses a persistent volume for shared storage between the containers
  • Pods serve as unit of deployment, horizontal scaling, and replication
  • The applications in a Pod all use the same network namespace (same IP and port space), and can thus “find” each other and communicate using localhost
  • flat shared networking space
  • Containers within the Pod see the system hostname as being the same as the configured name for the Pod.
  • Volumes enable data to survive container restarts and to be shared among the applications within the Pod.
  • Individual Pods are not intended to run multiple instances of the same application
  • The individual containers may be versioned, rebuilt and redeployed independently.
  • Pods aren’t intended to be treated as durable entities.
  • Controllers like StatefulSet can also provide support to stateful Pods.
  • When a user requests deletion of a Pod, the system records the intended grace period before the Pod is allowed to be forcefully killed, and a TERM signal is sent to the main process in each container.
  • Once the grace period has expired, the KILL signal is sent to those processes, and the Pod is then deleted from the API server.
  • grace period
  • The Pod is removed from the endpoints list for the service, and is no longer considered part of the set of running Pods for replication controllers.
  • When the grace period expires, any processes still running in the Pod are killed with SIGKILL.
  • By default, all deletes are graceful within 30 seconds.
  • You must specify an additional flag --force along with --grace-period=0 in order to perform force deletions.
  • Force deletion of a Pod is defined as deletion of a Pod from the cluster state and etcd immediately.
  • StatefulSet Pods
  • Processes within the container get almost the same privileges that are available to processes outside a container.
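The graceful-deletion flow above (TERM, grace period, then KILL) maps onto a single API parameter. A hedged sketch with the kubernetes Python client; the Pod names and namespace are placeholders:

```python
# Sketch: graceful vs. immediate Pod deletion.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Graceful delete: TERM is sent now, KILL after the 30-second grace period.
v1.delete_namespaced_pod(name="web-0", namespace="default", grace_period_seconds=30)

# Grace period of zero: the Pod is removed from the API server without waiting,
# roughly the kubectl --grace-period=0 --force case.
v1.delete_namespaced_pod(name="web-1", namespace="default", grace_period_seconds=0)
```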
張 旭

MySQL :: MySQL 5.7 Reference Manual :: 20.2 Introducing InnoDB Cluster - 0 views

  • A group of MySQL servers can be configured to create a cluster using MySQL Shell
  • The cluster of servers has a single master, called the primary, which acts as the read-write master.
  • Multiple secondary servers are replicas of the master
  • A client application is connected to the primary via MySQL Router
  • MySQL Shell also requires Python 2.7 and above to run cluster provisioning scripts
  • AdminAPI, which enables you to create and administer an InnoDB cluster, using either JavaScript or Python scripting
  • Caches the metadata of the InnoDB cluster and performs high availability routing to the MySQL Server instances which make up the cluster
  • Group Replication mechanism to allow data to be replicated from the primary to the secondaries in the cluster
  • AdminAPI is available as of MySQL Shell 1.0.8.
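The AdminAPI workflow is typically run interactively inside MySQL Shell rather than as a standalone script. A hedged sketch in the shell's Python mode (mysqlsh --py), with instance addresses and the cluster name as placeholders:

```python
# Run inside MySQL Shell; the `shell` and `dba` globals are provided by mysqlsh.
shell.connect("clusteradmin@ic-1:3306")

cluster = dba.create_cluster("testCluster")        # bootstraps Group Replication on ic-1
cluster.add_instance("clusteradmin@ic-2:3306")     # secondaries replicate from the primary
cluster.add_instance("clusteradmin@ic-3:3306")

print(cluster.status())                            # topology and member states used by MySQL Router
```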
張 旭

Supported DDL operations for a CDC Replication Engine for Db2 Database - IBM Documentation - 1 views

  • SQL statements are divided into two categories: Data Definition Language (DDL) and Data Manipulation Language (DML).
  • DDL operations on a table may affect dependent objects such as constraints and indexes.
  •  
    "DDL statements are used to describe a database, to define its structure, to create its objects and to create the table's sub-objects."
張 旭

Kubernetes 架构浅析 - 0 views

  • Turn the load balancer into a smart load balancer: with a service-discovery mechanism, application instances automatically register themselves in a configuration center (etcd/zookeeper) when they start or are destroyed, and the load balancer watches for configuration changes and updates its own configuration automatically.
  • MySQL is planned to be accessed by domain name instead of by IP. To avoid delays when DNS records change, a private DNS has to be set up on the internal network.
  • work with the service-discovery mechanism to update DNS automatically
  • implemented by adding an extra proxy layer
  • dependencies on the operating system and base libraries can be customized per application
  • dependencies on disk paths and ports are injected dynamically through Docker run parameters
  • Docker's custom variables and parameters require a standardized configuration file
  • each server node needs an agent to execute the concrete operations and monitor the applications on that node
  • interfaces and tools to operate it must also be provided
  • decoupling application processes from resources (including CPU, memory, disk, and network)
  • decoupling service dependencies
  • In Kubernetes the scheduler is a plugin and can be replaced by other implementations (such as Mesos).
  • Most interfaces simply read and write data in etcd.
  • etcd serves as the configuration center and storage service
  • kubelet mainly covers container management, image management, volume management, and so on. kubelet is also a REST service, and pod-related commands are carried out by calling its interface.
  • kube-proxy is mainly used to implement the Kubernetes Service mechanism, providing part of the SDN functionality as well as an intelligent load balancer inside the cluster.
  • Pods: Kubernetes abstracts a concrete application instance as a pod. Each pod first starts a google_containers/pause docker container and then starts the application's real docker containers. This is done so that multiple docker containers can be wrapped into one pod and share a network address.
  • Replication Controller: controls the number of replicas of a pod.
  • Services: a service is an abstraction over a set of pods; thanks to kube-proxy's intelligent load-balancing mechanism, destroying or migrating pods does not affect the service's functionality or its upstream callers.
  • Namespace: namespaces in Kubernetes are mainly used to avoid name conflicts between pods and services. Within a single namespace, pod and service names must be unique.
  • In the Kubernetes philosophy, pods can communicate with each other directly.
  • Users have to pick a networking solution themselves: Flannel, OpenVSwitch, Weave, etc.
  • Hypernetes is a Kubernetes distribution that implements multi-tenancy.
  • If the operations tooling cannot keep up and services are split too finely, it is easy to end up with an ancient, rarely updated service deployed in some corner of a server; everyone eventually forgets about it, it gets lost during a server migration, and it is only rediscovered when users complain.
  • A microservice governance framework on top of Kubernetes can solve the RPC, monitoring, and disaster-recovery problems of microservices in one package.
  • There is no priority among the multiple container definitions in the same pod, so the startup order cannot be guaranteed.
張 旭

How services work | Docker Documentation - 0 views

  • a service is the image for a microservice within the context of some larger application.
  • When you create a service, you specify which container image to use and which commands to execute inside running containers.
  • an overlay network for the service to connect to other services in the swarm
  • In the swarm mode model, each task invokes exactly one container
  • A task is analogous to a “slot” where the scheduler places a container.
  • A task is the atomic unit of scheduling within a swarm.
  • A task is a one-directional mechanism. It progresses monotonically through a series of states: assigned, prepared, running, etc.
  • Docker swarm mode is a general purpose scheduler and orchestrator.
  • Hypothetically, you could implement other types of tasks such as virtual machine tasks or non-containerized process tasks.
  • If all nodes are paused or drained, and you create a service, it is pending until a node becomes available.
  • reserve a specific amount of memory for a service.
  • impose placement constraints on the service
  • As the administrator of a swarm, you declare the desired state of your swarm, and the manager works with the nodes in the swarm to create that state.
  • two types of service deployments, replicated and global.
  • A global service is a service that runs one task on every node.
  • Good candidates for global services are monitoring agents, anti-virus scanners, or other types of containers that you want to run on every node in the swarm.
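The replicated/global distinction above can be expressed with the docker Python SDK. A hedged sketch, run against a swarm manager; the image and service names are placeholders:

```python
# Sketch: one replicated service (3 tasks anywhere in the swarm) and one
# global service (exactly one task per node, e.g. a monitoring agent).
import docker
from docker.types import ServiceMode

client = docker.from_env()

client.services.create(
    image="nginx:alpine",
    name="web",
    mode=ServiceMode("replicated", replicas=3),
)

client.services.create(
    image="prom/node-exporter",
    name="node-exporter",
    mode=ServiceMode("global"),
)
```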