
Larvata / Group items tagged: networking


張 旭

What's the story behind the names of CloudFlare's name servers? - 0 views

  • what would happen if two people signed up the same domain at the same time?
  • we commissioned an artist to draw representations of the 100 name servers as if they were ninjas. While we've never done much with the drawings, we liked the metaphor of two ninja name servers protecting your website.
  • The servers in CloudFlare's infrastructure are configured to be able to respond to any request for any one of our customers.
  • While you may get Bob and Lola, and someone else may get Stan and Amy, in fact they are both sending requests to the same elastic pool of resources.
  •  
    "what would happen if two people signed up the same domain at the same time?"
張 旭

Howto/DNS updates and zone transfers with TSIG - FreeIPA - 0 views

  • dnssec-keygen -a HMAC-SHA512 -b 512 -n HOST keyname
  • vim /etc/named.conf
  • keyvalue
  • ipa dnszone-mod example.com. --update-policy="grant keyname name example.com A;"
    • 張 旭
       
      Run kinit admin first (a full sketch follows this entry).
  • ipa dnszone-mod example.com. --dynamic-update=1
    • 張 旭
       
      ipa dnszone-show --all example.com.
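
The commands above appear one by one; here is a rough end-to-end sketch of the workflow, assuming a FreeIPA-managed zone example.com., a TSIG key called keyname, and placeholder host/IP values. The nsupdate key-file name is whatever dnssec-keygen actually generated (K<name>.+165+<keyid>.private):

# 1. Generate a TSIG key (HMAC-SHA512, 512 bits) named "keyname"
dnssec-keygen -a HMAC-SHA512 -b 512 -n HOST keyname

# 2. Authenticate to FreeIPA before touching zone settings
kinit admin

# 3. Allow dynamic updates and grant the key the right to update A records
ipa dnszone-mod example.com. --dynamic-update=1
ipa dnszone-mod example.com. --update-policy="grant keyname name example.com A;"
ipa dnszone-show --all example.com.

# 4. Send a signed dynamic update with the generated key file
nsupdate -k Kkeyname.+165+12345.private <<'EOF'
server ipa.example.com
zone example.com.
update add host1.example.com. 300 A 192.0.2.10
send
EOF
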
張 旭

Helm | - 0 views

  • Helm will figure out where to install Tiller by reading your Kubernetes configuration file (usually $HOME/.kube/config). This is the same file that kubectl uses.
  • kubectl cluster-info
  • Role-Based Access Control (RBAC) enabled
  • initialize the local CLI
  • install Tiller into your Kubernetes cluster
  • helm install
  • helm init --upgrade
  • By default, when Tiller is installed, it does not have authentication enabled.
  • helm repo update
  • Without a max history set the history is kept indefinitely, leaving a large number of records for helm and tiller to maintain.
  • helm init --upgrade
  • Whenever you install a chart, a new release is created.
  • one chart can be installed multiple times into the same cluster. And each can be independently managed and upgraded.
  • helm list function will show you a list of all deployed releases.
  • helm delete
  • helm status
  • you can audit a cluster’s history, and even undelete a release (with helm rollback).
  • the Helm server (Tiller).
  • The Helm client (helm)
  • brew install kubernetes-helm
  • Tiller, the server portion of Helm, typically runs inside of your Kubernetes cluster.
  • it can also be run locally, and configured to talk to a remote Kubernetes cluster.
  • Role-Based Access Control - RBAC for short
  • create a service account for Tiller with the right roles and permissions to access resources.
  • run Tiller in an RBAC-enabled Kubernetes cluster.
  • run kubectl get pods --namespace kube-system and see Tiller running.
  • helm inspect
  • Helm will look for Tiller in the kube-system namespace unless --tiller-namespace or TILLER_NAMESPACE is set.
  • For development, it is sometimes easier to work on Tiller locally, and configure it to connect to a remote Kubernetes cluster.
  • even when running locally, Tiller will store release configuration in ConfigMaps inside of Kubernetes.
  • helm version should show you both the client and server version.
  • Tiller stores its data in Kubernetes ConfigMaps, you can safely delete and re-install Tiller without worrying about losing any data.
  • helm reset
  • The --node-selectors flag allows us to specify the node labels required for scheduling the Tiller pod.
  • --override allows you to specify properties of Tiller’s deployment manifest.
  • helm init --override manipulates the specified properties of the final manifest (there is no “values” file).
  • The --output flag allows us skip the installation of Tiller’s deployment manifest and simply output the deployment manifest to stdout in either JSON or YAML format.
  • By default, tiller stores release information in ConfigMaps in the namespace where it is running.
  • To switch from the default backend to the secrets backend, you'll have to do the migration on your own.
  • a beta SQL storage backend that stores release information in an SQL database (only postgres has been tested so far).
  • Once you have the Helm Client and Tiller successfully installed, you can move on to using Helm to manage charts.
  • Helm requires that kubelet have access to a copy of the socat program to proxy connections to the Tiller API.
  • A Release is an instance of a chart running in a Kubernetes cluster. One chart can often be installed many times into the same cluster.
  • helm init --client-only
  • helm init --dry-run --debug
  • A panic in Tiller is almost always the result of a failure to negotiate with the Kubernetes API server
  • Tiller and Helm have to negotiate a common version to make sure that they can safely communicate without breaking API assumptions
  • helm delete --purge
  • Helm stores some files in $HELM_HOME, which is located by default in ~/.helm
  • A Chart is a Helm package. It contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster.
  • Think of it like the Kubernetes equivalent of a Homebrew formula, an Apt dpkg, or a Yum RPM file.
  • A Repository is the place where charts can be collected and shared.
  • Set the $HELM_HOME environment variable
  • each time it is installed, a new release is created.
  • Helm installs charts into Kubernetes, creating a new release for each installation. And to find new charts, you can search Helm chart repositories.
  • chart repository is named stable by default
  • helm search shows you all of the available charts
  • helm inspect
  • To install a new package, use the helm install command. At its simplest, it takes only one argument: The name of the chart.
  • If you want to use your own release name, simply use the --name flag on helm install
  • additional configuration steps you can or should take.
  • Helm does not wait until all of the resources are running before it exits. Many charts require Docker images that are over 600M in size, and may take a long time to install into the cluster.
  • helm status
  • helm inspect values
  • helm inspect values stable/mariadb
  • override any of these settings in a YAML formatted file, and then pass that file during installation.
  • helm install -f config.yaml stable/mariadb
  • --values (or -f): Specify a YAML file with overrides.
  • --set (and its variants --set-string and --set-file): Specify overrides on the command line.
  • Values that have been --set can be cleared by running helm upgrade with --reset-values specified.
  • Chart designers are encouraged to consider the --set usage when designing the format of a values.yaml file.
  • --set-file key=filepath is another variant of --set. It reads the file and uses its content as a value.
  • inject a multi-line text into values without dealing with indentation in YAML.
  • An unpacked chart directory
  • When a new version of a chart is released, or when you want to change the configuration of your release, you can use the helm upgrade command.
  • Because Kubernetes charts can be large and complex, Helm tries to perform the least invasive upgrade.
  • It will only update things that have changed since the last release
  • $ helm upgrade -f panda.yaml happy-panda stable/mariadb
  • deployment
  • If both are used, --set values are merged into --values with higher precedence.
  • The helm get command is a useful tool for looking at a release in the cluster.
  • helm rollback
  • A release version is an incremental revision. Every time an install, upgrade, or rollback happens, the revision number is incremented by 1.
  • helm history
  • a release name cannot be re-used.
  • you can rollback a deleted resource, and have it re-activate.
  • helm repo list
  • helm repo add
  • helm repo update
  • The Chart Development Guide explains how to develop your own charts.
  • helm create
  • helm lint
  • helm package
  • Charts that are archived can be loaded into chart repositories.
  • chart repository server
  • Tiller can be installed into any namespace.
  • Limiting Tiller to only be able to install into specific namespaces and/or resource types is controlled by Kubernetes RBAC roles and rolebindings
  • Release names are unique PER TILLER INSTANCE
  • Charts should only contain resources that exist in a single namespace.
  • not recommended to have multiple Tillers configured to manage resources in the same namespace.
  • a client-side Helm plugin. A plugin is a tool that can be accessed through the helm CLI, but which is not part of the built-in Helm codebase.
  • Helm plugins are add-on tools that integrate seamlessly with Helm. They provide a way to extend the core feature set of Helm, but without requiring every new feature to be written in Go and added to the core tool.
  • Helm plugins live in $(helm home)/plugins
  • The Helm plugin model is partially modeled on Git’s plugin model
  • helm is referred to as the porcelain layer, with plugins being the plumbing.
  • helm plugin install https://github.com/technosophos/helm-template
  • command is the command that this plugin will execute when it is called.
  • Environment variables are interpolated before the plugin is executed.
  • The command itself is not executed in a shell. So you can’t oneline a shell script.
  • Helm is able to fetch Charts using HTTP/S
  • Variables like KUBECONFIG are set for the plugin if they are set in the outer environment.
  • In Kubernetes, granting a role to an application-specific service account is a best practice to ensure that your application is operating in the scope that you have specified.
  • restrict Tiller’s capabilities to install resources to certain namespaces, or to grant a Helm client running in a pod access to a Tiller instance.
  • Service account with cluster-admin role
  • The cluster-admin role is created by default in a Kubernetes cluster
  • Deploy Tiller in a namespace, restricted to deploying resources only in that namespace
  • Deploy Tiller in a namespace, restricted to deploying resources in another namespace
  • When running a Helm client in a pod, in order for the Helm client to talk to a Tiller instance, it will need certain privileges to be granted.
  • SSL Between Helm and Tiller
  • The Tiller authentication model uses client-side SSL certificates.
  • creating an internal CA, and using both the cryptographic and identity functions of SSL.
  • Helm is a powerful and flexible package-management and operations tool for Kubernetes.
  • default installation applies no security configurations
  • with a cluster that is well-secured in a private network with no data-sharing or no other users or teams.
  • With great power comes great responsibility.
  • Choose the Best Practices you should apply to your helm installation
  • Role-based access control, or RBAC
  • Tiller’s gRPC endpoint and its usage by Helm
  • Kubernetes employs a role-based access control (or RBAC) system (as do modern operating systems) to help mitigate the damage that can be done if credentials are misused or bugs exist.
  • In the default installation the gRPC endpoint that Tiller offers is available inside the cluster (not external to the cluster) without authentication configuration applied.
  • Tiller stores its release information in ConfigMaps. We suggest changing the default to Secrets.
  • release information
  • charts
  • charts are a kind of package that not only installs containers you may or may not have validated yourself, but may also install into more than one namespace.
  • As with all shared software, in a controlled or shared environment you must validate all software you install yourself before you install it.
  • Helm’s provenance tools to ensure the provenance and integrity of charts
  •  
    "Helm will figure out where to install Tiller by reading your Kubernetes configuration file (usually $HOME/.kube/config). This is the same file that kubectl uses."
張 旭

What is a DNS Zone? Master and Slave DNS Zone and how to create it. - 0 views

  • DNS zone is a container of DNS settings and DNS records of a DNS namespace.
  • The DNS namespace can have single or multiple DNS zones, each managed by a particular DNS host/service.
  • Don’t directly associate a DNS zone with a specific domain.
  • DNS zones can be on the same servers
  • A DNS zone may contain multiple domain names or a single one;
  • Master zones, contain a read/write copy of the zone data.
  • There could be only one Master zone on one DNS server at a time.
  • If you want to have redundancy, you must have the zone data accessible on multiple servers.
  • The Slave zone is a read-only copy of the zone data.
  • Most of the time, Slave DNS zones are copies of Master zones.
  • If you try to change a DNS record on a Secondary zone, it can redirect you to another zone with read/write access. By itself, it can’t change it.
  • the primary purpose of a Slave zone is to serve as a backup
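
A minimal named.conf sketch of the Master/Slave arrangement described above; the IP addresses and file paths are placeholders, and each fragment belongs on a different server:

# On the primary (Master zone: the read/write copy)
cat >> /etc/named.conf <<'EOF'
zone "example.com" {
    type master;
    file "/var/named/example.com.zone";
    allow-transfer { 192.0.2.2; };   // the secondary's address
};
EOF

# On the secondary (Slave zone: a read-only copy pulled from the master)
cat >> /etc/named.conf <<'EOF'
zone "example.com" {
    type slave;
    masters { 192.0.2.1; };
    file "/var/named/slaves/example.com.zone";
};
EOF
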
張 旭

AskF5 | Manual Chapter: Working with Partitions - 0 views

  • During BIG-IP® system installation, the system automatically creates a partition named Common
  • An administrative partition is a logical container that you create, containing a defined set of BIG-IP® system objects.
  • No user can delete partition Common itself.
  • With respect to permissions, all users on the system except those with a user role of No Access have read access to objects in partition Common, and by default, partition Common is their current partition.
  • The current partition is the specific partition to which the system is currently set for a logged-in user.
  • A partition access assignment gives a user some level of access to the specified partition.
  • assigning partition access to a user does not necessarily give the user full access to all objects in the partition
  • user account objects also reside in partitions
  • when you first install the BIG-IP system, every existing user account (root and admin) resides in partition Common
  • the partition in which a user account object resides does not affect the partition or partitions to which that user is granted access to manage other BIG-IP objects
  • the object it references resides in partition Common
  • a referenced object must reside either in the same partition as the object that is referencing it
張 旭

Understanding Nginx HTTP Proxying, Load Balancing, Buffering, and Caching | DigitalOcean - 0 views

  • allow Nginx to pass requests off to backend http servers for further processing
  • Nginx is often set up as a reverse proxy solution to help scale out infrastructure or to pass requests to other servers that are not designed to handle large client loads
  • explore buffering and caching to improve the performance of proxying operations for clients
  • Nginx is built to handle many concurrent connections at the same time.
  • provides you with flexibility in easily adding backend servers or taking them down as needed for maintenance
  • Proxying in Nginx is accomplished by manipulating a request aimed at the Nginx server and passing it to other servers for the actual processing
  • The servers that Nginx proxies requests to are known as upstream servers.
  • Nginx can proxy requests to servers that communicate using the http(s), FastCGI, SCGI, and uwsgi, or memcached protocols through separate sets of directives for each type of proxy
  • When a request matches a location with a proxy_pass directive inside, the request is forwarded to the URL given by the directive
  • For example, when a request for /match/here/please is handled by this block, the request URI will be sent to the example.com server as http://example.com/match/here/please
  • The request coming from Nginx on behalf of a client will look different than a request coming directly from a client
  • Nginx gets rid of any empty headers
  • Nginx, by default, will consider any header that contains underscores as invalid. It will remove these from the proxied request
    • 張 旭
       
      Note: header field names that contain underscores are dropped by default; Nginx must be configured to let them through.
  • The "Host" header is re-written to the value defined by the $proxy_host variable.
  • The upstream should not expect this connection to be persistent
  • Headers with empty values are completely removed from the passed request.
  • if your backend application will be processing non-standard headers, you must make sure that they do not have underscores
  • by default, this will be set to the value of $proxy_host, a variable that will contain the domain name or IP address and port taken directly from the proxy_pass definition
  • This is selected by default as it is the only address Nginx can be sure the upstream server responds to
  • (as it is pulled directly from the connection info)
  • $http_host: Sets the "Host" header to the "Host" header from the client request.
  • The headers sent by the client are always available in Nginx as variables. The variables will start with an $http_ prefix, followed by the header name in lowercase, with any dashes replaced by underscores.
  • preference to: the host name from the request line itself
  • set the "Host" header to the $host variable. It is the most flexible and will usually provide the proxied servers with a "Host" header filled in as accurately as possible
  • sets the "Host" header to the $host variable, which should contain information about the original host being requested
  • This variable takes the value of the original X-Forwarded-For header retrieved from the client and adds the Nginx server's IP address to the end.
  • The upstream directive must be set in the http context of your Nginx configuration.
  • http context
  • Once defined, this name will be available for use within proxy passes as if it were a regular domain name
  • By default, this is just a simple round-robin selection process (each request will be routed to a different host in turn)
  • Specifies that new connections should always be given to the backend that has the least number of active connections.
  • distributes requests to different servers based on the client's IP address.
  • mainly used with memcached proxying
  • As for the hash method, you must provide the key to hash against
  • Server Weight
  • Nginx's buffering and caching capabilities
  • Without buffers, data is sent from the proxied server and immediately begins to be transmitted to the client.
  • With buffers, the Nginx proxy will temporarily store the backend's response and then feed this data to the client
  • Nginx defaults to a buffering design
  • can be set in the http, server, or location contexts.
  • the sizing directives are configured per request, so increasing them beyond your need can affect your performance
  • When buffering is "off" only the buffer defined by the proxy_buffer_size directive will be used
  • A high availability (HA) setup is an infrastructure without a single point of failure, and your load balancers are a part of this configuration.
  • multiple load balancers (one active and one or more passive) behind a static IP address that can be remapped from one server to another.
  • Nginx also provides a way to cache content from backend servers
  • The proxy_cache_path directive must be set in the http context.
  • proxy_cache backcache;
    • 張 旭
       
      The backcache here is the backcache zone defined earlier; it looks like each location can have its own cache (see the config sketch at the end of this entry).
  • The proxy_cache_bypass directive is set to the $http_cache_control variable. This will contain an indicator as to whether the client is explicitly requesting a fresh, non-cached version of the resource
  • any user-related data should not be cached
  • For private content, you should set the Cache-Control header to "no-cache", "no-store", or "private" depending on the nature of the data
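
A sketch that ties the directives above together: an upstream pool, rewritten proxy headers, buffering, and a cache zone. The pool name, the backcache zone, the IP addresses, and the file path are illustrative, not taken from the article:

cat > /etc/nginx/conf.d/proxy.conf <<'EOF'
# Cache zone and upstream pool must live in the http context
proxy_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=backcache:8m max_size=50m;

upstream backend_pool {
    least_conn;                  # pick the backend with the fewest active connections
    server 10.0.0.11:8080 weight=2;
    server 10.0.0.12:8080;
}

server {
    listen 80;

    location /match/here {
        proxy_set_header Host $host;                    # preserve the original Host
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        proxy_buffering on;                             # Nginx defaults to buffering
        proxy_buffers 16 4k;
        proxy_buffer_size 2k;

        proxy_cache backcache;                          # the zone defined above
        proxy_cache_bypass $http_cache_control;

        proxy_pass http://backend_pool;
    }
}
EOF

nginx -t && nginx -s reload    # check the config and reload
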
張 旭

MySQL :: MySQL 5.7 Reference Manual :: 19.1 Group Replication Background - 0 views

  • the component can be removed and the system should continue to operate as expected
  • network partitioning
  • split brain scenarios
  • the ultimate challenge is to fuse the logic of the database and data replication with the logic of having several servers coordinated in a consistent and simple way
  • MySQL Group Replication provides distributed state machine replication with strong coordination between servers.
  • Servers coordinate themselves automatically when they are part of the same group
  • The group can operate in a single-primary mode with automatic primary election, where only one server accepts updates at a time.
  • For a transaction to commit, the majority of the group have to agree on the order of a given transaction in the global sequence of transactions
  • Deciding to commit or abort a transaction is done by each server individually, but all servers make the same decision
  • group communication protocols
  • the Paxos algorithm. It acts as the group communication systems engine.
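
A hedged my.cnf sketch for one member of a MySQL 5.7 single-primary group, pieced together from the reference manual; the server id, addresses, and group UUID are placeholders, and every member needs its own local address and server_id:

cat >> /etc/my.cnf <<'EOF'
[mysqld]
server_id                             = 1
gtid_mode                             = ON
enforce_gtid_consistency              = ON
binlog_checksum                       = NONE        # required by Group Replication in 5.7
log_bin                               = binlog
log_slave_updates                     = ON
binlog_format                         = ROW
master_info_repository                = TABLE
relay_log_info_repository             = TABLE

plugin_load_add                       = group_replication.so
group_replication_group_name          = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
group_replication_start_on_boot       = OFF
group_replication_local_address       = "192.0.2.11:33061"
group_replication_group_seeds         = "192.0.2.11:33061,192.0.2.12:33061,192.0.2.13:33061"
group_replication_single_primary_mode = ON
EOF

# Bootstrap the group from the first member only, then START GROUP_REPLICATION
# on the others so they join and sync:
#   SET GLOBAL group_replication_bootstrap_group = ON;
#   START GROUP_REPLICATION;
#   SET GLOBAL group_replication_bootstrap_group = OFF;
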
張 旭

Open source load testing tool review 2020 - 0 views

  • Hey is a simple tool, written in Go, with good performance and the most common features you'll need to run simple static URL tests.
  • Hey supports HTTP/2, which neither Wrk nor Apachebench does
  • Apachebench is very fast, so often you will not need more than one CPU core to generate enough traffic
  • Hey has rate limiting, which can be used to run fixed-rate tests.
  • Vegeta was designed to be run on the command line; it reads from stdin a list of HTTP transactions to generate, and sends results in binary format to stdout,
  • Vegeta is a really strong tool that caters to people who want a tool to test simple, static URLs (perhaps API end points) but also want a bit more functionality.
  • Vegeta can even be used as a Golang library/package if you want to create your own load testing tool.
  • Wrk is so damn fast
  • being fast and measuring correctly is about all that Wrk does
  • k6 is scriptable in plain Javascript
  • k6 is average or better. In some categories (documentation, scripting API, command line UX) it is outstanding.
  • Jmeter is a huge beast compared to most other tools.
  • Siege is a simple tool, similar to e.g. Apachebench in that it has no scripting and is primarily used when you want to hit a single, static URL repeatedly.
  • A good way of testing the testing tools is to not test them on your code, but on some third-party thing that is sure to be very high-performing.
  • use a tool like e.g. top to keep track of Nginx CPU usage while testing. If you see just one process, and see it using close to 100% CPU, it means you could be CPU-bound on the target side.
  • If you see multiple Nginx processes but only one is using a lot of CPU, it means your load testing tool is only talking to that particular worker process.
  • Network delay is also important to take into account as it sets an upper limit on the number of requests per second you can push through.
  • If, say, the Nginx default page requires a transfer of 250 bytes to load, it means that if the servers are connected via a 100 Mbit/s link, the theoretical max RPS rate would be around 100,000,000 divided by 8 (bits per byte) divided by 250 => 100M/2000 = 50,000 RPS. Though that is a very optimistic calculation - protocol overhead will make the actual number a lot lower so in the case above I would start to get worried bandwidth was an issue if I saw I could push through max 30,000 RPS, or something like that.
  • Wrk managed to push through over 50,000 RPS and that made 8 Nginx workers on the target system consume about 600% CPU.
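
For comparison, roughly equivalent invocations of the tools reviewed above against a single static URL; the durations, concurrency levels, and target host are arbitrary placeholders:

# Apachebench: 10,000 requests, 100 concurrent connections
ab -n 10000 -c 100 http://target.example/

# Hey: 30-second test, 100 workers, HTTP/2 capable
hey -z 30s -c 100 http://target.example/

# Wrk: 4 threads, 100 connections, 30 seconds
wrk -t4 -c100 -d30s http://target.example/

# Vegeta: fixed-rate test read from stdin, binary results piped to a report
echo "GET http://target.example/" | vegeta attack -rate=1000 -duration=30s | vegeta report

# Siege: 25 concurrent users for 30 seconds
siege -c 25 -t 30S http://target.example/

# k6: scripted in plain JavaScript (script.js defines the scenario)
k6 run --vus 100 --duration 30s script.js

# While a test runs, watch the target's Nginx worker CPU usage with top
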
張 旭

Introducing Infrastructure as Code | Linode - 0 views

  • Infrastructure as Code (IaC) is a technique for deploying and managing infrastructure using software, configuration files, and automated tools.
  • With the older methods, technicians must configure a device manually, perhaps with the aid of an interactive tool. Information is added to configuration files by hand or through the use of ad-hoc scripts. Configuration wizards and similar utilities are helpful, but they still require hands-on management. A small group of experts owns the expertise, the process is typically poorly defined, and errors are common.
  • The development of the continuous integration and continuous delivery (CI/CD) pipeline made the idea of treating infrastructure as software much more attractive.
  • Infrastructure as Code takes advantage of the software development process, making use of quality assurance and test automation techniques.
  • Consistency/Standardization
  • Each node in the network becomes what is known as a snowflake, with its own unique settings. This leads to a system state that cannot easily be reproduced and is difficult to debug.
  • With standard configuration files and software-based configuration, there is greater consistency between all equipment of the same type. A key IaC concept is idempotence.
  • Idempotence makes it easy to troubleshoot, test, stabilize, and upgrade all the equipment.
  • Infrastructure as Code is central to the culture of DevOps, which is a mix of development and operations
  • edits are always made to the source configuration files, never on the target.
  • A declarative approach describes the final state of a device, but does not mandate how it should get there. The specific IaC tool makes all the procedural decisions. The end state is typically defined through a configuration file, a JSON specification, or a similar encoding.
  • An imperative approach defines specific functions or procedures that must be used to configure the device. It focuses on what must happen, but does not necessarily describe the final state. Imperative techniques typically use scripts for the implementation.
  • With a push configuration, the central server pushes the configuration to the destination device.
  • If a device is mutable, its configuration can be changed while it is active
  • Immutable devices cannot be changed. They must be decommissioned or rebooted and then completely rebuilt.
  • an immutable approach ensures consistency and avoids drift. However, it usually takes more time to remove or rebuild a configuration than it does to change it.
  • System administrators should consider security issues as part of the development process.
  • Ansible is a very popular open source IaC application from Red Hat
  • Ansible is often used in conjunction with Kubernetes and Docker.
  • Linode offers a collection of several Ansible guides for a more comprehensive overview.
  • Pulumi permits the use of a variety of programming languages to deploy and manage infrastructure within a cloud environment.
  • Terraform allows users to provision data center infrastructure using either JSON or Terraform’s own declarative language.
  • Terraform manages resources through the use of providers, which are similar to APIs.
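
To make the declarative, idempotent idea concrete, here is a minimal Ansible sketch (the host group, inventory file, and package are placeholders); running it a second time changes nothing because the described end state is already in place:

cat > webserver.yml <<'EOF'
---
# Declarative: describe the desired end state, not the steps to get there
- hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed
      package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true
EOF

# Edits are always made to the source file, never on the target:
ansible-playbook -i inventory.ini webserver.yml
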
張 旭

kubernetes 简介:service 和 kube-proxy 原理 | Cizixs Write Here - 0 views

  • Kubernetes' networking requirements: containers (both on the same host and across different hosts) must be able to communicate with each other, and containers must also be able to communicate directly with every node in the cluster.
  • Cross-host network configuration: flannel
  • flannel can also be used in the form of a CNI plugin.
  • You can obtain every pod's IP address from the cluster, and then reach the corresponding service inside the cluster directly via podIP:Port.
  • Pods change frequently, and their IP addresses may change on every update, so accessing container IPs directly causes serious problems.
  • A Service gives each service a fixed virtual IP (also called the cluster IP) that automatically and dynamically binds to the pods behind it; all network requests go directly to the service IP, and the service forwards them to the backend.
  • The key to implementing the Service feature is kube-proxy.
  • kube-proxy runs on every node, watches the API Server for changes to Service objects, and implements network forwarding by managing iptables.
  • kube-proxy requires that the node operating system has the /sys/module/br_netfilter file and that bridge-nf-call-iptables=1 is set.
  • The iptables mode implements services entirely with iptables; it is currently the default and recommended mode and is very efficient (only some netfilter overhead in the kernel).
  • kube-proxy can be started from a terminal, or managed with a tool such as systemd.
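
A short sketch of the Service / cluster-IP mechanism described above, using placeholder deployment and service names:

# Run an app and put a Service (a fixed virtual cluster IP) in front of its pods
kubectl create deployment hello --image=nginx
kubectl scale deployment hello --replicas=3
kubectl expose deployment hello --port=80 --target-port=80

# The Service keeps a stable IP even though pod IPs change on every update
kubectl get service hello
kubectl get endpoints hello

# On a node, kube-proxy materializes that virtual IP as iptables rules
sudo iptables -t nat -L KUBE-SERVICES -n | grep hello

# Node prerequisites mentioned above
ls /sys/module/br_netfilter
sudo sysctl net.bridge.bridge-nf-call-iptables=1
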
張 旭

LXC vs Docker: Why Docker is Better | UpGuard - 0 views

  • LXC (LinuX Containers) is an OS-level virtualization technology that allows creation and running of multiple isolated Linux virtual environments (VE) on a single control host.
  • Docker, previously called dotCloud, was started as a side project and only open-sourced in 2013. It is really an extension of LXC’s capabilities.
  • run processes in isolation.
  • Docker is developed in the Go language and utilizes LXC, cgroups, and the Linux kernel itself. Since it’s based on LXC, a Docker container does not include a separate operating system; instead it relies on the operating system’s own functionality as provided by the underlying infrastructure.
  • Docker acts as a portable container engine, packaging the application and all its dependencies in a virtual container that can run on any Linux server.
  • In a VE there is no preloaded emulation manager software as in a VM.
  • In a VE, the application (or OS) is spawned in a container and runs with no added overhead, except for a usually minuscule VE initialization process.
  • LXC will boast bare metal performance characteristics because it only packages the needed applications.
  • the OS is also just another application that can be packaged too.
  • a VM, which packages the entire OS and machine setup, including hard drive, virtual processors and network interfaces. The resulting bloated mass usually takes a long time to boot and consumes a lot of CPU and RAM.
  • VEs don’t offer some other neat features of VMs, such as IaaS setups and live migration.
  • Think of LXC as a supercharged chroot on Linux. It allows you to not only isolate applications, but even the entire OS.
  • Libvirt, which allows the use of containers through the LXC driver by connecting to 'lxc:///'.
  • 'LXC' is not compatible with libvirt, but is more flexible with more userspace tools.
  • Portable deployment across machines
  • Versioning: Docker includes git-like capabilities for tracking successive versions of a container
  • Component reuse: Docker allows building or stacking of already created packages.
  • Shared libraries: There is already a public registry (http://index.docker.io/ ) where thousands have already uploaded the useful containers they have created.
  • Docker has been taking the devops world by storm since its launch back in 2013.
  • LXC, while older, has not been as popular with developers as Docker has proven to be
  • LXC has a focus on sys admins that's similar to solutions like the Solaris operating system with its Solaris Zones, Linux OpenVZ, and FreeBSD with its BSD Jails virtualization system.
  • Although it started out being built on top of LXC, Docker later moved beyond LXC containers to its own execution environment called libcontainer.
  • Unlike LXC, which launches an operating system init for each container, Docker provides one OS environment, supplied by the Docker Engine
  • LXC tooling sticks close to what system administrators running bare metal servers are used to
  • The LXC command line provides essential commands that cover routine management tasks, including the creation, launch, and deletion of LXC containers.
  • Docker containers aim to be even lighter weight in order to support the fast, highly scalable, deployment of applications with microservice architecture.
  • With backing from Canonical, LXC and LXD have an ecosystem tightly bound to the rest of the open source Linux community.
  • Docker Swarm
  • Docker Trusted Registry
  • Docker Compose
  • Docker Machine
  • Kubernetes facilitates the deployment of containers in your data center by representing a cluster of servers as a single system.
  • Swarm is Docker’s clustering, scheduling and orchestration tool for managing a cluster of Docker hosts. 
  • rkt is a security minded container engine that uses KVM for VM-based isolation and packs other enhanced security features. 
  • Apache Mesos can run different kinds of distributed jobs, including containers. 
  • Elastic Container Service is Amazon’s service for running and orchestrating containerized applications on AWS
  • LXC offers the advantages of a VE on Linux, mainly the ability to isolate your own private workloads from one another. It is a cheaper and faster solution to implement than a VM, but doing so requires a bit of extra learning and expertise.
  • Docker is a significant improvement of LXC’s capabilities.
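
For a feel of the tooling difference, here are roughly parallel commands; the container names, image, and port mapping are placeholders:

# LXC: create, start, enter, and remove a full Ubuntu userspace container
lxc-create -n web01 -t download -- -d ubuntu -r focal -a amd64
lxc-start -n web01
lxc-attach -n web01
lxc-stop -n web01 && lxc-destroy -n web01

# Docker: run a single application process from an image
docker run -d --name web01 -p 8080:80 nginx
docker ps
docker stop web01 && docker rm web01
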
張 旭

Kubernetes Deployments: The Ultimate Guide - Semaphore - 1 views

  • Continuous integration gives you confidence in your code. To extend that confidence to the release process, your deployment operations need to come with a safety belt.
  • these Kubernetes objects ensure that you can progressively deploy, roll back and scale your applications without downtime.
  • A pod is just a group of containers (it can be a group of one container) that run on the same machine, and share a few things together.
  • the containers within a pod can communicate with each other over localhost
  • From a network perspective, all the processes in these containers are local.
  • we can never create a standalone container: the closest we can do is create a pod, with a single container in it.
  • Kubernetes is a declarative system (as opposed to imperative systems).
  • All we can do, is describe what we want to have, and wait for Kubernetes to take action to reconcile what we have, with what we want to have.
  • In other words, we can say, “I would like a 40-feet long blue container with yellow doors“, and Kubernetes will find such a container for us. If it doesn’t exist, it will build it; if there is already one but it’s green with red doors, it will paint it for us; if there is already a container of the right size and color, Kubernetes will do nothing, since what we have already matches what we want.
  • The specification of a replica set looks very much like the specification of a pod, except that it carries a number, indicating how many replicas
  • What happens if we change that definition? Suddenly, there are zero pods matching the new specification.
  • the creation of new pods could happen in a more gradual manner.
  • the specification for a deployment looks very much like the one for a replica set: it features a pod specification, and a number of replicas.
  • Deployments, however, don’t create or delete pods directly.
  • When we update a deployment and adjust the number of replicas, it passes that update down to the replica set.
  • When we update the pod specification, the deployment creates a new replica set with the updated pod specification. That replica set has an initial size of zero. Then, the size of that replica set is progressively increased, while decreasing the size of the other replica set.
  • we are going to fade in (turn up the volume) on the new replica set, while we fade out (turn down the volume) on the old one.
  • During the whole process, requests are sent to pods of both the old and new replica sets, without any downtime for our users.
  • A readiness probe is a test that we add to a container specification.
  • Kubernetes supports three ways of implementing readiness probes:Running a command inside a container;Making an HTTP(S) request against a container; orOpening a TCP socket against a container.
  • When we roll out a new version, Kubernetes will wait for the new pod to mark itself as “ready” before moving on to the next one.
  • If there is no readiness probe, then the container is considered as ready, as long as it could be started.
  • MaxSurge indicates how many extra pods we are willing to run during a rolling update, while MaxUnavailable indicates how many pods we can lose during the rolling update.
  • Setting MaxUnavailable to 0 means, “do not shutdown any old pod before a new one is up and ready to serve traffic“.
  • Setting MaxSurge to 100% means, “immediately start all the new pods“, implying that we have enough spare capacity on our cluster, and that we want to go as fast as possible.
  • kubectl rollout undo deployment web
  • the replica set doesn’t look at the pods’ specifications, but only at their labels.
  • A replica set contains a selector, which is a logical expression that “selects” (just like a SELECT query in SQL) a number of pods.
  • it is absolutely possible to manually create pods with these labels, but running a different image (or with different settings), and fool our replica set.
  • Selectors are also used by services, which act as the load balancers for Kubernetes traffic, internal and external.
  • internal IP address (denoted by the name ClusterIP)
  • during a rollout, the deployment doesn’t reconfigure or inform the load balancer that pods are started and stopped. It happens automatically through the selector of the service associated to the load balancer.
  • a pod is added as a valid endpoint for a service only if all its containers pass their readiness check. In other words, a pod starts receiving traffic only once it’s actually ready for it.
  • In blue/green deployment, we want to instantly switch over all the traffic from the old version to the new, instead of doing it progressively
  • We can achieve blue/green deployment by creating multiple deployments (in the Kubernetes sense), and then switching from one to another by changing the selector of our service
  • kubectl label pods -l app=blue,version=v1.5 status=enabled
  • kubectl label pods -l app=blue,version=v1.4 status-
  •  
    "Continuous integration gives you confidence in your code. To extend that confidence to the release process, your deployment operations need to come with a safety belt."
張 旭

Helm | - 0 views

  • Templates generate manifest files, which are YAML-formatted resource descriptions that Kubernetes can understand.
  • service.yaml: A basic manifest for creating a service endpoint for your deployment
  • In Kubernetes, a ConfigMap is simply a container for storing configuration data.
  • deployment.yaml: A basic manifest for creating a Kubernetes deployment
  • using the suffix .yaml for YAML files and .tpl for helpers.
  • It is just fine to put a plain YAML file like this in the templates/ directory.
  • helm get manifest
  • The helm get manifest command takes a release name (full-coral) and prints out all of the Kubernetes resources that were uploaded to the server. Each file begins with --- to indicate the start of a YAML document
  • Names should be unique to a release
  • The name: field is limited to 63 characters because of limitations to the DNS system.
  • release names are limited to 53 characters
  • {{ .Release.Name }}
  • A template directive is enclosed in {{ and }} blocks.
  • The values that are passed into a template can be thought of as namespaced objects, where a dot (.) separates each namespaced element.
  • The leading dot before Release indicates that we start with the top-most namespace for this scope
  • The Release object is one of the built-in objects for Helm
  • When you want to test the template rendering, but not actually install anything, you can use helm install ./mychart --debug --dry-run
  • Using --dry-run will make it easier to test your code, but it won’t ensure that Kubernetes itself will accept the templates you generate.
  • Objects are passed into a template from the template engine.
  • create new objects within your templates
  • Objects can be simple, and have just one value. Or they can contain other objects or functions.
  • Release is one of the top-level objects that you can access in your templates.
  • Release.Namespace: The namespace to be released into (if the manifest doesn’t override)
  • Values: Values passed into the template from the values.yaml file and from user-supplied files. By default, Values is empty.
  • Chart: The contents of the Chart.yaml file.
  • Files: This provides access to all non-special files in a chart.
  • Files.Get is a function for getting a file by name
  • Files.GetBytes is a function for getting the contents of a file as an array of bytes instead of as a string. This is useful for things like images.
  • Template: Contains information about the current template that is being executed
  • BasePath: The namespaced path to the templates directory of the current chart
  • The built-in values always begin with a capital letter.
  • Go’s naming convention
  • use only initial lower case letters in order to distinguish local names from those built-in.
  • If this is a subchart, the values.yaml file of a parent chart
  • Individual parameters passed with --set
  • values.yaml is the default, which can be overridden by a parent chart’s values.yaml, which can in turn be overridden by a user-supplied values file, which can in turn be overridden by --set parameters.
  • While structuring data this way is possible, the recommendation is that you keep your values trees shallow, favoring flatness.
  • If you need to delete a key from the default values, you may override the value of the key to be null, in which case Helm will remove the key from the overridden values merge.
  • Kubernetes would then fail because you can not declare more than one livenessProbe handler.
  • When injecting strings from the .Values object into the template, we ought to quote these strings.
  • quote
  • Template functions follow the syntax functionName arg1 arg2...
  • While we talk about the “Helm template language” as if it is Helm-specific, it is actually a combination of the Go template language, some extra functions, and a variety of wrappers to expose certain objects to the templates.
  • Drawing on a concept from UNIX, pipelines are a tool for chaining together a series of template commands to compactly express a series of transformations.
  • pipelines are an efficient way of getting several things done in sequence
  • The repeat function will echo the given string the given number of times
  • default DEFAULT_VALUE GIVEN_VALUE. This function allows you to specify a default value inside of the template, in case the value is omitted.
  • all static default values should live in the values.yaml, and should not be repeated using the default command
  • Operators are implemented as functions that return a boolean value.
  • To use eq, ne, lt, gt, and, or, not etcetera place the operator at the front of the statement followed by its parameters just as you would a function.
  • if and
  • if or
  • with to specify a scope
  • range, which provides a “for each”-style loop
  • block declares a special kind of fillable template area
  • A pipeline is evaluated as false if the value is: a boolean false a numeric zero an empty string a nil (empty or null) an empty collection (map, slice, tuple, dict, array)
  • incorrect YAML because of the whitespacing
  • When the template engine runs, it removes the contents inside of {{ and }}, but it leaves the remaining whitespace exactly as is.
  • {{- (with the dash and space added) indicates that whitespace should be chomped left, while -}} means whitespace to the right should be consumed.
  • Newlines are whitespace!
  • an * at the end of the line indicates a newline character that would be removed
  • Be careful with the chomping modifiers.
  • the indent function
  • Scopes can be changed. with can allow you to set the current scope (.) to a particular object.
  • Inside of the restricted scope, you will not be able to access the other objects from the parent scope.
  • range
  • The range function will “range over” (iterate through) the pizzaToppings list.
  • Just as a with block sets the scope of ., so does a range operator.
  • The toppings: |- line is declaring a multi-line string.
  • not a YAML list. It’s a big string.
  • the data in ConfigMaps is composed of key/value pairs, where both the key and the value are simple strings.
  • The |- marker in YAML takes a multi-line string.
  • range can be used to iterate over collections that have a key and a value (like a map or dict).
  • In Helm templates, a variable is a named reference to another object. It follows the form $name
  • Variables are assigned with a special assignment operator: :=
  • {{- $relname := .Release.Name -}}
  • capture both the index and the value
  • the integer index (starting from zero) to $index and the value to $topping
  • For data structures that have both a key and a value, we can use range to get both
  • Variables are normally not “global”. They are scoped to the block in which they are declared.
  • one variable that is always global - $ - this variable will always point to the root context.
  • $.
  • $.
  • Helm template language is its ability to declare multiple templates and use them together.
  • A named template (sometimes called a partial or a subtemplate) is simply a template defined inside of a file, and given a name.
  • when naming templates: template names are global.
  • If you declare two templates with the same name, whichever one is loaded last will be the one used.
  • you should be careful to name your templates with chart-specific names.
  • templates in subcharts are compiled together with top-level templates
  • naming convention is to prefix each defined template with the name of the chart: {{ define "mychart.labels" }}
  • Helm has over 60 available functions.
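
A small chart sketch exercising the directives mentioned above (built-in objects, quote, default, pipelines, with, and range); the chart name and values are made up:

mkdir -p mychart/templates

cat > mychart/Chart.yaml <<'EOF'
apiVersion: v1
name: mychart
version: 0.1.0
EOF

cat > mychart/values.yaml <<'EOF'
favorite:
  drink: coffee
  food: pizza
pizzaToppings:
  - mushrooms
  - cheese
EOF

cat > mychart/templates/configmap.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  # Release is a built-in object; names should be unique to a release
  name: {{ .Release.Name }}-configmap
data:
  {{- with .Values.favorite }}
  drink: {{ .drink | default "tea" | quote }}
  food: {{ .food | upper | quote }}
  {{- end }}
  toppings: |-
    {{- range .Values.pizzaToppings }}
    - {{ . | title | quote }}
    {{- end }}
EOF

# Render the templates without installing anything (Helm 2 syntax)
helm install ./mychart --dry-run --debug
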
張 旭

Kubernetes 基本概念 · Kubernetes指南 - 0 views

  • A Container is a portable, lightweight, operating-system-level virtualization technology. It uses namespaces to isolate different software runtime environments and packages the runtime environment in a self-contained image, so a container can easily run anywhere.
  • When every application is packaged in a container, managing container deployments is equivalent to managing application deployments.
  • A Pod is a set of tightly coupled containers that share the PID, IPC, network, and UTS namespaces; it is the basic unit of Kubernetes scheduling.
  • inter-process communication and file sharing
  • In Kubernetes, all objects are defined with manifests (YAML or JSON).
  • A Node is the host on which Pods actually run; it can be a physical machine or a virtual machine.
  • Each Node must run at least a container runtime (such as docker or rkt), kubelet, and kube-proxy.
  • Common objects such as pods, services, replication controllers, and deployments belong to a particular namespace (default by default)
  • whereas node, persistentVolumes, and the like do not belong to any namespace.
  • A Service is an abstraction of an application service that uses labels to provide load balancing and service discovery for the application.
  • The IPs and ports of the Pods matching the labels form the endpoints, and kube-proxy is responsible for load-balancing the service IP across these endpoints.
  • Each Service is automatically assigned a cluster IP (a virtual address reachable only inside the cluster) and a DNS name.
  •  
    "常见的 pods, services, replication controllers 和 deployments 等都是属于某一个 namespace 的(默认是 default),而 node, persistentVolumes 等则不属于任何 namespace。"
張 旭

MySQL cluster vs Galera - How to make the right choice - 0 views

  • there is no “one size fits all” solution when it comes to database clustering.
  • MySQL cluster contains the data nodes that store the cluster data and the management node that stores the cluster’s configuration.
  • MySQL clients first communicate with the management node and then connect directly to these data nodes.
  • ...13 more annotations...
  • For synchronization of data in the data nodes, MySQL cluster uses a special data engine called NDB (Network Database).
  • it uses automatic sharding, aka splitting of a large database into small units.
  • MySQL cluster avoids a single point of failure and ensures 99.99% availability.
  • MySQL cluster can provide response times of less than 3 ms.
  • Galera Cluster consists of a database server and uses the Galera Replication Plugin to manage replication.
  • a multi-master database cluster that supports synchronous replication.
  • it provides multiple, up-to-date copies of the data.
  • there is a need for instant fail-over.
  • Galera cluster allows the read and write of data in any node.
  • Benefits of Galera cluster include guaranteed write consistency, automatic node provisioning, etc.
  • Upon restoring the connection, the separated nodes will sync back and rejoin the cluster automatically.
  • there is no need to have management node like MySQL cluster.
  • it gives best results with the InnoDB storage engine.
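
A hedged my.cnf sketch for one Galera node; the cluster name, node addresses, SST method, and provider library path (which varies by distribution) are placeholders, and the other nodes differ only in their own address:

cat >> /etc/my.cnf <<'EOF'
[mysqld]
binlog_format            = ROW
default_storage_engine   = InnoDB      # Galera gives best results with InnoDB
innodb_autoinc_lock_mode = 2

wsrep_on                 = ON
wsrep_provider           = /usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name       = "demo_cluster"
wsrep_cluster_address    = "gcomm://192.0.2.21,192.0.2.22,192.0.2.23"
wsrep_node_address       = "192.0.2.21"
wsrep_sst_method         = rsync
EOF

# Bootstrap the first node, then start the others normally so they join and sync
galera_new_cluster            # or: mysqld --wsrep-new-cluster
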
張 旭

Replication - MongoDB Manual - 0 views

  • A replica set in MongoDB is a group of mongod processes that maintain the same data set.
  • Replica sets provide redundancy and high availability, and are the basis for all production deployments.
  • With multiple copies of data on different database servers, replication provides a level of fault tolerance against the loss of a single database server.
  • replication can provide increased read capacity as clients can send read operations to different servers.
  • A replica set is a group of mongod instances that maintain the same data set.
  • A replica set contains several data bearing nodes and optionally one arbiter node.
  • one and only one member is deemed the primary node, while the other nodes are deemed secondary nodes.
  • A replica set can have only one primary capable of confirming writes with { w: "majority" } write concern; although in some circumstances, another mongod instance may transiently believe itself to also be primary.
  • The secondaries replicate the primary’s oplog and apply the operations to their data sets such that the secondaries’ data sets reflect the primary’s data set
  • add a mongod instance to a replica set as an arbiter. An arbiter participates in elections but does not hold data
  • An arbiter will always be an arbiter whereas a primary may step down and become a secondary and a secondary may become the primary during an election.
  • Secondaries replicate the primary’s oplog and apply the operations to their data sets asynchronously.
  • These slow oplog messages are logged for the secondaries in the diagnostic log under the REPL component with the text applied op: <oplog entry> took <num>ms.
  • Replication lag refers to the amount of time that it takes to copy (i.e. replicate) a write operation on the primary to a secondary.
  • When a primary does not communicate with the other members of the set for more than the configured electionTimeoutMillis period (10 seconds by default), an eligible secondary calls for an election to nominate itself as the new primary.
  • The replica set cannot process write operations until the election completes successfully.
  • The median time before a cluster elects a new primary should not typically exceed 12 seconds, assuming default replica configuration settings.
  • Factors such as network latency may extend the time required for replica set elections to complete, which in turn affects the amount of time your cluster may operate without a primary.
  • Your application connection logic should include tolerance for automatic failovers and the subsequent elections.
  • MongoDB drivers can detect the loss of the primary and automatically retry certain write operations a single time, providing additional built-in handling of automatic failovers and elections
  • By default, clients read from the primary [1]; however, clients can specify a read preference to send read operations to secondaries.
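
A sketch of a local three-member replica set matching the description above; the ports, data paths, and set name are placeholders:

# Start three mongod instances that belong to the same replica set
mongod --replSet rs0 --port 27017 --dbpath /srv/mongodb/rs0-0 --fork --logpath /var/log/mongodb/rs0-0.log
mongod --replSet rs0 --port 27018 --dbpath /srv/mongodb/rs0-1 --fork --logpath /var/log/mongodb/rs0-1.log
mongod --replSet rs0 --port 27019 --dbpath /srv/mongodb/rs0-2 --fork --logpath /var/log/mongodb/rs0-2.log

# Initiate the set; one member is elected primary, the others become secondaries
mongo --port 27017 --eval '
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "localhost:27017" },
    { _id: 1, host: "localhost:27018" },
    { _id: 2, host: "localhost:27019" }
  ]
})'

# Check which member is primary and watch replication state
mongo --port 27017 --eval 'rs.status()'
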
張 旭

Deploy a Replica Set - MongoDB Manual - 0 views

  • Three member replica sets provide enough redundancy to survive most network partitions and other system failures.
張 旭

Introducing the MinIO Operator and Operator Console - 0 views

  • Object-storage-as-a-service is a game changer for IT.
  • provision multi-tenant object storage as a service.
  • have the skill set to create, deploy, tune, scale and manage modern, application oriented object storage using Kubernetes
  • MinIO is purpose-built to take full advantage of the Kubernetes architecture.
  • MinIO and Kubernetes work together to simplify infrastructure management, providing a way to manage object storage infrastructure within the Kubernetes toolset.  
  • The operator pattern extends Kubernetes's familiar declarative API model with custom resource definitions (CRDs) to perform common operations like resource orchestration, non-disruptive upgrades, cluster expansion and to maintain high-availability
  • The Operator uses the command set kubectl that the Kubernetes community was already familiar with and adds the kubectl minio plugin. The MinIO Operator and the MinIO kubectl plugin facilitate the deployment and management of MinIO Object Storage on Kubernetes - which is how multi-tenant object storage as a service is delivered.
  • choosing a leader for a distributed application without an internal member election process
  • The Operator Console makes Kubernetes object storage easier still. In this graphical user interface, MinIO created something so simple that anyone in the organization can create, deploy and manage object storage as a service.
  • The primary unit of managing MinIO on Kubernetes is the tenant.
  • The MinIO Operator can allocate multiple tenants within the same Kubernetes cluster.
  • Each tenant, in turn, can have different capacity (i.e. a small 500GB tenant vs a 100TB tenant), resources (1000m CPU and 4Gi RAM vs 4000m CPU and 16Gi RAM) and servers (4 pods vs 16 pods), as well as separate configurations regarding Identity Providers, Encryption and versions.
  • each tenant is a cluster of server pools (independent sets of nodes with their own compute, network, and storage resources), that, while sharing the same physical infrastructure, are fully isolated from each other in their own namespaces.
  • Each tenant runs their own MinIO cluster, fully isolated from other tenants
  • Each tenant scales independently by federating clusters across geographies.