
Larvata / Group items tagged ios


張 旭

MetalLB, bare metal load-balancer for Kubernetes - 0 views

  • Kubernetes does not offer an implementation of network load-balancers (Services of type LoadBalancer) for bare metal clusters
  • If you’re not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state indefinitely when created.
  • Bare metal cluster operators are left with two lesser tools to bring user traffic into their clusters, “NodePort” and “externalIPs” services.
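A minimal illustration of the point above, assuming a plain nginx workload (names are illustrative): on a bare-metal cluster without MetalLB, this Service is created but never gets an external IP.

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx            # illustrative
    spec:
      type: LoadBalancer     # needs a load-balancer implementation to be fulfilled
      selector:
        app: nginx
      ports:
        - port: 80
          targetPort: 80

Without a cloud load-balancer integration, kubectl get svc nginx shows the EXTERNAL-IP stuck at <pending>; installing MetalLB with an address pool is what allows it to be assigned.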
張 旭

Introducing Infrastructure as Code | Linode - 0 views

  • Infrastructure as Code (IaC) is a technique for deploying and managing infrastructure using software, configuration files, and automated tools.
  • With the older methods, technicians must configure a device manually, perhaps with the aid of an interactive tool. Information is added to configuration files by hand or through the use of ad-hoc scripts. Configuration wizards and similar utilities are helpful, but they still require hands-on management. A small group of experts owns the expertise, the process is typically poorly defined, and errors are common.
  • The development of the continuous integration and continuous delivery (CI/CD) pipeline made the idea of treating infrastructure as software much more attractive.
  • ...20 more annotations...
  • Infrastructure as Code takes advantage of the software development process, making use of quality assurance and test automation techniques.
  • Consistency/Standardization
  • Each node in the network becomes what is known as a snowflake, with its own unique settings. This leads to a system state that cannot easily be reproduced and is difficult to debug.
  • With standard configuration files and software-based configuration, there is greater consistency between all equipment of the same type. A key IaC concept is idempotence.
  • Idempotence makes it easy to troubleshoot, test, stabilize, and upgrade all the equipment.
  • Infrastructure as Code is central to the culture of DevOps, which is a mix of development and operations
  • edits are always made to the source configuration files, never on the target.
  • A declarative approach describes the final state of a device, but does not mandate how it should get there. The specific IaC tool makes all the procedural decisions. The end state is typically defined through a configuration file, a JSON specification, or a similar encoding.
  • An imperative approach defines specific functions or procedures that must be used to configure the device. It focuses on what must happen, but does not necessarily describe the final state. Imperative techniques typically use scripts for the implementation.
  • With a push configuration, the central server pushes the configuration to the destination device.
  • If a device is mutable, its configuration can be changed while it is active
  • Immutable devices cannot be changed. They must be decommissioned or rebooted and then completely rebuilt.
  • an immutable approach ensures consistency and avoids drift. However, it usually takes more time to remove or rebuild a configuration than it does to change it.
  • System administrators should consider security issues as part of the development process.
  • Ansible is a very popular open source IaC application from Red Hat
  • Ansible is often used in conjunction with Kubernetes and Docker.
  • Linode offers a collection of several Ansible guides for a more comprehensive overview.
  • Pulumi permits the use of a variety of programming languages to deploy and manage infrastructure within a cloud environment.
  • Terraform allows users to provision data center infrastructure using either JSON or Terraform’s own declarative language.
  • Terraform manages resources through the use of providers, which are similar to APIs.
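As a small sketch of the declarative, idempotent style described above (the host group and package are illustrative, not from the article), an Ansible playbook states the desired end state and can be re-run without changing anything that is already correct:

    # playbook.yml -- illustrative sketch
    - hosts: webservers
      become: true
      tasks:
        - name: Ensure nginx is installed          # desired state, not a procedure
          ansible.builtin.package:
            name: nginx
            state: present
        - name: Ensure nginx is running and enabled
          ansible.builtin.service:
            name: nginx
            state: started
            enabled: true

Running the playbook a second time reports no changes, which is the idempotence the article highlights.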
張 旭

[Elasticsearch] Distributed Characteristics & How Distributed Search Works | 小信豬的原始部落 - 0 views

  • Horizontally scale storage capacity
  • Data HA: if a node goes down, no data is lost
  • To check the status of the nodes in the cluster, use the GET /_cat/nodes API
  • ...39 more annotations...
  • Decides which data node each shard is allocated to
  • Set up multiple master nodes for the cluster
  • Once the elected master node runs into problems, a new master node is elected
  • Every node starts as a master-eligible node by default; set node.master: false to turn this off
  • The node that handles a request is called the Coordinating Node; its job is to forward the request to the appropriate nodes
  • All nodes are Coordinating Nodes by default
  • A coordinating node can receive and handle a search request directly; it does not need to be relayed through the master node
  • A data node is a node that can store data; every node is a data node by default after startup, and this can be disabled by setting node.data: false
  • The master node decides how shards are distributed across the different data nodes
  • Every node keeps a copy of the cluster state
  • Only the master may modify the cluster state, and it is responsible for syncing it to the other nodes
  • Every node keeps detailed records of its own state
  • Shards are the foundation of Elasticsearch's distributed storage; they come in two kinds, primary shards & replica shards
  • Each shard is a Lucene instance
  • A primary shard's job is to spread indexed data across multiple data nodes, giving horizontal scalability of storage
  • The number of primary shards is fixed when the index is created and cannot be changed later; changing it requires a reindex
  • When a primary shard is lost, a replica shard can be promoted to primary to keep the data intact
  • The number of replica shards can be adjusted dynamically, so that every data node can hold a complete copy of the data
  • Starting with ES 7.0, the default is 1 primary shard and 0 replica shards
  • Configuring too many replica shards lowers the cluster's overall write performance
  • A replica shard must be allocated on a different data node than its primary shard
  • All of the primary shards may sit on the same data node
  • GET _cluster/health/<target> returns the current health status of the cluster
  • Yellow: primary shards are allocated normally, but replica shard allocation has problems
  • GET /_cat/shards/<target> returns the current shard status
  • Replica shards cannot be allocated, so the cluster health is yellow
  • If you are worried that rebooting a machine will trigger failover, you can delay replication for a while (by adjusting index.unassigned.node_left.delayed_timeout in the index settings) to avoid pointless data copying; this feature is called delayed allocation (a sketch follows after this list)
  • A red cluster means a primary shard has been lost; reads and writes are affected.
  • If the node comes back, data that was not yet written will be recovered from the translog
  • Once the index settings are set, the number of primary shards cannot be changed at will
  • Sending requests directly to the master node is not recommended; it works, but a large volume of requests to the master is a potential performance problem
  • The shard is the smallest unit of work in ES
  • A shard is a Lucene index
  • Writing the contents of the Index Buffer into a segment is the process called Refresh
  • Once a document has been refreshed into a segment, it becomes searchable
  • During a refresh, the segment is first written to the cache so it can be opened for queries
  • When a document is indexed it is also written to the transaction log, which by default is written to disk
  • Each shard has a corresponding transaction log
  • Because the transaction log is written to disk, a node recovering from a failure reads the transaction log first to restore its data
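A sketch of the delayed-allocation setting mentioned above (the 5m value is illustrative):

    PUT _all/_settings
    {
      "settings": {
        "index.unassigned.node_left.delayed_timeout": "5m"
      }
    }

With this in place, a node that drops out briefly (e.g. for a reboot) can rejoin before the cluster starts copying its shards elsewhere.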
張 旭

Storage Classes | Kubernetes - 0 views

  • A StorageClass provides a way for administrators to describe the "classes" of storage they offer.
  • Kubernetes itself is unopinionated about what classes represent.
  • Each StorageClass contains the fields provisioner, parameters, and reclaimPolicy, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned.
  • ...2 more annotations...
  • The name of a StorageClass object is significant, and is how users can request a particular class.
  • Administrators can specify a default StorageClass only for PVCs that don't request any particular class to bind to
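A minimal StorageClass sketch; the provisioner and parameters are provider-specific, and the AWS EBS values here are only an example:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: fast                         # the class name users put in their PVCs
    provisioner: kubernetes.io/aws-ebs   # provider-specific
    parameters:
      type: gp2
    reclaimPolicy: Delete

A class can be marked as the cluster default by annotating it with storageclass.kubernetes.io/is-default-class: "true", which is what makes it apply to PVCs that request no class.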
張 旭

Custom Resources | Kubernetes - 0 views

  • Custom resources are extensions of the Kubernetes API
  • A resource is an endpoint in the Kubernetes API that stores a collection of API objects of a certain kind
  • Custom resources can appear and disappear in a running cluster through dynamic registration
  • ...30 more annotations...
  • Once a custom resource is installed, users can create and access its objects using kubectl
  • When you combine a custom resource with a custom controller, custom resources provide a true declarative API.
  • A declarative API allows you to declare or specify the desired state of your resource and tries to keep the current state of Kubernetes objects in sync with the desired state.
  • Custom controllers can work with any kind of resource, but they are especially effective when combined with custom resources.
  • The Operator pattern combines custom resources and custom controllers.
  • the API represents a desired state, not an exact state.
  • define configuration of applications or infrastructure.
  • The main operations on the objects are CRUD-y (creating, reading, updating and deleting).
  • The client says "do this", and then gets an operation ID back, and has to check a separate Operation object to determine completion of the request.
  • The natural operations on the objects are not CRUD-y.
  • High bandwidth access (10s of requests per second sustained) needed.
  • Use a ConfigMap if any of the following apply
  • You want to put the entire config file into one key of a configMap.
  • You want to perform rolling updates via Deployment, etc., when the file is updated.
  • Use a secret for sensitive data, which is similar to a configMap but more secure.
  • You want to build new automation that watches for updates on the new object, and then CRUD other objects, or vice versa.
  • You want the object to be an abstraction over a collection of controlled resources, or a summarization of other resources.
  • CRDs are simple and can be created without any programming.
  • Aggregated APIs are subordinate API servers that sit behind the primary API server
  • CRDs allow users to create new types of resources without adding another API server
  • Defining a CRD object creates a new custom resource with a name and schema that you specify.
  • The name of a CRD object must be a valid DNS subdomain name
  • each resource in the Kubernetes API requires code that handles REST requests and manages persistent storage of objects.
  • The main API server delegates requests to you for the custom resources that you handle, making them available to all of its clients.
  • The new endpoints support CRUD basic operations via HTTP and kubectl
  • Custom resources consume storage space in the same way that ConfigMaps do.
  • Aggregated API servers may use the same storage as the main API server
  • CRDs always use the same authentication, authorization, and audit logging as the built-in resources of your API server.
  • most RBAC roles will not grant access to the new resources (except the cluster-admin role or any role created with wildcard rules).
  • CRDs and Aggregated APIs often come bundled with new role definitions for the types they add.
張 旭

Extend the Kubernetes API with CustomResourceDefinitions | Kubernetes - 0 views

  • When you create a new CustomResourceDefinition (CRD), the Kubernetes API Server creates a new RESTful resource path for each version you specify.
  • The CRD can be either namespaced or cluster-scoped, as specified in the CRD's scope field
  • deleting a namespace deletes all custom objects in that namespace.
  • ...7 more annotations...
  • CustomResourceDefinitions themselves are non-namespaced and are available to all namespaces.
  • Custom objects can contain custom fields. These fields can contain arbitrary JSON.
  • When you delete a CustomResourceDefinition, the server will uninstall the RESTful API endpoint and delete all custom objects stored in it
  • CustomResourceDefinitions store validated resource data in the cluster's persistence store, etcd.
  • By default, all unspecified fields for a custom resource, across all versions, are pruned.
  • The field json can store any JSON value, without anything being pruned.
  • Finalizers allow controllers to implement asynchronous pre-delete hooks.
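A condensed sketch of a CRD, in the spirit of the CronTab example from the Kubernetes documentation (group, kind, and fields are illustrative):

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: crontabs.stable.example.com   # must be <plural>.<group>, a valid DNS subdomain
    spec:
      group: stable.example.com
      scope: Namespaced                   # or Cluster
      names:
        plural: crontabs
        singular: crontab
        kind: CronTab
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
              properties:
                spec:
                  type: object
                  properties:
                    cronSpec:
                      type: string
                    image:
                      type: string

Once applied, the API server exposes /apis/stable.example.com/v1/namespaces/*/crontabs, and CronTab objects can be created and deleted with kubectl like any built-in resource.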
crazylion lee

Fig - 0 views

shared by crazylion lee on 30 May 21
張 旭

Logging Architecture | Kubernetes - 0 views

  • Application logs can help you understand what is happening inside your application
  • container engines are designed to support logging.
  • The easiest and most adopted logging method for containerized applications is writing to standard output and standard error streams.
  • ...26 more annotations...
  • In a cluster, logs should have a separate storage and lifecycle independent of nodes, pods, or containers. This concept is called cluster-level logging.
  • Cluster-level logging architectures require a separate backend to store, analyze, and query logs
  • Kubernetes does not provide a native storage solution for log data.
  • use kubectl logs --previous to retrieve logs from a previous instantiation of a container.
  • A container engine handles and redirects any output generated to a containerized application's stdout and stderr streams
  • The Docker JSON logging driver treats each line as a separate message.
  • By default, if a container restarts, the kubelet keeps one terminated container with its logs.
  • An important consideration in node-level logging is implementing log rotation, so that logs don't consume all available storage on the node
  • You can also set up a container runtime to rotate an application's logs automatically.
  • The two kubelet flags container-log-max-size and container-log-max-files can be used to configure the maximum size for each log file and the maximum number of files allowed for each container respectively.
  • The kubelet and container runtime do not run in containers.
  • On machines with systemd, the kubelet and container runtime write to journald. If systemd is not present, the kubelet and container runtime write to .log files in the /var/log directory.
  • System components inside containers always write to the /var/log directory, bypassing the default logging mechanism.
  • Kubernetes does not provide a native solution for cluster-level logging
  • Use a node-level logging agent that runs on every node.
  • implement cluster-level logging by including a node-level logging agent on each node.
  • the logging agent is a container that has access to a directory with log files from all of the application containers on that node.
  • the logging agent must run on every node, it is recommended to run the agent as a DaemonSet
  • Node-level logging creates only one agent per node and doesn't require any changes to the applications running on the node.
  • Containers write stdout and stderr, but with no agreed format. A node-level agent collects these logs and forwards them for aggregation.
  • Each sidecar container prints a log to its own stdout or stderr stream.
  • It is not recommended to write log entries with different formats to the same log stream
  • writing logs to a file and then streaming them to stdout can double disk usage.
  • If you have an application that writes to a single file, it's recommended to set /dev/stdout as the destination
  • it's recommended to use stdout and stderr directly and leave rotation and retention policies to the kubelet.
  • Using a logging agent in a sidecar container can lead to significant resource consumption. Moreover, you won't be able to access those logs using kubectl logs because they are not controlled by the kubelet.
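A sketch of the streaming-sidecar pattern mentioned above (names and paths are illustrative): the application writes a file into a shared emptyDir volume, and a sidecar tails that file to its own stdout, where the kubelet can collect it.

    apiVersion: v1
    kind: Pod
    metadata:
      name: counter                 # illustrative
    spec:
      volumes:
        - name: varlog
          emptyDir: {}
      containers:
        - name: app
          image: busybox
          command: ["/bin/sh", "-c", "while true; do date >> /var/log/app.log; sleep 1; done"]
          volumeMounts:
            - name: varlog
              mountPath: /var/log
        - name: log-streamer        # sidecar: streams the file to its stdout
          image: busybox
          command: ["/bin/sh", "-c", "tail -n+1 -F /var/log/app.log"]
          volumeMounts:
            - name: varlog
              mountPath: /var/log

kubectl logs counter -c log-streamer then returns the application's log stream, at the cost of the doubled disk usage noted above.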
張 旭

Quick start - 0 views

  • Terragrunt will forward almost all commands, arguments, and options directly to Terraform, but based on the settings in your terragrunt.hcl file
  • the backend configuration does not support variables or expressions of any sort
  • the path_relative_to_include() built-in function,
  • ...9 more annotations...
  • The generate attribute is used to inform Terragrunt to generate the Terraform code for configuring the backend.
  • The find_in_parent_folders() helper will automatically search up the directory tree to find the root terragrunt.hcl and inherit the remote_state configuration from it.
  • Unlike the backend configurations, provider configurations support variables,
  • if you needed to modify the configuration to expose another parameter (e.g. session_name), you would then have to go through each of your modules to make this change.
  • instructs Terragrunt to create the file provider.tf in the working directory (where Terragrunt calls terraform) before it calls any of the Terraform commands
  • large modules should be considered harmful.
  • it is a Bad Idea to define all of your environments (dev, stage, prod, etc), or even a large amount of infrastructure (servers, databases, load balancers, DNS, etc), in a single Terraform module.
  • Large modules are slow, insecure, hard to update, hard to code review, hard to test, and brittle (i.e., you have all your eggs in one basket).
  • Terragrunt allows you to define your Terraform code once and to promote a versioned, immutable “artifact” of that exact same code from environment to environment.
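A sketch combining the pieces above into a root and child terragrunt.hcl (bucket, region, and module paths are placeholders):

    # root terragrunt.hcl -- generates backend.tf in each module's working directory
    remote_state {
      backend = "s3"
      generate = {
        path      = "backend.tf"
        if_exists = "overwrite_terragrunt"
      }
      config = {
        bucket  = "my-terraform-state"                             # placeholder
        key     = "${path_relative_to_include()}/terraform.tfstate"
        region  = "us-east-1"
        encrypt = true
      }
    }

    # child terragrunt.hcl (e.g. prod/app/terragrunt.hcl)
    include {
      path = find_in_parent_folders()
    }

Each child module inherits the remote_state block, and path_relative_to_include() keeps every module's state under its own key.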
張 旭

Keep your Terraform code DRY - 0 views

  • Each root terragrunt.hcl file (the one at the environment level, e.g. prod/terragrunt.hcl) should define a generate block to generate the AWS provider configuration to assume the role for that environment (see the sketch below).
  • The include block tells Terragrunt to use the exact same Terragrunt configuration from the terragrunt.hcl file specified via the path parameter.
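A sketch of that per-environment generate block (account ID, role name, and region are placeholders):

    # prod/terragrunt.hcl
    generate "provider" {
      path      = "provider.tf"
      if_exists = "overwrite_terragrunt"
      contents  = <<-EOF
        provider "aws" {
          region = "us-east-1"
          assume_role {
            role_arn = "arn:aws:iam::123456789012:role/terragrunt"   # placeholder
          }
        }
      EOF
    }

Terragrunt writes provider.tf into the working directory before calling terraform, so every module under prod/ assumes the prod role without repeating the provider block.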
張 旭

Locals - 0 views

  • common_vars = yamldecode(file(find_in_parent_folders("common_vars.yaml")))
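In context, that line sits inside a locals block, and the decoded values are then referenced through local.*; a short sketch (the YAML file and key names are assumptions):

    locals {
      common_vars = yamldecode(file(find_in_parent_folders("common_vars.yaml")))
    }

    inputs = {
      aws_region = local.common_vars.region   # key name is illustrative
    }

This keeps values shared across modules in a single YAML file near the repository root.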
張 旭

Override Files - Configuration Language - Terraform by HashiCorp - 0 views

  • In both the required_version and required_providers settings, each override constraint entirely replaces the constraints for the same component in the original block.
  • If the base block and the override block both set required_version, the constraints in the base block are entirely ignored.
  • Terraform normally loads all of the .tf and .tf.json files within a directory and expects each one to define a distinct set of configuration objects.
  • ...14 more annotations...
  • If two files attempt to define the same object, Terraform returns an error.
  • a human-edited configuration file in the Terraform language native syntax could be partially overridden using a programmatically-generated file in JSON syntax.
  • Terraform has special handling of any configuration file whose name ends in _override.tf or _override.tf.json
  • Terraform initially skips these override files when loading configuration, and then afterwards processes each one in turn (in lexicographical order).
  • merges the override block contents into the existing object.
  • Over-use of override files hurts readability, since a reader looking only at the original files cannot easily see that some portions of those files have been overridden without consulting all of the override files that are present.
  • When using override files, use comments in the original files to warn future readers about which override files apply changes to each block.
  • A top-level block in an override file merges with a block in a normal configuration file that has the same block header.
  • Within a top-level block, an attribute argument within an override block replaces any argument of the same name in the original block.
  • Within a top-level block, any nested blocks within an override block replace all blocks of the same type in the original block.
  • The contents of nested configuration blocks are not merged.
  • If more than one override file defines the same top-level block, the overriding effect is compounded, with later blocks taking precedence over earlier blocks
  • The settings within terraform blocks are considered individually when merging.
  • If the required_providers argument is set, its value is merged on an element-by-element basis, which allows an override block to adjust the constraint for a single provider without affecting the constraints for other providers.
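A small sketch of the merge behavior described above (resource names and values are illustrative):

    # main.tf
    resource "aws_instance" "web" {
      ami           = "ami-12345678"
      instance_type = "t2.micro"
    }

    # override.tf -- processed after main.tf; only instance_type is replaced
    resource "aws_instance" "web" {
      instance_type = "t3.small"
    }

The effective configuration keeps the ami from main.tf and takes instance_type from the override file; a comment in main.tf pointing at override.tf spares future readers the surprise.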
張 旭

Configuration Blocks and Attributes - 0 views

  • The generate block can be used to arbitrarily generate a file in the terragrunt working directory (where terraform is called).
  • This can be used to generate common terraform configurations that are shared across multiple terraform modules.
張 旭

Controllers | Kubernetes - 0 views

  • In robotics and automation, a control loop is a non-terminating loop that regulates the state of a system.
  • controllers are control loops that watch the state of your cluster, then make or request changes where needed
  • Each controller tries to move the current cluster state closer to the desired state.
  • ...12 more annotations...
  • A controller tracks at least one Kubernetes resource type.
  • The controller(s) for that resource are responsible for making the current state come closer to that desired state.
  • in Kubernetes, a controller will send messages to the API server that have useful side effects.
  • Built-in controllers manage state by interacting with the cluster API server.
  • By contrast with Job, some controllers need to make changes to things outside of your cluster.
  • the controller makes some change to bring about your desired state, and then reports current state back to your cluster's API server. Other control loops can observe that reported data and take their own actions.
  • As long as the controllers for your cluster are running and able to make useful changes, it doesn't matter if the overall state is stable or not.
  • Kubernetes uses lots of controllers that each manage a particular aspect of cluster state.
  • a particular control loop (controller) uses one kind of resource as its desired state, and has a different kind of resource that it manages to make that desired state happen.
  • There can be several controllers that create or update the same kind of object.
  • you can have Deployments and Jobs; these both create Pods. The Job controller does not delete the Pods that your Deployment created, because there is information (labels) the controllers can use to tell those Pods apart.
  • Kubernetes comes with a set of built-in controllers that run inside the kube-controller-manager.
張 旭

Operator pattern - Kubernetes - 1 views

  • The Operator pattern aims to capture the key aim of a human operator who is managing a service or set of services
  • Operators are software extensions to Kubernetes that make use of custom resources to manage applications and their components
  • The Operator pattern captures how you can write code to automate a task beyond what Kubernetes itself provides.
  • ...7 more annotations...
  • Operators are clients of the Kubernetes API that act as controllers for a Custom Resource.
  • choosing a leader for a distributed application without an internal member election process
  • publishing a Service to applications that don't support Kubernetes APIs to discover them
  • The core of the Operator is code to tell the API server how to make reality match the configured resources.
  • If you add a new SampleDB, the operator sets up PersistentVolumeClaims to provide durable database storage, a StatefulSet to run SampleDB and a Job to handle initial configuration. If you delete it, the Operator takes a snapshot, then makes sure that the StatefulSet and Volumes are also removed (a hypothetical SampleDB object is sketched after this list).
  • to deploy an Operator is to add the Custom Resource Definition and its associated Controller to your cluster.
  • Once you have an Operator deployed, you'd use it by adding, modifying or deleting the kind of resource that the Operator uses.
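Continuing the SampleDB example above, using such an Operator means creating objects of its custom kind; everything in this sketch (group, version, fields) is hypothetical:

    apiVersion: sampledb.example.com/v1alpha1   # hypothetical group/version
    kind: SampleDB
    metadata:
      name: my-database
    spec:
      replicas: 3        # hypothetical fields the Operator's controller would act on
      storage: 10Gi

Applying this object is the entire user-facing workflow; the Operator's controller notices it and creates the PVCs, StatefulSet, and Job behind the scenes.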
張 旭

Monitor Node Health | Kubernetes - 0 views

  • Node Problem Detector is a daemon for monitoring and reporting about a node's health
  • Node Problem Detector collects information about node problems from various daemons and reports these conditions to the API server as NodeCondition and Event.
  • Node Problem Detector only supports file based kernel log. Log tools such as journald are not supported.
  • ...2 more annotations...
  • kubectl provides the most flexible management of Node Problem Detector.
  • run the Node Problem Detector in your cluster to monitor node health.
張 旭

Use a fake DB adapter to avoid connection errors with rails assets precompile - 0 views

  • In the Dockerfile: DB_ADAPTER=nulldb bundle exec rake assets:precompile
  • In the Gemfile: gem "activerecord-nulldb-adapter"
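For DB_ADAPTER=nulldb to have any effect, the app has to read the adapter from the environment in config/database.yml; a minimal sketch, assuming the app wires it up this way (the fallback adapter and database name are illustrative):

    # config/database.yml (ERB is evaluated, so the adapter can come from ENV)
    production:
      adapter: <%= ENV.fetch("DB_ADAPTER", "postgresql") %>
      database: myapp_production

With this in place, the image build can precompile assets without a reachable database, while the real adapter is used at runtime.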
張 旭

[Kubernetes] Taints and Tolerations | 小信豬的原始部落 - 0 views

  • If a node has been given a taint, pods will not be scheduled onto it unless the pod spec declares tolerations that accept those taints (it must tolerate all of the node's taints)
  • If a node is given a taint whose effect is NoExecute, k8s will also evict pods already running on that node, in addition to not scheduling new pods onto it.
  • The taint mechanism exists precisely to keep pods from being scheduled onto a particular node
  • ...1 more annotation...
  • When a node has problems (or anything else keeps it from continuing to serve), the administrator may want to evict the pods currently running on it; this can be done by adding a taint with Effect=NoExecute (see the sketch below)
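A sketch of the mechanics (key, value, and node name are illustrative):

    # Taint a node: NoSchedule keeps new pods off; NoExecute also evicts running pods
    kubectl taint nodes node1 dedicated=gpu:NoSchedule

    # Pod spec fragment: tolerates that taint, so the pod may be scheduled onto node1
    tolerations:
      - key: "dedicated"
        operator: "Equal"
        value: "gpu"
        effect: "NoSchedule"

Removing the taint again is done with kubectl taint nodes node1 dedicated=gpu:NoSchedule- (note the trailing minus).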
張 旭

A visual guide on troubleshooting Kubernetes deployments - 0 views

  • Service and Deployment aren't connected at all.
  • the Service points to the Pods directly and skips the Deployment altogether.
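The link is made purely through labels: the Service's selector must match the labels on the Pods that the Deployment's template creates, not the Deployment itself. A minimal sketch (names are illustrative):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web            # <- these are the labels the Service selects
        spec:
          containers:
            - name: web
              image: nginx
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector:
        app: web                # must match the pod labels above, or the Service has no endpoints
      ports:
        - port: 80
          targetPort: 80

If the selector and pod labels drift apart, kubectl describe svc web shows an empty Endpoints list, which is the symptom the article is troubleshooting.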
張 旭

Kubernetes Basic Concepts · Kubernetes指南 - 0 views

  • A Container is a portable, lightweight, operating-system-level virtualization technology. It uses namespaces to isolate different software runtime environments, and because the image is self-contained, a container can easily run anywhere.
  • Every application is packaged as a container, so managing container deployments becomes the same as managing application deployments.
  • A Pod is a group of tightly related containers that share the PID, IPC, Network, and UTS namespaces; it is the basic unit of scheduling in Kubernetes.
  • ...9 more annotations...
  • Inter-process communication and file sharing
  • In Kubernetes, every object is defined with a manifest (yaml or json); a minimal sketch follows after this list
  • A Node is the host where Pods actually run; it can be a physical or a virtual machine.
  • Every Node must run at least a container runtime (such as docker or rkt), kubelet, and kube-proxy.
  • Common objects such as pods, services, replication controllers, and deployments belong to a namespace (default by default)
  • whereas node, persistentVolumes, and the like do not belong to any namespace
  • A Service is an abstraction over an application; it provides load balancing and service discovery for the application via labels.
  • The IPs and ports of the Pods matching the labels make up the endpoints, and kube-proxy load-balances the service IP across those endpoints.
  • Every Service is automatically assigned a cluster IP (a virtual address reachable only inside the cluster) and a DNS name
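A minimal manifest sketch tying the terms above together (image and labels are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
      namespace: default      # pods belong to a namespace; nodes and persistentVolumes do not
      labels:
        app: nginx            # labels are what a Service's selector matches
    spec:
      containers:
        - name: nginx
          image: nginx

kubectl apply -f pod.yaml creates the object; every other object kind is declared the same way.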