Larvata / Group items tagged "service"

張 旭

Orbs, Jobs, Steps, and Workflows - CircleCI - 0 views

  • Orbs are packages of config that you either import by name or configure inline to simplify, share, and reuse config within and across projects.
  • Jobs are a collection of Steps.
  • All of the steps in the job are executed in a single unit which consumes a CircleCI container from your plan while it’s running.
  • Workspaces persist data between jobs in a single Workflow.
  • Caching persists data between the same job in different Workflow builds.
  • Artifacts persist data after a Workflow has finished.
  • jobs can be run using the machine executor, which enables reuse of recently used machine executor runs,
  • docker executor which can compose Docker containers to run your tests and any services they require
  • macos executor
  • Steps are a collection of executable commands which are run during a job
  • In addition to the run: key, keys for save_cache:, restore_cache:, deploy:, store_artifacts:, store_test_results: and add_ssh_keys are nested under Steps.
  • the checkout: key is required to check out your code
  • run: enables addition of arbitrary, multi-line shell command scripting
  • workflows orchestrate job runs with parallel, sequential, and manual-approval flows (see the config sketch below).
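
A minimal sketch of how these pieces fit together in a .circleci/config.yml; the job name, image tag, and npm commands are assumptions for illustration, not from the article:

    version: 2.1
    jobs:
      build:                          # a job: a collection of steps
        docker:                       # docker executor composing a container
          - image: cimg/node:18.0     # assumed convenience-image tag
        steps:
          - checkout                  # required to check out your code
          - run: npm ci && npm test   # arbitrary shell command scripting
    workflows:
      test_then_ship:                 # workflows orchestrate job runs
        jobs:
          - build
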
張 旭

What's the Docker Swarm "-advertise-addr"? - Blog | BoxBoat - 0 views

  • To put it simply, the --advertise-addr is the address other nodes in the Docker swarm use to connect into your node.
  • a port number which defaults to 2377
  • The --listen-addr is the address that the swarm service listens on for incoming connections.
  • The default for --listen-addr is to listen on all interfaces on TCP port 2377 (0.0.0.0:2377)
  • Depending on your network architecture, you may want your swarm management interface only accessible on a management network that could be separate from a data and/or public network that are each attached to a physical server.
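
A quick sketch of how the two flags might be combined when initializing a swarm on a host with a dedicated management interface; the 10.0.0.5 address is an assumed example:

    # Advertise the management-network address to other nodes, and only
    # listen for swarm management traffic there (default is 0.0.0.0:2377).
    docker swarm init --advertise-addr 10.0.0.5:2377 --listen-addr 10.0.0.5:2377
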
張 旭

Manage nodes in a swarm | Docker Documentation - 0 views

  • Drain means the scheduler doesn’t assign new tasks to the node. The scheduler shuts down any existing tasks and schedules them on an available node.
  • Reachable means the node is a manager node participating in the Raft consensus quorum. If the leader node becomes unavailable, the node is eligible for election as the new leader.
  • If a manager node becomes unavailable, you should either join a new manager node to the swarm or promote a worker node to be a manager.
  • docker node inspect self --pretty
  • docker node update --availability drain node
  • use node labels in service constraints
  • The labels you set for nodes using docker node update apply only to the node entity within the swarm
  • node labels can be used to limit critical tasks to nodes that meet certain requirements
  • promote a worker node to the manager role
  • demote a manager node to the worker role
  • If the last manager node leaves the swarm, the swarm becomes unavailable requiring you to take disaster recovery measures.
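
A plausible maintenance flow combining the commands above; node names, the label, and the service are assumed examples:

    docker node inspect self --pretty                  # check this node's state
    docker node promote worker-2                       # keep a spare manager for quorum
    docker node update --availability drain worker-1   # stop scheduling new tasks here
    docker node update --label-add disk=ssd worker-3   # label for service constraints
    docker service create --name db \
        --constraint 'node.labels.disk == ssd' postgres:15
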
張 旭

Swarm task states | Docker Documentation - 0 views

  • Each service can start multiple tasks.
  • Tasks are execution units that run once to completion.
  • The task progresses forward through a number of states, and its state doesn’t go backward.
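
One way to watch tasks move through those states is docker service ps; the service name here is an assumed example:

    # Shows each task's name and current state; states only progress forward.
    docker service ps my-service --format '{{.Name}}: {{.CurrentState}}'
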
張 旭

NAT Gateways - Amazon Virtual Private Cloud - 0 views

  • a network address translation (NAT) gateway to enable instances in a private subnet to connect to the internet or other AWS services
  • but prevent the internet from initiating a connection with those instances
  • NAT gateways are not supported for IPv6 traffic
  • must specify the public subnet in which the NAT gateway should reside
  • update the route table associated with one or more of your private subnets to point Internet-bound traffic to the NAT gateway.
  • NAT gateway is created in a specific Availability Zone and implemented with redundancy in that zone.
  • ensure that resources use the NAT gateway in the same Availability Zone
  • The main route table sends internet traffic from the instances in the private subnet to the NAT gateway. The NAT gateway sends the traffic to the internet gateway using the NAT gateway’s Elastic IP address as the source IP address
  • A NAT gateway supports 5 Gbps of bandwidth and automatically scales up to 45 Gbps
  • You can associate exactly one Elastic IP address with a NAT gateway
  • A NAT gateway supports the following protocols: TCP, UDP, and ICMP
  • cannot associate a security group with a NAT gateway.
  • create a NAT gateway in the same subnet as your NAT instance, and then replace the existing route in your route table that points to the NAT instance with a route that points to the NAT gateway
  • A NAT gateway cannot send traffic over VPC endpoints, VPN connections, AWS Direct Connect, or VPC peering connections.
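
A hedged AWS CLI sketch of the setup described above; all resource IDs are placeholders:

    # Create the NAT gateway in a public subnet, using an Elastic IP allocation.
    aws ec2 create-nat-gateway --subnet-id subnet-0aaa111 --allocation-id eipalloc-0bbb222
    # Point the private subnet's internet-bound traffic at the NAT gateway.
    aws ec2 create-route --route-table-id rtb-0ccc333 \
        --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0ddd444eee555fff6
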
張 旭

Using Infrastructure as Code to Automate VMware Deployments - 1 views

  • Infrastructure as code is at the heart of provisioning for cloud infrastructure, marking a significant shift away from monolithic, point-and-click management tools.
  • infrastructure as code enables operators to take a programmatic approach to provisioning.
  • provides a single workflow to provision and maintain infrastructure and services from all of your vendors, making it not only easier to switch providers
  • A Terraform Provider is responsible for understanding API interactions with, and exposing the resources of, a given Infrastructure, Platform, or SaaS offering to Terraform.
  • write a Terraform file that describes the Virtual Machine that you want, apply that file with Terraform and create that VM as you described without ever needing to log into the vSphere dashboard.
  • HashiCorp Configuration Language (HCL)
  • the provider credentials are passed in at the top of the script to connect to the vSphere account.
  • modules: a way to encapsulate infrastructure resources in a reusable format.
  •  
    "revolutionizing"
張 旭

The Twelve-Factor App - 0 views

  • Logs are the stream of aggregated, time-ordered events collected from the output streams of all running processes and backing services.
  • Logs have no fixed beginning or end, but flow continuously as long as the app is operating.
  • each running process writes its event stream, unbuffered, to stdout.
  • long-term archival. These archival destinations are not visible to or configurable by the app, and instead are completely managed by the execution environment.
  • Most significantly, the stream can be sent to a log indexing and analysis system such as Splunk, or a general-purpose data warehousing system such as Hadoop/Hive.
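
A small sketch of the division of labor: the app only writes to stdout, and the surrounding environment captures and routes the stream. The app name and syslog tag are assumed:

    stdbuf -oL ./app                 # keep the event stream unbuffered in dev
    ./app 2>&1 | logger -t myapp     # the environment routes the same stream
                                     # onward (syslog here; Splunk/Hive in prod)
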
張 旭

The Twelve-Factor App - 0 views

  • PHP processes run as child processes of Apache, started on demand as needed by request volume.
  • Java processes take the opposite approach, with the JVM providing one massive uberprocess that reserves a large block of system resources (CPU and memory) on startup, with concurrency managed internally via threads
  • Processes in the twelve-factor app take strong cues from the unix process model for running service daemons.
  • application must also be able to span multiple processes running on multiple physical machines.
  • The array of process types and number of processes of each type is known as the process formation.
  • Twelve-factor app processes should never daemonize or write PID files.
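
One common way to declare a process formation is a Procfile; this sketch assumes a Ruby app, and the process types are illustrative:

    web: bundle exec puma -C config/puma.rb   # web process type, runs in foreground
    worker: bundle exec sidekiq               # background worker type
    # Each type scales independently across machines; processes never
    # daemonize or write PID files.
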
張 旭

DevOps - 0 views

  • For operations teams, passing on knowledge is critically important, so it is well worth building an operations knowledge base. It helps with post-incident reviews, and it also helps people who later join the operations team get up to speed on the system's operational skills.
  • A cloud platform supports DevOps in three main ways (the software carrying each capability is in parentheses): 1. IaaS-based self-service and environment orchestration (VMware); 2. PaaS-based elastic scaling (K8s); 3. SaaS-based software services.
  • Consider building a private cloud, or at least a hybrid cloud.
  • Set up a so-called private repository on the internal network that acts as a proxy and syncs with the public repositories outside.
  • It is hard to achieve DevOps to Production in the true sense.
  • Visualization is for showing, in real time, the state of the continuous delivery pipeline and the unit test execution reports.
  • Chain automated tests into the continuous delivery pipeline, triggering them after a successful deployment to the test environment.
  • The test stage also needs visualized test reports and result notifications.
  • Enterprise continuous delivery pipelines often fail to reach all the way to production.
  • Service Desk is not the name of any particular piece of software; it is the collective term in ITIL (the IT Infrastructure Library, which can be seen as the concrete realization of ITSM) for the tools carrying change management and incident management.
  • Building the underlying cloud platform is the cornerstone of the whole DevOps infrastructure.
  • Architecture is not set in stone; it should keep evolving as real-world needs change, and its capabilities should keep improving along with it.
  • Execution environments for parallel tests are generated on demand by the PaaS platform and destroyed automatically once the tests finish.
  • Even near-identical projects can end up with very different continuous delivery pipeline designs because of minor differences in how they compile and build.
  •  
    "For operations teams, passing on knowledge is critically important, so it is well worth building an operations knowledge base. It helps with post-incident reviews, and it also helps people who later join the operations team get up to speed on the system's operational skills."
張 旭

jwilder/nginx-proxy: Automated nginx proxy for Docker containers using docker-gen - 0 views

  • docker-gen generates reverse proxy configs for nginx and reloads nginx when containers are started and stopped.
  • /var/run/docker.sock:/tmp/docker.sock:ro
  • Use this image to fully support HTTP/2 (including ALPN required by recent Chrome versions).
  • support multiple virtual hosts for a container
  • to connect to your backend using HTTPS instead of HTTP, set VIRTUAL_PROTO=https on the backend container.
  • The contents of /path/to/certs should contain the certificates and private keys for any virtual hosts in use.
  • to replace the default proxy settings for the nginx container, add a configuration file at /etc/nginx/proxy.conf
  • The default configuration blocks the Proxy HTTP request header from being sent to downstream servers
  • add your configuration file under /etc/nginx/conf.d using a name ending in .conf
  • If your container exposes multiple ports, nginx-proxy will default to the service running on port 80. If you need to specify a different port, you can set a VIRTUAL_PORT env var to select a different one.
  • To add settings on a per-VIRTUAL_HOST basis, add your configuration file under /etc/nginx/vhost.d
  • SNI
  • The default behavior for the proxy when port 80 and 443 are exposed is as follows: If a container has a usable cert, port 80 will redirect to 443 for that container so that HTTPS is always preferred when available. If the container does not have a usable cert, a 503 will be returned.
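
Putting the README's pieces together, a hedged example of running the proxy and a backend; the hostname, cert path, and backend image are placeholders:

    docker run -d -p 80:80 -p 443:443 \
        -v /var/run/docker.sock:/tmp/docker.sock:ro \
        -v /path/to/certs:/etc/nginx/certs \
        jwilder/nginx-proxy
    # Backend container: picked up automatically via its environment variables.
    docker run -d -e VIRTUAL_HOST=app.example.com -e VIRTUAL_PORT=8080 my-app-image
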
張 旭

Azure 101: Networking Part 1 - Cloud Solution Architect - 0 views

  • Virtual Private Gateways, and it is this combined set of services that allows you to provide traffic flow to/from your Virtual Network and any external network, such as your on-prem datacenter.
  • No matter which version of the gateway you plan on implementing, there are three resources within Azure that you will need to implement and then connect to one of your Virtual Networks.
  • "Gateway Subnet". This is a specialized Subnet within your Virtual Network that can only be used for connecting Virtual Private Gateways to a VPN connection of some kind.
  • The Local Gateway is where you define the configuration of your external network's VPN access point with the most important piece being the external IP of that device so that Azure knows exactly how to establish the VPN connection.
  • The VPN Gateway is the Azure resource that you tie into your Gateway Subnet within your Virtual Network.
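
A hedged az CLI sketch of the three resources described above; the names, address prefixes, and public IP are placeholders:

    az network vnet subnet create -g my-rg --vnet-name my-vnet \
        -n GatewaySubnet --address-prefixes 10.0.255.0/27
    az network local-gateway create -g my-rg -n onprem-gw \
        --gateway-ip-address 203.0.113.10 --local-address-prefixes 192.168.0.0/16
    az network vnet-gateway create -g my-rg -n my-vpn-gw --vnet my-vnet \
        --public-ip-addresses my-gw-pip --gateway-type Vpn --sku VpnGw1
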
張 旭

Pods - Kubernetes - 0 views

  • Pods are the smallest deployable units of computing
  • A Pod (as in a pod of whales or pea pod) is a group of one or more containers (a container being a lightweight, portable executable image that contains software and all of its dependencies), such as Docker containers, with shared storage/network, and a specification for how to run the containers.
  • A Pod’s contents are always co-located and co-scheduled, and run in a shared context.
  • A Pod models an application-specific “logical host”
  • application containers which are relatively tightly coupled
  • being executed on the same physical or virtual machine would mean being executed on the same logical host.
  • The shared context of a Pod is a set of Linux namespaces, cgroups, and potentially other facets of isolation
  • Containers within a Pod share an IP address and port space, and can find each other via localhost
  • Containers in different Pods have distinct IP addresses and can not communicate by IPC without special configuration. These containers usually communicate with each other via Pod IP addresses.
  • Applications within a Pod also have access to shared volumes (a volume being a directory containing data, accessible to the containers in a pod), which are defined as part of a Pod and are made available to be mounted into each application's filesystem.
  • a Pod is modelled as a group of Docker containers with shared namespaces and shared filesystem volumes
    • 張 旭
       
      Like the same clump of containers declared together in a docker-compose file?
  • Pods are considered to be relatively ephemeral (rather than durable) entities.
  • Pods are created, assigned a unique ID (UID), and scheduled to nodes where they remain until termination (according to restart policy) or deletion.
  • it can be replaced by an identical Pod
  • When something is said to have the same lifetime as a Pod, such as a volume, that means that it exists as long as that Pod (with that UID) exists.
  • uses a persistent volume for shared storage between the containers
  • Pods serve as units of deployment, horizontal scaling, and replication
  • The applications in a Pod all use the same network namespace (same IP and port space), and can thus “find” each other and communicate using localhost
  • flat shared networking space
  • Containers within the Pod see the system hostname as being the same as the configured name for the Pod.
  • Volumes enable data to survive container restarts and to be shared among the applications within the Pod.
  • Individual Pods are not intended to run multiple instances of the same application
  • The individual containers may be versioned, rebuilt and redeployed independently.
  • Pods aren’t intended to be treated as durable entities.
  • Controllers like StatefulSet can also provide support to stateful Pods.
  • When a user requests deletion of a Pod, the system records the intended grace period before the Pod is allowed to be forcefully killed, and a TERM signal is sent to the main process in each container.
  • Once the grace period has expired, the KILL signal is sent to those processes, and the Pod is then deleted from the API server.
  • grace period
  • The Pod is removed from the endpoints list for the service, and is no longer considered part of the set of running Pods for replication controllers.
  • When the grace period expires, any processes still running in the Pod are killed with SIGKILL.
  • By default, all deletes are graceful within 30 seconds.
  • You must specify an additional flag --force along with --grace-period=0 in order to perform force deletions.
  • Force deletion of a Pod is defined as deletion of a Pod from the cluster state and etcd immediately.
  • StatefulSet Pods
  • Processes within the container get almost the same privileges that are available to processes outside a container.
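
A minimal two-container Pod sharing a volume and network namespace, sketched from the notes above; names and images are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-pod
    spec:
      volumes:
        - name: shared-data
          emptyDir: {}            # lives exactly as long as this Pod (UID)
      containers:
        - name: web
          image: nginx:1.25
          volumeMounts:
            - name: shared-data
              mountPath: /usr/share/nginx/html
        - name: helper            # reaches "web" via localhost; shares hostname
          image: busybox:1.36
          command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
          volumeMounts:
            - name: shared-data
              mountPath: /data

Graceful versus forced deletion then looks like kubectl delete pod demo-pod (30-second grace period by default) versus kubectl delete pod demo-pod --grace-period=0 --force.
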
張 旭

What is a DNS Zone? Master and Slave DNS Zone and how to create it. - 0 views

  • DNS zone is a container of DNS settings and DNS records of a DNS namespace.
  • The DNS namespace can have single or multiple DNS zones, each managed by a particular DNS host/service.
  • Don’t directly associate a DNS zone with a specific domain.
  • DNS zones can be on the same servers
  • A DNS zone may contain multiple domain names or a single one;
  • Master zones contain a read/write copy of the zone data.
  • There could be only one Master zone on one DNS server at a time.
  • If you want to have redundancy, you must have the zone data accessible on multiple servers.
  • The Slave zone is a read-only copy of the zone data.
  • Most of the time, Slave DNS zones are copies of Master zones.
  • If you try to change a DNS record on a Secondary zone, it can redirect you to another zone with read/write access. By itself, it can’t change it.
  • the primary purposes of a Slave zone is to serve as a backup
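
A sketch of how Master and Slave zones might be declared in BIND's named.conf; the zone name and server addresses are assumed:

    // On the primary server: the read/write copy of the zone data.
    zone "example.com" {
        type master;
        file "/etc/bind/db.example.com";
        allow-transfer { 192.0.2.53; };   // let the slave copy the zone
    };
    // On the secondary server: a read-only copy, kept for redundancy/backup.
    zone "example.com" {
        type slave;
        masters { 192.0.2.50; };
        file "/var/cache/bind/db.example.com";
    };
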
張 旭

Warnings, Notes, & Tips - 0 views

  • Because AS3 manages topology records globally in /Common, records must be managed only through AS3, as it treats the records declaratively.
  • If a record is added outside of AS3, it will be removed if it is not included in the next AS3 declaration for topology records (AS3 completely overwrites non-AS3 topologies when a declaration is submitted).
  • using AS3 to delete a tenant (for example, sending DELETE to the /declare/<TENANT> endpoint) that contains GSLB topologies will completely remove ALL GSLB topologies from the BIG-IP.
  • When posting a large declaration (hundreds of application services in a single declaration), you may experience a 500 error stating that the save sys config operation failed.
  • Even if you have asynchronous mode set to false, after 45 seconds AS3 sets asynchronous mode to true (API swap), and returns an async response.
  • When creating a new tenant using AS3, it must not use the same name as a partition you separately create on the target BIG-IP system.
  • If you use the same name and then post the declaration, AS3 overwrites (or removes) the existing partition completely, including all configuration objects in that partition.
  • If you use AS3 to create a tenant (which creates a BIG-IP partition), manually adding configuration objects to that partition can have unexpected results
  • When you delete the Tenant using AS3, the system deletes both virtual servers.
  • if a Firewall_Address_List contains zero addresses, a dummy IPv6 address of ::1:5ee:bad:c0de is added in order to maintain a valid Firewall_Address_List. If an address is added to the list, the dummy address is removed.
  • use /mgmt/shared/appsvcs/declare?async=true if you have a particularly large declaration which will take a long time to process.
  • reviewing the Sizing BIG-IP Virtual Editions section (page 7) of Deploying BIG-IP VEs in a Hyper-Converged Infrastructure
  • To test whether your system has AS3 installed or not, use GET with the /mgmt/shared/appsvcs/info URI.
  • You may find it more convenient to put multi-line texts such as iRules into AS3 declarations by first encoding them in Base64.
  • no matter your BIG-IP user account name, audit logs show all messages from admin and not the specific user name.
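
Two of the calls above as curl sketches; the host and credentials are placeholders:

    # Check whether AS3 is installed on the target BIG-IP.
    curl -sku admin:secret https://bigip.example.com/mgmt/shared/appsvcs/info
    # Post a large declaration asynchronously to avoid the 45-second API swap.
    curl -sku admin:secret -H "Content-Type: application/json" \
        -d @declaration.json \
        "https://bigip.example.com/mgmt/shared/appsvcs/declare?async=true"
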
張 旭

An Introduction to HAProxy and Load Balancing Concepts | DigitalOcean - 0 views

  • HAProxy, which stands for High Availability Proxy
  • improve the performance and reliability of a server environment by distributing the workload across multiple servers (e.g. web, application, database).
  • ACLs are used to test some condition and perform an action (e.g. select a server, or block a request) based on the test result.
  • Access Control List (ACL)
  • ACLs allow flexible network traffic forwarding based on a variety of factors, like pattern-matching and the number of connections to a backend
  • A backend is a set of servers that receives forwarded requests
  • adding more servers to your backend will increase your potential load capacity by spreading the load over multiple servers
  • mode http specifies that layer 7 proxying will be used
  • specifies the load balancing algorithm
  • health checks
  • A frontend defines how requests should be forwarded to backends
  • use_backend rules, which define which backends to use depending on which ACL conditions are matched, and/or a default_backend rule that handles every other case
  • A frontend can be configured for various types of network traffic
  • Load balancing this way will forward user traffic based on IP range and port
  • Generally, all of the servers in the web-backend should be serving identical content; otherwise the user might receive inconsistent content.
  • Using layer 7 allows the load balancer to forward requests to different backend servers based on the content of the user's request.
  • allows you to run multiple web application servers under the same domain and port
  • acl url_blog path_beg /blog matches a request if the path of the user's request begins with /blog.
  • Round Robin selects servers in turns
  • Selects the server with the fewest connections; it is recommended for longer sessions
  • This selects which server to use based on a hash of the source IP
  • ensure that a user will connect to the same server
  • require that a user continues to connect to the same backend server. This persistence is achieved through sticky sessions, using the appsession parameter in the backend that requires it.
  • HAProxy uses health checks to determine if a backend server is available to process requests.
  • The default health check is to try to establish a TCP connection to the server
  • If a server fails a health check, and therefore is unable to serve requests, it is automatically disabled in the backend
  • For certain types of backends, like database servers in certain situations, the default health check is insufficient to determine whether a server is still healthy.
  • However, your load balancer is a single point of failure in these setups; if it goes down or gets overwhelmed with requests, it can cause high latency or downtime for your service.
  • A high availability (HA) setup is an infrastructure without a single point of failure
  • a static IP address that can be remapped from one server to another.
  • If that load balancer fails, your failover mechanism will detect it and automatically reassign the IP address to one of the passive servers.
張 旭

GitLab Auto DevOps 深入淺出,自動部署,連設定檔不用?! | 五倍紅寶石・專業程式教育 - 0 views

  • A K8S cluster; Auto DevOps will deploy the site to this cluster.
  • A wildcard DNS entry is needed so sites deployed in this environment get a domain name.
  • A GitLab Runner that can run Docker; it will execute the CI/CD pipeline.
  • Auto DevOps is really just an official, pre-written gitlab-ci.yml: in a project with Auto DevOps enabled, if no gitlab-ci.yml file is found, the official one is used to run the CI/CD pipeline.
  • A Pod is the smallest deployable unit in K8S; a Pod consists of one or more Containers, and the Containers in the same Pod share network resources with one another.
  • Every Pod has its own yaml file describing the Image the Pod uses, the Ports it connects, and other information.
  • Nodes come in two kinds: Worker Nodes and Master Nodes.
  • Helm works with parameters and templates, letting us reuse a template by changing only its parameters.
  • To get CI/CD functionality we put .gitlab-ci.yml in the project root; GitLab generates the CI/CD Pipeline from that file's settings. Each Pipeline may contain several Jobs, so a GitLab Runner is needed to execute those Jobs and report the results back to GitLab so it knows whether each Job ran correctly.
  • Work such as packaging the project into a Docker Image, or running helm operations, is executed inside Containers.
  • A CI/CD Pipeline is composed of stages and jobs; stages are ordered, and the next stage starts only after the previous one completes.
  • Each stage contains one or more Jobs.
  • Auto DevOps also makes heavy use of this kind of job that runs inside a designated Container.
  • can pass health checks
  • If the project is private, also watch out for permission issues when using the Container Registry.
  • the wildcard DNS you registered
  • Auto DevOps also offers options that can be customized to a certain degree just by setting environment variables.
  • Pay special attention to whether the namespace is set correctly, otherwise the data won't be found.
  • With Auto DevOps, if you want further customization, beyond what changing GitLab environment variables can achieve, you still have to go back to the .gitlab-ci.yml config file.
  • Package the Image with a Dockerfile in a Docker-in-Docker environment.
  • Deploy the chart to K8S with helm upgrade.
  • GitLab CI environment variables come from three main sources, from highest to lowest priority: variables defined in the Settings > CI/CD UI, environment variables defined in gitlab_ci.yml, and GitLab's predefined environment variables.
  • To package the project into a Docker Image, first add a Dockerfile to the project.
  • The approach inside Auto DevOps is to package the project with the Image provided by herokuish.
  • The Runner's environment has no docker command available, so a Docker Container is started and the work runs inside it, where the docker command can be used.
  • $CI_COMMIT_SHA and $CI_COMMIT_BEFORE_SHA are both GitLab predefined environment variables, holding the SHA of this commit and of the previous commit.
  • dind starts the docker daemon directly; in addition, dind automatically generates TLS certificates.
  • To run Docker inside a Docker Container, the Host's Docker API is shared with the Container.
  • docker:stable has the executables needed to run docker, and it also contains the program that starts docker (the docker daemon), but the Container's entrypoint is sh.
  • docker:dind inherits from docker:stable; its entrypoint is the script that starts docker, and it also finishes setting up the TLS certificates.
  • The Container needs to reach the Docker API on the Host, but the failing connection was looking for http://docker:2375. Here dind is no longer used as a service; Docker runs directly inside it, so the connection should go through unix:///var/run/docker.sock. Changing the DOCKER_HOST environment variable from tcp://docker:2375 to an empty string lets the docker client use the default connection, and it works!
  • auto-deploy preparation: helm init sets up the helm project, tiller is configured to run in the background, and the cluster's namespace is configured.
  • auto-deploy deploy: helm upgrade deploys the chart to K8S, with --set supplying the parameters injected into the template.
  • set -x prints each command before it is executed.
  • Use helm repo list to see which Chart Repositories are currently registered.
  • helm fetch gitlab/auto-deploy-app --untar
  • nohup lets a job keep running even after you disconnect or log out of the system.
  • If CI_APPLICATION_REPOSITORY is not explicitly set, image_repository defaults to the predefined environment variables CI_REGISTRY_IMAGE/CI_COMMIT_REF_SLUG.
  • A:-B means: use A if it is set, otherwise use B.
  • The hardest part of studying Auto DevOps is that so many tools are integrated together: it is hard to see how they relate, and when something breaks you don't know where to start looking.
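
A hedged .gitlab-ci.yml fragment of the dind build job the post walks through; the stage layout is illustrative, while CI_REGISTRY_IMAGE and CI_COMMIT_SHA are GitLab's predefined variables:

    build:
      stage: build
      image: docker:stable        # has the docker CLI; its entrypoint is sh
      services:
        - docker:dind             # entrypoint starts the daemon and TLS certs
      variables:
        DOCKER_HOST: tcp://docker:2375
      script:
        - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
        - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
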
張 旭

Best practices for building Kubernetes Operators and stateful apps | Google Cloud Blog - 0 views

  • use the StatefulSet workload controller to maintain identity for each of the pods, and to use Persistent Volumes to persist data so it can survive a service restart.
  • a way to extend Kubernetes functionality with application specific logic using custom resources and custom controllers.
  • An Operator can automate various features of an application, but it should be specific to a single application
  • Kubebuilder is a comprehensive development kit for building and publishing Kubernetes APIs and Controllers using CRDs
  • Design declarative APIs for operators, not imperative APIs. This aligns well with Kubernetes APIs that are declarative in nature.
  • With declarative APIs, users only need to express their desired cluster state, while letting the operator perform all necessary steps to achieve it.
  • scaling, backup, restore, and monitoring. An operator should be made up of multiple controllers that specifically handle each of those features.
  • the operator can have a main controller to spawn and manage application instances, a backup controller to handle backup operations, and a restore controller to handle restore operations.
  • each controller should correspond to a specific CRD so that the domain of each controller's responsibility is clear.
  • If you keep a log for every container, you will likely end up with an unmanageable amount of logs.
  • integrate application-specific details into the log messages, such as adding a prefix for the application name.
  • you may have to use external logging tools such as Google Stackdriver, Elasticsearch, Fluentd, or Kibana to perform the aggregations.
  • adding labels to metrics to facilitate aggregation and analysis by monitoring systems.
  • a more viable option is for application pods to expose a metrics HTTP endpoint for monitoring tools to scrape.
  • A good way to achieve this is to use open-source application-specific exporters for exposing Prometheus-style metrics.
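
As a sketch of the declarative-API advice, a custom resource that an operator's backup controller might reconcile; the group, kind, and fields are invented for illustration (a matching CRD would have to be defined first):

    apiVersion: myapp.example.com/v1
    kind: MyAppBackup
    metadata:
      name: nightly-backup
    spec:
      clusterRef: my-app-cluster   # which application instance to back up
      schedule: "0 2 * * *"        # desired state only; the controller performs
      retention: 7                 # the steps needed to achieve it
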