Larvata / Group items tagged "generator"

張 旭

Rate Limits - Let's Encrypt - Free SSL/TLS Certificates - 0 views

  • If you have a lot of subdomains, you may want to combine them into a single certificate, up to a limit of 100 Names per Certificate.
  • A certificate with multiple names is often called a SAN certificate, or sometimes a UCC certificate
  • The main limit is Certificates per Registered Domain (20 per week).
  • A certificate is considered a duplicate of an earlier certificate if they contain the exact same set of hostnames, ignoring capitalization and ordering of hostnames.
  • We also have a Duplicate Certificate limit of 5 certificates per week.
  • a Renewal Exemption to the Certificates per Registered Domain limit.
  • The Duplicate Certificate limit and the Renewal Exemption ignore the public key and extensions requested
  • You can issue 20 certificates in week 1, 20 more certificates in week 2, and so on, while not interfering with renewals of existing certificates.
  • Revoking certificates does not reset rate limits
  • If you’ve hit a rate limit, we don’t have a way to temporarily reset it.
  • get a list of certificates issued for your registered domain by searching on crt.sh
  • If you have a large number of pending authorization objects and are getting a rate limiting error, you can trigger a validation attempt for those authorization objects by submitting a JWS-signed POST to one of its challenges, as described in the ACME spec.
  • If you do not have logs containing the relevant authorization URLs, you need to wait for the rate limit to expire.
  • having a large number of pending authorizations is generally the result of a buggy client
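
A minimal sketch of how several subdomains can be combined into a single SAN certificate with certbot, which keeps you further from the 20-certificates-per-registered-domain weekly limit; the domain names and webroot path below are placeholders.

    # Request one SAN certificate covering several subdomains instead of separate
    # certificates per name (domains and webroot path are illustrative).
    certbot certonly --webroot -w /var/www/html \
      -d example.com -d www.example.com \
      -d blog.example.com -d shop.example.com
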
張 旭

Automated Nginx Reverse Proxy for Docker - 0 views

  • Docker containers are assigned random IPs and ports, which makes addressing them much more complicated from a client perspective
  • Binding the container to the host's port can prevent multiple containers from running on the same host. For example, only one container can bind to port 80 at a time.
  • Docker provides a remote API to inspect containers and access their IP, Ports and other configuration meta-data.
  • nginx template can be used to generate a reverse proxy configuration for docker containers using virtual hosts for routing.
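
A rough sketch of the setup the article describes, assuming the jwilder/nginx-proxy and jwilder/whoami images; the hostname is illustrative.

    # Run the proxy; the read-only Docker socket mount is what lets it use the remote API
    # to discover each container's IP, ports and VIRTUAL_HOST setting.
    docker run -d -p 80:80 \
      -v /var/run/docker.sock:/tmp/docker.sock:ro \
      jwilder/nginx-proxy

    # Any container started with a VIRTUAL_HOST variable is then routed by hostname.
    docker run -d -e VIRTUAL_HOST=whoami.example.local jwilder/whoami
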
張 旭

An Introduction to HAProxy and Load Balancing Concepts | DigitalOcean - 0 views

  • HAProxy, which stands for High Availability Proxy
  • improve the performance and reliability of a server environment by distributing the workload across multiple servers (e.g. web, application, database).
  • ACLs are used to test some condition and perform an action (e.g. select a server, or block a request) based on the test result.
  • Access Control List (ACL)
  • ACLs allow flexible network traffic forwarding based on a variety of factors like pattern-matching and the number of connections to a backend
  • A backend is a set of servers that receives forwarded requests
  • adding more servers to your backend will increase your potential load capacity by spreading the load over multiple servers
  • mode http specifies that layer 7 proxying will be used
  • specifies the load balancing algorithm
  • health checks
  • A frontend defines how requests should be forwarded to backends
  • use_backend rules, which define which backends to use depending on which ACL conditions are matched, and/or a default_backend rule that handles every other case
  • A frontend can be configured to handle various types of network traffic
  • Load balancing this way will forward user traffic based on IP range and port
  • Generally, all of the servers in the web-backend should be serving identical content--otherwise the user might receive inconsistent content.
  • Using layer 7 allows the load balancer to forward requests to different backend servers based on the content of the user's request.
  • allows you to run multiple web application servers under the same domain and port
  • acl url_blog path_beg /blog matches a request if the path of the user's request begins with /blog.
  • Round Robin selects servers in turns
  • Selects the server with the least number of connections--it is recommended for longer sessions
  • This selects which server to use based on a hash of the source IP
  • ensure that a user will connect to the same server
  • require that a user continues to connect to the same backend server. This persistence is achieved through sticky sessions, using the appsession parameter in the backend that requires it.
  • HAProxy uses health checks to determine if a backend server is available to process requests.
  • The default health check is to try to establish a TCP connection to the server
  • If a server fails a health check, and therefore is unable to serve requests, it is automatically disabled in the backend
  • For certain types of backends, like database servers in certain situations, the default health check is insufficient to determine whether a server is still healthy.
  • However, your load balancer is a single point of failure in these setups; if it goes down or gets overwhelmed with requests, it can cause high latency or downtime for your service.
  • A high availability (HA) setup is an infrastructure without a single point of failure
  • a static IP address that can be remapped from one server to another.
  • If that load balancer fails, your failover mechanism will detect it and automatically reassign the IP address to one of the passive servers.
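
To tie the pieces above together (frontend, ACL, use_backend, balance algorithms, health checks), here is a hedged sketch of a layer-7 haproxy.cfg; the backend names, addresses and ports are made up.

    frontend www
        bind *:80
        mode http                            # layer 7 proxying
        acl url_blog path_beg /blog          # true when the request path begins with /blog
        use_backend blog-backend if url_blog # route blog traffic to its own pool
        default_backend web-backend          # everything else

    backend blog-backend
        mode http
        balance roundrobin                   # selects servers in turns
        server blog1 10.0.0.11:80 check      # 'check' enables the default TCP health check

    backend web-backend
        mode http
        balance leastconn                    # server with the fewest connections
        server web1 10.0.0.21:80 check
        server web2 10.0.0.22:80 check
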
張 旭

Logging Architecture | Kubernetes - 0 views

  • Application logs can help you understand what is happening inside your application
  • container engines are designed to support logging.
  • The easiest and most adopted logging method for containerized applications is writing to standard output and standard error streams.
  • In a cluster, logs should have a separate storage and lifecycle independent of nodes, pods, or containers. This concept is called cluster-level logging.
  • Cluster-level logging architectures require a separate backend to store, analyze, and query logs
  • Kubernetes does not provide a native storage solution for log data.
  • use kubectl logs --previous to retrieve logs from a previous instantiation of a container.
  • A container engine handles and redirects any output generated to a containerized application's stdout and stderr streams
  • The Docker JSON logging driver treats each line as a separate message.
  • By default, if a container restarts, the kubelet keeps one terminated container with its logs.
  • An important consideration in node-level logging is implementing log rotation, so that logs don't consume all available storage on the node
  • You can also set up a container runtime to rotate an application's logs automatically.
  • The two kubelet flags container-log-max-size and container-log-max-files can be used to configure the maximum size for each log file and the maximum number of files allowed for each container respectively.
  • The kubelet and container runtime do not run in containers.
  • On machines with systemd, the kubelet and container runtime write to journald. If systemd is not present, the kubelet and container runtime write to .log files in the /var/log directory.
  • System components inside containers always write to the /var/log directory, bypassing the default logging mechanism.
  • Kubernetes does not provide a native solution for cluster-level logging
  • Use a node-level logging agent that runs on every node.
  • implement cluster-level logging by including a node-level logging agent on each node.
  • the logging agent is a container that has access to a directory with log files from all of the application containers on that node.
  • the logging agent must run on every node, it is recommended to run the agent as a DaemonSet
  • Node-level logging creates only one agent per node and doesn't require any changes to the applications running on the node.
  • Containers write to stdout and stderr, but with no agreed format. A node-level agent collects these logs and forwards them for aggregation.
  • Each sidecar container prints a log to its own stdout or stderr stream.
  • It is not recommended to write log entries with different formats to the same log stream
  • writing logs to a file and then streaming them to stdout can double disk usage.
  • If you have an application that writes to a single file, it's recommended to set /dev/stdout as the destination
  • it's recommended to use stdout and stderr directly and leave rotation and retention policies to the kubelet.
  • Using a logging agent in a sidecar container can lead to significant resource consumption. Moreover, you won't be able to access those logs using kubectl logs because they are not controlled by the kubelet.
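
A sketch of the streaming-sidecar pattern mentioned above: the application writes to a file and a sidecar container tails that file to its own stdout, so kubectl logs and node-level agents can pick it up. The image and paths are illustrative, and, as noted above, this roughly doubles disk usage.

    apiVersion: v1
    kind: Pod
    metadata:
      name: app-with-log-sidecar
    spec:
      containers:
      - name: app                       # hypothetical app that only writes to a file
        image: busybox:1.36
        command: ["sh", "-c", "while true; do date >> /var/log/app/app.log; sleep 5; done"]
        volumeMounts:
        - name: logs
          mountPath: /var/log/app
      - name: log-streamer              # sidecar: tails the file to its own stdout
        image: busybox:1.36
        command: ["sh", "-c", "tail -n+1 -F /var/log/app/app.log"]
        volumeMounts:
        - name: logs
          mountPath: /var/log/app
      volumes:
      - name: logs
        emptyDir: {}
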
張 旭

Override Files - Configuration Language - Terraform by HashiCorp - 0 views

  • In both the required_version and required_providers settings, each override constraint entirely replaces the constraints for the same component in the original block.
  • If the base block and the override block both set required_version then the constraints in the base block are entirely ignored.
  • Terraform normally loads all of the .tf and .tf.json files within a directory and expects each one to define a distinct set of configuration objects.
  • If two files attempt to define the same object, Terraform returns an error.
  • a human-edited configuration file in the Terraform language native syntax could be partially overridden using a programmatically-generated file in JSON syntax.
  • Terraform has special handling of any configuration file whose name ends in _override.tf or _override.tf.json
  • Terraform initially skips these override files when loading configuration, and then afterwards processes each one in turn (in lexicographical order).
  • merges the override block contents into the existing object.
  • Over-use of override files hurts readability, since a reader looking only at the original files cannot easily see that some portions of those files have been overridden without consulting all of the override files that are present.
  • When using override files, use comments in the original files to warn future readers about which override files apply changes to each block.
  • A top-level block in an override file merges with a block in a normal configuration file that has the same block header.
  • Within a top-level block, an attribute argument within an override block replaces any argument of the same name in the original block.
  • Within a top-level block, any nested blocks within an override block replace all blocks of the same type in the original block.
  • The contents of nested configuration blocks are not merged.
  • If more than one override file defines the same top-level block, the overriding effect is compounded, with later blocks taking precedence over earlier blocks
  • The settings within terraform blocks are considered individually when merging.
  • If the required_providers argument is set, its value is merged on an element-by-element basis, which allows an override block to adjust the constraint for a single provider without affecting the constraints for other providers.
張 旭

Kubernetes Components | Kubernetes - 0 views

  • A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications
  • Every cluster has at least one worker node.
  • The control plane manages the worker nodes and the Pods in the cluster.
  • The control plane's components make global decisions about the cluster
  • Control plane components can be run on any machine in the cluster.
  • for simplicity, set up scripts typically start all control plane components on the same machine, and do not run user containers on this machine
  • The API server is the front end for the Kubernetes control plane.
  • kube-apiserver is designed to scale horizontally—that is, it scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances.
  • If your Kubernetes cluster uses etcd as its backing store, make sure you have a backup plan for that data.
  • watches for newly created Pods with no assigned node, and selects a node for them to run on.
  • Factors taken into account for scheduling decisions include: individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.
  • each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
  • Node controller
  • Job controller
  • Endpoints controller
  • Service Account & Token controllers
  • The cloud controller manager lets you link your cluster into your cloud provider's API, and separates out the components that interact with that cloud platform from components that only interact with your cluster.
  • If you are running Kubernetes on your own premises, or in a learning environment inside your own PC, the cluster does not have a cloud controller manager.
  • An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
  • The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.
  • The kubelet doesn't manage containers which were not created by Kubernetes.
  • kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
  • kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
  • kube-proxy uses the operating system packet filtering layer if there is one and it's available.
  • Kubernetes supports several container runtimes: Docker, containerd, CRI-O, and any implementation of the Kubernetes CRI (Container Runtime Interface).
  • Addons use Kubernetes resources (DaemonSet, Deployment, etc) to implement cluster features
  • namespaced resources for addons belong within the kube-system namespace.
  • all Kubernetes clusters should have cluster DNS,
  • Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services.
  • Containers started by Kubernetes automatically include this DNS server in their DNS searches.
  • Container Resource Monitoring records generic time-series metrics about containers in a central database, and provides a UI for browsing that data.
  • A cluster-level logging mechanism is responsible for saving container logs to a central log store with search/browsing interface.
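
On a typical kubeadm-based cluster, the control plane components and addons described above can be inspected with kubectl; the exact pod names vary by distribution.

    # Control plane components and addons (kube-apiserver, kube-controller-manager,
    # kube-scheduler, etcd, coredns, kube-proxy, ...) live in the kube-system namespace.
    kubectl get pods -n kube-system

    # Nodes, their roles and the container runtime each one reports.
    kubectl get nodes -o wide
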
張 旭

Monitor Node Health | Kubernetes - 0 views

  • Node Problem Detector is a daemon for monitoring and reporting about a node's health
  • Node Problem Detector collects information about node problems from various daemons and reports these conditions to the API server as NodeCondition and Event.
  • Node Problem Detector only supports file based kernel log. Log tools such as journald are not supported.
  • kubectl provides the most flexible management of Node Problem Detector.
  • run the Node Problem Detector in your cluster to monitor node health.
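
A hedged sketch of deploying Node Problem Detector with kubectl; the manifest URL follows the example referenced by the Kubernetes documentation, but treat it as illustrative and check the current docs.

    # Deploy the detector (verify the manifest path against the current documentation).
    kubectl apply -f https://k8s.io/examples/debug/node-problem-detector.yaml

    # Detected problems surface as NodeConditions and Events on the node object.
    kubectl describe node <node-name>
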
張 旭

How to Benchmark Performance of MySQL & MariaDB Using SysBench | Severalnines - 1 views

  • SysBench is a C binary which uses Lua scripts to execute benchmarks
  • thanks to support for parallelization in the Lua scripts, multiple queries can be executed in parallel
  • by default, benchmarks which cover most of the cases - OLTP workloads, read-only or read-write, primary key lookups and primary key updates.
  • SysBench is not a tool which you can use to tune configurations of your MySQL servers (unless you prepared Lua scripts with a custom workload or your workload happens to be very similar to the benchmark workloads that SysBench comes with)
  • what it is great for is comparing the performance of different hardware.
  • Every new server acquired should go through a warm-up period during which you will stress it to pinpoint potential hardware defects
  • by executing OLTP workload which overloads the server, or you can also use dedicated benchmarks for CPU, disk and memory.
  • bulk_insert.lua. This test can be used to benchmark the ability of MySQL to perform multi-row inserts.
  • All oltp_* scripts share a common table structure. First two of them (oltp_delete.lua and oltp_insert.lua) execute single DELETE and INSERT statements.
  • oltp_point_select, oltp_update_index and oltp_update_non_index. These will execute a subset of queries - primary key-based selects, index-based updates and non-index-based updates.
  • you can run different workload patterns using the same benchmark.
  • Warmup helps to identify “regular” throughput by executing the benchmark for a predefined time, allowing the cache, buffer pools, etc. to warm up.
  • By default SysBench will attempt to execute queries as fast as possible. To simulate slower traffic this option may be used. You can define here how many transactions should be executed per second.
  • SysBench gives you ability to generate different types of data distribution.
  • decide if SysBench should use prepared statements (as long as they are available in the given datastore - for MySQL it means PS will be enabled by default) or not.
  • sysbench ./sysbench/src/lua/oltp_read_write.lua  help
  • By default, SysBench will attempt to execute queries in explicit transaction. This way the dataset will stay consistent and not affected: SysBench will, for example, execute INSERT and DELETE on the same row, making sure the data set will not grow (impacting your ability to reproduce results).
  • specify error codes from MySQL which SysBench should ignore (and not kill the connection).
  • the two most popular benchmarks - OLTP read only and OLTP read/write.
  • 1 million rows will result in ~240 MB of data. Ten tables of 1,000,000 rows each equals roughly 2.4 GB
  • by default, SysBench looks for ‘sbtest’ schema which has to exist before you prepare the data set. You may have to create it manually.
  • pass ‘--histogram’ argument to SysBench
  • ~48 GB of data (20 tables, 10,000,000 rows each).
  • if you don’t understand why the performance was like it was, you may draw incorrect conclusions out of the benchmarks.
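
A sketch of preparing and running the OLTP read/write benchmark described above; the connection details, table counts and sizes are illustrative, and the sbtest schema must already exist on the target server.

    # Create the test tables and data set.
    sysbench oltp_read_write \
      --mysql-host=10.0.0.5 --mysql-user=sbtest --mysql-password=secret \
      --tables=10 --table-size=1000000 \
      prepare

    # Run the read/write benchmark for 5 minutes with 16 threads and a latency histogram.
    sysbench oltp_read_write \
      --mysql-host=10.0.0.5 --mysql-user=sbtest --mysql-password=secret \
      --tables=10 --table-size=1000000 \
      --threads=16 --time=300 --report-interval=1 --histogram \
      run

    # 'cleanup' with the same arguments drops the test tables afterwards.
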
張 旭

Overriding Auto Devops - 0 views

  • most customers need to modify the DevOps pipeline to suit their needs
  • include Auto Devops and override it.
  • include all of Auto Devops, just as if the Auto Devops checkbox were checked for the project
  • skips for all the scans, as a way of speeding up the build process while working on the CI configuration
  • The Auto Devops test job, which uses Herokuish for testing, does not rely on the Docker image that’s generated during the Build job
  • moving the Test job to the Build stage to speed things along
  • Literally any part of Auto Devops can be overridden in your own CI configuration.
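
A sketch of a .gitlab-ci.yml that includes all of Auto DevOps and then overrides part of it, as the article describes; the disable variable and the moved test job are illustrative, so check the Auto DevOps documentation for the exact job and variable names.

    include:
      - template: Auto-DevOps.gitlab-ci.yml

    variables:
      SAST_DISABLED: "true"   # example of skipping one scan while iterating on CI config

    # Redefine the Auto DevOps 'test' job so it runs in the build stage instead,
    # speeding the pipeline up.
    test:
      stage: build
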
張 旭

Using Traefik as a reverse proxy | Blog Eleven Labs - 0 views

  • a proxy is associated with the client(s), while a reverse proxy is associated with the server(s); a reverse proxy is usually an internal-facing proxy used as a ‘front-end’ to control and protect access to a server on a private network.
  • the restart: always instruction will allow our reverse-proxy service to restart automatically, on its own.
  • add an [api] section to enable the dashboard and the API
  • double the $ symbols in order to escape them, since docker-compose otherwise tries to interpret $ as a variable reference.
  • stay consistent with names inside routers and middlewares.
  • providers.file
  • the service name is always in the form of [service name]@[provider]
  • write different routing rules for a service and how to generate SSL certificates
  • traefik.http.services.home.loadbalancer.server.port=8123 indicates that the service port I want to expose is 8123.
張 旭

Creating Highly Available clusters with kubeadm | Kubernetes - 0 views

  • If instead, you prefer to copy certs across control-plane nodes manually or using automation tools, please remove this flag and refer to Manual certificate distribution section below.
  • if you are using a kubeadm configuration file set the podSubnet field under the networking object of ClusterConfiguration.
  • manually copy the certificates from the primary control plane node to the joining control plane nodes.
  • Copy only the certificates in the above list. kubeadm will take care of generating the rest of the certificates with the required SANs for the joining control-plane instances.
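
A sketch of the automatic certificate-distribution path mentioned above; the load balancer endpoint, token, hash and certificate key are placeholders printed by kubeadm init.

    # First control plane node: a shared endpoint plus automatic certificate upload.
    sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:6443" --upload-certs

    # Additional control plane nodes: join with --control-plane and the certificate key
    # printed by the init command above.
    sudo kubeadm join LOAD_BALANCER_DNS:6443 --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --certificate-key <certificate-key>
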
張 旭

Creating a cluster with kubeadm | Kubernetes - 0 views

  • (Recommended) If you have plans to upgrade this single control-plane kubeadm cluster to high availability you should specify the --control-plane-endpoint to set the shared endpoint for all control-plane nodes
  • set the --pod-network-cidr to a provider-specific value.
  • kubeadm tries to detect the container runtime by using a list of well known endpoints.
  • kubeadm uses the network interface associated with the default gateway to set the advertise address for this particular control-plane node's API server. To use a different network interface, specify the --apiserver-advertise-address=<ip-address> argument to kubeadm init
  • Do not share the admin.conf file with anyone and instead grant users custom permissions by generating them a kubeconfig file using the kubeadm kubeconfig user command.
  • The token is used for mutual authentication between the control-plane node and the joining nodes. The token included here is secret. Keep it safe, because anyone with this token can add authenticated nodes to your cluster.
  • You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.
  • Take care that your Pod network must not overlap with any of the host networks
  • Make sure that your Pod network plugin supports RBAC, and so do any manifests that you use to deploy it.
  • You can install only one Pod network per cluster.
  • The cluster created here has a single control-plane node, with a single etcd database running on it.
  • The node-role.kubernetes.io/control-plane label is such a restricted label and kubeadm manually applies it using a privileged client after a node has been created.
  • By default, your cluster will not schedule Pods on the control plane nodes for security reasons.
  • kubectl taint nodes --all node-role.kubernetes.io/control-plane-
  • remove the node-role.kubernetes.io/control-plane:NoSchedule taint from any nodes that have it, including the control plane nodes, meaning that the scheduler will then be able to schedule Pods everywhere.
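
A sketch of the corresponding kubeadm workflow; the endpoint name is a placeholder and the Pod CIDR depends on which network add-on you plan to install (10.244.0.0/16 is just a common example).

    # Initialise a single control-plane cluster with a shared endpoint for future HA.
    sudo kubeadm init \
      --control-plane-endpoint "cluster-endpoint:6443" \
      --pod-network-cidr 10.244.0.0/16

    # Set up kubectl for your own user instead of sharing admin.conf.
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    # Optionally allow workloads on control plane nodes (e.g. single-node clusters).
    kubectl taint nodes --all node-role.kubernetes.io/control-plane-
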
張 旭

The for_each Meta-Argument - Configuration Language | Terraform | HashiCorp Developer - 0 views

  • A given resource or module block cannot use both count and for_each
  • The for_each meta-argument accepts a map or a set of strings, and creates an instance for each item in that map or set
  • each.key — The map key (or set member) corresponding to this instance.
  • each.value — The map value corresponding to this instance. (If a set was provided, this is the same as each.key.)
  • for_each keys cannot be the result of (or rely on the result of) impure functions, including uuid, bcrypt, or timestamp, as their evaluation is deferred during the main evaluation step.
  • The value used in for_each is used to identify the resource instance and will always be disclosed in UI output, which is why sensitive values are not allowed.
  • if you would like to call keys(local.map), where local.map is an object with sensitive values (but non-sensitive keys), you can create a value to pass to for_each with toset([for k,v in local.map : k]).
  • for_each can't refer to any resource attributes that aren't known until after a configuration is applied (such as a unique ID generated by the remote API when an object is created).
  • The for_each argument does not implicitly convert lists or tuples to sets.
  • Transform a multi-level nested structure into a flat list by using nested for expressions with the flatten function.
  • Instances are identified by a map key (or set member) from the value provided to for_each
  • Within nested provisioner or connection blocks, the special self object refers to the current resource instance, not the resource block as a whole.
  • Conversion from list to set discards the ordering of the items in the list and removes any duplicate elements.
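
A sketch of for_each over a map; the resource type, bucket names and tags are illustrative.

    resource "aws_s3_bucket" "logs" {
      for_each = {
        app = "example-app-logs"
        web = "example-web-logs"
      }

      bucket = each.value      # the map value for this instance
      tags = {
        Role = each.key        # instances are addressed as aws_s3_bucket.logs["app"], ["web"], ...
      }
    }

    # Lists are not converted implicitly; wrap them explicitly, which also drops
    # ordering and duplicate elements:
    #   for_each = toset(var.user_names)   # for a set, each.key == each.value
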
張 旭

Kubernetes YAML Generator - 0 views

shared by 張 旭 on 24 Mar 22