
Larvata / Group items tagged: worker


張 旭

Queues - Laravel - The PHP Framework For Web Artisans - 0 views

  • Laravel queues provide a unified API across a variety of different queue backends, such as Beanstalk, Amazon SQS, Redis, or even a relational database.
  • The queue configuration file is stored in config/queue.php
  • a synchronous driver that will execute jobs immediately (for local use)
  • A null queue driver is also included which discards queued jobs.
  • In your config/queue.php configuration file, there is a connections configuration option.
  • any given queue connection may have multiple "queues" which may be thought of as different stacks or piles of queued jobs.
  • each connection configuration example in the queue configuration file contains a queue attribute.
  • if you dispatch a job without explicitly defining which queue it should be dispatched to, the job will be placed on the queue that is defined in the queue attribute of the connection configuration
  • pushing jobs to multiple queues can be especially useful for applications that wish to prioritize or segment how jobs are processed
  • specify which queues it should process by priority.
  • If your Redis queue connection uses a Redis Cluster, your queue names must contain a key hash tag.
  • ensure all of the Redis keys for a given queue are placed into the same hash slot
  • all of the queueable jobs for your application are stored in the app/Jobs directory.
  • Job classes are very simple, normally containing only a handle method which is called when the job is processed by the queue.
  • we were able to pass an Eloquent model directly into the queued job's constructor. Because of the SerializesModels trait that the job is using, Eloquent models will be gracefully serialized and unserialized when the job is processing.
  • When the job is actually handled, the queue system will automatically re-retrieve the full model instance from the database.
  • The handle method is called when the job is processed by the queue
  • The arguments passed to the dispatch method will be given to the job's constructor
  • delay the execution of a queued job, you may use the delay method when dispatching a job.
  • dispatch a job immediately (synchronously), you may use the dispatchNow method.
  • When using this method, the job will not be queued and will be run immediately within the current process
  • specify a list of queued jobs that should be run in sequence.
  • Deleting jobs using the $this->delete() method will not prevent chained jobs from being processed. The chain will only stop executing if a job in the chain fails.
  • this does not push jobs to different queue "connections" as defined by your queue configuration file, but only to specific queues within a single connection.
  • To specify the queue, use the onQueue method when dispatching the job
  • To specify the connection, use the onConnection method when dispatching the job
  • defining the maximum number of attempts on the job class itself.
  • as an alternative to defining how many times a job may be attempted before it fails, you may define a time at which the job should timeout.
  • using the funnel method, you may limit jobs of a given type to only be processed by one worker at a time
  • using the throttle method, you may throttle a given type of job to only run 10 times every 60 seconds.
  • If an exception is thrown while the job is being processed, the job will automatically be released back onto the queue so it may be attempted again.
  • dispatch a Closure. This is great for quick, simple tasks that need to be executed outside of the current request cycle
  • When dispatching Closures to the queue, the Closure's code is cryptographically signed so it cannot be modified in transit.
  • Laravel includes a queue worker that will process new jobs as they are pushed onto the queue.
  • once the queue:work command has started, it will continue to run until it is manually stopped or you close your terminal
  • queue workers are long-lived processes and store the booted application state in memory.
  • they will not notice changes in your code base after they have been started.
  • during your deployment process, be sure to restart your queue workers.
  • customize your queue worker even further by only processing particular queues for a given connection
  • The --once option may be used to instruct the worker to only process a single job from the queue
  • The --stop-when-empty option may be used to instruct the worker to process all jobs and then exit gracefully.
  • Daemon queue workers do not "reboot" the framework before processing each job.
  • you should free any heavy resources after each job completes.
  • Since queue workers are long-lived processes, they will not pick up changes to your code without being restarted.
  • restart the workers during your deployment process.
  • php artisan queue:restart
  • The queue uses the cache to store restart signals
  • since the queue workers will die when the queue:restart command is executed, you should be running a process manager such as Supervisor to automatically restart the queue workers.
  • each queue connection defines a retry_after option. This option specifies how many seconds the queue connection should wait before retrying a job that is being processed.
  • The --timeout option specifies how long the Laravel queue master process will wait before killing off a child queue worker that is processing a job.
  • When jobs are available on the queue, the worker will keep processing jobs with no delay in between them.
  • While sleeping, the worker will not process any new jobs - the jobs will be processed after the worker wakes up again
  • the numprocs directive will instruct Supervisor to run 8 queue:work processes and monitor all of them, automatically restarting them if they fail.
  • Laravel includes a convenient way to specify the maximum number of times a job should be attempted.
  • define a failed method directly on your job class, allowing you to perform job specific clean-up when a failure occurs.
  • a great opportunity to notify your team via email or Slack.
  • php artisan queue:retry all
  • php artisan queue:flush
  • When injecting an Eloquent model into a job, it is automatically serialized before being placed on the queue and restored when the job is processed
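
A minimal sketch of the Supervisor setup the annotations mention (numprocs, automatic restarts); the paths, connection name, and process count are illustrative, not taken from the bookmarked page:

    [program:laravel-worker]
    process_name=%(program_name)s_%(process_num)02d
    command=php /var/www/artisan queue:work redis --sleep=3 --tries=3 --timeout=90
    autostart=true
    autorestart=true
    numprocs=8
    redirect_stderr=true
    stdout_logfile=/var/www/storage/logs/worker.log

After a deployment, php artisan queue:restart signals these long-lived workers to exit once their current job finishes, and Supervisor starts fresh processes with the new code.
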
張 旭

Queue Workers: How they work - Diving Laravel - 0 views

  • workers are defined as a simple PHP process that runs in the background, extracting jobs from a storage space and running them with respect to several configuration options.
  • have to manually restart the worker to reflect any code change you made in your application.
  • avoiding booting up the whole app on every job
  • instruct Laravel to create an instance of your application and start executing jobs; this instance will stay alive indefinitely, which means starting your Laravel application happens only once when the command was run, and the same instance will be used to execute your jobs
  • This will start an instance of the application, process a single job,
  • and then kill the script.
  • Using queue:listen ensures that a new instance of the app is created for every job, that means you don't have to manually restart the worker in case you made changes to your code, but also means more server resources will be consumed.
  • the queue:listen command runs the WorkCommand inside a loop
  • The connection this worker will be pulling jobs from
  • The queue the worker will use to find jobs
  •  
    "define workers as a simple PHP process that runs in the background with the purpose of extracting jobs from a storage space and run them with respect to several configuration options."
張 旭

Kubernetes Components | Kubernetes - 0 views

  • A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications
  • Every cluster has at least one worker node.
  • The control plane manages the worker nodes and the Pods in the cluster.
  • The control plane's components make global decisions about the cluster
  • Control plane components can be run on any machine in the cluster.
  • for simplicity, setup scripts typically start all control plane components on the same machine, and do not run user containers on this machine
  • The API server is the front end for the Kubernetes control plane.
  • kube-apiserver is designed to scale horizontally—that is, it scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances.
  • if your Kubernetes cluster uses etcd as its backing store, make sure you have a backup plan for that data.
  • watches for newly created Pods with no assigned node, and selects a node for them to run on.
  • Factors taken into account for scheduling decisions include: individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.
  • each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
  • Node controller
  • Job controller
  • Endpoints controller
  • Service Account & Token controllers
  • The cloud controller manager lets you link your cluster into your cloud provider's API, and separates out the components that interact with that cloud platform from components that only interact with your cluster.
  • If you are running Kubernetes on your own premises, or in a learning environment inside your own PC, the cluster does not have a cloud controller manager.
  • An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
  • The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.
  • The kubelet doesn't manage containers which were not created by Kubernetes.
  • kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
  • kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
  • kube-proxy uses the operating system packet filtering layer if there is one and it's available.
  • Kubernetes supports several container runtimes: Docker, containerd, CRI-O, and any implementation of the Kubernetes CRI (Container Runtime Interface).
  • Addons use Kubernetes resources (DaemonSet, Deployment, etc) to implement cluster features
  • namespaced resources for addons belong within the kube-system namespace.
  • all Kubernetes clusters should have cluster DNS,
  • Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services.
  • Containers started by Kubernetes automatically include this DNS server in their DNS searches.
  • Container Resource Monitoring records generic time-series metrics about containers in a central database, and provides a UI for browsing that data.
  • A cluster-level logging mechanism is responsible for saving container logs to a central log store with search/browsing interface.
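
A quick way to see most of these components on a running cluster (a sketch; the exact pod names vary by distribution and setup):

    kubectl get nodes -o wide                  # control plane and worker nodes
    kubectl get pods -n kube-system -o wide    # kube-apiserver, etcd, scheduler, controller-manager, kube-proxy, DNS addon
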
snow9816

What is the difference between the worker and prefork Apache MPM modules? - 1 views

  • one approach uses Multi-Thread (multi-threading); the other is Pre-forking (pre-spawned processes)
  • the Multi-Thread approach is how the worker module operates and suits multi-core CPUs, while Pre-Forking is how the prefork module runs and suits environments with multiple physical CPUs
  •  
    the Multi-Thread approach is how the worker module operates and suits multi-core CPUs, while Pre-Forking is how the prefork module runs and suits environments with multiple physical CPUs
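
A sketch of how you might check which MPM is active and tune the worker MPM; the directive values below are illustrative defaults, not recommendations from the bookmarked page:

    apachectl -V | grep -i mpm      # show the compiled-in / loaded MPM

    # example worker MPM tuning block (e.g. mods-available/mpm_worker.conf on Debian-style layouts)
    <IfModule mpm_worker_module>
        StartServers             2
        MinSpareThreads         25
        MaxSpareThreads         75
        ThreadsPerChild         25
        MaxRequestWorkers      150
    </IfModule>
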
張 旭

Networking with overlay networks | Docker Documentation - 0 views

  • The manager host will function as both a manager and a worker, which means it can both run service tasks and manage the swarm.
  • connected together using an overlay network called ingress
  • each of them now has an overlay network called ingress and a bridge network called docker_gwbridge
  • The docker_gwbridge connects the ingress network to the Docker host’s network interface so that traffic can flow to and from swarm managers and workers
  • recommended that you use separate overlay networks for each application or group of applications which will work together
  • You don’t need to create the overlay network on the other nodes, because it will be automatically created when one of those nodes starts running a service task which requires it.
  • The default publish mode of ingress, which is used when you do not specify a mode for the --publish flag, means that if you browse to port 80 on manager, worker-1, or worker-2, you will be connected to port 80 on one of the 5 service tasks, even if no tasks are currently running on the node you browse to.
  • Even though overlay networks are automatically created on swarm worker nodes as needed, they are not automatically removed.
  • The -dit flags mean to start the container detached (in the background), interactive (with the ability to type into it), and with a TTY (so you can see the input and output).
  • alpine containers running ash, which is Alpine’s default shell rather than bash
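
A condensed sketch of the workflow these annotations describe; the network and service names are made up, and the --attachable flag is only needed for the standalone-container step:

    docker swarm init --advertise-addr <MANAGER-IP>            # creates ingress and docker_gwbridge on the manager
    docker network create -d overlay --attachable my-overlay   # one overlay per application group, as recommended
    docker service create --name my-web --network my-overlay --replicas 5 -p 80:80 nginx
    docker run -dit --name probe --network my-overlay alpine ash   # standalone container joining the overlay
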
張 旭

Boosting your kubectl productivity ♦︎ Learnk8s - 0 views

  • kubectl is your cockpit to control Kubernetes.
  • kubectl is a client for the Kubernetes API
  • Kubernetes API is an HTTP REST API.
  • This API is the real Kubernetes user interface.
  • Kubernetes is fully controlled through this API
  • every Kubernetes operation is exposed as an API endpoint and can be executed by an HTTP request to this endpoint.
  • the main job of kubectl is to carry out HTTP requests to the Kubernetes API
  • Kubernetes maintains an internal state of resources, and all Kubernetes operations are CRUD operations on these resources.
  • Kubernetes is a fully resource-centred system
  • Kubernetes API reference is organised as a list of resource types with their associated operations.
  • This is how kubectl works for all commands that interact with the Kubernetes cluster.
  • kubectl simply makes HTTP requests to the appropriate Kubernetes API endpoints.
  • it's totally possible to control Kubernetes with a tool like curl by manually issuing HTTP requests to the Kubernetes API.
  • Kubernetes consists of a set of independent components that run as separate processes on the nodes of a cluster.
  • components on the master nodes
  • Storage backend: stores resource definitions (usually etcd is used)
  • API server: provides Kubernetes API and manages storage backend
  • Controller manager: ensures resource statuses match specifications
  • Scheduler: schedules Pods to worker nodes
  • component on the worker nodes
  • Kubelet: manages execution of containers on a worker node
  • triggers the ReplicaSet controller, which is a sub-process of the controller manager.
  • the scheduler, which watches for Pod definitions that are not yet scheduled to a worker node.
  • creating and updating resources in the storage backend on the master node.
  • The kubelet of the worker node your ReplicaSet Pods have been scheduled to instructs the configured container runtime (which may be Docker) to download the required container images and run the containers.
  • Kubernetes components (except the API server and the storage backend) work by watching for resource changes in the storage backend and manipulating resources in the storage backend.
  • However, these components do not access the storage backend directly, but only through the Kubernetes API.
    • 張 旭
       
      Excellent: the components all talk to each other through API calls, a model of good microservice behaviour.
  • double usage of the Kubernetes API for internal components as well as for external users is a fundamental design concept of Kubernetes.
  • All other Kubernetes components and users read, watch, and manipulate the state (i.e. resources) of Kubernetes through the Kubernetes API
  • The storage backend stores the state (i.e. resources) of Kubernetes.
  • command completion is a shell feature that works by means of a completion script.
  • A completion script is a shell script that defines the completion behaviour for a specific command. Sourcing a completion script enables completion for the corresponding command.
  • kubectl completion zsh
  • /etc/bash_completion.d directory (create it, if it doesn't exist)
  • source <(kubectl completion bash)
  • source <(kubectl completion zsh)
  • autoload -Uz compinit; compinit
  • the API reference, which contains the full specifications of all resources.
  • kubectl api-resources
  • displays the resource names in their plural form (e.g. deployments instead of deployment). It also displays the shortname (e.g. deploy) for those resources that have one. Don't worry about these differences. All of these name variants are equivalent for kubectl.
  • .spec
  • custom columns output format comes in. It lets you freely define the columns and the data to display in them. You can choose any field of a resource to be displayed as a separate column in the output
  • kubectl get pods -o custom-columns='NAME:metadata.name,NODE:spec.nodeName'
  • kubectl explain pod.spec.
  • kubectl explain pod.metadata.
  • browse the resource specifications and try it out with any fields you like!
  • JSONPath is a language to extract data from JSON documents (it is similar to XPath for XML).
  • with kubectl explain, only a subset of the JSONPath capabilities is supported
  • Many fields of Kubernetes resources are lists, and this operator allows you to select items of these lists. It is often used with a wildcard as [*] to select all items of the list.
  • kubectl get pods -o custom-columns='NAME:metadata.name,IMAGES:spec.containers[*].image'
  • a Pod may contain more than one container.
  • The availability zones for each node are obtained through the special failure-domain.beta.kubernetes.io/zone label.
  • kubectl get nodes -o yaml kubectl get nodes -o json
  • The default kubeconfig file is ~/.kube/config
  • with multiple clusters, then you have connection parameters for multiple clusters configured in your kubeconfig file.
  • Within a cluster, you can set up multiple namespaces (a namespace is a kind of "virtual" cluster within a physical cluster)
  • overwrite the default kubeconfig file with the --kubeconfig option for every kubectl command.
  • Namespace: the namespace to use when connecting to the cluster
  • a one-to-one mapping between clusters and contexts.
  • When kubectl reads a kubeconfig file, it always uses the information from the current context.
  • just change the current context in the kubeconfig file
  • to switch to another namespace in the same cluster, you can change the value of the namespace element of the current context
  • kubectl also provides the --cluster, --user, --namespace, and --context options that allow you to overwrite individual elements and the current context itself, regardless of what is set in the kubeconfig file.
  • for switching between clusters and namespaces is kubectx.
  • kubectl config get-contexts
  • just have to download the shell scripts named kubectl-ctx and kubectl-ns to any directory in your PATH and make them executable (for example, with chmod +x)
  • kubectl proxy
  • kubectl get roles
  • kubectl get pod
  • Kubectl plugins are distributed as simple executable files with a name of the form kubectl-x. The prefix kubectl- is mandatory,
  • To install a plugin, you just have to copy the kubectl-x file to any directory in your PATH and make it executable (for example, with chmod +x)
  • krew itself is a kubectl plugin
  • check out the kubectl-plugins GitHub topic
  • The executable can be of any type, a Bash script, a compiled Go program, a Python script, it really doesn't matter. The only requirement is that it can be directly executed by the operating system.
  • kubectl plugins can be written in any programming or scripting language.
  • you can write more sophisticated plugins with real programming languages, for example, using a Kubernetes client library. If you use Go, you can also use the cli-runtime library, which exists specifically for writing kubectl plugins.
  • a kubeconfig file consists of a set of contexts
  • changing the current context means changing the cluster, if you have only a single context per cluster.
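
A few of the commands from the article collected in one place; the completion lines assume bash or zsh as noted, and names such as my-context and staging are placeholders:

    source <(kubectl completion bash)        # bash; for zsh use: source <(kubectl completion zsh)
    kubectl api-resources                    # list resource types, short names, and API groups
    kubectl explain pod.spec.containers      # browse the resource specification
    kubectl get pods -o custom-columns='NAME:metadata.name,IMAGES:spec.containers[*].image'
    kubectl config get-contexts              # clusters/users/namespaces from the kubeconfig
    kubectl config use-context my-context    # switch the current context
    kubectl config set-context --current --namespace=staging   # switch namespace within the current context
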
張 旭

Production environment | Kubernetes - 0 views

  • to promote an existing cluster for production use
  • Separating the control plane from the worker nodes.
  • Having enough worker nodes available
  • You can use role-based access control (RBAC) and other security mechanisms to make sure that users and workloads can get access to the resources they need, while keeping workloads, and the cluster itself, secure. You can set limits on the resources that users and workloads can access by managing policies and container resources.
  • you need to plan how to scale to relieve increased pressure from more requests to the control plane and worker nodes or scale down to reduce unused resources.
  • Managed control plane: Let the provider manage the scale and availability of the cluster's control plane, as well as handle patches and upgrades.
  • The simplest Kubernetes cluster has the entire control plane and worker node services running on the same machine.
  • You can deploy a control plane using tools such as kubeadm, kops, and kubespray.
  • Secure communications between control plane services are implemented using certificates.
  • Certificates are automatically generated during deployment or you can generate them using your own certificate authority.
  • Separate and backup etcd service: The etcd services can either run on the same machines as other control plane services or run on separate machines
  • Create multiple control plane systems: For high availability, the control plane should not be limited to a single machine
  • Some deployment tools set up the Raft consensus algorithm to do leader election of Kubernetes services. If the primary goes away, another service elects itself and takes over.
  • Groups of zones are referred to as regions.
  • if you installed with kubeadm, there are instructions to help you with Certificate Management and Upgrading kubeadm clusters.
  • Production-quality workloads need to be resilient and anything they rely on needs to be resilient (such as CoreDNS).
  • Add nodes to the cluster: If you are managing your own cluster you can add nodes by setting up your own machines and either adding them manually or having them register themselves to the cluster’s apiserver.
  • Set up node health checks: For important workloads, you want to make sure that the nodes and pods running on those nodes are healthy.
  • Authentication: The apiserver can authenticate users using client certificates, bearer tokens, an authenticating proxy, or HTTP basic auth.
  • Authorization: When you set out to authorize your regular users, you will probably choose between RBAC and ABAC authorization.
  • Role-based access control (RBAC): Lets you assign access to your cluster by allowing specific sets of permissions to authenticated users. Permissions can be assigned for a specific namespace (Role) or across the entire cluster (ClusterRole).
  • Attribute-based access control (ABAC): Lets you create policies based on resource attributes in the cluster and will allow or deny access based on those attributes.
  • Set limits on workload resources
  • Set namespace limits: Set per-namespace quotas on things like memory and CPU
  • Prepare for DNS demand: If you expect workloads to massively scale up, your DNS service must be ready to scale up as well.
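
A sketch of the per-namespace limits mentioned above, assuming a namespace called team-a; the figures are arbitrary examples, not recommendations:

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: team-a-quota
      namespace: team-a
    spec:
      hard:
        requests.cpu: "4"        # total CPU all pods in the namespace may request
        requests.memory: 8Gi
        limits.cpu: "8"
        limits.memory: 16Gi
        pods: "50"
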
張 旭

Manage nodes in a swarm | Docker Documentation - 0 views

  • Drain means the scheduler doesn’t assign new tasks to the node. The scheduler shuts down any existing tasks and schedules them on an available node.
  • Reachable means the node is a manager node participating in the Raft consensus quorum. If the leader node becomes unavailable, the node is eligible for election as the new leader.
  • If a manager node becomes unavailable, you should either join a new manager node to the swarm or promote a worker node to be a manager.
  • docker node inspect self --pretty
  • docker node update --availability drain node
  • use node labels in service constraints
  • The labels you set for nodes using docker node update apply only to the node entity within the swarm
  • node labels can be used to limit critical tasks to nodes that meet certain requirements
  • promote a worker node to the manager role
  • demote a manager node to the worker role
  • If the last manager node leaves the swarm, the swarm becomes unavailable requiring you to take disaster recovery measures.
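
The node-management commands referenced above, gathered into one hedged sketch (worker-1 and manager-2 are placeholder node names):

    docker node inspect self --pretty
    docker node update --availability drain worker-1     # stop scheduling new tasks, reschedule existing ones
    docker node update --availability active worker-1    # bring the node back
    docker node update --label-add type=queue worker-1   # label for use in service constraints
    docker node promote worker-1                         # worker -> manager
    docker node demote manager-2                         # manager -> worker
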
張 旭

2. Swoole Structure · swooletw/laravel-swoole Wiki - 0 views

  • Laravel application will exist in Worker processes.
  • means Laravel can be stored and kept in memory.
  • Laravel application will exist in the memory and only initialize at the first time. Any changes you did to Laravel will be kept unless you reset them by yourself.
  •  
    "Laravel application will exist in Worker processes. "
張 旭

Swarm mode key concepts | Docker Documentation - 0 views

  • The cluster management and orchestration features embedded in the Docker Engine are built using SwarmKit.
  • Docker engines participating in a cluster are running in swarm mode
  • A swarm is a cluster of Docker engines, or nodes, where you deploy services
  • When you run Docker without using swarm mode, you execute container commands.
  • When you run the Docker in swarm mode, you orchestrate services.
  • You can run swarm services and standalone containers on the same Docker instances.
  • A node is an instance of the Docker engine participating in the swarm
  • You can run one or more nodes on a single physical computer or cloud server
  • To deploy your application to a swarm, you submit a service definition to a manager node.
  • Manager nodes also perform the orchestration and cluster management functions required to maintain the desired state of the swarm.
  • Manager nodes elect a single leader to conduct orchestration tasks.
  • Worker nodes receive and execute tasks dispatched from manager nodes.
  • service is the definition of the tasks to execute on the worker nodes
  • When you create a service, you specify which container image to use and which commands to execute inside running containers.
  • replicated services model, the swarm manager distributes a specific number of replica tasks among the nodes based upon the scale you set in the desired state.
  • global services, the swarm runs one task for the service on every available node in the cluster.
  • A task carries a Docker container and the commands to run inside the container
  • Manager nodes assign tasks to worker nodes according to the number of replicas set in the service scale.
  • Once a task is assigned to a node, it cannot move to another node
  • If you do not specify a port, the swarm manager assigns the service a port in the 30000-32767 range.
  • External components, such as cloud load balancers, can access the service on the PublishedPort of any node in the cluster whether or not the node is currently running the task for the service.
  • Swarm mode has an internal DNS component that automatically assigns each service in the swarm a DNS entry.
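
A minimal sketch of the replicated versus global service models described above; images, names, and ports are illustrative:

    docker service create --name web --replicas 5 --publish published=8080,target=80 nginx
    docker service create --name node-exporter --mode global prom/node-exporter   # one task per available node
    docker service ls
    docker service ps web        # which node each task was assigned to
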
crazylion lee

Homepage | Celery: Distributed Task Queue - 0 views

  •  
    "Celery is an asynchronous task queue/job queue based on distributed message passing. It is focused on real-time operation, but supports scheduling as well. The execution units, called tasks, are executed concurrently on a single or more worker servers using multiprocessing, Eventlet, or gevent. Tasks can execute asynchronously (in the background) or synchronously (wait until ready)."
張 旭

Understanding the Nginx Configuration File Structure and Configuration Contexts | Digit... - 0 views

  • discussing the basic structure of an Nginx configuration file along with some guidelines on how to design your files
  • /etc/nginx/nginx.conf
  • In Nginx parlance, the areas that these brackets define are called "contexts" because they contain configuration details that are separated according to their area of concern
  • contexts can be layered within one another
  • if a directive is valid in multiple nested scopes, a declaration in a broader context will be passed on to any child contexts as default values.
  • The children contexts can override these values at will
  • Nginx will error out on reading a configuration file with directives that are declared in the wrong context.
  • The most general context is the "main" or "global" context
  • Any directive that exist entirely outside of these blocks is said to inhabit the "main" context
  • The main context represents the broadest environment for Nginx configuration.
  • The "events" context is contained within the "main" context. It is used to set global options that affect how Nginx handles connections at a general level.
  • Nginx uses an event-based connection processing model, so the directives defined within this context determine how worker processes should handle connections.
  • the connection processing method is automatically selected based on the most efficient choice that the platform has available
  • a worker will only take a single connection at a time
  • When configuring Nginx as a web server or reverse proxy, the "http" context will hold the majority of the configuration.
  • The http context is a sibling of the events context, so they should be listed side-by-side, rather than nested
  • fine-tune the TCP keep alive settings (keepalive_disable, keepalive_requests, and keepalive_timeout)
  • The "server" context is declared within the "http" context.
  • multiple declarations
  • each instance defines a specific virtual server to handle client requests
  • Each client request will be handled according to the configuration defined in a single server context, so Nginx must decide which server context is most appropriate based on details of the request.
  • listen: The ip address / port combination that this server block is designed to respond to.
  • server_name: This directive is the other component used to select a server block for processing.
  • "Host" header
  • configure files to try to respond to requests (try_files)
  • issue redirects and rewrites (return and rewrite)
  • set arbitrary variables (set)
  • Location contexts share many relational qualities with server contexts
  • multiple location contexts can be defined, each location is used to handle a certain type of client request, and each location is selected by virtue of matching the location definition against the client request through a selection algorithm
  • Location blocks live within server contexts and, unlike server blocks, can be nested inside one another.
  • While server contexts are selected based on the requested IP address/port combination and the host name in the "Host" header, location blocks further divide up the request handling within a server block by looking at the request URI
  • The request URI is the portion of the request that comes after the domain name or IP address/port combination.
  • New directives at this level allow you to reach locations outside of the document root (alias), mark the location as only internally accessible (internal), and proxy to other servers or locations (using http, fastcgi, scgi, and uwsgi proxying).
  • These can then be used to do A/B testing by providing different content to different hosts.
  • configures Perl handlers for the location they appear in
  • set the value of a variable depending on the value of another variable
  • used to map MIME types to the file extensions that should be associated with them.
  • this context defines a named pool of servers that Nginx can then proxy requests to
  • The upstream context should be placed within the http context, outside of any specific server contexts.
  • The upstream context can then be referenced by name within server or location blocks to pass requests of a certain type to the pool of servers that have been defined.
  • function as a high performance mail proxy server
  • The mail context is defined within the "main" or "global" context (outside of the http context).
  • Nginx has the ability to redirect authentication requests to an external authentication server
  • the if directive in Nginx will execute the instructions contained if a given test returns "true".
  • Since Nginx will test conditions of a request with many other purpose-made directives, if should not be used for most forms of conditional execution.
  • The limit_except context is used to restrict the use of certain HTTP methods within a location context.
  • The result of the above example is that any client can use the GET and HEAD verbs, but only clients coming from the 192.168.1.1/24 subnet are allowed to use other methods.
  • Many directives are valid in more than one context
  • it is usually best to declare directives in the highest context to which they are applicable, and override them in lower contexts as necessary.
  • Declaring at higher levels provides you with a sane default
  • Nginx already engages in a well-documented selection algorithm for things like selecting server blocks and location blocks.
  • instead of relying on rewrites to get a user supplied request into the format that you would like to work with, you should try to set up two blocks for the request, one of which represents the desired method, and the other that catches messy requests and redirects (and possibly rewrites) them to your correct block.
  • incorrect requests can get by with a redirect rather than a rewrite, which should execute with lower overhead.
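
A bare-bones sketch of the context nesting the article walks through (main, events, http, upstream, server, location); the addresses, names, and paths are placeholders:

    worker_processes auto;              # main (global) context

    events {
        worker_connections 1024;        # events context: connection handling per worker
    }

    http {
        upstream app_pool {             # upstream context: named pool that can be proxied to
            server 10.0.0.10:8080;
            server 10.0.0.11:8080;
        }

        server {                        # one virtual server, selected by listen + server_name
            listen 80;
            server_name example.com;

            location / {                # location context, selected by the request URI
                proxy_pass http://app_pool;
            }

            location /static/ {
                alias /var/www/static/;
            }
        }
    }
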
張 旭

Open source load testing tool review 2020 - 0 views

  • Hey is a simple tool, written in Go, with good performance and the most common features you'll need to run simple static URL tests.
  • Hey supports HTTP/2, which neither Wrk nor Apachebench does
  • Apachebench is very fast, so often you will not need more than one CPU core to generate enough traffic
  • Hey has rate limiting, which can be used to run fixed-rate tests.
  • Vegeta was designed to be run on the command line; it reads from stdin a list of HTTP transactions to generate, and sends results in binary format to stdout,
  • Vegeta is a really strong tool that caters to people who want a tool to test simple, static URLs (perhaps API end points) but also want a bit more functionality.
  • Vegeta can even be used as a Golang library/package if you want to create your own load testing tool.
  • Wrk is so damn fast
  • being fast and measuring correctly is about all that Wrk does
  • k6 is scriptable in plain Javascript
  • k6 is average or better. In some categories (documentation, scripting API, command line UX) it is outstanding.
  • Jmeter is a huge beast compared to most other tools.
  • Siege is a simple tool, similar to e.g. Apachebench in that it has no scripting and is primarily used when you want to hit a single, static URL repeatedly.
  • A good way of testing the testing tools is to not test them on your code, but on some third-party thing that is sure to be very high-performing.
  • use a tool like e.g. top to keep track of Nginx CPU usage while testing. If you see just one process, and see it using close to 100% CPU, it means you could be CPU-bound on the target side.
  • If you see multiple Nginx processes but only one is using a lot of CPU, it means your load testing tool is only talking to that particular worker process.
  • Network delay is also important to take into account as it sets an upper limit on the number of requests per second you can push through.
  • If, say, the Nginx default page requires a transfer of 250 bytes to load, it means that if the servers are connected via a 100 Mbit/s link, the theoretical max RPS rate would be around 100,000,000 divided by 8 (bits per byte) divided by 250 => 100M/2000 = 50,000 RPS. Though that is a very optimistic calculation - protocol overhead will make the actual number a lot lower so in the case above I would start to get worried bandwidth was an issue if I saw I could push through max 30,000 RPS, or something like that.
  • Wrk managed to push through over 50,000 RPS and that made 8 Nginx workers on the target system consume about 600% CPU.
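
Roughly equivalent one-liners for the tools reviewed; the target URL, rates, and durations are placeholders, and the flags shown are each tool's common options rather than anything from the review itself:

    hey -z 30s -c 50 http://target.example/                       # Hey: 50 concurrent workers for 30 seconds
    ab -n 100000 -c 100 http://target.example/                    # Apachebench
    echo "GET http://target.example/" | vegeta attack -rate=500 -duration=30s | vegeta report
    wrk -t4 -c100 -d30s http://target.example/
    k6 run script.js                                              # k6: scenario scripted in JavaScript
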
張 旭

Controllers | Kubernetes - 0 views

  • In robotics and automation, a control loop is a non-terminating loop that regulates the state of a system.
  • controllers are control loops that watch the state of your cluster, then make or request changes where needed
  • Each controller tries to move the current cluster state closer to the desired state.
  • A controller tracks at least one Kubernetes resource type.
  • The controller(s) for that resource are responsible for making the current state come closer to that desired state.
  • in Kubernetes, a controller will send messages to the API server that have useful side effects.
  • Built-in controllers manage state by interacting with the cluster API server.
  • By contrast with Job, some controllers need to make changes to things outside of your cluster.
  • the controller makes some change to bring about your desired state, and then reports current state back to your cluster's API server. Other control loops can observe that reported data and take their own actions.
  • As long as the controllers for your cluster are running and able to make useful changes, it doesn't matter if the overall state is stable or not.
  • Kubernetes uses lots of controllers that each manage a particular aspect of cluster state.
  • a particular control loop (controller) uses one kind of resource as its desired state, and has a different kind of resource that it manages to make that desired state happen.
  • There can be several controllers that create or update the same kind of object.
  • you can have Deployments and Jobs; these both create Pods. The Job controller does not delete the Pods that your Deployment created, because there is information (labels) the controllers can use to tell those Pods apart.
  • Kubernetes comes with a set of built-in controllers that run inside the kube-controller-manager.
  •  
    "In robotics and automation, a control loop is a non-terminating loop that regulates the state of a system. "
張 旭

Manage swarm security with public key infrastructure (PKI) | Docker Documentation - 0 views

  • The nodes in a swarm use mutual Transport Layer Security (TLS) to authenticate, authorize, and encrypt the communications with other nodes in the swarm.
  • By default, the manager node generates a new root Certificate Authority (CA) along with a key pair, which are used to secure communications with other nodes that join the swarm.
  • The manager node also generates two tokens to use when you join additional nodes to the swarm: one worker token and one manager token.
  • Each time a new node joins the swarm, the manager issues a certificate to the node
  • By default, each node in the swarm renews its certificate every three months.
  • a cluster CA key or a manager node is compromised, you can rotate the swarm root CA so that none of the nodes trust certificates signed by the old root CA anymore.
  •  
    "The nodes in a swarm use mutual Transport Layer Security (TLS) to authenticate, authorize, and encrypt the communications with other nodes in the swarm."
張 旭

Deploy a registry server | Docker Documentation - 0 views

  • By default, secrets are mounted into a service at /run/secrets/<secret-name>
  • docker secret create
  • If you use a distributed storage driver, such as Amazon S3, you can use a fully replicated service. Each worker can write to the storage back-end without causing write conflicts.
  • You can access the service on port 443 of any swarm node. Docker sends the requests to the node which is running the service.
  • --publish published=443,target=443
  • The most important aspect is that a load balanced cluster of registries must share the same resources
  • S3 or Azure, they should be accessing the same resource and share an identical configuration.
  • you must make sure you are properly sending the X-Forwarded-Proto, X-Forwarded-For, and Host headers to their “client-side” values. Failure to do so usually makes the registry issue redirects to internal hostnames or downgrade from https to http.
  • A properly secured registry should return 401 when the “/v2/” endpoint is hit without credentials
  • registries should always implement access restrictions.
  • REGISTRY_AUTH=htpasswd
  • REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd
  • The registry also supports delegated authentication which redirects users to a specific trusted token server. This approach is more complicated to set up, and only makes sense if you need to fully configure ACLs and need more control over the registry’s integration into your global authorization and authentication systems.
  •  
    "You can access the service on port 443 of any swarm node. Docker sends the requests to the node which is running the service. "
張 旭

Deploy services to a swarm | Docker Documentation - 0 views

  • Swarm services use a declarative model, which means that you define the desired state of the service, and rely upon Docker to maintain this state.
  • To create a single-replica service with no extra configuration, you only need to supply the image name.
  • A service can be in a pending state if its image is unavailable
  • If your image is available on a private registry which requires login, use the --with-registry-auth flag
  • When you update a service, Docker stops its containers and restarts them with the new configuration.
  • When updating an existing service, the flag is --publish-add. There is also a --publish-rm flag to remove a port that was previously published.
  • To update the command an existing service runs, you can use the --args flag.
  • force the service to use a specific version of the image
  • If the manager can’t resolve the tag to a digest, each worker node is responsible for resolving the tag to a digest, and different nodes may use different versions of the image.
  • After you create a service, its image is never updated unless you explicitly run docker service update with the --image flag as described below.
  • When you run service update with the --image flag, the swarm manager queries Docker Hub or your private Docker registry for the digest the tag currently points to and updates the service tasks to use that digest.
  • You can publish a service task’s port directly on the swarm node where that service is running.
  • You can rely on the routing mesh. When you publish a service port, the swarm makes the service accessible at the target port on every node, regardless of whether there is a task for the service running on that node or not.
  • To publish a service’s ports externally to the swarm, use the --publish <PUBLISHED-PORT>:<SERVICE-PORT> flag.
  • published port on every swarm node
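
Hedged examples of the create/update flow described above; image names, ports, and the registry URL are placeholders:

    docker service create --name my_web --replicas 3 --publish published=8080,target=80 nginx
    docker service update --image nginx:1.25 my_web                    # resolve the tag to a digest and roll the tasks
    docker service update --publish-add published=8443,target=443 my_web
    docker service update --args "ping docker.com" my_web              # change the command the service runs
    docker service create --with-registry-auth --name app registry.example.com/team/app:1.0
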
張 旭

GitLab Auto DevOps explained: automatic deployment, without even a config file?! | 五倍紅寶石・專業程式教育 - 0 views

  • a K8S Cluster; Auto DevOps will deploy the site to this Cluster
  • a wildcard DNS entry is needed so that sites deployed in this environment get a Domain name
  • a GitLab Runner that can run Docker; it is the one that executes the CI / CD pipeline.
  • Auto DevOps is really just an official, pre-written gitlab-ci.yml: in a project with Auto DevOps enabled, if no gitlab-ci.yml file is found, the official gitlab-ci.yml is used to run the CI / CD pipeline.
  • A Pod is the smallest deployable unit in K8S; a Pod consists of one or more Containers, and the Containers within the same Pod share network resources with each other.
  • Every Pod has its own yaml file describing the Image the Pod will use, the Ports it exposes, and other such information.
  • Nodes come in two kinds: Worker Nodes and Master Nodes
  • Helm uses parameters and templates so that a template can be reused by changing only its parameters.
  • To get CI / CD we put .gitlab-ci.yml in the project root; GitLab generates the CI/CD Pipeline from the settings in .gitlab-ci.yml. Each Pipeline may contain several Jobs, so a GitLab Runner is needed to execute those Jobs and report the results back to GitLab so it knows whether each Job ran successfully.
  • Jobs such as packaging the project into a Docker Image, or helm operations, are executed inside Containers
  • A CI/CD Pipeline is made up of stages and jobs; stages run in order, and the next stage only starts after the previous one completes.
  • each stage contains one or more Jobs
  • Auto Devops also makes heavy use of this kind of job that runs inside a designated Container.
  • can pass health checks
  • if the project is private, also watch out for permission issues when using the Container Registry
  • the wildcard DNS you applied for
  • Auto Devops also offers options that can be customised to some degree just by setting environment variables
  • pay special attention to whether the namespace is set correctly, otherwise the data will not be found
  • with Auto Devops, if you want further customisation that cannot be achieved even by changing GitLab environment variables, you still have to go back to the .gitlab-ci.yml configuration file
  • package the Image with a Dockerfile in a Docker-in-Docker environment
  • deploy the chart to K8S with helm upgrade
  • GitLab CI environment variables come mainly from three sources, with priority from high to low: variables defined in the Settings > CI/CD UI, environment variables defined in gitlab_ci.yml, and GitLab's default environment variables
  • to package the project into a Docker Image you first need to add a Dockerfile to the project
  • the approach taken in Auto Devops is to package the project with the Image provided by herokuish
  • the docker command is not available in the Runner's environment, so a Docker Container is started and the work runs inside it, where the docker command can be used.
  • $CI_COMMIT_SHA and $CI_COMMIT_BEFORE_SHA are both GitLab default environment variables, holding the SHA of this commit and of the previous commit.
  • dind starts the docker daemon directly; in addition, dind automatically generates TLS certificates
  • to run Docker inside a Docker Container, the Host's Docker API is shared with the Container.
  • docker:stable contains the executables needed to run docker, including the program that starts docker (the docker daemon), but the Container's entrypoint is sh
  • docker:dind inherits from docker:stable; its entrypoint is the script that starts docker, and it also sets up the TLS certificates
  • the Container needs to reach the Docker API on the Host, but the failing connection was looking for http://docker:2375. Here dind is no longer used as a service; Docker runs directly inside it, so the connection should go through unix:///var/run/docker.sock. Changing the DOCKER_HOST environment variable from tcp://docker:2375 to an empty string, so that the docker daemon uses its default connection, makes it work.
  • auto-deploy preparation: run helm init to set up the helm project, run tiller in the background, and set the cluster namespace
  • auto-deploy deploy: use helm upgrade to deploy the chart to K8S, passing --set to supply the parameters injected into the template
  • set -x, so that each command is printed before it is executed.
  • use helm repo list to see which Chart Repositories are currently registered
  • helm fetch gitlab/auto-deploy-app --untar
  • nohup lets a job keep running even after you disconnect or log out of the system
  • when CI_APPLICATION_REPOSITORY is not explicitly set, the value of image_repository is the predefined environment variables CI_REGISTRY_IMAGE/CI_COMMIT_REF_SLUG
  • A:-B means use A if it is set, otherwise use B
  • the hardest part of studying Auto Devops is that so many tools are integrated together; it is hard to understand how they relate, and when something goes wrong you don't know where to start looking
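
A minimal sketch of partially customising Auto DevOps from your own .gitlab-ci.yml by including the official template and overriding variables; the variable shown is one example of the environment-variable toggles mentioned above, not an exhaustive setup:

    # .gitlab-ci.yml
    include:
      - template: Auto-DevOps.gitlab-ci.yml   # the official pipeline Auto DevOps runs when no .gitlab-ci.yml exists

    variables:
      POSTGRES_ENABLED: "false"               # customising behaviour purely through an environment variable
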
張 旭

Ingress - Kubernetes - 0 views

  • An API object that manages external access to the services in a cluster, typically HTTP.
  • load balancing
  • SSL termination
  • name-based virtual hosting
  • Edge router: A router that enforces the firewall policy for your cluster.
  • Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model.
  • Service: A Kubernetes Service (a way to expose an application running on a set of Pods as a network service) that identifies a set of Pods using label selectors (labels tag objects with identifying attributes that are meaningful and relevant to users).
  • Services are assumed to have virtual IPs only routable within the cluster network.
  • Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
  • Traffic routing is controlled by rules defined on the Ingress resource.
  • An Ingress can be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name based virtual hosting.
  • Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
  • You must have an ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
  • As with all other Kubernetes resources, an Ingress needs apiVersion, kind, and metadata fields
  • Ingress frequently uses annotations to configure some options depending on the Ingress controller,
  • Ingress resource only supports rules for directing HTTP traffic.
  • An optional host.
  • A list of paths
  • A backend is a combination of Service and port names
  • has an associated backend
  • Both the host and path must match the content of an incoming request before the load balancer directs traffic to the referenced Service.
  • HTTP (and HTTPS) requests to the Ingress that matches the host and path of the rule are sent to the listed backend.
  • A default backend is often configured in an Ingress controller to service any requests that do not match a path in the spec.
  • An Ingress with no rules sends all traffic to a single default backend.
  • Ingress controllers and load balancers may take a minute or two to allocate an IP address.
  • A fanout configuration routes traffic from a single IP address to more than one Service, based on the HTTP URI being requested.
  • nginx.ingress.kubernetes.io/rewrite-target: /
  • describe ingress
  • get ingress
  • Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address.
  • route requests based on the Host header.
  • if you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name-based virtual host being required.
  • secure an Ingress by specifying a Secret (which stores sensitive information, such as passwords, OAuth tokens, and ssh keys) that contains a TLS private key and certificate.
  • Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination.
  • An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others.
  • persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can instead get these features through the load balancer used for a Service.
  • review the controller specific documentation to see how they handle health checks
  • edit ingress
  • After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.
  • kubectl replace -f on a modified Ingress YAML file.
  • Node: A worker machine in Kubernetes, part of a cluster.
  • in most common Kubernetes deployments, nodes in the cluster are not part of the public internet.
  • Edge router: A router that enforces the firewall policy for your cluster.
  • a gateway managed by a cloud provider or a physical piece of hardware.
  • Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model.
  • Service: A Kubernetes Service that identifies a set of Pods using label selectors.
  • An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
  • An Ingress does not expose arbitrary ports or protocols.
  • You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
  • The name of an Ingress object must be a valid DNS subdomain name
  • The Ingress spec has all the information needed to configure a load balancer or proxy server.
  • Ingress resource only supports rules for directing HTTP(S) traffic.
  • An Ingress with no rules sends all traffic to a single default backend and .spec.defaultBackend is the backend that should handle requests in that case.
  • If defaultBackend is not set, the handling of requests that do not match any of the rules will be up to the ingress controller
  • A common usage for a Resource backend is to ingress data to an object storage backend with static assets.
  • Exact: Matches the URL path exactly and with case sensitivity.
  • Prefix: Matches based on a URL path prefix split by /. Matching is case sensitive and done on a path element by element basis.
  • multiple paths within an Ingress will match a request. In those cases precedence will be given first to the longest matching path.
  • Hosts can be precise matches (for example “foo.bar.com”) or a wildcard (for example “*.foo.com”).
  • No match, wildcard only covers a single DNS label
  • Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configuration including the name of the controller that should implement the class.
  • secure an Ingress by specifying a Secret that contains a TLS private key and certificate.
  • The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress point (traffic to the Service and its Pods is in plaintext).
  • TLS will not work on the default rule because the certificates would have to be issued for all the possible sub-domains.
  • hosts in the tls section need to explicitly match the host in the rules section.
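
Pulling the pieces above into one hedged manifest; the host, service name, secret name, and the rewrite annotation are illustrative, and the annotation shown applies specifically to the NGINX ingress controller:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example
      annotations:
        nginx.ingress.kubernetes.io/rewrite-target: /
    spec:
      ingressClassName: nginx          # reference to an IngressClass
      tls:
      - hosts:
        - foo.bar.com                  # must explicitly match a host in the rules
        secretName: foo-bar-tls        # Secret holding the TLS private key and certificate
      rules:
      - host: foo.bar.com
        http:
          paths:
          - path: /app
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80
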
張 旭

Considerations for large clusters | Kubernetes - 0 views

  • A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by the control plane.
  • Kubernetes v1.23 supports clusters with up to 5000 nodes.
  • criteria: no more than 110 pods per node, no more than 5000 nodes, no more than 150000 total pods, and no more than 300000 total containers
  • In-use IP addresses
  • run one or two control plane instances per failure zone, scaling those instances vertically first and then scaling horizontally after vertical scaling reaches the point of diminishing returns.
  • Kubernetes nodes do not automatically steer traffic towards control-plane endpoints that are in the same failure zone
  • store Event objects in a separate dedicated etcd instance.
  • start and configure additional etcd instance
  • Kubernetes resource limits help to minimize the impact of memory leaks and other ways that pods and containers can impact on other components.
  • Addons' default limits are typically based on data collected from experience running each addon on small or medium Kubernetes clusters.
  • When running on large clusters, addons often consume more of some resources than their default limits.
  • Many addons scale horizontally - you add capacity by running more pods
  • The VerticalPodAutoscaler can run in recommender mode to provide suggested figures for requests and limits.
  • Some addons run as one copy per node, controlled by a DaemonSet: for example, a node-level log aggregator.
  • VerticalPodAutoscaler is a custom resource that you can deploy into your cluster to help you manage resource requests and limits for pods.
  • The cluster autoscaler integrates with a number of cloud providers to help you run the right number of nodes for the level of resource demand in your cluster.
  • The addon resizer helps you in resizing the addons automatically as your cluster's scale changes.
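
A sketch of the VerticalPodAutoscaler recommender mode mentioned above, pointed at an addon Deployment; the VPA CRD comes from the separate autoscaler project and must be installed first, and the names here are illustrative:

    apiVersion: autoscaling.k8s.io/v1
    kind: VerticalPodAutoscaler
    metadata:
      name: coredns
      namespace: kube-system
    spec:
      targetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: coredns
      updatePolicy:
        updateMode: "Off"     # recommender mode: suggest requests/limits without applying them
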