Larvata: Group items tagged digitalocean

張 旭

Docker Explained: Using Dockerfiles to Automate Building of Images | DigitalOcean - 0 views

  • CMD runs, upon creation of a container, an application that was already installed inside the image using RUN (e.g. RUN apt-get install …)
  • ENTRYPOINT argument sets the concrete default application that is used every time a container is created using the image.
  • ENV command is used to set the environment variables (one or more).
  • EXPOSE command is used to associate a specified port to enable networking between the running process inside the container and the outside world
  • FROM defines the base image to use to start the build process
  • Unlike CMD, RUN actually is used to build the image (forming another layer on top of the previous one, which is committed).
  • VOLUME command is used to enable access from your container to a directory on the host machine
  • WORKDIR sets where the command defined with CMD is to be executed
  • To detach yourself from the container, use the escape sequence CTRL+P followed by CTRL+Q
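  Taken together, a minimal Dockerfile might look like the sketch below; the base image, package, and paths are illustrative:

      FROM ubuntu:20.04                                 # base image that starts the build process
      RUN apt-get update && apt-get install -y nginx    # builds the image, committing a new layer
      ENV APP_ENV production                            # set an environment variable
      VOLUME /var/log/nginx                             # enable host access to this directory
      WORKDIR /etc/nginx                                # where the command defined with CMD is executed
      EXPOSE 80                                         # associate port 80 for networking
      CMD ["nginx", "-g", "daemon off;"]                # default application run when a container is created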
張 旭

How To Install and Use Docker: Getting Started | DigitalOcean - 0 views

  • docker as a project offers you the complete set of higher-level tools to carry everything that forms an application across systems and machines - virtual or physical - and brings along many more benefits with it
  • docker daemon: used to manage docker (LXC) containers on the host it runs on
  • docker CLI: used to command and communicate with the docker daemon
  • containers: directories containing everything your application needs
  • images: snapshots of containers or base OS (e.g. Ubuntu) images
  • Dockerfiles: scripts automating the building process of images
  • Docker containers are basically directories which can be packed (e.g. tar-archived) like any other, then shared and run across various different machines and platforms (hosts).
  • Linux Containers can be defined as a combination of various kernel-level features (i.e. things that the Linux kernel can do) which allow management of applications (and resources they use) contained within their own environment
  • Each container is layered like an onion and each action taken within a container consists of putting another block (which actually translates to a simple change within the file system) on top of the previous one.
  • Each docker container starts from a docker image which forms the base for other applications and layers to come.
  • Docker images constitute the base of docker containers from which everything starts to form
  • a solid, consistent and dependable base with everything that is needed to run the applications
  • As more layers (tools, applications etc.) are added on top of the base, new images can be formed by committing these changes.
  • a Dockerfile for automated image building
  • Dockerfiles are scripts containing a successive series of instructions, directions, and commands which are to be executed to form a new docker image.
  • As you work with a container and continue to perform actions on it (e.g. download and install software, configure files etc.), to have it keep its state, you need to “commit”.
  • Please remember to “commit” all your changes.
  • When you "run" any process using an image, in return, you will have a container.
  • When the process is not actively running, this container will be a non-running container. Nonetheless, all of them will reside on your system until you remove them via the rm command.
  • To create a new container, you need to use a base image and specify a command to run.
  • you cannot change the command you run after having created a container (hence specifying one during "creation")
  • If you would like to save the progress and changes you made with a container, you can use “commit”
  • turns your container into an image
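  A short sketch of that run, change, commit cycle; the image name and placeholder container ID are illustrative:

      docker run -it ubuntu /bin/bash          # create a container from a base image, specifying a command
      # ... inside the container: install software, configure files, then exit ...
      docker ps -a                             # non-running containers still reside on the system
      docker commit <container_id> my_image    # save the container's progress by turning it into an image
      docker rm <container_id>                 # remove the container once it is no longer needed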
張 旭

How To Create a Kubernetes Cluster Using Kubeadm on Ubuntu 18.04 | DigitalOcean - 0 views

  • A pod is an atomic unit that runs one or more containers.
  • Pods are the basic unit of scheduling in Kubernetes: all containers in a pod are guaranteed to run on the same node that the pod is scheduled on.
  • Each pod has its own IP address, and a pod on one node should be able to access a pod on another node using the pod's IP.
  • Communication between pods is more complicated, however, and requires a separate networking component that can transparently route traffic from a pod on one node to a pod on another.
  • pod network plugins. For this cluster, you will use Flannel, a stable and performant option.
  • Passing the argument --pod-network-cidr=10.244.0.0/16 specifies the private subnet that the pod IPs will be assigned from.
  • kubectl apply -f descriptor.[yml|json] is the syntax for telling kubectl to create the objects described in the descriptor.[yml|json] file.
  • deploy Nginx using Deployments and Services
  • A deployment is a type of Kubernetes object that ensures there's always a specified number of pods running based on a defined template, even if the pod crashes during the cluster's lifetime.
  • NodePort, a scheme that will make the pod accessible through an arbitrary port opened on each node of the cluster
  • Services are another type of Kubernetes object that expose cluster internal services to clients, both internal and external.
  • load balancing requests to multiple pods
  • Pods are ubiquitous in Kubernetes, so understanding them will facilitate your work
  • it also helps to understand how controllers such as deployments work, since they are used frequently in stateless applications for scaling and the automated healing of unhealthy applications.
  • Understanding the types of services and the options they have is essential for running both stateless and stateful applications.
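  A hedged sketch of the steps these annotations describe; the Flannel descriptor filename below is a stand-in for the real one:

      kubeadm init --pod-network-cidr=10.244.0.0/16                 # pod IPs are assigned from this private subnet
      kubectl apply -f kube-flannel.yml                             # create the pod network objects from a descriptor
      kubectl create deployment nginx --image=nginx                 # Deployment: keeps the specified pods running
      kubectl expose deployment nginx --port 80 --type NodePort     # Service: opens an arbitrary port on each node
      kubectl get service nginx                                     # shows the NodePort assigned to reach the pods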
張 旭

How To Benchmark HTTP Latency with wrk on Ubuntu 14.04 | DigitalOcean - 0 views

  • wrk, which measures the latency of your HTTP services at high loads.
  • Latency refers to the time interval between the moment the request was made (by wrk) and the moment the response was received (from the service).
  • Tests can't be compared to real users, but they should give you a good estimate of expected latency
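  A typical invocation looks like this (the URL and load parameters are illustrative):

      wrk -t2 -c100 -d30s --latency http://your-server-ip/
      # 2 threads, 100 open connections, for 30 seconds; --latency prints the latency distribution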
張 旭

How To Use Bash's Job Control to Manage Foreground and Background Processes | DigitalOcean - 0 views

  • Most processes that you start on a Linux machine will run in the foreground. The command will begin execution, blocking use of the shell for the duration of the process.
  • By default, processes are started in the foreground. Until the program exits or changes state, you will not be able to interact with the shell.
  • stop the process by sending it a signal
  • Linux terminals are usually configured to send the "SIGINT" signal (typically signal number 2) to the current foreground process when the CTRL-C key combination is pressed.
  • Another signal that we can send is the "SIGTSTP" signal (typically signal number 20).
  • A background process is associated with the specific terminal that started it, but does not block access to the shell
  • start a background process by appending an ampersand character ("&") to the end of your commands.
  • type commands at the same time.
  • The [1] represents the command's "job spec" or job number. We can reference this with other job and process control commands, like kill, fg, and bg by preceding the job number with a percentage sign. In this case, we'd reference this job as %1.
  • Once the process is stopped, we can use the bg command to start it again in the background
  • By default, the bg command operates on the most recently stopped process.
  • Whether a process is in the background or in the foreground, it is rather tightly tied with the terminal instance that started it
  • When a terminal closes, it typically sends a SIGHUP signal to all of the processes (foreground, background, or stopped) that are tied to the terminal.
  • a terminal multiplexer
  • start it using the nohup command
  • appending output to ‘nohup.out’
  • pgrep -a
  • The disown command, in its default configuration, removes a job from the jobs queue of a terminal.
  • You can pass the -h flag to the disown command instead in order to mark the process to ignore SIGHUP signals, but to otherwise continue on as a regular job
  • The huponexit shell option controls whether bash will send its child processes the SIGHUP signal when it exits.
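  A brief sketch of these commands in sequence, with sleep standing in for any long-running process:

      sleep 1000 &          # start in the background; bash prints the job spec, e.g. [1]
      jobs                  # list this shell's jobs
      fg %1                 # bring job 1 to the foreground; CTRL-Z sends SIGTSTP to stop it
      bg %1                 # resume the most recently stopped job in the background
      nohup sleep 1000 &    # survives SIGHUP; output is appended to nohup.out
      disown -h %2          # keep the job in the jobs queue, but mark it to ignore SIGHUP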
張 旭

Understanding Nginx Server and Location Block Selection Algorithms | DigitalOcean - 0 views

  • A server block is a subset of Nginx’s configuration that defines a virtual server used to handle requests of a defined type. Administrators often configure multiple server blocks and decide which block should handle which connection based on the requested domain name, port, and IP address.
  • A location block lives within a server block and is used to define how Nginx should handle requests for different resources and URIs for the parent server. The URI space can be subdivided in whatever way the administrator likes using these blocks. It is an extremely flexible model.
  • Nginx logically divides the configurations meant to serve different content into blocks, which live in a hierarchical structure. Each time a client request is made, Nginx begins a process of determining which configuration blocks should be used to handle the request.
  • Nginx is one of the most popular web servers in the world. It can successfully handle high loads with many concurrent client connections, and can easily function as a web server, a mail server, or a reverse proxy server.
  • The main server block directives that Nginx is concerned with during this process are the listen directive, and the server_name directive.
  • The listen directive typically defines which IP address and port that the server block will respond to.
  • 0.0.0.0:8080 if Nginx is being run by a normal, non-root user
  • Nginx translates all “incomplete” listen directives by substituting missing values with their default values so that each block can be evaluated by its IP address and port.
  • In any case, the port must be matched exactly.
  • If there are multiple server blocks with the same level of specificity matching, Nginx then begins to evaluate the server_name directive of each server block.
  • Nginx will only evaluate the server_name directive when it needs to distinguish between server blocks that match to the same level of specificity in the listen directive.
  • Nginx checks the request’s “Host” header. This value holds the domain or IP address that the client was actually trying to reach.
  • Nginx will first try to find a server block with a server_name that matches the value in the “Host” header of the request exactly.
  • If no exact match is found, Nginx will then try to find a server block with a server_name that matches using a leading wildcard (indicated by a * at the beginning of the name in the config).
  • If no match is found using a leading wildcard, Nginx then looks for a server block with a server_name that matches using a trailing wildcard (indicated by a server name ending with a * in the config)
  • If no match is found using a trailing wildcard, Nginx then evaluates server blocks that define the server_name using regular expressions (indicated by a ~ before the name).
  • If no regular expression match is found, Nginx then selects the default server block for that IP address and port.
  • There can be only one default_server declaration per each IP address/port combination.
  • Location blocks live within server blocks (or other location blocks) and are used to decide how to process the request URI (the part of the request that comes after the domain name or IP address/port).
  • If no modifiers are present, the location is interpreted as a prefix match.
  • =: If an equal sign is used, this block will be considered a match if the request URI exactly matches the location given.
  • ~: If a tilde modifier is present, this location will be interpreted as a case-sensitive regular expression match.
  • ~*: If a tilde and asterisk modifier is used, the location block will be interpreted as a case-insensitive regular expression match.
  • ^~: If a caret and tilde modifier is present, and if this block is selected as the best non-regular expression match, regular expression matching will not take place.
  • Keep in mind that if this block is selected and the request is fulfilled using an index page, an internal redirect will take place to another location that will be the actual handler of the request
  • Keeping in mind the types of location declarations we described above, Nginx evaluates the possible location contexts by comparing the request URI to each of the locations.
  • Nginx begins by checking all prefix-based location matches (all location types not involving a regular expression).
  • First, Nginx looks for an exact match.
  • If no exact (with the = modifier) location block matches are found, Nginx then moves on to evaluating non-exact prefixes.
  • After the longest matching prefix location is determined and stored, Nginx moves on to evaluating the regular expression locations (both case sensitive and insensitive).
  • by default, Nginx will serve regular expression matches in preference to prefix matches.
  • regular expression matches within the longest prefix match will “jump the line” when Nginx evaluates regex locations.
  • The exceptions to the “only one location block” rule may have implications on how the request is actually served and may not align with the expectations you had when designing your location blocks.
  • The index directive always leads to an internal redirect if it is used to handle the request.
  • In the case above, if you really need the execution to stay in the first block, you will have to come up with a different method of satisfying the request to the directory.
  • one way of preventing an index from switching contexts, but it’s probably not useful for most configurations
  • the try_files directive. This directive tells Nginx to check for the existence of a named set of files or directories.
  • the rewrite directive. When using the last parameter with the rewrite directive, or when using no parameter at all, Nginx will search for a new matching location based on the results of the rewrite.
  • The error_page directive can lead to an internal redirect similar to that created by try_files.
  • when certain status codes are encountered.
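  A sketch of how these selection rules interact in one server block (the names and paths are illustrative):

      server {
          listen 80;                                    # the port must be matched exactly
          server_name example.com *.example.com;        # exact match checked before the leading wildcard

          location = /favicon.ico { access_log off; }   # exact match: selected immediately
          location ^~ /static/ { root /var/www; }       # if best prefix match, regex matching is skipped
          location ~* \.(jpg|png)$ { expires 30d; }     # case-insensitive regex: beats plain prefixes
          location / { try_files $uri $uri/ =404; }     # plain prefix match: the fallback
      }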
張 旭

Understanding Nginx HTTP Proxying, Load Balancing, Buffering, and Caching | DigitalOcean - 0 views

  • allow Nginx to pass requests off to backend http servers for further processing
  • Nginx is often set up as a reverse proxy solution to help scale out infrastructure or to pass requests to other servers that are not designed to handle large client loads
  • explore buffering and caching to improve the performance of proxying operations for clients
  • Nginx is built to handle many concurrent connections at the same time.
  • provides you with flexibility in easily adding backend servers or taking them down as needed for maintenance
  • Proxying in Nginx is accomplished by manipulating a request aimed at the Nginx server and passing it to other servers for the actual processing
  • The servers that Nginx proxies requests to are known as upstream servers.
  • Nginx can proxy requests to servers that communicate using the http(s), FastCGI, SCGI, uwsgi, or memcached protocols through separate sets of directives for each type of proxy
  • When a request matches a location with a proxy_pass directive inside, the request is forwarded to the URL given by the directive
  • For example, when a request for /match/here/please is handled by this block, the request URI will be sent to the example.com server as http://example.com/match/here/please
  • The request coming from Nginx on behalf of a client will look different than a request coming directly from a client
  • Nginx gets rid of any empty headers
  • Nginx, by default, will consider any header that contains underscores as invalid. It will remove these from the proxied request
    • 張 旭
       
      Note this: if any header field names contain underscores, you need to configure Nginx to let them through.
  • The "Host" header is re-written to the value defined by the $proxy_host variable.
  • The upstream should not expect this connection to be persistent
  • Headers with empty values are completely removed from the passed request.
  • if your backend application will be processing non-standard headers, you must make sure that they do not have underscores
  • by default, this will be set to the value of $proxy_host, a variable that will contain the domain name or IP address and port taken directly from the proxy_pass definition
  • This is selected by default as it is the only address Nginx can be sure the upstream server responds to
  • (as it is pulled directly from the connection info)
  • $http_host: Sets the "Host" header to the "Host" header from the client request.
  • The headers sent by the client are always available in Nginx as variables. The variables will start with an $http_ prefix, followed by the header name in lowercase, with any dashes replaced by underscores.
  • $host is set, in order of preference, to: the host name from the request line itself, the "Host" header from the client request, or the server name matching the request
  • set the "Host" header to the $host variable. It is the most flexible and will usually provide the proxied servers with a "Host" header filled in as accurately as possible
  • sets the "Host" header to the $host variable, which should contain information about the original host being requested
  • This variable takes the value of the original X-Forwarded-For header retrieved from the client and adds the Nginx server's IP address to the end.
  • The upstream directive must be set in the http context of your Nginx configuration.
  • http context
  • Once defined, this name will be available for use within proxy passes as if it were a regular domain name
  • By default, this is just a simple round-robin selection process (each request will be routed to a different host in turn)
  • Specifies that new connections should always be given to the backend that has the least number of active connections.
  • distributes requests to different servers based on the client's IP address.
  • mainly used with memcached proxying
  • As for the hash method, you must provide the key to hash against
  • Server Weight
  • Nginx's buffering and caching capabilities
  • Without buffers, data is sent from the proxied server and immediately begins to be transmitted to the client.
  • With buffers, the Nginx proxy will temporarily store the backend's response and then feed this data to the client
  • Nginx defaults to a buffering design
  • can be set in the http, server, or location contexts.
  • the sizing directives are configured per request, so increasing them beyond your need can affect your performance
  • When buffering is "off" only the buffer defined by the proxy_buffer_size directive will be used
  • A high availability (HA) setup is an infrastructure without a single point of failure, and your load balancers are a part of this configuration.
  • multiple load balancers (one active and one or more passive) behind a static IP address that can be remapped from one server to another.
  • Nginx also provides a way to cache content from backend servers
  • The proxy_cache_path directive must be set in the http context.
  • proxy_cache backcache;
    • 張 旭
       
      The backcache here is the backcache zone defined earlier in the article; it looks like each location can have its own cache directory.
  • The proxy_cache_bypass directive is set to the $http_cache_control variable. This will contain an indicator as to whether the client is explicitly requesting a fresh, non-cached version of the resource
  • any user-related data should not be cached
  • For private content, you should set the Cache-Control header to "no-cache", "no-store", or "private" depending on the nature of the data
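  A sketch combining these directives; the upstream hosts and cache path are illustrative, while backcache matches the zone discussed above:

      http {
          upstream backend_hosts {                  # named pool, defined in the http context
              least_conn;                           # route to the backend with the fewest active connections
              server host1.example.com;
              server host2.example.com;
          }

          proxy_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=backcache:8m max_size=50m;

          server {
              listen 80;

              location /proxy-me {
                  proxy_pass http://backend_hosts;           # forward matching requests to the pool
                  proxy_set_header Host $host;               # pass the original host, not $proxy_host
                  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                  proxy_cache backcache;                     # use the cache zone defined above
                  proxy_cache_bypass $http_cache_control;    # let clients request a fresh copy
              }
          }
      }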
張 旭

Understanding the Nginx Configuration File Structure and Configuration Contexts | Digit... - 0 views

  • discussing the basic structure of an Nginx configuration file along with some guidelines on how to design your files
  • /etc/nginx/nginx.conf
  • In Nginx parlance, the areas that these brackets define are called "contexts" because they contain configuration details that are separated according to their area of concern
  • contexts can be layered within one another
  • if a directive is valid in multiple nested scopes, a declaration in a broader context will be passed on to any child contexts as default values.
  • The children contexts can override these values at will
  • Nginx will error out on reading a configuration file with directives that are declared in the wrong context.
  • The most general context is the "main" or "global" context
  • Any directive that exists entirely outside of these blocks is said to inhabit the "main" context
  • The main context represents the broadest environment for Nginx configuration.
  • The "events" context is contained within the "main" context. It is used to set global options that affect how Nginx handles connections at a general level.
  • Nginx uses an event-based connection processing model, so the directives defined within this context determine how worker processes should handle connections.
  • the connection processing method is automatically selected based on the most efficient choice that the platform has available
  • a worker will only take a single connection at a time
  • When configuring Nginx as a web server or reverse proxy, the "http" context will hold the majority of the configuration.
  • The http context is a sibling of the events context, so they should be listed side-by-side, rather than nested
  • fine-tune the TCP keep alive settings (keepalive_disable, keepalive_requests, and keepalive_timeout)
  • The "server" context is declared within the "http" context.
  • multiple declarations
  • each instance defines a specific virtual server to handle client requests
  • Each client request will be handled according to the configuration defined in a single server context, so Nginx must decide which server context is most appropriate based on details of the request.
  • listen: The ip address / port combination that this server block is designed to respond to.
  • server_name: This directive is the other component used to select a server block for processing.
  • "Host" header
  • configure files to try to respond to requests (try_files)
  • issue redirects and rewrites (return and rewrite)
  • set arbitrary variables (set)
  • Location contexts share many relational qualities with server contexts
  • multiple location contexts can be defined, each location is used to handle a certain type of client request, and each location is selected by virtue of matching the location definition against the client request through a selection algorithm
  • Location blocks live within server contexts and, unlike server blocks, can be nested inside one another.
  • While server contexts are selected based on the requested IP address/port combination and the host name in the "Host" header, location blocks further divide up the request handling within a server block by looking at the request URI
  • The request URI is the portion of the request that comes after the domain name or IP address/port combination.
  • New directives at this level allow you to reach locations outside of the document root (alias), mark the location as only internally accessible (internal), and proxy to other servers or locations (using http, fastcgi, scgi, and uwsgi proxying).
  • These can then be used to do A/B testing by providing different content to different hosts.
  • configures Perl handlers for the location they appear in
  • set the value of a variable depending on the value of another variable
  • used to map MIME types to the file extensions that should be associated with them.
  • this context defines a named pool of servers that Nginx can then proxy requests to
  • The upstream context should be placed within the http context, outside of any specific server contexts.
  • The upstream context can then be referenced by name within server or location blocks to pass requests of a certain type to the pool of servers that have been defined.
  • function as a high performance mail proxy server
  • The mail context is defined within the "main" or "global" context (outside of the http context).
  • Nginx has the ability to redirect authentication requests to an external authentication server
  • the if directive in Nginx will execute the instructions contained if a given test returns "true".
  • Since Nginx will test conditions of a request with many other purpose-made directives, if should not be used for most forms of conditional execution.
  • The limit_except context is used to restrict the use of certain HTTP methods within a location context.
  • The result of the above example is that any client can use the GET and HEAD verbs, but only clients coming from the 192.168.1.1/24 subnet are allowed to use other methods.
  • Many directives are valid in more than one context
  • it is usually best to declare directives in the highest context to which they are applicable, and overriding them in lower contexts as necessary.
  • Declaring at higher levels provides you with a sane default
  • Nginx already engages in a well-documented selection algorithm for things like selecting server blocks and location blocks.
  • instead of relying on rewrites to get a user supplied request into the format that you would like to work with, you should try to set up two blocks for the request, one of which represents the desired method, and the other that catches messy requests and redirects (and possibly rewrites) them to your correct block.
  • incorrect requests can get by with a redirect rather than a rewrite, which should execute with lower overhead.
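  A skeleton showing how the contexts nest (all values are illustrative defaults):

      user www-data;                   # main context: broadest environment
      worker_processes auto;

      events {                         # how worker processes handle connections
          worker_connections 1024;
      }

      http {                           # bulk of web server / reverse proxy configuration
          upstream app_pool {          # named pool: inside http, outside any server block
              server 10.0.0.10;
          }

          server {                     # one virtual server; may be declared multiple times
              listen 80;
              server_name example.com;

              location / {             # divides request handling by URI; locations can nest
                  try_files $uri $uri/ =404;
              }
          }
      }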
張 旭

An Introduction to HAProxy and Load Balancing Concepts | DigitalOcean - 0 views

  • HAProxy, which stands for High Availability Proxy
  • improve the performance and reliability of a server environment by distributing the workload across multiple servers (e.g. web, application, database).
  • ACLs are used to test some condition and perform an action (e.g. select a server, or block a request) based on the test result.
  • Access Control List (ACL)
  • ACLs allow flexible network traffic forwarding based on a variety of factors like pattern-matching and the number of connections to a backend
  • A backend is a set of servers that receives forwarded requests
  • adding more servers to your backend will increase your potential load capacity by spreading the load over multiple servers
  • mode http specifies that layer 7 proxying will be used
  • specifies the load balancing algorithm
  • health checks
  • A frontend defines how requests should be forwarded to backends
  • use_backend rules, which define which backends to use depending on which ACL conditions are matched, and/or a default_backend rule that handles every other case
  • A frontend can be configured for various types of network traffic
  • Load balancing this way will forward user traffic based on IP range and port
  • Generally, all of the servers in the web-backend should be serving identical content--otherwise the user might receive inconsistent content.
  • Using layer 7 allows the load balancer to forward requests to different backend servers based on the content of the user's request.
  • allows you to run multiple web application servers under the same domain and port
  • acl url_blog path_beg /blog matches a request if the path of the user's request begins with /blog.
  • Round Robin selects servers in turns
  • Selects the server with the least number of connections--it is recommended for longer sessions
  • This selects which server to use based on a hash of the source IP
  • ensure that a user will connect to the same server
  • require that a user continues to connect to the same backend server. This persistence is achieved through sticky sessions, using the appsession parameter in the backend that requires it.
  • HAProxy uses health checks to determine if a backend server is available to process requests.
  • The default health check is to try to establish a TCP connection to the server
  • If a server fails a health check, and therefore is unable to serve requests, it is automatically disabled in the backend
  • For certain types of backends, like database servers in certain situations, the default health check is insufficient to determine whether a server is still healthy.
  • However, your load balancer is a single point of failure in these setups; if it goes down or gets overwhelmed with requests, it can cause high latency or downtime for your service.
  • A high availability (HA) setup is an infrastructure without a single point of failure
  • a static IP address that can be remapped from one server to another.
  • If that load balancer fails, your failover mechanism will detect it and automatically reassign the IP address to one of the passive servers.
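  A sketch of a frontend/backend pair using the ACL from these notes (server addresses are illustrative):

      frontend www
          bind *:80
          mode http                                # layer 7 proxying
          acl url_blog path_beg /blog              # matches requests whose path begins with /blog
          use_backend blog-backend if url_blog     # routing decided by the ACL
          default_backend web-backend              # handles every other case

      backend web-backend
          balance roundrobin                       # select servers in turns
          server web1 10.0.0.11:80 check           # "check" enables the default TCP health check
          server web2 10.0.0.12:80 check

      backend blog-backend
          balance leastconn                        # fewest active connections; suits longer sessions
          server blog1 10.0.0.21:80 check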
張 旭

How to Set Up Squid Proxy for Private Connections on Ubuntu 20.04 | DigitalOcean - 0 views

  • it’s a good idea to keep the deny all rule at the bottom of this configuration block.
crazylion lee

Packer - 0 views

  •  
    Packer is a tool for creating identical machine images for multiple platforms from a single source configuration.
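  A minimal sketch of a single-builder template in Packer's JSON format (every value is illustrative):

      {
        "builders": [{
          "type": "amazon-ebs",
          "region": "us-east-1",
          "source_ami": "ami-0123456789abcdef0",
          "instance_type": "t2.micro",
          "ssh_username": "ubuntu",
          "ami_name": "packer-example {{timestamp}}"
        }]
      }

  Adding more entries under builders yields identical images for other platforms from this one file; packer build template.json runs them all.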