Group items tagged: server

張 旭

Understanding Nginx Server and Location Block Selection Algorithms | DigitalOcean - 0 views

  • A server block is a subset of Nginx’s configuration that defines a virtual server used to handle requests of a defined type. Administrators often configure multiple server blocks and decide which block should handle which connection based on the requested domain name, port, and IP address.
  • A location block lives within a server block and is used to define how Nginx should handle requests for different resources and URIs for the parent server. The URI space can be subdivided in whatever way the administrator likes using these blocks. It is an extremely flexible model.
  • Nginx logically divides the configurations meant to serve different content into blocks, which live in a hierarchical structure. Each time a client request is made, Nginx begins a process of determining which configuration blocks should be used to handle the request.
  • Nginx is one of the most popular web servers in the world. It can successfully handle high loads with many concurrent client connections, and can easily function as a web server, a mail server, or a reverse proxy server.
  • The main server block directives that Nginx is concerned with during this process are the listen directive, and the server_name directive.
  • The listen directive typically defines which IP address and port that the server block will respond to.
  • A server block without an explicit address or port defaults to 0.0.0.0:80 (or 0.0.0.0:8080 if Nginx is being run by a normal, non-root user)
  • Nginx translates all “incomplete” listen directives by substituting missing values with their default values so that each block can be evaluated by its IP address and port.
  • In any case, the port must be matched exactly.
  • If there are multiple server blocks with the same level of specificity matching, Nginx then begins to evaluate the server_name directive of each server block.
  • Nginx will only evaluate the server_name directive when it needs to distinguish between server blocks that match to the same level of specificity in the listen directive.
  • Nginx checks the request’s “Host” header. This value holds the domain or IP address that the client was actually trying to reach.
  • Nginx will first try to find a server block with a server_name that matches the value in the “Host” header of the request exactly.
  • If no exact match is found, Nginx will then try to find a server block with a server_name that matches using a leading wildcard (indicated by a * at the beginning of the name in the config).
  • If no match is found using a leading wildcard, Nginx then looks for a server block with a server_name that matches using a trailing wildcard (indicated by a server name ending with a * in the config)
  • If no match is found using a trailing wildcard, Nginx then evaluates server blocks that define the server_name using regular expressions (indicated by a ~ before the name).
  • If no regular expression match is found, Nginx then selects the default server block for that IP address and port.
  • There can be only one default_server declaration per each IP address/port combination.
  • Location blocks live within server blocks (or other location blocks) and are used to decide how to process the request URI (the part of the request that comes after the domain name or IP address/port).
  • If no modifiers are present, the location is interpreted as a prefix match.
  • =: If an equal sign is used, this block will be considered a match if the request URI exactly matches the location given.
  • ~: If a tilde modifier is present, this location will be interpreted as a case-sensitive regular expression match.
  • ~*: If a tilde and asterisk modifier is used, the location block will be interpreted as a case-insensitive regular expression match.
  • ^~: If a caret and tilde modifier is present, and if this block is selected as the best non-regular expression match, regular expression matching will not take place.
  • Keep in mind that if this block is selected and the request is fulfilled using an index page, an internal redirect will take place to another location that will be the actual handler of the request
  • Keeping in mind the types of location declarations we described above, Nginx evaluates the possible location contexts by comparing the request URI to each of the locations.
  • Nginx begins by checking all prefix-based location matches (all location types not involving a regular expression).
  • First, Nginx looks for an exact match.
  • If no exact (with the = modifier) location block matches are found, Nginx then moves on to evaluating non-exact prefixes.
  • After the longest matching prefix location is determined and stored, Nginx moves on to evaluating the regular expression locations (both case sensitive and insensitive).
  • by default, Nginx will serve regular expression matches in preference to prefix matches.
  • regular expression matches within the longest prefix match will “jump the line” when Nginx evaluates regex locations.
  • The exceptions to the “only one location block” rule may have implications on how the request is actually served and may not align with the expectations you had when designing your location blocks.
  • The index directive always leads to an internal redirect if it is used to handle the request.
  • In the case above, if you really need the execution to stay in the first block, you will have to come up with a different method of satisfying the request to the directory.
  • one way of preventing an index from switching contexts, but it’s probably not useful for most configurations
  • the try_files directive. This directive tells Nginx to check for the existence of a named set of files or directories.
  • the rewrite directive. When using the last parameter with the rewrite directive, or when using no parameter at all, Nginx will search for a new matching location based on the results of the rewrite.
  • The error_page directive can lead to an internal redirect, similar to that created by try_files, when certain status codes are encountered.
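
To make the two algorithms above concrete, here is a minimal configuration sketch; all names and addresses are illustrative placeholders, not taken from the article:

```nginx
# --- Server selection: listen specificity first, then the Host header
# against server_name (exact > leading wildcard > trailing wildcard >
# regex > default_server) ---
server {
    listen 80 default_server;    # fallback when nothing else matches
    server_name _;
}
server {
    listen 80;
    server_name example.com;     # exact match, checked first
}
server {
    listen 80;
    server_name *.example.com;   # leading wildcard, checked second

    # --- Location selection inside the chosen server block ---
    location = /exact-uri { }    # exact match, checked before everything else
    location ^~ /static/ { }     # if longest prefix, regex matching is skipped
    location ~ \.php$ { }        # case-sensitive regex, preferred over prefixes
    location / { }               # plain prefix, used only if no regex matches
}
```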
張 旭

An Introduction to HAProxy and Load Balancing Concepts | DigitalOcean - 0 views

  • HAProxy, which stands for High Availability Proxy
  • improve the performance and reliability of a server environment by distributing the workload across multiple servers (e.g. web, application, database).
  • ACLs are used to test some condition and perform an action (e.g. select a server, or block a request) based on the test result.
  • Access Control List (ACL)
  • ACLs allow flexible network traffic forwarding based on a variety of factors like pattern-matching and the number of connections to a backend
  • A backend is a set of servers that receives forwarded requests
  • adding more servers to your backend will increase your potential load capacity by spreading the load over multiple servers
  • mode http specifies that layer 7 proxying will be used
  • specifies the load balancing algorithm
  • health checks
  • A frontend defines how requests should be forwarded to backends
  • use_backend rules, which define which backends to use depending on which ACL conditions are matched, and/or a default_backend rule that handles every other case
  • A frontend can be configured for various types of network traffic
  • Load balancing this way will forward user traffic based on IP range and port
  • Generally, all of the servers in the web-backend should be serving identical content--otherwise the user might receive inconsistent content.
  • Using layer 7 allows the load balancer to forward requests to different backend servers based on the content of the user's request.
  • allows you to run multiple web application servers under the same domain and port
  • acl url_blog path_beg /blog matches a request if the path of the user's request begins with /blog.
  • Round Robin selects servers in turns
  • Selects the server with the least number of connections--it is recommended for longer sessions
  • This selects which server to use based on a hash of the source IP
  • ensure that a user will connect to the same server
  • require that a user continues to connect to the same backend server. This persistence is achieved through sticky sessions, using the appsession parameter in the backend that requires it.
  • HAProxy uses health checks to determine if a backend server is available to process requests.
  • The default health check is to try to establish a TCP connection to the server
  • If a server fails a health check, and therefore is unable to serve requests, it is automatically disabled in the backend
  • For certain types of backends, like database servers in certain situations, the default health check is insufficient to determine whether a server is still healthy.
  • However, your load balancer is a single point of failure in these setups; if it goes down or gets overwhelmed with requests, it can cause high latency or downtime for your service.
  • A high availability (HA) setup is an infrastructure without a single point of failure
  • a static IP address that can be remapped from one server to another.
  • If that load balancer fails, your failover mechanism will detect it and automatically reassign the IP address to one of the passive servers.
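
A minimal haproxy.cfg sketch pulling these pieces together (frontend ACL routing, backends, balancing algorithms, health checks); all names and addresses are placeholders:

```
defaults
    mode http                     # layer 7 proxying
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind *:80
    acl url_blog path_beg /blog          # true when the request path starts with /blog
    use_backend blog-backend if url_blog # ACL-driven routing
    default_backend web-backend          # handles every other case

backend web-backend
    balance roundrobin                   # selects servers in turns
    server web1 10.0.0.1:80 check        # "check" enables health checks
    server web2 10.0.0.2:80 check

backend blog-backend
    balance leastconn                    # fewest connections; good for long sessions
    server blog1 10.0.0.3:80 check
```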
張 旭

Understanding the Nginx Configuration File Structure and Configuration Contexts | Digit... - 0 views

  • discussing the basic structure of an Nginx configuration file along with some guidelines on how to design your files
  • /etc/nginx/nginx.conf
  • In Nginx parlance, the areas that these brackets define are called "contexts" because they contain configuration details that are separated according to their area of concern
  • contexts can be layered within one another
  • if a directive is valid in multiple nested scopes, a declaration in a broader context will be passed on to any child contexts as default values.
  • The children contexts can override these values at will
  • Nginx will error out on reading a configuration file with directives that are declared in the wrong context.
  • The most general context is the "main" or "global" context
  • Any directive that exists entirely outside of these blocks is said to inhabit the "main" context
  • The main context represents the broadest environment for Nginx configuration.
  • The "events" context is contained within the "main" context. It is used to set global options that affect how Nginx handles connections at a general level.
  • Nginx uses an event-based connection processing model, so the directives defined within this context determine how worker processes should handle connections.
  • the connection processing method is automatically selected based on the most efficient choice that the platform has available
  • a worker will only take a single connection at a time
  • When configuring Nginx as a web server or reverse proxy, the "http" context will hold the majority of the configuration.
  • The http context is a sibling of the events context, so they should be listed side-by-side, rather than nested
  • fine-tune the TCP keep alive settings (keepalive_disable, keepalive_requests, and keepalive_timeout)
  • The "server" context is declared within the "http" context.
  • The server context can be declared multiple times; each instance defines a specific virtual server to handle client requests
  • Each client request will be handled according to the configuration defined in a single server context, so Nginx must decide which server context is most appropriate based on details of the request.
  • listen: The IP address/port combination that this server block is designed to respond to.
  • server_name: This directive is the other component used to select a server block for processing.
  • "Host" header
  • configure files to try to respond to requests (try_files)
  • issue redirects and rewrites (return and rewrite)
  • set arbitrary variables (set)
  • Location contexts share many relational qualities with server contexts
  • multiple location contexts can be defined, each location is used to handle a certain type of client request, and each location is selected by virtue of matching the location definition against the client request through a selection algorithm
  • Location blocks live within server contexts and, unlike server blocks, can be nested inside one another.
  • While server contexts are selected based on the requested IP address/port combination and the host name in the "Host" header, location blocks further divide up the request handling within a server block by looking at the request URI
  • The request URI is the portion of the request that comes after the domain name or IP address/port combination.
  • New directives at this level allow you to reach locations outside of the document root (alias), mark the location as only internally accessible (internal), and proxy to other servers or locations (using http, fastcgi, scgi, and uwsgi proxying).
  • These can then be used to do A/B testing by providing different content to different hosts.
  • configures Perl handlers for the location they appear in
  • set the value of a variable depending on the value of another variable
  • used to map MIME types to the file extensions that should be associated with them.
  • this context defines a named pool of servers that Nginx can then proxy requests to
  • The upstream context should be placed within the http context, outside of any specific server contexts.
  • The upstream context can then be referenced by name within server or location blocks to pass requests of a certain type to the pool of servers that have been defined.
  • function as a high performance mail proxy server
  • The mail context is defined within the "main" or "global" context (outside of the http context).
  • Nginx has the ability to redirect authentication requests to an external authentication server
  • the if directive in Nginx will execute the instructions contained if a given test returns "true".
  • Since Nginx will test conditions of a request with many other purpose-made directives, if should not be used for most forms of conditional execution.
  • The limit_except context is used to restrict the use of certain HTTP methods within a location context.
  • The result of the above example is that any client can use the GET and HEAD verbs, but only clients coming from the 192.168.1.1/24 subnet are allowed to use other methods.
  • Many directives are valid in more than one context
  • it is usually best to declare directives in the highest context to which they are applicable, and overriding them in lower contexts as necessary.
  • Declaring at higher levels provides you with a sane default
  • Nginx already engages in a well-documented selection algorithm for things like selecting server blocks and location blocks.
  • instead of relying on rewrites to get a user supplied request into the format that you would like to work with, you should try to set up two blocks for the request, one of which represents the desired method, and the other that catches messy requests and redirects (and possibly rewrites) them to your correct block.
  • incorrect requests can get by with a redirect rather than a rewrite, which should execute with lower overhead.
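
A skeletal /etc/nginx/nginx.conf illustrating the context hierarchy described above; the directive values are placeholders:

```nginx
user nginx;                      # "main" context: the broadest environment
worker_processes auto;

events {                         # connection-handling options
    worker_connections 1024;
}

http {                          # sibling of events, never nested inside it
    keepalive_timeout 65;        # inherited by child contexts as a default

    upstream app_pool {          # named pool, referenced from server/location
        server 10.0.0.1:8080;
        server 10.0.0.2:8080;
    }

    server {                     # one virtual server; may appear many times
        listen 80;
        server_name example.com;

        location / {
            try_files $uri $uri/ =404;
        }

        location /admin/ {
            limit_except GET {           # GET implies HEAD
                allow 192.168.1.0/24;    # other methods: this subnet only
                deny all;
            }
        }
    }
}

# a mail { ... } context would sit here, in the main context, outside http
```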
張 旭

Understanding Nginx HTTP Proxying, Load Balancing, Buffering, and Caching | DigitalOcean - 0 views

  • allow Nginx to pass requests off to backend http servers for further processing
  • Nginx is often set up as a reverse proxy solution to help scale out infrastructure or to pass requests to other servers that are not designed to handle large client loads
  • explore buffering and caching to improve the performance of proxying operations for clients
  • Nginx is built to handle many concurrent connections at the same time.
  • provides you with flexibility in easily adding backend servers or taking them down as needed for maintenance
  • Proxying in Nginx is accomplished by manipulating a request aimed at the Nginx server and passing it to other servers for the actual processing
  • The servers that Nginx proxies requests to are known as upstream servers.
  • Nginx can proxy requests to servers that communicate using the http(s), FastCGI, SCGI, uwsgi, or memcached protocols, through separate sets of directives for each type of proxy
  • When a request matches a location with a proxy_pass directive inside, the request is forwarded to the URL given by the directive
  • For example, when a request for /match/here/please is handled by this block, the request URI will be sent to the example.com server as http://example.com/match/here/please
  • The request coming from Nginx on behalf of a client will look different than a request coming directly from a client
  • Nginx gets rid of any empty headers
  • Nginx, by default, will consider any header that contains underscores as invalid. It will remove these from the proxied request
    • 張 旭
       
      Note this: if any header field names contain underscores, Nginx must be configured to let them through (underscores_in_headers on).
  • The "Host" header is re-written to the value defined by the $proxy_host variable.
  • The upstream should not expect this connection to be persistent
  • Headers with empty values are completely removed from the passed request.
  • if your backend application will be processing non-standard headers, you must make sure that they do not have underscores
  • by default, this will be set to the value of $proxy_host, a variable that will contain the domain name or IP address and port taken directly from the proxy_pass definition
  • This is selected by default as it is the only address Nginx can be sure the upstream server responds to
  • (as it is pulled directly from the connection info)
  • $http_host: Sets the "Host" header to the "Host" header from the client request.
  • The headers sent by the client are always available in Nginx as variables. The variables will start with an $http_ prefix, followed by the header name in lowercase, with any dashes replaced by underscores.
  • $host contains, in order of preference: the host name from the request line itself, the host name from the "Host" header, or the server name matching the request
  • set the "Host" header to the $host variable. It is the most flexible and will usually provide the proxied servers with a "Host" header filled in as accurately as possible
  • sets the "Host" header to the $host variable, which should contain information about the original host being requested
  • This variable takes the value of the original X-Forwarded-For header retrieved from the client and adds the Nginx server's IP address to the end.
  • The upstream directive must be set in the http context of your Nginx configuration.
  • http context
  • Once defined, this name will be available for use within proxy passes as if it were a regular domain name
  • By default, this is just a simple round-robin selection process (each request will be routed to a different host in turn)
  • Specifies that new connections should always be given to the backend that has the least number of active connections.
  • distributes requests to different servers based on the client's IP address.
  • mainly used with memcached proxying
  • As for the hash method, you must provide the key to hash against
  • Server Weight
  • Nginx's buffering and caching capabilities
  • Without buffers, data is sent from the proxied server and immediately begins to be transmitted to the client.
  • With buffers, the Nginx proxy will temporarily store the backend's response and then feed this data to the client
  • Nginx defaults to a buffering design
  • can be set in the http, server, or location contexts.
  • the sizing directives are configured per request, so increasing them beyond your need can affect your performance
  • When buffering is "off" only the buffer defined by the proxy_buffer_size directive will be used
  • A high availability (HA) setup is an infrastructure without a single point of failure, and your load balancers are a part of this configuration.
  • multiple load balancers (one active and one or more passive) behind a static IP address that can be remapped from one server to another.
  • Nginx also provides a way to cache content from backend servers
  • The proxy_cache_path directive must be set in the http context.
  • proxy_cache backcache;
    • 張 旭
       
      The backcache here is the backcache zone defined earlier; it looks like each location can have its own cache.
  • The proxy_cache_bypass directive is set to the $http_cache_control variable. This will contain an indicator as to whether the client is explicitly requesting a fresh, non-cached version of the resource
  • any user-related data should not be cached
  • For private content, you should set the Cache-Control header to "no-cache", "no-store", or "private" depending on the nature of the data
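
A sketch combining the proxying, load-balancing, and caching annotations above into one configuration; hostnames, paths, and sizes are illustrative:

```nginx
http {
    # cache zone must be defined in the http context; "backcache" matches
    # the zone name referenced in the annotation above
    proxy_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=backcache:8m max_size=50m;

    upstream backend_hosts {       # must also live in the http context
        least_conn;                # pick the backend with fewest active connections
        server host1.example.com;
        server host2.example.com;
    }

    server {
        listen 80;

        location /match/here {
            proxy_pass http://backend_hosts;        # pool name used like a domain name

            proxy_set_header Host $host;            # most accurate "Host" value
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            proxy_cache backcache;                  # use the zone defined above
            proxy_cache_bypass $http_cache_control; # honour explicit no-cache requests
        }
    }
}
```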
張 旭

DNS Records: an Introduction - 0 views

  • reading from right to left
  • top-level domain, or TLD
  • first-level subdomains plus their TLDs (example.com) are referred to as “domains.”
  • Name servers host a domain’s DNS information in a text file called the zone file
  • Start of Authority (SOA) records
  • You’ll want to specify at least two name servers. That way, if one of them is down, the next one can continue to serve your DNS information.
  • Every domain’s zone file contains the admin’s email address, the name servers, and the DNS records.
  • a zone file, which lists domains and their corresponding IP addresses (and a few other things)
  • TLD nameserver
  • ISPs cache a lot of DNS information after they’ve looked it up the first time
  • Usually caching is a good thing, but it can be a problem if you’ve recently made a change to your DNS information
  • An A record matches up a domain (or subdomain) to an IP address
  • point different subdomains to different IP addresses
  • An AAAA record is just like an A record, but for IPv6 IP addresses.
  • An AXFR record is a type of DNS record used for DNS replication
  • used on a slave DNS server to replicate the zone file from a master DNS server
  • DNS Certification Authority Authorization uses DNS to allow the holder of a domain to specify which certificate authorities are allowed to issue certificates for that domain.
  • A CNAME record or Canonical Name record matches up a domain (or subdomain) to a different domain.
  • You should not use a CNAME record for a domain that gets email, because some mail servers handle mail oddly for domains with CNAME records
  • the target domain for a CNAME record should have a normal A-record resolution
  • a CNAME record does not function the same way as a URL redirect
  • A DKIM record or domain keys identified mail record displays the public key for authenticating messages that have been signed with the DKIM protocol
  • An MX record or mail exchange record sets the mail delivery destination for a domain (or subdomain).
  • Ideally, an MX record should point to a domain that is also the hostname for its server.
  • Your MX records don’t necessarily have to point to your Linode. If you’re using a third-party mail service, like Google Apps, you should use the MX records they provide.
  • Lower numbers have a higher priority
  • NS records or name server records set the nameservers for a domain (or subdomain).
  • You can also set up different nameservers for any of your subdomains.
  • The order of NS records does not matter; DNS requests are sent randomly to the different servers, and if one host fails to respond, another one will be queried.
  • A PTR record or pointer record matches up an IP address to a domain (or subdomain), allowing reverse DNS queries to function.
  • PTR records are usually set with your hosting provider. They are not part of your domain’s zone file.
  • An SOA record or Start of Authority record labels a zone file with the name of the host where it was originally created.
  • The administrative email address is written with a period (.) instead of an at symbol (@).
  • The single nameserver mentioned in the SOA record is considered the primary master for the purposes of Dynamic DNS and is the server where zone file changes get made before they are propagated to all other nameservers.
  • An SPF record or Sender Policy Framework record lists the designated mail servers for a domain (or subdomain).
  • An SPF record for your domain tells other receiving mail servers which outgoing server(s) are valid sources of email, so they can reject spoofed email from your domain that has originated from unauthorized servers.
  • Your SPF record will have a domain or subdomain, type (which is TXT, or SPF if your name server supports it), and text (which starts with “v=spf1” and contains the SPF record settings).
  • An SRV record or service record matches up a specific service that runs on your domain (or subdomain) to a target domain.
  • A TXT record or text record provides information about the domain in question to other resources on the Internet.
  • One common use of the TXT record is to create an SPF record on nameservers that don’t natively support SPF.
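
A short illustrative zone-file fragment tying several of these record types together (example.com, the addresses, and the serial are placeholders):

```zone
$TTL 86400
@     IN  SOA  ns1.example.com. admin.example.com. (
              2024010101  ; serial
              14400       ; refresh
              3600        ; retry
              604800      ; expire
              300 )       ; minimum TTL
; admin.example.com. means admin@example.com (the at sign is written as a period)
@     IN  NS   ns1.example.com.
@     IN  NS   ns2.example.com.       ; at least two name servers
@     IN  A    203.0.113.10
@     IN  AAAA 2001:db8::10
@     IN  MX 10 mail.example.com.     ; lower number = higher priority
@     IN  MX 20 mail2.example.com.    ; fallback mail server
@     IN  TXT  "v=spf1 mx ~all"       ; SPF published as a TXT record
www   IN  CNAME example.com.          ; subdomain aliased to the domain itself
mail  IN  A    203.0.113.20           ; MX target resolves via a normal A record
```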
張 旭

DNS Records: An Introduction - 0 views

  • Domain names are best understood by reading from right to left.
  • the top-level domain, or TLD
  • Every term to the left of the TLD is separated by a period and considered a more specific subdomain
  • Name servers host a domain’s DNS information in a text file called a zone file.
  • Start of Authority (SOA) records
  • specifying DNS records, which match domain names to IP addresses.
  • Every domain’s zone file contains the domain administrator’s email address, the name servers, and the DNS records.
  • Your ISP’s DNS resolver queries a root nameserver for the proper TLD nameserver. In other words, it asks the root nameserver, *Where can I find the nameserver for .com domains?*
  • In actuality, ISPs cache a lot of DNS information after they’ve looked it up the first time.
  • caching is a good thing, but it can be a problem if you’ve recently made a change to your DNS information
  • An A record points your domain or subdomain to your Linode’s IP address,
  • use an asterisk (*) as your subdomain
  • An AAAA record is just like an A record, but for IPv6 IP addresses.
  • An AXFR record is a type of DNS record used for DNS replication
  • DNS Certification Authority Authorization uses DNS to allow the holder of a domain to specify which certificate authorities are allowed to issue certificates for that domain.
  • A CNAME record or Canonical Name record matches a domain or subdomain to a different domain.
  • Some mail servers handle mail oddly for domains with CNAME records, so you should not use a CNAME record for a domain that gets email.
  • MX records cannot reference CNAME-defined hostnames.
  • Chaining or looping CNAME records is not recommended.
  • a CNAME record does not function the same way as a URL redirect.
  • A DKIM record or DomainKeys Identified Mail record displays the public key for authenticating messages that have been signed with the DKIM protocol
  • DKIM records are implemented as text records.
  • An MX record or mail exchanger record sets the mail delivery destination for a domain or subdomain.
  • An MX record should ideally point to a domain that is also the hostname for its server.
  • Priority allows you to designate a fallback server (or servers) for mail for a particular domain. Lower numbers have a higher priority.
  • NS records or name server records set the nameservers for a domain or subdomain.
  • You can also set up different nameservers for any of your subdomains
  • Primary nameservers get configured at your registrar and secondary subdomain nameservers get configured in the primary domain’s zone file.
  • The order of NS records does not matter. DNS requests are sent randomly to the different servers
  • A PTR record or pointer record matches up an IP address to a domain or subdomain, allowing reverse DNS queries to function.
  • providing the opposite service to the one an A record does
  • PTR records are usually set with your hosting provider. They are not part of your domain’s zone file.
  • An SOA record or Start of Authority record labels a zone file with the name of the host where it was originally created.
  • Minimum TTL: The minimum amount of time other servers should keep data cached from this zone file.
  • An SPF record or Sender Policy Framework record lists the designated mail servers for a domain or subdomain.
  • An SPF record for your domain tells other receiving mail servers which outgoing server(s) are valid sources of email so they can reject spoofed mail from your domain that has originated from unauthorized servers.
  • Make sure your SPF records are not too strict.
  • An SRV record or service record matches up a specific service that runs on your domain or subdomain to a target domain.
  • Service: The name of the service must be preceded by an underscore (_) and followed by a period (.)
  • Protocol: The name of the protocol must be preceded by an underscore (_) and followed by a period (.)
  • Port: The TCP or UDP port on which the service runs.
  • Target: The target domain or subdomain. This domain must have an A or AAAA record that resolves to an IP address.
  • A TXT record or text record provides information about the domain in question to other resources on the internet.
  •  
    "Domain names are best understood by reading from right to left."
張 旭

MySQL :: MySQL 5.7 Reference Manual :: 19.1.1.2 Group Replication - 0 views

  • The replication group is a set of servers that interact with each other through message passing.
  • The communication layer provides a set of guarantees such as atomic message and total order message delivery.
  • a multi-master update everywhere replication protocol
  • a replication group is formed by multiple servers and each server in the group may execute transactions independently
  • Read-only (RO) transactions need no coordination within the group and thus commit immediately
  • for any RW transaction the group needs to decide whether it commits or not; thus the commit operation is not a unilateral decision from the originating server
  • when a transaction is ready to commit at the originating server, the server atomically broadcasts the write values (rows changed) and the correspondent write set (unique identifiers of the rows that were updated). Then a global total order is established for that transaction.
  • all servers receive the same set of transactions in the same order
  • The resolution procedure states that the transaction that was ordered first commits on all servers, whereas the transaction ordered second aborts, and thus is rolled back on the originating server and dropped by the other servers in the group. This is in fact a distributed first commit wins rule
  • Group Replication is a shared-nothing replication scheme where each server has its own entire copy of the data
  • MySQL Group Replication protocol
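
A hedged illustration of the "first commit wins" rule with a made-up table: if two members update the same row concurrently, the transaction certified first commits on all servers and the other is rolled back on its originating server:

```sql
-- On server A and server B, at (nearly) the same time:
UPDATE accounts SET balance = balance - 100 WHERE id = 1;  -- server A
UPDATE accounts SET balance = balance + 500 WHERE id = 1;  -- server B

-- Both write sets contain the same row identifier, so the transactions
-- conflict. Whichever is ordered first in the global total order commits
-- everywhere; the other fails on commit on its originating server, e.g.:
--   ERROR 3101 (40000): Plugin instructed the server to rollback
--   the current transaction.
```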
張 旭

Helm | - 0 views

  • A chart is a collection of files that describe a related set of Kubernetes resources.
  • A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
  • Charts are created as files laid out in a particular directory tree, then they can be packaged into versioned archives to be deployed.
  • A chart is organized as a collection of files inside of a directory.
  • values.yaml # The default configuration values for this chart
  • charts/ # A directory containing any charts upon which this chart depends.
  • templates/ # A directory of templates that, when combined with values, will generate valid Kubernetes manifest files.
  • version: A SemVer 2 version (required)
  • apiVersion: The chart API version, always "v1" (required)
  • Every chart must have a version number. A version must follow the SemVer 2 standard.
  • non-SemVer names are explicitly disallowed by the system.
  • When generating a package, the helm package command will use the version that it finds in the Chart.yaml as a token in the package name.
  • the appVersion field is not related to the version field. It is a way of specifying the version of the application.
  • appVersion: The version of the app that this contains (optional). This needn't be SemVer.
  • If the latest version of a chart in the repository is marked as deprecated, then the chart as a whole is considered to be deprecated.
  • deprecated: Whether this chart is deprecated (optional, boolean)
  • one chart may depend on any number of other charts.
  • dependencies can be dynamically linked through the requirements.yaml file or brought in to the charts/ directory and managed manually.
  • the preferred method of declaring dependencies is by using a requirements.yaml file inside of your chart.
  • A requirements.yaml file is a simple file for listing your dependencies.
  • The repository field is the full URL to the chart repository.
  • you must also use helm repo add to add that repo locally.
  • helm dependency update and it will use your dependency file to download all the specified charts into your charts/ directory for you.
  • When helm dependency update retrieves charts, it will store them as chart archives in the charts/ directory.
  • Managing charts with requirements.yaml is a good way to easily keep charts updated, and also share requirements information throughout a team.
  • All charts are loaded by default.
  • The condition field holds one or more YAML paths (delimited by commas). If this path exists in the top parent’s values and resolves to a boolean value, the chart will be enabled or disabled based on that boolean value.
  • The tags field is a YAML list of labels to associate with this chart.
  • all charts with tags can be enabled or disabled by specifying the tag and a boolean value.
  • The --set parameter can be used as usual to alter tag and condition values.
  • Conditions (when set in values) always override tags.
  • The first condition path that exists wins and subsequent ones for that chart are ignored.
  • The keys containing the values to be imported can be specified in the parent chart’s requirements.yaml file using a YAML list. Each item in the list is a key which is imported from the child chart’s exports field.
  • specifying the key data in our import list, Helm looks in the exports field of the child chart for data key and imports its contents.
  • the parent key data is not contained in the parent’s final values. If you need to specify the parent key, use the ‘child-parent’ format.
  • To access values that are not contained in the exports key of the child chart’s values, you will need to specify the source key of the values to be imported (child) and the destination path in the parent chart’s values (parent).
  • To drop a dependency into your charts/ directory, use the helm fetch command
  • A dependency can be either a chart archive (foo-1.2.3.tgz) or an unpacked chart directory.
  • A file name cannot start with _ or a dot (.); such files are ignored by the chart loader.
  • a single release is created with all the objects for the chart and its dependencies.
  • Helm Chart templates are written in the Go template language, with the addition of 50 or so add-on template functions from the Sprig library and a few other specialized functions
  • When Helm renders the charts, it will pass every file in that directory through the template engine.
  • Chart developers may supply a file called values.yaml inside of a chart. This file can contain default values.
  • Chart users may supply a YAML file that contains values. This can be provided on the command line with helm install.
  • When a user supplies custom values, these values will override the values in the chart’s values.yaml file.
  • Template files follow the standard conventions for writing Go templates
  • {{default "minio" .Values.storage}}
  • Values that are supplied via a values.yaml file (or via the --set flag) are accessible from the .Values object in a template.
  • pre-defined, are available to every template, and cannot be overridden
  • the names are case sensitive
  • Release.Name: The name of the release (not the chart)
  • Release.IsUpgrade: This is set to true if the current operation is an upgrade or rollback.
  • Release.Revision: The revision number. It begins at 1, and increments with each helm upgrade
  • Chart: The contents of the Chart.yaml
  • Files: A map-like object containing all non-special files in the chart.
  • Files can be accessed using {{index .Files "file.name"}} or using the {{.Files.Get name}} or {{.Files.GetString name}} functions.
  • .helmignore
  • access the contents of the file as []byte using {{.Files.GetBytes}}
  • Any unknown Chart.yaml fields will be dropped
  • Chart.yaml cannot be used to pass arbitrarily structured data into the template.
  • A values file is formatted in YAML.
  • A chart may include a default values.yaml file
  • be merged into the default values file.
  • The default values file included inside of a chart must be named values.yaml
  • accessible inside of templates using the .Values object
  • Values files can declare values for the top-level chart, as well as for any of the charts that are included in that chart’s charts/ directory.
  • Charts at a higher level have access to all of the variables defined beneath.
  • lower level charts cannot access things in parent charts
  • Values are namespaced, but namespaces are pruned.
  • the scope of the values has been reduced and the namespace prefix removed
  • Helm supports special “global” value.
  • a way of sharing one top-level variable with all subcharts, which is useful for things like setting metadata properties like labels.
  • If a subchart declares a global variable, that global will be passed downward (to the subchart’s subcharts), but not upward to the parent chart.
  • global variables of parent charts take precedence over the global variables from subcharts.
  • helm lint
  • A chart repository is an HTTP server that houses one or more packaged charts
  • Any HTTP server that can serve YAML files and tar files and can answer GET requests can be used as a repository server.
  • Helm does not provide tools for uploading charts to remote repository servers.
  • the only way to add a chart to $HELM_HOME/starters is to manually copy it there.
  • Helm provides a hook mechanism to allow chart developers to intervene at certain points in a release’s life cycle.
  • Execute a Job to back up a database before installing a new chart, and then execute a second job after the upgrade in order to restore data.
  • Hooks are declared as an annotation in the metadata section of a manifest
  • Hooks work like regular templates, but they have special annotations
  • pre-install
  • post-install: Executes after all resources are loaded into Kubernetes
  • pre-delete
  • post-delete: Executes on a deletion request after all of the release’s resources have been deleted.
  • pre-upgrade
  • post-upgrade
  • pre-rollback
  • post-rollback: Executes on a rollback request after all resources have been modified.
  • crd-install
  • test-success: Executes when running helm test and expects the pod to return successfully (return code == 0).
  • test-failure: Executes when running helm test and expects the pod to fail (return code != 0).
  • Hooks allow you, the chart developer, an opportunity to perform operations at strategic points in a release lifecycle
  • Tiller then loads the hook with the lowest weight first (negative to positive)
  • Tiller returns the release name (and other data) to the client
  • If the resource is a Job kind, Tiller will wait until the job successfully runs to completion.
  • if the job fails, the release will fail. This is a blocking operation, so the Helm client will pause while the Job is run.
  • If they have hook weights (see below), they are executed in weighted order. Otherwise, ordering is not guaranteed.
  • good practice to add a hook weight, and set it to 0 if weight is not important.
  • The resources that a hook creates are not tracked or managed as part of the release.
  • leave the hook resource alone.
  • To destroy such resources, you need to either write code to perform this operation in a pre-delete or post-delete hook or add "helm.sh/hook-delete-policy" annotation to the hook template file.
  • Hooks are just Kubernetes manifest files with special annotations in the metadata section
  • One resource can implement multiple hooks
  • no limit to the number of different resources that may implement a given hook.
  • When subcharts declare hooks, those are also evaluated. There is no way for a top-level chart to disable the hooks declared by subcharts.
  • Hook weights can be positive or negative numbers but must be represented as strings.
  • sort those hooks in ascending order.
  • Hook deletion policies
  • "before-hook-creation" specifies Tiller should delete the previous hook before the new hook is launched.
  • By default Tiller will wait for 60 seconds for a deleted hook to no longer exist in the API server before timing out.
  • Custom Resource Definitions (CRDs) are a special kind in Kubernetes.
  • The crd-install hook is executed very early during an installation, before the rest of the manifests are verified.
  • A common reason why the hook resource might already exist is that it was not deleted following use on a previous install/upgrade.
  • Helm uses Go templates for templating your resource files.
  • two special template functions: include and required
  • include function allows you to bring in another template, and then pass the results to other template functions.
  • The required function allows you to declare a particular values entry as required for template rendering.
  • If the value is empty, the template rendering will fail with a user submitted error message.
  • When you are working with string data, you are always safer quoting the strings than leaving them as bare words
  • Quote Strings, Don’t Quote Integers
  • when working with integers do not quote the values
  • env variable values, which are expected to be strings
  • to include a template, and then perform an operation on that template’s output, Helm has a special include function
  • The above includes a template called toYaml, passes it $value, and then passes the output of that template to the nindent function.
  • Go provides a way for setting template options to control behavior when a map is indexed with a key that’s not present in the map
  • The required function gives developers the ability to declare a value entry as required for template rendering.
  • The tpl function allows developers to evaluate strings as templates inside a template.
  • Rendering a external configuration file
  • (.Files.Get "conf/app.conf")
  • Image pull secrets are essentially a combination of registry, username, and password.
  • Automatically Roll Deployments When ConfigMaps or Secrets change
  • configmaps or secrets are injected as configuration files in containers
  • a restart may be required should those be updated with a subsequent helm upgrade
  • The sha256sum function can be used to ensure a deployment’s annotation section is updated if another file changes
  • checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
  • helm upgrade --recreate-pods
  • "helm.sh/resource-policy": keep
  • resources that should not be deleted when Helm runs a helm delete
  • this resource becomes orphaned. Helm will no longer manage it in any way.
  • create some reusable parts in your chart
  • In the templates/ directory, any file that begins with an underscore(_) is not expected to output a Kubernetes manifest file.
  • by convention, helper templates and partials are placed in a _helpers.tpl file.
  • The current best practice for composing a complex application from discrete parts is to create a top-level umbrella chart that exposes the global configurations, and then use the charts/ subdirectory to embed each of the components.
  • SAP’s Converged charts: These charts install SAP Converged Cloud a full OpenStack IaaS on Kubernetes. All of the charts are collected together in one GitHub repository, except for a few submodules.
  • Deis’s Workflow: This chart exposes the entire Deis PaaS system with one chart. But it’s different from the SAP chart in that this umbrella chart is built from each component, and each component is tracked in a different Git repository.
  • YAML is a superset of JSON
  • any valid JSON structure ought to be valid in YAML.
  • As a best practice, templates should follow a YAML-like syntax unless the JSON syntax substantially reduces the risk of a formatting issue.
  • There are functions in Helm that allow you to generate random data, cryptographic keys, and so on.
  • a chart repository is a location where packaged charts can be stored and shared.
  • A chart repository is an HTTP server that houses an index.yaml file and optionally some packaged charts.
  • Because a chart repository can be any HTTP server that can serve YAML and tar files and can answer GET requests, you have a plethora of options when it comes down to hosting your own chart repository.
  • It is not required that a chart package be located on the same server as the index.yaml file.
  • A valid chart repository must have an index file. The index file contains information about each chart in the chart repository.
  • The Helm project provides an open-source Helm repository server called ChartMuseum that you can host yourself.
  • $ helm repo index fantastic-charts --url https://fantastic-charts.storage.googleapis.com
  • A repository will not be added if it does not contain a valid index.yaml
  • add the repository to their helm client via the helm repo add [NAME] [URL] command with any name they would like to use to reference the repository.
  • Helm has provenance tools which help chart users verify the integrity and origin of a package.
  • Integrity is established by comparing a chart to a provenance record
  • The provenance file contains a chart’s YAML file plus several pieces of verification information
  • Chart repositories serve as a centralized collection of Helm charts.
  • Chart repositories must make it possible to serve provenance files over HTTP via a specific request, and must make them available at the same URI path as the chart.
  • We don’t want to be “the certificate authority” for all chart signers. Instead, we strongly favor a decentralized model, which is part of the reason we chose OpenPGP as our foundational technology.
  • The Keybase platform provides a public centralized repository for trust information.
  • A chart contains a number of Kubernetes resources and components that work together.
  • A test in a helm chart lives under the templates/ directory and is a pod definition that specifies a container with a given command to run.
  • The pod definition must contain one of the helm test hook annotations: helm.sh/hook: test-success or helm.sh/hook: test-failure
  • helm test
  • nest your test suite under a tests/ directory like <chart-name>/templates/tests/
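
A compact sketch combining several of the pieces above: a requirements.yaml dependency guarded by a condition and a tag, and a hook template with weight and deletion-policy annotations (all names and values are illustrative):

```yaml
# requirements.yaml (the preferred way to declare dependencies)
dependencies:
  - name: memcached
    version: "3.2.1"
    repository: "https://example.com/charts"  # must also be added via `helm repo add`
    condition: memcached.enabled              # boolean path in the parent's values
    tags:
      - cache

---
# templates/post-install-job.yaml (annotations drive the hook lifecycle)
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-restore-data"
  annotations:
    "helm.sh/hook": post-install              # runs after resources are loaded
    "helm.sh/hook-weight": "0"                # weights are strings, sorted ascending
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: restore
          image: {{ required "image.restore is required" .Values.image.restore | quote }}
```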
張 旭

Helm | - 0 views

  • Helm is a tool for managing Kubernetes packages called charts
  • Install and uninstall charts into an existing Kubernetes cluster
  • The chart is a bundle of information necessary to create an instance of a Kubernetes application.
  • The config contains configuration information that can be merged into a packaged chart to create a releasable object.
  • A release is a running instance of a chart, combined with a specific config.
  • The Helm Client is a command-line client for end users.
  • Interacting with the Tiller server
  • The Tiller Server is an in-cluster server that interacts with the Helm client, and interfaces with the Kubernetes API server.
  • Combining a chart and configuration to build a release
  • Installing charts into Kubernetes, and then tracking the subsequent release
  • the client is responsible for managing charts, and the server is responsible for managing releases.
  • The Helm client is written in the Go programming language, and uses the gRPC protocol suite to interact with the Tiller server.
  • The Tiller server is also written in Go. It provides a gRPC server to connect with the client, and it uses the Kubernetes client library to communicate with Kubernetes.
  • The Tiller server stores information in ConfigMaps located inside of Kubernetes.
  • Configuration files are, when possible, written in YAML.
  •  
    "Helm is a tool for managing Kubernetes packages called charts"
張 旭

MySQL :: MySQL 5.7 Reference Manual :: 19.2.1.2 Configuring an Instance for Group Repli... - 0 views

  • store replication metadata in system tables instead of files
  • collect the write set and encode it as a hash using the XXHASH64 hashing algorithm
  • not start operations automatically when the server starts
  • for incoming connections from other members in the group
  • The server listens on this port for member-to-member connections. This port must not be used for user applications at all
  • The loose- prefix used for the group_replication variables above instructs the server to continue to start if the Group Replication plugin has not been loaded at the time the server is started.
  • For example, if each server instance is on a different machine use the IP and port of the machine, such as 10.0.0.1:33061. The recommended port for group_replication_local_address is 33061
  • does not need to list all members in the group
  • The server that starts the group does not make use of this option, since it is the initial server and as such, it is in charge of bootstrapping the group
  • start the bootstrap member first, and let it create the group
  • Creating a group and joining multiple members at the same time is not supported.
  • must only be used on one server instance at any time
  • Disable this option after the first server instance comes online
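
A sketch of the option-file fragment these annotations describe; the group name UUID and the member addresses are placeholders:

```ini
[mysqld]
# Replication metadata in system tables rather than files
master_info_repository    = TABLE
relay_log_info_repository = TABLE
# Write sets are encoded as XXHASH64 hashes
transaction_write_set_extraction = XXHASH64
# (gtid_mode=ON, enforce_gtid_consistency=ON, binlog_format=ROW are also required)

# loose- lets the server start even if the plugin is not loaded yet
loose-group_replication_group_name      = "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
loose-group_replication_start_on_boot   = off
loose-group_replication_local_address   = "10.0.0.1:33061"  # member-to-member only
loose-group_replication_group_seeds     = "10.0.0.1:33061,10.0.0.2:33061,10.0.0.3:33061"
loose-group_replication_bootstrap_group = off               # turn on for ONE server only
```

The group is then bootstrapped on exactly one member, matching the last annotations above:

```sql
-- On the first (bootstrap) member only:
SET GLOBAL group_replication_bootstrap_group = ON;
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group = OFF;  -- disable once the group exists
```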
crazylion lee

google/seesaw: Seesaw v2 is a Linux Virtual Server (LVS) based load balancing platform. - 1 views

  •  
    "Seesaw v2 is a Linux Virtual Server (LVS) based load balancing platform. It is capable of providing basic load balancing for servers that are on the same network, through to advanced load balancing functionality such as anycast, Direct Server Return (DSR), support for multiple VLANs and centralised configuration. Above all, it is designed to be reliable and easy to maintain."
張 旭

Connection and Privileges Needed - 0 views

  • Percona XtraBackup needs to be able to connect to the database server and perform operations on the server and the datadir when creating a backup, when preparing in some scenarios and when restoring it.
  • When xtrabackup is used, there are two actors involved: the user invoking the program - a system user - and the user performing actions in the database server - a database user.
  • these are different users in different places, even though they may have the same username.
  • Once connected to the server, in order to perform a backup you will need READ and EXECUTE permissions at a filesystem level in the server’s datadir.
  •  
    "Percona XtraBackup needs to be able to connect to the database server and perform operations on the server and the datadir when creating a backup, when preparing in some scenarios and when restoring it. "
張 旭

Java microservices architecture by example - 0 views

  • A microservices architecture is a particular case of a service-oriented architecture (SOA)
  • What sets microservices apart is the extent to which these modules are interconnected.
  • Every service comprises just one specific business process and never consists of several smaller services.
  • Microservices also bring a set of additional benefits, such as easier scaling, the possibility to use multiple programming languages and technologies, and others.
  • Java is a frequent choice for building a microservices architecture as it is a mature language tested over decades and has a multitude of microservices-favorable frameworks, such as legendary Spring, Jersey, Play, and others.
  • A monolithic architecture keeps it all simple. An app has just one server and one database.
  • All the connections between units are inside-code calls.
  • split our application into microservices and got a set of units completely independent for deployment and maintenance.
  • Each of microservices responsible for a certain business function communicates either via sync HTTP/REST or async AMQP protocols.
  • ensure seamless communication between newly created distributed components.
  • The gateway became an entry point for all clients’ requests.
  • We also set up the Zuul 2 framework for our gateway service so that the application could leverage the benefits of non-blocking HTTP calls.
  • we've implemented the Eureka server as our service discovery; it keeps a list of the user profile and order services to help them discover each other.
  • We also have a message broker (RabbitMQ) as an intermediary between the notification server and the rest of the servers to allow async messaging in-between.
  • microservices can definitely help when it comes to creating complex applications that deal with huge loads and need continuous improvement and scaling.
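
The Eureka discovery server mentioned above boils down to very little code in Spring Cloud; a minimal sketch, assuming the spring-cloud-starter-netflix-eureka-server dependency is on the classpath (class name is illustrative):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

// Registers this application as a Eureka discovery server; other services
// register themselves with it and look each other up by name.
@SpringBootApplication
@EnableEurekaServer
public class DiscoveryServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(DiscoveryServerApplication.class, args);
    }
}
```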
crazylion lee

My First 10 Minutes On a Server - Primer for Securing Ubuntu - 0 views

  •  
    "My First 5 Minutes on a Server, by Bryan Kennedy, is an excellent intro into securing a server against most attacks. We have a few modifications to his approach that we wanted to document as part of our efforts of externalizing our processes and best practices. We also wanted to spend a bit more time explaining a few things that younger engineers may benefit from."
張 旭

Forwarding (DNS and BIND, 4th Edition) - 0 views

  • Forwarders are also useful if you need to shunt name resolution to a particular name server
  • If you designate one or more servers at your site as forwarders, your name servers will send all their off-site queries to the forwarders first.
  • A primary master or slave name server's mode of operation changes slightly when it is configured to use a forwarder.
  • If a resolver requests records that are already in the name server's authoritative data or cached data, the name server answers with that information
  • if the records aren't in its database, the name server sends the query to a forwarder and waits a short period for an answer before resuming normal operation and contacting the remote name servers itself. What the name server is doing differently here is sending a recursive query to the forwarder, expecting it to find the answer.
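
In BIND, the behavior described here is configured with a forwarders list; a minimal named.conf sketch (addresses are placeholders):

```
options {
    forwarders { 192.168.0.1; 192.168.0.2; };  // send off-site queries here first
    // "forward first" (the default) falls back to contacting the remote
    // name servers itself if the forwarders don't answer in time;
    // "forward only" would never fall back.
    forward first;
};
```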
張 旭

Custom Resources | Kubernetes - 0 views

  • Custom resources are extensions of the Kubernetes API
  • A resource is an endpoint in the Kubernetes API that stores a collection of API objects of a certain kind
  • Custom resources can appear and disappear in a running cluster through dynamic registration
  • Once a custom resource is installed, users can create and access its objects using kubectl
  • When you combine a custom resource with a custom controller, custom resources provide a true declarative API.
  • A declarative API allows you to declare or specify the desired state of your resource and tries to keep the current state of Kubernetes objects in sync with the desired state.
  • Custom controllers can work with any kind of resource, but they are especially effective when combined with custom resources.
  • The Operator pattern combines custom resources and custom controllers.
  • the API represents a desired state, not an exact state.
  • define configuration of applications or infrastructure.
  • The main operations on the objects are CRUD-y (creating, reading, updating and deleting).
  • The client says "do this", and then gets an operation ID back, and has to check a separate Operation object to determine completion of the request.
  • The natural operations on the objects are not CRUD-y.
  • High bandwidth access (10s of requests per second sustained) needed.
  • Use a ConfigMap if any of the following apply
  • You want to put the entire config file into one key of a configMap.
  • You want to perform rolling updates via Deployment, etc., when the file is updated.
  • Use a secret for sensitive data, which is similar to a configMap but more secure.
  • You want to build new automation that watches for updates on the new object, and then CRUD other objects, or vice versa.
  • You want the object to be an abstraction over a collection of controlled resources, or a summarization of other resources.
  • CRDs are simple and can be created without any programming.
  • Aggregated APIs are subordinate API servers that sit behind the primary API server
  • CRDs allow users to create new types of resources without adding another API server
  • Defining a CRD object creates a new custom resource with a name and schema that you specify (see the YAML sketch after this list).
  • The name of a CRD object must be a valid DNS subdomain name
  • each resource in the Kubernetes API requires code that handles REST requests and manages persistent storage of objects.
  • The main API server delegates requests to you for the custom resources that you handle, making them available to all of its clients.
  • The new endpoints support CRUD basic operations via HTTP and kubectl
  • Custom resources consume storage space in the same way that ConfigMaps do.
  • Aggregated API servers may use the same storage as the main API server
  • CRDs always use the same authentication, authorization, and audit logging as the built-in resources of your API server.
  • most RBAC roles will not grant access to the new resources (except the cluster-admin role or any role created with wildcard rules).
  • CRDs and Aggregated APIs often come bundled with new role definitions for the types they add.
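A minimal CRD sketch pulling these points together; the group and kind are made-up examples modeled on the one in the Kubernetes documentation:

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: crontabs.stable.example.com   # <plural>.<group>, a valid DNS subdomain
    spec:
      group: stable.example.com
      scope: Namespaced
      names:
        plural: crontabs
        singular: crontab
        kind: CronTab
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:              # the schema you specify for the new type
              type: object
              properties:
                spec:
                  type: object
                  properties:
                    cronSpec: { type: string }
                    replicas: { type: integer }

Once applied, kubectl get crontab works like any built-in resource, subject to the RBAC caveat above.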
張 旭

Using Traefik as a reverse proxy | Blog Eleven Labs - 0 views

  • a proxy is associated with the client(s), while a reverse proxy is associated with the server(s); a reverse proxy is usually an internal-facing proxy used as a ‘front-end’ to control and protect access to a server on a private network.
  • the restart: always instruction will allow our reverse-proxy service to restart automatically, on its own.
  • add an [api] section to enable the dashboard and the API
  • double the $ symbols to escape them; otherwise docker-compose treats $ as a variable reference.
  • stay consistent with names inside routers and middlewares.
  • providers.file
  • the service name is always in the form of [service name]@[provider].
  • write different routing rules for a service and how to generate SSL certificates
  • traefik.http.services.home.loadbalancer.server.port=8123 indicates that the service port I want to expose is 8123 (see the compose sketch below).
張 旭

Language Server Protocol - Wikipedia - 0 views

  • Modern IDEs provide developers with sophisticated features like code completion, refactoring, navigating to a symbol's definition, syntax highlighting, and error and warning markers.
  • an IDE needs a sophisticated understanding of the programming language that the program's source is written in.
  • Conventional compilers or interpreters for a specific programming language are typically unable to provide these language services, because they are written with the goal of either transforming the source code into object code or immediately executing the code.
  • Prior to the design and implementation of the Language Server Protocol for the development of Visual Studio Code, most language services were generally tied to a given IDE or other editor.
  • The Language Server Protocol allows for decoupling language services from the editor so that the services may be contained within a general purpose language server.
  • LSP is not restricted to programming languages. It can be used for any kind of text-based language, like specifications[7] or domain-specific languages (DSL).
  • When a user edits one or more source code files using a language server protocol-enabled tool, the tool acts as a client that consumes the language services provided by a language server.
  • The protocol does not make any provisions about how requests, responses and notifications are transferred between client and server.
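One such exchange as annotated JSON-RPC 2.0; the method and parameter shapes are real LSP, the file paths are hypothetical, and the transport (e.g. stdio with Content-Length headers) is left open, as the last point notes:

    // Client -> server: where is the symbol at this position defined?
    { "jsonrpc": "2.0", "id": 1,
      "method": "textDocument/definition",
      "params": { "textDocument": { "uri": "file:///project/main.py" },
                  "position": { "line": 11, "character": 4 } } }

    // Server -> client: the definition's location.
    { "jsonrpc": "2.0", "id": 1,
      "result": { "uri": "file:///project/util.py",
                  "range": { "start": { "line": 3, "character": 0 },
                             "end":   { "line": 3, "character": 12 } } } }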
張 旭

Serverless Architectures - 0 views

  • Serverless was first used to describe applications that significantly or fully depend on 3rd party applications / services (‘in the cloud’) to manage server-side logic and state.
  • ‘rich client’ applications (think single page web apps, or mobile apps) that use the vast ecosystem of cloud accessible databases (like Parse, Firebase), authentication services (Auth0, AWS Cognito), etc.
  • ‘(Mobile) Backend as a Service’
  • Serverless can also mean applications where some amount of server-side logic is still written by the application developer but unlike traditional architectures is run in stateless compute containers that are event-triggered, ephemeral (may only last for one invocation), and fully managed by a 3rd party.
  • ‘Functions as a Service’
  • AWS Lambda is one of the most popular implementations of FaaS at present.
  • A good example is Auth0 - they started initially with BaaS ‘Authentication as a Service’, but with Auth0 Webtask they are entering the FaaS space.
  • a typical ecommerce app
  • a backend data-processing service
  • with zero administration.
  • FaaS offerings do not require coding to a specific framework or library.
  • Horizontal scaling is completely automatic, elastic, and managed by the provider
  • Functions in FaaS are triggered by event types defined by the provider.
  • a FaaS-supported message broker
  • from a deployment-unit point of view FaaS functions are stateless.
  • allowed the client direct access to a subset of our database
  • deleted the authentication logic in the original application and have replaced it with a third party BaaS service
  • The client is in fact well on its way to becoming a Single Page Application.
  • implement a FaaS function that responds to http requests via an API Gateway
  • port the search code from the Pet Store server to the Pet Store Search function
  • replaced a long lived consumer application with a FaaS function that runs within the event driven context
  • this difference from long-lived server applications is a key distinction when comparing FaaS with other modern architectural trends like containers and PaaS
  • the only code that needs to change when moving to FaaS is the ‘main method / startup’ code, in that it is deleted, and likely the specific code that is the top-level message handler (the ‘message listener interface’ implementation), but this might only be a change in method signature
  • With FaaS you need to write the function ahead of time to assume parallelism
  • Most providers also allow functions to be triggered as a response to inbound http requests, typically in some kind of API gateway
  • you should assume that for any given invocation of a function none of the in-process or host state that you create will be available to any subsequent invocation.
  • FaaS functions are either naturally stateless
  • store state across requests or for further input to handle a request.
  • certain classes of long lived task are not suited to FaaS functions without re-architecture
  • if you were writing a low-latency trading application you probably wouldn’t want to use FaaS systems at this time
  • An API Gateway is an HTTP server where routes / endpoints are defined in configuration and each route is associated with a FaaS function.
  • API Gateway will allow mapping from http request parameters to inputs arguments for the FaaS function
  • API Gateways may also perform authentication, input validation, response code mapping, etc.
  • the Serverless Framework makes working with API Gateway + Lambda significantly easier than using the first principles provided by AWS.
  • Apex - a project to ‘Build, deploy, and manage AWS Lambda functions with ease.'
  • 'Serverless' is used to mean the union of a couple of other ideas - 'Backend as a Service' and 'Functions as a Service'.
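A minimal FaaS sketch in the spirit of the Pet Store search function; it uses AWS Lambda's Python handler signature, but the search data and event fields are hypothetical:

    import json

    # No 'main method' or server startup: the platform invokes this per event,
    # and no in-process state can be assumed to survive to the next invocation.
    def handler(event, context):
        query = (event.get("queryStringParameters") or {}).get("q", "")
        pets = ["goldfish", "puppy", "kitten"]
        results = [p for p in pets if query in p]
        # An API Gateway maps this dict back onto an HTTP response.
        return {"statusCode": 200, "body": json.dumps({"results": results})}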
張 旭

How to Use Docker on OS X: The Missing Guide | Viget - 0 views

  • Docker is a client-server application.
  • The Docker server is a daemon that does all the heavy lifting: building and downloading images, starting and stopping containers, and the like. It exposes a REST API for remote management.
  • The Docker client is a command line program that communicates with the Docker server using the REST API.
  • interact with Docker by using the client to send commands to the server.
  • The machine running the Docker server is called the Docker host
  • Docker uses features only available to Linux, that machine must be running Linux (more specifically, the Linux kernel).
  • boot2docker is a “lightweight Linux distribution made specifically to run Docker containers.”
  • Docker server will run inside our boot2docker VM
  • boot2docker, not OS X, is the Docker host.
  • Docker mounts volumes from the boot2docker VM, not from OS X
  • initialize boot2docker (we only have to do this once); the command sequence is sketched below.
  • The Docker client assumes the Docker host is the current machine. We need to tell it to use our boot2docker VM by setting the DOCKER_HOST environment variable
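The resulting one-time setup looks roughly like this; the IP below is the classic boot2docker default, and boot2docker up prints the exact export line for your VM:

    boot2docker init      # create the boot2docker VM (run once)
    boot2docker up        # start the VM that hosts the Docker server
    export DOCKER_HOST=tcp://192.168.59.103:2375   # point the client at the VM
    docker info           # the client now talks to the daemon inside the VM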