Larvata: Group items tagged "document"

張 旭

Dynamic Provisioning | vSphere Storage for Kubernetes

  • Storage Policy based Management (SPBM). SPBM provides a single unified control plane across a broad range of data services and storage solutions
  • Kubernetes StorageClasses allow the creation of PersistentVolumes on-demand without having to create storage and mount it into K8s nodes upfront
  • When a PVC is created, the PersistentVolume will be provisioned on a compatible datastore with the most free space that satisfies the gold storage policy requirements.
  • When a PVC is created, the vSphere Cloud Provider checks if the user specified datastore satisfies the gold storage policy requirements. If it does, the vSphere Cloud Provider will provision the PersistentVolume on the user specified datastore. If not, it will create an error telling the user that the specified datastore is not compatible with gold storage policy requirements.
  • The Kubernetes user will have the ability to specify custom vSAN Storage Capabilities during dynamic volume provisioning.
張 旭

An App's Brief Journey from Source to Image · Cloud Native Buildpack Document... - 0 views

  • A buildpack’s job is to gather everything your app needs to build and run, and it often does this job quickly and quietly.
  • a platform sequentially tests groups of buildpacks against your app’s source code.
  • transforming your source code into a runnable app image.
  • Detection criteria are specific to each buildpack – for instance, an NPM buildpack might look for a package.json, and a Go buildpack might look for Go source files.
  • A builder is essentially an image containing buildpacks.
張 旭

vSphere Cloud Provider | vSphere Storage for Kubernetes

  • Containers are stateless and ephemeral but applications are stateful and need persistent storage.
  • Cloud Provider
  • Kubernetes cloud providers are an interface for integrating various nodes (i.e. hosts), load balancers, and network routes.
  • VMware offers a Cloud Provider known as the vSphere Cloud Provider (VCP) for Kubernetes which allows Pods to use enterprise grade persistent storage.
  • A vSphere datastore is an abstraction which hides storage details (such as LUNs) and provides a uniform interface for storing persistent data.
  • Datastores can be of type vSAN, VMFS, NFS, or VVol.
  • VMFS (Virtual Machine File System) is a cluster file system that allows virtualization to scale beyond a single node for multiple VMware ESX servers.
  • NFS (Network File System) is a distributed file protocol for accessing storage over the network as if it were local storage.
  • vSphere Cloud Provider supports every storage primitive exposed by Kubernetes
  • Kubernetes PVs are defined in Pod specifications.
  • PVCs are used in place of PVs when relying on Dynamic Provisioning (preferred).
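For contrast with dynamic provisioning, a hedged sketch of a statically provisioned PersistentVolume backed by a vSphere datastore; the datastore name and volume path are placeholders:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: app-data
    spec:
      capacity:
        storage: 2Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      vsphereVolume:
        volumePath: "[datastore1] volumes/myDisk"   # placeholder VMDK path
        fsType: ext4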
張 旭

Swarm mode key concepts | Docker Documentation

  • The cluster management and orchestration features embedded in the Docker Engine are built using SwarmKit.
  • Docker engines participating in a cluster are running in swarm mode
  • A swarm is a cluster of Docker engines, or nodes, where you deploy services
  • When you run Docker without using swarm mode, you execute container commands.
  • When you run Docker in swarm mode, you orchestrate services.
  • You can run swarm services and standalone containers on the same Docker instances.
  • A node is an instance of the Docker engine participating in the swarm
  • You can run one or more nodes on a single physical computer or cloud server
  • To deploy your application to a swarm, you submit a service definition to a manager node.
  • Manager nodes also perform the orchestration and cluster management functions required to maintain the desired state of the swarm.
  • Manager nodes elect a single leader to conduct orchestration tasks.
  • Worker nodes receive and execute tasks dispatched from manager nodes.
  • A service is the definition of the tasks to execute on the worker nodes.
  • When you create a service, you specify which container image to use and which commands to execute inside running containers.
  • In the replicated services model, the swarm manager distributes a specific number of replica tasks among the nodes based upon the scale you set in the desired state.
  • For global services, the swarm runs one task for the service on every available node in the cluster.
  • A task carries a Docker container and the commands to run inside the container
  • Manager nodes assign tasks to worker nodes according to the number of replicas set in the service scale.
  • Once a task is assigned to a node, it cannot move to another node
  • If you do not specify a port, the swarm manager assigns the service a port in the 30000-32767 range.
  • External components, such as cloud load balancers, can access the service on the PublishedPort of any node in the cluster whether or not the node is currently running the task for the service.
  • Swarm mode has an internal DNS component that automatically assigns each service in the swarm a DNS entry.
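A small illustrative stack file (deployed with docker stack deploy) showing a replicated service next to a global one; the image names and ports are assumptions, not from the Docker docs:

    version: "3.8"
    services:
      web:
        image: nginx:alpine
        ports:
          - published: 8080   # if omitted, swarm assigns one from 30000-32767
            target: 80
        deploy:
          mode: replicated
          replicas: 3         # swarm distributes three tasks across nodes
      agent:
        image: example/agent:latest   # hypothetical per-node agent image
        deploy:
          mode: global        # one task on every available node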
張 旭

Open source load testing tool review 2020

  • Hey is a simple tool, written in Go, with good performance and the most common features you'll need to run simple static URL tests.
  • Hey supports HTTP/2, which neither Wrk nor Apachebench does
  • Apachebench is very fast, so often you will not need more than one CPU core to generate enough traffic
  • Hey has rate limiting, which can be used to run fixed-rate tests.
  • Vegeta was designed to be run on the command line; it reads from stdin a list of HTTP transactions to generate, and sends results in binary format to stdout.
  • Vegeta is a really strong tool that caters to people who want a tool to test simple, static URLs (perhaps API end points) but also want a bit more functionality.
  • Vegeta can even be used as a Golang library/package if you want to create your own load testing tool.
  • Wrk is so damn fast
  • being fast and measuring correctly is about all that Wrk does
  • k6 is scriptable in plain JavaScript.
  • k6 is average or better. In some categories (documentation, scripting API, command line UX) it is outstanding.
  • Jmeter is a huge beast compared to most other tools.
  • Siege is a simple tool, similar to e.g. Apachebench in that it has no scripting and is primarily used when you want to hit a single, static URL repeatedly.
  • A good way of testing the testing tools is to not test them on your code, but on some third-party thing that is sure to be very high-performing.
  • use a tool like e.g. top to keep track of Nginx CPU usage while testing. If you see just one process, and see it using close to 100% CPU, it means you could be CPU-bound on the target side.
  • If you see multiple Nginx processes but only one is using a lot of CPU, it means your load testing tool is only talking to that particular worker process.
  • Network delay is also important to take into account as it sets an upper limit on the number of requests per second you can push through.
  • If, say, the Nginx default page requires a transfer of 250 bytes to load, it means that if the servers are connected via a 100 Mbit/s link, the theoretical max RPS rate would be around 100,000,000 divided by 8 (bits per byte) divided by 250 => 100M/2000 = 50,000 RPS. Though that is a very optimistic calculation - protocol overhead will make the actual number a lot lower so in the case above I would start to get worried bandwidth was an issue if I saw I could push through max 30,000 RPS, or something like that.
  • Wrk managed to push through over 50,000 RPS and that made 8 Nginx workers on the target system consume about 600% CPU.
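Restating that bandwidth bound as a formula, with B the link bandwidth in bits per second and S the bytes transferred per request:

    \mathrm{RPS}_{\max} \approx \frac{B}{8S} = \frac{100{,}000{,}000}{8 \times 250} = 50{,}000

Protocol overhead pushes the real ceiling well below this bound, which is why the author starts worrying about bandwidth at around 30,000 RPS.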
張 旭

Production Notes - MongoDB Manual

  • mongod will not start if dbPath contains data files created by a storage engine other than the one specified by --storageEngine.
  • mongod must possess read and write permissions for the specified dbPath.
  • WiredTiger supports concurrent access by readers and writers to the documents in a collection
  • Journaling guarantees that MongoDB can quickly recover write operations that were written to the journal but not written to data files in cases where mongod terminated due to a crash or other serious failure.
  • To use read concern level of "majority", replica sets must use WiredTiger storage engine.
  • Write concern describes the level of acknowledgement requested from MongoDB for write operations.
  • With stronger write concerns, clients must wait after sending a write operation until MongoDB confirms the write operation at the requested write concern level.
  • By default, authorization is not enabled, and mongod assumes a trusted environment
  • The HTTP interface is disabled by default. Do not enable the HTTP interface in production environments.
  • Avoid overloading the connection resources of a mongod or mongos instance by adjusting the connection pool size to suit your use case.
  • ensure that each mongod or mongos instance has access to two real cores or one multi-core physical CPU.
  • The WiredTiger storage engine is multithreaded and can take advantage of additional CPU cores
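A hypothetical mongod configuration file reflecting these notes; the path and the connection limit are placeholders, not recommendations from the manual:

    storage:
      dbPath: /var/lib/mongodb      # mongod needs read/write access here
      engine: wiredTiger            # must match the engine that created dbPath
      journal:
        enabled: true               # fast crash recovery of journaled writes
    security:
      authorization: enabled        # off by default; mongod assumes a trusted env
    net:
      port: 27017
      maxIncomingConnections: 20000 # size the connection pool for your use case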
張 旭

Logstash Alternatives: Pros & Cons of 5 Log Shippers [2019] - Sematext

  • In this case, Elasticsearch. And because Elasticsearch can be down or struggling, or the network can be down, the shipper would ideally be able to buffer and retry
  • Logstash is typically used for collecting, parsing, and storing logs for future use as part of log management.
  • Logstash’s biggest con or “Achilles’ heel” has always been performance and resource consumption (the default heap size is 1GB).
  • This can be a problem for high traffic deployments, when Logstash servers would need to be comparable with the Elasticsearch ones.
  • Filebeat was made to be that lightweight log shipper that pushes to Logstash or Elasticsearch.
  • The differences between Logstash and Filebeat are that Logstash has more functionality, while Filebeat uses fewer resources.
  • Filebeat is just a tiny binary with no dependencies.
  • For example, how aggressive it should be in searching for new files to tail and when to close file handles when a file didn’t get changes for a while.
  • For example, the apache module will point Filebeat to default access.log and error.log paths
  • Filebeat’s scope is very limited,
  • Initially it could only send logs to Logstash and Elasticsearch, but now it can send to Kafka and Redis, and in 5.x it also gains filtering capabilities.
  • Filebeat can parse JSON
  • you can push directly from Filebeat to Elasticsearch, and have Elasticsearch do both parsing and storing.
  • You shouldn’t need a buffer when tailing files because, just like Logstash, Filebeat remembers where it left off.
  • For larger deployments, you’d typically use Kafka as a queue instead, because Filebeat can talk to Kafka as well
  • The default syslog daemon on most Linux distros, rsyslog can do so much more than just picking logs from the syslog socket and writing to /var/log/messages.
  • It can tail files, parse them, buffer (on disk and in memory) and ship to a number of destinations, including Elasticsearch.
  • rsyslog is the fastest shipper
  • Its grammar-based parsing module (mmnormalize) works at constant speed no matter the number of rules (we tested this claim).
  • use it as a simple router/shipper, any decent machine will be limited by network bandwidth
  • It’s also one of the lightest parsers you can find, depending on the configured memory buffers.
  • rsyslog requires more work to get the configuration right
  • The main difference between Logstash and rsyslog is that Logstash is easier to use, while rsyslog is lighter.
  • rsyslog fits well in scenarios where you either need something very light yet capable (an appliance, a small VM, collecting syslog from within a Docker container).
  • rsyslog also works well when you need that ultimate performance.
  • syslog-ng as an alternative to rsyslog (though historically it was actually the other way around).
  • a modular syslog daemon, that can do much more than just syslog
  • Unlike rsyslog, it features a clear, consistent configuration format and has nice documentation.
  • Similarly to rsyslog, you’d probably want to deploy syslog-ng on boxes where resources are tight, yet you do want to perform potentially complex processing.
  • syslog-ng has an easier, more polished feel than rsyslog, but likely not that ultimate performance
  • Fluentd was built on the idea of logging in JSON wherever possible (which is a practice we totally agree with) so that log shippers down the line don’t have to guess which substring is which field of which type.
  • Fluentd plugins are in Ruby and very easy to write.
  • While you can push structured data through Fluentd, it’s not made to have the flexibility of other shippers on this list (Filebeat excluded).
  • Fluent Bit is to Fluentd what Filebeat is to Logstash.
  • Fluentd is a good fit when you have diverse or exotic sources and destinations for your logs, because of the number of plugins.
  • Splunk isn’t a log shipper, it’s a commercial logging solution
  • Graylog is another complete logging solution, an open-source alternative to Splunk.
  • everything goes through graylog-server, from authentication to queries.
  • Graylog is nice because you have a complete logging solution, but it’s going to be harder to customize than an ELK stack.
  • it depends
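A hedged filebeat.yml sketch of the pattern discussed above: tail JSON logs and push directly to Elasticsearch, or buffer through Kafka for larger deployments (hosts, paths, and the topic name are placeholders):

    filebeat.inputs:
      - type: log
        paths:
          - /var/log/myapp/*.log
        json.keys_under_root: true     # decode JSON log lines into fields
    output.elasticsearch:
      hosts: ["http://elasticsearch:9200"]
    # For larger deployments, send through Kafka as a queue instead:
    # output.kafka:
    #   hosts: ["kafka:9092"]
    #   topic: "logs"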
張 旭

Helm

  • Templates generate manifest files, which are YAML-formatted resource descriptions that Kubernetes can understand.
  • service.yaml: A basic manifest for creating a service endpoint for your deployment
  • In Kubernetes, a ConfigMap is simply a container for storing configuration data.
  • deployment.yaml: A basic manifest for creating a Kubernetes deployment
  • using the suffix .yaml for YAML files and .tpl for helpers.
  • It is just fine to put a plain YAML file like this in the templates/ directory.
  • helm get manifest
  • The helm get manifest command takes a release name (full-coral) and prints out all of the Kubernetes resources that were uploaded to the server. Each file begins with --- to indicate the start of a YAML document
  • Names should be unique to a release
  • The name: field is limited to 63 characters because of limitations to the DNS system.
  • release names are limited to 53 characters
  • {{ .Release.Name }}
  • A template directive is enclosed in {{ and }} blocks.
  • The values that are passed into a template can be thought of as namespaced objects, where a dot (.) separates each namespaced element.
  • The leading dot before Release indicates that we start with the top-most namespace for this scope
  • The Release object is one of the built-in objects for Helm
  • When you want to test the template rendering, but not actually install anything, you can use helm install ./mychart --debug --dry-run
  • Using --dry-run will make it easier to test your code, but it won’t ensure that Kubernetes itself will accept the templates you generate.
  • Objects are passed into a template from the template engine.
  • create new objects within your templates
  • Objects can be simple, and have just one value. Or they can contain other objects or functions.
  • Release is one of the top-level objects that you can access in your templates.
  • Release.Namespace: The namespace to be released into (if the manifest doesn’t override)
  • Values: Values passed into the template from the values.yaml file and from user-supplied files. By default, Values is empty.
  • Chart: The contents of the Chart.yaml file.
  • Files: This provides access to all non-special files in a chart.
  • Files.Get is a function for getting a file by name
  • Files.GetBytes is a function for getting the contents of a file as an array of bytes instead of as a string. This is useful for things like images.
  • Template: Contains information about the current template that is being executed
  • BasePath: The namespaced path to the templates directory of the current chart
  • The built-in values always begin with a capital letter.
  • Go’s naming convention
  • use only initial lower case letters in order to distinguish local names from those built-in.
  • If this is a subchart, the values.yaml file of a parent chart
  • Individual parameters passed with --set
  • values.yaml is the default, which can be overridden by a parent chart’s values.yaml, which can in turn be overridden by a user-supplied values file, which can in turn be overridden by --set parameters.
  • While structuring data this way is possible, the recommendation is that you keep your values trees shallow, favoring flatness.
  • If you need to delete a key from the default values, you may override the value of the key to be null, in which case Helm will remove the key from the overridden values merge.
  • Kubernetes would then fail because you can not declare more than one livenessProbe handler.
  • When injecting strings from the .Values object into the template, we ought to quote these strings.
  • quote
  • Template functions follow the syntax functionName arg1 arg2...
  • While we talk about the “Helm template language” as if it is Helm-specific, it is actually a combination of the Go template language, some extra functions, and a variety of wrappers to expose certain objects to the templates.
  • Drawing on a concept from UNIX, pipelines are a tool for chaining together a series of template commands to compactly express a series of transformations.
  • pipelines are an efficient way of getting several things done in sequence
  • The repeat function will echo the given string the given number of times
  • default DEFAULT_VALUE GIVEN_VALUE. This function allows you to specify a default value inside of the template, in case the value is omitted.
  • all static default values should live in the values.yaml, and should not be repeated using the default command
  • Operators are implemented as functions that return a boolean value.
  • To use eq, ne, lt, gt, and, or, not etcetera place the operator at the front of the statement followed by its parameters just as you would a function.
  • if and
  • if or
  • with to specify a scope
  • range, which provides a “for each”-style loop
  • block declares a special kind of fillable template area
  • A pipeline is evaluated as false if the value is: a boolean false, a numeric zero, an empty string, a nil (empty or null), or an empty collection (map, slice, tuple, dict, array).
  • incorrect YAML because of the whitespacing
  • When the template engine runs, it removes the contents inside of {{ and }}, but it leaves the remaining whitespace exactly as is.
  • {{- (with the dash and space added) indicates that whitespace should be chomped left, while -}} means whitespace to the right should be consumed.
  • Newlines are whitespace!
  • an * at the end of the line indicates a newline character that would be removed
  • Be careful with the chomping modifiers.
  • the indent function
  • Scopes can be changed. with can allow you to set the current scope (.) to a particular object.
  • Inside of the restricted scope, you will not be able to access the other objects from the parent scope.
  • range
  • The range function will “range over” (iterate through) the pizzaToppings list.
  • Just as the with action sets the scope of ., so does a range operator.
  • The toppings: |- line is declaring a multi-line string.
  • not a YAML list. It’s a big string.
  • the data in ConfigMaps data is composed of key/value pairs, where both the key and the value are simple strings.
  • The |- marker in YAML takes a multi-line string.
  • range can be used to iterate over collections that have a key and a value (like a map or dict).
  • In Helm templates, a variable is a named reference to another object. It follows the form $name
  • Variables are assigned with a special assignment operator: :=
  • {{- $relname := .Release.Name -}}
  • capture both the index and the value
  • the integer index (starting from zero) to $index and the value to $topping
  • For data structures that have both a key and a value, we can use range to get both
  • Variables are normally not “global”. They are scoped to the block in which they are declared.
  • There is one variable that is always global: $, which always points to the root context.
  • A powerful feature of the Helm template language is its ability to declare multiple templates and use them together.
  • A named template (sometimes called a partial or a subtemplate) is simply a template defined inside of a file, and given a name.
  • when naming templates: template names are global.
  • If you declare two templates with the same name, whichever one is loaded last will be the one used.
  • you should be careful to name your templates with chart-specific names.
  • templates in subcharts are compiled together with top-level templates
  • naming convention is to prefix each defined template with the name of the chart: {{ define "mychart.labels" }}
  • Helm has over 60 available functions.
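A short templates/configmap.yaml sketch pulling several of these pieces together: template directives, the quote and default functions, range over a list, the |- multi-line string, and whitespace chomping. The value names favorite.drink and pizzaToppings follow the guide's own examples:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: {{ .Release.Name }}-configmap
    data:
      drink: {{ .Values.favorite.drink | default "tea" | quote }}
      toppings: |-
        {{- range .Values.pizzaToppings }}
        - {{ . | title | quote }}
        {{- end }}

Rendered, toppings is one big string rather than a YAML list, which is what ConfigMap data (plain key/value strings) expects.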
張 旭

Memory inside Linux containers | Fabio Kung

  • /sys/fs/cgroup/ is the recommended location for cgroup hierarchies, but it is not a standard.
  • most container specific metrics are available at the cgroup filesystem via /path/to/cgroup/memory.stat, /path/to/cgroup/memory.usage_in_bytes, /path/to/cgroup/memory.limit_in_bytes and others.
  • cat /sys/fs/cgroup/memory/memory.stat
  • /sys/fs/cgroup is just an umbrella for all cgroup hierarchies, there is no recommendation or standard for my own cgroup location.
  • a userspace library that processes can use to query their memory usage and available memory.
  • we might need to encourage people to stop using those tools inside containers.
張 旭

Run the Docker daemon as a non-root user (Rootless mode) | Docker Documentation

  • running the Docker daemon and containers as a non-root user
  • Rootless mode does not require root privileges even during the installation of the Docker daemon
  • Rootless mode executes the Docker daemon and containers inside a user namespace.
  • in rootless mode, both the daemon and the container are running without root privileges.
  • Rootless mode does not use binaries with SETUID bits or file capabilities, except newuidmap and newgidmap, which are needed to allow multiple UIDs/GIDs to be used in the user namespace.
  • expose privileged ports (< 1024)
  • add net.ipv4.ip_unprivileged_port_start=0 to /etc/sysctl.conf (or /etc/sysctl.d) and run sudo sysctl --system
  • dockerd-rootless.sh uses slirp4netns (if installed) or VPNKit as the network stack by default.
  • These network stacks run in userspace and might have performance overhead
  • This error occurs when the number of available entries in /etc/subuid or /etc/subgid is not sufficient.
  • This error occurs mostly when the host is running in cgroup v2. See the section Fedora 31 or later for information on switching the host to use cgroup v1.
  • --net=host doesn’t listen on ports in the host network namespace. This is expected behavior, as the daemon is namespaced inside RootlessKit’s network namespace. Use docker run -p instead.
張 旭

Helm | Named Templates

  • a special-purpose include function that works similarly to the template action.
  • when naming templates: template names are global.
  • templates in subcharts are compiled together with top-level templates, you should be careful to name your templates with chart-specific names.
  • One popular naming convention is to prefix each defined template with the name of the chart: {{ define "mychart.labels" }}
  • using the specific chart name as a prefix we can avoid any conflicts
  • But files whose name begins with an underscore (_) are assumed to not have a manifest inside.
  • The define action allows us to create a named template inside of a template file.
  • include it with the template action
  • a define does not produce output unless it is called with a template
  • define functions should have a simple documentation block ({{/* ... */}}) describing what they do.
  • template names are global.
  • A popular naming convention is to prefix each defined template with the name of the chart
  • When a named template (created with define) is rendered, it will receive the scope passed in by the template call.
  • No scope was passed in, so within the template we cannot access anything in .
  • Note that we pass . at the end of the template call. We could just as easily pass .Values or .Values.favorite or whatever scope we want
  • the template that is substituted in has the text aligned to the left. Because template is an action, and not a function, there is no way to pass the output of a template call to other functions; the data is simply inserted inline.
  • use indent to indent
張 旭

Supported DDL operations for a CDC Replication Engine for Db2 Database - IBM Documentation

  • SQL statements are divided into two categories: Data Definition Language (DDL) and Data Manipulation Language (DML).
  • DDL operations on a table may affect dependent objects such as constraints and indexes.
  • "DDL statements are used to describe a database, to define its structure, to create its objects and to create the table's sub-objects."
張 旭

Ingress - Kubernetes

  • An API object that manages external access to the services in a cluster, typically HTTP.
  • load balancing
  • SSL termination
  • name-based virtual hosting
  • Services are assumed to have virtual IPs only routable within the cluster network.
  • Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
  • Traffic routing is controlled by rules defined on the Ingress resource.
  • An Ingress can be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name based virtual hosting.
  • Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
  • You must have an ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
  • As with all other Kubernetes resources, an Ingress needs apiVersion, kind, and metadata fields
  • Ingress frequently uses annotations to configure some options depending on the Ingress controller,
  • Ingress resource only supports rules for directing HTTP traffic.
  • An optional host.
  • A list of paths
  • A backend is a combination of Service and port names
  • has an associated backend
  • Both the host and path must match the content of an incoming request before the load balancer directs traffic to the referenced Service.
  • HTTP (and HTTPS) requests to the Ingress that matches the host and path of the rule are sent to the listed backend.
  • A default backend is often configured in an Ingress controller to service any requests that do not match a path in the spec.
  • An Ingress with no rules sends all traffic to a single default backend.
  • Ingress controllers and load balancers may take a minute or two to allocate an IP address.
  • A fanout configuration routes traffic from a single IP address to more than one Service, based on the HTTP URI being requested.
  • nginx.ingress.kubernetes.io/rewrite-target: /
  • describe ingress
  • get ingress
  • Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address.
  • route requests based on the Host header.
  • If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name-based virtual host being required.
  • Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination.
  • An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others.
  • persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can instead get these features through the load balancer used for a Service.
  • review the controller specific documentation to see how they handle health checks
  • edit ingress
  • After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.
  • kubectl replace -f on a modified Ingress YAML file.
  • Node: A worker machine in Kubernetes, part of a cluster.
  • in most common Kubernetes deployments, nodes in the cluster are not part of the public internet.
  • Edge router: A router that enforces the firewall policy for your cluster.
  • a gateway managed by a cloud provider or a physical piece of hardware.
  • Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model.
  • Service: A Kubernetes Service that identifies a set of Pods using label selectors.
  • An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
  • An Ingress does not expose arbitrary ports or protocols.
  • You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
  • The name of an Ingress object must be a valid DNS subdomain name
  • The Ingress spec has all the information needed to configure a load balancer or proxy server.
  • Ingress resource only supports rules for directing HTTP(S) traffic.
  • An Ingress with no rules sends all traffic to a single default backend and .spec.defaultBackend is the backend that should handle requests in that case.
  • If defaultBackend is not set, the handling of requests that do not match any of the rules will be up to the ingress controller
  • A common usage for a Resource backend is to ingress data to an object storage backend with static assets.
  • Exact: Matches the URL path exactly and with case sensitivity.
  • Prefix: Matches based on a URL path prefix split by /. Matching is case sensitive and done on a path element by element basis.
  • multiple paths within an Ingress will match a request. In those cases precedence will be given first to the longest matching path.
  • Hosts can be precise matches (for example “foo.bar.com”) or a wildcard (for example “*.foo.com”).
  • No match, wildcard only covers a single DNS label
  • Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configuration including the name of the controller that should implement the class.
  • secure an Ingress by specifying a Secret that contains a TLS private key and certificate.
  • The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress point (traffic to the Service and its Pods is in plaintext).
  • TLS will not work on the default rule because the certificates would have to be issued for all the possible sub-domains.
  • hosts in the tls section need to explicitly match the host in the rules section.
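A hedged example tying these rules together: a path rule, a name-based virtual host, TLS, and a default backend. Host names, Service names, and the Secret are placeholders:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example
    spec:
      ingressClassName: nginx
      tls:
        - hosts:
            - foo.bar.com        # must match the host in the rules below
          secretName: foo-bar-tls
      rules:
        - host: foo.bar.com
          http:
            paths:
              - path: /api
                pathType: Prefix   # matches /api and /api/*
                backend:
                  service:
                    name: api-svc
                    port:
                      number: 80
      defaultBackend:              # catches requests matching no rule
        service:
          name: default-svc
          port:
            number: 80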
張 旭

NGINX Ingress Controller - Documentation

  • NodePort, as the name says, means that a port on a node is configured to route incoming requests to a certain service.
  • LoadBalancer is a service, which is typically implemented by the cloud provider as an external service (with additional cost).
  • Load balancer provides a single IP address to access your services, which can run on multiple nodes.
  • Ingress controller helps to consolidate routing rules of multiple applications into one entity.
  • Ingress controller is exposed to an external network with the help of NodePort or LoadBalancer.
  • cloud load balancers are not necessary. Load balancer can also be implemented with MetalLB, which can be deployed in the same Kubernetes cluster.
  • to expose the Ingress controller to an external network is to use NodePort.
  • Installing NGINX using NodePort is the simplest example of an Ingress Controller, as we can avoid the load balancer dependency. NodePort is used for exposing the NGINX Ingress to the external network; a sketch follows below.
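A sketch of that NodePort exposure, assuming the controller pods carry the app.kubernetes.io/name: ingress-nginx label; the namespace and node ports are placeholders:

    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx
      namespace: ingress-nginx
    spec:
      type: NodePort
      selector:
        app.kubernetes.io/name: ingress-nginx   # assumed controller pod label
      ports:
        - name: http
          port: 80
          targetPort: 80
          nodePort: 30080    # must fall in the 30000-32767 NodePort range
        - name: https
          port: 443
          targetPort: 443
          nodePort: 30443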
張 旭

Cluster Networking - Kubernetes

  • Networking is a central part of Kubernetes, but it can be challenging to understand exactly how it is expected to work
  • Highly-coupled container-to-container communications
  • Pod-to-Pod communications
  • this is the primary focus of this document
    • 張 旭
       
      What Cluster Networking addresses is Pod-to-Pod connectivity.
  • Pod-to-Service communications
  • External-to-Service communications
  • Kubernetes is all about sharing machines between applications.
  • sharing machines requires ensuring that two applications do not try to use the same ports.
  • Dynamic port allocation brings a lot of complications to the system
  • Every Pod gets its own IP address
  • do not need to explicitly create links between Pods
  • almost never need to deal with mapping container ports to host ports.
  • Pods can be treated much like VMs or physical hosts from the perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration.
  • pods on a node can communicate with all pods on all nodes without NAT
  • agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node
  • pods in the host network of a node can communicate with all pods on all nodes without NAT
  • If your job previously ran in a VM, your VM had an IP and could talk to other VMs in your project. This is the same basic model.
  • containers within a Pod share their network namespaces - including their IP address
  • containers within a Pod can all reach each other’s ports on localhost
  • containers within a Pod must coordinate port usage
  • “IP-per-pod” model.
  • request ports on the Node itself which forward to your Pod (called host ports), but this is a very niche operation
  • The Pod itself is blind to the existence or non-existence of host ports.
  • AOS is an Intent-Based Networking system that creates and manages complex datacenter environments from a simple integrated platform.
  • Cisco Application Centric Infrastructure offers an integrated overlay and underlay SDN solution that supports containers, virtual machines, and bare metal servers.
  • AOS Reference Design currently supports Layer-3 connected hosts that eliminate legacy Layer-2 switching problems.
  • The AWS VPC CNI offers integrated AWS Virtual Private Cloud (VPC) networking for Kubernetes clusters.
  • users can apply existing AWS VPC networking and security best practices for building Kubernetes clusters.
  • Using this CNI plugin allows Kubernetes pods to have the same IP address inside the pod as they do on the VPC network.
  • The CNI allocates AWS Elastic Networking Interfaces (ENIs) to each Kubernetes node and using the secondary IP range from each ENI for pods on the node.
  • Big Cloud Fabric is a cloud native networking architecture, designed to run Kubernetes in private cloud/on-premises environments.
  • Cilium is L7/HTTP aware and can enforce network policies on L3-L7 using an identity based security model that is decoupled from network addressing.
  • CNI-Genie is a CNI plugin that enables Kubernetes to simultaneously have access to different implementations of the Kubernetes network model in runtime.
  • CNI-Genie also supports assigning multiple IP addresses to a pod, each from a different CNI plugin.
  • cni-ipvlan-vpc-k8s contains a set of CNI and IPAM plugins to provide a simple, host-local, low latency, high throughput, and compliant networking stack for Kubernetes within Amazon Virtual Private Cloud (VPC) environments by making use of Amazon Elastic Network Interfaces (ENI) and binding AWS-managed IPs into Pods using the Linux kernel’s IPvlan driver in L2 mode.
  • to be straightforward to configure and deploy within a VPC
  • Contiv provides configurable networking
  • Contrail, based on Tungsten Fabric, is a truly open, multi-cloud network virtualization and policy management platform.
  • DANM is a networking solution for telco workloads running in a Kubernetes cluster.
  • Flannel is a very simple overlay network that satisfies the Kubernetes requirements.
  • Any traffic bound for that subnet will be routed directly to the VM by the GCE network fabric.
  • sysctl net.ipv4.ip_forward=1
  • Jaguar provides overlay network using vxlan and Jaguar CNIPlugin provides one IP address per pod.
  • Knitter is a network solution which supports multiple networking in Kubernetes.
  • Kube-OVN is an OVN-based kubernetes network fabric for enterprises.
  • Kube-router provides a Linux LVS/IPVS-based service proxy, a Linux kernel forwarding-based pod-to-pod networking solution with no overlays, and iptables/ipset-based network policy enforcer.
  • If you have a “dumb” L2 network, such as a simple switch in a “bare-metal” environment, you should be able to do something similar to the above GCE setup.
  • Multus is a Multi CNI plugin to support the Multi Networking feature in Kubernetes using CRD based network objects in Kubernetes.
  • NSX-T can provide network virtualization for a multi-cloud and multi-hypervisor environment and is focused on emerging application frameworks and architectures that have heterogeneous endpoints and technology stacks.
  • NSX-T Container Plug-in (NCP) provides integration between NSX-T and container orchestrators such as Kubernetes
  • Nuage uses the open source Open vSwitch for the data plane along with a feature rich SDN Controller built on open standards.
  • OpenVSwitch is a somewhat more mature but also complicated way to build an overlay network
  • OVN is an opensource network virtualization solution developed by the Open vSwitch community.
  • Project Calico is an open source container networking provider and network policy engine.
  • Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet
  • Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking.
  • Calico can also be run in policy enforcement mode in conjunction with other networking solutions such as Flannel, aka canal, or native GCE, AWS or Azure networking.
  • Romana is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network
  • Weave Net runs as a CNI plug-in or stand-alone. In either version, it doesn’t require any configuration or extra code to run, and in both cases, the network provides one IP address per pod - as is standard for Kubernetes.
  • The network model is implemented by the container runtime on each node.
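A small illustration of the IP-per-pod model described above: two containers in one Pod share a network namespace, so the sidecar reaches nginx on localhost without any port mapping (the images are arbitrary public ones):

    apiVersion: v1
    kind: Pod
    metadata:
      name: shared-netns-demo
    spec:
      containers:
        - name: web
          image: nginx:alpine          # listens on port 80 inside the pod
        - name: sidecar
          image: curlimages/curl:latest
          # Same network namespace, so localhost:80 hits the web container.
          command: ["sh", "-c", "while true; do curl -s localhost:80 > /dev/null; sleep 5; done"]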
張 旭

Securing NGINX-ingress - cert-manager Documentation

  • If using a ClusterIssuer, remember to update the Ingress annotation cert-manager.io/issuer to cert-manager.io/cluster-issuer
  • Certificates resources allow you to specify the details of the certificate you want to request.
  • An Issuer defines how cert-manager will request TLS certificates.
  • cert-manager mainly uses two different custom Kubernetes resources - known as CRDs - to configure and control how it operates, as well as to store state. These resources are Issuers and Certificates.
  • using annotations on the ingress with ingress-shim or directly creating a certificate resource.
  • The secret that is used in the ingress should match the secret defined in the certificate.
  • a typo will result in the ingress-nginx-controller falling back to its self-signed certificate.
張 旭

3. Hello, world! - Err 9.9.9 documentation

  • The first is a Message object, which represents the full message object received by Errbot.
  • The second is a string (or a list, if using the split_args_with parameter of botcmd()) with the arguments passed to the command.
  • args would be the string “Mister Errbot”.
  • If you return None, Errbot will not respond with any kind of message when executing the command.
  • A file that ends with the extension .plug is used by Errbot to identify and load plugins.
  • The key Module should point to a module that Python can find and import.
  • The key Name should be identical to the name you gave to the class in your plugin file
  • The presence of __init__.py indicates that lib is a regular Python package.
  • the !status command,