Larvata / Group items tagged: cloud

張 旭

Intro to deployment strategies: blue-green, canary, and more - DEV Community

  • using a service-oriented architecture and microservices approach, developers can design a code base to be modular.
  • Modern applications are often distributed and cloud-based
  • different release cycles for different components
  • the abstraction of the infrastructure layer, which is now considered code. Deployment of a new application may require the deployment of new infrastructure code as well.
  • "big bang" deployments update whole or large parts of an application in one fell swoop.
  • Big bang deployments required the business to conduct extensive development and testing before release, often associated with the "waterfall model" of large sequential releases.
  • Rollbacks are often costly, time-consuming, or even impossible.
  • In a rolling deployment, an application’s new version gradually replaces the old one.
  • new and old versions will coexist without affecting functionality or user experience.
  • Each container is modified to download the latest image from the app vendor’s site.
  • two identical production environments work in parallel.
  • Once the testing results are successful, application traffic is routed from blue to green.
  • In a blue-green deployment, both systems use the same persistence layer or database back end.
  • You can have blue use the primary database for write operations and green use the secondary for read operations.
  • Blue-green deployments rely on traffic routing.
  • long TTL values can delay these changes.
  • The main challenge of canary deployment is to devise a way to route some users to the new application.
  • Use application logic to unlock new features for specific users and groups.
  • With CD, the CI-built code artifact is packaged and always ready to be deployed in one or more environments.
  • Use Build Automation tools to automate environment builds
  • Use configuration management tools
  • Enable automated rollbacks for deployments
  • An application performance monitoring (APM) tool can help your team monitor critical performance metrics including server response times after deployments.
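
The rolling-deployment notes above correspond to Kubernetes' built-in RollingUpdate strategy. Below is a minimal sketch of a Deployment configured that way; the app name, image, and replica count are illustrative placeholders, not values from the article.

    # Sketch: new Pods gradually replace old ones while both versions briefly coexist.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 4
      selector:
        matchLabels:
          app: web
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 1        # at most one extra Pod above the desired count during the rollout
          maxUnavailable: 1  # at most one Pod may be unavailable while updating
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: example.registry.local/web:2.0.0   # placeholder image
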
張 旭

Azure 101: Networking Part 1 - Cloud Solution Architect

  • Virtual Private Gateways: it is this combined set of services that allows traffic to flow to/from your Virtual Network and any external network, such as your on-prem data center.
  • No matter which version of the gateway you plan on implementing, there are three resources within Azure that you will need to implement and then connect to one of your Virtual Networks.
  • "Gateway Subnet". This is a specialized Subnet within your Virtual Network that can only be used for connecting Virtual Private Gateways to a VPN connection of some kind.
  • The Local Gateway is where you define the configuration of your external network's VPN access point with the most important piece being the external IP of that device so that Azure knows exactly how to establish the VPN connection.
  • The VPN Gateway is the Azure resource that you tie into your Gateway Subnet within your Virtual Network.
張 旭

Helm

  • A chart is a collection of files that describe a related set of Kubernetes resources.
  • A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
  • Charts are created as files laid out in a particular directory tree, then they can be packaged into versioned archives to be deployed.
  • A chart is organized as a collection of files inside of a directory.
  • values.yaml # The default configuration values for this chart
  • charts/ # A directory containing any charts upon which this chart depends.
  • templates/ # A directory of templates that, when combined with values, # will generate valid Kubernetes manifest files.
  • version: A SemVer 2 version (required)
  • apiVersion: The chart API version, always "v1" (required)
  • Every chart must have a version number. A version must follow the SemVer 2 standard.
  • non-SemVer names are explicitly disallowed by the system.
  • When generating a package, the helm package command will use the version that it finds in the Chart.yaml as a token in the package name.
  • the appVersion field is not related to the version field. It is a way of specifying the version of the application.
  • appVersion: The version of the app that this contains (optional). This needn't be SemVer.
  • If the latest version of a chart in the repository is marked as deprecated, then the chart as a whole is considered to be deprecated.
  • deprecated: Whether this chart is deprecated (optional, boolean)
  • one chart may depend on any number of other charts.
  • dependencies can be dynamically linked through the requirements.yaml file or brought in to the charts/ directory and managed manually.
  • the preferred method of declaring dependencies is by using a requirements.yaml file inside of your chart.
  • A requirements.yaml file is a simple file for listing your dependencies.
  • The repository field is the full URL to the chart repository.
  • you must also use helm repo add to add that repo locally.
  • helm dependency update and it will use your dependency file to download all the specified charts into your charts/ directory for you.
  • When helm dependency update retrieves charts, it will store them as chart archives in the charts/ directory.
  • Managing charts with requirements.yaml is a good way to easily keep charts updated, and also share requirements information throughout a team.
  • All charts are loaded by default.
  • The condition field holds one or more YAML paths (delimited by commas). If this path exists in the top parent’s values and resolves to a boolean value, the chart will be enabled or disabled based on that boolean value.
  • The tags field is a YAML list of labels to associate with this chart.
  • all charts with tags can be enabled or disabled by specifying the tag and a boolean value.
  • The --set parameter can be used as usual to alter tag and condition values.
  • Conditions (when set in values) always override tags.
  • The first condition path that exists wins and subsequent ones for that chart are ignored.
  • The keys containing the values to be imported can be specified in the parent chart’s requirements.yaml file using a YAML list. Each item in the list is a key which is imported from the child chart’s exports field.
  • specifying the key data in our import list, Helm looks in the exports field of the child chart for data key and imports its contents.
  • the parent key data is not contained in the parent’s final values. If you need to specify the parent key, use the ‘child-parent’ format.
  • To access values that are not contained in the exports key of the child chart’s values, you will need to specify the source key of the values to be imported (child) and the destination path in the parent chart’s values (parent).
  • To drop a dependency into your charts/ directory, use the helm fetch command
  • A dependency can be either a chart archive (foo-1.2.3.tgz) or an unpacked chart directory.
  • a name cannot start with _ or . ; such files are ignored by the chart loader.
  • a single release is created with all the objects for the chart and its dependencies.
  • Helm Chart templates are written in the Go template language, with the addition of 50 or so add-on template functions from the Sprig library and a few other specialized functions
  • When Helm renders the charts, it will pass every file in that directory through the template engine.
  • Chart developers may supply a file called values.yaml inside of a chart. This file can contain default values.
  • Chart users may supply a YAML file that contains values. This can be provided on the command line with helm install.
  • When a user supplies custom values, these values will override the values in the chart’s values.yaml file.
  • Template files follow the standard conventions for writing Go templates
  • {{default "minio" .Values.storage}}
  • Values that are supplied via a values.yaml file (or via the --set flag) are accessible from the .Values object in a template.
  • pre-defined, are available to every template, and cannot be overridden
  • the names are case sensitive
  • Release.Name: The name of the release (not the chart)
  • Release.IsUpgrade: This is set to true if the current operation is an upgrade or rollback.
  • Release.Revision: The revision number. It begins at 1, and increments with each helm upgrade
  • Chart: The contents of the Chart.yaml
  • Files: A map-like object containing all non-special files in the chart.
  • Files can be accessed using {{index .Files "file.name"}} or using the {{.Files.Get name}} or {{.Files.GetString name}} functions.
  • .helmignore
  • access the contents of the file as []byte using {{.Files.GetBytes}}
  • Any unknown Chart.yaml fields will be dropped
  • Chart.yaml cannot be used to pass arbitrarily structured data into the template.
  • A values file is formatted in YAML.
  • A chart may include a default values.yaml file
  • be merged into the default values file.
  • The default values file included inside of a chart must be named values.yaml
  • accessible inside of templates using the .Values object
  • Values files can declare values for the top-level chart, as well as for any of the charts that are included in that chart’s charts/ directory.
  • Charts at a higher level have access to all of the variables defined beneath.
  • lower level charts cannot access things in parent charts
  • Values are namespaced, but namespaces are pruned.
  • the scope of the values has been reduced and the namespace prefix removed
  • Helm supports special “global” value.
  • a way of sharing one top-level variable with all subcharts, which is useful for things like setting metadata properties like labels.
  • If a subchart declares a global variable, that global will be passed downward (to the subchart’s subcharts), but not upward to the parent chart.
  • global variables of parent charts take precedence over the global variables from subcharts.
  • helm lint
  • A chart repository is an HTTP server that houses one or more packaged charts
  • Any HTTP server that can serve YAML files and tar files and can answer GET requests can be used as a repository server.
  • Helm does not provide tools for uploading charts to remote repository servers.
  • the only way to add a chart to $HELM_HOME/starters is to manually copy it there.
  • Helm provides a hook mechanism to allow chart developers to intervene at certain points in a release’s life cycle.
  • Execute a Job to back up a database before installing a new chart, and then execute a second job after the upgrade in order to restore data.
  • Hooks are declared as an annotation in the metadata section of a manifest
  • Hooks work like regular templates, but they have special annotations
  • pre-install
  • post-install: Executes after all resources are loaded into Kubernetes
  • pre-delete
  • post-delete: Executes on a deletion request after all of the release’s resources have been deleted.
  • pre-upgrade
  • post-upgrade
  • pre-rollback
  • post-rollback: Executes on a rollback request after all resources have been modified.
  • crd-install
  • test-success: Executes when running helm test and expects the pod to return successfully (return code == 0).
  • test-failure: Executes when running helm test and expects the pod to fail (return code != 0).
  • Hooks allow you, the chart developer, an opportunity to perform operations at strategic points in a release lifecycle
  • Tiller then loads the hook with the lowest weight first (negative to positive)
  • Tiller returns the release name (and other data) to the client
  • If the resource is of kind Job, Tiller will wait until the job successfully runs to completion.
  • if the job fails, the release will fail. This is a blocking operation, so the Helm client will pause while the Job is run.
  • If they have hook weights (see below), they are executed in weighted order. Otherwise, ordering is not guaranteed.
  • good practice to add a hook weight, and set it to 0 if weight is not important.
  • The resources that a hook creates are not tracked or managed as part of the release.
  • leave the hook resource alone.
  • To destroy such resources, you need to either write code to perform this operation in a pre-delete or post-delete hook or add "helm.sh/hook-delete-policy" annotation to the hook template file.
  • Hooks are just Kubernetes manifest files with special annotations in the metadata section
  • One resource can implement multiple hooks
  • no limit to the number of different resources that may implement a given hook.
  • When subcharts declare hooks, those are also evaluated. There is no way for a top-level chart to disable the hooks declared by subcharts.
  • Hook weights can be positive or negative numbers but must be represented as strings.
  • sort those hooks in ascending order.
  • Hook deletion policies
  • "before-hook-creation" specifies Tiller should delete the previous hook before the new hook is launched.
  • By default Tiller will wait for 60 seconds for a deleted hook to no longer exist in the API server before timing out.
  • Custom Resource Definitions (CRDs) are a special kind in Kubernetes.
  • The crd-install hook is executed very early during an installation, before the rest of the manifests are verified.
  • A common reason why the hook resource might already exist is that it was not deleted following use on a previous install/upgrade.
  • Helm uses Go templates for templating your resource files.
  • two special template functions: include and required
  • include function allows you to bring in another template, and then pass the results to other template functions.
  • The required function allows you to declare a particular values entry as required for template rendering.
  • If the value is empty, the template rendering will fail with a user submitted error message.
  • When you are working with string data, you are always safer quoting the strings than leaving them as bare words
  • Quote Strings, Don’t Quote Integers
  • when working with integers do not quote the values
  • env variables values which are expected to be string
  • to include a template, and then perform an operation on that template’s output, Helm has a special include function
  • The above includes a template called toYaml, passes it $value, and then passes the output of that template to the nindent function.
  • Go provides a way for setting template options to control behavior when a map is indexed with a key that’s not present in the map
  • The required function gives developers the ability to declare a value entry as required for template rendering.
  • The tpl function allows developers to evaluate strings as templates inside a template.
  • Rendering an external configuration file
  • (.Files.Get "conf/app.conf")
  • Image pull secrets are essentially a combination of registry, username, and password.
  • Automatically Roll Deployments When ConfigMaps or Secrets change
  • configmaps or secrets are injected as configuration files in containers
  • a restart may be required should those be updated with a subsequent helm upgrade
  • The sha256sum function can be used to ensure a deployment’s annotation section is updated if another file changes
  • checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
  • helm upgrade --recreate-pods
  • "helm.sh/resource-policy": keep
  • resources that should not be deleted when Helm runs a helm delete
  • this resource becomes orphaned. Helm will no longer manage it in any way.
  • create some reusable parts in your chart
  • In the templates/ directory, any file that begins with an underscore(_) is not expected to output a Kubernetes manifest file.
  • by convention, helper templates and partials are placed in a _helpers.tpl file.
  • The current best practice for composing a complex application from discrete parts is to create a top-level umbrella chart that exposes the global configurations, and then use the charts/ subdirectory to embed each of the components.
  • SAP’s Converged charts: These charts install SAP Converged Cloud, a full OpenStack IaaS, on Kubernetes. All of the charts are collected together in one GitHub repository, except for a few submodules.
  • Deis’s Workflow: This chart exposes the entire Deis PaaS system with one chart. But it’s different from the SAP chart in that this umbrella chart is built from each component, and each component is tracked in a different Git repository.
  • YAML is a superset of JSON
  • any valid JSON structure ought to be valid in YAML.
  • As a best practice, templates should follow a YAML-like syntax unless the JSON syntax substantially reduces the risk of a formatting issue.
  • There are functions in Helm that allow you to generate random data, cryptographic keys, and so on.
  • a chart repository is a location where packaged charts can be stored and shared.
  • A chart repository is an HTTP server that houses an index.yaml file and optionally some packaged charts.
  • Because a chart repository can be any HTTP server that can serve YAML and tar files and can answer GET requests, you have a plethora of options when it comes down to hosting your own chart repository.
  • It is not required that a chart package be located on the same server as the index.yaml file.
  • A valid chart repository must have an index file. The index file contains information about each chart in the chart repository.
  • The Helm project provides an open-source Helm repository server called ChartMuseum that you can host yourself.
  • $ helm repo index fantastic-charts --url https://fantastic-charts.storage.googleapis.com
  • A repository will not be added if it does not contain a valid index.yaml
  • add the repository to their helm client via the helm repo add [NAME] [URL] command with any name they would like to use to reference the repository.
  • Helm has provenance tools which help chart users verify the integrity and origin of a package.
  • Integrity is established by comparing a chart to a provenance record
  • The provenance file contains a chart’s YAML file plus several pieces of verification information
  • Chart repositories serve as a centralized collection of Helm charts.
  • Chart repositories must make it possible to serve provenance files over HTTP via a specific request, and must make them available at the same URI path as the chart.
  • We don’t want to be “the certificate authority” for all chart signers. Instead, we strongly favor a decentralized model, which is part of the reason we chose OpenPGP as our foundational technology.
  • The Keybase platform provides a public centralized repository for trust information.
  • A chart contains a number of Kubernetes resources and components that work together.
  • A test in a helm chart lives under the templates/ directory and is a pod definition that specifies a container with a given command to run.
  • The pod definition must contain one of the helm test hook annotations: helm.sh/hook: test-success or helm.sh/hook: test-failure
  • helm test
  • nest your test suite under a tests/ directory like <chart-name>/templates/tests/
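
To make the dependency annotations above concrete, here is a minimal sketch of a Helm 2-style requirements.yaml using both a condition and a tag; the chart names, versions, and repository URL are illustrative placeholders.

    dependencies:
      - name: postgresql
        version: "3.2.1"
        repository: "https://example.com/charts"    # the repo must also be added locally with `helm repo add`
        condition: postgresql.enabled               # boolean path in the parent's values; conditions override tags
      - name: memcached
        version: "1.0.0"
        repository: "https://example.com/charts"
        tags:
          - cache                                   # all charts tagged "cache" can be enabled or disabled together

Running helm dependency update would then download both charts as archives into the charts/ directory.
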
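The hook annotations are easiest to read in a manifest. A minimal sketch of a pre-install/pre-upgrade Job hook with a weight and a deletion policy follows; the image and command are placeholders, not part of the quoted docs.

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: "{{ .Release.Name }}-db-backup"
      annotations:
        "helm.sh/hook": pre-install,pre-upgrade
        "helm.sh/hook-weight": "0"                          # weights must be strings; Tiller runs lowest first
        "helm.sh/hook-delete-policy": before-hook-creation  # remove the previous hook resource before creating this one
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: db-backup
              image: example.registry.local/db-tools:1.0    # placeholder image
              command: ["/bin/sh", "-c", "backup.sh"]       # placeholder backup command
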
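Similarly, the include and required functions and the _helpers.tpl convention can be sketched as follows; the chart name "mychart" and the values key are assumptions for illustration only.

    # templates/_helpers.tpl -- partials start with an underscore and emit no manifest
    {{- define "mychart.labels" -}}
    app.kubernetes.io/name: {{ .Chart.Name }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    {{- end -}}

    # templates/configmap.yaml -- uses the helper via include so its output can be piped to nindent
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: {{ .Release.Name }}-config
      labels:
        {{- include "mychart.labels" . | nindent 4 }}
    data:
      region: {{ required "region is required" .Values.region | quote }}   # fail rendering with this message if unset
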
張 旭

An App's Brief Journey from Source to Image · Cloud Native Buildpack Document...

  • A buildpack’s job is to gather everything your app needs to build and run, and it often does this job quickly and quietly.
  • a platform sequentially tests groups of buildpacks against your app’s source code.
  • transforming your source code into a runnable app image.
  • Detection criteria are specific to each buildpack – for instance, an NPM buildpack might look for a package.json, and a Go buildpack might look for Go source files.
  • A builder is essentially an image containing buildpacks.
張 旭

Introducing Infrastructure as Code | Linode

  • Infrastructure as Code (IaC) is a technique for deploying and managing infrastructure using software, configuration files, and automated tools.
  • With the older methods, technicians must configure a device manually, perhaps with the aid of an interactive tool. Information is added to configuration files by hand or through the use of ad-hoc scripts. Configuration wizards and similar utilities are helpful, but they still require hands-on management. A small group of experts owns the expertise, the process is typically poorly defined, and errors are common.
  • The development of the continuous integration and continuous delivery (CI/CD) pipeline made the idea of treating infrastructure as software much more attractive.
  • Infrastructure as Code takes advantage of the software development process, making use of quality assurance and test automation techniques.
  • Consistency/Standardization
  • Each node in the network becomes what is known as a snowflake, with its own unique settings. This leads to a system state that cannot easily be reproduced and is difficult to debug.
  • With standard configuration files and software-based configuration, there is greater consistency between all equipment of the same type. A key IaC concept is idempotence.
  • Idempotence makes it easy to troubleshoot, test, stabilize, and upgrade all the equipment.
  • Infrastructure as Code is central to the culture of DevOps, which is a mix of development and operations
  • edits are always made to the source configuration files, never on the target.
  • A declarative approach describes the final state of a device, but does not mandate how it should get there. The specific IaC tool makes all the procedural decisions. The end state is typically defined through a configuration file, a JSON specification, or a similar encoding.
  • An imperative approach defines specific functions or procedures that must be used to configure the device. It focuses on what must happen, but does not necessarily describe the final state. Imperative techniques typically use scripts for the implementation.
  • With a push configuration, the central server pushes the configuration to the destination device.
  • If a device is mutable, its configuration can be changed while it is active
  • Immutable devices cannot be changed. They must be decommissioned or rebooted and then completely rebuilt.
  • an immutable approach ensures consistency and avoids drift. However, it usually takes more time to remove or rebuild a configuration than it does to change it.
  • System administrators should consider security issues as part of the development process.
  • Ansible is a very popular open source IaC application from Red Hat
  • Ansible is often used in conjunction with Kubernetes and Docker.
  • Linode offers a collection of several Ansible guides for a more comprehensive overview.
  • Pulumi permits the use of a variety of programming languages to deploy and manage infrastructure within a cloud environment.
  • Terraform allows users to provision data center infrastructure using either JSON or Terraform’s own declarative language.
  • Terraform manages resources through the use of providers, which are similar to APIs.
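
As a concrete example of the declarative, idempotent style described above, here is a minimal Ansible playbook sketch; the host group and package are illustrative assumptions, and repeated runs converge on the same end state rather than re-applying changes.

    - name: Ensure web servers are configured
      hosts: webservers            # placeholder inventory group
      become: true
      tasks:
        - name: Ensure nginx is installed
          ansible.builtin.package:
            name: nginx
            state: present         # declares the end state; a no-op if already installed
        - name: Ensure nginx is running and enabled
          ansible.builtin.service:
            name: nginx
            state: started
            enabled: true
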
張 旭

Think Before you NodePort in Kubernetes - Oteemo

  • Two options are provided for Services intended for external use: a NodePort, or a LoadBalancer
  • no built-in cloud load balancers for Kubernetes in bare-metal environments
  • NodePort may not be your best choice.
  • NodePort, by design, bypasses almost all network security in Kubernetes.
  • NetworkPolicy resources can currently only control NodePorts by allowing or disallowing all traffic on them.
  • put a network filter in front of all the nodes
  • if a NodePort-ranged Service is advertised to the public, it may serve as an invitation to black hats to scan and probe
  • When Kubernetes creates a NodePort service, it allocates a port from a range specified in the flags that define your Kubernetes cluster. (By default, these are ports ranging from 30000-32767.)
  • By design, Kubernetes NodePort cannot expose standard low-numbered ports like 80 and 443, or even 8080 and 8443.
  • A port in the NodePort range can be specified manually, but this would mean the creation of a list of non-standard ports, cross-referenced with the applications they map to
  • if you want the exposed application to be highly available, everything contacting the application has to know all of your node addresses, or at least more than one.
  • non-standard ports.
  • Ingress resources use an Ingress controller (the nginx one is common but not by any means the only choice) and an external load balancer or public IP to enable path-based routing of external requests to internal Services.
  • With a single point of entry to expose and secure
  • get simpler TLS management!
  • consider putting a real load balancer in front of your NodePort Services before opening them up to the world
  • Google very recently released an alpha-stage bare-metal load balancer that, once installed in your cluster, will load-balance using BGP
  • NodePort Services are easy to create but hard to secure, hard to manage, and not especially friendly to others
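
For reference, a minimal sketch of the kind of NodePort Service being discussed; the names and ports are placeholders, and the nodePort value must fall inside the cluster's configured range (30000-32767 by default).

    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: NodePort
      selector:
        app: web
      ports:
        - port: 80          # cluster-internal Service port
          targetPort: 8080  # container port on the Pods
          nodePort: 30080   # manually chosen from the NodePort range; omit to let Kubernetes allocate one
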
張 旭

Tagging AWS resources - AWS General Reference

  • assign metadata to your AWS resources in the form of tags.
  • a user-defined key and value
  • Tag keys are case sensitive.
  • tag values are case sensitive.
  • Tags are accessible to many AWS services, including billing.
  • personally identifiable information (PII)
  • apply it consistently across all resource types.
  • Use automated tools to help manage resource tags.
  • Use too many tags rather than too few tags.
  • Tag policies let you specify tagging rules that define valid key names and the values that are valid for each key.
  • Name – Identify individual resources
  • Environment – Distinguish between development, test, and production resources
  • Project – Identify projects that the resource supports
  • Owner – Identify who is responsible for the resource
  • Each resource can have a maximum of 50 user-created tags.
  • For each resource, each tag key must be unique, and each tag key can have only one value.
  • Tag keys and values are case sensitive.
  • decide on a strategy for capitalizing tags, and consistently implement that strategy across all resource types.
  • AWS Cost Explorer and detailed billing reports let you break down AWS costs by tag.
  • An effective tagging strategy uses standardized tags and applies them consistently and programmatically across AWS resources.
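
One way to apply such a standardized tag set programmatically is in an infrastructure template; the CloudFormation sketch below is an assumption for illustration (the article covers tagging in general), and the resource properties and tag values are placeholders.

    Resources:
      AppServer:
        Type: AWS::EC2::Instance
        Properties:
          ImageId: ami-0123456789abcdef0   # placeholder AMI ID
          InstanceType: t3.micro
          Tags:                            # standardized keys applied consistently across resources
            - Key: Name
              Value: app-server-01
            - Key: Environment
              Value: production
            - Key: Project
              Value: billing
            - Key: Owner
              Value: platform-team
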
張 旭

Ingress - Kubernetes

  • An API object that manages external access to the services in a cluster, typically HTTP.
  • load balancing
  • SSL termination
  • name-based virtual hosting
  • Edge router: a router that enforces the firewall policy for your cluster.
  • Cluster network: a set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model.
  • Service: a Kubernetes Service that identifies a set of Pods using label selectors.
  • Services are assumed to have virtual IPs only routable within the cluster network.
  • Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
  • Traffic routing is controlled by rules defined on the Ingress resource.
  • An Ingress can be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name based virtual hosting.
  • Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
  • You must have an ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
  • As with all other Kubernetes resources, an Ingress needs apiVersion, kind, and metadata fields
  • Ingress frequently uses annotations to configure some options depending on the Ingress controller,
  • Ingress resource only supports rules for directing HTTP traffic.
  • An optional host.
  • A list of paths
  • A backend is a combination of Service and port names
  • has an associated backend
  • Both the host and path must match the content of an incoming request before the load balancer directs traffic to the referenced Service.
  • HTTP (and HTTPS) requests to the Ingress that matches the host and path of the rule are sent to the listed backend.
  • A default backend is often configured in an Ingress controller to service any requests that do not match a path in the spec.
  • An Ingress with no rules sends all traffic to a single default backend.
  • Ingress controllers and load balancers may take a minute or two to allocate an IP address.
  • A fanout configuration routes traffic from a single IP address to more than one Service, based on the HTTP URI being requested.
  • nginx.ingress.kubernetes.io/rewrite-target: /
  • describe ingress
  • get ingress
  • Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address.
  • route requests based on the Host header.
  • an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being required.
  • secure an Ingress by specifying a Secret (a resource that stores sensitive information, such as passwords, OAuth tokens, and ssh keys) that contains a TLS private key and certificate.
  • Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination.
  • An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others.
  • persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can instead get these features through the load balancer used for a Service.
  • review the controller specific documentation to see how they handle health checks
  • edit ingress
  • After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.
  • kubectl replace -f on a modified Ingress YAML file.
  • Node: A worker machine in Kubernetes, part of a cluster.
  • in most common Kubernetes deployments, nodes in the cluster are not part of the public internet.
  • Edge router: A router that enforces the firewall policy for your cluster.
  • a gateway managed by a cloud provider or a physical piece of hardware.
  • Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model.
  • Service: A Kubernetes Service that identifies a set of Pods using label selectors.
  • An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
  • An Ingress does not expose arbitrary ports or protocols.
  • You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
  • The name of an Ingress object must be a valid DNS subdomain name
  • The Ingress spec has all the information needed to configure a load balancer or proxy server.
  • Ingress resource only supports rules for directing HTTP(S) traffic.
  • An Ingress with no rules sends all traffic to a single default backend and .spec.defaultBackend is the backend that should handle requests in that case.
  • If defaultBackend is not set, the handling of requests that do not match any of the rules will be up to the ingress controller
  • A common usage for a Resource backend is to ingress data to an object storage backend with static assets.
  • Exact: Matches the URL path exactly and with case sensitivity.
  • Prefix: Matches based on a URL path prefix split by /. Matching is case sensitive and done on a path element by element basis.
  • multiple paths within an Ingress will match a request. In those cases precedence will be given first to the longest matching path.
  • Hosts can be precise matches (for example “foo.bar.com”) or a wildcard (for example “*.foo.com”).
  • No match: a wildcard only covers a single DNS label (for example, *.foo.com does not match baz.bar.foo.com or foo.com)
  • Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configuration including the name of the controller that should implement the class.
  • secure an Ingress by specifying a Secret that contains a TLS private key and certificate.
  • The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress point (traffic to the Service and its Pods is in plaintext).
  • TLS will not work on the default rule because the certificates would have to be issued for all the possible sub-domains.
  • hosts in the tls section need to explicitly match the host in the rules section.
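
Pulling the pieces above together, a minimal Ingress sketch with a class, one host-and-path rule, and TLS terminated at the ingress point; the host name, Service name, and Secret name are illustrative placeholders.

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web
    spec:
      ingressClassName: nginx           # references an IngressClass; a controller must exist to satisfy this Ingress
      tls:
        - hosts:
            - app.example.com           # must explicitly match the host in the rules section
          secretName: app-example-tls   # Secret holding the TLS private key and certificate
      rules:
        - host: app.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: web
                    port:
                      number: 80
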
張 旭

Service | Kubernetes

  • Each Pod gets its own IP address
  • Pods are nonpermanent resources.
  • Kubernetes Pods are created and destroyed to match the state of your cluster
  • In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service).
  • The set of Pods targeted by a Service is usually determined by a selector
  • If you're able to use Kubernetes APIs for service discovery in your application, you can query the API server for Endpoints, that get updated whenever the set of Pods in a Service changes.
  • A Service in Kubernetes is a REST object, similar to a Pod.
  • The name of a Service object must be a valid DNS label name
  • Kubernetes assigns this Service an IP address (sometimes called the "cluster IP"), which is used by the Service proxies
  • A Service can map any incoming port to a targetPort. By default and for convenience, the targetPort is set to the same value as the port field.
  • The default protocol for Services is TCP
  • As many Services need to expose more than one port, Kubernetes supports multiple port definitions on a Service object. Each port definition can have the same protocol, or a different one.
  • Because this Service has no selector, the corresponding Endpoints object is not created automatically. You can manually map the Service to the network address and port where it's running, by adding an Endpoints object manually
  • Endpoint IP addresses cannot be the cluster IPs of other Kubernetes Services
  • Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP
  • ClusterIP: Exposes the Service on a cluster-internal IP.
  • NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
  • LoadBalancer: Exposes the Service externally using a cloud provider's load balancer
  • ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
  • You can also use Ingress to expose your Service. Ingress is not a Service type, but it acts as the entry point for your cluster.
  • If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767).
  • The default for --nodeport-addresses is an empty list. This means that kube-proxy should consider all available network interfaces for NodePort.
  • you need to take care of possible port collisions yourself. You also have to use a valid port number, one that's inside the range configured for NodePort use.
  • Service is visible as <NodeIP>:spec.ports[*].nodePort and .spec.clusterIP:spec.ports[*].port
  • Choosing this value makes the Service only reachable from within the cluster.
  • NodePort: Exposes the Service on each Node's IP at a static port
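
A minimal sketch of a ClusterIP Service with a selector and multiple named ports, matching the annotations above; the label and port numbers are illustrative placeholders.

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      type: ClusterIP          # the default; only reachable from within the cluster
      selector:
        app: my-app            # the set of Pods targeted by this Service
      ports:
        - name: http
          protocol: TCP        # TCP is the default protocol for Services
          port: 80             # Service port
          targetPort: 8080     # port on the Pods; defaults to the value of `port` if omitted
        - name: metrics
          protocol: TCP
          port: 9090
          targetPort: 9090
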
張 旭

Providers - Configuration Language | Terraform | HashiCorp Developer

  • Terraform relies on plugins called providers to interact with cloud providers, SaaS providers, and other APIs.
  • Terraform configurations must declare which providers they require so that Terraform can install and use them.
  • Each provider adds a set of resource types and/or data sources that Terraform can manage.
  • Every resource type is implemented by a provider; without providers, Terraform can't manage any kind of infrastructure.
  • The Terraform Registry is the main directory of publicly available Terraform providers, and hosts providers for most major infrastructure platforms.
  • Dependency Lock File documents an additional HCL file that can be included with a configuration, which tells Terraform to always use a specific set of provider versions.
  • Terraform CLI finds and installs providers when initializing a working directory. It can automatically download providers from a Terraform registry, or load them from a local mirror or cache.
  • To save time and bandwidth, Terraform CLI supports an optional plugin cache. You can enable the cache using the plugin_cache_dir setting in the CLI configuration file.
  • you can use Terraform CLI to create a dependency lock file and commit it to version control along with your configuration.
張 旭

Moving away from Alpine - DEV Community

  • it’s a lot of work to get packages that are not readily available in the Alpine repository.
  • things compiled in Alpine won’t be usable on Ubuntu, for example, and vice versa.
  • the difficulty in pinning package versions in Alpine.
  • Developers rely heavily on app logs via syslog (mounted /dev/log) and Alpine uses busybox syslog by default.
  • Ubuntu officially launched minimal ubuntu images for cloud / container use
張 旭

Considerations for large clusters | Kubernetes

  • A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by the control plane.
  • Kubernetes v1.23 supports clusters with up to 5000 nodes.
  • criteria: no more than 110 pods per node; no more than 5,000 nodes; no more than 150,000 total pods; no more than 300,000 total containers
  • In-use IP addresses
  • run one or two control plane instances per failure zone, scaling those instances vertically first and then scaling horizontally after vertical scaling reaches the point of diminishing returns.
  • Kubernetes nodes do not automatically steer traffic towards control-plane endpoints that are in the same failure zone
  • store Event objects in a separate dedicated etcd instance.
  • start and configure additional etcd instance
  • Kubernetes resource limits help to minimize the impact of memory leaks and other ways that pods and containers can impact on other components.
  • Addons' default limits are typically based on data collected from experience running each addon on small or medium Kubernetes clusters.
  • When running on large clusters, addons often consume more of some resources than their default limits.
  • Many addons scale horizontally - you add capacity by running more pods
  • The VerticalPodAutoscaler can run in recommender mode to provide suggested figures for requests and limits.
  • Some addons run as one copy per node, controlled by a DaemonSet: for example, a node-level log aggregator.
  • VerticalPodAutoscaler is a custom resource that you can deploy into your cluster to help you manage resource requests and limits for pods.
  • The cluster autoscaler integrates with a number of cloud providers to help you run the right number of nodes for the level of resource demand in your cluster.
  • The addon resizer helps you in resizing the addons automatically as your cluster's scale changes.
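
The recommender-mode VerticalPodAutoscaler mentioned above can be sketched roughly as follows; the target Deployment name is an illustrative placeholder.

    apiVersion: autoscaling.k8s.io/v1
    kind: VerticalPodAutoscaler
    metadata:
      name: addon-recommender
    spec:
      targetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: cluster-addon        # placeholder for the addon Deployment to observe
      updatePolicy:
        updateMode: "Off"          # recommender mode: publish suggested requests/limits, do not evict or resize Pods
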