- Using a service-oriented architecture and microservices approach, developers can design a code base to be modular.
Intro to deployment strategies: blue-green, canary, and more - DEV Community
- the abstraction of the infrastructure layer, which is now considered code. Deployment of a new application may require the deployment of new infrastructure code as well.
- Big bang deployments required the business to conduct extensive development and testing before release, often associated with the "waterfall model" of large sequential releases.
- You can use blue's primary database for write operations and green's secondary database for read operations.
- The main challenge of canary deployment is to devise a way to route some users to the new application.
- With CD, the CI-built code artifact is packaged and always ready to be deployed in one or more environments.
- An application performance monitoring (APM) tool can help your team monitor critical performance metrics, including server response times, after deployments.
Google Cloud Platform Blog: Introducing Kayenta: An open automated canary analysis tool...
Understanding MySQL runtime status with mysqladmin ext - Database, Cloud Computing and Life
Azure 101: Networking Part 1 - Cloud Solution Architect
- Virtual Private Gateways: it is this combined set of services that allows you to provide traffic flow to/from your Virtual Network and any external network, such as your on-prem data center.
- No matter which version of the gateway you plan on implementing, there are three resources within Azure that you will need to implement and then connect to one of your Virtual Networks.
- "Gateway Subnet": a specialized Subnet within your Virtual Network that can only be used for connecting Virtual Private Gateways to a VPN connection of some kind.
- The Local Gateway is where you define the configuration of your external network's VPN access point, with the most important piece being the external IP of that device, so that Azure knows exactly how to establish the VPN connection.
- The VPN Gateway is the Azure resource that you tie into your Gateway Subnet within your Virtual Network.
Helm
- A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
- Charts are created as files laid out in a particular directory tree, then they can be packaged into versioned archives to be deployed.
- templates/: a directory of templates that, when combined with values, will generate valid Kubernetes manifest files.
- When generating a package, the helm package command will use the version that it finds in the Chart.yaml as a token in the package name.
- The appVersion field is not related to the version field. It is a way of specifying the version of the application.
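
As an illustration of the two fields, a minimal Chart.yaml might look like this (the chart name and versions are hypothetical):

```yaml
# Hypothetical Chart.yaml illustrating version vs. appVersion
apiVersion: v1
name: mychart          # chart name; combined with version to form the package name
version: 1.2.3         # chart version; helm package would produce mychart-1.2.3.tgz
appVersion: "2.4.1"    # version of the packaged application; not used in the package name
description: An example chart
```

Bumping appVersion alone (say, after the application ships a new release) would not change the package name; only the version field does that.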
- If the latest version of a chart in the repository is marked as deprecated, then the chart as a whole is considered to be deprecated.
- Dependencies can be dynamically linked through the requirements.yaml file or brought into the charts/ directory and managed manually.
- The preferred method of declaring dependencies is by using a requirements.yaml file inside of your chart.
- Run helm dependency update and it will use your dependency file to download all the specified charts into your charts/ directory for you.
- When helm dependency update retrieves charts, it will store them as chart archives in the charts/ directory.
- Managing charts with requirements.yaml is a good way to easily keep charts updated, and also share requirements information throughout a team.
- The condition field holds one or more YAML paths (delimited by commas). If this path exists in the top parent's values and resolves to a boolean value, the chart will be enabled or disabled based on that boolean value.
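
A sketch of a requirements.yaml using the condition field (the subchart name and repository URL are hypothetical):

```yaml
# Hypothetical requirements.yaml in a parent chart
dependencies:
  - name: subchart1
    version: 1.0.0
    repository: https://example.com/charts
    condition: subchart1.enabled   # resolved against the top parent's values
```

Setting subchart1.enabled: false in the parent's values.yaml (or via --set subchart1.enabled=false) would then disable the dependency.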
- The keys containing the values to be imported can be specified in the parent chart's requirements.yaml file using a YAML list. Each item in the list is a key which is imported from the child chart's exports field.
- By specifying the key data in our import list, Helm looks in the exports field of the child chart for the data key and imports its contents.
- The parent key data is not contained in the parent's final values. If you need to specify the parent key, use the 'child-parent' format.
- To access values that are not contained in the exports key of the child chart's values, you will need to specify the source key of the values to be imported (child) and the destination path in the parent chart's values (parent).
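
Both forms can be sketched in one requirements.yaml (chart names and keys here are hypothetical):

```yaml
# Hypothetical requirements.yaml showing both import-values forms
dependencies:
  - name: subchart1
    version: 1.0.0
    repository: https://example.com/charts
    import-values:
      - data                       # imports the contents of the child's exports.data key
      - child: default.port        # child-parent format: source key in the child's values...
        parent: myimports.port     # ...and destination path in the parent's values
```

The first entry relies on the child declaring an exports.data key; the second reaches into arbitrary child values, so no exports declaration is needed.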
- Helm Chart templates are written in the Go template language, with the addition of 50 or so add-on template functions from the Sprig library and a few other specialized functions.
- When Helm renders the charts, it will pass every file in that directory through the template engine.
- Chart developers may supply a file called values.yaml inside of a chart. This file can contain default values.
- Chart users may supply a YAML file that contains values. This can be provided on the command line with helm install.
- When a user supplies custom values, these values will override the values in the chart's values.yaml file.
- Values that are supplied via a values.yaml file (or via the --set flag) are accessible from the .Values object in a template.
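
A minimal template fragment reading from .Values might look like this (the favoriteDrink key is a hypothetical example, not from the source):

```yaml
# Hypothetical templates/configmap.yaml fragment
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  drink: {{ .Values.favoriteDrink | quote }}   # default from values.yaml, overridable with --set favoriteDrink=...
```

Here the quote function illustrates the earlier advice about quoting string data rather than leaving it as bare words.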
- Files can be accessed using {{index .Files "file.name"}} or using the {{.Files.Get name}} or {{.Files.GetString name}} functions.
- Values files can declare values for the top-level chart, as well as for any of the charts that are included in that chart's charts/ directory.
- Global values provide a way of sharing one top-level variable with all subcharts, which is useful for things like setting metadata properties like labels.
- If a subchart declares a global variable, that global will be passed downward (to the subchart's subcharts), but not upward to the parent chart.
- Any HTTP server that can serve YAML files and tar files and can answer GET requests can be used as a repository server.
- Helm provides a hook mechanism to allow chart developers to intervene at certain points in a release's life cycle.
- Execute a Job to back up a database before installing a new chart, and then execute a second Job after the upgrade in order to restore data.
- test-success: executes when running helm test and expects the pod to return successfully (return code == 0).
- Hooks allow you, the chart developer, an opportunity to perform operations at strategic points in a release lifecycle.
- If the Job fails, the release will fail. This is a blocking operation, so the Helm client will pause while the Job is run.
- If they have hook weights (see below), they are executed in weighted order. Otherwise, ordering is not guaranteed.
- To destroy such resources, you need to either write code to perform this operation in a pre-delete or post-delete hook or add a "helm.sh/hook-delete-policy" annotation to the hook template file.
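
Putting the hook, weight, and delete-policy annotations together, the metadata of a pre-install backup Job might look like this sketch (the Job itself is hypothetical):

```yaml
# Hypothetical pre-install hook Job (metadata only; spec omitted)
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-db-backup"
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-5"                        # lower weights run first; value must be a string
    "helm.sh/hook-delete-policy": before-hook-creation  # delete the old hook resource before launching a new one
```

The before-hook-creation policy avoids the "hook resource already exists" failure described below.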
- When subcharts declare hooks, those are also evaluated. There is no way for a top-level chart to disable the hooks declared by subcharts.
- "before-hook-creation" specifies that Tiller should delete the previous hook before the new hook is launched.
- By default, Tiller will wait for 60 seconds for a deleted hook to no longer exist in the API server before timing out.
- The crd-install hook is executed very early during an installation, before the rest of the manifests are verified.
- A common reason why the hook resource might already exist is that it was not deleted following use on a previous install/upgrade.
- The include function allows you to bring in another template, and then pass the results to other template functions.
- The required function allows you to declare a particular values entry as required for template rendering.
- When you are working with string data, you are always safer quoting the strings than leaving them as bare words.
- To include a template, and then perform an operation on that template's output, Helm has a special include function.
- The above includes a template called toYaml, passes it $value, and then passes the output of that template to the nindent function.
- Go provides a way for setting template options to control behavior when a map is indexed with a key that's not present in the map.
- The sha256sum function can be used to ensure a deployment's annotation section is updated if another file changes.
- In the templates/ directory, any file that begins with an underscore (_) is not expected to output a Kubernetes manifest file.
- The current best practice for composing a complex application from discrete parts is to create a top-level umbrella chart that exposes the global configurations, and then use the charts/ subdirectory to embed each of the components.
- SAP's Converged charts: these charts install SAP Converged Cloud, a full OpenStack IaaS, on Kubernetes. All of the charts are collected together in one GitHub repository, except for a few submodules.
- Deis's Workflow: this chart exposes the entire Deis PaaS system with one chart. But it's different from the SAP chart in that this umbrella chart is built from each component, and each component is tracked in a different Git repository.
- As a best practice, templates should follow a YAML-like syntax unless the JSON syntax substantially reduces the risk of a formatting issue.
- A chart repository is an HTTP server that houses an index.yaml file and optionally some packaged charts.
- Because a chart repository can be any HTTP server that can serve YAML and tar files and can answer GET requests, you have a plethora of options when it comes down to hosting your own chart repository.
- A valid chart repository must have an index file. The index file contains information about each chart in the chart repository.
- The Helm project provides an open-source Helm repository server called ChartMuseum that you can host yourself.
- Users can add the repository to their helm client via the helm repo add [NAME] [URL] command, with any name they would like to use to reference the repository.
- Chart repositories must make it possible to serve provenance files over HTTP via a specific request, and must make them available at the same URI path as the chart.
- We don't want to be "the certificate authority" for all chart signers. Instead, we strongly favor a decentralized model, which is part of the reason we chose OpenPGP as our foundational technology.
- A test in a Helm chart lives under the templates/ directory and is a pod definition that specifies a container with a given command to run.
- The pod definition must contain one of the helm test hook annotations: helm.sh/hook: test-success or helm.sh/hook: test-failure.
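
A sketch of such a test pod, assuming a hypothetical Service named {{ .Release.Name }}-svc exists in the chart:

```yaml
# Hypothetical templates/tests/connection-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-connection-test"
  annotations:
    "helm.sh/hook": test-success   # the test passes only if this pod exits with code 0
spec:
  containers:
    - name: wget
      image: busybox
      command: ["wget"]
      args: ["{{ .Release.Name }}-svc:80"]   # probe the chart's Service
  restartPolicy: Never
```

Running helm test on the release would then schedule this pod and report success or failure based on its exit code.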
An App's Brief Journey from Source to Image · Cloud Native Buildpack Document...
- A buildpack's job is to gather everything your app needs to build and run, and it often does this job quickly and quietly.
- Detection criteria is specific to each buildpack – for instance, an NPM buildpack might look for a package.json, and a Go buildpack might look for Go source files.
Introducing Infrastructure as Code | Linode
- Infrastructure as Code (IaC) is a technique for deploying and managing infrastructure using software, configuration files, and automated tools.
- With the older methods, technicians must configure a device manually, perhaps with the aid of an interactive tool. Information is added to configuration files by hand or through the use of ad-hoc scripts. Configuration wizards and similar utilities are helpful, but they still require hands-on management. A small group of experts owns the expertise, the process is typically poorly defined, and errors are common.
- The development of the continuous integration and continuous delivery (CI/CD) pipeline made the idea of treating infrastructure as software much more attractive.
- Infrastructure as Code takes advantage of the software development process, making use of quality assurance and test automation techniques.
- Each node in the network becomes what is known as a snowflake, with its own unique settings. This leads to a system state that cannot easily be reproduced and is difficult to debug.
- With standard configuration files and software-based configuration, there is greater consistency between all equipment of the same type. A key IaC concept is idempotence.
- Infrastructure as Code is central to the culture of DevOps, which is a mix of development and operations.
- A declarative approach describes the final state of a device, but does not mandate how it should get there. The specific IaC tool makes all the procedural decisions. The end state is typically defined through a configuration file, a JSON specification, or a similar encoding.
- An imperative approach defines specific functions or procedures that must be used to configure the device. It focuses on what must happen, but does not necessarily describe the final state. Imperative techniques typically use scripts for the implementation.
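
As a rough, tool-agnostic sketch of the declarative style, a specification might name only the desired end state and leave the steps to the tool (the schema below is invented for illustration and does not belong to any particular IaC product):

```yaml
# Invented declarative spec: describes the end state, not how to reach it
server:
  hostname: web-01
  packages:
    - nginx           # must be installed; the tool decides how
  services:
    nginx:
      state: running  # the tool starts it only if it is not already running
      enabled: true   # idempotent: re-applying this spec changes nothing
```

Re-applying the same specification should produce no changes on an already-compliant device, which is the idempotence property mentioned above.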
- Immutable devices cannot be changed. They must be decommissioned or rebooted and then completely rebuilt.
- An immutable approach ensures consistency and avoids drift. However, it usually takes more time to remove or rebuild a configuration than it does to change it.
- Pulumi permits the use of a variety of programming languages to deploy and manage infrastructure within a cloud environment.
- Terraform allows users to provision data center infrastructure using either JSON or Terraform's own declarative language.
Think Before you NodePort in Kubernetes - Oteemo
- NetworkPolicy resources can currently only control NodePorts by allowing or disallowing all traffic on them.
- If a NodePort-ranged Service is advertised to the public, it may serve as an invitation to black-hats to scan and probe.
- When Kubernetes creates a NodePort service, it allocates a port from a range specified in the flags that define your Kubernetes cluster. (By default, these are ports ranging from 30000-32767.)
- By design, Kubernetes NodePort cannot expose standard low-numbered ports like 80 and 443, or even 8080 and 8443.
- A port in the NodePort range can be specified manually, but this would mean the creation of a list of non-standard ports, cross-referenced with the applications they map to.
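
Pinning a NodePort manually looks like this sketch (the app name and port choices are hypothetical):

```yaml
# Hypothetical Service pinning a NodePort from the default 30000-32767 range
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80          # the Service's port inside the cluster
      targetPort: 8080  # the container port the traffic is forwarded to
      nodePort: 30080   # must fall inside the configured NodePort range
```

Omitting nodePort lets Kubernetes pick a free port from the range, which avoids collisions but makes the exposed port unpredictable.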
- If you want the exposed application to be highly available, everything contacting the application has to know all of your node addresses, or at least more than one.
- Ingress resources use an Ingress controller (the nginx one is common but not by any means the only choice) and an external load balancer or public IP to enable path-based routing of external requests to internal Services.
- Consider putting a real load balancer in front of your NodePort Services before opening them up to the world.
- Google very recently released an alpha-stage bare-metal load balancer that, once installed in your cluster, will load-balance using BGP.
- NodePort Services are easy to create but hard to secure, hard to manage, and not especially friendly to others.
Tagging AWS resources - AWS General Reference
- Tag policies let you specify tagging rules that define valid key names and the values that are valid for each key.
- Decide on a strategy for capitalizing tags, and consistently implement that strategy across all resource types.
- An effective tagging strategy uses standardized tags and applies them consistently and programmatically across AWS resources.
Ingress - Kubernetes
- Cluster network: a set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model.
- A Kubernetes Service is a way to expose an application running on a set of Pods as a network service; it identifies that set of Pods using label selectors.
- An Ingress can be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name based virtual hosting.
- Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
- You must have an ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
- Both the host and path must match the content of an incoming request before the load balancer directs traffic to the referenced Service.
- HTTP (and HTTPS) requests to the Ingress that matches the host and path of the rule are sent to the listed backend.
- A default backend is often configured in an Ingress controller to service any requests that do not match a path in the spec.
- A fanout configuration routes traffic from a single IP address to more than one Service, based on the HTTP URI being requested.
- Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address.
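
Name-based virtual hosting can be sketched as a single Ingress with one rule per host (hostnames and Service names are hypothetical):

```yaml
# Hypothetical Ingress: two hostnames routed from one IP address
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host
spec:
  rules:
    - host: foo.example.com       # requests with Host: foo.example.com...
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service1    # ...go to service1
                port:
                  number: 80
    - host: bar.example.com       # while Host: bar.example.com...
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service2    # ...goes to service2
                port:
                  number: 80
```

A fanout configuration follows the same shape but varies the path values under a single host instead of the host values.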
- If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being required.
- You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate.
- An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others.
- More advanced load balancing concepts (e.g. persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can instead get these features through the load balancer used for a Service.
- After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.
- An Ingress with no rules sends all traffic to a single default backend, and .spec.defaultBackend is the backend that should handle requests in that case.
- If defaultBackend is not set, the handling of requests that do not match any of the rules will be up to the ingress controller.
- A common usage for a Resource backend is to ingress data to an object storage backend with static assets.
- Prefix: matches based on a URL path prefix split by /. Matching is case sensitive and done on a path element by element basis.
- In some cases, multiple paths within an Ingress will match a request. In those cases precedence will be given first to the longest matching path.
- Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configuration including the name of the controller that should implement the class.
- The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress point (traffic to the Service and its Pods is in plaintext).
- TLS will not work on the default rule because the certificates would have to be issued for all the possible sub-domains.
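
TLS termination at the Ingress can be sketched like this (hostname, Service, and Secret names are hypothetical; the Secret must be of type kubernetes.io/tls with tls.crt and tls.key entries):

```yaml
# Hypothetical Ingress terminating TLS with a referenced Secret
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-example
spec:
  tls:
    - hosts:
        - https-example.example.com
      secretName: testsecret-tls     # Secret holding the TLS certificate and private key
  rules:
    - host: https-example.example.com  # must be covered by the certificate
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service1
                port:
                  number: 80           # traffic past the ingress point is plaintext
```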
Service | Kubernetes
- In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service).
- If you're able to use Kubernetes APIs for service discovery in your application, you can query the API server for Endpoints, which get updated whenever the set of Pods in a Service changes.
- Kubernetes assigns this Service an IP address (sometimes called the "cluster IP"), which is used by the Service proxies.
- A Service can map any incoming port to a targetPort. By default and for convenience, the targetPort is set to the same value as the port field.
- As many Services need to expose more than one port, Kubernetes supports multiple port definitions on a Service object. Each port definition can have the same protocol, or a different one.
- Because this Service has no selector, the corresponding Endpoints object is not created automatically. You can manually map the Service to the network address and port where it's running, by adding an Endpoints object manually.
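
The selector-less pattern can be sketched as a Service plus a hand-written Endpoints object (names and the documentation-range IP are hypothetical):

```yaml
# Hypothetical Service with no selector, mapped manually to an external backend
apiVersion: v1
kind: Service
metadata:
  name: my-external-db
spec:
  ports:                     # no selector field, so no Endpoints are auto-created
    - port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-external-db       # must match the Service name exactly
subsets:
  - addresses:
      - ip: 192.0.2.42       # the external address the Service should forward to
    ports:
      - port: 5432
```

This is a common way to give an external database a stable in-cluster DNS name while keeping the option to migrate it into the cluster later.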
- Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP.
- NodePort: exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
- ExternalName: maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
- You can also use Ingress to expose your Service. Ingress is not a Service type, but it acts as the entry point for your cluster.
- If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by the --service-node-port-range flag (default: 30000-32767).
- The default for --nodeport-addresses is an empty list. This means that kube-proxy should consider all available network interfaces for NodePort.
- You need to take care of possible port collisions yourself. You also have to use a valid port number, one that's inside the range configured for NodePort use.
Providers - Configuration Language | Terraform | HashiCorp Developer
- Terraform relies on plugins called providers to interact with cloud providers, SaaS providers, and other APIs.
- Terraform configurations must declare which providers they require so that Terraform can install and use them.
- Every resource type is implemented by a provider; without providers, Terraform can't manage any kind of infrastructure.
- The Terraform Registry is the main directory of publicly available Terraform providers, and hosts providers for most major infrastructure platforms.
- Dependency Lock File documents an additional HCL file that can be included with a configuration, which tells Terraform to always use a specific set of provider versions.
- Terraform CLI finds and installs providers when initializing a working directory. It can automatically download providers from a Terraform registry, or load them from a local mirror or cache.
- To save time and bandwidth, Terraform CLI supports an optional plugin cache. You can enable the cache using the plugin_cache_dir setting in the CLI configuration file.
- You can use Terraform CLI to create a dependency lock file and commit it to version control along with your configuration.
Moving away from Alpine - DEV Community
- Developers rely heavily on app logs via syslog (mounted /dev/log) and Alpine uses busybox syslog by default.
Considerations for large clusters | Kubernetes
- A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by the control plane.
- Criteria: no more than 110 pods per node; no more than 5,000 nodes; no more than 150,000 total pods; no more than 300,000 total containers.
- Run one or two control plane instances per failure zone, scaling those instances vertically first and then scaling horizontally after reaching the point of diminishing returns from (vertical) scaling.
- Kubernetes nodes do not automatically steer traffic towards control-plane endpoints that are in the same failure zone.
- Kubernetes resource limits help to minimize the impact of memory leaks and other ways that pods and containers can impact other components.
- Addons' default limits are typically based on data collected from experience running each addon on small or medium Kubernetes clusters.
- When running on large clusters, addons often consume more of some resources than their default limits.
- The VerticalPodAutoscaler can run in recommender mode to provide suggested figures for requests and limits.
- Some addons run as one copy per node, controlled by a DaemonSet: for example, a node-level log aggregator.
- VerticalPodAutoscaler is a custom resource that you can deploy into your cluster to help you manage resource requests and limits for pods.
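
Recommender-only mode can be sketched as a VPA object with updates switched off (the target Deployment name is hypothetical):

```yaml
# Hypothetical VerticalPodAutoscaler running in recommender mode
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: coredns-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: coredns            # the addon workload to observe
  updatePolicy:
    updateMode: "Off"        # only produce recommendations; never evict or mutate pods
```

With updateMode "Off", the recommendations appear in the object's status and can inform manual adjustments to an addon's requests and limits.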
- The cluster autoscaler integrates with a number of cloud providers to help you run the right number of nodes for the level of resource demand in your cluster.