Group items tagged: infrastructure

張 旭

Introducing Infrastructure as Code | Linode - 0 views

  • Infrastructure as Code (IaC) is a technique for deploying and managing infrastructure using software, configuration files, and automated tools.
  • With the older methods, technicians must configure a device manually, perhaps with the aid of an interactive tool. Information is added to configuration files by hand or through the use of ad-hoc scripts. Configuration wizards and similar utilities are helpful, but they still require hands-on management. A small group of experts owns the expertise, the process is typically poorly defined, and errors are common.
  • The development of the continuous integration and continuous delivery (CI/CD) pipeline made the idea of treating infrastructure as software much more attractive.
  • Infrastructure as Code takes advantage of the software development process, making use of quality assurance and test automation techniques.
  • Consistency/Standardization
  • With manual configuration, each node in the network becomes what is known as a snowflake, with its own unique settings. This leads to a system state that cannot easily be reproduced and is difficult to debug.
  • With standard configuration files and software-based configuration, there is greater consistency between all equipment of the same type. A key IaC concept is idempotence.
  • Idempotence makes it easy to troubleshoot, test, stabilize, and upgrade all the equipment.
  • Infrastructure as Code is central to the culture of DevOps, which is a mix of development and operations
  • edits are always made to the source configuration files, never on the target.
  • A declarative approach describes the final state of a device, but does not mandate how it should get there. The specific IaC tool makes all the procedural decisions. The end state is typically defined through a configuration file, a JSON specification, or a similar encoding (see the sketch after this list).
  • An imperative approach defines specific functions or procedures that must be used to configure the device. It focuses on what must happen, but does not necessarily describe the final state. Imperative techniques typically use scripts for the implementation.
  • With a push configuration, the central server pushes the configuration to the destination device.
  • If a device is mutable, its configuration can be changed while it is active
  • Immutable devices cannot be changed. They must be decommissioned or rebooted and then completely rebuilt.
  • an immutable approach ensures consistency and avoids drift. However, it usually takes more time to remove or rebuild a configuration than it does to change it.
  • System administrators should consider security issues as part of the development process.
  • Ansible is a very popular open source IaC application from Red Hat
  • Ansible is often used in conjunction with Kubernetes and Docker.
  • Linode offers a collection of several Ansible guides for a more comprehensive overview.
  • Pulumi permits the use of a variety of programming languages to deploy and manage infrastructure within a cloud environment.
  • Terraform allows users to provision data center infrastructure using either JSON or Terraform’s own declarative language.
  • Terraform manages resources through the use of providers, which are plugins that wrap the underlying service APIs.
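
To make the declarative idea concrete, a minimal Terraform sketch might look like the following (the resource arguments and values are illustrative assumptions, not taken from the article):

    # Declarative: describe the end state; the IaC tool works out how to reach it.
    resource "linode_instance" "web" {
      label  = "web-1"
      region = "us-east"
      type   = "g6-nanode-1"
      image  = "linode/ubuntu22.04"
    }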
張 旭

Using Infrastructure as Code to Automate VMware Deployments - 1 views

  • Infrastructure as code is at the heart of provisioning for cloud infrastructure, marking a significant shift away from monolithic point-and-click management tools.
  • infrastructure as code enables operators to take a programmatic approach to provisioning.
  • provides a single workflow to provision and maintain infrastructure and services from all of your vendors, which among other things makes it easier to switch providers
  • A Terraform Provider is responsible for understanding API interactions with, and exposing the resources of, a given Infrastructure, Platform, or SaaS offering to Terraform.
  • write a Terraform file that describes the Virtual Machine that you want, apply that file with Terraform, and create the VM as described, without ever needing to log into the vSphere dashboard.
  • HashiCorp Configuration Language (HCL)
  • the provider credentials are passed in at the top of the script to connect to the vSphere account.
  • modules— a way to encapsulate infrastructure resources into a reusable format.
  •  
    "revolutionizing"
crazylion lee

Keepalived for Linux - 0 views

  •  
    Keepalived is routing software written in C. The main goal of this project is to provide simple and robust facilities for load balancing and high availability to Linux systems and Linux-based infrastructures. The load-balancing framework relies on the well-known and widely used Linux Virtual Server (IPVS) kernel module, providing Layer-4 load balancing. Keepalived implements a set of checkers to dynamically and adaptively maintain and manage the load-balanced server pool according to their health. High availability, on the other hand, is achieved by the VRRP protocol, a fundamental brick for router failover. In addition, Keepalived implements a set of hooks into the VRRP finite state machine, providing low-level and high-speed protocol interactions. The Keepalived frameworks can be used independently or together to provide resilient infrastructures.
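
For the high-availability half, a minimal keepalived VRRP configuration might look like this sketch (interface name, router ID, and addresses are assumptions):

    vrrp_instance VI_1 {
        state MASTER                 # this node starts as the active router
        interface eth0
        virtual_router_id 51
        priority 100                 # highest priority wins the VRRP election
        advert_int 1                 # advertisement interval in seconds
        virtual_ipaddress {
            192.0.2.100              # floating IP that fails over to a backup
        }
    }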
張 旭

Secrets Management with Terraform - 0 views

  • Terraform is an Infrastructure as Code (IaC) tool that allows you to write declarative code to manage your infrastructure.
  • Keeping Secrets Out of .tf Files
  • .tf files contain the declarative code used to create, manage, and destroy infrastructure.
  • .tf files can accept values from input variables.
  • variable definitions can have default values assigned to them.
  • values are stored in separate files with the .tfvars extension.
  • Terraform looks through the working directory for a file named terraform.tfvars, or for files with the .auto.tfvars extension.
  • add the terraform.tfvars file to your .gitignore file and keep it out of version control.
  • include an example terraform.tfvars.example in your Git repository with all of the variable names recorded (but none of the values entered).
  • terraform apply -var-file=myvars.tfvars
  • Terraform allows you to keep input variable values in environment variables.
  • the prefix TF_VAR_
  • If Terraform does not find a default value for a defined variable, or a value from a .tfvars file, an environment variable, or a CLI flag, it will prompt you for a value before running an action (see the sketch after this list).
  • state file contains a JSON object that holds your managed infrastructure’s current state
  • state is a snapshot of the various attributes of your infrastructure at the time it was last modified
  • sensitive information used to generate your Terraform state can be stored as plain text in the terraform.tfstate file.
  • Avoid checking your terraform.tfstate file into your version control repository.
  • Some backends, like Consul, also allow for state locking. If one user is applying a state, another user will be unable to make any changes.
  • Terraform backends allow the user to securely store their state in a remote location, such as a key/value store like Consul, or an S3 compatible bucket storage like Minio.
  • at minimum the repository should be private.
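
A minimal sketch of the pattern described above (the variable name and values are illustrative; the sensitive flag assumes Terraform 0.14 or later):

    # variables.tf -- declare the variable, but keep its value out of .tf files
    variable "db_password" {
      description = "Database administrator password"
      type        = string
      sensitive   = true
    }

    # terraform.tfvars -- git-ignored; holds the real value
    # db_password = "s3cr3t"

    # Or supply the value through the environment instead:
    #   export TF_VAR_db_password='s3cr3t'
    #   terraform apply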
張 旭

Public Key Infrastructure (PKI) Overview - 0 views

  • A PKI allows you to bind public keys (contained in SSL certificates) with a person in a way that allows you to trust the certificate.
  • Public Key Infrastructures, like the one used to secure the Internet, most commonly use a Certificate Authority, often paired with a Registration Authority, to verify the identity of an entity and create unforgeable certificates.
  • An SSL Certificate Authority (also called a trusted third party or CA) is an organization that issues digital certificates to organizations or individuals after verifying their identity.
  • An SSL Certificate provides assurances that we are talking to the right server, but the assurances are limited.
  • In PKI, trust simply means that a certificate can be validated by a CA that is in our trust store.
  • An SSL Certificate in a PKI is a digital document containing a public key, entity information, and a digital signature from the certificate issuer.
  • it is much more practical and secure to establish a chain of trust to the Root certificate by signing an Intermediate certificate
  • A trust store is a collection of Root certificates that are trusted by default.
  • there are four primary trust stores that are relied upon for the majority of software: Apple, Microsoft, Chrome, and Mozilla.
  • a revocation system that allows a certificate to be listed as invalid if it was improperly issued or if the private key has been compromised.
  • Online Certificate Status Protocol (OCSP)
  • Certificate Revocation List (CRL)
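
Standard OpenSSL commands can make these ideas tangible (host names and file names are placeholders):

    # Show the certificate chain a server presents, up toward the root:
    openssl s_client -connect example.com:443 -showcerts </dev/null

    # Ask an OCSP responder whether a certificate has been revoked:
    openssl ocsp -issuer intermediate.pem -cert server.pem \
      -url http://ocsp.example-ca.com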
張 旭

State: Workspaces - Terraform by HashiCorp - 0 views

  • The persistent data stored in the backend belongs to a workspace.
  • Certain backends support multiple named workspaces, allowing multiple states to be associated with a single configuration.
  • Terraform starts with a single workspace named "default". This workspace is special both because it is the default and also because it cannot ever be deleted.
  • Within your Terraform configuration, you may include the name of the current workspace using the ${terraform.workspace} interpolation sequence.
  • changing behavior based on the workspace.
  • Named workspaces allow conveniently switching between multiple instances of a single configuration within its single backend.
  • A common use for multiple workspaces is to create a parallel, distinct copy of a set of infrastructure in order to test a set of changes before modifying the main production infrastructure.
  • Non-default workspaces are often related to feature branches in version control.
  • Workspaces alone are not a suitable tool for system decomposition, because each subsystem should have its own separate configuration and backend, and will thus have its own distinct set of workspaces.
  • In particular, organizations commonly want to create a strong separation between multiple deployments of the same infrastructure serving different development stages (e.g. staging vs. production) or different internal teams.
  • use one or more re-usable modules to represent the common elements, and then represent each instance as a separate configuration that instantiates those common elements in the context of a different backend.
  • If a Terraform state for one configuration is stored in a remote backend that is accessible to other configurations then terraform_remote_state can be used to directly consume its root module outputs from those other configurations.
  • For server addresses, use a provider-specific resource to create a DNS record with a predictable name and then either use that name directly or use the dns provider to retrieve the published addresses in other configurations.
  • Workspaces are technically equivalent to renaming your state file.
  • using a remote backend instead is recommended when there are multiple collaborators.
  •  
    "The persistent data stored in the backend belongs to a workspace."
張 旭

Introducing the MinIO Operator and Operator Console - 0 views

  • Object-storage-as-a-service is a game changer for IT.
  • provision multi-tenant object storage as a service.
  • have the skill set to create, deploy, tune, scale and manage modern, application oriented object storage using Kubernetes
  • MinIO is purpose-built to take full advantage of the Kubernetes architecture.
  • MinIO and Kubernetes work together to simplify infrastructure management, providing a way to manage object storage infrastructure within the Kubernetes toolset.  
  • The operator pattern extends Kubernetes's familiar declarative API model with custom resource definitions (CRDs) to perform common operations like resource orchestration, non-disruptive upgrades, cluster expansion and to maintain high-availability
  • The Operator uses kubectl, the command set the Kubernetes community was already familiar with, and adds the kubectl minio plugin. The MinIO Operator and the MinIO kubectl plugin facilitate the deployment and management of MinIO Object Storage on Kubernetes - which is how multi-tenant object storage as a service is delivered.
  • choosing a leader for a distributed application without an internal member election process
  • The Operator Console makes Kubernetes object storage easier still. In this graphical user interface, MinIO created something so simple that anyone in the organization can create, deploy and manage object storage as a service.
  • The primary unit of managing MinIO on Kubernetes is the tenant.
  • The MinIO Operator can allocate multiple tenants within the same Kubernetes cluster.
  • Each tenant, in turn, can have a different capacity (e.g. a small 500GB tenant vs. a 100TB tenant), resources (1000m CPU and 4Gi RAM vs. 4000m CPU and 16Gi RAM), and servers (4 pods vs. 16 pods), as well as separate configurations for Identity Providers, Encryption, and versions.
  • each tenant is a cluster of server pools (independent sets of nodes with their own compute, network, and storage resources), that, while sharing the same physical infrastructure, are fully isolated from each other in their own namespaces.
  • Each tenant runs their own MinIO cluster, fully isolated from other tenants
  • Each tenant scales independently by federating clusters across geographies.
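
An assumed end-to-end flow with the kubectl minio plugin might look like this (the flags and sizes are illustrative; verify against `kubectl minio --help` for your version):

    # Install the MinIO Operator into the cluster
    kubectl minio init

    # Provision an isolated tenant with its own capacity and pods
    kubectl minio tenant create my-tenant \
      --servers 4 --volumes 16 --capacity 16Ti \
      --namespace my-tenant-ns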
張 旭

Developing with Docker - 1 views

  • Before moving our production infrastructure over, however, we decided that we wanted to start developing with them locally first. We could shake out any issues with our applications before risking the production environment.
  • using Chef and Vagrant to provision local VMs
  • Engineers at IFTTT currently all use Apple computers
  • /bin/true
    • 張 旭
       
      If you use docker create you don't need to run this; however, docker-compose currently has no support for volume-only containers.
  • it will install gems onto the data volume from the bundler-cache container.
  • dev rm bundler-cache
    • 張 旭
       
      To remove it completely, the command should probably be: docker rm -v bundler-cache
  • if you accidentally delete bundler-cache, you then have to install all your gems over again.
  • Containerization and Docker are powerful tools in your infrastructure toolbox.
  • highly recommend starting off in your developer environment first
  • the onboarding time for new developers went from a couple of days or more to a matter of hours.
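
A sketch of the volume-only container pattern the notes above refer to (the image tag and volume path are assumptions):

    # Create a container whose only job is to own the gem-cache volume;
    # /bin/true exits immediately, so the container never stays running.
    docker run -v /box --name bundler-cache ruby:2.1 /bin/true
    # (Alternatively, `docker create -v /box --name bundler-cache ruby:2.1`
    # achieves the same without running anything.)

    # Remove it together with its data volume; a plain `docker rm` would
    # leave the volume behind.
    docker rm -v bundler-cache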
張 旭

Automated Docker-based Rails deployments - 0 views

  • how to automate the whole deployment process with a real-world example
  • use Unicorn as our web server
  •  
    "This is the third post in a series of 3 on how my company moved its infrastructure from PaaS to Docker based deployment."
張 旭

What is DevOps? | Atlassian - 0 views

  • DevOps is a set of practices that automates the processes between software development and IT teams, in order that they can build, test, and release software faster and more reliably.
  • increased trust, faster software releases, the ability to solve critical issues quickly, and better management of unplanned work.
  • bringing together the best of software development and IT operations.
  • DevOps is a culture, a movement, a philosophy.
  • a firm handshake between development and operations
  • DevOps isn’t magic, and transformations don’t happen overnight.
  • Infrastructure as code
  • Culture is the #1 success factor in DevOps.
  • Building a culture of shared responsibility, transparency and faster feedback is the foundation of every high performing DevOps team.
  •  'not our problem' mentality
  • DevOps is that change in mindset of looking at the development process holistically and breaking down the barrier between Dev and Ops.
  • Speed is everything.
  • A lack of automated test and review cycles blocks the release to production, and poor incident response times kill velocity and team confidence
  • Open communication helps Dev and Ops teams swarm on issues, fix incidents, and unblock the release pipeline faster.
  • Unplanned work is a reality that every team faces, a reality that most often impacts team productivity.
  • “cross-functional collaboration.”
  • All the tooling and automation in the world are useless if they aren’t accompanied by a genuine desire on the part of development and IT/Ops professionals to work together.
  • DevOps doesn’t solve tooling problems. It solves human problems.
  • Forming project- or product-oriented teams to replace function-based teams is a step in the right direction.
  • sharing a common goal and having a plan to reach it together
  • join sprint planning sessions, daily stand-ups, and sprint demos.
  • DevOps culture across every department
  • open channels of communication, and talk regularly
  • DevOps isn’t one team’s job. It’s everyone’s job.
  • automation eliminates repetitive manual work, yields repeatable processes, and creates reliable systems.
  • Build, test, deploy, and provisioning automation
  • continuous delivery: the practice of running each code change through a gauntlet of automated tests, often facilitated by cloud-based infrastructure, then packaging up successful builds and promoting them up toward production using automated deploys.
  • automated deploys alert IT/Ops to server “drift” between environments, which reduces or eliminates surprises when it’s time to release.
  • “configuration as code.”
  • when DevOps uses automated deploys to send thoroughly tested code to identically provisioned environments, “Works on my machine!” becomes irrelevant.
  • A DevOps mindset sees opportunities for continuous improvement everywhere.
  • regular retrospectives
  • A/B testing
  • failure is inevitable. So you might as well set up your team to absorb it, recover, and learn from it (some call this “being anti-fragile”).
  • Postmortems focus on where processes fell down and how to strengthen them – not on which team member f'ed up the code.
  • Our engineers are responsible for QA, writing, and running their own tests to get the software out to customers.
  • How long did it take to go from development to deployment? 
  • How long does it take to recover after a system failure?
  • service level agreements (SLAs)
  • Devops isn't any single person's job. It's everyone's job.
  • DevOps is big on the idea that the same people who build an application should be involved in shipping and running it.
  • developers and operators pair with each other in each phase of the application’s lifecycle.
張 旭

Intro to deployment strategies: blue-green, canary, and more - DEV Community - 0 views

  • using a service-oriented architecture and microservices approach, developers can design a code base to be modular.
  • Modern applications are often distributed and cloud-based
  • different release cycles for different components
  • the abstraction of the infrastructure layer, which is now considered code. Deployment of a new application may require the deployment of new infrastructure code as well.
  • "big bang" deployments update whole or large parts of an application in one fell swoop.
  • Big bang deployments require the business to conduct extensive development and testing before release, and are often associated with the "waterfall model" of large sequential releases.
  • Rollbacks are often costly, time-consuming, or even impossible.
  • In a rolling deployment, an application’s new version gradually replaces the old one.
  • new and old versions will coexist without affecting functionality or user experience.
  • Each container is modified to download the latest image from the app vendor’s site.
  • two identical production environments work in parallel.
  • Once the testing results are successful, application traffic is routed from blue to green.
  • In a blue-green deployment, both systems use the same persistence layer or database back end.
  • You can have blue use the primary database for write operations while green uses the secondary for read operations.
  • Blue-green deployments rely on traffic routing.
  • long TTL values can delay these changes.
  • The main challenge of canary deployment is to devise a way to route some users to the new application.
  • Using application logic to unlock new features to specific users and groups.
  • With CD, the CI-built code artifact is packaged and always ready to be deployed in one or more environments.
  • Use Build Automation tools to automate environment builds
  • Use configuration management tools
  • Enable automated rollbacks for deployments
  • An application performance monitoring (APM) tool can help your team monitor critical performance metrics including server response times after deployments.
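
As one concrete encoding of a rolling deployment, here is a Kubernetes sketch (the names, image, and counts are assumptions):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 10
      selector:
        matchLabels:
          app: my-app
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 2          # extra new pods allowed during the rollout
          maxUnavailable: 1    # old pods that may be down at any one time
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: registry.example.com/my-app:v2   # the new version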
張 旭

How to create reusable infrastructure with Terraform modules - 0 views

  • auto scaling schedule
  • The easiest way to create a versioned module is to put the code for the module in a separate Git repository and to set the source parameter to that repository’s URL.
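
A sketch of what consuming such a versioned module looks like (the repository URL, tag, and inputs are illustrative):

    module "webserver_cluster" {
      # Git source pinned to a tag; "//" separates the repo from a subdirectory
      source = "git::https://github.com/example/modules.git//webserver-cluster?ref=v0.0.1"

      cluster_name  = "webservers-stage"
      instance_type = "t2.micro"
    }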
張 旭

Warnings, Notes, & Tips - 0 views

  • Because AS3 manages topology records globally in /Common, records must only be managed through AS3, as it treats the records declaratively.
  • If a record is added outside of AS3, it will be removed if it is not included in the next AS3 declaration for topology records (AS3 completely overwrites non-AS3 topologies when a declaration is submitted).
  • using AS3 to delete a tenant (for example, sending DELETE to the /declare/<TENANT> endpoint) that contains GSLB topologies will completely remove ALL GSLB topologies from the BIG-IP.
  • When posting a large declaration (hundreds of application services in a single declaration), you may experience a 500 error stating that the save sys config operation failed.
  • Even if you have asynchronous mode set to false, after 45 seconds AS3 sets asynchronous mode to true (API swap), and returns an async response.
  • When creating a new tenant using AS3, it must not use the same name as a partition you separately create on the target BIG-IP system.
  • If you use the same name and then post the declaration, AS3 overwrites (or removes) the existing partition completely, including all configuration objects in that partition.
  • if you use AS3 to create a tenant (which creates a BIG-IP partition), manually adding configuration objects to the partition created by AS3 can have unexpected results
  • When you delete the Tenant using AS3, the system deletes both virtual servers.
  • if a Firewall_Address_List contains zero addresses, a dummy IPv6 address of ::1:5ee:bad:c0de is added in order to maintain a valid Firewall_Address_List. If an address is added to the list, the dummy address is removed.
  • use /mgmt/shared/appsvcs/declare?async=true if you have a particularly large declaration which will take a long time to process.
  • reviewing the Sizing BIG-IP Virtual Editions section (page 7) of Deploying BIG-IP VEs in a Hyper-Converged Infrastructure
  • To test whether your system has AS3 installed or not, use GET with the /mgmt/shared/appsvcs/info URI.
  • You may find it more convenient to put multi-line texts such as iRules into AS3 declarations by first encoding them in Base64.
  • no matter your BIG-IP user account name, audit logs show all messages from admin and not the specific user name.
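
The two endpoints mentioned above, exercised with curl (credentials and the host name are placeholders):

    # Does this BIG-IP have AS3 installed?
    curl -k -u admin:password https://big-ip.example.com/mgmt/shared/appsvcs/info

    # Post a large declaration asynchronously to avoid the 45-second API swap:
    curl -k -u admin:password -X POST \
      -H "Content-Type: application/json" -d @declaration.json \
      "https://big-ip.example.com/mgmt/shared/appsvcs/declare?async=true"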
張 旭

Understanding Nginx HTTP Proxying, Load Balancing, Buffering, and Caching | DigitalOcean - 0 views

  • allow Nginx to pass requests off to backend http servers for further processing
  • Nginx is often set up as a reverse proxy solution to help scale out infrastructure or to pass requests to other servers that are not designed to handle large client loads
  • explore buffering and caching to improve the performance of proxying operations for clients
  • Nginx is built to handle many concurrent connections at the same time.
  • provides you with flexibility in easily adding backend servers or taking them down as needed for maintenance
  • Proxying in Nginx is accomplished by manipulating a request aimed at the Nginx server and passing it to other servers for the actual processing
  • The servers that Nginx proxies requests to are known as upstream servers.
  • Nginx can proxy requests to servers that communicate using the http(s), FastCGI, SCGI, uwsgi, or memcached protocols through separate sets of directives for each type of proxy
  • When a request matches a location with a proxy_pass directive inside, the request is forwarded to the URL given by the directive
  • For example, when a request for /match/here/please is handled by this block, the request URI will be sent to the example.com server as http://example.com/match/here/please
  • The request coming from Nginx on behalf of a client will look different than a request coming directly from a client
  • Nginx gets rid of any empty headers
  • Nginx, by default, will consider any header that contains underscores as invalid. It will remove these from the proxied request
    • 張 旭
       
      Note that if any header field names contain underscores, Nginx must be configured to let them through (see underscores_in_headers in the sketch after this list).
  • The "Host" header is re-written to the value defined by the $proxy_host variable.
  • The upstream should not expect this connection to be persistent
  • Headers with empty values are completely removed from the passed request.
  • if your backend application will be processing non-standard headers, you must make sure that they do not have underscores
  • by default, this will be set to the value of $proxy_host, a variable that will contain the domain name or IP address and port taken directly from the proxy_pass definition
  • This is selected by default as it is the only address Nginx can be sure the upstream server responds to
  • (as it is pulled directly from the connection info)
  • $http_host: Sets the "Host" header to the "Host" header from the client request.
  • The headers sent by the client are always available in Nginx as variables. The variables will start with an $http_ prefix, followed by the header name in lowercase, with any dashes replaced by underscores.
  • $host: set, in order of preference, to the host name from the request line itself, then the "Host" header from the client request, then the server name matching the request.
  • set the "Host" header to the $host variable. It is the most flexible and will usually provide the proxied servers with a "Host" header filled in as accurately as possible
  • sets the "Host" header to the $host variable, which should contain information about the original host being requested
  • This variable takes the value of the original X-Forwarded-For header retrieved from the client and adds the Nginx server's IP address to the end.
  • The upstream directive must be set in the http context of your Nginx configuration.
  • http context
  • Once defined, this name will be available for use within proxy passes as if it were a regular domain name
  • By default, this is just a simple round-robin selection process (each request will be routed to a different host in turn)
  • Specifies that new connections should always be given to the backend that has the least number of active connections.
  • distributes requests to different servers based on the client's IP address.
  • mainly used with memcached proxying
  • As for the hash method, you must provide the key to hash against
  • Server Weight
  • Nginx's buffering and caching capabilities
  • Without buffers, data is sent from the proxied server and immediately begins to be transmitted to the client.
  • With buffers, the Nginx proxy will temporarily store the backend's response and then feed this data to the client
  • Nginx defaults to a buffering design
  • can be set in the http, server, or location contexts.
  • the sizing directives are configured per request, so increasing them beyond your need can affect your performance
  • When buffering is "off" only the buffer defined by the proxy_buffer_size directive will be used
  • A high availability (HA) setup is an infrastructure without a single point of failure, and your load balancers are a part of this configuration.
  • multiple load balancers (one active and one or more passive) behind a static IP address that can be remapped from one server to another.
  • Nginx also provides a way to cache content from backend servers
  • The proxy_cache_path directive must be set in the http context.
  • proxy_cache backcache;
    • 張 旭
       
      The backcache here is the cache zone defined earlier with proxy_cache_path; it appears each location can have its own cache.
  • The proxy_cache_bypass directive is set to the $http_cache_control variable. This will contain an indicator as to whether the client is explicitly requesting a fresh, non-cached version of the resource
  • any user-related data should not be cached
  • For private content, you should set the Cache-Control header to "no-cache", "no-store", or "private" depending on the nature of the data
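
Pulling the pieces above together, a hedged nginx sketch (host names, paths, and sizes are assumptions):

    http {
        # proxy_cache_path must be set in the http context
        proxy_cache_path /var/lib/nginx/cache levels=1:2
                         keys_zone=backcache:8m max_size=50m;

        # upstream pools must also be defined in the http context
        upstream backend_hosts {
            least_conn;                    # route to the least-busy backend
            server host1.example.com;
            server host2.example.com;
        }

        server {
            listen 80;
            underscores_in_headers on;     # allow header names with underscores

            location /match/here {
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

                proxy_cache        backcache;
                proxy_cache_bypass $http_cache_control;

                proxy_pass http://backend_hosts;
            }
        }
    }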
張 旭

Introducing CNAME Flattening: RFC-Compliant CNAMEs at a Domain's Root - 0 views

  • you can now safely use a CNAME record as your root record in CloudFlare DNS, as opposed to an A record that points to a fixed IP address, without triggering a number of edge-case error conditions for violating the DNS spec.
  • CNAME Flattening allowed us to use a root domain while still maintaining DNS fault-tolerance across multiple IP addresses.
  • Traditionally, the root record of a domain needed to point to an IP address (known as an A -- for "address" -- Record).
  • WordPlumblr allows its users to use custom domains that point to the WordPlumblr infrastructure
  • A CNAME is an alias. It allows one domain to point to another domain which, eventually if you follow the CNAME chain, will resolve to an A record and IP address.
  • For example, WordPlumblr might have assigned the CNAME 6equj5.wordplumblr.com for Foo.com. Foo.com and the other customers may have all initially resolved, at the end of the CNAME chain, to the same IP address.
  • you usually don't want to address memory directly but, instead, you set up a pointer to a block of memory where you're going to store something. If the operating system needs to move the memory around then it just updates the pointer to point to wherever the chunk of memory has been moved to.
  • CNAMEs work great for subdomains like www.foo.com or blog.foo.com. Unfortunately, they don't work for a naked domain like foo.com itself.
  • the DNS spec enshrined that the root record -- the naked domain without any subdomain -- could not be a CNAME.
  • Technically, the root could be a CNAME but the RFCs state that once a record has a CNAME it can't have any other entries associated with it
  • a way to support a CNAME at the root, but still follow the RFC and return an IP address for any query for the root record.
  • extended our authoritative DNS infrastructure to, in certain cases, act as a kind of DNS resolver.
  • if there's a CNAME at the root, rather than returning that record directly we recurse through the CNAME chain ourselves until we find an A Record.
  • allows the flexibility of having CNAMEs at the root without breaking the DNS specification.
  • We cache the CNAME responses -- respecting the DNS TTLs, just like a recursor should -- which means often we have the answer without having to traverse the chain.
  • CNAME flattening solved email resolution errors for us which was very key.
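
In zone-file terms, flattening behaves roughly like this (the names and address are illustrative):

    ; What the zone owner configures at the root:
    foo.com.   IN CNAME  6equj5.wordplumblr.com.
    ; What a flattened answer to "A foo.com." actually returns to resolvers:
    foo.com.   IN A      203.0.113.10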
張 旭

Syntax - Configuration Language | Terraform | HashiCorp Developer - 0 views

  • the native syntax of the Terraform language, which is a rich language designed to be relatively easy for humans to read and write.
  • Terraform's configuration language is based on a more general language called HCL, and HCL's documentation usually uses the word "attribute" instead of "argument."
  • A particular block type may have any number of required labels, or it may require none
  • After the block type keyword and any labels, the block body is delimited by the { and } characters
  • Identifiers can contain letters, digits, underscores (_), and hyphens (-). The first character of an identifier must not be a digit, to avoid ambiguity with literal numbers.
  • The # single-line comment style is the default comment style and should be used in most cases.
  • The idiomatic style is to use the Unix convention
  • Indent two spaces for each nesting level.
  • align their equals signs
  • Use empty lines to separate logical groups of arguments within a block.
  • Use one blank line to separate the arguments from the blocks.
  • "meta-arguments" (as defined by the Terraform language semantics)
  • Avoid separating multiple blocks of the same type with other blocks of a different type, unless the block types are defined by semantics to form a family.
  • Resource names must start with a letter or underscore, and may contain only letters, digits, underscores, and dashes.
  • Each resource is associated with a single resource type, which determines the kind of infrastructure object it manages and what arguments and other attributes the resource supports.
  • Each resource type is implemented by a provider, which is a plugin for Terraform that offers a collection of resource types.
  • By convention, resource type names start with their provider's preferred local name.
  • Most publicly available providers are distributed on the Terraform Registry, which also hosts their documentation.
  • The Terraform language defines several meta-arguments, which can be used with any resource type to change the behavior of resources.
  • use precondition and postcondition blocks to specify assumptions and guarantees about how the resource operates.
  • Some resource types provide a special timeouts nested block argument that allows you to customize how long certain operations are allowed to take before being considered to have failed.
  • Timeouts are handled entirely by the resource type implementation in the provider
  • Most resource types do not support the timeouts block at all.
  • A resource block declares that you want a particular infrastructure object to exist with the given settings.
  • Destroy resources that exist in the state but no longer exist in the configuration.
  • Destroy and re-create resources whose arguments have changed but which cannot be updated in-place due to remote API limitations.
  • Expressions within a Terraform module can access information about resources in the same module, and you can use that information to help configure other resources. Use the <RESOURCE TYPE>.<NAME>.<ATTRIBUTE> syntax to reference a resource attribute in an expression.
  • resources often provide read-only attributes with information obtained from the remote API; this often includes things that can't be known until the resource is created, like the resource's unique random ID.
  • data sources, which are a special type of resource used only for looking up information.
  • some dependencies cannot be recognized implicitly in configuration.
  • local-only resource types exist for generating private keys, issuing self-signed TLS certificates, and even generating random ids.
  • The behavior of local-only resources is the same as all other resources, but their result data exists only within the Terraform state.
  • The count meta-argument accepts a whole number, and creates that many instances of the resource or module.
  • count.index — The distinct index number (starting with 0) corresponding to this instance.
  • the count value must be known before Terraform performs any remote resource actions. This means count can't refer to any resource attributes that aren't known until after a configuration is applied
  • Within nested provisioner or connection blocks, the special self object refers to the current resource instance, not the resource block as a whole.
  • This was fragile, because the resource instances were still identified by their index instead of the string values in the list.
  •  
    "the native syntax of the Terraform language, which is a rich language designed to be relatively easy for humans to read and write. "
張 旭

Providers - Configuration Language | Terraform | HashiCorp Developer - 0 views

  • Terraform relies on plugins called providers to interact with cloud providers, SaaS providers, and other APIs.
  • Terraform configurations must declare which providers they require so that Terraform can install and use them.
  • Each provider adds a set of resource types and/or data sources that Terraform can manage.
  • Every resource type is implemented by a provider; without providers, Terraform can't manage any kind of infrastructure.
  • The Terraform Registry is the main directory of publicly available Terraform providers, and hosts providers for most major infrastructure platforms.
  • Dependency Lock File documents an additional HCL file that can be included with a configuration, which tells Terraform to always use a specific set of provider versions.
  • Terraform CLI finds and installs providers when initializing a working directory. It can automatically download providers from a Terraform registry, or load them from a local mirror or cache.
  • To save time and bandwidth, Terraform CLI supports an optional plugin cache. You can enable the cache using the plugin_cache_dir setting in the CLI configuration file.
  • you can use Terraform CLI to create a dependency lock file and commit it to version control along with your configuration.
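
A minimal declaration of the pattern described above (the provider and version constraint are illustrative):

    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"   # resolved via the Terraform Registry
          version = "~> 4.0"          # constrain the range; the dependency
        }                             # lock file records the exact version
      }
    }

    provider "aws" {
      region = "us-east-1"
    }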
張 旭

Cluster Networking - Kubernetes - 0 views

  • Networking is a central part of Kubernetes, but it can be challenging to understand exactly how it is expected to work
  • Highly-coupled container-to-container communications
  • Pod-to-Pod communications
  • this is the primary focus of this document
    • 張 旭
       
      What cluster networking is chiefly concerned with is Pod-to-Pod connectivity.
  • Pod-to-Service communications
  • External-to-Service communications
  • Kubernetes is all about sharing machines between applications.
  • sharing machines requires ensuring that two applications do not try to use the same ports.
  • Dynamic port allocation brings a lot of complications to the system
  • Every Pod gets its own IP address
  • do not need to explicitly create links between Pods
  • almost never need to deal with mapping container ports to host ports.
  • Pods can be treated much like VMs or physical hosts from the perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration.
  • pods on a node can communicate with all pods on all nodes without NAT
  • agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node
  • pods in the host network of a node can communicate with all pods on all nodes without NAT
  • If your job previously ran in a VM, your VM had an IP and could talk to other VMs in your project. This is the same basic model.
  • containers within a Pod share their network namespaces - including their IP address
  • containers within a Pod can all reach each other’s ports on localhost
  • containers within a Pod must coordinate port usage
  • “IP-per-pod” model.
  • request ports on the Node itself which forward to your Pod (called host ports), but this is a very niche operation
  • The Pod itself is blind to the existence or non-existence of host ports.
  • AOS is an Intent-Based Networking system that creates and manages complex datacenter environments from a simple integrated platform.
  • Cisco Application Centric Infrastructure offers an integrated overlay and underlay SDN solution that supports containers, virtual machines, and bare metal servers.
  • AOS Reference Design currently supports Layer-3 connected hosts that eliminate legacy Layer-2 switching problems.
  • The AWS VPC CNI offers integrated AWS Virtual Private Cloud (VPC) networking for Kubernetes clusters.
  • users can apply existing AWS VPC networking and security best practices for building Kubernetes clusters.
  • Using this CNI plugin allows Kubernetes pods to have the same IP address inside the pod as they do on the VPC network.
  • The CNI allocates AWS Elastic Networking Interfaces (ENIs) to each Kubernetes node and using the secondary IP range from each ENI for pods on the node.
  • Big Cloud Fabric is a cloud native networking architecture, designed to run Kubernetes in private cloud/on-premises environments.
  • Cilium is L7/HTTP aware and can enforce network policies on L3-L7 using an identity based security model that is decoupled from network addressing.
  • CNI-Genie is a CNI plugin that enables Kubernetes to simultaneously have access to different implementations of the Kubernetes network model in runtime.
  • CNI-Genie also supports assigning multiple IP addresses to a pod, each from a different CNI plugin.
  • cni-ipvlan-vpc-k8s contains a set of CNI and IPAM plugins to provide a simple, host-local, low latency, high throughput, and compliant networking stack for Kubernetes within Amazon Virtual Private Cloud (VPC) environments by making use of Amazon Elastic Network Interfaces (ENI) and binding AWS-managed IPs into Pods using the Linux kernel’s IPvlan driver in L2 mode.
  • to be straightforward to configure and deploy within a VPC
  • Contiv provides configurable networking
  • Contrail, based on Tungsten Fabric, is a truly open, multi-cloud network virtualization and policy management platform.
  • DANM is a networking solution for telco workloads running in a Kubernetes cluster.
  • Flannel is a very simple overlay network that satisfies the Kubernetes requirements.
  • Any traffic bound for that subnet will be routed directly to the VM by the GCE network fabric.
  • sysctl net.ipv4.ip_forward=1
  • Jaguar provides overlay network using vxlan and Jaguar CNIPlugin provides one IP address per pod.
  • Knitter is a network solution which supports multiple networking in Kubernetes.
  • Kube-OVN is an OVN-based kubernetes network fabric for enterprises.
  • Kube-router provides a Linux LVS/IPVS-based service proxy, a Linux kernel forwarding-based pod-to-pod networking solution with no overlays, and iptables/ipset-based network policy enforcer.
  • If you have a “dumb” L2 network, such as a simple switch in a “bare-metal” environment, you should be able to do something similar to the above GCE setup.
  • Multus is a Multi CNI plugin to support the Multi Networking feature in Kubernetes using CRD based network objects in Kubernetes.
  • NSX-T can provide network virtualization for a multi-cloud and multi-hypervisor environment and is focused on emerging application frameworks and architectures that have heterogeneous endpoints and technology stacks.
  • NSX-T Container Plug-in (NCP) provides integration between NSX-T and container orchestrators such as Kubernetes
  • Nuage uses the open source Open vSwitch for the data plane along with a feature rich SDN Controller built on open standards.
  • OpenVSwitch is a somewhat more mature but also complicated way to build an overlay network
  • OVN is an opensource network virtualization solution developed by the Open vSwitch community.
  • Project Calico is an open source container networking provider and network policy engine.
  • Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet
  • Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking.
  • Calico can also be run in policy enforcement mode in conjunction with other networking solutions such as Flannel, aka canal, or native GCE, AWS or Azure networking.
  • Romana is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network
  • Weave Net runs as a CNI plug-in or stand-alone. In either version, it doesn’t require any configuration or extra code to run, and in both cases, the network provides one IP address per pod - as is standard for Kubernetes.
  • The network model is implemented by the container runtime on each node.
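
To illustrate the IP-per-pod model, here is a minimal sketch of a two-container Pod whose containers share one network namespace (names and images are assumptions):

    apiVersion: v1
    kind: Pod
    metadata:
      name: shared-netns-demo
    spec:
      containers:
        - name: web
          image: nginx:1.25          # listens on port 80
        - name: sidecar
          image: busybox:1.36
          # the sidecar reaches nginx on localhost because both containers
          # share the Pod's network namespace (and its single IP address)
          command: ["sh", "-c",
            "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 10; done"]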
張 旭

Manage swarm security with public key infrastructure (PKI) | Docker Documentation - 0 views

  • The nodes in a swarm use mutual Transport Layer Security (TLS) to authenticate, authorize, and encrypt the communications with other nodes in the swarm.
  • By default, the manager node generates a new root Certificate Authority (CA) along with a key pair, which are used to secure communications with other nodes that join the swarm.
  • The manager node also generates two tokens to use when you join additional nodes to the swarm: one worker token and one manager token.
  • Each time a new node joins the swarm, the manager issues a certificate to the node
  • By default, each node in the swarm renews its certificate every three months.
  • If a cluster CA key or a manager node is compromised, you can rotate the swarm root CA so that none of the nodes trust certificates signed by the old root CA anymore.
  •  
    "The nodes in a swarm use mutual Transport Layer Security (TLS) to authenticate, authorize, and encrypt the communications with other nodes in the swarm."