Larvata: group items tagged "cloud"

張 旭

Virtual Private Cloud (VPC)  |  Virtual Private Cloud  |  Google Cloud - 0 views

  • A single Google Cloud VPC can span multiple regions without communicating across the public Internet.
  • Google Cloud VPCs let you increase the IP space of any subnets without any workload shutdown or downtime.
  • Get private access to Google services, such as storage, big data, analytics, or machine learning, without having to give your service a public IP address.
  • Enable dynamic Border Gateway Protocol (BGP) route updates between your VPC network and your non-Google network with our virtual router.
  • Configure a VPC Network to be shared across several projects in your organization.
  • Host globally distributed multi-tier applications by creating a VPC with subnets.
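To make the subnet-expansion note above concrete, here is a hedged Python sketch using the Compute Engine API's subnetworks.expandIpCidrRange method; the project, region, and subnet names are placeholders, and the new range must contain the subnet's current range.

```python
# Hedged sketch: grow a subnet's primary IP range in place (no workload downtime).
# Assumes `pip install google-api-python-client google-auth` and Application Default Credentials.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

operation = compute.subnetworks().expandIpCidrRange(
    project="my-project",                  # hypothetical project ID
    region="us-central1",
    subnetwork="web-subnet",               # hypothetical subnet name
    body={"ipCidrRange": "10.10.0.0/20"},  # must be a superset of the current range
).execute()
print(operation["status"])
```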
張 旭

Cluster Networking - Kubernetes - 0 views

  • Networking is a central part of Kubernetes, but it can be challenging to understand exactly how it is expected to work
  • Highly-coupled container-to-container communications
  • Pod-to-Pod communications
  • this is the primary focus of this document
    • 張 旭
      Cluster Networking is concerned with Pod-to-Pod connectivity.
  • Pod-to-Service communications
  • External-to-Service communications
  • Kubernetes is all about sharing machines between applications.
  • sharing machines requires ensuring that two applications do not try to use the same ports.
  • Dynamic port allocation brings a lot of complications to the system
  • Every Pod gets its own IP address
  • do not need to explicitly create links between Pods
  • almost never need to deal with mapping container ports to host ports.
  • Pods can be treated much like VMs or physical hosts from the perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration.
  • pods on a node can communicate with all pods on all nodes without NAT
  • agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node
  • pods in the host network of a node can communicate with all pods on all nodes without NAT
  • If your job previously ran in a VM, your VM had an IP and could talk to other VMs in your project. This is the same basic model.
  • containers within a Pod share their network namespaces - including their IP address
  • containers within a Pod can all reach each other’s ports on localhost
  • containers within a Pod must coordinate port usage
  • “IP-per-pod” model.
  • request ports on the Node itself which forward to your Pod (called host ports), but this is a very niche operation
  • The Pod itself is blind to the existence or non-existence of host ports.
  • AOS is an Intent-Based Networking system that creates and manages complex datacenter environments from a simple integrated platform.
  • Cisco Application Centric Infrastructure offers an integrated overlay and underlay SDN solution that supports containers, virtual machines, and bare metal servers.
  • AOS Reference Design currently supports Layer-3 connected hosts that eliminate legacy Layer-2 switching problems.
  • The AWS VPC CNI offers integrated AWS Virtual Private Cloud (VPC) networking for Kubernetes clusters.
  • users can apply existing AWS VPC networking and security best practices for building Kubernetes clusters.
  • Using this CNI plugin allows Kubernetes pods to have the same IP address inside the pod as they do on the VPC network.
  • The CNI allocates AWS Elastic Networking Interfaces (ENIs) to each Kubernetes node and uses the secondary IP range from each ENI for pods on the node.
  • Big Cloud Fabric is a cloud native networking architecture, designed to run Kubernetes in private cloud/on-premises environments.
  • Cilium is L7/HTTP aware and can enforce network policies on L3-L7 using an identity based security model that is decoupled from network addressing.
  • CNI-Genie is a CNI plugin that enables Kubernetes to simultaneously have access to different implementations of the Kubernetes network model in runtime.
  • CNI-Genie also supports assigning multiple IP addresses to a pod, each from a different CNI plugin.
  • cni-ipvlan-vpc-k8s contains a set of CNI and IPAM plugins to provide a simple, host-local, low latency, high throughput, and compliant networking stack for Kubernetes within Amazon Virtual Private Cloud (VPC) environments by making use of Amazon Elastic Network Interfaces (ENI) and binding AWS-managed IPs into Pods using the Linux kernel’s IPvlan driver in L2 mode.
  • to be straightforward to configure and deploy within a VPC
  • Contiv provides configurable networking
  • Contrail, based on Tungsten Fabric, is a truly open, multi-cloud network virtualization and policy management platform.
  • DANM is a networking solution for telco workloads running in a Kubernetes cluster.
  • Flannel is a very simple overlay network that satisfies the Kubernetes requirements.
  • Any traffic bound for that subnet will be routed directly to the VM by the GCE network fabric.
  • sysctl net.ipv4.ip_forward=1
  • Jaguar provides overlay network using vxlan and Jaguar CNIPlugin provides one IP address per pod.
  • Knitter is a network solution which supports multiple networking in Kubernetes.
  • Kube-OVN is an OVN-based kubernetes network fabric for enterprises.
  • Kube-router provides a Linux LVS/IPVS-based service proxy, a Linux kernel forwarding-based pod-to-pod networking solution with no overlays, and iptables/ipset-based network policy enforcer.
  • If you have a “dumb” L2 network, such as a simple switch in a “bare-metal” environment, you should be able to do something similar to the above GCE setup.
  • Multus is a Multi CNI plugin to support the Multi Networking feature in Kubernetes using CRD based network objects in Kubernetes.
  • NSX-T can provide network virtualization for a multi-cloud and multi-hypervisor environment and is focused on emerging application frameworks and architectures that have heterogeneous endpoints and technology stacks.
  • NSX-T Container Plug-in (NCP) provides integration between NSX-T and container orchestrators such as Kubernetes
  • Nuage uses the open source Open vSwitch for the data plane along with a feature rich SDN Controller built on open standards.
  • Open vSwitch is a somewhat more mature but also complicated way to build an overlay network
  • OVN is an opensource network virtualization solution developed by the Open vSwitch community.
  • Project Calico is an open source container networking provider and network policy engine.
  • Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet
  • Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking.
  • Calico can also be run in policy enforcement mode in conjunction with other networking solutions such as Flannel, aka canal, or native GCE, AWS or Azure networking.
  • Romana is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network
  • Weave Net runs as a CNI plug-in or stand-alone. In either version, it doesn’t require any configuration or extra code to run, and in both cases, the network provides one IP address per pod - as is standard for Kubernetes.
  • The network model is implemented by the container runtime on each node.
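As a concrete illustration of the IP-per-pod model summarized above, this small sketch (assuming a working kubeconfig and the official Python client, `pip install kubernetes`) lists each Pod's own IP and the node it runs on; any of these addresses is reachable from any other pod without NAT.

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    # pod.status.pod_ip is the Pod's own address, reachable from other pods
    # without NAT regardless of which node the Pod is scheduled on.
    print(pod.metadata.namespace, pod.metadata.name,
          pod.status.pod_ip, pod.spec.node_name)
```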
張 旭

vSphere Cloud Provider | vSphere Storage for Kubernetes - 0 views

  • Containers are stateless and ephemeral but applications are stateful and need persistent storage.
  • Cloud Provider
  • Kubernetes cloud providers are an interface for integrating various nodes (i.e. hosts), load balancers, and networking routes
  • VMware offers a Cloud Provider known as the vSphere Cloud Provider (VCP) for Kubernetes which allows Pods to use enterprise grade persistent storage.
  • A vSphere datastore is an abstraction which hides storage details (such as LUNs) and provides a uniform interface for storing persistent data.
  • The datastores can be of type vSAN, VMFS, NFS, or VVol.
  • VMFS (Virtual Machine File System) is a cluster file system that allows virtualization to scale beyond a single node for multiple VMware ESX servers.
  • NFS (Network File System) is a distributed file protocol for accessing storage over the network as if it were local storage.
  • vSphere Cloud Provider supports every storage primitive exposed by Kubernetes
  • Kubernetes volumes are defined in Pod specifications; they reference the underlying storage either directly or through PVCs when using Dynamic Provisioning (preferred). See the sketch below.
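A hedged sketch of the preferred PVC-plus-Dynamic-Provisioning flow; the StorageClass name "vsphere-thin" is hypothetical and would map to a vSphere Cloud Provider provisioner and datastore in a real cluster. Assumes `pip install kubernetes` and a working kubeconfig.

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "vsphere-thin",   # hypothetical class backed by a vSphere datastore
        "resources": {"requests": {"storage": "2Gi"}},
    },
}
v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```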
crazylion lee

Google Cloud is 50% cheaper than AWS - The HFT Guy - 0 views

  •  
    "Google Cloud is 50% cheaper than AWS"
張 旭

Overview of Virtual Private Cloud  |  VPC  |  Google Cloud - 0 views

  • Think of a VPC network the same way you'd think of a physical network, except that it is virtualized within GCP.
  • VPC networks are logically isolated from each other in GCP.
  • The network connects the resources to each other and to the Internet.
張 旭

What Is Amazon VPC? - Amazon Virtual Private Cloud - 0 views

  • to allow an instance in your VPC to initiate outbound connections to the internet but prevent unsolicited inbound connections from the internet, you can use a network address translation (NAT) device for IPv4 traffic
  • A NAT device has an Elastic IP address and is connected to the internet through an internet gateway.
  • By default, each instance that you launch into a nondefault subnet has a private IPv4 address, but no public IPv4 address, unless you specifically assign one at launch, or you modify the subnet's public IP address attribute.
  • Amazon VPC is the networking layer for Amazon EC2.
  • A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud.
  • Instances can connect to the internet over IPv6 through an internet gateway
  • IPv6 traffic is separate from IPv4 traffic; your route tables must include separate routes for IPv6 traffic.
  • You can optionally connect your VPC to your own corporate data center using an IPsec AWS managed VPN connection, making the AWS Cloud an extension of your data center.
  • A VPN connection consists of a virtual private gateway attached to your VPC and a customer gateway located in your data center.
  • A virtual private gateway is the VPN concentrator on the Amazon side of the VPN connection. A customer gateway is a physical device or software appliance on your side of the VPN connection.
  • AWS PrivateLink is a highly available, scalable technology that enables you to privately connect your VPC to supported AWS services, services hosted by other AWS accounts (VPC endpoint services)
  • Traffic between your VPC and the service does not leave the Amazon network
  • To use AWS PrivateLink, create an interface VPC endpoint for a service in your VPC. This creates an elastic network interface in your subnet with a private IP address that serves as an entry point for traffic destined to the service.
  • create your own AWS PrivateLink-powered service (endpoint service) and enable other AWS customers to access your service.
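To illustrate the PrivateLink notes above, here is a hedged boto3 sketch that creates an interface VPC endpoint; the VPC, subnet, and security group IDs are placeholders, and the SSM service name is just one example of a supported AWS service.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",               # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.ssm",   # example supported AWS service
    SubnetIds=["subnet-0123456789abcdef0"],      # an ENI with a private IP is created here
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)
print(resp["VpcEndpoint"]["VpcEndpointId"])
```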
張 旭

Kubernetes Components | Kubernetes - 0 views

  • A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications
  • Every cluster has at least one worker node.
  • The control plane manages the worker nodes and the Pods in the cluster.
  • The control plane's components make global decisions about the cluster
  • Control plane components can be run on any machine in the cluster.
  • for simplicity, setup scripts typically start all control plane components on the same machine, and do not run user containers on this machine
  • The API server is the front end for the Kubernetes control plane.
  • kube-apiserver is designed to scale horizontally—that is, it scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances.
  • If your Kubernetes cluster uses etcd as its backing store, make sure you have a backup plan for that data.
  • kube-scheduler watches for newly created Pods with no assigned node, and selects a node for them to run on.
  • Factors taken into account for scheduling decisions include: individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.
  • each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
  • Node controller
  • Job controller
  • Endpoints controller
  • Service Account & Token controllers
  • The cloud controller manager lets you link your cluster into your cloud provider's API, and separates out the components that interact with that cloud platform from components that only interact with your cluster.
  • If you are running Kubernetes on your own premises, or in a learning environment inside your own PC, the cluster does not have a cloud controller manager.
  • An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
  • The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.
  • The kubelet doesn't manage containers which were not created by Kubernetes.
  • kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
  • kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
  • kube-proxy uses the operating system packet filtering layer if there is one and it's available.
  • Kubernetes supports several container runtimes: Docker, containerd, CRI-O, and any implementation of the Kubernetes CRI (Container Runtime Interface).
  • Addons use Kubernetes resources (DaemonSet, Deployment, etc) to implement cluster features
  • namespaced resources for addons belong within the kube-system namespace.
  • All Kubernetes clusters should have cluster DNS.
  • Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services.
  • Containers started by Kubernetes automatically include this DNS server in their DNS searches.
  • Container Resource Monitoring records generic time-series metrics about containers in a central database, and provides a UI for browsing that data.
  • A cluster-level logging mechanism is responsible for saving container logs to a central log store with search/browsing interface.
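A small sketch that surfaces the per-node components mentioned above (kubelet, kube-proxy, container runtime) by reading each node's reported system info; assumes a working kubeconfig and `pip install kubernetes`.

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    info = node.status.node_info
    # Each worker node reports the versions of its node-level components.
    print(node.metadata.name, info.kubelet_version,
          info.kube_proxy_version, info.container_runtime_version)
```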
crazylion lee

Dply - 0 views

shared by crazylion lee on 27 Nov 16
  •  
    "Create a cloud server FREE for 2 hours, add more time as you need it."
crazylion lee

Databricks makes Spark easy through a cloud-based integrated workspace. - 0 views

  •  
    "We believe big data should be simple. Apache® Spark™ made a big step towards this goal. Databricks makes Spark easy through a cloud-based integrated workspace."
張 旭

MetalLB, bare metal load-balancer for Kubernetes - 0 views

  • it allows you to create Kubernetes services of type “LoadBalancer” in clusters that don’t run on a cloud provider
  • In a cloud-enabled Kubernetes cluster, you request a load-balancer, and your cloud platform assigns an IP address to you.
  • MetalLB cannot create IP addresses out of thin air, so you do have to give it pools of IP addresses that it can use.
  • MetalLB lets you define as many address pools as you want, and doesn’t care what “kind” of addresses you give it.
  • Once MetalLB has assigned an external IP address to a service, it needs to make the network beyond the cluster aware that the IP “lives” in the cluster.
  • In layer 2 mode, one machine in the cluster takes ownership of the service, and uses standard address discovery protocols (ARP for IPv4, NDP for IPv6) to make those IPs reachable on the local network
  • From the LAN’s point of view, the announcing machine simply has multiple IP addresses.
  • In BGP mode, all machines in the cluster establish BGP peering sessions with nearby routers that you control, and tell those routers how to forward traffic to the service IPs.
  • Using BGP allows for true load balancing across multiple nodes, and fine-grained traffic control thanks to BGP’s policy mechanisms.
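A minimal sketch of the request that MetalLB answers on bare metal: creating a Service of type LoadBalancer. With MetalLB installed, the external IP comes from one of the address pools you configured rather than from a cloud provider; the selector and ports here are hypothetical. Assumes `pip install kubernetes`.

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web"},
    "spec": {
        "type": "LoadBalancer",          # MetalLB fulfills this instead of a cloud platform
        "selector": {"app": "web"},
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}
v1.create_namespaced_service(namespace="default", body=service)
```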
張 旭

Best practices for building Kubernetes Operators and stateful apps | Google Cloud Blog - 0 views

  • Use the StatefulSet workload controller to maintain identity for each of the pods, and Persistent Volumes to persist data so it can survive a service restart.
  • a way to extend Kubernetes functionality with application specific logic using custom resources and custom controllers.
  • An Operator can automate various features of an application, but it should be specific to a single application
  • Kubebuilder is a comprehensive development kit for building and publishing Kubernetes APIs and Controllers using CRDs
  • Design declarative APIs for operators, not imperative APIs. This aligns well with Kubernetes APIs that are declarative in nature.
  • With declarative APIs, users only need to express their desired cluster state, while letting the operator perform all necessary steps to achieve it.
  • Scaling, backup, restore, and monitoring: an operator should be made up of multiple controllers that specifically handle each of those features.
  • the operator can have a main controller to spawn and manage application instances, a backup controller to handle backup operations, and a restore controller to handle restore operations.
  • each controller should correspond to a specific CRD so that the domain of each controller's responsibility is clear.
  • If you keep a log for every container, you will likely end up with an unmanageable amount of logs.
  • Integrate application-specific details into the log messages, such as adding a prefix with the application name.
  • you may have to use external logging tools such as Google Stackdriver, Elasticsearch, Fluentd, or Kibana to perform the aggregations.
  • adding labels to metrics to facilitate aggregation and analysis by monitoring systems.
  • a more viable option is for application pods to expose a metrics HTTP endpoint for monitoring tools to scrape.
  • A good way to achieve this is to use open-source application-specific exporters for exposing Prometheus-style metrics.
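To ground the "declarative API" recommendation above, here is a hedged sketch in which a user submits a custom resource describing only desired state; the group, kind, and fields ("example.com", "PostgresCluster", "backupSchedule") are hypothetical, and the operator's controllers would reconcile the cluster toward that state. Assumes `pip install kubernetes`.

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

desired = {
    "apiVersion": "example.com/v1alpha1",
    "kind": "PostgresCluster",                      # hypothetical CRD managed by the operator
    "metadata": {"name": "orders-db"},
    "spec": {"replicas": 3, "backupSchedule": "0 3 * * *"},  # desired state only, no imperative steps
}
api.create_namespaced_custom_object(
    group="example.com", version="v1alpha1",
    namespace="default", plural="postgresclusters", body=desired,
)
```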
張 旭

What is Kubernetes Ingress? | IBM - 0 views

  • There are multiple ways to expose an application to the outside of your Kubernetes cluster:
  • ClusterIP, NodePort, LoadBalancer, and Ingress.
  • A service is essentially a frontend for your application that automatically reroutes traffic to available pods in an evenly distributed way.
  • Services are an abstract way of exposing an application running on a set of pods as a network service.
  • Pods are ephemeral, which means that when they die, they are not resurrected. The Kubernetes cluster creates new pods in the same node or in a new node once a pod dies.
  • A service provides a single point of access from outside the Kubernetes cluster and allows you to dynamically access a group of replica pods. 
  • For internal application access within a Kubernetes cluster, ClusterIP is the preferred method
  • To expose a service to external network requests, NodePort, LoadBalancer, and Ingress are possible options.
  • Kubernetes Ingress is an API object that provides routing rules to manage external users' access to the services in a Kubernetes cluster, typically via HTTPS/HTTP.
  • content-based routing, support for multiple protocols, and authentication.
  • Ingress is made up of an Ingress API object and the Ingress Controller.
  • Kubernetes Ingress is an API object that describes the desired state for exposing services to the outside of the Kubernetes cluster.
  • An Ingress Controller reads and processes the Ingress Resource information and usually runs as pods within the Kubernetes cluster.  
  • If Kubernetes Ingress is the API object that provides routing rules to manage external access to services, Ingress Controller is the actual implementation of the Ingress API.
  • The Ingress Controller is usually a load balancer for routing external traffic to your Kubernetes cluster and is responsible for L4-L7 Network Services. 
  • Layer 7 (L7) refers to the application layer of the OSI stack, where external connections are load-balanced across pods based on the request.
  • if Kubernetes Ingress is a computer, then Ingress Controller is a programmer using the computer and taking action.
  • Ingress Rules are a set of rules for processing inbound HTTP traffic. An Ingress with no rules sends all traffic to a single default backend service. 
  • the Ingress Controller is an application that runs in a Kubernetes cluster and configures an HTTP load balancer according to Ingress Resources.
  • The load balancer can be a software load balancer running in the cluster or a hardware or cloud load balancer running externally.
  • ClusterIP is the preferred option for internal service access and uses an internal IP address to access the service
  • A NodePort exposes a service on a static port number on each node of the cluster.
  • a NodePort would be used to expose a single service (with no load-balancing requirements for multiple services).
  • Ingress enables you to consolidate the traffic-routing rules into a single resource and runs as part of a Kubernetes cluster.
  • An application is accessed from the Internet via Port 80 (HTTP) or Port 443 (HTTPS), and Ingress is an object that allows access to your Kubernetes services from outside the Kubernetes cluster. 
  • To implement Ingress, you need to configure an Ingress Controller in your cluster—it is responsible for processing Ingress Resource information and allowing traffic based on the Ingress Rules.
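A minimal sketch of an Ingress resource with one routing rule, which an Ingress Controller running in the cluster would pick up and turn into HTTP load-balancer configuration; the host name and backend service are hypothetical. Assumes `pip install kubernetes`.

```python
from kubernetes import client, config

config.load_kube_config()
networking = client.NetworkingV1Api()

ingress = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "Ingress",
    "metadata": {"name": "web"},
    "spec": {
        "rules": [{
            "host": "shop.example.com",           # hypothetical host
            "http": {"paths": [{
                "path": "/",
                "pathType": "Prefix",
                "backend": {"service": {"name": "web", "port": {"number": 80}}},
            }]},
        }],
    },
}
networking.create_namespaced_ingress(namespace="default", body=ingress)
```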
crazylion lee

Cloudcraft - Draw AWS diagrams - 0 views

  •  
    "Visualize your cloud architecture like a pro Create smart AWS diagrams "
crazylion lee

Sauce Labs: Selenium Testing, Mobile Testing, JS Unit Testing - 0 views

  •  
    "LESS TIME TESTING. MORE TIME INNOVATING. Accelerate your software development process using the world's largest automated testing cloud for web and mobile applications FREE TRIAL "
張 旭

Serverless Architectures - 0 views

  • Serverless was first used to describe applications that significantly or fully depend on 3rd party applications / services (‘in the cloud’) to manage server-side logic and state.
  • ‘rich client’ applications (think single page web apps, or mobile apps) that use the vast ecosystem of cloud accessible databases (like Parse, Firebase), authentication services (Auth0, AWS Cognito), etc.
  • ‘(Mobile) Backend as a Service’
  • Serverless can also mean applications where some amount of server-side logic is still written by the application developer but unlike traditional architectures is run in stateless compute containers that are event-triggered, ephemeral (may only last for one invocation), and fully managed by a 3rd party.
  • ‘Functions as a Service’
  • AWS Lambda is one of the most popular implementations of FaaS at present,
  • A good example is Auth0 - they started initially with BaaS ‘Authentication as a Service’, but with Auth0 Webtask they are entering the FaaS space.
  • a typical ecommerce app
  • a backend data-processing service
  • with zero administration.
  • FaaS offerings do not require coding to a specific framework or library.
  • Horizontal scaling is completely automatic, elastic, and managed by the provider
  • Functions in FaaS are triggered by event types defined by the provider.
  • a FaaS-supported message broker
  • from a deployment-unit point of view FaaS functions are stateless.
  • allowed the client direct access to a subset of our database
  • deleted the authentication logic in the original application and have replaced it with a third party BaaS service
  • The client is in fact well on its way to becoming a Single Page Application.
  • implement a FaaS function that responds to http requests via an API Gateway
  • port the search code from the Pet Store server to the Pet Store Search function
  • replaced a long lived consumer application with a FaaS function that runs within the event driven context
  • The contrast with long-running server applications is a key difference when comparing with other modern architectural trends like containers and PaaS
  • the only code that needs to change when moving to FaaS is the ‘main method / startup’ code, in that it is deleted, and likely the specific code that is the top-level message handler (the ‘message listener interface’ implementation), but this might only be a change in method signature
  • With FaaS you need to write the function ahead of time to assume parallelism
  • Most providers also allow functions to be triggered as a response to inbound http requests, typically in some kind of API gateway
  • you should assume that for any given invocation of a function none of the in-process or host state that you create will be available to any subsequent invocation.
  • FaaS functions are either naturally stateless, or they rely on external storage (for example a database, cache, or object store) to store state across requests or to hold further input needed to handle a request.
  • certain classes of long lived task are not suited to FaaS functions without re-architecture
  • if you were writing a low-latency trading application you probably wouldn’t want to use FaaS systems at this time
  • An API Gateway is an HTTP server where routes / endpoints are defined in configuration and each route is associated with a FaaS function.
  • API Gateway will allow mapping from http request parameters to inputs arguments for the FaaS function
  • API Gateways may also perform authentication, input validation, response code mapping, etc.
  • the Serverless Framework makes working with API Gateway + Lambda significantly easier than using the first principles provided by AWS.
  • Apex - a project to ‘Build, deploy, and manage AWS Lambda functions with ease.'
  • The term 'Serverless' is used to mean the union of a couple of other ideas: 'Backend as a Service' and 'Functions as a Service'.
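As a sketch of the FaaS points above, here is a stateless handler in the style of an AWS Lambda Python function fronted by an API Gateway route; the `search_pets` helper is a hypothetical stand-in for the Pet Store search code, and no in-process state is carried between invocations.

```python
import json

def handler(event, context):
    # No in-process or host state from previous invocations can be assumed here;
    # anything that must survive between requests belongs in external storage.
    query = event.get("queryStringParameters") or {}
    term = query.get("q", "")
    results = search_pets(term)   # hypothetical helper backed by a managed datastore
    return {"statusCode": 200, "body": json.dumps({"results": results})}

def search_pets(term):
    # Placeholder so the sketch runs standalone; a real function would query a database.
    return [p for p in ["dog", "cat", "parrot"] if term in p]
```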
張 旭

NAT Gateways - Amazon Virtual Private Cloud - 0 views

  • a network address translation (NAT) gateway to enable instances in a private subnet to connect to the internet or other AWS services
  • but prevent the internet from initiating a connection with those instances
  • NAT gateways are not supported for IPv6 traffic
  • must specify the public subnet in which the NAT gateway should reside
  • update the route table associated with one or more of your private subnets to point Internet-bound traffic to the NAT gateway.
  • NAT gateway is created in a specific Availability Zone and implemented with redundancy in that zone.
  • ensure that resources use the NAT gateway in the same Availability Zone
  • The main route table sends internet traffic from the instances in the private subnet to the NAT gateway. The NAT gateway sends the traffic to the internet gateway using the NAT gateway’s Elastic IP address as the source IP address
  • A NAT gateway supports 5 Gbps of bandwidth and automatically scales up to 45 Gbps
  • You can associate exactly one Elastic IP address with a NAT gateway
  • A NAT gateway supports the following protocols: TCP, UDP, and ICMP
  • cannot associate a security group with a NAT gateway.
  • create a NAT gateway in the same subnet as your NAT instance, and then replace the existing route in your route table that points to the NAT instance with a route that points to the NAT gateway
  • A NAT gateway cannot send traffic over VPC endpoints, VPN connections, AWS Direct Connect, or VPC peering connections.
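A hedged boto3 sketch of the steps summarized above: allocate an Elastic IP, create the NAT gateway in a public subnet, wait for it to become available, and point the private subnet's route table at it. All resource IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

eip = ec2.allocate_address(Domain="vpc")
natgw = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa1111bbb2222cc",        # the PUBLIC subnet
    AllocationId=eip["AllocationId"],
)
nat_id = natgw["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Send internet-bound traffic from the private subnet through the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0ddd3333eee4444ff",       # route table of the PRIVATE subnet
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```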
張 旭

VPCs and Subnets - Amazon Virtual Private Cloud - 0 views

  • you must specify a range of IPv4 addresses for the VPC in the form of a Classless Inter-Domain Routing (CIDR) block
  • A VPC spans all the Availability Zones in the region
  • add one or more subnets in each Availability Zone.
  • Each subnet must reside entirely within one Availability Zone and cannot span zones.
  • Availability Zones are distinct locations that are engineered to be isolated from failures in other Availability Zones
  • If a subnet's traffic is routed to an internet gateway, the subnet is known as a public subnet.
  • If a subnet doesn't have a route to the internet gateway, the subnet is known as a private subnet.
  • If a subnet doesn't have a route to the internet gateway, but has its traffic routed to a virtual private gateway for a VPN connection, the subnet is known as a VPN-only subnet.
  • By default, all VPCs and subnets must have IPv4 CIDR blocks—you can't change this behavior.
  • The allowed block size is between a /16 netmask (65,536 IP addresses) and /28 netmask (16 IP addresses).
  • The first four IP addresses and the last IP address in each subnet CIDR block are not available for you to use
  • The allowed block size is between a /28 netmask and /16 netmask
  • The CIDR block must not overlap with any existing CIDR block that's associated with the VPC.
  • Each subnet must be associated with a route table
  • Every subnet that you create is automatically associated with the main route table for the VPC
  • Security groups control inbound and outbound traffic for your instances
  • network ACLs control inbound and outbound traffic for your subnets
  • each subnet must be associated with a network ACL
  • You can create a flow log on your VPC or subnet to capture the traffic that flows to and from the network interfaces in your VPC or subnet.
  • A VPC peering connection enables you to route traffic between the VPCs using private IP addresses
  • you cannot create a VPC peering connection between VPCs that have overlapping CIDR blocks
  • recommend that you create a VPC with a CIDR range large enough for expected future growth, but not one that overlaps with current or expected future subnets anywhere in your corporate or home network, or that overlaps with current or future VPCs
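A small standard-library sketch of the sizing rules above: allowed blocks range from /16 to /28, and the first four addresses plus the last address of each subnet are reserved and not available for use.

```python
import ipaddress

for prefix in (16, 20, 24, 28):
    block = ipaddress.ip_network(f"10.0.0.0/{prefix}")
    usable = block.num_addresses - 5   # 4 reserved at the start + 1 at the end of the block
    print(f"/{prefix}: {block.num_addresses} addresses, {usable} usable")
```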
張 旭

Practical persistent cloud storage for Docker in AWS using RexRay - pt 4 - 0 views

  • Docker volumes can then be created and managed via the plugin, as requests are passed by Docker, and then orchestrated by the local server.
  • volumes are usually protected from deletion via a reference count.
  • Using the plugin means that the reference count is kept at the node level, so the plugin is only aware of the containers on a single node.
  • The S3FS plugin as of version 0.9.2 cannot delete an S3 bucket unless the bucket is empty, and has never been used (just created) as a Docker volume.
  • Starting with Docker 1.13 a new plugin system was introduced in which the plugin runs inside of a container.
  • Even though the plugin is a container image, you cannot start it using either docker image pull or docker container run; you need to use the docker plugin set of sub-commands.
chiehting

Top 5 Kubernetes Best Practices From Sandeep Dinesh (Google) - DZone Cloud - 0 views

  • Best Practices for Kubernetes
  • #1: Building Containers
  • Don’t Trust Arbitrary Base Images!
  • There’s a lot wrong with this: you could be using the wrong version of code that has exploits, has a bug in it, or worse it could have malware bundled in on purpose—you just don’t know.
  • Keep Base Images Small
  • The Node.js base image, for example, includes an extra 600MB of libraries you don't need.
  • Use the Builder Pattern
  • #2: Container Internals
  • Use a Non-Root User Inside the Container
  • Make the File System Read-Only
  • One Process per Container
  • Don’t Restart on Failure. Crash Cleanly Instead.
  • Log Everything to stdout and stderr
  • #3: Deployments
  • Use the “Record” Option for Easier Rollbacks
  • Use Plenty of Descriptive Labels
  • Use Sidecars for Proxies, Watchers, Etc.
  • Don’t Use Sidecars for Bootstrapping!
  • Don’t Use :latest or No Tag
  • Readiness and Liveness Probes are Your Friend
  • #4: Services
  • Don’t Use type: LoadBalancer
  • Type: NodePort Can Be “Good Enough”
  • Use Static IPs; They Are Free!
  • Map External Services to Internal Ones
  • #5: Application Architecture
  • Use Helm Charts
  • All Downstream Dependencies Are Unreliable
  • Use Weave Cloud
  • Make Sure Your Microservices Aren’t Too Micro
  • Use Namespaces to Split Up Your Cluster
  • Role-Based Access Control
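Several of the practices above (a pinned image tag, a non-root read-only container, descriptive labels, and readiness/liveness probes) can be combined in one Deployment; this is a hedged sketch with a hypothetical image and probe paths, assuming `pip install kubernetes` and a working kubeconfig.

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web", "labels": {"app": "web", "tier": "frontend"}},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web", "tier": "frontend"}},
            "spec": {
                "containers": [{
                    "name": "web",
                    "image": "registry.example.com/web:1.4.2",   # pinned tag, never :latest
                    "securityContext": {"runAsNonRoot": True,
                                        "readOnlyRootFilesystem": True},
                    "readinessProbe": {"httpGet": {"path": "/ready", "port": 8080}},
                    "livenessProbe": {"httpGet": {"path": "/healthz", "port": 8080},
                                      "initialDelaySeconds": 10},
                }],
            },
        },
    },
}
apps.create_namespaced_deployment(namespace="default", body=deployment)
```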