Cluster Networking - Kubernetes
-
Networking is a central part of Kubernetes, but it can be challenging to understand exactly how it is expected to work.
-
This is the primary focus of this document.
-
Pods can be treated much like VMs or physical hosts from the perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration.
-
If your job previously ran in a VM, your VM had an IP and could talk to other VMs in your project. This is the same basic model.
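For example, the familiar VM pattern of a stable name fronting a load-balanced group maps directly onto a Service. A minimal sketch, with hypothetical names and labels:

    apiVersion: v1
    kind: Service
    metadata:
      name: backend          # other pods resolve this name via cluster DNS
    spec:
      selector:
        app: backend         # traffic is load-balanced across pods with this label
      ports:
      - port: 80             # port clients connect to
        targetPort: 8080     # port the pod's container actually listens on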
-
You can request ports on the Node itself that forward to your Pod (called host ports), but this is a very niche operation.
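A minimal sketch of such a host port, using a hypothetical pod name and image:

    apiVersion: v1
    kind: Pod
    metadata:
      name: hostport-demo
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80   # port inside the pod
          hostPort: 8080      # port opened on the node and forwarded to the pod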
-
AOS is an Intent-Based Networking system that creates and manages complex datacenter environments from a simple integrated platform.
-
Cisco Application Centric Infrastructure offers an integrated overlay and underlay SDN solution that supports containers, virtual machines, and bare metal servers.
-
AOS Reference Design currently supports Layer-3 connected hosts that eliminate legacy Layer-2 switching problems.
-
The AWS VPC CNI offers integrated AWS Virtual Private Cloud (VPC) networking for Kubernetes clusters.
-
Users can apply existing AWS VPC networking and security best practices for building Kubernetes clusters.
-
Using this CNI plugin allows Kubernetes pods to have the same IP address inside the pod as they do on the VPC network.
-
The CNI allocates AWS Elastic Network Interfaces (ENIs) to each Kubernetes node and uses the secondary IP range from each ENI for pods on the node.
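As a worked example of the pod capacity this yields, assuming the usual AWS VPC CNI accounting (the -1 reserves each ENI's primary address; the +2 covers pods that use the host's own IP, such as kube-proxy), and taking m5.large as an illustration (3 ENIs with 10 IPv4 addresses each):

    max pods = ENIs * (IPv4 addresses per ENI - 1) + 2
    m5.large: 3 * (10 - 1) + 2 = 29 pods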
-
Big Cloud Fabric is a cloud native networking architecture, designed to run Kubernetes in private cloud/on-premises environments.
-
Cilium is L7/HTTP aware and can enforce network policies on L3-L7 using an identity-based security model that is decoupled from network addressing.
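A minimal sketch of an L7 rule as a CiliumNetworkPolicy; the labels, port, and path are hypothetical:

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: allow-get-public
    spec:
      endpointSelector:
        matchLabels:
          app: backend            # policy applies to pods with this identity
      ingress:
      - fromEndpoints:
        - matchLabels:
            app: frontend         # only this identity may connect
        toPorts:
        - ports:
          - port: "80"
            protocol: TCP
          rules:
            http:
            - method: "GET"       # L7 rule: only GET /public is allowed
              path: "/public"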
-
CNI-Genie is a CNI plugin that enables Kubernetes to use multiple implementations of the Kubernetes network model simultaneously at runtime.
-
cni-ipvlan-vpc-k8s contains a set of CNI and IPAM plugins to provide a simple, host-local, low-latency, high-throughput, and compliant networking stack for Kubernetes within Amazon Virtual Private Cloud (VPC) environments by making use of Amazon Elastic Network Interfaces (ENI) and binding AWS-managed IPs into Pods using the Linux kernel’s IPvlan driver in L2 mode.
-
Contrail, based on Tungsten Fabric, is a truly open, multi-cloud network virtualization and policy management platform.
-
Kube-router provides a Linux LVS/IPVS-based service proxy, a Linux kernel forwarding-based pod-to-pod networking solution with no overlays, and an iptables/ipset-based network policy enforcer.
-
If you have a “dumb” L2 network, such as a simple switch in a “bare-metal” environment, you should be able to do something similar to the above GCE setup: assign each node a pod subnet and add static routes so that each subnet is reachable via its node’s IP.
-
Multus is a multi-CNI plugin that supports the multi-networking feature in Kubernetes using CRD-based network objects.
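A sketch of one such CRD-based network object, a NetworkAttachmentDefinition, plus a pod that requests it as a secondary interface; the macvlan master device and subnet are assumptions:

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: macvlan-conf
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "type": "macvlan",
        "master": "eth0",
        "mode": "bridge",
        "ipam": {
          "type": "host-local",
          "subnet": "192.168.1.0/24"
        }
      }'
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: multi-net-pod
      annotations:
        k8s.v1.cni.cncf.io/networks: macvlan-conf   # attach the extra network
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]

The pod keeps its normal cluster-network interface; the annotation only adds the macvlan interface alongside it.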
-
NSX-T can provide network virtualization for a multi-cloud and multi-hypervisor environment and is focused on emerging application frameworks and architectures that have heterogeneous endpoints and technology stacks.
-
NSX-T Container Plug-in (NCP) provides integration between NSX-T and container orchestrators such as Kubernetes.
-
Nuage uses the open source Open vSwitch for the data plane along with a feature-rich SDN controller built on open standards.
-
Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet.
-
Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking.
-
Calico can also be run in policy enforcement mode in conjunction with other networking solutions, such as Flannel (a combination known as Canal) or native GCE, AWS, or Azure networking.
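In that policy-enforcement role, the policies themselves are standard Kubernetes NetworkPolicy objects; a minimal sketch with hypothetical labels and port:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: db-allow-app
    spec:
      podSelector:
        matchLabels:
          role: db              # policy applies to database pods
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector:
            matchLabels:
              role: app         # only app-tier pods may connect
        ports:
        - protocol: TCP
          port: 5432            # hypothetical database port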
-
Romana is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network.
-
Weave Net runs as a CNI plug-in or stand-alone. In either version, it doesn’t require any configuration or extra code to run, and in both cases, the network provides one IP address per pod, as is standard for Kubernetes.