musl is an implementation of the C standard library. It is more lightweight, faster, and simpler than glibc, the C library used by other Linux distros such as Ubuntu.
Why I Will Never Use Alpine Linux Ever Again | Martin Heinz | Personal Website & Blog - 2 views
- Some of it stems from how musl (and therefore also Alpine) handles DNS (it's always DNS); more specifically, musl (by design) doesn't support DNS-over-TCP.
- ...2 more annotations...
- This DNS issue does not manifest in a Docker container. It can only happen in Kubernetes, so if you test locally everything will work fine, and you will only find out about the unfixable issue when you deploy the application to a cluster.
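The annotations above come down to DNS truncation: when a UDP answer is too large, the response carries the TC (truncated) flag and a resolver is expected to retry the query over TCP, which musl never does. Below is a minimal sketch of that fallback, assuming the third-party dnspython package; the resolver address and the service name are placeholders, not values from the article.

```python
import dns.flags
import dns.message
import dns.query

RESOLVER = "10.96.0.10"  # placeholder: e.g. a cluster DNS Service IP
NAME = "some-service.default.svc.cluster.local"  # placeholder name

query = dns.message.make_query(NAME, "A")
udp_reply = dns.query.udp(query, RESOLVER, timeout=2.0)

if udp_reply.flags & dns.flags.TC:
    # The UDP answer was truncated. A glibc resolver retries the same query
    # over TCP; musl by design does not, so on Alpine the lookup effectively
    # fails once the answer no longer fits in a single UDP packet.
    tcp_reply = dns.query.tcp(query, RESOLVER, timeout=2.0)
    print("needed TCP fallback:", tcp_reply.answer)
else:
    print("UDP answer was complete:", udp_reply.answer)
```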
What is Data Definition Language (DDL) and how is it used? - 1 views
- Data Definition Language (DDL) is used to create and modify the structure of objects in a database using predefined commands and a specific syntax.
- DDL includes Structured Query Language (SQL) statements to create and drop databases, aliases, locations, indexes, tables and sequences.
- Since DDL includes SQL statements to define changes in the database schema, it is considered a subset of SQL.
- ...6 more annotations...
- Data Manipulation Language (DML) commands are used to modify data in a database. DML statements control access to the database data.
- DDL commands are used to create, delete or alter the structure of objects in a database, but not its data.
- DDL deals with descriptions of the database schema and is useful for creating new tables, indexes, sequences, stogroups, etc., and for defining the attributes of these objects, such as data type, field length and alternate table names (aliases).
- Data Query Language (DQL) is used to get data within the schema objects of a database and also to query it and impose order upon it.
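A minimal sketch contrasting the three categories with Python's built-in sqlite3 module; the table, index, and column names are purely illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: define and change the structure of schema objects (no row data involved).
cur.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")
cur.execute("CREATE INDEX idx_employees_dept ON employees (dept)")
cur.execute("ALTER TABLE employees ADD COLUMN hired_on TEXT")

# DML: modify the data held in those objects.
cur.execute("INSERT INTO employees (name, dept, hired_on) VALUES (?, ?, ?)",
            ("Ada", "engineering", "2024-01-15"))
cur.execute("UPDATE employees SET dept = ? WHERE name = ?", ("research", "Ada"))

# DQL: query the data and impose an order on it.
cur.execute("SELECT name, dept FROM employees ORDER BY name")
print(cur.fetchall())

# DDL again: removing structure also removes the data it contained.
cur.execute("DROP INDEX idx_employees_dept")
conn.close()
```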
Supported DDL operations for a CDC Replication Engine for Db2 Database - IBM Documentation - 1 views
- SQL statements are divided into two categories: Data Definition Language (DDL) and Data Manipulation Language (DML).
Securing NGINX-ingress - cert-manager Documentation - 1 views
- If using a ClusterIssuer, remember to update the Ingress annotation cert-manager.io/issuer to cert-manager.io/cluster-issuer.
- ...4 more annotations...
- cert-manager mainly uses two different custom Kubernetes resources - known as CRDs - to configure and control how it operates, as well as to store state. These resources are Issuers and Certificates.
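A minimal sketch of the annotation change mentioned above, assuming the official kubernetes Python client, a kubeconfig pointing at the cluster, an existing Ingress named "web" in the "default" namespace, and a ClusterIssuer called "letsencrypt-prod"; all of those names are assumptions.

```python
from kubernetes import client, config

config.load_kube_config()
networking = client.NetworkingV1Api()

# Switching from a namespaced Issuer to a ClusterIssuer means swapping the
# annotation key, as the cert-manager docs describe. Setting a key to None in a
# strategic-merge patch removes it.
patch = {
    "metadata": {
        "annotations": {
            "cert-manager.io/issuer": None,                       # drop the old key
            "cert-manager.io/cluster-issuer": "letsencrypt-prod"  # use the cluster-wide issuer
        }
    }
}
networking.patch_namespaced_ingress(name="web", namespace="default", body=patch)
```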
Service | Kubernetes - 0 views
- ...23 more annotations...
- In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service).
- If you're able to use Kubernetes APIs for service discovery in your application, you can query the API server for Endpoints, which get updated whenever the set of Pods in a Service changes.
- Kubernetes assigns this Service an IP address (sometimes called the "cluster IP"), which is used by the Service proxies.
- A Service can map any incoming port to a targetPort. By default and for convenience, the targetPort is set to the same value as the port field.
- As many Services need to expose more than one port, Kubernetes supports multiple port definitions on a Service object. Each port definition can have the same protocol, or a different one.
- Because this Service has no selector, the corresponding Endpoints object is not created automatically. You can manually map the Service to the network address and port where it's running by adding an Endpoints object.
- Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP.
- NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service from outside the cluster by requesting <NodeIP>:<NodePort>.
- ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com) by returning a CNAME record with its value. No proxying of any kind is set up.
- You can also use Ingress to expose your Service. Ingress is not a Service type, but it acts as the entry point for your cluster.
- If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by the --service-node-port-range flag (default: 30000-32767).
- The default for --nodeport-addresses is an empty list. This means that kube-proxy should consider all available network interfaces for NodePort.
- You need to take care of possible port collisions yourself. You also have to use a valid port number, one that's inside the range configured for NodePort use.
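A minimal sketch pulling several of the points above together with the official kubernetes Python client: a NodePort Service that maps port 80 to targetPort 9376, followed by an API-based service-discovery lookup of its Endpoints. The Service name, selector labels, and port numbers are illustrative.

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="my-service"),
    spec=client.V1ServiceSpec(
        type="NodePort",                      # the default would be ClusterIP
        selector={"app": "MyApp"},            # the logical set of Pods behind the Service
        ports=[client.V1ServicePort(
            name="http",
            protocol="TCP",
            port=80,                          # the Service port
            target_port=9376,                 # the container port it maps to
            node_port=30080,                  # static port on every node (must be in range)
        )],
    ),
)
core.create_namespaced_service(namespace="default", body=service)

# Service discovery through the API: the Endpoints object tracks the Pod
# addresses behind the Service and is updated as the Pod set changes.
endpoints = core.read_namespaced_endpoints(name="my-service", namespace="default")
for subset in endpoints.subsets or []:
    for addr in subset.addresses or []:
        print(addr.ip)
```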
NGINX Ingress Controller - Documentation - 0 views
- NodePort, as the name says, means that a port on a node is configured to route incoming requests to a certain service.
- LoadBalancer is a service which is typically implemented by the cloud provider as an external service (with additional cost).
- A load balancer provides a single IP address to access your services, which can run on multiple nodes.
- ...5 more annotations...
- Cloud load balancers are not necessary. A load balancer can also be implemented with MetalLB, which can be deployed in the same Kubernetes cluster.
- Installing NGINX using NodePort is the simplest example of an Ingress Controller, as we can avoid the load balancer dependency. NodePort is used for exposing the NGINX Ingress to the external network.
Ingress Controllers | Kubernetes - 0 views
- If you do not specify an IngressClass for an Ingress, and your cluster has exactly one IngressClass marked as default, then Kubernetes applies the cluster's default IngressClass to the Ingress.
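A minimal sketch, again assuming the official kubernetes Python client, that lists IngressClasses and reports which one (if any) is marked as the cluster default via the ingressclass.kubernetes.io/is-default-class annotation.

```python
from kubernetes import client, config

config.load_kube_config()
networking = client.NetworkingV1Api()

DEFAULT_ANNOTATION = "ingressclass.kubernetes.io/is-default-class"

for ic in networking.list_ingress_class().items:
    annotations = ic.metadata.annotations or {}
    if annotations.get(DEFAULT_ANNOTATION) == "true":
        # This is the class applied to Ingresses that do not name one themselves.
        print(f"default IngressClass: {ic.metadata.name} "
              f"(controller: {ic.spec.controller})")
```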
Ingress - Kubernetes - 0 views
- ...62 more annotations...
- Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model.
- A Kubernetes Service that identifies a set of Pods using label selectors.
- An Ingress can be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name based virtual hosting.
- Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
- You must have an ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
- Both the host and path must match the content of an incoming request before the load balancer directs traffic to the referenced Service.
- HTTP (and HTTPS) requests to the Ingress that match the host and path of the rule are sent to the listed backend.
- A default backend is often configured in an Ingress controller to service any requests that do not match a path in the spec.
- A fanout configuration routes traffic from a single IP address to more than one Service, based on the HTTP URI being requested.
- Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address.
- If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being required.
- You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate.
- An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others.
- More advanced load balancing concepts (e.g. persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can instead get these features through the load balancer used for a Service.
- After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.
- An Ingress with no rules sends all traffic to a single default backend, and .spec.defaultBackend is the backend that should handle requests in that case.
- If defaultBackend is not set, the handling of requests that do not match any of the rules will be up to the ingress controller.
- A common usage for a Resource backend is to ingress data to an object storage backend with static assets.
- Prefix: Matches based on a URL path prefix split by /. Matching is case sensitive and done on a path element by element basis.
- In some cases, multiple paths within an Ingress will match a request. In those cases, precedence will be given first to the longest matching path.
- Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configuration including the name of the controller that should implement the class.
- The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress point (traffic to the Service and its Pods is in plaintext).
- TLS will not work on the default rule because the certificates would have to be issued for all the possible sub-domains.
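A minimal sketch of a single Ingress combining several of the ideas above (a class, a default backend, simple fanout paths, a name-based virtual host, and a TLS Secret), built with the official kubernetes Python client; the hostnames, Service names, the "nginx" class, and the "example-tls" Secret are all placeholders. An ingress controller must already be running for the object to have any effect, as the annotations above point out.

```python
from kubernetes import client, config

config.load_kube_config()
networking = client.NetworkingV1Api()

def backend(service_name, port):
    # Helper for a Service-type backend reference.
    return client.V1IngressBackend(
        service=client.V1IngressServiceBackend(
            name=service_name,
            port=client.V1ServiceBackendPort(number=port)))

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="example-ingress"),
    spec=client.V1IngressSpec(
        ingress_class_name="nginx",                              # the IngressClass to use
        default_backend=backend("default-http-backend", 80),     # catches non-matching requests
        tls=[client.V1IngressTLS(hosts=["foo.example.com"],
                                 secret_name="example-tls")],    # Secret with TLS key + cert
        rules=[client.V1IngressRule(
            host="foo.example.com",                              # name-based virtual host
            http=client.V1HTTPIngressRuleValue(paths=[
                # Simple fanout: two prefixes on the same host go to two Services.
                client.V1HTTPIngressPath(path="/api", path_type="Prefix",
                                         backend=backend("api-service", 8080)),
                client.V1HTTPIngressPath(path="/web", path_type="Prefix",
                                         backend=backend("web-service", 8080)),
            ]))],
    ),
)
networking.create_namespaced_ingress(namespace="default", body=ingress)
```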
Creating Highly Available clusters with kubeadm | Kubernetes - 0 views
- If, instead, you prefer to copy certs across control-plane nodes manually or using automation tools, please remove this flag and refer to the Manual certificate distribution section below.
- If you are using a kubeadm configuration file, set the podSubnet field under the networking object of ClusterConfiguration.
- Manually copy the certificates from the primary control plane node to the joining control plane nodes.
- ...1 more annotation...
- Copy only the certificates in the above list. kubeadm will take care of generating the rest of the certificates with the required SANs for the joining control-plane instances.
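A minimal sketch of the configuration-file route mentioned above: a ClusterConfiguration with podSubnet set under networking, handed to kubeadm init together with --upload-certs so the control-plane certificates are distributed via a Secret instead of copied by hand. The endpoint, subnet, and version are placeholders, and kubeadm must already be installed on the host.

```python
import subprocess
import tempfile

CLUSTER_CONFIG = """\
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.29.0
controlPlaneEndpoint: "k8s-api.example.com:6443"   # shared endpoint for all control-plane nodes
networking:
  podSubnet: "192.168.0.0/16"                      # must match the CNI add-on you deploy
"""

# Write the configuration to a temporary file that kubeadm can read.
with tempfile.NamedTemporaryFile("w", suffix=".yaml", delete=False) as f:
    f.write(CLUSTER_CONFIG)
    config_path = f.name

# --upload-certs stores the control-plane certificates in a kubeadm-certs Secret
# so joining control-plane nodes can fetch them instead of copying files by hand.
subprocess.run(["kubeadm", "init", "--config", config_path, "--upload-certs"], check=True)
```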
Creating a cluster with kubeadm | Kubernetes - 0 views
- (Recommended) If you have plans to upgrade this single control-plane kubeadm cluster to high availability, you should specify --control-plane-endpoint to set the shared endpoint for all control-plane nodes.
- ...12 more annotations...
- kubeadm uses the network interface associated with the default gateway to set the advertise address for this particular control-plane node's API server. To use a different network interface, specify the --apiserver-advertise-address=<ip-address> argument to kubeadm init.
- Do not share the admin.conf file with anyone; instead, grant users custom permissions by generating them a kubeconfig file using the kubeadm kubeconfig user command.
- The token is used for mutual authentication between the control-plane node and the joining nodes. The token included here is secret. Keep it safe, because anyone with this token can add authenticated nodes to your cluster.
- You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.
- Make sure that your Pod network plugin supports RBAC, and so do any manifests that you use to deploy it.
- The cluster created here has a single control-plane node, with a single etcd database running on it.
- The node-role.kubernetes.io/control-plane label is such a restricted label and kubeadm manually applies it using a privileged client after a node has been created.
- Remove the node-role.kubernetes.io/control-plane:NoSchedule taint from any nodes that have it, including the control plane nodes, meaning that the scheduler will then be able to schedule Pods everywhere.
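A minimal sketch of the taint removal described in the last annotation, done through the official kubernetes Python client rather than kubectl; it strips the node-role.kubernetes.io/control-plane:NoSchedule taint from every node that carries it, which lets the scheduler place ordinary Pods on control-plane nodes.

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

TAINT_KEY = "node-role.kubernetes.io/control-plane"

for node in core.list_node().items:
    taints = node.spec.taints or []
    # Keep every taint except the control-plane NoSchedule one.
    kept = [t for t in taints if not (t.key == TAINT_KEY and t.effect == "NoSchedule")]
    if len(kept) != len(taints):
        core.patch_node(node.metadata.name, {"spec": {"taints": kept}})
        print(f"removed control-plane taint from {node.metadata.name}")
```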
Options for Highly Available Topology | Kubernetes - 0 views
- A stacked HA cluster is a topology where the distributed data storage cluster provided by etcd is stacked on top of the cluster formed by the nodes managed by kubeadm that run control plane components.
- Each control plane node runs an instance of the kube-apiserver, kube-scheduler, and kube-controller-manager.
- ...6 more annotations...
- Each control plane node creates a local etcd member, and this etcd member communicates only with the kube-apiserver of this node.
- A stacked cluster runs the risk of failed coupling: if one node goes down, both an etcd member and a control plane instance are lost.
- An HA cluster with external etcd is a topology where the distributed data storage cluster provided by etcd is external to the cluster formed by the nodes that run control plane components.
- etcd members run on separate hosts, and each etcd host communicates with the kube-apiserver of each control plane node.
- This topology decouples the control plane and etcd member. It therefore provides an HA setup where losing a control plane instance or an etcd member has less impact and does not affect the cluster redundancy as much as the stacked HA topology.
Installing kubeadm | Kubernetes - 0 views
- ...6 more annotations...
- kubeadm will not install or manage kubelet or kubectl for you, so you will need to ensure they match the version of the Kubernetes control plane you want kubeadm to install for you.
- One minor version of skew between the kubelet and the control plane is supported, but the kubelet version may never exceed the API server version.
- Both the container runtime and the kubelet have a property called "cgroup driver", which is important for the management of cgroups on Linux machines.
Installing Addons | Kubernetes - 0 views
- Calico is a networking and network policy provider. Calico supports a flexible set of networking options so you can choose the most efficient option for your situation, including non-overlay and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts, pods, and (if using Istio & Envoy) applications at the service mesh layer.
- Cilium is a networking, observability, and security solution with an eBPF-based data plane. Cilium provides a simple flat Layer 3 network with the ability to span multiple clusters in either a native routing or overlay/encapsulation mode, and can enforce network policies on L3-L7 using an identity-based security model that is decoupled from network addressing. Cilium can act as a replacement for kube-proxy; it also offers additional, opt-in observability and security features.
- ...1 more annotation...
- The node problem detector runs on Linux nodes and reports system issues as either Events or Node conditions.
Cluster Networking - Kubernetes - 0 views
- Networking is a central part of Kubernetes, but it can be challenging to understand exactly how it is expected to work.
- ...57 more annotations...
- This is the primary focus of this document.
- Pods can be treated much like VMs or physical hosts from the perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration.
- If your job previously ran in a VM, your VM had an IP and could talk to other VMs in your project. This is the same basic model.
- You can request ports on the Node itself which forward to your Pod (called host ports), but this is a very niche operation.
- AOS is an Intent-Based Networking system that creates and manages complex datacenter environments from a simple integrated platform.
- Cisco Application Centric Infrastructure offers an integrated overlay and underlay SDN solution that supports containers, virtual machines, and bare metal servers.
- The AOS Reference Design currently supports Layer-3 connected hosts that eliminate legacy Layer-2 switching problems.
- The AWS VPC CNI offers integrated AWS Virtual Private Cloud (VPC) networking for Kubernetes clusters.
- Users can apply existing AWS VPC networking and security best practices for building Kubernetes clusters.
- Using this CNI plugin allows Kubernetes pods to have the same IP address inside the pod as they do on the VPC network.
- The CNI allocates AWS Elastic Networking Interfaces (ENIs) to each Kubernetes node and uses the secondary IP range from each ENI for pods on the node.
- Big Cloud Fabric is a cloud native networking architecture, designed to run Kubernetes in private cloud/on-premises environments.
- Cilium is L7/HTTP aware and can enforce network policies on L3-L7 using an identity based security model that is decoupled from network addressing.
- CNI-Genie is a CNI plugin that enables Kubernetes to simultaneously have access to different implementations of the Kubernetes network model at runtime.
- cni-ipvlan-vpc-k8s contains a set of CNI and IPAM plugins to provide a simple, host-local, low latency, high throughput, and compliant networking stack for Kubernetes within Amazon Virtual Private Cloud (VPC) environments by making use of Amazon Elastic Network Interfaces (ENI) and binding AWS-managed IPs into Pods using the Linux kernel's IPvlan driver in L2 mode.
- Contrail, based on Tungsten Fabric, is a truly open, multi-cloud network virtualization and policy management platform.
- Kube-router provides a Linux LVS/IPVS-based service proxy, a Linux kernel forwarding-based pod-to-pod networking solution with no overlays, and an iptables/ipset-based network policy enforcer.
- If you have a "dumb" L2 network, such as a simple switch in a "bare-metal" environment, you should be able to do something similar to the above GCE setup.
- Multus is a multi-CNI plugin to support the multi-networking feature in Kubernetes using CRD-based network objects.
- NSX-T can provide network virtualization for a multi-cloud and multi-hypervisor environment and is focused on emerging application frameworks and architectures that have heterogeneous endpoints and technology stacks.
- NSX-T Container Plug-in (NCP) provides integration between NSX-T and container orchestrators such as Kubernetes.
- Nuage uses the open source Open vSwitch for the data plane along with a feature-rich SDN Controller built on open standards.
- Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet.
- Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking.
- Calico can also be run in policy enforcement mode in conjunction with other networking solutions such as Flannel (aka Canal) or native GCE, AWS or Azure networking.
- Romana is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network.
- Weave Net runs as a CNI plug-in or stand-alone. In either version, it doesn't require any configuration or extra code to run, and in both cases the network provides one IP address per pod, as is standard for Kubernetes.