
張 旭

Why I Will Never Use Alpine Linux Ever Again | Martin Heinz | Personal Website & Blog - 2 views

  • musl is an implementation of the C standard library. It is more lightweight, faster, and simpler than glibc, the C library used by other Linux distros such as Ubuntu.
  • Some of it stems from how musl (and therefore also Alpine) handles DNS (it's always DNS), more specifically, musl (by design) doesn't support DNS-over-TCP.
  • By using Alpine, you're getting "free" chaos engineering for your cluster.
  • this DNS issue does not manifest in a Docker container. It can only happen in Kubernetes, so if you test locally, everything will work fine, and you will only find out about the unfixable issue when you deploy the application to a cluster.
  • if your application requires CGO_ENABLED=1, you will obviously run into issues with Alpine.
張 旭

What is Data Definition Language (DDL) and how is it used? - 1 views

  • Data Definition Language (DDL) is used to create and modify the structure of objects in a database using predefined commands and a specific syntax.
  • DDL includes Structured Query Language (SQL) statements to create and drop databases, aliases, locations, indexes, tables and sequences.
  • Since DDL includes SQL statements to define changes in the database schema, it is considered a subset of SQL.
  • Data Manipulation Language (DML) commands are used to modify data in a database. DML statements control access to the database data.
  • DDL commands are used to create, delete or alter the structure of objects in a database but not its data.
  • DDL deals with descriptions of the database schema and is useful for creating new tables, indexes, sequences, stogroups, etc. and to define the attributes of these objects, such as data type, field length and alternate table names (aliases).
  • Data Query Language (DQL) is used to get data within the schema objects of a database and also to query it and impose order upon it.
  • DQL is also a subset of SQL. One of the most common commands in DQL is SELECT.
  • The most common command types in DDL are CREATE, ALTER and DROP.
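
  A minimal SQL sketch of the distinction the highlights above draw between DDL, DML and DQL; the employees table and its columns are invented purely for illustration:

      -- DDL: creates, alters, or drops structure, not data
      CREATE TABLE employees (
          id    INT PRIMARY KEY,
          name  VARCHAR(100),
          dept  VARCHAR(50)
      );
      ALTER TABLE employees ADD COLUMN hired_on DATE;
      DROP TABLE employees;

      -- DML: modifies the data held in that structure
      INSERT INTO employees (id, name, dept) VALUES (1, 'Ada', 'Engineering');
      UPDATE employees SET dept = 'Research' WHERE id = 1;

      -- DQL: queries the data and imposes order on it
      SELECT id, name FROM employees WHERE dept = 'Research' ORDER BY name;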
張 旭

CDC and DDL Changes to Source Tables - Microsoft® SQL Server 2012 Unleashed [... - 1 views

  • One of the common challenges when capturing data changes from your source tables is how to handle DDL changes to the source tables.
  • Change Data Capture
張 旭

Supported DDL operations for a CDC Replication Engine for Db2 Database - IBM Documentation - 1 views

  • SQL statements are divided into two categories: Data Definition Language (DDL) and Data Manipulation Language (DML).
  • DDL operations on a table may affect dependent objects such as constraints and indexes.
  • "DDL statements are used to describe a database, to define its structure, to create its objects and to create the table's sub-objects."
張 旭

Securing NGINX-ingress - cert-manager Documentation - 1 views

  • If using a ClusterIssuer, remember to update the Ingress annotation cert-manager.io/issuer to cert-manager.io/cluster-issuer
  • Certificates resources allow you to specify the details of the certificate you want to request.
  • An Issuer defines how cert-manager will request TLS certificates.
  • cert-manager mainly uses two different custom Kubernetes resources - known as CRDs - to configure and control how it operates, as well as to store state. These resources are Issuers and Certificates.
  • using annotations on the ingress with ingress-shim or directly creating a certificate resource.
  • The secret that is used in the ingress should match the secret defined in the certificate.
  • a typo will result in the ingress-nginx-controller falling back to its self-signed certificate.
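
  A hedged sketch of what the highlights above describe: an Ingress annotated for a ClusterIssuer, with the TLS secret name matching the one defined in the Certificate. The issuer name, host, and service names are placeholders:

      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: example-ingress
        annotations:
          # cluster-wide issuer, hence cert-manager.io/cluster-issuer (not cert-manager.io/issuer)
          cert-manager.io/cluster-issuer: letsencrypt-prod
      spec:
        ingressClassName: nginx
        tls:
          - hosts:
              - example.com
            secretName: example-com-tls   # must match the secret defined in the Certificate
        rules:
          - host: example.com
            http:
              paths:
                - path: /
                  pathType: Prefix
                  backend:
                    service:
                      name: example-service
                      port:
                        number: 80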
張 旭

Installation Guide - NGINX Ingress Controller - 0 views

  • On most Kubernetes clusters, the ingress controller will work without requiring any extra configuration.
張 旭

Service | Kubernetes - 0 views

  • Each Pod gets its own IP address
  • Pods are nonpermanent resources.
  • Kubernetes Pods are created and destroyed to match the state of your cluster
  • In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service).
  • The set of Pods targeted by a Service is usually determined by a selector
  • If you're able to use Kubernetes APIs for service discovery in your application, you can query the API server for Endpoints, that get updated whenever the set of Pods in a Service changes.
  • A Service in Kubernetes is a REST object, similar to a Pod.
  • The name of a Service object must be a valid DNS label name
  • Kubernetes assigns this Service an IP address (sometimes called the "cluster IP"), which is used by the Service proxies
  • A Service can map any incoming port to a targetPort. By default and for convenience, the targetPort is set to the same value as the port field.
  • The default protocol for Services is TCP
  • As many Services need to expose more than one port, Kubernetes supports multiple port definitions on a Service object. Each port definition can have the same protocol, or a different one.
  • Because this Service has no selector, the corresponding Endpoints object is not created automatically. You can manually map the Service to the network address and port where it's running, by adding an Endpoints object manually
  • Endpoint IP addresses cannot be the cluster IPs of other Kubernetes Services
  • Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP
  • ClusterIP: Exposes the Service on a cluster-internal IP.
  • NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
  • LoadBalancer: Exposes the Service externally using a cloud provider's load balancer
  • ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
  • You can also use Ingress to expose your Service. Ingress is not a Service type, but it acts as the entry point for your cluster.
  • If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767).
  • The default for --nodeport-addresses is an empty list. This means that kube-proxy should consider all available network interfaces for NodePort.
  • you need to take care of possible port collisions yourself. You also have to use a valid port number, one that's inside the range configured for NodePort use.
  • Service is visible as <NodeIP>:spec.ports[*].nodePort and .spec.clusterIP:spec.ports[*].port
  • Choosing this value makes the Service only reachable from within the cluster.
  • NodePort: Exposes the Service on each Node's IP at a static port
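
  A minimal Service sketch tying the highlights above together; the name, selector, and port numbers are placeholders:

      apiVersion: v1
      kind: Service
      metadata:
        name: my-service            # must be a valid DNS label name
      spec:
        type: NodePort              # default ServiceType is ClusterIP (cluster-internal only)
        selector:
          app: my-app               # the set of Pods targeted by the Service
        ports:
          - name: http
            protocol: TCP           # TCP is the default protocol
            port: 80                # the port exposed on the Service's cluster IP
            targetPort: 8080        # the Pod's port; defaults to the same value as port
            nodePort: 30080         # optional; must be inside --service-node-port-range (default 30000-32767)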
張 旭

NGINX Ingress Controller - Documentation - 0 views

  • NodePort, as the name says, means that a port on a node is configured to route incoming requests to a certain service.
  • LoadBalancer is a service, which is typically implemented by the cloud provider as an external service (with additional cost).
  • Load balancer provides a single IP address to access your services, which can run on multiple nodes.
  • Ingress controller helps to consolidate routing rules of multiple applications into one entity.
  • Ingress controller is exposed to an external network with the help of NodePort or LoadBalancer.
  • cloud load balancers are not necessary. Load balancer can also be implemented with MetalLB, which can be deployed in the same Kubernetes cluster.
  • to expose the Ingress controller to an external network is to use NodePort.
  • Installing NGINX using NodePort is the simplest example for the Ingress Controller, as we can avoid the load balancer dependency. NodePort is used for exposing the NGINX Ingress to the external network.
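
  One possible way to install the controller with a NodePort service instead of a cloud load balancer, assuming the official ingress-nginx Helm chart is used (value names can differ between chart versions):

      helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
      helm repo update
      helm install ingress-nginx ingress-nginx/ingress-nginx \
        --namespace ingress-nginx --create-namespace \
        --set controller.service.type=NodePort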
張 旭

Ingress Controllers | Kubernetes - 0 views

  • In order for the Ingress resource to work, the cluster must have an ingress controller running.
  • ingressClassName is a replacement of the older annotation method.
  • If you do not specify an IngressClass for an Ingress, and your cluster has exactly one IngressClass marked as default, then Kubernetes applies the cluster's default IngressClass to the Ingress.
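
  A sketch of an IngressClass marked as the cluster default; the controller string assumes ingress-nginx and would differ for other controllers:

      apiVersion: networking.k8s.io/v1
      kind: IngressClass
      metadata:
        name: nginx
        annotations:
          ingressclass.kubernetes.io/is-default-class: "true"   # used when an Ingress omits ingressClassName
      spec:
        controller: k8s.io/ingress-nginx                        # depends on which controller is installed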
張 旭

Ingress - Kubernetes - 0 views

  • An API object that manages external access to the services in a cluster, typically HTTP.
  • load balancing
  • SSL termination
  • name-based virtual hosting
  • Edge router: A router that enforces the firewall policy for your cluster.
  • Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model.
  • Service: A Kubernetes Service (a way to expose an application running on a set of Pods as a network service) that identifies a set of Pods using label selectors.
  • Services are assumed to have virtual IPs only routable within the cluster network.
  • Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
  • Traffic routing is controlled by rules defined on the Ingress resource.
  • An Ingress can be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name based virtual hosting.
  • Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
  • You must have an ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
  • As with all other Kubernetes resources, an Ingress needs apiVersion, kind, and metadata fields
  • Ingress frequently uses annotations to configure some options depending on the Ingress controller,
  • Ingress resource only supports rules for directing HTTP traffic.
  • An optional host.
  • A list of paths
  • A backend is a combination of Service and port names
  • has an associated backend
  • Both the host and path must match the content of an incoming request before the load balancer directs traffic to the referenced Service.
  • HTTP (and HTTPS) requests to the Ingress that matches the host and path of the rule are sent to the listed backend.
  • A default backend is often configured in an Ingress controller to service any requests that do not match a path in the spec.
  • An Ingress with no rules sends all traffic to a single default backend.
  • Ingress controllers and load balancers may take a minute or two to allocate an IP address.
  • A fanout configuration routes traffic from a single IP address to more than one Service, based on the HTTP URI being requested.
  • nginx.ingress.kubernetes.io/rewrite-target: /
  • describe ingress
  • get ingress
  • Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address.
  • route requests based on the Host header.
  • an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being required.
  • secure an Ingress by specifying a Secret (stores sensitive information, such as passwords, OAuth tokens, and ssh keys) that contains a TLS private key and certificate.
  • Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination.
  • An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others.
  • persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can instead get these features through the load balancer used for a Service.
  • review the controller specific documentation to see how they handle health checks
  • edit ingress
  • After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.
  • kubectl replace -f on a modified Ingress YAML file.
  • Node: A worker machine in Kubernetes, part of a cluster.
  • in most common Kubernetes deployments, nodes in the cluster are not part of the public internet.
  • Edge router: A router that enforces the firewall policy for your cluster.
  • a gateway managed by a cloud provider or a physical piece of hardware.
  • Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model.
  • Service: A Kubernetes Service that identifies a set of Pods using label selectors.
  • An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
  • An Ingress does not expose arbitrary ports or protocols.
  • You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
  • The name of an Ingress object must be a valid DNS subdomain name
  • The Ingress spec has all the information needed to configure a load balancer or proxy server.
  • Ingress resource only supports rules for directing HTTP(S) traffic.
  • An Ingress with no rules sends all traffic to a single default backend and .spec.defaultBackend is the backend that should handle requests in that case.
  • If defaultBackend is not set, the handling of requests that do not match any of the rules will be up to the ingress controller
  • A common usage for a Resource backend is to ingress data to an object storage backend with static assets.
  • Exact: Matches the URL path exactly and with case sensitivity.
  • Prefix: Matches based on a URL path prefix split by /. Matching is case sensitive and done on a path element by element basis.
  • multiple paths within an Ingress will match a request. In those cases precedence will be given first to the longest matching path.
  • Hosts can be precise matches (for example “foo.bar.com”) or a wildcard (for example “*.foo.com”).
  • No match, wildcard only covers a single DNS label
  • Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configuration including the name of the controller that should implement the class.
  • secure an Ingress by specifying a Secret that contains a TLS private key and certificate.
  • The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress point (traffic to the Service and its Pods is in plaintext).
  • TLS will not work on the default rule because the certificates would have to be issued for all the possible sub-domains.
  • hosts in the tls section need to explicitly match the host in the rules section.
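
  A hedged fanout sketch combining several of the highlights above: the rewrite annotation, Prefix paths, a default backend, and a TLS section whose host explicitly matches the rule. Hosts, secret, and service names are placeholders:

      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: simple-fanout
        annotations:
          nginx.ingress.kubernetes.io/rewrite-target: /
      spec:
        ingressClassName: nginx              # replaces the older annotation method
        tls:
          - hosts:
              - foo.bar.com                  # must explicitly match the host in the rules section
            secretName: foo-bar-tls
        defaultBackend:                      # handles requests that match no rule
          service:
            name: default-service
            port:
              number: 80
        rules:
          - host: foo.bar.com
            http:
              paths:
                - path: /app1
                  pathType: Prefix
                  backend:
                    service:
                      name: app1-service
                      port:
                        number: 80
                - path: /app2
                  pathType: Prefix
                  backend:
                    service:
                      name: app2-service
                      port:
                        number: 80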
張 旭

Creating Highly Available clusters with kubeadm | Kubernetes - 0 views

  • If instead, you prefer to copy certs across control-plane nodes manually or using automation tools, please remove this flag and refer to Manual certificate distribution section below.
  • if you are using a kubeadm configuration file set the podSubnet field under the networking object of ClusterConfiguration.
  • manually copy the certificates from the primary control plane node to the joining control plane nodes.
  • Copy only the certificates in the above list. kubeadm will take care of generating the rest of the certificates with the required SANs for the joining control-plane instances.
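
  A sketch of the kubeadm configuration file the highlights refer to, with podSubnet under the networking object of ClusterConfiguration; the endpoint, CIDR, and config API version are assumptions that depend on your load balancer, Pod network add-on, and kubeadm release:

      apiVersion: kubeadm.k8s.io/v1beta3              # config API version depends on your kubeadm release
      kind: ClusterConfiguration
      controlPlaneEndpoint: "lb.example.com:6443"     # shared endpoint for all control-plane nodes
      networking:
        podSubnet: "10.244.0.0/16"                    # provider-specific value for your Pod network add-on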
張 旭

Creating a cluster with kubeadm | Kubernetes - 0 views

  • (Recommended) If you have plans to upgrade this single control-plane kubeadm cluster to high availability you should specify the --control-plane-endpoint to set the shared endpoint for all control-plane nodes
  • set the --pod-network-cidr to a provider-specific value.
  • kubeadm tries to detect the container runtime by using a list of well known endpoints.
  • kubeadm uses the network interface associated with the default gateway to set the advertise address for this particular control-plane node's API server. To use a different network interface, specify the --apiserver-advertise-address=<ip-address> argument to kubeadm init
  • Do not share the admin.conf file with anyone and instead grant users custom permissions by generating them a kubeconfig file using the kubeadm kubeconfig user command.
  • The token is used for mutual authentication between the control-plane node and the joining nodes. The token included here is secret. Keep it safe, because anyone with this token can add authenticated nodes to your cluster.
  • You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.
  • Take care that your Pod network must not overlap with any of the host networks
  • Make sure that your Pod network plugin supports RBAC, and so do any manifests that you use to deploy it.
  • You can install only one Pod network per cluster.
  • The cluster created here has a single control-plane node, with a single etcd database running on it.
  • The node-role.kubernetes.io/control-plane label is such a restricted label and kubeadm manually applies it using a privileged client after a node has been created.
  • By default, your cluster will not schedule Pods on the control plane nodes for security reasons.
  • kubectl taint nodes --all node-role.kubernetes.io/control-plane-
  • remove the node-role.kubernetes.io/control-plane:NoSchedule taint from any nodes that have it, including the control plane nodes, meaning that the scheduler will then be able to schedule Pods everywhere.
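
  A minimal kubeadm init sketch using the flags mentioned in the highlights above; the endpoint, CIDR, and address are placeholders to be replaced with your own values:

      sudo kubeadm init \
        --control-plane-endpoint "lb.example.com:6443" \
        --pod-network-cidr "10.244.0.0/16" \
        --apiserver-advertise-address "192.168.0.10"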
張 旭

Options for Highly Available Topology | Kubernetes - 0 views

  • With stacked control plane nodes, where etcd nodes are colocated with control plane nodes
  • A stacked HA cluster is a topology where the distributed data storage cluster provided by etcd is stacked on top of the cluster formed by the nodes managed by kubeadm that run control plane components.
  • Each control plane node runs an instance of the kube-apiserver, kube-scheduler, and kube-controller-manager
  • Each control plane node creates a local etcd member and this etcd member communicates only with the kube-apiserver of this node.
  • This topology couples the control planes and etcd members on the same nodes.
  • a stacked cluster runs the risk of failed coupling. If one node goes down, both an etcd member and a control plane instance are lost
  • An HA cluster with external etcd is a topology where the distributed data storage cluster provided by etcd is external to the cluster formed by the nodes that run control plane components.
  • etcd members run on separate hosts, and each etcd host communicates with the kube-apiserver of each control plane node.
  • This topology decouples the control plane and etcd member. It therefore provides an HA setup where losing a control plane instance or an etcd member has less impact and does not affect the cluster redundancy as much as the stacked HA topology.
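
  A sketch of how the external-etcd topology is expressed in a kubeadm ClusterConfiguration; the endpoints are placeholders and the certificate paths follow the usual kubeadm conventions:

      apiVersion: kubeadm.k8s.io/v1beta3
      kind: ClusterConfiguration
      etcd:
        external:                                     # omit this block for the default stacked-etcd topology
          endpoints:
            - https://etcd0.example.com:2379
            - https://etcd1.example.com:2379
            - https://etcd2.example.com:2379
          caFile: /etc/kubernetes/pki/etcd/ca.crt
          certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
          keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key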
張 旭

Installing kubeadm | Kubernetes - 0 views

  • Swap disabled. You MUST disable swap in order for the kubelet to work properly.
  • The product_uuid can be checked by using the command sudo cat /sys/class/dmi/id/product_uuid
  • some virtual machines may have identical values.
  • Kubernetes uses these values to uniquely identify the nodes in the cluster.
  • Make sure that the br_netfilter module is loaded.
  • you should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config,
  • kubeadm will not install or manage kubelet or kubectl for you, so you will need to ensure they match the version of the Kubernetes control plane you want kubeadm to install for you.
  • one minor version skew between the kubelet and the control plane is supported, but the kubelet version may never exceed the API server version.
  • Both the container runtime and the kubelet have a property called "cgroup driver", which is important for the management of cgroups on Linux machines.
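
  A minimal sketch of the host prerequisites the highlights list (swap off, br_netfilter loaded, bridged traffic visible to iptables); the sysctl file path is a common convention, not a requirement:

      sudo swapoff -a                      # the kubelet requires swap to be disabled
      sudo modprobe br_netfilter           # make sure the br_netfilter module is loaded
      echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/k8s.conf
      echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/k8s.conf
      sudo sysctl --system                 # apply the settings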
張 旭

Installing Addons | Kubernetes - 0 views

  • Calico is a networking and network policy provider. Calico supports a flexible set of networking options so you can choose the most efficient option for your situation, including non-overlay and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts, pods, and (if using Istio & Envoy) applications at the service mesh layer.
  • Cilium is a networking, observability, and security solution with an eBPF-based data plane. Cilium provides a simple flat Layer 3 network with the ability to span multiple clusters in either a native routing or overlay/encapsulation mode, and can enforce network policies on L3-L7 using an identity-based security model that is decoupled from network addressing. Cilium can act as a replacement for kube-proxy; it also offers additional, opt-in observability and security features.
  • CoreDNS is a flexible, extensible DNS server which can be installed as the in-cluster DNS for pods.
  • The node problem detector runs on Linux nodes and reports system issues as either Events or Node conditions.
張 旭

Cluster Networking - Kubernetes - 0 views

  • Networking is a central part of Kubernetes, but it can be challenging to understand exactly how it is expected to work
  • Highly-coupled container-to-container communications
  • Pod-to-Pod communications
  • this is the primary focus of this document
    • 張 旭
       
      Cluster Networking is concerned with handling the connections between Pods (Pod-to-Pod communication).
  • Pod-to-Service communications
  • External-to-Service communications
  • Kubernetes is all about sharing machines between applications.
  • sharing machines requires ensuring that two applications do not try to use the same ports.
  • Dynamic port allocation brings a lot of complications to the system
  • Every Pod gets its own IP address
  • do not need to explicitly create links between Pods
  • almost never need to deal with mapping container ports to host ports.
  • Pods can be treated much like VMs or physical hosts from the perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration.
  • pods on a node can communicate with all pods on all nodes without NAT
  • agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node
  • pods in the host network of a node can communicate with all pods on all nodes without NAT
  • If your job previously ran in a VM, your VM had an IP and could talk to other VMs in your project. This is the same basic model.
  • containers within a Pod share their network namespaces - including their IP address
  • containers within a Pod can all reach each other’s ports on localhost
  • containers within a Pod must coordinate port usage
  • “IP-per-pod” model.
  • request ports on the Node itself which forward to your Pod (called host ports), but this is a very niche operation
  • The Pod itself is blind to the existence or non-existence of host ports.
  • AOS is an Intent-Based Networking system that creates and manages complex datacenter environments from a simple integrated platform.
  • Cisco Application Centric Infrastructure offers an integrated overlay and underlay SDN solution that supports containers, virtual machines, and bare metal servers.
  • AOS Reference Design currently supports Layer-3 connected hosts that eliminate legacy Layer-2 switching problems.
  • The AWS VPC CNI offers integrated AWS Virtual Private Cloud (VPC) networking for Kubernetes clusters.
  • users can apply existing AWS VPC networking and security best practices for building Kubernetes clusters.
  • Using this CNI plugin allows Kubernetes pods to have the same IP address inside the pod as they do on the VPC network.
  • The CNI allocates AWS Elastic Networking Interfaces (ENIs) to each Kubernetes node and using the secondary IP range from each ENI for pods on the node.
  • Big Cloud Fabric is a cloud native networking architecture, designed to run Kubernetes in private cloud/on-premises environments.
  • Cilium is L7/HTTP aware and can enforce network policies on L3-L7 using an identity based security model that is decoupled from network addressing.
  • CNI-Genie is a CNI plugin that enables Kubernetes to simultaneously have access to different implementations of the Kubernetes network model in runtime.
  • CNI-Genie also supports assigning multiple IP addresses to a pod, each from a different CNI plugin.
  • cni-ipvlan-vpc-k8s contains a set of CNI and IPAM plugins to provide a simple, host-local, low latency, high throughput, and compliant networking stack for Kubernetes within Amazon Virtual Private Cloud (VPC) environments by making use of Amazon Elastic Network Interfaces (ENI) and binding AWS-managed IPs into Pods using the Linux kernel’s IPvlan driver in L2 mode.
  • to be straightforward to configure and deploy within a VPC
  • Contiv provides configurable networking
  • Contrail, based on Tungsten Fabric, is a truly open, multi-cloud network virtualization and policy management platform.
  • DANM is a networking solution for telco workloads running in a Kubernetes cluster.
  • Flannel is a very simple overlay network that satisfies the Kubernetes requirements.
  • Any traffic bound for that subnet will be routed directly to the VM by the GCE network fabric.
  • sysctl net.ipv4.ip_forward=1
  • Jaguar provides overlay network using vxlan and Jaguar CNIPlugin provides one IP address per pod.
  • Knitter is a network solution which supports multiple networking in Kubernetes.
  • Kube-OVN is an OVN-based kubernetes network fabric for enterprises.
  • Kube-router provides a Linux LVS/IPVS-based service proxy, a Linux kernel forwarding-based pod-to-pod networking solution with no overlays, and iptables/ipset-based network policy enforcer.
  • If you have a “dumb” L2 network, such as a simple switch in a “bare-metal” environment, you should be able to do something similar to the above GCE setup.
  • Multus is a Multi CNI plugin to support the Multi Networking feature in Kubernetes using CRD based network objects in Kubernetes.
  • NSX-T can provide network virtualization for a multi-cloud and multi-hypervisor environment and is focused on emerging application frameworks and architectures that have heterogeneous endpoints and technology stacks.
  • NSX-T Container Plug-in (NCP) provides integration between NSX-T and container orchestrators such as Kubernetes
  • Nuage uses the open source Open vSwitch for the data plane along with a feature rich SDN Controller built on open standards.
  • OpenVSwitch is a somewhat more mature but also complicated way to build an overlay network
  • OVN is an opensource network virtualization solution developed by the Open vSwitch community.
  • Project Calico is an open source container networking provider and network policy engine.
  • Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet
  • Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking.
  • Calico can also be run in policy enforcement mode in conjunction with other networking solutions such as Flannel, aka canal, or native GCE, AWS or Azure networking.
  • Romana is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network
  • Weave Net runs as a CNI plug-in or stand-alone. In either version, it doesn’t require any configuration or extra code to run, and in both cases, the network provides one IP address per pod - as is standard for Kubernetes.
  • The network model is implemented by the container runtime on each node.
張 旭

Configuring a cgroup driver | Kubernetes - 0 views

  • the systemd driver is recommended for kubeadm based setups instead of the cgroupfs driver, because kubeadm manages the kubelet as a systemd service.
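
  A sketch of setting the kubelet's cgroup driver to systemd through a KubeletConfiguration, which can be appended to the kubeadm configuration file; the container runtime must be configured to use the same driver:

      apiVersion: kubelet.config.k8s.io/v1beta1
      kind: KubeletConfiguration
      cgroupDriver: systemd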