
Larvata: Group items tagged iOS


張 旭

Providers - Configuration Language | Terraform | HashiCorp Developer - 0 views

  • Terraform relies on plugins called providers to interact with cloud providers, SaaS providers, and other APIs.
  • Terraform configurations must declare which providers they require so that Terraform can install and use them.
  • Each provider adds a set of resource types and/or data sources that Terraform can manage.
  • Every resource type is implemented by a provider; without providers, Terraform can't manage any kind of infrastructure.
  • The Terraform Registry is the main directory of publicly available Terraform providers, and hosts providers for most major infrastructure platforms.
  • Dependency Lock File documents an additional HCL file that can be included with a configuration, which tells Terraform to always use a specific set of provider versions.
  • Terraform CLI finds and installs providers when initializing a working directory. It can automatically download providers from a Terraform registry, or load them from a local mirror or cache.
  • To save time and bandwidth, Terraform CLI supports an optional plugin cache. You can enable the cache using the plugin_cache_dir setting in the CLI configuration file.
  • you can use Terraform CLI to create a dependency lock file and commit it to version control along with your configuration.
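
For illustration, a minimal configuration that declares a provider, plus the optional plugin cache setting, might look like the sketch below (the aws provider and the ~> 5.0 version constraint are example choices, not requirements):

    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = "~> 5.0"   # example constraint
        }
      }
    }

    # ~/.terraformrc (the CLI configuration file)
    plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"

Running terraform init in this directory installs the provider (from the registry, a mirror, or the cache) and records the selected version in the .terraform.lock.hcl dependency lock file, which can then be committed alongside the configuration.
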
張 旭

Container Runtimes | Kubernetes - 0 views

  • Kubernetes releases before v1.24 included a direct integration with Docker Engine, using a component named dockershim. That special direct integration is no longer part of Kubernetes.
  • You need to install a container runtime into each node in the cluster so that Pods can run there.
  • Kubernetes 1.26 requires that you use a runtime that conforms with the Container Runtime Interface (CRI).
  • On Linux, control groups are used to constrain resources that are allocated to processes.
  • Both kubelet and the underlying container runtime need to interface with control groups to enforce resource management for pods and containers and set resources such as cpu/memory requests and limits.
  • When the cgroupfs driver is used, the kubelet and the container runtime directly interface with the cgroup filesystem to configure cgroups.
  • The cgroupfs driver is not recommended when systemd is the init system.
  • When systemd is chosen as the init system for a Linux distribution, the init process generates and consumes a root control group (cgroup) and acts as a cgroup manager.
  • Two cgroup managers result in two views of the available and in-use resources in the system.
  • Changing the cgroup driver of a Node that has joined a cluster is a sensitive operation. If the kubelet has created Pods using the semantics of one cgroup driver, changing the container runtime to another cgroup driver can cause errors when trying to re-create the Pod sandbox for such existing Pods. Restarting the kubelet may not solve such errors.
  • The approach to mitigate this instability is to use systemd as the cgroup driver for the kubelet and the container runtime when systemd is the selected init system.
  • Kubernetes 1.26 defaults to using v1 of the CRI API. If a container runtime does not support the v1 API, the kubelet falls back to using the (deprecated) v1alpha2 API instead.
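
As a sketch of aligning both sides on systemd (assuming containerd as the runtime; paths and versions vary by distribution):

    # /etc/containerd/config.toml -- tell containerd's runc handler to use systemd cgroups
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true

    # KubeletConfiguration fragment -- tell the kubelet to do the same
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd

With both set to systemd there is a single cgroup manager, avoiding the two-views-of-resources problem described above.
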
張 旭

Configuring a cgroup driver | Kubernetes - 0 views

  • the systemd driver is recommended for kubeadm-based setups instead of the cgroupfs driver, because kubeadm manages the kubelet as a systemd service.
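
With kubeadm this is typically expressed in the configuration file passed to kubeadm init; a sketch (the version value is illustrative, and note that kubeadm has defaulted cgroupDriver to systemd since v1.22):

    # kubeadm-config.yaml
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: v1.26.0
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd

    kubeadm init --config kubeadm-config.yaml
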
張 旭

Installing kubeadm | Kubernetes - 0 views

  • Swap disabled. You MUST disable swap in order for the kubelet to work properly.
  • The product_uuid can be checked by using the command sudo cat /sys/class/dmi/id/product_uuid
  • some virtual machines may have identical values.
  • Kubernetes uses these values to uniquely identify the nodes in the cluster.
  • Make sure that the br_netfilter module is loaded.
  • you should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config.
  • kubeadm will not install or manage kubelet or kubectl for you, so you will need to ensure they match the version of the Kubernetes control plane you want kubeadm to install for you.
  • one minor version skew between the kubelet and the control plane is supported, but the kubelet version may never exceed the API server version.
  • Both the container runtime and the kubelet have a property called "cgroup driver", which is important for the management of cgroups on Linux machines.
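
One possible pre-flight sequence covering the points above, on a systemd-based Linux host (file names such as k8s.conf are illustrative):

    # disable swap now; also remove or comment out swap entries in /etc/fstab
    sudo swapoff -a

    # load br_netfilter now and on every boot
    sudo modprobe br_netfilter
    echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf

    # let iptables see bridged traffic
    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sudo sysctl --system

    # confirm the machine has a unique product_uuid across the cluster
    sudo cat /sys/class/dmi/id/product_uuid
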
張 旭

Ephemeral Containers | Kubernetes - 0 views

  • a special type of container that runs temporarily in an existing Pod to accomplish user-initiated actions such as troubleshooting.
  • you cannot add a container to a Pod once it has been created. Instead, you usually delete and replace Pods in a controlled fashion using deployments.
  • you can run an ephemeral container in an existing Pod to inspect its state and run arbitrary commands.
  • Ephemeral containers differ from other containers in that they lack guarantees for resources or execution, and they will never be automatically restarted, so they are not appropriate for building applications.
  • Ephemeral containers are created using a special ephemeralcontainers handler in the API rather than by adding them directly to pod.spec, so it's not possible to add an ephemeral container using kubectl edit
  • distroless images enable you to deploy minimal container images that reduce attack surface and exposure to bugs and vulnerabilities.
  • enable process namespace sharing so you can view processes in other containers.
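
In practice the usual entry point is kubectl debug, which creates the ephemeral container for you; a sketch (the pod and container names are hypothetical):

    # attach a busybox ephemeral container to an existing pod,
    # targeting the application container's process namespace
    kubectl debug -it my-pod --image=busybox:1.36 --target=my-app-container

The --target flag shares the process namespace of the named container, which is what makes the debugging shell useful against a distroless image that has no shell of its own.
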
張 旭

Share Process Namespace between Containers in a Pod | Kubernetes - 0 views

  • When process namespace sharing is enabled, processes in a container are visible to all other containers in the same pod.
  • It's even possible to access the file system of another container using the /proc/$pid/root link.
  • Pods share many resources so it makes sense they would also share a process namespace.
  • Processes are visible to other containers in the pod. This includes all information visible in /proc, such as passwords that were passed as arguments or environment variables. These are protected only by regular Unix permissions.
  • Container filesystems are visible to other containers in the pod through the /proc/$pid/root link. This makes debugging easier, but it also means that filesystem secrets are protected only by filesystem permissions.
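
Sharing is opted into per pod via the shareProcessNamespace field; a minimal sketch (names and images are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
    spec:
      shareProcessNamespace: true
      containers:
      - name: nginx
        image: nginx
      - name: shell
        image: busybox:1.36
        stdin: true
        tty: true

From the shell container, ps ax would then show the nginx processes as well, and /proc/<pid>/root gives a path into the nginx container's filesystem.
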
張 旭

3. Hello, world! - Err 9.9.9 documentation - 0 views

  • The first is a Message object, which represents the full message object received by Errbot.
  • The second is a string (or a list, if using the split_args_with parameter of botcmd()) with the arguments passed to the command.
  • args would be the string “Mister Errbot”.
  • If you return None, Errbot will not respond with any kind of message when executing the command.
  • A file ending with the extension .plug is used by Errbot to identify and load plugins.
  • The key Module should point to a module that Python can find and import.
  • The key Name should be identical to the name you gave to the class in your plugin file
  • The presence of __init__.py indicates that lib is a regular Python package.
  • the !status command.
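
Putting those pieces together, a minimal plugin might look like this sketch (the HelloWorld name and greeting follow the tutorial's running example):

    # helloworld.py
    from errbot import BotPlugin, botcmd

    class HelloWorld(BotPlugin):
        """Example 'Hello, world!' plugin for Errbot."""

        @botcmd
        def hello(self, msg, args):
            """Say hello to someone: !hello Mister Errbot"""
            # msg is the full Message object; args is the argument string.
            # Returning None would make Errbot reply with nothing.
            return f"Hello, {args}!"

    # helloworld.plug
    [Core]
    Name = HelloWorld
    Module = helloworld

    [Documentation]
    Description = Example "Hello, world!" plugin.

    [Python]
    Version = 2+

Note that Name in the .plug file matches the class name and Module points at helloworld.py, mirroring the requirements quoted above.
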
張 旭

kube-proxy | Kubernetes - 0 views

  • The Kubernetes network proxy runs on each node. This reflects services as defined in the Kubernetes API on each node and can do simple TCP, UDP, and SCTP stream forwarding or round robin TCP, UDP, and SCTP forwarding across a set of backends.
  • Service cluster IPs and ports are currently found through Docker-links-compatible environment variables specifying ports opened by the service proxy.
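
For a service named redis-master exposing port 6379, those Docker-links-compatible variables injected into pods look roughly like this (the address is illustrative):

    REDIS_MASTER_SERVICE_HOST=10.0.0.11
    REDIS_MASTER_SERVICE_PORT=6379

The proxy on each node then forwards connections made to that cluster IP and port on to one of the service's backends.
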
張 旭

Considerations for large clusters | Kubernetes - 0 views

  • A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by the control plane.
  • Kubernetes v1.23 supports clusters with up to 5000 nodes.
  • criteria: no more than 110 pods per node; no more than 5,000 nodes; no more than 150,000 total pods; no more than 300,000 total containers
  • In-use IP addresses
  • run one or two control plane instances per failure zone, scaling those instances vertically first, and then scaling horizontally once vertical scaling reaches the point of diminishing returns.
  • Kubernetes nodes do not automatically steer traffic towards control-plane endpoints that are in the same failure zone
  • store Event objects in a separate dedicated etcd instance.
  • start and configure an additional etcd instance.
  • Kubernetes resource limits help to minimize the impact of memory leaks and other ways that pods and containers can impact other components.
  • Addons' default limits are typically based on data collected from experience running each addon on small or medium Kubernetes clusters.
  • When running on large clusters, addons often consume more of some resources than their default limits.
  • Many addons scale horizontally - you add capacity by running more pods
  • The VerticalPodAutoscaler can run in recommender mode to provide suggested figures for requests and limits.
  • Some addons run as one copy per node, controlled by a DaemonSet: for example, a node-level log aggregator.
  • VerticalPodAutoscaler is a custom resource that you can deploy into your cluster to help you manage resource requests and limits for pods.
  • The cluster autoscaler integrates with a number of cloud providers to help you run the right number of nodes for the level of resource demand in your cluster.
  • The addon resizer helps you in resizing the addons automatically as your cluster's scale changes.
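
A VerticalPodAutoscaler can be left in recommendation-only mode so that it suggests requests and limits without applying them; a sketch (the addon name is hypothetical):

    apiVersion: autoscaling.k8s.io/v1
    kind: VerticalPodAutoscaler
    metadata:
      name: my-addon-vpa
    spec:
      targetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: my-addon
      updatePolicy:
        updateMode: "Off"   # recommender mode: report figures, change nothing

kubectl describe vpa my-addon-vpa then shows the recommended values, which can be copied into the addon's manifest.
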