Larvata: Group items tagged "command"

張 旭

The differences between Docker, containerd, CRI-O and runc - Tutorial Works - 0 views

  • Docker isn’t the only container contender on the block.
  • Container Runtime Interface (CRI), which defines an API between Kubernetes and the container runtime
  • Open Container Initiative (OCI) which publishes specifications for images and containers.
  • for a lot of people, the name “Docker” itself is synonymous with the word “container”.
  • Docker created a very ergonomic (nice-to-use) tool for working with containers – also called docker.
  • docker is designed to be installed on a workstation or server and comes with a bunch of tools to make it easy to build and run containers as a developer or DevOps person.
  • containerd: This is a daemon process that manages and runs containers.
  • runc: This is the low-level container runtime (the thing that actually creates and runs containers).
  • libcontainer, a native Go-based implementation for creating containers.
  • Kubernetes includes a component called dockershim, which allows it to support Docker.
  • Kubernetes prefers to run containers through any container runtime which supports its Container Runtime Interface (CRI).
  • Kubernetes will remove support for Docker directly, and prefer to use only container runtimes that implement its Container Runtime Interface.
  • Both containerd and CRI-O can run Docker-formatted (actually OCI-formatted) images; they just do it without having to use the docker command or the Docker daemon.
  • Docker images are actually images packaged in the Open Container Initiative (OCI) format.
  • CRI is the API that Kubernetes uses to control the different runtimes that create and manage containers.
  • CRI makes it easier for Kubernetes to use different container runtimes
  • containerd is a high-level container runtime that came from Docker, and implements the CRI spec
  • containerd was separated out of the Docker project, to make Docker more modular.
  • CRI-O is another high-level container runtime which implements the Container Runtime Interface (CRI).
  • The idea behind the OCI is that you can choose between different runtimes which conform to the spec.
  • runc is an OCI-compatible container runtime.
  • A reference implementation is a piece of software that has implemented all the requirements of a specification or standard.
  • runc provides all of the low-level functionality for containers, interacting with existing low-level Linux features, like namespaces and control groups.
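
Taken together, these annotations describe a layered stack: the docker CLI and daemon sit on top of containerd, which in turn delegates the actual container creation to runc. As a minimal sketch (not from the article), here is how one might drive the lower layers directly, assuming containerd's ctr CLI and runc are installed; the image and container names are illustrative:

    # Talk to containerd directly through its ctr CLI -- no Docker daemon involved.
    sudo ctr images pull docker.io/library/alpine:latest
    sudo ctr run --rm docker.io/library/alpine:latest demo echo "hello from containerd"

    # Drop one level lower and use runc, the low-level OCI runtime.
    mkdir -p bundle/rootfs
    # Populate the rootfs from any OCI-formatted image (docker is used here only for export).
    docker export "$(docker create alpine)" | tar -C bundle/rootfs -xf -
    cd bundle
    runc spec            # generates a default OCI config.json
    sudo runc run demo   # creates and runs the container described by config.json

Both paths start the same kind of container; what changes as you move down the stack is how much tooling convenience you give up.
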
張 旭

Speeding up Docker image build process of a Rails application | BigBinary Blog - 1 views

  • we do not want to execute bundle install and rake assets:precompile tasks while starting a container in each pod, since that would prevent the pod from accepting any requests until these tasks are finished.
  • Instead, run bundle install and rake assets:precompile while (or before) containerizing the Rails application.
  • Kubernetes pulls the image, starts a Docker container using that image inside the pod, and runs the puma server immediately.
  • Since source code changes often, the previously cached layer for the ADD instruction is invalidated due to the mismatching checksums.
  • The ARG instruction in the Dockerfile defines the RAILS_ENV variable, which is implicitly available as an environment variable to the instructions that follow it.
  • RUN instructions are used to install gems and precompile static assets using sprockets
  • Instead, Docker automatically re-uses the previously built layer for the RUN bundle install instruction if the Gemfile.lock file remains unchanged.
  • every day we need to build a lot of Docker images containing source code from varying Git branches and for varying environments.
  • it is hard for Docker to cache layers for the bundle install and rake assets:precompile tasks and re-use those layers on every docker build run with different application source code and a different environment.
  • By default, Bundler installs gems at the location which is set by Rubygems.
  • "we do not want to execute bundle install and rake assets:precompile tasks while starting a container in each pod which would prevent that pod from accepting any requests until these tasks are finished."
張 旭

Kubernetes Deployments: The Ultimate Guide - Semaphore - 1 views

  • Continuous integration gives you confidence in your code. To extend that confidence to the release process, your deployment operations need to come with a safety belt.
  • these Kubernetes objects ensure that you can progressively deploy, roll back and scale your applications without downtime.
  • A pod is just a group of containers (it can be a group of one container) that run on the same machine, and share a few things together.
  • the containers within a pod can communicate with each other over localhost
  • From a network perspective, all the processes in these containers are local.
  • we can never create a standalone container: the closest we can do is create a pod, with a single container in it.
  • Kubernetes is a declarative system (as opposed to an imperative system).
  • All we can do is describe what we want to have, and wait for Kubernetes to take action to reconcile what we have with what we want to have.
  • In other words, we can say, "I would like a 40-foot-long blue container with yellow doors", and Kubernetes will find such a container for us. If it doesn't exist, it will build it; if there is already one but it's green with red doors, it will paint it for us; if there is already a container of the right size and color, Kubernetes will do nothing, since what we have already matches what we want.
  • The specification of a replica set looks very much like the specification of a pod, except that it carries a number, indicating how many replicas
  • What happens if we change that definition? Suddenly, there are zero pods matching the new specification.
  • the creation of new pods could happen in a more gradual manner.
  • the specification for a deployment looks very much like the one for a replica set: it features a pod specification, and a number of replicas.
  • Deployments, however, don’t create or delete pods directly.
  • When we update a deployment and adjust the number of replicas, it passes that update down to the replica set.
  • When we update the pod specification, the deployment creates a new replica set with the updated pod specification. That replica set has an initial size of zero. Then, the size of that replica set is progressively increased, while decreasing the size of the other replica set.
  • we are going to fade in (turn up the volume) on the new replica set, while we fade out (turn down the volume) on the old one.
  • During the whole process, requests are sent to pods of both the old and new replica sets, without any downtime for our users.
  • A readiness probe is a test that we add to a container specification.
  • Kubernetes supports three ways of implementing readiness probes: running a command inside a container; making an HTTP(S) request against a container; or opening a TCP socket against a container.
  • When we roll out a new version, Kubernetes will wait for the new pod to mark itself as “ready” before moving on to the next one.
  • If there is no readiness probe, then the container is considered ready as soon as it has started.
  • MaxSurge indicates how many extra pods we are willing to run during a rolling update, while MaxUnavailable indicates how many pods we can lose during the rolling update.
  • Setting MaxUnavailable to 0 means, "do not shut down any old pod before a new one is up and ready to serve traffic".
  • Setting MaxSurge to 100% means, "immediately start all the new pods", implying that we have enough spare capacity on our cluster, and that we want to go as fast as possible.
  • kubectl rollout undo deployment web
  • the replica set doesn’t look at the pods’ specifications, but only at their labels.
  • A replica set contains a selector, which is a logical expression that “selects” (just like a SELECT query in SQL) a number of pods.
  • it is absolutely possible to manually create pods carrying these labels but running a different image (or different settings), and thereby fool our replica set.
  • Selectors are also used by services, which act as the load balancers for Kubernetes traffic, internal and external.
  • internal IP address (denoted by the name ClusterIP)
  • during a rollout, the deployment doesn't reconfigure or inform the load balancer that pods are started and stopped. That happens automatically through the selector of the service associated with the load balancer.
  • a pod is added as a valid endpoint for a service only if all its containers pass their readiness check. In other words, a pod starts receiving traffic only once it’s actually ready for it.
  • In blue/green deployment, we want to instantly switch over all the traffic from the old version to the new, instead of doing it progressively
  • We can achieve blue/green deployment by creating multiple deployments (in the Kubernetes sense), and then switching from one to another by changing the selector of our service
  • kubectl label pods -l app=blue,version=v1.5 status=enabled
  • kubectl label pods -l app=blue,version=v1.4 status-
  • "Continuous integration gives you confidence in your code. To extend that confidence to the release process, your deployment operations need to come with a safety belt."
張 旭

podman/rootless.md at master · containers/podman - 0 views

  • Podman cannot create containers that bind to ports < 1024
  • If /etc/subuid and /etc/subgid are not set up for a user, then podman commands can easily fail
  • Fedora 31 defaults to cgroup V2, which has full support of rootless cgroup management.
  • Some system unit configuration options do not work in the rootless container
  • it's better to create an override.conf drop-in that sets PrivateNetwork=no
  • Difficult to use additional stores for sharing content
  • Cannot use the overlayfs driver, but does support fuse-overlayfs
  • No CNI Support
  • Making device nodes within a container fails, even when running --privileged.
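
A short sketch of the usual rootless setup, assuming a shadow-utils version that provides the --add-subuids/--add-subgids options; the ID range is illustrative:

    # Give the user subordinate UID/GID ranges (required for rootless podman).
    sudo usermod --add-subuids 100000-165535 --add-subgids 100000-165535 "$USER"

    # Have podman pick up the new mappings.
    podman system migrate

    # Rootless containers can publish high ports; binding a port < 1024 here would fail.
    podman run --rm -p 8080:80 docker.io/library/nginx:latest
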
張 旭

mvn clean install - a short guide to Maven - 0 views

  • An equivalent in other languages would be JavaScript's npm, Ruby's gems, or PHP's composer.
  • Maven expects a certain directory structure for your Java source code to live in, and when you later run mvn clean install, the whole compilation and packaging work will be done for you.
  • any directory that contains a pom.xml file is also a valid Maven project.
  • A pom.xml file contains everything needed to describe your Java project.
  • Java source code is meant to live in the "/src/main/java" folder
  • Maven will put compiled Java classes into the "target/classes" folder
  • Maven will also build a .jar or .war file, depending on your project, that lives in the "target" folder.
  • Maven has the concept of a build lifecycle, which is made up of different phases.
  • Because clean is not part of Maven's default lifecycle, you end up with commands like mvn clean install or mvn clean package. Install or package will trigger all preceding phases, but you need to specify clean in addition.
  • Maven will always download your project dependencies into your local maven repository first and then reference them for your build.
  • local repositories (in your user’s home directory: ~/.m2/)
  • clean: deletes the /target folder.
  • mvn clean package
  • mvn clean install
  • package: Converts your .java source code into a .jar/.war file and puts it into the /target folder.
  • install: First, it does a package(!). Then it takes that .jar/.war file and puts it into your local Maven repository, which lives in ~/.m2/repository.
  • calling 'mvn install' would be enough if Maven were smart enough to do reliable, incremental builds.
  • figuring out which Java source files/modules changed and compiling only those.
  • developers have it ingrained to always call 'mvn clean install' (even though this increases build time a lot in bigger projects).
  • make sure that Maven always tries to download the latest snapshot dependency versions
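
The commands discussed above, side by side (the -U flag is Maven's documented shorthand for --update-snapshots):

    # Delete target/, then compile, test, and package into target/.
    mvn clean package

    # Same as above, plus copy the artifact into the local repository (~/.m2/repository).
    mvn clean install

    # Force Maven to re-check remote repositories for newer SNAPSHOT versions.
    mvn clean install -U
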
張 旭

Deploy Replica Set With Keyfile Authentication - MongoDB Manual - 0 views

  • Keyfiles are bare-minimum forms of security and are best suited for testing or development environments.
  • With keyfile authentication, each mongod instance in the replica set uses the contents of the keyfile as the shared password for authenticating the other members of the deployment.
  • On UNIX systems, the keyfile must not have group or world permissions.
  • Copy the keyfile to each server hosting the replica set members.
  • Ensure the user running the mongod instances is the owner of the file and can access the keyfile.
  • For each member in the replica set, start the mongod with either the security.keyFile configuration file setting or the --keyFile command-line option.
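
A condensed sketch of those steps, using the keyfile generation command the MongoDB manual suggests; the path, owner account, and hostname are illustrative:

    # Generate a keyfile (any base64 string of 6-1024 characters works).
    openssl rand -base64 756 | sudo tee /etc/mongod.keyfile >/dev/null

    # No group or world permissions; owned by the user that runs mongod
    # (the account name varies by distribution).
    sudo chmod 400 /etc/mongod.keyfile
    sudo chown mongodb:mongodb /etc/mongod.keyfile

    # Start each replica set member with the keyfile and the replica set name.
    mongod --keyFile /etc/mongod.keyfile --replSet rs0 --bind_ip localhost,mongo1.example.net
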
張 旭

Introducing the MinIO Operator and Operator Console - 0 views

  • Object-storage-as-a-service is a game changer for IT.
  • provision multi-tenant object storage as a service.
  • have the skill set to create, deploy, tune, scale and manage modern, application oriented object storage using Kubernetes
  • MinIO is purpose-built to take full advantage of the Kubernetes architecture.
  • MinIO and Kubernetes work together to simplify infrastructure management, providing a way to manage object storage infrastructure within the Kubernetes toolset.  
  • The operator pattern extends Kubernetes's familiar declarative API model with custom resource definitions (CRDs) to perform common operations like resource orchestration, non-disruptive upgrades, cluster expansion and to maintain high-availability
  • The Operator builds on the kubectl command set that the Kubernetes community is already familiar with, and adds the kubectl minio plugin. The MinIO Operator and the MinIO kubectl plugin facilitate the deployment and management of MinIO Object Storage on Kubernetes - which is how multi-tenant object storage as a service is delivered.
  • choosing a leader for a distributed application without an internal member election process
  • The Operator Console makes Kubernetes object storage easier still. In this graphical user interface, MinIO created something so simple that anyone in the organization can create, deploy and manage object storage as a service.
  • The primary unit of managing MinIO on Kubernetes is the tenant.
  • The MinIO Operator can allocate multiple tenants within the same Kubernetes cluster.
  • Each tenant, in turn, can have a different capacity (e.g. a small 500GB tenant vs. a 100TB tenant), different resources (1000m CPU and 4Gi RAM vs. 4000m CPU and 16Gi RAM) and different server counts (4 pods vs. 16 pods), as well as separate configurations for identity providers, encryption, and versions.
  • each tenant is a cluster of server pools (independent sets of nodes with their own compute, network, and storage resources), that, while sharing the same physical infrastructure, are fully isolated from each other in their own namespaces.
  • Each tenant runs its own MinIO cluster, fully isolated from other tenants
  • Each tenant scales independently by federating clusters across geographies.
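
A sketch of how a tenant is created with the kubectl minio plugin; the command names follow MinIO's documentation, but treat the sizes, names, and flags here as illustrative and verify them against the current plugin:

    # Install the MinIO Operator into the cluster.
    kubectl minio init

    # Create an isolated tenant in its own namespace.
    kubectl create namespace tenant-a
    kubectl minio tenant create tenant-a \
      --servers 4 \
      --volumes 16 \
      --capacity 16Ti \
      --namespace tenant-a
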
張 旭

Helm | Getting Started - 0 views

  • The templates/ directory is for template files. When Helm evaluates a chart, it will send all of the files in the templates/ directory through the template rendering engine. It then collects the results of those templates and sends them on to Kubernetes.
  • The charts/ directory may contain other charts (which we call subcharts).
  • we recommend using the suffix .yaml for YAML files and .tpl for helpers.
  • The helm get manifest command takes a release name (full-coral) and prints out all of the Kubernetes resources that were uploaded to the server.
  • Each file begins with --- to indicate the start of a YAML document, and then is followed by an automatically generated comment line that tells us what template file generated this YAML document.
  • The name: field is limited to 63 characters because of limitations of the DNS system.
  • The template directive {{ .Release.Name }} injects the release name into the template. The values that are passed into a template can be thought of as namespaced objects, where a dot (.) separates each namespaced element.
  • The leading dot before Release indicates that we start with the top-most namespace for this scope
  • helm install --debug --dry-run goodly-guppy ./mychart. This will render the templates. But instead of installing the chart, it will return the rendered template to you
  • Using --dry-run will make it easier to test your code, but it won't ensure that Kubernetes itself will accept the templates you generate.
  • It's best not to assume that your chart will install just because --dry-run works.
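
A minimal walk-through of the workflow above, using the template directive and the dry-run flow the annotations mention (the configmap template mirrors the one in Helm's getting-started guide; the release name is illustrative):

    helm create mychart

    # A template that injects the release name, as described above.
    cat > mychart/templates/configmap.yaml <<'EOF'
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: {{ .Release.Name }}-configmap
    data:
      myvalue: "Hello World"
    EOF

    # Render the templates locally without installing anything.
    helm install --debug --dry-run goodly-guppy ./mychart

    # After a real install, print what was actually sent to the cluster.
    helm get manifest goodly-guppy
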
張 旭

Docker image building on GitLab CI | $AYMDEV() - 0 views

  • Continuous Integration (or CI) is a practice where you continuously test an application to detect errors as soon as possible.
  • Since Docker is a container technology, many CI tools execute jobs (the tasks of a pipeline) in containers to get an isolated environment.
  • Docker in Docker (« DinD » in short) means executing Docker in a Docker container.
  • images are saved in the host registry, so we can benefit from Docker layer caching
  • All jobs will share the same environment, if many of them run simultaneously they might get into conflicts.
  • storage management (accumulating images)
  • The Docker socket binding technique means mounting /var/run/docker.sock as a volume shared between the host and containers.
  • all containers would share the same Docker daemon.
  • Add privileged = true in the [runners.docker] section, the privileged mode is mandatory to use DinD.
  • To allow the runner to run more than one job at a time, increase the concurrent value on the first line.
  • To avoid building a Docker image at each job, it can be built in a first job, pushed to the image registry provided by GitLab, and pulled in the next jobs.
  • functional tests depending on a database.
  • Docker Compose allows you to easily start multiple containers, but it has no more features than Docker itself
  • Docker in Docker works well, but has its drawbacks, like Docker layer caching, which needs some extra commands to be usable.
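
An illustrative .gitlab-ci.yml for the build-once, reuse-later pattern described above, using GitLab's built-in CI_REGISTRY* variables; the image tags and the test script are assumptions:

    cat > .gitlab-ci.yml <<'EOF'
    stages:
      - build
      - test

    build-image:
      stage: build
      image: docker:24
      services:
        - docker:24-dind        # requires the privileged runner config described above
      script:
        - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
        - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
        - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

    test:
      stage: test
      image: "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"   # reuse the image built above
      script:
        - ./run-tests.sh        # hypothetical test entrypoint
    EOF
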
張 旭

Creating a cluster with kubeadm | Kubernetes - 0 views

  • (Recommended) If you have plans to upgrade this single control-plane kubeadm cluster to high availability, you should specify the --control-plane-endpoint to set the shared endpoint for all control-plane nodes
  • set the --pod-network-cidr to a provider-specific value.
  • kubeadm tries to detect the container runtime by using a list of well known endpoints.
  • kubeadm uses the network interface associated with the default gateway to set the advertise address for this particular control-plane node's API server. To use a different network interface, specify the --apiserver-advertise-address=<ip-address> argument to kubeadm init
  • Do not share the admin.conf file with anyone and instead grant users custom permissions by generating them a kubeconfig file using the kubeadm kubeconfig user command.
  • The token is used for mutual authentication between the control-plane node and the joining nodes. The token included here is secret. Keep it safe, because anyone with this token can add authenticated nodes to your cluster.
  • You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.
  • Take care that your Pod network does not overlap with any of the host networks
  • Make sure that your Pod network plugin supports RBAC, and so do any manifests that you use to deploy it.
  • You can install only one Pod network per cluster.
  • The cluster created here has a single control-plane node, with a single etcd database running on it.
  • The node-role.kubernetes.io/control-plane label is such a restricted label and kubeadm manually applies it using a privileged client after a node has been created.
  • By default, your cluster will not schedule Pods on the control plane nodes for security reasons.
  • kubectl taint nodes --all node-role.kubernetes.io/control-plane-
  • remove the node-role.kubernetes.io/control-plane:NoSchedule taint from any nodes that have it, including the control plane nodes, meaning that the scheduler will then be able to schedule Pods everywhere.
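
A condensed sketch of the flow these annotations describe; the endpoint, CIDR, and CNI manifest URL are illustrative and must match your environment and chosen network plugin:

    # Initialize the control plane with a shared endpoint and a pod CIDR.
    sudo kubeadm init \
      --control-plane-endpoint "k8s-api.example.com:6443" \
      --pod-network-cidr "10.244.0.0/16"

    # Give a regular user kubectl access (instead of sharing admin.conf).
    mkdir -p "$HOME/.kube"
    sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
    sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"

    # Deploy exactly one CNI pod network add-on; Flannel shown as one example
    # (verify the manifest URL against the plugin's own docs).
    kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

    # On each worker node, join with the token printed by kubeadm init:
    # sudo kubeadm join k8s-api.example.com:6443 --token <token> \
    #   --discovery-token-ca-cert-hash sha256:<hash>
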
張 旭

Installing kubeadm | Kubernetes - 0 views

  • Swap disabled. You MUST disable swap in order for the kubelet to work properly.
  • The product_uuid can be checked by using the command sudo cat /sys/class/dmi/id/product_uuid
  • some virtual machines may have identical values.
  • Kubernetes uses these values to uniquely identify the nodes in the cluster.
  • Make sure that the br_netfilter module is loaded.
  • you should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config,
  • kubeadm will not install or manage kubelet or kubectl for you, so you will need to ensure they match the version of the Kubernetes control plane you want kubeadm to install for you.
  • one minor version skew between the kubelet and the control plane is supported, but the kubelet version may never exceed the API server version.
  • Both the container runtime and the kubelet have a property called "cgroup driver", which is important for the management of cgroups on Linux machines.
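
The pre-flight checks above map to a handful of commands (as given in the Kubernetes docs; run as root):

    swapoff -a                            # disable swap for the current boot
    # ...and comment out swap entries in /etc/fstab to make it permanent.

    cat /sys/class/dmi/id/product_uuid    # confirm the node's UUID is unique

    modprobe br_netfilter                 # load the bridge netfilter module
    cat > /etc/sysctl.d/k8s.conf <<'EOF'
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sysctl --system                       # apply the sysctl setting
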
張 旭

Ephemeral Containers | Kubernetes - 0 views

  • a special type of container that runs temporarily in an existing Pod to accomplish user-initiated actions such as troubleshooting.
  • you cannot add a container to a Pod once it has been created. Instead, you usually delete and replace Pods in a controlled fashion using deployments.
  • you can run an ephemeral container in an existing Pod to inspect its state and run arbitrary commands.
  • Ephemeral containers differ from other containers in that they lack guarantees for resources or execution, and they will never be automatically restarted, so they are not appropriate for building applications.
  • Ephemeral containers are created using a special ephemeralcontainers handler in the API rather than by adding them directly to pod.spec, so it's not possible to add an ephemeral container using kubectl edit
  • distroless images enable you to deploy minimal container images that reduce attack surface and exposure to bugs and vulnerabilities.
  • enable process namespace sharing so you can view processes in other containers.
  • "a special type of container that runs temporarily in an existing Pod to accomplish user-initiated actions such as troubleshooting."
張 旭

chaifeng/ufw-docker: To fix the Docker and UFW security flaw without disabling iptables - 0 views

  • It requires disabling Docker's iptables function first, but this also means giving up Docker's network management features.
  • This leaves containers unable to access the external network.
  • such as -A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE. But this only allows containers on the 172.17.0.0/16 network to access the outside.
  • No need to disable Docker's iptables; let Docker manage its own network.
  • The public network cannot access ports published by Docker.
  • A very convenient way to allow/deny public-network access to container ports, without additional software or extra configuration
  • Enable Docker's iptables feature. Remove all changes like --iptables=false, including those in the configuration file /etc/docker/daemon.json
  • Modify the UFW configuration file /etc/ufw/after.rules
  • For unknown reasons, the UFW rules may not take effect after restarting UFW; if that happens, reboot the server.
  • If we publish a port by using option -p 8080:80, we should use the container port 80, not the host port 8080
  • allow the private networks to reach each other.
  • The following rules block connection requests initiated by all public networks, but allow internal networks to access external networks.
  • Since the UDP protocol is stateless, it is not possible to block the handshake signal that initiates the connection request as TCP does.
  • For GNU/Linux we can find the local port range in the file /proc/sys/net/ipv4/ip_local_port_range. The default range is 32768 60999
  • It not only exposes ports of containers but also exposes ports of the host.
  • Cannot expose services running on hosts and containers at the same time with the same command.
  • "It requires disabling Docker's iptables function first, but this also means giving up Docker's network management features."
張 旭

kube-proxy | Kubernetes - 0 views

  • The Kubernetes network proxy runs on each node. This reflects services as defined in the Kubernetes API on each node and can do simple TCP, UDP, and SCTP stream forwarding or round robin TCP, UDP, and SCTP forwarding across a set of backends.
  • Service cluster IPs and ports are currently found through Docker-links-compatible environment variables specifying ports opened by the service proxy.
  • "The Kubernetes network proxy runs on each node. This reflects services as defined in the Kubernetes API on each node and can do simple TCP, UDP, and SCTP stream forwarding or round robin TCP, UDP, and SCTP forwarding across a set of backends."