Group items tagged: daemon

張 旭

phusion/baseimage-docker - 1 views

    • 張 旭
       
      By default, plain Docker runs the given COMMAND as PID 1; when that command exits, the container stops, and no other daemons get to run. Baseimage-docker solves this problem.
    • crazylion lee
       
      Awesome.
  • docker exec
  • Through SSH
  • docker exec -t -i YOUR-CONTAINER-ID bash -l
  • Login to the container
  • Baseimage-docker only advocates running multiple OS processes inside a single container.
  • Password and challenge-response authentication are disabled by default. Only key authentication is allowed.
  • A tool for running a command as another user
  • The Docker developers advocate the philosophy of running a single logical service per container. A logical service can consist of multiple OS processes.
  • All syslog messages are forwarded to "docker logs".
  • Baseimage-docker advocates running multiple OS processes inside a single container, and a single logical service can consist of multiple OS processes.
  • Baseimage-docker provides tools to encourage running processes as different users
  • sometimes it makes sense to run multiple services in a single container, and sometimes it doesn't.
  • Splitting your logical service into multiple OS processes also makes sense from a security standpoint.
  • using environment variables to pass parameters to containers is very much the "Docker way"
  • Baseimage-docker provides a facility to run a single one-shot command, while solving all of the aforementioned problems
  • the shell script must run the daemon without letting it daemonize/fork it.
  • All executable scripts in /etc/my_init.d, if this directory exists. The scripts are run in lexicographic order.
  • variables will also be passed to all child processes
  • Environment variables on Unix are inherited on a per-process basis
  • there is no good central place for defining environment variables for all applications and services
  • centrally defining environment variables
  • One of the ideas behind Docker is that containers should be stateless, easily restartable, and behave like a black box.
  • a one-shot command in a new container
  • immediately exit after the command exits,
  • However the downside of this approach is that the init system is not started. That is, while invoking COMMAND, important daemons such as cron and syslog are not running. Also, orphaned child processes are not properly reaped, because COMMAND is PID 1.
  • add additional daemons (e.g. your own app) to the image by creating runit entries.
  • Nginx is one such example: it removes all environment variables unless you explicitly instruct it to retain them through the env configuration option.
  • Mechanisms for easily running multiple processes, without violating the Docker philosophy
  • Ubuntu is not designed to be run inside Docker
  • According to the Unix process model, the init process -- PID 1 -- inherits all orphaned child processes and must reap them
  • Syslog-ng seems to be much more stable
  • cron daemon
  • Rotates and compresses logs
  • /sbin/setuser
  • A tool for installing apt packages that automatically cleans up after itself.
  • a single logical service inside a single container
  • A daemon is a program which runs in the background of its system, such as a web server.
  • The shell script must be called run, must be executable, and is to be placed in the directory /etc/service/<NAME>. runsv will switch to the directory and invoke ./run after your container starts.
  • If any script exits with a non-zero exit code, the booting will fail.
  • If your process is started with a shell script, make sure you exec the actual process, otherwise the shell will receive the signal and not your process.
  • any environment variables set with docker run --env or with the ENV command in the Dockerfile, will be picked up by my_init
  • not possible for a child process to change the environment variables of other processes
  • they will not see the environment variables that were originally passed by Docker.
  • We ignore HOME, SHELL, USER and a bunch of other environment variables on purpose, because not ignoring them will break multi-user containers.
  • my_init imports environment variables from the directory /etc/container_environment
  • /etc/container_environment.sh - a dump of the environment variables in Bash format.
  • modify the environment variables in my_init (and therefore the environment variables in all child processes that are spawned after that point in time), by altering the files in /etc/container_environment
  • my_init only activates changes in /etc/container_environment when running startup scripts
  • environment variables don't contain sensitive data, then you can also relax the permissions
  • Syslog messages are forwarded to the console
  • syslog-ng is started separately before the runit supervisor process, and shutdown after runit exits.
  • RUN apt-get update && apt-get upgrade -y -o Dpkg::Options::="--force-confold"
  • /sbin/my_init --skip-startup-files --quiet --
  • By default, no keys are installed, so nobody can login
  • provide a pregenerated, insecure key (PuTTY format)
  • RUN /usr/sbin/enable_insecure_key
  • docker run YOUR_IMAGE /sbin/my_init --enable-insecure-key
  • RUN cat /tmp/your_key.pub >> /root/.ssh/authorized_keys && rm -f /tmp/your_key.pub
  • The default baseimage-docker installs syslog-ng, cron and sshd services during the build process
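
A minimal sketch of the runit entry the annotations above describe: the script must be named run, be executable, live under /etc/service/<NAME>, and exec the daemon in the foreground. The service name "myapp", the "app" user, and the daemon's flags are placeholders, not part of baseimage-docker.

```sh
# Hypothetical runit entry for a daemon called "myapp".
mkdir -p /etc/service/myapp
cat > /etc/service/myapp/run <<'EOF'
#!/bin/sh
# exec replaces the shell so the daemon receives signals directly;
# the daemon must stay in the foreground (no daemonize/fork).
exec /sbin/setuser app /usr/bin/myapp --foreground >> /var/log/myapp.log 2>&1
EOF
chmod +x /etc/service/myapp/run
```

A one-shot command, by contrast, goes through my_init so the init system still runs, e.g. docker run YOUR_IMAGE /sbin/my_init --skip-startup-files --quiet -- ls -l (quoted above).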
張 旭

Run the Docker daemon as a non-root user (Rootless mode) | Docker Documentation - 0 views

  • running the Docker daemon and containers as a non-root user
  • Rootless mode does not require root privileges even during the installation of the Docker daemon
  • Rootless mode executes the Docker daemon and containers inside a user namespace.
  • in rootless mode, both the daemon and the container are running without root privileges.
  • Rootless mode does not use binaries with SETUID bits or file capabilities, except newuidmap and newgidmap, which are needed to allow multiple UIDs/GIDs to be used in the user namespace.
  • expose privileged ports (< 1024)
  • add net.ipv4.ip_unprivileged_port_start=0 to /etc/sysctl.conf (or /etc/sysctl.d) and run sudo sysctl --system
  • dockerd-rootless.sh uses slirp4netns (if installed) or VPNKit as the network stack by default.
  • These network stacks run in userspace and might have performance overhead
  • This error occurs when the number of available entries in /etc/subuid or /etc/subgid is not sufficient.
  • This error occurs mostly when the host is running in cgroup v2. See the section Fedora 31 or later for information on switching the host to use cgroup v1.
  • --net=host doesn't listen on ports in the host network namespace. This is expected behavior, as the daemon is namespaced inside RootlessKit's network namespace. Use docker run -p instead.
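
A hedged sketch of turning these notes into a working rootless setup on a systemd-based host; the setup script name and paths follow current Docker packaging but may differ by version and distribution.

```sh
# Check that the current user has enough subordinate UID/GID entries
# (the "not sufficient" error above appears when these are missing):
grep "^$(whoami):" /etc/subuid /etc/subgid

# Install and start the rootless daemon for this user:
dockerd-rootless-setuptool.sh install
systemctl --user enable --now docker

# Point the client at the rootless daemon's socket:
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock

# Allow binding privileged ports (< 1024) without root, as noted above:
echo 'net.ipv4.ip_unprivileged_port_start=0' | sudo tee /etc/sysctl.d/99-rootless.conf
sudo sysctl --system

# --net=host does not expose ports on the host namespace; publish them explicitly:
docker run -d -p 8080:80 nginx
```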
張 旭

Baseimage-docker: A minimal Ubuntu base image modified for Docker-friendliness - 0 views

  • We encourage you to use multiple processes.
  • Baseimage-docker is a special Docker image that is configured for correct use within Docker containers.
  • When your Docker container starts, only the CMD command is run.
  • You're not running them, you're only running your app.
  • You have Ubuntu installed in Docker. The files are there. But that doesn't mean Ubuntu's running as it should.
  • The only processes that will be running inside the container are the CMD command and all processes that it spawns.
  • A proper Unix system should run all kinds of important system services.
  • Ubuntu is not designed to be run inside Docker
  • When a system is started, the first process in the system is called the init process, with PID 1. The system halts when this process halts.
  • Runit (written in C) is much lighter weight than supervisord (written in Python).
  • Docker runs fine with multiple processes in a container.
  • Baseimage-docker encourages you to run multiple processes through the use of runit.
  • If your init process is your app, then it'll probably only shut down itself, not all the other processes in the container.
  • a Docker container, which is a locked down environment with e.g. no direct access to many kernel resources.
  • Used for service supervision and management.
  • A custom tool for running a command as another user.
  • add additional daemons (e.g. your own app) to the image by creating runit entries.
  • write a small shell script which runs your daemon, and runit will keep it up and running for you, restarting it when it crashes, etc.
  • the shell script must run the daemon without letting it daemonize/fork it.
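
A small sketch of opting into this behaviour in your own image: boot my_init as PID 1 so runit, cron and syslog come up, and add your app as a runit entry. The tag and the myapp.run file are assumptions, not project defaults.

```sh
cat > Dockerfile <<'EOF'
# Tag is an example; pick a current baseimage-docker release.
FROM phusion/baseimage:focal-1.2.0
# my_init as PID 1 starts the init system and supervises runit services.
CMD ["/sbin/my_init"]
# Add your own daemon as a runit entry (see the earlier run-script sketch).
COPY myapp.run /etc/service/myapp/run
RUN chmod +x /etc/service/myapp/run
EOF
```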
crazylion lee

GitHub - etsy/statsd: Daemon for easy but powerful stats aggregation - 0 views

  •  
    "Daemon for easy but powerful stats aggregation"
張 旭

phusion/passenger-docker: Docker base images for Ruby, Python, Node.js and Meteor web apps - 0 views

  • Ubuntu 20.04 LTS as base system
  • 2.7.5 is configured as the default.
  • Python 3.8
  • A build system, git, and development headers for many popular libraries, so that the most popular Ruby, Python and Node.js native extensions can be compiled without problems.
  • Nginx 1.18. Disabled by default
  • production-grade features, such as process monitoring, administration and status inspection.
  • Redis 5.0. Not installed by default.
  • The image has an app user with UID 9999 and home directory /home/app. Your application is supposed to run as this user.
  • running applications without root privileges is good security practice.
  • Your application should be placed inside /home/app.
  • COPY --chown=app:app
  • Passenger works like a mod_ruby, mod_nodejs, etc. It changes Nginx into an application server and runs your app from Nginx.
  • placing a .conf file in the directory /etc/nginx/sites-enabled
  • The best way to configure Nginx is by adding .conf files to /etc/nginx/main.d and /etc/nginx/conf.d
  • files in conf.d are included in the Nginx configuration's http context.
  • any environment variables you set with docker run -e, Docker linking and /etc/container_environment, won't reach Nginx.
  • To preserve these variables, place an Nginx config file ending with *.conf in the directory /etc/nginx/main.d, in which you tell Nginx to preserve these variables.
  • By default, Phusion Passenger sets all of the following environment variables to the value production
  • Setting these environment variables yourself (e.g. using docker run -e RAILS_ENV=...) will not have any effect, because Phusion Passenger overrides all of these environment variables.
  • PASSENGER_APP_ENV environment variable
  • passenger-docker autogenerates an Nginx configuration file (/etc/nginx/conf.d/00_app_env.conf) during container boot.
  • The configuration file is in /etc/redis/redis.conf. Modify it as you see fit, but make sure daemonize no is set.
  • You can add additional daemons to the image by creating runit entries.
  • The shell script must be called run, must be executable
  • the shell script must run the daemon without letting it daemonize/fork it.
  • We use RVM to install and to manage Ruby interpreters.
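
A hedged sketch of the two configuration points quoted above for passenger-docker: enabling Nginx (shipped disabled) and telling it to preserve environment variables. MY_SETTING and webapp.conf are placeholder names.

```sh
cat >> Dockerfile <<'EOF'
# Enable Nginx + Passenger (they are disabled by default).
RUN rm -f /etc/service/nginx/down

# Preserve selected environment variables for Nginx via /etc/nginx/main.d.
RUN printf 'env MY_SETTING;\nenv PASSENGER_APP_ENV;\n' > /etc/nginx/main.d/0_envvars.conf

# The virtual host goes into sites-enabled as a .conf file.
COPY webapp.conf /etc/nginx/sites-enabled/webapp.conf
EOF
```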
張 旭

GitLab Auto DevOps in depth: automatic deployment, without even a config file?! | 5xRuby (五倍紅寶石) Professional Programming Education - 0 views

  • A K8S cluster; Auto DevOps will deploy the site to this cluster
  • A wildcard DNS record is needed so that sites deployed to this environment get a domain name
  • A GitLab Runner that can run Docker; it will be the one executing the CI/CD pipeline.
  • Auto DevOps is really just an officially maintained gitlab-ci.yml: in a project with Auto DevOps enabled, if no gitlab-ci.yml file is found, the official gitlab-ci.yml is used to run the CI/CD pipeline.
  • A Pod is the smallest deployable unit in K8S; a Pod is made up of one or more containers, and containers in the same Pod share network resources.
  • Every Pod has its own yaml file describing the image the Pod uses, the ports it connects, and so on.
  • Nodes come in two kinds: Worker Nodes and Master Nodes
  • Helm works with parameters and templates, letting us reuse a template by changing only its parameters.
  • To get CI/CD we put .gitlab-ci.yml in the project root; GitLab generates the CI/CD pipeline from that file. Each pipeline may contain multiple jobs, and a GitLab Runner is needed to execute those jobs and report the results back to GitLab so it knows whether each job ran successfully.
  • Packaging the project into a Docker image, as well as the helm operations, are all executed inside containers
  • A CI/CD pipeline is made up of stages and jobs; stages are ordered, and the next stage starts only after the previous one finishes.
  • Each stage contains one or more jobs
  • Auto DevOps also makes heavy use of this kind of job that runs inside a specified container.
  • so that it can pass health checks
  • If the project is private, also watch out for permission issues when using the Container Registry
  • the wildcard DNS you applied for
  • Auto DevOps also provides options that allow a certain degree of customization just by setting environment variables
  • Pay special attention to whether the namespace is set correctly, otherwise the data won't be found
  • With Auto DevOps, if you want further customization that cannot be achieved by changing GitLab environment variables, you still have to go back to the .gitlab-ci.yml file
  • Build the image from a Dockerfile in a Docker-in-Docker environment
  • Deploy the chart to K8S with helm upgrade
  • GitLab CI environment variables come mainly from three sources, in priority from high to low: variables defined in the Settings > CI/CD UI, environment variables defined in gitlab_ci.yml, and GitLab's default environment variables
  • To package the project as a Docker image, first add a Dockerfile to the project
  • The Auto DevOps approach is to package the project with the image provided by herokuish
  • There is no docker command available in the Runner's environment, so a Docker container is started here and the work runs inside it, where the docker command can be used.
  • $CI_COMMIT_SHA and $CI_COMMIT_BEFORE_SHA are both GitLab default environment variables, holding the SHA of this commit and of the previous commit.
  • dind starts the docker daemon directly; in addition, dind also generates TLS certificates automatically
  • To run Docker inside a Docker container, the Docker API on the host is shared with the container.
  • docker:stable has the executables needed to run docker, and it also contains the program that starts docker (the docker daemon), but the container's entrypoint is sh
  • docker:dind inherits from docker:stable, its entrypoint is the script that starts docker, and it also finishes setting up the TLS certificates
  • The container needs to connect to the Docker API on the host, but the failing connection was looking for http://docker:2375. Here dind is no longer being used as a service; Docker is run directly inside it, so the connection should go through unix:///var/run/docker.sock. Changing the DOCKER_HOST environment variable from tcp://docker:2375 to an empty string lets the docker client use the default connection, and it works!
  • auto-deploy preparation: helm init to set up the helm project, run tiller in the background, and set the cluster's namespace
  • auto-deploy deploy: use helm upgrade to deploy the chart to K8S, passing --set to supply the parameters injected into the template
  • set -x, so that each command is shown before it is executed.
  • Use helm repo list to see which Chart Repositories are currently registered
  • helm fetch gitlab/auto-deploy-app --untar
  • nohup lets a job keep running even after you disconnect or log out of the system
  • If CI_APPLICATION_REPOSITORY is not explicitly set, image_repository defaults to CI_REGISTRY_IMAGE/CI_COMMIT_REF_SLUG, built from default environment variables
  • A:-B (i.e. ${A:-B} in shell) means use A if it is set, otherwise use B
  • The hardest part of studying Auto DevOps is that so many tools are integrated together; it's hard to figure out how they relate to each other, and when something goes wrong you don't know where to start looking
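
A hedged sketch of the two key steps the article walks through: building the image in a Docker-in-Docker job and deploying with helm upgrade. The CI_* variables are GitLab defaults quoted above; "myapp" and the chart parameter names are assumptions to adapt.

```sh
# Inside a docker:stable job container (with a docker:dind service, or with
# DOCKER_HOST cleared so the client uses unix:///var/run/docker.sock):
docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
docker build -t "$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHA" .
docker push "$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHA"

# Deploy stage: fetch the official chart and apply it with --set parameters.
helm fetch gitlab/auto-deploy-app --untar
helm upgrade --install myapp gitlab/auto-deploy-app \
  --namespace "$KUBE_NAMESPACE" \
  --set image.repository="$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG" \
  --set image.tag="$CI_COMMIT_SHA"
```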
crazylion lee

coreos/torus: Torus Distributed Storage - 1 views

  •  
    "Torus is an open source project for distributed storage coordinated through etcd. Torus provides a resource pool and basic file primitives from a set of daemons running atop multiple nodes. These primitives are made consistent by being append-only and coordinated by etcd. From these primitives, a Torus server can support multiple types of volumes, the semantics of which can be broken into subprojects. It ships with a simple block-device volume plugin, but is extensible to more."
張 旭

How To Install and Use Docker: Getting Started | DigitalOcean - 0 views

  • docker as a project offers you the complete set of higher-level tools to carry everything that forms an application across systems and machines - virtual or physical - and brings along loads more of great benefits with it
  • docker daemon: used to manage docker (LXC) containers on the host it runs on
  • docker CLI: used to command and communicate with the docker daemon
  • containers: directories containing everything-your-application
  • images: snapshots of containers or base OS (e.g. Ubuntu) images
  • Dockerfiles: scripts automating the building process of images
  • Docker containers are basically directories which can be packed (e.g. tar-archived) like any other, then shared and run across various different machines and platforms (hosts).
  • Linux Containers can be defined as a combination of various kernel-level features (i.e. things that the Linux kernel can do) which allow management of applications (and the resources they use) contained within their own environment
  • Each container is layered like an onion and each action taken within a container consists of putting another block (which actually translates to a simple change within the file system) on top of the previous one.
  • Each docker container starts from a docker image which forms the base for other applications and layers to come.
  • Docker images constitute the base of docker containers from which everything starts to form
  • a solid, consistent and dependable base with everything that is needed to run the applications
  • As more layers (tools, applications etc.) are added on top of the base, new images can be formed by committing these changes.
  • a Dockerfile for automated image building
  • Dockerfiles are scripts containing a successive series of instructions, directions, and commands which are to be executed to form a new docker image.
  • As you work with a container and continue to perform actions on it (e.g. download and install software, configure files etc.), to have it keep its state, you need to “commit”.
  • Please remember to “commit” all your changes.
  • When you "run" any process using an image, in return, you will have a container.
  • When the process is not actively running, this container will be a non-running container. Nonetheless, all of them will reside on your system until you remove them via rm command.
  • To create a new container, you need to use a base image and specify a command to run.
  • you can not change the command you run after having created a container (hence specifying one during "creation")
  • If you would like to save the progress and changes you made with a container, you can use “commit”
  • turns your container to an image
張 旭

The Twelve-Factor App - 0 views

  • PHP processes run as child processes of Apache, started on demand as needed by request volume.
  • Java processes take the opposite approach, with the JVM providing one massive uberprocess that reserves a large block of system resources (CPU and memory) on startup, with concurrency managed internally via threads
  • Processes in the twelve-factor app take strong cues from the unix process model for running service daemons.
  • application must also be able to span multiple processes running on multiple physical machines.
  • The array of process types and number of processes of each type is known as the process formation.
  • Twelve-factor app processes should never daemonize or write PID files.
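
As a tiny illustration of the "never daemonize" rule, a twelve-factor process stays in the foreground and lets the platform's process manager handle backgrounding and supervision; nginx here is just an example daemon.

```sh
# Run in the foreground; systemd, runit, or the dyno manager supervises it.
exec nginx -g 'daemon off;'
```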
張 旭

Exploring the right way to use Docker bridge, personally tested! | DaoCloud - 1 views

  • Docker bridge and Linux bridge look identical at first glance, but on closer inspection they are quite different
  • In Linux bridge mode, the Linux kernel creates a virtual bridge to enable communication between the host's network interface and virtual network interfaces
  • A Linux bridge is like a virtual switch
  • The Docker daemon creates a virtual bridge named docker0 to connect the host with containers, or to connect different containers to each other
  • The veth pair mechanism guarantees that whichever veth end receives a network packet, the packet is passed unconditionally to the other end
  • In bridge mode, the Docker daemon attaches veth0 to the docker0 bridge, ensuring that packets from the host can be sent to veth0.
  • veth1 is added to the network namespace that the Docker container belongs to, ensuring that a host packet sent to veth0 is immediately received by veth1
  • NAPT includes two kinds of translation: SNAT and DNAT
  • Destination NAT (DNAT): rewrites the destination address of a packet
  • The container's IP and ports are not visible to the outside
  • The packet's destination address is the host's IP and port
  • The packet is sent to the veth0 interface attached to the docker0 bridge; veth0 then sends the packet to the veth1 interface inside the container, and the container receives the packet and responds
  • Source NAT (SNAT): rewrites the source address of a packet
  • When the docker0 bridge on the host sees that a packet's destination is an external IP and port, it forwards the packet to eth0 and sends it out from there. Because of the SNAT rule, the packet's source address is rewritten to the host's IP and port
  • The Docker container is invisible to the outside world
  • A veth pair is a way for different network namespaces to communicate with each other: a veth pair sends data from one network namespace to the veth in another network namespace
  • A network namespace isolates network resources (/proc/net, IP addresses, network interfaces, routes, etc.)
  • NAT is short for Network Address Translation
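
A hedged sketch of commands for observing the docker0 bridge, the veth pairs, and the SNAT/DNAT rules described above; interface and chain names are Docker defaults, and output varies by host.

```sh
ip link show docker0                    # the virtual bridge created by the Docker daemon
ip link show type veth                  # host-side veth ends attached to docker0
sudo iptables -t nat -L POSTROUTING -n  # SNAT/MASQUERADE rule for outbound container traffic
sudo iptables -t nat -L DOCKER -n       # DNAT rules created by published ports
docker run -d -p 8080:80 nginx          # adds a DNAT rule: host port 8080 -> container port 80
```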
張 旭

The differences between Docker, containerd, CRI-O and runc - Tutorial Works - 0 views

  • Docker isn’t the only container contender on the block.
  • Container Runtime Interface (CRI), which defines an API between Kubernetes and the container runtime
  • Open Container Initiative (OCI) which publishes specifications for images and containers.
  • for a lot of people, the name “Docker” itself is synonymous with the word “container”.
  • Docker created a very ergonomic (nice-to-use) tool for working with containers – also called docker.
  • docker is designed to be installed on a workstation or server and comes with a bunch of tools to make it easy to build and run containers as a developer, or DevOps person.
  • containerd: This is a daemon process that manages and runs containers.
  • runc: This is the low-level container runtime (the thing that actually creates and runs containers).
  • libcontainer, a native Go-based implementation for creating containers.
  • Kubernetes includes a component called dockershim, which allows it to support Docker.
  • Kubernetes prefers to run containers through any container runtime which supports its Container Runtime Interface (CRI).
  • Kubernetes will remove support for Docker directly, and prefer to use only container runtimes that implement its Container Runtime Interface.
  • Both containerd and CRI-O can run Docker-formatted (actually OCI-formatted) images, they just do it without having to use the docker command or the Docker daemon.
  • Docker images, are actually images packaged in the Open Container Initiative (OCI) format.
  • CRI is the API that Kubernetes uses to control the different runtimes that create and manage containers.
  • CRI makes it easier for Kubernetes to use different container runtimes
  • containerd is a high-level container runtime that came from Docker, and implements the CRI spec
  • containerd was separated out of the Docker project, to make Docker more modular.
  • CRI-O is another high-level container runtime which implements the Container Runtime Interface (CRI).
  • The idea behind the OCI is that you can choose between different runtimes which conform to the spec.
  • runc is an OCI-compatible container runtime.
  • A reference implementation is a piece of software that has implemented all the requirements of a specification or standard.
  • runc provides all of the low-level functionality for containers, interacting with existing low-level Linux features, like namespaces and control groups.
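
A hedged sketch of the same OCI content being handled at two layers of the stack; it assumes containerd's ctr tool and runc are installed and run with root privileges.

```sh
# High-level runtime (containerd): pulls and runs an OCI image directly.
ctr images pull docker.io/library/alpine:latest
ctr run --rm docker.io/library/alpine:latest demo echo "hello from containerd"

# Low-level runtime (runc): runs an unpacked OCI bundle instead of a registry image.
mkdir -p bundle/rootfs        # ...populate rootfs from an image beforehand...
(cd bundle && runc spec)      # generates a default config.json for the bundle
runc run -b bundle demo2      # creates the container using namespaces and cgroups
```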
張 旭

Monitor Node Health | Kubernetes - 0 views

  • Node Problem Detector is a daemon for monitoring and reporting about a node's health
  • Node Problem Detector collects information about node problems from various daemons and reports these conditions to the API server as NodeCondition and Event.
  • Node Problem Detector only supports file based kernel log. Log tools such as journald are not supported.
  • kubectl provides the most flexible management of Node Problem Detector.
  • run the Node Problem Detector in your cluster to monitor node health.
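
A hedged sketch of the kubectl route mentioned above; the manifest URL follows the Kubernetes documentation example and the label selector is an assumption.

```sh
kubectl apply -f https://k8s.io/examples/debug/node-problem-detector.yaml
kubectl get pods -n kube-system -l app=node-problem-detector   # label may differ
# Problems surface as NodeConditions and Events on the affected node:
kubectl describe node <node-name>
kubectl get events --field-selector involvedObject.kind=Node
```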
張 旭

Logstash Alternatives: Pros & Cons of 5 Log Shippers [2019] - Sematext - 0 views

  • In this case, Elasticsearch. And because Elasticsearch can be down or struggling, or the network can be down, the shipper would ideally be able to buffer and retry
  • Logstash is typically used for collecting, parsing, and storing logs for future use as part of log management.
  • Logstash's biggest con or "Achilles' heel" has always been performance and resource consumption (the default heap size is 1GB).
  • This can be a problem for high traffic deployments, when Logstash servers would need to be comparable with the Elasticsearch ones.
  • Filebeat was made to be that lightweight log shipper that pushes to Logstash or Elasticsearch.
  • differences between Logstash and Filebeat are that Logstash has more functionality, while Filebeat takes less resources.
  • Filebeat is just a tiny binary with no dependencies.
  • For example, how aggressive it should be in searching for new files to tail and when to close file handles when a file didn’t get changes for a while.
  • For example, the apache module will point Filebeat to default access.log and error.log paths
  • Filebeat’s scope is very limited,
  • Initially it could only send logs to Logstash and Elasticsearch, but now it can send to Kafka and Redis, and in 5.x it also gains filtering capabilities.
  • Filebeat can parse JSON
  • you can push directly from Filebeat to Elasticsearch, and have Elasticsearch do both parsing and storing.
  • You shouldn’t need a buffer when tailing files because, just as Logstash, Filebeat remembers where it left off
  • For larger deployments, you’d typically use Kafka as a queue instead, because Filebeat can talk to Kafka as well
  • The default syslog daemon on most Linux distros, rsyslog can do so much more than just picking logs from the syslog socket and writing to /var/log/messages.
  • It can tail files, parse them, buffer (on disk and in memory) and ship to a number of destinations, including Elasticsearch.
  • rsyslog is the fastest shipper
  • Its grammar-based parsing module (mmnormalize) works at constant speed no matter the number of rules (we tested this claim).
  • use it as a simple router/shipper, any decent machine will be limited by network bandwidth
  • It’s also one of the lightest parsers you can find, depending on the configured memory buffers.
  • rsyslog requires more work to get the configuration right
  • the main difference between Logstash and rsyslog is that Logstash is easier to use while rsyslog is lighter.
  • rsyslog fits well in scenarios where you either need something very light yet capable (an appliance, a small VM, collecting syslog from within a Docker container).
  • rsyslog also works well when you need that ultimate performance.
  • syslog-ng as an alternative to rsyslog (though historically it was actually the other way around).
  • a modular syslog daemon, that can do much more than just syslog
  • Unlike rsyslog, it features a clear, consistent configuration format and has nice documentation.
  • Similarly to rsyslog, you’d probably want to deploy syslog-ng on boxes where resources are tight, yet you do want to perform potentially complex processing.
  • syslog-ng has an easier, more polished feel than rsyslog, but likely not that ultimate performance
  • Fluentd was built on the idea of logging in JSON wherever possible (which is a practice we totally agree with) so that log shippers down the line don’t have to guess which substring is which field of which type.
  • Fluentd plugins are in Ruby and very easy to write.
  • structured data through Fluentd, it’s not made to have the flexibility of other shippers on this list (Filebeat excluded).
  • Fluent Bit, which is to Fluentd what Filebeat is to Logstash.
  • Fluentd is a good fit when you have diverse or exotic sources and destinations for your logs, because of the number of plugins.
  • Splunk isn’t a log shipper, it’s a commercial logging solution
  • Graylog is another complete logging solution, an open-source alternative to Splunk.
  • everything goes through graylog-server, from authentication to queries.
  • Graylog is nice because you have a complete logging solution, but it’s going to be harder to customize than an ELK stack.
  • it depends
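
For the Filebeat-to-Elasticsearch path discussed above, a minimal configuration sketch; paths, the host, and option names reflect the Filebeat 5.x to 7.x era and are assumptions to adapt.

```sh
cat > filebeat.yml <<'EOF'
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/*.log
    json.keys_under_root: true      # let Filebeat parse JSON log lines itself
output.elasticsearch:
  hosts: ["localhost:9200"]         # ship directly; an ingest pipeline can parse further
EOF
filebeat -e -c filebeat.yml
```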
張 旭

The Rails Command Line - Ruby on Rails Guides - 0 views

  • rake --tasks
  • Think of destroy as the opposite of generate.
  • runner runs Ruby code in the context of Rails non-interactively
  • rails dbconsole figures out which database you're using and drops you into whichever command line interface you would use with it
  • The console command lets you interact with your Rails application from the command line. On the underside, rails console uses IRB
  • rake about gives information about version numbers for Ruby, RubyGems, Rails, the Rails subcomponents, your application's folder, the current Rails environment name, your app's database adapter, and schema version
  • You can precompile the assets in app/assets using rake assets:precompile and remove those compiled assets using rake assets:clean.
  • rake db:version is useful when troubleshooting
  • The doc: namespace has the tools to generate documentation for your app, API documentation, guides.
  • rake notes will search through your code for comments beginning with FIXME, OPTIMIZE or TODO.
  • You can also use custom annotations in your code and list them using rake notes:custom by specifying the annotation using an environment variable ANNOTATION.
  • rake routes will list all of your defined routes, which is useful for tracking down routing problems in your app, or giving you a good overview of the URLs in an app you're trying to get familiar with.
  • rake secret will give you a pseudo-random key to use for your session secret.
  • Custom rake tasks have a .rake extension and are placed in Rails.root/lib/tasks.
  • rails new . --git --database=postgresql
  • All commands can run with -h or --help to list more information
  • The rails server command launches a small web server named WEBrick which comes bundled with Ruby
  • rails server -e production -p 4000
  • You can run a server as a daemon by passing a -d option
  • The rails generate command uses templates to create a whole lot of things.
  • Using generators will save you a large amount of time by writing boilerplate code, code that is necessary for the app to work.
  • All Rails console utilities have help text.
  • generate controller ControllerName action1 action2.
  • With a normal, plain-old Rails application, your URLs will generally follow the pattern of http://(host)/(controller)/(action), and a URL like http://(host)/(controller) will hit the index action of that controller.
  • A scaffold in Rails is a full set of model, database migration for that model, controller to manipulate it, views to view and manipulate the data, and a test suite for each of the above.
  • Unit tests are code that tests and makes assertions about code.
  • Unit tests are your friend.
  • rails console --sandbox
  • rails db
  • Each task has a description, and should help you find the thing you need.
  • rake tmp:clear clears all the three: cache, sessions and sockets.
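
A few of the commands from the guide gathered into one session, run from the application root; ANNOTATION=WTF is an arbitrary custom-annotation example.

```sh
rails server -e production -p 4000 -d   # run the server as a daemon in production mode
rails generate controller Greetings hello
rake routes                             # list all defined routes
rake notes                              # list FIXME, OPTIMIZE and TODO comments
ANNOTATION=WTF rake notes:custom        # list a custom annotation via an env variable
rails console --sandbox                 # any changes are rolled back on exit
rake tmp:clear                          # clears cache, sessions and sockets
```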
張 旭

How to Use Docker on OS X: The Missing Guide | Viget - 0 views

  • Docker is a client-server application.
  • The Docker server is a daemon that does all the heavy lifting: building and downloading images, starting and stopping containers, and the like. It exposes a REST API for remote management.
  • The Docker client is a command line program that communicates with the Docker server using the REST API.
  • interact with Docker by using the client to send commands to the server.
  • The machine running the Docker server is called the Docker host
  • Since Docker uses features only available to Linux, that machine must be running Linux (more specifically, the Linux kernel).
  • boot2docker is a “lightweight Linux distribution made specifically to run Docker containers.”
  • Docker server will run inside our boot2docker VM
  • boot2docker, not OS X, is the Docker host
  • Docker mounts volumes from the boot2docker VM, not from OS X
  • initialize boot2docker (we only have to do this once):
  • The Docker client assumes the Docker host is the current machine. We need to tell it to use our boot2docker VM by setting the DOCKER_HOST environment variable
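
A hedged sketch of the boot2docker workflow the article describes; boot2docker is legacy tooling (Docker Desktop replaced it), and the port and paths match that era.

```sh
boot2docker init                        # one-time: create the Linux VM that hosts the daemon
boot2docker up                          # start the VM; the Docker server runs inside it
export DOCKER_HOST=tcp://$(boot2docker ip 2>/dev/null):2375
docker info                             # the OS X client now talks to the daemon in the VM
docker run -v /vm/path:/data busybox ls /data   # volumes come from the VM, not from OS X
```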
張 旭

Best practices for writing Dockerfiles | Docker Documentation - 0 views

  • building efficient images
  • Docker builds images automatically by reading the instructions from a Dockerfile -- a text file that contains all commands, in order, needed to build a given image.
  • A Docker image consists of read-only layers each of which represents a Dockerfile instruction.
  • The layers are stacked and each one is a delta of the changes from the previous layer
  • When you run an image and generate a container, you add a new writable layer (the “container layer”) on top of the underlying layers.
  • By “ephemeral,” we mean that the container can be stopped and destroyed, then rebuilt and replaced with an absolute minimum set up and configuration.
  • Inadvertently including files that are not necessary for building an image results in a larger build context and larger image size.
  • To exclude files not relevant to the build (without restructuring your source repository) use a .dockerignore file. This file supports exclusion patterns similar to .gitignore files.
  • minimize image layers by leveraging build cache.
  • if your build contains several layers, you can order them from the less frequently changed (to ensure the build cache is reusable) to the more frequently changed
  • avoid installing extra or unnecessary packages just because they might be “nice to have.”
  • Each container should have only one concern.
  • Decoupling applications into multiple containers makes it easier to scale horizontally and reuse containers
  • Limiting each container to one process is a good rule of thumb, but it is not a hard and fast rule.
  • Use your best judgment to keep containers as clean and modular as possible.
  • do multi-stage builds and only copy the artifacts you need into the final image. This allows you to include tools and debug information in your intermediate build stages without increasing the size of the final image.
  • avoid duplication of packages and make the list much easier to update.
  • When building an image, Docker steps through the instructions in your Dockerfile, executing each in the order specified.
  • the next instruction is compared against all child images derived from that base image to see if one of them was built using the exact same instruction. If not, the cache is invalidated.
  • simply comparing the instruction in the Dockerfile with one of the child images is sufficient.
  • For the ADD and COPY instructions, the contents of the file(s) in the image are examined and a checksum is calculated for each file.
  • If anything has changed in the file(s), such as the contents and metadata, then the cache is invalidated.
  • cache checking does not look at the files in the container to determine a cache match.
  • In that case just the command string itself is used to find a match.
    • 張 旭
       
      For an instruction like RUN apt-get, this means the instruction string itself is compared directly.
  • Whenever possible, use current official repositories as the basis for your images.
  • Using RUN apt-get update && apt-get install -y ensures your Dockerfile installs the latest package versions with no further coding or manual intervention.
  • cache busting
  • Docker executes these commands using the /bin/sh -c interpreter, which only evaluates the exit code of the last operation in the pipe to determine success.
  • set -o pipefail && to ensure that an unexpected error prevents the build from inadvertently succeeding.
  • The CMD instruction should be used to run the software contained by your image, along with any arguments.
  • CMD should almost always be used in the form of CMD [“executable”, “param1”, “param2”…]
  • CMD should rarely be used in the manner of CMD [“param”, “param”] in conjunction with ENTRYPOINT
  • The ENV instruction is also useful for providing required environment variables specific to services you wish to containerize,
  • Each ENV line creates a new intermediate layer, just like RUN commands
  • COPY is preferred
  • COPY only supports the basic copying of local files into the container
  • the best use for ADD is local tar file auto-extraction into the image, as in ADD rootfs.tar.xz /
  • If you have multiple Dockerfile steps that use different files from your context, COPY them individually, rather than all at once.
  • using ADD to fetch packages from remote URLs is strongly discouraged; you should use curl or wget instead
  • The best use for ENTRYPOINT is to set the image’s main command, allowing that image to be run as though it was that command (and then use CMD as the default flags).
  • the image name can double as a reference to the binary as shown in the command
  • The VOLUME instruction should be used to expose any database storage area, configuration storage, or files/folders created by your docker container.
  • use VOLUME for any mutable and/or user-serviceable parts of your image
  • If you absolutely need functionality similar to sudo (such as initializing the daemon as root but running it as non-root), consider using “gosu”.
  • always use absolute paths for your WORKDIR
  • An ONBUILD command executes after the current Dockerfile build completes.
  • Think of the ONBUILD command as an instruction the parent Dockerfile gives to the child Dockerfile
  • A Docker build executes ONBUILD commands before any command in a child Dockerfile.
  • Be careful when putting ADD or COPY in ONBUILD. The “onbuild” image fails catastrophically if the new build’s context is missing the resource being added.
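
A hedged sketch that pulls several of these practices into one Dockerfile: multi-stage build, rarely-changed steps first for cache reuse, apt-get update and install in a single RUN, pipefail, and an exec-form CMD. The Go application layout is an assumed example.

```sh
cat > Dockerfile <<'EOF'
# Build stage: compilers and debug tooling stay here, never in the final image.
FROM golang:1.21 AS build
WORKDIR /src
# Copy the least frequently changed files first so the dependency layer stays cached.
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app ./cmd/app

# Final stage: only the built artifact is copied in.
FROM debian:bookworm-slim
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/*
COPY --from=build /out/app /usr/local/bin/app
CMD ["app"]
EOF
echo '.git/' > .dockerignore   # keep irrelevant files out of the build context
```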
張 旭

Queues - Laravel - The PHP Framework For Web Artisans - 0 views

  • Laravel queues provide a unified API across a variety of different queue backends, such as Beanstalk, Amazon SQS, Redis, or even a relational database.
  • The queue configuration file is stored in config/queue.php
  • a synchronous driver that will execute jobs immediately (for local use)
  • A null queue driver is also included which discards queued jobs.
  • In your config/queue.php configuration file, there is a connections configuration option.
  • any given queue connection may have multiple "queues" which may be thought of as different stacks or piles of queued jobs.
  • each connection configuration example in the queue configuration file contains a queue attribute.
  • if you dispatch a job without explicitly defining which queue it should be dispatched to, the job will be placed on the queue that is defined in the queue attribute of the connection configuration
  • pushing jobs to multiple queues can be especially useful for applications that wish to prioritize or segment how jobs are processed
  • specify which queues it should process by priority.
  • If your Redis queue connection uses a Redis Cluster, your queue names must contain a key hash tag.
  • ensure all of the Redis keys for a given queue are placed into the same hash slot
  • all of the queueable jobs for your application are stored in the app/Jobs directory.
  • Job classes are very simple, normally containing only a handle method which is called when the job is processed by the queue.
  • we were able to pass an Eloquent model directly into the queued job's constructor. Because of the SerializesModels trait that the job is using, Eloquent models will be gracefully serialized and unserialized when the job is processing.
  • When the job is actually handled, the queue system will automatically re-retrieve the full model instance from the database.
  • The handle method is called when the job is processed by the queue
  • The arguments passed to the dispatch method will be given to the job's constructor
  • delay the execution of a queued job, you may use the delay method when dispatching a job.
  • dispatch a job immediately (synchronously), you may use the dispatchNow method.
  • When using this method, the job will not be queued and will be run immediately within the current process
  • specify a list of queued jobs that should be run in sequence.
  • Deleting jobs using the $this->delete() method will not prevent chained jobs from being processed. The chain will only stop executing if a job in the chain fails.
  • this does not push jobs to different queue "connections" as defined by your queue configuration file, but only to specific queues within a single connection.
  • To specify the queue, use the onQueue method when dispatching the job
  • To specify the connection, use the onConnection method when dispatching the job
  • defining the maximum number of attempts on the job class itself.
  • to defining how many times a job may be attempted before it fails, you may define a time at which the job should timeout.
  • using the funnel method, you may limit jobs of a given type to only be processed by one worker at a time
  • using the throttle method, you may throttle a given type of job to only run 10 times every 60 seconds.
  • If an exception is thrown while the job is being processed, the job will automatically be released back onto the queue so it may be attempted again.
  • dispatch a Closure. This is great for quick, simple tasks that need to be executed outside of the current request cycle
  • When dispatching Closures to the queue, the Closure's code contents is cryptographically signed so it can not be modified in transit.
  • Laravel includes a queue worker that will process new jobs as they are pushed onto the queue.
  • once the queue:work command has started, it will continue to run until it is manually stopped or you close your terminal
  • queue workers are long-lived processes and store the booted application state in memory.
  • they will not notice changes in your code base after they have been started.
  • during your deployment process, be sure to restart your queue workers.
  • customize your queue worker even further by only processing particular queues for a given connection
  • The --once option may be used to instruct the worker to only process a single job from the queue
  • The --stop-when-empty option may be used to instruct the worker to process all jobs and then exit gracefully.
  • Daemon queue workers do not "reboot" the framework before processing each job.
  • you should free any heavy resources after each job completes.
  • Since queue workers are long-lived processes, they will not pick up changes to your code without being restarted.
  • restart the workers during your deployment process.
  • php artisan queue:restart
  • The queue uses the cache to store restart signals
  • Because the queue workers will die when the queue:restart command is executed, you should be running a process manager such as Supervisor to automatically restart the queue workers.
  • each queue connection defines a retry_after option. This option specifies how many seconds the queue connection should wait before retrying a job that is being processed.
  • The --timeout option specifies how long the Laravel queue master process will wait before killing off a child queue worker that is processing a job.
  • When jobs are available on the queue, the worker will keep processing jobs with no delay in between them.
  • While sleeping, the worker will not process any new jobs - the jobs will be processed after the worker wakes up again
  • the numprocs directive will instruct Supervisor to run 8 queue:work processes and monitor all of them, automatically restarting them if they fail.
  • Laravel includes a convenient way to specify the maximum number of times a job should be attempted.
  • define a failed method directly on your job class, allowing you to perform job specific clean-up when a failure occurs.
  • a great opportunity to notify your team via email or Slack.
  • php artisan queue:retry all
  • php artisan queue:flush
  • When injecting an Eloquent model into a job, it is automatically serialized before being placed on the queue and restored when the job is processed
張 旭

Upgrading kubeadm clusters | Kubernetes - 0 views

  • Swap must be disabled.
  • read the release notes carefully.
  • back up any important components, such as app-level state stored in a database.
  • All containers are restarted after upgrade, because the container spec hash value is changed.
  • The upgrade procedure on control plane nodes should be executed one node at a time.
  • /etc/kubernetes/admin.conf
  • kubeadm upgrade also automatically renews the certificates that it manages on this node. To opt-out of certificate renewal the flag --certificate-renewal=false can be used.
  • Manually upgrade your CNI provider plugin.
  • sudo systemctl daemon-reload sudo systemctl restart kubelet
  • If kubeadm upgrade fails and does not roll back, for example because of an unexpected shutdown during execution, you can run kubeadm upgrade again.
  • To recover from a bad state, you can also run kubeadm upgrade apply --force without changing the version that your cluster is running.
  • kubeadm-backup-etcd contains a backup of the local etcd member data for this control plane Node.
  • the contents of this folder can be manually restored in /var/lib/etcd
  • kubeadm-backup-manifests contains a backup of the static Pod manifest files for this control plane Node.
  • the contents of this folder can be manually restored in /etc/kubernetes/manifests
  • Enforces the version skew policies.
  • Upgrades the control plane components, or rolls back if any of them fails to come up.
  • Creates new certificate and key files of the API server and backs up old files if they're about to expire in 180 days.
  • backup folders under /etc/kubernetes/tmp
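
A brief sketch of the node-by-node flow these highlights describe; the version number is an example, and the kubelet package upgrade step differs by distribution.

```sh
# First control plane node:
kubeadm upgrade plan
sudo kubeadm upgrade apply v1.28.4      # target version is an example
# Remaining control plane nodes, one at a time:
sudo kubeadm upgrade node
# On every node, after upgrading the kubelet package:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
# If the upgrade was interrupted, kubeadm upgrade can be run again, or the same
# version re-applied with --force to recover from a bad state.
```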
張 旭

Gracefully Shutdown Docker Container - Kakashi's Blog - 1 views

  • The initial idea is to have the application invoke the destructor of each component as soon as it receives specific signals such as SIGTERM and SIGINT
  • When you run a docker container, by default it has a PID namespace, which means the docker process is isolated from other processes on your host.
  • The PID namespace has an important task to reap zombie processes.
  • This uses /bin/bash as PID1 and runs your program as the subprocess.
  • When a signal is sent to a shell, the signal actually won’t be forwarded to subprocesses.
  • By using the exec form, we can run our program as PID1
  • if you use the exec form to run a shell script that spawns your application, remember to use the exec syscall to replace /usr/bin/bash, otherwise it will behave as in scenario 1
  • /bin/bash can handle reaping zombie processes
  • with Tini, SIGTERM properly terminates your process even if you didn’t explicitly install a signal handler for it.
  • run tini as PID1 and it will forward the signal for subprocesses.
  • tini is a signal proxy and it also can deal with zombie process issue automatically.
  • run your program with tini by passing --init flag to docker run
  • use docker stop, docker will wait for 10s for stopping container before killing a process (by default). The main process inside the container will receive SIGTERM, then docker daemon will wait for 10s and send SIGKILL to terminate process.
  • docker kill, by contrast, kills running containers immediately; it's more like kill -9 and kill --SIGKILL
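
A brief sketch of the remedies discussed above: exec so your program becomes PID 1 (or let tini be PID 1), and give docker stop enough time. The image, container and binary names are placeholders.

```sh
# In an entrypoint shell script, replace the shell with the real process so it
# receives SIGTERM directly instead of the shell swallowing it:
#   exec /usr/local/bin/myapp

docker run --init --rm myimage      # --init runs tini as PID 1: signal forwarding + zombie reaping
docker stop -t 30 mycontainer       # SIGTERM, then SIGKILL after 30s (the default wait is 10s)
docker kill mycontainer             # immediate, like kill -9
```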
張 旭

Docker image building on GitLab CI | $AYMDEV() - 0 views

  • Continuous Integration (or CI) is a practice where you continuously test an application to detect errors as soon as possible.
  • Docker is a container technology; many CI tools execute jobs (the tasks of a pipeline) in containers to have an isolated environment.
  • Docker in Docker (« DinD » in short) means executing Docker in a Docker container.
  • Since images are saved in the host registry, we can benefit from Docker layer caching
  • All jobs will share the same environment, if many of them run simultaneously they might get into conflicts.
  • storage management (accumulating images)
  • The Docker socket binding technique means making a volume of /var/run/docker.sock between host and containers.
  • all containers would share the same Docker daemon.
  • Add privileged = true in the [runners.docker] section, the privileged mode is mandatory to use DinD.
  • To keep the runner from running only one job at a time, change the concurrent value on the first line.
  • To avoid building a Docker image at each job, it can be built in a first job, pushed to the image registry provided by GitLab, and pulled in the next jobs.
  • functional tests depending on a database.
  • Docker Compose allows you to easily start multiple containers, but it has no more features than Docker itself
  • Docker in Docker works well, but has its drawbacks, like Docker layer caching which needs some more commands to be used.
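
A hedged sketch of a runner configuration combining the points above: privileged mode for DinD, a higher concurrency value, and the socket-binding alternative left as a comment. The URL and token are placeholders.

```sh
cat > /etc/gitlab-runner/config.toml <<'EOF'
concurrent = 4                       # let the runner execute more than one job at a time
[[runners]]
  name = "docker-builder"
  url = "https://gitlab.example.com"
  token = "RUNNER_TOKEN"
  executor = "docker"
  [runners.docker]
    image = "docker:stable"
    privileged = true                # mandatory for Docker-in-Docker
    # Socket-binding alternative: share the host daemon with all jobs instead of DinD
    # volumes = ["/var/run/docker.sock:/var/run/docker.sock"]
EOF
gitlab-runner restart
```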