Group items tagged ubuntu


crazylion lee

My First 10 Minutes On a Server - Primer for Securing Ubuntu - 0 views

  •  
    "My First 5 Minutes on a Server, by Bryan Kennedy, is an excellent intro into securing a server against most attacks. We have a few modifications to his approach that we wanted to document as part of our efforts of externalizing our processes and best practices. We also wanted to spend a bit more time explaining a few things that younger engineers may benefit from."
張 旭

How To Create a Kubernetes Cluster Using Kubeadm on Ubuntu 18.04 | DigitalOcean - 0 views

  • A pod is an atomic unit that runs one or more containers.
  • Pods are the basic unit of scheduling in Kubernetes: all containers in a pod are guaranteed to run on the same node that the pod is scheduled on.
  • Each pod has its own IP address, and a pod on one node should be able to access a pod on another node using the pod's IP.
  • ...12 more annotations...
  • Communication between pods is more complicated, however, and requires a separate networking component that can transparently route traffic from a pod on one node to a pod on another.
  • pod network plugins. For this cluster, you will use Flannel, a stable and performant option.
  • Passing the argument --pod-network-cidr=10.244.0.0/16 specifies the private subnet that the pod IPs will be assigned from.
  • kubectl apply -f descriptor.[yml|json] is the syntax for telling kubectl to create the objects described in the descriptor.[yml|json] file.
  • deploy Nginx using Deployments and Services
  • A deployment is a type of Kubernetes object that ensures there's always a specified number of pods running based on a defined template, even if the pod crashes during the cluster's lifetime.
  • NodePort, a scheme that will make the pod accessible through an arbitrary port opened on each node of the cluster
  • Services are another type of Kubernetes object that expose cluster internal services to clients, both internal and external.
  • load balancing requests to multiple pods
  • Pods are ubiquitous in Kubernetes, so understanding them will facilitate your work
  • how controllers such as deployments work since they are used frequently in stateless applications for scaling and the automated healing of unhealthy applications.
  • Understanding the types of services and the options they have is essential for running both stateless and stateful applications.
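
  Condensed into shell, the flow these annotations describe looks roughly like the following (run on the master node; the Flannel manifest URL and the nginx example are illustrative, and the tutorial's exact commands and versions may differ):

      # initialize the control plane with the pod subnet Flannel expects
      sudo kubeadm init --pod-network-cidr=10.244.0.0/16

      # install the Flannel pod network plugin from its descriptor file
      kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

      # deploy Nginx and expose it through a NodePort service
      kubectl create deployment nginx --image=nginx
      kubectl expose deployment nginx --port 80 --type NodePort
      kubectl get services   # shows the arbitrary port opened on each node
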
張 旭

Baseimage-docker: A minimal Ubuntu base image modified for Docker-friendliness - 0 views

  • We encourage you to use multiple processes.
  • Baseimage-docker is a special Docker image that is configured for correct use within Docker containers.
  • When your Docker container starts, only the CMD command is run.
  • ...16 more annotations...
  • You're not running them, you're only running your app.
  • You have Ubuntu installed in Docker. The files are there. But that doesn't mean Ubuntu's running as it should.
  • The only processes running inside the container are the CMD command and any processes it spawns.
  • A proper Unix system should run all kinds of important system services.
  • Ubuntu is not designed to be run inside Docker
  • When a system is started, the first process in the system is called the init process, with PID 1. The system halts when this process halts.
  • Runit (written in C) is much lighter weight than supervisord (written in Python).
  • Docker runs fine with multiple processes in a container.
  • Baseimage-docker encourages you to run multiple processes through the use of runit.
  • If your init process is your app, then it'll probably only shut down itself, not all the other processes in the container.
  • a Docker container, which is a locked down environment with e.g. no direct access to many kernel resources.
  • Used for service supervision and management.
  • A custom tool for running a command as another user.
  • add additional daemons (e.g. your own app) to the image by creating runit entries.
  • write a small shell script which runs your daemon, and runit will keep it up and running for you, restarting it when it crashes, etc.
  • the shell script must run the daemon without letting it daemonize/fork it.
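
  A minimal runit entry of the kind described in the last few annotations (the service name and command are placeholders):

      #!/bin/sh
      # saved as /etc/service/myapp/run and marked executable; runsv switches to the
      # directory and invokes ./run, restarting the process whenever it crashes
      # exec the daemon directly, without letting it daemonize/fork
      exec /sbin/setuser app /usr/bin/myapp >> /var/log/myapp.log 2>&1
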
張 旭

Why I Will Never Use Alpine Linux Ever Again | Martin Heinz | Personal Website & Blog - 2 views

  • musl is an implementation of the C standard library. It is more lightweight, faster, and simpler than glibc, which is used by other Linux distros such as Ubuntu.
  • Some of it stems from how musl (and therefore also Alpine) handles DNS (it's always DNS); more specifically, musl (by design) doesn't support DNS-over-TCP.
  • By using Alpine, you're getting "free" chaos engineering for your cluster.
  • ...2 more annotations...
  • this DNS issue does not manifest in a plain Docker container. It can only happen in Kubernetes, so if you test locally everything will work fine, and you will only find out about the unfixable issue when you deploy the application to a cluster.
  • if your application requires CGO_ENABLED=1, you will obviously run into issues with Alpine.
  •  
    "musl is an implementation of C standard library. It is more lightweight, faster and simpler than glibc used by other Linux distros, such as Ubuntu."
張 旭

Moving away from Alpine - DEV Community - 0 views

  • it’s a lot of work to get packages that are not readily available in the Alpine repository.
  • things compiled in Alpine won’t be usable on Ubuntu, for example, and vice versa.
  • the difficulty in pinning package versions in Alpine.
  • ...2 more annotations...
  • Developers rely heavily on app logs via syslog (mounted /dev/log) and Alpine uses busybox syslog by default.
  • Ubuntu officially launched minimal ubuntu images for cloud / container use
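
  For the pinning point above, the syntax on each side looks roughly like this (package versions are placeholders; exact pins on Alpine tend to break once the old version is rotated out of the release branch's index):

      # Alpine
      apk add --no-cache nginx=1.20.2-r1

      # Debian/Ubuntu
      apt-get install -y nginx=1.18.0-0ubuntu1
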
crazylion lee

Security Onion - 0 views

  •  
    "Security Onion is a Linux distro for intrusion detection, network security monitoring, and log management. It's based on Ubuntu and contains Snort, Suricata, Bro, OSSEC, Sguil, Squert, ELSA, Xplico, NetworkMiner, and many other security tools. The easy-to-use Setup wizard allows you to build an army of distributed sensors for your enterprise in minutes!"
張 旭

How To Benchmark HTTP Latency with wrk on Ubuntu 14.04 | DigitalOcean - 0 views

  • wrk, which measures the latency of your HTTP services at high loads.
  • Latency refers to the time interval between the moment the request was made (by wrk) and the moment the response was received (from the service).
  • Tests can't be compared to real users, but they should give you a good estimate of expected latency
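
  A typical invocation looks like this (threads, connections, duration, and the URL are placeholders):

      # 4 threads, 100 open connections, 30 seconds, with the latency distribution printed
      wrk -t4 -c100 -d30s --latency http://your-server.example.com/
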
張 旭

phusion/baseimage-docker - 1 views

    • 張 旭
       
      By default, when the original Docker runs a command, it treats the passed-in COMMAND as the PID 1 process; once that command finishes, the container exits, and no other daemons ever run. Baseimage solves this problem.
    • crazylion lee
       
      Nice work.
  • docker exec
  • Through SSH
  • ...57 more annotations...
  • docker exec -t -i YOUR-CONTAINER-ID bash -l
  • Login to the container
  • Baseimage-docker only advocates running multiple OS processes inside a single container.
  • Password and challenge-response authentication are disabled by default. Only key authentication is allowed.
  • A tool for running a command as another user
  • The Docker developers advocate the philosophy of running a single logical service per container. A logical service can consist of multiple OS processes.
  • All syslog messages are forwarded to "docker logs".
  • Baseimage-docker advocates running multiple OS processes inside a single container, and a single logical service can consist of multiple OS processes.
  • Baseimage-docker provides tools to encourage running processes as different users
  • sometimes it makes sense to run multiple services in a single container, and sometimes it doesn't.
  • Splitting your logical service into multiple OS processes also makes sense from a security standpoint.
  • using environment variables to pass parameters to containers is very much the "Docker way"
  • Baseimage-docker provides a facility to run a single one-shot command, while solving all of the aforementioned problems
  • the shell script must run the daemon without letting it daemonize/fork it.
  • All executable scripts in /etc/my_init.d, if this directory exists. The scripts are run in lexicographic order.
  • variables will also be passed to all child processes
  • Environment variables on Unix are inherited on a per-process basis
  • there is no good central place for defining environment variables for all applications and services
  • centrally defining environment variables
  • One of the ideas behind Docker is that containers should be stateless, easily restartable, and behave like a black box.
  • a one-shot command in a new container
  • immediately exit after the command exits,
  • However the downside of this approach is that the init system is not started. That is, while invoking COMMAND, important daemons such as cron and syslog are not running. Also, orphaned child processes are not properly reaped, because COMMAND is PID 1.
  • add additional daemons (e.g. your own app) to the image by creating runit entries.
  • Nginx is one such example: it removes all environment variables unless you explicitly instruct it to retain them through the env configuration option.
  • Mechanisms for easily running multiple processes, without violating the Docker philosophy
  • Ubuntu is not designed to be run inside Docker
  • According to the Unix process model, the init process -- PID 1 -- inherits all orphaned child processes and must reap them
  • Syslog-ng seems to be much more stable
  • cron daemon
  • Rotates and compresses logs
  • /sbin/setuser
  • A tool for installing apt packages that automatically cleans up after itself.
  • a single logical service inside a single container
  • A daemon is a program which runs in the background of its system, such as a web server.
  • The shell script must be called run, must be executable, and is to be placed in the directory /etc/service/<NAME>. runsv will switch to the directory and invoke ./run after your container starts.
  • If any script exits with a non-zero exit code, the booting will fail.
  • If your process is started with a shell script, make sure you exec the actual process, otherwise the shell will receive the signal and not your process.
  • any environment variables set with docker run --env or with the ENV command in the Dockerfile, will be picked up by my_init
  • not possible for a child process to change the environment variables of other processes
  • they will not see the environment variables that were originally passed by Docker.
  • We ignore HOME, SHELL, USER and a bunch of other environment variables on purpose, because not ignoring them will break multi-user containers.
  • my_init imports environment variables from the directory /etc/container_environment
  • /etc/container_environment.sh - a dump of the environment variables in Bash format.
  • modify the environment variables in my_init (and therefore the environment variables in all child processes that are spawned after that point in time), by altering the files in /etc/container_environment
  • my_init only activates changes in /etc/container_environment when running startup scripts
  • environment variables don't contain sensitive data, then you can also relax the permissions
  • Syslog messages are forwarded to the console
  • syslog-ng is started separately before the runit supervisor process, and shutdown after runit exits.
  • RUN apt-get update && apt-get upgrade -y -o Dpkg::Options::="--force-confold"
  • /sbin/my_init --skip-startup-files --quiet --
  • By default, no keys are installed, so nobody can login
  • provide a pregenerated, insecure key (PuTTY format)
  • RUN /usr/sbin/enable_insecure_key
  • docker run YOUR_IMAGE /sbin/my_init --enable-insecure-key
  • RUN cat /tmp/your_key.pub >> /root/.ssh/authorized_keys && rm -f /tmp/your_key.pub
  • The default baseimage-docker installs syslog-ng, cron and sshd services during the build process
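
  A small sketch tying several of the annotations together (the script name and variable are placeholders): a startup script under /etc/my_init.d that centrally defines an environment variable via /etc/container_environment, which my_init then passes to the child processes it spawns afterwards.

      #!/bin/sh
      # saved as /etc/my_init.d/10_app_env.sh and marked executable;
      # my_init runs scripts in this directory in lexicographic order at boot,
      # and a non-zero exit code here makes the boot fail
      set -e
      echo staging > /etc/container_environment/APP_ENV

  A one-shot command can then be run without the startup scripts, as in the annotation above: docker run YOUR_IMAGE /sbin/my_init --skip-startup-files --quiet -- ls -la
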
張 旭

Pre-Built CircleCI Docker Images - CircleCI - 0 views

  • typically extensions of official Docker images and include tools especially useful for CI/CD.
  • Convenience images are based on the most recently built versions of upstream images, so it is best practice to use the most specific image possible.
  • add -jessie or -stretch to the end of each of those containers to ensure you’re only using that version of the Debian base OS.
  • ...12 more annotations...
  • language images
  • service images
  • All images add a circleci user as a system user
  • A language image should be listed first under the docker key in your configuration, making it the primary container during execution.
  • For example, if you want to add browsers to the circleci/golang:1.9 image, use the circleci/golang:1.9-browsers image.
  • Service images are convenience images for services like databases
  • should be listed after language images so they become secondary service containers.
  • To speed up builds using RAM volume, add the -ram suffix to the end of a service image tag
  • All convenience images have been extended with additional tools.
  • all images include the following packages, installed via apt-get
  • Most CircleCI convenience images are Debian Jessie- or Stretch-based images, however some extend Ubuntu-based images.
  • The following packages are installed via curl
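
  Put together, a hedged .circleci/config.yml fragment using those conventions might look like this (written here as a shell heredoc; the Go and Postgres tags are illustrative):

      mkdir -p .circleci
      cat > .circleci/config.yml <<'EOF'
      version: 2
      jobs:
        build:
          docker:
            # language image first -> primary container
            - image: circleci/golang:1.9-browsers
            # service image after -> secondary container; -ram uses a RAM volume
            - image: circleci/postgres:9.6-alpine-ram
          steps:
            - checkout
            - run: go test ./...
      EOF
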
張 旭

How to Correctly Configure Networking on Ubuntu 18.04 - 运维之美 - 0 views

  •  
    "systemd-resolve --status"
張 旭

How to Set Up Squid Proxy for Private Connections on Ubuntu 20.04 | DigitalOcean - 0 views

  • it’s a good idea to keep the deny all rule at the bottom of this configuration block.
  •  
    "it's a good idea to keep the deny all rule at the bottom of this configuration block."
張 旭

Getting Started with Docker - Servers for Hackers - 0 views

  • Docker is an isolated portion of the host computer, sharing the host kernel (OS) and even its bin/libraries if appropriate.
  • the Docker Container contains the parts that make Ubuntu different from CoreOS.
  • A Docker container only stays alive as long as there is an active process being run in it.
  • ...10 more annotations...
  • Allocate a (pseudo) tty
  • Keep stdin open (so we can interact with it)
  • Docker allows us to make changes to an image, commit those changes, and then push those changes out somewhere.
  • Docker tracks any changes we make to a container
  • The Dockerfile provides a set of instructions for Docker to run on a container.
  • what image (and tag in this case) to base this off of
  • run the given command (as user "root")
  • copy a file from the host machine into the container
  • expose a port to the host machine. You can expose multiple ports
  • run a command
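
  As a hedged illustration of those instructions (the base image, file, and command are placeholders, not the article's exact example):

      # keep stdin open and allocate a pseudo-tty so we can interact with the container
      docker run -t -i ubuntu:18.04 bash

      # a minimal Dockerfile covering the instructions listed above
      cat > Dockerfile <<'EOF'
      # what image (and tag) to base this off of
      FROM ubuntu:18.04
      # run the given command (as user "root")
      RUN apt-get update && apt-get install -y nginx
      # copy a file from the host machine into the container
      COPY nginx.conf /etc/nginx/nginx.conf
      # expose a port to the host machine
      EXPOSE 80
      # run a command when the container starts
      CMD ["nginx", "-g", "daemon off;"]
      EOF
      docker build -t my-nginx .
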
張 旭

How To Install and Use Docker: Getting Started | DigitalOcean - 0 views

  • Docker as a project offers you a complete set of higher-level tools for carrying everything that forms an application across systems and machines - virtual or physical - and brings along many more great benefits with it
  • docker daemon: used to manage docker (LXC) containers on the host it runs on
  • docker CLI: used to command and communicate with the docker daemon
  • ...20 more annotations...
  • containers: directories containing everything-your-application
  • images: snapshots of containers or base OS (e.g. Ubuntu) images
  • Dockerfiles: scripts automating the building process of images
  • Docker containers are basically directories which can be packed (e.g. tar-archived) like any other, then shared and run across various different machines and platforms (hosts).
  • Linux Containers can be defined as a combination of various kernel-level features (i.e. things that the Linux kernel can do) which allow management of applications (and the resources they use) contained within their own environment
  • Each container is layered like an onion and each action taken within a container consists of putting another block (which actually translates to a simple change within the file system) on top of the previous one.
  • Each docker container starts from a docker image which forms the base for other applications and layers to come.
  • Docker images constitute the base of docker containers from which everything starts to form
  • a solid, consistent and dependable base with everything that is needed to run the applications
  • As more layers (tools, applications etc.) are added on top of the base, new images can be formed by committing these changes.
  • a Dockerfile for automated image building
  • Dockerfiles are scripts containing a successive series of instructions, directions, and commands which are to be executed to form a new docker image.
  • As you work with a container and continue to perform actions on it (e.g. download and install software, configure files etc.), to have it keep its state, you need to “commit”.
  • Please remember to “commit” all your changes.
  • When you "run" any process using an image, in return, you will have a container.
  • When the process is not actively running, this container will be a non-running container. Nonetheless, all of them will reside on your system until you remove them via the rm command.
  • To create a new container, you need to use a base image and specify a command to run.
  • you cannot change the command you run after having created a container (hence specifying one during "creation")
  • If you would like to save the progress and changes you made with a container, you can use “commit”
  • turns your container to an image
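
  The run/commit cycle described above, as a short shell sequence (image and repository names are placeholders):

      # start a container from a base image and make changes inside it
      docker run -t -i ubuntu:18.04 bash
      #   ... inside the container: apt-get install -y curl ... then exit

      # Docker tracked those changes; commit them to turn the container into an image
      docker ps -a                                   # find the stopped container's ID
      docker commit CONTAINER_ID myrepo/ubuntu-curl:1.0

      # non-running containers stay on the system until removed
      docker rm CONTAINER_ID
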
張 旭

A Developer's Journey into Linux Containers - 51CTO.COM - 0 views

  • A container is a Linux process; as far as Linux is concerned, it is just another running process. That process only knows what it has been told.
  • The container process is also assigned its own IP address
  • unlike the static allocation of a typical virtual machine. All of this resource sharing is managed by the container manager.
  • ...26 more annotations...
  • You can run commands on the container manager to map the container's IP to a host IP address that can reach the public network. Once that mapping is established, for all intents and purposes the container is an independent, reachable machine on the network, conceptually similar to a virtual machine.
  • A container is an independent Linux process with its own IP address, which makes it identifiable on the network
  • Containers/processes share the host's resources in a dynamic, cooperative way.
  • Containers start up very quickly
  • The operating system is shared by all containers, which reduces duplication and redundancy in each container's footprint. Each container contains only the parts that are unique to it
  • You get the isolation and encapsulation benefits of a virtual machine while dropping the drawback of statically dedicated resources
  • The machine hosting the containers runs some Linux version stripped down to its essential parts
  • Ubuntu Snappy
  • Red Hat Atomic Host
  • CoreOS
  • With containerization, the container process has its own IP address. Once it has an IP address, the process is an identifiable resource on the host network
  • A container component is called a layer
  • A layer is a container image
  • The container manager only supplies the parts of the operating system you need that are not already present in the host operating system
  • Layers are redefined in the container configuration file
  • The various capabilities of a container are controlled by a piece of software called the container manager
  • Docker
  • Rocket
  • Images are stored in a registry, which is accessed over the network
  • A registry is analogous to a Maven repository for someone using Java, or a NuGet server for someone using .NET.
  • The container manager packages everything your application needs into an independent container, which runs on the host machine under the container manager's supervision.
  • Each container has its own IP address
  • Run a cluster of containers behind a load-balancing container for higher performance and highly available computing
  • Deis's container provisioning technology
  • Every time you add an instance to the environment, you don't have to manually configure the load balancer to accept your container image.
  • Use service discovery so the container can tell the balancer that it is available
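
  The host-mapping idea above can be illustrated with Docker's port publishing (image, name, and ports are placeholders): the container keeps its own IP address, and the container manager maps a host port onto it so the container is reachable like an independent machine.

      # publish container port 80 on host port 8080
      docker run -d --name web -p 8080:80 nginx

      # the container's own IP address on the bridge network
      docker inspect -f '{{ .NetworkSettings.IPAddress }}' web

      # reachable through the host (and through the host's public IP, from the network)
      curl http://localhost:8080/
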
張 旭

A Developer's Journey into Linux Containers - Tech ◆ Learning | Linux.中国 Open Source Community - 1 views

  • A container is a Linux process; as far as Linux is concerned, it is just another running process. That process only knows what it has been told.
  • The container process is also assigned its own IP address.
  • With containerization, the container process has its own IP address. Once it has an IP address, the process is an identifiable resource on the host network
  • ...20 more annotations...
  • Map the container's IP to a host IP address that can reach the public network. Once that mapping is established, for all intents and purposes the container is an independent, reachable machine on the network, conceptually similar to a virtual machine.
  • A container is an independent Linux process with its own IP address, which makes it identifiable on the network
  • CPU, memory, and storage are allocated dynamically, unlike the static allocation of a typical virtual machine. All of this resource sharing is managed by the container manager.
  • Containers start up very quickly
  • The machine hosting the containers runs some Linux version stripped down to its essential parts.
  • The operating system is shared by all containers, which reduces duplication and redundancy in each container's footprint. Each container contains only the parts that are unique to it
  • A layer is a container image
  • A container component is called a layer
  • The various capabilities of a container are controlled by a piece of software called the container manager
  • The popular container managers are Docker and Rocket
  • Images are stored in a registry, which is accessed over the network
  • Images represent the container templates your container needs to do its job
  • a container configuration file describing the images your application needs
  • Each container has its own IP address, so it can be placed behind a load balancer. Putting containers behind a load balancer takes things up another level.
  • Deis's container provisioning technology
  • You can deploy one or more container images behind a load balancer on the host
  • Every time you add an instance to the environment, you don't have to manually configure the load balancer to accept your container image. You can use service discovery to let the container tell the balancer that it is available.
  • Host operating systems such as CoreOS, RHEL Atomic, and Ubuntu Snappy
  • combined with container management technologies such as Docker and Rocket
  • and provisioning technologies such as Deis make container creation and deployment much simpler
張 旭

鳥哥的 Linux 私房菜 -- Chapter 1: What Is Linux and How to Learn It - 0 views

  • Linux is just those two layers: the kernel and the system call interface
  • The kernel is very tightly tied to the hardware
  • Linux provides the complete lowest-level architecture of an operating system for hardware control and resource management. This architecture carries on the good traditions of Unix, so it is very stable and powerful
  • ...31 more annotations...
  • The Linux kernel was developed by Linus Torvalds in 1991 and posted on the Internet for anyone to download. People later found this little thing (the Linux kernel) small and elegant, so more and more people gradually got involved in working on it
  • In the early 1960s MIT developed the so-called Compatible Time-Sharing System (CTSS), which let a mainframe provide several terminals that connected into the host, so the host's resources could be used for computation
  • To further strengthen the mainframe's capabilities so that its resources could serve more users, Bell Labs, MIT, and General Electric (GE) jointly launched the Multics project around 1965
  • Using assembly language, he wrote a set of kernel programs, along with some kernel utilities and a tiny filesystem. That system was the prototype of Unix! Thompson had simplified the huge and complex Multics system considerably, so colleagues in the lab jokingly called the system Unics (the name Unix did not exist yet)
  • Every program and system device is a file
  • Whether you are building an editor or an auxiliary program, each program you write has only one purpose, and it should accomplish that goal effectively.
  • Dennis Ritchie rewrote the B language into C, then rewrote and compiled the Unics kernel in C, and finally the system was officially named and released as Unix!
  • Because Unix was written in the relatively high-level C language, whereas assembly has to cooperate closely with the hardware, high-level C is far less tied to the hardware. This change also made Unix very easy to port to different machines!
  • At this time AT&T took a fairly open attitude toward Unix. Moreover, since Unix was written in high-level C, it was portable in theory: as long as you obtained the Unix source code and revised it for the characteristics of your mainframe, you could port Unix to a different machine.
  • After obtaining the Unix kernel source code, Bill Joy at Berkeley set about modifying it into a version suited to his own machines, adding many utilities and compilers along the way, and eventually named it the Berkeley Software Distribution (BSD).
  • Although each company's own Unix was architecturally much the same, each could really only support its own hardware, so early Unix was synonymous with servers or large workstations!
  • In the seventh edition of Unix released in 1979, AT&T specifically imposed the strict restriction that "the source code may not be provided to students"!
  • "Purebred" Unix refers to System V and BSD
  • AT&T's own System V
  • Since the 1979 seventh edition of Unix could be ported to Intel's x86 architecture, did that mean Unix could be rewritten and ported to x86? With this idea, Professor Andrew Tanenbaum went ahead and wrote the Unix-like kernel called Minix himself!
  • "Since an operating system is too complex, I'll start by writing small programs that can run on Unix. That should be acceptable, right?"
  • If he could write a decent compiler, wouldn't that be software everyone needs? So he began writing a C compiler, which became the now rather famous GNU C Compiler (gcc)!
  • He also wrote more callable C libraries (the GNU C library), as well as the BASH shell, the basic interface used to operate the operating system! These were all finished around 1990!
  • In view of the growing demand for a graphical user interface (GUI), MIT and other partners first released the X Window System in 1984, and in 1988 the non-profit XFree86 organization was founded. The name XFree86 is really a combination of X Window System + Free + x86!
  • the Minix system that Professor Tanenbaum wrote for educational purposes! After buying a personal computer with the latest Intel 386, Torvalds immediately installed the Minix operating system. Also, as mentioned in the previous section, Minix came with its source code, so Torvalds learned many kernel programming design concepts from that source code!
  • Torvalds himself said, "I've always been a performance freak" ^_^. To fully exploit the 386's performance, Torvalds spent a lot of time testing the 386 machine! His key test was of the 386's multitasking ability. First, he wrote three small programs: one kept printing A, one kept printing B, and the last one switched between the two. He ran all three at once and saw ABABAB...... appear smoothly on the screen. He knew he had succeeded! ^_^
  • So that all software could run on Linux, Torvalds began following the standard POSIX specification.
  • POSIX stands for Portable Operating System Interface; it focuses on standardizing the interface between the kernel and application programs, and it is a standard published by the IEEE
  • Because the directory on the FTP site where Torvalds placed the kernel was named Linux, from then on everyone called this kernel Linux. (Note that at this point Linux meant just that kernel! Also, the first kernel version Torvalds uploaded to that directory was 0.02!)
  • Linux is really just the lowest-level kernel of an operating system and the kernel tools it provides. It is licensed under the GNU GPL, so anyone can obtain the source code and the executable kernel, and may modify it.
  • Linux follows the POSIX design specification, so it is compatible with the Unix operating system and can therefore also be called a kind of Unix-like system
  • To give users access to Linux, many commercial companies and non-profit groups bundled the Linux kernel (including its tools) with runnable software, plus their own creative utilities that let users install and manage a Linux system directly from CD/DVD or over the network. This "kernel + software + tools + complete installer" bundle is what we call a Linux distribution.
  • The official version of the Linux kernel, version 1.0, was finally completed in 1994, and this version also added support for the X Window System! Version 2.0 was finished in 1996, 3.0 was released in 2011, and 4.0 followed in April 2015. Development has been very fast! Torvalds also designated the penguin as the Linux mascot.
  • Linux itself is the most bare-bones operating system; its development site is at http://www.kernel.org, and we also refer to the lowest level of the Linux operating system as the "kernel".
  • Common ways of classifying Linux distributions are the "commercial vs. community" scheme and the "RPM vs. DPKG" scheme
  • In fact, the author (鳥哥) considers distributions to fall into two main families: those that install software the RPM way, including Red Hat, Fedora, SuSE, and the like; and those that use Debian's dpkg way of installing software, including Debian, Ubuntu, B2D, and so on.
張 旭

Understanding GitHub Actions - GitHub Docs - 0 views

  • A job is a set of steps that execute on the same runner. By default, a workflow with multiple jobs will run those jobs in parallel.
  • Workflows are made up of one or more jobs and can be scheduled or triggered by an event
  • An event is a specific activity that triggers a workflow.
  • ...8 more annotations...
  • configure a workflow to run jobs sequentially.
  • A step is an individual task that can run commands in a job. A step can be either an action or a shell command.
  • Each step in a job executes on the same runner, allowing the actions in that job to share data with each other.
  • Actions are standalone commands that are combined into steps to create a job.
  • Actions are the smallest portable building block of a workflow.
  • To use an action in a workflow, you must include it as a step.
  • You can use a runner hosted by GitHub, or you can host your own.
  • GitHub-hosted runners are based on Ubuntu Linux, Microsoft Windows, and macOS, and each job in a workflow runs in a fresh virtual environment.
  •  
    "A job is a set of steps that execute on the same runner. By default, a workflow with multiple jobs will run those jobs in parallel. "
張 旭

phusion/passenger-docker: Docker base images for Ruby, Python, Node.js and Meteor web apps - 0 views

  • Ubuntu 20.04 LTS as base system
  • Ruby 2.7.5 is configured as the default.
  • Python 3.8
  • ...23 more annotations...
  • A build system, git, and development headers for many popular libraries, so that the most popular Ruby, Python and Node.js native extensions can be compiled without problems.
  • Nginx 1.18. Disabled by default
  • production-grade features, such as process monitoring, administration and status inspection.
  • Redis 5.0. Not installed by default.
  • The image has an app user with UID 9999 and home directory /home/app. Your application is supposed to run as this user.
  • running applications without root privileges is good security practice.
  • Your application should be placed inside /home/app.
  • COPY --chown=app:app
  • Passenger works like a mod_ruby, mod_nodejs, etc. It changes Nginx into an application server and runs your app from Nginx.
  • placing a .conf file in the directory /etc/nginx/sites-enabled
  • The best way to configure Nginx is by adding .conf files to /etc/nginx/main.d and /etc/nginx/conf.d
  • files in conf.d are included in the Nginx configuration's http context.
  • any environment variables you set with docker run -e, Docker linking and /etc/container_environment, won't reach Nginx.
  • To preserve these variables, place an Nginx config file ending with *.conf in the directory /etc/nginx/main.d, in which you tell Nginx to preserve these variables.
  • By default, Phusion Passenger sets all of the following environment variables to the value production
  • Setting these environment variables yourself (e.g. using docker run -e RAILS_ENV=...) will not have any effect, because Phusion Passenger overrides all of these environment variables.
  • PASSENGER_APP_ENV environment variable
  • passenger-docker autogenerates an Nginx configuration file (/etc/nginx/conf.d/00_app_env.conf) during container boot.
  • The configuration file is in /etc/redis/redis.conf. Modify it as you see fit, but make sure daemonize no is set.
  • You can add additional daemons to the image by creating runit entries.
  • The shell script must be called run, must be executable
  • the shell script must run the daemon without letting it daemonize/fork it.
  • We use RVM to install and to manage Ruby interpreters.
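
  A hedged sketch of the points above, written as the shell commands you would put in a Dockerfile's RUN/COPY steps (the variable, vhost file, and app path are placeholders):

      # enable Nginx (it ships disabled by default)
      rm -f /etc/service/nginx/down

      # preserve a Docker-provided environment variable so it still reaches Nginx
      echo 'env SECRET_KEY_BASE;' > /etc/nginx/main.d/10_app_env.conf

      # add the application's vhost; .conf files in sites-enabled are picked up by Nginx
      cp /tmp/webapp.conf /etc/nginx/sites-enabled/webapp.conf

      # the app lives under /home/app and runs as the unprivileged app user (UID 9999)
      chown -R app:app /home/app/webapp
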