Larvata / Group items tagged: system

張 旭

Containers Vs. Config Management - 0 views

  • With configuration management systems, you write code that describes how you want some component of your systems to be installed and configured, and when you execute the code on your server, it should end up in the desired state.
  • building a hosting platform that is capable of a lot of things that system administrators used to do manually
  • build modules on deployment via bundler or npm or similar, it can be incredibly slow to run, taking minutes or longer in some cases
  • ...10 more annotations...
  • pulling from git is slow.
  • deploying with configuration management tools is a pain in the ass and error prone.
  • Support for containers has existed in the Linux kernel since version 2.6.24 when cgroup support was added
  • All of the logic that used to live in your cookbooks/playbooks/manifests/etc now lives in a Dockerfile that resides directly in the repository for the application it is designed to build
  • All of the dependencies of the application are bundled with the container which means no need to build on the fly on every server during deployment.
  • Containers bring standardization which allows for systems like centralized logging, monitoring, and metrics to easily snap into place no matter what is running in the container.
  • Dockerfiles do not give you the same level of control over configuration as your application transitions between environments, like dev, staging, and production.
  • You may even need to have different Dockerfiles for each environment in certain cases.
  • configuration management systems now have hooks for docker integration.
  • Config management will only be used to install Docker, an orchestration system, configure PAM/SSH auth, and tune OS sysctl values.
  •  
    "With configuration management systems, you write code that describes how you want some component of your systems to be installed and configured, and when you execute the code on your server, it should end up in the desired state."
crazylion lee

Using strace to find out where a program reads its data - fcamel - Medium - 0 views

  •  
    "strace 會列出程式執行的 system call,效率很好且不需要 debug symbol。我比較常用它找出影響程式行為設定檔的位置。 基本概念是開檔、讀取、寫入最後都會用到 system call,system call 數量不多,知道要用什麼 system call 作什麼事,就可以用 strace 觀察特定的 system call 得知很多資訊。比方說開檔一定要用 open,所以觀察 open 就能知道程式讀了那些檔案。 "
張 旭

Database Profiler - MongoDB Manual - 0 views

  • The database profiler collects detailed information about Database Commands executed against a running mongod instance.
  • The profiler writes all the data it collects to the system.profile collection, a capped collection in the admin database.
  • db.setProfilingLevel(2)
  • ...10 more annotations...
  • The slowms and sampleRate profiling settings are global. When set, these settings affect all databases in your process.
  • db.setProfilingLevel(1, { slowms: 20 })
  • db.setProfilingLevel(0, { slowms: 20 })
  • show profile
  • The system.profile collection is a capped collection with a default size of 1 megabyte.
  • By default, sampleRate is set to 1.0, meaning all slow operations are profiled.
  • When logLevel is set to 0, MongoDB records slow operations to the diagnostic log at a rate determined by slowOpSampleRate.
  • The slowms field indicates operation time threshold, in milliseconds, beyond which operations are considered slow.
  • You cannot enable profiling on a mongos instance.
  • profiler logs information about database operations in the system.profile collection.
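
Putting the commands quoted above together, a short session against a hypothetical database named mydb might look like this (run through the mongo/mongosh shell):

    # enable profiling for operations slower than 20 ms on the current database
    mongosh mydb --eval 'db.setProfilingLevel(1, { slowms: 20 })'

    # inspect the most recent entries in the capped system.profile collection, newest first
    mongosh mydb --eval 'db.system.profile.find().sort({ ts: -1 }).limit(5)'

    # turn the profiler off again but keep the 20 ms slow-op threshold for the log
    mongosh mydb --eval 'db.setProfilingLevel(0, { slowms: 20 })'
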
張 旭

AskF5 | Manual Chapter: Working with Partitions - 0 views

  • During BIG-IP® system installation, the system automatically creates a partition named Common
  • An administrative partition is a logical container that you create, containing a defined set of BIG-IP® system objects.
  • No user can delete partition Common itself.
  • ...9 more annotations...
  • With respect to permissions, all users on the system except those with a user role of No Access have read access to objects in partition Common, and by default, partition Common is their current partition.
  • The current partition is the specific partition to which the system is currently set for a logged-in user.
  • A partition access assignment gives a user some level of access to the specified partition.
  • assigning partition access to a user does not necessarily give the user full access to all objects in the partition
  • user account objects also reside in partitions
  • when you first install the BIG-IP system, every existing user account (root and admin) resides in partition Common
  • the partition in which a user account object resides does not affect the partition or partitions to which that user is granted access to manage other BIG-IP objects
  • the object it references resides in partition Common
  • a referenced object must reside either in the same partition as the object that is referencing it
crazylion lee

GitHub - donnemartin/system-design-primer: Learn how to design large-scale systems. Pre... - 0 views

  •  
    "Learn how to design large-scale systems. Prep for the system design interview. Includes Anki flashcards."
張 旭

鳥哥的 Linux 私房菜 (VBird's Linux) -- Chapter 1: What Is Linux and How to Learn It - 0 views

  • Linux is just those two layers: the kernel and the system call interface.
  • The kernel is very tightly coupled to the hardware.
  • Linux provides the complete lowest-level architecture of an operating system for hardware control and resource management. This architecture follows the good traditions of Unix, so it is very stable and powerful.
  • ...31 more annotations...
  • The Linux kernel was developed by Linus Torvalds in 1991 and posted on the Internet for everyone to download. People gradually found this little thing (the Linux kernel) small and elegant, so more and more people got involved in studying it.
  • In the early 1960s MIT developed the Compatible Time-Sharing System (CTSS), which let a mainframe provide several terminals so users could connect to the host and use its resources for computation.
  • To further strengthen mainframes so that a host's resources could serve more users, around 1965 Bell Labs, MIT, and General Electric (GE) jointly launched the Multics project.
  • (Ken Thompson) wrote a kernel in assembly language, together with some kernel utilities and a small file system. That system was the prototype of Unix. Thompson simplified much of the huge, complex Multics system, so colleagues in the lab jokingly called it Unics (the name Unix did not exist yet).
  • All programs and system devices are files.
  • Whether building an editor or an auxiliary program, each program should have only one purpose and should accomplish it effectively.
  • Dennis Ritchie rewrote the B language into C, then rewrote and compiled the Unics kernel in C, and finally gave it its proper name and released the official version of Unix.
  • Because Unix was written in the relatively high-level C language, whereas assembly must work closely with the hardware, the dependence on specific hardware became much smaller. This change made Unix easy to port to different machines.
  • AT&T took a fairly open attitude toward Unix at the time. Moreover, since Unix was written in high-level C, it was in theory portable: with the Unix source code in hand, you could revise it for a given mainframe's characteristics and port Unix onto a different host.
  • After obtaining the Unix kernel source code, Bill Joy at Berkeley set about adapting it into a version suited to his own machines, added many utilities and compilers, and eventually named it the Berkeley Software Distribution (BSD).
  • Although the Unix versions each company shipped were broadly similar in architecture, each really only supported that company's own hardware, so early Unix was synonymous with servers and large workstations.
  • In the seventh edition of Unix released in 1979, AT&T added the strict restriction that source code could not be provided to students.
  • "Pure" Unix refers to System V and BSD.
  • AT&T's own System V
  • Since the 1979 seventh edition of Unix could be ported to Intel's x86 architecture, did that mean Unix could be rewritten and ported onto x86? Following this idea, Professor Andrew Tanenbaum went ahead and wrote Minix, a Unix-like kernel, himself.
  • "Since an operating system is too complex, I will start by writing small programs that run on Unix. That should be acceptable, right?"
  • If he could write a good compiler, wouldn't that be software everybody needs? So he began writing a C compiler, which became the now famous GNU C Compiler (gcc).
  • He also wrote many more callable C libraries (the GNU C library) and the BASH shell, the basic interface used to operate the system. These were all completed around 1990.
  • As demand for graphical user interfaces (GUIs) grew, MIT and cooperating vendors first released the X Window System in 1984, and in 1988 the non-profit XFree86 organization was founded. The name XFree86 is in fact a combination of X Window System + Free + x86.
  • (Torvalds started from) the Minix system that Professor Tanenbaum had written for teaching. After buying the latest Intel 386 PC, he immediately installed Minix on it. As mentioned in the previous section, Minix shipped with source code, so Torvalds learned a great deal about kernel programming from that source.
  • Torvalds himself said: "I have always been a performance fanatic." To fully exploit the 386, he spent a lot of time testing 386 machines. His key test was of the 386's multitasking. He wrote three small programs: one kept printing A, one kept printing B, and the third switched between the two. He ran all three at once and saw ABABAB... appear smoothly on the screen. He knew he had succeeded.
  • To let all existing software run on Linux, Torvalds began to follow the standard POSIX specification.
  • POSIX is short for Portable Operating System Interface; it standardizes the interface between the kernel and applications, and it is a standard published by the IEEE.
  • Because the directory on the FTP site where Torvalds placed the kernel was named Linux, everyone started calling this kernel Linux. (Note that at this point "Linux" meant just the kernel, and the first kernel version Torvalds uploaded to that directory was 0.02.)
  • Linux is really just an operating system's lowest-level kernel plus the kernel tools it provides. It is licensed under the GNU GPL, so anyone can obtain the source code, run the kernel, and modify it.
  • Linux follows the POSIX design specification and is therefore compatible with the Unix operating system, so it can also be called a kind of Unix-like system.
  • To let ordinary users get at Linux, many commercial companies and non-profit groups bundled the Linux kernel (with its tools) together with runnable software plus their own utilities, so that users can install and manage a Linux system directly from CD/DVD or over the network. This bundle of "kernel + software + tools + a complete installer" is what we call a Linux distribution, commonly rendered in Chinese as a complete installable suite or a Linux vendor suite.
  • In 1994 the official version 1.0 of the Linux kernel was finally completed, and it added support for the X Window System. Version 2.0 followed in 1996, 3.0 was released in 2011, and 4.0 in April 2015; development has been very rapid. Torvalds also designated the penguin as Linux's mascot.
  • Linux itself is the most bare-bones operating system; its development site is at http://www.kernel.org, and we call the lowest layer of a Linux operating system the "kernel".
  • Common ways of classifying Linux distributions are the "commercial vs. community" classification and the "RPM vs. DPKG" classification.
  • In fact, the author (VBird) considers distributions to fall into two main families: those that install software with RPM, including Red Hat, Fedora, and SuSE, and those that install software with Debian's dpkg, including Debian, Ubuntu, and B2D.
張 旭

Probably Done Before: Visualizing Docker Containers and Images - 0 views

  •  In my opinion, understanding how a technology works under the hood is the best way to achieve learning speed and to build confidence that you are using the tool in the correct way.
  • union view
    • 張 旭
       
      It chains the multiple image layers together so that it looks as if you are just reading a single image file.
  • The top-level layer may be read by a union-ing file system (AUFS on my docker implementation) to present a single cohesive view of all the changes as one read-only file system
  • ...36 more annotations...
  • it is nearly the same thing as an image, except that the top layer is read-write
  • A container is defined only as a read-write layer atop an image (of read-only layers itself).  It does not have to be running.
  • a running container
    • 張 旭
       
      I had this wrong before! It is not only the running thing that counts as a container; anything with a read-write layer is one!
  • A running container is defined as a read-write "union view" and the isolated process-space and processes within
  • kernel-level technologies like cgroups, namespaces
  • The processes within this process-space may change, delete or create files within the "union view" file that will be captured in the read-write layer
  • there is no longer a running container
    • 張 旭
       
      After this command finishes, the running container stops, but the container itself is still there!
  • each layer contains a pointer to a parent layer using the Id
  • The 'docker create' command adds a read-write layer to the top stack based on the image id.  It does not run this container.
  • The command 'docker start' creates a process space around the union view of the container's layers.
  • can only be one process space per container.
  • the docker run command starts with an image, creates a container, and starts the container
  • 'git pull' (which is a combination of 'git fetch' and 'git merge')
  • 'docker ps' lists out the inventory of running containers on your system
  • 'docker ps -a' where the 'a' is short for 'all' lists out all the containers on your system, whether stopped or running.
  • Only those images that have containers attached to them or that have been pulled are considered top-level.
  • 'docker stop' issues a SIGTERM to a running container which politely stops all the processes in that process-space.
  • results in a normal, but non-running, container
  • 'docker kill' issues a non-polite SIGKILL command to all the processes in a running container.
  • 'docker stop' and 'docker kill' which send actual UNIX signals to a running process
  • 'docker pause' uses a special cgroups feature to freeze/pause a running process-space
  • 'docker rm' removes the read-write layer that defines a container from your host system
  • It effectively deletes files
  • 'docker rmi' removes the read-layer that defines a "union view" of an image.
  • 'docker commit' takes a container's top-level read-write layer and burns it into a read-only layer.
  • turns a container (whether running or stopped) into an immutable image
  • uses the FROM directive in the Dockerfile file as the starting image and iteratively 1) runs (create and start) 2) modifies and 3) commits.
  • At each step in the iteration a new layer is created.
  • 'docker exec' command runs on a running container and executes a process in that running container's process space
  • 'docker inspect' fetches the metadata that has been associated with the top-layer of the container or image
  • 'docker save' creates a single tar file that can be used to import on a different host system
  • only be run on an image
  • 'docker export' command creates a tar file of the contents of the "union view" and flattens it for consumption for non-Docker usages
  • This command removes the metadata and the layers.  This command can only be run on containers.
  • 'docker history' command takes an image-id and recursively prints out the read-only layers
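
The layer model above can be walked through with the commands the article lists; a hedged sketch (the image and container names are arbitrary):

    # a container is just a read-write layer on top of an image; it need not be running
    docker create --name demo alpine:3.19 sleep 300   # adds the r/w layer, does not start it
    docker ps -a                                      # listed here even though it is not running
    docker start demo                                 # wraps a process space around the union view
    docker stop demo                                  # SIGTERM: polite stop; the container still exists
    docker commit demo demo-image:frozen              # burn the top r/w layer into a read-only layer
    docker rm demo                                    # delete the read-write layer (the container)
    docker rmi demo-image:frozen                      # delete the image's read-only layer(s)
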
crazylion lee

Nmap: the Network Mapper - Free Security Scanner - 1 views

shared by crazylion lee on 22 Nov 15
  •  
    "Nmap ("Network Mapper") is a free and open source (license) utility for network discovery and security auditing. Many systems and network administrators also find it useful for tasks such as network inventory, managing service upgrade schedules, and monitoring host or service uptime. Nmap uses raw IP packets in novel ways to determine what hosts are available on the network, what services (application name and version) those hosts are offering, what operating systems (and OS versions) they are running, what type of packet filters/firewalls are in use, and dozens of other characteristics. It was designed to rapidly scan large networks, but works fine against single hosts. Nmap runs on all major computer operating systems, and official binary packages are available for Linux, Windows, and Mac OS X. In addition to the classic command-line Nmap executable, the Nmap suite includes an advanced GUI and results viewer (Zenmap), a flexible data transfer, redirection, and debugging tool (Ncat), a utility for comparing scan results (Ndiff), and a packet generation and response analysis tool (Nping)."
張 旭

Baseimage-docker: A minimal Ubuntu base image modified for Docker-friendliness - 0 views

  • We encourage you to use multiple processes.
  • Baseimage-docker is a special Docker image that is configured for correct use within Docker containers.
  • A proper Unix system should run all kinds of important system services.
  • ...16 more annotations...
  • You're not running them, you're only running your app.
  • You have Ubuntu installed in Docker. The files are there. But that doesn't mean Ubuntu's running as it should.
  • The only processes that will be running inside the container are the CMD command, and all processes that it spawns.
  • When your Docker container starts, only the CMD command is run.
  • Ubuntu is not designed to be run inside Docker
  • When a system is started, the first process in the system is called the init process, with PID 1. The system halts when this process halts.
  • If your init process is your app, then it'll probably only shut down itself, not all the other processes in the container.
  • Docker runs fine with multiple processes in a container.
  • Baseimage-docker encourages you to run multiple processes through the use of runit.
  • Runit (written in C) is much lighter weight than supervisord (written in Python).
  • a Docker container, which is a locked down environment with e.g. no direct access to many kernel resources.
  • Used for service supervision and management.
  • A custom tool for running a command as another user.
  • add additional daemons (e.g. your own app) to the image by creating runit entries.
  • write a small shell script which runs your daemon, and runit will keep it up and running for you, restarting it when it crashes, etc.
  • the shell script must run the daemon without letting it daemonize/fork it.
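
A minimal sketch of such a runit entry, assuming baseimage-docker's /etc/service layout and a hypothetical daemon called myapp:

    # create a runit service directory for the daemon
    mkdir -p /etc/service/myapp

    # the run script must start the daemon in the foreground (no daemonizing/forking)
    cat > /etc/service/myapp/run <<'EOF'
    #!/bin/sh
    exec /usr/local/bin/myapp --foreground
    EOF
    chmod +x /etc/service/myapp/run
    # runit will now start myapp at container boot and restart it if it crashes
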
張 旭

The Twelve-Factor App - 0 views

  • Libraries installed through a packaging system can be installed system-wide (known as “site packages”) or scoped into the directory containing the app (known as “vendoring” or “bundling”).
  • A twelve-factor app never relies on implicit existence of system-wide packages.
  • declares all dependencies, completely and exactly, via a dependency declaration manifest.
  • ...8 more annotations...
  • The full and explicit dependency specification is applied uniformly to both production and development.
  • Bundler for Ruby offers the Gemfile manifest format for dependency declaration and bundle exec for dependency isolation.
  • Pip is used for declaration and Virtualenv for isolation.
  • No matter what the toolchain, dependency declaration and isolation must always be used together
  • requiring only the language runtime and dependency manager installed as prerequisites.
  • set up everything needed to run the app’s code with a deterministic build command.
  • If the app needs to shell out to a system tool, that tool should be vendored into the app.
  • do not rely on the implicit existence of any system tools
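
For the Python toolchain named above, a rough sketch of declaration plus isolation (the pinned packages are illustrative only):

    # declaration: a complete, exact dependency manifest checked into the repo
    cat > requirements.txt <<'EOF'
    requests==2.31.0
    gunicorn==21.2.0
    EOF

    # isolation: a per-app virtualenv so nothing leaks in from system-wide site packages
    python3 -m venv .venv
    . .venv/bin/activate
    pip install -r requirements.txt

    # a deterministic setup needs only the language runtime and the dependency manager
    gunicorn app:application
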
crazylion lee

GitHub - Netflix/Hystrix: Hystrix is a latency and fault tolerance library designed to ... - 0 views

  •  
    "Hystrix is a latency and fault tolerance library designed to isolate points of access to remote systems, services and 3rd party libraries, stop cascading failure and enable resilience in complex distributed systems where failure is inevitable."
張 旭

LXC vs Docker: Why Docker is Better | UpGuard - 0 views

  • LXC (LinuX Containers) is an OS-level virtualization technology that allows creation and running of multiple isolated Linux virtual environments (VE) on a single control host.
  • Docker, previously called dotCloud, was started as a side project and only open-sourced in 2013. It is really an extension of LXC’s capabilities.
  • run processes in isolation.
  • ...35 more annotations...
  • Docker is developed in the Go language and utilizes LXC, cgroups, and the Linux kernel itself. Since it’s based on LXC, a Docker container does not include a separate operating system; instead it relies on the operating system’s own functionality as provided by the underlying infrastructure.
  • Docker acts as a portable container engine, packaging the application and all its dependencies in a virtual container that can run on any Linux server.
  • In a VE there is no preloaded emulation manager software as in a VM.
  • In a VE, the application (or OS) is spawned in a container and runs with no added overhead, except for a usually minuscule VE initialization process.
  • LXC will boast bare metal performance characteristics because it only packages the needed applications.
  • the OS is also just another application that can be packaged too.
  • a VM, which packages the entire OS and machine setup, including hard drive, virtual processors and network interfaces. The resulting bloated mass usually takes a long time to boot and consumes a lot of CPU and RAM.
  • don’t offer some other neat features of VM’s such as IaaS setups and live migration.
  • LXC as supercharged chroot on Linux. It allows you to not only isolate applications, but even the entire OS.
  • Libvirt, which allows the use of containers through the LXC driver by connecting to 'lxc:///'.
  • 'LXC', is not compatible with libvirt, but is more flexible with more userspace tools.
  • Portable deployment across machines
  • Versioning: Docker includes git-like capabilities for tracking successive versions of a container
  • Component reuse: Docker allows building or stacking of already created packages.
  • Shared libraries: There is already a public registry (http://index.docker.io/ ) where thousands have already uploaded the useful containers they have created.
  • Docker has been taking the devops world by storm since its launch back in 2013.
  • LXC, while older, has not been as popular with developers as Docker has proven to be
  • LXC has a focus on sysadmins similar to that of solutions like the Solaris operating system with its Solaris Zones, Linux OpenVZ, and FreeBSD with its BSD Jails virtualization system
  • it started out being built on top of LXC, Docker later moved beyond LXC containers to its own execution environment called libcontainer.
  • Unlike LXC, which launches an operating system init for each container, Docker provides one OS environment, supplied by the Docker Engine
  • LXC tooling sticks close to what system administrators running bare metal servers are used to
  • The LXC command line provides essential commands that cover routine management tasks, including the creation, launch, and deletion of LXC containers.
  • Docker containers aim to be even lighter weight in order to support the fast, highly scalable, deployment of applications with microservice architecture.
  • With backing from Canonical, LXC and LXD have an ecosystem tightly bound to the rest of the open source Linux community.
  • Docker Swarm
  • Docker Trusted Registry
  • Docker Compose
  • Docker Machine
  • Kubernetes facilitates the deployment of containers in your data center by representing a cluster of servers as a single system.
  • Swarm is Docker’s clustering, scheduling and orchestration tool for managing a cluster of Docker hosts. 
  • rkt is a security minded container engine that uses KVM for VM-based isolation and packs other enhanced security features. 
  • Apache Mesos can run different kinds of distributed jobs, including containers. 
  • Elastic Container Service is Amazon’s service for running and orchestrating containerized applications on AWS
  • LXC offers the advantages of a VE on Linux, mainly the ability to isolate your own private workloads from one another. It is a cheaper and faster solution to implement than a VM, but doing so requires a bit of extra learning and expertise.
  • Docker is a significant improvement of LXC’s capabilities.
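
To make the tooling contrast concrete, a hedged sketch of roughly equivalent day-to-day commands (container names, distro release, and image are arbitrary):

    # classic LXC: create, start, stop, and delete a container (feels like managing a small OS)
    lxc-create -n web1 -t download -- -d ubuntu -r jammy -a amd64
    lxc-start  -n web1
    lxc-stop   -n web1
    lxc-destroy -n web1

    # Docker: one application process per container, image pulled from a registry
    docker run -d --name web1 -p 8080:80 nginx:alpine
    docker stop web1 && docker rm web1
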
crazylion lee

A ZFS developer's analysis of the good and bad in Apple's new APFS file system | Ars Te... - 0 views

  •  
    "A ZFS developer's analysis of the good and bad in Apple's new APFS file system"
crazylion lee

Supervisor: A Process Control System - supervisor 3.1a1-dev documentation - 1 views

  •  
    Supervisor is a client/server system that allows its users to monitor and control a number of processes on UNIX-like operating systems. It shares some of the same goals of programs like launchd, daemontools, and runit. Unlike some of these programs, it is not meant to be run as a substitute for init as "process id 1". Instead it is meant to be used to control processes related to a project or a customer, and is meant to start like any other program at boot time.
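
A minimal sketch of how Supervisor is commonly wired up (the program name, paths, and config location are hypothetical): one [program:x] section per process, then supervisord started at boot like any other program.

    # declare a supervised program (Supervisor reads INI-style config)
    cat > /etc/supervisor/conf.d/myworker.conf <<'EOF'
    [program:myworker]
    command=/usr/local/bin/myworker --foreground
    autostart=true
    autorestart=true
    stdout_logfile=/var/log/myworker.log
    EOF

    # start the supervisor daemon, then inspect and control the process
    supervisord -c /etc/supervisor/supervisord.conf
    supervisorctl status
    supervisorctl restart myworker
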
張 旭

Introduction To The Queue System - Diving Laravel - 0 views

  • Laravel is shipped with a built-in queue system that helps you run tasks in the background
  • The QueueManager is registered into the container and it knows how to connect to the different built-in queue drivers
  • for example when we called the Queue::push() method, what happened is that the manager selected the desired queue driver, connected to it, and called the push method on that driver.
  • ...2 more annotations...
  • All calls to methods that don't exist in the QueueManager class will be sent to the loaded driver
  • when you do Queue::push() you're actually calling the push method on the queue driver you're using
  •  
    "Laravel is shipped with a built-in queue system that helps you run tasks in the background "
張 旭

Volumes - Kubernetes - 0 views

  • On-disk files in a Container are ephemeral,
  • when a Container crashes, kubelet will restart it, but the files will be lost - the Container starts with a clean state
  • In Docker, a volume is simply a directory on disk or in another Container.
  • ...105 more annotations...
  • A Kubernetes volume, on the other hand, has an explicit lifetime - the same as the Pod that encloses it.
  • a volume outlives any Containers that run within the Pod, and data is preserved across Container restarts.
    • 張 旭
       
      A Kubernetes Volume follows the lifecycle of its enclosing Pod.
  • Kubernetes supports many types of volumes, and a Pod can use any number of them simultaneously.
  • To use a volume, a Pod specifies what volumes to provide for the Pod (the .spec.volumes field) and where to mount those into Containers (the .spec.containers.volumeMounts field).
  • A process in a container sees a filesystem view composed from their Docker image and volumes.
  • Volumes can not mount onto other volumes or have hard links to other volumes.
  • Each Container in the Pod must independently specify where to mount each volume
  • local
  • nfs
  • cephfs
  • awsElasticBlockStore
  • glusterfs
  • vsphereVolume
  • An awsElasticBlockStore volume mounts an Amazon Web Services (AWS) EBS Volume into your Pod.
  • the contents of an EBS volume are preserved and the volume is merely unmounted.
  • an EBS volume can be pre-populated with data, and that data can be “handed off” between Pods.
  • create an EBS volume using aws ec2 create-volume
  • the nodes on which Pods are running must be AWS EC2 instances
  • EBS only supports a single EC2 instance mounting a volume
  • check that the size and EBS volume type are suitable for your use!
  • A cephfs volume allows an existing CephFS volume to be mounted into your Pod.
  • the contents of a cephfs volume are preserved and the volume is merely unmounted.
    • 張 旭
       
      Essentially like running your own equivalent of AWS EBS.
  • CephFS can be mounted by multiple writers simultaneously.
  • have your own Ceph server running with the share exported
  • configMap
  • The configMap resource provides a way to inject configuration data into Pods
  • When referencing a configMap object, you can simply provide its name in the volume to reference it
  • volumeMounts:
      - name: config-vol
        mountPath: /etc/config
    volumes:
      - name: config-vol
        configMap:
          name: log-config
          items:
            - key: log_level
              path: log_level
  • create a ConfigMap before you can use it.
  • A Container using a ConfigMap as a subPath volume mount will not receive ConfigMap updates.
  • An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node.
  • When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever.
  • By default, emptyDir volumes are stored on whatever medium is backing the node - that might be disk or SSD or network storage, depending on your environment.
  • you can set the emptyDir.medium field to "Memory" to tell Kubernetes to mount a tmpfs (RAM-backed filesystem)
  • volumeMounts:
      - mountPath: /cache
        name: cache-volume
    volumes:
      - name: cache-volume
        emptyDir: {}
  • An fc volume allows an existing fibre channel volume to be mounted in a Pod.
  • configure FC SAN Zoning to allocate and mask those LUNs (volumes) to the target WWNs beforehand so that Kubernetes hosts can access them.
  • Flocker is an open-source clustered Container data volume manager. It provides management and orchestration of data volumes backed by a variety of storage backends.
  • emptyDir
  • flocker
  • A flocker volume allows a Flocker dataset to be mounted into a Pod
  • have your own Flocker installation running
  • A gcePersistentDisk volume mounts a Google Compute Engine (GCE) Persistent Disk into your Pod.
  • Using a PD on a Pod controlled by a ReplicationController will fail unless the PD is read-only or the replica count is 0 or 1
  • A glusterfs volume allows a Glusterfs (an open source networked filesystem) volume to be mounted into your Pod.
  • have your own GlusterFS installation running
  • A hostPath volume mounts a file or directory from the host node’s filesystem into your Pod.
  • a powerful escape hatch for some applications
  • access to Docker internals; use a hostPath of /var/lib/docker
  • allowing a Pod to specify whether a given hostPath should exist prior to the Pod running, whether it should be created, and what it should exist as
  • specify a type for a hostPath volume
  • the files or directories created on the underlying hosts are only writable by root.
  • hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory
  • An iscsi volume allows an existing iSCSI (SCSI over IP) volume to be mounted into your Pod.
  • have your own iSCSI server running
  • A feature of iSCSI is that it can be mounted as read-only by multiple consumers simultaneously.
  • A local volume represents a mounted local storage device such as a disk, partition or directory.
  • Local volumes can only be used as a statically created PersistentVolume.
  • Compared to hostPath volumes, local volumes can be used in a durable and portable manner without manually scheduling Pods to nodes, as the system is aware of the volume’s node constraints by looking at the node affinity on the PersistentVolume.
  • If a node becomes unhealthy, then the local volume will also become inaccessible, and a Pod using it will not be able to run.
  • PersistentVolume spec using a local volume and nodeAffinity
  • PersistentVolume nodeAffinity is required when using local volumes. It enables the Kubernetes scheduler to correctly schedule Pods using local volumes to the correct node.
  • PersistentVolume volumeMode can now be set to “Block” (instead of the default value “Filesystem”) to expose the local volume as a raw block device.
  • When using local volumes, it is recommended to create a StorageClass with volumeBindingMode set to WaitForFirstConsumer
  • An nfs volume allows an existing NFS (Network File System) share to be mounted into your Pod.
  • NFS can be mounted by multiple writers simultaneously.
  • have your own NFS server running with the share exported
  • A persistentVolumeClaim volume is used to mount a PersistentVolume into a Pod.
  • PersistentVolumes are a way for users to “claim” durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment.
  • A projected volume maps several existing volume sources into the same directory.
  • All sources are required to be in the same namespace as the Pod. For more details, see the all-in-one volume design document.
  • Each projected volume source is listed in the spec under sources
  • A Container using a projected volume source as a subPath volume mount will not receive updates for those volume sources.
  • RBD volumes can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed
  • A secret volume is used to pass sensitive information, such as passwords, to Pods
  • store secrets in the Kubernetes API and mount them as files for use by Pods
  • secret volumes are backed by tmpfs (a RAM-backed filesystem) so they are never written to non-volatile storage.
  • create a secret in the Kubernetes API before you can use it
  • A Container using a Secret as a subPath volume mount will not receive Secret updates.
  • StorageOS runs as a Container within your Kubernetes environment, making local or attached storage accessible from any node within the Kubernetes cluster.
  • Data can be replicated to protect against node failure. Thin provisioning and compression can improve utilization and reduce cost.
  • StorageOS provides block storage to Containers, accessible via a file system.
  • A vsphereVolume is used to mount a vSphere VMDK Volume into your Pod.
  • supports both VMFS and VSAN datastore.
  • create VMDK using one of the following methods before using with Pod.
  • share one volume for multiple uses in a single Pod.
  • The volumeMounts.subPath property can be used to specify a sub-path inside the referenced volume instead of its root.
  • volumeMounts:
      - name: workdir1
        mountPath: /logs
        subPathExpr: $(POD_NAME)
  • env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.name
  • Use the subPathExpr field to construct subPath directory names from Downward API environment variables
  • enable the VolumeSubpathEnvExpansion feature gate
  • The subPath and subPathExpr properties are mutually exclusive.
  • There is no limit on how much space an emptyDir or hostPath volume can consume, and no isolation between Containers or between Pods.
  • emptyDir and hostPath volumes will be able to request a certain amount of space using a resource specification, and to select the type of media to use, for clusters that have several media types.
  • the Container Storage Interface (CSI) and Flexvolume. They enable storage vendors to create custom storage plugins without adding them to the Kubernetes repository.
  • all volume plugins (like volume types listed above) were “in-tree” meaning they were built, linked, compiled, and shipped with the core Kubernetes binaries and extend the core Kubernetes API.
  • Container Storage Interface (CSI) defines a standard interface for container orchestration systems (like Kubernetes) to expose arbitrary storage systems to their container workloads.
  • Once a CSI compatible volume driver is deployed on a Kubernetes cluster, users may use the csi volume type to attach, mount, etc. the volumes exposed by the CSI driver.
  • The csi volume type does not support direct reference from Pod and may only be referenced in a Pod via a PersistentVolumeClaim object.
  • This feature requires CSIInlineVolume feature gate to be enabled:--feature-gates=CSIInlineVolume=true
  • In-tree plugins that support CSI Migration and have a corresponding CSI driver implemented are listed in the “Types of Volumes” section above.
  • Mount propagation allows for sharing volumes mounted by a Container to other Containers in the same Pod, or even to other Pods on the same node.
  • Mount propagation of a volume is controlled by mountPropagation field in Container.volumeMounts.
  • HostToContainer - This volume mount will receive all subsequent mounts that are mounted to this volume or any of its subdirectories.
  • Bidirectional - This volume mount behaves the same as the HostToContainer mount. In addition, all volume mounts created by the Container will be propagated back to the host and to all Containers of all Pods that use the same volume.
  • Edit your Docker’s systemd service file. Set MountFlags as follows: MountFlags=shared
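
Tying the .spec.volumes and .spec.containers.volumeMounts fields from these notes together, a hedged sketch of a Pod that mounts both an emptyDir scratch volume and the log-config ConfigMap from the example above (the Pod and container names are made up, and the ConfigMap must already exist):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: volume-demo
    spec:
      containers:
        - name: app
          image: busybox:1.36
          command: ["sh", "-c", "sleep 3600"]
          volumeMounts:
            - name: cache-volume        # scratch space that dies with the Pod
              mountPath: /cache
            - name: config-vol          # configuration data injected from a ConfigMap
              mountPath: /etc/config
      volumes:
        - name: cache-volume
          emptyDir: {}
        - name: config-vol
          configMap:
            name: log-config
    EOF
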
張 旭

Helm | - 0 views

  • Helm will figure out where to install Tiller by reading your Kubernetes configuration file (usually $HOME/.kube/config). This is the same file that kubectl uses.
  • kubectl cluster-info
  • Role-Based Access Control (RBAC) enabled
  • ...133 more annotations...
  • initialize the local CLI
  • install Tiller into your Kubernetes cluster
  • helm install
  • helm init --upgrade
  • By default, when Tiller is installed, it does not have authentication enabled.
  • helm repo update
  • Without a max history set the history is kept indefinitely, leaving a large number of records for helm and tiller to maintain.
  • helm init --upgrade
  • Whenever you install a chart, a new release is created.
  • one chart can be installed multiple times into the same cluster. And each can be independently managed and upgraded.
  • helm list function will show you a list of all deployed releases.
  • helm delete
  • helm status
  • you can audit a cluster’s history, and even undelete a release (with helm rollback).
  • the Helm server (Tiller).
  • The Helm client (helm)
  • brew install kubernetes-helm
  • Tiller, the server portion of Helm, typically runs inside of your Kubernetes cluster.
  • it can also be run locally, and configured to talk to a remote Kubernetes cluster.
  • Role-Based Access Control - RBAC for short
  • create a service account for Tiller with the right roles and permissions to access resources.
  • run Tiller in an RBAC-enabled Kubernetes cluster.
  • run kubectl get pods --namespace kube-system and see Tiller running.
  • helm inspect
  • Helm will look for Tiller in the kube-system namespace unless --tiller-namespace or TILLER_NAMESPACE is set.
  • For development, it is sometimes easier to work on Tiller locally, and configure it to connect to a remote Kubernetes cluster.
  • even when running locally, Tiller will store release configuration in ConfigMaps inside of Kubernetes.
  • helm version should show you both the client and server version.
  • Tiller stores its data in Kubernetes ConfigMaps, you can safely delete and re-install Tiller without worrying about losing any data.
  • helm reset
  • The --node-selectors flag allows us to specify the node labels required for scheduling the Tiller pod.
  • --override allows you to specify properties of Tiller’s deployment manifest.
  • helm init --override manipulates the specified properties of the final manifest (there is no “values” file).
  • The --output flag allows us skip the installation of Tiller’s deployment manifest and simply output the deployment manifest to stdout in either JSON or YAML format.
  • By default, tiller stores release information in ConfigMaps in the namespace where it is running.
  • switch from the default backend to the secrets backend, you’ll have to do the migration for this on your own.
  • a beta SQL storage backend that stores release information in an SQL database (only postgres has been tested so far).
  • Once you have the Helm Client and Tiller successfully installed, you can move on to using Helm to manage charts.
  • Helm requires that kubelet have access to a copy of the socat program to proxy connections to the Tiller API.
  • A Release is an instance of a chart running in a Kubernetes cluster. One chart can often be installed many times into the same cluster.
  • helm init --client-only
  • helm init --dry-run --debug
  • A panic in Tiller is almost always the result of a failure to negotiate with the Kubernetes API server
  • Tiller and Helm have to negotiate a common version to make sure that they can safely communicate without breaking API assumptions
  • helm delete --purge
  • Helm stores some files in $HELM_HOME, which is located by default in ~/.helm
  • A Chart is a Helm package. It contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster.
  • Think of it like the Kubernetes equivalent of a Homebrew formula, an Apt dpkg, or a Yum RPM file.
  • A Repository is the place where charts can be collected and shared.
  • Set the $HELM_HOME environment variable
  • each time it is installed, a new release is created.
  • Helm installs charts into Kubernetes, creating a new release for each installation. And to find new charts, you can search Helm chart repositories.
  • chart repository is named stable by default
  • helm search shows you all of the available charts
  • helm inspect
  • To install a new package, use the helm install command. At its simplest, it takes only one argument: The name of the chart.
  • If you want to use your own release name, simply use the --name flag on helm install
  • additional configuration steps you can or should take.
  • Helm does not wait until all of the resources are running before it exits. Many charts require Docker images that are over 600M in size, and may take a long time to install into the cluster.
  • helm status
  • helm inspect values
  • helm inspect values stable/mariadb
  • override any of these settings in a YAML formatted file, and then pass that file during installation.
  • helm install -f config.yaml stable/mariadb
  • --values (or -f): Specify a YAML file with overrides.
  • --set (and its variants --set-string and --set-file): Specify overrides on the command line.
  • Values that have been --set can be cleared by running helm upgrade with --reset-values specified.
  • Chart designers are encouraged to consider the --set usage when designing the format of a values.yaml file.
  • --set-file key=filepath is another variant of --set. It reads the file and use its content as a value.
  • inject a multi-line text into values without dealing with indentation in YAML.
  • An unpacked chart directory
  • When a new version of a chart is released, or when you want to change the configuration of your release, you can use the helm upgrade command.
  • Kubernetes charts can be large and complex, Helm tries to perform the least invasive upgrade.
  • It will only update things that have changed since the last release
  • $ helm upgrade -f panda.yaml happy-panda stable/mariadb
  • deployment
  • If both are used, --set values are merged into --values with higher precedence.
  • The helm get command is a useful tool for looking at a release in the cluster.
  • helm rollback
  • A release version is an incremental revision. Every time an install, upgrade, or rollback happens, the revision number is incremented by 1.
  • helm history
  • a release name cannot be re-used.
  • you can rollback a deleted resource, and have it re-activate.
  • helm repo list
  • helm repo add
  • helm repo update
  • The Chart Development Guide explains how to develop your own charts.
  • helm create
  • helm lint
  • helm package
  • Charts that are archived can be loaded into chart repositories.
  • chart repository server
  • Tiller can be installed into any namespace.
  • Limiting Tiller to only be able to install into specific namespaces and/or resource types is controlled by Kubernetes RBAC roles and rolebindings
  • Release names are unique PER TILLER INSTANCE
  • Charts should only contain resources that exist in a single namespace.
  • not recommended to have multiple Tillers configured to manage resources in the same namespace.
  • a client-side Helm plugin. A plugin is a tool that can be accessed through the helm CLI, but which is not part of the built-in Helm codebase.
  • Helm plugins are add-on tools that integrate seamlessly with Helm. They provide a way to extend the core feature set of Helm, but without requiring every new feature to be written in Go and added to the core tool.
  • Helm plugins live in $(helm home)/plugins
  • The Helm plugin model is partially modeled on Git’s plugin model
  • helm is sometimes referred to as the porcelain layer, with plugins being the plumbing.
  • helm plugin install https://github.com/technosophos/helm-template
  • command is the command that this plugin will execute when it is called.
  • Environment variables are interpolated before the plugin is executed.
  • The command itself is not executed in a shell. So you can’t oneline a shell script.
  • Helm is able to fetch Charts using HTTP/S
  • Variables like KUBECONFIG are set for the plugin if they are set in the outer environment.
  • In Kubernetes, granting a role to an application-specific service account is a best practice to ensure that your application is operating in the scope that you have specified.
  • restrict Tiller’s capabilities to install resources to certain namespaces, or to grant a Helm client running access to a Tiller instance.
  • Service account with cluster-admin role
  • The cluster-admin role is created by default in a Kubernetes cluster
  • Deploy Tiller in a namespace, restricted to deploying resources only in that namespace
  • Deploy Tiller in a namespace, restricted to deploying resources in another namespace
  • When running a Helm client in a pod, in order for the Helm client to talk to a Tiller instance, it will need certain privileges to be granted.
  • SSL Between Helm and Tiller
  • The Tiller authentication model uses client-side SSL certificates.
  • creating an internal CA, and using both the cryptographic and identity functions of SSL.
  • Helm is a powerful and flexible package-management and operations tool for Kubernetes.
  • default installation applies no security configurations
  • with a cluster that is well-secured in a private network with no data-sharing or no other users or teams.
  • With great power comes great responsibility.
  • Choose the Best Practices you should apply to your helm installation
  • Role-based access control, or RBAC
  • Tiller’s gRPC endpoint and its usage by Helm
  • Kubernetes employ a role-based access control (or RBAC) system (as do modern operating systems) to help mitigate the damage that can be done if credentials are misused or bugs exist.
  • In the default installation the gRPC endpoint that Tiller offers is available inside the cluster (not external to the cluster) without authentication configuration applied.
  • Tiller stores its release information in ConfigMaps. We suggest changing the default to Secrets.
  • release information
  • charts
  • charts are a kind of package that not only installs containers you may or may not have validated yourself, but it may also install into more than one namespace.
  • As with all shared software, in a controlled or shared environment you must validate all software you install yourself before you install it.
  • Helm’s provenance tools to ensure the provenance and integrity of charts
  •  
    "Helm will figure out where to install Tiller by reading your Kubernetes configuration file (usually $HOME/.kube/config). This is the same file that kubectl uses."
張 旭

MongoDB Performance Tuning: Everything You Need to Know - Stackify - 0 views

  • db.serverStatus().globalLock
  • db.serverStatus().locks
  • globalLock.currentQueue.total: This number can indicate a possible concurrency issue if it’s consistently high. This can happen if a lot of requests are waiting for a lock to be released.
  • ...35 more annotations...
  • globalLock.totalTime: If this is higher than the total database uptime, the database has been in a lock state for too long.
  • Unlike relational databases such as MySQL or PostgreSQL, MongoDB uses JSON-like documents for storing data.
  • Databases operate in an environment that consists of numerous reads, writes, and updates.
  • When a lock occurs, no other operation can read or modify the data until the operation that initiated the lock is finished.
  • locks.deadlockCount: Number of times the lock acquisitions have encountered deadlocks
  • Is the database frequently locking from queries? This might indicate issues with the schema design, query structure, or system architecture.
  • For version 3.2 on, WiredTiger is the default.
  • MMAPv1 locks whole collections, not individual documents.
  • WiredTiger performs locking at the document level.
  • When the MMAPv1 storage engine is in use, MongoDB will use memory-mapped files to store data.
  • All available memory will be allocated for this usage if the data set is large enough.
  • db.serverStatus().mem
  • mem.resident: Roughly equivalent to the amount of RAM in megabytes that the database process uses
  • If mem.resident exceeds the value of system memory and there’s a large amount of unmapped data on disk, we’ve most likely exceeded system capacity.
  • If the value of mem.mapped is greater than the amount of system memory, some operations will experience page faults.
  • The WiredTiger storage engine is a significant improvement over MMAPv1 in performance and concurrency.
  • By default, MongoDB will reserve 50 percent of the available memory for the WiredTiger data cache.
  • wiredTiger.cache.bytes currently in the cache – This is the size of the data currently in the cache.
  • wiredTiger.cache.tracked dirty bytes in the cache – This is the size of the dirty data in the cache.
  • we can look at the wiredTiger.cache.bytes read into cache value for read-heavy applications. If this value is consistently high, increasing the cache size may improve overall read performance.
  • check whether the application is read-heavy. If it is, increase the size of the replica set and distribute the read operations to secondary members of the set.
  • write-heavy, use sharding within a sharded cluster to distribute the load.
  • Replication is the propagation of data from one node to another
  • Replication sets handle this replication.
  • Sometimes, data isn’t replicated as quickly as we’d like.
  • a particularly thorny problem if the lag between a primary and secondary node is high and the secondary becomes the primary
  • use the db.printSlaveReplicationInfo() or the rs.printSlaveReplicationInfo() command to see the status of a replica set from the perspective of the secondary member of the set.
  • shows how far behind the secondary members are from the primary. This number should be as low as possible.
  • monitor this metric closely.
  • watch for any spikes in replication delay.
  • Always investigate these issues to understand the reasons for the lag.
  • One replica set is primary. All others are secondary.
  • it’s not normal for nodes to change back and forth between primary and secondary.
  • use the profiler to gain a deeper understanding of the database’s behavior.
  • Enabling the profiler can affect system performance, due to the additional activity.
  •  
    "globalLock.currentQueue.total: This number can indicate a possible concurrency issue if it's consistently high. This can happen if a lot of requests are waiting for a lock to be released."
張 旭

Full Cycle Developers at Netflix - Operate What You Build - 1 views

  • Researching issues felt like bouncing a rubber ball between teams, hard to catch the root cause and harder yet to stop from bouncing between one another.
  • In the past, Edge Engineering had ops-focused teams and SRE specialists who owned the deploy+operate+support parts of the software life cycle
  • hearing about those problems second-hand
  • ...17 more annotations...
  • devs could push code themselves when needed, and also were responsible for off-hours production issues and support requests
  • What were we trying to accomplish and why weren’t we being successful?
  • These specialized roles create efficiencies within each segment while potentially creating inefficiencies across the entire life cycle.
  • Grouping differing specialists together into one team can reduce silos, but having different people do each role adds communication overhead, introduces bottlenecks, and inhibits the effectiveness of feedback loops.
  • devops principles
  • develops a system also be responsible for operating and supporting that system
  • Each development team owns deployment issues, performance bugs, capacity planning, alerting gaps, partner support, and so on.
  • Those centralized teams act as force multipliers by turning their specialized knowledge into reusable building blocks.
  • Communication and alignment are the keys to success.
  • Full cycle developers are expected to be knowledgeable and effective in all areas of the software life cycle.
  • ramping up on areas they haven’t focused on before
  • We run dev bootcamps and other forms of ongoing training to impart this knowledge and build up these skills
  • “how can I automate what is needed to operate this system?”
  • “what self-service tool will enable my partners to answer their questions without needing me to be involved?”
  • A full cycle developer thinks and acts like an SWE, SDET, and SRE. At times they create software that solves business problems, at other times they write test cases for that, and still other times they automate operational aspects of that system.
  • the need for continuous delivery pipelines, monitoring/observability, and so on.
  • Tooling and automation help to scale expertise, but no tool will solve every problem in the developer productivity and operations space
crazylion lee

BearSSL - 0 views

shared by crazylion lee on 11 Nov 16
  •  
    "BearSSL is an implementation of the SSL/TLS protocol (RFC 5246) written in C. It aims at offering the following features: Be correct and secure. In particular, insecure protocol versions and choices of algorithms are not supported, by design; cryptographic algorithm implementations are constant-time by default. Be small, both in RAM and code footprint. For instance, a minimal server implementation may fit in about 20 kilobytes of compiled code and 25 kilobytes of RAM. Be highly portable. BearSSL targets not only "big" operating systems like Linux and Windows, but also small embedded systems and even special contexts like bootstrap code. Be feature-rich and extensible. SSL/TLS has many defined cipher suites and extensions; BearSSL should implement most of them, and allow extra algorithm implementations to be added afterwards, possibly from third parties."