Home / Larvata / Group items tagged engineering

張 旭

Asset Pipeline - Ruby on Rails Guides (Chinese) - 0 views

  • Manifest files or helper methods
    • 張 旭
       
      "Manifest files" means application.css and application.js.
  • Sprockets searches paths in the order they appear in the search path. By default, this means that assets in app/assets take precedence and mask corresponding files in lib and vendor.
  • If an asset is not referenced from a manifest file, it must be added to the precompile list; otherwise it will not be accessible in production.
  • If your application uses the jQuery library plus many modules, all stored under lib/assets/javascripts/library_name, then lib/assets/javascripts/library_name/index.js serves as the manifest for that library. The manifest can list the required files in order, or simply use the require_tree directive.
  • With Turbolinks (enabled by default in Rails 4), adding the data-turbolinks-track option makes Turbolinks check whether an asset has been updated and, if so, load it into the page.
  • config.assets.paths contains the standard paths plus any paths added by other Rails engines.
  • Linking to an asset that does not exist (including linking to an empty string) raises an exception on the calling page.
  • The closing tag cannot use the -%> form.
  • Sprockets uses manifest files to determine which assets to include and serve.
  • In JavaScript files, Sprockets directives begin with //=. The file above uses the require and require_tree directives.
  • app/assets/javascripts/application.js
  • The require_tree directive tells Sprockets to recursively include all JavaScript files in the specified directory. The path must be relative to the manifest file. The require_directory directive also includes all JavaScript files in a directory, but without recursion.
  • Sprockets processes directives top to bottom, but the order of the files included by require_tree is unspecified; do not assume any particular ordering.
  • app/assets/stylesheets/application.css
  • Rails 4 generates app/assets/javascripts/application.js and app/assets/stylesheets/application.css regardless of whether --skip-sprockets was passed when the application was created.
  • If require_self is called more than once, only the last call takes effect.
  • If you want to use multiple Sass files, use the Sass @import rule rather than Sprockets directives.
  • There can be more than one manifest file.
  • With the default gems, generating a controller or scaffold produces CoffeeScript and SCSS files rather than plain JavaScript and CSS.
  • In development, or when the Asset Pipeline is disabled, these files are run through the preprocessors provided by coffee-script and sass before being sent to the browser.
  • With the Asset Pipeline enabled, these files are preprocessed and saved into public/assets, then served by the Rails application or the web server.
  • Adding extra extensions adds preprocessing passes; preprocessors are applied to the file contents right-to-left by extension, so the order of the extensions must match the order of processing.
  • The order in which preprocessors run matters.
  • Compression can also be enabled in development to check that everything still works; disable it again when you need to debug.
  • By default, Rails assumes assets have been precompiled and are served directly by the web server.
  • In general, do not change the default value of config.assets.digest.
  • Assets can be compiled at deploy time.
  • Sharing this directory across deploys is important: as long as a cached page is usable, the compiled assets it references keep working.
  • The default set of compiled files includes application.js, application.css, and all non-JS/CSS files under the app/assets folders, including those in gems (all images are picked up automatically).
  • To compile other manifests, or individual stylesheets and JavaScript files, add them to the precompile option in config/application.rb (see the sketch after this list).
  • You can configure it to compile all assets.
  • A file named manifest-md5hash.json lists every asset and its fingerprint.
  • Set the Expires header far into the future.
  • After precompiling locally, the compiled files can be committed to version control and deployed the usual way.
  • Live compilation uses more memory and performs worse than the default mode, so it is not recommended.
  • If you distribute assets through a CDN, make sure the files do not get cached, because caching there causes problems. If config.action_controller.perform_caching = true is set, Rack::Cache uses Rails.cache to store assets and the cache space fills up quickly.
  • Sprockets uses /assets as the default public path.
  • The X-Sendfile header tells the web server to ignore the application's response and instead serve the specified file from disk.
  • The jquery-rails gem, which provides the standard JavaScript library for Rails, is a good example. It contains an engine class that inherits from Rails::Engine, which tells Rails that the gem may contain asset files; the engine's app/assets, lib/assets, and vendor/assets directories are then added to the Sprockets search path.
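A minimal sketch of the manifest these notes describe; the jQuery requires assume the jquery-rails gem mentioned above, and the rest is the Sprockets default layout:

    // app/assets/javascripts/application.js
    // Sprockets directives live inside comments and are processed top to bottom.
    //= require jquery
    //= require jquery_ujs
    //= require_self
    //= require_tree .

An asset not reachable from any manifest would then be listed for precompilation in config/application.rb, e.g. config.assets.precompile += %w( admin.js admin.css ) (the admin.* names are hypothetical).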
張 旭

Active Record Migrations - Ruby on Rails Guides - 0 views

  • a convenient way to alter your database schema
  • each migration as being a new 'version' of the database
  • On databases that support transactions with statements that change the schema, migrations are wrapped in a transaction
  • a UTC timestamp identifying the migration
  • references (also available as belongs_to)
  • produce join tables if JoinTable is part of the name
  • The model and scaffold generators will create migrations appropriate for adding a new model
  • The create_table method is one of the most fundamental (a sketch follows this list)
  • By default, create_table will create a primary key called id
  • the default is ENGINE=InnoDB
  • Migration method create_join_table creates an HABTM join table.
  • By default, create_join_table will create two columns with no options
  • change_table, used for changing existing tables
  • execute method to execute arbitrary SQL
  • The change method is the primary way of writing migrations
  • migration definitions
  • write the up and down methods instead of using the change method
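A minimal sketch of the kind of migration these highlights describe, assuming Rails 5 conventions (table and column names hypothetical):

    # db/migrate/20080906120000_create_products.rb  (timestamp illustrative)
    class CreateProducts < ActiveRecord::Migration[5.0]
      def change
        create_table :products do |t|                 # implicitly adds the `id` primary key
          t.string     :name
          t.references :supplier, foreign_key: true   # also available as t.belongs_to
          t.timestamps                                # adds created_at and updated_at
        end
      end
    end

Because create_table is reversible, the same change method lets Active Record roll the migration back by simply dropping the table.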
張 旭

Active Record Migrations - Ruby on Rails Guides - 0 views

    • 張 旭
       
      The migration that corresponds to a belongs_to / has_many association.
    • 張 旭
       
      What is the corresponding migration for has_and_belongs_to_many?
  • add_column and remove_column
  • allowing your schema and changes to be database independent.
  • each migration as being a new 'version' of the database
  • each migration modifies it to add or remove tables, columns, or entries
  • Active Record will also update your db/schema.rb file to match the up-to-date structure of your database.
  • A primary key column called id will also be added implicitly, as it's the default primary key for all Active Record models
  • roll this migration back, it will remove the table
  • timestamps macro adds two columns, created_at and updated_at
  • On databases that support transactions with statements that change the schema, migrations are wrapped in a transaction
  • reversible
  • use up and down instead of change
  • Migrations are stored as files in the db/migrate directory, one for each migration class.
  • a UTC timestamp identifying
  • Rails uses this timestamp to determine which migration should be run and in what order
  • "AddXXXToYYY" or "RemoveXXXFromYYY"
  • use a Ruby DSL
  • column type as references
  • part_number:string:index
  • a migration to remove a column
  • "CreateXXX"
  • change_column_null
  • AddUserRefToProducts
  • :references
  • produce join tables if JoinTable is part of the name
  • CreateJoinTable
  • The model and scaffold generators will create migrations appropriate for adding a new model.
  • enclosed by curly braces and follow the field type
  • create_table
  • By default, create_table will create a primary key called id
  • add an index on the new column
  • when using MySQL, the default is ENGINE=InnoDB
  • create_join_table creates an HABTM (has and belongs to many) join table
  • To customize the name of the table, provide a :table_name option:
  • create_join_table also accepts a block
  • change_table, used for changing existing tables
  • remove
  • rename
  • add_column
  • change_column
  • remove_column
  • change_column_default
  • place an SQL fragment in the :options option.
  • limit
  • precision
  • scale
  • polymorphic
  • default
  • index
  • add_foreign_key
  • Active Record only supports single column foreign keys.
  • use the old style of migration using up and down methods instead of the change method.
  • .connection.execute
  • change_table is also reversible, as long as the block does not call change, change_default or remove.
  • remove_column is reversible if you supply the column type as the third argument
  • Complex migrations may require processing that Active Record doesn't know how to reverse
  • reversible
  • Using reversible will ensure that the instructions are executed in the right order too.
  • add_column, add_foreign_key, add_index, add_reference, add_timestamps, change_column_default (must supply a :from and :to option), change_column_null, create_join_table, create_table, disable_extension, drop_join_table, drop_table (must supply a block), enable_extension, remove_column (must supply a type), remove_foreign_key (must supply a second table), remove_index, remove_reference, remove_timestamps, rename_column, rename_index, rename_table
  • :column_options option
  • have the option :null set to false by default
  • By default, the name of the join table comes from the union of the first two arguments provided to create_join_table
  • in alphabetical order
  • The change_column command is irreversible (a sketch of the up/down workaround follows this list).
    • 張 旭
       
      The referencing table comes first, the referenced table second: A references B.
  • If the column names can not be derived from the table names, you can use the :column and :primary_key options.
  • figure out the column name
  • foreign key for a specific column
  • foreign key by name
    • 張 旭
       
      I don't see the difference between the :column and :name usages; they appear basically the same.
  • Active Record knows how to reverse the migration automatically
    • 張 旭
       
      Using the built-in methods makes it easier for Rails to roll back automatically.
    • 張 旭
       
      Except for a few of the special change_* and remove_* methods.
  • should use reversible or write the up and down methods instead of using the change method
  • If your migration is irreversible, you should raise ActiveRecord::IrreversibleMigration from your down method.
  • DontUseConstraintForZipcodeValidationMigration
  • rails db:migrate
  • the db:migrate task also invokes the db:schema:dump task, which will update your db/schema.rb file to match the structure of your database.
  • specify a target version
  • all migrations up to and including 20080906120000
  • run the down method on all the migrations down to, but not including, 20080906120000
  • rails db:rollback
  • db:migrate:redo task is a shortcut for doing a rollback and then migrating back up again
    • 張 旭
       
      Older versions still use rake!
  • STEP parameter
  • db:setup task will create the database, load the schema and initialize it with the seed data
  • db:reset task will drop the database and set it up again. This is functionally equivalent to rails db:drop db:setup.
  • run a specific migration up or down, the db:migrate:up and db:migrate:down
  • the RAILS_ENV environment variable
  • db:migrate VERBOSE=false will suppress all output.
  • If you have already run the migration, then you cannot just edit the migration and run the migration again: Rails thinks it has already run the migration and so will do nothing when you run rails db:migrate.
  • must rollback the migration (for example with bin/rails db:rollback), edit your migration and then run rails db:migrate to run the corrected version.
  • editing existing migrations is not a good idea.
  • should write a new migration that performs the changes you require
  • revert method can be helpful when writing a new migration to undo previous migrations in whole or in part
  • require_relative
  • revert
  • They are not designed to be edited, they just represent the current state of the database.
  • Schema Files for
  • Schema files are also useful if you want a quick look at what attributes an Active Record object has
  • annotate_models gem automatically adds and updates comments at the top of each model summarizing the schema if you desire that functionality.
  • database-independent
  • multiple databases
  • db/schema.rb cannot express database specific items such as triggers, stored procedures or check constraints
  • you can execute custom SQL statements, the schema dumper cannot reconstitute those statements from the database
  • db:structure:dump
    • 張 旭
       
      The price of a database-independent schema is that some database-specific features cannot be expressed, such as triggers. In short, if you write raw SQL in your migrations, set the schema dumper to :sql instead of the default :ruby.
  • set in config/application.rb by the config.active_record.schema_format setting, which may be either :sql or :ruby.
  • check them into source control.
  • db/schema.rb contains the current version number of the database
  • Validations such as validates :foreign_key, uniqueness: true are one way in which models can enforce data integrity
  • The :dependent option on associations allows models to automatically destroy child objects when the parent is destroyed.
  • Migrations can also be used to add or modify data
  • Initial
  • To add initial data after a database is created, Rails has a built-in 'seeds' feature that makes the process quick and easy.
  • db/seeds.rb
  • rails db:seed
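The notes above flag change_column as irreversible; a minimal sketch of the up/down style recommended in that case (table and column names hypothetical):

    class ChangeProductsPrice < ActiveRecord::Migration[5.0]
      def up
        change_column :products, :price, :string
      end

      def down
        change_column :products, :price, :integer
      end
    end

With explicit up and down methods, rails db:rollback knows exactly how to undo what rails db:migrate did.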
張 旭

| Docker Documentation - 0 views

  • The host directory is declared at container run-time: The host directory (the mountpoint) is, by its nature, host-dependent. This is to preserve image portability, since a given host directory can’t be guaranteed to be available on all hosts.
  • This Dockerfile results in an image that causes docker run to create a new mount point at /myvol and copy the greeting file into the newly created volume (the Dockerfile is reconstructed below).
張 旭

Docker ARG, ENV and .env - a Complete Guide · vsupalov.com - 1 views

  • understand and use Docker build-time variables, environment variables and docker-compose templating the right way.
  • ARG is only available during the build of a Docker image (RUN etc), not after the image is created and containers are started from it (ENTRYPOINT, CMD).
  • ENV values are available to containers, but also RUN-style commands during the Docker build starting with the line where they are introduced.
  • If you set an environment variable in an intermediate container using bash (RUN export VARI=5 && …), it will not persist in the next command.
  • An env_file is a convenient way to pass many environment variables to a single command in one batch.
  • not be confused with a .env file
  • the dot in front of env - .env, not an “env_file”.
  • If you have a file named .env in your project, it’s only used to put values into the docker-compose.yml file which is in the same folder. Those are used with Docker Compose and Docker Stack.
  • Just type docker-compose config. This way you’ll see how the docker-compose.yml file content looks after the substitution step has been performed without running anything else.
  • ARG are also known as build-time variables. They are only available from the moment they are ‘announced’ in the Dockerfile with an ARG instruction up to the moment when the image is built.
  • Running containers can’t access values of ARG variables.
  • ENV variables are also available during the build, as soon as you introduce them with an ENV instruction. However, unlike ARG, they are also accessible by containers started from the final image.
  • ENV values can be overridden when starting a container,
  • If you don’t provide a value to expected ARG variables which don’t have a default, you’ll get an error message.
  • args block
  • You can use ARG to set the default values of ENV vars (see the sketch after this list).
  • dynamic on-build env values
  • 2. Pass environment variable values from your host
  • 1. Provide values one by one
  • 3. Take values from a file (env_file)
  • for each RUN statement, a new container is launched from an intermediate image.
  • An image is saved by the end of the command, but environment variables do not persist that way.
  • The precedence is, from strongest to weakest: stuff the containerized application sets, values from single environment entries, values from the env_file(s) and finally Dockerfile defaults.
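A minimal Dockerfile sketch of the ARG-seeds-ENV pattern described above (the APP_VERSION name is hypothetical):

    # Build-time only: visible to RUN, absent from running containers.
    ARG APP_VERSION=unknown

    # Visible during the rest of the build and in containers started
    # from the image; seeded from the ARG above.
    ENV APP_VERSION=${APP_VERSION}

    RUN echo "building version ${APP_VERSION}"

Build with docker build --build-arg APP_VERSION=1.2.3 . and override at run time with docker run -e APP_VERSION=dev, following the precedence rules above.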
張 旭

What is DevOps? | Atlassian - 0 views

  • DevOps is a set of practices that automates the processes between software development and IT teams, in order that they can build, test, and release software faster and more reliably.
  • increased trust, faster software releases, ability to solve critical issues quickly, and better management of unplanned work.
  • bringing together the best of software development and IT operations.
  • DevOps is a culture, a movement, a philosophy.
  • a firm handshake between development and operations
  • DevOps isn’t magic, and transformations don’t happen overnight.
  • Infrastructure as code
  • Culture is the #1 success factor in DevOps.
  • Building a culture of shared responsibility, transparency and faster feedback is the foundation of every high performing DevOps team.
  •  'not our problem' mentality
  • DevOps is that change in mindset of looking at the development process holistically and breaking down the barrier between Dev and Ops.
  • Speed is everything.
  • Lack of automated test and review cycles blocks the release to production, and poor incident response time kills velocity and team confidence
  • Open communication helps Dev and Ops teams swarm on issues, fix incidents, and unblock the release pipeline faster.
  • Unplanned work is a reality that every team faces, a reality that most often impacts team productivity.
  • “cross-functional collaboration.”
  • All the tooling and automation in the world are useless if they aren’t accompanied by a genuine desire on the part of development and IT/Ops professionals to work together.
  • DevOps doesn’t solve tooling problems. It solves human problems.
  • Forming project- or product-oriented teams to replace function-based teams is a step in the right direction.
  • sharing a common goal and having a plan to reach it together
  • join sprint planning sessions, daily stand-ups, and sprint demos.
  • DevOps culture across every department
  • open channels of communication, and talk regularly
  • DevOps isn’t one team’s job. It’s everyone’s job.
  • automation eliminates repetitive manual work, yields repeatable processes, and creates reliable systems.
  • Build, test, deploy, and provisioning automation
  • continuous delivery: the practice of running each code change through a gauntlet of automated tests, often facilitated by cloud-based infrastructure, then packaging up successful builds and promoting them up toward production using automated deploys.
  • automated deploys alert IT/Ops to server “drift” between environments, which reduces or eliminates surprises when it’s time to release.
  • “configuration as code.”
  • when DevOps uses automated deploys to send thoroughly tested code to identically provisioned environments, “Works on my machine!” becomes irrelevant.
  • A DevOps mindset sees opportunities for continuous improvement everywhere.
  • regular retrospectives
  • A/B testing
  • failure is inevitable. So you might as well set up your team to absorb it, recover, and learn from it (some call this “being anti-fragile”).
  • Postmortems focus on where processes fell down and how to strengthen them – not on which team member f'ed up the code.
  • Our engineers are responsible for QA, writing, and running their own tests to get the software out to customers.
  • How long did it take to go from development to deployment? 
  • How long does it take to recover after a system failure?
  • service level agreements (SLAs)
  • DevOps isn't any single person's job. It's everyone's job.
  • DevOps is big on the idea that the same people who build an application should be involved in shipping and running it.
  • developers and operators pair with each other in each phase of the application’s lifecycle.
張 旭

Running Docker Commands - CircleCI - 0 views

  • To build Docker images for deployment, you must use a special setup_remote_docker key which creates a separate environment for each build for security.
  • When setup_remote_docker executes, a remote environment will be created, and your current primary container will be configured to use it.
  • Once setup_remote_docker is called, a new remote environment is created, and your primary container is configured to use it.
  • but building/pushing images and running containers happens in the remote Docker Engine
  • use a primary image that already has Docker (recommended; see the config sketch after this list)
  • for an image that installs Docker and has Git, use 17.05.0-ce-git
  • The job and remote docker run in separate environments.
  • It is not possible to start a service in remote docker and ping it directly from a primary container or to start a primary container that can ping a service in remote docker.
  • It is not possible to mount a folder from your job space into a container in Remote Docker (and vice versa).
    • 張 旭
       
      In other words, the docker client and the docker server here are effectively two different machines.
  • use https://github.com/outstand/docker-dockup or a similar image for backup and restore to spin up a container
張 旭

How services work | Docker Documentation - 0 views

  • a service is the image for a microservice within the context of some larger application.
  • When you create a service, you specify which container image to use and which commands to execute inside running containers.
  • an overlay network for the service to connect to other services in the swarm
  • In the swarm mode model, each task invokes exactly one container
  • A task is analogous to a “slot” where the scheduler places a container.
  • A task is the atomic unit of scheduling within a swarm.
  • A task is a one-directional mechanism. It progresses monotonically through a series of states: assigned, prepared, running, etc.
  • Docker swarm mode is a general purpose scheduler and orchestrator.
  • Hypothetically, you could implement other types of tasks such as virtual machine tasks or non-containerized process tasks.
  • If all nodes are paused or drained, and you create a service, it is pending until a node becomes available.
  • reserve a specific amount of memory for a service.
  • impose placement constraints on the service
  • As the administrator of a swarm, you declare the desired state of your swarm, and the manager works with the nodes in the swarm to create that state.
  • two types of service deployments, replicated and global (examples after this list).
  • A global service is a service that runs one task on every node.
  • Good candidates for global services are monitoring agents, anti-virus scanners, or other types of containers that you want to run on every node in the swarm.
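A sketch of the two deployment modes (service names and images illustrative):

    # Replicated: the scheduler places 3 tasks, one container each, across available nodes.
    docker service create --name web --replicas 3 nginx

    # Global: exactly one task on every node, e.g. a monitoring agent.
    docker service create --name agent --mode global myorg/monitoring-agent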
張 旭

Manage nodes in a swarm | Docker Documentation - 0 views

  • Drain means the scheduler doesn’t assign new tasks to the node. The scheduler shuts down any existing tasks and schedules them on an available node.
  • Reachable means the node is a manager node participating in the Raft consensus quorum. If the leader node becomes unavailable, the node is eligible for election as the new leader.
  • If a manager node becomes unavailable, you should either join a new manager node to the swarm or promote a worker node to be a manager.
  • docker node inspect self --pretty
  • docker node update --availability drain node
  • use node labels in service constraints
  • The labels you set for nodes using docker node update apply only to the node entity within the swarm
  • node labels can be used to limit critical tasks to nodes that meet certain requirements (see the command sketch after this list)
  • promote a worker node to the manager role
  • demote a manager node to the worker role
  • If the last manager node leaves the swarm, the swarm becomes unavailable requiring you to take disaster recovery measures.
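The commands these notes refer to, strung together as a sketch (node, label, and image names hypothetical):

    docker node update --availability drain worker1      # stop scheduling new tasks; reschedule existing ones
    docker node update --label-add region=east worker1   # swarm-only label, per the note above
    docker service create --name app \
      --constraint 'node.labels.region == east' myorg/app   # limit tasks to labelled nodes
    docker node promote worker1                          # worker -> manager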
張 旭

Manage swarm security with public key infrastructure (PKI) | Docker Documentation - 0 views

  • The nodes in a swarm use mutual Transport Layer Security (TLS) to authenticate, authorize, and encrypt the communications with other nodes in the swarm.
  • By default, the manager node generates a new root Certificate Authority (CA) along with a key pair, which are used to secure communications with other nodes that join the swarm.
  • The manager node also generates two tokens to use when you join additional nodes to the swarm: one worker token and one manager token.
  • Each time a new node joins the swarm, the manager issues a certificate to the node
  • By default, each node in the swarm renews its certificate every three months.
  • If a cluster CA key or a manager node is compromised, you can rotate the swarm root CA so that none of the nodes trust certificates signed by the old root CA anymore (rotation command sketched below).
張 旭

Swarm task states | Docker Documentation - 0 views

  • Each service can start multiple tasks.
  • Tasks are execution units that run once to completion.
  • The task progresses forward through a number of states, and its state doesn't go backward (this can be observed with the command below).
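A sketch, with a hypothetical service name:

    docker service ps web   # lists each task with its DESIRED STATE and CURRENT STATE (e.g. Running 5 minutes ago)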
張 旭

VPCs and Subnets - Amazon Virtual Private Cloud - 0 views

  • you must specify a range of IPv4 addresses for the VPC in the form of a Classless Inter-Domain Routing (CIDR) block
  • A VPC spans all the Availability Zones in the region
  • add one or more subnets in each Availability Zone.
  • Each subnet must reside entirely within one Availability Zone and cannot span zones.
  • Availability Zones are distinct locations that are engineered to be isolated from failures in other Availability Zones
  • If a subnet's traffic is routed to an internet gateway, the subnet is known as a public subnet.
  • If a subnet doesn't have a route to the internet gateway, the subnet is known as a private subnet.
  • If a subnet doesn't have a route to the internet gateway, but has its traffic routed to a virtual private gateway for a VPN connection, the subnet is known as a VPN-only subnet.
  • By default, all VPCs and subnets must have IPv4 CIDR blocks—you can't change this behavior.
  • The allowed block size is between a /16 netmask (65,536 IP addresses) and /28 netmask (16 IP addresses).
  • The first four IP addresses and the last IP address in each subnet CIDR block are not available for you to use (worked example after this list)
  • The allowed block size is between a /28 netmask and /16 netmask
  • The CIDR block must not overlap with any existing CIDR block that's associated with the VPC.
  • Each subnet must be associated with a route table
  • Every subnet that you create is automatically associated with the main route table for the VPC
  • Security groups control inbound and outbound traffic for your instances
  • network ACLs control inbound and outbound traffic for your subnets
  • each subnet must be associated with a network ACL
  • You can create a flow log on your VPC or subnet to capture the traffic that flows to and from the network interfaces in your VPC or subnet.
  • A VPC peering connection enables you to route traffic between the VPCs using private IP addresses
  • you cannot create a VPC peering connection between VPCs that have overlapping CIDR blocks
  • recommend that you create a VPC with a CIDR range large enough for expected future growth, but not one that overlaps with current or expected future subnets anywhere in your corporate or home network, or that overlaps with current or future VPCs
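A worked example of the sizing rules above: a 10.0.0.0/24 subnet has 256 addresses, but AWS reserves 10.0.0.0 (network), 10.0.0.1 (VPC router), 10.0.0.2 (DNS), 10.0.0.3 (future use), and 10.0.0.255 (broadcast, not supported), leaving 251 usable. Creating such a layout with the AWS CLI, one subnet per Availability Zone (IDs hypothetical):

    aws ec2 create-vpc --cidr-block 10.0.0.0/16
    aws ec2 create-subnet --vpc-id vpc-1a2b3c4d --cidr-block 10.0.0.0/24 --availability-zone us-east-1a
    aws ec2 create-subnet --vpc-id vpc-1a2b3c4d --cidr-block 10.0.1.0/24 --availability-zone us-east-1b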
張 旭

Best practices for writing Dockerfiles | Docker Documentation - 0 views

  • building efficient images
  • Docker builds images automatically by reading the instructions from a Dockerfile -- a text file that contains all commands, in order, needed to build a given image.
  • A Docker image consists of read-only layers each of which represents a Dockerfile instruction.
  • The layers are stacked and each one is a delta of the changes from the previous layer
  • When you run an image and generate a container, you add a new writable layer (the “container layer”) on top of the underlying layers.
  • By “ephemeral,” we mean that the container can be stopped and destroyed, then rebuilt and replaced with an absolute minimum set up and configuration.
  • Inadvertently including files that are not necessary for building an image results in a larger build context and larger image size.
  • To exclude files not relevant to the build (without restructuring your source repository) use a .dockerignore file. This file supports exclusion patterns similar to .gitignore files.
  • minimize image layers by leveraging build cache.
  • if your build contains several layers, you can order them from the less frequently changed (to ensure the build cache is reusable) to the more frequently changed
  • avoid installing extra or unnecessary packages just because they might be “nice to have.”
  • Each container should have only one concern.
  • Decoupling applications into multiple containers makes it easier to scale horizontally and reuse containers
  • Limiting each container to one process is a good rule of thumb, but it is not a hard and fast rule.
  • Use your best judgment to keep containers as clean and modular as possible.
  • do multi-stage builds and only copy the artifacts you need into the final image. This allows you to include tools and debug information in your intermediate build stages without increasing the size of the final image (sketched after this list).
  • avoid duplication of packages and make the list much easier to update.
  • When building an image, Docker steps through the instructions in your Dockerfile, executing each in the order specified.
  • the next instruction is compared against all child images derived from that base image to see if one of them was built using the exact same instruction. If not, the cache is invalidated.
  • simply comparing the instruction in the Dockerfile with one of the child images is sufficient.
  • For the ADD and COPY instructions, the contents of the file(s) in the image are examined and a checksum is calculated for each file.
  • If anything has changed in the file(s), such as the contents and metadata, then the cache is invalidated.
  • cache checking does not look at the files in the container to determine a cache match.
  • In that case just the command string itself is used to find a match.
    • 張 旭
       
      For instructions like RUN apt-get, this means the cache match is made by comparing the command string itself.
  • Whenever possible, use current official repositories as the basis for your images.
  • Using RUN apt-get update && apt-get install -y ensures your Dockerfile installs the latest package versions with no further coding or manual intervention.
  • cache busting
  • Docker executes these commands using the /bin/sh -c interpreter, which only evaluates the exit code of the last operation in the pipe to determine success.
  • set -o pipefail && to ensure that an unexpected error prevents the build from inadvertently succeeding.
  • The CMD instruction should be used to run the software contained by your image, along with any arguments.
  • CMD should almost always be used in the form of CMD ["executable", "param1", "param2"…]
  • CMD should rarely be used in the manner of CMD ["param", "param"] in conjunction with ENTRYPOINT
  • The ENV instruction is also useful for providing required environment variables specific to services you wish to containerize,
  • Each ENV line creates a new intermediate layer, just like RUN commands
  • COPY is preferred
  • COPY only supports the basic copying of local files into the container
  • the best use for ADD is local tar file auto-extraction into the image, as in ADD rootfs.tar.xz /
  • If you have multiple Dockerfile steps that use different files from your context, COPY them individually, rather than all at once.
  • using ADD to fetch packages from remote URLs is strongly discouraged; you should use curl or wget instead
  • The best use for ENTRYPOINT is to set the image’s main command, allowing that image to be run as though it was that command (and then use CMD as the default flags).
  • the image name can double as a reference to the binary as shown in the command
  • The VOLUME instruction should be used to expose any database storage area, configuration storage, or files/folders created by your docker container.
  • use VOLUME for any mutable and/or user-serviceable parts of your image
  • If you absolutely need functionality similar to sudo (such as initializing the daemon as root but running it as non-root), consider using “gosu”.
  • always use absolute paths for your WORKDIR
  • An ONBUILD command executes after the current Dockerfile build completes.
  • Think of the ONBUILD command as an instruction the parent Dockerfile gives to the child Dockerfile
  • A Docker build executes ONBUILD commands before any command in a child Dockerfile.
  • Be careful when putting ADD or COPY in ONBUILD. The “onbuild” image fails catastrophically if the new build’s context is missing the resource being added.
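A minimal multi-stage sketch of the pattern recommended above (a Go app chosen purely for illustration; the point is that build tools stay in the first stage):

    FROM golang:1.11 AS builder
    WORKDIR /src
    COPY . .
    RUN go build -o /bin/app .

    FROM alpine:3.9
    COPY --from=builder /bin/app /usr/local/bin/app
    ENTRYPOINT ["app"]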
張 旭

The Twelve-Factor App - 0 views

  • Keep development, staging, and production as similar as possible
  • Developers write code, ops engineers deploy it.
  • The twelve-factor app is designed for continuous deployment by keeping the gap between development and production small
  • Backing services, such as the app’s database, queueing system, or cache, are one area where dev/prod parity is important
  • The twelve-factor developer resists the urge to use different backing services between development and production, even when adapters theoretically abstract away any differences in backing services.
  • declarative provisioning tools such as Chef and Puppet combined with light-weight virtual environments such as Docker and Vagrant allow developers to run local environments which closely approximate production environments.
  • all deploys of the app (developer environments, staging, production) should be using the same type and version of each of the backing services.
張 旭

Template Designer Documentation - Jinja2 Documentation (2.10) - 0 views

  • A Jinja template doesn’t need to have a specific extension
  • A Jinja template is simply a text file
  • tags, which control the logic of the template
  • {% ... %} for Statements
  • {{ ... }} for Expressions to print to the template output
  • use a dot (.) to access attributes of a variable
  • the outer double-curly braces are not part of the variable, but the print statement.
  • If you access variables inside tags don’t put the braces around them.
  • If a variable or attribute does not exist, you will get back an undefined value.
  • the default behavior is to evaluate to an empty string if printed or iterated over, and to fail for every other operation.
  • if an object has an item and attribute with the same name. Additionally, the attr() filter only looks up attributes.
  • Variables can be modified by filters. Filters are separated from the variable by a pipe symbol (|) and may have optional arguments in parentheses.
  • Multiple filters can be chained
  • Tests can be used to test a variable against a common expression.
  • add is plus the name of the test after the variable.
  • to find out if a variable is defined, you can do name is defined, which will then return true or false depending on whether name is defined in the current template context.
  • strip whitespace in templates by hand. If you add a minus sign (-) to the start or end of a block (e.g. a For tag), a comment, or a variable expression, the whitespaces before or after that block will be removed
  • not add whitespace between the tag and the minus sign
  • mark a block raw
  • Template inheritance allows you to build a base “skeleton” template that contains all the common elements of your site and defines blocks that child templates can override (see the sketch at the end of this list).
  • The {% extends %} tag is the key here. It tells the template engine that this template “extends” another template.
  • access templates in subdirectories with a slash
  • can’t define multiple {% block %} tags with the same name in the same template
  • use the special self variable and call the block with that name
  • self.title()
  • super()
  • put the name of the block after the end tag for better readability
  • if the block is replaced by a child template, a variable would appear that was not defined in the block or passed to the context.
  • setting the block to “scoped” by adding the scoped modifier to a block declaration
  • If you have a variable that may include any of the following chars (>, <, &, or ") you SHOULD escape it unless the variable contains well-formed and trusted HTML.
  • Jinja2 functions (macros, super, self.BLOCKNAME) always return template data that is marked as safe.
  • With the default syntax, control structures appear inside {% ... %} blocks.
  • the dictsort filter
  • loop.cycle
  • Unlike in Python, it’s not possible to break or continue in a loop
  • use loops recursively
  • add the recursive modifier to the loop definition and call the loop variable with the new iterable where you want to recurse.
  • The loop variable always refers to the closest (innermost) loop.
  • whether the value changed at all,
  • use it to test if a variable is defined, not empty and not false
  • Macros are comparable with functions in regular programming languages.
  • If a macro name starts with an underscore, it’s not exported and can’t be imported.
  • pass a macro to another macro
  • caller()
  • a single trailing newline is stripped if present
  • other whitespace (spaces, tabs, newlines etc.) is returned unchanged
  • a block tag works in “both” directions. That is, a block tag doesn’t just provide a placeholder to fill - it also defines the content that fills the placeholder in the parent.
  • Python dicts are not ordered
  • caller(user)
  • call(user)
  • This is a simple dialog rendered by using a macro and a call block.
  • Filter sections allow you to apply regular Jinja2 filters on a block of template data.
  • Assignments at top level (outside of blocks, macros or loops) are exported from the template like top level macros and can be imported by other templates.
  • using namespace objects which allow propagating of changes across scopes
  • use block assignments to capture the contents of a block into a variable name.
  • The extends tag can be used to extend one template from another.
  • Blocks are used for inheritance and act as both placeholders and replacements at the same time.
  • The include statement is useful to include a template and return the rendered contents of that file into the current namespace
  • Included templates have access to the variables of the active context by default.
  • putting often used code into macros
  • imports are cached and imported templates don’t have access to the current template variables, just the globals by default.
  • Macros and variables starting with one or more underscores are private and cannot be imported.
  • By default, included templates are passed the current context and imported templates are not.
  • imports are often used just as a module that holds macros.
  • Integers and floating point numbers are created by just writing the number down
  • Everything between two brackets is a list.
  • Tuples are like lists that cannot be modified (“immutable”).
  • A dict in Python is a structure that combines keys and values.
  • // Divide two numbers and return the truncated integer result
  • The special constants true, false, and none are indeed lowercase
  • all Jinja identifiers are lowercase
  • (expr) group an expression.
  • The is and in operators support negation using an infix notation
  • in Perform a sequence / mapping containment test.
  • | Applies a filter.
  • ~ Converts all operands into strings and concatenates them.
  • use inline if expressions.
  • always an attribute is returned and items are not looked up.
  • default(value, default_value=u'', boolean=False): If the value is undefined it will return the passed default value, otherwise the value of the variable
  • dictsort(value, case_sensitive=False, by='key', reverse=False): Sort a dict and yield (key, value) pairs.
  • format(value, *args, **kwargs): Apply python string formatting on an object
  • groupby(value, attribute): Group a sequence of objects by a common attribute.
  • the attribute being grouped by is stored in the grouper attribute and the list contains all the objects that have this grouper in common.
  • indent(s, width=4, first=False, blank=False, indentfirst=None): Return a copy of the string with each line indented by 4 spaces. The first line and blank lines are not indented by default.
  • join(value, d=u'', attribute=None): Return a string which is the concatenation of the strings in the sequence.
  • map(): Applies a filter on a sequence of objects or looks up an attribute.
  • pprint(value, verbose=False): Pretty print a variable. Useful for debugging.
  • reject(): Filters a sequence of objects by applying a test to each object, and rejecting the objects with the test succeeding.
  • replace(s, old, new, count=None): Return a copy of the value with all occurrences of a substring replaced with a new one.
  • round(value, precision=0, method='common'): Round the number to a given precision
  • even if rounded to 0 precision, a float is returned.
  • select(): Filters a sequence of objects by applying a test to each object, and only selecting the objects with the test succeeding.
  • sort(value, reverse=False, case_sensitive=False, attribute=None): Sort an iterable. Per default it sorts ascending; if you pass it true as the first argument it will reverse the sorting.
  • striptags(value): Strip SGML/XML tags and replace adjacent whitespace by one space.
  • tojson(value, indent=None): Dumps a structure to JSON so that it’s safe to use in <script> tags.
  • trim(value): Strip leading and trailing whitespace.
  • unique(value, case_sensitive=False, attribute=None): Returns a list of unique items from the given iterable
  • urlize(value, trim_url_limit=None, nofollow=False, target=None, rel=None): Converts URLs in plain text into clickable links.
  • defined(value): Return true if the variable is defined
  • in(value, seq): Check if value is in seq.
  • mapping(value): Return true if the object is a mapping (dict etc.).
  • number(value): Return true if the variable is a number.
  • sameas(value, other): Check if an object points to the same memory address as another object
  • undefined(value): Like defined() but the other way round.
  • A joiner is passed a string and will return that string every time it’s called, except the first time (in which case it returns an empty string).
  • namespace(...): Creates a new container that allows attribute assignment using the {% set %} tag
  • The with statement makes it possible to create a new inner scope. Variables set within this scope are not visible outside of the scope.
  • activate and deactivate the autoescaping from within the templates
  • With both trim_blocks and lstrip_blocks enabled, you can put block tags on their own lines, and the entire block line will be removed when rendered, preserving the whitespace of the contents
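A small template sketch pulling together inheritance, filters, tests, and whitespace control from the notes above (file names and variables hypothetical):

    {# base.html #}
    <title>{% block title %}My Site{% endblock title %}</title>
    <body>{% block body %}{% endblock body %}</body>

    {# child.html #}
    {% extends "base.html" %}
    {% block title %}{{ page_title | default('Untitled') | upper }}{% endblock title %}
    {% block body %}
      {% if users is defined %}
        {% for user in users | sort(attribute='name') -%}
          {{ loop.index }}. {{ user.name }}
        {% endfor %}
      {% endif %}
    {% endblock body %}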
張 旭

The Twelve-Factor App - 0 views

  • software is commonly delivered as a service: called web apps, or software-as-a-service.
  • Use declarative formats for setup automation
  • offering maximum portability between execution environments
  • obviating the need for servers and systems administration
  • Minimize divergence between development and production
  • scale up without significant changes to tooling, architecture, or development practices
  • Ops engineers who deploy or manage such applications.
  • developer building applications which run as a service
  • One codebase
  • many deploys
  • in the environment
  • services as attached resources
  • Explicitly declare
  • separate build and run stages
  • stateless processes
  • Export services via port binding
  • Scale out
  • fast startup and graceful shutdown
  • as similar as possible
  • logs as event streams
  • admin/management tasks as one-off processes
張 旭

Docker for AWS persistent data volumes | Docker Documentation - 0 views

  • Cloudstor is a modern volume plugin built by Docker
  • Docker swarm mode tasks and regular Docker containers can use a volume created with Cloudstor to mount a persistent data volume.
  • Global shared Cloudstor volumes mounted by all tasks in a swarm service.
  • Workloads running in a Docker service that require access to low latency/high IOPs persistent storage, such as a database engine, can use a relocatable Cloudstor volume backed by EBS.
  • Each relocatable Cloudstor volume is backed by a single EBS volume.
  • If a swarm task using a relocatable Cloudstor volume gets rescheduled to another node within the same availability zone as the original node where the task was running, Cloudstor detaches the backing EBS volume from the original node and attaches it to the new target node automatically.
  • in a different availability zone,
  • Cloudstor transfers the contents of the backing EBS volume to the destination availability zone using a snapshot, and cleans up the EBS volume in the original availability zone.
  • Typically the snapshot-based transfer process across availability zones takes between 2 and 5 minutes unless the work load is write-heavy.
  • A swarm task is not started until the volume it mounts becomes available
  • Sharing/mounting the same Cloudstor volume backed by EBS among multiple tasks is not a supported scenario and leads to data loss.
  • If you want a Cloudstor volume to share data between tasks, choose the appropriate EFS-backed shared volume option (see the command sketch after this list).
  • When multiple swarm service tasks need to share data in a persistent storage volume, you can use a shared Cloudstor volume backed by EFS.
  • a volume and its contents can be mounted by multiple swarm service tasks without the risk of data loss
  • over NFS
  • the persistent data backed by EFS volumes is always available.
  • shared Cloudstor volumes only work in those AWS regions where EFS is supported.
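A sketch of creating the two volume flavors described above, using the option names from this Docker for AWS guide (volume names hypothetical):

    # Relocatable volume backed by a single EBS volume
    docker volume create -d "cloudstor:aws" --opt ebstype=gp2 --opt size=25 mydbvol

    # Shared volume backed by EFS, mountable by many tasks over NFS
    docker volume create -d "cloudstor:aws" --opt backing=shared mysharedvol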
crazylion lee

Scaling Rails to 125,000 Requests per Minute on Heroku - 0 views

  •  
    "Summary: On Heroku, Rails scales fairly easily, but there are some important things to consider. We looked at how various dyno and Postgres settings effect overall performance on Heroku."
張 旭

The Asset Pipeline - Ruby on Rails Guides - 0 views

  • provides a framework to concatenate and minify or compress JavaScript and CSS assets
  • adds the ability to write these assets in other languages and pre-processors such as CoffeeScript, Sass and ERB
  • invalidate the cache by altering this fingerprint
  • Rails 4 automatically adds the sass-rails, coffee-rails and uglifier gems to your Gemfile
  • reduce the number of requests that a browser makes to render a web page
  • Starting with version 3.1, Rails defaults to concatenating all JavaScript files into one master .js file and all CSS files into one master .css file
  • In production, Rails inserts an MD5 fingerprint into each filename so that the file is cached by the web browser
  • The technique Sprockets uses for fingerprinting is to insert a hash of the content into the name, usually at the end.
  • asset minification or compression
  • The sass-rails gem is automatically used for CSS compression if included in Gemfile and no config.assets.css_compressor option is set.
  • Supported languages include Sass for CSS, CoffeeScript for JavaScript, and ERB for both by default.
  • When a filename is unique and based on its content, HTTP headers can be set to encourage caches everywhere (whether at CDNs, at ISPs, in networking equipment, or in web browsers) to keep their own copy of the content
  • asset pipeline is technically no longer a core feature of Rails 4
  • Rails uses for fingerprinting is to insert a hash of the content into the name, usually at the end
  • With the asset pipeline, the preferred location for these assets is now the app/assets directory.
  • Fingerprinting is enabled by default for production and disabled for all other environments
  • The files in app/assets are never served directly in production.
  • Paths are traversed in the order that they occur in the search path
  • You should use app/assets for files that must undergo some pre-processing before they are served.
  • By default .coffee and .scss files will not be precompiled on their own
  • app/assets is for assets that are owned by the application, such as custom images, JavaScript files or stylesheets.
  • lib/assets is for your own libraries' code that doesn't really fit into the scope of the application or those libraries which are shared across applications.
  • vendor/assets is for assets that are owned by outside entities, such as code for JavaScript plugins and CSS frameworks.
  • Any path under assets/* will be searched
  • By default these files will be ready to use by your application immediately using the require_tree directive.
  • By default, this means the files in app/assets take precedence, and will mask corresponding paths in lib and vendor
  • Sprockets uses files named index (with the relevant extensions) for a special purpose
  • Rails.application.config.assets.paths
  • causes turbolinks to check if an asset has been updated and if so loads it into the page
  • if you add an erb extension to a CSS asset (for example, application.css.erb), then helpers like asset_path are available in your CSS rules (example after this list)
  • If you add an erb extension to a JavaScript asset, making it something such as application.js.erb, then you can use the asset_path helper in your JavaScript code
  • The asset pipeline automatically evaluates ERB
  • data URI — a method of embedding the image data directly into the CSS file — you can use the asset_data_uri helper.
  • Sprockets will also look through the paths specified in config.assets.paths, which includes the standard application paths and any paths added by Rails engines.
  • image_tag
  • the closing tag cannot be of the style -%>
  • asset_data_uri
  • app/assets/javascripts/application.js
  • sass-rails provides -url and -path helpers (hyphenated in Sass, underscored in Ruby) for the following asset classes: image, font, video, audio, JavaScript and stylesheet.
  • Rails.application.config.assets.compress
  • In JavaScript files, the directives begin with //=
  • The require_tree directive tells Sprockets to recursively include all JavaScript files in the specified directory into the output.
  • manifest files contain directives — instructions that tell Sprockets which files to require in order to build a single CSS or JavaScript file.
  • You should not rely on any particular order among those
  • Sprockets uses manifest files to determine which assets to include and serve.
  • the family of require directives prevents files from being included twice in the output
  • which files to require in order to build a single CSS or JavaScript file
  • Directives are processed top to bottom, but the order in which files are included by require_tree is unspecified.
  • In JavaScript files, Sprockets directives begin with //=
  • If require_self is called more than once, only the last call is respected.
  • require directive is used to tell Sprockets the files you wish to require.
  • You need not supply the extensions explicitly. Sprockets assumes you are requiring a .js file when done from within a .js file
  • paths must be specified relative to the manifest file
  • require_directory
  • Rails 4 creates both app/assets/javascripts/application.js and app/assets/stylesheets/application.css regardless of whether the --skip-sprockets option is used when creating a new rails application.
  • The file extensions used on an asset determine what preprocessing is applied.
  • app/assets/stylesheets/application.css
  • Additional layers of preprocessing can be requested by adding other extensions, where each extension is processed in a right-to-left manner
  • require_self
  • use the Sass @import rule instead of these Sprockets directives.
  • Keep in mind that the order of these preprocessors is important
  • In development mode, assets are served as separate files in the order they are specified in the manifest file.
  • when these files are requested they are processed by the processors provided by the coffee-script and sass gems and then sent back to the browser as JavaScript and CSS respectively.
  • css.scss.erb
  • js.coffee.erb
  • Keep in mind the order of these preprocessors is important.
  • By default Rails assumes that assets have been precompiled and will be served as static assets by your web server
  • with the Asset Pipeline the :cache and :concat options aren't used anymore
  • Assets are compiled and cached on the first request after the server is started
  • RAILS_ENV=production bundle exec rake assets:precompile
  • Debug mode can also be enabled in Rails helper methods
  • If you set config.assets.initialize_on_precompile to false, be sure to test rake assets:precompile locally before deploying
  • By default Rails assumes assets have been precompiled and will be served as static assets by your web server.
  • a rake task to compile the asset manifests and other files in the pipeline
  • RAILS_ENV=production bin/rake assets:precompile
  • a recipe to handle this in deployment
  • links the folder specified in config.assets.prefix to shared/assets
  • config/initializers/assets.rb
  • The initialize_on_precompile change tells the precompile task to run without invoking Rails
  • The X-Sendfile header is a directive to the web server to ignore the response from the application, and instead serve a specified file from disk
  • the jquery-rails gem which comes with Rails as the standard JavaScript library gem.
  • Possible options for JavaScript compression are :closure, :uglifier and :yui
  • concatenate assets
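A small sketch of the ERB-in-assets technique from these notes (file and image names hypothetical):

    /* app/assets/stylesheets/application.css.erb */
    .logo { background-image: url(<%= asset_path 'logo.png' %>); }

The same idea works in JavaScript via an application.js.erb file, and sass-rails provides the equivalent image-url / image-path helpers for Sass files.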
張 旭

Volumes - Kubernetes - 0 views

  • On-disk files in a Container are ephemeral,
  • when a Container crashes, kubelet will restart it, but the files will be lost - the Container starts with a clean state
  • In Docker, a volume is simply a directory on disk or in another Container.
  • A Kubernetes volume, on the other hand, has an explicit lifetime - the same as the Pod that encloses it.
  • a volume outlives any Containers that run within the Pod, and data is preserved across Container restarts.
    • 張 旭
       
      A Kubernetes volume's lifecycle follows its Pod's.
  • Kubernetes supports many types of volumes, and a Pod can use any number of them simultaneously.
  • To use a volume, a Pod specifies what volumes to provide for the Pod (the .spec.volumes field) and where to mount those into Containers (the .spec.containers.volumeMounts field). A complete Pod sketch appears at the end of this list.
  • A process in a container sees a filesystem view composed from their Docker image and volumes.
  • Volumes can not mount onto other volumes or have hard links to other volumes.
  • Each Container in the Pod must independently specify where to mount each volume
  • local, nfs
  • cephfs
  • awsElasticBlockStore
  • glusterfs
  • vsphereVolume
  • An awsElasticBlockStore volume mounts an Amazon Web Services (AWS) EBS Volume into your Pod.
  • the contents of an EBS volume are preserved and the volume is merely unmounted.
  • an EBS volume can be pre-populated with data, and that data can be “handed off” between Pods.
  • create an EBS volume using aws ec2 create-volume
  • the nodes on which Pods are running must be AWS EC2 instances
  • EBS only supports a single EC2 instance mounting a volume
  • check that the size and EBS volume type are suitable for your use!
  • A cephfs volume allows an existing CephFS volume to be mounted into your Pod.
  • the contents of a cephfs volume are preserved and the volume is merely unmounted.
    • 張 旭
       
      Effectively your own self-managed AWS EBS.
  • CephFS can be mounted by multiple writers simultaneously.
  • have your own Ceph server running with the share exported
  • configMap
  • The configMap resource provides a way to inject configuration data into Pods
  • When referencing a configMap object, you can simply provide its name in the volume to reference it
  • volumeMounts:
      - name: config-vol
        mountPath: /etc/config
    volumes:
      - name: config-vol
        configMap:
          name: log-config
          items:
            - key: log_level
              path: log_level
  • create a ConfigMap before you can use it.
  • A Container using a ConfigMap as a subPath volume mount will not receive ConfigMap updates.
  • An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node.
  • When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever.
  • By default, emptyDir volumes are stored on whatever medium is backing the node - that might be disk or SSD or network storage, depending on your environment.
  • you can set the emptyDir.medium field to "Memory" to tell Kubernetes to mount a tmpfs (RAM-backed filesystem)
  • volumeMounts:
      - mountPath: /cache
        name: cache-volume
    volumes:
      - name: cache-volume
        emptyDir: {}
  • An fc volume allows an existing fibre channel volume to be mounted in a Pod.
  • configure FC SAN Zoning to allocate and mask those LUNs (volumes) to the target WWNs beforehand so that Kubernetes hosts can access them.
  • Flocker is an open-source clustered Container data volume manager. It provides management and orchestration of data volumes backed by a variety of storage backends.
  • emptyDir
  • flocker
  • A flocker volume allows a Flocker dataset to be mounted into a Pod
  • have your own Flocker installation running
  • A gcePersistentDisk volume mounts a Google Compute Engine (GCE) Persistent Disk into your Pod.
  • Using a PD on a Pod controlled by a ReplicationController will fail unless the PD is read-only or the replica count is 0 or 1
  • A glusterfs volume allows a Glusterfs (an open source networked filesystem) volume to be mounted into your Pod.
  • have your own GlusterFS installation running
  • A hostPath volume mounts a file or directory from the host node’s filesystem into your Pod.
  • a powerful escape hatch for some applications
  • access to Docker internals; use a hostPath of /var/lib/docker
  • allowing a Pod to specify whether a given hostPath should exist prior to the Pod running, whether it should be created, and what it should exist as
  • specify a type for a hostPath volume
  • the files or directories created on the underlying hosts are only writable by root.
  • hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory
  • An iscsi volume allows an existing iSCSI (SCSI over IP) volume to be mounted into your Pod.
  • have your own iSCSI server running
  • A feature of iSCSI is that it can be mounted as read-only by multiple consumers simultaneously.
  • A local volume represents a mounted local storage device such as a disk, partition or directory.
  • Local volumes can only be used as a statically created PersistentVolume.
  • Compared to hostPath volumes, local volumes can be used in a durable and portable manner without manually scheduling Pods to nodes, as the system is aware of the volume’s node constraints by looking at the node affinity on the PersistentVolume.
  • If a node becomes unhealthy, then the local volume will also become inaccessible, and a Pod using it will not be able to run.
  • PersistentVolume spec using a local volume and nodeAffinity
  • PersistentVolume nodeAffinity is required when using local volumes. It enables the Kubernetes scheduler to correctly schedule Pods using local volumes to the correct node.
  • PersistentVolume volumeMode can now be set to “Block” (instead of the default value “Filesystem”) to expose the local volume as a raw block device.
  • When using local volumes, it is recommended to create a StorageClass with volumeBindingMode set to WaitForFirstConsumer
  • An nfs volume allows an existing NFS (Network File System) share to be mounted into your Pod.
  • NFS can be mounted by multiple writers simultaneously.
  • have your own NFS server running with the share exported
  • A persistentVolumeClaim volume is used to mount a PersistentVolume into a Pod.
  • PersistentVolumes are a way for users to “claim” durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment.
  • A projected volume maps several existing volume sources into the same directory.
  • All sources are required to be in the same namespace as the Pod. For more details, see the all-in-one volume design document.
  • Each projected volume source is listed in the spec under sources
  • A Container using a projected volume source as a subPath volume mount will not receive updates for those volume sources.
  • RBD volumes can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed
  • A secret volume is used to pass sensitive information, such as passwords, to Pods
  • store secrets in the Kubernetes API and mount them as files for use by Pods
  • secret volumes are backed by tmpfs (a RAM-backed filesystem) so they are never written to non-volatile storage.
  • create a secret in the Kubernetes API before you can use it
  • A Container using a Secret as a subPath volume mount will not receive Secret updates.
  • StorageOS runs as a Container within your Kubernetes environment, making local or attached storage accessible from any node within the Kubernetes cluster.
  • Data can be replicated to protect against node failure. Thin provisioning and compression can improve utilization and reduce cost.
  • StorageOS provides block storage to Containers, accessible via a file system.
  • A vsphereVolume is used to mount a vSphere VMDK Volume into your Pod.
  • supports both VMFS and VSAN datastore.
  • create VMDK using one of the following methods before using with Pod.
  • share one volume for multiple uses in a single Pod.
  • The volumeMounts.subPath property can be used to specify a sub-path inside the referenced volume instead of its root.
  • volumeMounts:
      - name: workdir1
        mountPath: /logs
        subPathExpr: $(POD_NAME)
  • env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.name
  • Use the subPathExpr field to construct subPath directory names from Downward API environment variables
  • enable the VolumeSubpathEnvExpansion feature gate
  • The subPath and subPathExpr properties are mutually exclusive.
  • There is no limit on how much space an emptyDir or hostPath volume can consume, and no isolation between Containers or between Pods.
  • emptyDir and hostPath volumes will be able to request a certain amount of space using a resource specification, and to select the type of media to use, for clusters that have several media types.
  • the Container Storage Interface (CSI) and Flexvolume. They enable storage vendors to create custom storage plugins without adding them to the Kubernetes repository.
  • all volume plugins (like volume types listed above) were “in-tree” meaning they were built, linked, compiled, and shipped with the core Kubernetes binaries and extend the core Kubernetes API.
  • Container Storage Interface (CSI) defines a standard interface for container orchestration systems (like Kubernetes) to expose arbitrary storage systems to their container workloads.
  • Once a CSI compatible volume driver is deployed on a Kubernetes cluster, users may use the csi volume type to attach, mount, etc. the volumes exposed by the CSI driver.
  • The csi volume type does not support direct reference from Pod and may only be referenced in a Pod via a PersistentVolumeClaim object.
  • This feature requires the CSIInlineVolume feature gate to be enabled: --feature-gates=CSIInlineVolume=true
  • In-tree plugins that support CSI Migration and have a corresponding CSI driver implemented are listed in the “Types of Volumes” section above.
  • Mount propagation allows for sharing volumes mounted by a Container to other Containers in the same Pod, or even to other Pods on the same node.
  • Mount propagation of a volume is controlled by mountPropagation field in Container.volumeMounts.
  • HostToContainer - This volume mount will receive all subsequent mounts that are mounted to this volume or any of its subdirectories.
  • Bidirectional - This volume mount behaves the same as the HostToContainer mount. In addition, all volume mounts created by the Container will be propagated back to the host and to all Containers of all Pods that use the same volume.
  • Edit your Docker’s systemd service file. Set MountFlags as follows: MountFlags=shared
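Expanding the emptyDir fragment above into a complete minimal Pod manifest (names and image illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: cache-demo
    spec:
      containers:
        - name: app
          image: busybox
          command: ["sh", "-c", "sleep 3600"]
          volumeMounts:
            - name: cache-volume
              mountPath: /cache        # where the volume appears inside this container
      volumes:
        - name: cache-volume
          emptyDir: {}                 # node-local scratch space, deleted with the Pod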