Group items tagged "image" (Larvata)

張 旭

Build an Image - Getting Started - Packer by HashiCorp - 0 views

  • The configuration file used to define what image we want built and how is called a template in Packer terminology.
  • JSON strikes the best balance between being human-editable and machine-editable, allowing both hand-made and machine-generated templates to be produced easily (a minimal template sketch follows this list).
  • keeping your secret keys out of the template
  • validate the template by running packer validate example.json. This command checks the syntax as well as the configuration values to verify they look valid.
  • At the end of running packer build, Packer outputs the artifacts that were created as part of the build.
  • Packer only builds images. It does not attempt to manage them in any way.
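
A minimal template sketch in the spirit of the Getting Started guide; the builder type is real, but the region, AMI ID, and credential wiring are illustrative. Reading the keys from environment variables is what keeps secret keys out of the template:

    {
      "variables": {
        "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
        "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}"
      },
      "builders": [
        {
          "type": "amazon-ebs",
          "access_key": "{{user `aws_access_key`}}",
          "secret_key": "{{user `aws_secret_key`}}",
          "region": "us-east-1",
          "source_ami": "ami-0c55b159cbfafe1f0",
          "instance_type": "t2.micro",
          "ssh_username": "ubuntu",
          "ami_name": "packer-example {{timestamp}}"
        }
      ]
    }

Check it with packer validate example.json and build with packer build example.json; the resulting artifact (here, an AMI ID) is printed when the build finishes.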
張 旭

Secrets - Kubernetes - 0 views

  • Putting this information in a secret is safer and more flexible than putting it verbatim in a Pod definition or in a container image. (A Pod is the smallest and simplest Kubernetes object, representing a set of running containers on your cluster; a container image is a stored instance of a container that holds a set of software needed to run an application.)
  • A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key.
  • Users can create secrets, and the system also creates some secrets.
  • To use a secret, a pod needs to reference the secret.
  • A secret can be used with a pod in two ways: as files in a volume (a directory containing data, accessible to the containers in a pod) mounted on one or more of its containers, or used by kubelet when pulling images for the pod.
  • --from-file
  • You can also create a Secret in a file first, in json or yaml format, and then create that object.
  • The Secret contains two maps: data and stringData.
  • The data field is used to store arbitrary data, encoded using base64.
  • Kubernetes automatically creates secrets which contain credentials for accessing the API and it automatically modifies your pods to use this type of secret.
  • kubectl get and kubectl describe avoid showing the contents of a secret by default.
  • stringData field is provided for convenience, and allows you to provide secret data as unencoded strings.
  • where you are deploying an application that uses a Secret to store a configuration file, and you want to populate parts of that configuration file during your deployment process.
  • If a field is specified in both data and stringData, the value from stringData is used (both fields appear in the sketch after this list).
  • The keys of data and stringData must consist of alphanumeric characters, ‘-’, ‘_’ or ‘.’.
  • Newlines are not valid within these strings and must be omitted.
  • When using the base64 utility on Darwin/macOS users should avoid using the -b option to split long lines.
  • create a Secret from generators and then apply it to create the object on the Apiserver.
  • The generated Secrets name has a suffix appended by hashing the contents.
  • base64 --decode
  • Secrets can be mounted as data volumes or be exposed as environment variables (name=value pairs that provide useful information into containers running in a Pod) to be used by a container in a pod.
  • Multiple pods can reference the same secret.
  • Each key in the secret data map becomes the filename under mountPath
  • each container needs its own volumeMounts block, but only one .spec.volumes is needed per secret
  • use .spec.volumes[].secret.items field to change target path of each key:
  • If .spec.volumes[].secret.items is used, only keys specified in items are projected. To consume all keys from the secret, all of them must be listed in the items field.
  • You can also specify the permission mode bits that files within a secret will have. If you don’t specify any, 0644 is used by default.
  • JSON spec doesn’t support octal notation, so use the value 256 for 0400 permissions.
  • Inside the container that mounts a secret volume, the secret keys appear as files and the secret values are base-64 decoded and stored inside these files.
  • Mounted Secrets are updated automatically
  • Kubelet is checking whether the mounted secret is fresh on every periodic sync.
  • cache propagation delay depends on the chosen cache type
  • A container using a Secret as a subPath volume mount will not receive Secret updates.
  • env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: username
  • Inside a container that consumes a secret in an environment variables, the secret keys appear as normal environment variables containing the base-64 decoded values of the secret data.
  • An imagePullSecret is a way to pass a secret that contains a Docker (or other) image registry password to the Kubelet so it can pull a private image on behalf of your Pod.
  • a secret needs to be created before any pods that depend on it.
  • Secret API objects reside in a namespace (an abstraction used by Kubernetes to support multiple virtual clusters on the same physical cluster). They can only be referenced by pods in that same namespace.
  • Individual secrets are limited to 1MiB in size.
  • Kubelet only supports use of secrets for Pods it gets from the API server.
  • Secrets must be created before they are consumed in pods as environment variables unless they are marked as optional.
  • References to Secrets that do not exist will prevent the pod from starting.
  • References via secretKeyRef to keys that do not exist in a named Secret will prevent the pod from starting.
  • Once a pod is scheduled, the kubelet will try to fetch the secret value.
  • Think carefully before sending your own ssh keys: other users of the cluster may have access to the secret.
  • volumes:
    - name: secret-volume
      secret:
        secretName: ssh-key-secret
  • Special characters such as $, *, and ! require escaping. If the password you are using has special characters, you need to escape them using the \ character.
  • You do not need to escape special characters in passwords from files
  • make that key begin with a dot
  • Dotfiles in secret volume
  • .secret-file
  • a frontend container which handles user interaction and business logic, but which cannot see the private key;
  • a signer container that can see the private key, and responds to simple signing requests from the frontend
  • When deploying applications that interact with the secrets API, access should be limited using authorization policies such as RBAC
  • watch and list requests for secrets within a namespace are extremely powerful capabilities and should be avoided
  • watch and list all secrets in a cluster should be reserved for only the most privileged, system-level components.
  • additional precautions with secret objects, such as avoiding writing them to disk where possible.
  • A secret is only sent to a node if a pod on that node requires it
  • only the secrets that a pod requests are potentially visible within its containers
  • each container in a pod has to request the secret volume in its volumeMounts for it to be visible within the container.
  • In the API server, secret data is stored in etcd (a consistent and highly-available key value store used as Kubernetes’ backing store for all cluster data).
  • limit access to etcd to admin users
  • Base64 encoding is not an encryption method and is considered the same as plain text.
  • A user who can create a pod that uses a secret can also see the value of that secret.
  • anyone with root on any node can read any secret from the apiserver, by impersonating the kubelet.
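
A sketch tying the pieces above together; the names (mysecret, my-username, the pod, and the image) are illustrative:

    # Create a Secret from a literal, then decode one value back out:
    kubectl create secret generic mysecret --from-literal=username=admin
    kubectl get secret mysecret -o jsonpath='{.data.username}' | base64 --decode

    # The equivalent manifest; data is base64-encoded, stringData is plain text:
    apiVersion: v1
    kind: Secret
    metadata:
      name: mysecret
    type: Opaque
    data:
      username: YWRtaW4=   # base64 of "admin"
    stringData:
      config.yaml: |
        environment: production

    # Consuming it as a volume, with a per-key target path and mode bits:
    apiVersion: v1
    kind: Pod
    metadata:
      name: mypod
    spec:
      containers:
        - name: app
          image: registry.example.com/app:1.0
          volumeMounts:
            - name: secret-volume
              mountPath: /etc/secrets
              readOnly: true
      volumes:
        - name: secret-volume
          secret:
            secretName: mysecret
            defaultMode: 256   # 0400 in octal; the JSON spec has no octal notation
            items:
              - key: username
                path: my-username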
張 旭

GitLab Auto DevOps explained: automatic deployment, without even a config file?! | 五倍紅寶石 - 0 views

  • A K8S Cluster; Auto DevOps will deploy the site to this Cluster.
  • A wildcard DNS entry, so that sites deployed in this environment get a domain name.
  • A GitLab Runner that can run Docker; it will execute the CI/CD pipeline.
  • Auto DevOps is really just an official, pre-written gitlab-ci.yml: in a project with Auto DevOps enabled, if no gitlab-ci.yml is found, the official gitlab-ci.yml is used to run the CI/CD pipeline.
  • A Pod is the smallest deployable unit in K8S. A Pod consists of one or more Containers, and the Containers in the same Pod share network resources.
  • Each Pod has a yaml file describing the Image it uses, the Ports it connects, and so on.
  • Nodes come in two kinds: Worker Nodes and Master Nodes.
  • Helm uses parameters and templates so that a template can be reused by changing only the parameters.
  • To get CI/CD we put .gitlab-ci.yml in the project root; GitLab generates a CI/CD Pipeline from .gitlab-ci.yml. Each Pipeline may contain multiple Jobs, and a GitLab Runner is needed to execute those Jobs and report the results back to GitLab so it knows whether each Job ran successfully.
  • Packaging the project into a Docker Image, as well as the helm operations, run inside a Container.
  • A CI/CD Pipeline is composed of stages and jobs. Stages are ordered: the next stage starts only after the previous one completes.
  • Each stage contains one or more Jobs.
  • Auto DevOps also makes heavy use of this kind of job that runs inside a specified Container.
  • so that it can pass health checks
  • If the project is private, also watch out for Container Registry permission issues.
  • the wildcard DNS you applied for
  • Auto DevOps also offers a degree of customization just by setting environment variables.
  • Pay special attention to whether the namespace is set correctly, otherwise the data will not be found.
  • With Auto DevOps, if you want customization that goes beyond what changing GitLab environment variables can achieve, you have to go back to the .gitlab-ci.yml config file.
  • Build the Image with a Dockerfile in a Docker-in-Docker environment (a sketch follows this list).
  • Deploy the chart to K8S with helm upgrade.
  • GitLab CI environment variables come from three sources, in priority from high to low: variables defined in the Settings > CI/CD UI; environment variables defined in gitlab_ci.yml; GitLab default environment variables.
  • To package the project into a Docker Image, first add a Dockerfile to the project.
  • The Auto DevOps approach packages the project using the Image provided by herokuish.
  • The docker command is not available in the Runner's own environment, so a Docker Container is started and the work runs inside it, where the docker command is available.
  • $CI_COMMIT_SHA and $CI_COMMIT_BEFORE_SHA are GitLab default environment variables holding the SHA of this commit and of the previous commit.
  • dind starts the docker daemon directly; in addition, dind automatically generates TLS certificates.
  • To run Docker inside a Docker Container, the Docker API on the Host is shared with the Container.
  • docker:stable has the executables needed to run docker, and it also contains the program that starts docker (the docker daemon), but the Container's entrypoint is sh.
  • docker:dind inherits from docker:stable; its entrypoint is the script that starts docker, and it also sets up the TLS certificates.
  • The Container is supposed to reach the Docker API on the Host, yet the failing connection went to http://docker:2375. Here dind is no longer used as a service; Docker runs directly inside it, so the connection should use unix:///var/run/docker.sock. Changing the environment variable DOCKER_HOST from tcp://docker:2375 to an empty string lets the docker client fall back to the default connection, and it works.
  • auto-deploy preparation: helm init sets up the helm project, tiller runs in the background, and the cluster namespace is configured.
  • auto-deploy deploy: helm upgrade deploys the chart to K8S, with --set injecting the parameters for the template.
  • set -x prints each command before it is executed.
  • Use helm repo list to see which Chart Repositories are currently registered.
  • helm fetch gitlab/auto-deploy-app --untar
  • nohup lets a job keep running after you disconnect or log out.
  • If CI_APPLICATION_REPOSITORY is not set explicitly, image_repository defaults to the default environment variables CI_REGISTRY_IMAGE/CI_COMMIT_REF_SLUG.
  • A:-B means: if A is set, use it; otherwise use B.
  • The hardest part of studying Auto DevOps is that so many tools are integrated together; it is unclear how they relate to one another, and when something goes wrong you don't know where to start looking.
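
A minimal sketch of the Docker-in-Docker build job described above; the image tags are illustrative, and DOCKER_HOST is shown for the conventional service-style setup (the troubleshooting note above covers when to clear it):

    build:
      stage: build
      image: docker:stable
      services:
        - docker:dind
      variables:
        DOCKER_HOST: tcp://docker:2375   # dind running as a linked service
      script:
        - docker build -t "$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHA" .
        - docker push "$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHA"

The A:-B fallback mentioned above is shell parameter expansion, for example:

    image_repository=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG}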
張 旭

Speeding up Docker image build process of a Rails application | BigBinary Blog - 1 views

  • we do not want to execute bundle install and rake assets:precompile tasks while starting a container in each pod which would prevent that pod from accepting any requests until these tasks are finished.
  • run bundle install and rake assets:precompile tasks while or before containerizing the Rails application.
  • Kubernetes pulls the image, starts a Docker container using that image inside the pod and runs puma server immediately.
  • Since source code changes often, the previously cached layer for the ADD instruction is invalidated due to the mismatching checksums.
  • The ARG instruction in the Dockerfile defines RAILS_ENV variable and is implicitly used as an environment variable by the rest of the instructions defined just after that ARG instruction.
  • RUN instructions are used to install gems and precompile static assets using sprockets
  • Instead, Docker automatically re-uses the previously built layer for the RUN bundle install instruction if the Gemfile.lock file remains unchanged.
  • everyday we need to build a lot of Docker images containing source code from varying Git branches as well as with varying environments.
  • it is hard for Docker to cache layers for bundle install and rake assets:precompile tasks and re-use those layers during every docker build command run with different application source code and a different environment.
  • By default, Bundler installs gems at the location which is set by Rubygems.
crazylion lee

fogleman/primitive: Reproducing images with geometric primitives. - 0 views

  •  
    "Reproducing images with geometric primitives"
crazylion lee

FLIF - Free Lossless Image Format - 0 views

shared by crazylion lee on 09 Oct 15
  •  
    "FLIF is a novel lossless image format which outperforms PNG, lossless WebP, lossless BPG and lossless JPEG2000 in terms of compression ratio. "
張 旭

| Docker Documentation - 0 views

  • The host directory is declared at container run-time: The host directory (the mountpoint) is, by its nature, host-dependent. This is to preserve image portability, since a given host directory can’t be guaranteed to be available on all hosts.
  • This Dockerfile results in an image that causes docker run to create a new mount point at /myvol and copy the greeting file into the newly created volume.
張 旭

Use multi-stage builds | Docker Documentation - 0 views

  • Maintaining two Dockerfiles is not ideal.
  • This is failure-prone and hard to maintain. It’s easy to insert another command and forget to continue the line using the \ character
  • create a container from it to copy the artifact out
  • You only need the single Dockerfile. You don’t need a separate build script,
  • You don’t need to create any intermediate images and you don’t need to extract any artifacts to your local system at all.
  • Debugging a specific build stage
  • You can use the COPY --from instruction to copy from a separate image, either using the local image name, a tag available locally or on a Docker registry, or a tag ID.
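
A minimal multi-stage sketch; Go and the image tags are illustrative choices:

    # Build stage: contains the full toolchain.
    FROM golang:1.21 AS builder
    WORKDIR /src
    COPY . .
    RUN go build -o /bin/app .

    # Final stage: only the built artifact is copied in.
    FROM alpine:3.19
    COPY --from=builder /bin/app /usr/local/bin/app
    CMD ["app"]

A single stage can be debugged by stopping the build there, e.g. docker build --target builder -t app:build .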
張 旭

Glossary - CircleCI - 0 views

  • User authentication may use LDAP for an instance of the CircleCI application that is installed on your private server or cloud
  • The first user to log into a private installation of CircleCI
  • Contexts provide a mechanism for securing and sharing environment variables across projects.
  • The environment variables are defined as name/value pairs and are injected at runtime.
  • The CircleCI Docker Layer Caching feature allows builds to reuse Docker image layers from previous builds.
  • Image layers are stored in separate volumes in the cloud and are not shared between projects.
  • Layers may only be used by builds from the same project.
  • Environment variables store customer data that is used by a project.
  • Defines the underlying technology to run a job.
  • machine to run your job inside a full virtual machine.
  • docker to run your job inside a Docker container with a specified image
  • A job is a collection of steps.
  • The first image listed in config.yml
  • A CircleCI project shares the name of the code repository for which it automates workflows, tests, and deployment.
  • must be added with the Add Project button
  • Following a project enables a user to subscribe to email notifications for the project build status and adds the project to their CircleCI dashboard.
  • A step is a collection of executable commands
  • Users must be added to a GitHub or Bitbucket org to view or follow associated CircleCI projects.
  • Users may not view project data that is stored in environment variables.  
  • A Workflow is a set of rules for defining a collection of jobs and their run order.
  • Workflows are implemented as a directed acyclic graph (DAG) of jobs for greatest flexibility.
  • referred to as Pipelines
  • A workspace is a workflows-aware storage mechanism.
  • A workspace stores data unique to the job, which may be needed in downstream jobs.
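
A minimal config.yml sketch of the terms above: a docker executor, a job made of steps, and a workflow ordering jobs (the image tag is illustrative):

    version: 2.1
    jobs:
      build:
        docker:
          - image: cimg/base:2023.06   # first image listed = primary container
        steps:
          - checkout
          - run: echo "each step is an executable command"
    workflows:
      main:
        jobs:
          - build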
張 旭

The Asset Pipeline - Ruby on Rails Guides - 0 views

  • provides a framework to concatenate and minify or compress JavaScript and CSS assets
  • adds the ability to write these assets in other languages and pre-processors such as CoffeeScript, Sass and ERB
  • invalidate the cache by altering this fingerprint
  • Rails 4 automatically adds the sass-rails, coffee-rails and uglifier gems to your Gemfile
  • reduce the number of requests that a browser makes to render a web page
  • Starting with version 3.1, Rails defaults to concatenating all JavaScript files into one master .js file and all CSS files into one master .css file
  • In production, Rails inserts an MD5 fingerprint into each filename so that the file is cached by the web browser
  • The technique sprockets uses for fingerprinting is to insert a hash of the content into the name, usually at the end.
  • asset minification or compression
  • The sass-rails gem is automatically used for CSS compression if included in Gemfile and no config.assets.css_compressor option is set.
  • Supported languages include Sass for CSS, CoffeeScript for JavaScript, and ERB for both by default.
  • When a filename is unique and based on its content, HTTP headers can be set to encourage caches everywhere (whether at CDNs, at ISPs, in networking equipment, or in web browsers) to keep their own copy of the content
  • asset pipeline is technically no longer a core feature of Rails 4
  • With the asset pipeline, the preferred location for these assets is now the app/assets directory.
  • Fingerprinting is enabled by default for production and disabled for all other environments
  • The files in app/assets are never served directly in production.
  • Paths are traversed in the order that they occur in the search path
  • You should use app/assets for files that must undergo some pre-processing before they are served.
  • By default .coffee and .scss files will not be precompiled on their own
  • app/assets is for assets that are owned by the application, such as custom images, JavaScript files or stylesheets.
  • lib/assets is for your own libraries' code that doesn't really fit into the scope of the application or those libraries which are shared across applications.
  • vendor/assets is for assets that are owned by outside entities, such as code for JavaScript plugins and CSS frameworks.
  • Any path under assets/* will be searched
  • By default these files will be ready to use by your application immediately using the require_tree directive.
  • By default, this means the files in app/assets take precedence, and will mask corresponding paths in lib and vendor
  • Sprockets uses files named index (with the relevant extensions) for a special purpose
  • Rails.application.config.assets.paths
  • causes turbolinks to check if an asset has been updated and if so loads it into the page
  • if you add an erb extension to a CSS asset (for example, application.css.erb), then helpers like asset_path are available in your CSS rules
  • If you add an erb extension to a JavaScript asset, making it something such as application.js.erb, then you can use the asset_path helper in your JavaScript code
  • The asset pipeline automatically evaluates ERB
  • data URI — a method of embedding the image data directly into the CSS file — you can use the asset_data_uri helper.
  • Sprockets will also look through the paths specified in config.assets.paths, which includes the standard application paths and any paths added by Rails engines.
  • image_tag
  • the closing tag cannot be of the style -%>
  • asset_data_uri
  • app/assets/javascripts/application.js
  • sass-rails provides -url and -path helpers (hyphenated in Sass, underscored in Ruby) for the following asset classes: image, font, video, audio, JavaScript and stylesheet.
  • Rails.application.config.assets.compress
  • In JavaScript files, the directives begin with //=
  • The require_tree directive tells Sprockets to recursively include all JavaScript files in the specified directory into the output.
  • manifest files contain directives — instructions that tell Sprockets which files to require in order to build a single CSS or JavaScript file.
  • You should not rely on any particular order among those
  • Sprockets uses manifest files to determine which assets to include and serve.
  • the family of require directives prevents files from being included twice in the output
  • Directives are processed top to bottom, but the order in which files are included by require_tree is unspecified.
  • If require_self is called more than once, only the last call is respected.
  • require directive is used to tell Sprockets the files you wish to require.
  • You need not supply the extensions explicitly. Sprockets assumes you are requiring a .js file when done from within a .js file
  • paths must be specified relative to the manifest file
  • require_directory
  • Rails 4 creates both app/assets/javascripts/application.js and app/assets/stylesheets/application.css regardless of whether the --skip-sprockets option is used when creating a new rails application.
  • The file extensions used on an asset determine what preprocessing is applied.
  • app/assets/stylesheets/application.css
  • Additional layers of preprocessing can be requested by adding other extensions, where each extension is processed in a right-to-left manner
  • require_self
  • use the Sass @import rule instead of these Sprockets directives.
  • Keep in mind that the order of these preprocessors is important
  • In development mode, assets are served as separate files in the order they are specified in the manifest file.
  • when these files are requested they are processed by the processors provided by the coffee-script and sass gems and then sent back to the browser as JavaScript and CSS respectively.
  • css.scss.erb
  • js.coffee.erb
  • By default Rails assumes that assets have been precompiled and will be served as static assets by your web server
  • with the Asset Pipeline the :cache and :concat options aren't used anymore
  • Assets are compiled and cached on the first request after the server is started
  • RAILS_ENV=production bundle exec rake assets:precompile
  • Debug mode can also be enabled in Rails helper methods
  • If you set config.assets.initialize_on_precompile to false, be sure to test rake assets:precompile locally before deploying
  • a rake task to compile the asset manifests and other files in the pipeline
  • RAILS_ENV=production bin/rake assets:precompile
  • a recipe to handle this in deployment
  • links the folder specified in config.assets.prefix to shared/assets
  • config/initializers/assets.rb
  • The initialize_on_precompile change tells the precompile task to run without invoking Rails
  • The X-Sendfile header is a directive to the web server to ignore the response from the application, and instead serve a specified file from disk
  • the jquery-rails gem which comes with Rails as the standard JavaScript library gem.
  • Possible options for JavaScript compression are :closure, :uglifier and :yui
  • concatenate assets
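
A sketch of the manifests and ERB helpers described above, as they appear in a typical Rails app of that era:

    // app/assets/javascripts/application.js
    //= require jquery
    //= require jquery_ujs
    //= require_tree .

    /* app/assets/stylesheets/application.css
     *= require_self
     *= require_tree .
     */

    /* app/assets/stylesheets/widgets.css.erb (ERB is evaluated first,
       so asset_path resolves to the fingerprinted file): */
    .logo { background-image: url(<%= asset_path 'logo.png' %>); }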
張 旭

The differences between Docker, containerd, CRI-O and runc - Tutorial Works - 0 views

  • Docker isn’t the only container contender on the block.
  • Container Runtime Interface (CRI), which defines an API between Kubernetes and the container runtime
  • Open Container Initiative (OCI) which publishes specifications for images and containers.
  • ...20 more annotations...
  • for a lot of people, the name “Docker” itself is synonymous with the word “container”.
  • Docker created a very ergonomic (nice-to-use) tool for working with containers – also called docker.
  • docker is designed to be installed on a workstation or server and comes with a bunch of tools to make it easy to build and run containers as a developer, or DevOps person.
  • containerd: This is a daemon process that manages and runs containers.
  • runc: This is the low-level container runtime (the thing that actually creates and runs containers).
  • libcontainer, a native Go-based implementation for creating containers.
  • Kubernetes includes a component called dockershim, which allows it to support Docker.
  • Kubernetes prefers to run containers through any container runtime which supports its Container Runtime Interface (CRI).
  • Kubernetes will remove support for Docker directly, and prefer to use only container runtimes that implement its Container Runtime Interface.
  • Both containerd and CRI-O can run Docker-formatted (actually OCI-formatted) images, they just do it without having to use the docker command or the Docker daemon.
  • Docker images, are actually images packaged in the Open Container Initiative (OCI) format.
  • CRI is the API that Kubernetes uses to control the different runtimes that create and manage containers.
  • CRI makes it easier for Kubernetes to use different container runtimes
  • containerd is a high-level container runtime that came from Docker, and implements the CRI spec
  • containerd was separated out of the Docker project, to make Docker more modular.
  • CRI-O is another high-level container runtime which implements the Container Runtime Interface (CRI).
  • The idea behind the OCI is that you can choose between different runtimes which conform to the spec.
  • runc is an OCI-compatible container runtime.
  • A reference implementation is a piece of software that has implemented all the requirements of a specification or standard.
  • runc provides all of the low-level functionality for containers, interacting with existing low-level Linux features, like namespaces and control groups.
張 旭

Services | GitLab - 0 views

  • The services keyword defines a Docker image that runs during a job linked to the Docker image that the image keyword defines. This allows you to access the service image during build time.
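
A minimal sketch of the services keyword; the images and test command are illustrative:

    test:
      image: ruby:3.2
      services:
        - postgres:15   # reachable from the job at hostname "postgres"
      script:
        - bundle exec rake test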
張 旭

Alpine, Slim, Stretch, Buster, Jessie, Bullseye - What are the Differences in Docker Im... - 0 views

  • if you are experiencing an unexplained issue in building your Dockerfile, try switching to the full image to see if that cures it.
  • Don’t ever use <image>:latest in a production Dockerfile.
張 旭

The problem with Docker and Alpine's package pinning | by Stefan Schindler | Medium - 0 views

  • What’s one of the biggest benefits of Docker? Clearly reproducibility: It doesn’t matter where you run your images, or when you run them: The result will always be the same.
  • For example, in Alpine 3.5 the Node.js package might be 2.0, while in Alpine 3.4 it's 1.9. By pinning the repository to Alpine 3.4, you will always get Node.js 1.9, because Alpine 3.4 is an old version that is no longer updated.
  • Unfortunately Alpine Linux does not keep old packages.
crazylion lee

ErikvdVen/php-gif: Easy generating (real-time) GIF images with PHP - 0 views

shared by crazylion lee on 05 Dec 16
  •  
    "Easy generating (real-time) GIF images with PHP"
張 旭

How to write excellent Dockerfiles - 0 views

  • minimize image size, build time and number of layers.
  • maximize build cache usage
  • Container should do one thing
    • 張 旭
       
      This point is debatable; the baseimage blog posts discuss it in detail.
  • Use COPY and RUN commands in proper order
  • Merge multiple RUN commands into one
  • alpine versions should be enough
  • Use exec inside entrypoint script
  • Prefer COPY over ADD
  • Specify default environment variables, ports and volumes inside Dockerfile
  • problems with zombie processes
  • prepare separate Docker image for each component, and use Docker Compose to easily start multiple containers at the same time
  • Layers are cached and reused
  • Layers are immutable
  • They both make you cry
  • rely on our base image updates
  • make a cleanup
  • alpine is a very tiny linux distribution, just about 4 MB in size.
  • Your disk will love you :)
  • The WORKDIR command changes the default directory in which our RUN / CMD / ENTRYPOINT commands run.
  • CMD is the default command run when a container is started without another command specified.
  • put your command inside array
  • entrypoint adds complexity
  • Entrypoint is a script, that will be run instead of command, and receive command as arguments
  • Without it, we would not be able to stop our application gracefully (SIGTERM is swallowed by the bash script).
  • Use "exec" inside entrypoint script
  • ADD has some logic for downloading remote files and extracting archives.
  • stick with COPY.
  • ADD
    • 張 旭
       
      Didn't it just say to prefer COPY?
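
A sketch combining several of the tips above; the app, base image, and port are illustrative:

    FROM node:20-alpine
    WORKDIR /app

    # COPY dependency manifests first to maximize build-cache reuse;
    # source changes then only invalidate the layers below.
    COPY package.json package-lock.json ./
    RUN npm ci --omit=dev
    COPY . .

    # Default environment variables and ports declared in the Dockerfile.
    ENV PORT=8080
    EXPOSE 8080

    COPY entrypoint.sh /usr/local/bin/entrypoint.sh
    ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
    CMD ["node", "server.js"]   # default command, in array form

And the entrypoint script ends with exec so the application, not the shell, receives SIGTERM:

    #!/bin/sh
    set -e
    # ...one-time setup work here...
    exec "$@"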
crazylion lee

Best Rails image uploader - Paperclip vs. Carrierwave vs. Refile - Infinum - 0 views

  •  
    "Everybody that has ever implemented file upload by hand in a Rails app knows that it's no cakewalk, not to mention a major security risk. That's why we use gems to handle file upload for us! But often it's hard to decide which one to choose for your project."
張 旭

Introduction - Packer by HashiCorp - 0 views

  • A machine image is a single static unit that contains a pre-configured operating system and installed software which is used to quickly create new running machines.
張 旭

phusion/baseimage-docker - 1 views

    • 張 旭
       
      Plain docker runs the COMMAND passed to it as PID 1 and ends the container as soon as that command finishes; no other daemons are run. baseimage solves this problem.
    • crazylion lee
       
      Awesome!
  • docker exec
  • Through SSH
  • docker exec -t -i YOUR-CONTAINER-ID bash -l
  • Login to the container
  • Baseimage-docker only advocates running multiple OS processes inside a single container.
  • Password and challenge-response authentication are disabled by default. Only key authentication is allowed.
  • A tool for running a command as another user
  • The Docker developers advocate the philosophy of running a single logical service per container. A logical service can consist of multiple OS processes.
  • All syslog messages are forwarded to "docker logs".
  • Baseimage-docker advocates running multiple OS processes inside a single container, and a single logical service can consist of multiple OS processes.
  • Baseimage-docker provides tools to encourage running processes as different users
  • sometimes it makes sense to run multiple services in a single container, and sometimes it doesn't.
  • Splitting your logical service into multiple OS processes also makes sense from a security standpoint.
  • using environment variables to pass parameters to containers is very much the "Docker way"
  • Baseimage-docker provides a facility to run a single one-shot command, while solving all of the aforementioned problems
  • the shell script must run the daemon without letting it daemonize/fork it.
  • All executable scripts in /etc/my_init.d, if this directory exists. The scripts are run in lexicographic order.
  • variables will also be passed to all child processes
  • Environment variables on Unix are inherited on a per-process basis
  • there is no good central place for defining environment variables for all applications and services
  • centrally defining environment variables
  • One of the ideas behind Docker is that containers should be stateless, easily restartable, and behave like a black box.
  • a one-shot command in a new container
  • immediately exit after the command exits,
  • However the downside of this approach is that the init system is not started. That is, while invoking COMMAND, important daemons such as cron and syslog are not running. Also, orphaned child processes are not properly reaped, because COMMAND is PID 1.
  • add additional daemons (e.g. your own app) to the image by creating runit entries.
  • Nginx is one such example: it removes all environment variables unless you explicitly instruct it to retain them through the env configuration option.
  • Mechanisms for easily running multiple processes, without violating the Docker philosophy
  • Ubuntu is not designed to be run inside Docker
  • According to the Unix process model, the init process -- PID 1 -- inherits all orphaned child processes and must reap them
  • Syslog-ng seems to be much more stable
  • cron daemon
  • Rotates and compresses logs
  • /sbin/setuser
  • A tool for installing apt packages that automatically cleans up after itself.
  • a single logical service inside a single container
  • A daemon is a program which runs in the background of its system, such as a web server.
  • The shell script must be called run, must be executable, and is to be placed in the directory /etc/service/<NAME>. runsv will switch to the directory and invoke ./run after your container starts.
  • If any script exits with a non-zero exit code, the booting will fail.
  • If your process is started with a shell script, make sure you exec the actual process, otherwise the shell will receive the signal and not your process.
  • any environment variables set with docker run --env or with the ENV command in the Dockerfile, will be picked up by my_init
  • not possible for a child process to change the environment variables of other processes
  • they will not see the environment variables that were originally passed by Docker.
  • We ignore HOME, SHELL, USER and a bunch of other environment variables on purpose, because not ignoring them will break multi-user containers.
  • my_init imports environment variables from the directory /etc/container_environment
  • /etc/container_environment.sh - a dump of the environment variables in Bash format.
  • modify the environment variables in my_init (and therefore the environment variables in all child processes that are spawned after that point in time), by altering the files in /etc/container_environment
  • my_init only activates changes in /etc/container_environment when running startup scripts
  • environment variables don't contain sensitive data, then you can also relax the permissions
  • Syslog messages are forwarded to the console
  • syslog-ng is started separately before the runit supervisor process, and shutdown after runit exits.
  • RUN apt-get update && apt-get upgrade -y -o Dpkg::Options::="--force-confold"
  • /sbin/my_init --skip-startup-files --quiet --
  • By default, no keys are installed, so nobody can login
  • provide a pregenerated, insecure key (PuTTY format)
  • RUN /usr/sbin/enable_insecure_key
  • docker run YOUR_IMAGE /sbin/my_init --enable-insecure-key
  • RUN cat /tmp/your_key.pub >> /root/.ssh/authorized_keys && rm -f /tmp/your_key.pub
  • The default baseimage-docker installs syslog-ng, cron and sshd services during the build process
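
A sketch of adding your own daemon as a runit entry, following the README's pattern; the service name and binary are illustrative. In the Dockerfile:

    RUN mkdir /etc/service/webapp
    COPY webapp.sh /etc/service/webapp/run
    RUN chmod +x /etc/service/webapp/run

And the run script itself: it must stay in the foreground, exec the real process so signals reach it, and can drop root via the bundled setuser tool:

    #!/bin/sh
    exec /sbin/setuser www-data /usr/bin/webapp >> /var/log/webapp.log 2>&1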