
Larvata: Group items tagged "layer"


張 旭

Probably Done Before: Visualizing Docker Containers and Images - 0 views

  •  In my opinion, understanding how a technology works under the hood is the best way to achieve learning speed and to build confidence that you are using the tool in the correct way.
  • union view
    • 張 旭
       
      Chaining multiple image layers together so that, from the outside, it reads like a single image file.
  • The top-level layer may be read by a union-ing file system (AUFS on my docker implementation) to present a single cohesive view of all the changes as one read-only file system
  • it is nearly the same thing as an image, except that the top layer is read-write
  • A container is defined only as a read-write layer atop an image (of read-only layers itself).  It does not have to be running.
  • a running container
    • 張 旭
       
      I had this wrong before! A container is not just something that is running; anything with a read-write layer counts as a container!
  • the isolated process-space and processes within
  • A running container is defined as a read-write "union view" and
  • kernel-level technologies like cgroups, namespaces
  • The processes within this process-space may change, delete or create files within the "union view" file that will be captured in the read-write layer
  • there is no longer a running container
    • 張 旭
       
      Once this command finishes, the running container stops, but the container itself is still there!
  • each layer contains a pointer to a parent layer using the Id
  • The 'docker create' command adds a read-write layer to the top stack based on the image id.  It does not run this container.
  • The command 'docker start' creates a process space around the union view of the container's layers.
  • can only be one process space per container.
  • the docker run command starts with an image, creates a container, and starts the container
  • 'git pull' (which is a combination of 'git fetch' and 'git merge')
  • 'docker ps' lists out the inventory of running containers on your system
  • 'docker ps -a' where the 'a' is short for 'all' lists out all the containers on your system, whether stopped or running.
  • Only those images that have containers attached to them or that have been pulled are considered top-level.
  • 'docker stop' issues a SIGTERM to a running container which politely stops all the processes in that process-space.
  • The result is a normal, but non-running, container
  • 'docker kill' issues a non-polite SIGKILL command to all the processes in a running container.
  • 'docker stop' and 'docker kill' which send actual UNIX signals to a running process
  • 'docker pause' uses a special cgroups feature to freeze/pause a running process-space
  • 'docker rm' removes the read-write layer that defines a container from your host system
  • It effectively deletes files
  • 'docker rmi' removes the read-only layer that defines a "union view" of an image.
  • 'docker commit' takes a container's top-level read-write layer and burns it into a read-only layer.
  • turns a container (whether running or stopped) into an immutable image
  • uses the FROM directive in the Dockerfile file as the starting image and iteratively 1) runs (create and start) 2) modifies and 3) commits.
  • At each step in the iteration a new layer is created.
  • 'docker exec' command runs on a running container and executes a process in that running container's process space
  • 'docker inspect' fetches the metadata that has been associated with the top-layer of the container or image
  • 'docker save' creates a single tar file that can be used to import on a different host system
  • only be run on an image
  • 'docker export' command creates a tar file of the contents of the "union view" and flattens it for consumption for non-Docker usages
  • This command removes the metadata and the layers.  This command can only be run on containers.
  • 'docker history' command takes an image-id and recursively prints out the read-only layers
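
The lifecycle described above can be traced from a shell on any host with Docker installed; a minimal sketch follows, with the image and container names (ubuntu, demo) as placeholders.

# Create a container: adds a read-write layer on top of the image, but does not run it
docker create --name demo ubuntu echo "hello layers"

# Not listed among running containers, but present in the full inventory
docker ps
docker ps -a

# Start it: a process space is created around the union view of its layers
docker start -a demo

# The process has exited, yet the container (its read-write layer) still exists
docker ps -a

# Burn the top read-write layer into a read-only layer, producing an image
docker commit demo demo-image

# Remove the container's read-write layer, then the image's layers
docker rm demo
docker rmi demo-image
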
張 旭

ALB vs ELB | Differences Between an ELB and an ALB on AWS | Sumo Logic - 0 views

  • If you use AWS, you have two load-balancing options: ELB and ALB.
  • An ELB is a software-based load balancer which can be set up and configured in front of a collection of AWS Elastic Compute (EC2) instances.
  • The load balancer serves as a single entry point for consumers of the EC2 instances and distributes incoming traffic across all machines available to receive requests.
  • the ELB also performs a vital role in improving the fault tolerance of the services which it fronts.
  • The Open Systems Interconnection Model, or OSI Model, is a conceptual model which is used to facilitate communications between different computing systems.
  • Layer 1 is the physical layer, and represents the physical medium across which the request is sent.
  • Layer 2 describes the data link layer
  • Layer 3 (the network layer)
  • Layer 7, which serves the application layer.
  • The Classic ELB operates at Layer 4. Layer 4 represents the transport layer, and is controlled by the protocol being used to transmit the request.
  • A network device, of which the Classic ELB is an example, reads the protocol and port of the incoming request, and then routes it to one or more backend servers.
  • the ALB operates at Layer 7. Layer 7 represents the application layer, and as such allows for the redirection of traffic based on the content of the request.
  • Whereas a request to a specific URL backed by a Classic ELB would only enable routing to a particular pool of homogeneous servers, the ALB can route based on the content of the URL, and direct to a specific subgroup of backing servers existing in a heterogeneous collection registered with the load balancer.
  • The Classic ELB is a simple load balancer, is easy to configure
  • As organizations move towards microservice architecture or adopt a container-based infrastructure, the ability to merely map a single address to a specific service becomes more complicated and harder to maintain.
  • the ALB manages routing based on user-defined rules.
  • Route traffic to different services based on either the host or the content of the path contained within that URL.
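
A sketch of the Layer-7, path-based routing described above, using the AWS CLI; the listener and target-group ARNs below are placeholders and assume an ALB listener and a "blog" target group already exist.

# Forward requests whose path begins with /blog to a dedicated target group
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:region:123456789012:listener/app/my-alb/abc/def \
  --priority 10 \
  --conditions Field=path-pattern,Values='/blog/*' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:region:123456789012:targetgroup/blog/xyz
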
張 旭

Best practices for writing Dockerfiles | Docker Documentation - 0 views

  • building efficient images
  • Docker builds images automatically by reading the instructions from a Dockerfile -- a text file that contains all commands, in order, needed to build a given image.
  • A Docker image consists of read-only layers each of which represents a Dockerfile instruction.
  • The layers are stacked and each one is a delta of the changes from the previous layer
  • When you run an image and generate a container, you add a new writable layer (the “container layer”) on top of the underlying layers.
  • By “ephemeral,” we mean that the container can be stopped and destroyed, then rebuilt and replaced with an absolute minimum set up and configuration.
  • Inadvertently including files that are not necessary for building an image results in a larger build context and larger image size.
  • To exclude files not relevant to the build (without restructuring your source repository) use a .dockerignore file. This file supports exclusion patterns similar to .gitignore files.
  • minimize image layers by leveraging build cache.
  • if your build contains several layers, you can order them from the less frequently changed (to ensure the build cache is reusable) to the more frequently changed
  • avoid installing extra or unnecessary packages just because they might be “nice to have.”
  • Each container should have only one concern.
  • Decoupling applications into multiple containers makes it easier to scale horizontally and reuse containers
  • Limiting each container to one process is a good rule of thumb, but it is not a hard and fast rule.
  • Use your best judgment to keep containers as clean and modular as possible.
  • do multi-stage builds and only copy the artifacts you need into the final image. This allows you to include tools and debug information in your intermediate build stages without increasing the size of the final image.
  • avoid duplication of packages and make the list much easier to update.
  • When building an image, Docker steps through the instructions in your Dockerfile, executing each in the order specified.
  • the next instruction is compared against all child images derived from that base image to see if one of them was built using the exact same instruction. If not, the cache is invalidated.
  • simply comparing the instruction in the Dockerfile with one of the child images is sufficient.
  • For the ADD and COPY instructions, the contents of the file(s) in the image are examined and a checksum is calculated for each file.
  • If anything has changed in the file(s), such as the contents and metadata, then the cache is invalidated.
  • cache checking does not look at the files in the container to determine a cache match.
  • In that case just the command string itself is used to find a match.
    • 張 旭
       
      For instructions like RUN apt-get, the cache match is made by comparing the command string itself.
  • Whenever possible, use current official repositories as the basis for your images.
  • Using RUN apt-get update && apt-get install -y ensures your Dockerfile installs the latest package versions with no further coding or manual intervention.
  • cache busting
  • Docker executes these commands using the /bin/sh -c interpreter, which only evaluates the exit code of the last operation in the pipe to determine success.
  • set -o pipefail && to ensure that an unexpected error prevents the build from inadvertently succeeding.
  • The CMD instruction should be used to run the software contained by your image, along with any arguments.
  • CMD should almost always be used in the form of CMD [“executable”, “param1”, “param2”…]
  • CMD should rarely be used in the manner of CMD [“param”, “param”] in conjunction with ENTRYPOINT
  • The ENV instruction is also useful for providing required environment variables specific to services you wish to containerize,
  • Each ENV line creates a new intermediate layer, just like RUN commands
  • COPY is preferred
  • COPY only supports the basic copying of local files into the container
  • the best use for ADD is local tar file auto-extraction into the image, as in ADD rootfs.tar.xz /
  • If you have multiple Dockerfile steps that use different files from your context, COPY them individually, rather than all at once.
  • using ADD to fetch packages from remote URLs is strongly discouraged; you should use curl or wget instead
  • The best use for ENTRYPOINT is to set the image’s main command, allowing that image to be run as though it was that command (and then use CMD as the default flags).
  • the image name can double as a reference to the binary as shown in the command
  • The VOLUME instruction should be used to expose any database storage area, configuration storage, or files/folders created by your docker container.
  • use VOLUME for any mutable and/or user-serviceable parts of your image
  • If you absolutely need functionality similar to sudo (such as initializing the daemon as root but running it as non-root), consider using “gosu”.
  • always use absolute paths for your WORKDIR
  • An ONBUILD command executes after the current Dockerfile build completes.
  • Think of the ONBUILD command as an instruction the parent Dockerfile gives to the child Dockerfile
  • A Docker build executes ONBUILD commands before any command in a child Dockerfile.
  • Be careful when putting ADD or COPY in ONBUILD. The “onbuild” image fails catastrophically if the new build’s context is missing the resource being added.
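
A minimal sketch pulling several of these recommendations together: layers ordered from least to most frequently changed, apt-get update combined with install, pipefail set, and the exec form of CMD. The base image, package, and paths are illustrative only.

cat > Dockerfile <<'EOF'
FROM ubuntu:22.04

# Fail the build if any command in a RUN pipe fails
SHELL ["/bin/bash", "-o", "pipefail", "-c"]

# Rarely-changing layer first; update + install in one RUN for cache busting
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

# Frequently-changing application source last, copied explicitly
COPY ./app /app

# Exec form, so the process receives signals directly
CMD ["/app/server", "--port", "8080"]
EOF

docker build -t myapp:latest .
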
張 旭

Using cache in GitLab CI with Docker-in-Docker | $AYMDEV() - 0 views

  • optimize our images.
  • When you build an image, it is made of multiple layers: we add a layer per instruction.
  • If we build the same image again without modifying any file, Docker will use existing layers rather than re-executing the instructions.
  • an image is made of multiple layers, and we can accelerate its build by using layers cache from the previous image version.
  • by using Docker-in-Docker, we get a fresh Docker instance per job, whose local registry is empty.
  • docker build --cache-from "$CI_REGISTRY_IMAGE:latest" -t "$CI_REGISTRY_IMAGE:new-tag"
  • But if you maintain a CHANGELOG in this format, and/or your Git tags are also your Docker tags, you can get the previous version and use the cache from that image version.
  • script:
      - export PREVIOUS_VERSION=$(perl -lne 'print "v${1}" if /^##\s\[(\d\.\d\.\d)\]\s-\s\d{4}(?:-\d{2}){2}\s*$/' CHANGELOG.md | sed -n '2 p')
      - docker build --cache-from "$CI_REGISTRY_IMAGE:$PREVIOUS_VERSION" -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG" -f ./prod.Dockerfile .
  • « Docker layer caching » is enough to optimize the build time.
  • Cache in CI/CD is about saving directories or files across pipelines.
  • We're building a Docker image; dependencies are installed inside a container. We can't cache a dependencies directory if it doesn't exist in the job workspace.
  • Dependencies will always be installed from a container but will be extracted by the GitLab Runner in the job workspace. Our goal is to send the cached version in the build context.
  • We set the directories to cache in the job settings with a key to share the cache per branch and stage.
  • - docker cp app:/var/www/html/vendor/ ./vendor
  • after_script
  • - docker cp app:/var/www/html/node_modules/ ./node_modules
  • To avoid old dependencies being mixed with the new ones, at the risk of keeping unused dependencies in the cache, which would make the cache and images heavier.
  • If you need to cache directories in testing jobs, it's easier: use volumes!
  • version your cache keys!
  • sharing Docker image between jobs
  • In every job, we automatically get artifacts from previous stages.
  • docker save $DOCKER_CI_IMAGE | gzip > app.tar.gz
  • I personally use the « push / pull » technique,
  • we docker push after the build, then we docker pull if needed in the next jobs.
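
A sketch combining the two techniques discussed above in a job script; the variables are the GitLab CI ones already quoted, and the tag names are illustrative.

# Pull the previous image so its layers are available to the fresh Docker-in-Docker daemon,
# then build with --cache-from and push the result (the "push / pull" technique)
docker pull "$CI_REGISTRY_IMAGE:latest" || true
docker build --cache-from "$CI_REGISTRY_IMAGE:latest" -t "$CI_REGISTRY_IMAGE:latest" .
docker push "$CI_REGISTRY_IMAGE:latest"

# Alternative for passing an image between jobs without a registry:
# save it as a gzipped artifact, then load it in the next job
docker save "$CI_REGISTRY_IMAGE:latest" | gzip > app.tar.gz
# later job: docker load < app.tar.gz
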
crazylion lee

darlinghq/darling: Darwin/macOS emulation layer for Linux - 0 views

  •  
    "Darwin/macOS emulation layer for Linux"
張 旭

cryptography - What's the difference between SSL, TLS, and HTTPS? - Information Securit... - 0 views

  • TLS is the new name for SSL
  • HTTPS is HTTP-within-SSL/TLS
  • SSL (TLS) establishes a secured, bidirectional tunnel for arbitrary binary data between two hosts
  • HTTP is meant to run over a bidirectional tunnel for arbitrary binary data; when that tunnel is an SSL/TLS connection, then the whole is called "HTTPS".
  • "SSL" means "Secure Sockets Layer".
  • "TLS" means "Transport Layer Security".
  • The name was changed to avoid any legal issues with Netscape so that the protocol could be "open and free" (and published as a RFC).
    • 張 旭
       
      They really refer to the same thing; saying TLS simply avoids the SSL name, which carried intellectual-property baggage.
  • not just Internet-based sockets
  • "HTTPS" is supposed to mean "HyperText Transfer Protocol Secure",
  • Other protocol acronyms have been built the same way, e.g. SMTPS, IMAPS, FTPS... all of them being a bare protocol that "got secured" by running it within some SSL/TLS.
  • To make the confusion perfect: SSL (secure socket layer) often refers to the old protocol variant which starts with the handshake right away and therefore requires another port for the encrypted protocol such as 443 instead of 80.
  • TLS (transport layer security) often refers to the new variant which allows starting with an unencrypted traditional protocol and then issuing a command (usually STARTTLS) to initialize the handshake.
  • Whether you use SSL or TLS for this depends on the configuration of your browser and of the server (there usually is an option to allow SSLv2, SSLv3 or TLS 1.x).
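
The two connection styles described above can be observed with openssl s_client; the hostnames are placeholders.

# Implicit TLS: the handshake starts right away on the dedicated port (e.g. HTTPS on 443)
openssl s_client -connect example.com:443

# STARTTLS: start the plain protocol first (SMTP on 587), then upgrade the connection
openssl s_client -connect mail.example.com:587 -starttls smtp
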
張 旭

Glossary - CircleCI - 0 views

  • User authentication may use LDAP for an instance of the CircleCI application that is installed on your private server or cloud
  • The first user to log into a private installation of CircleCI
  • Contexts provide a mechanism for securing and sharing environment variables across projects.
  • The environment variables are defined as name/value pairs and are injected at runtime.
  • The CircleCI Docker Layer Caching feature allows builds to reuse Docker image layers
  • from previous builds.
  • Image layers are stored in separate volumes in the cloud and are not shared between projects.
  • Layers may only be used by builds from the same project.
  • Environment variables store customer data that is used by a project.
  • Defines the underlying technology to run a job.
  • machine to run your job inside a full virtual machine.
  • docker to run your job inside a Docker container with a specified image
  • A job is a collection of steps.
  • The first image listed in config.yml
  • A CircleCI project shares the name of the code repository for which it automates workflows, tests, and deployment.
  • must be added with the Add Project button
  • Following a project enables a user to subscribe to email notifications for the project build status and adds the project to their CircleCI dashboard.
  • A step is a collection of executable commands
  • Users must be added to a GitHub or Bitbucket org to view or follow associated CircleCI projects.
  • Users may not view project data that is stored in environment variables.  
  • A Workflow is a set of rules for defining a collection of jobs and their run order.
  • Workflows are implemented as a directed acyclic graph (DAG) of jobs for greatest flexibility.
  • referred to as Pipelines
  • A workspace is a workflows-aware storage mechanism.
  • A workspace stores data unique to the job, which may be needed in downstream jobs.
張 旭

Speeding up Docker image build process of a Rails application | BigBinary Blog - 1 views

  • we do not want to execute bundle install and rake assets:precompile tasks while starting a container in each pod which would prevent that pod from accepting any requests until these tasks are finished.
  • run bundle install and rake assets:precompile tasks while or before containerizing the Rails application.
  • Kubernetes pulls the image, starts a Docker container using that image inside the pod and runs puma server immediately.
  • Since source code changes often, the previously cached layer for the ADD instruction is invalidated due to the mismatching checksums.
  • The ARG instruction in the Dockerfile defines RAILS_ENV variable and is implicitly used as an environment variable by the rest of the instructions defined just after that ARG instruction.
  • RUN instructions are used to install gems and precompile static assets using sprockets
  • Instead, Docker automatically re-uses the previously built layer for the RUN bundle install instruction if the Gemfile.lock file remains unchanged.
  • everyday we need to build a lot of Docker images containing source code from varying Git branches as well as with varying environments.
  • it is hard for Docker to cache layers for bundle install and rake assets:precompile tasks and re-use those layers during every docker build command run with different application source code and a different environment.
  • By default, Bundler installs gems at the location which is set by Rubygems.
  •  
    "we do not want to execute bundle install and rake assets:precompile tasks while starting a container in each pod which would prevent that pod from accepting any requests until these tasks are finished."
張 旭

How to write excellent Dockerfiles - 0 views

  • minimize image size, build time and number of layers.
  • maximize build cache usage
  • Container should do one thing
    • 張 旭
       
      This is debatable; the baseimage blog post discusses it in detail.
  • Use COPY and RUN commands in proper order
  • Merge multiple RUN commands into one
  • alpine versions should be enough
  • Use exec inside entrypoint script
  • Prefer COPY over ADD
  • Specify default environment variables, ports and volumes inside Dockerfile
  • problems with zombie processes
  • prepare separate Docker image for each component, and use Docker Compose to easily start multiple containers at the same time
  • Layers are cached and reused
  • Layers are immutable
  • They both make you cry
  • rely on our base image updates
  • make a cleanup
  • alpine is a very tiny linux distribution, just about 4 MB in size.
  • Your disk will love you :)
  • WORKDIR command changes default directory, where we run our RUN / CMD / ENTRYPOINT commands.
  • CMD is the default command run when a container is created without another command specified.
  • put your command inside array
  • entrypoint adds complexity
  • Entrypoint is a script that will be run instead of the command, and it receives the command as arguments
  • Without it, we would not be able to stop our application gracefully (SIGTERM is swallowed by the bash script).
  • Use "exec" inside entrypoint script
  • ADD has some logic for downloading remote files and extracting archives.
  • stick with COPY.
  • ADD
    • 張 旭
       
      Didn't we just say to use COPY?
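
The "use exec inside the entrypoint script" advice above is about signal handling: without exec, bash stays PID 1 and SIGTERM never reaches the application. A minimal sketch of such an entrypoint script (the application command itself arrives via CMD):

#!/bin/bash
# docker-entrypoint.sh: set as ENTRYPOINT; the image CMD is passed in as "$@"
set -e

# one-time setup (config templating, waiting for dependencies, ...) would go here

# Replace the shell with the real process so it becomes PID 1
# and receives SIGTERM directly, allowing graceful shutdown
exec "$@"
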
張 旭

How To Install and Use Docker: Getting Started | DigitalOcean - 0 views

  • docker as a project offers you the complete set of higher-level tools to carry everything that forms an application across systems and machines - virtual or physical - and brings along loads more of great benefits with it
  • docker daemon: used to manage docker (LXC) containers on the host it runs
  • docker CLI: used to command and communicate with the docker daemon
  • containers: directories containing everything-your-application
  • images: snapshots of containers or base OS (e.g. Ubuntu) images
  • Dockerfiles: scripts automating the building process of images
  • Docker containers are basically directories which can be packed (e.g. tar-archived) like any other, then shared and run across various different machines and platforms (hosts).
  • Linux Containers can be defined as a combination of various kernel-level features (i.e. things that the Linux kernel can do) which allow management of applications (and resources they use) contained within their own environment
  • Each container is layered like an onion and each action taken within a container consists of putting another block (which actually translates to a simple change within the file system) on top of the previous one.
  • Each docker container starts from a docker image which forms the base for other applications and layers to come.
  • Docker images constitute the base of docker containers from which everything starts to form
  • a solid, consistent and dependable base with everything that is needed to run the applications
  • As more layers (tools, applications etc.) are added on top of the base, new images can be formed by committing these changes.
  • a Dockerfile for automated image building
  • Dockerfiles are scripts containing a successive series of instructions, directions, and commands which are to be executed to form a new docker image.
  • As you work with a container and continue to perform actions on it (e.g. download and install software, configure files etc.), to have it keep its state, you need to “commit”.
  • Please remember to “commit” all your changes.
  • When you "run" any process using an image, in return, you will have a container.
  • When the process is not actively running, this container will be a non-running container. Nonetheless, all of them will reside on your system until you remove them via the rm command.
  • To create a new container, you need to use a base image and specify a command to run.
  • you can not change the command you run after having created a container (hence specifying one during "creation")
  • If you would like to save the progress and changes you made with a container, you can use “commit”
  • turns your container to an image
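
The run-modify-commit workflow described above, as a short command sketch; image and container names are placeholders.

# Start an interactive container from a base image and change something inside it
docker run -it --name webtest ubuntu /bin/bash
#   (inside the container) apt-get update && apt-get install -y nginx ; exit

# The stopped container still resides on the system
docker ps -a

# Commit its state so the changes are kept as a new image
docker commit webtest my-nginx-image

# New containers can now be created from that image
docker run -d my-nginx-image nginx -g "daemon off;"
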
張 旭

Manage swarm security with public key infrastructure (PKI) | Docker Documentation - 0 views

  • The nodes in a swarm use mutual Transport Layer Security (TLS) to authenticate, authorize, and encrypt the communications with other nodes in the swarm.
  • By default, the manager node generates a new root Certificate Authority (CA) along with a key pair, which are used to secure communications with other nodes that join the swarm.
  • The manager node also generates two tokens to use when you join additional nodes to the swarm: one worker token and one manager token.
  • Each time a new node joins the swarm, the manager issues a certificate to the node
  • By default, each node in the swarm renews its certificate every three months.
  • If a cluster CA key or a manager node is compromised, you can rotate the swarm root CA so that none of the nodes trust certificates signed by the old root CA anymore.
  •  
    "The nodes in a swarm use mutual Transport Layer Security (TLS) to authenticate, authorize, and encrypt the communications with other nodes in the swarm."
crazylion lee

Keepalived for Linux - 0 views

  •  
    Keepalived is a routing software written in C. The main goal of this project is to provide simple and robust facilities for loadbalancing and high-availability to Linux system and Linux based infrastructures. Loadbalancing framework relies on well-known and widely used Linux Virtual Server (IPVS) kernel module providing Layer4 loadbalancing. Keepalived implements a set of checkers to dynamically and adaptively maintain and manage loadbalanced server pool according to their health. On the other hand high-availability is achieved by VRRP protocol. VRRP is a fundamental brick for router failover. In addition, Keepalived implements a set of hooks to the VRRP finite state machine providing low-level and high-speed protocol interactions. Keepalived frameworks can be used independently or all together to provide resilient infrastructures.
crazylion lee

Fluentd | Open Source Data Collector - 0 views

  •  
    Fluentd is an open source data collector for unified logging layer. Fluentd allows you to unify data collection and consumption for a better use and understanding of data.
張 旭

Intro to deployment strategies: blue-green, canary, and more - DEV Community - 0 views

  • using a service-oriented architecture and microservices approach, developers can design a code base to be modular.
  • Modern applications are often distributed and cloud-based
  • different release cycles for different components
  • the abstraction of the infrastructure layer, which is now considered code. Deployment of a new application may require the deployment of new infrastructure code as well.
  • "big bang" deployments update whole or large parts of an application in one fell swoop.
  • Big bang deployments required the business to conduct extensive development and testing before release, often associated with the "waterfall model" of large sequential releases.
  • Rollbacks are often costly, time-consuming, or even impossible.
  • In a rolling deployment, an application’s new version gradually replaces the old one.
  • new and old versions will coexist without affecting functionality or user experience.
  • Each container is modified to download the latest image from the app vendor’s site.
  • two identical production environments work in parallel.
  • Once the testing results are successful, application traffic is routed from blue to green.
  • In a blue-green deployment, both systems use the same persistence layer or database back end.
  • You can use the primary database (used by blue) for write operations and the secondary (used by green) for read operations.
  • Blue-green deployments rely on traffic routing.
  • long TTL values can delay these changes.
  • The main challenge of canary deployment is to devise a way to route some users to the new application.
  • Using application logic to unlock new features to specific users and groups.
  • With CD, the CI-built code artifact is packaged and always ready to be deployed in one or more environments.
  • Use Build Automation tools to automate environment builds
  • Use configuration management tools
  • Enable automated rollbacks for deployments
  • An application performance monitoring (APM) tool can help your team monitor critical performance metrics including server response times after deployments.
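
As one concrete way to run a rolling deployment with a quick rollback path, a sketch using kubectl; the deployment and image names are placeholders, and this is only one of several ways to implement the strategies above.

# Rolling deployment: gradually replace pods running v1 with v2
kubectl set image deployment/my-app my-app=registry.example.com/my-app:v2
kubectl rollout status deployment/my-app

# If the new version misbehaves, roll back to the previous revision
kubectl rollout undo deployment/my-app
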
張 旭

Persisting Data in Workflows: When to Use Caching, Artifacts, and Workspaces - CircleCI - 0 views

  • Repeatability is also important
  • When a CI process isn’t repeatable you’ll find yourself wasting time re-running jobs to get them to go green.
  • Workspaces persist data between jobs in a single Workflow.
  • Caching persists data between the same job in different Workflow builds.
  • Artifacts persist data after a Workflow has finished
  • When a Workspace is declared in a job, one or more files or directories can be added. Each addition creates a new layer in the Workspace filesystem. Downstream jobs can then use this Workspace for their own needs or add more layers on top.
  • Unlike caching, Workspaces are not shared between runs as they no longer exist once a Workflow is complete.
  • Caching lets you reuse the data from expensive fetch operations from previous jobs.
  • A prime example is package dependency managers such as Yarn, Bundler, or Pip.
  • Caches are global within a project; a cache saved on one branch will be used by others, so they should only be used for data that is OK to share across branches
  • Artifacts are used for longer-term storage of the outputs of your build process.
  • If your project needs to be packaged in some form or fashion, say an Android app where the .apk file is uploaded to Google Play, that’s a great example of an artifact.
  •  
    "CircleCI 2.0 provides a number of different ways to move data into and out of jobs, persist data, and with the introduction of Workspaces, move data between jobs"
張 旭

MySQL on Docker: Running ProxySQL as Kubernetes Service | Severalnines - 0 views

  • Using Kubernetes ConfigMap approach, ProxySQL can be clustered with immutable configuration.
  • Kubernetes handles ProxySQL recovery and balances the connections to the instances automatically.
  • Can be used with external applications outside Kubernetes.
  • load balancing, connection failover and decoupling of the application tier from the underlying database topologies.
  • ProxySQL as a Kubernetes service (centralized deployment)
  • running as a service makes ProxySQL pods live independently from the applications and can be easily scaled and clustered together with the help of Kubernetes ConfigMap.
  • ProxySQL's multi-layer configuration system makes pod clustering possible with ConfigMap.
  • create ProxySQL pods and attach a Kubernetes service to be accessed by the other pods within the Kubernetes network or externally.
  • Default to 6033 for MySQL load-balanced connections and 6032 for ProxySQL administration console.
  • separated by "---" delimiter
  • deploy two ProxySQL pods as a ReplicaSet that matches containers labelled with "app=proxysql,tier=frontend".
  • A Kubernetes service is an abstraction layer which defines the logical set of pods and a policy by which to access them
  • The range of valid ports for NodePort resource is 30000-32767.
  • ConfigMap - To store ProxySQL configuration file as a volume so it can be mounted to multiple pods and can be remounted again if the pod is being rescheduled to the other Kubernetes node.
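
A sketch of the ConfigMap-plus-Service approach with kubectl; the file and resource names follow the article's conventions but are placeholders here, and the Deployment manifest itself is not shown.

# Store the ProxySQL configuration file as a ConfigMap so it can be mounted
# as a volume into every ProxySQL pod (and remounted if a pod is rescheduled)
kubectl create configmap proxysql-configmap --from-file=proxysql.cnf

# Create the ProxySQL pods from a Deployment/ReplicaSet manifest
kubectl apply -f proxysql-deployment.yml

# Expose the pods as a NodePort service for MySQL load-balanced connections (6033);
# the admin console (6032) would be a second port in a full Service manifest
kubectl expose deployment proxysql --type=NodePort --port=6033 --name=proxysql-mysql
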
張 旭

bbatsov/rails-style-guide: A community-driven Ruby on Rails 4 style guide - 0 views

  • custom initialization code in config/initializers. The code in initializers executes on application startup
  • Keep initialization code for each gem in a separate file with the same name as the gem
  • Mark additional assets for precompilation
  • config/environments/production.rb
  • Create an additional staging environment that closely resembles the production one
  • Keep any additional configuration in YAML files under the config/ directory
  • Rails::Application.config_for(:yaml_file)
  • Use nested routes to express better the relationship between ActiveRecord models
  • nest routes more than 1 level deep then use the shallow: true option
  • namespaced routes to group related actions
  • Don't use match to define any routes unless there is need to map multiple request types among [:get, :post, :patch, :put, :delete] to a single action using :via option.
  • Keep the controllers skinny
  • all the business logic should naturally reside in the model
  • Share no more than two instance variables between a controller and a view.
  • using a template
  • Prefer render plain: over render text
  • Prefer corresponding symbols to numeric HTTP status codes
  • without abbreviations
  • Keep your models for business logic and data-persistence only
  • Avoid altering ActiveRecord defaults (table names, primary key, etc)
  • Group macro-style methods (has_many, validates, etc) in the beginning of the class definition
  • Prefer has_many :through to has_and_belongs_to_many
  • self[:attribute]
  • self[:attribute] = value
  • validates
  • Keep custom validators under app/validators
  • Consider extracting custom validators to a shared gem
  • preferable to make a class method instead which serves the same purpose of the named scope
  • returns an ActiveRecord::Relation object
  • .update_attributes
  • Override the to_param method of the model
  • Use the friendly_id gem. It allows creation of human-readable URLs by using some descriptive attribute of the model instead of its id
  • find_each to iterate over a collection of AR objects
  • .find_each
  • .find_each
  • Looping through a collection of records from the database (using the all method, for example) is very inefficient since it will try to instantiate all the objects at once
  • always call before_destroy callbacks that perform validation with prepend: true
  • Define the dependent option to the has_many and has_one associations
  • always use the exception raising bang! method or handle the method return value.
  • When persisting AR objects
  • Avoid string interpolation in queries
  • param will be properly escaped
  • Consider using named placeholders instead of positional placeholders
  • use of find over where when you need to retrieve a single record by id
  • use of find_by over where and find_by_attribute
  • use of where.not over SQL
  • use heredocs with squish
  • Keep the schema.rb (or structure.sql) under version control.
  • Use rake db:schema:load instead of rake db:migrate to initialize an empty database
  • Enforce default values in the migrations themselves instead of in the application layer
  • change_column_default
  • imposing data integrity from the Rails app is impossible
  • use the change method instead of up and down methods.
  • constructive migrations
  • use models in migrations, make sure you define them so that you don't end up with broken migrations in the future
  • Don't use non-reversible migration commands in the change method.
  • In this case, block will be used by create_table in rollback
  • Never call the model layer directly from a view
  • Never make complex formatting in the views, export the formatting to a method in the view helper or the model.
  • When the labels of an ActiveRecord model need to be translated, use the activerecord scope
  • Separate the texts used in the views from translations of ActiveRecord attributes
  • Place the locale files for the models in a folder locales/models
  • the texts used in the views in folder locales/views
  • config/application.rb config.i18n.load_path += Dir[Rails.root.join('config', 'locales', '**', '*.{rb,yml}')]
  • I18n.t
  • I18n.l
  • Use "lazy" lookup for the texts used in views.
  • Use the dot-separated keys in the controllers and models
  • Reserve app/assets for custom stylesheets, javascripts, or images
  • Third party code such as jQuery or bootstrap should be placed in vendor/assets
  • Provide both HTML and plain-text view templates
  • config.action_mailer.raise_delivery_errors = true
  • Use a local SMTP server like Mailcatcher in the development environment
  • Provide default settings for the host name
  • The _url methods include the host name and the _path methods don't
  • _url
  • Format the from and to addresses properly
  • default from:
  • sending html emails all styles should be inline
  • Sending emails while generating the page response should be avoided. It causes delays in loading of the page and the request can time out if multiple emails are sent.
  • .start_with?
  • .end_with?
  • &.
  • Config your timezone accordingly in application.rb
  • config.active_record.default_timezone = :local
  • it can be only :utc or :local
  • Don't use Time.parse
  • Time.zone.parse
  • Don't use Time.now
  • Time.zone.now
  • Put gems used only for development or testing in the appropriate group in the Gemfile
  • Add all OS X specific gems to a darwin group in the Gemfile, and all Linux specific gems to a linux group
  • Do not remove the Gemfile.lock from version control.
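
The schema-loading advice above, as commands; this assumes a standard Rails 4-era project with the usual rake tasks available.

# Initialize an empty database from the checked-in schema instead of replaying every migration
bundle exec rake db:schema:load

# db:migrate remains the way to apply new migrations to an existing database
bundle exec rake db:migrate
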
張 旭

An Introduction to HAProxy and Load Balancing Concepts | DigitalOcean - 0 views

  • HAProxy, which stands for High Availability Proxy
  • improve the performance and reliability of a server environment by distributing the workload across multiple servers (e.g. web, application, database).
  • ACLs are used to test some condition and perform an action (e.g. select a server, or block a request) based on the test result.
  • Access Control List (ACL)
  • ACLs allows flexible network traffic forwarding based on a variety of factors like pattern-matching and the number of connections to a backend
  • A backend is a set of servers that receives forwarded requests
  • adding more servers to your backend will increase your potential load capacity by spreading the load over multiple servers
  • mode http specifies that layer 7 proxying will be used
  • specifies the load balancing algorithm
  • health checks
  • A frontend defines how requests should be forwarded to backends
  • use_backend rules, which define which backends to use depending on which ACL conditions are matched, and/or a default_backend rule that handles every other case
  • A frontend can be configured to various types of network traffic
  • Load balancing this way will forward user traffic based on IP range and port
  • Generally, all of the servers in the web-backend should be serving identical content--otherwise the user might receive inconsistent content.
  • Using layer 7 allows the load balancer to forward requests to different backend servers based on the content of the user's request.
  • allows you to run multiple web application servers under the same domain and port
  • acl url_blog path_beg /blog matches a request if the path of the user's request begins with /blog.
  • Round Robin selects servers in turns
  • Selects the server with the least number of connections--it is recommended for longer sessions
  • This selects which server to use based on a hash of the source IP
  • ensure that a user will connect to the same server
  • require that a user continues to connect to the same backend server. This persistence is achieved through sticky sessions, using the appsession parameter in the backend that requires it.
  • HAProxy uses health checks to determine if a backend server is available to process requests.
  • The default health check is to try to establish a TCP connection to the server
  • If a server fails a health check, and therefore is unable to serve requests, it is automatically disabled in the backend
  • For certain types of backends, like database servers in certain situations, the default health check is insufficient to determine whether a server is still healthy.
  • However, your load balancer is a single point of failure in these setups; if it goes down or gets overwhelmed with requests, it can cause high latency or downtime for your service.
  • A high availability (HA) setup is an infrastructure without a single point of failure
  • a static IP address that can be remapped from one server to another.
  • If that load balancer fails, your failover mechanism will detect it and automatically reassign the IP address to one of the passive servers.
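
A minimal haproxy.cfg sketch tying these pieces together; the addresses and backend names are placeholders, and the acl line is the one quoted above.

cat > haproxy.cfg <<'EOF'
frontend www
    bind *:80
    mode http                              # layer 7 proxying
    acl url_blog path_beg /blog            # match requests whose path begins with /blog
    use_backend blog-backend if url_blog   # route matched traffic to a dedicated backend
    default_backend web-backend            # everything else

backend web-backend
    mode http
    balance roundrobin                     # load balancing algorithm
    server web1 10.0.0.11:80 check         # "check" enables health checks
    server web2 10.0.0.12:80 check

backend blog-backend
    mode http
    balance leastconn
    server blog1 10.0.0.21:80 check
EOF

# Validate the configuration without starting the proxy
haproxy -c -f haproxy.cfg
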
張 旭

Kubernetes Components | Kubernetes - 0 views

  • A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications
  • Every cluster has at least one worker node.
  • The control plane manages the worker nodes and the Pods in the cluster.
  • The control plane's components make global decisions about the cluster
  • Control plane components can be run on any machine in the cluster.
  • for simplicity, set up scripts typically start all control plane components on the same machine, and do not run user containers on this machine
  • The API server is the front end for the Kubernetes control plane.
  • kube-apiserver is designed to scale horizontally—that is, it scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances.
  • Kubernetes cluster uses etcd as its backing store, make sure you have a back up plan for those data.
  • watches for newly created Pods with no assigned node, and selects a node for them to run on.
  • Factors taken into account for scheduling decisions include: individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.
  • each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
  • Node controller
  • Job controller
  • Endpoints controller
  • Service Account & Token controllers
  • The cloud controller manager lets you link your cluster into your cloud provider's API, and separates out the components that interact with that cloud platform from components that only interact with your cluster.
  • If you are running Kubernetes on your own premises, or in a learning environment inside your own PC, the cluster does not have a cloud controller manager.
  • An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
  • The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.
  • The kubelet doesn't manage containers which were not created by Kubernetes.
  • kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
  • kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
  • kube-proxy uses the operating system packet filtering layer if there is one and it's available.
  • Kubernetes supports several container runtimes: Docker, containerd, CRI-O, and any implementation of the Kubernetes CRI (Container Runtime Interface).
  • Addons use Kubernetes resources (DaemonSet, Deployment, etc) to implement cluster features
  • namespaced resources for addons belong within the kube-system namespace.
  • all Kubernetes clusters should have cluster DNS,
  • Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services.
  • Containers started by Kubernetes automatically include this DNS server in their DNS searches.
  • Container Resource Monitoring records generic time-series metrics about containers in a central database, and provides a UI for browsing that data.
  • A cluster-level logging mechanism is responsible for saving container logs to a central log store with search/browsing interface.
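
On a working cluster, the components described above can be observed with kubectl; the exact pod names vary by distribution.

# Worker and control-plane machines registered with the cluster
kubectl get nodes -o wide

# Control plane components and addons typically run as pods in the kube-system namespace
kubectl get pods -n kube-system

# Cluster DNS is exposed as an ordinary Service in the same namespace
kubectl get service -n kube-system kube-dns
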