
Larvata / Group items tagged "io"


張 旭

Swarm Mode Cluster - Træfik - 0 views

  • The only requirement for Træfik to work with swarm mode is that it needs to run on a manager node
  •  
    "The only requirement for Træfik to work with swarm mode is that it needs to run on a manager node"
張 旭

What are Docker : images? - Project Atomic - 0 views

  • Now we understand what these <none>:<none> images stand for. They stand for intermediate images and can be seen using docker images -a
  • They don’t result in a disk space problem, but they are definitely a screen real-estate problem
  • dangling images
  • ...7 more annotations...
  • Another style of <none>:<none> images are the dangling images which can cause disk space problems.
  • In programming languages like Java or Golang a dangling block of memory is a block that is not referenced by any piece of code.
  • a dangling file system layer in Docker is something that is unused and is not being referenced by any images.
  • intermediate images
  • If you run docker images and see <none>:<none> images in the list, these are dangling images and need to be pruned.
  • These dangling images are produced as a result of docker build or pull command.
  • docker rmi $(docker images -f "dangling=true" -q)
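
A small sketch of the cleanup described above; the classic docker rmi form is the one quoted, and docker image prune is the newer built-in equivalent. Both remove only dangling images, not intermediate layers still referenced by tagged images:

    # list dangling images only
    docker images -f "dangling=true"

    # remove them (classic form)
    docker rmi $(docker images -f "dangling=true" -q)

    # newer built-in equivalent
    docker image prune
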
張 旭

Let's Encrypt & Docker - Træfik - 0 views

  • automatically discover any services on the Docker host and let Træfik reconfigure itself automatically when containers get created (or shut down) so HTTP traffic can be routed accordingly.
  • use Træfik as a layer-7 load balancer with SSL termination for a set of micro-services used to run a web application.
  • Docker containers can only communicate with each other over TCP when they share at least one network.
  • ...15 more annotations...
  • Docker under the hood creates iptables rules so containers can't reach other containers unless you explicitly allow it
  • Træfik can listen to Docker events and reconfigure its own internal configuration when containers are created (or shut down).
  • Enable the Docker provider and listen for container events on the Docker unix socket we've mounted earlier.
  • Enable automatic request and configuration of SSL certificates using Let's Encrypt. These certificates will be stored in the acme.json file, which you can back-up yourself and store off-premises.
  • there isn't a single container that has any published ports to the host -- everything is routed through Docker networks.
  • Thanks to Docker labels, we can tell Træfik how to create its internal routing configuration.
  • container labels and service labels
  • With the traefik.enable label, we tell Træfik to include this container in its internal configuration.
  • tell Træfik to use the web network to route HTTP traffic to this container.
  • Service labels allow managing many routes for the same container.
  • When both container labels and service labels are defined, container labels are only used as default values for missing service labels; no frontend/backend will be defined from container labels alone.
  • In the example, two service names are defined: basic and admin. They allow creating two frontends and two backends.
  • Always specify the correct port where the container expects HTTP traffic using traefik.port label.
  • all containers that are placed in the same network as Træfik will automatically be reachable from the outside world
  • With the traefik.frontend.auth.basic label, it's possible for Træfik to provide an HTTP basic-auth challenge for the endpoints you provide the label for.
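
A hedged sketch of the label-driven setup described above, using Træfik 1.x label names; the service, hostname, network name and basic-auth hash are placeholders:

    version: "3"

    services:
      app:
        image: my-app:latest              # placeholder image
        networks:
          - web
        labels:
          - "traefik.enable=true"
          - "traefik.docker.network=web"
          - "traefik.port=80"             # port the container expects HTTP traffic on
          - "traefik.frontend.rule=Host:app.example.com"
          - "traefik.frontend.auth.basic=admin:$$apr1$$PLACEHOLDER"   # htpasswd hash, $ escaped as $$

    networks:
      web:
        external: true

Service labels follow the same pattern with the service name inserted after the prefix, e.g. traefik.basic.frontend.rule and traefik.admin.frontend.rule for the basic and admin services mentioned in the annotation.
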
張 旭

How It Works - Let's Encrypt - Free SSL/TLS Certificates - 0 views

  • The objective of Let’s Encrypt and the ACME protocol is to make it possible to set up an HTTPS server and have it automatically obtain a browser-trusted certificate, without any human intervention.
  • First, the agent proves to the CA that the web server controls a domain.
  • Then, the agent can request, renew, and revoke certificates for that domain.
  • ...4 more annotations...
  • The first time the agent software interacts with Let’s Encrypt, it generates a new key pair and proves to the Let’s Encrypt CA that the server controls one or more domains.
  • The Let’s Encrypt CA will look at the domain name being requested and issue one or more sets of challenges
  • different ways that the agent can prove control of the domain
  • Once the agent has an authorized key pair, requesting, renewing, and revoking certificates is simple—just send certificate management messages and sign them with the authorized key pair.
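
As a concrete illustration of an ACME agent doing this, certbot (one such agent) can complete the challenge and request a certificate in a single command; the domain and webroot path are placeholders:

    # prove control of the domain via an HTTP challenge served from the webroot,
    # then request a certificate for it
    certbot certonly --webroot -w /var/www/example -d example.com -d www.example.com

    # later renewals reuse the authorized account key pair
    certbot renew
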
張 旭

Data Sources - Configuration Language - Terraform by HashiCorp - 0 views

  • refer to this resource from elsewhere in the same Terraform module, but has no significance outside of the scope of a module.
  • A data block requests that Terraform read from a given data source ("aws_ami") and export the result under the given local name ("example").
  • A data source is accessed via a special kind of resource known as a data resource
  •  
    "refer to this resource from elsewhere in the same Terraform module, but has no significance outside of the scope of a module."
張 旭

Introduction - Packer by HashiCorp - 0 views

  • A machine image is a single static unit that contains a pre-configured operating system and installed software which is used to quickly create new running machines.
  •  
    "A machine image is a single static unit that contains a pre-configured operating system and installed software which is used to quickly create new running machines."
張 旭

Backends: State Storage and Locking - Terraform by HashiCorp - 0 views

  • Backends determine where state is stored.
  • backends happen to provide locking: local via system APIs and Consul via locking APIs.
  • manually retrieve the state from the remote state using the terraform state pull command
  • ...3 more annotations...
  • manually write state with terraform state push. This is extremely dangerous and should be avoided if possible. This will overwrite the remote state.
  • The "lineage" is a unique ID assigned to a state when it is created.
  • Every state has a monotonically increasing "serial" number.
  •  
    "Backends determine where state is stored."
張 旭

Backends: Configuration - Terraform by HashiCorp - 0 views

  • merged configuration is stored on disk in the .terraform directory, which should be ignored from version control.
  • When using partial configuration, Terraform requires at a minimum that an empty backend configuration is specified in one of the root Terraform configuration files, to specify the backend type.
  •  
    "merged configuration is stored on disk in the .terraform directory, which should be ignored from version control."
張 旭

User Variables - Templates - Packer by HashiCorp - 0 views

  • User variables allow your templates to be further configured with variables from the command-line, environment variables, Vault, or files.
  • define it either within the variables section within your template, or using the command-line -var or -var-file flags.
  • If the default value is null, then the user variable will be required.
  • ...7 more annotations...
  • User variables are available globally within the rest of the template.
  • The env function is available only within the default value of a user variable, allowing you to default a user variable to an environment variable.
  • As Packer doesn't run inside a shell, it won't expand ~
  • To set user variables from the command line, the -var flag is used as a parameter to packer build (and some other commands).
  • Variables can also be set from an external JSON file. The -var-file flag reads a file containing a key/value mapping of variables to values and sets those variables.
  • -var-file=
  • sensitive variables won't get printed to the logs by adding them to the "sensitive-variables" list within the Packer template
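
A hedged sketch of a template pulling these pieces together; the variable names and the amazon-ebs builder are placeholders (the null default makes required_tag mandatory on the command line):

    {
      "variables": {
        "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
        "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
        "region": "us-east-1",
        "required_tag": null
      },
      "sensitive-variables": ["aws_secret_key"],
      "builders": [{
        "type": "amazon-ebs",
        "access_key": "{{user `aws_access_key`}}",
        "secret_key": "{{user `aws_secret_key`}}",
        "region": "{{user `region`}}",
        "source_ami": "ami-0123456789abcdef0",
        "instance_type": "t2.micro",
        "ssh_username": "ubuntu",
        "ami_name": "example-{{timestamp}}"
      }]
    }

    # override or supply variables at build time
    packer build -var 'region=eu-west-1' -var 'required_tag=nightly' example.json
    packer build -var-file=variables.json example.json
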
張 旭

Providers - Configuration Language - Terraform by HashiCorp - 0 views

  • By default, terraform init downloads plugins into a subdirectory of the working directory so that each working directory is self-contained.
  • Terraform optionally allows the use of a local directory as a shared plugin cache, which then allows each distinct plugin binary to be downloaded only once.
  • directory must already exist before Terraform will cache plugins; Terraform will not create the directory itself.
  • ...3 more annotations...
  • When a plugin cache directory is enabled, the terraform init command will still access the plugin distribution server to obtain metadata about which plugins are available, but once a suitable version has been selected it will first check to see if the selected plugin is already available in the cache directory.
  • When possible, Terraform will use hardlinks or symlinks to avoid storing a separate copy of a cached plugin in multiple directories.
  • Terraform will never itself delete a plugin from the plugin cache once it's been placed there.
  •  
    "By default, terraform init downloads plugins into a subdirectory of the working directory so that each working directory is self-contained."
張 旭

Build an Image - Getting Started - Packer by HashiCorp - 0 views

  • The configuration file used to define what image we want built and how is called a template in Packer terminology.
  • JSON struck the best balance between human-editable and machine-editable, allowing both hand-made and machine-generated templates to be created easily.
  • keeping your secret keys out of the template
  • ...3 more annotations...
  • validate the template by running packer validate example.json. This command checks the syntax as well as the configuration values to verify they look valid.
  • At the end of running packer build, Packer outputs the artifacts that were created as part of the build.
  • Packer only builds images. It does not attempt to manage them in any way.
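
The workflow from the annotations, sketched as commands; example.json is the template from the guide and the keys are placeholders:

    # check syntax and configuration values
    packer validate example.json

    # keep secret keys out of the template by passing them as user variables
    packer build \
      -var 'aws_access_key=YOUR_ACCESS_KEY' \
      -var 'aws_secret_key=YOUR_SECRET_KEY' \
      example.json

At the end of the build Packer prints the artifacts it produced (for example an AMI ID); managing or deleting those artifacts afterwards is up to you.
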
張 旭

Template Engine - Templates - Packer by HashiCorp - 0 views

  • All strings within templates are processed by a common Packer templating engine, where variables and functions can be used to modify the value of a configuration parameter at runtime.
  • Anything template related happens within double-braces: {{ }}.
  • Functions are specified directly within the braces, such as {{timestamp}}
  • ...8 more annotations...
  • Template variables are prefixed with a period and capitalized, such as {{.Variable}}.
  • Functions perform operations on and within strings
  • the {{timestamp}} function can be used in any string to generate the current timestamp.
  • pwd - The working directory while executing Packer.
  • template_dir - The directory of the template used for the build.
  • uuid - Returns a random UUID.
  • user - Specifies a user variable.
  • Template variables are special variables automatically set by Packer at build time.
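
A few of these constructs in context; the builder fields and values are placeholders, and the user variable app_version is assumed to be defined in the variables section:

    {
      "variables": { "app_version": "1.0.0" },
      "builders": [{
        "type": "amazon-ebs",
        "region": "us-east-1",
        "source_ami": "ami-0123456789abcdef0",
        "instance_type": "t2.micro",
        "ssh_username": "ubuntu",
        "ami_name": "app-{{user `app_version`}}-{{timestamp}}"
      }],
      "provisioners": [{
        "type": "shell",
        "inline": [
          "echo build id: {{uuid}}",
          "echo template dir: {{template_dir}}"
        ]
      }]
    }

Template variables such as {{.Variable}} are not shown here because Packer sets them itself at build time (for example inside a builder's boot_command).
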
張 旭

VMware ISO - Builders - Packer by HashiCorp - 0 views

  • Packer can use a remote VMware Hypervisor to build the virtual machine.
  • enable GuestIPHack
  • When using a remote VMware Hypervisor, the builder still downloads the ISO and various files locally, and uploads these to the remote machine.
  • ...3 more annotations...
  • Packer needs to decide on a port to use for VNC when building remotely.
  • vnc_disable_password - This must be set to "true" when using VNC with ESXi 6.5 or 6.7
  • remote_type (string) - The type of remote machine that will be used to build this VM rather than a local desktop product. The only value accepted for this currently is esx5. If this is not set, a desktop product will be used. By default, this is not set.
  •  
    "Packer can use a remote VMware Hypervisor to build the virtual machine."
張 旭

Configuration - docker-sync 0.5.10 documentation - 0 views

  • Be sure to use a sync-name which is unique, since it will be a container name.
    • 張 旭
       
      By convention, the docker-sync container name is suffixed with -sync.
  • split your docker-compose configuration for production and development (as usual)
  • ...9 more annotations...
  • production stack (docker-compose.yml) does not need any changes and would look like this (and is portable, no docker-sync adjustments).
  • docker-compose-dev.yml (it needs to be named exactly that, and looks like this) will override
    • 張 旭
       
      The development docker-compose-dev.yml only overrides the volumes settings of the production docker-compose.yml, wiring them up to the volumes defined in docker-sync.yml; everything else stays unchanged.
  • nocopy # nocopy is important
  • nocopy # nocopy is important
  • docker-compose -f docker-compose.yml -f docker-compose-dev.yml up
  • add the external volume and the mount here
  • In case the folder we mount to has been declared as a VOLUME during image build, its content will be merged with the named volume we mount from the host
    • 張 旭
       
      If a volume is declared in the Dockerfile, its mount point is recorded at docker build time. When the container starts, the content of the host (server) volume with the same name is merged in (and takes precedence); in other words, the container attaches to the already existing volume on the host (server).
  • enforce the content from our host on the initial wiring
  • set your environment variables by creating a .env file at the root of your project
  •  
    "Be sure to use a sync-name which is unique, since it will be a container name."
張 旭

Baseimage-docker: A minimal Ubuntu base image modified for Docker-friendliness - 0 views

  • We encourage you to use multiple processes.
  • Baseimage-docker is a special Docker image that is configured for correct use within Docker containers.
  • When your Docker container starts, only the CMD command is run.
  • ...16 more annotations...
  • You're not running them, you're only running your app.
  • You have Ubuntu installed in Docker. The files are there. But that doesn't mean Ubuntu's running as it should.
  • The only processes that will be running inside the container are the CMD command and any processes that it spawns.
  • A proper Unix system should run all kinds of important system services.
  • Ubuntu is not designed to be run inside Docker
  • When a system is started, the first process in the system is called the init process, with PID 1. The system halts when this process halts.
  • Runit (written in C) is much lighter weight than supervisord (written in Python).
  • Docker runs fine with multiple processes in a container.
  • Baseimage-docker encourages you to run multiple processes through the use of runit.
  • If your init process is your app, then it'll probably only shut down itself, not all the other processes in the container.
  • a Docker container, which is a locked down environment with e.g. no direct access to many kernel resources.
  • Used for service supervision and management.
  • A custom tool for running a command as another user.
  • add additional daemons (e.g. your own app) to the image by creating runit entries.
  • write a small shell script which runs your daemon, and runit will keep it up and running for you, restarting it when it crashes, etc.
  • the shell script must run the daemon without letting it daemonize/fork.
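
A minimal sketch of a runit entry added to a baseimage-docker image, following the pattern above; memcached, the paths and the image tag are illustrative:

    # Dockerfile
    FROM phusion/baseimage:<VERSION>
    RUN mkdir -p /etc/service/memcached
    COPY memcached.sh /etc/service/memcached/run
    RUN chmod +x /etc/service/memcached/run
    CMD ["/sbin/my_init"]

    # memcached.sh — in the real file the shebang is the first line
    #!/bin/sh
    # run the daemon in the foreground; runit restarts it if it crashes
    exec /usr/bin/memcached -u memcache 2>&1
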
張 旭

Scalable architecture without magic (and how to build it if you're not Google) - DEV Co... - 0 views

  • Don’t mess up write-first and read-first databases.
  • keep them stateless.
  • you should know how to make a scalable setup on bare metal.
  • ...29 more annotations...
  • Different programming languages are for different tasks.
  • Go or C which are compiled to run on bare metal.
  • To run NodeJS on multiple cores, you have to use something like PM2, but to do this you have to keep your code stateless.
  • Python has a very rich and sugary syntax that’s great for working with data while keeping your code small and expressive.
  • SQL is almost always slower than NoSQL
  • databases are often read-first or write-first
  • write-first, just like Cassandra.
  • store all of your data in your databases and leave nothing at the backend
  • Functional code is stateless by default
  • It’s better to go for stateless right from the very beginning.
  • deliver exactly the same responses for same requests.
  • Sessions? Store them at Redis and allow all of your servers to access it.
  • Only the first user will trigger a data query, and all others will be receiving exactly the same data straight from the RAM
  • never, never cache user input
  • Only the server output should be cached
  • Varnish is a great cache option that works with HTTP responses, so it may work with any backend.
  • a rate limiter – if not enough time has passed since the last request, the incoming request will be denied.
  • different requests blasting every 10ms can bring your server down
  • Just set up entity relations and allow your database to calculate foreign keys for you
  • the query planner will always be faster than your backend.
  • Backend should have different responsibilities: hashing, building web pages from data and templates, managing sessions and so on.
  • For anything related to data management or data models, move it to your database as procedures or queries.
  • a distributed database.
  • your code has to be stateless
  • Move anything related to the data to the database.
  • For load-balancing a database, go for cluster.
  • DB is balancing requests, as well as your backend.
  • Users from different continents are separated with DNS.
  • Keep it scalable, keep it stateless.
  •  
    "Don't mess up write-first and read-first databases."
crazylion lee

FreeOTP - 0 views

shared by crazylion lee on 11 Mar 19
張 旭

Queues - Laravel - The PHP Framework For Web Artisans - 0 views

  • Laravel queues provide a unified API across a variety of different queue backends, such as Beanstalk, Amazon SQS, Redis, or even a relational database.
  • The queue configuration file is stored in config/queue.php
  • a synchronous driver that will execute jobs immediately (for local use)
  • ...56 more annotations...
  • A null queue driver is also included which discards queued jobs.
  • In your config/queue.php configuration file, there is a connections configuration option.
  • any given queue connection may have multiple "queues" which may be thought of as different stacks or piles of queued jobs.
  • each connection configuration example in the queue configuration file contains a queue attribute.
  • if you dispatch a job without explicitly defining which queue it should be dispatched to, the job will be placed on the queue that is defined in the queue attribute of the connection configuration
  • pushing jobs to multiple queues can be especially useful for applications that wish to prioritize or segment how jobs are processed
  • specify which queues it should process by priority.
  • If your Redis queue connection uses a Redis Cluster, your queue names must contain a key hash tag.
  • ensure all of the Redis keys for a given queue are placed into the same hash slot
  • all of the queueable jobs for your application are stored in the app/Jobs directory.
  • Job classes are very simple, normally containing only a handle method which is called when the job is processed by the queue.
  • we were able to pass an Eloquent model directly into the queued job's constructor. Because of the SerializesModels trait that the job is using, Eloquent models will be gracefully serialized and unserialized when the job is processing.
  • When the job is actually handled, the queue system will automatically re-retrieve the full model instance from the database.
  • The handle method is called when the job is processed by the queue
  • The arguments passed to the dispatch method will be given to the job's constructor
  • delay the execution of a queued job, you may use the delay method when dispatching a job.
  • dispatch a job immediately (synchronously), you may use the dispatchNow method.
  • When using this method, the job will not be queued and will be run immediately within the current process
  • specify a list of queued jobs that should be run in sequence.
  • Deleting jobs using the $this->delete() method will not prevent chained jobs from being processed. The chain will only stop executing if a job in the chain fails.
  • this does not push jobs to different queue "connections" as defined by your queue configuration file, but only to specific queues within a single connection.
  • To specify the queue, use the onQueue method when dispatching the job
  • To specify the connection, use the onConnection method when dispatching the job
  • defining the maximum number of attempts on the job class itself.
  • to defining how many times a job may be attempted before it fails, you may define a time at which the job should timeout.
  • using the funnel method, you may limit jobs of a given type to only be processed by one worker at a time
  • using the throttle method, you may throttle a given type of job to only run 10 times every 60 seconds.
  • If an exception is thrown while the job is being processed, the job will automatically be released back onto the queue so it may be attempted again.
  • dispatch a Closure. This is great for quick, simple tasks that need to be executed outside of the current request cycle
  • When dispatching Closures to the queue, the Closure's code is cryptographically signed so it cannot be modified in transit.
  • Laravel includes a queue worker that will process new jobs as they are pushed onto the queue.
  • once the queue:work command has started, it will continue to run until it is manually stopped or you close your terminal
  • queue workers are long-lived processes and store the booted application state in memory.
  • they will not notice changes in your code base after they have been started.
  • during your deployment process, be sure to restart your queue workers.
  • customize your queue worker even further by only processing particular queues for a given connection
  • The --once option may be used to instruct the worker to only process a single job from the queue
  • The --stop-when-empty option may be used to instruct the worker to process all jobs and then exit gracefully.
  • Daemon queue workers do not "reboot" the framework before processing each job.
  • you should free any heavy resources after each job completes.
  • Since queue workers are long-lived processes, they will not pick up changes to your code without being restarted.
  • restart the workers during your deployment process.
  • php artisan queue:restart
  • The queue uses the cache to store restart signals
  • Since the queue workers will die when the queue:restart command is executed, you should be running a process manager such as Supervisor to automatically restart the queue workers.
  • each queue connection defines a retry_after option. This option specifies how many seconds the queue connection should wait before retrying a job that is being processed.
  • The --timeout option specifies how long the Laravel queue master process will wait before killing off a child queue worker that is processing a job.
  • When jobs are available on the queue, the worker will keep processing jobs with no delay in between them.
  • While sleeping, the worker will not process any new jobs - the jobs will be processed after the worker wakes up again
  • the numprocs directive will instruct Supervisor to run 8 queue:work processes and monitor all of them, automatically restarting them if they fail.
  • Laravel includes a convenient way to specify the maximum number of times a job should be attempted.
  • define a failed method directly on your job class, allowing you to perform job specific clean-up when a failure occurs.
  • a great opportunity to notify your team via email or Slack.
  • php artisan queue:retry all
  • php artisan queue:flush
  • When injecting an Eloquent model into a job, it is automatically serialized before being placed on the queue and restored when the job is processed
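
A small sketch tying several of these annotations together: a job class using SerializesModels with tries/timeout limits and a failed hook, plus the dispatch options mentioned above. The Podcast model, queue and connection names are placeholders:

    <?php

    namespace App\Jobs;

    use App\Podcast;
    use Illuminate\Bus\Queueable;
    use Illuminate\Contracts\Queue\ShouldQueue;
    use Illuminate\Foundation\Bus\Dispatchable;
    use Illuminate\Queue\InteractsWithQueue;
    use Illuminate\Queue\SerializesModels;

    class ProcessPodcast implements ShouldQueue
    {
        use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

        public $tries = 5;      // maximum attempts before the job is marked as failed
        public $timeout = 120;  // seconds before the worker kills this job

        protected $podcast;

        public function __construct(Podcast $podcast)
        {
            // the Eloquent model is serialized by identifier and re-retrieved when handled
            $this->podcast = $podcast;
        }

        public function handle()
        {
            // process the podcast...
        }

        public function failed(\Exception $exception)
        {
            // job-specific clean-up, notify the team via email or Slack, etc.
        }
    }

    // dispatching, e.g. from a controller:
    ProcessPodcast::dispatch($podcast)
        ->onConnection('redis')
        ->onQueue('processing')
        ->delay(now()->addMinutes(10));
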