Larvata / Group items tagged: check

張 旭

Bash If Statements: Beginner to Advanced - DEV Community

  • "[" is a command. It's actually syntactic sugar for the built-in command test which checks and compares its arguments. The "]" is actually an argument to the [ command that tells it to stop checking for arguments!
  • why > and < get weird inside single square brackets -- Bash actually thinks you're trying to do an input or output redirect inside a command!
  • the [[ double square brackets ]] and (( double parens )) are not exactly commands. They're actually Bash language keywords, which is what makes them behave a little more predictably.
  • The [[ double square brackets ]] work essentially the same as [ single square brackets ], albeit with some more superpowers like more powerful regex support.
  • The (( double parentheses )) are actually a construct that allow arithmetic inside Bash.
  • If the results inside are zero, it returns an exit code of 1. (Essentially, zero is "falsey.")
  • the greater and less-than symbols work just fine inside arithmetic parens.
  • exit code 0 for success.
  • exit code 1 for failure.
  • If the regex works out, the return code of the double square brackets is 0, and thus the function returns 0. If not, everything returns 1. This is a really great way to name regexes.
  • the stuff immediately after the if can be any command in the whole wide world, as long as it provides an exit code, which is pretty much always (see the sketch after this list).
張 旭

Choose when to run jobs | GitLab

  • Rules are evaluated in order until the first match.
  • no rules match, so the job is not added to any other pipeline.
  • define a set of rules to exclude jobs in a few cases, but run them in all other cases
  • use all rules keywords, like if, changes, and exists, in the same rule. The rule evaluates to true only when all included keywords evaluate to true.
  • use parentheses with && and || to build more complicated variable expressions.
  • Use workflow to specify which types of pipelines can run.
  • every push to an open merge request’s source branch causes duplicated pipelines.
  • avoid duplicate pipelines by changing the job rules to avoid either push (branch) pipelines or merge request pipelines.
  • do not mix only/except jobs with rules jobs in the same pipeline.
  • For behavior similar to the only/except keywords, you can check the value of the $CI_PIPELINE_SOURCE variable
  • commonly used variables for if clauses
  • rules:changes expressions to determine when to add jobs to a pipeline
  • Use !reference tags to reuse rules in different jobs.
  • Use except to define when a job does not run.
  • only or except used without refs is the same as only:refs / except:refs
  • If you change multiple files, but only one file ends in .md, the build job is still skipped.
  • If you use multiple keywords with only or except, the keywords are evaluated as a single conjoined expression.
  • only includes the job if all of the keys have at least one condition that matches.
  • except excludes the job if any of the keys have at least one condition that matches.
  • With only, individual keys are logically joined by an AND
  • With except, individual keys are logically joined by an OR
  • To specify a job as manual, add when: manual to the job in the .gitlab-ci.yml file.
  • Use protected environments to define a list of users authorized to run a manual job.
  • Use when: delayed to execute scripts after a waiting period, or if you want to avoid jobs immediately entering the pending state.
  • To split a large job into multiple smaller jobs that run in parallel, use the parallel keyword
  • run a trigger job multiple times in parallel in a single pipeline, but with different variable values for each instance of the job.
  • The @ symbol denotes the beginning of a ref’s repository path. To match a ref name that contains the @ character in a regular expression, you must use the hex character code match \x40.
  • Compare a variable to a string
  • Check if a variable is undefined
  • Check if a variable is empty
  • Check if a variable exists
  • Matches are found when using =~.
  • Matches are not found when using !~.
  • Join variable expressions together with && or || (see the sketch after this list)
張 旭

Active Record Validations - Ruby on Rails Guides

  • validates :name, presence: true
  • Validations are used to ensure that only valid data is saved into your database
  • Model-level validations are the best way to ensure that only valid data is saved into your database.
  • native database constraints
  • client-side validations
  • controller-level validations
  • Database constraints and/or stored procedures make the validation mechanisms database-dependent and can make testing and maintenance more difficult
  • Client-side validations can be useful, but are generally unreliable
  • combined with other techniques, client-side validation can be a convenient way to provide users with immediate feedback
  • it's a good idea to keep your controllers skinny
  • model-level validations are the most appropriate in most circumstances.
  • Active Record uses the new_record? instance method to determine whether an object is already in the database or not.
  • Creating and saving a new record will send an SQL INSERT operation to the database. Updating an existing record will send an SQL UPDATE operation instead. Validations are typically run before these commands are sent to the database
  • The bang versions (e.g. save!) raise an exception if the record is invalid.
  • save and update return false
  • create just returns the object
  • skip validations, and will save the object to the database regardless of its validity.
  • be used with caution
  • update_all
  • save also has the ability to skip validations if passed validate: false as argument.
  • save(validate: false)
  • valid? triggers your validations and returns true if no errors
  • After Active Record has performed validations, any errors found can be accessed through the errors.messages instance method
  • By definition, an object is valid if this collection is empty after running validations.
  • validations are not run when using new.
  • invalid? is simply the inverse of valid?.
  • To verify whether or not a particular attribute of an object is valid, you can use errors[:attribute].
  • only useful after validations have been run
  • Every time a validation fails, an error message is added to the object's errors collection,
  • All of them accept the :on and :message options, which define when the validation should be run and what message should be added to the errors collection if it fails, respectively.
  • validates that a checkbox on the user interface was checked when a form was submitted.
  • agree to your application's terms of service
  • 'acceptance' does not need to be recorded anywhere in your database (if you don't have a field for it, the helper will just create a virtual attribute).
  • It defaults to "1" and can be easily changed.
  • use this helper when your model has associations with other models and they also need to be validated
  • valid? will be called upon each one of the associated objects.
  • work with all of the association types
  • Don't use validates_associated on both ends of your associations.
    • 張 旭
       
      For validating associated objects, enabling the validation on one side of the association is enough!
  • each associated object will contain its own errors collection
  • errors do not bubble up to the calling model
  • when you have two text fields that should receive exactly the same content
  • This validation creates a virtual attribute whose name is the name of the field that has to be confirmed with "_confirmation" appended.
  • To require confirmation, make sure to add a presence check for the confirmation attribute
  • this set can be any enumerable object.
  • The exclusion helper has an option :in that receives the set of values that will not be accepted for the validated attributes.
  • :in option has an alias called :within
  • validates the attributes' values by testing whether they match a given regular expression, which is specified using the :with option.
  • attribute does not match the regular expression by using the :without option.
  • validates that the attributes' values are included in a given set
  • :in option has an alias called :within
  • specify length constraints
  • :minimum
  • :maximum
  • :in (or :within)
  • :is - The attribute length must be equal to the given value.
  • :wrong_length, :too_long, and :too_short options and %{count} as a placeholder for the number corresponding to the length constraint being used.
  • split the value in a different way using the :tokenizer option:
    • 張 旭
       
      Provide your own way of splitting the value when counting its length.
  • validates that your attributes have only numeric values
  • By default, it will match an optional sign followed by an integral or floating point number.
  • set :only_integer to true.
  • allows a trailing newline character.
  • :greater_than
  • :greater_than_or_equal_to
  • :equal_to
  • :less_than
  • :less_than_or_equal_to
  • :odd - Specifies the value must be an odd number if set to true.
  • :even - Specifies the value must be an even number if set to true.
  • validates that the specified attributes are not empty
  • if the value is either nil or a blank string
  • validate associated records whose presence is required, you must specify the :inverse_of option for the association
  • inverse_of
  • an association is present, you'll need to test whether the associated object itself is present, and not the foreign key used to map the association
  • false.blank? is true
  • validate the presence of a boolean field
  • ensure the value will NOT be nil
  • validates that the specified attributes are absent
  • i.e. it adds an error if the value is neither nil nor a blank string
  • be sure that an association is absent
  • false.present? is false
  • validate the absence of a boolean field you should use validates :field_name, exclusion: { in: [true, false] }.
  • validates that the attribute's value is unique right before the object gets saved
  • a :scope option that you can use to specify other attributes that are used to limit the uniqueness check
  • a :case_sensitive option that you can use to define whether the uniqueness constraint will be case sensitive or not.
  • There is no default error message for validates_with.
  • To implement the validate method, you must have a record parameter defined, which is the record to be validated.
  • the validator will be initialized only once for the whole application life cycle, and not on each validation run, so be careful about using instance variables inside it.
  • passes the record to a separate class for validation
  • use a plain old Ruby object
  • validates attributes against a block
  • The block receives the record, the attribute's name and the attribute's value. You can do anything you like to check for valid data within the block
  • will let validation pass if the attribute's value is blank?, like nil or an empty string
  • the :message option lets you specify the message that will be added to the errors collection when validation fails
  • skips the validation when the value being validated is nil
  • specify when the validation should happen
  • raise ActiveModel::StrictValidationFailed when the object is invalid
  • You can do that by using the :if and :unless options, which can take a symbol, a string, a Proc or an Array.
  • use the :if option when you want to specify when the validation should happen
  • using eval and needs to contain valid Ruby code.
  • Using a Proc object gives you the ability to write an inline condition instead of a separate method
  • have multiple validations use one condition, it can be easily achieved using with_options.
  • implement a validate method which takes a record as an argument and performs the validation on it
  • validates_with method
  • implement a validate_each method which takes three arguments: record, attribute, and value
  • combine standard validations with your own custom validators (see the model sketch after this list).
  • :expiration_date_cannot_be_in_the_past, :discount_cannot_be_greater_than_total_value
  • By default such validations will run every time you call valid?
  • errors[] is used when you want to check the error messages for a specific attribute.
  • Returns an instance of the class ActiveModel::Errors containing all errors.
  • lets you manually add messages that are related to particular attributes
  • using []= setter
  • errors[:base] is an array, you can simply add a string to it and it will be used as an error message.
  • use this method when you want to say that the object is invalid, no matter the values of its attributes.
  • clear all the messages in the errors collection
  • calling errors.clear upon an invalid object won't actually make it valid: the errors collection will now be empty, but the next time you call valid? or any method that tries to save this object to the database, the validations will run again.
  • the total number of error messages for the object.
  • .errors.full_messages.each
  • .field_with_errors
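
A sketch collecting several of the helpers above into one model (class and attribute names are illustrative; the custom validator name comes from the guide, and Rails 4-era ActiveRecord::Base is used to match it):

```ruby
class Account < ActiveRecord::Base
  # Presence: fails when the value is nil or a blank string.
  validates :name, presence: true

  # Confirmation: creates a virtual password_confirmation attribute;
  # the confirmation attribute itself still needs a presence check.
  validates :password, confirmation: true
  validates :password_confirmation, presence: true

  # Inclusion, numericality, length, and uniqueness, with a custom :message.
  validates :plan, inclusion: { in: %w[free pro], message: "%{value} is not a valid plan" }
  validates :seats, numericality: { only_integer: true, greater_than: 0 }
  validates :slug, length: { in: 3..40 }, uniqueness: { case_sensitive: false }

  # Custom method validator; runs on every valid? call by default.
  validate :expiration_date_cannot_be_in_the_past

  private

  def expiration_date_cannot_be_in_the_past
    if expiration_date.present? && expiration_date < Date.current
      errors.add(:expiration_date, "can't be in the past")
    end
  end
end

# account = Account.new
# account.valid?          # => false (runs the validations)
# account.errors[:name]   # => ["can't be blank"]
```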
張 旭

Auto DevOps | GitLab

  • Auto DevOps provides pre-defined CI/CD configuration which allows you to automatically detect, build, test, deploy, and monitor your applications
  • Just push your code and GitLab takes care of everything else.
  • Auto DevOps will be automatically disabled on the first pipeline failure.
  • Your project will continue to use an alternative CI/CD configuration file if one is found
  • Auto DevOps works with any Kubernetes cluster;
  • using the Docker or Kubernetes executor, with privileged mode enabled.
  • Base domain (needed for Auto Review Apps and Auto Deploy)
  • Kubernetes (needed for Auto Review Apps, Auto Deploy, and Auto Monitoring)
  • Prometheus (needed for Auto Monitoring)
  • scrape your Kubernetes cluster.
  • project level as a variable: KUBE_INGRESS_BASE_DOMAIN
  • A wildcard DNS A record matching the base domain(s) is required
  • Once set up, all requests will hit the load balancer, which in turn will route them to the Kubernetes pods that run your application(s).
  • review/ (every environment starting with review/)
  • staging
  • production
  • need to define a separate KUBE_INGRESS_BASE_DOMAIN variable for all the above based on the environment.
  • Continuous deployment to production: Enables Auto Deploy with master branch directly deployed to production.
  • Continuous deployment to production using timed incremental rollout
  • Automatic deployment to staging, manual deployment to production
  • Auto Build creates a build of the application using an existing Dockerfile or Heroku buildpacks.
  • If a project’s repository contains a Dockerfile, Auto Build will use docker build to create a Docker image.
  • Each buildpack requires certain files to be in your project’s repository for Auto Build to successfully build your application.
  • Auto Test automatically runs the appropriate tests for your application using Herokuish and Heroku buildpacks by analyzing your project to detect the language and framework.
  • Auto Code Quality uses the Code Quality image to run static analysis and other code checks on the current code.
  • Static Application Security Testing (SAST) uses the SAST Docker image to run static analysis on the current code and checks for potential security issues.
  • Dependency Scanning uses the Dependency Scanning Docker image to run analysis on the project dependencies and checks for potential security issues.
  • License Management uses the License Management Docker image to search the project dependencies for their license.
  • Vulnerability Static Analysis for containers uses Clair to run static analysis on a Docker image and checks for potential security issues.
  • Review Apps are temporary application environments based on the branch’s code so developers, designers, QA, product managers, and other reviewers can actually see and interact with code changes as part of the review process. Auto Review Apps create a Review App for each branch. Auto Review Apps will deploy your app to your Kubernetes cluster only. When no cluster is available, no deployment will occur.
  • The Review App will have a unique URL based on the project ID, the branch or tag name, and a unique number, combined with the Auto DevOps base domain.
  • Review apps are deployed using the auto-deploy-app chart with Helm, which can be customized.
  • Your apps should not be manipulated outside of Helm (using Kubernetes directly).
  • Dynamic Application Security Testing (DAST) uses the popular open source tool OWASP ZAProxy to perform an analysis on the current code and checks for potential security issues.
  • Auto Browser Performance Testing utilizes the Sitespeed.io container to measure the performance of a web page.
  • add the paths to a file named .gitlab-urls.txt in the root directory, one per line.
  • After a branch or merge request is merged into the project’s default branch (usually master), Auto Deploy deploys the application to a production environment in the Kubernetes cluster, with a namespace based on the project name and unique project ID
  • Auto Deploy doesn’t include deployments to staging or canary by default, but the Auto DevOps template contains job definitions for these tasks if you want to enable them.
  • Apps are deployed using the auto-deploy-app chart with Helm.
  • For internal and private projects a GitLab Deploy Token will be automatically created, when Auto DevOps is enabled and the Auto DevOps settings are saved.
  • If the GitLab Deploy Token cannot be found, CI_REGISTRY_PASSWORD is used. Note that CI_REGISTRY_PASSWORD is only valid during deployment.
  • If present, DB_INITIALIZE will be run as a shell command within an application pod as a helm post-install hook.
  • a post-install hook means that if any deploy succeeds, DB_INITIALIZE will not be processed thereafter.
  • DB_MIGRATE will be run as a shell command within an application pod as a helm pre-upgrade hook.
    • 張 旭
       
      If the project type is different, you have to check how the buildpacks invoke the relevant command, e.g. Laravel's migrations.
    • 張 旭
       
      If the image is built from your own Dockerfile, it seems you don't need to care how the buildpacks do it.
  • Once your application is deployed, Auto Monitoring makes it possible to monitor your application’s server and response metrics right out of the box.
  • annotate the NGINX Ingress deployment to be scraped by Prometheus using prometheus.io/scrape: "true" and prometheus.io/port: "10254"
  • If you are also using Auto Review Apps and Auto Deploy and choose to provide your own Dockerfile, make sure you expose your application to port 5000 as this is the port assumed by the default Helm chart.
  • While Auto DevOps provides great defaults to get you started, you can customize almost everything to fit your needs; from custom buildpacks, to Dockerfiles, Helm charts, or even copying the complete CI/CD configuration into your project to enable staging and canary deployments, and more.
  • If your project has a Dockerfile in the root of the project repo, Auto DevOps will build a Docker image based on the Dockerfile rather than using buildpacks.
  • Auto DevOps uses Helm to deploy your application to Kubernetes.
  • Bundled chart - If your project has a ./chart directory with a Chart.yaml file in it, Auto DevOps will detect the chart and use it instead of the default one.
  • Create a project variable AUTO_DEVOPS_CHART with the URL of a custom chart to use or create two project variables AUTO_DEVOPS_CHART_REPOSITORY with the URL of a custom chart repository and AUTO_DEVOPS_CHART with the path to the chart.
  • make use of the HELM_UPGRADE_EXTRA_ARGS environment variable to override the default values in the values.yaml file in the default Helm chart.
  • specify the use of a custom Helm chart per environment by scoping the environment variable to the desired environment.
    • 張 旭
       
      Auto DevOps is essentially a ready-made .gitlab-ci.yml that someone has already written for you.
  • Your additions will be merged with the Auto DevOps template using the behaviour described for include
  • copy and paste the contents of the Auto DevOps template into your project and edit it as needed (see the sketch after this list).
  • In order to support applications that require a database, PostgreSQL is provisioned by default.
  • Set up the replica variables using a project variable and scale your application by just redeploying it!
  • You should not scale your application using Kubernetes directly.
  • Some applications need to define secret variables that are accessible by the deployed application.
  • Auto DevOps detects variables where the key starts with K8S_SECRET_ and makes these prefixed variables available to the deployed application, as environment variables.
  • Auto DevOps pipelines will take your application secret variables to populate a Kubernetes secret.
  • Environment variables are generally considered immutable in a Kubernetes pod.
  • if you update an application secret without changing any code then manually create a new pipeline, you will find that any running application pods will not have the updated secrets.
  • Variables with multiline values are not currently supported
  • The normal behavior of Auto DevOps is to use Continuous Deployment, pushing automatically to the production environment every time a new pipeline is run on the default branch.
  • If STAGING_ENABLED is defined in your project (e.g., set STAGING_ENABLED to 1 as a CI/CD variable), then the application will be automatically deployed to a staging environment, and a production_manual job will be created for you when you’re ready to manually deploy to production.
  • If CANARY_ENABLED is defined in your project (e.g., set CANARY_ENABLED to 1 as a CI/CD variable), then two manual jobs will be created: canary, which deploys the application to the canary environment, and production_manual, which is to be used by you when you’re ready to manually deploy to production.
  • If INCREMENTAL_ROLLOUT_MODE is set to manual in your project, then instead of the standard production job, 4 different manual jobs will be created: rollout 10%, rollout 25%, rollout 50%, and rollout 100%
  • The percentage is based on the REPLICAS variable and defines the number of pods you want to have for your deployment.
  • To start a job, click on the play icon next to the job’s name.
  • Once you get to 100%, you cannot scale down, and you’d have to roll back by redeploying the old version using the rollback button in the environment page.
  • With INCREMENTAL_ROLLOUT_MODE set to manual and with STAGING_ENABLED
  • not all buildpacks support Auto Test yet
  • When a project has been marked as private, GitLab’s Container Registry requires authentication when downloading containers.
  • Authentication credentials will be valid while the pipeline is running, allowing for a successful initial deployment.
  • After the pipeline completes, Kubernetes will no longer be able to access the Container Registry.
  • We strongly advise using GitLab Container Registry with Auto DevOps in order to simplify configuration and prevent any unforeseen issues.
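
A hedged sketch of customizing Auto DevOps from your own .gitlab-ci.yml (the template name and variable keys are GitLab's; the values are illustrative, and a real secret belongs in the project's CI/CD settings rather than in the file):

```yaml
# Your additions are merged with the Auto DevOps template via include.
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  # Deploy automatically to staging, manually to production.
  STAGING_ENABLED: "1"
  # Replace the production job with manual 10/25/50/100% rollout jobs.
  INCREMENTAL_ROLLOUT_MODE: "manual"
  # Exposed to the deployed app as the DATABASE_URL environment variable.
  K8S_SECRET_DATABASE_URL: "postgres://user:pass@host/db"  # illustrative only
  # Override defaults in the chart's values.yaml.
  HELM_UPGRADE_EXTRA_ARGS: "--set replicaCount=3"
```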
張 旭

An Introduction to HAProxy and Load Balancing Concepts | DigitalOcean

  • HAProxy, which stands for High Availability Proxy
  • improve the performance and reliability of a server environment by distributing the workload across multiple servers (e.g. web, application, database).
  • ACLs are used to test some condition and perform an action (e.g. select a server, or block a request) based on the test result.
  • Access Control List (ACL)
  • ACLs allow flexible network traffic forwarding based on a variety of factors like pattern-matching and the number of connections to a backend
  • A backend is a set of servers that receives forwarded requests
  • adding more servers to your backend will increase your potential load capacity by spreading the load over multiple servers
  • mode http specifies that layer 7 proxying will be used
  • specifies the load balancing algorithm
  • health checks
  • A frontend defines how requests should be forwarded to backends
  • use_backend rules, which define which backends to use depending on which ACL conditions are matched, and/or a default_backend rule that handles every other case
  • A frontend can be configured for various types of network traffic
  • Load balancing this way will forward user traffic based on IP range and port
  • Generally, all of the servers in the web-backend should be serving identical content--otherwise the user might receive inconsistent content.
  • Using layer 7 allows the load balancer to forward requests to different backend servers based on the content of the user's request.
  • allows you to run multiple web application servers under the same domain and port
  • acl url_blog path_beg /blog matches a request if the path of the user's request begins with /blog (see the config sketch after this list).
  • Round Robin selects servers in turns
  • Selects the server with the least number of connections--it is recommended for longer sessions
  • This selects which server to use based on a hash of the source IP
  • ensure that a user will connect to the same server
  • require that a user continues to connect to the same backend server. This persistence is achieved through sticky sessions, using the appsession parameter in the backend that requires it.
  • HAProxy uses health checks to determine if a backend server is available to process requests.
  • The default health check is to try to establish a TCP connection to the server
  • If a server fails a health check, and therefore is unable to serve requests, it is automatically disabled in the backend
  • For certain types of backends, like database servers in certain situations, the default health check is insufficient to determine whether a server is still healthy.
  • However, your load balancer is a single point of failure in these setups; if it goes down or gets overwhelmed with requests, it can cause high latency or downtime for your service.
  • A high availability (HA) setup is an infrastructure without a single point of failure
  • a static IP address that can be remapped from one server to another.
  • If that load balancer fails, your failover mechanism will detect it and automatically reassign the IP address to one of the passive servers.
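
A minimal haproxy.cfg sketch of an ACL-driven frontend with two backends (addresses and names are illustrative):

```
frontend www
    bind *:80
    mode http                       # layer 7 proxying
    acl url_blog path_beg /blog     # matches paths beginning with /blog
    use_backend blog-backend if url_blog
    default_backend web-backend     # handles every other case

backend web-backend
    mode http
    balance roundrobin              # Round Robin selects servers in turns
    server web1 10.0.0.11:80 check  # "check" enables the default TCP health check
    server web2 10.0.0.12:80 check

backend blog-backend
    mode http
    balance leastconn               # fewest connections; suits longer sessions
    server blog1 10.0.0.21:80 check
```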
crazylion lee

Gobot

shared by crazylion lee on 17 Jan 14
  • "Gobot is a set of libraries in the Go programming language for robotics and physical computing. It provides a simple, yet powerful way to create solutions that incorporate multiple, different hardware devices at the same time. Want to use Ruby on robots? Check out our sister project Artoo (http://artoo.io). Want to use Node.js? Check out our sister project Cylon (http://cylonjs.com)."
張 旭

Basics - Træfik

  • Modifier rules only modify the request. They do not have any impact on routing decisions being made.
  • A frontend consists of a set of rules that determine how incoming requests are forwarded from an entrypoint to a backend.
  • Entrypoints are the network entry points into Træfik
  • Modifiers and matchers
  • Matcher rules determine if a particular request should be forwarded to a backend
  • if any rule matches
  • if all rules match
  • In order to use regular expressions with Host and Path matchers, you must declare an arbitrarily named variable followed by the colon-separated regular expression, all enclosed in curly braces.
  • Use a Prefix matcher if your backend listens on a particular base path but also serves requests on sub-paths. For instance, PathPrefix: /products would match /products but also /products/shoes and /products/shirts. Since the path is forwarded as-is, your backend is expected to listen on /products
  • Use Path if your backend listens on the exact path only. For instance, Path: /products would match /products but not /products/shoes.
  • Modifier rules ALWAYS apply after the Matcher rules.
  • A backend is responsible for load-balancing the traffic coming from one or more frontends to a set of http servers
  • wrr: Weighted Round Robin
  • drr: Dynamic Round Robin: increases weights on servers that perform better than others.
  • A circuit breaker can also be applied to a backend, preventing high loads on failing servers.
  • To proactively prevent backends from being overwhelmed with high load, a maximum connection limit can also be applied to each backend.
  • Sticky sessions are supported with both load balancers.
  • When sticky sessions are enabled, a cookie is set on the initial request.
  • The check is defined by a path appended to the backend URL and an interval (given in a format understood by time.ParseDuration) specifying how often the health check should be executed (the default being 30 seconds). Each backend must respond to the health check within 5 seconds.
  • The static configuration is the global configuration which is setting up connections to configuration backends and entrypoints.
  • We only need to enable watch option to make Træfik watch configuration backend changes and generate its configuration automatically.
  • Separate the regular expression and the replacement by a space.
  • a comma-separated key/value pair where both key and value must be literals.
  • namespacing of your backends happens on the basis of hosts in addition to paths
  • Modifiers will be applied in a pre-determined order regardless of their order in the rule configuration section.
  • customize priority
  • Custom headers can be configured through the frontends, to add headers to either requests or responses that match the frontend's rules.
  • Security related headers (HSTS headers, SSL redirection, Browser XSS filter, etc) can be added and configured per frontend in a similar manner to the custom headers above.
  • Servers are simply defined using a url. You can also apply a custom weight to each server (the weight is used by load-balancing; see the config sketch after this list).
  • Maximum connections can be configured by specifying an integer value for maxconn.amount and maxconn.extractorfunc which is a strategy used to determine how to categorize requests in order to evaluate the maximum connections.
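
A hedged traefik.toml sketch in Træfik 1.x syntax (the host, paths, and server URLs are illustrative):

```toml
# Static configuration: entrypoint plus the file provider with watch enabled,
# so Træfik regenerates its configuration when this file changes.
[entryPoints]
  [entryPoints.http]
  address = ":80"

[file]
watch = true

# Dynamic configuration: a frontend's matcher rules forward to a backend.
[frontends]
  [frontends.products]
  backend = "products"
    [frontends.products.routes.main]
    # PathPrefix also matches sub-paths such as /products/shoes.
    rule = "Host:example.com;PathPrefix:/products"

[backends]
  [backends.products]
    [backends.products.loadbalancer]
    method = "wrr"                    # Weighted Round Robin
    [backends.products.healthcheck]   # GET /health on an interval;
    path = "/health"                  # servers failing it are taken out
    interval = "10s"
    [backends.products.servers.server1]
    url = "http://10.0.0.5:8000"
    weight = 1                        # custom weights feed the load balancing
    [backends.products.servers.server2]
    url = "http://10.0.0.6:8000"
    weight = 2
```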
張 旭

Understanding Nginx Server and Location Block Selection Algorithms | DigitalOcean

  • A server block is a subset of Nginx’s configuration that defines a virtual server used to handle requests of a defined type. Administrators often configure multiple server blocks and decide which block should handle which connection based on the requested domain name, port, and IP address.
  • A location block lives within a server block and is used to define how Nginx should handle requests for different resources and URIs for the parent server. The URI space can be subdivided in whatever way the administrator likes using these blocks. It is an extremely flexible model.
  • Nginx logically divides the configurations meant to serve different content into blocks, which live in a hierarchical structure. Each time a client request is made, Nginx begins a process of determining which configuration blocks should be used to handle the request.
  • Nginx is one of the most popular web servers in the world. It can successfully handle high loads with many concurrent client connections, and can easily function as a web server, a mail server, or a reverse proxy server.
  • The main server block directives that Nginx is concerned with during this process are the listen directive, and the server_name directive.
  • The listen directive typically defines which IP address and port that the server block will respond to.
  • 0.0.0.0:8080 if Nginx is being run by a normal, non-root user
  • Nginx translates all “incomplete” listen directives by substituting missing values with their default values so that each block can be evaluated by its IP address and port.
  • In any case, the port must be matched exactly.
  • If there are multiple server blocks with the same level of specificity matching, Nginx then begins to evaluate the server_name directive of each server block.
  • Nginx will only evaluate the server_name directive when it needs to distinguish between server blocks that match to the same level of specificity in the listen directive.
  • Nginx checks the request’s “Host” header. This value holds the domain or IP address that the client was actually trying to reach.
  • Nginx will first try to find a server block with a server_name that matches the value in the “Host” header of the request exactly.
  • If no exact match is found, Nginx will then try to find a server block with a server_name that matches using a leading wildcard (indicated by a * at the beginning of the name in the config).
  • If no match is found using a leading wildcard, Nginx then looks for a server block with a server_name that matches using a trailing wildcard (indicated by a server name ending with a * in the config)
  • If no match is found using a trailing wildcard, Nginx then evaluates server blocks that define the server_name using regular expressions (indicated by a ~ before the name).
  • If no regular expression match is found, Nginx then selects the default server block for that IP address and port.
  • There can be only one default_server declaration per each IP address/port combination.
  • Location blocks live within server blocks (or other location blocks) and are used to decide how to process the request URI (the part of the request that comes after the domain name or IP address/port).
  • If no modifiers are present, the location is interpreted as a prefix match.
  • =: If an equal sign is used, this block will be considered a match if the request URI exactly matches the location given.
  • ~: If a tilde modifier is present, this location will be interpreted as a case-sensitive regular expression match.
  • ~*: If a tilde and asterisk modifier is used, the location block will be interpreted as a case-insensitive regular expression match.
  • ^~: If a caret and tilde modifier is present, and if this block is selected as the best non-regular expression match, regular expression matching will not take place.
  • Keep in mind that if this block is selected and the request is fulfilled using an index page, an internal redirect will take place to another location that will be the actual handler of the request
  • Keeping in mind the types of location declarations we described above, Nginx evaluates the possible location contexts by comparing the request URI to each of the locations.
  • Nginx begins by checking all prefix-based location matches (all location types not involving a regular expression).
  • First, Nginx looks for an exact match.
  • If no exact (with the = modifier) location block matches are found, Nginx then moves on to evaluating non-exact prefixes.
  • After the longest matching prefix location is determined and stored, Nginx moves on to evaluating the regular expression locations (both case sensitive and insensitive).
  • by default, Nginx will serve regular expression matches in preference to prefix matches.
  • regular expression matches within the longest prefix match will “jump the line” when Nginx evaluates regex locations.
  • The exceptions to the “only one location block” rule may have implications on how the request is actually served and may not align with the expectations you had when designing your location blocks.
  • The index directive always leads to an internal redirect if it is used to handle the request.
  • In the case above, if you really need the execution to stay in the first block, you will have to come up with a different method of satisfying the request to the directory.
  • one way of preventing an index from switching contexts, but it’s probably not useful for most configurations
  • the try_files directive. This directive tells Nginx to check for the existence of a named set of files or directories.
  • the rewrite directive. When using the last parameter with the rewrite directive, or when using no parameter at all, Nginx will search for a new matching location based on the results of the rewrite.
  • The error_page directive can lead to an internal redirect, similar to that created by try_files, when certain status codes are encountered. (See the config sketch after this list.)
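
A compact nginx.conf sketch of the selection rules above (paths and names are illustrative):

```nginx
server {
    listen 80;                    # matched first, by IP address and port
    server_name example.com;      # exact match beats wildcards and regexes

    # Exact-match location: if the URI is exactly /status, the search ends here.
    location = /status {
        return 200 "ok\n";
    }

    # Prefix match with ^~: if this is the longest matching prefix,
    # the regex pass below is skipped entirely.
    location ^~ /static/ {
        root /var/www;
    }

    # Case-insensitive regex: regex matches normally beat plain prefixes.
    location ~* \.(jpg|png)$ {
        expires 30d;
    }

    # Plain prefix match: used only when nothing above matches better;
    # try_files can trigger an internal redirect to another location.
    location / {
        try_files $uri $uri/ /index.html;
    }
}
```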
張 旭

Active Record Associations - Ruby on Rails Guides

  • With Active Record associations, we can streamline these - and other - operations by declaratively telling Rails that there is a connection between the two models.
  • belongs_to, has_one, has_many, has_many :through, has_one :through, has_and_belongs_to_many
  • an association is a connection between two Active Record models
  • Associations are implemented using macro-style calls, so that you can declaratively add features to your models
  • A belongs_to association sets up a one-to-one connection with another model, such that each instance of the declaring model "belongs to" one instance of the other model.
  • belongs_to associations must use the singular term.
  • belongs_to
  • A has_one association also sets up a one-to-one connection with another model, but with somewhat different semantics (and consequences).
  • This association indicates that each instance of a model contains or possesses one instance of another model
  • belongs_to
  • A has_many association indicates a one-to-many connection with another model.
  • This association indicates that each instance of the model has zero or more instances of another model.
  • belongs_to
  • A has_many :through association is often used to set up a many-to-many connection with another model
  • This association indicates that the declaring model can be matched with zero or more instances of another model by proceeding through a third model.
  • through:
  • The collection of join models can be managed via the API
  • new join models are created for newly associated objects, and if some are gone their rows are deleted.
  • The has_many :through association is also useful for setting up "shortcuts" through nested has_many associations
  • A has_one :through association sets up a one-to-one connection with another model. This association indicates that the declaring model can be matched with one instance of another model by proceeding through a third model.
  • A has_and_belongs_to_many association creates a direct many-to-many connection with another model, with no intervening model.
  • id: false
  • The has_one relationship says that one of something is yours
  • using t.references :supplier instead.
  • declare a many-to-many relationship is to use has_many :through. This makes the association indirectly, through a join model
  • set up a has_many :through relationship if you need to work with the relationship model as an independent entity
  • set up a has_and_belongs_to_many relationship (though you'll need to remember to create the joining table in the database).
  • use has_many :through if you need validations, callbacks, or extra attributes on the join model (see the sketch at the end of this list)
  • With polymorphic associations, a model can belong to more than one other model, on a single association.
  • belongs_to :imageable, polymorphic: true
  • a polymorphic belongs_to declaration as setting up an interface that any other model can use.
    • 張 旭
       
      The _id column stores the foreign key for the various owning types; the _type column stores each owning type's table (class) name.
  • In designing a data model, you will sometimes find a model that should have a relation to itself
  • add a references column to the model itself
  • Controlling caching, avoiding name collisions, updating the schema, controlling association scope, bi-directional associations
  • All of the association methods are built around caching, which keeps the result of the most recent query available for further operations.
  • it is a bad idea to give an association a name that is already used for an instance method of ActiveRecord::Base. The association method would override the base method and break things.
  • You are responsible for maintaining your database schema to match your associations.
  • belongs_to associations you need to create foreign keys
  • has_and_belongs_to_many associations you need to create the appropriate join table
  • If you create an association some time after you build the underlying model, you need to remember to create an add_column migration to provide the necessary foreign key.
  • Active Record creates the name by using the lexical order of the class names
  • So a join between customer and order models will give the default join table name of "customers_orders" because "c" outranks "o" in lexical ordering.
  • For example, one would expect the tables "paper_boxes" and "papers" to generate a join table name of "papers_paper_boxes" because of the length of the name "paper_boxes", but it in fact generates a join table name of "paper_boxes_papers" (because the underscore '_' is lexicographically less than 's' in common encodings).
  • id: false
  • pass id: false to create_table because that table does not represent a model
  • By default, associations look for objects only within the current module's scope.
  • will work fine, because both the Supplier and the Account class are defined within the same scope.
  • To associate a model with a model in a different namespace, you must specify the complete class name in your association declaration:
  • class_name
  • class_name
  • Active Record provides the :inverse_of option
    • 張 旭
       
      Meaning: the first time, comparing the two first_name values shows they are the same; but after modifying first_name through the c instance, comparing again shows they differ, because the two are different objects in memory.
  • preventing inconsistencies and making your application more efficient
  • Every association will attempt to automatically find the inverse association and set the :inverse_of option heuristically (based on the association name)
  • In database terms, this association says that this class contains the foreign key.
  • In all of these methods, association is replaced with the symbol passed as the first argument to belongs_to.
  • (force_reload = false)
  • The association method returns the associated object, if any. If no associated object is found, it returns nil.
  • the cached version will be returned.
  • The association= method assigns an associated object to this object.
  • Behind the scenes, this means extracting the primary key from the associate object and setting this object's foreign key to the same value.
  • The build_association method returns a new object of the associated type
  • but the associated object will not yet be saved.
  • The create_association method returns a new object of the associated type
  • once it passes all of the validations specified on the associated model, the associated object will be saved
  • raises ActiveRecord::RecordInvalid if the record is invalid.
  • dependent
  • counter_cache
  • :autosave :class_name :counter_cache :dependent :foreign_key :inverse_of :polymorphic :touch :validate
  • finding the number of belonging objects more efficient.
  • Although the :counter_cache option is specified on the model that includes the belongs_to declaration, the actual column must be added to the associated model.
  • add a column named orders_count to the Customer model.
  • :destroy, when the object is destroyed, destroy will be called on its associated objects.
  • deleted directly from the database without calling their destroy method.
  • Rails will not create foreign key columns for you
  • The :inverse_of option specifies the name of the has_many or has_one association that is the inverse of this association
  • set the :touch option to true, then the updated_at or updated_on timestamp on the associated object will be set to the current time whenever this object is saved or destroyed
  • specify a particular timestamp attribute to update
  • If you set the :validate option to true, then associated objects will be validated whenever you save this object
  • By default, this is false: associated objects will not be validated when this object is saved.
  • where includes readonly select
  • make your code somewhat more efficient
  • no need to use includes for immediate associations
  • will be read-only when retrieved via the association
  • The select method lets you override the SQL SELECT clause that is used to retrieve data about the associated object
  • using the association.nil?
  • Assigning an object to a belongs_to association does not automatically save the object. It does not save the associated object either.
  • In database terms, this association says that the other class contains the foreign key.
  • the cached version will be returned.
  • :as :autosave :class_name :dependent :foreign_key :inverse_of :primary_key :source :source_type :through :validate
  • Setting the :as option indicates that this is a polymorphic association
  • :nullify causes the foreign key to be set to NULL. Callbacks are not executed.
  • Don't set or leave the :nullify option on associations that have NOT NULL database constraints.
  • The :source_type option specifies the source association type for a has_one :through association that proceeds through a polymorphic association.
  • The :source option specifies the source association name for a has_one :through association.
  • The :through option specifies a join model through which to perform the query
  • more efficient by including representatives in the association from suppliers to accounts
  • When you assign an object to a has_one association, that object is automatically saved (in order to update its foreign key).
  • If either of these saves fails due to validation errors, then the assignment statement returns false and the assignment itself is cancelled.
  • If the parent object (the one declaring the has_one association) is unsaved (that is, new_record? returns true) then the child objects are not saved.
  • If you want to assign an object to a has_one association without saving the object, use the association.build method
  • collection(force_reload = false), collection<<(object, ...), collection.delete(object, ...), collection.destroy(object, ...), collection=(objects), collection_singular_ids, collection_singular_ids=(ids), collection.clear, collection.empty?, collection.size, collection.find(...), collection.where(...), collection.exists?(...), collection.build(attributes = {}, ...), collection.create(attributes = {}), collection.create!(attributes = {})
  • In all of these methods, collection is replaced with the symbol passed as the first argument to has_many, and collection_singular is replaced with the singularized version of that symbol.
  • The collection<< method adds one or more objects to the collection by setting their foreign keys to the primary key of the calling model
  • The collection.delete method removes one or more objects from the collection by setting their foreign keys to NULL.
  • objects will be destroyed if they're associated with dependent: :destroy, and deleted if they're associated with dependent: :delete_all
  • The collection.destroy method removes one or more objects from the collection by running destroy on each object.
  • The collection_singular_ids method returns an array of the ids of the objects in the collection.
  • The collection_singular_ids= method makes the collection contain only the objects identified by the supplied primary key values, by adding and deleting as appropriate
  • The default strategy for has_many :through associations is delete_all, and for has_many associations is to set the foreign keys to NULL.
  • The collection.clear method removes all objects from the collection according to the strategy specified by the dependent option
  • uses the same syntax and options as ActiveRecord::Base.find
  • The collection.where method finds objects within the collection based on the conditions supplied but the objects are loaded lazily meaning that the database is queried only when the object(s) are accessed.
  • The collection.build method returns one or more new objects of the associated type. These objects will be instantiated from the passed attributes, and the link through their foreign key will be created, but the associated objects will not yet be saved.
  • The collection.create method returns a new object of the associated type. This object will be instantiated from the passed attributes, the link through its foreign key will be created, and, once it passes all of the validations specified on the associated model, the associated object will be saved.
  • :as :autosave :class_name :dependent :foreign_key :inverse_of :primary_key :source :source_type :through :validate
  • :delete_all causes all the associated objects to be deleted directly from the database (so callbacks will not execute)
  • :nullify causes the foreign keys to be set to NULL. Callbacks are not executed.
  • where includes readonly select
  • :conditions :through :polymorphic :foreign_key
  • By convention, Rails assumes that the column used to hold the primary key of the association is id. You can override this and explicitly specify the primary key with the :primary_key option.
  • The :source option specifies the source association name for a has_many :through association.
  • You only need to use this option if the name of the source association cannot be automatically inferred from the association name.
  • The :source_type option specifies the source association type for a has_many :through association that proceeds through a polymorphic association.
  • The :through option specifies a join model through which to perform the query.
  • has_many :through associations provide a way to implement many-to-many relationships,
  • By default, this is true: associated objects will be validated when this object is saved.
  • where extending group includes limit offset order readonly select uniq
  • If you use a hash-style where option, then record creation via this association will be automatically scoped using the hash
  • The extending method specifies a named module to extend the association proxy.
  • Association extensions
  • The group method supplies an attribute name to group the result set by, using a GROUP BY clause in the finder SQL.
  • has_many :line_items, -> { group 'orders.id' }, through: :orders
  • more efficient by including line items in the association from customers to orders
  • The limit method lets you restrict the total number of objects that will be fetched through an association.
  • The offset method lets you specify the starting offset for fetching objects via an association
  • The order method dictates the order in which associated objects will be received (in the syntax used by an SQL ORDER BY clause).
  • Use the distinct method to keep the collection free of duplicates.
  • mostly useful together with the :through option
  • -> { distinct }
  • .all.inspect
  • If you want to make sure that, upon insertion, all of the records in the persisted association are distinct (so that you can be sure that when you inspect the association that you will never find duplicate records), you should add a unique index on the table itself
  • unique: true
  • Do not attempt to use include? to enforce distinctness in an association.
  • multiple users could be attempting this at the same time
  • checking for uniqueness using something like include? is subject to race conditions
  • When you assign an object to a has_many association, that object is automatically saved (in order to update its foreign key).
  • If any of these saves fails due to validation errors, then the assignment statement returns false and the assignment itself is cancelled.
  • If the parent object (the one declaring the has_many association) is unsaved (that is, new_record? returns true) then the child objects are not saved when they are added
  • All unsaved members of the association will automatically be saved when the parent is saved.
  • assign an object to a has_many association without saving the object, use the collection.build method
  • collection(force_reload = false), collection<<(object, ...), collection.delete(object, ...), collection.destroy(object, ...), collection=(objects), collection_singular_ids, collection_singular_ids=(ids), collection.clear, collection.empty?, collection.size, collection.find(...), collection.where(...), collection.exists?(...), collection.build(attributes = {}), collection.create(attributes = {}), collection.create!(attributes = {})
  • If the join table for a has_and_belongs_to_many association has additional columns beyond the two foreign keys, these columns will be added as attributes to records retrieved via that association.
  • Records returned with additional attributes will always be read-only
  • If you require this sort of complex behavior on the table that joins two models in a many-to-many relationship, you should use a has_many :through association instead of has_and_belongs_to_many.
  • aliased as collection.concat and collection.push.
  • The collection.delete method removes one or more objects from the collection by deleting records in the join table
  • not destroy the objects
  • The collection.destroy method removes one or more objects from the collection by running destroy on each record in the join table, including running callbacks.
  • not destroy the objects.
  • The collection.clear method removes every object from the collection by deleting the rows from the joining table.
  • not destroy the associated objects.
  • The collection.find method finds objects within the collection. It uses the same syntax and options as ActiveRecord::Base.find.
  • The collection.where method finds objects within the collection based on the conditions supplied but the objects are loaded lazily meaning that the database is queried only when the object(s) are accessed.
  • The collection.exists? method checks whether an object meeting the supplied conditions exists in the collection.
  • The collection.build method returns a new object of the associated type.
  • the associated object will not yet be saved.
  • the associated object will be saved.
  • The collection.create method returns a new object of the associated type.
  • it passes all of the validations specified on the associated model
  • :association_foreign_key :autosave :class_name :foreign_key :join_table :validate
  • The :foreign_key and :association_foreign_key options are useful when setting up a many-to-many self-join.
  • Rails assumes that the column in the join table used to hold the foreign key pointing to the other model is the name of that model with the suffix _id added.
  • If you set the :autosave option to true, Rails will save any loaded members and destroy members that are marked for destruction whenever you save the parent object.
  • By convention, Rails assumes that the column in the join table used to hold the foreign key pointing to this model is the name of this model with the suffix _id added.
  • By default, this is true: associated objects will be validated when this object is saved.
  • where extending group includes limit offset order readonly select uniq
  • set conditions via a hash
  • In this case, using @parts.assemblies.create or @parts.assemblies.build will create assemblies where the factory column has the value "Seattle"
  • If you use a hash-style where, then record creation via this association will be automatically scoped using the hash
  • using a GROUP BY clause in the finder SQL.
  • Use the uniq method to remove duplicates from the collection.
  • assign an object to a has_and_belongs_to_many association, that object is automatically saved (in order to update the join table).
  • If any of these saves fails due to validation errors, then the assignment statement returns false and the assignment itself is cancelled.
  • If the parent object (the one declaring the has_and_belongs_to_many association) is unsaved (that is, new_record? returns true) then the child objects are not saved when they are added.
  • If you want to assign an object to a has_and_belongs_to_many association without saving the object, use the collection.build method.
  • Normal callbacks hook into the life cycle of Active Record objects, allowing you to work with those objects at various points
  • define association callbacks by adding options to the association declaration
  • Rails passes the object being added or removed to the callback.
  • stack callbacks on a single event by passing them as an array
  • If a before_add callback throws an exception, the object does not get added to the collection.
  • if a before_remove callback throws an exception, the object does not get removed from the collection
  • extend these objects through anonymous modules, adding new finders, creators, or other methods.
  • order_number
  • use a named extension module
  • proxy_association.owner returns the object that the association is a part of.
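
A minimal Ruby sketch of the collection methods annotated above, using hypothetical Assembly and Part models (the names and attributes are invented for illustration, not taken from the guide):

    class Assembly < ApplicationRecord
      has_and_belongs_to_many :parts
    end

    class Part < ApplicationRecord
      has_and_belongs_to_many :assemblies
    end

    assembly = Assembly.create!(name: "gearbox")

    part = assembly.parts.build(part_number: "G-100") # in memory only, not yet saved
    assembly.save!                                    # persists the new part and the join row

    assembly.parts.create!(part_number: "G-200")      # validates, saves, and links in one step

    assembly.parts.delete(part) # removes the join-table row; the Part record itself survives
    assembly.parts.clear        # deletes every join row for this assembly; no Part is destroyed

    assembly.parts.exists?(part_number: "G-200")      # => false once the collection is cleared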
crazylion lee

crystal-lang/crystal: The Crystal Programming Language - 1 views

  •  
    "Crystal is a programming language with the following goals: Have a syntax similar to Ruby (but compatibility with it is not a goal) Statically type-checked but without having to specify the type of variables or method arguments. Be able to call C code by writing bindings to it in Crystal. Have compile-time evaluation and generation of code, to avoid boilerplate code. Compile to efficient native code. "
crazylion lee

/bin/bash based SSL/TLS tester: testssl.sh - 0 views

shared by crazylion lee on 29 Sep 15
  •  
    "testssl.sh is a free command line tool which checks a server's service on any port for the support of TLS/SSL ciphers, protocols as well as recent cryptographic flaws and more. "
張 旭

Active Record Migrations - Ruby on Rails Guides - 0 views

    • 張 旭
       
       Setting up the migration that corresponds to belongs_to and has_many associations
    • 張 旭
       
      What is the corresponding migration for has_and_belongs_to_many?
  • add_column and remove_column
  • allowing your schema and changes to be database independent.
  • each migration as being a new 'version' of the database
  • each migration modifies it to add or remove tables, columns, or entries
  • Active Record will also update your db/schema.rb file to match the up-to-date structure of your database.
  • A primary key column called id will also be added implicitly, as it's the default primary key for all Active Record models
  • roll this migration back, it will remove the table
  • timestamps macro adds two columns, created_at and updated_at
  • On databases that support transactions with statements that change the schema, migrations are wrapped in a transaction
  • reversible
  • use up and down instead of change
  • Migrations are stored as files in the db/migrate directory, one for each migration class.
  • a UTC timestamp identifying the migration
  • Rails uses this timestamp to determine which migration should be run and in what order
  • "AddXXXToYYY" or "RemoveXXXFromYYY"
  • use a Ruby DSL
  • column type as references
  • part_number:string:index
  • a migration to remove a column
  • "CreateXXX"
  • change_column_null
  • AddUserRefToProducts
  • :references
  • produce join tables if JoinTable is part of the name
  • CreateJoinTable
  • The model and scaffold generators will create migrations appropriate for adding a new model.
  • enclosed by curly braces and follow the field type
  • create_table
  • By default, create_table will create a primary key called id
  • add an index on the new column
  • when using MySQL, the default is ENGINE=InnoDB
  • create_join_table creates an HABTM (has and belongs to many) join table
  • To customize the name of the table, provide a :table_name option:
  • create_join_table also accepts a block
  • change_table, used for changing existing tables
  • remove
  • rename
  • add_column
  • change_column
  • remove_column
  • change_column_default
  • place an SQL fragment in the :options option.
  • limit
  • precision
  • scale
  • polymorphic
  • default
  • index
  • add_foreign_key
  • Active Record only supports single column foreign keys.
  • use the old style of migration using up and down methods instead of the change method.
  • .connection.execute
  • change_table is also reversible, as long as the block does not call change, change_default or remove.
  • remove_column is reversible if you supply the column type as the third argument
  • Complex migrations may require processing that Active Record doesn't know how to reverse
  • reversible
  • Using reversible will ensure that the instructions are executed in the right order too.
  • add_column, add_foreign_key, add_index, add_reference, add_timestamps, change_column_default (must supply a :from and :to option), change_column_null, create_join_table, create_table, disable_extension, drop_join_table, drop_table (must supply a block), enable_extension, remove_column (must supply a type), remove_foreign_key (must supply a second table), remove_index, remove_reference, remove_timestamps, rename_column, rename_index, rename_table
  • :column_options option
  • have the option :null set to false by default
  • By default, the name of the join table comes from the union of the first two arguments provided to create_join_table
  • in alphabetical order
  • change_column command is irreversible.
    • 張 旭
       
      The referencing model comes first, the referenced model second: A references B.
  • If the column names can not be derived from the table names, you can use the :column and :primary_key options.
  • figure out the column name
  • foreign key for a specific column
  • foreign key by name
    • 張 旭
       
      I don't see how the :column and :name usages differ; they look basically the same.
  • Active Record knows how to reverse the migration automatically
    • 張 旭
       
      Sticking to the built-in methods makes it easier for Rails to roll back automatically
    • 張 旭
       
      except for a few of the special change_* and remove_* methods
  • should use reversible or write the up and down methods instead of using the change method
  • If your migration is irreversible, you should raise ActiveRecord::IrreversibleMigration from your down method.
  • DontUseConstraintForZipcodeValidationMigration
  • rails db:migrate
  • the db:migrate task also invokes the db:schema:dump task, which will update your db/schema.rb file to match the structure of your database.
  • specify a target version
  • all migrations up to and including 20080906120000
  • run the down method on all the migrations down to, but not including, 20080906120000
  • rails db:rollback
  • db:migrate:redo task is a shortcut for doing a rollback and then migrating back up again
    • 張 旭
       
      Older versions still use rake!
  • STEP parameter
  • db:setup task will create the database, load the schema and initialize it with the seed data
  • db:reset task will drop the database and set it up again. This is functionally equivalent to rails db:drop db:setup.
  • run a specific migration up or down, the db:migrate:up and db:migrate:down
  • the RAILS_ENV environment variable
  • db:migrate VERBOSE=false will suppress all output.
  • If you have already run the migration, then you cannot just edit the migration and run the migration again: Rails thinks it has already run the migration and so will do nothing when you run rails db:migrate.
  • must rollback the migration (for example with bin/rails db:rollback), edit your migration and then run rails db:migrate to run the corrected version.
  • editing existing migrations is not a good idea.
  • should write a new migration that performs the changes you require
  • revert method can be helpful when writing a new migration to undo previous migrations in whole or in part
  • require_relative
  • revert
  • They are not designed to be edited, they just represent the current state of the database.
  • What are Schema Files for?
  • Schema files are also useful if you want a quick look at what attributes an Active Record object has
  • annotate_models gem automatically adds and updates comments at the top of each model summarizing the schema if you desire that functionality.
  • database-independent
  • multiple databases
  • db/schema.rb cannot express database specific items such as triggers, stored procedures or check constraints
  • you can execute custom SQL statements, the schema dumper cannot reconstitute those statements from the database
  • db:structure:dump
    • 張 旭
       
      The price of a database-independent schema is that some database-specific features cannot be expressed, such as triggers. In short, if any migrations contain raw SQL, the schema dumper should be set to :sql instead of the default :ruby
  • set in config/application.rb by the config.active_record.schema_format setting, which may be either :sql or :ruby.
  • check them into source control.
  • db/schema.rb contains the current version number of the database
  • Validations such as validates :foreign_key, uniqueness: true are one way in which models can enforce data integrity
  • The :dependent option on associations allows models to automatically destroy child objects when the parent is destroyed.
  • Migrations can also be used to add or modify data
  • To add initial data after a database is created, Rails has a built-in 'seeds' feature that makes the process quick and easy.
  • db/seeds.rb
  • rails db:seed
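
A short sketch tying several of these annotations together: a create_table migration with references and timestamps, plus a HABTM join table (model names are invented for illustration; the version tag assumes a Rails 5.x app):

    # db/migrate/20190101120000_create_products.rb
    class CreateProducts < ActiveRecord::Migration[5.2]
      def change
        create_table :products do |t|                # an implicit primary key "id" is added
          t.string     :name, null: false
          t.decimal    :price, precision: 8, scale: 2
          t.references :supplier, foreign_key: true  # supplier_id column plus an index
          t.timestamps                               # created_at and updated_at
        end
      end
    end

    # db/migrate/20190101130000_create_join_table_products_categories.rb
    class CreateJoinTableProductsCategories < ActiveRecord::Migration[5.2]
      def change
        # table name defaults to "categories_products" (alphabetical), with no primary key
        create_join_table :products, :categories do |t|
          t.index [:product_id, :category_id]
        end
      end
    end

Both migrations use change, so rails db:rollback can reverse them automatically.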
張 旭

Template Designer Documentation - Jinja2 Documentation (2.10) - 0 views

  • A Jinja template doesn’t need to have a specific extension
  • A Jinja template is simply a text file
  • tags, which control the logic of the template
  • {% ... %} for Statements
  • {{ ... }} for Expressions to print to the template output
  • use a dot (.) to access attributes of a variable
  • the outer double-curly braces are not part of the variable, but the print statement.
  • If you access variables inside tags don’t put the braces around them.
  • If a variable or attribute does not exist, you will get back an undefined value.
  • the default behavior is to evaluate to an empty string if printed or iterated over, and to fail for every other operation.
  • if an object has an item and attribute with the same name. Additionally, the attr() filter only looks up attributes.
  • Variables can be modified by filters. Filters are separated from the variable by a pipe symbol (|) and may have optional arguments in parentheses.
  • Multiple filters can be chained
  • Tests can be used to test a variable against a common expression.
  • To test a variable, you add is plus the name of the test after the variable.
  • to find out if a variable is defined, you can do name is defined, which will then return true or false depending on whether name is defined in the current template context.
  • strip whitespace in templates by hand. If you add a minus sign (-) to the start or end of a block (e.g. a For tag), a comment, or a variable expression, the whitespaces before or after that block will be removed
  • not add whitespace between the tag and the minus sign
  • mark a block raw
  • Template inheritance allows you to build a base “skeleton” template that contains all the common elements of your site and defines blocks that child templates can override.
  • The {% extends %} tag is the key here. It tells the template engine that this template “extends” another template.
  • access templates in subdirectories with a slash
  • can’t define multiple {% block %} tags with the same name in the same template
  • use the special self variable and call the block with that name
  • self.title()
  • super()
  • put the name of the block after the end tag for better readability
  • if the block is replaced by a child template, a variable would appear that was not defined in the block or passed to the context.
  • setting the block to “scoped” by adding the scoped modifier to a block declaration
  • If you have a variable that may include any of the following chars (>, <, &, or ") you SHOULD escape it unless the variable contains well-formed and trusted HTML.
  • Jinja2 functions (macros, super, self.BLOCKNAME) always return template data that is marked as safe.
  • With the default syntax, control structures appear inside {% ... %} blocks.
  • the dictsort filter
  • loop.cycle
  • Unlike in Python, it’s not possible to break or continue in a loop
  • use loops recursively
  • add the recursive modifier to the loop definition and call the loop variable with the new iterable where you want to recurse.
  • The loop variable always refers to the closest (innermost) loop.
  • whether the value changed at all,
  • use it to test if a variable is defined, not empty and not false
  • Macros are comparable with functions in regular programming languages.
  • If a macro name starts with an underscore, it’s not exported and can’t be imported.
  • pass a macro to another macro
  • caller()
  • a single trailing newline is stripped if present
  • other whitespace (spaces, tabs, newlines etc.) is returned unchanged
  • a block tag works in “both” directions. That is, a block tag doesn’t just provide a placeholder to fill - it also defines the content that fills the placeholder in the parent.
  • Python dicts are not ordered
  • caller(user)
  • call(user)
  • This is a simple dialog rendered by using a macro and a call block.
  • Filter sections allow you to apply regular Jinja2 filters on a block of template data.
  • Assignments at top level (outside of blocks, macros or loops) are exported from the template like top level macros and can be imported by other templates.
  • using namespace objects which allow propagating of changes across scopes
  • use block assignments to capture the contents of a block into a variable name.
  • The extends tag can be used to extend one template from another.
  • Blocks are used for inheritance and act as both placeholders and replacements at the same time.
  • The include statement is useful to include a template and return the rendered contents of that file into the current namespace
  • Included templates have access to the variables of the active context by default.
  • putting often used code into macros
  • imports are cached and imported templates don’t have access to the current template variables, just the globals by default.
  • Macros and variables starting with one or more underscores are private and cannot be imported.
  • By default, included templates are passed the current context and imported templates are not.
  • imports are often used just as a module that holds macros.
  • Integers and floating point numbers are created by just writing the number down
  • Everything between two brackets is a list.
  • Tuples are like lists that cannot be modified (“immutable”).
  • A dict in Python is a structure that combines keys and values.
  • // Divide two numbers and return the truncated integer result
  • The special constants true, false, and none are indeed lowercase
  • all Jinja identifiers are lowercase
  • (expr) group an expression.
  • The is and in operators support negation using an infix notation
  • in Perform a sequence / mapping containment test.
  • | Applies a filter.
  • ~ Converts all operands into strings and concatenates them.
  • use inline if expressions.
  • always an attribute is returned and items are not looked up.
  • default(value, default_value=u'', boolean=False)¶ If the value is undefined it will return the passed default value, otherwise the value of the variable
  • dictsort(value, case_sensitive=False, by='key', reverse=False)¶ Sort a dict and yield (key, value) pairs.
  • format(value, *args, **kwargs)¶ Apply python string formatting on an object
  • groupby(value, attribute)¶ Group a sequence of objects by a common attribute.
  • grouping by is stored in the grouper attribute and the list contains all the objects that have this grouper in common.
  • indent(s, width=4, first=False, blank=False, indentfirst=None)¶ Return a copy of the string with each line indented by 4 spaces. The first line and blank lines are not indented by default.
  • join(value, d=u'', attribute=None)¶ Return a string which is the concatenation of the strings in the sequence.
  • map()¶ Applies a filter on a sequence of objects or looks up an attribute.
  • pprint(value, verbose=False)¶ Pretty print a variable. Useful for debugging.
  • reject()¶ Filters a sequence of objects by applying a test to each object, and rejecting the objects with the test succeeding.
  • replace(s, old, new, count=None)¶ Return a copy of the value with all occurrences of a substring replaced with a new one.
  • round(value, precision=0, method='common')¶ Round the number to a given precision
  • even if rounded to 0 precision, a float is returned.
  • select()¶ Filters a sequence of objects by applying a test to each object, and only selecting the objects with the test succeeding.
  • sort(value, reverse=False, case_sensitive=False, attribute=None)¶ Sort an iterable. Per default it sorts ascending, if you pass it true as first argument it will reverse the sorting.
  • striptags(value)¶ Strip SGML/XML tags and replace adjacent whitespace by one space.
  • tojson(value, indent=None)¶ Dumps a structure to JSON so that it’s safe to use in <script> tags.
  • trim(value)¶ Strip leading and trailing whitespace.
  • unique(value, case_sensitive=False, attribute=None)¶ Returns a list of unique items from the given iterable
  • urlize(value, trim_url_limit=None, nofollow=False, target=None, rel=None)¶ Converts URLs in plain text into clickable links.
  • defined(value)¶ Return true if the variable is defined
  • in(value, seq)¶ Check if value is in seq.
  • mapping(value)¶ Return true if the object is a mapping (dict etc.).
  • number(value)¶ Return true if the variable is a number.
  • sameas(value, other)¶ Check if an object points to the same memory address as another object
  • undefined(value)¶ Like defined() but the other way round.
  • A joiner is passed a string and will return that string every time it’s called, except the first time (in which case it returns an empty string).
  • namespace(...)¶ Creates a new container that allows attribute assignment using the {% set %} tag
  • The with statement makes it possible to create a new inner scope. Variables set within this scope are not visible outside of the scope.
  • activate and deactivate the autoescaping from within the templates
  • With both trim_blocks and lstrip_blocks enabled, you can put block tags on their own lines, and the entire block line will be removed when rendered, preserving the whitespace of the contents
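
A minimal pair of templates exercising inheritance, expressions, filters, tests, and whitespace control (the file names and the page, users, and user.name variables are invented for illustration):

    {# base.html #}
    <title>{% block title %}Default title{% endblock %}</title>
    <body>
      {% block content %}{% endblock content %}
    </body>

    {# child.html #}
    {% extends "base.html" %}
    {% block title %}{{ page.title | trim }}{% endblock %}
    {% block content %}
      <ul>
      {%- for user in users if user.name is defined %}
        <li>{{ loop.index }}. {{ user.name | e }}</li>
      {%- endfor %}
      </ul>
    {% endblock content %}

Rendering child.html produces the base layout with both blocks overridden; the minus signs strip the whitespace around the loop tags, and naming the end tags keeps long blocks readable.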
張 旭

Reusing Config - CircleCI - 0 views

  • Change the version key to 2.1 in your .circleci/config.yml file and commit the changes to test your build.
  • Reusable commands are invoked with specific parameters as steps inside a job.
  • Commands can use other commands in the scope of execution
  • Executors define the environment in which the steps of a job will be run.
  • Executor declarations in config outside of jobs can be used by all jobs in the scope of that declaration, allowing you to reuse a single executor definition across multiple jobs.
  • It is also possible to allow an orb to define the executor used by all of its commands.
  • When invoking an executor in a job any keys in the job itself will override those of the executor invoked.
  • Steps are used when you have a job or command that needs to mix predefined and user-defined steps.
  • Use the enum parameter type when you want to enforce that the value must be one from a specific set of string values.
  • Use an executor parameter type to allow the invoker of a job to decide what executor it will run on
  • invoke the same job more than once in the workflows stanza of config.yml, passing any necessary parameters as subkeys to the job.
  • If a job is declared inside an orb it can use commands in that orb or the global commands.
  • To use parameters in executors, define the parameters under the given executor.
  • Parameters are in-scope only within the job or command that defined them.
  • A single configuration may invoke a job multiple times.
  • Every job invocation may optionally accept two special arguments: pre-steps and post-steps.
  • Pre and post steps allow you to execute steps in a given job without modifying the job.
  • conditions are checked before a workflow is actually run
  • you cannot use a condition to check an environment variable.
  • Conditional steps may be located anywhere a regular step could and may only use parameter values as inputs.
  • A conditional step consists of a step with the key when or unless. Under this conditional key are the subkeys steps and condition
  • A condition is a single value that evaluates to true or false at the time the config is processed, so you cannot use environment variables as conditions
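
A condensed version 2.1 sketch combining a reusable executor, a parameterized command, and one job invoked twice from the workflow (the image tag and all names are placeholders, not from the CircleCI docs):

    version: 2.1

    executors:
      ruby-exec:
        docker:
          - image: circleci/ruby:2.6   # placeholder image

    commands:
      run-suite:
        parameters:
          suite:
            type: enum
            enum: ["unit", "integration"]
        steps:
          - run: bundle exec rake test:<< parameters.suite >>

    jobs:
      test:
        executor: ruby-exec
        parameters:
          suite:
            type: enum
            enum: ["unit", "integration"]
            default: "unit"
        steps:
          - checkout
          - run-suite:
              suite: << parameters.suite >>

    workflows:
      all-tests:
        jobs:
          - test                       # runs with the default "unit" suite
          - test:
              name: integration-test   # a second invocation of the same job
              suite: "integration"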
張 旭

elabs/pundit: Minimal authorization through OO design and pure Ruby classes - 0 views

  • The class implements some kind of query method
  • Pundit will call the current_user method to retrieve what to send into this argument
  • put these classes in app/policies
  • in leveraging regular Ruby classes and object oriented design patterns to build a simple, robust and scaleable authorization system
  • map to the name of a particular controller action
  • In the generated ApplicationPolicy, the model object is called record.
  • record
  • authorize
  • authorize would have done something like this: raise "not authorized" unless PostPolicy.new(current_user, @post).update?
  • pass a second argument to authorize if the name of the permission you want to check doesn't match the action name.
  • you can chain it
  • authorize returns the object passed to it
  • the policy method in both the view and controller.
  • have some kind of view listing records which a particular user has access to
  • ActiveRecord::Relation
  • Instances of this class respond to the method resolve, which should return some kind of result which can be iterated over.
  • scope.where(published: true)
    • 張 旭
       
      I think the gist is: an admin can see every post, while everyone else only sees posts with published = true
  • use this class from your controller via the policy_scope method:
  • PostPolicy::Scope.new(current_user, Post).resolve
  • policy_scope(@user.posts).each
  • This method will raise an exception if authorize has not yet been called.
  • verify_policy_scoped to your controller. This will raise an exception in the vein of verify_authorized. However, it tracks if policy_scope is used instead of authorize
  • need to conditionally bypass verification, you can use skip_authorization
  • skip_policy_scope
  • Having a mechanism that ensures authorization happens allows developers to thoroughly test authorization scenarios as units on the policy objects themselves.
  • Pundit doesn't do anything you couldn't have easily done yourself. It's a very small library, it just provides a few neat helpers.
  • all of the policy and scope classes are just plain Ruby classes
  • rails g pundit:policy post
  • define a filter that redirects unauthenticated users to the login page
  • fail more gracefully
  • raise Pundit::NotAuthorizedError, "must be logged in" unless user
  • having rails handle them as a 403 error and serving a 403 error page.
  • config.action_dispatch.rescue_responses["Pundit::NotAuthorizedError"] = :forbidden
  • with I18n to generate error messages
  • retrieve a policy for a record outside the controller or view
  • define a method in your controller called pundit_user
  • Pundit strongly encourages you to model your application in such a way that the only context you need for authorization is a user object and a domain model that you want to check authorization for.
  • Pundit does not allow you to pass additional arguments to policies
  • authorization is dependent on IP address in addition to the authenticated user
  • create a special class which wraps up both user and IP and passes it to the policy.
  • set up a permitted_attributes method in your policy
  • policy(@post).permitted_attributes
  • permitted_attributes(@post)
  • Pundit provides a convenient helper method
  • permit different attributes based on the current action,
  • If you have defined an action-specific method on your policy for the current action, the permitted_attributes helper will call it instead of calling permitted_attributes on your controller
  • If you don't have an instance for the first argument to authorize, then you can pass the class
  • restart the Rails server
  • Given there is a policy without a corresponding model / ruby class, you can retrieve it by passing a symbol
  • after_action :verify_authorized
  • It is not some kind of failsafe mechanism or authorization mechanism.
  • Pundit will work just fine without using verify_authorized and verify_policy_scoped
  •  
    "Minimal authorization through OO design and pure Ruby classes"
張 旭

Rails Routing from the Outside In - Ruby on Rails Guides - 0 views

  • Resource routing allows you to quickly declare all of the common routes for a given resourceful controller.
  • Rails would dispatch that request to the destroy method on the photos controller with { id: '17' } in params.
  • a resourceful route provides a mapping between HTTP verbs and URLs to controller actions.
  • each action also maps to particular CRUD operations in a database
  • resource :photo and resources :photos creates both singular and plural routes that map to the same controller (PhotosController).
  • One way to avoid deep nesting (as recommended above) is to generate the collection actions scoped under the parent, so as to get a sense of the hierarchy, but to not nest the member actions.
  • to only build routes with the minimal amount of information to uniquely identify the resource
  • The shallow method of the DSL creates a scope inside of which every nesting is shallow
  • These concerns can be used in resources to avoid code duplication and share behavior across routes
  • add a member route, just add a member block into the resource block
  • You can leave out the :on option, this will create the same member route except that the resource id value will be available in params[:photo_id] instead of params[:id].
  • Singular Resources
  • use a singular resource to map /profile (rather than /profile/:id) to the show action
  • Passing a String to get will expect a controller#action format
  • workaround
  • organize groups of controllers under a namespace
  • route /articles (without the prefix /admin) to Admin::ArticlesController
  • route /admin/articles to ArticlesController (without the Admin:: module prefix)
  • Nested routes allow you to capture this relationship in your routing.
  • helpers take an instance of Magazine as the first parameter (magazine_ads_url(@magazine)).
  • Resources should never be nested more than 1 level deep.
  • via the :shallow option
  • a balance between descriptive routes and deep nesting
  • :shallow_path prefixes member paths with the specified parameter
  • Routing Concerns allows you to declare common routes that can be reused inside other resources and routes
  • Rails can also create paths and URLs from an array of parameters.
  • use url_for with a set of objects
  • In helpers like link_to, you can specify just the object in place of the full url_for call
  • insert the action name as the first element of the array
  • This will recognize /photos/1/preview with GET, and route to the preview action of PhotosController, with the resource id value passed in params[:id]. It will also create the preview_photo_url and preview_photo_path helpers.
  • pass :on to a route, eliminating the block:
  • Collection Routes
  • This will enable Rails to recognize paths such as /photos/search with GET, and route to the search action of PhotosController. It will also create the search_photos_url and search_photos_path route helpers.
  • simple routing makes it very easy to map legacy URLs to new Rails actions
  • add an alternate new action using the :on shortcut
  • When you set up a regular route, you supply a series of symbols that Rails maps to parts of an incoming HTTP request.
  • :controller maps to the name of a controller in your application
  • :action maps to the name of an action within that controller
  • optional parameters, denoted by parentheses
  • This route will also route the incoming request of /photos to PhotosController#index, since :action and :id are optional
  • use a constraint on :controller that matches the namespace you require
  • dynamic segments don't accept dots
  • The params will also include any parameters from the query string
  • :defaults option.
  • set params[:format] to "jpg"
  • cannot override defaults via query parameters
  • specify a name for any route using the :as option
  • create logout_path and logout_url as named helpers in your application.
  • Inside the show action of UsersController, params[:username] will contain the username for the user.
  • should use the get, post, put, patch and delete methods to constrain a route to a particular verb.
  • use the match method with the :via option to match multiple verbs at once
  • Routing both GET and POST requests to a single action has security implications
  • 'GET' in Rails won't check for CSRF token. You should never write to the database from 'GET' requests
  • use the :constraints option to enforce a format for a dynamic segment
  • constraints
  • don't need to use anchors
  • Request-Based Constraints
  • the same name as the hash key and then compare the return value with the hash value.
  • constraint values should match the corresponding Request object method return type
    • 張 旭
       
      Basically it inspects the incoming request: if the request matches the given criteria, the route is allowed through.
  • blacklist
    • 張 旭
       
      This part is a bit complicated ...
  • redirect helper
  • reuse dynamic segments from the match in the path to redirect
  • this redirection is a 301 "Moved Permanently" redirect.
  • root method
  • put the root route at the top of the file
  • The root route only routes GET requests to the action.
  • root inside namespaces and scopes
  • For namespaced controllers you can use the directory notation
  • Only the directory notation is supported
  • use the :constraints option to specify a required format on the implicit id
  • specify a single constraint to apply to a number of routes by using the block
  • non-resourceful routes
  • :id parameter doesn't accept dots
  • :as option lets you override the normal naming for the named route helpers
  • use the :as option to prefix the named route helpers that Rails generates for a route
  • prevent name collisions
  • prefix routes with a named parameter
  • This will provide you with URLs such as /bob/articles/1 and will allow you to reference the username part of the path as params[:username] in controllers, helpers and views
  • :only option
  • :except option
  • generate only the routes that you actually need can cut down on memory use and speed up the routing process.
  • alter path names
  • http://localhost:3000/rails/info/routes
  • rake routes
  • setting the CONTROLLER environment variable
  • Routes should be included in your testing strategy
  • assert_generates, assert_recognizes, assert_routing
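
A compact routes.rb sketch exercising several of the annotated features (the controller and resource names are invented):

    Rails.application.routes.draw do
      root "pages#home"                      # GET / only

      resources :photos, only: [:index, :show] do
        member     { get :preview }          # /photos/:id/preview -> preview_photo_path
        collection { get :search }           # /photos/search      -> search_photos_path
      end

      resource :profile, only: [:show]       # singular: /profile, no :id segment

      namespace :admin do
        resources :articles                  # /admin/articles -> Admin::ArticlesController
      end

      get "exit", to: "sessions#destroy", as: :logout   # logout_path / logout_url

      scope ":username" do
        resources :articles, only: [:index]  # /bob/articles, with params[:username] = "bob"
      end
    end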
張 旭

Using Workflows to Schedule Jobs - CircleCI - 1 views

  • A workflow is a set of rules for defining a collection of jobs and their run order.
  • Schedule workflows for jobs that should only run periodically.
  • run multiple jobs in parallel
  • rerun just the failed job
  • Builds without workflows require a build job.
  • Refer the YAML Anchors/Aliases documentation for information about how to alias and reuse syntax to keep your .circleci/config.yml file small.
  • workflow orchestration with two parallel jobs
  • jobs run according to configured requirements, each job waiting to start until the required job finishes successfully
  • requires: key
  • fans-out to run a set of acceptance test jobs in parallel, and finally fans-in to run a common deploy job.
  • Holding a Workflow for a Manual Approval
  • Workflows can be configured to wait for manual approval of a job before continuing to the next job
  • add a job to the jobs list with the key type: approval
  • approval is a special job type that is only available to jobs under the workflow key
  • The name of the job to hold is arbitrary - it could be wait or pause, for example, as long as the job has a type: approval key in it.
  • schedule a workflow to run at a certain time for specific branches.
  • The triggers key is only added under your workflows key
  • using cron syntax to represent Coordinated Universal Time (UTC) for specified branches.
  • By default, a workflow is triggered on every git push
  • the commit workflow has no triggers key and will run on every git push
  • The nightly workflow has a triggers key and will run on the specified schedule
  • Cron step syntax (for example, */1, */20) is not supported.
  • use a context to share environment variables
  • use the same shared environment variables when initiated by a user who is part of the organization.
  • CircleCI does not run workflows for tags unless you explicitly specify tag filters.
  • CircleCI branch and tag filters support the Java variant of regex pattern matching.
  • Each workflow has an associated workspace which can be used to transfer files to downstream jobs as the workflow progresses.
  • The workspace is an additive-only store of data.
  • Jobs can persist data to the workspace
  • Downstream jobs can attach the workspace to their container filesystem.
  • Attaching the workspace downloads and unpacks each layer based on the ordering of the upstream jobs in the workflow graph.
  • Workflows that include jobs running on multiple branches may require data to be shared using workspaces
  • To persist data from a job and make it available to other jobs, configure the job to use the persist_to_workspace key.
  • Files and directories named in the paths: property of persist_to_workspace will be uploaded to the workflow’s temporary workspace relative to the directory specified with the root key.
  • Configure a job to get saved data by configuring the attach_workspace key.
  • persist_to_workspace
  • attach_workspace
  • To rerun only a workflow’s failed jobs, click the Workflows icon in the app and select a workflow to see the status of each job, then click the Rerun button and select Rerun from failed.
  • if you do not see your workflows triggering, a configuration error is preventing the workflow from starting.
  • check your Workflows page of the CircleCI app (not the Job page)
  •  
    "A workflow is a set of rules for defining a collection of jobs and their run order."
張 旭

Setup ProxySQL for High Availability (not a Single Point of Failure) - Percona Database... - 0 views

  • ProxySQL doesn’t natively support any high availability solution
  • most common solution is setting up ProxySQL as part of a tile architecture, where Application/ProxySQL are deployed together.
    • 張 旭
       
      i.e. ship ProxySQL bundled directly with the application
  • If we have 400 instances of ProxySQL, we end up keeping our databases busy just performing the checks.
  • Another possible approach is to have two layers of ProxySQL, one close to the application and another in the middle to connect to the database.
  • creates additional complexity in the management of the platform, and it adds network hops.
  • combining existing solutions and existing blocks: KeepAlived + ProxySQl + MySQL.
  • Keepalived implements a set of checkers to dynamically and adaptively maintain and manage load-balanced server pool according to their health.
  • Keepalived implements a set of hooks to the VRRP finite state machine providing low-level and high-speed protocol interactions.
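
A sketch of the Keepalived half of that stack, assuming two ProxySQL nodes sharing a virtual IP (the interface name, addresses, and health-check command are placeholders, not from the article):

    # /etc/keepalived/keepalived.conf on the primary node
    vrrp_script chk_proxysql {
        script "/usr/bin/pgrep proxysql"   # any cheap local health check will do
        interval 2
        fall 2
    }

    vrrp_instance VI_PROXYSQL {
        state MASTER                       # BACKUP on the standby node
        interface eth0
        virtual_router_id 51
        priority 101                       # lower (e.g. 100) on the standby
        virtual_ipaddress {
            192.168.1.50                   # applications connect to this VIP
        }
        track_script {
            chk_proxysql
        }
    }

When the check fails, the VRRP state machine moves the VIP to the standby node, so the application keeps a single entry point without every instance hammering the databases with checks.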
張 旭

Makefiles - Best practices and suggestions | MDN - 0 views

  • hardcoded values - avoid them like the plague
  • For classes of hardware (unix/windows) place your makefile in a subdirectory, unix/Makefile.in
  • Initial make call should always be the workhorse: build, generate, deploy, install, etc.
  • All subsequent make calls must become a NOP unless sources or dependencies change or have been removed.
    • 張 旭
       
      Short for "No Operation" or "No Operation Performed"; it means doing nothing
  • Do not use directories as a dependency for generated targets, ever.
  • Parallel make: add an explicit timestamp dependency (.done) that make can synchronize threaded calls on to avoid a race condition.
  • Maintain clean targets - makefiles should be able to remove all content that is generated so "make clean" will return the sandbox/directory back to a clean state.
  • Wrap check/unit tests in an ENABLE_TESTS conditional, as in the sketch below
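
A small Makefile sketch applying several of these points: no hardcoded values, an explicit timestamp file for parallel safety, a clean target, and tests behind a conditional (the file names are illustrative, and recipe lines must be indented with tabs):

    CC   ?= cc
    OBJS := main.o util.o

    all: build.done                # later calls are a NOP unless sources change

    build.done: $(OBJS)            # explicit timestamp target keeps parallel makes in sync
	    $(CC) -o app $(OBJS)
	    touch $@

    %.o: %.c
	    $(CC) -c -o $@ $<

    ifdef ENABLE_TESTS
    check: build.done
	    ./run-tests.sh
    endif

    clean:                         # returns the directory to a clean state
	    rm -f app $(OBJS) build.done

    .PHONY: all check clean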
張 旭

Best practices for writing Dockerfiles | Docker Documentation - 0 views

  • building efficient images
  • Docker builds images automatically by reading the instructions from a Dockerfile -- a text file that contains all commands, in order, needed to build a given image.
  • A Docker image consists of read-only layers each of which represents a Dockerfile instruction.
  • The layers are stacked and each one is a delta of the changes from the previous layer
  • When you run an image and generate a container, you add a new writable layer (the “container layer”) on top of the underlying layers.
  • By “ephemeral,” we mean that the container can be stopped and destroyed, then rebuilt and replaced with an absolute minimum set up and configuration.
  • Inadvertently including files that are not necessary for building an image results in a larger build context and larger image size.
  • To exclude files not relevant to the build (without restructuring your source repository) use a .dockerignore file. This file supports exclusion patterns similar to .gitignore files.
  • minimize image layers by leveraging build cache.
  • if your build contains several layers, you can order them from the less frequently changed (to ensure the build cache is reusable) to the more frequently changed
  • avoid installing extra or unnecessary packages just because they might be “nice to have.”
  • Each container should have only one concern.
  • Decoupling applications into multiple containers makes it easier to scale horizontally and reuse containers
  • Limiting each container to one process is a good rule of thumb, but it is not a hard and fast rule.
  • Use your best judgment to keep containers as clean and modular as possible.
  • do multi-stage builds and only copy the artifacts you need into the final image. This allows you to include tools and debug information in your intermediate build stages without increasing the size of the final image.
  • avoid duplication of packages and make the list much easier to update.
  • When building an image, Docker steps through the instructions in your Dockerfile, executing each in the order specified.
  • the next instruction is compared against all child images derived from that base image to see if one of them was built using the exact same instruction. If not, the cache is invalidated.
  • simply comparing the instruction in the Dockerfile with one of the child images is sufficient.
  • For the ADD and COPY instructions, the contents of the file(s) in the image are examined and a checksum is calculated for each file.
  • If anything has changed in the file(s), such as the contents and metadata, then the cache is invalidated.
  • cache checking does not look at the files in the container to determine a cache match.
  • In that case just the command string itself is used to find a match.
    • 張 旭
       
      For an instruction like RUN apt-get, the command string itself is compared directly.
  • Whenever possible, use current official repositories as the basis for your images.
  • Using RUN apt-get update && apt-get install -y ensures your Dockerfile installs the latest package versions with no further coding or manual intervention.
  • cache busting
  • Docker executes these commands using the /bin/sh -c interpreter, which only evaluates the exit code of the last operation in the pipe to determine success.
  • set -o pipefail && to ensure that an unexpected error prevents the build from inadvertently succeeding.
  • The CMD instruction should be used to run the software contained by your image, along with any arguments.
  • CMD should almost always be used in the form of CMD [“executable”, “param1”, “param2”…]
  • CMD should rarely be used in the manner of CMD [“param”, “param”] in conjunction with ENTRYPOINT
  • The ENV instruction is also useful for providing required environment variables specific to services you wish to containerize,
  • Each ENV line creates a new intermediate layer, just like RUN commands
  • COPY is preferred
  • COPY only supports the basic copying of local files into the container
  • the best use for ADD is local tar file auto-extraction into the image, as in ADD rootfs.tar.xz /
  • If you have multiple Dockerfile steps that use different files from your context, COPY them individually, rather than all at once.
  • using ADD to fetch packages from remote URLs is strongly discouraged; you should use curl or wget instead
  • The best use for ENTRYPOINT is to set the image’s main command, allowing that image to be run as though it was that command (and then use CMD as the default flags).
  • the image name can double as a reference to the binary as shown in the command
  • The VOLUME instruction should be used to expose any database storage area, configuration storage, or files/folders created by your docker container.
  • use VOLUME for any mutable and/or user-serviceable parts of your image
  • If you absolutely need functionality similar to sudo, such as initializing the daemon as root but running it as non-root, consider using “gosu”.
  • always use absolute paths for your WORKDIR
  • An ONBUILD command executes after the current Dockerfile build completes.
  • Think of the ONBUILD command as an instruction the parent Dockerfile gives to the child Dockerfile
  • A Docker build executes ONBUILD commands before any command in a child Dockerfile.
  • Be careful when putting ADD or COPY in ONBUILD. The “onbuild” image fails catastrophically if the new build’s context is missing the resource being added.
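
A compact Dockerfile sketch stringing several of these practices together, assuming a hypothetical Go service (the base images and paths are placeholders):

    # Build stage: compilers and sources stay out of the final image (multi-stage build)
    FROM golang:1.12 AS builder
    WORKDIR /src
    COPY go.mod go.sum ./              # rarely-changing files first, to keep the cache warm
    RUN go mod download
    COPY . .
    RUN CGO_ENABLED=0 go build -o /out/app ./cmd/app

    # Final stage: only the built artifact is copied in
    FROM debian:stretch-slim
    # update and install in a single RUN so the package index is never stale in the cache
    RUN apt-get update && apt-get install -y --no-install-recommends \
            ca-certificates \
        && rm -rf /var/lib/apt/lists/*
    COPY --from=builder /out/app /usr/local/bin/app
    VOLUME /var/lib/app                # mutable, user-serviceable data lives outside the image
    ENTRYPOINT ["/usr/local/bin/app"]  # the image's main command ...
    CMD ["--help"]                     # ... with overridable default flags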