Larvata: Group items tagged classes

張 旭

Template Designer Documentation - Jinja2 Documentation (2.10)

  • A Jinja template doesn’t need to have a specific extension
  • A Jinja template is simply a text file
  • tags, which control the logic of the template
  • {% ... %} for Statements
  • {{ ... }} for Expressions to print to the template output
  • use a dot (.) to access attributes of a variable
  • the outer double-curly braces are not part of the variable, but the print statement.
  • If you access variables inside tags don’t put the braces around them.
  • If a variable or attribute does not exist, you will get back an undefined value.
  • the default behavior is to evaluate to an empty string if printed or iterated over, and to fail for every other operation.
  • the difference between foo.bar and foo['bar'] matters only if an object has an item and an attribute with the same name; additionally, the attr() filter only looks up attributes.
  • Variables can be modified by filters. Filters are separated from the variable by a pipe symbol (|) and may have optional arguments in parentheses.
  • Multiple filters can be chained
  • Tests can be used to test a variable against a common expression.
  • To test a variable, add is plus the name of the test after the variable.
  • to find out if a variable is defined, you can do name is defined, which will then return true or false depending on whether name is defined in the current template context.
  • strip whitespace in templates by hand. If you add a minus sign (-) to the start or end of a block (e.g. a For tag), a comment, or a variable expression, the whitespaces before or after that block will be removed
  • not add whitespace between the tag and the minus sign
  • mark a block raw
  • Template inheritance allows you to build a base “skeleton” template that contains all the common elements of your site and defines blocks that child templates can override.
  • The {% extends %} tag is the key here. It tells the template engine that this template “extends” another template.
  • access templates in subdirectories with a slash
  • can’t define multiple {% block %} tags with the same name in the same template
  • use the special self variable and call the block with that name
  • self.title()
  • super()
  • put the name of the block after the end tag for better readability
  • if the block is replaced by a child template, a variable would appear that was not defined in the block or passed to the context.
  • setting the block to “scoped” by adding the scoped modifier to a block declaration
  • If you have a variable that may include any of the following chars (>, <, &, or ") you SHOULD escape it unless the variable contains well-formed and trusted HTML.
  • Jinja2 functions (macros, super, self.BLOCKNAME) always return template data that is marked as safe.
  • With the default syntax, control structures appear inside {% ... %} blocks.
  • the dictsort filter
  • loop.cycle
  • Unlike in Python, it’s not possible to break or continue in a loop
  • use loops recursively
  • add the recursive modifier to the loop definition and call the loop variable with the new iterable where you want to recurse.
  • The loop variable always refers to the closest (innermost) loop.
  • whether the value changed at all,
  • use it to test if a variable is defined, not empty and not false
  • Macros are comparable with functions in regular programming languages.
  • If a macro name starts with an underscore, it’s not exported and can’t be imported.
  • pass a macro to another macro
  • caller()
  • a single trailing newline is stripped if present
  • other whitespace (spaces, tabs, newlines etc.) is returned unchanged
  • a block tag works in “both” directions. That is, a block tag doesn’t just provide a placeholder to fill - it also defines the content that fills the placeholder in the parent.
  • Python dicts are not ordered
  • caller(user)
  • call(user)
  • This is a simple dialog rendered by using a macro and a call block.
  • Filter sections allow you to apply regular Jinja2 filters on a block of template data.
  • Assignments at top level (outside of blocks, macros or loops) are exported from the template like top level macros and can be imported by other templates.
  • using namespace objects which allow propagating of changes across scopes
  • use block assignments to capture the contents of a block into a variable name.
  • The extends tag can be used to extend one template from another.
  • Blocks are used for inheritance and act as both placeholders and replacements at the same time.
  • The include statement is useful to include a template and return the rendered contents of that file into the current namespace
  • Included templates have access to the variables of the active context by default.
  • putting often used code into macros
  • imports are cached and imported templates don’t have access to the current template variables, just the globals by default.
  • Macros and variables starting with one or more underscores are private and cannot be imported.
  • By default, included templates are passed the current context and imported templates are not.
  • imports are often used just as a module that holds macros.
  • Integers and floating point numbers are created by just writing the number down
  • Everything between two brackets is a list.
  • Tuples are like lists that cannot be modified (“immutable”).
  • A dict in Python is a structure that combines keys and values.
  • // Divide two numbers and return the truncated integer result
  • The special constants true, false, and none are indeed lowercase
  • all Jinja identifiers are lowercase
  • (expr) group an expression.
  • The is and in operators support negation using an infix notation
  • in Perform a sequence / mapping containment test.
  • | Applies a filter.
  • ~ Converts all operands into strings and concatenates them.
  • use inline if expressions.
  • foo|attr("bar") works like foo.bar, except that always an attribute is returned and items are not looked up.
  • default(value, default_value=u'', boolean=False) If the value is undefined it will return the passed default value, otherwise the value of the variable
  • dictsort(value, case_sensitive=False, by='key', reverse=False) Sort a dict and yield (key, value) pairs.
  • format(value, *args, **kwargs) Apply python string formatting on an object
  • groupby(value, attribute) Group a sequence of objects by a common attribute.
  • grouping by is stored in the grouper attribute and the list contains all the objects that have this grouper in common.
  • indent(s, width=4, first=False, blank=False, indentfirst=None) Return a copy of the string with each line indented by 4 spaces. The first line and blank lines are not indented by default.
  • join(value, d=u'', attribute=None) Return a string which is the concatenation of the strings in the sequence.
  • map() Applies a filter on a sequence of objects or looks up an attribute.
  • pprint(value, verbose=False) Pretty print a variable. Useful for debugging.
  • reject() Filters a sequence of objects by applying a test to each object, and rejecting the objects with the test succeeding.
  • replace(s, old, new, count=None) Return a copy of the value with all occurrences of a substring replaced with a new one.
  • round(value, precision=0, method='common') Round the number to a given precision
  • even if rounded to 0 precision, a float is returned.
  • select() Filters a sequence of objects by applying a test to each object, and only selecting the objects with the test succeeding.
  • sort(value, reverse=False, case_sensitive=False, attribute=None) Sort an iterable. Per default it sorts ascending, if you pass it true as first argument it will reverse the sorting.
  • striptags(value) Strip SGML/XML tags and replace adjacent whitespace by one space.
  • tojson(value, indent=None) Dumps a structure to JSON so that it’s safe to use in <script> tags.
  • trim(value) Strip leading and trailing whitespace.
  • unique(value, case_sensitive=False, attribute=None) Returns a list of unique items from the given iterable
  • urlize(value, trim_url_limit=None, nofollow=False, target=None, rel=None) Converts URLs in plain text into clickable links.
  • defined(value) Return true if the variable is defined
  • in(value, seq) Check if value is in seq.
  • mapping(value) Return true if the object is a mapping (dict etc.).
  • number(value) Return true if the variable is a number.
  • sameas(value, other) Check if an object points to the same memory address as another object
  • undefined(value) Like defined() but the other way round.
  • A joiner is passed a string and will return that string every time it’s called, except the first time (in which case it returns an empty string).
  • namespace(...) Creates a new container that allows attribute assignment using the {% set %} tag
  • The with statement makes it possible to create a new inner scope. Variables set within this scope are not visible outside of the scope.
  • activate and deactivate the autoescaping from within the templates
  • With both trim_blocks and lstrip_blocks enabled, you can put block tags on their own lines, and the entire block line will be removed when rendered, preserving the whitespace of the contents
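A minimal Python sketch tying several of the highlights above together: {% %} statements, {{ }} expressions, chained filters, an is defined test, whitespace control with a minus sign, and inheritance via {% extends %} / {% block %} with super(). The template names and variables are made up for illustration.

```python
from jinja2 import DictLoader, Environment

templates = {
    "base.html": (
        "<title>{% block title %}My Site{% endblock %}</title>\n"
        "{% block content %}{% endblock content %}"  # block name repeated after the end tag for readability
    ),
    "child.html": (
        '{% extends "base.html" %}\n'
        "{% block title %}{{ super() }} - Posts{% endblock %}\n"
        "{% block content %}\n"
        "{%- for post in posts if post.published %}\n"   # '-' strips the whitespace before this tag
        "  {{ loop.index }}. {{ post.title|trim|upper }}\n"
        "{%- endfor %}\n"
        "{% if author is defined %}by {{ author }}{% endif %}\n"
        "{% endblock content %}"
    ),
}

env = Environment(loader=DictLoader(templates))
print(env.get_template("child.html").render(
    posts=[{"title": " hello ", "published": True}, {"title": "draft", "published": False}],
    author="someone",
))
```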
張 旭

Boosting your kubectl productivity ♦︎ Learnk8s

  • kubectl is your cockpit to control Kubernetes.
  • kubectl is a client for the Kubernetes API
  • Kubernetes API is an HTTP REST API.
  • This API is the real Kubernetes user interface.
  • Kubernetes is fully controlled through this API
  • every Kubernetes operation is exposed as an API endpoint and can be executed by an HTTP request to this endpoint.
  • the main job of kubectl is to carry out HTTP requests to the Kubernetes API
  • Kubernetes maintains an internal state of resources, and all Kubernetes operations are CRUD operations on these resources.
  • Kubernetes is a fully resource-centred system
  • Kubernetes API reference is organised as a list of resource types with their associated operations.
  • This is how kubectl works for all commands that interact with the Kubernetes cluster.
  • kubectl simply makes HTTP requests to the appropriate Kubernetes API endpoints.
  • it's totally possible to control Kubernetes with a tool like curl by manually issuing HTTP requests to the Kubernetes API.
  • Kubernetes consists of a set of independent components that run as separate processes on the nodes of a cluster.
  • components on the master nodes
  • Storage backend: stores resource definitions (usually etcd is used)
  • API server: provides Kubernetes API and manages storage backend
  • Controller manager: ensures resource statuses match specifications
  • Scheduler: schedules Pods to worker nodes
  • component on the worker nodes
  • Kubelet: manages execution of containers on a worker node
  • triggers the ReplicaSet controller, which is a sub-process of the controller manager.
  • the scheduler, which watches for Pod definitions that are not yet scheduled to a worker node.
  • creating and updating resources in the storage backend on the master node.
  • The kubelet of the worker node your ReplicaSet Pods have been scheduled to instructs the configured container runtime (which may be Docker) to download the required container images and run the containers.
  • Kubernetes components (except the API server and the storage backend) work by watching for resource changes in the storage backend and manipulating resources in the storage backend.
  • However, these components do not access the storage backend directly, but only through the Kubernetes API.
    • 張 旭
       
      Brilliant: the components all communicate with one another through API calls, which is good microservice behaviour.
  • double usage of the Kubernetes API for internal components as well as for external users is a fundamental design concept of Kubernetes.
  • All other Kubernetes components and users read, watch, and manipulate the state (i.e. resources) of Kubernetes through the Kubernetes API
  • The storage backend stores the state (i.e. resources) of Kubernetes.
  • command completion is a shell feature that works by the means of a completion script.
  • A completion script is a shell script that defines the completion behaviour for a specific command. Sourcing a completion script enables completion for the corresponding command.
  • kubectl completion zsh
  • /etc/bash_completion.d directory (create it, if it doesn't exist)
  • source <(kubectl completion bash)
  • source <(kubectl completion zsh)
  • autoload -Uz compinit; compinit
  • the API reference, which contains the full specifications of all resources.
  • kubectl api-resources
  • displays the resource names in their plural form (e.g. deployments instead of deployment). It also displays the shortname (e.g. deploy) for those resources that have one. Don't worry about these differences. All of these name variants are equivalent for kubectl.
  • .spec
  • custom columns output format comes in. It lets you freely define the columns and the data to display in them. You can choose any field of a resource to be displayed as a separate column in the output
  • kubectl get pods -o custom-columns='NAME:metadata.name,NODE:spec.nodeName'
  • kubectl explain pod.spec.
  • kubectl explain pod.metadata.
  • browse the resource specifications and try it out with any fields you like!
  • JSONPath is a language to extract data from JSON documents (it is similar to XPath for XML).
  • with kubectl explain, only a subset of the JSONPath capabilities is supported
  • Many fields of Kubernetes resources are lists, and this operator allows you to select items of these lists. It is often used with a wildcard as [*] to select all items of the list.
  • kubectl get pods -o custom-columns='NAME:metadata.name,IMAGES:spec.containers[*].image'
  • a Pod may contain more than one container.
  • The availability zones for each node are obtained through the special failure-domain.beta.kubernetes.io/zone label.
  • kubectl get nodes -o yaml or kubectl get nodes -o json
  • The default kubeconfig file is ~/.kube/config
  • with multiple clusters, then you have connection parameters for multiple clusters configured in your kubeconfig file.
  • Within a cluster, you can set up multiple namespaces (a namespace is a kind of "virtual" cluster within a physical cluster)
  • overwrite the default kubeconfig file with the --kubeconfig option for every kubectl command.
  • Namespace: the namespace to use when connecting to the cluster
  • a one-to-one mapping between clusters and contexts.
  • When kubectl reads a kubeconfig file, it always uses the information from the current context.
  • just change the current context in the kubeconfig file
  • to switch to another namespace in the same cluster, you can change the value of the namespace element of the current context
  • kubectl also provides the --cluster, --user, --namespace, and --context options that allow you to overwrite individual elements and the current context itself, regardless of what is set in the kubeconfig file.
  • for switching between clusters and namespaces is kubectx.
  • kubectl config get-contexts
  • just have to download the shell scripts named kubectl-ctx and kubectl-ns to any directory in your PATH and make them executable (for example, with chmod +x)
  • kubectl proxy
  • kubectl get roles
  • kubectl get pod
  • Kubectl plugins are distributed as simple executable files with a name of the form kubectl-x. The prefix kubectl- is mandatory,
  • To install a plugin, you just have to copy the kubectl-x file to any directory in your PATH and make it executable (for example, with chmod +x)
  • krew itself is a kubectl plugin
  • check out the kubectl-plugins GitHub topic
  • The executable can be of any type, a Bash script, a compiled Go program, a Python script, it really doesn't matter. The only requirement is that it can be directly executed by the operating system.
  • kubectl plugins can be written in any programming or scripting language.
  • you can write more sophisticated plugins with real programming languages, for example, using a Kubernetes client library. If you use Go, you can also use the cli-runtime library, which exists specifically for writing kubectl plugins.
  • a kubeconfig file consists of a set of contexts
  • changing the current context means changing the cluster, if you have only a single context per cluster.
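Because kubectl is only a client that issues HTTP requests to the Kubernetes API, and kubectl proxy exposes that API locally, the data behind a command like kubectl get pods -o custom-columns can also be fetched directly. A rough Python sketch, assuming kubectl proxy is running on its default 127.0.0.1:8001 and the requests library is available:

```python
import requests

API = "http://127.0.0.1:8001"  # started in another terminal with: kubectl proxy

# Roughly the same data as:
#   kubectl get pods -o custom-columns='NAME:metadata.name,NODE:spec.nodeName,IMAGES:spec.containers[*].image'
resp = requests.get(f"{API}/api/v1/namespaces/default/pods")
resp.raise_for_status()

for pod in resp.json()["items"]:
    name = pod["metadata"]["name"]
    node = pod["spec"].get("nodeName", "<unscheduled>")
    images = [c["image"] for c in pod["spec"]["containers"]]  # a Pod may contain more than one container
    print(f"{name}\t{node}\t{','.join(images)}")
```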
張 旭

Replication - MongoDB Manual

  • A replica set in MongoDB is a group of mongod processes that maintain the same data set.
  • Replica sets provide redundancy and high availability, and are the basis for all production deployments.
  • With multiple copies of data on different database servers, replication provides a level of fault tolerance against the loss of a single database server.
  • replication can provide increased read capacity as clients can send read operations to different servers.
  • A replica set is a group of mongod instances that maintain the same data set.
  • A replica set contains several data bearing nodes and optionally one arbiter node.
  • one and only one member is deemed the primary node, while the other nodes are deemed secondary nodes.
  • A replica set can have only one primary capable of confirming writes with { w: "majority" } write concern; although in some circumstances, another mongod instance may transiently believe itself to also be primary.
  • The secondaries replicate the primary’s oplog and apply the operations to their data sets such that the secondaries’ data sets reflect the primary’s data set
  • add a mongod instance to a replica set as an arbiter. An arbiter participates in elections but does not hold data
  • An arbiter will always be an arbiter whereas a primary may step down and become a secondary and a secondary may become the primary during an election.
  • Secondaries replicate the primary’s oplog and apply the operations to their data sets asynchronously.
  • These slow oplog messages are logged for the secondaries in the diagnostic log under the REPL component with the text applied op: <oplog entry> took <num>ms.
  • Replication lag refers to the amount of time that it takes to copy (i.e. replicate) a write operation on the primary to a secondary.
  • When a primary does not communicate with the other members of the set for more than the configured electionTimeoutMillis period (10 seconds by default), an eligible secondary calls for an election to nominate itself as the new primary.
  • The replica set cannot process write operations until the election completes successfully.
  • The median time before a cluster elects a new primary should not typically exceed 12 seconds, assuming default replica configuration settings.
  • Factors such as network latency may extend the time required for replica set elections to complete, which in turn affects the amount of time your cluster may operate without a primary.
  • Your application connection logic should include tolerance for automatic failovers and the subsequent elections.
  • MongoDB drivers can detect the loss of the primary and automatically retry certain write operations a single time, providing additional built-in handling of automatic failovers and elections
  • By default, clients read from the primary [1]; however, clients can specify a read preference to send read operations to secondaries.
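A short PyMongo sketch of what these notes mean for application code: connect to the whole replica set, confirm writes with { w: "majority" }, allow reads from secondaries, and let the driver retry writes once after a failover. Host names and the replica-set name are placeholders.

```python
from pymongo import MongoClient, ReadPreference
from pymongo.write_concern import WriteConcern

client = MongoClient(
    "mongodb://db1.example.com:27017,db2.example.com:27017,db3.example.com:27017",
    replicaSet="rs0",       # placeholder replica-set name; the driver discovers the current primary
    retryWrites=True,       # drivers can retry certain writes a single time after an election
)

db = client.get_database(
    "app",
    write_concern=WriteConcern(w="majority"),             # only the primary confirms majority writes
    read_preference=ReadPreference.SECONDARY_PREFERRED,   # send reads to secondaries when possible
)

db.events.insert_one({"type": "signup"})
print(db.events.count_documents({}))
```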
張 旭

Auto DevOps | GitLab

  • Auto DevOps provides pre-defined CI/CD configuration which allows you to automatically detect, build, test, deploy, and monitor your applications
  • Just push your code and GitLab takes care of everything else.
  • Auto DevOps will be automatically disabled on the first pipeline failure.
  • Your project will continue to use an alternative CI/CD configuration file if one is found
  • Auto DevOps works with any Kubernetes cluster;
  • using the Docker or Kubernetes executor, with privileged mode enabled.
  • Base domain (needed for Auto Review Apps and Auto Deploy)
  • Kubernetes (needed for Auto Review Apps, Auto Deploy, and Auto Monitoring)
  • Prometheus (needed for Auto Monitoring)
  • scrape your Kubernetes cluster.
  • project level as a variable: KUBE_INGRESS_BASE_DOMAIN
  • A wildcard DNS A record matching the base domain(s) is required
  • Once set up, all requests will hit the load balancer, which in turn will route them to the Kubernetes pods that run your application(s).
  • review/ (every environment starting with review/)
  • staging
  • production
  • need to define a separate KUBE_INGRESS_BASE_DOMAIN variable for all the above based on the environment.
  • Continuous deployment to production: Enables Auto Deploy with master branch directly deployed to production.
  • Continuous deployment to production using timed incremental rollout
  • Automatic deployment to staging, manual deployment to production
  • Auto Build creates a build of the application using an existing Dockerfile or Heroku buildpacks.
  • If a project’s repository contains a Dockerfile, Auto Build will use docker build to create a Docker image.
  • Each buildpack requires certain files to be in your project’s repository for Auto Build to successfully build your application.
  • Auto Test automatically runs the appropriate tests for your application using Herokuish and Heroku buildpacks by analyzing your project to detect the language and framework.
  • Auto Code Quality uses the Code Quality image to run static analysis and other code checks on the current code.
  • Static Application Security Testing (SAST) uses the SAST Docker image to run static analysis on the current code and checks for potential security issues.
  • Dependency Scanning uses the Dependency Scanning Docker image to run analysis on the project dependencies and checks for potential security issues.
  • License Management uses the License Management Docker image to search the project dependencies for their license.
  • Vulnerability Static Analysis for containers uses Clair to run static analysis on a Docker image and checks for potential security issues.
  • Review Apps are temporary application environments based on the branch’s code so developers, designers, QA, product managers, and other reviewers can actually see and interact with code changes as part of the review process. Auto Review Apps create a Review App for each branch. Auto Review Apps will deploy your app to your Kubernetes cluster only. When no cluster is available, no deployment will occur.
  • The Review App will have a unique URL based on the project ID, the branch or tag name, and a unique number, combined with the Auto DevOps base domain.
  • Review apps are deployed using the auto-deploy-app chart with Helm, which can be customized.
  • Your apps should not be manipulated outside of Helm (using Kubernetes directly).
  • Dynamic Application Security Testing (DAST) uses the popular open source tool OWASP ZAProxy to perform an analysis on the current code and checks for potential security issues.
  • Auto Browser Performance Testing utilizes the Sitespeed.io container to measure the performance of a web page.
  • add the paths to a file named .gitlab-urls.txt in the root directory, one per line.
  • After a branch or merge request is merged into the project’s default branch (usually master), Auto Deploy deploys the application to a production environment in the Kubernetes cluster, with a namespace based on the project name and unique project ID
  • Auto Deploy doesn’t include deployments to staging or canary by default, but the Auto DevOps template contains job definitions for these tasks if you want to enable them.
  • Apps are deployed using the auto-deploy-app chart with Helm.
  • For internal and private projects a GitLab Deploy Token will be automatically created, when Auto DevOps is enabled and the Auto DevOps settings are saved.
  • If the GitLab Deploy Token cannot be found, CI_REGISTRY_PASSWORD is used. Note that CI_REGISTRY_PASSWORD is only valid during deployment.
  • If present, DB_INITIALIZE will be run as a shell command within an application pod as a helm post-install hook.
  • a post-install hook means that if any deploy succeeds, DB_INITIALIZE will not be processed thereafter.
  • DB_MIGRATE will be run as a shell command within an application pod as a helm pre-upgrade hook.
    • 張 旭
       
      If the project type is different, you have to look up how the buildpacks invoke that command, e.g. Laravel's migration.
    • 張 旭
       
      If the image is built from your own Dockerfile, it looks like you can ignore how the buildpacks do it.
  • Once your application is deployed, Auto Monitoring makes it possible to monitor your application’s server and response metrics right out of the box.
  • annotate the NGINX Ingress deployment to be scraped by Prometheus using prometheus.io/scrape: "true" and prometheus.io/port: "10254"
  • If you are also using Auto Review Apps and Auto Deploy and choose to provide your own Dockerfile, make sure you expose your application to port 5000 as this is the port assumed by the default Helm chart.
  • While Auto DevOps provides great defaults to get you started, you can customize almost everything to fit your needs; from custom buildpacks, to Dockerfiles, Helm charts, or even copying the complete CI/CD configuration into your project to enable staging and canary deployments, and more.
  • If your project has a Dockerfile in the root of the project repo, Auto DevOps will build a Docker image based on the Dockerfile rather than using buildpacks.
  • Auto DevOps uses Helm to deploy your application to Kubernetes.
  • Bundled chart - If your project has a ./chart directory with a Chart.yaml file in it, Auto DevOps will detect the chart and use it instead of the default one.
  • Create a project variable AUTO_DEVOPS_CHART with the URL of a custom chart to use or create two project variables AUTO_DEVOPS_CHART_REPOSITORY with the URL of a custom chart repository and AUTO_DEVOPS_CHART with the path to the chart.
  • make use of the HELM_UPGRADE_EXTRA_ARGS environment variable to override the default values in the values.yaml file in the default Helm chart.
  • specify the use of a custom Helm chart per environment by scoping the environment variable to the desired environment.
    • 張 旭
       
      Auto DevOps is basically a ready-made .gitlab-ci.yml that someone has already written for you.
  • Your additions will be merged with the Auto DevOps template using the behaviour described for include
  • copy and paste the contents of the Auto DevOps template into your project and edit this as needed.
  • In order to support applications that require a database, PostgreSQL is provisioned by default.
  • Set up the replica variables using a project variable and scale your application by just redeploying it!
  • You should not scale your application using Kubernetes directly.
  • Some applications need to define secret variables that are accessible by the deployed application.
  • Auto DevOps detects variables where the key starts with K8S_SECRET_ and makes these prefixed variables available to the deployed application, as environment variables.
  • Auto DevOps pipelines will take your application secret variables to populate a Kubernetes secret.
  • Environment variables are generally considered immutable in a Kubernetes pod.
  • if you update an application secret without changing any code and then manually create a new pipeline, you will find that any running application pods will not have the updated secrets.
  • Variables with multiline values are not currently supported
  • The normal behavior of Auto DevOps is to use Continuous Deployment, pushing automatically to the production environment every time a new pipeline is run on the default branch.
  • If STAGING_ENABLED is defined in your project (e.g., set STAGING_ENABLED to 1 as a CI/CD variable), then the application will be automatically deployed to a staging environment, and a production_manual job will be created for you when you’re ready to manually deploy to production.
  • If CANARY_ENABLED is defined in your project (e.g., set CANARY_ENABLED to 1 as a CI/CD variable) then two manual jobs will be created: canary which will deploy the application to the canary environment production_manual which is to be used by you when you’re ready to manually deploy to production.
  • If INCREMENTAL_ROLLOUT_MODE is set to manual in your project, then instead of the standard production job, 4 different manual jobs will be created: rollout 10% rollout 25% rollout 50% rollout 100%
  • The percentage is based on the REPLICAS variable and defines the number of pods you want to have for your deployment.
  • To start a job, click on the play icon next to the job’s name.
  • Once you get to 100%, you cannot scale down, and you’d have to roll back by redeploying the old version using the rollback button in the environment page.
  • With INCREMENTAL_ROLLOUT_MODE set to manual and with STAGING_ENABLED
  • not all buildpacks support Auto Test yet
  • When a project has been marked as private, GitLab’s Container Registry requires authentication when downloading containers.
  • Authentication credentials will be valid while the pipeline is running, allowing for a successful initial deployment.
  • After the pipeline completes, Kubernetes will no longer be able to access the Container Registry.
  • We strongly advise using GitLab Container Registry with Auto DevOps in order to simplify configuration and prevent any unforeseen issues.
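The K8S_SECRET_ convention above is easy to picture: the pipeline gathers CI/CD variables with that prefix and turns them into a single Kubernetes Secret whose keys (prefix stripped) become environment variables in the application pods. A hypothetical Python sketch of that mapping, not GitLab's actual implementation:

```python
import base64
import os

PREFIX = "K8S_SECRET_"

def application_secret_data(environ=os.environ):
    """Collect K8S_SECRET_* variables into the base64-encoded data map
    of a Kubernetes Secret, dropping the prefix from each key."""
    return {
        key[len(PREFIX):]: base64.b64encode(value.encode()).decode()
        for key, value in environ.items()
        if key.startswith(PREFIX)
    }

# e.g. K8S_SECRET_DATABASE_URL=postgres://... ends up as data["DATABASE_URL"]
secret_manifest = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "production-secret"},  # name chosen for illustration
    "data": application_secret_data(),
}
print(secret_manifest)
```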
張 旭

Queues - Laravel - The PHP Framework For Web Artisans

  • Laravel queues provide a unified API across a variety of different queue backends, such as Beanstalk, Amazon SQS, Redis, or even a relational database.
  • The queue configuration file is stored in config/queue.php
  • a synchronous driver that will execute jobs immediately (for local use)
  • A null queue driver is also included which discards queued jobs.
  • In your config/queue.php configuration file, there is a connections configuration option.
  • any given queue connection may have multiple "queues" which may be thought of as different stacks or piles of queued jobs.
  • each connection configuration example in the queue configuration file contains a queue attribute.
  • if you dispatch a job without explicitly defining which queue it should be dispatched to, the job will be placed on the queue that is defined in the queue attribute of the connection configuration
  • pushing jobs to multiple queues can be especially useful for applications that wish to prioritize or segment how jobs are processed
  • specify which queues it should process by priority.
  • If your Redis queue connection uses a Redis Cluster, your queue names must contain a key hash tag.
  • ensure all of the Redis keys for a given queue are placed into the same hash slot
  • all of the queueable jobs for your application are stored in the app/Jobs directory.
  • Job classes are very simple, normally containing only a handle method which is called when the job is processed by the queue.
  • we were able to pass an Eloquent model directly into the queued job's constructor. Because of the SerializesModels trait that the job is using, Eloquent models will be gracefully serialized and unserialized when the job is processing.
  • When the job is actually handled, the queue system will automatically re-retrieve the full model instance from the database.
  • The handle method is called when the job is processed by the queue
  • The arguments passed to the dispatch method will be given to the job's constructor
  • delay the execution of a queued job, you may use the delay method when dispatching a job.
  • dispatch a job immediately (synchronously), you may use the dispatchNow method.
  • When using this method, the job will not be queued and will be run immediately within the current process
  • specify a list of queued jobs that should be run in sequence.
  • Deleting jobs using the $this->delete() method will not prevent chained jobs from being processed. The chain will only stop executing if a job in the chain fails.
  • this does not push jobs to different queue "connections" as defined by your queue configuration file, but only to specific queues within a single connection.
  • To specify the queue, use the onQueue method when dispatching the job
  • To specify the connection, use the onConnection method when dispatching the job
  • defining the maximum number of attempts on the job class itself.
  • to defining how many times a job may be attempted before it fails, you may define a time at which the job should timeout.
  • using the funnel method, you may limit jobs of a given type to only be processed by one worker at a time
  • using the throttle method, you may throttle a given type of job to only run 10 times every 60 seconds.
  • If an exception is thrown while the job is being processed, the job will automatically be released back onto the queue so it may be attempted again.
  • dispatch a Closure. This is great for quick, simple tasks that need to be executed outside of the current request cycle
  • When dispatching Closures to the queue, the Closure's code contents is cryptographically signed so it can not be modified in transit.
  • Laravel includes a queue worker that will process new jobs as they are pushed onto the queue.
  • once the queue:work command has started, it will continue to run until it is manually stopped or you close your terminal
  • queue workers are long-lived processes and store the booted application state in memory.
  • they will not notice changes in your code base after they have been started.
  • during your deployment process, be sure to restart your queue workers.
  • customize your queue worker even further by only processing particular queues for a given connection
  • The --once option may be used to instruct the worker to only process a single job from the queue
  • The --stop-when-empty option may be used to instruct the worker to process all jobs and then exit gracefully.
  • Daemon queue workers do not "reboot" the framework before processing each job.
  • you should free any heavy resources after each job completes.
  • Since queue workers are long-lived processes, they will not pick up changes to your code without being restarted.
  • restart the workers during your deployment process.
  • php artisan queue:restart
  • The queue uses the cache to store restart signals
  • the queue workers will die when the queue:restart command is executed, you should be running a process manager such as Supervisor to automatically restart the queue workers.
  • each queue connection defines a retry_after option. This option specifies how many seconds the queue connection should wait before retrying a job that is being processed.
  • The --timeout option specifies how long the Laravel queue master process will wait before killing off a child queue worker that is processing a job.
  • When jobs are available on the queue, the worker will keep processing jobs with no delay in between them.
  • While sleeping, the worker will not process any new jobs - the jobs will be processed after the worker wakes up again
  • the numprocs directive will instruct Supervisor to run 8 queue:work processes and monitor all of them, automatically restarting them if they fail.
  • Laravel includes a convenient way to specify the maximum number of times a job should be attempted.
  • define a failed method directly on your job class, allowing you to perform job specific clean-up when a failure occurs.
  • a great opportunity to notify your team via email or Slack.
  • php artisan queue:retry all
  • php artisan queue:flush
  • When injecting an Eloquent model into a job, it is automatically serialized before being placed on the queue and restored when the job is processed
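The worker behaviour described above (a failed job is released back onto the queue, attempts are capped, and a failed handler runs on permanent failure) is easier to see as a toy loop. This is a conceptual Python sketch of those semantics only, not Laravel's PHP API:

```python
from collections import deque

class Job:
    def __init__(self, payload, tries=3):
        self.payload = payload
        self.tries = tries      # cap on attempts, like a job's maximum number of attempts
        self.attempts = 0

def handle(job):
    raise RuntimeError("simulated failure")  # pretend the job always throws

def failed(job, exc):
    # job-specific clean-up, e.g. notify the team via email or Slack
    print(f"job permanently failed after {job.attempts} attempts: {exc}")

def work(queue):
    while queue:
        job = queue.popleft()
        job.attempts += 1
        try:
            handle(job)
        except Exception as exc:
            if job.attempts < job.tries:
                queue.append(job)   # released back onto the queue to be attempted again
            else:
                failed(job, exc)

work(deque([Job({"order_id": 42})]))
```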
張 旭

Incremental Backup

  • xtrabackup supports incremental backups, which means that they can copy only the data that has changed since the last backup.
  • You can perform many incremental backups between each full backup, so you can set up a backup process such as a full backup once a week and an incremental backup every day, or full backups every day and incremental backups every hour.
  • each InnoDB page contains a log sequence number, or LSN. The LSN is the system version number for the entire database. Each page’s LSN shows how recently it was changed.
  • In full backups, two types of operations are performed to make the database consistent: committed transactions are replayed from the log file against the data files, and uncommitted transactions are rolled back.
  • You should use the --apply-log-only option to prevent the rollback phase.
  • An incremental backup copies each page whose LSN is newer than the previous incremental or full backup’s LSN.
  • Incremental backups do not actually compare the data files to the previous backup’s data files.
  • you can use --incremental-lsn to perform an incremental backup without even having the previous backup, if you know its LSN
  • Incremental backups simply read the pages and compare their LSN to the last backup’s LSN.
  • without a full backup to act as a base, the incremental backups are useless.
  • The xtrabackup binary writes a file called xtrabackup_checkpoints into the backup’s target directory. This file contains a line showing the to_lsn, which is the database’s LSN at the end of the backup.
  • from_lsn is the starting LSN of the backup and for incremental it has to be the same as to_lsn (if it is the last checkpoint) of the previous/base backup.
  • If you do not use the --apply-log-only option to prevent the rollback phase, then your incremental backups will be useless.
  • run --prepare as usual, but prevent the rollback phase
  • If you restore it and start MySQL, InnoDB will detect that the rollback phase was not performed, and it will do that in the background, as it usually does for a crash recovery upon start.
  • xtrabackup --prepare --apply-log-only --target-dir=/data/backups/base \ --incremental-dir=/data/backups/inc1
  • The final data is in /data/backups/base, not in the incremental directory.
  • Do not run xtrabackup --prepare with the same incremental backup directory (the value of --incremental-dir) more than once.
  • xtrabackup --prepare --target-dir=/data/backups/base \ --incremental-dir=/data/backups/inc2
  • --apply-log-only should be used when merging all incrementals except the last one.
  • Even if the --apply-log-only was used on the last step, backup would still be consistent but in that case server would perform the rollback phase.
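Put together, the cycle described above is a fixed sequence of xtrabackup invocations: one full backup, incrementals based on the previous copy, then prepare steps that use --apply-log-only everywhere except when merging the last incremental. A Python wrapper sketch, reusing the directories from the examples above and assuming connection options (user, password, datadir) are configured elsewhere:

```python
import subprocess

BASE = "/data/backups/base"
INCS = ["/data/backups/inc1", "/data/backups/inc2"]

def run(*args):
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# Backup phase: a full backup, then each incremental based on the previous one.
run("xtrabackup", "--backup", f"--target-dir={BASE}")
run("xtrabackup", "--backup", f"--target-dir={INCS[0]}", f"--incremental-basedir={BASE}")
run("xtrabackup", "--backup", f"--target-dir={INCS[1]}", f"--incremental-basedir={INCS[0]}")

# Prepare phase: --apply-log-only on every step except the last incremental.
run("xtrabackup", "--prepare", "--apply-log-only", f"--target-dir={BASE}")
run("xtrabackup", "--prepare", "--apply-log-only", f"--target-dir={BASE}", f"--incremental-dir={INCS[0]}")
run("xtrabackup", "--prepare", f"--target-dir={BASE}", f"--incremental-dir={INCS[1]}")
# The restorable data now lives in /data/backups/base, not in the incremental directories.
```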
張 旭

Best practices for writing Dockerfiles | Docker Documentation

  • building efficient images
  • Docker builds images automatically by reading the instructions from a Dockerfile -- a text file that contains all commands, in order, needed to build a given image.
  • A Docker image consists of read-only layers each of which represents a Dockerfile instruction.
  • The layers are stacked and each one is a delta of the changes from the previous layer
  • When you run an image and generate a container, you add a new writable layer (the “container layer”) on top of the underlying layers.
  • By “ephemeral,” we mean that the container can be stopped and destroyed, then rebuilt and replaced with an absolute minimum set up and configuration.
  • Inadvertently including files that are not necessary for building an image results in a larger build context and larger image size.
  • To exclude files not relevant to the build (without restructuring your source repository) use a .dockerignore file. This file supports exclusion patterns similar to .gitignore files.
  • minimize image layers by leveraging build cache.
  • if your build contains several layers, you can order them from the less frequently changed (to ensure the build cache is reusable) to the more frequently changed
  • avoid installing extra or unnecessary packages just because they might be “nice to have.”
  • Each container should have only one concern.
  • Decoupling applications into multiple containers makes it easier to scale horizontally and reuse containers
  • Limiting each container to one process is a good rule of thumb, but it is not a hard and fast rule.
  • Use your best judgment to keep containers as clean and modular as possible.
  • do multi-stage builds and only copy the artifacts you need into the final image. This allows you to include tools and debug information in your intermediate build stages without increasing the size of the final image.
  • avoid duplication of packages and make the list much easier to update.
  • When building an image, Docker steps through the instructions in your Dockerfile, executing each in the order specified.
  • the next instruction is compared against all child images derived from that base image to see if one of them was built using the exact same instruction. If not, the cache is invalidated.
  • simply comparing the instruction in the Dockerfile with one of the child images is sufficient.
  • For the ADD and COPY instructions, the contents of the file(s) in the image are examined and a checksum is calculated for each file.
  • If anything has changed in the file(s), such as the contents and metadata, then the cache is invalidated.
  • cache checking does not look at the files in the container to determine a cache match.
  • In that case just the command string itself is used to find a match.
    • 張 旭
       
      For a command like RUN apt-get, it simply compares the command string itself.
  • Whenever possible, use current official repositories as the basis for your images.
  • Using RUN apt-get update && apt-get install -y ensures your Dockerfile installs the latest package versions with no further coding or manual intervention.
  • cache busting
  • Docker executes these commands using the /bin/sh -c interpreter, which only evaluates the exit code of the last operation in the pipe to determine success.
  • set -o pipefail && to ensure that an unexpected error prevents the build from inadvertently succeeding.
  • The CMD instruction should be used to run the software contained by your image, along with any arguments.
  • CMD should almost always be used in the form of CMD [“executable”, “param1”, “param2”…]
  • CMD should rarely be used in the manner of CMD [“param”, “param”] in conjunction with ENTRYPOINT
  • The ENV instruction is also useful for providing required environment variables specific to services you wish to containerize,
  • Each ENV line creates a new intermediate layer, just like RUN commands
  • COPY is preferred
  • COPY only supports the basic copying of local files into the container
  • the best use for ADD is local tar file auto-extraction into the image, as in ADD rootfs.tar.xz /
  • If you have multiple Dockerfile steps that use different files from your context, COPY them individually, rather than all at once.
  • using ADD to fetch packages from remote URLs is strongly discouraged; you should use curl or wget instead
  • The best use for ENTRYPOINT is to set the image’s main command, allowing that image to be run as though it was that command (and then use CMD as the default flags).
  • the image name can double as a reference to the binary as shown in the command
  • The VOLUME instruction should be used to expose any database storage area, configuration storage, or files/folders created by your docker container.
  • use VOLUME for any mutable and/or user-serviceable parts of your image
  • If you absolutely need functionality similar to sudo (such as initializing the daemon as root but running it as non-root), consider using “gosu”.
  • always use absolute paths for your WORKDIR
  • An ONBUILD command executes after the current Dockerfile build completes.
  • Think of the ONBUILD command as an instruction the parent Dockerfile gives to the child Dockerfile
  • A Docker build executes ONBUILD commands before any command in a child Dockerfile.
  • Be careful when putting ADD or COPY in ONBUILD. The “onbuild” image fails catastrophically if the new build’s context is missing the resource being added.
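The cache rule quoted above (ADD/COPY checksum the copied files, while other instructions are matched on the command string alone) can be pictured with a tiny sketch. This is only a conceptual Python model, not Docker's cache code, and the file names are hypothetical:

```python
import hashlib
from pathlib import Path

def cache_key(instruction, copied_files=()):
    """Toy model of the build cache: for ADD/COPY the contents of the copied
    files feed the key, so editing one of them invalidates the layer; for other
    instructions (e.g. RUN apt-get ...) only the command string is compared."""
    digest = hashlib.sha256(instruction.encode())
    for name in sorted(copied_files):
        digest.update(Path(name).read_bytes())
    return digest.hexdigest()

# cache_key("COPY requirements.txt .", ["requirements.txt"])  -> changes whenever the file changes
# cache_key("RUN apt-get update && apt-get install -y curl")  -> depends only on the string
```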
張 旭

Syntax - Configuration Language | Terraform | HashiCorp Developer

  • the native syntax of the Terraform language, which is a rich language designed to be relatively easy for humans to read and write.
  • Terraform's configuration language is based on a more general language called HCL, and HCL's documentation usually uses the word "attribute" instead of "argument."
  • A particular block type may have any number of required labels, or it may require none
  • After the block type keyword and any labels, the block body is delimited by the { and } characters
  • Identifiers can contain letters, digits, underscores (_), and hyphens (-). The first character of an identifier must not be a digit, to avoid ambiguity with literal numbers.
  • The # single-line comment style is the default comment style and should be used in most cases.
  • The idiomatic style is to use the Unix convention
  • Indent two spaces for each nesting level.
  • align their equals signs
  • Use empty lines to separate logical groups of arguments within a block.
  • Use one blank line to separate the arguments from the blocks.
  • "meta-arguments" (as defined by the Terraform language semantics)
  • Avoid separating multiple blocks of the same type with other blocks of a different type, unless the block types are defined by semantics to form a family.
  • Resource names must start with a letter or underscore, and may contain only letters, digits, underscores, and dashes.
  • Each resource is associated with a single resource type, which determines the kind of infrastructure object it manages and what arguments and other attributes the resource supports.
  • Each resource type is implemented by a provider, which is a plugin for Terraform that offers a collection of resource types.
  • By convention, resource type names start with their provider's preferred local name.
  • Most publicly available providers are distributed on the Terraform Registry, which also hosts their documentation.
  • The Terraform language defines several meta-arguments, which can be used with any resource type to change the behavior of resources.
  • use precondition and postcondition blocks to specify assumptions and guarantees about how the resource operates.
  • Some resource types provide a special timeouts nested block argument that allows you to customize how long certain operations are allowed to take before being considered to have failed.
  • Timeouts are handled entirely by the resource type implementation in the provider
  • Most resource types do not support the timeouts block at all.
  • A resource block declares that you want a particular infrastructure object to exist with the given settings.
  • Destroy resources that exist in the state but no longer exist in the configuration.
  • Destroy and re-create resources whose arguments have changed but which cannot be updated in-place due to remote API limitations.
  • Expressions within a Terraform module can access information about resources in the same module, and you can use that information to help configure other resources. Use the <RESOURCE TYPE>.<NAME>.<ATTRIBUTE> syntax to reference a resource attribute in an expression.
  • resources often provide read-only attributes with information obtained from the remote API; this often includes things that can't be known until the resource is created, like the resource's unique random ID.
  • data sources, which are a special type of resource used only for looking up information.
  • some dependencies cannot be recognized implicitly in configuration.
  • local-only resource types exist for generating private keys, issuing self-signed TLS certificates, and even generating random ids.
  • The behavior of local-only resources is the same as all other resources, but their result data exists only within the Terraform state.
  • The count meta-argument accepts a whole number, and creates that many instances of the resource or module.
  • count.index — The distinct index number (starting with 0) corresponding to this instance.
  • the count value must be known before Terraform performs any remote resource actions. This means count can't refer to any resource attributes that aren't known until after a configuration is applied
  • Within nested provisioner or connection blocks, the special self object refers to the current resource instance, not the resource block as a whole.
  • This was fragile, because the resource instances were still identified by their index instead of the string values in the list.
  •  
    "the native syntax of the Terraform language, which is a rich language designed to be relatively easy for humans to read and write. "
張 旭

Choose when to run jobs | GitLab

  • Rules are evaluated in order until the first match.
  • no rules match, so the job is not added to any other pipeline.
  • define a set of rules to exclude jobs in a few cases, but run them in all other cases
  • use all rules keywords, like if, changes, and exists, in the same rule. The rule evaluates to true only when all included keywords evaluate to true.
  • use parentheses with && and || to build more complicated variable expressions.
  • Use workflow to specify which types of pipelines can run.
  • every push to an open merge request’s source branch causes duplicated pipelines.
  • avoid duplicate pipelines by changing the job rules to avoid either push (branch) pipelines or merge request pipelines.
  • do not mix only/except jobs with rules jobs in the same pipeline.
  • For behavior similar to the only/except keywords, you can check the value of the $CI_PIPELINE_SOURCE variable
  • commonly used variables for if clauses
  • rules:changes expressions to determine when to add jobs to a pipeline
  • Use !reference tags to reuse rules in different jobs.
  • Use except to define when a job does not run.
  • only or except used without refs is the same as only:refs / except:refs
  • If you change multiple files, but only one file ends in .md, the build job is still skipped.
  • If you use multiple keywords with only or except, the keywords are evaluated as a single conjoined expression.
  • only includes the job if all of the keys have at least one condition that matches.
  • except excludes the job if any of the keys have at least one condition that matches.
  • With only, individual keys are logically joined by an AND
  • With except, individual keys are logically joined by an OR
  • To specify a job as manual, add when: manual to the job in the .gitlab-ci.yml file.
  • Use protected environments to define a list of users authorized to run a manual job.
  • Use when: delayed to execute scripts after a waiting period, or if you want to avoid jobs immediately entering the pending state.
  • To split a large job into multiple smaller jobs that run in parallel, use the parallel keyword
  • run a trigger job multiple times in parallel in a single pipeline, but with different variable values for each instance of the job.
  • The @ symbol denotes the beginning of a ref’s repository path. To match a ref name that contains the @ character in a regular expression, you must use the hex character code match \x40.
  • Compare a variable to a string
  • Check if a variable is undefined
  • Check if a variable is empty
  • Check if a variable exists
  • Check if a variable is empty
  • Matches are found when using =~.
  • Matches are not found when using !~.
  • Join variable expressions together with && or ||
  •  
    "Rules are evaluated in order until the first match."
張 旭

How Percona XtraBackup Works

  • Percona XtraBackup is based on InnoDB‘s crash-recovery functionality.
  • it performs crash recovery on the files to make them a consistent, usable database again
  • InnoDB maintains a redo log, also called the transaction log. This contains a record of every change to InnoDB data.
  • When InnoDB starts, it inspects the data files and the transaction log, and performs two steps. It applies committed transaction log entries to the data files, and it performs an undo operation on any transactions that modified data but did not commit.
  • Percona XtraBackup works by remembering the log sequence number (LSN) when it starts, and then copying away the data files.
  • Percona XtraBackup runs a background process that watches the transaction log files, and copies changes from it.
  • Percona XtraBackup needs to do this continually
  • Percona XtraBackup needs the transaction log records for every change to the data files since it began execution.
  • Percona XtraBackup uses Backup locks where available as a lightweight alternative to FLUSH TABLES WITH READ LOCK.
  • Locking is only done for MyISAM and other non-InnoDB tables after Percona XtraBackup finishes backing up all InnoDB/XtraDB data and logs.
  • xtrabackup tries to avoid backup locks and FLUSH TABLES WITH READ LOCK when the instance contains only InnoDB tables. In this case, xtrabackup obtains binary log coordinates from performance_schema.log_status
  • When backup locks are supported by the server, xtrabackup first copies InnoDB data, runs the LOCK TABLES FOR BACKUP and then copies the MyISAM tables.
  • the STDERR of xtrabackup is not written in any file. You will have to redirect it to a file, e.g., xtrabackup OPTIONS 2> backupout.log
  • During the prepare phase, Percona XtraBackup performs crash recovery against the copied data files, using the copied transaction log file. After this is done, the database is ready to restore and use.
  • the tools enable you to do operations such as streaming and incremental backups with various combinations of copying the data files, copying the log files, and applying the logs to the data.
  • To restore a backup with xtrabackup you can use the --copy-back or --move-back options.
  • you may have to change the files’ ownership to mysql before starting the database server, as they will be owned by the user who created the backup.
  •  
    "Percona XtraBackup is based on InnoDB's crash-recovery functionality."
張 旭

The for_each Meta-Argument - Configuration Language | Terraform | HashiCorp Developer

  • A given resource or module block cannot use both count and for_each
  • The for_each meta-argument accepts a map or a set of strings, and creates an instance for each item in that map or set
  • each.key — The map key (or set member) corresponding to this instance.
  • each.value — The map value corresponding to this instance. (If a set was provided, this is the same as each.key.)
  • for_each keys cannot be the result (or rely on the result of) of impure functions, including uuid, bcrypt, or timestamp, as their evaluation is deferred during the main evaluation step.
  • The value used in for_each is used to identify the resource instance and will always be disclosed in UI output, which is why sensitive values are not allowed.
  • if you would like to call keys(local.map), where local.map is an object with sensitive values (but non-sensitive keys), you can create a value to pass to for_each with toset([for k,v in local.map : k]).
  • for_each can't refer to any resource attributes that aren't known until after a configuration is applied (such as a unique ID generated by the remote API when an object is created).
  • The for_each argument does not implicitly convert lists or tuples to sets.
  • Transform a multi-level nested structure into a flat list by using nested for expressions with the flatten function.
  • Instances are identified by a map key (or set member) from the value provided to for_each
  • Within nested provisioner or connection blocks, the special self object refers to the current resource instance, not the resource block as a whole.
  • Conversion from list to set discards the ordering of the items in the list and removes any duplicate elements.
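The flatten-plus-nested-for pattern mentioned above reshapes a nested structure into one flat collection so that each leaf becomes its own for_each instance with a stable key. In Terraform this is written with HCL for expressions and flatten(); the Python below only mirrors the shape of that transformation for illustration:

```python
# Nested input, e.g. groups -> members, like a local value you might feed to for_each.
groups = {
    "admins": ["alice", "bob"],
    "devs": ["carol"],
}

# One flat map keyed per instance; keys identify instances the way for_each keys do.
flat = {
    f"{group}.{member}": {"group": group, "member": member}
    for group, members in groups.items()
    for member in members
}

for each_key, each_value in flat.items():  # roughly each.key / each.value per instance
    print(each_key, each_value)
```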
張 旭

elabs/pundit: Minimal authorization through OO design and pure Ruby classes

  • The class implements some kind of query method
  • Pundit will call the current_user method to retrieve what to send into this argument
  • put these classes in app/policies
  • in leveraging regular Ruby classes and object oriented design patterns to build a simple, robust and scalable authorization system
  • map to the name of a particular controller action
  • In the generated ApplicationPolicy, the model object is called record.
  • record
  • authorize
  • authorize would have done something like this: raise "not authorized" unless PostPolicy.new(current_user, @post).update?
  • pass a second argument to authorize if the name of the permission you want to check doesn't match the action name.
  • you can chain it
  • authorize returns the object passed to it
  • the policy method in both the view and controller.
  • have some kind of view listing records which a particular user has access to
  • ActiveRecord::Relation
  • Instances of this class respond to the method resolve, which should return some kind of result which can be iterated over.
  • scope.where(published: true)
    • 張 旭
       
      I take the general idea to be: an admin can see all posts, while anyone else can only see posts where published = true.
  • use this class from your controller via the policy_scope method:
  • PostPolicy::Scope.new(current_user, Post).resolve
  • policy_scope(@user.posts).each
  • This method will raise an exception if authorize has not yet been called.
  • verify_policy_scoped to your controller. This will raise an exception in the vein of verify_authorized. However, it tracks if policy_scope is used instead of authorize
  • need to conditionally bypass verification, you can use skip_authorization
  • skip_policy_scope
  • Having a mechanism that ensures authorization happens allows developers to thoroughly test authorization scenarios as units on the policy objects themselves.
  • Pundit doesn't do anything you couldn't have easily done yourself. It's a very small library, it just provides a few neat helpers.
  • all of the policy and scope classes are just plain Ruby classes
  • rails g pundit:policy post
  • define a filter that redirects unauthenticated users to the login page
  • fail more gracefully
  • raise Pundit::NotAuthorizedError, "must be logged in" unless user
  • having Rails handle them as a 403 error and serving a 403 error page.
  • config.action_dispatch.rescue_responses["Pundit::NotAuthorizedError"] = :forbidden
  • with I18n to generate error messages
  • retrieve a policy for a record outside the controller or view
  • define a method in your controller called pundit_user
  • Pundit strongly encourages you to model your application in such a way that the only context you need for authorization is a user object and a domain model that you want to check authorization for.
  • Pundit does not allow you to pass additional arguments to policies
  • authorization is dependent on IP address in addition to the authenticated user
  • create a special class which wraps up both user and IP and passes it to the policy.
  • set up a permitted_attributes method in your policy
  • policy(@post).permitted_attributes
  • permitted_attributes(@post)
  • Pundit provides a convenient helper method
  • permit different attributes based on the current action,
  • If you have defined an action-specific method on your policy for the current action, the permitted_attributes helper will call it instead of calling permitted_attributes on your controller
  • If you don't have an instance for the first argument to authorize, then you can pass the class
  • restart the Rails server
  • Given there is a policy without a corresponding model / ruby class, you can retrieve it by passing a symbol
  • after_action :verify_authorized
  • It is not some kind of failsafe mechanism or authorization mechanism.
  • Pundit will work just fine without using verify_authorized and verify_policy_scoped
  •  
    "Minimal authorization through OO design and pure Ruby classes"
張 旭
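
A condensed Ruby sketch of the policy / scope pattern these notes describe, assuming a recent Pundit version (where authorize returns the record) and an invented Post model with a published column:

    # app/policies/post_policy.rb
    class PostPolicy < ApplicationPolicy
      # query method named after the controller action; user and record come from ApplicationPolicy
      def update?
        user.admin? || record.author_id == user.id
      end

      class Scope < Scope
        # resolve returns something iterable, typically an ActiveRecord::Relation
        def resolve
          user.admin? ? scope.all : scope.where(published: true)
        end
      end
    end

    # app/controllers/posts_controller.rb
    class PostsController < ApplicationController
      after_action :verify_authorized, except: :index
      after_action :verify_policy_scoped, only: :index

      def index
        @posts = policy_scope(Post)  # PostPolicy::Scope.new(current_user, Post).resolve
      end

      def update
        @post = authorize Post.find(params[:id])  # raises Pundit::NotAuthorizedError unless update? is true
        @post.update(permitted_attributes(@post)) # assumes the policy also defines permitted_attributes
        redirect_to @post
      end
    end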

Kubernetes Volumes Guide - Examples for NFS and Persistent Volume - 0 views

  • Persistent volumes exist beyond containers, pods, and nodes.
  • Volumes also let you share data between containers in the same pod.
  • data in that volume will be destroyed when the pod is restarted.
  • ...9 more annotations...
  • Persistent volumes are long-term storage in your Kubernetes cluster.
  • A pod uses a persistent volume claim to get read and write access to the persistent volume.
  • NFS stands for Network File System – it's a shared filesystem that can be accessed over the network.
  • The NFS must already exist – Kubernetes doesn't run the NFS; pods just access it.
  • what's already stored in the NFS is not deleted when a pod is destroyed. Data is persistent.
  • an NFS can be accessed from multiple pods at the same time. An NFS can be used to share data between pods!
  • volumes: - name: nfs-volume nfs: # URL for the NFS server server: 10.108.211.244 # Change this! path: /
  • volumeMounts: - name: nfs-volume mountPath: /var/nfs
  • Just add the volume to each pod, and add a volume mount to use the NFS volume from each container.
  •  
    "Persistent volumes exist beyond containers, pods, and nodes. "
張 旭
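
The two flattened YAML fragments above fit into a pod spec roughly like this (the server IP and paths are the article's placeholders; the nginx image is an arbitrary example):

    apiVersion: v1
    kind: Pod
    metadata:
      name: nfs-example
    spec:
      containers:
        - name: app
          image: nginx
          volumeMounts:
            - name: nfs-volume
              mountPath: /var/nfs        # where the shared filesystem appears inside the container
      volumes:
        - name: nfs-volume
          nfs:
            server: 10.108.211.244       # URL for the NFS server - change this!
            path: /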

Using cache in GitLab CI with Docker-in-Docker | $AYMDEV() - 0 views

  • optimize our images.
  • When you build an image, it is made of multiple layers: we add a layer per instruction.
  • If we build the same image again without modifying any file, Docker will use existing layers rather than re-executing the instructions.
  • ...21 more annotations...
  • an image is made of multiple layers, and we can accelerate its build by using layers cache from the previous image version.
  • by using Docker-in-Docker, we get a fresh Docker instance per job whose local registry is empty.
  • docker build --cache-from "$CI_REGISTRY_IMAGE:latest" -t "$CI_REGISTRY_IMAGE:new-tag"
  • But if you maintain a CHANGELOG in this format, and/or your Git tags are also your Docker tags, you can get the previous version and use the cache from that image version.
  • script: - export PREVIOUS_VERSION=$(perl -lne 'print "v${1}" if /^##\s\[(\d\.\d\.\d)\]\s-\s\d{4}(?:-\d{2}){2}\s*$/' CHANGELOG.md | sed -n '2 p') - docker build --cache-from "$CI_REGISTRY_IMAGE:$PREVIOUS_VERSION" -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG" -f ./prod.Dockerfile .
  • « Docker layer caching » is enough to optimize the build time.
  • Cache in CI/CD is about saving directories or files across pipelines.
  • We're building a Docker image, so dependencies are installed inside a container. We can't cache a dependencies directory if it doesn't exist in the job workspace.
  • Dependencies will always be installed from a container but will be extracted by the GitLab Runner in the job workspace. Our goal is to send the cached version in the build context.
  • We set the directories to cache in the job settings with a key to share the cache per branch and stage.
  • - docker cp app:/var/www/html/vendor/ ./vendor
  • after_script
  • - docker cp app:/var/www/html/node_modules/ ./node_modules
  • To avoid old dependencies being mixed with the new ones, at the risk of keeping unused dependencies in the cache, which would make the cache and images heavier.
  • If you need to cache directories in testing jobs, it's easier: use volumes!
  • version your cache keys !
  • sharing Docker image between jobs
  • In every job, we automatically get artifacts from previous stages.
  • docker save $DOCKER_CI_IMAGE | gzip > app.tar.gz
  • I personally use the « push / pull » technique,
  • we docker push after the build, then we docker pull if needed in the next jobs.
張 旭
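
A rough .gitlab-ci.yml fragment combining the ideas above (--cache-from, docker cp into the cached workspace, and push/pull to share the image); the image tags and container paths are assumptions based on the article's examples:

    build:
      stage: build
      image: docker:24
      services:
        - docker:24-dind
      cache:
        key: "$CI_COMMIT_REF_SLUG-build"   # version your cache keys
        paths:
          - vendor/
      script:
        - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
        - docker pull "$CI_REGISTRY_IMAGE:latest" || true   # the cache source may not exist yet
        - docker build --cache-from "$CI_REGISTRY_IMAGE:latest" -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
        - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"   # « push / pull » technique for the next jobs
        - docker create --name app "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
      after_script:
        - rm -rf ./vendor
        - docker cp app:/var/www/html/vendor/ ./vendor      # extracted so the runner can cache it in the workspace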

Run your CI/CD jobs in Docker containers | GitLab - 0 views

  • If you run Docker on your local machine, you can run tests in the container, rather than testing on a dedicated CI/CD server.
  • Run other services, like MySQL, in containers. Do this by specifying services in your .gitlab-ci.yml file.
  • By default, the executor pulls images from Docker Hub
  • ...10 more annotations...
  • Maps must contain at least the name option, which is the same image name as used for the string setting.
  • When a CI job runs in a Docker container, the before_script, script, and after_script commands run in the /builds/<project-path>/ directory. Your image may have a different default WORKDIR defined. To move to your WORKDIR, save the WORKDIR as an environment variable so you can reference it in the container during the job’s runtime.
  • The runner starts a Docker container using the defined entrypoint. The default comes from the Dockerfile and may be overridden in the .gitlab-ci.yml file.
  • attaches itself to a running container.
  • sends the script to the container’s shell stdin and receives the output.
  • To override the entrypoint of a Docker image, define an empty entrypoint in the .gitlab-ci.yml file, so the runner does not start a useless shell layer. However, that does not work for all Docker versions. For Docker 17.06 and later, the entrypoint can be set to an empty value. For Docker 17.03 and earlier, the entrypoint can be set to /bin/sh -c, /bin/bash -c, or an equivalent shell available in the image.
  • The runner expects that the image has no entrypoint or that the entrypoint is prepared to start a shell command.
  • entrypoint: [""]
  • entrypoint: ["/bin/sh", "-c"]
  • A DOCKER_AUTH_CONFIG CI/CD variable
  •  
    "If you run Docker on your local machine, you can run tests in the container, rather than testing on a dedicated CI/CD server. "
張 旭
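
A brief sketch of the image / services / entrypoint options mentioned above; the PHP and MySQL images are arbitrary examples:

    test:
      image:
        name: php:8.2-cli          # the extended map form must contain at least name
        entrypoint: [""]           # Docker 17.06+: empty entrypoint so the runner starts its own shell
      services:
        - name: mysql:8.0
          alias: db
      variables:
        MYSQL_ALLOW_EMPTY_PASSWORD: "1"
      script:
        - echo "running in $CI_PROJECT_DIR"   # jobs run in /builds/<project-path>/ regardless of the image WORKDIR
        - php -v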

bbatsov/rails-style-guide: A community-driven Ruby on Rails 4 style guide - 0 views

  • custom initialization code in config/initializers. The code in initializers executes on application startup
  • Keep initialization code for each gem in a separate file with the same name as the gem
  • Mark additional assets for precompilation
  • ...90 more annotations...
  • config/environments/production.rb
  • Create an additional staging environment that closely resembles the production one
  • Keep any additional configuration in YAML files under the config/ directory
  • Rails::Application.config_for(:yaml_file)
  • Use nested routes to express better the relationship between ActiveRecord models
  • nest routes more than 1 level deep then use the shallow: true option
  • namespaced routes to group related actions
  • Don't use match to define any routes unless there is need to map multiple request types among [:get, :post, :patch, :put, :delete] to a single action using :via option.
  • Keep the controllers skinny
  • all the business logic should naturally reside in the model
  • Share no more than two instance variables between a controller and a view.
  • using a template
  • Prefer render plain: over render text
  • Prefer corresponding symbols to numeric HTTP status codes
  • without abbreviations
  • Keep your models for business logic and data-persistence only
  • Avoid altering ActiveRecord defaults (table names, primary key, etc)
  • Group macro-style methods (has_many, validates, etc) in the beginning of the class definition
  • Prefer has_many :through to has_and_belongs_to_many
  • self[:attribute]
  • self[:attribute] = value
  • validates
  • Keep custom validators under app/validators
  • Consider extracting custom validators to a shared gem
  • preferable to make a class method instead which serves the same purpose as the named scope
  • returns an ActiveRecord::Relation object
  • .update_attributes
  • Override the to_param method of the model
  • Use the friendly_id gem. It allows creation of human-readable URLs by using some descriptive attribute of the model instead of its id
  • find_each to iterate over a collection of AR objects
  • .find_each
  • .find_each
  • Looping through a collection of records from the database (using the all method, for example) is very inefficient since it will try to instantiate all the objects at once
  • always call before_destroy callbacks that perform validation with prepend: true
  • Define the dependent option to the has_many and has_one associations
  • always use the exception raising bang! method or handle the method return value.
  • When persisting AR objects
  • Avoid string interpolation in queries
  • param will be properly escaped
  • Consider using named placeholders instead of positional placeholders
  • use of find over where when you need to retrieve a single record by id
  • use of find_by over where and find_by_attribute
  • use of where.not over SQL
  • use heredocs with squish
  • Keep the schema.rb (or structure.sql) under version control.
  • Use rake db:schema:load instead of rake db:migrate to initialize an empty database
  • Enforce default values in the migrations themselves instead of in the application layer
  • change_column_default
  • imposing data integrity from the Rails app is impossible
  • use the change method instead of up and down methods.
  • constructive migrations
  • use models in migrations, make sure you define them so that you don't end up with broken migrations in the future
  • Don't use non-reversible migration commands in the change method.
  • In this case, block will be used by create_table in rollback
  • Never call the model layer directly from a view
  • Never make complex formatting in the views, export the formatting to a method in the view helper or the model.
  • When the labels of an ActiveRecord model need to be translated, use the activerecord scope
  • Separate the texts used in the views from translations of ActiveRecord attributes
  • Place the locale files for the models in a folder locales/models
  • the texts used in the views in folder locales/views
  • config/application.rb config.i18n.load_path += Dir[Rails.root.join('config', 'locales', '**', '*.{rb,yml}')]
  • I18n.t
  • I18n.l
  • Use "lazy" lookup for the texts used in views.
  • Use the dot-separated keys in the controllers and models
  • Reserve app/assets for custom stylesheets, javascripts, or images
  • Third party code such as jQuery or bootstrap should be placed in vendor/assets
  • Provide both HTML and plain-text view templates
  • config.action_mailer.raise_delivery_errors = true
  • Use a local SMTP server like Mailcatcher in the development environment
  • Provide default settings for the host name
  • The _url methods include the host name and the _path methods don't
  • _url
  • Format the from and to addresses properly
  • default from:
  • sending html emails all styles should be inline
  • Sending emails while generating page response should be avoided. It causes delays in loading of the page and request can timeout if multiple email are sent.
  • .start_with?
  • .end_with?
  • &.
  • Config your timezone accordingly in application.rb
  • config.active_record.default_timezone = :local
  • it can be only :utc or :local
  • Don't use Time.parse
  • Time.zone.parse
  • Don't use Time.now
  • Time.zone.now
  • Put gems used only for development or testing in the appropriate group in the Gemfile
  • Add all OS X specific gems to a darwin group in the Gemfile, and all Linux specific gems to a linux group
  • Do not remove the Gemfile.lock from version control.
張 旭
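
A few of the guide's Active Record points condensed into one illustrative Ruby snippet (the model, mailer and column names are made up):

    # iterate in batches instead of instantiating every record at once
    User.where(active: true).find_each do |user|
      UserMailer.weekly_digest(user).deliver_later
    end

    # named placeholders instead of string interpolation or positional placeholders
    Client.where('created_at >= :start_date AND orders_count <= :max',
                 start_date: params[:start_date], max: 10)

    # prefer find / find_by over where(...).first / find_by_attribute
    Client.find(42)
    Client.find_by(email: 'someone@example.com')

    # enforce default values in the migration itself, not in the application layer
    class AddStatusToOrders < ActiveRecord::Migration[6.1]
      def change
        add_column :orders, :status, :string, default: 'pending', null: false
      end
    end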

Using Workflows to Schedule Jobs - CircleCI - 1 views

  • A workflow is a set of rules for defining a collection of jobs and their run order.
  • Schedule workflows for jobs that should only run periodically.
  • run multiple jobs in parallel
  • ...37 more annotations...
  • rerun just the failed job
  • Builds without workflows require a build job.
  • Refer the YAML Anchors/Aliases documentation for information about how to alias and reuse syntax to keep your .circleci/config.yml file small.
  • workflow orchestration with two parallel jobs
  • jobs run according to configured requirements, each job waiting to start until the required job finishes successfully
  • requires: key
  • fans-out to run a set of acceptance test jobs in parallel, and finally fans-in to run a common deploy job.
  • Holding a Workflow for a Manual Approval
  • Workflows can be configured to wait for manual approval of a job before continuing to the next job
  • add a job to the jobs list with the key type: approval
  • approval is a special job type that is only available to jobs under the workflow key
  • The name of the job to hold is arbitrary - it could be wait or pause, for example, as long as the job has a type: approval key in it.
  • schedule a workflow to run at a certain time for specific branches.
  • The triggers key is only added under your workflows key
  • using cron syntax to represent Coordinated Universal Time (UTC) for specified branches.
  • By default, a workflow is triggered on every git push
  • the commit workflow has no triggers key and will run on every git push
  • The nightly workflow has a triggers key and will run on the specified schedule
  • Cron step syntax (for example, */1, */20) is not supported.
  • use a context to share environment variables
  • use the same shared environment variables when initiated by a user who is part of the organization.
  • CircleCI does not run workflows for tags unless you explicitly specify tag filters.
  • CircleCI branch and tag filters support the Java variant of regex pattern matching.
  • Each workflow has an associated workspace which can be used to transfer files to downstream jobs as the workflow progresses.
  • The workspace is an additive-only store of data.
  • Jobs can persist data to the workspace
  • Downstream jobs can attach the workspace to their container filesystem.
  • Attaching the workspace downloads and unpacks each layer based on the ordering of the upstream jobs in the workflow graph.
  • Workflows that include jobs running on multiple branches may require data to be shared using workspaces
  • To persist data from a job and make it available to other jobs, configure the job to use the persist_to_workspace key.
  • Files and directories named in the paths: property of persist_to_workspace will be uploaded to the workflow’s temporary workspace relative to the directory specified with the root key.
  • Configure a job to get saved data by configuring the attach_workspace key.
  • persist_to_workspace
  • attach_workspace
  • To rerun only a workflow’s failed jobs, click the Workflows icon in the app and select a workflow to see the status of each job, then click the Rerun button and select Rerun from failed.
  • if you do not see your workflows triggering, a configuration error is preventing the workflow from starting.
  • check your Workflows page of the CircleCI app (not the Job page)
  •  
    "A workflow is a set of rules for defining a collection of jobs and their run order."
張 旭
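
A compact config.yml sketch touching on requires, a manual approval job, and a nightly cron trigger; the job names are placeholders:

    workflows:
      version: 2
      build_and_deploy:
        jobs:
          - build
          - test:
              requires: [build]     # waits until build finishes successfully
          - hold:
              type: approval        # manual approval gate; the job name is arbitrary
              requires: [test]
          - deploy:
              requires: [hold]
      nightly:
        triggers:
          - schedule:
              cron: "0 0 * * *"     # UTC; step syntax such as */20 is not supported
              filters:
                branches:
                  only: [main]
        jobs:
          - build

Inside the job definitions, an upstream job would use persist_to_workspace (with root and paths) and downstream jobs would use attach_workspace to pick those files up.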

Providers - Configuration Language | Terraform | HashiCorp Developer - 0 views

  • Terraform relies on plugins called providers to interact with cloud providers, SaaS providers, and other APIs.
  • Terraform configurations must declare which providers they require so that Terraform can install and use them.
  • Each provider adds a set of resource types and/or data sources that Terraform can manage.
  • ...6 more annotations...
  • Every resource type is implemented by a provider; without providers, Terraform can't manage any kind of infrastructure.
  • The Terraform Registry is the main directory of publicly available Terraform providers, and hosts providers for most major infrastructure platforms.
  • Dependency Lock File documents an additional HCL file that can be included with a configuration, which tells Terraform to always use a specific set of provider versions.
  • Terraform CLI finds and installs providers when initializing a working directory. It can automatically download providers from a Terraform registry, or load them from a local mirror or cache.
  • To save time and bandwidth, Terraform CLI supports an optional plugin cache. You can enable the cache using the plugin_cache_dir setting in the CLI configuration file.
  • you can use Terraform CLI to create a dependency lock file and commit it to version control along with your configuration.
張 旭
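
A minimal provider requirement and configuration, as the notes describe; the AWS provider and version constraint are only examples:

    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"   # resolved and installed from the Terraform Registry during init
          version = "~> 5.0"          # the exact version chosen ends up pinned in the dependency lock file
        }
      }
    }

    provider "aws" {
      region = "eu-west-1"
    }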

Active Record Associations - Ruby on Rails Guides - 0 views

  • With Active Record associations, we can streamline these - and other - operations by declaratively telling Rails that there is a connection between the two models.
  • belongs_to has_one has_many has_many :through has_one :through has_and_belongs_to_many
  • an association is a connection between two Active Record models
  • ...195 more annotations...
  • Associations are implemented using macro-style calls, so that you can declaratively add features to your models
  • A belongs_to association sets up a one-to-one connection with another model, such that each instance of the declaring model "belongs to" one instance of the other model.
  • belongs_to associations must use the singular term.
  • belongs_to
  • A has_one association also sets up a one-to-one connection with another model, but with somewhat different semantics (and consequences).
  • This association indicates that each instance of a model contains or possesses one instance of another model
  • belongs_to
  • A has_many association indicates a one-to-many connection with another model.
  • This association indicates that each instance of the model has zero or more instances of another model.
  • belongs_to
  • A has_many :through association is often used to set up a many-to-many connection with another model
  • This association indicates that the declaring model can be matched with zero or more instances of another model by proceeding through a third model.
  • through:
  • through:
  • The collection of join models can be managed via the API
  • new join models are created for newly associated objects, and if some are gone their rows are deleted.
  • The has_many :through association is also useful for setting up "shortcuts" through nested has_many associations
  • A has_one :through association sets up a one-to-one connection with another model. This association indicates that the declaring model can be matched with one instance of another model by proceeding through a third model.
  • A has_and_belongs_to_many association creates a direct many-to-many connection with another model, with no intervening model.
  • id: false
  • The has_one relationship says that one of something is yours
  • using t.references :supplier instead.
  • declare a many-to-many relationship is to use has_many :through. This makes the association indirectly, through a join model
  • set up a has_many :through relationship if you need to work with the relationship model as an independent entity
  • set up a has_and_belongs_to_many relationship (though you'll need to remember to create the joining table in the database).
  • use has_many :through if you need validations, callbacks, or extra attributes on the join model
  • With polymorphic associations, a model can belong to more than one other model, on a single association.
  • belongs_to :imageable, polymorphic: true
  • a polymorphic belongs_to declaration as setting up an interface that any other model can use.
    • 張 旭
       
      _id stores the foreign key id for the various associated types; _type stores the table (model) name of each type.
  • In designing a data model, you will sometimes find a model that should have a relation to itself
  • add a references column to the model itself
  • Controlling caching Avoiding name collisions Updating the schema Controlling association scope Bi-directional associations
  • All of the association methods are built around caching, which keeps the result of the most recent query available for further operations.
  • it is a bad idea to give an association a name that is already used for an instance method of ActiveRecord::Base. The association method would override the base method and break things.
  • You are responsible for maintaining your database schema to match your associations.
  • belongs_to associations you need to create foreign keys
  • has_and_belongs_to_many associations you need to create the appropriate join table
  • If you create an association some time after you build the underlying model, you need to remember to create an add_column migration to provide the necessary foreign key.
  • Active Record creates the name by using the lexical order of the class names
  • So a join between customer and order models will give the default join table name of "customers_orders" because "c" outranks "o" in lexical ordering.
  • For example, one would expect the tables "paper_boxes" and "papers" to generate a join table name of "papers_paper_boxes" because of the length of the name "paper_boxes", but it in fact generates a join table name of "paper_boxes_papers" (because the underscore '_' is lexicographically less than 's' in common encodings).
  • id: false
  • pass id: false to create_table because that table does not represent a model
  • By default, associations look for objects only within the current module's scope.
  • will work fine, because both the Supplier and the Account class are defined within the same scope.
  • To associate a model with a model in a different namespace, you must specify the complete class name in your association declaration:
  • class_name
  • class_name
  • Active Record provides the :inverse_of option
    • 張 旭
       
      This means that on the first comparison the two first_name values are the same; but after modifying first_name through the c instance, comparing again shows they differ, because the two are different objects in memory.
  • preventing inconsistencies and making your application more efficient
  • Every association will attempt to automatically find the inverse association and set the :inverse_of option heuristically (based on the association name)
  • In database terms, this association says that this class contains the foreign key.
  • In all of these methods, association is replaced with the symbol passed as the first argument to belongs_to.
  • (force_reload = false)
  • The association method returns the associated object, if any. If no associated object is found, it returns nil.
  • the cached version will be returned.
  • The association= method assigns an associated object to this object.
  • Behind the scenes, this means extracting the primary key from the associate object and setting this object's foreign key to the same value.
  • The build_association method returns a new object of the associated type
  • but the associated object will not yet be saved.
  • The create_association method returns a new object of the associated type
  • once it passes all of the validations specified on the associated model, the associated object will be saved
  • raises ActiveRecord::RecordInvalid if the record is invalid.
  • dependent
  • counter_cache
  • :autosave :class_name :counter_cache :dependent :foreign_key :inverse_of :polymorphic :touch :validate
  • finding the number of belonging objects more efficient.
  • Although the :counter_cache option is specified on the model that includes the belongs_to declaration, the actual column must be added to the associated model.
  • add a column named orders_count to the Customer model.
  • :destroy, when the object is destroyed, destroy will be called on its associated objects.
  • deleted directly from the database without calling their destroy method.
  • Rails will not create foreign key columns for you
  • The :inverse_of option specifies the name of the has_many or has_one association that is the inverse of this association
  • set the :touch option to :true, then the updated_at or updated_on timestamp on the associated object will be set to the current time whenever this object is saved or destroyed
  • specify a particular timestamp attribute to update
  • If you set the :validate option to true, then associated objects will be validated whenever you save this object
  • By default, this is false: associated objects will not be validated when this object is saved.
  • where includes readonly select
  • make your code somewhat more efficient
  • no need to use includes for immediate associations
  • will be read-only when retrieved via the association
  • The select method lets you override the SQL SELECT clause that is used to retrieve data about the associated object
  • using the association.nil?
  • Assigning an object to a belongs_to association does not automatically save the object. It does not save the associated object either.
  • In database terms, this association says that the other class contains the foreign key.
  • the cached version will be returned.
  • :as :autosave :class_name :dependent :foreign_key :inverse_of :primary_key :source :source_type :through :validate
  • Setting the :as option indicates that this is a polymorphic association
  • :nullify causes the foreign key to be set to NULL. Callbacks are not executed.
  • It's necessary not to set or leave :nullify option for those associations that have NOT NULL database constraints.
  • The :source_type option specifies the source association type for a has_one :through association that proceeds through a polymorphic association.
  • The :source option specifies the source association name for a has_one :through association.
  • The :through option specifies a join model through which to perform the query
  • more efficient by including representatives in the association from suppliers to accounts
  • When you assign an object to a has_one association, that object is automatically saved (in order to update its foreign key).
  • If either of these saves fails due to validation errors, then the assignment statement returns false and the assignment itself is cancelled.
  • If the parent object (the one declaring the has_one association) is unsaved (that is, new_record? returns true) then the child objects are not saved.
  • If you want to assign an object to a has_one association without saving the object, use the association.build method
  • collection(force_reload = false) collection<<(object, ...) collection.delete(object, ...) collection.destroy(object, ...) collection=(objects) collection_singular_ids collection_singular_ids=(ids) collection.clear collection.empty? collection.size collection.find(...) collection.where(...) collection.exists?(...) collection.build(attributes = {}, ...) collection.create(attributes = {}) collection.create!(attributes = {})
  • In all of these methods, collection is replaced with the symbol passed as the first argument to has_many, and collection_singular is replaced with the singularized version of that symbol.
  • The collection<< method adds one or more objects to the collection by setting their foreign keys to the primary key of the calling model
  • The collection.delete method removes one or more objects from the collection by setting their foreign keys to NULL.
  • objects will be destroyed if they're associated with dependent: :destroy, and deleted if they're associated with dependent: :delete_all
  • The collection.destroy method removes one or more objects from the collection by running destroy on each object.
  • The collection_singular_ids method returns an array of the ids of the objects in the collection.
  • The collection_singular_ids= method makes the collection contain only the objects identified by the supplied primary key values, by adding and deleting as appropriate
  • The default strategy for has_many :through associations is delete_all, and for has_many associations is to set the foreign keys to NULL.
  • The collection.clear method removes all objects from the collection according to the strategy specified by the dependent option
  • uses the same syntax and options as ActiveRecord::Base.find
  • The collection.where method finds objects within the collection based on the conditions supplied but the objects are loaded lazily meaning that the database is queried only when the object(s) are accessed.
  • The collection.build method returns one or more new objects of the associated type. These objects will be instantiated from the passed attributes, and the link through their foreign key will be created, but the associated objects will not yet be saved.
  • The collection.create method returns a new object of the associated type. This object will be instantiated from the passed attributes, the link through its foreign key will be created, and, once it passes all of the validations specified on the associated model, the associated object will be saved.
  • :as :autosave :class_name :dependent :foreign_key :inverse_of :primary_key :source :source_type :through :validate
  • :delete_all causes all the associated objects to be deleted directly from the database (so callbacks will not execute)
  • :nullify causes the foreign keys to be set to NULL. Callbacks are not executed.
  • where includes readonly select
  • :conditions :through :polymorphic :foreign_key
  • By convention, Rails assumes that the column used to hold the primary key of the association is id. You can override this and explicitly specify the primary key with the :primary_key option.
  • The :source option specifies the source association name for a has_many :through association.
  • You only need to use this option if the name of the source association cannot be automatically inferred from the association name.
  • The :source_type option specifies the source association type for a has_many :through association that proceeds through a polymorphic association.
  • The :through option specifies a join model through which to perform the query.
  • has_many :through associations provide a way to implement many-to-many relationships,
  • By default, this is true: associated objects will be validated when this object is saved.
  • where extending group includes limit offset order readonly select uniq
  • If you use a hash-style where option, then record creation via this association will be automatically scoped using the hash
  • The extending method specifies a named module to extend the association proxy.
  • Association extensions
  • The group method supplies an attribute name to group the result set by, using a GROUP BY clause in the finder SQL.
  • has_many :line_items, -> { group 'orders.id' }, through: :orders
  • more efficient by including line items in the association from customers to orders
  • The limit method lets you restrict the total number of objects that will be fetched through an association.
  • The offset method lets you specify the starting offset for fetching objects via an association
  • The order method dictates the order in which associated objects will be received (in the syntax used by an SQL ORDER BY clause).
  • Use the distinct method to keep the collection free of duplicates.
  • mostly useful together with the :through option
  • -> { distinct }
  • .all.inspect
  • If you want to make sure that, upon insertion, all of the records in the persisted association are distinct (so that you can be sure that when you inspect the association that you will never find duplicate records), you should add a unique index on the table itself
  • unique: true
  • Do not attempt to use include? to enforce distinctness in an association.
  • multiple users could be attempting this at the same time
  • checking for uniqueness using something like include? is subject to race conditions
  • When you assign an object to a has_many association, that object is automatically saved (in order to update its foreign key).
  • If any of these saves fails due to validation errors, then the assignment statement returns false and the assignment itself is cancelled.
  • If the parent object (the one declaring the has_many association) is unsaved (that is, new_record? returns true) then the child objects are not saved when they are added
  • All unsaved members of the association will automatically be saved when the parent is saved.
  • assign an object to a has_many association without saving the object, use the collection.build method
  • collection(force_reload = false) collection<<(object, ...) collection.delete(object, ...) collection.destroy(object, ...) collection=(objects) collection_singular_ids collection_singular_ids=(ids) collection.clear collection.empty? collection.size collection.find(...) collection.where(...) collection.exists?(...) collection.build(attributes = {}) collection.create(attributes = {}) collection.create!(attributes = {})
  • If the join table for a has_and_belongs_to_many association has additional columns beyond the two foreign keys, these columns will be added as attributes to records retrieved via that association.
  • Records returned with additional attributes will always be read-only
  • If you require this sort of complex behavior on the table that joins two models in a many-to-many relationship, you should use a has_many :through association instead of has_and_belongs_to_many.
  • aliased as collection.concat and collection.push.
  • The collection.delete method removes one or more objects from the collection by deleting records in the join table
  • not destroy the objects
  • The collection.destroy method removes one or more objects from the collection by running destroy on each record in the join table, including running callbacks.
  • not destroy the objects.
  • The collection.clear method removes every object from the collection by deleting the rows from the joining table.
  • not destroy the associated objects.
  • The collection.find method finds objects within the collection. It uses the same syntax and options as ActiveRecord::Base.find.
  • The collection.where method finds objects within the collection based on the conditions supplied but the objects are loaded lazily meaning that the database is queried only when the object(s) are accessed.
  • The collection.exists? method checks whether an object meeting the supplied conditions exists in the collection.
  • The collection.build method returns a new object of the associated type.
  • the associated object will not yet be saved.
  • the associated object will be saved.
  • The collection.create method returns a new object of the associated type.
  • it passes all of the validations specified on the associated model
  • :association_foreign_key :autosave :class_name :foreign_key :join_table :validate
  • The :foreign_key and :association_foreign_key options are useful when setting up a many-to-many self-join.
  • Rails assumes that the column in the join table used to hold the foreign key pointing to the other model is the name of that model with the suffix _id added.
  • If you set the :autosave option to true, Rails will save any loaded members and destroy members that are marked for destruction whenever you save the parent object.
  • By convention, Rails assumes that the column in the join table used to hold the foreign key pointing to this model is the name of this model with the suffix _id added.
  • By default, this is true: associated objects will be validated when this object is saved.
  • where extending group includes limit offset order readonly select uniq
  • set conditions via a hash
  • In this case, using @parts.assemblies.create or @parts.assemblies.build will create orders where the factory column has the value "Seattle"
  • If you use a hash-style where, then record creation via this association will be automatically scoped using the hash
  • using a GROUP BY clause in the finder SQL.
  • Use the uniq method to remove duplicates from the collection.
  • assign an object to a has_and_belongs_to_many association, that object is automatically saved (in order to update the join table).
  • If any of these saves fails due to validation errors, then the assignment statement returns false and the assignment itself is cancelled.
  • If the parent object (the one declaring the has_and_belongs_to_many association) is unsaved (that is, new_record? returns true) then the child objects are not saved when they are added.
  • If you want to assign an object to a has_and_belongs_to_many association without saving the object, use the collection.build method.
  • Normal callbacks hook into the life cycle of Active Record objects, allowing you to work with those objects at various points
  • define association callbacks by adding options to the association declaration
  • Rails passes the object being added or removed to the callback.
  • stack callbacks on a single event by passing them as an array
  • If a before_add callback throws an exception, the object does not get added to the collection.
  • if a before_remove callback throws an exception, the object does not get removed from the collection
  • extend these objects through anonymous modules, adding new finders, creators, or other methods.
  • order_number
  • use a named extension module
  • proxy_association.owner returns the object that the association is a part of.
張 旭
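
A condensed Ruby sketch tying together several of the notes above (dependent, counter_cache, touch, has_many :through and a polymorphic belongs_to); the domain models are invented:

    class Customer < ApplicationRecord
      has_many :orders, dependent: :destroy      # destroy callbacks run on the associated orders
      has_many :line_items, through: :orders     # "shortcut" through the nested has_many
    end

    class Order < ApplicationRecord
      # counter_cache expects an orders_count column on customers;
      # touch bumps customers.updated_at whenever the order is saved or destroyed
      belongs_to :customer, counter_cache: true, touch: true
      has_many :line_items
    end

    class LineItem < ApplicationRecord
      belongs_to :order
    end

    class Picture < ApplicationRecord
      belongs_to :imageable, polymorphic: true   # stored in imageable_id and imageable_type columns
    end

    customer = Customer.create!(name: "ACME")
    customer.orders.create!       # sets orders.customer_id and increments customers.orders_count
    customer.line_items           # all line items across this customer's orders, via :through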

The Exhaustive Guide to Rails Time Zones - Alexander Danilenko - 0 views

  • you can use "wrong" methods in development and fairly often get valid results. But then you'll face unexpected problems in production.
  • Ruby provides two classes to manage time: Time and DateTime
  • that's in Ruby! When it comes to Rails things get a bit more complicated
  • ...15 more annotations...
  • Rails gives you the ability to configure the application time zone.
  • we have 3 (!) different time zones in our application: system time, application time and database time.
  • DateTime.now and Time.now both give you the time in system time zone
  • Ruby standard library methods that know nothing about Rails time zone configuration
  • It's not Rails that is responsible for adding the time zone, but ActiveSupport
  • switch from Time.now to Time.zone.now
  • Time.zone.now
  • no need to use it explicitly as there is a shorter and clearer option.
  • Time.zone.today
  • Time.zone.local
  • Time.zone.at
  • Time.zone.parse
  • DateTime.strptime(str, "%Y-%m-%d %H:%M %Z").in_time_zone
  • always keep in mind that when you build time or date object you should respect current time zone.
  • use Time.zone instead of Time, Date or DateTime
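
A short Ruby contrast of the zone-aware helpers versus the plain methods, assuming config.time_zone = 'Taipei' has been set in application.rb:

    Time.now                                  # system time zone - ignores the Rails setting
    Time.zone.now                             # application time zone - use this in Rails code
    Time.zone.today                           # zone-aware alternative to Date.today
    Time.zone.local(2024, 3, 1, 10, 0)        # build a time in the application time zone
    Time.zone.parse('2024-03-01 10:00')       # parse in the application time zone
    Time.zone.at(1_700_000_000)               # epoch seconds -> zone-aware time
    DateTime.strptime('2024-03-01 10:00 +0800', '%Y-%m-%d %H:%M %Z').in_time_zone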