Larvata: Group items tagged "variable"

張 旭

Helm | - 0 views

  • Helm will figure out where to install Tiller by reading your Kubernetes configuration file (usually $HOME/.kube/config). This is the same file that kubectl uses.
  • kubectl cluster-info
  • Role-Based Access Control (RBAC) enabled
  • initialize the local CLI
  • install Tiller into your Kubernetes cluster
  • helm install
  • helm init --upgrade
  • By default, when Tiller is installed, it does not have authentication enabled.
  • helm repo update
  • Without a max history set the history is kept indefinitely, leaving a large number of records for helm and tiller to maintain.
  • helm init --upgrade
  • Whenever you install a chart, a new release is created.
  • one chart can be installed multiple times into the same cluster. And each can be independently managed and upgraded.
  • helm list function will show you a list of all deployed releases.
  • helm delete
  • helm status
  • you can audit a cluster’s history, and even undelete a release (with helm rollback).
  • the Helm server (Tiller).
  • The Helm client (helm)
  • brew install kubernetes-helm
  • Tiller, the server portion of Helm, typically runs inside of your Kubernetes cluster.
  • it can also be run locally, and configured to talk to a remote Kubernetes cluster.
  • Role-Based Access Control - RBAC for short
  • create a service account for Tiller with the right roles and permissions to access resources.
  • run Tiller in an RBAC-enabled Kubernetes cluster.
  • run kubectl get pods --namespace kube-system and see Tiller running.
  • helm inspect
  • Helm will look for Tiller in the kube-system namespace unless --tiller-namespace or TILLER_NAMESPACE is set.
  • For development, it is sometimes easier to work on Tiller locally, and configure it to connect to a remote Kubernetes cluster.
  • even when running locally, Tiller will store release configuration in ConfigMaps inside of Kubernetes.
  • helm version should show you both the client and server version.
  • Because Tiller stores its data in Kubernetes ConfigMaps, you can safely delete and re-install Tiller without worrying about losing any data.
  • helm reset
  • The --node-selectors flag allows us to specify the node labels required for scheduling the Tiller pod.
  • --override allows you to specify properties of Tiller’s deployment manifest.
  • helm init --override manipulates the specified properties of the final manifest (there is no “values” file).
  • The --output flag allows us skip the installation of Tiller’s deployment manifest and simply output the deployment manifest to stdout in either JSON or YAML format.
  • By default, tiller stores release information in ConfigMaps in the namespace where it is running.
  • To switch from the default backend to the secrets backend, you'll have to do the migration on your own.
  • a beta SQL storage backend that stores release information in an SQL database (only postgres has been tested so far).
  • Once you have the Helm Client and Tiller successfully installed, you can move on to using Helm to manage charts.
  • Helm requires that kubelet have access to a copy of the socat program to proxy connections to the Tiller API.
  • A Release is an instance of a chart running in a Kubernetes cluster. One chart can often be installed many times into the same cluster.
  • helm init --client-only
  • helm init --dry-run --debug
  • A panic in Tiller is almost always the result of a failure to negotiate with the Kubernetes API server
  • Tiller and Helm have to negotiate a common version to make sure that they can safely communicate without breaking API assumptions
  • helm delete --purge
  • Helm stores some files in $HELM_HOME, which is located by default in ~/.helm
  • A Chart is a Helm package. It contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster.
  • Think of it like the Kubernetes equivalent of a Homebrew formula, an Apt dpkg, or a Yum RPM file.
  • A Repository is the place where charts can be collected and shared.
  • Set the $HELM_HOME environment variable
  • each time it is installed, a new release is created.
  • Helm installs charts into Kubernetes, creating a new release for each installation. And to find new charts, you can search Helm chart repositories.
  • chart repository is named stable by default
  • helm search shows you all of the available charts
  • helm inspect
  • To install a new package, use the helm install command. At its simplest, it takes only one argument: The name of the chart.
  • If you want to use your own release name, simply use the --name flag on helm install
  • additional configuration steps you can or should take.
  • Helm does not wait until all of the resources are running before it exits. Many charts require Docker images that are over 600M in size, and may take a long time to install into the cluster.
  • helm status
  • helm inspect values
  • helm inspect values stable/mariadb
  • override any of these settings in a YAML formatted file, and then pass that file during installation.
  • helm install -f config.yaml stable/mariadb
  • --values (or -f): Specify a YAML file with overrides.
  • --set (and its variants --set-string and --set-file): Specify overrides on the command line.
  • Values that have been --set can be cleared by running helm upgrade with --reset-values specified.
  • Chart designers are encouraged to consider the --set usage when designing the format of a values.yaml file.
  • --set-file key=filepath is another variant of --set. It reads the file and uses its content as a value.
  • inject a multi-line text into values without dealing with indentation in YAML.
  • An unpacked chart directory
  • When a new version of a chart is released, or when you want to change the configuration of your release, you can use the helm upgrade command.
  • Because Kubernetes charts can be large and complex, Helm tries to perform the least invasive upgrade.
  • It will only update things that have changed since the last release
  • $ helm upgrade -f panda.yaml happy-panda stable/mariadb
  • deployment
  • If both are used, --set values are merged into --values with higher precedence.
  • The helm get command is a useful tool for looking at a release in the cluster.
  • helm rollback
  • A release version is an incremental revision. Every time an install, upgrade, or rollback happens, the revision number is incremented by 1.
  • helm history
  • a release name cannot be re-used.
  • you can rollback a deleted resource, and have it re-activate.
  • helm repo list
  • helm repo add
  • helm repo update
  • The Chart Development Guide explains how to develop your own charts.
  • helm create
  • helm lint
  • helm package
  • Charts that are archived can be loaded into chart repositories.
  • chart repository server
  • Tiller can be installed into any namespace.
  • Limiting Tiller to only be able to install into specific namespaces and/or resource types is controlled by Kubernetes RBAC roles and rolebindings
  • Release names are unique PER TILLER INSTANCE
  • Charts should only contain resources that exist in a single namespace.
  • not recommended to have multiple Tillers configured to manage resources in the same namespace.
  • a client-side Helm plugin. A plugin is a tool that can be accessed through the helm CLI, but which is not part of the built-in Helm codebase.
  • Helm plugins are add-on tools that integrate seamlessly with Helm. They provide a way to extend the core feature set of Helm, but without requiring every new feature to be written in Go and added to the core tool.
  • Helm plugins live in $(helm home)/plugins
  • The Helm plugin model is partially modeled on Git’s plugin model
  • helm is referred to as the porcelain layer, with plugins being the plumbing.
  • helm plugin install https://github.com/technosophos/helm-template
  • command is the command that this plugin will execute when it is called.
  • Environment variables are interpolated before the plugin is executed.
  • The command itself is not executed in a shell. So you can’t oneline a shell script.
  • Helm is able to fetch Charts using HTTP/S
  • Variables like KUBECONFIG are set for the plugin if they are set in the outer environment.
  • In Kubernetes, granting a role to an application-specific service account is a best practice to ensure that your application is operating in the scope that you have specified.
  • restrict Tiller’s capabilities to install resources to certain namespaces, or to grant a Helm client running access to a Tiller instance.
  • Service account with cluster-admin role
  • The cluster-admin role is created by default in a Kubernetes cluster
  • Deploy Tiller in a namespace, restricted to deploying resources only in that namespace
  • Deploy Tiller in a namespace, restricted to deploying resources in another namespace
  • When running a Helm client in a pod, in order for the Helm client to talk to a Tiller instance, it will need certain privileges to be granted.
  • SSL Between Helm and Tiller
  • The Tiller authentication model uses client-side SSL certificates.
  • creating an internal CA, and using both the cryptographic and identity functions of SSL.
  • Helm is a powerful and flexible package-management and operations tool for Kubernetes.
  • default installation applies no security configurations
  • suitable only for a cluster that is well-secured in a private network, with no data-sharing and no other users or teams.
  • With great power comes great responsibility.
  • Choose the Best Practices you should apply to your helm installation
  • Role-based access control, or RBAC
  • Tiller’s gRPC endpoint and its usage by Helm
  • Kubernetes employ a role-based access control (or RBAC) system (as do modern operating systems) to help mitigate the damage that can be done if credentials are misused or bugs exist.
  • In the default installation the gRPC endpoint that Tiller offers is available inside the cluster (not external to the cluster) without authentication configuration applied.
  • Tiller stores its release information in ConfigMaps. We suggest changing the default to Secrets.
  • release information
  • charts
  • a chart is a kind of package that not only installs containers you may or may not have validated yourself, but may also install into more than one namespace.
  • As with all shared software, in a controlled or shared environment you must validate all software you install yourself before you install it.
  • Helm’s provenance tools to ensure the provenance and integrity of charts
張 旭

Form Helpers - Ruby on Rails Guides - 0 views

  • While the *_tag helpers can certainly be used for this task they are somewhat verbose as for each tag you would have to ensure the correct parameter name is used and set the default value of the input appropriately.
  • For these helpers the first argument is the name of an instance variable and the second is the name of a method (usually an attribute) to call on that object.
  • must pass the name of an instance variable, i.e. :person or "person", not an actual instance of your model object.
  • provide a :namespace option for your form to ensure uniqueness of id attributes on form elements.
  • a form builder object (the f variable).
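A small ERB sketch of the distinction the notes above draw, assuming a @person instance with a name attribute:

```erb
<%# *_tag helpers: you pick the parameter name and default value yourself %>
<%= text_field_tag :person_name, @person.name %>

<%# model object helpers: the first argument is the instance-variable name
    (:person, not @person), the second is the method to call on it %>
<%= text_field :person, :name %>

<%# form_for yields a form builder object (the f variable); :namespace
    keeps the generated id attributes unique %>
<%= form_for @person, namespace: :admin do |f| %>
  <%= f.text_field :name %>
<% end %>
```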
張 旭

Understanding the Nginx Configuration File Structure and Configuration Contexts | Digit... - 0 views

  • discussing the basic structure of an Nginx configuration file along with some guidelines on how to design your files
  • /etc/nginx/nginx.conf
  • In Nginx parlance, the areas that these brackets define are called "contexts" because they contain configuration details that are separated according to their area of concern
  • contexts can be layered within one another
  • if a directive is valid in multiple nested scopes, a declaration in a broader context will be passed on to any child contexts as default values.
  • The children contexts can override these values at will
  • Nginx will error out on reading a configuration file with directives that are declared in the wrong context.
  • The most general context is the "main" or "global" context
  • Any directive that exists entirely outside of these blocks is said to inhabit the "main" context
  • The main context represents the broadest environment for Nginx configuration.
  • The "events" context is contained within the "main" context. It is used to set global options that affect how Nginx handles connections at a general level.
  • Nginx uses an event-based connection processing model, so the directives defined within this context determine how worker processes should handle connections.
  • the connection processing method is automatically selected based on the most efficient choice that the platform has available
  • a worker will only take a single connection at a time
  • When configuring Nginx as a web server or reverse proxy, the "http" context will hold the majority of the configuration.
  • The http context is a sibling of the events context, so they should be listed side-by-side, rather than nested
  • fine-tune the TCP keep alive settings (keepalive_disable, keepalive_requests, and keepalive_timeout)
  • The "server" context is declared within the "http" context.
  • multiple declarations
  • each instance defines a specific virtual server to handle client requests
  • Each client request will be handled according to the configuration defined in a single server context, so Nginx must decide which server context is most appropriate based on details of the request.
  • listen: The ip address / port combination that this server block is designed to respond to.
  • server_name: This directive is the other component used to select a server block for processing.
  • "Host" header
  • configure files to try to respond to requests (try_files)
  • issue redirects and rewrites (return and rewrite)
  • set arbitrary variables (set)
  • Location contexts share many relational qualities with server contexts
  • multiple location contexts can be defined, each location is used to handle a certain type of client request, and each location is selected by virtue of matching the location definition against the client request through a selection algorithm
  • Location blocks live within server contexts and, unlike server blocks, can be nested inside one another.
  • While server contexts are selected based on the requested IP address/port combination and the host name in the "Host" header, location blocks further divide up the request handling within a server block by looking at the request URI
  • The request URI is the portion of the request that comes after the domain name or IP address/port combination.
  • New directives at this level allow you to reach locations outside of the document root (alias), mark the location as only internally accessible (internal), and proxy to other servers or locations (using http, fastcgi, scgi, and uwsgi proxying).
  • These can then be used to do A/B testing by providing different content to different hosts.
  • configures Perl handlers for the location they appear in
  • set the value of a variable depending on the value of another variable
  • used to map MIME types to the file extensions that should be associated with them.
  • this context defines a named pool of servers that Nginx can then proxy requests to
  • The upstream context should be placed within the http context, outside of any specific server contexts.
  • The upstream context can then be referenced by name within server or location blocks to pass requests of a certain type to the pool of servers that have been defined.
  • function as a high performance mail proxy server
  • The mail context is defined within the "main" or "global" context (outside of the http context).
  • Nginx has the ability to redirect authentication requests to an external authentication server
  • the if directive in Nginx will execute the instructions contained if a given test returns "true".
  • Since Nginx will test conditions of a request with many other purpose-made directives, if should not be used for most forms of conditional execution.
  • The limit_except context is used to restrict the use of certain HTTP methods within a location context.
  • The result of the above example is that any client can use the GET and HEAD verbs, but only clients coming from the 192.168.1.1/24 subnet are allowed to use other methods.
  • Many directives are valid in more than one context
  • it is usually best to declare directives in the highest context to which they are applicable, and to override them in lower contexts as necessary.
  • Declaring at higher levels provides you with a sane default
  • Nginx already engages in a well-documented selection algorithm for things like selecting server blocks and location blocks.
  • instead of relying on rewrites to get a user supplied request into the format that you would like to work with, you should try to set up two blocks for the request, one of which represents the desired method, and the other that catches messy requests and redirects (and possibly rewrites) them to your correct block.
  • incorrect requests can get by with a redirect rather than a rewrite, which should execute with lower overhead.
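A skeletal nginx.conf illustrating how the contexts described above nest; the addresses, names, and upstream servers are placeholders.

```nginx
# Main (global) context: directives outside every block.
user www-data;
worker_processes auto;

events {
    # Connection-handling options for the worker processes.
    worker_connections 1024;
}

http {
    # Broad defaults here are inherited by the child contexts below.
    keepalive_timeout 65;

    upstream app_backend {
        # Named pool of servers that server/location blocks can proxy to.
        server 10.0.0.10:8080;
        server 10.0.0.11:8080;
    }

    server {
        # Chosen by the listen ip/port pair plus the Host header.
        listen 80;
        server_name example.com;

        location / {
            # Locations subdivide handling by request URI; they can nest.
            try_files $uri $uri/ @proxy;
        }

        location @proxy {
            proxy_pass http://app_backend;
        }
    }
}
```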
張 旭

Run your CI/CD jobs in Docker containers | GitLab - 0 views

  • If you run Docker on your local machine, you can run tests in the container, rather than testing on a dedicated CI/CD server.
  • Run other services, like MySQL, in containers. Do this by specifying services in your .gitlab-ci.yml file.
  • By default, the executor pulls images from Docker Hub
  • Maps must contain at least the name option, which is the same image name as used for the string setting.
  • When a CI job runs in a Docker container, the before_script, script, and after_script commands run in the /builds/<project-path>/ directory. Your image may have a different default WORKDIR defined. To move to your WORKDIR, save the WORKDIR as an environment variable so you can reference it in the container during the job’s runtime.
  • The runner starts a Docker container using the defined entrypoint. The default comes from the Dockerfile and may be overridden in the .gitlab-ci.yml file.
  • attaches itself to a running container.
  • sends the script to the container’s shell stdin and receives the output.
  • To override the entrypoint of a Docker image, define an empty entrypoint in the .gitlab-ci.yml file, so the runner does not start a useless shell layer. However, that does not work for all Docker versions. For Docker 17.06 and later, the entrypoint can be set to an empty value. For Docker 17.03 and earlier, the entrypoint can be set to /bin/sh -c, /bin/bash -c, or an equivalent shell available in the image.
  • The runner expects that the image has no entrypoint or that the entrypoint is prepared to start a shell command.
  • entrypoint: [""]
  • entrypoint: ["/bin/sh", "-c"]
  • A DOCKER_AUTH_CONFIG CI/CD variable
張 旭

Provisioners Without a Resource | Terraform | HashiCorp Developer - 0 views

  • triggers - A map of values which should cause this set of provisioners to re-run. Values are meant to be interpolated references to variables or attributes of other resources.
crazylion lee

crystal-lang/crystal: The Crystal Programming Language - 1 views

  •  
    "Crystal is a programming language with the following goals: Have a syntax similar to Ruby (but compatibility with it is not a goal) Statically type-checked but without having to specify the type of variables or method arguments. Be able to call C code by writing bindings to it in Crystal. Have compile-time evaluation and generation of code, to avoid boilerplate code. Compile to efficient native code. "
張 旭

I am a puts debuggerer | Tenderlovemaking - 0 views

  • method(:render).source_location
  • method(:render).super_method.source_location
  • unbind the method from Kernel, rebind it to the request
  • The TracePoint allocated here will fire on every “call” event and the block will print out the method name and location of the call.
  • The -d flag will enable warnings and also print out every line where an exception was raised.
  • re-raised
  • RUBYOPT=-d bundle exec rake test
  • The RUBYOPT environment variable will get applied to every Ruby program that is executed in this shell, even sub shells executed by rake.
  • @sharing.freeze
  • can't modify frozen Hash
  • where the first mutation happened
  • I hit Ctrl-T (sorry, this only works on OS X; you'll need to use kill on Linux).
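A tiny Ruby sketch of the techniques highlighted above (the method and object names are made up for illustration):

```ruby
# source_location tells you the file and line where a method is defined;
# super_method.source_location walks up to the overridden implementation.
def greet(name)
  "hi #{name}"
end
p method(:greet).source_location

# A TracePoint on :call prints every method call and where it lives.
trace = TracePoint.new(:call) do |tp|
  puts "#{tp.defined_class}##{tp.method_id} #{tp.path}:#{tp.lineno}"
end
trace.enable { greet("ruby") }

# Freeze an object to surface the first place that tries to mutate it;
# running with `ruby -d` (or RUBYOPT=-d) also prints every raised exception.
sharing = {}.freeze
begin
  sharing[:key] = 1
rescue FrozenError => e
  puts e.message        # e.g. can't modify frozen Hash: {}
end
```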
張 旭

Using Workflows to Schedule Jobs - CircleCI - 1 views

  • A workflow is a set of rules for defining a collection of jobs and their run order.
  • Schedule workflows for jobs that should only run periodically.
  • run multiple jobs in parallel
  • rerun just the failed job
  • Builds without workflows require a build job.
  • Refer to the YAML Anchors/Aliases documentation for information about how to alias and reuse syntax to keep your .circleci/config.yml file small.
  • workflow orchestration with two parallel jobs
  • jobs run according to configured requirements, each job waiting to start until the required job finishes successfully
  • requires: key
  • fans-out to run a set of acceptance test jobs in parallel, and finally fans-in to run a common deploy job.
  • Holding a Workflow for a Manual Approval
  • Workflows can be configured to wait for manual approval of a job before continuing to the next job
  • add a job to the jobs list with the key type: approval
  • approval is a special job type that is only available to jobs under the workflow key
  • The name of the job to hold is arbitrary - it could be wait or pause, for example, as long as the job has a type: approval key in it.
  • schedule a workflow to run at a certain time for specific branches.
  • The triggers key is only added under your workflows key
  • using cron syntax to represent Coordinated Universal Time (UTC) for specified branches.
  • By default, a workflow is triggered on every git push
  • the commit workflow has no triggers key and will run on every git push
  • The nightly workflow has a triggers key and will run on the specified schedule
  • Cron step syntax (for example, */1, */20) is not supported.
  • use a context to share environment variables
  • use the same shared environment variables when initiated by a user who is part of the organization.
  • CircleCI does not run workflows for tags unless you explicitly specify tag filters.
  • CircleCI branch and tag filters support the Java variant of regex pattern matching.
  • Each workflow has an associated workspace which can be used to transfer files to downstream jobs as the workflow progresses.
  • The workspace is an additive-only store of data.
  • Jobs can persist data to the workspace
  • Downstream jobs can attach the workspace to their container filesystem.
  • Attaching the workspace downloads and unpacks each layer based on the ordering of the upstream jobs in the workflow graph.
  • Workflows that include jobs running on multiple branches may require data to be shared using workspaces
  • To persist data from a job and make it available to other jobs, configure the job to use the persist_to_workspace key.
  • Files and directories named in the paths: property of persist_to_workspace will be uploaded to the workflow’s temporary workspace relative to the directory specified with the root key.
  • Configure a job to get saved data by configuring the attach_workspace key.
  • persist_to_workspace
  • attach_workspace
  • To rerun only a workflow’s failed jobs, click the Workflows icon in the app and select a workflow to see the status of each job, then click the Rerun button and select Rerun from failed.
  • if you do not see your workflows triggering, a configuration error is preventing the workflow from starting.
  • check your Workflows page of the CircleCI app (not the Job page)
張 旭

Bash Reference Manual: Shell Parameter Expansion - 1 views

  • parameter expansion
  • command substitution
  • arithmetic expansion
  • The parameter name or symbol to be expanded may be enclosed in braces, which are optional but serve to protect the variable to be expanded from characters immediately following it which could be interpreted as part of the name.
  • When braces are used, the matching ending brace is the first ‘}’ not escaped by a backslash or within a quoted string, and not within an embedded arithmetic expansion, command substitution, or parameter expansion.
  • ${parameter}
  • braces are required
  • If the first character of parameter is an exclamation point (!), and parameter is not a nameref, it introduces a level of variable indirection.
  • ${parameter:-word} If parameter is unset or null, the expansion of word is substituted. Otherwise, the value of parameter is substituted.
  • ${parameter:=word} If parameter is unset or null, the expansion of word is assigned to parameter.
  • ${parameter:?word} If parameter is null or unset, the expansion of word (or a message to that effect if word is not present) is written to the standard error and the shell, if it is not interactive, exits.
  • ${parameter:+word} If parameter is null or unset, nothing is substituted, otherwise the expansion of word is substituted.
  • ${parameter:offset} ${parameter:offset:length}
  • Substring expansion applied to an associative array produces undefined results.
  • ${parameter/pattern/string} The pattern is expanded to produce a pattern just as in filename expansion.
  • If pattern begins with ‘/’, all matches of pattern are replaced with string.
  • Normally only the first match is replaced
  • The ‘^’ operator converts lowercase letters matching pattern to uppercase
  • the ‘,’ operator converts matching uppercase letters to lowercase.
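A runnable summary of the expansions listed above:

```bash
#!/usr/bin/env bash

name="world"
unset missing

echo "${name}wide"            # braces protect the name: worldwide

# Defaults, assignment, alternates, and required values.
echo "${missing:-fallback}"   # fallback (missing stays unset)
echo "${missing:=assigned}"   # assigned (missing is now set)
echo "${name:+alternate}"     # alternate (name is set and non-null)
: "${name:?name is required}" # aborts a non-interactive shell if unset/null

# Substrings and pattern substitution.
echo "${name:1:3}"            # orl
echo "${name/o/0}"            # w0rld  (first match only)
echo "${name//[lo]/_}"        # w_r_d  (leading / replaces all matches)

# Case modification and indirection.
echo "${name^}"               # World
echo "${name^^}"              # WORLD
echo "${name,,}"              # world
ref="name"
echo "${!ref}"                # world (expand the variable named by $ref)
```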
張 旭

153 ☞ Sourcing a shell script in Make - 0 views

  • Make runs its commands in a subshell, so the variables exported by source aren’t available to other commands.
  • Make and Bash have awfully similar syntaxes for setting variables
  • Make doesn’t parse quotes
  • needs to run before any target
  • If there’s a target for makefile, and its prerequisites are new, the target will run before anything, because the makefile might change.
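A hedged Makefile sketch of the workaround the notes describe; the .env file and variable names are illustrative, and recipe lines must start with a tab.

```make
# Each recipe line runs in its own subshell, so sourcing on one line does
# not help the next one. Chain the source and the command in one line:
deploy:
	. ./.env && echo "deploying to $$APP_ENV"

# Or convert the shell script into make syntax and include it; the env.mk
# target runs before anything else, because an included makefile that is a
# target with a newer prerequisite gets rebuilt first. (Make does not parse
# quotes, so keep the values in .env unquoted.)
include env.mk
env.mk: .env
	sed 's/^export //' $< > $@
```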
張 旭

A Clear, Concise & Comfy Code Review Checklist - DEV Community - 0 views

  • 2 blocks doing similar things might be allowable, but 3 or more is a definitive red cross from me!
  • This would ultimately be integrated into your CI/CD pipelines running on each build/commit/deploy too; stopping any rogue commits getting in.
  • not to say that every code block that is duplicated needs to be refactored
  • Refactoring is a cyclical process
  • Before accessing variables within objects and collections make sure they are there! PLEASE!
  • If that variable is a constant or won't be changed, then use the Const keyword in applicable languages and the CAPITALISATION convention to make users aware of your decisions about them.
  • The name of a method is more important than we give it credit for, when a method changes so should its name.
  • Make sure you are returning the right thing, trying to make it as generic as possible.
  • Void should do something, not change something!
  • Private vs Public, this is a big topic
  • keeping an eye of the access level of a method can stop issues further down the line
  • Gherkin is a Business Readable, Domain Specific Language created especially for behavior descriptions.
  • specify the 3 main points of a test, including what you expect to happen, using the keywords GIVEN, WHEN / AND, THEN.
  • look at how the code is structured, make sure methods aren't too long, don't have too many branches, and that for and if statements could be simplified.
  • Use your initiative and discuss if a rewrite would benefit maintainability for the future.
  • it's unnecessary to leave commented-out code when working in and around areas that contain it.
張 旭

Better Specs { rspec guidelines with ruby } - 0 views

  • # when referring to an instance method's name
  • . (or ::) when referring to a class method's name
  • Be clear about what method you are describing.
  • When describing a context, start its description with "when" or "with".
  • is_expected.to respond_with
  • In isolated unit specs, you want each example to specify one (and only one) behavior.
  • Contexts are a powerful method to make your tests clear and well organized
  • not isolated
  • Test valid, edge, and invalid cases.
  • new projects always use the expect syntax
  • On one line expectations or with implicit subject we should use is_expected.to
  • When you have to assign a variable instead of using a before block to create an instance variable, use let.
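A short spec sketch following the conventions above; the User model and its attributes are assumptions.

```ruby
RSpec.describe User do
  # "." (or "::") for class methods, "#" for instance methods.
  describe ".authenticate" do
    subject { described_class.authenticate(email, password) }

    # Prefer let over before blocks that assign instance variables.
    let(:email)    { "user@example.com" }
    let(:password) { "secret" }

    # Start context descriptions with "when" or "with".
    context "when the credentials are valid" do
      before { described_class.create!(email: email, password: password) }

      it { is_expected.to be_a(described_class) }  # implicit-subject one-liner
    end

    context "when the password is wrong" do
      it { is_expected.to be_nil }
    end
  end

  describe "#full_name" do
    it "joins the first and last names" do
      user = described_class.new(first_name: "Ada", last_name: "Lovelace")
      expect(user.full_name).to eq("Ada Lovelace")  # always the expect syntax
    end
  end
end
```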
張 旭

Reusing Config - CircleCI - 0 views

  • Change the version key to 2.1 in your .circleci/config.yml file and commit the changes to test your build.
  • Reusable commands are invoked with specific parameters as steps inside a job.
  • Commands can use other commands in the scope of execution
  • Executors define the environment in which the steps of a job will be run.
  • Executor declarations in config outside of jobs can be used by all jobs in the scope of that declaration, allowing you to reuse a single executor definition across multiple jobs.
  • It is also possible to allow an orb to define the executor used by all of its commands.
  • When invoking an executor in a job any keys in the job itself will override those of the executor invoked.
  • Steps are used when you have a job or command that needs to mix predefined and user-defined steps.
  • Use the enum parameter type when you want to enforce that the value must be one from a specific set of string values.
  • Use an executor parameter type to allow the invoker of a job to decide what executor it will run on
  • invoke the same job more than once in the workflows stanza of config.yml, passing any necessary parameters as subkeys to the job.
  • If a job is declared inside an orb it can use commands in that orb or the global commands.
  • To use parameters in executors, define the parameters under the given executor.
  • Parameters are in-scope only within the job or command that defined them.
  • A single configuration may invoke a job multiple times.
  • Every job invocation may optionally accept two special arguments: pre-steps and post-steps.
  • Pre and post steps allow you to execute steps in a given job without modifying the job.
  • conditions are checked before a workflow is actually run
  • you cannot use a condition to check an environment variable.
  • Conditional steps may be located anywhere a regular step could and may only use parameter values as inputs.
  • A conditional step consists of a step with the key when or unless. Under this conditional key are the subkeys steps and condition
  • A condition is a single value that evaluates to true or false at the time the config is processed, so you cannot use environment variables as conditions
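A hedged 2.1 config sketch showing reusable executors, commands, parameters, and conditional steps; the image tags and commands are illustrative.

```yaml
version: 2.1

executors:
  ruby:
    parameters:
      tag:
        type: string
        default: "3.2"
    docker:
      - image: cimg/ruby:<< parameters.tag >>

commands:
  install_deps:
    parameters:
      use_cache:
        type: boolean
        default: true
    steps:
      - checkout
      - when:                                    # conditional step: parameter
          condition: << parameters.use_cache >>  # values only, never env vars
          steps:
            - restore_cache:
                key: deps-v1
      - run: bundle install

jobs:
  test:
    parameters:
      tag:
        type: string
        default: "3.2"
    executor:
      name: ruby
      tag: << parameters.tag >>
    steps:
      - install_deps
      - run: bundle exec rake test

workflows:
  all_rubies:
    jobs:
      - test                      # the same job invoked twice, with
      - test:                     # parameters passed as subkeys
          name: test-ruby-3-1
          tag: "3.1"
          pre-steps:              # run extra steps without changing the job
            - run: echo "testing against Ruby 3.1"
```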
張 旭

Quick start - 0 views

  • Terragrunt will forward almost all commands, arguments, and options directly to Terraform, but based on the settings in your terragrunt.hcl file
  • the backend configuration does not support variables or expressions of any sort
  • the path_relative_to_include() built-in function,
  • The generate attribute is used to inform Terragrunt to generate the Terraform code for configuring the backend.
  • The find_in_parent_folders() helper will automatically search up the directory tree to find the root terragrunt.hcl and inherit the remote_state configuration from it.
  • Unlike the backend configurations, provider configurations support variables,
  • if you needed to modify the configuration to expose another parameter (e.g session_name), you would have to then go through each of your modules to make this change.
  • instructs Terragrunt to create the file provider.tf in the working directory (where Terragrunt calls terraform) before it calls any of the Terraform commands
  • large modules should be considered harmful.
  • it is a Bad Idea to define all of your environments (dev, stage, prod, etc), or even a large amount of infrastructure (servers, databases, load balancers, DNS, etc), in a single Terraform module.
  • Large modules are slow, insecure, hard to update, hard to code review, hard to test, and brittle (i.e., you have all your eggs in one basket).
  • Terragrunt allows you to define your Terraform code once and to promote a versioned, immutable “artifact” of that exact same code from environment to environment.
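A minimal sketch of the layout the notes describe: a root terragrunt.hcl that generates the backend and provider configuration, which each small module then inherits. The bucket, region, and provider block are placeholder assumptions.

```hcl
# Root terragrunt.hcl (one copy, shared by every module).
remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"            # Terragrunt writes the backend block,
    if_exists = "overwrite_terragrunt"  # since backends cannot use variables
  }
  config = {
    bucket = "my-terraform-state"
    key    = "${path_relative_to_include()}/terraform.tfstate"
    region = "us-east-1"
  }
}

generate "provider" {
  path      = "provider.tf"             # created in the working directory
  if_exists = "overwrite_terragrunt"    # before any terraform command runs
  contents  = <<EOF
provider "aws" {
  region = "us-east-1"
}
EOF
}
```

Each child module's terragrunt.hcl then only needs to walk up the tree and inherit the root configuration:

```hcl
include {
  path = find_in_parent_folders()
}
```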
張 旭

Speeding up Docker image build process of a Rails application | BigBinary Blog - 1 views

  • we do not want to execute bundle install and rake assets:precompile tasks while starting a container in each pod which would prevent that pod from accepting any requests until these tasks are finished.
  • run bundle install and rake assets:precompile tasks while or before containerizing the Rails application.
  • Kubernetes pulls the image, starts a Docker container using that image inside the pod and runs puma server immediately.
  • Since source code changes often, the previously cached layer for the ADD instruction is invalidated due to the mismatching checksums.
  • The ARG instruction in the Dockerfile defines RAILS_ENV variable and is implicitly used as an environment variable by the rest of the instructions defined just after that ARG instruction.
  • RUN instructions are used to install gems and precompile static assets using sprockets
  • Instead, Docker automatically re-uses the previously built layer for the RUN bundle install instruction if the Gemfile.lock file remains unchanged.
  • everyday we need to build a lot of Docker images containing source code from varying Git branches as well as with varying environments.
  • it is hard for Docker to cache layers for bundle install and rake assets:precompile tasks and re-use those layers during every docker build command run with different application source code and a different environment.
  • By default, Bundler installs gems at the location which is set by Rubygems.
張 旭

Rails Routing from the Outside In - Ruby on Rails Guides - 0 views

  • Resource routing allows you to quickly declare all of the common routes for a given resourceful controller.
  • Rails would dispatch that request to the destroy method on the photos controller with { id: '17' } in params.
  • a resourceful route provides a mapping between HTTP verbs and URLs to controller actions.
  • each action also maps to particular CRUD operations in a database
  • resource :photo and resources :photos creates both singular and plural routes that map to the same controller (PhotosController).
  • One way to avoid deep nesting (as recommended above) is to generate the collection actions scoped under the parent, so as to get a sense of the hierarchy, but to not nest the member actions.
  • to only build routes with the minimal amount of information to uniquely identify the resource
  • The shallow method of the DSL creates a scope inside of which every nesting is shallow
  • These concerns can be used in resources to avoid code duplication and share behavior across routes
  • add a member route, just add a member block into the resource block
  • You can leave out the :on option, this will create the same member route except that the resource id value will be available in params[:photo_id] instead of params[:id].
  • Singular Resources
  • use a singular resource to map /profile (rather than /profile/:id) to the show action
  • Passing a String to get will expect a controller#action format
  • workaround
  • organize groups of controllers under a namespace
  • route /articles (without the prefix /admin) to Admin::ArticlesController
  • route /admin/articles to ArticlesController (without the Admin:: module prefix)
  • Nested routes allow you to capture this relationship in your routing.
  • helpers take an instance of Magazine as the first parameter (magazine_ads_url(@magazine)).
  • Resources should never be nested more than 1 level deep.
  • via the :shallow option
  • a balance between descriptive routes and deep nesting
  • :shallow_path prefixes member paths with the specified parameter
  • Routing Concerns allows you to declare common routes that can be reused inside other resources and routes
  • Rails can also create paths and URLs from an array of parameters.
  • use url_for with a set of objects
  • In helpers like link_to, you can specify just the object in place of the full url_for call
  • insert the action name as the first element of the array
  • This will recognize /photos/1/preview with GET, and route to the preview action of PhotosController, with the resource id value passed in params[:id]. It will also create the preview_photo_url and preview_photo_path helpers.
  • pass :on to a route, eliminating the block:
  • Collection Routes
  • This will enable Rails to recognize paths such as /photos/search with GET, and route to the search action of PhotosController. It will also create the search_photos_url and search_photos_path route helpers.
  • simple routing makes it very easy to map legacy URLs to new Rails actions
  • add an alternate new action using the :on shortcut
  • When you set up a regular route, you supply a series of symbols that Rails maps to parts of an incoming HTTP request.
  • :controller maps to the name of a controller in your application
  • :action maps to the name of an action within that controller
  • optional parameters, denoted by parentheses
  • This route will also route the incoming request of /photos to PhotosController#index, since :action and :id are optional parameters.
  • use a constraint on :controller that matches the namespace you require
  • dynamic segments don't accept dots
  • The params will also include any parameters from the query string
  • :defaults option.
  • set params[:format] to "jpg"
  • cannot override defaults via query parameters
  • specify a name for any route using the :as option
  • create logout_path and logout_url as named helpers in your application.
  • Inside the show action of UsersController, params[:username] will contain the username for the user.
  • should use the get, post, put, patch and delete methods to constrain a route to a particular verb.
  • use the match method with the :via option to match multiple verbs at once
  • Routing both GET and POST requests to a single action has security implications
  • 'GET' in Rails won't check for CSRF token. You should never write to the database from 'GET' requests
  • use the :constraints option to enforce a format for a dynamic segment
  • constraints
  • don't need to use anchors
  • Request-Based Constraints
  • the same name as the hash key and then compare the return value with the hash value.
  • constraint values should match the corresponding Request object method return type
    • 張 旭
       
      Basically this checks the incoming request: if it is accessed by a particular kind of request, it is allowed through.
  • blacklist
    • 張 旭
       
      This part is a bit complicated ...
  • redirect helper
  • reuse dynamic segments from the match in the path to redirect
  • this redirection is a 301 "Moved Permanently" redirect.
  • root method
  • put the root route at the top of the file
  • The root route only routes GET requests to the action.
  • root inside namespaces and scopes
  • For namespaced controllers you can use the directory notation
  • Only the directory notation is supported
  • use the :constraints option to specify a required format on the implicit id
  • specify a single constraint to apply to a number of routes by using the block
  • non-resourceful routes
  • :id parameter doesn't accept dots
  • :as option lets you override the normal naming for the named route helpers
  • use the :as option to prefix the named route helpers that Rails generates for a route
  • prevent name collisions
  • prefix routes with a named parameter
  • This will provide you with URLs such as /bob/articles/1 and will allow you to reference the username part of the path as params[:username] in controllers, helpers and views
  • :only option
  • :except option
  • generating only the routes that you actually need can cut down on memory use and speed up the routing process.
  • alter path names
  • http://localhost:3000/rails/info/routes
  • rake routes
  • setting the CONTROLLER environment variable
  • Routes should be included in your testing strategy
  • assert_generates assert_recognizes assert_routing
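A compact routes.rb sketch exercising the pieces highlighted above; the controllers, constraints, and redirect target are illustrative.

```ruby
Rails.application.routes.draw do
  root "pages#home"                          # GET / only

  # Resourceful routes: HTTP verb + URL mapped to controller actions.
  resources :photos do
    member     { get :preview }              # GET /photos/:id/preview
    collection { get :search }               # GET /photos/search
  end

  # Singular resource: /profile (no :id) goes to the show action.
  resource :profile, only: [:show, :update]

  # Shallow nesting keeps member routes short while scoping collections.
  resources :magazines do
    resources :ads, shallow: true
  end

  # Namespaced controllers: /admin/articles -> Admin::ArticlesController.
  namespace :admin do
    resources :articles
  end

  # Non-resourceful routes with named helpers, constraints, and a redirect.
  get "exit", to: "sessions#destroy", as: :logout
  get ":username", to: "users#show", constraints: { username: /[a-z0-9_]+/ }
  get "/stories/:name", to: redirect("/articles/%{name}")   # 301 by default
end
```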
張 旭

How to write excellent Dockerfiles - 0 views

  • minimize image size, build time and number of layers.
  • maximize build cache usage
  • Container should do one thing
    • 張 旭
       
      This is debatable; there is a detailed discussion of it in the baseimage-docker blog post.
  • Use COPY and RUN commands in proper order
  • Merge multiple RUN commands into one
  • alpine versions should be enough
  • Use exec inside entrypoint script
  • Prefer COPY over ADD
  • Specify default environment variables, ports and volumes inside Dockerfile
  • problems with zombie processes
  • prepare separate Docker image for each component, and use Docker Compose to easily start multiple containers at the same time
  • Layers are cached and reused
  • Layers are immutable
  • They both make you cry
  • rely on our base image updates
  • make a cleanup
  • alpine is a very tiny linux distribution, just about 4 MB in size.
  • Your disk will love you :)
  • WORKDIR command changes default directory, where we run our RUN / CMD / ENTRYPOINT commands.
  • CMD is the default command run after creating a container, when no other command is specified.
  • put your command inside an array
  • entrypoint adds complexity
  • Entrypoint is a script, that will be run instead of command, and receive command as arguments
  • Without it, we would not be able to stop our application gracefully (SIGTERM is swallowed by the bash script).
  • Use "exec" inside entrypoint script
  • ADD has some logic for downloading remote files and extracting archives.
  • stick with COPY.
  • ADD
    • 張 旭
       
      Wait, didn't it just say to prefer COPY?
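A small Dockerfile plus entrypoint sketch reflecting the checklist above; the base image, package, port, and commands are placeholders.

```dockerfile
FROM alpine:3.19

# Merge RUN commands into one layer and clean up inside the same layer.
RUN apk add --no-cache curl

# Declare default environment variables, ports and volumes in the image.
ENV APP_ENV=production
EXPOSE 8080
VOLUME /data

WORKDIR /app

# Prefer COPY; ADD only adds URL-download and tar-extraction magic.
COPY entrypoint.sh ./
COPY app/ ./app/

ENTRYPOINT ["./entrypoint.sh"]
CMD ["./app/server"]            # exec (array) form, not a shell string
```

And the entrypoint itself, where exec hands PID 1 over to the real command so SIGTERM is not swallowed by the wrapper script:

```sh
#!/bin/sh
set -e
# Any one-time setup (config templating, migrations) would go here.
exec "$@"
```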
張 旭

Understanding Ruby Blocks, Procs and Lambdas - Robert Sosinski - 0 views

  • Ruby has four different ways of using closures
  • The code block interacts with a variable
  • collect! will use the code provided within the block on each element in the array
  • do not need to specify the name of blocks within your methods
  • use the yield keyword. Calling this keyword will execute the code within the block provided to the method
  • A block is just a Proc!
  • saving reusable code as an object itself. This reusable code is called a Proc (short for procedure)
  • The only difference between blocks and Procs is that a block is a Proc that cannot be saved, and as such, is a one time use solution
  • a bang at the end
  • That value is now available to the block and returned by the yield call
  • The block has the number available (also called n)
  • a flexible way to interact with our method
  • an ampersand argument
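A runnable Ruby sketch of the blocks, Procs, and lambdas discussed above:

```ruby
# A method that yields to whatever block the caller passes in.
def twice
  yield 1
  yield 2
end
twice { |n| puts n * 10 }          # the block interacts with the variable n

# Capture the block as an object with an ampersand argument.
def run_later(&callback)
  callback                         # a captured block is just a Proc
end
job = run_later { |msg| puts "running #{msg}" }
job.call("cleanup")

# Procs and lambdas are reusable, saved blocks.
doubler = Proc.new { |n| n * 2 }
halver  = ->(n) { n / 2.0 }
p [1, 2, 3].map(&doubler)          # => [2, 4, 6]
p halver.call(9)                   # => 4.5

# collect! runs the block on every element and mutates the array (the bang).
nums = [1, 2, 3]
nums.collect! { |n| n + 1 }
p nums                             # => [2, 3, 4]
```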