Larvata / Group items tagged "url"

crazylion lee

The History of the URL: Domain, Protocol, and Port - Eager Blog

  • "The History of the URL: Domain, Protocol, and Port"
張 旭

Rails Routing from the Outside In - Ruby on Rails Guides

  • Resource routing allows you to quickly declare all of the common routes for a given resourceful controller.
  • Rails would dispatch that request to the destroy method on the photos controller with { id: '17' } in params.
  • a resourceful route provides a mapping between HTTP verbs and URLs to controller actions.
  • each action also maps to particular CRUD operations in a database
  • resource :photo and resources :photos creates both singular and plural routes that map to the same controller (PhotosController).
  • One way to avoid deep nesting (as recommended above) is to generate the collection actions scoped under the parent, so as to get a sense of the hierarchy, but to not nest the member actions.
  • to only build routes with the minimal amount of information to uniquely identify the resource
  • The shallow method of the DSL creates a scope inside of which every nesting is shallow
  • These concerns can be used in resources to avoid code duplication and share behavior across routes
  • add a member route, just add a member block into the resource block
  • You can leave out the :on option; this will create the same member route, except that the resource id value will be available in params[:photo_id] instead of params[:id].
  • Singular Resources
  • use a singular resource to map /profile (rather than /profile/:id) to the show action
  • Passing a String to get will expect a controller#action format
  • workaround
  • organize groups of controllers under a namespace
  • route /articles (without the prefix /admin) to Admin::ArticlesController
  • route /admin/articles to ArticlesController (without the Admin:: module prefix)
  • Nested routes allow you to capture this relationship in your routing.
  • helpers take an instance of Magazine as the first parameter (magazine_ads_url(@magazine)).
  • Resources should never be nested more than 1 level deep.
  • via the :shallow option
  • a balance between descriptive routes and deep nesting
  • :shallow_path prefixes member paths with the specified parameter
  • Routing Concerns allows you to declare common routes that can be reused inside other resources and routes
  • Rails can also create paths and URLs from an array of parameters.
  • use url_for with a set of objects
  • In helpers like link_to, you can specify just the object in place of the full url_for call
  • insert the action name as the first element of the array
  • This will recognize /photos/1/preview with GET, and route to the preview action of PhotosController, with the resource id value passed in params[:id]. It will also create the preview_photo_url and preview_photo_path helpers.
  • pass :on to a route, eliminating the block:
  • Collection Routes
  • This will enable Rails to recognize paths such as /photos/search with GET, and route to the search action of PhotosController. It will also create the search_photos_url and search_photos_path route helpers.
  • simple routing makes it very easy to map legacy URLs to new Rails actions
  • add an alternate new action using the :on shortcut
  • When you set up a regular route, you supply a series of symbols that Rails maps to parts of an incoming HTTP request.
  • :controller maps to the name of a controller in your application
  • :action maps to the name of an action within that controller
  • optional parameters, denoted by parentheses
  • This route will also route the incoming request of /photos to PhotosController#index, since :action and :id are optional parameters.
  • use a constraint on :controller that matches the namespace you require
  • dynamic segments don't accept dots
  • The params will also include any parameters from the query string
  • :defaults option.
  • set params[:format] to "jpg"
  • cannot override defaults via query parameters
  • specify a name for any route using the :as option
  • create logout_path and logout_url as named helpers in your application.
  • Inside the show action of UsersController, params[:username] will contain the username for the user.
  • should use the get, post, put, patch and delete methods to constrain a route to a particular verb.
  • use the match method with the :via option to match multiple verbs at once
  • Routing both GET and POST requests to a single action has security implications
  • 'GET' in Rails won't check for a CSRF token. You should never write to the database from 'GET' requests.
  • use the :constraints option to enforce a format for a dynamic segment
  • constraints
  • don't need to use anchors
  • Request-Based Constraints
  • Rails calls a method on the Request object with the same name as the hash key and then compares the return value with the hash value.
  • constraint values should match the corresponding Request object method return type
    • 張 旭
       
      This basically checks the originating request: if the request comes from a particular source, it is allowed through.
  • blacklist
    • 張 旭
       
      This part is a bit complicated ...
  • redirect helper
  • reuse dynamic segments from the match in the path to redirect
  • this redirection is a 301 "Moved Permanently" redirect.
  • root method
  • put the root route at the top of the file
  • The root route only routes GET requests to the action.
  • root inside namespaces and scopes
  • For namespaced controllers you can use the directory notation
  • Only the directory notation is supported
  • use the :constraints option to specify a required format on the implicit id
  • specify a single constraint to apply to a number of routes by using the block
  • non-resourceful routes
  • :id parameter doesn't accept dots
  • :as option lets you override the normal naming for the named route helpers
  • use the :as option to prefix the named route helpers that Rails generates for a route
  • prevent name collisions
  • prefix routes with a named parameter
  • This will provide you with URLs such as /bob/articles/1 and will allow you to reference the username part of the path as params[:username] in controllers, helpers and views
  • :only option
  • :except option
  • generating only the routes that you actually need can cut down on memory use and speed up the routing process
  • alter path names
  • http://localhost:3000/rails/info/routes
  • rake routes
  • setting the CONTROLLER environment variable
  • Routes should be included in your testing strategy
  • assert_generates, assert_recognizes, assert_routing
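
Pulling several of these annotations together, here is a minimal config/routes.rb sketch; all controller, path, and helper names are assumed for illustration:

```ruby
# config/routes.rb -- illustrative sketch; PhotosController, Admin::ArticlesController,
# SessionsController, etc. are assumed to exist.
Rails.application.routes.draw do
  root "photos#index"                       # GET / -> PhotosController#index

  concern :commentable do                   # a routing concern, reusable across resources
    resources :comments, only: [:index, :create]
  end

  resources :photos, concerns: :commentable do
    member     { get "preview" }            # GET /photos/:id/preview -> photos#preview
    collection { get "search" }             # GET /photos/search      -> photos#search
  end

  resource :profile, only: [:show]          # singular resource: /profile, no :id

  namespace :admin do
    resources :articles                     # /admin/articles -> Admin::ArticlesController
  end

  # named route: creates logout_path and logout_url helpers
  get "exit", to: "sessions#destroy", as: :logout

  # dynamic segment with a format constraint
  get "photos/:id", to: "photos#show", constraints: { id: /[A-Z]\d{5}/ }

  # legacy URL mapped via a 301 "Moved Permanently" redirect
  get "/stories", to: redirect("/articles")
end
```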
張 旭

Auto DevOps | GitLab

  • Auto DevOps provides pre-defined CI/CD configuration which allows you to automatically detect, build, test, deploy, and monitor your applications
  • Just push your code and GitLab takes care of everything else.
  • Auto DevOps will be automatically disabled on the first pipeline failure.
  • Your project will continue to use an alternative CI/CD configuration file if one is found
  • Auto DevOps works with any Kubernetes cluster;
  • using the Docker or Kubernetes executor, with privileged mode enabled.
  • Base domain (needed for Auto Review Apps and Auto Deploy)
  • Kubernetes (needed for Auto Review Apps, Auto Deploy, and Auto Monitoring)
  • Prometheus (needed for Auto Monitoring)
  • scrape your Kubernetes cluster.
  • project level as a variable: KUBE_INGRESS_BASE_DOMAIN
  • A wildcard DNS A record matching the base domain(s) is required
  • Once set up, all requests will hit the load balancer, which in turn will route them to the Kubernetes pods that run your application(s).
  • review/ (every environment starting with review/)
  • staging
  • production
  • need to define a separate KUBE_INGRESS_BASE_DOMAIN variable for all the above based on the environment.
  • Continuous deployment to production: Enables Auto Deploy with master branch directly deployed to production.
  • Continuous deployment to production using timed incremental rollout
  • Automatic deployment to staging, manual deployment to production
  • Auto Build creates a build of the application using an existing Dockerfile or Heroku buildpacks.
  • If a project’s repository contains a Dockerfile, Auto Build will use docker build to create a Docker image.
  • Each buildpack requires certain files to be in your project’s repository for Auto Build to successfully build your application.
  • Auto Test automatically runs the appropriate tests for your application using Herokuish and Heroku buildpacks by analyzing your project to detect the language and framework.
  • Auto Code Quality uses the Code Quality image to run static analysis and other code checks on the current code.
  • Static Application Security Testing (SAST) uses the SAST Docker image to run static analysis on the current code and checks for potential security issues.
  • Dependency Scanning uses the Dependency Scanning Docker image to run analysis on the project dependencies and checks for potential security issues.
  • License Management uses the License Management Docker image to search the project dependencies for their license.
  • Vulnerability Static Analysis for containers uses Clair to run static analysis on a Docker image and checks for potential security issues.
  • Review Apps are temporary application environments based on the branch’s code so developers, designers, QA, product managers, and other reviewers can actually see and interact with code changes as part of the review process. Auto Review Apps create a Review App for each branch. Auto Review Apps will deploy your app to your Kubernetes cluster only. When no cluster is available, no deployment will occur.
  • The Review App will have a unique URL based on the project ID, the branch or tag name, and a unique number, combined with the Auto DevOps base domain.
  • Review apps are deployed using the auto-deploy-app chart with Helm, which can be customized.
  • Your apps should not be manipulated outside of Helm (using Kubernetes directly).
  • Dynamic Application Security Testing (DAST) uses the popular open source tool OWASP ZAProxy to perform an analysis on the current code and checks for potential security issues.
  • Auto Browser Performance Testing utilizes the Sitespeed.io container to measure the performance of a web page.
  • add the paths to a file named .gitlab-urls.txt in the root directory, one per line.
  • After a branch or merge request is merged into the project’s default branch (usually master), Auto Deploy deploys the application to a production environment in the Kubernetes cluster, with a namespace based on the project name and unique project ID
  • Auto Deploy doesn’t include deployments to staging or canary by default, but the Auto DevOps template contains job definitions for these tasks if you want to enable them.
  • Apps are deployed using the auto-deploy-app chart with Helm.
  • For internal and private projects a GitLab Deploy Token will be automatically created, when Auto DevOps is enabled and the Auto DevOps settings are saved.
  • If the GitLab Deploy Token cannot be found, CI_REGISTRY_PASSWORD is used. Note that CI_REGISTRY_PASSWORD is only valid during deployment.
  • If present, DB_INITIALIZE will be run as a shell command within an application pod as a helm post-install hook.
  • a post-install hook means that if any deploy succeeds, DB_INITIALIZE will not be processed thereafter.
  • DB_MIGRATE will be run as a shell command within an application pod as a helm pre-upgrade hook.
    • 張 旭
       
      If the project is a different type, you have to check how the buildpack invokes that command, e.g. Laravel's migrations.
    • 張 旭
       
      If the image is built from your own Dockerfile, it seems you don't need to care about the buildpack approach.
  • Once your application is deployed, Auto Monitoring makes it possible to monitor your application’s server and response metrics right out of the box.
  • annotate the NGINX Ingress deployment to be scraped by Prometheus using prometheus.io/scrape: "true" and prometheus.io/port: "10254"
  • If you are also using Auto Review Apps and Auto Deploy and choose to provide your own Dockerfile, make sure you expose your application to port 5000 as this is the port assumed by the default Helm chart.
  • While Auto DevOps provides great defaults to get you started, you can customize almost everything to fit your needs; from custom buildpacks, to Dockerfiles, Helm charts, or even copying the complete CI/CD configuration into your project to enable staging and canary deployments, and more.
  • If your project has a Dockerfile in the root of the project repo, Auto DevOps will build a Docker image based on the Dockerfile rather than using buildpacks.
  • Auto DevOps uses Helm to deploy your application to Kubernetes.
  • Bundled chart - If your project has a ./chart directory with a Chart.yaml file in it, Auto DevOps will detect the chart and use it instead of the default one.
  • Create a project variable AUTO_DEVOPS_CHART with the URL of a custom chart to use or create two project variables AUTO_DEVOPS_CHART_REPOSITORY with the URL of a custom chart repository and AUTO_DEVOPS_CHART with the path to the chart.
  • make use of the HELM_UPGRADE_EXTRA_ARGS environment variable to override the default values in the values.yaml file in the default Helm chart.
  • specify the use of a custom Helm chart per environment by scoping the environment variable to the desired environment.
    • 張 旭
       
      Auto DevOps is essentially a ready-made .gitlab-ci.yml that someone has already written for you (see the sketch after this list).
  • Your additions will be merged with the Auto DevOps template using the behaviour described for include
  • copy and paste the contents of the Auto DevOps template into your project and edit this as needed.
  • In order to support applications that require a database, PostgreSQL is provisioned by default.
  • Set up the replica variables using a project variable and scale your application by just redeploying it!
  • You should not scale your application using Kubernetes directly.
  • Some applications need to define secret variables that are accessible by the deployed application.
  • Auto DevOps detects variables where the key starts with K8S_SECRET_ and makes these prefixed variables available to the deployed application as environment variables.
  • Auto DevOps pipelines will take your application secret variables to populate a Kubernetes secret.
  • Environment variables are generally considered immutable in a Kubernetes pod.
  • if you update an application secret without changing any code then manually create a new pipeline, you will find that any running application pods will not have the updated secrets.
  • Variables with multiline values are not currently supported
  • The normal behavior of Auto DevOps is to use Continuous Deployment, pushing automatically to the production environment every time a new pipeline is run on the default branch.
  • If STAGING_ENABLED is defined in your project (e.g., set STAGING_ENABLED to 1 as a CI/CD variable), then the application will be automatically deployed to a staging environment, and a production_manual job will be created for you when you’re ready to manually deploy to production.
  • If CANARY_ENABLED is defined in your project (e.g., set CANARY_ENABLED to 1 as a CI/CD variable) then two manual jobs will be created: canary which will deploy the application to the canary environment production_manual which is to be used by you when you’re ready to manually deploy to production.
  • If INCREMENTAL_ROLLOUT_MODE is set to manual in your project, then instead of the standard production job, four different manual jobs will be created: rollout 10%, rollout 25%, rollout 50%, and rollout 100%.
  • The percentage is based on the REPLICAS variable and defines the number of pods you want to have for your deployment.
  • To start a job, click on the play icon next to the job’s name.
  • Once you get to 100%, you cannot scale down, and you’d have to roll back by redeploying the old version using the rollback button in the environment page.
  • With INCREMENTAL_ROLLOUT_MODE set to manual and with STAGING_ENABLED
  • not all buildpacks support Auto Test yet
  • When a project has been marked as private, GitLab’s Container Registry requires authentication when downloading containers.
  • Authentication credentials will be valid while the pipeline is running, allowing for a successful initial deployment.
  • After the pipeline completes, Kubernetes will no longer be able to access the Container Registry.
  • We strongly advise using GitLab Container Registry with Auto DevOps in order to simplify configuration and prevent any unforeseen issues.
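
As a rough sketch of such a customization, include the template and set the variables mentioned above; all values below are illustrative assumptions, and real secret values belong in the project's CI/CD settings rather than in the file:

```yaml
# .gitlab-ci.yml -- a sketch; values are hypothetical
include:
  - template: Auto-DevOps.gitlab-ci.yml         # additions are merged with the Auto DevOps template

variables:
  KUBE_INGRESS_BASE_DOMAIN: "apps.example.com"  # a wildcard DNS A record must match this domain
  STAGING_ENABLED: "1"                          # auto-deploy to staging, manual production job
  DB_INITIALIZE: "bundle exec rake db:setup"    # run as a helm post-install hook (once)
  DB_MIGRATE: "bundle exec rake db:migrate"     # run as a helm pre-upgrade hook
  K8S_SECRET_API_TOKEN: "changeme"              # K8S_SECRET_* vars become env vars in the app pod
```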
張 旭

Ingress - Kubernetes

  • An API object that manages external access to the services in a cluster, typically HTTP.
  • load balancing
  • SSL termination
  • name-based virtual hosting
  • Edge router: A router that enforces the firewall policy for your cluster.
  • Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model.
  • Service: A Kubernetes Service (a way to expose an application running on a set of Pods as a network service) that identifies a set of Pods using label selectors.
  • Services are assumed to have virtual IPs only routable within the cluster network.
  • Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
  • Traffic routing is controlled by rules defined on the Ingress resource.
  • An Ingress can be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name based virtual hosting.
  • Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
  • You must have an ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
  • As with all other Kubernetes resources, an Ingress needs apiVersion, kind, and metadata fields
  • Ingress frequently uses annotations to configure some options depending on the Ingress controller,
  • Ingress resource only supports rules for directing HTTP traffic.
  • An optional host.
  • A list of paths
  • A backend is a combination of Service and port names
  • has an associated backend
  • Both the host and path must match the content of an incoming request before the load balancer directs traffic to the referenced Service.
  • HTTP (and HTTPS) requests to the Ingress that matches the host and path of the rule are sent to the listed backend.
  • A default backend is often configured in an Ingress controller to service any requests that do not match a path in the spec.
  • An Ingress with no rules sends all traffic to a single default backend.
  • Ingress controllers and load balancers may take a minute or two to allocate an IP address.
  • A fanout configuration routes traffic from a single IP address to more than one Service, based on the HTTP URI being requested.
  • nginx.ingress.kubernetes.io/rewrite-target: /
  • describe ingress
  • get ingress
  • Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address.
  • route requests based on the Host header.
  • If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name-based virtual host being required.
  • secure an Ingress by specifying a Secret (which stores sensitive information such as passwords, OAuth tokens, and SSH keys) that contains a TLS private key and certificate.
  • Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination.
  • An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others.
  • More advanced load-balancing concepts (persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can instead get these features through the load balancer used for a Service.
  • review the controller specific documentation to see how they handle health checks
  • edit ingress
  • After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.
  • kubectl replace -f on a modified Ingress YAML file.
  • Node: A worker machine in Kubernetes, part of a cluster.
  • in most common Kubernetes deployments, nodes in the cluster are not part of the public internet.
  • Edge router: A router that enforces the firewall policy for your cluster.
  • a gateway managed by a cloud provider or a physical piece of hardware.
  • Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model.
  • Service: A Kubernetes Service that identifies a set of Pods using label selectors.
  • An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
  • An Ingress does not expose arbitrary ports or protocols.
  • You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
  • The name of an Ingress object must be a valid DNS subdomain name
  • The Ingress spec has all the information needed to configure a load balancer or proxy server.
  • Ingress resource only supports rules for directing HTTP(S) traffic.
  • An Ingress with no rules sends all traffic to a single default backend and .spec.defaultBackend is the backend that should handle requests in that case.
  • If defaultBackend is not set, the handling of requests that do not match any of the rules will be up to the ingress controller
  • A common usage for a Resource backend is to ingress data to an object storage backend with static assets.
  • Exact: Matches the URL path exactly and with case sensitivity.
  • Prefix: Matches based on a URL path prefix split by /. Matching is case sensitive and done on a path element by element basis.
  • multiple paths within an Ingress will match a request. In those cases precedence will be given first to the longest matching path.
  • Hosts can be precise matches (for example “foo.bar.com”) or a wildcard (for example “*.foo.com”).
  • No match, wildcard only covers a single DNS label
  • Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configuration including the name of the controller that should implement the class.
  • secure an Ingress by specifying a Secret that contains a TLS private key and certificate.
  • The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress point (traffic to the Service and its Pods is in plaintext).
  • TLS will not work on the default rule because the certificates would have to be issued for all the possible sub-domains.
  • hosts in the tls section need to explicitly match the host in the rules section.
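
A small manifest illustrating the pieces described above; hostnames, Secret, and Service names are assumed:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx            # references an IngressClass and its controller
  tls:
    - hosts:
        - foo.bar.com                # must explicitly match the host in the rules section
      secretName: foo-bar-tls        # Secret containing the TLS private key and certificate
  defaultBackend:                    # handles any request that matches no rule
    service:
      name: default-http-backend
      port:
        number: 80
  rules:
    - host: foo.bar.com              # name-based virtual hosting via the Host header
      http:
        paths:
          - path: /products
            pathType: Prefix         # Prefix matches the path element by element
            backend:
              service:
                name: products-service
                port:
                  number: 80
```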
張 旭

The Rails Command Line - Ruby on Rails Guides

  • rake --tasks
  • Think of destroy as the opposite of generate.
  • runner runs Ruby code in the context of Rails non-interactively
  • rails dbconsole figures out which database you're using and drops you into whichever command line interface you would use with it
  • The console command lets you interact with your Rails application from the command line. Under the hood, rails console uses IRB.
  • rake about gives information about version numbers for Ruby, RubyGems, Rails, the Rails subcomponents, your application's folder, the current Rails environment name, your app's database adapter, and schema version
  • You can precompile the assets in app/assets using rake assets:precompile and remove those compiled assets using rake assets:clean.
  • rake db:version is useful when troubleshooting
  • The doc: namespace has the tools to generate documentation for your app, API documentation, guides.
  • rake notes will search through your code for comments beginning with FIXME, OPTIMIZE or TODO.
  • You can also use custom annotations in your code and list them using rake notes:custom by specifying the annotation using an environment variable ANNOTATION.
  • rake routes will list all of your defined routes, which is useful for tracking down routing problems in your app, or giving you a good overview of the URLs in an app you're trying to get familiar with.
  • rake secret will give you a pseudo-random key to use for your session secret.
  • Custom rake tasks have a .rake extension and are placed in Rails.root/lib/tasks.
  • rails new . --git --database=postgresql
  • All commands can run with -h or --help to list more information
  • The rails server command launches a small web server named WEBrick which comes bundled with Ruby
  • rails server -e production -p 4000
  • You can run a server as a daemon by passing a -d option
  • The rails generate command uses templates to create a whole lot of things.
  • Using generators will save you a large amount of time by writing boilerplate code, code that is necessary for the app to work.
  • All Rails console utilities have help text.
  • generate controller ControllerName action1 action2.
  • With a normal, plain-old Rails application, your URLs will generally follow the pattern of http://(host)/(controller)/(action), and a URL like http://(host)/(controller) will hit the index action of that controller.
  • A scaffold in Rails is a full set of model, database migration for that model, controller to manipulate it, views to view and manipulate the data, and a test suite for each of the above.
  • Unit tests are code that tests and makes assertions about code.
  • Unit tests are your friend.
  • rails console --sandbox
  • rails db
  • Each task has a description, and should help you find the thing you need.
  • rake tmp:clear clears all three: cache, sessions, and sockets.
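
As a small example of the custom rake tasks mentioned above (namespace and task names are made up):

```ruby
# lib/tasks/app.rake -- custom tasks use the .rake extension and live in Rails.root/lib/tasks
namespace :app do
  desc "Print version information (shows up in rake --tasks)"
  task info: :environment do    # depending on :environment loads the Rails app first
    puts "Running #{Rails.env} on Ruby #{RUBY_VERSION}"
  end
end
```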
張 旭

Helm

  • A chart is a collection of files that describe a related set of Kubernetes resources.
  • A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
  • Charts are created as files laid out in a particular directory tree, then they can be packaged into versioned archives to be deployed.
  • A chart is organized as a collection of files inside of a directory.
  • values.yaml # The default configuration values for this chart
  • charts/ # A directory containing any charts upon which this chart depends.
  • templates/ # A directory of templates that, when combined with values, # will generate valid Kubernetes manifest files.
  • version: A SemVer 2 version (required)
  • apiVersion: The chart API version, always "v1" (required)
  • Every chart must have a version number. A version must follow the SemVer 2 standard.
  • non-SemVer names are explicitly disallowed by the system.
  • When generating a package, the helm package command will use the version that it finds in the Chart.yaml as a token in the package name.
  • the appVersion field is not related to the version field. It is a way of specifying the version of the application.
  • appVersion: The version of the app that this contains (optional). This needn't be SemVer.
  • If the latest version of a chart in the repository is marked as deprecated, then the chart as a whole is considered to be deprecated.
  • deprecated: Whether this chart is deprecated (optional, boolean)
  • one chart may depend on any number of other charts.
  • dependencies can be dynamically linked through the requirements.yaml file or brought in to the charts/ directory and managed manually.
  • the preferred method of declaring dependencies is by using a requirements.yaml file inside of your chart.
  • A requirements.yaml file is a simple file for listing your dependencies.
  • The repository field is the full URL to the chart repository.
  • you must also use helm repo add to add that repo locally.
  • helm dependency update and it will use your dependency file to download all the specified charts into your charts/ directory for you.
  • When helm dependency update retrieves charts, it will store them as chart archives in the charts/ directory.
  • Managing charts with requirements.yaml is a good way to easily keep charts updated, and also share requirements information throughout a team.
  • All charts are loaded by default.
  • The condition field holds one or more YAML paths (delimited by commas). If this path exists in the top parent’s values and resolves to a boolean value, the chart will be enabled or disabled based on that boolean value.
  • The tags field is a YAML list of labels to associate with this chart.
  • all charts with tags can be enabled or disabled by specifying the tag and a boolean value.
  • The --set parameter can be used as usual to alter tag and condition values.
  • Conditions (when set in values) always override tags.
  • The first condition path that exists wins and subsequent ones for that chart are ignored.
  • The keys containing the values to be imported can be specified in the parent chart’s requirements.yaml file using a YAML list. Each item in the list is a key which is imported from the child chart’s exports field.
  • specifying the key data in our import list, Helm looks in the exports field of the child chart for data key and imports its contents.
  • the parent key data is not contained in the parent’s final values. If you need to specify the parent key, use the ‘child-parent’ format.
  • To access values that are not contained in the exports key of the child chart’s values, you will need to specify the source key of the values to be imported (child) and the destination path in the parent chart’s values (parent).
  • To drop a dependency into your charts/ directory, use the helm fetch command
  • A dependency can be either a chart archive (foo-1.2.3.tgz) or an unpacked chart directory.
  • a name cannot start with _ or a dot; such files are ignored by the chart loader.
  • a single release is created with all the objects for the chart and its dependencies.
  • Helm Chart templates are written in the Go template language, with the addition of 50 or so add-on template functions from the Sprig library and a few other specialized functions
  • When Helm renders the charts, it will pass every file in that directory through the template engine.
  • Chart developers may supply a file called values.yaml inside of a chart. This file can contain default values.
  • Chart users may supply a YAML file that contains values. This can be provided on the command line with helm install.
  • When a user supplies custom values, these values will override the values in the chart’s values.yaml file.
  • Template files follow the standard conventions for writing Go templates
  • {{default "minio" .Values.storage}}
  • Values that are supplied via a values.yaml file (or via the --set flag) are accessible from the .Values object in a template.
  • pre-defined, are available to every template, and cannot be overridden
  • the names are case sensitive
  • Release.Name: The name of the release (not the chart)
  • Release.IsUpgrade: This is set to true if the current operation is an upgrade or rollback.
  • Release.Revision: The revision number. It begins at 1, and increments with each helm upgrade
  • Chart: The contents of the Chart.yaml
  • Files: A map-like object containing all non-special files in the chart.
  • Files can be accessed using {{index .Files "file.name"}} or using the {{.Files.Get name}} or {{.Files.GetString name}} functions.
  • .helmignore
  • access the contents of the file as []byte using {{.Files.GetBytes}}
  • Any unknown Chart.yaml fields will be dropped
  • Chart.yaml cannot be used to pass arbitrarily structured data into the template.
  • A values file is formatted in YAML.
  • A chart may include a default values.yaml file
  • be merged into the default values file.
  • The default values file included inside of a chart must be named values.yaml
  • accessible inside of templates using the .Values object
  • Values files can declare values for the top-level chart, as well as for any of the charts that are included in that chart’s charts/ directory.
  • Charts at a higher level have access to all of the variables defined beneath.
  • lower level charts cannot access things in parent charts
  • Values are namespaced, but namespaces are pruned.
  • the scope of the values has been reduced and the namespace prefix removed
  • Helm supports a special “global” value.
  • a way of sharing one top-level variable with all subcharts, which is useful for things like setting metadata properties like labels.
  • If a subchart declares a global variable, that global will be passed downward (to the subchart’s subcharts), but not upward to the parent chart.
  • global variables of parent charts take precedence over the global variables from subcharts.
  • helm lint
  • A chart repository is an HTTP server that houses one or more packaged charts
  • Any HTTP server that can serve YAML files and tar files and can answer GET requests can be used as a repository server.
  • Helm does not provide tools for uploading charts to remote repository servers.
  • the only way to add a chart to $HELM_HOME/starters is to manually copy it there.
  • Helm provides a hook mechanism to allow chart developers to intervene at certain points in a release’s life cycle.
  • Execute a Job to back up a database before installing a new chart, and then execute a second job after the upgrade in order to restore data.
  • Hooks are declared as an annotation in the metadata section of a manifest
  • Hooks work like regular templates, but they have special annotations
  • pre-install
  • post-install: Executes after all resources are loaded into Kubernetes
  • pre-delete
  • post-delete: Executes on a deletion request after all of the release’s resources have been deleted.
  • pre-upgrade
  • post-upgrade
  • pre-rollback
  • post-rollback: Executes on a rollback request after all resources have been modified.
  • crd-install
  • test-success: Executes when running helm test and expects the pod to return successfully (return code == 0).
  • test-failure: Executes when running helm test and expects the pod to fail (return code != 0).
  • Hooks allow you, the chart developer, an opportunity to perform operations at strategic points in a release lifecycle
  • Tiller then loads the hook with the lowest weight first (negative to positive)
  • Tiller returns the release name (and other data) to the client
  • If the resources is a Job kind, Tiller will wait until the job successfully runs to completion.
  • if the job fails, the release will fail. This is a blocking operation, so the Helm client will pause while the Job is run.
  • If they have hook weights (see below), they are executed in weighted order. Otherwise, ordering is not guaranteed.
  • good practice to add a hook weight, and set it to 0 if weight is not important.
  • The resources that a hook creates are not tracked or managed as part of the release.
  • leave the hook resource alone.
  • To destroy such resources, you need to either write code to perform this operation in a pre-delete or post-delete hook or add "helm.sh/hook-delete-policy" annotation to the hook template file.
  • Hooks are just Kubernetes manifest files with special annotations in the metadata section
  • One resource can implement multiple hooks
  • no limit to the number of different resources that may implement a given hook.
  • When subcharts declare hooks, those are also evaluated. There is no way for a top-level chart to disable the hooks declared by subcharts.
  • Hook weights can be positive or negative numbers but must be represented as strings.
  • sort those hooks in ascending order.
  • Hook deletion policies
  • "before-hook-creation" specifies Tiller should delete the previous hook before the new hook is launched.
  • By default Tiller will wait for 60 seconds for a deleted hook to no longer exist in the API server before timing out.
  • Custom Resource Definitions (CRDs) are a special kind in Kubernetes.
  • The crd-install hook is executed very early during an installation, before the rest of the manifests are verified.
  • A common reason why the hook resource might already exist is that it was not deleted following use on a previous install/upgrade.
  • Helm uses Go templates for templating your resource files.
  • two special template functions: include and required
  • include function allows you to bring in another template, and then pass the results to other template functions.
  • The required function allows you to declare a particular values entry as required for template rendering.
  • If the value is empty, the template rendering will fail with a user submitted error message.
  • When you are working with string data, you are always safer quoting the strings than leaving them as bare words
  • Quote Strings, Don’t Quote Integers
  • when working with integers do not quote the values
  • env variables values which are expected to be string
  • to include a template, and then perform an operation on that template’s output, Helm has a special include function
  • The above includes a template called toYaml, passes it $value, and then passes the output of that template to the nindent function.
  • Go provides a way for setting template options to control behavior when a map is indexed with a key that’s not present in the map
  • The required function gives developers the ability to declare a value entry as required for template rendering.
  • The tpl function allows developers to evaluate strings as templates inside a template.
  • Rendering a external configuration file
  • (.Files.Get "conf/app.conf")
  • Image pull secrets are essentially a combination of registry, username, and password.
  • Automatically Roll Deployments When ConfigMaps or Secrets change
  • configmaps or secrets are injected as configuration files in containers
  • a restart may be required should those be updated with a subsequent helm upgrade
  • The sha256sum function can be used to ensure a deployment’s annotation section is updated if another file changes
  • checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
  • helm upgrade --recreate-pods
  • "helm.sh/resource-policy": keep
  • resources that should not be deleted when Helm runs a helm delete
  • this resource becomes orphaned. Helm will no longer manage it in any way.
  • create some reusable parts in your chart
  • In the templates/ directory, any file that begins with an underscore(_) is not expected to output a Kubernetes manifest file.
  • by convention, helper templates and partials are placed in a _helpers.tpl file.
  • The current best practice for composing a complex application from discrete parts is to create a top-level umbrella chart that exposes the global configurations, and then use the charts/ subdirectory to embed each of the components.
  • SAP’s Converged charts: These charts install SAP Converged Cloud a full OpenStack IaaS on Kubernetes. All of the charts are collected together in one GitHub repository, except for a few submodules.
  • Deis’s Workflow: This chart exposes the entire Deis PaaS system with one chart. But it’s different from the SAP chart in that this umbrella chart is built from each component, and each component is tracked in a different Git repository.
  • YAML is a superset of JSON
  • any valid JSON structure ought to be valid in YAML.
  • As a best practice, templates should follow a YAML-like syntax unless the JSON syntax substantially reduces the risk of a formatting issue.
  • There are functions in Helm that allow you to generate random data, cryptographic keys, and so on.
  • a chart repository is a location where packaged charts can be stored and shared.
  • A chart repository is an HTTP server that houses an index.yaml file and optionally some packaged charts.
  • Because a chart repository can be any HTTP server that can serve YAML and tar files and can answer GET requests, you have a plethora of options when it comes down to hosting your own chart repository.
  • It is not required that a chart package be located on the same server as the index.yaml file.
  • A valid chart repository must have an index file. The index file contains information about each chart in the chart repository.
  • The Helm project provides an open-source Helm repository server called ChartMuseum that you can host yourself.
  • $ helm repo index fantastic-charts --url https://fantastic-charts.storage.googleapis.com
  • A repository will not be added if it does not contain a valid index.yaml
  • add the repository to their helm client via the helm repo add [NAME] [URL] command with any name they would like to use to reference the repository.
  • Helm has provenance tools which help chart users verify the integrity and origin of a package.
  • Integrity is established by comparing a chart to a provenance record
  • The provenance file contains a chart’s YAML file plus several pieces of verification information
  • Chart repositories serve as a centralized collection of Helm charts.
  • Chart repositories must make it possible to serve provenance files over HTTP via a specific request, and must make them available at the same URI path as the chart.
  • We don’t want to be “the certificate authority” for all chart signers. Instead, we strongly favor a decentralized model, which is part of the reason we chose OpenPGP as our foundational technology.
  • The Keybase platform provides a public centralized repository for trust information.
  • A chart contains a number of Kubernetes resources and components that work together.
  • A test in a helm chart lives under the templates/ directory and is a pod definition that specifies a container with a given command to run.
  • The pod definition must contain one of the helm test hook annotations: helm.sh/hook: test-success or helm.sh/hook: test-failure
  • helm test
  • nest your test suite under a tests/ directory like <chart-name>/templates/tests/
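
A tiny template sketch combining a few of the functions annotated above (default, quote, required); the chart layout and value keys are assumed:

```yaml
# templates/configmap.yaml -- illustrative only
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config            # Release.Name: the release, not the chart
data:
  # quote strings, don't quote integers
  storage: {{ .Values.storage | default "minio" | quote }}
  # rendering fails with this message if .Values.region is empty
  region: {{ required "region is a required value" .Values.region | quote }}
```

Combined with the checksum/config annotation noted above on a Deployment, changing the rendered ConfigMap would also roll the pods.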
張 旭

Open source load testing tool review 2020

  • Hey is a simple tool, written in Go, with good performance and the most common features you'll need to run simple static URL tests.
  • Hey supports HTTP/2, which neither Wrk nor Apachebench does
  • Apachebench is very fast, so often you will not need more than one CPU core to generate enough traffic
  • Hey has rate limiting, which can be used to run fixed-rate tests.
  • Vegeta was designed to be run on the command line; it reads from stdin a list of HTTP transactions to generate, and sends results in binary format to stdout,
  • Vegeta is a really strong tool that caters to people who want a tool to test simple, static URLs (perhaps API end points) but also want a bit more functionality.
  • Vegeta can even be used as a Golang library/package if you want to create your own load testing tool.
  • Wrk is so damn fast
  • being fast and measuring correctly is about all that Wrk does
  • k6 is scriptable in plain Javascript
  • k6 is average or better. In some categories (documentation, scripting API, command line UX) it is outstanding.
  • Jmeter is a huge beast compared to most other tools.
  • Siege is a simple tool, similar to e.g. Apachebench in that it has no scripting and is primarily used when you want to hit a single, static URL repeatedly.
  • A good way of testing the testing tools is to not test them on your code, but on some third-party thing that is sure to be very high-performing.
  • use a tool like e.g. top to keep track of Nginx CPU usage while testing. If you see just one process, and see it using close to 100% CPU, it means you could be CPU-bound on the target side.
  • If you see multiple Nginx processes but only one is using a lot of CPU, it means your load testing tool is only talking to that particular worker process.
  • Network delay is also important to take into account as it sets an upper limit on the number of requests per second you can push through.
  • If, say, the Nginx default page requires a transfer of 250 bytes to load, it means that if the servers are connected via a 100 Mbit/s link, the theoretical max RPS rate would be around 100,000,000 divided by 8 (bits per byte) divided by 250 => 100M/2000 = 50,000 RPS. Though that is a very optimistic calculation - protocol overhead will make the actual number a lot lower so in the case above I would start to get worried bandwidth was an issue if I saw I could push through max 30,000 RPS, or something like that.
  • Wrk managed to push through over 50,000 RPS and that made 8 Nginx workers on the target system consume about 600% CPU.
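
For reference, typical invocations of three of the tools discussed, pointed at a local Nginx; flags are from each tool's documented CLI, and the target URL is an assumption:

```sh
# hey: 50 concurrent workers, each rate-limited to 20 req/s, for 30 seconds
hey -z 30s -c 50 -q 20 http://127.0.0.1/

# vegeta: reads HTTP targets from stdin, emits binary results to stdout
echo "GET http://127.0.0.1/" | vegeta attack -rate=1000 -duration=30s | vegeta report

# wrk: 4 threads, 100 connections, 30 seconds
wrk -t4 -c100 -d30s http://127.0.0.1/
```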
張 旭

ALB vs ELB | Differences Between an ELB and an ALB on AWS | Sumo Logic

  • If you use AWS, you have two load-balancing options: ELB and ALB.
  • An ELB is a software-based load balancer which can be set up and configured in front of a collection of AWS Elastic Compute (EC2) instances.
  • The load balancer serves as a single entry point for consumers of the EC2 instances and distributes incoming traffic across all machines available to receive requests.
  • the ELB also performs a vital role in improving the fault tolerance of the services which it fronts.
  • The Open Systems Interconnection Model, or OSI Model, is a conceptual model which is used to facilitate communications between different computing systems.
  • Layer 1 is the physical layer, and represents the physical medium across which the request is sent.
  • Layer 2 describes the data link layer
  • Layer 3 (the network layer)
  • Layer 7, which serves the application layer.
  • The Classic ELB operates at Layer 4. Layer 4 represents the transport layer, and is controlled by the protocol being used to transmit the request.
  • A network device, of which the Classic ELB is an example, reads the protocol and port of the incoming request, and then routes it to one or more backend servers.
  • the ALB operates at Layer 7. Layer 7 represents the application layer, and as such allows for the redirection of traffic based on the content of the request.
  • Whereas a request to a specific URL backed by a Classic ELB would only enable routing to a particular pool of homogeneous servers, the ALB can route based on the content of the URL, and direct to a specific subgroup of backing servers existing in a heterogeneous collection registered with the load balancer.
  • The Classic ELB is a simple load balancer, is easy to configure
  • As organizations move towards microservice architecture or adopt a container-based infrastructure, the ability to merely map a single address to a specific service becomes more complicated and harder to maintain.
  • the ALB manages routing based on user-defined rules.
  • route traffic to different services based on either the host or the content of the path contained within that URL.
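
A sketch of what such a user-defined rule can look like in CloudFormation; ARNs, priority, and path are placeholders:

```yaml
PathRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    ListenerArn: !Ref HttpListener
    Priority: 10
    Conditions:
      - Field: path-pattern         # Layer 7: match on the content of the URL
        Values:
          - "/api/*"
    Actions:
      - Type: forward               # send matches to a specific subgroup of servers
        TargetGroupArn: !Ref ApiTargetGroup
```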
張 旭

Specification - Swagger

shared by 張 旭 on 29 Jul 16
  • A list of parameters that are applicable for all the operations described under this path.
  • MUST NOT include duplicated parameters
  • this field SHOULD be less than 120 characters.
  • Unique string used to identify the operation.
  • The id MUST be unique among all operations described in the API.
  • A list of MIME types the operation can consume.
  • A list of MIME types the operation can produce
  • A unique parameter is defined by a combination of a name and location.
  • There can be one "body" parameter at most.
  • Required. The list of possible responses as they are returned from executing this operation.
  • The transfer protocol for the operation. Values MUST be from the list: "http", "https", "ws", "wss".
  • Declares this operation to be deprecated. Usage of the declared operation should be refrained. Default value is false.
  • A declaration of which security schemes are applied for this operation.
  • A unique parameter is defined by a combination of a name and location.
  • Path
  • Query
  • Header
  • Body
  • Form
  • Required. The location of the parameter. Possible values are "query", "header", "path", "formData" or "body".
  • the parameter value is actually part of the operation's URL
  • Parameters that are appended to the URL
  • The payload that's appended to the HTTP request.
  • Since there can only be one payload, there can only be one body parameter.
  • The name of the body parameter has no effect on the parameter itself and is used for documentation purposes only
  • body and form parameters cannot exist together for the same operation
  • This is the only parameter type that can be used to send files, thus supporting the file type.
  • If the parameter is in "path", this property is required and its value MUST be true.
  • default value is false.
  • The schema defining the type used for the body parameter.
  • The value MUST be one of "string", "number", "integer", "boolean", "array" or "file"
  • Default value is false
  • Required if type is "array". Describes the type of items in the array.
  • Determines the format of the array if type array is used
  • enum
  • pattern
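
A fragment showing how these pieces fit together in a Swagger 2.0 document; path, operation, and model names are assumed:

```yaml
paths:
  /pets/{id}:
    put:
      operationId: updatePet          # must be unique among all operations in the API
      consumes: ["application/json"]  # MIME types the operation can consume
      produces: ["application/json"]  # MIME types the operation can produce
      parameters:
        - name: id
          in: path
          required: true              # required MUST be true for path parameters
          type: string
        - name: pet
          in: body                    # at most one body parameter per operation
          schema:
            $ref: "#/definitions/Pet"
      responses:
        "200":
          description: the updated pet
```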
張 旭

Basics - Træfik

  • Modifier rules only modify the request. They do not have any impact on routing decisions being made.
  • A frontend consists of a set of rules that determine how incoming requests are forwarded from an entrypoint to a backend.
  • Entrypoints are the network entry points into Træfik
  • Modifiers and matchers
  • Matcher rules determine if a particular request should be forwarded to a backend
  • if any rule matches
  • if all rules match
  • In order to use regular expressions with Host and Path matchers, you must declare an arbitrarily named variable followed by the colon-separated regular expression, all enclosed in curly braces.
  • Use a *Prefix* matcher if your backend listens on a particular base path but also serves requests on sub-paths. For instance, PathPrefix: /products would match /products but also /products/shoes and /products/shirts. Since the path is forwarded as-is, your backend is expected to listen on /products
  • Use Path if your backend listens on the exact path only. For instance, Path: /products would match /products but not /products/shoes.
  • Modifier rules ALWAYS apply after the Matcher rules.
  • A backend is responsible to load-balance the traffic coming from one or more frontends to a set of http servers
  • wrr: Weighted Round Robin
  • drr: Dynamic Round Robin: increases weights on servers that perform better than others.
  • A circuit breaker can also be applied to a backend, preventing high loads on failing servers.
  • To proactively prevent backends from being overwhelmed with high load, a maximum connection limit can also be applied to each backend.
  • Sticky sessions are supported with both load balancers.
  • When sticky sessions are enabled, a cookie is set on the initial request.
  • The check is defined by a path appended to the backend URL and an interval (given in a format understood by time.ParseDuration) specifying how often the health check should be executed (the default being 30 seconds). Each backend must respond to the health check within 5 seconds.
  • The static configuration is the global configuration which is setting up connections to configuration backends and entrypoints.
  • We only need to enable watch option to make Træfik watch configuration backend changes and generate its configuration automatically.
  • Separate the regular expression and the replacement by a space.
  • a comma-separated key/value pair where both key and value must be literals.
  • namespacing of your backends happens on the basis of hosts in addition to paths
  • Modifiers will be applied in a pre-determined order regardless of their order in the rule configuration section.
  • customize priority
  • Custom headers can be configured through the frontends, to add headers to either requests or responses that match the frontend's rules.
  • Security related headers (HSTS headers, SSL redirection, Browser XSS filter, etc) can be added and configured per frontend in a similar manner to the custom headers above.
  • Servers are simply defined using a url. You can also apply a custom weight to each server (this will be used by load-balancing).
  • Maximum connections can be configured by specifying an integer value for maxconn.amount and maxconn.extractorfunc which is a strategy used to determine how to categorize requests in order to evaluate the maximum connections.
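
A file-provider sketch in Træfik v1 TOML showing a matcher rule plus the backend options above; the host, path, and server URL are assumptions:

```toml
[frontends]
  [frontends.products]
  backend = "products"
    [frontends.products.routes.r1]
    rule = "Host:example.com;PathPrefix:/products"   # all matchers must match

[backends]
  [backends.products]
    [backends.products.loadbalancer]
      method = "drr"                   # dynamic round robin
    [backends.products.maxconn]
      amount = 10
      extractorfunc = "request.host"   # how requests are categorized for the limit
    [backends.products.healthcheck]
      path = "/ping"                   # appended to the backend URL
      interval = "30s"
    [backends.products.servers.s1]
      url = "http://10.0.0.1:80"
      weight = 1                       # used by load balancing
```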
張 旭

ruby-grape/grape: An opinionated framework for creating REST-like APIs in Ruby.

shared by 張 旭 on 17 Dec 16
  • Grape is a REST-like API framework for Ruby.
  • designed to run on Rack or complement existing web application frameworks such as Rails and Sinatra by providing a simple DSL to easily develop RESTful APIs
  • Grape APIs are Rack applications that are created by subclassing Grape::API
  • Rails expects a subdirectory that matches the name of the Ruby module and a file name that matches the name of the class
  • mount multiple API implementations inside another one
  • mount on a path, which is similar to using prefix inside the mounted API itself.
  • four strategies in which clients can reach your API's endpoints: :path, :header, :accept_version_header and :param
  • clients should pass the desired version as a request parameter, either in the URL query string or in the request body.
  • clients should pass the desired version in the HTTP Accept header
  • clients should pass the desired version in the URL
  • clients should pass the desired version in the HTTP Accept-Version header.
  • add a description to API methods and namespaces
  • Request parameters are available through the params hash object
  • Parameters are automatically populated from the request body on POST and PUT
  • route string parameters will have precedence.
  • Grape allows you to access only the parameters that have been declared by your params block
  • By default declared(params) includes parameters that have nil values
  • all valid types
  • type: File
  • JSON objects and arrays of objects are accepted equally
  • any class can be used as a type so long as an explicit coercion method is supplied
  • As a special case, variant-member-type collections may also be declared, by passing a Set or Array with more than one member to type
  • Parameters can be nested using group or by calling requires or optional with a block
  • relevant if another parameter is given
  • Parameters options can be grouped
  • allow_blank can be combined with both requires and optional
  • Parameters can be restricted to a specific set of values
  • Parameters can be restricted to match a specific regular expression
  • Never define mutually exclusive sets with any required params
  • Namespaces allow parameter definitions and apply to every method within the namespace
  • define a route parameter as a namespace using route_param
  • create custom validation that use request to validate the attribute
  • rescue a Grape::Exceptions::ValidationErrors and respond with a custom response or turn the response into well-formatted JSON for a JSON API that separates individual parameters and the corresponding error messages
  • custom validation messages
  • Request headers are available through the headers helper or from env in their original form
  • define requirements for your named route parameters using regular expressions on namespace or endpoint
  • route will match only if all requirements are met
  • mix in a module
  • define reusable params
  • using cookies method
  • a 201 for POST-Requests
  • 204 for DELETE-Requests
  • 200 status code for all other Requests
  • use status to query and set the actual HTTP Status Code
  • raising errors with error!
  • It is very crucial to define this endpoint at the very end of your API, as it literally accepts every request.
  • rescue_from will rescue the exceptions listed and all their subclasses.
  • Grape::API provides a logger method which by default will return an instance of the Logger class from Ruby's standard library.
  • Grape supports a range of ways to present your data
  • Grape has built-in Basic and Digest authentication (the given block is executed in the context of the current Endpoint).
  • Authentication applies to the current namespace and any children, but not parents.
  • Blocks can be executed before or after every API call, using before, after, before_validation and after_validation
  • Before and after callbacks execute in the following order
  • Grape by default anchors all request paths, which means that the request URL should match from start to end to match
  • The namespace method has a number of aliases, including: group, resource, resources, and segment. Use whichever reads the best for your API.
  • test a Grape API with RSpec by making HTTP requests and examining the response
  • POST JSON data and specify the correct content-type.
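A minimal sketch pulling several of these notes together: path-based versioning, a params block with requires/optional, and declared(params) to drop anything the block does not name. The Twitter::API module and the /statuses resource are illustrative, not taken from the highlights above.

```ruby
require 'grape'

module Twitter
  class API < Grape::API
    version 'v1', using: :path   # clients request /v1/statuses/...
    format :json

    resource :statuses do
      desc 'Create a status.'
      params do
        requires :status, type: String, allow_blank: false
        optional :visibility, type: String, values: %w[public private]
      end
      post do
        # Only parameters named in the params block survive this call.
        declared(params, include_missing: false)
      end
    end
  end
end
```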
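And a sketch of the error-handling notes: rescue_from maps an exception class (and all of its subclasses) to a response, while error! aborts a request explicitly. ArgumentError here stands in for whatever exception your API actually raises.

```ruby
require 'grape'

class ErrorDemo < Grape::API
  format :json

  # Rescues ArgumentError and all of its subclasses.
  rescue_from ArgumentError do |e|
    error!({ message: e.message }, 400)
  end

  get :boom do
    raise ArgumentError, 'bad input'   # rendered as a 400 by the handler
  end

  get :forbidden do
    error!('Access Denied', 403)       # explicit early abort
  end
end
```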
crazylion lee

Jupyter Notebook Viewer - 0 views

  •  
    "Advent of Code Solutions"
張 旭

Blog Tutorial - Adding a layer - CakePHP Cookbook v2.x documentation - 1 views

  • Cake views are just presentation-flavored fragments that fit inside an application’s layout. For most applications they’re HTML mixed with PHP, but they may end up as XML, CSV, or even binary data.
  • Layouts are presentation code that is wrapped around a view, and can be defined and switched between
  • if the HTTP method of the request was POST, try to save the data using the Post model (sketched in the controller example after this list).
  • ...6 more annotations...
  • You can also specify URLs relative to the base of the application in the form of /controller/action/param1/param2.
  • Every CakePHP request includes a CakeRequest object which is accessible using $this->request.
  • When a user uses a form to POST data to your application, that information is available in $this->request->data.
  • The $this->Form->input() method is used to create form elements of the same name.
  • postLink() will create a link that uses JavaScript to do a POST request deleting our post.
  • Allowing content to be deleted using GET requests is dangerous
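A sketch of the add action these highlights describe, in the shape the 2.x blog tutorial uses; the flash messages are illustrative.

```php
<?php
App::uses('AppController', 'Controller');

class PostsController extends AppController {

    public function add() {
        // Only attempt a save when the request arrived via POST.
        if ($this->request->is('post')) {
            $this->Post->create();
            if ($this->Post->save($this->request->data)) {
                $this->Session->setFlash(__('Your post has been saved.'));
                return $this->redirect(array('action' => 'index'));
            }
            $this->Session->setFlash(__('Unable to add your post.'));
        }
    }
}
```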
crazylion lee

ShareX - Take screenshots or screencasts, annotate, upload and share URL in clipboard - 0 views

  •  
    "Sharing has never been easier."
張 旭

Best practices for writing Dockerfiles - Docker Documentation - 0 views

  • Run only one process per container
  • use current Official Repositories as the basis for your image
  • put long or complex RUN statements on multiple lines separated with backslashes.
  • ...16 more annotations...
  • CMD instruction should be used to run the software contained by your image, along with any arguments
  • CMD should be given an interactive shell (bash, python, perl, etc)
  • COPY them individually, rather than all at once
  • COPY is preferred
  • using ADD to fetch packages from remote URLs is strongly discouraged
  • always use COPY
  • The best use for ENTRYPOINT is to set the image's main command, allowing that image to be run as though it was that command (and then use CMD as the default flags). See the Dockerfile sketch after this list.
  • the image name can double as a reference to the binary as shown in the command above
  • ENTRYPOINT instruction can also be used in combination with a helper script
  • The VOLUME instruction should be used to expose any database storage area, configuration storage, or files/folders created by your docker container.
  • use USER to change to a non-root user
  • avoid installing or using sudo
  • avoid switching USER back and forth frequently.
  • always use absolute paths for your WORKDIR
  • ONBUILD is only useful for images that are going to be built FROM a given image
  • The “onbuild” image will fail catastrophically if the new build's context is missing the resource being added.
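A minimal sketch of the ENTRYPOINT-plus-CMD pattern described above, using s3cmd as the packaged tool; the base image and install step are assumptions.

```dockerfile
FROM python:3-alpine
RUN pip install --no-cache-dir s3cmd

# The image's main command; CMD supplies the default flags.
ENTRYPOINT ["s3cmd"]
CMD ["--help"]
```

Built as, say, `docker build -t s3cmd .`, running `docker run s3cmd` prints the help text, while `docker run s3cmd ls s3://mybucket` overrides the default flags: the image name doubles as the binary.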
張 旭

我所认为的RESTful API最佳实践 · ScienJus's Blog - 0 views

  • It is best to leave filtering, pagination, and sorting entirely to the client: filter conditions, page number or cursor, page size, sort field, and ascending/descending order. This makes the API much more flexible.
  • Agonizing over strict compliance with the spec only creates needless trouble.
  • The API's version number has nothing to do with the client app's version number.
  • ...13 more annotations...
  • Apart from POST, the other three request methods are idempotent (repeating a request has the same effect as sending it once).
  • PUT can also be used for creation, as long as the resource's id can be determined before it is created.
  • Always use the shortest URL path that can identify the resource.
  • Verbs are discouraged in RESTful URLs.
  • Here I chose PUT rather than POST, because I think an action like "liking" should be idempotent: performing it multiple times should produce the same result.
  • Token liveness is usually maintained in one of two ways: either every user operation extends or resets the token's time to live (similar to how a cache works), or the token's lifetime is fixed but a refresh token is returned alongside it, so an expired token can be refreshed instead of forcing a new login.
  • Typically the client concatenates the request parameters and hashes the result into a Sign value sent to the server; even if the traffic is captured, an attacker who alters the parameters cannot produce a matching Sign, so the server detects the tampering.
  • Use HTTP status codes wherever possible.
  • data is the payload actually being returned and is present only when the request succeeds; msg is used only in the development environment, and only so developers can identify problems. Client logic may branch only on code, and must never show msg directly to users (see the example response after this list).
  • JSON is easier to read than XML and uses less bandwidth, so avoid XML whenever possible.
  • After a successful create or update, return the resource's full representation.
  • Even if one client page needs to display several resources, do not return them all from a single endpoint; have the client request each endpoint separately.
  • Cursor-based pagination is recommended.
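A sketch of the response envelope the post describes; the field names code, msg, and data come from the post, while the values are made up.

```json
{
  "code": 0,
  "msg": "ok",
  "data": {
    "id": 42,
    "nickname": "ScienJus"
  }
}
```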
張 旭

Best practices for writing Dockerfiles | Docker Documentation - 0 views

  • building efficient images
  • Docker builds images automatically by reading the instructions from a Dockerfile -- a text file that contains all commands, in order, needed to build a given image.
  • A Docker image consists of read-only layers each of which represents a Dockerfile instruction.
  • ...47 more annotations...
  • The layers are stacked and each one is a delta of the changes from the previous layer
  • When you run an image and generate a container, you add a new writable layer (the “container layer”) on top of the underlying layers.
  • By “ephemeral,” we mean that the container can be stopped and destroyed, then rebuilt and replaced with an absolute minimum set up and configuration.
  • Inadvertently including files that are not necessary for building an image results in a larger build context and larger image size.
  • To exclude files not relevant to the build (without restructuring your source repository) use a .dockerignore file. This file supports exclusion patterns similar to .gitignore files.
  • minimize image layers by leveraging build cache.
  • if your build contains several layers, you can order them from the less frequently changed (to ensure the build cache is reusable) to the more frequently changed
  • avoid installing extra or unnecessary packages just because they might be “nice to have.”
  • Each container should have only one concern.
  • Decoupling applications into multiple containers makes it easier to scale horizontally and reuse containers
  • Limiting each container to one process is a good rule of thumb, but it is not a hard and fast rule.
  • Use your best judgment to keep containers as clean and modular as possible.
  • do multi-stage builds and only copy the artifacts you need into the final image. This allows you to include tools and debug information in your intermediate build stages without increasing the size of the final image (see the multi-stage sketch after this list).
  • avoid duplication of packages and make the list much easier to update.
  • When building an image, Docker steps through the instructions in your Dockerfile, executing each in the order specified.
  • the next instruction is compared against all child images derived from that base image to see if one of them was built using the exact same instruction. If not, the cache is invalidated.
  • simply comparing the instruction in the Dockerfile with one of the child images is sufficient.
  • For the ADD and COPY instructions, the contents of the file(s) in the image are examined and a checksum is calculated for each file.
  • If anything has changed in the file(s), such as the contents and metadata, then the cache is invalidated.
  • cache checking does not look at the files in the container to determine a cache match.
  • In that case just the command string itself is used to find a match.
    • 張 旭
       
      For an instruction like RUN apt-get, this means the cache check compares the instruction string itself.
  • Whenever possible, use current official repositories as the basis for your images.
  • Using RUN apt-get update && apt-get install -y ensures your Dockerfile installs the latest package versions with no further coding or manual intervention (see the cache-busting sketch after this list).
  • cache busting
  • Docker executes these commands using the /bin/sh -c interpreter, which only evaluates the exit code of the last operation in the pipe to determine success.
  • set -o pipefail && to ensure that an unexpected error prevents the build from inadvertently succeeding.
  • The CMD instruction should be used to run the software contained by your image, along with any arguments.
  • CMD should almost always be used in the form of CMD [“executable”, “param1”, “param2”…]
  • CMD should rarely be used in the manner of CMD [“param”, “param”] in conjunction with ENTRYPOINT
  • The ENV instruction is also useful for providing required environment variables specific to services you wish to containerize,
  • Each ENV line creates a new intermediate layer, just like RUN commands
  • COPY is preferred
  • COPY only supports the basic copying of local files into the container
  • the best use for ADD is local tar file auto-extraction into the image, as in ADD rootfs.tar.xz /
  • If you have multiple Dockerfile steps that use different files from your context, COPY them individually, rather than all at once.
  • using ADD to fetch packages from remote URLs is strongly discouraged; you should use curl or wget instead
  • The best use for ENTRYPOINT is to set the image’s main command, allowing that image to be run as though it was that command (and then use CMD as the default flags).
  • the image name can double as a reference to the binary as shown in the command
  • The VOLUME instruction should be used to expose any database storage area, configuration storage, or files/folders created by your docker container.
  • use VOLUME for any mutable and/or user-serviceable parts of your image
  • If you absolutely need functionality similar to sudo (such as initializing the daemon as root but running it as non-root), consider using “gosu”.
  • always use absolute paths for your WORKDIR
  • An ONBUILD command executes after the current Dockerfile build completes.
  • Think of the ONBUILD command as an instruction the parent Dockerfile gives to the child Dockerfile
  • A Docker build executes ONBUILD commands before any command in a child Dockerfile.
  • Be careful when putting ADD or COPY in ONBUILD. The “onbuild” image fails catastrophically if the new build’s context is missing the resource being added.
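A sketch of the cache-busting and pipefail advice above; the package names and download URL are placeholders.

```dockerfile
FROM ubuntu:22.04

# update and install in one RUN, so a stale cached `apt-get update`
# layer can never be reused with a newer package list ("cache busting").
RUN apt-get update && apt-get install -y --no-install-recommends \
        ca-certificates \
        curl \
    && rm -rf /var/lib/apt/lists/*

# /bin/sh may not support -o pipefail, so run the pipe under bash;
# otherwise only the last command's exit code would decide success.
RUN ["/bin/bash", "-c", "set -o pipefail && curl -fsSL https://example.com/app.tar.gz | tar -xz -C /opt"]
```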
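And a multi-stage sketch: the toolchain stays in the build stage, and only the artifact is copied into the final image. The Go program and the image tags are assumptions.

```dockerfile
# Build stage: full toolchain, never shipped.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

# Final stage: only the compiled artifact.
FROM gcr.io/distroless/static-debian12
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```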
張 旭

Template Designer Documentation - Jinja2 Documentation (2.10) - 0 views

  • A Jinja template doesn’t need to have a specific extension
  • A Jinja template is simply a text file
  • tags, which control the logic of the template
  • ...106 more annotations...
  • {% ... %} for Statements
  • {{ ... }} for Expressions to print to the template output
  • use a dot (.) to access attributes of a variable
  • the outer double-curly braces are not part of the variable, but the print statement.
  • If you access variables inside tags don’t put the braces around them.
  • If a variable or attribute does not exist, you will get back an undefined value.
  • the default behavior is to evaluate to an empty string if printed or iterated over, and to fail for every other operation.
  • the distinction between . and [] matters if an object has an item and an attribute with the same name; additionally, the attr() filter only looks up attributes.
  • Variables can be modified by filters. Filters are separated from the variable by a pipe symbol (|) and may have optional arguments in parentheses.
  • Multiple filters can be chained
  • Tests can be used to test a variable against a common expression.
  • To test a variable, add is plus the name of the test after the variable.
  • to find out if a variable is defined, you can do name is defined, which will then return true or false depending on whether name is defined in the current template context.
  • strip whitespace in templates by hand. If you add a minus sign (-) to the start or end of a block (e.g. a For tag), a comment, or a variable expression, the whitespaces before or after that block will be removed
  • do not add whitespace between the tag and the minus sign
  • mark a block raw
  • Template inheritance allows you to build a base “skeleton” template that contains all the common elements of your site and defines blocks that child templates can override (see the inheritance sketch after this list).
  • The {% extends %} tag is the key here. It tells the template engine that this template “extends” another template.
  • access templates in subdirectories with a slash
  • can’t define multiple {% block %} tags with the same name in the same template
  • use the special self variable and call the block with that name
  • self.title()
  • super()
  • put the name of the block after the end tag for better readability
  • if the block is replaced by a child template, a variable would appear that was not defined in the block or passed to the context.
  • setting the block to “scoped” by adding the scoped modifier to a block declaration
  • If you have a variable that may include any of the following chars (>, <, &, or ") you SHOULD escape it unless the variable contains well-formed and trusted HTML.
  • Jinja2 functions (macros, super, self.BLOCKNAME) always return template data that is marked as safe.
  • With the default syntax, control structures appear inside {% ... %} blocks.
  • the dictsort filter
  • loop.cycle
  • Unlike in Python, it’s not possible to break or continue in a loop
  • use loops recursively
  • add the recursive modifier to the loop definition and call the loop variable with the new iterable where you want to recurse.
  • The loop variable always refers to the closest (innermost) loop.
  • whether the value changed at all,
  • use it to test if a variable is defined, not empty and not false
  • Macros are comparable with functions in regular programming languages (see the macro sketch after this list).
  • If a macro name starts with an underscore, it’s not exported and can’t be imported.
  • pass a macro to another macro
  • caller()
  • a single trailing newline is stripped if present
  • other whitespace (spaces, tabs, newlines etc.) is returned unchanged
  • a block tag works in “both” directions. That is, a block tag doesn’t just provide a placeholder to fill - it also defines the content that fills the placeholder in the parent.
  • Python dicts are not ordered
  • caller(user)
  • call(user)
  • This is a simple dialog rendered by using a macro and a call block.
  • Filter sections allow you to apply regular Jinja2 filters on a block of template data.
  • Assignments at top level (outside of blocks, macros or loops) are exported from the template like top level macros and can be imported by other templates.
  • using namespace objects which allow propagating of changes across scopes
  • use block assignments to capture the contents of a block into a variable name.
  • The extends tag can be used to extend one template from another.
  • Blocks are used for inheritance and act as both placeholders and replacements at the same time.
  • The include statement is useful to include a template and return the rendered contents of that file into the current namespace
  • Included templates have access to the variables of the active context by default.
  • putting often used code into macros
  • imports are cached and imported templates don’t have access to the current template variables, just the globals by default.
  • Macros and variables starting with one or more underscores are private and cannot be imported.
  • By default, included templates are passed the current context and imported templates are not.
  • imports are often used just as a module that holds macros.
  • Integers and floating point numbers are created by just writing the number down
  • Everything between two brackets is a list.
  • Tuples are like lists that cannot be modified (“immutable”).
  • A dict in Python is a structure that combines keys and values.
  • // Divide two numbers and return the truncated integer result
  • The special constants true, false, and none are indeed lowercase
  • all Jinja identifiers are lowercase
  • (expr) group an expression.
  • The is and in operators support negation using an infix notation
  • in Perform a sequence / mapping containment test.
  • | Applies a filter.
  • ~ Converts all operands into strings and concatenates them.
  • use inline if expressions.
  • always an attribute is returned and items are not looked up.
  • default(value, default_value=u'', boolean=False): If the value is undefined it will return the passed default value, otherwise the value of the variable
  • dictsort(value, case_sensitive=False, by='key', reverse=False): Sort a dict and yield (key, value) pairs.
  • format(value, *args, **kwargs): Apply Python string formatting on an object
  • groupby(value, attribute): Group a sequence of objects by a common attribute.
  • grouping by is stored in the grouper attribute and the list contains all the objects that have this grouper in common.
  • indent(s, width=4, first=False, blank=False, indentfirst=None): Return a copy of the string with each line indented by 4 spaces. The first line and blank lines are not indented by default.
  • join(value, d=u'', attribute=None): Return a string which is the concatenation of the strings in the sequence.
  • map(): Applies a filter on a sequence of objects or looks up an attribute.
  • pprint(value, verbose=False): Pretty print a variable. Useful for debugging.
  • reject(): Filters a sequence of objects by applying a test to each object, and rejecting the objects with the test succeeding.
  • replace(s, old, new, count=None): Return a copy of the value with all occurrences of a substring replaced with a new one.
  • round(value, precision=0, method='common'): Round the number to a given precision
  • even if rounded to 0 precision, a float is returned.
  • select(): Filters a sequence of objects by applying a test to each object, and only selecting the objects with the test succeeding.
  • sort(value, reverse=False, case_sensitive=False, attribute=None): Sort an iterable. By default it sorts ascending; pass true as the first argument to reverse the sorting.
  • striptags(value): Strip SGML/XML tags and replace adjacent whitespace by one space.
  • tojson(value, indent=None): Dumps a structure to JSON so that it's safe to use in <script> tags.
  • trim(value): Strip leading and trailing whitespace.
  • unique(value, case_sensitive=False, attribute=None): Returns a list of unique items from the given iterable
  • urlize(value, trim_url_limit=None, nofollow=False, target=None, rel=None): Converts URLs in plain text into clickable links.
  • defined(value): Return true if the variable is defined
  • in(value, seq): Check if value is in seq.
  • mapping(value): Return true if the object is a mapping (dict etc.).
  • number(value): Return true if the variable is a number.
  • sameas(value, other): Check if an object points to the same memory address as another object
  • undefined(value): Like defined() but the other way round.
  • A joiner is passed a string and will return that string every time it's called, except the first time (in which case it returns an empty string).
  • namespace(...): Creates a new container that allows attribute assignment using the {% set %} tag
  • The with statement makes it possible to create a new inner scope. Variables set within this scope are not visible outside of the scope.
  • activate and deactivate the autoescaping from within the templates
  • With both trim_blocks and lstrip_blocks enabled, you can put block tags on their own lines, and the entire block line will be removed when rendered, preserving the whitespace of the contents
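A minimal inheritance sketch tying the notes above together (extends, blocks, super(), a named endblock); the file names are illustrative.

```jinja
{# base.html #}
<!DOCTYPE html>
<title>{% block title %}My site{% endblock %}</title>
<div id="content">{% block content %}{% endblock %}</div>

{# child.html #}
{% extends "base.html" %}
{% block title %}Index | {{ super() }}{% endblock %}
{% block content %}
  {% for user in users %}
    <p>{{ user.name | e }}</p>
  {% endfor %}
{% endblock content %}
```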
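And a macro sketch: input() is the classic example from the docs, shown here with a default argument, whitespace-trimming minus signs, and HTML escaping.

```jinja
{% macro input(name, value='', type='text') -%}
  <input type="{{ type }}" name="{{ name }}" value="{{ value | e }}">
{%- endmacro %}

{{ input('username') }}
{{ input('password', type='password') }}
```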
張 旭

DNS Records: an Introduction - 0 views

  • reading from right to left
  • top-level domain, or TLD
  • first-level subdomains plus their TLDs (example.com) are referred to as “domains.”
  • ...37 more annotations...
  • Name servers host a domain’s DNS information in a text file called the zone file
  • Start of Authority (SOA) records
  • You’ll want to specify at least two name servers. That way, if one of them is down, the next one can continue to serve your DNS information.
  • Every domain’s zone file contains the admin’s email address, the name servers, and the DNS records.
  • a zone file, which lists domains and their corresponding IP addresses (and a few other things); see the zone-file sketch after this list
  • TLD nameserver
  • ISPs cache a lot of DNS information after they’ve looked it up the first time
  • Usually caching is a good thing, but it can be a problem if you’ve recently made a change to your DNS information
  • An A record matches up a domain (or subdomain) to an IP address
  • point different subdomains to different IP addresses
  • An AAAA record is just like an A record, but for IPv6 IP addresses.
  • An AXFR record is a type of DNS record used for DNS replication
  • used on a slave DNS server to replicate the zone file from a master DNS server
  • DNS Certification Authority Authorization uses DNS to allow the holder of a domain to specify which certificate authorities are allowed to issue certificates for that domain.
  • A CNAME record or Canonical Name record matches up a domain (or subdomain) to a different domain.
  • You should not use a CNAME record for a domain that gets email, because some mail servers handle mail oddly for domains with CNAME records
  • the target domain for a CNAME record should have a normal A-record resolution
  • a CNAME record does not function the same way as a URL redirect
  • A DKIM record or domain keys identified mail record displays the public key for authenticating messages that have been signed with the DKIM protocol
  • An MX record or mail exchange record sets the mail delivery destination for a domain (or subdomain).
  • Ideally, an MX record should point to a domain that is also the hostname for its server.
  • Your MX records don’t necessarily have to point to your Linode. If you’re using a third-party mail service, like Google Apps, you should use the MX records they provide.
  • Lower numbers have a higher priority
  • NS records or name server records set the nameservers for a domain (or subdomain).
  • You can also set up different nameservers for any of your subdomains.
  • The order of NS records does not matter; DNS requests are sent randomly to the different servers, and if one host fails to respond, another one will be queried.
  • A PTR record or pointer record matches up an IP address to a domain (or subdomain), allowing reverse DNS queries to function.
  • PTR records are usually set with your hosting provider. They are not part of your domain’s zone file.
  • An SOA record or Start of Authority record labels a zone file with the name of the host where it was originally created.
  • The administrative email address is written with a period (.) instead of an at symbol (@).
  • The single nameserver mentioned in the SOA record is considered the primary master for the purposes of Dynamic DNS and is the server where zone file changes get made before they are propagated to all other nameservers.
  • An SPF record or Sender Policy Framework record lists the designated mail servers for a domain (or subdomain).
  • An SPF record for your domain tells other receiving mail servers which outgoing server(s) are valid sources of email, so they can reject spoofed email from your domain that has originated from unauthorized servers.
  • Your SPF record will have a domain or subdomain, type (which is TXT, or SPF if your name server supports it), and text (which starts with “v=spf1” and contains the SPF record settings).
  • An SRV record or service record matches up a specific service that runs on your domain (or subdomain) to a target domain.
  • A TXT record or text record provides information about the domain in question to other resources on the Internet.
  • One common use of the TXT record is to create an SPF record on nameservers that don’t natively support SPF.
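A sketch of how a few of these record types sit together in a zone file; example.com and the 203.0.113.x documentation addresses are placeholders.

```
; Illustrative zone-file fragment
example.com.        IN  A       203.0.113.10
www.example.com.    IN  CNAME   example.com.
example.com.        IN  MX  10  mail.example.com.
mail.example.com.   IN  A       203.0.113.20
example.com.        IN  TXT     "v=spf1 a mx ~all"
```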
張 旭

Getting Started with MariaDB Galera Cluster - MariaDB Knowledge Base - 0 views

  • most users are not going to bootstrap a server by executing "mysqld --wsrep-new-cluster" manually.
  • galera_new_cluster
  • Prerequisites
  • ...7 more annotations...
  • Once you have a cluster running and you want to add/reconnect another node to it, you must supply an address of one of the cluster members in the cluster address URL
  • The new node only needs to connect to one of the existing members
  • It will automatically retrieve the cluster map and reconnect to the rest of the nodes
  • it's better to list all nodes of the cluster so that any node can join a cluster connecting to any other node, even if one or more are down
  • The wsrep_cluster_address parameter should be added in my.cnf of each node, listing all the nodes of the cluster (see the config sketch after this list).
  • the minimum recommended number of nodes in a cluster is 3
  • While two of the members will be engaged in state transfer, the remaining member(s) will be able to keep on serving client requests.
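A sketch of the my.cnf settings these notes imply for each node; the host names and the provider path are assumptions that vary by installation.

```ini
[galera]
wsrep_on                 = ON
wsrep_provider           = /usr/lib/galera/libgalera_smm.so
# List every cluster member so any node can join via any other.
wsrep_cluster_address    = gcomm://node1,node2,node3
binlog_format            = ROW
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2
```

Bootstrap the first node with galera_new_cluster, then start the remaining nodes normally; each one only needs to reach one live member to retrieve the cluster map.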