Larvata: Group items tagged HTTP

crazylion lee

local-ch/lhs: Rails gem providing an easy, active-record-like interface for http json s... - 1 views

  •  
    "Rails gem providing an easy, active-record-like interface for http json services"
張 旭

Rails Routing from the Outside In - Ruby on Rails Guides - 0 views

  • Resource routing allows you to quickly declare all of the common routes for a given resourceful controller.
  • Rails would dispatch that request to the destroy method on the photos controller with { id: '17' } in params.
  • a resourceful route provides a mapping between HTTP verbs and URLs to controller actions.
  • ...86 more annotations...
  • each action also maps to particular CRUD operations in a database
  • resource :photo and resources :photos creates both singular and plural routes that map to the same controller (PhotosController).
  • One way to avoid deep nesting (as recommended above) is to generate the collection actions scoped under the parent, so as to get a sense of the hierarchy, but to not nest the member actions.
  • to only build routes with the minimal amount of information to uniquely identify the resource
  • The shallow method of the DSL creates a scope inside of which every nesting is shallow
  • These concerns can be used in resources to avoid code duplication and share behavior across routes
  • add a member route, just add a member block into the resource block
  • You can leave out the :on option, this will create the same member route except that the resource id value will be available in params[:photo_id] instead of params[:id].
  • Singular Resources
  • use a singular resource to map /profile (rather than /profile/:id) to the show action
  • Passing a String to get will expect a controller#action format
  • workaround
  • organize groups of controllers under a namespace
  • route /articles (without the prefix /admin) to Admin::ArticlesController
  • route /admin/articles to ArticlesController (without the Admin:: module prefix)
  • Nested routes allow you to capture this relationship in your routing.
  • helpers take an instance of Magazine as the first parameter (magazine_ads_url(@magazine)).
  • Resources should never be nested more than 1 level deep.
  • via the :shallow option
  • a balance between descriptive routes and deep nesting
  • :shallow_path prefixes member paths with the specified parameter
  • Routing Concerns allows you to declare common routes that can be reused inside other resources and routes
  • Rails can also create paths and URLs from an array of parameters.
  • use url_for with a set of objects
  • In helpers like link_to, you can specify just the object in place of the full url_for call
  • insert the action name as the first element of the array
  • This will recognize /photos/1/preview with GET, and route to the preview action of PhotosController, with the resource id value passed in params[:id]. It will also create the preview_photo_url and preview_photo_path helpers.
  • pass :on to a route, eliminating the block:
  • Collection Routes
  • This will enable Rails to recognize paths such as /photos/search with GET, and route to the search action of PhotosController. It will also create the search_photos_url and search_photos_path route helpers.
  • simple routing makes it very easy to map legacy URLs to new Rails actions
  • add an alternate new action using the :on shortcut
  • When you set up a regular route, you supply a series of symbols that Rails maps to parts of an incoming HTTP request.
  • :controller maps to the name of a controller in your application
  • :action maps to the name of an action within that controller
  • optional parameters, denoted by parentheses
  • This route will also route the incoming request of /photos to PhotosController#index, since :action and :id are optional.
  • use a constraint on :controller that matches the namespace you require
  • dynamic segments don't accept dots
  • The params will also include any parameters from the query string
  • :defaults option.
  • set params[:format] to "jpg"
  • cannot override defaults via query parameters
  • specify a name for any route using the :as option
  • create logout_path and logout_url as named helpers in your application.
  • Inside the show action of UsersController, params[:username] will contain the username for the user.
  • should use the get, post, put, patch and delete methods to constrain a route to a particular verb.
  • use the match method with the :via option to match multiple verbs at once
  • Routing both GET and POST requests to a single action has security implications
  • Rails does not check the CSRF token on GET requests, so you should never write to the database from GET requests.
  • use the :constraints option to enforce a format for a dynamic segment
  • constraints
  • don't need to use anchors
  • Request-Based Constraints
  • Rails will call a method on the Request object with the same name as the hash key and then compare the return value with the hash value.
  • constraint values should match the corresponding Request object method return type
    • 張 旭
       
      Essentially this checks the incoming request: if the request comes from a particular source, it is allowed through.
  • blacklist
    • 張 旭
       
      This part is a bit complicated ...
  • redirect helper
  • reuse dynamic segments from the match in the path to redirect
  • this redirection is a 301 "Moved Permanently" redirect.
  • root method
  • put the root route at the top of the file
  • The root route only routes GET requests to the action.
  • root inside namespaces and scopes
  • For namespaced controllers you can use the directory notation
  • Only the directory notation is supported
  • use the :constraints option to specify a required format on the implicit id
  • specify a single constraint to apply to a number of routes by using the block
  • non-resourceful routes
  • :id parameter doesn't accept dots
  • :as option lets you override the normal naming for the named route helpers
  • use the :as option to prefix the named route helpers that Rails generates for a route
  • prevent name collisions
  • prefix routes with a named parameter
  • This will provide you with URLs such as /bob/articles/1 and will allow you to reference the username part of the path as params[:username] in controllers, helpers and views
  • :only option
  • :except option
  • Generating only the routes that you actually need can cut down on memory use and speed up the routing process.
  • alter path names
  • http://localhost:3000/rails/info/routes
  • rake routes
  • setting the CONTROLLER environment variable
  • Routes should be included in your testing strategy
  • assert_generates, assert_recognizes, assert_routing
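A minimal config/routes.rb sketch pulling the routing highlights above together (controller, model, and path names are illustrative, not taken from the bookmarked application):

```ruby
# config/routes.rb
Rails.application.routes.draw do
  root "pages#home"                      # the root route only answers GET

  # Resource routing: one line declares index/show/new/create/edit/update/destroy
  resources :photos do
    member     { get :preview }          # GET /photos/1/preview -> photos#preview
    collection { get :search }           # GET /photos/search    -> photos#search
  end

  # Shallow nesting: collection actions stay scoped under the parent,
  # member actions only need the child id
  resources :magazines, shallow: true do
    resources :ads
  end

  # Singular resource: /profile (no :id) maps to profiles#show
  resource :profile, only: [:show, :update]

  # Named non-resourceful route with a constraint on the dynamic segment
  get "users/:username", to: "users#show",
      constraints: { username: /[a-z0-9_]+/ },
      as: :user_profile
end
```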
張 旭

rails/activeresource - 0 views

    • 張 旭
       
      So when Person.find runs, a GET request is sent to api.people.com:3000
  • Active Resource is built on a standard JSON or XML format for requesting and submitting resources over HTTP
  • REST uses HTTP, but unlike “typical” web applications, it makes use of all the verbs available in the HTTP specification
  • ...2 more annotations...
  • When a request is made to a remote resource, a REST JSON request is generated, transmitted, and the result received and serialized into a usable Ruby object.
  • Relationships between resources can be declared using the standard association syntax that should be familiar to anyone who uses activerecord
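A small sketch of the Active Resource usage behind the note above, assuming a hypothetical Person resource served at api.people.com:3000 (the host mentioned in the note):

```ruby
require "active_resource"

class Person < ActiveResource::Base
  self.site = "http://api.people.com:3000"
end

person = Person.find(1)      # GET    http://api.people.com:3000/people/1.json
person.name = "Rick"
person.save                  # PUT    /people/1.json
Person.delete(1)             # DELETE /people/1.json
```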
張 旭

Embracing REST with mind, body and soul « Plataformatec Blog - 0 views

  • The gain with the respond_with introduction is more obvious if you compare the index, new and show actions
    • 張 旭
       
      It looks like respond_with automatically replies with a response in the format matching the request format.
  • you can define supported formats at the class level and tell in the instance the resource to be represented by those formats.
  • when a request comes, for example with format xml, it will first search for a template at users/index.xml. If the template is not available, it tries to render the resource given (in this case, @users) by calling :to_xml on it
  • ...6 more annotations...
  • how to render our resources depending on the format AND HTTP verb
  • By default, ActionController::Responder holds all formats behavior in a method called to_format.
  • Suddenly we realized that respond_with is useful just for GET requests
  • it renders the resource based on the HTTP verb and whether it has errors or not
  • Your controller code just has to send the resource using respond_with(@resource) and respond_with will call ActionController::Responder which will know what to do.
    • 張 旭
       
      In short, just write respond_with and you don't need to worry about the rest; the Responder figures out the HTTP verb and the right response for you.
  • Anything that responds to :call can be a responder, so you can create your custom classes or even give procs, fibers and so on.
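A sketch of the respond_with pattern the post describes (in Rails 4.2 and later this needs the responders gem; the User model and controller are illustrative):

```ruby
class UsersController < ApplicationController
  respond_to :html, :xml, :json   # supported formats declared at the class level

  def index
    @users = User.all
    # Renders users/index.<format> if a template exists,
    # otherwise falls back to @users.to_<format>.
    respond_with(@users)
  end

  def create
    @user = User.create(user_params)
    # For non-GET verbs the responder redirects on success or
    # re-renders the form when @user has errors.
    respond_with(@user)
  end
end
```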
張 旭

Best practices for writing Dockerfiles | Docker Documentation - 0 views

  • building efficient images
  • Docker builds images automatically by reading the instructions from a Dockerfile -- a text file that contains all commands, in order, needed to build a given image.
  • A Docker image consists of read-only layers each of which represents a Dockerfile instruction.
  • ...47 more annotations...
  • The layers are stacked and each one is a delta of the changes from the previous layer
  • When you run an image and generate a container, you add a new writable layer (the “container layer”) on top of the underlying layers.
  • By “ephemeral,” we mean that the container can be stopped and destroyed, then rebuilt and replaced with an absolute minimum set up and configuration.
  • Inadvertently including files that are not necessary for building an image results in a larger build context and larger image size.
  • To exclude files not relevant to the build (without restructuring your source repository) use a .dockerignore file. This file supports exclusion patterns similar to .gitignore files.
  • minimize image layers by leveraging build cache.
  • if your build contains several layers, you can order them from the less frequently changed (to ensure the build cache is reusable) to the more frequently changed
  • avoid installing extra or unnecessary packages just because they might be “nice to have.”
  • Each container should have only one concern.
  • Decoupling applications into multiple containers makes it easier to scale horizontally and reuse containers
  • Limiting each container to one process is a good rule of thumb, but it is not a hard and fast rule.
  • Use your best judgment to keep containers as clean and modular as possible.
  • do multi-stage builds and only copy the artifacts you need into the final image. This allows you to include tools and debug information in your intermediate build stages without increasing the size of the final image.
  • avoid duplication of packages and make the list much easier to update.
  • When building an image, Docker steps through the instructions in your Dockerfile, executing each in the order specified.
  • the next instruction is compared against all child images derived from that base image to see if one of them was built using the exact same instruction. If not, the cache is invalidated.
  • simply comparing the instruction in the Dockerfile with one of the child images is sufficient.
  • For the ADD and COPY instructions, the contents of the file(s) in the image are examined and a checksum is calculated for each file.
  • If anything has changed in the file(s), such as the contents and metadata, then the cache is invalidated.
  • cache checking does not look at the files in the container to determine a cache match.
  • In that case just the command string itself is used to find a match.
    • 張 旭
       
      For an instruction like RUN apt-get, this means the cache match is done by comparing the command string itself.
  • Whenever possible, use current official repositories as the basis for your images.
  • Using RUN apt-get update && apt-get install -y ensures your Dockerfile installs the latest package versions with no further coding or manual intervention.
  • cache busting
  • Docker executes these commands using the /bin/sh -c interpreter, which only evaluates the exit code of the last operation in the pipe to determine success.
  • set -o pipefail && to ensure that an unexpected error prevents the build from inadvertently succeeding.
  • The CMD instruction should be used to run the software contained by your image, along with any arguments.
  • CMD should almost always be used in the form of CMD ["executable", "param1", "param2"…]
  • CMD should rarely be used in the manner of CMD ["param", "param"] in conjunction with ENTRYPOINT
  • The ENV instruction is also useful for providing required environment variables specific to services you wish to containerize,
  • Each ENV line creates a new intermediate layer, just like RUN commands
  • COPY is preferred
  • COPY only supports the basic copying of local files into the container
  • the best use for ADD is local tar file auto-extraction into the image, as in ADD rootfs.tar.xz /
  • If you have multiple Dockerfile steps that use different files from your context, COPY them individually, rather than all at once.
  • using ADD to fetch packages from remote URLs is strongly discouraged; you should use curl or wget instead
  • The best use for ENTRYPOINT is to set the image’s main command, allowing that image to be run as though it was that command (and then use CMD as the default flags).
  • the image name can double as a reference to the binary as shown in the command
  • The VOLUME instruction should be used to expose any database storage area, configuration storage, or files/folders created by your docker container.
  • use VOLUME for any mutable and/or user-serviceable parts of your image
  • If you absolutely need functionality similar to sudo (such as initializing the daemon as root but running it as non-root), consider using "gosu".
  • always use absolute paths for your WORKDIR
  • An ONBUILD command executes after the current Dockerfile build completes.
  • Think of the ONBUILD command as an instruction the parent Dockerfile gives to the child Dockerfile
  • A Docker build executes ONBUILD commands before any command in a child Dockerfile.
  • Be careful when putting ADD or COPY in ONBUILD. The “onbuild” image fails catastrophically if the new build’s context is missing the resource being added.
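A hypothetical Dockerfile sketch combining several of the practices above (multi-stage build, apt-get cache busting, exec-form CMD); the base images, paths, and build command are illustrative:

```dockerfile
# Build stage: compilers and sources stay here and never reach the final image
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Final stage: only the built artifact is copied in
FROM debian:bookworm-slim
# update + install in a single RUN (cache busting), then shrink the layer
RUN apt-get update && apt-get install -y --no-install-recommends \
        ca-certificates \
    && rm -rf /var/lib/apt/lists/*
COPY --from=build /out/app /usr/local/bin/app
USER nobody
CMD ["app"]
```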
張 旭

How It Works - Let's Encrypt - Free SSL/TLS Certificates - 0 views

  • The objective of Let’s Encrypt and the ACME protocol is to make it possible to set up an HTTPS server and have it automatically obtain a browser-trusted certificate, without any human intervention.
  • First, the agent proves to the CA that the web server controls a domain.
  • Then, the agent can request, renew, and revoke certificates for that domain.
  • ...4 more annotations...
  • The first time the agent software interacts with Let’s Encrypt, it generates a new key pair and proves to the Let’s Encrypt CA that the server controls one or more domains.
  • The Let’s Encrypt CA will look at the domain name being requested and issue one or more sets of challenges
  • different ways that the agent can prove control of the domain
  • Once the agent has an authorized key pair, requesting, renewing, and revoking certificates is simple—just send certificate management messages and sign them with the authorized key pair.
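As one concrete example of the agent flow above, a typical Certbot run might look like this (the domain and webroot path are placeholders):

```sh
# Prove control of the domain via an HTTP challenge served from the webroot,
# then request the certificate.
certbot certonly --webroot -w /var/www/example -d example.com -d www.example.com

# Renewal and revocation reuse the already-authorized account key pair.
certbot renew
certbot revoke --cert-path /etc/letsencrypt/live/example.com/cert.pem
```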
張 旭

The Twelve-Factor App - 0 views

  • The twelve-factor app is completely self-contained
  • using dependency declaration to add a webserver library to the app, such as Tornado for Python, Thin for Ruby, or Jetty for Java and other JVM-based languages.
  • the port-binding approach means that one app can become the backing service for another app
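A minimal Ruby sketch of port binding: a self-contained Rack app served by a webserver library declared as an app dependency (Thin, as in the excerpt); names and the port are illustrative:

```ruby
# config.ru — the app exports HTTP purely by binding a port
app = proc do |_env|
  [200, { "Content-Type" => "text/plain" }, ["Hello from a port-bound process\n"]]
end
run app
```

Started with something like `bundle exec thin start -p 3000` (or `rackup -p 3000`), the process listens on that port and can itself act as the backing service for another app.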
張 旭

The Twelve-Factor App - 0 views

  • they can be started or stopped at a moment’s notice.
  • Processes should strive to minimize startup time
  • Processes shut down gracefully when they receive a SIGTERM signal from the process manager.
  • ...4 more annotations...
  • returning the current job to the work queue
  • all jobs are reentrant, which typically is achieved by wrapping the results in a transaction, or making the operation idempotent
  • Processes should also be robust against sudden death, in the case of a failure in the underlying hardware.
  • a twelve-factor app is architected to handle unexpected, non-graceful terminations
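A sketch of a disposable worker along the lines described above; fetch_job, process, and requeue are hypothetical helpers:

```ruby
# Fast startup, graceful shutdown on SIGTERM from the process manager.
$running = true
Signal.trap("TERM") { $running = false }

while $running
  job = fetch_job
  unless job
    sleep 1
    next
  end
  begin
    process(job)      # keep jobs idempotent / transactional, i.e. reentrant
  rescue StandardError
    requeue(job)      # return the current job to the work queue
  end
end
```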
張 旭

The Twelve-Factor App - 0 views

  • Logs are the stream of aggregated, time-ordered events collected from the output streams of all running processes and backing services.
  • Logs have no fixed beginning or end, but flow continuously as long as the app is operating.
  • each running process writes its event stream, unbuffered, to stdout.
  • ...2 more annotations...
  • long-term archival. These archival destinations are not visible to or configurable by the app, and instead are completely managed by the execution environment.
  • Most significantly, the stream can be sent to a log indexing and analysis system such as Splunk, or a general-purpose data warehousing system such as Hadoop/Hive.
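A minimal Ruby sketch of "each running process writes its event stream, unbuffered, to stdout":

```ruby
require "logger"
require "time"

$stdout.sync = true                     # do not buffer the event stream
logger = Logger.new($stdout)
logger.formatter = proc do |severity, time, _progname, msg|
  "#{time.utc.iso8601} #{severity} #{msg}\n"
end

logger.info("request completed path=/orders status=200 duration_ms=42")
```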
張 旭

The Twelve-Factor App - 0 views

  • Libraries installed through a packaging system can be installed system-wide (known as “site packages”) or scoped into the directory containing the app (known as “vendoring” or “bundling”).
  • A twelve-factor app never relies on implicit existence of system-wide packages.
  • declares all dependencies, completely and exactly, via a dependency declaration manifest.
  • ...8 more annotations...
  • The full and explicit dependency specification is applied uniformly to both production and development.
  • Bundler for Ruby offers the Gemfile manifest format for dependency declaration and bundle exec for dependency isolation.
  • Pip is used for declaration and Virtualenv for isolation.
  • No matter what the toolchain, dependency declaration and isolation must always be used together
  • requiring only the language runtime and dependency manager installed as prerequisites.
  • set up everything needed to run the app’s code with a deterministic build command.
  • If the app needs to shell out to a system tool, that tool should be vendored into the app.
  • do not rely on the implicit existence of any system tools
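A sketch of explicit dependency declaration with Bundler (gem names and versions are illustrative):

```ruby
# Gemfile — complete, explicit dependency declaration
source "https://rubygems.org"

ruby "3.2.2"              # the language runtime itself is declared too

gem "rails", "~> 7.1"
gem "pg",    "~> 1.5"
gem "puma",  "~> 6.4"
```

`bundle install` resolves and locks these into Gemfile.lock for a deterministic build, and `bundle exec` provides dependency isolation at runtime.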
張 旭

An Overview of How the SSL/TLS Protocol Works - 阮一峰的网络日志 - 0 views

  • The client first asks the server for its public key, then encrypts the message with that key; when the server receives the ciphertext, it decrypts it with its own private key.
  • The history of encrypted communication protocols on the Internet is almost as long as that of the Internet itself.
  • Put the public key inside a digital certificate: as long as the certificate is trusted, the public key is trusted.
  • ...20 more annotations...
  • For every session, the client and the server generate a "session key" and use it to encrypt the messages.
  • The "session key" uses symmetric encryption, so it is very fast to compute.
  • The server's public key is only used to encrypt the "session key" itself, which cuts down the time spent on encryption.
  • "session key"
  • the "handshake" phase
  • the client requests the server's public key and verifies it
  • all communication during the "handshake" phase is in plaintext
  • The information the client sends does not include the server's domain name. In theory, therefore, a server can host only one website, otherwise it could not tell which website's digital certificate to present to the client. This is why a server usually can hold only one digital certificate.
  • In 2006 the TLS protocol gained a Server Name Indication extension, which lets the client tell the server which domain name it is requesting.
  • "Client certificates": for example, financial institutions often allow only authenticated customers onto their network, so they give registered customers a USB key containing a client certificate.
  • Verify the server certificate. If the certificate was not issued by a trusted authority, if the domain name in the certificate does not match the actual domain, or if the certificate has expired, a warning is shown to the visitor, who decides whether to continue.
  • extract the server's public key from the certificate
  • all subsequent messages are sent with the encryption method and key the two sides have agreed on
  • a hash of everything sent so far, for the server to verify.
  • the random number is encrypted with the server's public key
  • the third random number to appear in the whole handshake, also called the "pre-master key". Once it exists, the client and the server both hold the same three random numbers, and each side uses the previously agreed encryption method to derive the same "session key" for this session.
  • Both the client and the server need random numbers so that the generated key is different every time. Because certificates in SSL are static, introducing a random factor is essential to guarantee the randomness of the negotiated key.
  • One pseudo-random value may not be random at all, but three pseudo-random values together come very close to true randomness; each added degree of freedom increases the randomness by far more than one.
  • After the server receives the client's third random number (the pre-master key), it computes the "session key" to be used for this session.
  • Once the client and server switch to encrypted communication, they simply use the ordinary HTTP protocol, except that the content is encrypted with the "session key".
  •  
    "The client first asks the server for its public key, then encrypts the message with that key; when the server receives the ciphertext, it decrypts it with its own private key."
張 旭

Pre-Built CircleCI Docker Images - CircleCI - 0 views

  • typically extensions of official Docker images and include tools especially useful for CI/CD.
  • Convenience images are based on the most recently built versions of upstream images, so it is best practice to use the most specific image possible.
  • add -jessie or -stretch to the end of each of those containers to ensure you’re only using that version of the Debian base OS.
  • ...12 more annotations...
  • language images
  • service images
  • All images add a circleci user as a system user
  • A language image should be listed first under the docker key in your configuration, making it the primary container during execution.
  • For example, if you want to add browsers to the circleci/golang:1.9 image, use the circleci/golang:1.9-browsers image.
  • Service images are convenience images for services like databases
  • should be listed after language images so they become secondary service containers.
  • To speed up builds using RAM volume, add the -ram suffix to the end of a service image tag
  • All convenience images have been extended with additional tools.
  • all images include the following packages, installed via apt-get
  • Most CircleCI convenience images are Debian Jessie- or Stretch-based images, however some extend Ubuntu-based images.
  • The following packages are installed via curl
張 旭

The Asset Pipeline - Ruby on Rails Guides - 0 views

  • provides a framework to concatenate and minify or compress JavaScript and CSS assets
  • adds the ability to write these assets in other languages and pre-processors such as CoffeeScript, Sass and ERB
  • invalidate the cache by altering this fingerprint
  • ...80 more annotations...
  • Rails 4 automatically adds the sass-rails, coffee-rails and uglifier gems to your Gemfile
  • reduce the number of requests that a browser makes to render a web page
  • Starting with version 3.1, Rails defaults to concatenating all JavaScript files into one master .js file and all CSS files into one master .css file
  • In production, Rails inserts an MD5 fingerprint into each filename so that the file is cached by the web browser
  • The technique sprockets uses for fingerprinting is to insert a hash of the content into the name, usually at the end.
  • asset minification or compression
  • The sass-rails gem is automatically used for CSS compression if included in Gemfile and no config.assets.css_compressor option is set.
  • Supported languages include Sass for CSS, CoffeeScript for JavaScript, and ERB for both by default.
  • When a filename is unique and based on its content, HTTP headers can be set to encourage caches everywhere (whether at CDNs, at ISPs, in networking equipment, or in web browsers) to keep their own copy of the content
  • asset pipeline is technically no longer a core feature of Rails 4
  • Rails uses for fingerprinting is to insert a hash of the content into the name, usually at the end
  • With the asset pipeline, the preferred location for these assets is now the app/assets directory.
  • Fingerprinting is enabled by default for production and disabled for all other environments
  • The files in app/assets are never served directly in production.
  • Paths are traversed in the order that they occur in the search path
  • You should use app/assets for files that must undergo some pre-processing before they are served.
  • By default .coffee and .scss files will not be precompiled on their own
  • app/assets is for assets that are owned by the application, such as custom images, JavaScript files or stylesheets.
  • lib/assets is for your own libraries' code that doesn't really fit into the scope of the application or those libraries which are shared across applications.
  • vendor/assets is for assets that are owned by outside entities, such as code for JavaScript plugins and CSS frameworks.
  • Any path under assets/* will be searched
  • By default these files will be ready to use by your application immediately using the require_tree directive.
  • By default, this means the files in app/assets take precedence, and will mask corresponding paths in lib and vendor
  • Sprockets uses files named index (with the relevant extensions) for a special purpose
  • Rails.application.config.assets.paths
  • causes turbolinks to check if an asset has been updated and if so loads it into the page
  • if you add an erb extension to a CSS asset (for example, application.css.erb), then helpers like asset_path are available in your CSS rules
  • If you add an erb extension to a JavaScript asset, making it something such as application.js.erb, then you can use the asset_path helper in your JavaScript code
  • The asset pipeline automatically evaluates ERB
  • data URI — a method of embedding the image data directly into the CSS file — you can use the asset_data_uri helper.
  • Sprockets will also look through the paths specified in config.assets.paths, which includes the standard application paths and any paths added by Rails engines.
  • image_tag
  • the closing tag cannot be of the style -%>
  • asset_data_uri
  • app/assets/javascripts/application.js
  • sass-rails provides -url and -path helpers (hyphenated in Sass, underscored in Ruby) for the following asset classes: image, font, video, audio, JavaScript and stylesheet.
  • Rails.application.config.assets.compress
  • In JavaScript files, the directives begin with //=
  • The require_tree directive tells Sprockets to recursively include all JavaScript files in the specified directory into the output.
  • manifest files contain directives — instructions that tell Sprockets which files to require in order to build a single CSS or JavaScript file.
  • You should not rely on any particular order among those
  • Sprockets uses manifest files to determine which assets to include and serve.
  • the family of require directives prevents files from being included twice in the output
  • which files to require in order to build a single CSS or JavaScript file
  • Directives are processed top to bottom, but the order in which files are included by require_tree is unspecified.
  • In JavaScript files, Sprockets directives begin with //=
  • If require_self is called more than once, only the last call is respected.
  • require directive is used to tell Sprockets the files you wish to require.
  • You need not supply the extensions explicitly. Sprockets assumes you are requiring a .js file when done from within a .js file
  • paths must be specified relative to the manifest file
  • require_directory
  • Rails 4 creates both app/assets/javascripts/application.js and app/assets/stylesheets/application.css regardless of whether the --skip-sprockets option is used when creating a new rails application.
  • The file extensions used on an asset determine what preprocessing is applied.
  • app/assets/stylesheets/application.css
  • Additional layers of preprocessing can be requested by adding other extensions, where each extension is processed in a right-to-left manner
  • require_self
  • use the Sass @import rule instead of these Sprockets directives.
  • Keep in mind that the order of these preprocessors is important
  • In development mode, assets are served as separate files in the order they are specified in the manifest file.
  • when these files are requested they are processed by the processors provided by the coffee-script and sass gems and then sent back to the browser as JavaScript and CSS respectively.
  • css.scss.erb
  • js.coffee.erb
  • Keep in mind the order of these preprocessors is important.
  • By default Rails assumes that assets have been precompiled and will be served as static assets by your web server
  • with the Asset Pipeline the :cache and :concat options aren't used anymore
  • Assets are compiled and cached on the first request after the server is started
  • RAILS_ENV=production bundle exec rake assets:precompile
  • Debug mode can also be enabled in Rails helper methods
  • If you set config.assets.initialize_on_precompile to false, be sure to test rake assets:precompile locally before deploying
  • By default Rails assumes assets have been precompiled and will be served as static assets by your web server.
  • a rake task to compile the asset manifests and other files in the pipeline
  • RAILS_ENV=production bin/rake assets:precompile
  • a recipe to handle this in deployment
  • links the folder specified in config.assets.prefix to shared/assets
  • config/initializers/assets.rb
  • The initialize_on_precompile change tells the precompile task to run without invoking Rails
  • The X-Sendfile header is a directive to the web server to ignore the response from the application, and instead serve a specified file from disk
  • the jquery-rails gem which comes with Rails as the standard JavaScript library gem.
  • Possible options for JavaScript compression are :closure, :uglifier and :yui
  • concatenate assets
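A sketch of a Sprockets manifest using the directives discussed above (the required libraries are illustrative):

```javascript
// app/assets/javascripts/application.js — a Sprockets manifest
// Directives live in comments starting with //= and are processed top to bottom.
//= require jquery
//= require jquery_ujs
//= require_tree .
//= require_self
```

In production these files are compiled ahead of time with `RAILS_ENV=production bin/rake assets:precompile` and served as fingerprinted static assets by the web server.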
張 旭

Introduction to GitLab Flow | GitLab - 0 views

  • GitLab flow is a clearly defined set of best practices. It combines feature-driven development and feature branches with issue tracking.
  • In Git, you add files from the working copy to the staging area. After that, you commit them to your local repo. The third step is pushing to a shared remote repository.
  • branching model
  • ...68 more annotations...
  • The biggest problem is that many long-running branches emerge that all contain part of the changes.
  • It is a convention to call your default branch master and to mostly branch from and merge to this.
  • Nowadays, most organizations practice continuous delivery, which means that your default branch can be deployed.
  • Continuous delivery removes the need for hotfix and release branches, including all the ceremony they introduce.
  • Merging everything into the master branch and frequently deploying means you minimize the amount of unreleased code, which is in line with lean and continuous delivery best practices.
  • GitHub flow assumes you can deploy to production every time you merge a feature branch.
  • You can deploy a new version by merging master into the production branch. If you need to know what code is in production, you can just checkout the production branch to see.
  • Production branch
  • Environment branches
  • have an environment that is automatically updated to the master branch.
  • deploy the master branch to staging.
  • To deploy to pre-production, create a merge request from the master branch to the pre-production branch.
  • Go live by merging the pre-production branch into the production branch.
  • Release branches
  • work with release branches if you need to release software to the outside world.
  • each branch contains a minor version
  • After announcing a release branch, only add serious bug fixes to the branch.
  • merge these bug fixes into master, and then cherry-pick them into the release branch.
  • Merging into master and then cherry-picking into release is called an “upstream first” policy
  • Tools such as GitHub and Bitbucket choose the name “pull request” since the first manual action is to pull the feature branch.
  • Tools such as GitLab and others choose the name “merge request” since the final action is to merge the feature branch.
  • If you work on a feature branch for more than a few hours, it is good to share the intermediate result with the rest of the team.
  • the merge request automatically updates when new commits are pushed to the branch.
  • If the assigned person does not feel comfortable, they can request more changes or close the merge request without merging.
  • In GitLab, it is common to protect the long-lived branches, e.g., the master branch, so that most developers can’t modify them.
  • if you want to merge into a protected branch, assign your merge request to someone with maintainer permissions.
  • After you merge a feature branch, you should remove it from the source control software.
  • Having a reason for every code change helps to inform the rest of the team and to keep the scope of a feature branch small.
  • If there is no issue yet, create the issue
  • The issue title should describe the desired state of the system.
  • For example, the issue title “As an administrator, I want to remove users without receiving an error” is better than “Admin can’t remove users.”
  • create a branch for the issue from the master branch
  • If you open the merge request but do not assign it to anyone, it is a “Work In Progress” merge request.
  • Start the title of the merge request with [WIP] or WIP: to prevent it from being merged before it’s ready.
  • When they press the merge button, GitLab merges the code and creates a merge commit that makes this event easily visible later on.
  • Merge requests always create a merge commit, even when the branch could be merged without one. This merge strategy is called “no fast-forward” in Git.
  • Suppose that a branch is merged but a problem occurs and the issue is reopened. In this case, it is no problem to reuse the same branch name since the first branch was deleted when it was merged.
  • At any time, there is at most one branch for every issue.
  • It is possible that one feature branch solves more than one issue.
  • GitLab closes these issues when the code is merged into the default branch.
  • If you have an issue that spans across multiple repositories, create an issue for each repository and link all issues to a parent issue.
  • use an interactive rebase (rebase -i) to squash multiple commits into one or reorder them.
  • you should never rebase commits you have pushed to a remote server.
  • Rebasing creates new commits for all your changes, which can cause confusion because the same change would have multiple identifiers.
  • if someone has already reviewed your code, rebasing makes it hard to tell what changed since the last review.
  • never rebase commits authored by other people.
  • it is a bad idea to rebase commits that you have already pushed.
  • If you revert a merge commit and then change your mind, revert the revert commit to redo the merge.
  • Often, people avoid merge commits by just using rebase to reorder their commits after the commits on the master branch.
  • Using rebase prevents a merge commit when merging master into your feature branch, and it creates a neat linear history.
  • every time you rebase, you have to resolve similar conflicts.
  • Sometimes you can reuse recorded resolutions (rerere), but merging is better since you only have to resolve conflicts once.
  • A good way to prevent creating many merge commits is to not frequently merge master into the feature branch.
  • keep your feature branches short-lived.
  • Most feature branches should take less than one day of work.
  • If your feature branches often take more than a day of work, try to split your features into smaller units of work.
  • You could also use feature toggles to hide incomplete features so you can still merge back into master every day.
  • you should try to prevent merge commits, but not eliminate them.
  • Your codebase should be clean, but your history should represent what actually happened.
  • If you rebase code, the history is incorrect, and there is no way for tools to remedy this because they can’t deal with changing commit identifiers
  • Commit often and push frequently
  • You should push your feature branch frequently, even when it is not yet ready for review.
  • A commit message should reflect your intention, not just the contents of the commit.
  • each merge request must be tested before it is accepted.
  • test the master branch after each change.
  • If new commits in master cause merge conflicts with the feature branch, merge master back into the branch to make the CI server re-run the tests.
  • When creating a feature branch, always branch from an up-to-date master.
  • Do not merge from upstream again if your code can work and merge cleanly without doing so.
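A short sketch of the "upstream first" policy described above, with illustrative branch names and commit sha:

```sh
# The fix lands on master first ...
git checkout master
git merge --no-ff 123-fix-user-removal      # merge commit, "no fast-forward"

# ... and is then cherry-picked into the release branch.
git checkout 2-3-stable
git cherry-pick <sha-of-the-fix>
git push origin 2-3-stable
```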
張 旭

How to Write a Git Commit Message - 1 views

  • a well-crafted Git commit message is the best way to communicate context about a change to fellow developers (and indeed to their future selves).
  • A diff will tell you what changed, but only the commit message can properly tell you why.
  • a commit message shows whether a developer is a good collaborator
  • ...22 more annotations...
  • a well-cared for log is a beautiful and useful thing
  • Reviewing others’ commits and pull requests becomes something worth doing, and suddenly can be done independently.
  • Understanding why something happened months or years ago becomes not only possible but efficient.
  • how to write an individual commit message.
  • Markup syntax, wrap margins, grammar, capitalization, punctuation.
  • What should it not contain?
  • issue tracking IDs
  • pull request numbers
  • The seven rules of a great Git commit message
  • Use the body to explain what and why vs. how
  • Use the imperative mood in the subject line
  • it’s a good idea to begin the commit message with a single short (less than 50 character) line summarizing the change, followed by a blank line and then a more thorough description.
  • forces the author to think for a moment about the most concise way to explain what’s going on.
  • If you’re having a hard time summarizing, you might be committing too many changes at once.
  • shoot for 50 characters, but consider 72 the hard limit
  • Imperative mood just means “spoken or written as if giving a command or instruction”.
  • Git itself uses the imperative whenever it creates a commit on your behalf.
  • when you write your commit messages in the imperative, you’re following Git’s own built-in conventions.
  • A properly formed Git commit subject line should always be able to complete the following sentence: "If applied, this commit will <your subject line here>"
  • explaining what changed and why
  • Code is generally self-explanatory in this regard (and if the code is so complex that it needs to be explained in prose, that’s what source comments are for).
  • there are tab completion scripts that take much of the pain out of remembering the subcommands and switches.
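An illustrative commit message shaped by the rules above (not taken from the article):

```
Limit subject lines to about 50 characters

Separate the subject from the body with a blank line, wrap the body at
72 characters, and use the body to explain what changed and why rather
than how. Write the subject in the imperative mood, so that it completes
the sentence "If applied, this commit will <subject line>".
```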
張 旭

Rate Limits - Let's Encrypt - Free SSL/TLS Certificates - 0 views

  • If you have a lot of subdomains, you may want to combine them into a single certificate, up to a limit of 100 Names per Certificate.
  • A certificate with multiple names is often called a SAN certificate, or sometimes a UCC certificate
  • The main limit is Certificates per Registered Domain (20 per week).
  • ...12 more annotations...
  • A certificate is considered a duplicate of an earlier certificate if they contain the exact same set of hostnames, ignoring capitalization and ordering of hostnames.
  • We also have a Duplicate Certificate limit of 5 certificates per week.
  • a Renewal Exemption to the Certificates per Registered Domain limit.
  • The Duplicate Certificate limit and the Renewal Exemption ignore the public key and extensions requested
  • You can issue 20 certificates in week 1, 20 more certificates in week 2, and so on, while not interfering with renewals of existing certificates.
  • Revoking certificates does not reset rate limits
  • If you’ve hit a rate limit, we don’t have a way to temporarily reset it.
  • get a list of certificates issued for your registered domain by searching on crt.sh
  • Revoking certificates does not reset rate limits
  • If you have a large number of pending authorization objects and are getting a rate limiting error, you can trigger a validation attempt for those authorization objects by submitting a JWS-signed POST to one of its challenges, as described in the ACME spec.
  • If you do not have logs containing the relevant authorization URLs, you need to wait for the rate limit to expire.
  • having a large number of pending authorizations is generally the result of a buggy client
張 旭

Understanding the HTTP Cache Mechanism Step by Step | TechBridge 技術共筆部落格 - 0 views

  • reduce resource consumption
  • "Cache" is rendered as 快取 in Taiwan and 緩存 in China
  • whenever this information is needed it can be fetched very quickly instead of being recomputed from the database each time
  • ...9 more annotations...
  • Server-side caching is done by pulling data out of the database and storing it somewhere else.
  • the cache mechanism between the server and the browser
  • the browser checks whether the current time is past this Expires value
  • HTTP 1.1 introduced a new header called Cache-Control
  • max-age set to 31536000 seconds, i.e. 365 days
  • max-age overrides Expires, so even though caches today set both, the one that actually takes effect is max-age.
  • expired does not mean unusable
  • When the server sends the response, it can add a Last-Modified header indicating when the file was last changed.
  • Use If-Modified-Since to ask the server only for data that has changed after a given point in time.
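A Rails-flavoured sketch of the two cache mechanisms above (expiration via Cache-Control/max-age, validation via Last-Modified/If-Modified-Since); the Article model is hypothetical:

```ruby
class ArticlesController < ApplicationController
  def show
    article = Article.find(params[:id])

    # Expiration-based caching: Cache-Control: max-age=300, public
    expires_in 5.minutes, public: true

    # Validation-based caching: sets Last-Modified / ETag and answers
    # 304 Not Modified while If-Modified-Since / If-None-Match still match.
    if stale?(last_modified: article.updated_at, etag: article)
      render :show
    end
  end
end
```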