Larvata / Group items tagged: examples

張 旭

The Asset Pipeline - Ruby on Rails Guides - 0 views

  • provides a framework to concatenate and minify or compress JavaScript and CSS assets
  • adds the ability to write these assets in other languages and pre-processors such as CoffeeScript, Sass and ERB
  • invalidate the cache by altering this fingerprint
  • ...80 more annotations...
  • Rails 4 automatically adds the sass-rails, coffee-rails and uglifier gems to your Gemfile
  • reduce the number of requests that a browser makes to render a web page
  • Starting with version 3.1, Rails defaults to concatenating all JavaScript files into one master .js file and all CSS files into one master .css file
  • In production, Rails inserts an MD5 fingerprint into each filename so that the file is cached by the web browser
  • The technique sprockets uses for fingerprinting is to insert a hash of the content into the name, usually at the end.
  • asset minification or compression
  • The sass-rails gem is automatically used for CSS compression if included in Gemfile and no config.assets.css_compressor option is set.
  • Supported languages include Sass for CSS, CoffeeScript for JavaScript, and ERB for both by default.
  • When a filename is unique and based on its content, HTTP headers can be set to encourage caches everywhere (whether at CDNs, at ISPs, in networking equipment, or in web browsers) to keep their own copy of the content
  • asset pipeline is technically no longer a core feature of Rails 4
  • With the asset pipeline, the preferred location for these assets is now the app/assets directory.
  • Fingerprinting is enabled by default for production and disabled for all other environments
  • The files in app/assets are never served directly in production.
  • Paths are traversed in the order that they occur in the search path
  • You should use app/assets for files that must undergo some pre-processing before they are served.
  • By default .coffee and .scss files will not be precompiled on their own
  • app/assets is for assets that are owned by the application, such as custom images, JavaScript files or stylesheets.
  • lib/assets is for your own libraries' code that doesn't really fit into the scope of the application or those libraries which are shared across applications.
  • vendor/assets is for assets that are owned by outside entities, such as code for JavaScript plugins and CSS frameworks.
  • Any path under assets/* will be searched
  • By default these files will be ready to use by your application immediately using the require_tree directive.
  • By default, this means the files in app/assets take precedence, and will mask corresponding paths in lib and vendor
  • Sprockets uses files named index (with the relevant extensions) for a special purpose
  • Rails.application.config.assets.paths
  • causes turbolinks to check if an asset has been updated and if so loads it into the page
  • if you add an erb extension to a CSS asset (for example, application.css.erb), then helpers like asset_path are available in your CSS rules
  • If you add an erb extension to a JavaScript asset, making it something such as application.js.erb, then you can use the asset_path helper in your JavaScript code
  • The asset pipeline automatically evaluates ERB
  • data URI — a method of embedding the image data directly into the CSS file — you can use the asset_data_uri helper.
  • Sprockets will also look through the paths specified in config.assets.paths, which includes the standard application paths and any paths added by Rails engines.
  • image_tag
  • the closing tag cannot be of the style -%>
  • asset_data_uri
  • app/assets/javascripts/application.js
  • sass-rails provides -url and -path helpers (hyphenated in Sass, underscored in Ruby) for the following asset classes: image, font, video, audio, JavaScript and stylesheet.
  • Rails.application.config.assets.compress
  • In JavaScript files, the directives begin with //=
  • The require_tree directive tells Sprockets to recursively include all JavaScript files in the specified directory into the output.
  • manifest files contain directives — instructions that tell Sprockets which files to require in order to build a single CSS or JavaScript file.
  • You should not rely on any particular order among those
  • Sprockets uses manifest files to determine which assets to include and serve.
  • the family of require directives prevents files from being included twice in the output
  • Directives are processed top to bottom, but the order in which files are included by require_tree is unspecified.
  • If require_self is called more than once, only the last call is respected.
  • require directive is used to tell Sprockets the files you wish to require.
  • You need not supply the extensions explicitly. Sprockets assumes you are requiring a .js file when done from within a .js file
  • paths must be specified relative to the manifest file
  • require_directory
  • Rails 4 creates both app/assets/javascripts/application.js and app/assets/stylesheets/application.css regardless of whether the --skip-sprockets option is used when creating a new rails application.
  • The file extensions used on an asset determine what preprocessing is applied.
  • app/assets/stylesheets/application.css
  • Additional layers of preprocessing can be requested by adding other extensions, where each extension is processed in a right-to-left manner
  • require_self
  • use the Sass @import rule instead of these Sprockets directives.
  • Keep in mind that the order of these preprocessors is important
  • In development mode, assets are served as separate files in the order they are specified in the manifest file.
  • when these files are requested they are processed by the processors provided by the coffee-script and sass gems and then sent back to the browser as JavaScript and CSS respectively.
  • css.scss.erb
  • js.coffee.erb
  • By default Rails assumes that assets have been precompiled and will be served as static assets by your web server
  • with the Asset Pipeline the :cache and :concat options aren't used anymore
  • Assets are compiled and cached on the first request after the server is started
  • RAILS_ENV=production bundle exec rake assets:precompile
  • Debug mode can also be enabled in Rails helper methods
  • If you set config.assets.initialize_on_precompile to false, be sure to test rake assets:precompile locally before deploying
  • a rake task to compile the asset manifests and other files in the pipeline
  • RAILS_ENV=production bin/rake assets:precompile
  • a recipe to handle this in deployment
  • links the folder specified in config.assets.prefix to shared/assets
  • config/initializers/assets.rb
  • The initialize_on_precompile change tells the precompile task to run without invoking Rails
  • The X-Sendfile header is a directive to the web server to ignore the response from the application, and instead serve a specified file from disk
  • the jquery-rails gem which comes with Rails as the standard JavaScript library gem.
  • Possible options for JavaScript compression are :closure, :uglifier and :yui
  • concatenate assets
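
A sketch of what these pieces look like together — the default Rails 4 manifests and the precompile step (file contents follow the guide's defaults; the fingerprinted filename is illustrative):

    // app/assets/javascripts/application.js — in JavaScript, directives begin with //=
    //= require jquery
    //= require jquery_ujs
    //= require turbolinks
    //= require_tree .

    /* app/assets/stylesheets/application.css — CSS directives live inside a comment
     *= require_self
     *= require_tree .
     */

    $ RAILS_ENV=production bundle exec rake assets:precompile
    # produces fingerprinted files such as
    # public/assets/application-908e25f4bf641868d8683022a5b62f54.js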
張 旭

Introduction to GitLab Flow | GitLab - 0 views

  • Git allows a wide variety of branching strategies and workflows.
  • not integrated with issue tracking systems
  • The biggest problem is that many long-running branches emerge that all contain part of the changes.
  • ...47 more annotations...
  • most organizations practice continuous delivery, which means that your default branch can be deployed.
  • Merging everything into the master branch and frequently deploying means you minimize the amount of unreleased code, which is in line with lean and continuous delivery best practices.
  • you can deploy to production every time you merge a feature branch.
  • deploy a new version by merging master into the production branch.
  • you can have your deployment script create a tag on each deployment.
  • to have an environment that is automatically updated to the master branch
  • commits only flow downstream, ensuring that everything is tested in all environments.
  • first merge these bug fixes into master, and then cherry-pick them into the release branch.
  • Merging into master and then cherry-picking into release is called an “upstream first” policy
  • “merge request” since the final action is to merge the feature branch.
  • “pull request” since the first manual action is to pull the feature branch
  • it is common to protect the long-lived branches
  • After you merge a feature branch, you should remove it from the source control software
  • When you are ready to code, create a branch for the issue from the master branch. This branch is the place for any work related to this change.
  • A merge request is an online place to discuss the change and review the code.
  • If you open the merge request but do not assign it to anyone, it is a “Work In Progress” merge request.
  • Start the title of the merge request with “[WIP]” or “WIP:” to prevent it from being merged before it’s ready.
  • To automatically close linked issues, mention them with the words “fixes” or “closes,” for example, “fixes #14” or “closes #67.” GitLab closes these issues when the code is merged into the default branch.
  • If you have an issue that spans across multiple repositories, create an issue for each repository and link all issues to a parent issue.
  • With Git, you can use an interactive rebase (rebase -i) to squash multiple commits into one or reorder them.
  • you should never rebase commits you have pushed to a remote server.
  • Rebasing creates new commits for all your changes, which can cause confusion because the same change would have multiple identifiers.
  • if someone has already reviewed your code, rebasing makes it hard to tell what changed since the last review.
  • never rebase commits authored by other people.
  • it is a bad idea to rebase commits that you have already pushed.
  • always use the “no fast-forward” (--no-ff) strategy when you merge manually.
  • you should try to avoid merge commits in feature branches
  • people avoid merge commits by just using rebase to reorder their commits after the commits on the master branch. Using rebase prevents a merge commit when merging master into your feature branch, and it creates a neat linear history.
  • Sometimes you can reuse recorded resolutions (rerere), but merging is better since you only have to resolve conflicts once.
  • not frequently merge master into the feature branch.
  • the three reasons to merge master into a feature branch: utilizing new code, resolving merge conflicts, and updating long-running branches
  • if you need code that was introduced in master after you created the feature branch, you can often solve this by just cherry-picking a commit.
  • If your feature branch has a merge conflict, creating a merge commit is a standard way of solving this.
  • keep your feature branches short-lived.
  • split your features into smaller units of work
  • you should try to prevent merge commits, but not eliminate them.
  • Your codebase should be clean, but your history should represent what actually happened.
  • Splitting up work into individual commits provides context for developers looking at your code later.
  • push your feature branch frequently, even when it is not yet ready for review.
  • Commit often and push frequently
  • A commit message should reflect your intention, not just the contents of the commit.
  • Testing before merging
  • When using GitLab flow, developers create their branches from this master branch, so it is essential that it never breaks. Therefore, each merge request must be tested before it is accepted.
  • When creating a feature branch, always branch from an up-to-date master
張 旭

DNS - FreeIPA - 0 views

  • FreeIPA DNS integration allows administrators to manage and serve DNS records in a domain using the same CLI or Web UI as when managing identities and policies.
  • Single-master DNS is error prone, especially for inexperienced admins.
  • a decent Kerberos experience.
  • ...14 more annotations...
  • Goal is NOT to provide general-purpose DNS server.
  • DNS component in FreeIPA is optional and user may choose to manage all DNS records manually in other third party DNS server.
  • Clients can be configured to automatically run DNS updates (nsupdate) when their IP address changes, thus keeping their DNS records up-to-date. DNS zones can be configured to synchronize a client's reverse (PTR) record along with the forward (A, AAAA) DNS record.
  • It is extremely hard to change DNS domain in existing installations so it is better to think ahead.
  • You should only use names which are delegated to you by the parent domain.
  • Not respecting this rule will cause problems sooner or later!
  • DNSSEC validation.
  • For internal names you can use an arbitrary sub-domain in a DNS sub-tree you own, e.g. int.example.com. Always respect the rules from the previous section.
  • General advice about DNS views is do not use them because views make DNS deployment harder to maintain and security benefits are questionable (when compared with ACL).
  • The DNS integration is based on the bind-dyndb-ldap project, which enhances the BIND name server to use the FreeIPA server LDAP instance as a data backend (data are stored in the cn=dns entry, using the schema defined by bind-dyndb-ldap).
  • FreeIPA LDAP directory information tree is by default accessible to any user in the network
  • As DNS data are often considered sensitive, and since access to the cn=dns tree would be basically equal to being able to run a zone transfer against all FreeIPA-managed DNS zones, the contents of this tree in LDAP are hidden by default.
  • standard system log (/var/log/messages or system journal)
  • BIND configuration (/etc/named.conf) can be updated to produce a more detailed log.
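
A sketch of the dynamic update flow described above, assuming a Kerberos-enabled zone (GSS-TSIG via nsupdate -g) and hypothetical names and addresses:

    $ kinit host/client.int.example.com   # obtain Kerberos credentials
    $ nsupdate -g                         # -g enables GSS-TSIG
    > update delete client.int.example.com. A
    > update add client.int.example.com. 300 A 192.0.2.10
    > send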
張 旭

What is a CAA record? - DNSimple Help - 0 views

  • The purpose of the CAA record is to allow domain owners to declare which certificate authorities are allowed to issue a certificate for a domain.
  • If a CAA record is present, only the CAs listed in the record(s) are allowed to issue certificates for that hostname.
  • CAA records can set policy for the entire domain, or for specific hostnames.
  • ...4 more annotations...
  • The CAA record consists of a flags byte and a tag-value pair referred to as a ‘property’.
  • example.com. CAA 0 issue "letsencrypt.org"
  • each CAA record contains only one tag-value pair
  • dig google.com type257
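
A few illustrative CAA records for a hypothetical domain — each record carries a single tag-value pair, per the annotation above:

    example.com.  CAA 0 issue     "letsencrypt.org"        ; only this CA may issue
    example.com.  CAA 0 issuewild ";"                      ; forbid wildcard issuance
    example.com.  CAA 0 iodef     "mailto:caa@example.com" ; where to report violations

    $ dig example.com type257   # query the records, as with google.com above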
張 旭

Introduction to GitLab Flow | GitLab - 0 views

  • GitLab flow is a clearly defined set of best practices. It combines feature-driven development and feature branches with issue tracking.
  • In Git, you add files from the working copy to the staging area. After that, you commit them to your local repo. The third step is pushing to a shared remote repository.
  • branching model
  • ...68 more annotations...
  • The biggest problem is that many long-running branches emerge that all contain part of the changes.
  • It is a convention to call your default branch master and to mostly branch from and merge to this.
  • Nowadays, most organizations practice continuous delivery, which means that your default branch can be deployed.
  • Continuous delivery removes the need for hotfix and release branches, including all the ceremony they introduce.
  • Merging everything into the master branch and frequently deploying means you minimize the amount of unreleased code, which is in line with lean and continuous delivery best practices.
  • GitHub flow assumes you can deploy to production every time you merge a feature branch.
  • You can deploy a new version by merging master into the production branch. If you need to know what code is in production, you can just checkout the production branch to see.
  • Production branch
  • Environment branches
  • have an environment that is automatically updated to the master branch.
  • deploy the master branch to staging.
  • To deploy to pre-production, create a merge request from the master branch to the pre-production branch.
  • Go live by merging the pre-production branch into the production branch.
  • Release branches
  • work with release branches if you need to release software to the outside world.
  • each branch contains a minor version
  • After announcing a release branch, only add serious bug fixes to the branch.
  • merge these bug fixes into master, and then cherry-pick them into the release branch.
  • Merging into master and then cherry-picking into release is called an “upstream first” policy
  • Tools such as GitHub and Bitbucket choose the name “pull request” since the first manual action is to pull the feature branch.
  • Tools such as GitLab and others choose the name “merge request” since the final action is to merge the feature branch.
  • If you work on a feature branch for more than a few hours, it is good to share the intermediate result with the rest of the team.
  • the merge request automatically updates when new commits are pushed to the branch.
  • If the assigned person does not feel comfortable, they can request more changes or close the merge request without merging.
  • In GitLab, it is common to protect the long-lived branches, e.g., the master branch, so that most developers can’t modify them.
  • if you want to merge into a protected branch, assign your merge request to someone with maintainer permissions.
  • After you merge a feature branch, you should remove it from the source control software.
  • Having a reason for every code change helps to inform the rest of the team and to keep the scope of a feature branch small.
  • If there is no issue yet, create the issue
  • The issue title should describe the desired state of the system.
  • For example, the issue title “As an administrator, I want to remove users without receiving an error” is better than “Admin can’t remove users.”
  • create a branch for the issue from the master branch
  • If you open the merge request but do not assign it to anyone, it is a “Work In Progress” merge request.
  • Start the title of the merge request with [WIP] or WIP: to prevent it from being merged before it’s ready.
  • When they press the merge button, GitLab merges the code and creates a merge commit that makes this event easily visible later on.
  • Merge requests always create a merge commit, even when the branch could be merged without one. This merge strategy is called “no fast-forward” in Git.
  • Suppose that a branch is merged but a problem occurs and the issue is reopened. In this case, it is no problem to reuse the same branch name since the first branch was deleted when it was merged.
  • At any time, there is at most one branch for every issue.
  • It is possible that one feature branch solves more than one issue.
  • GitLab closes these issues when the code is merged into the default branch.
  • If you have an issue that spans across multiple repositories, create an issue for each repository and link all issues to a parent issue.
  • use an interactive rebase (rebase -i) to squash multiple commits into one or reorder them.
  • you should never rebase commits you have pushed to a remote server.
  • Rebasing creates new commits for all your changes, which can cause confusion because the same change would have multiple identifiers.
  • if someone has already reviewed your code, rebasing makes it hard to tell what changed since the last review.
  • never rebase commits authored by other people.
  • it is a bad idea to rebase commits that you have already pushed.
  • If you revert a merge commit and then change your mind, revert the revert commit to redo the merge.
  • Often, people avoid merge commits by just using rebase to reorder their commits after the commits on the master branch.
  • Using rebase prevents a merge commit when merging master into your feature branch, and it creates a neat linear history.
  • every time you rebase, you have to resolve similar conflicts.
  • Sometimes you can reuse recorded resolutions (rerere), but merging is better since you only have to resolve conflicts once.
  • A good way to prevent creating many merge commits is to not frequently merge master into the feature branch.
  • keep your feature branches short-lived.
  • Most feature branches should take less than one day of work.
  • If your feature branches often take more than a day of work, try to split your features into smaller units of work.
  • You could also use feature toggles to hide incomplete features so you can still merge back into master every day.
  • you should try to prevent merge commits, but not eliminate them.
  • Your codebase should be clean, but your history should represent what actually happened.
  • If you rebase code, the history is incorrect, and there is no way for tools to remedy this because they can’t deal with changing commit identifiers
  • Commit often and push frequently
  • You should push your feature branch frequently, even when it is not yet ready for review.
  • A commit message should reflect your intention, not just the contents of the commit.
  • each merge request must be tested before it is accepted.
  • test the master branch after each change.
  • If new commits in master cause merge conflicts with the feature branch, merge master back into the branch to make the CI server re-run the tests.
  • When creating a feature branch, always branch from an up-to-date master.
  • Do not merge from upstream again if your code can work and merge cleanly without doing so.
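
Two of the operations above, sketched with hypothetical refs — squashing local, never-pushed commits, and redoing a reverted merge by reverting the revert:

    # squash or reorder the last three commits (only safe before pushing)
    git rebase -i HEAD~3

    # the merge was undone earlier with: git revert -m 1 <merge-commit-sha>
    # to redo the merge, revert that revert commit
    git revert <revert-commit-sha>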
張 旭

Warnings, Notes, & Tips - 0 views

  • Because AS3 manages topology records globally in /Common, it is required that the records be managed only through AS3, as it treats the records declaratively.
  • If a record is added outside of AS3, it will be removed if it is not included in the next AS3 declaration for topology records (AS3 completely overwrites non-AS3 topologies when a declaration is submitted).
  • using AS3 to delete a tenant (for example, sending DELETE to the /declare/<TENANT> endpoint) that contains GSLB topologies will completely remove ALL GSLB topologies from the BIG-IP.
  • ...12 more annotations...
  • When posting a large declaration (hundreds of application services in a single declaration), you may experience a 500 error stating that the save sys config operation failed.
  • Even if you have asynchronous mode set to false, after 45 seconds AS3 sets asynchronous mode to true (API swap), and returns an async response.
  • When creating a new tenant using AS3, it must not use the same name as a partition you separately create on the target BIG-IP system.
  • If you use the same name and then post the declaration, AS3 overwrites (or removes) the existing partition completely, including all configuration objects in that partition.
  • if you use AS3 to create a tenant (which creates a BIG-IP partition), manually adding configuration objects to the partition created by AS3 can have unexpected results
  • When you delete the Tenant using AS3, the system deletes both virtual servers.
  • if a Firewall_Address_List contains zero addresses, a dummy IPv6 address of ::1:5ee:bad:c0de is added in order to maintain a valid Firewall_Address_List. If an address is added to the list, the dummy address is removed.
  • use /mgmt/shared/appsvcs/declare?async=true if you have a particularly large declaration which will take a long time to process.
  • reviewing the Sizing BIG-IP Virtual Editions section (page 7) of Deploying BIG-IP VEs in a Hyper-Converged Infrastructure
  • To test whether your system has AS3 installed or not, use GET with the /mgmt/shared/appsvcs/info URI.
  • You may find it more convenient to put multi-line texts such as iRules into AS3 declarations by first encoding them in Base64.
  • no matter your BIG-IP user account name, audit logs show all messages from admin and not the specific user name.
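
A sketch of the two API calls mentioned above (host and credentials are hypothetical):

    # check whether AS3 is installed
    curl -sku admin:secret https://bigip.example.com/mgmt/shared/appsvcs/info

    # POST a particularly large declaration asynchronously
    curl -sku admin:secret -X POST -H "Content-Type: application/json" \
         -d @declaration.json \
         "https://bigip.example.com/mgmt/shared/appsvcs/declare?async=true"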
張 旭

MySQL :: MySQL 5.7 Reference Manual :: 19.2.1.2 Configuring an Instance for Group Repli... - 0 views

  • store replication metadata in system tables instead of files
  • collect the write set and encode it as a hash using the XXHASH64 hashing algorithm
  • not start operations automatically when the server starts
  • ...10 more annotations...
  • for incoming connections from other members in the group
  • The server listens on this port for member-to-member connections. This port must not be used for user applications at all
  • The loose- prefix used for the group_replication variables above instructs the server to continue to start if the Group Replication plugin has not been loaded at the time the server is started.
  • For example, if each server instance is on a different machine use the IP and port of the machine, such as 10.0.0.1:33061. The recommended port for group_replication_local_address is 33061
  • does not need to list all members in the group
  • The server that starts the group does not make use of this option, since it is the initial server and as such, it is in charge of bootstrapping the group
  • start the bootstrap member first, and let it create the group
  • Creating a group and joining multiple members at the same time is not supported.
  • must only be used on one server instance at any time
  • Disable this option after the first server instance comes online
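
A minimal sketch of the settings and bootstrap sequence these notes describe (the group name UUID and addresses are hypothetical; note the loose- prefix and the recommended port 33061):

    # my.cnf (excerpt)
    loose-group_replication_group_name = "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
    loose-group_replication_start_on_boot = OFF
    loose-group_replication_local_address = "10.0.0.1:33061"
    loose-group_replication_group_seeds = "10.0.0.1:33061,10.0.0.2:33061,10.0.0.3:33061"
    loose-group_replication_bootstrap_group = OFF

    -- on the bootstrap member only, then disable the option again
    mysql> SET GLOBAL group_replication_bootstrap_group = ON;
    mysql> START GROUP_REPLICATION;
    mysql> SET GLOBAL group_replication_bootstrap_group = OFF;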
張 旭

深度解读 - TDD(测试驱动开发) - 简书 - 0 views

  • The principle of TDD: before writing the feature code, first write the unit-test code; the tests determine what production code needs to be written.
  • TDD does not directly improve your design skills; it only gives you more opportunities and more safety to improve the design.
  • When writing a test, focus only on the requirement and the program's inputs and outputs, not on the intermediate steps
  • ...21 more annotations...
  • When writing the implementation, ignore other requirements; satisfy the current small requirement in the simplest way possible
  • Focus on one point at a time, so the mental load stays small
  • Clarify requirement details up front
  • Write a failing test: it describes one small requirement, and you only need to care about inputs and outputs; at this stage you don't need to care about the implementation at all
  • Focus on implementing the current small requirement in the fastest way possible, without caring about other requirements or how dreadful the code quality looks
  • With neither requirements to think about nor implementation pressure, you only need to find the smells in the code and remove each one with a refactoring move, turning it into clean code
  • Before starting TDD, split the task: break one large requirement into several small ones.
  • Follows the Given-When-Then format
  • Contains assertions
  • Can be executed repeatedly
  • Unit-test infrastructure
  • Only write test code where you lack confidence
  • Wherever there is a comment, you can extract a method and let the method name replace the comment
  • Pick the simplest use case and start writing, passing the test with the simplest possible code. Gradually add tests to make the code more complex, and use refactoring to drive out the design.
  • With refactoring as a tool, the pressure of doing design drops a lot: protected by the tests, we can refactor the implementation at any time
  • Iterating on paper is always faster than changing code
  • Break down tasks and list Examples
  • If you take steps that are too big, you will easily hurt yourself.
  • For exploratory technical research (a Spike) that won't be maintained long-term, and where the cost of building test infrastructure is high, manual testing is fine
  • TDD is the developers' own business
  • After assigning a task, have the newcomer draw a diagram first, so you can give feedback on the design before they start writing code.
張 旭

bbatsov/rails-style-guide: A community-driven Ruby on Rails 4 style guide - 0 views

  • custom initialization code in config/initializers. The code in initializers executes on application startup
  • Keep initialization code for each gem in a separate file with the same name as the gem
  • Mark additional assets for precompilation
  • ...90 more annotations...
  • config/environments/production.rb
  • Create an additional staging environment that closely resembles the production one
  • Keep any additional configuration in YAML files under the config/ directory
  • Rails::Application.config_for(:yaml_file)
  • Use nested routes to express better the relationship between ActiveRecord models
  • if you need to nest routes more than 1 level deep then use the shallow: true option
  • namespaced routes to group related actions
  • Don't use match to define any routes unless there is need to map multiple request types among [:get, :post, :patch, :put, :delete] to a single action using :via option.
  • Keep the controllers skinny
  • all the business logic should naturally reside in the model
  • Share no more than two instance variables between a controller and a view.
  • using a template
  • Prefer render plain: over render text
  • Prefer corresponding symbols to numeric HTTP status codes
  • without abbreviations
  • Keep your models for business logic and data-persistence only
  • Avoid altering ActiveRecord defaults (table names, primary key, etc)
  • Group macro-style methods (has_many, validates, etc) in the beginning of the class definition
  • Prefer has_many :through to has_and_belongs_to_many
  • self[:attribute]
  • self[:attribute] = value
  • validates
  • Keep custom validators under app/validators
  • Consider extracting custom validators to a shared gem
  • preferable to make a class method instead which serves the same purpose as the named scope
  • returns an ActiveRecord::Relation object
  • .update_attributes
  • Override the to_param method of the model
  • Use the friendly_id gem. It allows creation of human-readable URLs by using some descriptive attribute of the model instead of its id
  • find_each to iterate over a collection of AR objects
  • .find_each
  • .find_each
  • Looping through a collection of records from the database (using the all method, for example) is very inefficient since it will try to instantiate all the objects at once
  • always call before_destroy callbacks that perform validation with prepend: true
  • Define the dependent option to the has_many and has_one associations
  • always use the exception raising bang! method or handle the method return value.
  • When persisting AR objects
  • Avoid string interpolation in queries
  • param will be properly escaped
  • Consider using named placeholders instead of positional placeholders
  • use of find over where when you need to retrieve a single record by id
  • use of find_by over where and find_by_attribute
  • use of where.not over SQL
  • use heredocs with squish
  • Keep the schema.rb (or structure.sql) under version control.
  • Use rake db:schema:load instead of rake db:migrate to initialize an empty database
  • Enforce default values in the migrations themselves instead of in the application layer
  • change_column_default
  • imposing data integrity from the Rails app is impossible
  • use the change method instead of up and down methods.
  • constructive migrations
  • use models in migrations, make sure you define them so that you don't end up with broken migrations in the future
  • Don't use non-reversible migration commands in the change method.
  • In this case, the block will be used by create_table in rollback
  • Never call the model layer directly from a view
  • Never make complex formatting in the views, export the formatting to a method in the view helper or the model.
  • When the labels of an ActiveRecord model need to be translated, use the activerecord scope
  • Separate the texts used in the views from translations of ActiveRecord attributes
  • Place the locale files for the models in a folder locales/models
  • the texts used in the views in folder locales/views
  • config/application.rb: config.i18n.load_path += Dir[Rails.root.join('config', 'locales', '**', '*.{rb,yml}')]
  • I18n.t
  • I18n.l
  • Use "lazy" lookup for the texts used in views.
  • Use the dot-separated keys in the controllers and models
  • Reserve app/assets for custom stylesheets, javascripts, or images
  • Third party code such as jQuery or bootstrap should be placed in vendor/assets
  • Provide both HTML and plain-text view templates
  • config.action_mailer.raise_delivery_errors = true
  • Use a local SMTP server like Mailcatcher in the development environment
  • Provide default settings for the host name
  • The _url methods include the host name and the _path methods don't
  • _url
  • Format the from and to addresses properly
  • default from:
  • when sending HTML emails, all styles should be inline
  • Sending emails while generating the page response should be avoided. It causes delays in loading of the page, and the request can time out if multiple emails are sent.
  • .start_with?
  • .end_with?
  • &.
  • Config your timezone accordingly in application.rb
  • config.active_record.default_timezone = :local
  • it can be only :utc or :local
  • Don't use Time.parse
  • Time.zone.parse
  • Don't use Time.now
  • Time.zone.now
  • Put gems used only for development or testing in the appropriate group in the Gemfile
  • Add all OS X specific gems to a darwin group in the Gemfile, and all Linux specific gems to a linux group
  • Do not remove the Gemfile.lock from version control.
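
For instance, initializing an empty database from the version-controlled schema rather than replaying every migration, per the guideline above, might look like this sketch:

    $ bin/rake db:create
    $ bin/rake db:schema:load   # loads schema.rb instead of running all migrations
    $ bin/rake db:seed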
張 旭

Ruby and AOP: Decouple your code even more - Arkency Blog - 0 views

  • Dark Parts in our apps - persistence, networking, logging, notifications… these parts are scattered in our code
  • aspect-oriented programming!
  • components are parts we can easily encapsulate into some kind of code abstraction - methods, objects or procedures.
  • ...16 more annotations...
  • application’s logic is a great example of a component
  • Aspects cross-cut our application - when we use some kind of persistence (e.g. a database) or network communication (such as ZMQ sockets) our components need to know about it.
  • Aspect-oriented programming aims to get rid of cross-cuts by separating aspect code from component code using injections of our aspects in certain join points in our component code.
  • It’s responsible for pushing snippets scenario
  • SRP-conformant object
  • the join points in Ruby
  • advice
    • 張 旭
       
      terminology used in AOP
  • In most cases after and before advice are sufficient.
  • what does it mean to “evaluate code around” something? In our case it means: Don’t run this method. Take it and push to my advice as an argument and evaluate this advice
  • to provide a join point
  • You’ll often see empty methods in code written in the AOP paradigm
  • provide aspect code to link with our use case
  • use case is a pure domain object, without even knowing it’s connected with some kind of persistence and logging layer.
  • Aspect-oriented programming is fixing the problem with polluting pure logic objects with technical context of our applications.
  • we treat our glues as a configuration part, not the logic part of our apps.
  • Glues should not contain any logic at all
張 旭

Automated Nginx Reverse Proxy for Docker - 0 views

  • Docker containers are assigned random IPs and ports which makes addressing them much more complicated from a client perspective
  • Binding the container to the hosts port can prevent multiple containers from running on the same host. For example, only one container can bind to port 80 at a time.
  • Docker provides a remote API to inspect containers and access their IP, Ports and other configuration meta-data.
  • ...1 more annotation...
  • nginx template can be used to generate a reverse proxy configuration for docker containers using virtual hosts for routing.
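
A sketch of that setup, assuming the jwilder/nginx-proxy image (which bundles the nginx template with docker-gen); the VIRTUAL_HOST value and the my-web-app image are hypothetical:

    # run the proxy; it watches containers via the mounted Docker socket
    docker run -d -p 80:80 \
        -v /var/run/docker.sock:/tmp/docker.sock:ro \
        jwilder/nginx-proxy

    # any container started with VIRTUAL_HOST set is routed automatically
    docker run -d -e VIRTUAL_HOST=app.example.com my-web-app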
張 旭

Understanding the Nginx Configuration File Structure and Configuration Contexts | Digit... - 0 views

  • discussing the basic structure of an Nginx configuration file along with some guidelines on how to design your files
  • /etc/nginx/nginx.conf
  • In Nginx parlance, the areas that these brackets define are called "contexts" because they contain configuration details that are separated according to their area of concern
  • ...50 more annotations...
  • contexts can be layered within one another
  • if a directive is valid in multiple nested scopes, a declaration in a broader context will be passed on to any child contexts as default values.
  • The children contexts can override these values at will
  • Nginx will error out on reading a configuration file with directives that are declared in the wrong context.
  • The most general context is the "main" or "global" context
  • Any directive that exist entirely outside of these blocks is said to inhabit the "main" context
  • The main context represents the broadest environment for Nginx configuration.
  • The "events" context is contained within the "main" context. It is used to set global options that affect how Nginx handles connections at a general level.
  • Nginx uses an event-based connection processing model, so the directives defined within this context determine how worker processes should handle connections.
  • the connection processing method is automatically selected based on the most efficient choice that the platform has available
  • a worker will only take a single connection at a time
  • When configuring Nginx as a web server or reverse proxy, the "http" context will hold the majority of the configuration.
  • The http context is a sibling of the events context, so they should be listed side-by-side, rather than nested
  • fine-tune the TCP keep alive settings (keepalive_disable, keepalive_requests, and keepalive_timeout)
  • The "server" context is declared within the "http" context.
  • multiple declarations
  • each instance defines a specific virtual server to handle client requests
  • Each client request will be handled according to the configuration defined in a single server context, so Nginx must decide which server context is most appropriate based on details of the request.
  • listen: The ip address / port combination that this server block is designed to respond to.
  • server_name: This directive is the other component used to select a server block for processing.
  • "Host" header
  • configure files to try to respond to requests (try_files)
  • issue redirects and rewrites (return and rewrite)
  • set arbitrary variables (set)
  • Location contexts share many relational qualities with server contexts
  • multiple location contexts can be defined, each location is used to handle a certain type of client request, and each location is selected by virtue of matching the location definition against the client request through a selection algorithm
  • Location blocks live within server contexts and, unlike server blocks, can be nested inside one another.
  • While server contexts are selected based on the requested IP address/port combination and the host name in the "Host" header, location blocks further divide up the request handling within a server block by looking at the request URI
  • The request URI is the portion of the request that comes after the domain name or IP address/port combination.
  • New directives at this level allow you to reach locations outside of the document root (alias), mark the location as only internally accessible (internal), and proxy to other servers or locations (using http, fastcgi, scgi, and uwsgi proxying).
  • These can then be used to do A/B testing by providing different content to different hosts.
  • configures Perl handlers for the location they appear in
  • set the value of a variable depending on the value of another variable
  • used to map MIME types to the file extensions that should be associated with them.
  • this context defines a named pool of servers that Nginx can then proxy requests to
  • The upstream context should be placed within the http context, outside of any specific server contexts.
  • The upstream context can then be referenced by name within server or location blocks to pass requests of a certain type to the pool of servers that have been defined.
  • function as a high performance mail proxy server
  • The mail context is defined within the "main" or "global" context (outside of the http context).
  • Nginx has the ability to redirect authentication requests to an external authentication server
  • the if directive in Nginx will execute the instructions contained if a given test returns "true".
  • Since Nginx will test conditions of a request with many other purpose-made directives, if should not be used for most forms of conditional execution.
  • The limit_except context is used to restrict the use of certain HTTP methods within a location context.
  • The result of the above example is that any client can use the GET and HEAD verbs, but only clients coming from the 192.168.1.1/24 subnet are allowed to use other methods.
  • Many directives are valid in more than one context
  • it is usually best to declare directives in the highest context to which they are applicable, and overriding them in lower contexts as necessary.
  • Declaring at higher levels provides you with a sane default
  • Nginx already engages in a well-documented selection algorithm for things like selecting server blocks and location blocks.
  • instead of relying on rewrites to get a user supplied request into the format that you would like to work with, you should try to set up two blocks for the request, one of which represents the desired method, and the other that catches messy requests and redirects (and possibly rewrites) them to your correct block.
  • incorrect requests can get by with a redirect rather than a rewrite, which should execute with lower overhead.
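
A skeleton nginx.conf showing how the contexts discussed above nest (all values illustrative):

    user www-data;            # main (global) context
    worker_processes auto;

    events {                  # connection-level options
        worker_connections 1024;
    }

    http {                    # sibling of events, not nested inside it
        upstream app_pool {   # named pool referenced from server/location blocks
            server 10.0.0.10:8080;
            server 10.0.0.11:8080;
        }

        server {              # one virtual server per server block
            listen 80;
            server_name example.com;

            location / {      # selected by matching the request URI
                proxy_pass http://app_pool;
            }
        }
    }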
張 旭

Upgrading kubeadm clusters | Kubernetes - 0 views

  • Swap must be disabled.
  • read the release notes carefully.
  • back up any important components, such as app-level state stored in a database.
  • ...16 more annotations...
  • All containers are restarted after upgrade, because the container spec hash value is changed.
  • The upgrade procedure on control plane nodes should be executed one node at a time.
  • /etc/kubernetes/admin.conf
  • kubeadm upgrade also automatically renews the certificates that it manages on this node. To opt-out of certificate renewal the flag --certificate-renewal=false can be used.
  • Manually upgrade your CNI provider plugin.
  • sudo systemctl daemon-reload
  • sudo systemctl restart kubelet
  • If kubeadm upgrade fails and does not roll back, for example because of an unexpected shutdown during execution, you can run kubeadm upgrade again.
  • To recover from a bad state, you can also run kubeadm upgrade apply --force without changing the version that your cluster is running.
  • kubeadm-backup-etcd contains a backup of the local etcd member data for this control plane Node.
  • the contents of this folder can be manually restored in /var/lib/etcd
  • kubeadm-backup-manifests contains a backup of the static Pod manifest files for this control plane Node.
  • the contents of this folder can be manually restored in /etc/kubernetes/manifests
  • Enforces the version skew policies.
  • Upgrades the control plane components or rollbacks if any of them fails to come up.
  • Creates new certificate and key files of the API server and backs up old files if they're about to expire in 180 days.
  • backup folders under /etc/kubernetes/tmp
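
A sketch of the overall flow these notes come from (node name and target version are hypothetical):

    # on the control plane node being upgraded
    kubeadm upgrade plan                  # enforces skew policies, lists targets
    sudo kubeadm upgrade apply v1.22.3    # upgrades components, renews certificates

    # for each node: drain, upgrade the kubelet package, restart it, uncordon
    kubectl drain node-1 --ignore-daemonsets
    # ... upgrade kubelet, then daemon-reload and restart it as shown above ...
    kubectl uncordon node-1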
張 旭

ALB vs ELB | Differences Between an ELB and an ALB on AWS | Sumo Logic - 0 views

  • If you use AWS, you have two load-balancing options: ELB and ALB.
  • An ELB is a software-based load balancer which can be set up and configured in front of a collection of AWS Elastic Compute (EC2) instances.
  • The load balancer serves as a single entry point for consumers of the EC2 instances and distributes incoming traffic across all machines available to receive requests.
  • ...14 more annotations...
  • the ELB also performs a vital role in improving the fault tolerance of the services which it fronts.
  • The Open Systems Interconnection Model, or OSI Model, is a conceptual model which is used to facilitate communications between different computing systems.
  • Layer 1 is the physical layer, and represents the physical medium across which the request is sent.
  • Layer 2 describes the data link layer
  • Layer 3 (the network layer)
  • Layer 7, which serves the application layer.
  • The Classic ELB operates at Layer 4. Layer 4 represents the transport layer, and is controlled by the protocol being used to transmit the request.
  • A network device, of which the Classic ELB is an example, reads the protocol and port of the incoming request, and then routes it to one or more backend servers.
  • the ALB operates at Layer 7. Layer 7 represents the application layer, and as such allows for the redirection of traffic based on the content of the request.
  • Whereas a request to a specific URL backed by a Classic ELB would only enable routing to a particular pool of homogeneous servers, the ALB can route based on the content of the URL, and direct to a specific subgroup of backing servers existing in a heterogeneous collection registered with the load balancer.
  • The Classic ELB is a simple load balancer, is easy to configure
  • As organizations move towards microservice architecture or adopt a container-based infrastructure, the ability to merely map a single address to a specific service becomes more complicated and harder to maintain.
  • the ALB manages routing based on user-defined rules.
  • route traffic to different services based on either the host or the content of the path contained within that URL.
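
As an illustration of that Layer-7, content-based routing, an ALB listener rule created with the AWS CLI might look like this sketch (ARNs abbreviated and hypothetical):

    aws elbv2 create-rule \
        --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/... \
        --priority 10 \
        --conditions Field=path-pattern,Values='/api/*' \
        --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/api-servers/...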
張 旭

Operator pattern - Kubernetes - 1 views

  • The Operator pattern aims to capture the key aim of a human operator who is managing a service or set of services
  • Operators are software extensions to Kubernetes that make use of custom resources to manage applications and their components
  • The Operator pattern captures how you can write code to automate a task beyond what Kubernetes itself provides.
  • ...7 more annotations...
  • Operators are clients of the Kubernetes API that act as controllers for a Custom Resource.
  • choosing a leader for a distributed application without an internal member election process
  • publishing a Service to applications that don't support Kubernetes APIs to discover them
  • The core of the Operator is code to tell the API server how to make reality match the configured resources.
  • If you add a new SampleDB, the operator sets up PersistentVolumeClaims to provide durable database storage, a StatefulSet to run SampleDB and a Job to handle initial configuration. If you delete it, the Operator takes a snapshot, then makes sure that the StatefulSet and Volumes are also removed.
  • to deploy an Operator is to add the Custom Resource Definition and its associated Controller to your cluster.
  • Once you have an Operator deployed, you'd use it by adding, modifying or deleting the kind of resource that the Operator uses.
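
Using the SampleDB example above, deploying and then using an Operator could look like this sketch (the API group, kind and fields are hypothetical):

    # sampledb.yaml — a custom resource the Operator's controller watches
    apiVersion: example.com/v1alpha1
    kind: SampleDB
    metadata:
      name: my-database
    spec:
      replicas: 3
      storage: 10Gi

    $ kubectl apply -f sampledb.yaml       # Operator creates PVCs, StatefulSet, Job
    $ kubectl delete sampledb my-database  # Operator snapshots, then tears down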
crazylion lee

Serveo: expose local servers to the internet using SSH - 0 views

shared by crazylion lee on 22 Jul 19
  •  ssh -R 80:example.com:80 serveo.net
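
For example, exposing a local development server listening on port 3000 might look like the following (Serveo assigns a public subdomain):

    ssh -R 80:localhost:3000 serveo.net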
張 旭

How to Benchmark Performance of MySQL & MariaDB Using SysBench | Severalnines - 1 views

  • SysBench is a C binary which uses LUA scripts to execute benchmarks
  • support for parallelization in the LUA scripts, multiple queries can be executed in parallel
  • by default, benchmarks which cover most of the cases - OLTP workloads, read-only or read-write, primary key lookups and primary key updates.
  • ...21 more annotations...
  • SysBench is not a tool which you can use to tune configurations of your MySQL servers (unless you prepared LUA scripts with custom workload or your workload happen to be very similar to the benchmark workloads that SysBench comes with)
  • it is great for is to compare performance of different hardware.
  • Every new server acquired should go through a warm-up period during which you will stress it to pinpoint potential hardware defects
  • by executing OLTP workload which overloads the server, or you can also use dedicated benchmarks for CPU, disk and memory.
  • bulk_insert.lua. This test can be used to benchmark the ability of MySQL to perform multi-row inserts.
  • All oltp_* scripts share a common table structure. First two of them (oltp_delete.lua and oltp_insert.lua) execute single DELETE and INSERT statements.
  • oltp_point_select, oltp_update_index and oltp_update_non_index. These will execute a subset of queries - primary key-based selects, index-based updates and non-index-based updates.
  • you can run different workload patterns using the same benchmark.
  • Warmup helps to identify “regular” throughput by executing benchmark for a predefined time, allowing to warm up the cache, buffer pools etc.
  • By default SysBench will attempt to execute queries as fast as possible. To simulate slower traffic this option may be used. You can define here how many transactions should be executed per second.
  • SysBench gives you ability to generate different types of data distribution.
  • decide if SysBench should use prepared statements (as long as they are available in the given datastore - for MySQL it means PS will be enabled by default) or not.
  • sysbench ./sysbench/src/lua/oltp_read_write.lua  help
  • By default, SysBench will attempt to execute queries in explicit transaction. This way the dataset will stay consistent and not affected: SysBench will, for example, execute INSERT and DELETE on the same row, making sure the data set will not grow (impacting your ability to reproduce results).
  • specify error codes from MySQL which SysBench should ignore (and not kill the connection).
  • the two most popular benchmarks - OLTP read only and OLTP read/write.
  • 1 million rows will result in ~240 MB of data. Ten tables with 1,000,000 rows each equals ~2.4 GB.
  • by default, SysBench looks for ‘sbtest’ schema which has to exist before you prepare the data set. You may have to create it manually.
  • pass ‘--histogram’ argument to SysBench
  • ~48GB of data (20 tables, 10 000 000 rows each).
  • if you don’t understand why the performance was like it was, you may draw incorrect conclusions out of the benchmarks.
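
A sketch of the prepare/run/cleanup cycle for the OLTP read/write benchmark discussed above (connection details are hypothetical; the sbtest schema must already exist):

    # seed 10 tables with 1,000,000 rows each
    sysbench oltp_read_write --db-driver=mysql --mysql-host=10.0.0.5 \
        --mysql-user=sbtest --mysql-password=secret \
        --tables=10 --table-size=1000000 prepare

    # run for 300 s on 16 threads, reporting every second, with a latency histogram
    sysbench oltp_read_write --db-driver=mysql --mysql-host=10.0.0.5 \
        --mysql-user=sbtest --mysql-password=secret \
        --tables=10 --table-size=1000000 \
        --threads=16 --time=300 --report-interval=1 --histogram run

    # drop the test tables afterwards
    sysbench oltp_read_write --db-driver=mysql --mysql-host=10.0.0.5 \
        --mysql-user=sbtest --mysql-password=secret --tables=10 cleanup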
張 旭

Introducing CNAME Flattening: RFC-Compliant CNAMEs at a Domain's Root - 0 views

  • you can now safely use a CNAME record, as opposed to an A record that points to a fixed IP address, as your root record in CloudFlare DNS without triggering a number of edge case error conditions because you’re violating the DNS spec.
  • CNAME Flattening allowed us to use a root domain while still maintaining DNS fault-tolerance across multiple IP addresses.
  • Traditionally, the root record of a domain needed to point to an IP address (known as an A -- for "address" -- Record).
  • ...13 more annotations...
  • WordPlumblr allows its users to use custom domains that point to the WordPlumblr infrastructure
  • A CNAME is an alias. It allows one domain to point to another domain which, eventually if you follow the CNAME chain, will resolve to an A record and IP address.
  • For example, WordPlumblr might have assigned the CNAME 6equj5.wordplumblr.com for Foo.com. Foo.com and the other customers may have all initially resolved, at the end of the CNAME chain, to the same IP address.
  • you usually don't want to address memory directly but, instead, you set up a pointer to a block of memory where you're going to store something. If the operating system needs to move the memory around then it just updates the pointer to point to wherever the chunk of memory has been moved to.
  • CNAMEs work great for subdomains like www.foo.com or blog.foo.com. Unfortunately, they don't work for a naked domain like foo.com itself.
  • the DNS spec enshrined that the root record -- the naked domain without any subdomain -- could not be a CNAME.
  • Technically, the root could be a CNAME but the RFCs state that once a record has a CNAME it can't have any other entries associated with it
  • a way to support a CNAME at the root, but still follow the RFC and return an IP address for any query for the root record.
  • extended our authoritative DNS infrastructure to, in certain cases, act as a kind of DNS resolver.
  • if there's a CNAME at the root, rather than returning that record directly we recurse through the CNAME chain ourselves until we find an A Record.
  • allows the flexibility of having CNAMEs at the root without breaking the DNS specification.
  • We cache the CNAME responses -- respecting the DNS TTLs, just like a recursor should -- which means often we have the answer without having to traverse the chain.
  • CNAME flattening solved email resolution errors for us which was very key.
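
Conceptually, the flattening looks like this: the zone owner configures a CNAME at the root, but resolvers querying the root still receive an A record (the alias target is the article's example; the IP is illustrative):

    ; configured at the authoritative server
    foo.com.  CNAME  6equj5.wordplumblr.com.

    ; what a resolver actually gets back
    $ dig foo.com A +short
    203.0.113.7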
張 旭

Service | Kubernetes - 0 views

  • Each Pod gets its own IP address
  • Pods are nonpermanent resources.
  • Kubernetes Pods are created and destroyed to match the state of your cluster
  • ...23 more annotations...
  • In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service).
  • The set of Pods targeted by a Service is usually determined by a selector
  • If you're able to use Kubernetes APIs for service discovery in your application, you can query the API server for Endpoints, that get updated whenever the set of Pods in a Service changes.
  • A Service in Kubernetes is a REST object, similar to a Pod.
  • The name of a Service object must be a valid DNS label name
  • Kubernetes assigns this Service an IP address (sometimes called the "cluster IP"), which is used by the Service proxies
  • A Service can map any incoming port to a targetPort. By default and for convenience, the targetPort is set to the same value as the port field.
  • The default protocol for Services is TCP
  • As many Services need to expose more than one port, Kubernetes supports multiple port definitions on a Service object. Each port definition can have the same protocol, or a different one.
  • Because this Service has no selector, the corresponding Endpoints object is not created automatically. You can manually map the Service to the network address and port where it's running, by adding an Endpoints object manually
  • Endpoint IP addresses cannot be the cluster IPs of other Kubernetes Services
  • Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP
  • ClusterIP: Exposes the Service on a cluster-internal IP.
  • NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
  • LoadBalancer: Exposes the Service externally using a cloud provider's load balancer
  • ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
  • You can also use Ingress to expose your Service. Ingress is not a Service type, but it acts as the entry point for your cluster.
  • If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767).
  • The default for --nodeport-addresses is an empty list. This means that kube-proxy should consider all available network interfaces for NodePort.
  • you need to take care of possible port collisions yourself. You also have to use a valid port number, one that's inside the range configured for NodePort use.
  • Service is visible as <NodeIP>:spec.ports[*].nodePort and .spec.clusterIP:spec.ports[*].port
  • Choosing this value makes the Service only reachable from within the cluster.
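
A minimal Service manifest tying these pieces together — selector, port vs. targetPort, and a NodePort type (names and ports illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service          # must be a valid DNS label name
    spec:
      type: NodePort            # still reachable in-cluster via its ClusterIP
      selector:
        app: my-app             # the set of Pods this Service targets
      ports:
        - protocol: TCP         # TCP is the default
          port: 80              # the Service's own port
          targetPort: 8080      # container port; defaults to `port` if omitted
          nodePort: 30080       # optional; must be in --service-node-port-range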
張 旭

Moving away from Alpine - DEV Community - 0 views

  • it’s a lot of work to get packages that are not readily available in the Alpine repository.
  • things compiled in Alpine won’t be usable on Ubuntu, for example, and vice versa.
  • the difficulty in pinning package versions in Alpine.
  • ...2 more annotations...
  • Developers rely heavily on app logs via syslog (mounted /dev/log) and Alpine uses busybox syslog by default.
  • Ubuntu officially launched minimal ubuntu images for cloud / container use