
Larvata / Group items tagged "script"


crazylion lee

JavaPoly.js - Java(script) in the Browser - 0 views

  •  
    "JavaPoly.js is a library that polyfills native JVM support in the browser. It allows you to import your existing Java code, and invoke the code directly from Javascript."
張 旭

Best practices for writing Dockerfiles - Docker Documentation - 0 views

  • Run only one process per container
  • use current Official Repositories as the basis for your image
  • put long or complex RUN statements on multiple lines separated with backslashes.
  • CMD instruction should be used to run the software contained by your image, along with any arguments
  • CMD should be given an interactive shell (bash, python, perl, etc)
  • COPY them individually, rather than all at once
  • COPY is preferred
  • using ADD to fetch packages from remote URLs is strongly discouraged
  • always use COPY
  • The best use for ENTRYPOINT is to set the image's main command, allowing that image to be run as though it was that command (and then use CMD as the default flags).
  • the image name can double as a reference to the binary as shown in the command above
  • ENTRYPOINT instruction can also be used in combination with a helper script
  • The VOLUME instruction should be used to expose any database storage area, configuration storage, or files/folders created by your docker container.
  • use USER to change to a non-root user
  • avoid installing or using sudo
  • avoid switching USER back and forth frequently.
  • always use absolute paths for your WORKDIR
  • ONBUILD is only useful for images that are going to be built FROM a given image
  • The “onbuild” image will fail catastrophically if the new build's context is missing the resource being added.
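
A short Dockerfile sketch pulling several of these rules together; the base image, packages, user, and binary name are illustrative, not from the original doc:

    FROM ubuntu:16.04

    # Long RUN statements split across lines with backslashes.
    RUN apt-get update && apt-get install -y \
        curl \
        git \
     && rm -rf /var/lib/apt/lists/*

    # COPY files individually rather than all at once.
    COPY requirements.txt /app/

    # Change to a non-root user; avoid installing or using sudo.
    RUN groupadd -r app && useradd -r -g app app
    USER app

    # ENTRYPOINT sets the image's main command; CMD supplies default flags.
    ENTRYPOINT ["s3cmd"]
    CMD ["--help"]
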
張 旭

Asset Pipeline - Ruby on Rails Guide - 0 views

  • Manifest files or helper methods
    • 張 旭
       
      The manifest files here are application.css and application.js
  • Sprockets searches the paths in the order they appear in the search path. By default, this means assets in the app/assets folder take precedence and will mask corresponding files in the lib and vendor folders
  • If an asset is not included by a manifest file, it must be added to the precompile list, or it will be inaccessible in production.
  • If the application uses the jQuery library and many modules stored under lib/assets/javascripts/library_name, then lib/assets/javascripts/library_name/index.js serves as the manifest for that library. The manifest can list the required files in order, or simply use the require_tree directive.
  • With Turbolinks (enabled by default in Rails 4), the data-turbolinks-track option makes Turbolinks check whether an asset has been updated and, if so, load it into the page
  • config.assets.paths contains the standard paths plus any paths added by Rails engines.
  • Linking to a nonexistent asset (including linking to an empty string) raises an exception in the calling page.
  • The closing tag cannot use the -%> form
  • Sprockets uses manifest files to determine which assets to include and serve
  • In JavaScript files, Sprockets directives begin with //=. The file above uses the require and require_tree directives.
  • app/assets/javascripts/application.js
  • The require_tree directive tells Sprockets to recursively include all JavaScript files in the specified folder; the folder path must be relative to the manifest file. The require_directory directive loads all JavaScript files in a folder without recursing.
  • Sprockets processes directives top to bottom, but the order of files included by require_tree is unpredictable; do not assume any particular ordering.
  • app/assets/stylesheets/application.css
  • Rails 4 generates app/assets/javascripts/application.js and app/assets/stylesheets/application.css whether or not the --skip-sprockets option is given when creating a new application
  • If require_self is called more than once, only the last call is respected
  • To use multiple Sass files, use Sass's @import rule instead of Sprockets directives.
  • There can be more than one manifest file.
  • With the default gems, generating a controller or scaffold produces CoffeeScript and SCSS files instead of plain JavaScript and CSS files.
  • In development, or when the Asset Pipeline is disabled, these files are run through the preprocessors provided by coffee-script and sass before being sent to the browser
  • When the Asset Pipeline is enabled, these files are preprocessed, saved into the public/assets folder, and then served by the Rails application or the web server
  • Adding extra extensions adds layers of preprocessing; preprocessors run over the file contents right to left by extension, so the order of the extensions must match the order of processing
  • The order in which preprocessors run is important
  • Compression can also be enabled in development to check that everything works; disable it again when you need to debug.
  • By default, Rails assumes assets have been precompiled and are served directly by the web server.
  • In general, do not change the default value of config.assets.digest
  • Assets can be compiled at deploy time
  • Sharing this folder across deployments is important, so that as long as a cached page is usable, the compiled assets it references keep working.
  • The files compiled by default include application.js, application.css, and all non-JS/CSS files under app/assets in gems (all images are loaded automatically)
  • To compile other manifests, or individual stylesheets and JavaScript files, add them to the precompile option in config/application.rb (see the sketch after this list)
  • Configure it to compile all assets
  • A manifest-md5hash.json file lists all assets and their corresponding fingerprints
  • Set the Expires header far into the future
  • After precompiling locally, you can check the compiled files into version control and deploy as usual
  • Live compilation uses more memory and performs worse than the default compilation mode, so it is not recommended
  • If you serve assets through a CDN, make sure the application does not cache them, because caching causes problems. If config.action_controller.perform_caching = true is set, Rack::Cache will use Rails.cache to store assets, and the cache space will quickly fill up.
  • The public path Sprockets serves from by default is /assets
  • The X-Sendfile header directs the web server to ignore the application's response and instead serve the specified file directly from disk
  • The jquery-rails gem, which provides the standard JavaScript library for Rails, is a good example. It contains an engine class inheriting from Rails::Engine; this tells Rails that the gem may contain assets, and the engine's app/assets, lib/assets, and vendor/assets folders are added to Sprockets' search path.
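
A minimal sketch of the precompile option mentioned above; the filenames are placeholders:

    # config/application.rb: add extra manifests or standalone
    # assets to the precompile list (filenames are placeholders).
    config.assets.precompile += %w( admin.js admin.css )
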
張 旭

Bash Reference Manual: Shell Parameter Expansion - 1 views

  • parameter expansion
  • command substitution
  • arithmetic expansion
  • The parameter name or symbol to be expanded may be enclosed in braces, which are optional but serve to protect the variable to be expanded from characters immediately following it which could be interpreted as part of the name.
  • When braces are used, the matching ending brace is the first ‘}’ not escaped by a backslash or within a quoted string, and not within an embedded arithmetic expansion, command substitution, or parameter expansion.
  • ${parameter}
  • braces are required
  • If the first character of parameter is an exclamation point (!), and parameter is not a nameref, it introduces a level of variable indirection.
  • ${parameter:-word} If parameter is unset or null, the expansion of word is substituted. Otherwise, the value of parameter is substituted.
  • ${parameter:=word} If parameter is unset or null, the expansion of word is assigned to parameter.
  • ${parameter:?word} If parameter is null or unset, the expansion of word (or a message to that effect if word is not present) is written to the standard error and the shell, if it is not interactive, exits.
  • ${parameter:+word} If parameter is null or unset, nothing is substituted, otherwise the expansion of word is substituted.
  • ${parameter:offset} ${parameter:offset:length}
  • Substring expansion applied to an associative array produces undefined results.
  • ${parameter/pattern/string} The pattern is expanded to produce a pattern just as in filename expansion.
  • If pattern begins with ‘/’, all matches of pattern are replaced with string.
  • Normally only the first match is replaced
  • The ‘^’ operator converts lowercase letters matching pattern to uppercase
  • the ‘,’ operator converts matching uppercase letters to lowercase.
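
A few of these expansions in action; the variable names and values are arbitrary:

    #!/usr/bin/env bash
    unset name
    echo "${name:-guest}"      # name is unset: prints "guest", name stays unset
    echo "${name:=guest}"      # prints "guest" and assigns it to name
    echo "${name:+greeting}"   # name is now set: prints "greeting"

    path="/usr/local/bin"
    echo "${path:5}"           # substring from offset 5: "local/bin"
    echo "${path/bin/sbin}"    # first match replaced: "/usr/local/sbin"
    echo "${path//\//_}"       # pattern beginning with /: all matches replaced,
                               # giving "_usr_local_bin"
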
張 旭

Overview - CircleCI - 0 views

  • every code change triggers automated tests in a clean container or VM
  • CircleCI may be configured to deploy code to various environments
  • Other cloud service deployments are easily scripted using SSH or by installing the API client of the service with your job configuration.
  • Continuous integration is a practice that encourages developers to integrate their code into a master branch of a shared repository early and often.
張 旭

Orbs, Jobs, Steps, and Workflows - CircleCI - 0 views

  • Orbs are packages of config that you either import by name or configure inline, letting you simplify, share, and reuse config within and across projects.
  • Jobs are a collection of Steps.
  • All of the steps in the job are executed in a single unit which consumes a CircleCI container from your plan while it’s running.
  • Workspaces persist data between jobs in a single Workflow.
  • Caching persists data between the same job in different Workflow builds.
  • Artifacts persist data after a Workflow has finished.
  • run using the machine executor, which enables reuse of recently used machine executor runs
  • docker executor which can compose Docker containers to run your tests and any services they require
  • macos executor
  • Steps are a collection of executable commands which are run during a job
  • In addition to the run: key, keys for save_cache:, restore_cache:, deploy:, store_artifacts:, store_test_results: and add_ssh_keys are nested under Steps.
  • checkout: key is required to checkout your code
  • run: enables addition of arbitrary, multi-line shell command scripting
  • orchestrating job runs with parallel, sequential, and manual approval workflows.
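
A minimal .circleci/config.yml tying jobs, steps, and workflows together; the image and commands are placeholders:

    version: 2.1
    jobs:
      build:
        docker:                    # docker executor: compose containers as needed
          - image: circleci/node:10
        steps:
          - checkout               # required to check out your code
          - run: npm install       # run: arbitrary shell commands
          - run: npm test
    workflows:
      main:
        jobs:
          - build                  # sequential/parallel orchestration goes here
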
張 旭

pre-commit - 0 views

  • a multi-language package manager for pre-commit hooks
  • pre-commit is specifically designed to not require root access
  • We copied and pasted unwieldy bash scripts from project to project and had to manually change the hooks to work for different project structures.
  • adding pre-commit plugins to your project is done with the .pre-commit-config.yaml configuration file.
  • The pre-commit config file describes what repositories and hooks are installed.
  • This configuration says to download the pre-commit-hooks project and run its trailing-whitespace hook
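
The configuration described in the last annotation looks roughly like this; the pinned rev is illustrative:

    # .pre-commit-config.yaml
    repos:
      - repo: https://github.com/pre-commit/pre-commit-hooks
        rev: v2.3.0              # pin a release; this rev is illustrative
        hooks:
          - id: trailing-whitespace
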
張 旭

Use multi-stage builds | Docker Documentation - 0 views

  • Maintaining two Dockerfiles is not ideal.
  • This is failure-prone and hard to maintain. It’s easy to insert another command and forget to continue the line using the \ character
  • create a container from it to copy the artifact out
  • You only need the single Dockerfile. You don’t need a separate build script,
  • You don’t need to create any intermediate images and you don’t need to extract any artifacts to your local system at all.
  • Debugging a specific build stage
  • You can use the COPY --from instruction to copy from a separate image, either using the local image name, a tag available locally or on a Docker registry, or a tag ID.
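
A sketch of the single-Dockerfile flow, assuming a Go project; stage names and paths are illustrative:

    # Stage 1: build the artifact.
    FROM golang:1.11 AS builder
    WORKDIR /go/src/app
    COPY . .
    RUN CGO_ENABLED=0 go build -o /bin/app .

    # Stage 2: copy only the artifact out of the builder stage;
    # no intermediate images or local extraction needed.
    FROM alpine:latest
    COPY --from=builder /bin/app /bin/app
    ENTRYPOINT ["/bin/app"]
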
張 旭

Using Infrastructure as Code to Automate VMware Deployments - 1 views

  • Infrastructure as code is at the heart of provisioning for cloud infrastructure, marking a significant shift away from monolithic point-and-click management tools.
  • infrastructure as code enables operators to take a programmatic approach to provisioning.
  • provides a single workflow to provision and maintain infrastructure and services from all of your vendors, making it not only easier to switch providers
  • A Terraform Provider is responsible for understanding API interactions with, and exposing the resources of, a given Infrastructure, Platform, or SaaS offering to Terraform.
  • write a Terraform file that describes the Virtual Machine that you want, apply that file with Terraform and create that VM as you described without ever needing to log into the vSphere dashboard.
  • HashiCorp Configuration Language (HCL)
  • the provider credentials are passed in at the top of the script to connect to the vSphere account.
  • modules— a way to encapsulate infrastructure resources into a reusable format.
  •  
    "revolutionizing"
張 旭

The Twelve-Factor App - 0 views

  • The process formation is the array of processes that are used to do the app’s regular business
  • one-off administrative or maintenance tasks for the app
  • One-off admin processes should be run in an identical environment as the regular long-running processes of the app.
  • Admin code must ship with application code to avoid synchronization issues.
  • Twelve-factor strongly favors languages which provide a REPL shell out of the box, and which make it easy to run one-off scripts.
張 旭

Template Designer Documentation - Jinja2 Documentation (2.10) - 0 views

  • A Jinja template doesn’t need to have a specific extension
  • A Jinja template is simply a text file
  • tags, which control the logic of the template
  • {% ... %} for Statements
  • {{ ... }} for Expressions to print to the template output
  • use a dot (.) to access attributes of a variable
  • the outer double-curly braces are not part of the variable, but the print statement.
  • If you access variables inside tags don’t put the braces around them.
  • If a variable or attribute does not exist, you will get back an undefined value.
  • the default behavior is to evaluate to an empty string if printed or iterated over, and to fail for every other operation.
  • if an object has an item and attribute with the same name. Additionally, the attr() filter only looks up attributes.
  • Variables can be modified by filters. Filters are separated from the variable by a pipe symbol (|) and may have optional arguments in parentheses.
  • Multiple filters can be chained
  • Tests can be used to test a variable against a common expression.
  • to test a variable, add is plus the name of the test after the variable.
  • to find out if a variable is defined, you can do name is defined, which will then return true or false depending on whether name is defined in the current template context.
  • strip whitespace in templates by hand. If you add a minus sign (-) to the start or end of a block (e.g. a For tag), a comment, or a variable expression, the whitespaces before or after that block will be removed
  • not add whitespace between the tag and the minus sign
  • mark a block raw
  • Template inheritance allows you to build a base “skeleton” template that contains all the common elements of your site and defines blocks that child templates can override.
  • The {% extends %} tag is the key here. It tells the template engine that this template “extends” another template.
  • access templates in subdirectories with a slash
  • can’t define multiple {% block %} tags with the same name in the same template
  • use the special self variable and call the block with that name
  • self.title()
  • super()
  • put the name of the block after the end tag for better readability
  • if the block is replaced by a child template, a variable would appear that was not defined in the block or passed to the context.
  • setting the block to “scoped” by adding the scoped modifier to a block declaration
  • If you have a variable that may include any of the following chars (>, <, &, or ") you SHOULD escape it unless the variable contains well-formed and trusted HTML.
  • Jinja2 functions (macros, super, self.BLOCKNAME) always return template data that is marked as safe.
  • With the default syntax, control structures appear inside {% ... %} blocks.
  • the dictsort filter
  • loop.cycle
  • Unlike in Python, it’s not possible to break or continue in a loop
  • use loops recursively
  • add the recursive modifier to the loop definition and call the loop variable with the new iterable where you want to recurse.
  • The loop variable always refers to the closest (innermost) loop.
  • whether the value changed at all,
  • use it to test if a variable is defined, not empty and not false
  • Macros are comparable with functions in regular programming languages.
  • If a macro name starts with an underscore, it’s not exported and can’t be imported.
  • pass a macro to another macro
  • caller()
  • a single trailing newline is stripped if present
  • other whitespace (spaces, tabs, newlines etc.) is returned unchanged
  • a block tag works in “both” directions. That is, a block tag doesn’t just provide a placeholder to fill - it also defines the content that fills the placeholder in the parent.
  • Python dicts are not ordered
  • caller(user)
  • call(user)
  • This is a simple dialog rendered by using a macro and a call block.
  • Filter sections allow you to apply regular Jinja2 filters on a block of template data.
  • Assignments at top level (outside of blocks, macros or loops) are exported from the template like top level macros and can be imported by other templates.
  • using namespace objects which allow propagating of changes across scopes
  • use block assignments to capture the contents of a block into a variable name.
  • The extends tag can be used to extend one template from another.
  • Blocks are used for inheritance and act as both placeholders and replacements at the same time.
  • The include statement is useful to include a template and return the rendered contents of that file into the current namespace
  • Included templates have access to the variables of the active context by default.
  • putting often used code into macros
  • imports are cached and imported templates don’t have access to the current template variables, just the globals by default.
  • Macros and variables starting with one or more underscores are private and cannot be imported.
  • By default, included templates are passed the current context and imported templates are not.
  • imports are often used just as a module that holds macros.
  • Integers and floating point numbers are created by just writing the number down
  • Everything between two brackets is a list.
  • Tuples are like lists that cannot be modified (“immutable”).
  • A dict in Python is a structure that combines keys and values.
  • // Divide two numbers and return the truncated integer result
  • The special constants true, false, and none are indeed lowercase
  • all Jinja identifiers are lowercase
  • (expr) group an expression.
  • The is and in operators support negation using an infix notation
  • in Perform a sequence / mapping containment test.
  • | Applies a filter.
  • ~ Converts all operands into strings and concatenates them.
  • use inline if expressions.
  • an attribute is always returned and items are not looked up.
  • default(value, default_value=u'', boolean=False)¶ If the value is undefined it will return the passed default value, otherwise the value of the variable
  • dictsort(value, case_sensitive=False, by='key', reverse=False)¶ Sort a dict and yield (key, value) pairs.
  • format(value, *args, **kwargs)¶ Apply python string formatting on an object
  • groupby(value, attribute)¶ Group a sequence of objects by a common attribute.
  • the value you are grouping by is stored in the grouper attribute, and the list contains all the objects that have this grouper in common.
  • indent(s, width=4, first=False, blank=False, indentfirst=None)¶ Return a copy of the string with each line indented by 4 spaces. The first line and blank lines are not indented by default.
  • join(value, d=u'', attribute=None)¶ Return a string which is the concatenation of the strings in the sequence.
  • map()¶ Applies a filter on a sequence of objects or looks up an attribute.
  • pprint(value, verbose=False)¶ Pretty print a variable. Useful for debugging.
  • reject()¶ Filters a sequence of objects by applying a test to each object, and rejecting the objects with the test succeeding.
  • replace(s, old, new, count=None)¶ Return a copy of the value with all occurrences of a substring replaced with a new one.
  • round(value, precision=0, method='common')¶ Round the number to a given precision
  • even if rounded to 0 precision, a float is returned.
  • select()¶ Filters a sequence of objects by applying a test to each object, and only selecting the objects with the test succeeding.
  • sort(value, reverse=False, case_sensitive=False, attribute=None)¶ Sort an iterable. Per default it sorts ascending, if you pass it true as first argument it will reverse the sorting.
  • striptags(value)¶ Strip SGML/XML tags and replace adjacent whitespace by one space.
  • tojson(value, indent=None)¶ Dumps a structure to JSON so that it’s safe to use in <script> tags.
  • trim(value)¶ Strip leading and trailing whitespace.
  • unique(value, case_sensitive=False, attribute=None)¶ Returns a list of unique items from the given iterable
  • urlize(value, trim_url_limit=None, nofollow=False, target=None, rel=None)¶ Converts URLs in plain text into clickable links.
  • defined(value)¶ Return true if the variable is defined
  • in(value, seq)¶ Check if value is in seq.
  • mapping(value)¶ Return true if the object is a mapping (dict etc.).
  • number(value)¶ Return true if the variable is a number.
  • sameas(value, other)¶ Check if an object points to the same memory address as another object
  • undefined(value)¶ Like defined() but the other way round.
  • A joiner is passed a string and will return that string every time it’s called, except the first time (in which case it returns an empty string).
  • namespace(...)¶ Creates a new container that allows attribute assignment using the {% set %} tag
  • The with statement makes it possible to create a new inner scope. Variables set within this scope are not visible outside of the scope.
  • activate and deactivate the autoescaping from within the templates
  • With both trim_blocks and lstrip_blocks enabled, you can put block tags on their own lines, and the entire block line will be removed when rendered, preserving the whitespace of the contents
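
A small template sketch exercising several of the constructs above (inheritance, expressions, chained filters, tests, whitespace control); the template and variable names are invented:

    {# child.html extends a base "skeleton" template #}
    {% extends "base.html" %}

    {% block title %}Members{% endblock title %}

    {% block content %}
      <ul>
      {%- for user in users %}
        {# expressions print to the output; filters chain with | #}
        <li>{{ loop.index }}. {{ user.name | default('anonymous') | title }}</li>
      {%- endfor %}
      </ul>
      {% if users is defined %}{{ users | length }} users{% endif %}
    {% endblock content %}
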
張 旭

Queue Workers: How they work - Diving Laravel - 0 views

  • define workers as a simple PHP process that runs in the background with the purpose of extracting jobs from a storage space and running them with respect to several configuration options.
  • have to manually restart the worker to reflect any code change you made in your application.
  • avoiding booting up the whole app on every job
  • instruct Laravel to create an instance of your application and start executing jobs; this instance stays alive indefinitely, which means starting your Laravel application happens only once when the command is run, and the same instance is used to execute your jobs
  • This will start an instance of the application, process a single job,
  • and then kill the script.
  • Using queue:listen ensures that a new instance of the app is created for every job, that means you don't have to manually restart the worker in case you made changes to your code, but also means more server resources will be consumed.
  • the queue:listen command runs the WorkCommand inside a loop
  • The connection this worker will be pulling jobs from
  • The queue the worker will use to find jobs
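
The artisan commands being contrasted, as a sketch; flag spellings follow standard Laravel usage:

    # Boot the app once; the same instance keeps executing jobs
    # (restart the worker after deploying code changes):
    php artisan queue:work

    # Process a single job, then kill the script:
    php artisan queue:work --once

    # Boot a fresh app instance for every job; no manual restarts,
    # but more server resources consumed:
    php artisan queue:listen
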
張 旭

The Asset Pipeline - Ruby on Rails Guides - 0 views

  • provides a framework to concatenate and minify or compress JavaScript and CSS assets
  • adds the ability to write these assets in other languages and pre-processors such as CoffeeScript, Sass and ERB
  • invalidate the cache by altering this fingerprint
  • Rails 4 automatically adds the sass-rails, coffee-rails and uglifier gems to your Gemfile
  • reduce the number of requests that a browser makes to render a web page
  • Starting with version 3.1, Rails defaults to concatenating all JavaScript files into one master .js file and all CSS files into one master .css file
  • In production, Rails inserts an MD5 fingerprint into each filename so that the file is cached by the web browser
  • The technique sprockets uses for fingerprinting is to insert a hash of the content into the name, usually at the end.
  • asset minification or compression
  • The sass-rails gem is automatically used for CSS compression if included in Gemfile and no config.assets.css_compressor option is set.
  • Supported languages include Sass for CSS, CoffeeScript for JavaScript, and ERB for both by default.
  • When a filename is unique and based on its content, HTTP headers can be set to encourage caches everywhere (whether at CDNs, at ISPs, in networking equipment, or in web browsers) to keep their own copy of the content
  • asset pipeline is technically no longer a core feature of Rails 4
  • the technique Rails uses for fingerprinting is to insert a hash of the content into the name, usually at the end
  • With the asset pipeline, the preferred location for these assets is now the app/assets directory.
  • Fingerprinting is enabled by default for production and disabled for all other environments
  • The files in app/assets are never served directly in production.
  • Paths are traversed in the order that they occur in the search path
  • You should use app/assets for files that must undergo some pre-processing before they are served.
  • By default .coffee and .scss files will not be precompiled on their own
  • app/assets is for assets that are owned by the application, such as custom images, JavaScript files or stylesheets.
  • lib/assets is for your own libraries' code that doesn't really fit into the scope of the application or those libraries which are shared across applications.
  • vendor/assets is for assets that are owned by outside entities, such as code for JavaScript plugins and CSS frameworks.
  • Any path under assets/* will be searched
  • By default these files will be ready to use by your application immediately using the require_tree directive.
  • By default, this means the files in app/assets take precedence, and will mask corresponding paths in lib and vendor
  • Sprockets uses files named index (with the relevant extensions) for a special purpose
  • Rails.application.config.assets.paths
  • causes turbolinks to check if an asset has been updated and if so loads it into the page
  • if you add an erb extension to a CSS asset (for example, application.css.erb), then helpers like asset_path are available in your CSS rules
  • If you add an erb extension to a JavaScript asset, making it something such as application.js.erb, then you can use the asset_path helper in your JavaScript code
  • The asset pipeline automatically evaluates ERB
  • data URI — a method of embedding the image data directly into the CSS file — you can use the asset_data_uri helper.
  • Sprockets will also look through the paths specified in config.assets.paths, which includes the standard application paths and any paths added by Rails engines.
  • image_tag
  • the closing tag cannot be of the style -%>
  • asset_data_uri
  • app/assets/javascripts/application.js
  • sass-rails provides -url and -path helpers (hyphenated in Sass, underscored in Ruby) for the following asset classes: image, font, video, audio, JavaScript and stylesheet.
  • Rails.application.config.assets.compress
  • In JavaScript files, the directives begin with //=
  • The require_tree directive tells Sprockets to recursively include all JavaScript files in the specified directory into the output.
  • manifest files contain directives — instructions that tell Sprockets which files to require in order to build a single CSS or JavaScript file.
  • You should not rely on any particular order among those
  • Sprockets uses manifest files to determine which assets to include and serve.
  • the family of require directives prevents files from being included twice in the output
  • which files to require in order to build a single CSS or JavaScript file
  • Directives are processed top to bottom, but the order in which files are included by require_tree is unspecified.
  • In JavaScript files, Sprockets directives begin with //=
  • If require_self is called more than once, only the last call is respected.
  • require directive is used to tell Sprockets the files you wish to require.
  • You need not supply the extensions explicitly. Sprockets assumes you are requiring a .js file when done from within a .js file
  • paths must be specified relative to the manifest file
  • require_directory
  • Rails 4 creates both app/assets/javascripts/application.js and app/assets/stylesheets/application.css regardless of whether the --skip-sprockets option is used when creating a new rails application.
  • The file extensions used on an asset determine what preprocessing is applied.
  • app/assets/stylesheets/application.css
  • Additional layers of preprocessing can be requested by adding other extensions, where each extension is processed in a right-to-left manner
  • require_self
  • use the Sass @import rule instead of these Sprockets directives.
  • Keep in mind that the order of these preprocessors is important
  • In development mode, assets are served as separate files in the order they are specified in the manifest file.
  • when these files are requested they are processed by the processors provided by the coffee-script and sass gems and then sent back to the browser as JavaScript and CSS respectively.
  • css.scss.erb
  • js.coffee.erb
  • Keep in mind the order of these preprocessors is important.
  • By default Rails assumes that assets have been precompiled and will be served as static assets by your web server
  • with the Asset Pipeline the :cache and :concat options aren't used anymore
  • Assets are compiled and cached on the first request after the server is started
  • RAILS_ENV=production bundle exec rake assets:precompile
  • Debug mode can also be enabled in Rails helper methods
  • If you set config.assets.initialize_on_precompile to false, be sure to test rake assets:precompile locally before deploying
  • By default Rails assumes assets have been precompiled and will be served as static assets by your web server.
  • a rake task to compile the asset manifests and other files in the pipeline
  • RAILS_ENV=production bin/rake assets:precompile
  • a recipe to handle this in deployment
  • links the folder specified in config.assets.prefix to shared/assets
  • config/initializers/assets.rb
  • The initialize_on_precompile change tells the precompile task to run without invoking Rails
  • The X-Sendfile header is a directive to the web server to ignore the response from the application, and instead serve a specified file from disk
  • the jquery-rails gem which comes with Rails as the standard JavaScript library gem.
  • Possible options for JavaScript compression are :closure, :uglifier and :yui
  • concatenate assets
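
A manifest along the lines of the Rails 4 default illustrates these directives:

    // app/assets/javascripts/application.js
    // Sprockets directives begin with //= and run top to bottom;
    // require_tree includes the directory recursively, in unspecified order.
    //= require jquery
    //= require jquery_ujs
    //= require_tree .
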
張 旭

Introduction to GitLab Flow | GitLab - 0 views

  • Git allows a wide variety of branching strategies and workflows.
  • not integrated with issue tracking systems
  • The biggest problem is that many long-running branches emerge that all contain part of the changes.
  • most organizations practice continuous delivery, which means that your default branch can be deployed.
  • Merging everything into the master branch and frequently deploying means you minimize the amount of unreleased code, which is in line with lean and continuous delivery best practices.
  • you can deploy to production every time you merge a feature branch.
  • deploy a new version by merging master into the production branch.
  • you can have your deployment script create a tag on each deployment.
  • to have an environment that is automatically updated to the master branch
  • commits only flow downstream, ensuring that everything is tested in all environments.
  • first merge these bug fixes into master, and then cherry-pick them into the release branch.
  • Merging into master and then cherry-picking into release is called an “upstream first” policy
  • “merge request” since the final action is to merge the feature branch.
  • “pull request” since the first manual action is to pull the feature branch
  • it is common to protect the long-lived branches
  • After you merge a feature branch, you should remove it from the source control software
  • When you are ready to code, create a branch for the issue from the master branch. This branch is the place for any work related to this change.
  • A merge request is an online place to discuss the change and review the code.
  • If you open the merge request but do not assign it to anyone, it is a “Work In Progress” merge request.
  • Start the title of the merge request with “[WIP]” or “WIP:” to prevent it from being merged before it’s ready.
  • To automatically close linked issues, mention them with the words “fixes” or “closes,” for example, “fixes #14” or “closes #67.” GitLab closes these issues when the code is merged into the default branch.
  • If you have an issue that spans across multiple repositories, create an issue for each repository and link all issues to a parent issue.
  • With Git, you can use an interactive rebase (rebase -i) to squash multiple commits into one or reorder them.
  • you should never rebase commits you have pushed to a remote server.
  • Rebasing creates new commits for all your changes, which can cause confusion because the same change would have multiple identifiers.
  • if someone has already reviewed your code, rebasing makes it hard to tell what changed since the last review.
  • never rebase commits authored by other people.
  • it is a bad idea to rebase commits that you have already pushed.
  • always use the “no fast-forward” (--no-ff) strategy when you merge manually.
  • you should try to avoid merge commits in feature branches
  • people avoid merge commits by just using rebase to reorder their commits after the commits on the master branch. Using rebase prevents a merge commit when merging master into your feature branch, and it creates a neat linear history.
  • you should never rebase commits you have pushed to a remote server
  • Sometimes you can reuse recorded resolutions (rerere), but merging is better since you only have to resolve conflicts once.
  • do not frequently merge master into the feature branch.
  • utilizing new code,
  • resolving merge conflicts
  • updating long-running branches.
  • just cherry-picking a commit.
  • If your feature branch has a merge conflict, creating a merge commit is a standard way of solving this.
  • keep your feature branches short-lived.
  • split your features into smaller units of work
  • you should try to prevent merge commits, but not eliminate them.
  • Your codebase should be clean, but your history should represent what actually happened.
  • Splitting up work into individual commits provides context for developers looking at your code later.
  • push your feature branch frequently, even when it is not yet ready for review.
  • Commit often and push frequently
  • A commit message should reflect your intention, not just the contents of the commit.
  • Testing before merging
  • When using GitLab flow, developers create their branches from this master branch, so it is essential that it never breaks. Therefore, each merge request must be tested before it is accepted.
  • When creating a feature branch, always branch from an up-to-date master
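
A sketch of the branch-per-issue cycle in git commands; the branch name and commit message are examples:

    # Always branch from an up-to-date master:
    git checkout master && git pull
    git checkout -b 15-require-a-password-to-change-it

    # Commit often and push frequently, even before it is review-ready:
    git commit -m "Require a password to change an email address"
    git push -u origin 15-require-a-password-to-change-it

    # When merging manually, record the merge explicitly:
    git checkout master
    git merge --no-ff 15-require-a-password-to-change-it
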
張 旭

FreeIPAv2:Dynamic updates with GSS-TSIG - FreeIPA - 0 views

  • This short tutorial will teach you how to set up your name server so that you can dynamically update the resource records with the help of FreeIPA.
  • tkey-gssapi-keytab
  • BIND version
    • 張 旭
       
      named -v
  • add the DNS service principal and acquire the keytab
  • kinit admin
  • All machines belonging to the Kerberos realm EXAMPLE.COM are allowed to update their own A record.
  • grant EXAMPLE.COM krb5-self * A;
  • Allow Kerberos principal SERVICE/ipaserver.example.com@EXAMPLE.COM to do any updates in whole zone.
  • Machines are allowed to update their own PTR record in the reverse zone.
  • kinit admin
  • with kinit. (This step is not required if the client was enrolled by ipa-client-install script or host keytab is already in place for other reasons.)
  • the "server dns.example.com" command tells nsupdate to update the specified DNS server
張 旭

Helm | - 0 views

  • Helm will figure out where to install Tiller by reading your Kubernetes configuration file (usually $HOME/.kube/config). This is the same file that kubectl uses.
  • kubectl cluster-info
  • Role-Based Access Control (RBAC) enabled
  • initialize the local CLI
  • install Tiller into your Kubernetes cluster
  • helm install
  • helm init --upgrade
  • By default, when Tiller is installed, it does not have authentication enabled.
  • helm repo update
  • Without a max history set the history is kept indefinitely, leaving a large number of records for helm and tiller to maintain.
  • helm init --upgrade
  • Whenever you install a chart, a new release is created.
  • one chart can be installed multiple times into the same cluster. And each can be independently managed and upgraded.
  • helm list function will show you a list of all deployed releases.
  • helm delete
  • helm status
  • you can audit a cluster’s history, and even undelete a release (with helm rollback).
  • the Helm server (Tiller).
  • The Helm client (helm)
  • brew install kubernetes-helm
  • Tiller, the server portion of Helm, typically runs inside of your Kubernetes cluster.
  • it can also be run locally, and configured to talk to a remote Kubernetes cluster.
  • Role-Based Access Control - RBAC for short
  • create a service account for Tiller with the right roles and permissions to access resources.
  • run Tiller in an RBAC-enabled Kubernetes cluster.
  • run kubectl get pods --namespace kube-system and see Tiller running.
  • helm inspect
  • Helm will look for Tiller in the kube-system namespace unless --tiller-namespace or TILLER_NAMESPACE is set.
  • For development, it is sometimes easier to work on Tiller locally, and configure it to connect to a remote Kubernetes cluster.
  • even when running locally, Tiller will store release configuration in ConfigMaps inside of Kubernetes.
  • helm version should show you both the client and server version.
  • Tiller stores its data in Kubernetes ConfigMaps, you can safely delete and re-install Tiller without worrying about losing any data.
  • helm reset
  • The --node-selectors flag allows us to specify the node labels required for scheduling the Tiller pod.
  • --override allows you to specify properties of Tiller’s deployment manifest.
  • helm init --override manipulates the specified properties of the final manifest (there is no “values” file).
  • The --output flag allows us skip the installation of Tiller’s deployment manifest and simply output the deployment manifest to stdout in either JSON or YAML format.
  • By default, tiller stores release information in ConfigMaps in the namespace where it is running.
  • switch from the default backend to the secrets backend, you’ll have to do the migration for this on your own.
  • a beta SQL storage backend that stores release information in an SQL database (only postgres has been tested so far).
  • Once you have the Helm Client and Tiller successfully installed, you can move on to using Helm to manage charts.
  • Helm requires that kubelet have access to a copy of the socat program to proxy connections to the Tiller API.
  • A Release is an instance of a chart running in a Kubernetes cluster. One chart can often be installed many times into the same cluster.
  • helm init --client-only
  • helm init --dry-run --debug
  • A panic in Tiller is almost always the result of a failure to negotiate with the Kubernetes API server
  • Tiller and Helm have to negotiate a common version to make sure that they can safely communicate without breaking API assumptions
  • helm delete --purge
  • Helm stores some files in $HELM_HOME, which is located by default in ~/.helm
  • A Chart is a Helm package. It contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster.
  • Think of it like the Kubernetes equivalent of a Homebrew formula, an Apt dpkg, or a Yum RPM file.
  • A Repository is the place where charts can be collected and shared.
  • Set the $HELM_HOME environment variable
  • each time it is installed, a new release is created.
  • Helm installs charts into Kubernetes, creating a new release for each installation. And to find new charts, you can search Helm chart repositories.
  • chart repository is named stable by default
  • helm search shows you all of the available charts
  • helm inspect
  • To install a new package, use the helm install command. At its simplest, it takes only one argument: The name of the chart.
  • If you want to use your own release name, simply use the --name flag on helm install
  • additional configuration steps you can or should take.
  • Helm does not wait until all of the resources are running before it exits. Many charts require Docker images that are over 600M in size, and may take a long time to install into the cluster.
  • helm status
  • helm inspect values
  • helm inspect values stable/mariadb
  • override any of these settings in a YAML formatted file, and then pass that file during installation.
  • helm install -f config.yaml stable/mariadb
  • --values (or -f): Specify a YAML file with overrides.
  • --set (and its variants --set-string and --set-file): Specify overrides on the command line.
  • Values that have been --set can be cleared by running helm upgrade with --reset-values specified.
  • Chart designers are encouraged to consider the --set usage when designing the format of a values.yaml file.
  • --set-file key=filepath is another variant of --set. It reads the file and uses its content as a value.
  • inject a multi-line text into values without dealing with indentation in YAML.
  • An unpacked chart directory
  • When a new version of a chart is released, or when you want to change the configuration of your release, you can use the helm upgrade command.
  • Because Kubernetes charts can be large and complex, Helm tries to perform the least invasive upgrade.
  • It will only update things that have changed since the last release
  • $ helm upgrade -f panda.yaml happy-panda stable/mariadb
  • deployment
  • If both are used, --set values are merged into --values with higher precedence.
  • The helm get command is a useful tool for looking at a release in the cluster.
  • helm rollback
  • A release version is an incremental revision. Every time an install, upgrade, or rollback happens, the revision number is incremented by 1.
  • helm history
  • a release name cannot be re-used.
  • you can rollback a deleted resource, and have it re-activate.
  • helm repo list
  • helm repo add
  • helm repo update
  • The Chart Development Guide explains how to develop your own charts.
  • helm create
  • helm lint
  • helm package
  • Charts that are archived can be loaded into chart repositories.
  • chart repository server
  • Tiller can be installed into any namespace.
  • Limiting Tiller to only be able to install into specific namespaces and/or resource types is controlled by Kubernetes RBAC roles and rolebindings
  • Release names are unique PER TILLER INSTANCE
  • Charts should only contain resources that exist in a single namespace.
  • not recommended to have multiple Tillers configured to manage resources in the same namespace.
  • a client-side Helm plugin. A plugin is a tool that can be accessed through the helm CLI, but which is not part of the built-in Helm codebase.
  • Helm plugins are add-on tools that integrate seamlessly with Helm. They provide a way to extend the core feature set of Helm, but without requiring every new feature to be written in Go and added to the core tool.
  • Helm plugins live in $(helm home)/plugins
  • The Helm plugin model is partially modeled on Git’s plugin model
  • helm is referred to as the porcelain layer, with plugins being the plumbing.
  • helm plugin install https://github.com/technosophos/helm-template
  • command is the command that this plugin will execute when it is called.
  • Environment variables are interpolated before the plugin is executed.
  • The command itself is not executed in a shell. So you can’t oneline a shell script.
  • Helm is able to fetch Charts using HTTP/S
  • Variables like KUBECONFIG are set for the plugin if they are set in the outer environment.
  • In Kubernetes, granting a role to an application-specific service account is a best practice to ensure that your application is operating in the scope that you have specified.
  • restrict Tiller’s capabilities to install resources to certain namespaces, or to grant a Helm client running access to a Tiller instance.
  • Service account with cluster-admin role
  • The cluster-admin role is created by default in a Kubernetes cluster
  • Deploy Tiller in a namespace, restricted to deploying resources only in that namespace
  • Deploy Tiller in a namespace, restricted to deploying resources in another namespace
  • When running a Helm client in a pod, in order for the Helm client to talk to a Tiller instance, it will need certain privileges to be granted.
  • SSL Between Helm and Tiller
  • The Tiller authentication model uses client-side SSL certificates.
  • creating an internal CA, and using both the cryptographic and identity functions of SSL.
  • Helm is a powerful and flexible package-management and operations tool for Kubernetes.
  • default installation applies no security configurations
  • with a cluster that is well-secured in a private network with no data-sharing or no other users or teams.
  • With great power comes great responsibility.
  • Choose the Best Practices you should apply to your helm installation
  • Role-based access control, or RBAC
  • Tiller’s gRPC endpoint and its usage by Helm
  • Kubernetes employ a role-based access control (or RBAC) system (as do modern operating systems) to help mitigate the damage that can be done if credentials are misused or bugs exist.
  • In the default installation the gRPC endpoint that Tiller offers is available inside the cluster (not external to the cluster) without authentication configuration applied.
  • Tiller stores its release information in ConfigMaps. We suggest changing the default to Secrets.
  • release information
  • charts
  • charts are a kind of package that not only installs containers you may or may not have validated yourself, but it may also install into more than one namespace.
  • As with all shared software, in a controlled or shared environment you must validate all software you install yourself before you install it.
  • Helm’s provenance tools to ensure the provenance and integrity of charts
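
A typical install/upgrade cycle with value overrides, sketched from the doc's mariadb example; the override file contents and release name are illustrative:

    $ helm repo update                         # refresh chart repositories
    $ helm inspect values stable/mariadb       # list configurable settings
    $ echo 'mariadbUser: user0' > config.yaml  # override one default
    $ helm install -f config.yaml --name happy-panda stable/mariadb
    $ helm upgrade -f panda.yaml happy-panda stable/mariadb
    $ helm rollback happy-panda 1              # revisions increment on each change
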
張 旭

How to Write a Git Commit Message - 1 views

  • a well-crafted Git commit message is the best way to communicate context about a change to fellow developers (and indeed to their future selves).
  • A diff will tell you what changed, but only the commit message can properly tell you why.
  • a commit message shows whether a developer is a good collaborator
  • a well-cared for log is a beautiful and useful thing
  • Reviewing others’ commits and pull requests becomes something worth doing, and suddenly can be done independently.
  • Understanding why something happened months or years ago becomes not only possible but efficient.
  • how to write an individual commit message.
  • Markup syntax, wrap margins, grammar, capitalization, punctuation.
  • What should it not contain?
  • issue tracking IDs
  • pull request numbers
  • The seven rules of a great Git commit message
  • Use the body to explain what and why vs. how
  • Use the imperative mood in the subject line
  • it’s a good idea to begin the commit message with a single short (less than 50 character) line summarizing the change, followed by a blank line and then a more thorough description.
  • forces the author to think for a moment about the most concise way to explain what’s going on.
  • If you’re having a hard time summarizing, you might be committing too many changes at once.
  • shoot for 50 characters, but consider 72 the hard limit
  • Imperative mood just means “spoken or written as if giving a command or instruction”.
  • Git itself uses the imperative whenever it creates a commit on your behalf.
  • when you write your commit messages in the imperative, you’re following Git’s own built-in conventions.
  • A properly formed Git commit subject line should always be able to complete the following sentence: If applied, this commit will your subject line here
  • explaining what changed and why
  • Code is generally self-explanatory in this regard (and if the code is so complex that it needs to be explained in prose, that’s what source comments are for).
  • there are tab completion scripts that take much of the pain out of remembering the subcommands and switches.
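
A hypothetical message following the seven rules (short imperative subject, blank line, body explaining what and why rather than how):

    Fix flaky timeout in session cleanup job

    The cleanup job assumed sessions expire in creation order, but
    clock skew between workers can violate that assumption, leaving
    stale rows behind.

    Sort by expiry timestamp instead, so the job is correct
    regardless of worker clocks.
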
張 旭

MySQL :: MySQL 5.7 Reference Manual :: 20.4 Getting Started with InnoDB Cluster - 0 views

  • InnoDB cluster instances are created and managed through the MySQL Shell.
  • To create a new InnoDB cluster, the MySQL Shell must be connected to the MySQL Server instance. By default, this MySQL Server instance is the seed instance of the new InnoDB cluster and holds the initial data set.
  • Sandbox instances are only suitable for deploying and running on your local machine.
  • A minimum of three instances are required to create an InnoDB cluster
  • reverts to read-only mode
  • MySQL Shell provides two scripting languages: JavaScript and Python.
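
A MySQL Shell session sketched from these notes; the ports and cluster name are placeholders:

    // Inside mysqlsh (JavaScript mode); sandbox instances are for
    // local experimentation only.
    dba.deploySandboxInstance(3310)
    dba.deploySandboxInstance(3320)
    dba.deploySandboxInstance(3330)          // three instances minimum

    shell.connect('root@localhost:3310')     // the seed instance
    var cluster = dba.createCluster('testCluster')
    cluster.addInstance('root@localhost:3320')
    cluster.addInstance('root@localhost:3330')
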
張 旭

Kubernetes Components | Kubernetes - 0 views

  • A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications
  • Every cluster has at least one worker node.
  • The control plane manages the worker nodes and the Pods in the cluster.
  • The control plane's components make global decisions about the cluster
  • Control plane components can be run on any machine in the cluster.
  • for simplicity, set up scripts typically start all control plane components on the same machine, and do not run user containers on this machine
  • The API server is the front end for the Kubernetes control plane.
  • kube-apiserver is designed to scale horizontally—that is, it scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances.
  • If your Kubernetes cluster uses etcd as its backing store, make sure you have a backup plan for that data.
  • watches for newly created Pods with no assigned node, and selects a node for them to run on.
  • Factors taken into account for scheduling decisions include: individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.
  • each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
  • Node controller
  • Job controller
  • Endpoints controller
  • Service Account & Token controllers
  • The cloud controller manager lets you link your cluster into your cloud provider's API, and separates out the components that interact with that cloud platform from components that only interact with your cluster.
  • If you are running Kubernetes on your own premises, or in a learning environment inside your own PC, the cluster does not have a cloud controller manager.
  • An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
  • The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.
  • The kubelet doesn't manage containers which were not created by Kubernetes.
  • kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
  • kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
  • kube-proxy uses the operating system packet filtering layer if there is one and it's available.
  • Kubernetes supports several container runtimes: Docker, containerd, CRI-O, and any implementation of the Kubernetes CRI (Container Runtime Interface).
  • Addons use Kubernetes resources (DaemonSet, Deployment, etc) to implement cluster features
  • namespaced resources for addons belong within the kube-system namespace.
  • all Kubernetes clusters should have cluster DNS,
  • Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services.
  • Containers started by Kubernetes automatically include this DNS server in their DNS searches.
  • Container Resource Monitoring records generic time-series metrics about containers in a central database, and provides a UI for browsing that data.
  • A cluster-level logging mechanism is responsible for saving container logs to a central log store with search/browsing interface.
張 旭

Gracefully Shutdown Docker Container - Kakashi's Blog - 1 views

  • The initial idea is to make the application invoke the destructor of each component as soon as it receives specific signals such as SIGTERM and SIGINT
  • When you run a docker container, by default it has a PID namespace, which means the docker process is isolated from other processes on your host.
  • The PID namespace has an important task to reap zombie processes.
  • This uses /bin/bash as PID1 and runs your program as the subprocess.
  • When a signal is sent to a shell, the signal actually won’t be forwarded to subprocesses.
  • By using the exec form, we can run our program as PID1
  • if you use the exec form to run a shell script that spawns your application, remember to use the exec syscall to overwrite /usr/bin/bash, otherwise it will act as in scenario 1
  • /bin/bash can handle reaping zombie processes
  • with Tini, SIGTERM properly terminates your process even if you didn’t explicitly install a signal handler for it.
  • run tini as PID1 and it will forward the signal for subprocesses.
  • tini is a signal proxy and it also can deal with zombie process issue automatically.
  • run your program with tini by passing --init flag to docker run
  • use docker stop, docker will wait for 10s for stopping container before killing a process (by default). The main process inside the container will receive SIGTERM, then docker daemon will wait for 10s and send SIGKILL to terminate process.
  • docker kill kills running containers immediately; it's more like kill -9 and kill --SIGKILL
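
The PID 1 options side by side, as a sketch; the program and image names are illustrative:

    # Exec form runs your program as PID 1, so it receives SIGTERM directly:
    ENTRYPOINT ["./myapp"]

    # If a wrapper script must spawn the app, overwrite the shell with exec:
    #   entrypoint.sh:   exec ./myapp "$@"

    # Or let tini run as PID 1, forwarding signals and reaping zombies:
    #   docker run --init myimage
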