
Larvata: group items tagged "font"


張 旭

Providers - Configuration Language | Terraform | HashiCorp Developer - 0 views

  • Terraform relies on plugins called providers to interact with cloud providers, SaaS providers, and other APIs.
  • Terraform configurations must declare which providers they require so that Terraform can install and use them.
  • Each provider adds a set of resource types and/or data sources that Terraform can manage.
  • Every resource type is implemented by a provider; without providers, Terraform can't manage any kind of infrastructure.
  • The Terraform Registry is the main directory of publicly available Terraform providers, and hosts providers for most major infrastructure platforms.
  • Dependency Lock File documents an additional HCL file that can be included with a configuration, which tells Terraform to always use a specific set of provider versions.
  • Terraform CLI finds and installs providers when initializing a working directory. It can automatically download providers from a Terraform registry, or load them from a local mirror or cache.
  • To save time and bandwidth, Terraform CLI supports an optional plugin cache. You can enable the cache using the plugin_cache_dir setting in the CLI configuration file.
  • you can use Terraform CLI to create a dependency lock file and commit it to version control along with your configuration.
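
A minimal HCL sketch tying the annotations above together; the provider (hashicorp/aws), version constraint, region, and cache path are illustrative assumptions, not values from the bookmarked page.

    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"   # provider address on the Terraform Registry
          version = "~> 4.0"          # selection is recorded in the dependency lock file on init
        }
      }
    }

    provider "aws" {
      region = "us-east-1"            # provider configuration (illustrative)
    }

    # Optional plugin cache, set in the CLI configuration file (e.g. ~/.terraformrc):
    # plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"
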
張 旭

Syntax - Configuration Language | Terraform | HashiCorp Developer - 0 views

  • the native syntax of the Terraform language, which is a rich language designed to be relatively easy for humans to read and write.
  • Terraform's configuration language is based on a more general language called HCL, and HCL's documentation usually uses the word "attribute" instead of "argument."
  • A particular block type may have any number of required labels, or it may require none
  • After the block type keyword and any labels, the block body is delimited by the { and } characters
  • Identifiers can contain letters, digits, underscores (_), and hyphens (-). The first character of an identifier must not be a digit, to avoid ambiguity with literal numbers.
  • The # single-line comment style is the default comment style and should be used in most cases.
  • The idiomatic style is to use the Unix convention
  • Indent two spaces for each nesting level.
  • align their equals signs
  • Use empty lines to separate logical groups of arguments within a block.
  • Use one blank line to separate the arguments from the blocks.
  • "meta-arguments" (as defined by the Terraform language semantics)
  • Avoid separating multiple blocks of the same type with other blocks of a different type, unless the block types are defined by semantics to form a family.
  • Resource names must start with a letter or underscore, and may contain only letters, digits, underscores, and dashes.
  • Each resource is associated with a single resource type, which determines the kind of infrastructure object it manages and what arguments and other attributes the resource supports.
  • Each resource type is implemented by a provider, which is a plugin for Terraform that offers a collection of resource types.
  • By convention, resource type names start with their provider's preferred local name.
  • Most publicly available providers are distributed on the Terraform Registry, which also hosts their documentation.
  • The Terraform language defines several meta-arguments, which can be used with any resource type to change the behavior of resources.
  • use precondition and postcondition blocks to specify assumptions and guarantees about how the resource operates.
  • Some resource types provide a special timeouts nested block argument that allows you to customize how long certain operations are allowed to take before being considered to have failed.
  • Timeouts are handled entirely by the resource type implementation in the provider
  • Most resource types do not support the timeouts block at all.
  • A resource block declares that you want a particular infrastructure object to exist with the given settings.
  • Destroy resources that exist in the state but no longer exist in the configuration.
  • Destroy and re-create resources whose arguments have changed but which cannot be updated in-place due to remote API limitations.
  • Expressions within a Terraform module can access information about resources in the same module, and you can use that information to help configure other resources. Use the <RESOURCE TYPE>.<NAME>.<ATTRIBUTE> syntax to reference a resource attribute in an expression.
  • resources often provide read-only attributes with information obtained from the remote API; this often includes things that can't be known until the resource is created, like the resource's unique random ID.
  • data sources, which are a special type of resource used only for looking up information.
  • some dependencies cannot be recognized implicitly in configuration.
  • local-only resource types exist for generating private keys, issuing self-signed TLS certificates, and even generating random ids.
  • The behavior of local-only resources is the same as all other resources, but their result data exists only within the Terraform state.
  • The count meta-argument accepts a whole number, and creates that many instances of the resource or module.
  • count.index — The distinct index number (starting with 0) corresponding to this instance.
  • the count value must be known before Terraform performs any remote resource actions. This means count can't refer to any resource attributes that aren't known until after a configuration is applied
  • Within nested provisioner or connection blocks, the special self object refers to the current resource instance, not the resource block as a whole.
  • This was fragile, because the resource instances were still identified by their index instead of the string values in the list.
  •  
    "the native syntax of the Terraform language, which is a rich language designed to be relatively easy for humans to read and write. "
張 旭

Logstash Alternatives: Pros & Cons of 5 Log Shippers [2019] - Sematext - 0 views

  • In this case, Elasticsearch. And because Elasticsearch can be down or struggling, or the network can be down, the shipper would ideally be able to buffer and retry
  • Logstash is typically used for collecting, parsing, and storing logs for future use as part of log management.
  • Logstash’s biggest con or “Achilles’ heel” has always been performance and resource consumption (the default heap size is 1GB).
  • This can be a problem for high traffic deployments, when Logstash servers would need to be comparable with the Elasticsearch ones.
  • Filebeat was made to be that lightweight log shipper that pushes to Logstash or Elasticsearch.
  • The main differences between Logstash and Filebeat are that Logstash has more functionality, while Filebeat uses fewer resources.
  • Filebeat is just a tiny binary with no dependencies.
  • For example, how aggressive it should be in searching for new files to tail and when to close file handles when a file didn’t get changes for a while.
  • For example, the apache module will point Filebeat to default access.log and error.log paths
  • Filebeat’s scope is very limited,
  • Initially it could only send logs to Logstash and Elasticsearch, but now it can send to Kafka and Redis, and in 5.x it also gains filtering capabilities.
  • Filebeat can parse JSON
  • you can push directly from Filebeat to Elasticsearch, and have Elasticsearch do both parsing and storing.
  • You shouldn’t need a buffer when tailing files because, just like Logstash, Filebeat remembers where it left off
  • For larger deployments, you’d typically use Kafka as a queue instead, because Filebeat can talk to Kafka as well
  • The default syslog daemon on most Linux distros, rsyslog can do so much more than just picking logs from the syslog socket and writing to /var/log/messages.
  • It can tail files, parse them, buffer (on disk and in memory) and ship to a number of destinations, including Elasticsearch.
  • rsyslog is the fastest shipper
  • Its grammar-based parsing module (mmnormalize) works at constant speed no matter the number of rules (we tested this claim).
  • use it as a simple router/shipper, any decent machine will be limited by network bandwidth
  • It’s also one of the lightest parsers you can find, depending on the configured memory buffers.
  • rsyslog requires more work to get the configuration right
  • the main difference between Logstash and rsyslog is that Logstash is easier to use, while rsyslog is lighter.
  • rsyslog fits well in scenarios where you either need something very light yet capable (an appliance, a small VM, collecting syslog from within a Docker container).
  • rsyslog also works well when you need that ultimate performance.
  • syslog-ng as an alternative to rsyslog (though historically it was actually the other way around).
  • a modular syslog daemon, that can do much more than just syslog
  • Unlike rsyslog, it features a clear, consistent configuration format and has nice documentation.
  • Similarly to rsyslog, you’d probably want to deploy syslog-ng on boxes where resources are tight, yet you do want to perform potentially complex processing.
  • syslog-ng has an easier, more polished feel than rsyslog, but likely not that ultimate performance
  • Fluentd was built on the idea of logging in JSON wherever possible (which is a practice we totally agree with) so that log shippers down the line don’t have to guess which substring is which field of which type.
  • Fluentd plugins are in Ruby and very easy to write.
  • While you can send structured data through Fluentd, it’s not made to have the flexibility of other shippers on this list (Filebeat excluded).
  • Fluent Bit, which is to Fluentd what Filebeat is to Logstash.
  • Fluentd is a good fit when you have diverse or exotic sources and destinations for your logs, because of the number of plugins.
  • Splunk isn’t a log shipper, it’s a commercial logging solution
  • Graylog is another complete logging solution, an open-source alternative to Splunk.
  • everything goes through graylog-server, from authentication to queries.
  • Graylog is nice because you have a complete logging solution, but it’s going to be harder to customize than an ELK stack.
  • it depends
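
A minimal filebeat.yml sketch of the "push directly from Filebeat to Elasticsearch" setup mentioned above; the log path and host are assumptions, and the layout follows recent Filebeat versions.

    filebeat.inputs:
      - type: log
        paths:
          - /var/log/nginx/access.log   # illustrative path to tail

    output.elasticsearch:
      hosts: ["localhost:9200"]         # Elasticsearch does the parsing and storing
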
張 旭

Securing NGINX-ingress - cert-manager Documentation - 1 views

  • If using a ClusterIssuer, remember to update the Ingress annotation cert-manager.io/issuer to cert-manager.io/cluster-issuer
  • Certificates resources allow you to specify the details of the certificate you want to request.
  • An Issuer defines how cert-manager will request TLS certificates.
  • cert-manager mainly uses two different custom Kubernetes resources - known as CRDs - to configure and control how it operates, as well as to store state. These resources are Issuers and Certificates.
  • using annotations on the ingress with ingress-shim or directly creating a certificate resource.
  • The secret that is used in the ingress should match the secret defined in the certificate.
  • a typo will result in the ingress-nginx-controller falling back to its self-signed certificate.
  •  
    "If using a ClusterIssuer, remember to update the Ingress annotation cert-manager.io/issuer to cert-manager.io/cluster-issuer"
張 旭

Dependency Lock File (.terraform.lock.hcl) - Configuration Language | Terraform | Hashi... - 0 views

  • Version constraints within the configuration itself determine which versions of dependencies are potentially compatible, but after selecting a specific version of each dependency Terraform remembers the decisions it made in a dependency lock file
  • At present, the dependency lock file tracks only provider dependencies.
  • Terraform does not remember version selections for remote modules, and so Terraform will always select the newest available module version that meets the specified version constraints.
  • The lock file is always named .terraform.lock.hcl, and this name is intended to signify that it is a lock file for various items that Terraform caches in the .terraform
  • Terraform automatically creates or updates the dependency lock file each time you run the terraform init command.
  • You should include this file in your version control repository
  • If a particular provider has no existing recorded selection, Terraform will select the newest available version that matches the given version constraint, and then update the lock file to include that selection.
  • the "trust on first use" model
  • you can pre-populate checksums for a variety of different platforms in your lock file using the terraform providers lock command, which will then allow future calls to terraform init to verify that the packages available in your chosen mirror match the official packages from the provider's origin registry.
  • The h1: and zh: prefixes on these values represent different hashing schemes, each of which represents calculating a checksum using a different algorithm.
  • zh: is a mnemonic for "zip hash"
  • h1: is a mnemonic for "hash scheme 1", which is the current preferred hashing scheme.
  • To determine whether there still exists a dependency on a given provider, Terraform uses two sources of truth: the configuration itself, and the state.
  • Version constraints within the configuration itself determine which versions of dependencies are potentially compatible, but after selecting a specific version of each dependency Terraform remembers the decisions it made in a dependency lock file so that it can (by default) make the same decisions again in future.
  • At present, the dependency lock file tracks only provider dependencies.
  • Terraform will always select the newest available module version that meets the specified version constraints.
  • The lock file is always named .terraform.lock.hcl
  •  
    "the overriding effect is compounded, with later blocks taking precedence over earlier blocks."
張 旭

Data Sources - Configuration Language | Terraform | HashiCorp Developer - 0 views

  • Each provider may offer data sources alongside its set of resource types.
  • When distinguishing from data resources, the primary kind of resource (as declared by a resource block) is known as a managed resource.
  • Each data resource is associated with a single data source, which determines the kind of object (or objects) it reads and what query constraint arguments are available.
  • Terraform reads data resources during the planning phase when possible, but announces in the plan when it must defer reading resources until the apply phase to preserve the order of operations.
  • local-only data sources exist for rendering templates, reading local files, and rendering AWS IAM policies.
  • As with managed resources, when count or for_each is present it is important to distinguish the resource itself from the multiple resource instances it creates. Each instance will separately read from its data source with its own variant of the constraint arguments, producing an indexed result.
  • Data instance arguments may refer to computed values, in which case the attributes of the instance itself cannot be resolved until all of its arguments are defined.
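
A sketch of a data resource (read during the plan phase when possible) feeding a managed resource via the data.<TYPE>.<NAME>.<ATTRIBUTE> reference; the aws_ami data source and its arguments are illustrative.

    data "aws_ami" "ubuntu" {
      most_recent = true
      owners      = ["099720109477"]   # query constraint arguments (illustrative)
    }

    resource "aws_instance" "web" {
      ami           = data.aws_ami.ubuntu.id   # managed resource consuming the data resource
      instance_type = "t2.micro"
    }
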
張 旭

The for_each Meta-Argument - Configuration Language | Terraform | HashiCorp Developer - 0 views

  • A given resource or module block cannot use both count and for_each
  • The for_each meta-argument accepts a map or a set of strings, and creates an instance for each item in that map or set
  • each.key — The map key (or set member) corresponding to this instance.
  • each.value — The map value corresponding to this instance. (If a set was provided, this is the same as each.key.)
  • for_each keys cannot be the result (or rely on the result of) of impure functions, including uuid, bcrypt, or timestamp, as their evaluation is deferred during the main evaluation step.
  • The value used in for_each is used to identify the resource instance and will always be disclosed in UI output, which is why sensitive values are not allowed.
  • if you would like to call keys(local.map), where local.map is an object with sensitive values (but non-sensitive keys), you can create a value to pass to for_each with toset([for k,v in local.map : k]).
  • for_each can't refer to any resource attributes that aren't known until after a configuration is applied (such as a unique ID generated by the remote API when an object is created).
  • The for_each argument does not implicitly convert lists or tuples to sets.
  • Transform a multi-level nested structure into a flat list by using nested for expressions with the flatten function.
  • Instances are identified by a map key (or set member) from the value provided to for_each
  • Within nested provisioner or connection blocks, the special self object refers to the current resource instance, not the resource block as a whole.
  • Conversion from list to set discards the ordering of the items in the list and removes any duplicate elements.
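
A sketch of the for_each behaviour above: an explicit toset() conversion, each.key/each.value, and the keys-only workaround for maps with sensitive values; the resource types and values are illustrative.

    resource "aws_iam_user" "accounts" {
      for_each = toset(["alice", "bob"])   # explicit list-to-set conversion; ordering and duplicates discarded
      name     = each.key                  # for a set, each.value is the same as each.key
    }

    locals {
      map = { app = "secret-a", db = "secret-b" }   # values may be sensitive, keys are not
    }

    resource "aws_ssm_parameter" "params" {
      for_each = toset([for k, v in local.map : k])  # iterate over non-sensitive keys only
      name     = each.key
      type     = "String"
      value    = local.map[each.key]
    }
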
crazylion lee

aui/font-spider: Smart webfont compression and format conversion tool - 1 views

  •  
    "Smart webfont compression and format conversion tool http://font-spider.org"
張 旭

ProxySQL Experimental Feature: Native ProxySQL Clustering - Percona Database Performanc... - 0 views

  • several ProxySQL instances to communicate with and share configuration updates with each other.
  • 4 tables where you can make changes and propagate the configuration
  • When you make a change like INSERT/DELETE/UPDATE on any of these tables, after running the command LOAD … TO RUNTIME , ProxySQL creates a new checksum of the table’s data and increments the version number in the table runtime_checksums_values
  • all nodes are monitoring and communicating with all the other ProxySQL nodes. When another node detects a change in the checksum and version (both at the same time), each node will get a copy of the table that was modified, make the same changes locally, and apply the new config to RUNTIME to refresh the new config, make it visible to the applications connected and automatically save it to DISK for persistence.
  • a “synchronous cluster” so any changes to these 4 tables on any ProxySQL server will be replicated to all other ProxySQL nodes.
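
A sketch of the propagation flow above, issued on one node's ProxySQL admin interface; the hostgroup and server values are illustrative.

    -- change one of the replicated tables
    INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (10, 'db1.example', 3306);

    LOAD MYSQL SERVERS TO RUNTIME;   -- creates a new checksum and bumps the version
    SAVE MYSQL SERVERS TO DISK;      -- persistence on this node; peers sync and save automatically

    -- each node exposes the checksums/versions it is tracking
    SELECT * FROM runtime_checksums_values;
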
張 旭

Improving Kubernetes reliability: quicker detection of a Node down | Fatal failure - 0 views

  • When a Node goes down, the pods on the broken node keep running for some time and still receive requests, and those requests will fail.
  • 1. The Kubelet posts its status to the masters using --node-status-update-frequency=10s. 2. A node dies. 3. The kube controller manager is the component monitoring the nodes; using --node-monitor-period=5s it checks, on the masters, the node status reported by the Kubelet. 4. The kube controller manager will see the node is unresponsive, and has the grace period --node-monitor-grace-period=40s until it considers the node unhealthy.
  • node-status-update-frequency x (N-1) != node-monitor-grace-period
  • 5. Once the node is marked as unhealthy, the kube controller manager will remove its pods based on --pod-eviction-timeout=5m0s
  • 6. Kube proxy has a watcher over the API, so the very first moment the pods are evicted the proxy will notice and update the iptables of the node, removing the endpoints from the services so the failing pods won’t be accessible anymore.
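
A hedged sketch of how the four timings above could be tightened for quicker detection; these particular values are illustrative assumptions, not settings taken from the post.

    # kubelet
    --node-status-update-frequency=4s

    # kube-controller-manager
    --node-monitor-period=2s
    --node-monitor-grace-period=20s
    --pod-eviction-timeout=30s
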
crazylion lee

Web fonts, boy, I don't know - Monica Dinculescu - 1 views

  •  
    Learned two new terms: FOUC and FOIC
張 旭

迷途工程師: Makefile的賦值運算符(=, :=, +=, ?=) - 0 views

  • = is the most basic assignment; := overrides the variable's previous value; ?= assigns a value only when the variable is empty, otherwise it keeps the previous value; += appends the value to the end of the variable
  • = expands when the variable is used (values within it are recursively expanded when the variable is used, not when it's declared); := expands at definition time (values within it are expanded at declaration time)
  •  
    "= 是最基本的賦值 := 會覆蓋變數之前的值 ?= 變數為空時才給值,不然則維持之前的值 += 將值附加到變數的後面 "
張 旭

What is Data Definition Language (DDL) and how is it used? - 1 views

  • Data Definition Language (DDL) is used to create and modify the structure of objects in a database using predefined commands and a specific syntax.
  • DDL includes Structured Query Language (SQL) statements to create and drop databases, aliases, locations, indexes, tables and sequences.
  • Since DDL includes SQL statements to define changes in the database schema, it is considered a subset of SQL.
  • Data Manipulation Language (DML), commands are used to modify data in a database. DML statements control access to the database data.
  • DDL commands are used to create, delete or alter the structure of objects in a database but not its data.
  • DDL deals with descriptions of the database schema and is useful for creating new tables, indexes, sequences, stogroups, etc. and to define the attributes of these objects, such as data type, field length and alternate table names (aliases).
  • Data Query Language (DQL) is used to get data within the schema objects of a database and also to query it and impose order upon it.
  • DQL is also a subset of SQL. One of the most common commands in DQL is SELECT.
  • The most common command types in DDL are CREATE, ALTER and DROP.
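
One statement from each language family above, to make the split concrete; the table and column names are illustrative.

    CREATE TABLE accounts (id INT PRIMARY KEY, name VARCHAR(50));  -- DDL: define structure
    ALTER TABLE accounts ADD COLUMN email VARCHAR(100);            -- DDL: change structure
    INSERT INTO accounts (id, name) VALUES (1, 'alice');           -- DML: modify data
    SELECT id, name FROM accounts;                                 -- DQL: query data
    DROP TABLE accounts;                                           -- DDL: remove the object
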
ninecatswu

AppleDesignResources/SanFranciscoFont - 1 views

  •  
    The San Francisco font by Apple used in the Apple Watch, iOS 9, and OS X El Capitan. Originally found at https://developer.apple.com/watchos/download/ If you don't have an Apple Developer account but need to install the San Francisco font, you can use this first.
張 旭

DevOps - 0 views

  • For operations teams, passing on knowledge is very important, so it is well worth building an operations knowledge base. On one hand it helps with reviewing incidents after the fact; on the other hand it helps people who join operations later to quickly become familiar with and master the system's operational skills
  • The cloud platform supports DevOps mainly in the following three areas (the software tool carrying each capability is in parentheses): 1. IaaS-based self-service and environment orchestration (VMWare); 2. PaaS-based elastic scaling (K8s); 3. SaaS-based software services
  • Consider building your own private cloud, or at least a hybrid cloud.
  • Set up a so-called private repository on the internal network, acting as a proxy that syncs with the public repositories on the internet.
  • It is very hard to achieve DevOps to Production in the true sense
  • Visualization is for showing, in real time, the execution status of the continuous delivery pipeline and the unit test reports
  • Chain automated tests into the continuous delivery pipeline, triggering them once deployment to the test environment succeeds.
  • The test stage also needs visualization of test reports and notification of the results
  • Enterprise continuous delivery pipelines often cannot reach all the way to the production environment
  • Service Desk is not the name of a particular piece of software; it is the collective term in ITIL (the IT Infrastructure Library, which can be seen as the concrete implementation of ITSM) for the tools that carry change management and incident management.
  • Building the underlying cloud platform is the cornerstone of the entire DevOps infrastructure
  • Architecture is not set in stone; it should keep evolving as real requirements change, and its capabilities should keep improving along with it.
  • Execution environments for parallel testing are generated on demand by the PaaS platform and destroyed automatically once the tests finish.
  • Even for very similar projects, small differences in the details of how they are compiled and built can easily lead to very different continuous delivery pipeline designs.
  •  
    "对于运维来说,知识的传承非常重要,非常有必要建立运维的知识库。一方面 有利于对事件的复盘回顾,另一方面也有助于日后参加运维的人员尽快熟悉与掌握系统的运维技能。"
張 旭

Think Before you NodePort in Kubernetes - Oteemo - 0 views

  • Two options are provided for Services intended for external use: a NodePort, or a LoadBalancer
  • no built-in cloud load balancers for Kubernetes in bare-metal environments
  • NodePort may not be your best choice.
  • NodePort, by design, bypasses almost all network security in Kubernetes.
  • NetworkPolicy resources can currently only control NodePorts by allowing or disallowing all traffic on them.
  • put a network filter in front of all the nodes
  • if a NodePort-ranged Service is advertised to the public, it may serve as an invitation to black-hats to scan and probe
  • When Kubernetes creates a NodePort service, it allocates a port from a range specified in the flags that define your Kubernetes cluster. (By default, these are ports ranging from 30000-32767.)
  • By design, Kubernetes NodePort cannot expose standard low-numbered ports like 80 and 443, or even 8080 and 8443.
  • A port in the NodePort range can be specified manually, but this would mean the creation of a list of non-standard ports, cross-referenced with the applications they map to
  • if you want the exposed application to be highly available, everything contacting the application has to know all of your node addresses, or at least more than one.
  • non-standard ports.
  • Ingress resources use an Ingress controller (the nginx one is common but not by any means the only choice) and an external load balancer or public IP to enable path-based routing of external requests to internal Services.
  • With a single point of entry to expose and secure
  • get simpler TLS management!
  • consider putting a real load balancer in front of your NodePort Services before opening them up to the world
  • Google very recently released an alpha-stage bare-metal load balancer that, once installed in your cluster, will load-balance using BGP
  • NodePort Services are easy to create but hard to secure, hard to manage, and not especially friendly to others
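
A sketch of a NodePort Service illustrating the constraints above; the names and ports are illustrative, and a manually pinned nodePort must fall inside the cluster's NodePort range (30000-32767 by default).

    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: NodePort
      selector:
        app: web
      ports:
        - port: 80          # the Service's cluster-internal port
          targetPort: 8080  # the container port
          nodePort: 30080   # exposed on every node; cannot be a low port like 80 or 443
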
張 旭

The Asset Pipeline - Ruby on Rails Guides - 0 views

  • provides a framework to concatenate and minify or compress JavaScript and CSS assets
  • adds the ability to write these assets in other languages and pre-processors such as CoffeeScript, Sass and ERB
  • invalidate the cache by altering this fingerprint
  • Rails 4 automatically adds the sass-rails, coffee-rails and uglifier gems to your Gemfile
  • reduce the number of requests that a browser makes to render a web page
  • Starting with version 3.1, Rails defaults to concatenating all JavaScript files into one master .js file and all CSS files into one master .css file
  • In production, Rails inserts an MD5 fingerprint into each filename so that the file is cached by the web browser
  • The technique sprockets uses for fingerprinting is to insert a hash of the content into the name, usually at the end.
  • asset minification or compression
  • The sass-rails gem is automatically used for CSS compression if included in Gemfile and no config.assets.css_compressor option is set.
  • Supported languages include Sass for CSS, CoffeeScript for JavaScript, and ERB for both by default.
  • When a filename is unique and based on its content, HTTP headers can be set to encourage caches everywhere (whether at CDNs, at ISPs, in networking equipment, or in web browsers) to keep their own copy of the content
  • asset pipeline is technically no longer a core feature of Rails 4
  • Rails uses for fingerprinting is to insert a hash of the content into the name, usually at the end
  • With the asset pipeline, the preferred location for these assets is now the app/assets directory.
  • Fingerprinting is enabled by default for production and disabled for all other environments
  • The files in app/assets are never served directly in production.
  • Paths are traversed in the order that they occur in the search path
  • You should use app/assets for files that must undergo some pre-processing before they are served.
  • By default .coffee and .scss files will not be precompiled on their own
  • app/assets is for assets that are owned by the application, such as custom images, JavaScript files or stylesheets.
  • lib/assets is for your own libraries' code that doesn't really fit into the scope of the application or those libraries which are shared across applications.
  • vendor/assets is for assets that are owned by outside entities, such as code for JavaScript plugins and CSS frameworks.
  • Any path under assets/* will be searched
  • By default these files will be ready to use by your application immediately using the require_tree directive.
  • By default, this means the files in app/assets take precedence, and will mask corresponding paths in lib and vendor
  • Sprockets uses files named index (with the relevant extensions) for a special purpose
  • Rails.application.config.assets.paths
  • causes turbolinks to check if an asset has been updated and if so loads it into the page
  • if you add an erb extension to a CSS asset (for example, application.css.erb), then helpers like asset_path are available in your CSS rules
  • If you add an erb extension to a JavaScript asset, making it something such as application.js.erb, then you can use the asset_path helper in your JavaScript code
  • The asset pipeline automatically evaluates ERB
  • data URI — a method of embedding the image data directly into the CSS file — you can use the asset_data_uri helper.
  • Sprockets will also look through the paths specified in config.assets.paths, which includes the standard application paths and any paths added by Rails engines.
  • image_tag
  • the closing tag cannot be of the style -%>
  • asset_data_uri
  • app/assets/javascripts/application.js
  • sass-rails provides -url and -path helpers (hyphenated in Sass, underscored in Ruby) for the following asset classes: image, font, video, audio, JavaScript and stylesheet.
  • Rails.application.config.assets.compress
  • In JavaScript files, the directives begin with //=
  • The require_tree directive tells Sprockets to recursively include all JavaScript files in the specified directory into the output.
  • manifest files contain directives — instructions that tell Sprockets which files to require in order to build a single CSS or JavaScript file.
  • You should not rely on any particular order among those
  • Sprockets uses manifest files to determine which assets to include and serve.
  • the family of require directives prevents files from being included twice in the output
  • which files to require in order to build a single CSS or JavaScript file
  • Directives are processed top to bottom, but the order in which files are included by require_tree is unspecified.
  • In JavaScript files, Sprockets directives begin with //=
  • If require_self is called more than once, only the last call is respected.
  • require directive is used to tell Sprockets the files you wish to require.
  • You need not supply the extensions explicitly. Sprockets assumes you are requiring a .js file when done from within a .js file
  • paths must be specified relative to the manifest file
  • require_directory
  • Rails 4 creates both app/assets/javascripts/application.js and app/assets/stylesheets/application.css regardless of whether the --skip-sprockets option is used when creating a new rails application.
  • The file extensions used on an asset determine what preprocessing is applied.
  • app/assets/stylesheets/application.css
  • Additional layers of preprocessing can be requested by adding other extensions, where each extension is processed in a right-to-left manner
  • require_self
  • use the Sass @import rule instead of these Sprockets directives.
  • Keep in mind that the order of these preprocessors is important
  • In development mode, assets are served as separate files in the order they are specified in the manifest file.
  • when these files are requested they are processed by the processors provided by the coffee-script and sass gems and then sent back to the browser as JavaScript and CSS respectively.
  • css.scss.erb
  • js.coffee.erb
  • Keep in mind the order of these preprocessors is important.
  • By default Rails assumes that assets have been precompiled and will be served as static assets by your web server
  • with the Asset Pipeline the :cache and :concat options aren't used anymore
  • Assets are compiled and cached on the first request after the server is started
  • RAILS_ENV=production bundle exec rake assets:precompile
  • Debug mode can also be enabled in Rails helper methods
  • If you set config.assets.initialize_on_precompile to false, be sure to test rake assets:precompile locally before deploying
  • By default Rails assumes assets have been precompiled and will be served as static assets by your web server.
  • a rake task to compile the asset manifests and other files in the pipeline
  • RAILS_ENV=production bin/rake assets:precompile
  • a recipe to handle this in deployment
  • links the folder specified in config.assets.prefix to shared/assets
  • config/initializers/assets.rb
  • The initialize_on_precompile change tells the precompile task to run without invoking Rails
  • The X-Sendfile header is a directive to the web server to ignore the response from the application, and instead serve a specified file from disk
  • the jquery-rails gem which comes with Rails as the standard JavaScript library gem.
  • Possible options for JavaScript compression are :closure, :uglifier and :yui
  • concatenate assets
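
A sketch of the two default manifest files and the precompile step described above; the jquery requires assume the default Rails 4 Gemfile.

    // app/assets/javascripts/application.js
    //= require jquery
    //= require jquery_ujs
    //= require_tree .

    /* app/assets/stylesheets/application.css
     *= require_self
     *= require_tree .
     */

    # Precompile for production (run on deploy):
    # RAILS_ENV=production bundle exec rake assets:precompile
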
張 旭

Volumes - Kubernetes - 0 views

  • On-disk files in a Container are ephemeral,
  • when a Container crashes, kubelet will restart it, but the files will be lost - the Container starts with a clean state
  • In Docker, a volume is simply a directory on disk or in another Container.
  • A Kubernetes volume, on the other hand, has an explicit lifetime - the same as the Pod that encloses it.
  • a volume outlives any Containers that run within the Pod, and data is preserved across Container restarts.
    • 張 旭
       
      A Kubernetes Volume follows the life cycle of its Pod
  • Kubernetes supports many types of volumes, and a Pod can use any number of them simultaneously.
  • To use a volume, a Pod specifies what volumes to provide for the Pod (the .spec.volumes field) and where to mount those into Containers (the .spec.containers.volumeMounts field).
  • A process in a container sees a filesystem view composed from their Docker image and volumes.
  • Volumes can not mount onto other volumes or have hard links to other volumes.
  • Each Container in the Pod must independently specify where to mount each volume
  • local
  • nfs
  • cephfs
  • awsElasticBlockStore
  • glusterfs
  • vsphereVolume
  • An awsElasticBlockStore volume mounts an Amazon Web Services (AWS) EBS Volume into your Pod.
  • the contents of an EBS volume are preserved and the volume is merely unmounted.
  • an EBS volume can be pre-populated with data, and that data can be “handed off” between Pods.
  • create an EBS volume using aws ec2 create-volume
  • the nodes on which Pods are running must be AWS EC2 instances
  • EBS only supports a single EC2 instance mounting a volume
  • check that the size and EBS volume type are suitable for your use!
  • A cephfs volume allows an existing CephFS volume to be mounted into your Pod.
  • the contents of a cephfs volume are preserved and the volume is merely unmounted.
    • 張 旭
       
      Equivalent to running your own AWS EBS
  • CephFS can be mounted by multiple writers simultaneously.
  • have your own Ceph server running with the share exported
  • configMap
  • The configMap resource provides a way to inject configuration data into Pods
  • When referencing a configMap object, you can simply provide its name in the volume to reference it
  • volumeMounts:
        - name: config-vol
          mountPath: /etc/config
    volumes:
      - name: config-vol
        configMap:
          name: log-config
          items:
            - key: log_level
              path: log_level
  • create a ConfigMap before you can use it.
  • A Container using a ConfigMap as a subPath volume mount will not receive ConfigMap updates.
  • An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node.
  • When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever.
  • By default, emptyDir volumes are stored on whatever medium is backing the node - that might be disk or SSD or network storage, depending on your environment.
  • you can set the emptyDir.medium field to "Memory" to tell Kubernetes to mount a tmpfs (RAM-backed filesystem)
  • volumeMounts:
        - mountPath: /cache
          name: cache-volume
    volumes:
      - name: cache-volume
        emptyDir: {}
  • An fc volume allows an existing fibre channel volume to be mounted in a Pod.
  • configure FC SAN Zoning to allocate and mask those LUNs (volumes) to the target WWNs beforehand so that Kubernetes hosts can access them.
  • Flocker is an open-source clustered Container data volume manager. It provides management and orchestration of data volumes backed by a variety of storage backends.
  • emptyDir
  • flocker
  • A flocker volume allows a Flocker dataset to be mounted into a Pod
  • have your own Flocker installation running
  • A gcePersistentDisk volume mounts a Google Compute Engine (GCE) Persistent Disk into your Pod.
  • Using a PD on a Pod controlled by a ReplicationController will fail unless the PD is read-only or the replica count is 0 or 1
  • A glusterfs volume allows a Glusterfs (an open source networked filesystem) volume to be mounted into your Pod.
  • have your own GlusterFS installation running
  • A hostPath volume mounts a file or directory from the host node’s filesystem into your Pod.
  • a powerful escape hatch for some applications
  • access to Docker internals; use a hostPath of /var/lib/docker
  • allowing a Pod to specify whether a given hostPath should exist prior to the Pod running, whether it should be created, and what it should exist as
  • specify a type for a hostPath volume
  • the files or directories created on the underlying hosts are only writable by root.
  • hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory
  • An iscsi volume allows an existing iSCSI (SCSI over IP) volume to be mounted into your Pod.
  • have your own iSCSI server running
  • A feature of iSCSI is that it can be mounted as read-only by multiple consumers simultaneously.
  • A local volume represents a mounted local storage device such as a disk, partition or directory.
  • Local volumes can only be used as a statically created PersistentVolume.
  • Compared to hostPath volumes, local volumes can be used in a durable and portable manner without manually scheduling Pods to nodes, as the system is aware of the volume’s node constraints by looking at the node affinity on the PersistentVolume.
  • If a node becomes unhealthy, then the local volume will also become inaccessible, and a Pod using it will not be able to run.
  • PersistentVolume spec using a local volume and nodeAffinity
  • PersistentVolume nodeAffinity is required when using local volumes. It enables the Kubernetes scheduler to correctly schedule Pods using local volumes to the correct node.
  • PersistentVolume volumeMode can now be set to “Block” (instead of the default value “Filesystem”) to expose the local volume as a raw block device.
  • When using local volumes, it is recommended to create a StorageClass with volumeBindingMode set to WaitForFirstConsumer
  • An nfs volume allows an existing NFS (Network File System) share to be mounted into your Pod.
  • NFS can be mounted by multiple writers simultaneously.
  • have your own NFS server running with the share exported
  • A persistentVolumeClaim volume is used to mount a PersistentVolume into a Pod.
  • PersistentVolumes are a way for users to “claim” durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment.
  • A projected volume maps several existing volume sources into the same directory.
  • All sources are required to be in the same namespace as the Pod. For more details, see the all-in-one volume design document.
  • Each projected volume source is listed in the spec under sources
  • A Container using a projected volume source as a subPath volume mount will not receive updates for those volume sources.
  • RBD volumes can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed
  • A secret volume is used to pass sensitive information, such as passwords, to Pods
  • store secrets in the Kubernetes API and mount them as files for use by Pods
  • secret volumes are backed by tmpfs (a RAM-backed filesystem) so they are never written to non-volatile storage.
  • create a secret in the Kubernetes API before you can use it
  • A Container using a Secret as a subPath volume mount will not receive Secret updates.
  • StorageOS runs as a Container within your Kubernetes environment, making local or attached storage accessible from any node within the Kubernetes cluster.
  • Data can be replicated to protect against node failure. Thin provisioning and compression can improve utilization and reduce cost.
  • StorageOS provides block storage to Containers, accessible via a file system.
  • A vsphereVolume is used to mount a vSphere VMDK Volume into your Pod.
  • supports both VMFS and VSAN datastore.
  • create VMDK using one of the following methods before using with Pod.
  • share one volume for multiple uses in a single Pod.
  • The volumeMounts.subPath property can be used to specify a sub-path inside the referenced volume instead of its root.
  • volumeMounts:
        - name: workdir1
          mountPath: /logs
          subPathExpr: $(POD_NAME)
  • env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.name
  • Use the subPathExpr field to construct subPath directory names from Downward API environment variables
  • enable the VolumeSubpathEnvExpansion feature gate
  • The subPath and subPathExpr properties are mutually exclusive.
  • There is no limit on how much space an emptyDir or hostPath volume can consume, and no isolation between Containers or between Pods.
  • emptyDir and hostPath volumes will be able to request a certain amount of space using a resource specification, and to select the type of media to use, for clusters that have several media types.
  • the Container Storage Interface (CSI) and Flexvolume. They enable storage vendors to create custom storage plugins without adding them to the Kubernetes repository.
  • all volume plugins (like volume types listed above) were “in-tree” meaning they were built, linked, compiled, and shipped with the core Kubernetes binaries and extend the core Kubernetes API.
  • Container Storage Interface (CSI) defines a standard interface for container orchestration systems (like Kubernetes) to expose arbitrary storage systems to their container workloads.
  • Once a CSI compatible volume driver is deployed on a Kubernetes cluster, users may use the csi volume type to attach, mount, etc. the volumes exposed by the CSI driver.
  • The csi volume type does not support direct reference from Pod and may only be referenced in a Pod via a PersistentVolumeClaim object.
  • This feature requires the CSIInlineVolume feature gate to be enabled: --feature-gates=CSIInlineVolume=true
  • In-tree plugins that support CSI Migration and have a corresponding CSI driver implemented are listed in the “Types of Volumes” section above.
  • Mount propagation allows for sharing volumes mounted by a Container to other Containers in the same Pod, or even to other Pods on the same node.
  • Mount propagation of a volume is controlled by mountPropagation field in Container.volumeMounts.
  • HostToContainer - This volume mount will receive all subsequent mounts that are mounted to this volume or any of its subdirectories.
  • Bidirectional - This volume mount behaves the same the HostToContainer mount. In addition, all volume mounts created by the Container will be propagated back to the host and to all Containers of all Pods that use the same volume.
  • Edit your Docker’s systemd service file. Set MountFlags as follows: MountFlags=shared
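
A minimal Pod sketch combining the pieces above: a single emptyDir volume declared in .spec.volumes and mounted via .spec.containers[].volumeMounts; the names and image are illustrative.

    apiVersion: v1
    kind: Pod
    metadata:
      name: cache-demo
    spec:
      containers:
        - name: app
          image: busybox
          command: ["sh", "-c", "sleep 3600"]
          volumeMounts:
            - name: cache-volume
              mountPath: /cache       # where the volume appears inside the container
      volumes:
        - name: cache-volume
          emptyDir: {}                # exists as long as the Pod runs on the node
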
張 旭

Override Files - Configuration Language | Terraform | HashiCorp Developer - 0 views

  • the overriding effect is compounded, with later blocks taking precedence over earlier blocks.
  • Terraform has special handling of any configuration file whose name ends in _override.tf or _override.tf.json. This special handling also applies to a file named literally override.tf or override.tf.json. Terraform initially skips these override files when loading configuration, and then afterwards processes each one in turn (in lexicographical order).
  • If the original block defines a default value and an override block changes the variable's type, Terraform attempts to convert the default value to the overridden type, producing an error if this conversion is not possible.
  • Each locals block defines a number of named values.
  •  
    "the overriding effect is compounded, with later blocks taking precedence over earlier blocks."
crazylion lee

Iosevka - 0 views
