Larvata / Group items tagged: plugin

張 旭

3. Hello, world! - Err 9.9.9 documentation - 0 views

  • The first is a Message object, which represents the full message object received by Errbot.
  • The second is a string (or a list, if using the split_args_with parameter of botcmd()) with the arguments passed to the command.
  • args would be the string “Mister Errbot”.
  • If you return None, Errbot will not respond with any kind of message when executing the command.
  • a file that ends with the extension .plug is used by Errbot to identify and load plugins.
  • The key Module should point to a module that Python can find and import.
  • The key Name should be identical to the name you gave to the class in your plugin file
  • The presence of __init__.py indicates that lib is a regular Python package.
  • the !status command,
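A minimal sketch of the plugin shape these notes walk through, following the conventions the annotations mention; the module, class, and command names are illustrative:

```python
# helloWorld.py : an Errbot plugin module
from errbot import BotPlugin, botcmd

class HelloWorld(BotPlugin):
    """Example 'Hello, world!' plugin."""

    @botcmd  # exposes this method as the !hello chat command
    def hello(self, msg, args):
        # msg is the full Message object received by Errbot;
        # args is the string typed after the command (e.g. "Mister Errbot").
        return "Hello, world!"
```

The matching helloWorld.plug file would then set Module = helloWorld and Name = HelloWorld so Errbot can find and load the class; returning None instead of a string makes the command respond with no message at all.
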
crazylion lee

slick - the last carousel you'll ever need - 1 views

  • "the last carousel you'll ever need"
crazylion lee

Cleave.js - Format input text content when you are typing - 0 views

  • "Format your content when you are typing"
張 旭

Kubernetes 架构浅析 (A Brief Analysis of the Kubernetes Architecture) - 0 views

  • Turn the load balancer into a "smart" load balancer: with a service-discovery mechanism, application instances register themselves in a configuration center (etcd/ZooKeeper) when they start or are destroyed, and the load balancer watches for configuration changes and updates its own configuration automatically.
  • MySQL is planned to be accessed by domain name rather than by IP. To avoid delays when DNS records change, a private DNS server has to be set up on the internal network.
  • working with the service-discovery mechanism to update DNS automatically
  • implemented by adding an extra proxy layer
  • dependencies on the operating system and base libraries should be customizable per application
  • dependencies on disk paths and ports are injected dynamically through Docker runtime parameters
  • for Docker's custom variables and parameters, standardized configuration files need to be provided
  • each server node needs an agent to carry out the actual operations and to monitor the applications on that node
  • interfaces and tools for operating all of this also need to be provided.
  • decoupling application processes from resources (CPU, memory, disk, network)
  • decoupling service dependencies
  • in Kubernetes the scheduler is a plugin and can be swapped for another implementation (Mesos, for example)
  • most interfaces simply read and write data in etcd directly.
  • etcd serves as the configuration center and storage service
  • kubelet mainly covers container management, image management, volume management, and so on. kubelet is also a REST service; pod-related commands are carried out by calling its API.
  • kube-proxy mainly implements the Kubernetes service mechanism, providing part of the SDN functionality and an intelligent load balancer inside the cluster.
  • Pods: Kubernetes abstracts a concrete application instance as a pod. Each pod first starts a google_containers/pause Docker container and then starts the application's real Docker containers. This makes it possible to wrap several Docker containers into one pod that shares a single network address.
  • Replication Controller: controls the number of pod replicas
  • Services: a service is an abstraction over a group of pods; thanks to kube-proxy's intelligent load-balancing mechanism, destroying or migrating pods does not affect the service or its upstream callers.
  • Namespace: namespaces in Kubernetes mainly exist to avoid name collisions between pods and services; within a single namespace, pod and service names must be unique.
  • in the Kubernetes philosophy, pods can talk to each other directly
  • users need to choose a networking solution themselves: Flannel, Open vSwitch, Weave, and so on.
  • Hypernetes is a Kubernetes distribution that implements multi-tenancy.
  • If the operations tooling cannot keep up and services are split too finely, it is easy to end up with an ancient, rarely updated service sitting in a corner of some server; eventually everyone forgets about it, it gets lost during a server migration, and it is only rediscovered when users complain.
  • a microservice governance framework on top of Kubernetes can solve RPC, monitoring, and disaster-recovery concerns for microservices in one package
  • there is no priority among the container definitions of a single pod, so their startup order is not guaranteed
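To make the pod abstraction above concrete (several containers wrapped into one unit that shares a network address), a minimal multi-container pod manifest might look like the sketch below; names and images are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
  namespace: default            # names only need to be unique within a namespace
spec:
  containers:
    - name: app                 # containers in the same pod share the network
      image: nginx:1.25         # namespace set up by the pause container, so they
      ports:                    # can reach each other via localhost
        - containerPort: 80
    - name: log-sidecar
      image: busybox:1.36
      command: ["sh", "-c", "tail -f /dev/null"]
      # note: there is no startup-order guarantee between these containers
```
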
crazylion lee

Noiszy - 0 views

張 旭

Why we should stop using Grunt & Gulp - 0 views

  • All of these task runners (or build systems if you want to call them that) try to abstract some kind of task paradigm away into their own siloed incantations.
  • Gulp isn't the only culprit either; Jake, Broccoli, Brunch and Mimosa all require plugins to install as well, which means when using any of these task runners, you're essentially just installing 1 more dependency (the task runner) than if you had no task runner and just used each project's own binaries
張 旭

Vim Awesome - 0 views

shared by 張 旭 on 08 Dec 15
張 旭

pre-commit - 0 views

  • a multi-language package manager for pre-commit hooks
  • pre-commit is specifically designed to not require root access
  • We copied and pasted unwieldy bash scripts from project to project and had to manually change the hooks to work for different project structures.
  • adding pre-commit plugins to your project is done with the .pre-commit-config.yaml configuration file.
  • The pre-commit config file describes what repositories and hooks are installed.
  • This configuration says to download the pre-commit-hooks project and run its trailing-whitespace hook
  • "a multi-language package manager for pre-commit hooks"
張 旭

How To Create a Kubernetes Cluster Using Kubeadm on Ubuntu 18.04 | DigitalOcean - 0 views

  • A pod is an atomic unit that runs one or more containers.
  • Pods are the basic unit of scheduling in Kubernetes: all containers in a pod are guaranteed to run on the same node that the pod is scheduled on.
  • Each pod has its own IP address, and a pod on one node should be able to access a pod on another node using the pod's IP.
  • Communication between pods is more complicated, however, and requires a separate networking component that can transparently route traffic from a pod on one node to a pod on another.
  • pod network plugins. For this cluster, you will use Flannel, a stable and performant option.
  • Passing the argument --pod-network-cidr=10.244.0.0/16 specifies the private subnet that the pod IPs will be assigned from.
  • kubectl apply -f descriptor.[yml|json] is the syntax for telling kubectl to create the objects described in the descriptor.[yml|json] file.
  • deploy Nginx using Deployments and Services
  • A deployment is a type of Kubernetes object that ensures there's always a specified number of pods running based on a defined template, even if the pod crashes during the cluster's lifetime.
  • NodePort, a scheme that will make the pod accessible through an arbitrary port opened on each node of the cluster
  • Services are another type of Kubernetes object that expose cluster internal services to clients, both internal and external.
  • load balancing requests to multiple pods
  • Pods are ubiquitous in Kubernetes, so understanding them will facilitate your work
  • how controllers such as deployments work since they are used frequently in stateless applications for scaling and the automated healing of unhealthy applications.
  • Understanding the types of services and the options they have is essential for running both stateless and stateful applications.
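To tie the Deployment/Service/NodePort pieces above together, a minimal descriptor for the Nginx example might look like the following sketch, applied with kubectl apply -f descriptor.yml; the labels, replica count, and image tag are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2                   # the deployment keeps this many pods running
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort                # exposes the service on an arbitrary port of every node
  selector:
    app: nginx                  # load-balances requests across the matching pods
  ports:
    - port: 80
      targetPort: 80
```
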
張 旭

你到底知不知道什麼是 Kubernetes? (Do You Actually Know What Kubernetes Is?) | Hwchiu Learning Note - 0 views

  • Storage has never been a simple problem to handle; on the software side it touches a great many layers, such as the Linux kernel, file systems, block/file level, caching, snapshots, object storage, and all sorts of other topics worth discussing.
  • DRBD
  • Off-site redundancy, fault tolerance, snapshots, deduplication, and many related topics have basically never had a single perfect solution that covers every usage scenario.
  • An administrator might run mdadm directly on the NFS server to set up the relevant block devices and export them over NFS, or even apply different file systems underneath (EXT4/Btrfs) to get different features and performance.
  • Kubernetes plays only the role of an NFS client
  • CSI (Container Storage Interface). CSI itself acts as the intermediary layer between Kubernetes and the storage solution.
  • Basically, each container in a pod uses a Volume object to represent a mount point inside the container, while externally a PVC and a PV describe the storage server that backs that volume.
  • The whole chain connects to the actual storage device at the far end through the CSI components; whether any given storage feature is implemented or supported depends entirely on the real provider at the end, and Kubernetes only operates through the CSI standard.
  • On the networking side there is a counterpart, CNI (Container Network Interface); Kubernetes talks to the networking solution behind it through the CNI interface
  • The most basic requirement of CNI is to provide networking capability to the corresponding container at the corresponding stage
  • The most common setup today is still IPv4 with TCP/UDP transport, which is why most CNIs focus on exactly that.
  • The expectation is that all containers can reach one another over IPv4, and this must hold for containers on the same node as well as across nodes.
  • How traffic actually moves between containers, whether it needs encapsulation, which network interface it goes through, whether NAT is involved: all of that is the implementation behind the CNI interface
  • external networks accessing container services (Service/Ingress)
  • Between Service and Ingress, Kubernetes implements a module of its own, broadly known as kube-proxy, whose underlying implementation can use iptables, IPVS, user-space software, and other methods; this part is completely unrelated to CNI.
  • A CNI and Service/Ingress can conflict, or simply fail to cooperate; there is no absolutely stable integration between them.
  • What CNI typically handles includes the number of network interfaces inside the container, their names and IP addresses, and the container's connectivity to external nodes
  • CRI (Container Runtime Interface) or Device Plugin
  • Kubernetes itself does not really care how the container technology underneath is actually implemented; Docker, rkt, or CRI-O are all fine, and even a virtual machine disguised as a container, such as virtlet, works.
  • think about why your own service needs to be containerized in the first place and what advantages containerization can bring
  • Far too many people believe that writing one Dockerfile that packs all of the original applications together already gives them a good container.
  • In the end they discover they have been treating containers as virtual machines, and then conclude that containers are simply not that useful.
  • Containerization is not just taking virtual-machine habits and moving them to a different environment; it has to be understood and practiced at the conceptual level.
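A hedged sketch of how the Volume/PVC/PV wiring described above usually looks from the pod's point of view; the storage class, size, and names are placeholders, and which features actually work depends on the CSI driver behind the PV:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard        # provisioned by whatever CSI driver is installed
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-storage
spec:
  containers:
    - name: app
      image: nginx:1.25
      volumeMounts:
        - name: data                # the in-container mount point (the Volume)
          mountPath: /var/lib/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim       # PVC -> PV -> actual storage behind the CSI driver
```
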
張 旭

Docker for AWS persistent data volumes | Docker Documentation - 0 views

  • Cloudstor is a modern volume plugin built by Docker
  • Docker swarm mode tasks and regular Docker containers can use a volume created with Cloudstor to mount a persistent data volume.
  • Global shared Cloudstor volumes mounted by all tasks in a swarm service.
  • Workloads running in a Docker service that require access to low latency/high IOPs persistent storage, such as a database engine, can use a relocatable Cloudstor volume backed by EBS.
  • Each relocatable Cloudstor volume is backed by a single EBS volume.
  • If a swarm task using a relocatable Cloudstor volume gets rescheduled to another node within the same availability zone as the original node where the task was running, Cloudstor detaches the backing EBS volume from the original node and attaches it to the new target node automatically.
  • in a different availability zone,
  • Cloudstor transfers the contents of the backing EBS volume to the destination availability zone using a snapshot, and cleans up the EBS volume in the original availability zone.
  • Typically the snapshot-based transfer process across availability zones takes between 2 and 5 minutes unless the work load is write-heavy.
  • A swarm task is not started until the volume it mounts becomes available
  • Sharing/mounting the same Cloudstor volume backed by EBS among multiple tasks is not a supported scenario and leads to data loss.
  • To use a Cloudstor volume to share data between tasks, choose the appropriate EFS-backed shared volume option.
  • When multiple swarm service tasks need to share data in a persistent storage volume, you can use a shared Cloudstor volume backed by EFS.
  • a volume and its contents can be mounted by multiple swarm service tasks without the risk of data loss
  • over NFS
  • the persistent data backed by EFS volumes is always available.
  • shared Cloudstor volumes only work in those AWS regions where EFS is supported.
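A hedged sketch of how such a volume is typically created and mounted. The driver name follows the notes above, but the specific --opt keys (backing, ebstype, size) are recalled from the Docker for AWS documentation and should be treated as assumptions to verify:

```bash
# Relocatable, EBS-backed volume for a single task (low latency / high IOPS).
docker volume create \
  --driver "cloudstor:aws" \
  --opt backing=relocatable \
  --opt ebstype=gp2 \
  --opt size=25 \
  mydb-data

# Mount it in a swarm service; the task does not start until the volume is available.
docker service create \
  --name db \
  --mount type=volume,volume-driver=cloudstor:aws,source=mydb-data,destination=/var/lib/mysql \
  mysql:5.7
```

For data shared between multiple tasks, the same create command with an EFS-backed (shared) option would be used instead, since EBS-backed volumes must not be mounted by more than one task.
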
張 旭

The Asset Pipeline - Ruby on Rails Guides - 0 views

  • provides a framework to concatenate and minify or compress JavaScript and CSS assets
  • adds the ability to write these assets in other languages and pre-processors such as CoffeeScript, Sass and ERB
  • invalidate the cache by altering this fingerprint
  • Rails 4 automatically adds the sass-rails, coffee-rails and uglifier gems to your Gemfile
  • reduce the number of requests that a browser makes to render a web page
  • Starting with version 3.1, Rails defaults to concatenating all JavaScript files into one master .js file and all CSS files into one master .css file
  • In production, Rails inserts an MD5 fingerprint into each filename so that the file is cached by the web browser
  • The technique sprockets uses for fingerprinting is to insert a hash of the content into the name, usually at the end.
  • asset minification or compression
  • The sass-rails gem is automatically used for CSS compression if included in Gemfile and no config.assets.css_compressor option is set.
  • Supported languages include Sass for CSS, CoffeeScript for JavaScript, and ERB for both by default.
  • When a filename is unique and based on its content, HTTP headers can be set to encourage caches everywhere (whether at CDNs, at ISPs, in networking equipment, or in web browsers) to keep their own copy of the content
  • asset pipeline is technically no longer a core feature of Rails 4
  • The technique Rails uses for fingerprinting is to insert a hash of the content into the name, usually at the end
  • With the asset pipeline, the preferred location for these assets is now the app/assets directory.
  • Fingerprinting is enabled by default for production and disabled for all other environments
  • The files in app/assets are never served directly in production.
  • Paths are traversed in the order that they occur in the search path
  • You should use app/assets for files that must undergo some pre-processing before they are served.
  • By default .coffee and .scss files will not be precompiled on their own
  • app/assets is for assets that are owned by the application, such as custom images, JavaScript files or stylesheets.
  • lib/assets is for your own libraries' code that doesn't really fit into the scope of the application or those libraries which are shared across applications.
  • vendor/assets is for assets that are owned by outside entities, such as code for JavaScript plugins and CSS frameworks.
  • Any path under assets/* will be searched
  • By default these files will be ready to use by your application immediately using the require_tree directive.
  • By default, this means the files in app/assets take precedence, and will mask corresponding paths in lib and vendor
  • Sprockets uses files named index (with the relevant extensions) for a special purpose
  • Rails.application.config.assets.paths
  • causes turbolinks to check if an asset has been updated and if so loads it into the page
  • if you add an erb extension to a CSS asset (for example, application.css.erb), then helpers like asset_path are available in your CSS rules
  • If you add an erb extension to a JavaScript asset, making it something such as application.js.erb, then you can use the asset_path helper in your JavaScript code
  • The asset pipeline automatically evaluates ERB
  • data URI — a method of embedding the image data directly into the CSS file — you can use the asset_data_uri helper.
  • Sprockets will also look through the paths specified in config.assets.paths, which includes the standard application paths and any paths added by Rails engines.
  • image_tag
  • the closing tag cannot be of the style -%>
  • asset_data_uri
  • app/assets/javascripts/application.js
  • sass-rails provides -url and -path helpers (hyphenated in Sass, underscored in Ruby) for the following asset classes: image, font, video, audio, JavaScript and stylesheet.
  • Rails.application.config.assets.compress
  • In JavaScript files, the directives begin with //=
  • The require_tree directive tells Sprockets to recursively include all JavaScript files in the specified directory into the output.
  • manifest files contain directives — instructions that tell Sprockets which files to require in order to build a single CSS or JavaScript file.
  • You should not rely on any particular order among those
  • Sprockets uses manifest files to determine which assets to include and serve.
  • the family of require directives prevents files from being included twice in the output
  • which files to require in order to build a single CSS or JavaScript file
  • Directives are processed top to bottom, but the order in which files are included by require_tree is unspecified.
  • In JavaScript files, Sprockets directives begin with //=
  • If require_self is called more than once, only the last call is respected.
  • require directive is used to tell Sprockets the files you wish to require.
  • You need not supply the extensions explicitly. Sprockets assumes you are requiring a .js file when done from within a .js file
  • paths must be specified relative to the manifest file
  • require_directory
  • Rails 4 creates both app/assets/javascripts/application.js and app/assets/stylesheets/application.css regardless of whether the --skip-sprockets option is used when creating a new rails application.
  • The file extensions used on an asset determine what preprocessing is applied.
  • app/assets/stylesheets/application.css
  • Additional layers of preprocessing can be requested by adding other extensions, where each extension is processed in a right-to-left manner
  • require_self
  • use the Sass @import rule instead of these Sprockets directives.
  • Keep in mind that the order of these preprocessors is important
  • In development mode, assets are served as separate files in the order they are specified in the manifest file.
  • when these files are requested they are processed by the processors provided by the coffee-script and sass gems and then sent back to the browser as JavaScript and CSS respectively.
  • css.scss.erb
  • js.coffee.erb
  • Keep in mind the order of these preprocessors is important.
  • By default Rails assumes that assets have been precompiled and will be served as static assets by your web server
  • with the Asset Pipeline the :cache and :concat options aren't used anymore
  • Assets are compiled and cached on the first request after the server is started
  • RAILS_ENV=production bundle exec rake assets:precompile
  • Debug mode can also be enabled in Rails helper methods
  • If you set config.assets.initialize_on_precompile to false, be sure to test rake assets:precompile locally before deploying
  • By default Rails assumes assets have been precompiled and will be served as static assets by your web server.
  • a rake task to compile the asset manifests and other files in the pipeline
  • RAILS_ENV=production bin/rake assets:precompile
  • a recipe to handle this in deployment
  • links the folder specified in config.assets.prefix to shared/assets
  • config/initializers/assets.rb
  • The initialize_on_precompile change tells the precompile task to run without invoking Rails
  • The X-Sendfile header is a directive to the web server to ignore the response from the application, and instead serve a specified file from disk
  • the jquery-rails gem which comes with Rails as the standard JavaScript library gem.
  • Possible options for JavaScript compression are :closure, :uglifier and :yui
  • concatenate assets
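The manifest-plus-directives idea above, in the shape the guide describes for app/assets/javascripts/application.js; the requires shown are just the Rails 4 defaults:

```js
// app/assets/javascripts/application.js
//
// Sprockets directives live in comments; they are processed top to bottom,
// but the order of files pulled in by require_tree is unspecified.
//= require jquery
//= require jquery_ujs
//= require turbolinks
//= require_tree .
```

In production the whole bundle is then compiled once with the rake task quoted above (RAILS_ENV=production bundle exec rake assets:precompile) and served as a fingerprinted static file.
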
張 旭

MySQL :: MySQL 5.7 Reference Manual :: 19.2.1.2 Configuring an Instance for Group Repli... - 0 views

  • store replication metadata in system tables instead of files
  • collect the write set and encode it as a hash using the XXHASH64 hashing algorithm
  • not start operations automatically when the server starts
  • for incoming connections from other members in the group
  • The server listens on this port for member-to-member connections. This port must not be used for user applications at all
  • The loose- prefix used for the group_replication variables above instructs the server to continue to start if the Group Replication plugin has not been loaded at the time the server is started.
  • For example, if each server instance is on a different machine use the IP and port of the machine, such as 10.0.0.1:33061. The recommended port for group_replication_local_address is 33061
  • does not need to list all members in the group
  • The server that starts the group does not make use of this option, since it is the initial server and as such, it is in charge of bootstrapping the group
  • start the bootstrap member first, and let it create the group
  • Creating a group and joining multiple members at the same time is not supported.
  • must only be used on one server instance at any time
  • Disable this option after the first server instance comes online
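A hedged sketch of the my.cnf fragment these notes summarize, modeled on the 5.7 manual's example; the group name must be a valid UUID, and the addresses and ports are placeholders:

```ini
[mysqld]
# Store replication metadata in system tables instead of files
master_info_repository=TABLE
relay_log_info_repository=TABLE
# Prerequisites for Group Replication
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_format=ROW
log_slave_updates=ON
log_bin=binlog
binlog_checksum=NONE
# Collect the write set and encode it as an XXHASH64 hash
transaction_write_set_extraction=XXHASH64
# loose- lets the server start even if the plugin is not loaded yet
loose-group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
loose-group_replication_start_on_boot=off
loose-group_replication_local_address="10.0.0.1:33061"   # member-to-member port, never for clients
loose-group_replication_group_seeds="10.0.0.1:33061,10.0.0.2:33061,10.0.0.3:33061"
loose-group_replication_bootstrap_group=off               # switch on for the first member only
```
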
張 旭

Upgrading kubeadm clusters | Kubernetes - 0 views

  • Swap must be disabled.
  • read the release notes carefully.
  • back up any important components, such as app-level state stored in a database.
  • All containers are restarted after upgrade, because the container spec hash value is changed.
  • The upgrade procedure on control plane nodes should be executed one node at a time.
  • /etc/kubernetes/admin.conf
  • kubeadm upgrade also automatically renews the certificates that it manages on this node. To opt-out of certificate renewal the flag --certificate-renewal=false can be used.
  • Manually upgrade your CNI provider plugin.
  • sudo systemctl daemon-reload && sudo systemctl restart kubelet
  • If kubeadm upgrade fails and does not roll back, for example because of an unexpected shutdown during execution, you can run kubeadm upgrade again.
  • To recover from a bad state, you can also run kubeadm upgrade apply --force without changing the version that your cluster is running.
  • kubeadm-backup-etcd contains a backup of the local etcd member data for this control plane Node.
  • the contents of this folder can be manually restored in /var/lib/etcd
  • kubeadm-backup-manifests contains a backup of the static Pod manifest files for this control plane Node.
  • the contents of this folder can be manually restored in /etc/kubernetes/manifests
  • Enforces the version skew policies.
  • Upgrades the control plane components, or rolls them back if any of them fails to come up.
  • Creates new certificate and key files of the API server and backs up old files if they're about to expire in 180 days.
  • backup folders under /etc/kubernetes/tmp
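A sketch of the usual sequence on the first control plane node; the target version is a placeholder, and the remaining control plane nodes run kubeadm upgrade node instead of apply:

```bash
# Check available versions and validate the upgrade path (version skew policies)
sudo kubeadm upgrade plan

# Upgrade the control plane components; certificates managed by kubeadm are
# renewed unless --certificate-renewal=false is passed
sudo kubeadm upgrade apply v1.28.4     # example target version

# After upgrading the kubelet package, reload unit files and restart the kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```
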
張 旭

MySQL cluster vs Galera - How to make the right choice - 0 views

  • there is no “one size fits all” solution when coming to database clustering.
  • MySQL Cluster contains data nodes that store the cluster data and a management node that stores the cluster's configuration.
  • MySQL clients first communicate with the management node and then connect directly to these data nodes.
  • For synchronization of data in the data nodes, MySQL cluster uses a special data engine called NDB (Network Database).
  • it uses automatic sharding, i.e. splitting a large database into small units.
  • MySQL Cluster avoids a single point of failure and ensures 99.99% availability.
  • MySQL Cluster can provide response times of less than 3 ms.
  • Galera Cluster consists of a database server and uses the Galera Replication Plugin to manage replication.
  • a multi-master database cluster that supports synchronous replication.
  • it provides multiple, up-to-date copies of the data.
  • there is a need for instant fail-over.
  • Galera cluster allows the read and write of data in any node.
  • The advantages of Galera Cluster include guaranteed write consistency, automatic node provisioning, etc.
  • Upon restoring the connection, the separated nodes will sync back and rejoin the cluster automatically.
  • there is no need for a management node as in MySQL Cluster.
  • it gives best results with the InnoDB storage engine.
張 旭

Syntax - Configuration Language | Terraform | HashiCorp Developer - 0 views

  • the native syntax of the Terraform language, which is a rich language designed to be relatively easy for humans to read and write.
  • Terraform's configuration language is based on a more general language called HCL, and HCL's documentation usually uses the word "attribute" instead of "argument."
  • A particular block type may have any number of required labels, or it may require none
  • After the block type keyword and any labels, the block body is delimited by the { and } characters
  • Identifiers can contain letters, digits, underscores (_), and hyphens (-). The first character of an identifier must not be a digit, to avoid ambiguity with literal numbers.
  • The # single-line comment style is the default comment style and should be used in most cases.
  • The idiomatic style is to use the Unix convention
  • Indent two spaces for each nesting level.
  • align their equals signs
  • Use empty lines to separate logical groups of arguments within a block.
  • Use one blank line to separate the arguments from the blocks.
  • "meta-arguments" (as defined by the Terraform language semantics)
  • Avoid separating multiple blocks of the same type with other blocks of a different type, unless the block types are defined by semantics to form a family.
  • Resource names must start with a letter or underscore, and may contain only letters, digits, underscores, and dashes.
  • Each resource is associated with a single resource type, which determines the kind of infrastructure object it manages and what arguments and other attributes the resource supports.
  • Each resource type is implemented by a provider, which is a plugin for Terraform that offers a collection of resource types.
  • By convention, resource type names start with their provider's preferred local name.
  • Most publicly available providers are distributed on the Terraform Registry, which also hosts their documentation.
  • The Terraform language defines several meta-arguments, which can be used with any resource type to change the behavior of resources.
  • use precondition and postcondition blocks to specify assumptions and guarantees about how the resource operates.
  • Some resource types provide a special timeouts nested block argument that allows you to customize how long certain operations are allowed to take before being considered to have failed.
  • Timeouts are handled entirely by the resource type implementation in the provider
  • Most resource types do not support the timeouts block at all.
  • A resource block declares that you want a particular infrastructure object to exist with the given settings.
  • Destroy resources that exist in the state but no longer exist in the configuration.
  • Destroy and re-create resources whose arguments have changed but which cannot be updated in-place due to remote API limitations.
  • Expressions within a Terraform module can access information about resources in the same module, and you can use that information to help configure other resources. Use the <RESOURCE TYPE>.<NAME>.<ATTRIBUTE> syntax to reference a resource attribute in an expression.
  • resources often provide read-only attributes with information obtained from the remote API; this often includes things that can't be known until the resource is created, like the resource's unique random ID.
  • data sources, which are a special type of resource used only for looking up information.
  • some dependencies cannot be recognized implicitly in configuration.
  • local-only resource types exist for generating private keys, issuing self-signed TLS certificates, and even generating random ids.
  • The behavior of local-only resources is the same as all other resources, but their result data exists only within the Terraform state.
  • The count meta-argument accepts a whole number, and creates that many instances of the resource or module.
  • count.index — The distinct index number (starting with 0) corresponding to this instance.
  • the count value must be known before Terraform performs any remote resource actions. This means count can't refer to any resource attributes that aren't known until after a configuration is applied
  • Within nested provisioner or connection blocks, the special self object refers to the current resource instance, not the resource block as a whole.
  • This was fragile, because the resource instances were still identified by their index instead of the string values in the list.
  • "the native syntax of the Terraform language, which is a rich language designed to be relatively easy for humans to read and write."