
Group items tagged select


張 旭

Dependency Lock File (.terraform.lock.hcl) - Configuration Language | Terraform | Hashi... - 0 views

  • Version constraints within the configuration itself determine which versions of dependencies are potentially compatible, but after selecting a specific version of each dependency Terraform remembers the decisions it made in a dependency lock file so that it can (by default) make the same decisions again in future.
  • At present, the dependency lock file tracks only provider dependencies.
  • Terraform does not remember version selections for remote modules, and so Terraform will always select the newest available module version that meets the specified version constraints.
  • The lock file is always named .terraform.lock.hcl, and this name is intended to signify that it is a lock file for various items that Terraform caches in the .terraform subdirectory of your working directory
  • Terraform automatically creates or updates the dependency lock file each time you run the terraform init command.
  • You should include this file in your version control repository
  • If a particular provider has no existing recorded selection, Terraform will select the newest available version that matches the given version constraint, and then update the lock file to include that selection.
  • the "trust on first use" model
  • you can pre-populate checksums for a variety of different platforms in your lock file using the terraform providers lock command, which will then allow future calls to terraform init to verify that the packages available in your chosen mirror match the official packages from the provider's origin registry.
  • The h1: and zh: prefixes on these values represent different hashing schemes, each of which represents calculating a checksum using a different algorithm.
  • zh:: a mnemonic for "zip hash"
  • h1:: a mnemonic for "hash scheme 1", which is the current preferred hashing scheme.
  • To determine whether there still exists a dependency on a given provider, Terraform uses two sources of truth: the configuration itself, and the state.
  •  
    "the overriding effect is compounded, with later blocks taking precedence over earlier blocks."
張 旭

plataformatec/simple_form - 0 views

  • The basic goal of Simple Form is to not touch your way of defining the layout
  • by default contains label, hints, errors and the input itself
  • Simple Form acts as a DSL and just maps your input type (retrieved from the column definition in the database) to a specific helper method.
  • can overwrite the default label by passing it to the input method
  • configure the html of any of them
  • disable labels, hints or errors
  • add a hint, an error, or even a placeholder
  • add an inline label
  • pass any html attribute straight to the input, by using the :input_html option
  • use the :defaults option in simple_form_for
  • Since Simple Form generates a wrapper div around your label and input by default, you can pass any html attribute to that wrapper as well using the :wrapper_html option
  • By default all inputs are required
  • the required property of any input can be overwritten
  • Simple Form will look at the column type in the database and use an appropriate input for the column
  • lets you overwrite the default input type it creates
  • can also render boolean attributes using as: :select to show a dropdown.
  • give the :disabled option to Simple Form, and it'll automatically mark the wrapper as disabled with a CSS class
  • Simple Form accepts same options as their corresponding input type helper in Rails
  • Any extra option passed to these methods will be rendered as html option.
  • use label, hint, input_field, error and full_error helpers
  • to strip away all the div wrappers around the <input> field
  • is to use f.input_field
  • changing boolean_style from default value :nested to :inline
  • overriding the :collection option
  • Collections can be arrays or ranges, and when a :collection is given the :select input will be rendered by default
  • Other types of collection are :radio_buttons and :check_boxes
  • label_method
  • value_method
  • Both of these options also accept lambda/procs
  • define a to_label method on your model as Simple Form will search for and use :to_label as a :label_method first if it is found
  • create grouped collection selects, that will use the html optgroup tags
  • Grouped collection inputs accept the same :label_method and :value_method options
  • group_method
  • group_label_method
  • configured with a default value to be used on the site through the SimpleForm.country_priority and SimpleForm.time_zone_priority helpers.
  • association
  • association
  • render a :select input for choosing the :company, and another :select input with :multiple option for the :roles
  • all options available to :select, :radio_buttons and :check_boxes are also available to association
  • declare different labels and values
  • the association helper is currently only tested with Active Record
  • f.input
  • f.select
  • create a <button> element
  • simple_fields_for
  • Creates a collection of radio inputs with labels associated
  • Creates a collection of checkboxes with labels associated
  • collection_radio_buttons
  • collection_check_boxes
  • associations in your model
  • Role.all
  • the html element you will get for each attribute according to its database definition
  • redefine existing Simple Form inputs by creating a new class with the same name
  • Simple Form uses all power of I18n API to lookup labels, hints, prompts and placeholders
  • specify defaults for all models under the 'defaults' key
  • Simple Form will always look for a default attribute translation under the "defaults" key if no specific is found inside the model key
  • Simple Form will fallback to default human_attribute_name from Rails when no other translation is found for labels.
  • Simple Form will only do the lookup for options if you give a collection composed of symbols only.
  • "Add %{model}"
  • translations for labels, hints and placeholders for a namespaced model, e.g. Admin::User, should be placed under admin_user, not under admin/user
  • This difference exists because Simple Form relies on object_name provided by Rails' FormBuilder to determine the translation path for a given object instead of i18n_key from the object itself.
  • configure how your components will be rendered using the wrappers API
  • optional
  • unless_blank
  • By default, Simple Form will generate input field types and attributes that are supported in HTML5
  • The HTML5 extensions include the new field types such as email, number, search, url, tel, and the new attributes such as required, autofocus, maxlength, min, max, step.
  • If you want to have all other HTML 5 features, such as the new field types, you can disable only the browser validation
  • add novalidate to a specific form by setting the option on the form itself
  • the Simple Form configuration file
  • passing the html5 option
  • as: :date, html5: true
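
A hedged sketch tying several of the options above together; the object and attribute names (@user, :username, :company) are hypothetical:

    <%= simple_form_for @user, defaults: { input_html: { class: 'default-input' } } do |f| %>
      <%= f.input :username, label: 'Your name', hint: 'no special characters' %>
      <%= f.input :email, placeholder: 'user@example.com', input_html: { maxlength: 100 } %>
      <%= f.input :active, as: :select %>   <%# boolean rendered as a dropdown %>
      <%= f.input :role, collection: %w[admin member guest], as: :radio_buttons %>
      <%= f.association :company %>         <%# renders a :select by default %>
      <%= f.button :submit %>
    <% end %>
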
張 旭

Understanding the Nginx Configuration File Structure and Configuration Contexts | Digit... - 0 views

  • discussing the basic structure of an Nginx configuration file along with some guidelines on how to design your files
  • /etc/nginx/nginx.conf
  • In Nginx parlance, the areas that these brackets define are called "contexts" because they contain configuration details that are separated according to their area of concern
  • contexts can be layered within one another
  • if a directive is valid in multiple nested scopes, a declaration in a broader context will be passed on to any child contexts as default values.
  • The children contexts can override these values at will
  • Nginx will error out on reading a configuration file with directives that are declared in the wrong context.
  • The most general context is the "main" or "global" context
  • Any directive that exists entirely outside of these blocks is said to inhabit the "main" context
  • The main context represents the broadest environment for Nginx configuration.
  • The "events" context is contained within the "main" context. It is used to set global options that affect how Nginx handles connections at a general level.
  • Nginx uses an event-based connection processing model, so the directives defined within this context determine how worker processes should handle connections.
  • the connection processing method is automatically selected based on the most efficient choice that the platform has available
  • a worker will only take a single connection at a time
  • When configuring Nginx as a web server or reverse proxy, the "http" context will hold the majority of the configuration.
  • The http context is a sibling of the events context, so they should be listed side-by-side, rather than nested
  • fine-tune the TCP keep alive settings (keepalive_disable, keepalive_requests, and keepalive_timeout)
  • The "server" context is declared within the "http" context.
  • multiple declarations
  • each instance defines a specific virtual server to handle client requests
  • Each client request will be handled according to the configuration defined in a single server context, so Nginx must decide which server context is most appropriate based on details of the request.
  • listen: The ip address / port combination that this server block is designed to respond to.
  • server_name: This directive is the other component used to select a server block for processing.
  • "Host" header
  • configure files to try to respond to requests (try_files)
  • issue redirects and rewrites (return and rewrite)
  • set arbitrary variables (set)
  • Location contexts share many relational qualities with server contexts
  • multiple location contexts can be defined, each location is used to handle a certain type of client request, and each location is selected by virtue of matching the location definition against the client request through a selection algorithm
  • Location blocks live within server contexts and, unlike server blocks, can be nested inside one another.
  • While server contexts are selected based on the requested IP address/port combination and the host name in the "Host" header, location blocks further divide up the request handling within a server block by looking at the request URI
  • The request URI is the portion of the request that comes after the domain name or IP address/port combination.
  • New directives at this level allow you to reach locations outside of the document root (alias), mark the location as only internally accessible (internal), and proxy to other servers or locations (using http, fastcgi, scgi, and uwsgi proxying).
  • These can then be used to do A/B testing by providing different content to different hosts.
  • configures Perl handlers for the location they appear in
  • set the value of a variable depending on the value of another variable
  • used to map MIME types to the file extensions that should be associated with them.
  • this context defines a named pool of servers that Nginx can then proxy requests to
  • The upstream context should be placed within the http context, outside of any specific server contexts.
  • The upstream context can then be referenced by name within server or location blocks to pass requests of a certain type to the pool of servers that have been defined.
  • function as a high performance mail proxy server
  • The mail context is defined within the "main" or "global" context (outside of the http context).
  • Nginx has the ability to redirect authentication requests to an external authentication server
  • the if directive in Nginx will execute the instructions contained if a given test returns "true".
  • Since Nginx will test conditions of a request with many other purpose-made directives, if should not be used for most forms of conditional execution.
  • The limit_except context is used to restrict the use of certain HTTP methods within a location context.
  • The result of the above example is that any client can use the GET and HEAD verbs, but only clients coming from the 192.168.1.1/24 subnet are allowed to use other methods.
  • Many directives are valid in more than one context
  • it is usually best to declare directives in the highest context to which they are applicable, and overriding them in lower contexts as necessary.
  • Declaring at higher levels provides you with a sane default
  • Nginx already engages in a well-documented selection algorithm for things like selecting server blocks and location blocks.
  • instead of relying on rewrites to get a user supplied request into the format that you would like to work with, you should try to set up two blocks for the request, one of which represents the desired method, and the other that catches messy requests and redirects (and possibly rewrites) them to your correct block.
  • incorrect requests can get by with a redirect rather than a rewrite, which should execute with lower overhead.
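
A skeleton of the context hierarchy described above (addresses and names are placeholders):

    # main (global) context: directives outside any block
    worker_processes auto;

    events {
        worker_connections 1024;       # connection handling for worker processes
    }

    http {
        upstream app_pool {            # named pool, referenced below by name
            server 10.0.0.10:8080;
            server 10.0.0.11:8080;
        }

        server {                       # one virtual server per server block
            listen 80;
            server_name example.com;

            location / {
                try_files $uri $uri/ =404;
            }

            location /api/ {           # locations divide up the request URI space
                proxy_pass http://app_pool;
            }
        }
    }
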
張 旭

Active Record Associations - Ruby on Rails Guides - 0 views

  • With Active Record associations, we can streamline these - and other - operations by declaratively telling Rails that there is a connection between the two models.
  • belongs_to has_one has_many has_many :through has_one :through has_and_belongs_to_many
  • an association is a connection between two Active Record models
  • Associations are implemented using macro-style calls, so that you can declaratively add features to your models
  • A belongs_to association sets up a one-to-one connection with another model, such that each instance of the declaring model "belongs to" one instance of the other model.
  • belongs_to associations must use the singular term.
  • belongs_to
  • A has_one association also sets up a one-to-one connection with another model, but with somewhat different semantics (and consequences).
  • This association indicates that each instance of a model contains or possesses one instance of another model
  • belongs_to
  • A has_many association indicates a one-to-many connection with another model.
  • This association indicates that each instance of the model has zero or more instances of another model.
  • belongs_to
  • A has_many :through association is often used to set up a many-to-many connection with another model
  • This association indicates that the declaring model can be matched with zero or more instances of another model by proceeding through a third model.
  • through:
  • through:
  • The collection of join models can be managed via the API
  • new join models are created for newly associated objects, and if some are gone their rows are deleted.
  • The has_many :through association is also useful for setting up "shortcuts" through nested has_many associations
  • A has_one :through association sets up a one-to-one connection with another model. This association indicates that the declaring model can be matched with one instance of another model by proceeding through a third model.
  • A has_and_belongs_to_many association creates a direct many-to-many connection with another model, with no intervening model.
  • id: false
  • The has_one relationship says that one of something is yours
  • using t.references :supplier instead.
  • declare a many-to-many relationship is to use has_many :through. This makes the association indirectly, through a join model
  • set up a has_many :through relationship if you need to work with the relationship model as an independent entity
  • set up a has_and_belongs_to_many relationship (though you'll need to remember to create the joining table in the database).
  • use has_many :through if you need validations, callbacks, or extra attributes on the join model
  • With polymorphic associations, a model can belong to more than one other model, on a single association.
  • belongs_to :imageable, polymorphic: true
  • a polymorphic belongs_to declaration as setting up an interface that any other model can use.
    • 張 旭
       
      _id records the foreign key ids of the different types; _type records the table names of the different types.
  • In designing a data model, you will sometimes find a model that should have a relation to itself
  • add a references column to the model itself
  • Controlling caching Avoiding name collisions Updating the schema Controlling association scope Bi-directional associations
  • All of the association methods are built around caching, which keeps the result of the most recent query available for further operations.
  • it is a bad idea to give an association a name that is already used for an instance method of ActiveRecord::Base. The association method would override the base method and break things.
  • You are responsible for maintaining your database schema to match your associations.
  • belongs_to associations you need to create foreign keys
  • has_and_belongs_to_many associations you need to create the appropriate join table
  • If you create an association some time after you build the underlying model, you need to remember to create an add_column migration to provide the necessary foreign key.
  • Active Record creates the name by using the lexical order of the class names
  • So a join between customer and order models will give the default join table name of "customers_orders" because "c" outranks "o" in lexical ordering.
  • For example, one would expect the tables "paper_boxes" and "papers" to generate a join table name of "papers_paper_boxes" because of the length of the name "paper_boxes", but it in fact generates a join table name of "paper_boxes_papers" (because the underscore '_' is lexicographically less than 's' in common encodings).
  • id: false
  • pass id: false to create_table because that table does not represent a model
  • By default, associations look for objects only within the current module's scope.
  • will work fine, because both the Supplier and the Account class are defined within the same scope.
  • To associate a model with a model in a different namespace, you must specify the complete class name in your association declaration:
  • class_name
  • class_name
  • Active Record provides the :inverse_of option
    • 張 旭
       
      This means that on the first comparison the two first_name values are the same; but after modifying first_name through the c instance, comparing them again no longer matches, because they are two different objects in memory.
  • preventing inconsistencies and making your application more efficient
  • Every association will attempt to automatically find the inverse association and set the :inverse_of option heuristically (based on the association name)
  • In database terms, this association says that this class contains the foreign key.
  • In all of these methods, association is replaced with the symbol passed as the first argument to belongs_to.
  • (force_reload = false)
  • The association method returns the associated object, if any. If no associated object is found, it returns nil.
  • the cached version will be returned.
  • The association= method assigns an associated object to this object.
  • Behind the scenes, this means extracting the primary key from the associate object and setting this object's foreign key to the same value.
  • The build_association method returns a new object of the associated type
  • but the associated object will not yet be saved.
  • The create_association method returns a new object of the associated type
  • once it passes all of the validations specified on the associated model, the associated object will be saved
  • raises ActiveRecord::RecordInvalid if the record is invalid.
  • dependent
  • counter_cache
  • :autosave :class_name :counter_cache :dependent :foreign_key :inverse_of :polymorphic :touch :validate
  • finding the number of belonging objects more efficient.
  • Although the :counter_cache option is specified on the model that includes the belongs_to declaration, the actual column must be added to the associated model.
  • add a column named orders_count to the Customer model.
  • :destroy, when the object is destroyed, destroy will be called on its associated objects.
  • deleted directly from the database without calling their destroy method.
  • Rails will not create foreign key columns for you
  • The :inverse_of option specifies the name of the has_many or has_one association that is the inverse of this association
  • set the :touch option to :true, then the updated_at or updated_on timestamp on the associated object will be set to the current time whenever this object is saved or destroyed
  • specify a particular timestamp attribute to update
  • If you set the :validate option to true, then associated objects will be validated whenever you save this object
  • By default, this is false: associated objects will not be validated when this object is saved.
  • where includes readonly select
  • make your code somewhat more efficient
  • no need to use includes for immediate associations
  • will be read-only when retrieved via the association
  • The select method lets you override the SQL SELECT clause that is used to retrieve data about the associated object
  • using the association.nil?
  • Assigning an object to a belongs_to association does not automatically save the object. It does not save the associated object either.
  • In database terms, this association says that the other class contains the foreign key.
  • the cached version will be returned.
  • :as :autosave :class_name :dependent :foreign_key :inverse_of :primary_key :source :source_type :through :validate
  • Setting the :as option indicates that this is a polymorphic association
  • :nullify causes the foreign key to be set to NULL. Callbacks are not executed.
  • It's necessary not to set or leave the :nullify option for associations that have NOT NULL database constraints.
  • The :source_type option specifies the source association type for a has_one :through association that proceeds through a polymorphic association.
  • The :source option specifies the source association name for a has_one :through association.
  • The :through option specifies a join model through which to perform the query
  • more efficient by including representatives in the association from suppliers to accounts
  • When you assign an object to a has_one association, that object is automatically saved (in order to update its foreign key).
  • If either of these saves fails due to validation errors, then the assignment statement returns false and the assignment itself is cancelled.
  • If the parent object (the one declaring the has_one association) is unsaved (that is, new_record? returns true) then the child objects are not saved.
  • If you want to assign an object to a has_one association without saving the object, use the association.build method
  • collection(force_reload = false) collection<<(object, ...) collection.delete(object, ...) collection.destroy(object, ...) collection=(objects) collection_singular_ids collection_singular_ids=(ids) collection.clear collection.empty? collection.size collection.find(...) collection.where(...) collection.exists?(...) collection.build(attributes = {}, ...) collection.create(attributes = {}) collection.create!(attributes = {})
  • In all of these methods, collection is replaced with the symbol passed as the first argument to has_many, and collection_singular is replaced with the singularized version of that symbol.
  • The collection<< method adds one or more objects to the collection by setting their foreign keys to the primary key of the calling model
  • The collection.delete method removes one or more objects from the collection by setting their foreign keys to NULL.
  • objects will be destroyed if they're associated with dependent: :destroy, and deleted if they're associated with dependent: :delete_all
  • The collection.destroy method removes one or more objects from the collection by running destroy on each object.
  • The collection_singular_ids method returns an array of the ids of the objects in the collection.
  • The collection_singular_ids= method makes the collection contain only the objects identified by the supplied primary key values, by adding and deleting as appropriate
  • The default strategy for has_many :through associations is delete_all, and for has_many associations is to set the foreign keys to NULL.
  • The collection.clear method removes all objects from the collection according to the strategy specified by the dependent option
  • uses the same syntax and options as ActiveRecord::Base.find
  • The collection.where method finds objects within the collection based on the conditions supplied but the objects are loaded lazily meaning that the database is queried only when the object(s) are accessed.
  • The collection.build method returns one or more new objects of the associated type. These objects will be instantiated from the passed attributes, and the link through their foreign key will be created, but the associated objects will not yet be saved.
  • The collection.create method returns a new object of the associated type. This object will be instantiated from the passed attributes, the link through its foreign key will be created, and, once it passes all of the validations specified on the associated model, the associated object will be saved.
  • :as :autosave :class_name :dependent :foreign_key :inverse_of :primary_key :source :source_type :through :validate
  • :delete_all causes all the associated objects to be deleted directly from the database (so callbacks will not execute)
  • :nullify causes the foreign keys to be set to NULL. Callbacks are not executed.
  • where includes readonly select
  • :conditions :through :polymorphic :foreign_key
  • By convention, Rails assumes that the column used to hold the primary key of the association is id. You can override this and explicitly specify the primary key with the :primary_key option.
  • The :source option specifies the source association name for a has_many :through association.
  • You only need to use this option if the name of the source association cannot be automatically inferred from the association name.
  • The :source_type option specifies the source association type for a has_many :through association that proceeds through a polymorphic association.
  • The :through option specifies a join model through which to perform the query.
  • has_many :through associations provide a way to implement many-to-many relationships,
  • By default, this is true: associated objects will be validated when this object is saved.
  • where extending group includes limit offset order readonly select uniq
  • If you use a hash-style where option, then record creation via this association will be automatically scoped using the hash
  • The extending method specifies a named module to extend the association proxy.
  • Association extensions
  • The group method supplies an attribute name to group the result set by, using a GROUP BY clause in the finder SQL.
  • has_many :line_items, -> { group 'orders.id' }, through: :orders
  • more efficient by including line items in the association from customers to orders
  • The limit method lets you restrict the total number of objects that will be fetched through an association.
  • The offset method lets you specify the starting offset for fetching objects via an association
  • The order method dictates the order in which associated objects will be received (in the syntax used by an SQL ORDER BY clause).
  • Use the distinct method to keep the collection free of duplicates.
  • mostly useful together with the :through option
  • -> { distinct }
  • .all.inspect
  • If you want to make sure that, upon insertion, all of the records in the persisted association are distinct (so that you can be sure that when you inspect the association that you will never find duplicate records), you should add a unique index on the table itself
  • unique: true
  • Do not attempt to use include? to enforce distinctness in an association.
  • multiple users could be attempting this at the same time
  • checking for uniqueness using something like include? is subject to race conditions
  • When you assign an object to a has_many association, that object is automatically saved (in order to update its foreign key).
  • If any of these saves fails due to validation errors, then the assignment statement returns false and the assignment itself is cancelled.
  • If the parent object (the one declaring the has_many association) is unsaved (that is, new_record? returns true) then the child objects are not saved when they are added
  • All unsaved members of the association will automatically be saved when the parent is saved.
  • assign an object to a has_many association without saving the object, use the collection.build method
  • collection(force_reload = false) collection<<(object, ...) collection.delete(object, ...) collection.destroy(object, ...) collection=(objects) collection_singular_ids collection_singular_ids=(ids) collection.clear collection.empty? collection.size collection.find(...) collection.where(...) collection.exists?(...) collection.build(attributes = {}) collection.create(attributes = {}) collection.create!(attributes = {})
  • If the join table for a has_and_belongs_to_many association has additional columns beyond the two foreign keys, these columns will be added as attributes to records retrieved via that association.
  • Records returned with additional attributes will always be read-only
  • If you require this sort of complex behavior on the table that joins two models in a many-to-many relationship, you should use a has_many :through association instead of has_and_belongs_to_many.
  • aliased as collection.concat and collection.push.
  • The collection.delete method removes one or more objects from the collection by deleting records in the join table
  • not destroy the objects
  • The collection.destroy method removes one or more objects from the collection by running destroy on each record in the join table, including running callbacks.
  • not destroy the objects.
  • The collection.clear method removes every object from the collection by deleting the rows from the joining table.
  • not destroy the associated objects.
  • The collection.find method finds objects within the collection. It uses the same syntax and options as ActiveRecord::Base.find.
  • The collection.where method finds objects within the collection based on the conditions supplied but the objects are loaded lazily meaning that the database is queried only when the object(s) are accessed.
  • The collection.exists? method checks whether an object meeting the supplied conditions exists in the collection.
  • The collection.build method returns a new object of the associated type.
  • the associated object will not yet be saved.
  • the associated object will be saved.
  • The collection.create method returns a new object of the associated type.
  • it passes all of the validations specified on the associated model
  • :association_foreign_key :autosave :class_name :foreign_key :join_table :validate
  • The :foreign_key and :association_foreign_key options are useful when setting up a many-to-many self-join.
  • Rails assumes that the column in the join table used to hold the foreign key pointing to the other model is the name of that model with the suffix _id added.
  • If you set the :autosave option to true, Rails will save any loaded members and destroy members that are marked for destruction whenever you save the parent object.
  • By convention, Rails assumes that the column in the join table used to hold the foreign key pointing to this model is the name of this model with the suffix _id added.
  • By default, this is true: associated objects will be validated when this object is saved.
  • where extending group includes limit offset order readonly select uniq
  • set conditions via a hash
  • In this case, using @parts.assemblies.create or @parts.assemblies.build will create assemblies where the factory column has the value "Seattle"
  • If you use a hash-style where, then record creation via this association will be automatically scoped using the hash
  • using a GROUP BY clause in the finder SQL.
  • Use the uniq method to remove duplicates from the collection.
  • assign an object to a has_and_belongs_to_many association, that object is automatically saved (in order to update the join table).
  • If any of these saves fails due to validation errors, then the assignment statement returns false and the assignment itself is cancelled.
  • If the parent object (the one declaring the has_and_belongs_to_many association) is unsaved (that is, new_record? returns true) then the child objects are not saved when they are added.
  • If you want to assign an object to a has_and_belongs_to_many association without saving the object, use the collection.build method.
  • Normal callbacks hook into the life cycle of Active Record objects, allowing you to work with those objects at various points
  • define association callbacks by adding options to the association declaration
  • Rails passes the object being added or removed to the callback.
  • stack callbacks on a single event by passing them as an array
  • If a before_add callback throws an exception, the object does not get added to the collection.
  • if a before_remove callback throws an exception, the object does not get removed from the collection
  • extend these objects through anonymous modules, adding new finders, creators, or other methods.
  • order_number
  • use a named extension module
  • proxy_association.owner returns the object that the association is a part of.
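
A compact sketch of the main association styles covered above, using the guide's conventional naming (all class names illustrative):

    class Physician < ActiveRecord::Base
      has_many :appointments
      has_many :patients, through: :appointments   # many-to-many via a real join model
    end

    class Appointment < ActiveRecord::Base
      belongs_to :physician, touch: true           # bumps physician.updated_at on save/destroy
      belongs_to :patient
    end

    class Patient < ActiveRecord::Base
      has_many :appointments, dependent: :destroy
      has_many :physicians, through: :appointments
    end

    # Polymorphic: imageable_id stores the foreign key, imageable_type the owning model's name
    class Picture < ActiveRecord::Base
      belongs_to :imageable, polymorphic: true
    end

    class Employee < ActiveRecord::Base
      has_many :pictures, as: :imageable
    end
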
張 旭

Understanding Nginx Server and Location Block Selection Algorithms | DigitalOcean - 0 views

  • A server block is a subset of Nginx’s configuration that defines a virtual server used to handle requests of a defined type. Administrators often configure multiple server blocks and decide which block should handle which connection based on the requested domain name, port, and IP address.
  • A location block lives within a server block and is used to define how Nginx should handle requests for different resources and URIs for the parent server. The URI space can be subdivided in whatever way the administrator likes using these blocks. It is an extremely flexible model.
  • Nginx logically divides the configurations meant to serve different content into blocks, which live in a hierarchical structure. Each time a client request is made, Nginx begins a process of determining which configuration blocks should be used to handle the request.
  • Nginx is one of the most popular web servers in the world. It can successfully handle high loads with many concurrent client connections, and can easily function as a web server, a mail server, or a reverse proxy server.
  • The main server block directives that Nginx is concerned with during this process are the listen directive, and the server_name directive.
  • The listen directive typically defines which IP address and port that the server block will respond to.
  • 0.0.0.0:8080 if Nginx is being run by a normal, non-root user
  • Nginx translates all “incomplete” listen directives by substituting missing values with their default values so that each block can be evaluated by its IP address and port.
  • In any case, the port must be matched exactly.
  • If there are multiple server blocks with the same level of specificity matching, Nginx then begins to evaluate the server_name directive of each server block.
  • Nginx will only evaluate the server_name directive when it needs to distinguish between server blocks that match to the same level of specificity in the listen directive.
  • Nginx checks the request’s “Host” header. This value holds the domain or IP address that the client was actually trying to reach.
  • Nginx will first try to find a server block with a server_name that matches the value in the “Host” header of the request exactly.
  • If no exact match is found, Nginx will then try to find a server block with a server_name that matches using a leading wildcard (indicated by a * at the beginning of the name in the config).
  • If no match is found using a leading wildcard, Nginx then looks for a server block with a server_name that matches using a trailing wildcard (indicated by a server name ending with a * in the config)
  • If no match is found using a trailing wildcard, Nginx then evaluates server blocks that define the server_name using regular expressions (indicated by a ~ before the name).
  • If no regular expression match is found, Nginx then selects the default server block for that IP address and port.
  • There can be only one default_server declaration per each IP address/port combination.
  • Location blocks live within server blocks (or other location blocks) and are used to decide how to process the request URI (the part of the request that comes after the domain name or IP address/port).
  • If no modifiers are present, the location is interpreted as a prefix match.
  • =: If an equal sign is used, this block will be considered a match if the request URI exactly matches the location given.
  • ~: If a tilde modifier is present, this location will be interpreted as a case-sensitive regular expression match.
  • ~*: If a tilde and asterisk modifier is used, the location block will be interpreted as a case-insensitive regular expression match.
  • ^~: If a caret and tilde modifier is present, and if this block is selected as the best non-regular expression match, regular expression matching will not take place.
  • Keep in mind that if this block is selected and the request is fulfilled using an index page, an internal redirect will take place to another location that will be the actual handler of the request
  • Keeping in mind the types of location declarations we described above, Nginx evaluates the possible location contexts by comparing the request URI to each of the locations.
  • Nginx begins by checking all prefix-based location matches (all location types not involving a regular expression).
  • First, Nginx looks for an exact match.
  • If no exact (with the = modifier) location block matches are found, Nginx then moves on to evaluating non-exact prefixes.
  • After the longest matching prefix location is determined and stored, Nginx moves on to evaluating the regular expression locations (both case sensitive and insensitive).
  • by default, Nginx will serve regular expression matches in preference to prefix matches.
  • regular expression matches within the longest prefix match will “jump the line” when Nginx evaluates regex locations.
  • The exceptions to the “only one location block” rule may have implications on how the request is actually served and may not align with the expectations you had when designing your location blocks.
  • The index directive always leads to an internal redirect if it is used to handle the request.
  • In the case above, if you really need the execution to stay in the first block, you will have to come up with a different method of satisfying the request to the directory.
  • one way of preventing an index from switching contexts, but it’s probably not useful for most configurations
  • the try_files directive. This directive tells Nginx to check for the existence of a named set of files or directories.
  • the rewrite directive. When using the last parameter with the rewrite directive, or when using no parameter at all, Nginx will search for a new matching location based on the results of the rewrite.
  • The error_page directive can lead to an internal redirect similar to that created by try_files.
  • when certain status codes are encountered.
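
A sketch of the location modifiers and the precedence described above (paths illustrative):

    server {
        listen 80 default_server;            # fallback when no server_name matches
        server_name example.com *.example.com;

        location = /login {                  # 1. exact match is checked first
            return 200 "exact";
        }

        location ^~ /static/ {               # 2. if this is the longest prefix, skip regexes
            root /var/www;
        }

        location ~* \.(jpg|png)$ {           # 3. regex matches normally beat plain prefixes
            expires 30d;
        }

        location / {                         # 4. shortest prefix, the catch-all
            try_files $uri $uri/ =404;
        }
    }
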
張 旭

Template Designer Documentation - Jinja2 Documentation (2.10) - 0 views

  • A Jinja template doesn’t need to have a specific extension
  • A Jinja template is simply a text file
  • tags, which control the logic of the template
  • {% ... %} for Statements
  • {{ ... }} for Expressions to print to the template output
  • use a dot (.) to access attributes of a variable
  • the outer double-curly braces are not part of the variable, but the print statement.
  • If you access variables inside tags don’t put the braces around them.
  • If a variable or attribute does not exist, you will get back an undefined value.
  • the default behavior is to evaluate to an empty string if printed or iterated over, and to fail for every other operation.
  • if an object has an item and attribute with the same name. Additionally, the attr() filter only looks up attributes.
  • Variables can be modified by filters. Filters are separated from the variable by a pipe symbol (|) and may have optional arguments in parentheses.
  • Multiple filters can be chained
  • Tests can be used to test a variable against a common expression.
  • add is plus the name of the test after the variable.
  • to find out if a variable is defined, you can do name is defined, which will then return true or false depending on whether name is defined in the current template context.
  • strip whitespace in templates by hand. If you add a minus sign (-) to the start or end of a block (e.g. a For tag), a comment, or a variable expression, the whitespaces before or after that block will be removed
  • not add whitespace between the tag and the minus sign
  • mark a block raw
  • Template inheritance allows you to build a base “skeleton” template that contains all the common elements of your site and defines blocks that child templates can override.
  • The {% extends %} tag is the key here. It tells the template engine that this template “extends” another template.
  • access templates in subdirectories with a slash
  • can’t define multiple {% block %} tags with the same name in the same template
  • use the special self variable and call the block with that name
  • self.title()
  • super()
  • put the name of the block after the end tag for better readability
  • if the block is replaced by a child template, a variable would appear that was not defined in the block or passed to the context.
  • setting the block to “scoped” by adding the scoped modifier to a block declaration
  • If you have a variable that may include any of the following chars (>, <, &, or ") you SHOULD escape it unless the variable contains well-formed and trusted HTML.
  • Jinja2 functions (macros, super, self.BLOCKNAME) always return template data that is marked as safe.
  • With the default syntax, control structures appear inside {% ... %} blocks.
  • the dictsort filter
  • loop.cycle
  • Unlike in Python, it’s not possible to break or continue in a loop
  • use loops recursively
  • add the recursive modifier to the loop definition and call the loop variable with the new iterable where you want to recurse.
  • The loop variable always refers to the closest (innermost) loop.
  • whether the value changed at all,
  • use it to test if a variable is defined, not empty and not false
  • Macros are comparable with functions in regular programming languages.
  • If a macro name starts with an underscore, it’s not exported and can’t be imported.
  • pass a macro to another macro
  • caller()
  • a single trailing newline is stripped if present
  • other whitespace (spaces, tabs, newlines etc.) is returned unchanged
  • a block tag works in “both” directions. That is, a block tag doesn’t just provide a placeholder to fill - it also defines the content that fills the placeholder in the parent.
  • Python dicts are not ordered
  • caller(user)
  • call(user)
  • This is a simple dialog rendered by using a macro and a call block.
  • Filter sections allow you to apply regular Jinja2 filters on a block of template data.
  • Assignments at top level (outside of blocks, macros or loops) are exported from the template like top level macros and can be imported by other templates.
  • using namespace objects which allow propagating of changes across scopes
  • use block assignments to capture the contents of a block into a variable name.
  • The extends tag can be used to extend one template from another.
  • Blocks are used for inheritance and act as both placeholders and replacements at the same time.
  • The include statement is useful to include a template and return the rendered contents of that file into the current namespace
  • Included templates have access to the variables of the active context by default.
  • putting often used code into macros
  • imports are cached and imported templates don’t have access to the current template variables, just the globals by default.
  • Macros and variables starting with one or more underscores are private and cannot be imported.
  • By default, included templates are passed the current context and imported templates are not.
  • imports are often used just as a module that holds macros.
  • Integers and floating point numbers are created by just writing the number down
  • Everything between two brackets is a list.
  • Tuples are like lists that cannot be modified (“immutable”).
  • A dict in Python is a structure that combines keys and values.
  • // Divide two numbers and return the truncated integer result
  • The special constants true, false, and none are indeed lowercase
  • all Jinja identifiers are lowercase
  • (expr) group an expression.
  • The is and in operators support negation using an infix notation
  • in Perform a sequence / mapping containment test.
  • | Applies a filter.
  • ~ Converts all operands into strings and concatenates them.
  • use inline if expressions.
  • always an attribute is returned and items are not looked up.
  • default(value, default_value=u'', boolean=False): If the value is undefined it will return the passed default value, otherwise the value of the variable
  • dictsort(value, case_sensitive=False, by='key', reverse=False): Sort a dict and yield (key, value) pairs.
  • format(value, *args, **kwargs): Apply python string formatting on an object
  • groupby(value, attribute): Group a sequence of objects by a common attribute.
  • grouping by is stored in the grouper attribute and the list contains all the objects that have this grouper in common.
  • indent(s, width=4, first=False, blank=False, indentfirst=None): Return a copy of the string with each line indented by 4 spaces. The first line and blank lines are not indented by default.
  • join(value, d=u'', attribute=None): Return a string which is the concatenation of the strings in the sequence.
  • map(): Applies a filter on a sequence of objects or looks up an attribute.
  • pprint(value, verbose=False): Pretty print a variable. Useful for debugging.
  • reject(): Filters a sequence of objects by applying a test to each object, and rejecting the objects with the test succeeding.
  • replace(s, old, new, count=None): Return a copy of the value with all occurrences of a substring replaced with a new one.
  • round(value, precision=0, method='common'): Round the number to a given precision
  • even if rounded to 0 precision, a float is returned.
  • select(): Filters a sequence of objects by applying a test to each object, and only selecting the objects with the test succeeding.
  • sort(value, reverse=False, case_sensitive=False, attribute=None): Sort an iterable. Per default it sorts ascending, if you pass it true as first argument it will reverse the sorting.
  • striptags(value): Strip SGML/XML tags and replace adjacent whitespace by one space.
  • tojson(value, indent=None): Dumps a structure to JSON so that it’s safe to use in <script> tags.
  • trim(value): Strip leading and trailing whitespace.
  • unique(value, case_sensitive=False, attribute=None): Returns a list of unique items from the given iterable
  • urlize(value, trim_url_limit=None, nofollow=False, target=None, rel=None): Converts URLs in plain text into clickable links.
  • defined(value): Return true if the variable is defined
  • in(value, seq): Check if value is in seq.
  • mapping(value): Return true if the object is a mapping (dict etc.).
  • number(value): Return true if the variable is a number.
  • sameas(value, other): Check if an object points to the same memory address as another object
  • undefined(value): Like defined() but the other way round.
  • A joiner is passed a string and will return that string every time it’s called, except the first time (in which case it returns an empty string).
  • namespace(...): Creates a new container that allows attribute assignment using the {% set %} tag
  • The with statement makes it possible to create a new inner scope. Variables set within this scope are not visible outside of the scope.
  • activate and deactivate the autoescaping from within the templates
  • With both trim_blocks and lstrip_blocks enabled, you can put block tags on their own lines, and the entire block line will be removed when rendered, preserving the whitespace of the contents
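
A minimal inheritance sketch, assuming a base.html template and an items variable supplied by the context:

    {# base.html #}
    <title>{% block title %}Default title{% endblock %}</title>
    <body>{% block content %}{% endblock content %}</body>

    {# child.html #}
    {% extends "base.html" %}
    {% block title %}Items | {{ super() }}{% endblock %}
    {% block content %}
      <ul>
      {%- for item in items if item.visible %}
        <li>{{ loop.index }}: {{ item.name | upper }}</li>
      {%- endfor %}
      </ul>
    {% endblock content %}
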
張 旭

An Introduction to HAProxy and Load Balancing Concepts | DigitalOcean - 0 views

  • HAProxy, which stands for High Availability Proxy
  • improve the performance and reliability of a server environment by distributing the workload across multiple servers (e.g. web, application, database).
  • ACLs are used to test some condition and perform an action (e.g. select a server, or block a request) based on the test result.
  • Access Control List (ACL)
  • ACLs allow flexible network traffic forwarding based on a variety of factors like pattern-matching and the number of connections to a backend
  • A backend is a set of servers that receives forwarded requests
  • adding more servers to your backend will increase your potential load capacity by spreading the load over multiple servers
  • mode http specifies that layer 7 proxying will be used
  • specifies the load balancing algorithm
  • health checks
  • A frontend defines how requests should be forwarded to backends
  • use_backend rules, which define which backends to use depending on which ACL conditions are matched, and/or a default_backend rule that handles every other case
  • A frontend can be configured to various types of network traffic
  • Load balancing this way will forward user traffic based on IP range and port
  • Generally, all of the servers in the web-backend should be serving identical content--otherwise the user might receive inconsistent content.
  • Using layer 7 allows the load balancer to forward requests to different backend servers based on the content of the user's request.
  • allows you to run multiple web application servers under the same domain and port
  • acl url_blog path_beg /blog matches a request if the path of the user's request begins with /blog.
  • Round Robin selects servers in turns
  • Selects the server with the least number of connections--it is recommended for longer sessions
  • This selects which server to use based on a hash of the source IP
  • ensure that a user will connect to the same server
  • require that a user continues to connect to the same backend server. This persistence is achieved through sticky sessions, using the appsession parameter in the backend that requires it.
  • HAProxy uses health checks to determine if a backend server is available to process requests.
  • The default health check is to try to establish a TCP connection to the server
  • If a server fails a health check, and therefore is unable to serve requests, it is automatically disabled in the backend
  • For certain types of backends, like database servers in certain situations, the default health check is insufficient to determine whether a server is still healthy.
  • However, your load balancer is a single point of failure in these setups; if it goes down or gets overwhelmed with requests, it can cause high latency or downtime for your service.
  • A high availability (HA) setup is an infrastructure without a single point of failure
  • a static IP address that can be remapped from one server to another.
  • If that load balancer fails, your failover mechanism will detect it and automatically reassign the IP address to one of the passive servers.
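
A sketch of a frontend/backend pair built around the ACL quoted above (addresses and server names are placeholders):

    frontend www
        bind *:80
        mode http
        acl url_blog path_beg /blog          # layer 7 test on the request path
        use_backend blog-backend if url_blog
        default_backend web-backend

    backend web-backend
        mode http
        balance roundrobin                   # load balancing algorithm
        server web1 10.0.0.21:80 check       # "check" enables health checks
        server web2 10.0.0.22:80 check

    backend blog-backend
        mode http
        balance leastconn
        server blog1 10.0.0.31:80 check
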
crazylion lee

Amazon.com: Creative Selection: Inside Apple's Design Process During the Golde... - 0 views

  •  
    "Creative Selection: Inside Apple's Design Process During the Golden Age of Steve Jobs"
張 旭

Flynn: first preview release | Hacker News - 0 views

  • Etcd and Zookeeper provide essentially the same functionality. They are both strongly consistent key/value stores that support notifications to clients of changes. These two projects are limited to service discovery
  • So let's say you had a client application that would talk to a node application that could be on any number of servers. What you could do is hard code that list into your application and randomly select one, in order to "fake" load balancing. However every time a machine went up or down you would have to update that list.
  • What Consul provides is you just tell your app to connect to "mynodeapp.consul" and then consul will give you the proper address of one of your node apps.
  • Consul and Skydock are both applications that build on top of a tool like Zookeeper and Etcd.
  • What a developer ideally wants to do is just push code and not have to worry about what servers are running what, and worry about failover and the like
  • What Flynn provides (if I get it), is a diy Heroku like platform
  • Raft is a consensus algorithm for keeping a set of distributed state machines in a consistent state.
  • a self hosted Heroku
  • Google Omega is Google's answer to Apache Mesos
  • Omega would need a service like Raft to understand what services are currently available
  • Another project that I believe may be similar to Flynn is Apache Mesos.
  • I want to use Docker, but it has no easy way to say "take this file that contains instructions and make everything". You can write Dockerfiles, but you can only use one part of the stack in them, otherwise you run into trouble.
  •  
    " So lets say you had a client application that would talk to a node application that could be on any number of servers. What you could do is hard code that list into your application and randomly select one, in order to "fake" load balancing. However every time a machine went up or down you would have to update that list. What Consul provides is you just tell your app to connect to "mynodeapp.consul" and then consul will give you the proper address of one of your node apps."
張 旭

Data Sources - Configuration Language | Terraform | HashiCorp Developer - 0 views

  • Each provider may offer data sources alongside its set of resource types.
  • When distinguishing from data resources, the primary kind of resource (as declared by a resource block) is known as a managed resource.
  • Each data resource is associated with a single data source, which determines the kind of object (or objects) it reads and what query constraint arguments are available.
  • Terraform reads data resources during the planning phase when possible, but announces in the plan when it must defer reading resources until the apply phase to preserve the order of operations.
  • local-only data sources exist for rendering templates, reading local files, and rendering AWS IAM policies.
  • As with managed resources, when count or for_each is present it is important to distinguish the resource itself from the multiple resource instances it creates. Each instance will separately read from its data source with its own variant of the constraint arguments, producing an indexed result.
  • Data instance arguments may refer to computed values, in which case the attributes of the instance itself cannot be resolved until all of its arguments are defined.
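
A sketch of a data resource feeding a managed resource; the filter values and owner id are illustrative assumptions:

    data "aws_ami" "ubuntu" {                # read-only query, read during planning when possible
      most_recent = true
      owners      = ["099720109477"]         # assumed publisher account id, for illustration

      filter {
        name   = "name"
        values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-*"]
      }
    }

    resource "aws_instance" "web" {          # managed resource referencing the data result
      ami           = data.aws_ami.ubuntu.id
      instance_type = "t3.micro"
    }
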
crazylion lee

Unique gradient generator - 0 views

  •  
    "This tool helps you to generate beautiful blurry background images that you can use in any project. It doesn't use CSS3 gradients, but a rather unique approach. It takes a stock image, extracts a very small area (sample area) and scales it up to 100%. The browser's image smoothing algorithm takes care of the rest. You can then use the image as an inline, base64 encoded image in any HTML element's background, just click Generate CSS button at the bottom of the app. Select source images from the gallery or use yours, the possibilities are endless."
張 旭

Using Workflows to Schedule Jobs - CircleCI - 1 views

  • A workflow is a set of rules for defining a collection of jobs and their run order.
  • Schedule workflows for jobs that should only run periodically.
  • run multiple jobs in parallel
  • rerun just the failed job
  • Builds without workflows require a build job.
  • Refer to the YAML Anchors/Aliases documentation for information about how to alias and reuse syntax to keep your .circleci/config.yml file small.
  • workflow orchestration with two parallel jobs
  • jobs run according to configured requirements, each job waiting to start until the required job finishes successfully
  • requires: key
  • fans out to run a set of acceptance test jobs in parallel, and finally fans in to run a common deploy job.
  • Holding a Workflow for a Manual Approval
  • Workflows can be configured to wait for manual approval of a job before continuing to the next job
  • add a job to the jobs list with the key type: approval
  • approval is a special job type that is only available to jobs under the workflow key
  • The name of the job to hold is arbitrary - it could be wait or pause, for example, as long as the job has a type: approval key in it.
  • schedule a workflow to run at a certain time for specific branches.
  • The triggers key is only added under your workflows key
  • using cron syntax to represent Coordinated Universal Time (UTC) for specified branches.
  • By default, a workflow is triggered on every git push
  • the commit workflow has no triggers key and will run on every git push
  • The nightly workflow has a triggers key and will run on the specified schedule
  • Cron step syntax (for example, */1, */20) is not supported.
  • use a context to share environment variables
  • use the same shared environment variables when initiated by a user who is part of the organization.
  • CircleCI does not run workflows for tags unless you explicitly specify tag filters.
  • CircleCI branch and tag filters support the Java variant of regex pattern matching.
  • Each workflow has an associated workspace which can be used to transfer files to downstream jobs as the workflow progresses.
  • The workspace is an additive-only store of data.
  • Jobs can persist data to the workspace
  • Downstream jobs can attach the workspace to their container filesystem.
  • Attaching the workspace downloads and unpacks each layer based on the ordering of the upstream jobs in the workflow graph.
  • Workflows that include jobs running on multiple branches may require data to be shared using workspaces
  • To persist data from a job and make it available to other jobs, configure the job to use the persist_to_workspace key.
  • Files and directories named in the paths: property of persist_to_workspace will be uploaded to the workflow’s temporary workspace relative to the directory specified with the root key.
  • Configure a job to get saved data by configuring the attach_workspace key.
  • persist_to_workspace
  • attach_workspace
  • To rerun only a workflow’s failed jobs, click the Workflows icon in the app and select a workflow to see the status of each job, then click the Rerun button and select Rerun from failed.
  • if you do not see your workflows triggering, a configuration error is preventing the workflow from starting.
  • check your Workflows page of the CircleCI app (not the Job page)
  •  
    "A workflow is a set of rules for defining a collection of jobs and their run order."
張 旭

Boosting your kubectl productivity ♦︎ Learnk8s - 0 views

  • kubectl is your cockpit to control Kubernetes.
  • kubectl is a client for the Kubernetes API
  • Kubernetes API is an HTTP REST API.
  • This API is the real Kubernetes user interface.
  • Kubernetes is fully controlled through this API
  • every Kubernetes operation is exposed as an API endpoint and can be executed by an HTTP request to this endpoint.
  • the main job of kubectl is to carry out HTTP requests to the Kubernetes API
  • Kubernetes maintains an internal state of resources, and all Kubernetes operations are CRUD operations on these resources.
  • Kubernetes is a fully resource-centred system
  • Kubernetes API reference is organised as a list of resource types with their associated operations.
  • This is how kubectl works for all commands that interact with the Kubernetes cluster.
  • kubectl simply makes HTTP requests to the appropriate Kubernetes API endpoints.
  • it's totally possible to control Kubernetes with a tool like curl by manually issuing HTTP requests to the Kubernetes API.
  • Kubernetes consists of a set of independent components that run as separate processes on the nodes of a cluster.
  • components on the master nodes
  • Storage backend: stores resource definitions (usually etcd is used)
  • API server: provides Kubernetes API and manages storage backend
  • Controller manager: ensures resource statuses match specifications
  • Scheduler: schedules Pods to worker nodes
  • component on the worker nodes
  • Kubelet: manages execution of containers on a worker node
  • triggers the ReplicaSet controller, which is a sub-process of the controller manager.
  • the scheduler, who watches for Pod definitions that are not yet scheduled to a worker node.
  • creating and updating resources in the storage backend on the master node.
  • The kubelet of the worker node your ReplicaSet Pods have been scheduled to instructs the configured container runtime (which may be Docker) to download the required container images and run the containers.
  • Kubernetes components (except the API server and the storage backend) work by watching for resource changes in the storage backend and manipulating resources in the storage backend.
  • However, these components do not access the storage backend directly, but only through the Kubernetes API.
    • 張 旭
       
      Brilliant: all the components communicate with each other via API calls; exemplary microservice behaviour.
  • double usage of the Kubernetes API for internal components as well as for external users is a fundamental design concept of Kubernetes.
  • All other Kubernetes components and users read, watch, and manipulate the state (i.e. resources) of Kubernetes through the Kubernetes API
  • The storage backend stores the state (i.e. resources) of Kubernetes.
  • command completion is a shell feature that works by the means of a completion script.
  • A completion script is a shell script that defines the completion behaviour for a specific command. Sourcing a completion script enables completion for the corresponding command.
  • kubectl completion zsh
  • /etc/bash_completion.d directory (create it, if it doesn't exist)
  • source <(kubectl completion bash)
  • source <(kubectl completion zsh)
  • autoload -Uz compinit; compinit
  • the API reference, which contains the full specifications of all resources.
  • kubectl api-resources
  • displays the resource names in their plural form (e.g. deployments instead of deployment). It also displays the shortname (e.g. deploy) for those resources that have one. Don't worry about these differences. All of these name variants are equivalent for kubectl.
  • .spec
  • custom columns output format comes in. It lets you freely define the columns and the data to display in them. You can choose any field of a resource to be displayed as a separate column in the output
  • kubectl get pods -o custom-columns='NAME:metadata.name,NODE:spec.nodeName'
  • kubectl explain pod.spec.
  • kubectl explain pod.metadata.
  • browse the resource specifications and try it out with any fields you like!
  • JSONPath is a language to extract data from JSON documents (it is similar to XPath for XML).
  • with kubectl explain, only a subset of the JSONPath capabilities is supported
  • Many fields of Kubernetes resources are lists, and this operator allows you to select items of these lists. It is often used with a wildcard as [*] to select all items of the list.
  • kubectl get pods -o custom-columns='NAME:metadata.name,IMAGES:spec.containers[*].image'
  • a Pod may contain more than one container.
  • The availability zones for each node are obtained through the special failure-domain.beta.kubernetes.io/zone label.
  • kubectl get nodes -o yaml or kubectl get nodes -o json
  • The default kubeconfig file is ~/.kube/config
  • with multiple clusters, then you have connection parameters for multiple clusters configured in your kubeconfig file.
  • Within a cluster, you can set up multiple namespaces (a namespace is a kind of "virtual" cluster within a physical cluster)
  • overwrite the default kubeconfig file with the --kubeconfig option for every kubectl command.
  • Namespace: the namespace to use when connecting to the cluster
  • a one-to-one mapping between clusters and contexts.
  • When kubectl reads a kubeconfig file, it always uses the information from the current context.
  • just change the current context in the kubeconfig file
  • to switch to another namespace in the same cluster, you can change the value of the namespace element of the current context
  • kubectl also provides the --cluster, --user, --namespace, and --context options that allow you to overwrite individual elements and the current context itself, regardless of what is set in the kubeconfig file.
  • for switching between clusters and namespaces is kubectx.
  • kubectl config get-contexts
  • just have to download the shell scripts named kubectl-ctx and kubectl-ns to any directory in your PATH and make them executable (for example, with chmod +x)
  • kubectl proxy
  • kubectl get roles
  • kubectl get pod
  • Kubectl plugins are distributed as simple executable files with a name of the form kubectl-x. The prefix kubectl- is mandatory.
  • To install a plugin, you just have to copy the kubectl-x file to any directory in your PATH and make it executable (for example, with chmod +x)
  • krew itself is a kubectl plugin
  • check out the kubectl-plugins GitHub topic
  • The executable can be of any type, a Bash script, a compiled Go program, a Python script, it really doesn't matter. The only requirement is that it can be directly executed by the operating system.
  • kubectl plugins can be written in any programming or scripting language.
  • you can write more sophisticated plugins with real programming languages, for example, using a Kubernetes client library. If you use Go, you can also use the cli-runtime library, which exists specifically for writing kubectl plugins.
  • a kubeconfig file consists of a set of contexts
  • changing the current context means changing the cluster, if you have only a single context per cluster.
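Tying the plugin mechanism to the custom-columns example above, a toy plugin sketch (the plugin name is made up for illustration): save the following as kubectl-podnode anywhere on your PATH and chmod +x it, after which kubectl podnode lists each pod with the node it runs on.

    #!/usr/bin/env bash
    # Lists pods with their nodes; extra arguments are passed through,
    # so "kubectl podnode -n kube-system" also works.
    kubectl get pods -o custom-columns='NAME:metadata.name,NODE:spec.nodeName' "$@"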
張 旭

Providers - Configuration Language - Terraform by HashiCorp - 0 views

  • By default, terraform init downloads plugins into a subdirectory of the working directory so that each working directory is self-contained.
  • Terraform optionally allows the use of a local directory as a shared plugin cache, which then allows each distinct plugin binary to be downloaded only once.
  • directory must already exist before Terraform will cache plugins; Terraform will not create the directory itself.
  • When a plugin cache directory is enabled, the terraform init command will still access the plugin distribution server to obtain metadata about which plugins are available, but once a suitable version has been selected it will first check to see if the selected plugin is already available in the cache directory.
  • When possible, Terraform will use hardlinks or symlinks to avoid storing a separate copy of a cached plugin in multiple directories.
  • Terraform will never itself delete a plugin from the plugin cache once it's been placed there.
  •  
    "By default, terraform init downloads plugins into a subdirectory of the working directory so that each working directory is self-contained."
張 旭

Understanding Nginx HTTP Proxying, Load Balancing, Buffering, and Caching | DigitalOcean - 0 views

  • allow Nginx to pass requests off to backend http servers for further processing
  • Nginx is often set up as a reverse proxy solution to help scale out infrastructure or to pass requests to other servers that are not designed to handle large client loads
  • explore buffering and caching to improve the performance of proxying operations for clients
  • Nginx is built to handle many concurrent connections at the same time.
  • provides you with flexibility in easily adding backend servers or taking them down as needed for maintenance
  • Proxying in Nginx is accomplished by manipulating a request aimed at the Nginx server and passing it to other servers for the actual processing
  • The servers that Nginx proxies requests to are known as upstream servers.
  • Nginx can proxy requests to servers that communicate using the http(s), FastCGI, SCGI, uwsgi, or memcached protocols, through separate sets of directives for each type of proxy
  • When a request matches a location with a proxy_pass directive inside, the request is forwarded to the URL given by the directive
  • For example, when a request for /match/here/please is handled by this block, the request URI will be sent to the example.com server as http://example.com/match/here/please
  • The request coming from Nginx on behalf of a client will look different than a request coming directly from a client
  • Nginx gets rid of any empty headers
  • Nginx, by default, will consider any header that contains underscores as invalid. It will remove these from the proxied request
    • 張 旭
       
      Note this point: if any header field names contain underscores, Nginx must be configured to let them through.
  • The "Host" header is re-written to the value defined by the $proxy_host variable.
  • The upstream should not expect this connection to be persistent
  • Headers with empty values are completely removed from the passed request.
  • if your backend application will be processing non-standard headers, you must make sure that they do not have underscores
  • by default, this will be set to the value of $proxy_host, a variable that will contain the domain name or IP address and port taken directly from the proxy_pass definition
  • This is selected by default as it is the only address Nginx can be sure the upstream server responds to
  • (as it is pulled directly from the connection info)
  • $http_host: Sets the "Host" header to the "Host" header from the client request.
  • The headers sent by the client are always available in Nginx as variables. The variables will start with an $http_ prefix, followed by the header name in lowercase, with any dashes replaced by underscores.
  • in order of preference: the host name from the request line itself
  • set the "Host" header to the $host variable. It is the most flexible and will usually provide the proxied servers with a "Host" header filled in as accurately as possible
  • sets the "Host" header to the $host variable, which should contain information about the original host being requested
  • This variable takes the value of the original X-Forwarded-For header retrieved from the client and adds the Nginx server's IP address to the end.
  • The upstream directive must be set in the http context of your Nginx configuration.
  • http context
  • Once defined, this name will be available for use within proxy passes as if it were a regular domain name
  • By default, this is just a simple round-robin selection process (each request will be routed to a different host in turn)
  • Specifies that new connections should always be given to the backend that has the least number of active connections.
  • distributes requests to different servers based on the client's IP address.
  • mainly used with memcached proxying
  • As for the hash method, you must provide the key to hash against
  • Server Weight
  • Nginx's buffering and caching capabilities
  • Without buffers, data is sent from the proxied server and immediately begins to be transmitted to the client.
  • With buffers, the Nginx proxy will temporarily store the backend's response and then feed this data to the client
  • Nginx defaults to a buffering design
  • can be set in the http, server, or location contexts.
  • the sizing directives are configured per request, so increasing them beyond your need can affect your performance
  • When buffering is "off" only the buffer defined by the proxy_buffer_size directive will be used
  • A high availability (HA) setup is an infrastructure without a single point of failure, and your load balancers are a part of this configuration.
  • multiple load balancers (one active and one or more passive) behind a static IP address that can be remapped from one server to another.
  • Nginx also provides a way to cache content from backend servers
  • The proxy_cache_path directive must be set in the http context.
  • proxy_cache backcache;
    • 張 旭
       
      The backcache here is the cache zone configured earlier; it looks like each location can have its own cache directory.
  • The proxy_cache_bypass directive is set to the $http_cache_control variable. This will contain an indicator as to whether the client is explicitly requesting a fresh, non-cached version of the resource
  • any user-related data should not be cached
  • For private content, you should set the Cache-Control header to "no-cache", "no-store", or "private" depending on the nature of the data
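A condensed nginx.conf sketch stitching the annotations above together (host names, zone sizes, and paths are illustrative):

    http {
        # upstream must be defined in the http context
        upstream backend_hosts {
            least_conn;                     # default would be round-robin
            server host1.example.com weight=3;
            server host2.example.com;
        }

        proxy_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=backcache:8m max_size=50m;

        server {
            listen 80;

            location /match/here {
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

                proxy_cache backcache;                    # the zone named above
                proxy_cache_bypass $http_cache_control;   # honour client no-cache

                proxy_pass http://backend_hosts;
            }
        }
    }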
張 旭

Kubernetes Deployments: The Ultimate Guide - Semaphore - 1 views

  • Continuous integration gives you confidence in your code. To extend that confidence to the release process, your deployment operations need to come with a safety belt.
  • these Kubernetes objects ensure that you can progressively deploy, roll back and scale your applications without downtime.
  • A pod is just a group of containers (it can be a group of one container) that run on the same machine, and share a few things together.
  • the containers within a pod can communicate with each other over localhost
  • From a network perspective, all the processes in these containers are local.
  • we can never create a standalone container: the closest we can do is create a pod, with a single container in it.
  • Kubernetes is a declarative system (as opposed to an imperative system).
  • All we can do, is describe what we want to have, and wait for Kubernetes to take action to reconcile what we have, with what we want to have.
  • In other words, we can say, “I would like a 40-feet long blue container with yellow doors“, and Kubernetes will find such a container for us. If it doesn’t exist, it will build it; if there is already one but it’s green with red doors, it will paint it for us; if there is already a container of the right size and color, Kubernetes will do nothing, since what we have already matches what we want.
  • The specification of a replica set looks very much like the specification of a pod, except that it carries a number, indicating how many replicas
  • What happens if we change that definition? Suddenly, there are zero pods matching the new specification.
  • the creation of new pods could happen in a more gradual manner.
  • the specification for a deployment looks very much like the one for a replica set: it features a pod specification, and a number of replicas.
  • Deployments, however, don’t create or delete pods directly.
  • When we update a deployment and adjust the number of replicas, it passes that update down to the replica set.
  • When we update the pod specification, the deployment creates a new replica set with the updated pod specification. That replica set has an initial size of zero. Then, the size of that replica set is progressively increased, while decreasing the size of the other replica set.
  • we are going to fade in (turn up the volume) on the new replica set, while we fade out (turn down the volume) on the old one.
  • During the whole process, requests are sent to pods of both the old and new replica sets, without any downtime for our users.
  • A readiness probe is a test that we add to a container specification.
  • Kubernetes supports three ways of implementing readiness probes: running a command inside a container; making an HTTP(S) request against a container; or opening a TCP socket against a container.
  • When we roll out a new version, Kubernetes will wait for the new pod to mark itself as “ready” before moving on to the next one.
  • If there is no readiness probe, then the container is considered as ready, as long as it could be started.
  • MaxSurge indicates how many extra pods we are willing to run during a rolling update, while MaxUnavailable indicates how many pods we can lose during the rolling update.
  • Setting MaxUnavailable to 0 means, “do not shutdown any old pod before a new one is up and ready to serve traffic“.
  • Setting MaxSurge to 100% means, “immediately start all the new pods“, implying that we have enough spare capacity on our cluster, and that we want to go as fast as possible.
  • kubectl rollout undo deployment web
  • the replica set doesn’t look at the pods’ specifications, but only at their labels.
  • A replica set contains a selector, which is a logical expression that “selects” (just like a SELECT query in SQL) a number of pods.
  • it is absolutely possible to manually create pods with these labels, but running a different image (or with different settings), and thereby fool our replica set.
  • Selectors are also used by services, which act as the load balancers for Kubernetes traffic, internal and external.
  • internal IP address (denoted by the name ClusterIP)
  • during a rollout, the deployment doesn’t reconfigure or inform the load balancer that pods are started and stopped. It happens automatically through the selector of the service associated to the load balancer.
  • a pod is added as a valid endpoint for a service only if all its containers pass their readiness check. In other words, a pod starts receiving traffic only once it’s actually ready for it.
  • In blue/green deployment, we want to instantly switch over all the traffic from the old version to the new, instead of doing it progressively
  • We can achieve blue/green deployment by creating multiple deployments (in the Kubernetes sense), and then switching from one to another by changing the selector of our service
  • kubectl label pods -l app=blue,version=v1.5 status=enabled
  • kubectl label pods -l app=blue,version=v1.4 status-
  •  
    "Continuous integration gives you confidence in your code. To extend that confidence to the release process, your deployment operations need to come with a safety belt."
張 旭

javascript - How do I "think in AngularJS" if I have a jQuery background? - Stack Overflow - 0 views

  • in AngularJS, we have a separate model layer that we can manage in any way we want, completely independently from the view.
  • keep your concerns separate
  • do DOM manipulation and augment your view with directives
  • DI means that you can declare components very freely and then from any other component, just ask for an instance of it and it will be granted
  • do test-driven development iteratively in AngularJS!
  • only do DOM manipulation in a directive
  • with ngClass we can dynamically update the class;
  • ngBind allows two-way data binding;
  • ngShow and ngHide programmatically show or hide an element;
  • The less DOM manipulation, the easier directives are to test, the easier they are to style, the easier they are to change in the future, and the more re-usable and distributable they are.
  • still wrong.
  • Before doing DOM manipulation anywhere in your application, ask yourself if you really need to.
  • a few things wrong with this
  • jQuery was never necessary
  • use angular.element and our component will still work when dropped into a project that doesn't have jQuery.
  • just use angular.element
  • the element that is passed to the link function would already be a jQuery element!
  • directives aren't just collections of jQuery-like functions
  • Directives are actually extensions of HTML
  • If HTML doesn't do something you need it to do, you write a directive to do it for you, and then use it just as if it was part of HTML.
  • think how the team would accomplish it to fit right in with ngClick, ngClass, et al.
  • Don't even use jQuery. Don't even include it.
  • try to think about how to do it within the confines of AngularJS.
  • In jQuery, selectors are used to find DOM elements and then bind/register event handlers to them.
  • Views are (declarative) HTML that contain AngularJS directives
  • Directives set up the event handlers behind the scenes for us and give us dynamic databinding.
  • Views are tied to models (via scopes). Views are a projection of the model
  • In AngularJS, think about models, rather than jQuery-selected DOM elements that hold your data.
  • AngularJS uses controllers and directives (each of which can have their own controller, and/or compile and linking functions) to remove behavior from the view/structure (HTML). Angular also has services and filters to help separate/organize your application.
  • Think about your models
  • Think about how you want to present your models -- your views.
  • using the necessary directives to get dynamic databinding.
  • Attach a controller to each view (using ng-view and routing, or ng-controller)
  • Make controllers as thin as possible.
  • You can do a lot with jQuery without knowing about how JavaScript prototypal inheritance works.
  • jQuery is a library
  • AngularJS is a beautiful client-side framework
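In that spirit, a toy directive sketch, extending HTML instead of selecting and manipulating DOM nodes (the module and names are made up):

    angular.module('app', [])
      .directive('userBadge', function () {
        return {
          restrict: 'E',            // usable as <user-badge user="someUser">
          scope: { user: '=' },     // two-way binding to a model object
          template: '<span ng-class="{vip: user.vip}">{{ user.name }}</span>'
        };
      });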
張 旭

Introduction To The Queue System - Diving Laravel - 0 views

  • Laravel is shipped with a built-in queue system that helps you run tasks in the background
  • The QueueManager is registered into the container and it knows how to connect to the different built-in queue drivers
  • for example, when we call the Queue::push() method, the manager selects the desired queue driver, connects to it, and calls the push method on that driver.
  • All calls to methods that don't exist in the QueueManager class will be sent to the loaded driver
  • when you do Queue::push() you're actually calling the push method on the queue driver you're using
  •  
    "Laravel is shipped with a built-in queue system that helps you run tasks in the background "
張 旭

Ruby on Rails 實戰聖經 | Website Performance - 0 views

  • By convention the counter column name ends in _count, its type is integer, and it defaults to 0.
  • lol_dba provides Rake tasks that help find indexes you forgot to add.
  • Bullet is a plugin that detects N+1 query problems during development.
  • Accessing the database is a relatively slow I/O operation: every SQL query takes time, and the returned results are all converted into ActiveRecord objects and loaded into memory.
  • If you need to fetch all records for processing, it is strongly recommended not to use the all method, because it loads everything into memory at once; with tens of thousands of records, performance crashes.
  • .find_each( :batch_size => 100 )
  • .find_in_batches( :batch_size => 100 )
  • SQL inside a transaction runs faster, because only a single COMMIT is needed at the end.
  • The Elasticsearch full-text search engine and the elasticsearch-rails gem.
  • The QueryReviewer gem analyses SQL query efficiency via SQL EXPLAIN.
  • When necessary, adopt a denormalised design: it sacrifices space and makes writes more cumbersome, but it makes reads faster and simpler.
  • Shift the cost onto writes, and optimise read time.
  • Modifying code and architecture to optimise performance before performance has actually become a problem only makes the code messier and harder to maintain.
  • While performance is not yet a problem, maintainability matters more than performance.
  • Only a small fraction of the code drags down overall performance, so optimise only the code that actually causes problems.
  • Make good use of profiling tools to find bottlenecks: measure before optimising, and measure again afterwards to compare.
  • rack-mini-profiler shows how much time the page took in its top-left corner and provides reports; recommended.
  • Static files that need no access control can be placed directly under the public directory for users to download.
  • The web server must have x_sendfile support installed first.
  • To let users download assets such as CSS, JavaScript, and images through a CDN, just set config.action_controller.asset_host in config/environments/production.rb to the CDN URL.
  • Sometimes "faster" code is not the code that is easiest to maintain and debug.
  • Ruby is not all-powerful; sometimes calling an external program directly is the fastest approach.
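A small Ruby sketch of the batching and transaction advice above (the User model and its bang methods are hypothetical):

    # find_each streams records in batches instead of loading them all at
    # once the way User.all.each would.
    User.find_each(batch_size: 100) do |user|
      user.update_stats!
    end

    # Wrapping many writes in one transaction commits only once at the end.
    ActiveRecord::Base.transaction do
      User.where(stale: true).find_each(&:refresh_cache!)
    end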