Larvata: Group items tagged "exceptions"

張 旭

Choose when to run jobs | GitLab

  • Rules are evaluated in order until the first match.
  • If no rules match, the job is not added to the pipeline.
  • define a set of rules to exclude jobs in a few cases, but run them in all other cases (see the YAML sketch after this list)
  • use all rules keywords, like if, changes, and exists, in the same rule. The rule evaluates to true only when all included keywords evaluate to true.
  • use parentheses with && and || to build more complicated variable expressions.
  • Use workflow to specify which types of pipelines can run.
  • every push to an open merge request’s source branch causes duplicated pipelines.
  • avoid duplicate pipelines by changing the job rules to avoid either push (branch) pipelines or merge request pipelines.
  • do not mix only/except jobs with rules jobs in the same pipeline.
  • For behavior similar to the only/except keywords, you can check the value of the $CI_PIPELINE_SOURCE variable
  • commonly used variables for if clauses
  • rules:changes expressions to determine when to add jobs to a pipeline
  • Use !reference tags to reuse rules in different jobs.
  • Use except to define when a job does not run.
  • only or except used without refs is the same as only:refs / except:refs
  • If you change multiple files, but only one file ends in .md, the build job is still skipped.
  • If you use multiple keywords with only or except, the keywords are evaluated as a single conjoined expression.
  • only includes the job if all of the keys have at least one condition that matches.
  • except excludes the job if any of the keys have at least one condition that matches.
  • With only, individual keys are logically joined by an AND
  • With except, individual keys are logically joined by an OR
  • To specify a job as manual, add when: manual to the job in the .gitlab-ci.yml file.
  • Use protected environments to define a list of users authorized to run a manual job.
  • Use when: delayed to execute scripts after a waiting period, or if you want to avoid jobs immediately entering the pending state.
  • To split a large job into multiple smaller jobs that run in parallel, use the parallel keyword
  • run a trigger job multiple times in parallel in a single pipeline, but with different variable values for each instance of the job.
  • The @ symbol denotes the beginning of a ref’s repository path. To match a ref name that contains the @ character in a regular expression, you must use the hex character code match \x40.
  • Compare a variable to a string
  • Check if a variable is undefined
  • Check if a variable is empty
  • Check if a variable exists
  • Matches are found when using =~.
  • Matches are not found when using !~.
  • Join variable expressions together with && or ||
  •  
    "Rules are evaluated in order until the first match."
張 旭

Overview of Virtual Private Cloud | VPC | Google Cloud

  • Think of a VPC network the same way you'd think of a physical network, except that it is virtualized within GCP.
  • VPC networks are logically isolated from each other in GCP.
  • The network connects the resources to each other and to the Internet.
  •  
    "a VPC network the same way you'd think of a physical network, except that it is virtualized within GCP."
張 旭

Active Record Associations - Ruby on Rails Guides

  • With Active Record associations, we can streamline these - and other - operations by declaratively telling Rails that there is a connection between the two models.
  • belongs_to, has_one, has_many, has_many :through, has_one :through, has_and_belongs_to_many
  • an association is a connection between two Active Record models
  • Associations are implemented using macro-style calls, so that you can declaratively add features to your models
  • A belongs_to association sets up a one-to-one connection with another model, such that each instance of the declaring model "belongs to" one instance of the other model.
  • belongs_to associations must use the singular term.
  • belongs_to
  • A has_one association also sets up a one-to-one connection with another model, but with somewhat different semantics (and consequences).
  • This association indicates that each instance of a model contains or possesses one instance of another model
  • belongs_to
  • A has_many association indicates a one-to-many connection with another model.
  • This association indicates that each instance of the model has zero or more instances of another model.
  • belongs_to
  • A has_many :through association is often used to set up a many-to-many connection with another model
  • This association indicates that the declaring model can be matched with zero or more instances of another model by proceeding through a third model.
  • through:
  • through:
  • The collection of join models can be managed via the API
  • new join models are created for newly associated objects, and if some are gone their rows are deleted.
  • The has_many :through association is also useful for setting up "shortcuts" through nested has_many associations
  • A has_one :through association sets up a one-to-one connection with another model. This association indicates that the declaring model can be matched with one instance of another model by proceeding through a third model.
  • A has_and_belongs_to_many association creates a direct many-to-many connection with another model, with no intervening model.
  • id: false
  • The has_one relationship says that one of something is yours
  • using t.references :supplier instead.
  • declare a many-to-many relationship is to use has_many :through. This makes the association indirectly, through a join model
  • set up a has_many :through relationship if you need to work with the relationship model as an independent entity
  • set up a has_and_belongs_to_many relationship (though you'll need to remember to create the joining table in the database).
  • use has_many :through if you need validations, callbacks, or extra attributes on the join model (see the Ruby sketch after this list)
  • With polymorphic associations, a model can belong to more than one other model, on a single association.
  • belongs_to :imageable, polymorphic: true
  • a polymorphic belongs_to declaration as setting up an interface that any other model can use.
    • 張 旭
       
      _id records the foreign key id of each different type; _type records the table name of each different type.
  • In designing a data model, you will sometimes find a model that should have a relation to itself
  • add a references column to the model itself
  • Controlling caching Avoiding name collisions Updating the schema Controlling association scope Bi-directional associations
  • All of the association methods are built around caching, which keeps the result of the most recent query available for further operations.
  • it is a bad idea to give an association a name that is already used for an instance method of ActiveRecord::Base. The association method would override the base method and break things.
  • You are responsible for maintaining your database schema to match your associations.
  • belongs_to associations you need to create foreign keys
  • has_and_belongs_to_many associations you need to create the appropriate join table
  • If you create an association some time after you build the underlying model, you need to remember to create an add_column migration to provide the necessary foreign key.
  • Active Record creates the name by using the lexical order of the class names
  • So a join between customer and order models will give the default join table name of "customers_orders" because "c" outranks "o" in lexical ordering.
  • For example, one would expect the tables "paper_boxes" and "papers" to generate a join table name of "papers_paper_boxes" because of the length of the name "paper_boxes", but it in fact generates a join table name of "paper_boxes_papers" (because the underscore '_' is lexicographically less than 's' in common encodings).
  • id: false
  • pass id: false to create_table because that table does not represent a model
  • By default, associations look for objects only within the current module's scope.
  • will work fine, because both the Supplier and the Account class are defined within the same scope.
  • To associate a model with a model in a different namespace, you must specify the complete class name in your association declaration:
  • class_name
  • class_name
  • Active Record provides the :inverse_of option
    • 張 旭
       
      This means the first comparison finds both first_name values equal; but after modifying first_name through the c instance, comparing again finds them different, because they are two distinct objects in memory.
  • preventing inconsistencies and making your application more efficient
  • Every association will attempt to automatically find the inverse association and set the :inverse_of option heuristically (based on the association name)
  • In database terms, this association says that this class contains the foreign key.
  • In all of these methods, association is replaced with the symbol passed as the first argument to belongs_to.
  • (force_reload = false)
  • The association method returns the associated object, if any. If no associated object is found, it returns nil.
  • the cached version will be returned.
  • The association= method assigns an associated object to this object.
  • Behind the scenes, this means extracting the primary key from the associate object and setting this object's foreign key to the same value.
  • The build_association method returns a new object of the associated type
  • but the associated object will not yet be saved.
  • The create_association method returns a new object of the associated type
  • once it passes all of the validations specified on the associated model, the associated object will be saved
  • raises ActiveRecord::RecordInvalid if the record is invalid.
  • dependent
  • counter_cache
  • :autosave :class_name :counter_cache :dependent :foreign_key :inverse_of :polymorphic :touch :validate
  • finding the number of belonging objects more efficient.
  • Although the :counter_cache option is specified on the model that includes the belongs_to declaration, the actual column must be added to the associated model.
  • add a column named orders_count to the Customer model.
  • :destroy, when the object is destroyed, destroy will be called on its associated objects.
  • deleted directly from the database without calling their destroy method.
  • Rails will not create foreign key columns for you
  • The :inverse_of option specifies the name of the has_many or has_one association that is the inverse of this association
  • set the :touch option to true, then the updated_at or updated_on timestamp on the associated object will be set to the current time whenever this object is saved or destroyed
  • specify a particular timestamp attribute to update
  • If you set the :validate option to true, then associated objects will be validated whenever you save this object
  • By default, this is false: associated objects will not be validated when this object is saved.
  • where includes readonly select
  • make your code somewhat more efficient
  • no need to use includes for immediate associations
  • will be read-only when retrieved via the association
  • The select method lets you override the SQL SELECT clause that is used to retrieve data about the associated object
  • using the association.nil?
  • Assigning an object to a belongs_to association does not automatically save the object. It does not save the associated object either.
  • In database terms, this association says that the other class contains the foreign key.
  • the cached version will be returned.
  • :as :autosave :class_name :dependent :foreign_key :inverse_of :primary_key :source :source_type :through :validate
  • Setting the :as option indicates that this is a polymorphic association
  • :nullify causes the foreign key to be set to NULL. Callbacks are not executed.
  • Don't set or leave the :nullify option for associations that have NOT NULL database constraints.
  • The :source_type option specifies the source association type for a has_one :through association that proceeds through a polymorphic association.
  • The :source option specifies the source association name for a has_one :through association.
  • The :through option specifies a join model through which to perform the query
  • more efficient by including representatives in the association from suppliers to accounts
  • When you assign an object to a has_one association, that object is automatically saved (in order to update its foreign key).
  • If either of these saves fails due to validation errors, then the assignment statement returns false and the assignment itself is cancelled.
  • If the parent object (the one declaring the has_one association) is unsaved (that is, new_record? returns true) then the child objects are not saved.
  • If you want to assign an object to a has_one association without saving the object, use the association.build method
  • collection(force_reload = false), collection<<(object, ...), collection.delete(object, ...), collection.destroy(object, ...), collection=(objects), collection_singular_ids, collection_singular_ids=(ids), collection.clear, collection.empty?, collection.size, collection.find(...), collection.where(...), collection.exists?(...), collection.build(attributes = {}, ...), collection.create(attributes = {}), collection.create!(attributes = {})
  • In all of these methods, collection is replaced with the symbol passed as the first argument to has_many, and collection_singular is replaced with the singularized version of that symbol.
  • The collection<< method adds one or more objects to the collection by setting their foreign keys to the primary key of the calling model
  • The collection.delete method removes one or more objects from the collection by setting their foreign keys to NULL.
  • objects will be destroyed if they're associated with dependent: :destroy, and deleted if they're associated with dependent: :delete_all
  • The collection.destroy method removes one or more objects from the collection by running destroy on each object.
  • The collection_singular_ids method returns an array of the ids of the objects in the collection.
  • The collection_singular_ids= method makes the collection contain only the objects identified by the supplied primary key values, by adding and deleting as appropriate
  • The default strategy for has_many :through associations is delete_all, and for has_many associations is to set the foreign keys to NULL.
  • The collection.clear method removes all objects from the collection according to the strategy specified by the dependent option
  • uses the same syntax and options as ActiveRecord::Base.find
  • The collection.where method finds objects within the collection based on the conditions supplied but the objects are loaded lazily meaning that the database is queried only when the object(s) are accessed.
  • The collection.build method returns one or more new objects of the associated type. These objects will be instantiated from the passed attributes, and the link through their foreign key will be created, but the associated objects will not yet be saved.
  • The collection.create method returns a new object of the associated type. This object will be instantiated from the passed attributes, the link through its foreign key will be created, and, once it passes all of the validations specified on the associated model, the associated object will be saved.
  • :as :autosave :class_name :dependent :foreign_key :inverse_of :primary_key :source :source_type :through :validate
  • :delete_all causes all the associated objects to be deleted directly from the database (so callbacks will not execute)
  • :nullify causes the foreign keys to be set to NULL. Callbacks are not executed.
  • where includes readonly select
  • :conditions :through :polymorphic :foreign_key
  • By convention, Rails assumes that the column used to hold the primary key of the association is id. You can override this and explicitly specify the primary key with the :primary_key option.
  • The :source option specifies the source association name for a has_many :through association.
  • You only need to use this option if the name of the source association cannot be automatically inferred from the association name.
  • The :source_type option specifies the source association type for a has_many :through association that proceeds through a polymorphic association.
  • The :through option specifies a join model through which to perform the query.
  • has_many :through associations provide a way to implement many-to-many relationships,
  • By default, this is true: associated objects will be validated when this object is saved.
  • where extending group includes limit offset order readonly select uniq
  • If you use a hash-style where option, then record creation via this association will be automatically scoped using the hash
  • The extending method specifies a named module to extend the association proxy.
  • Association extensions
  • The group method supplies an attribute name to group the result set by, using a GROUP BY clause in the finder SQL.
  • has_many :line_items, -> { group 'orders.id' }, through: :orders
  • more efficient by including line items in the association from customers to orders
  • The limit method lets you restrict the total number of objects that will be fetched through an association.
  • The offset method lets you specify the starting offset for fetching objects via an association
  • The order method dictates the order in which associated objects will be received (in the syntax used by an SQL ORDER BY clause).
  • Use the distinct method to keep the collection free of duplicates.
  • mostly useful together with the :through option
  • -> { distinct }
  • .all.inspect
  • If you want to make sure that, upon insertion, all of the records in the persisted association are distinct (so that you can be sure that when you inspect the association that you will never find duplicate records), you should add a unique index on the table itself
  • unique: true
  • Do not attempt to use include? to enforce distinctness in an association.
  • multiple users could be attempting this at the same time
  • checking for uniqueness using something like include? is subject to race conditions
  • When you assign an object to a has_many association, that object is automatically saved (in order to update its foreign key).
  • If any of these saves fails due to validation errors, then the assignment statement returns false and the assignment itself is cancelled.
  • If the parent object (the one declaring the has_many association) is unsaved (that is, new_record? returns true) then the child objects are not saved when they are added
  • All unsaved members of the association will automatically be saved when the parent is saved.
  • assign an object to a has_many association without saving the object, use the collection.build method
  • collection(force_reload = false), collection<<(object, ...), collection.delete(object, ...), collection.destroy(object, ...), collection=(objects), collection_singular_ids, collection_singular_ids=(ids), collection.clear, collection.empty?, collection.size, collection.find(...), collection.where(...), collection.exists?(...), collection.build(attributes = {}), collection.create(attributes = {}), collection.create!(attributes = {})
  • If the join table for a has_and_belongs_to_many association has additional columns beyond the two foreign keys, these columns will be added as attributes to records retrieved via that association.
  • Records returned with additional attributes will always be read-only
  • If you require this sort of complex behavior on the table that joins two models in a many-to-many relationship, you should use a has_many :through association instead of has_and_belongs_to_many.
  • aliased as collection.concat and collection.push.
  • The collection.delete method removes one or more objects from the collection by deleting records in the join table
  • not destroy the objects
  • The collection.destroy method removes one or more objects from the collection by running destroy on each record in the join table, including running callbacks.
  • not destroy the objects.
  • The collection.clear method removes every object from the collection by deleting the rows from the joining table.
  • not destroy the associated objects.
  • The collection.find method finds objects within the collection. It uses the same syntax and options as ActiveRecord::Base.find.
  • The collection.where method finds objects within the collection based on the conditions supplied but the objects are loaded lazily meaning that the database is queried only when the object(s) are accessed.
  • The collection.exists? method checks whether an object meeting the supplied conditions exists in the collection.
  • The collection.build method returns a new object of the associated type.
  • the associated object will not yet be saved.
  • the associated object will be saved.
  • The collection.create method returns a new object of the associated type.
  • it passes all of the validations specified on the associated model
  • :association_foreign_key :autosave :class_name :foreign_key :join_table :validate
  • The :foreign_key and :association_foreign_key options are useful when setting up a many-to-many self-join.
  • Rails assumes that the column in the join table used to hold the foreign key pointing to the other model is the name of that model with the suffix _id added.
  • If you set the :autosave option to true, Rails will save any loaded members and destroy members that are marked for destruction whenever you save the parent object.
  • By convention, Rails assumes that the column in the join table used to hold the foreign key pointing to this model is the name of this model with the suffix _id added.
  • By default, this is true: associated objects will be validated when this object is saved.
  • where extending group includes limit offset order readonly select uniq
  • set conditions via a hash
  • In this case, using @parts.assemblies.create or @parts.assemblies.build will create assemblies where the factory column has the value "Seattle"
  • If you use a hash-style where, then record creation via this association will be automatically scoped using the hash
  • using a GROUP BY clause in the finder SQL.
  • Use the uniq method to remove duplicates from the collection.
  • assign an object to a has_and_belongs_to_many association, that object is automatically saved (in order to update the join table).
  • If any of these saves fails due to validation errors, then the assignment statement returns false and the assignment itself is cancelled.
  • If the parent object (the one declaring the has_and_belongs_to_many association) is unsaved (that is, new_record? returns true) then the child objects are not saved when they are added.
  • If you want to assign an object to a has_and_belongs_to_many association without saving the object, use the collection.build method.
  • Normal callbacks hook into the life cycle of Active Record objects, allowing you to work with those objects at various points
  • define association callbacks by adding options to the association declaration
  • Rails passes the object being added or removed to the callback.
  • stack callbacks on a single event by passing them as an array
  • If a before_add callback throws an exception, the object does not get added to the collection.
  • if a before_remove callback throws an exception, the object does not get removed from the collection
  • extend these objects through anonymous modules, adding new finders, creators, or other methods.
  • order_number
  • use a named extension module
  • proxy_association.owner returns the object that the association is a part of.
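
A short Ruby sketch of the main patterns above, using the guide's canonical Physician/Patient/Appointment and Picture examples:

    # Many-to-many through a join model that can carry its own
    # validations, callbacks, and extra attributes.
    class Appointment < ActiveRecord::Base
      belongs_to :physician   # this table holds physician_id / patient_id
      belongs_to :patient
    end

    class Physician < ActiveRecord::Base
      has_many :appointments, dependent: :destroy
      has_many :patients, through: :appointments
    end

    class Patient < ActiveRecord::Base
      has_many :appointments
      has_many :physicians, through: :appointments
    end

    # Polymorphic association: imageable_id stores the owner's id,
    # imageable_type stores the owner's class (table) name.
    class Picture < ActiveRecord::Base
      belongs_to :imageable, polymorphic: true
    end

    class Employee < ActiveRecord::Base
      has_many :pictures, as: :imageable
    end

Assigning to physician.patients then creates and deletes Appointment rows automatically, as the annotations above describe.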
crazylion lee

Riemann - A network monitoring system

  •  
    "Riemann aggregates events from your servers and applications with a powerful stream processing language. Send an email for every exception in your app. Track the latency distribution of your web app. See the top processes on any host, by memory and CPU. Combine statistics from every Riak node in your cluster and forward to Graphite. Track user activity from second to second."
張 旭

Rails Routing from the Outside In - Ruby on Rails Guides

  • Resource routing allows you to quickly declare all of the common routes for a given resourceful controller.
  • Rails would dispatch that request to the destroy method on the photos controller with { id: '17' } in params.
  • a resourceful route provides a mapping between HTTP verbs and URLs to controller actions.
  • each action also maps to particular CRUD operations in a database
  • resource :photo and resources :photos creates both singular and plural routes that map to the same controller (PhotosController).
  • One way to avoid deep nesting (as recommended above) is to generate the collection actions scoped under the parent, so as to get a sense of the hierarchy, but to not nest the member actions.
  • to only build routes with the minimal amount of information to uniquely identify the resource
  • The shallow method of the DSL creates a scope inside of which every nesting is shallow
  • These concerns can be used in resources to avoid code duplication and share behavior across routes
  • To add a member route, just add a member block into the resource block (see the routes sketch after this list)
  • You can leave out the :on option, this will create the same member route except that the resource id value will be available in params[:photo_id] instead of params[:id].
  • Singular Resources
  • use a singular resource to map /profile (rather than /profile/:id) to the show action
  • Passing a String to get will expect a controller#action format
  • workaround
  • organize groups of controllers under a namespace
  • route /articles (without the prefix /admin) to Admin::ArticlesController
  • route /admin/articles to ArticlesController (without the Admin:: module prefix)
  • Nested routes allow you to capture this relationship in your routing.
  • helpers take an instance of Magazine as the first parameter (magazine_ads_url(@magazine)).
  • Resources should never be nested more than 1 level deep.
  • via the :shallow option
  • a balance between descriptive routes and deep nesting
  • :shallow_path prefixes member paths with the specified parameter
  • Routing Concerns allows you to declare common routes that can be reused inside other resources and routes
  • Rails can also create paths and URLs from an array of parameters.
  • use url_for with a set of objects
  • In helpers like link_to, you can specify just the object in place of the full url_for call
  • insert the action name as the first element of the array
  • This will recognize /photos/1/preview with GET, and route to the preview action of PhotosController, with the resource id value passed in params[:id]. It will also create the preview_photo_url and preview_photo_path helpers.
  • pass :on to a route, eliminating the block:
  • Collection Routes
  • This will enable Rails to recognize paths such as /photos/search with GET, and route to the search action of PhotosController. It will also create the search_photos_url and search_photos_path route helpers.
  • simple routing makes it very easy to map legacy URLs to new Rails actions
  • add an alternate new action using the :on shortcut
  • When you set up a regular route, you supply a series of symbols that Rails maps to parts of an incoming HTTP request.
  • :controller maps to the name of a controller in your application
  • :action maps to the name of an action within that controller
  • optional parameters, denoted by parentheses
  • This route will also route the incoming request of /photos to PhotosController#index, since :action and :id are optional
  • use a constraint on :controller that matches the namespace you require
  • dynamic segments don't accept dots
  • The params will also include any parameters from the query string
  • :defaults option.
  • set params[:format] to "jpg"
  • cannot override defaults via query parameters
  • specify a name for any route using the :as option
  • create logout_path and logout_url as named helpers in your application.
  • Inside the show action of UsersController, params[:username] will contain the username for the user.
  • should use the get, post, put, patch and delete methods to constrain a route to a particular verb.
  • use the match method with the :via option to match multiple verbs at once
  • Routing both GET and POST requests to a single action has security implications
  • 'GET' in Rails won't check for CSRF token. You should never write to the database from 'GET' requests
  • use the :constraints option to enforce a format for a dynamic segment
  • constraints
  • don't need to use anchors
  • Request-Based Constraints
  • the same name as the hash key and then compare the return value with the hash value.
  • constraint values should match the corresponding Request object method return type
    • 張 旭
       
      應該就是檢查來源的 request, 如果是某個特定的 request 來訪問的,就通過。
  • blacklist
    • 張 旭
       
      這裡有點複雜 ...
  • redirect helper
  • reuse dynamic segments from the match in the path to redirect
  • this redirection is a 301 "Moved Permanently" redirect.
  • root method
  • put the root route at the top of the file
  • The root route only routes GET requests to the action.
  • root inside namespaces and scopes
  • For namespaced controllers you can use the directory notation
  • Only the directory notation is supported
  • use the :constraints option to specify a required format on the implicit id
  • specify a single constraint to apply to a number of routes by using the block
  • non-resourceful routes
  • :id parameter doesn't accept dots
  • :as option lets you override the normal naming for the named route helpers
  • use the :as option to prefix the named route helpers that Rails generates for a route
  • prevent name collisions
  • prefix routes with a named parameter
  • This will provide you with URLs such as /bob/articles/1 and will allow you to reference the username part of the path as params[:username] in controllers, helpers and views
  • :only option
  • :except option
  • generate only the routes that you actually need can cut down on memory use and speed up the routing process.
  • alter path names
  • http://localhost:3000/rails/info/routes
  • rake routes
  • setting the CONTROLLER environment variable
  • Routes should be included in your testing strategy
  • assert_generates assert_recognizes assert_routing
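
A routes.rb sketch pulling these pieces together (the resource and action names are hypothetical):

    Rails.application.routes.draw do
      root 'pages#index'   # keep the root route at the top of the file

      concern :commentable do
        resources :comments
      end

      resources :photos, shallow: true, concerns: :commentable do
        get 'preview', on: :member       # /photos/:id/preview, preview_photo_path
        get 'search',  on: :collection   # /photos/search, search_photos_path
      end

      namespace :admin do
        resources :articles              # /admin/articles -> Admin::ArticlesController
      end

      get 'exit', to: 'sessions#destroy', as: :logout   # logout_path / logout_url
    end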
張 旭

Active Record Callbacks - Ruby on Rails Guides

  • Active Record provides hooks into this object life cycle so that you can control your application and its data.
  • Callbacks allow you to trigger logic before or after an alteration of an object's state.
  • Callbacks are methods that get called at certain moments of an object's life cycle.
  • created
  • saved
  • updated
  • deleted
  • validated
  • loaded
  • use a macro-style class method to register them as callbacks (see the sketch after this list)
  • self.name = login.capitalize if name.blank?
  • registered to only fire on certain life cycle events
  • considered good practice to declare callback methods as protected or private
  • all the available Active Record callbacks,
  • after_initialize callback will be called whenever an Active Record object is instantiated, either by directly using new or when a record is loaded from the database
  • after_find callback will be called whenever Active Record loads a record from the database.
  • after_find is called before after_initialize if both are defined
  • after_touch callback will be called whenever an Active Record object is touched.
  • belongs_to :company, touch: true
  • methods trigger callbacks
  • after_find callback is triggered by the following finder methods
  • after_initialize callback is triggered every time a new object of the class is initialized
  • should be used with caution, however, because important business rules and application logic may be kept in callbacks.
  • As you start registering new callbacks for your models, they will be queued for execution
  • The whole callback chain is wrapped in a transaction
  • Callbacks work through model relationships, and can even be defined by them.
  • As with validations, we can also make the calling of a callback method conditional on the satisfaction of a given predicate
  • When using the :if option, the callback won't be executed if the predicate method returns false; when using the :unless option, the callback won't be executed if the predicate method returns true.
  • with a Symbol
  • with a String
  • with a Proc
  • using eval and hence needs to contain valid Ruby code.
  • mix both :if and :unless in the same callback declaration
  • needed to instantiate a new PictureFileCallbacks object, since we declared our callback as an instance method.
  • Active Record makes it possible to create classes that encapsulate the callback methods, so it becomes very easy to reuse them.
  • won't be necessary to instantiate
  • after_commit
  • after_rollback
  • very similar to the after_save callback except that they don't execute until after database changes have either been committed or rolled back
  • delete_picture_file_from_disk
  • after_commit
  • If anything raises an exception after the after_destroy callback is called and the transaction rolls back, the file will have been deleted and the model will be left in an inconsistent state
    • 張 旭
       
      刪除檔案這種動作,要在資料庫的變動正確執行完成之後。
  • don't supply the :on option the callback will fire for every action.
  • The after_commit and after_rollback callbacks are guaranteed to be called for all models created, updated, or destroyed within a transaction block.
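
A sketch of the registration style described above (the model and mailer are hypothetical):

    class User < ActiveRecord::Base
      # Macro-style class methods register the callbacks, optionally
      # restricted to certain life-cycle events.
      before_validation :ensure_login_has_a_name
      after_commit :deliver_welcome_email, on: :create

      protected
        # Declared protected, per the guide's recommendation.
        def ensure_login_has_a_name
          self.name = login.capitalize if name.blank?
        end

        def deliver_welcome_email
          # Runs only after the transaction has committed, so this side
          # effect cannot outlive a rollback.
          UserMailer.welcome(self).deliver_later
        end
    end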
張 旭

elabs/pundit: Minimal authorization through OO design and pure Ruby classes

  • The class implements some kind of query method
  • Pundit will call the current_user method to retrieve what to send into this argument
  • put these classes in app/policies
  • in leveraging regular Ruby classes and object oriented design patterns to build a simple, robust and scaleable authorization system
  • map to the name of a particular controller action
  • In the generated ApplicationPolicy, the model object is called record.
  • record
  • authorize
  • authorize would have done something like this: raise "not authorized" unless PostPolicy.new(current_user, @post).update? (see the policy sketch after this list)
  • pass a second argument to authorize if the name of the permission you want to check doesn't match the action name.
  • you can chain it
  • authorize returns the object passed to it
  • the policy method in both the view and controller.
  • have some kind of view listing records which a particular user has access to
  • ActiveRecord::Relation
  • Instances of this class respond to the method resolve, which should return some kind of result which can be iterated over.
  • scope.where(published: true)
    • 張 旭
       
      I think the rough idea is: an admin can see all posts; anyone else can only see posts where published = true
  • use this class from your controller via the policy_scope method:
  • PostPolicy::Scope.new(current_user, Post).resolve
  • policy_scope(@user.posts).each
  • This method will raise an exception if authorize has not yet been called.
  • verify_policy_scoped to your controller. This will raise an exception in the vein of verify_authorized. However, it tracks if policy_scope is used instead of authorize
  • need to conditionally bypass verification, you can use skip_authorization
  • skip_policy_scope
  • Having a mechanism that ensures authorization happens allows developers to thoroughly test authorization scenarios as units on the policy objects themselves.
  • Pundit doesn't do anything you couldn't have easily done yourself. It's a very small library, it just provides a few neat helpers.
  • all of the policy and scope classes are just plain Ruby classes
  • rails g pundit:policy post
  • define a filter that redirects unauthenticated users to the login page
  • fail more gracefully
  • raise Pundit::NotAuthorizedError, "must be logged in" unless user
  • having rails handle them as a 403 error and serving a 403 error page.
  • config.action_dispatch.rescue_responses["Pundit::NotAuthorizedError"] = :forbidden
  • with I18n to generate error messages
  • retrieve a policy for a record outside the controller or view
  • define a method in your controller called pundit_user
  • Pundit strongly encourages you to model your application in such a way that the only context you need for authorization is a user object and a domain model that you want to check authorization for.
  • Pundit does not allow you to pass additional arguments to policies
  • authorization is dependent on IP address in addition to the authenticated user
  • create a special class which wraps up both user and IP and passes it to the policy.
  • set up a permitted_attributes method in your policy
  • policy(@post).permitted_attributes
  • permitted_attributes(@post)
  • Pundit provides a convenient helper method
  • permit different attributes based on the current action,
  • If you have defined an action-specific method on your policy for the current action, the permitted_attributes helper will call it instead of calling permitted_attributes on your controller
  • If you don't have an instance for the first argument to authorize, then you can pass the class
  • restart the Rails server
  • Given there is a policy without a corresponding model / ruby class, you can retrieve it by passing a symbol
  • after_action :verify_authorized
  • It is not some kind of failsafe mechanism or authorization mechanism.
  • Pundit will work just fine without using verify_authorized and verify_policy_scoped
  •  
    "Minimal authorization through OO design and pure Ruby classes"
張 旭

ruby-grape/grape: An opinionated framework for creating REST-like APIs in Ruby.

shared by 張 旭 on 17 Dec 16
  • Grape is a REST-like API framework for Ruby.
  • designed to run on Rack or complement existing web application frameworks such as Rails and Sinatra by providing a simple DSL to easily develop RESTful APIs
  • Grape APIs are Rack applications that are created by subclassing Grape::API (see the sketch after this list)
  • Rails expects a subdirectory that matches the name of the Ruby module and a file name that matches the name of the class
  • mount multiple API implementations inside another one
  • mount on a path, which is similar to using prefix inside the mounted API itself.
  • four strategies in which clients can reach your API's endpoints: :path, :header, :accept_version_header and :param
  • clients should pass the desired version as a request parameter, either in the URL query string or in the request body.
  • clients should pass the desired version in the HTTP Accept header
  • clients should pass the desired version in the URL
  • clients should pass the desired version in the HTTP Accept-Version header.
  • add a description to API methods and namespaces
  • Request parameters are available through the params hash object
  • Parameters are automatically populated from the request body on POST and PUT
  • route string parameters will have precedence.
  • Grape allows you to access only the parameters that have been declared by your params block
  • By default declared(params) includes parameters that have nil values
  • all valid types
  • type: File
  • JSON objects and arrays of objects are accepted equally
  • any class can be used as a type so long as an explicit coercion method is supplied
  • As a special case, variant-member-type collections may also be declared, by passing a Set or Array with more than one member to type
  • Parameters can be nested using group or by calling requires or optional with a block
  • relevant if another parameter is given
  • Parameters options can be grouped
  • allow_blank can be combined with both requires and optional
  • Parameters can be restricted to a specific set of values
  • Parameters can be restricted to match a specific regular expression
  • Never define mutually exclusive sets with any required params
  • Namespaces allow parameter definitions and apply to every method within the namespace
  • define a route parameter as a namespace using route_param
  • create custom validation that use request to validate the attribute
  • rescue a Grape::Exceptions::ValidationErrors and respond with a custom response or turn the response into well-formatted JSON for a JSON API that separates individual parameters and the corresponding error messages
  • custom validation messages
  • Request headers are available through the headers helper or from env in their original form
  • define requirements for your named route parameters using regular expressions on namespace or endpoint
  • route will match only if all requirements are met
  • mix in a module
  • define reusable params
  • using cookies method
  • a 201 for POST-Requests
  • 204 for DELETE-Requests
  • 200 status code for all other Requests
  • use status to query and set the actual HTTP Status Code
  • raising errors with error!
  • It is very crucial to define this endpoint at the very end of your API, as it literally accepts every request.
  • rescue_from will rescue the exceptions listed and all their subclasses.
  • Grape::API provides a logger method which by default will return an instance of the Logger class from Ruby's standard library.
  • Grape supports a range of ways to present your data
  • Grape has built-in Basic and Digest authentication (the given block is executed in the context of the current Endpoint).
  • Authentication applies to the current namespace and any children, but not parents.
  • Blocks can be executed before or after every API call, using before, after, before_validation and after_validation
  • Before and after callbacks execute in the following order
  • Grape by default anchors all request paths, which means that the request URL should match from start to end to match
  • The namespace method has a number of aliases, including: group, resource, resources, and segment. Use whichever reads the best for your API.
  • test a Grape API with RSpec by making HTTP requests and examining the response
  • POST JSON data and specify the correct content-type.
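
A compact Grape sketch, close to the project's own README example (the Twitter::API and Status names are illustrative):

    require 'grape'

    module Twitter
      class API < Grape::API
        version 'v1', using: :header, vendor: 'twitter'  # Accept-header strategy
        format :json
        prefix :api

        resource :statuses do
          desc 'Create a status.'
          params do
            requires :status, type: String
          end
          post do
            # Only parameters declared in the params block get through.
            Status.create!(declared(params, include_missing: false))
          end
        end
      end
    end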
crazylion lee

Cello * High Level C

  •  
    "Cello is a library that brings higher level programming to C. By acting as a modern, powerful runtime system Cello makes many things easy that were previously impractical or awkward in C such as: Generic Data Structures Polymorphic Functions Interfaces / Type Classes Constructors / Destructors Optional Garbage Collection Exceptions Reflection And because Cello works seamlessly alongside standard C you get all the other benefits such as great performance, powerful tooling, and extensive libraries."
張 旭

Building a RESTful API in a Rails application

  • designing and implementing a REST API in an intentionally simplistic task management web application, and will cover some best practices to ensure maintainability of the code.
  • each individual request should have no context of the requests that came before it.
  • each request that modifies the database should act on one and only one row of one and only one table
  • The resource endpoints should return representations of the resource as data, usually XML or JSON.
  • POST for create, PUT for update, PATCH for upsert (update and insert).
  • an existing API should never be modified, except for critical bugfixes
  • Rather than changing existing endpoints, expose a new version (see the routing sketch after this list)
  • using unique database ids in the route chain allows users to access short routes, and simplifies resource lookup
  • while exposing internal database ids to the consumer and requiring the consumer to maintain a reference to ids on their end
  • The downfall is longer nested routes
  • require reauthentication on a per-request level
  • Devise.secure_compare helps avoid timing attacks
  • Defensive programming is a software design principle that dictates that a piece of software should be designed to continue functioning in unforeseen circumstances.
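
A routing sketch of the versioning advice above (the namespace and resource names are hypothetical):

    Rails.application.routes.draw do
      namespace :api do
        namespace :v1 do
          resources :tasks, only: [:index, :show, :create, :update, :destroy]
        end
        namespace :v2 do
          # New behaviour ships here; v1 endpoints stay frozen except for
          # critical bugfixes.
          resources :tasks
        end
      end
    end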
張 旭

I am a puts debuggerer | Tenderlovemaking

  • method(:render).source_location
  • method(:render).super_method.source_location
  • unbind the method from Kernel, rebind it to the request
  • The TracePoint allocated here will fire on every “call” event and the block will print out the method name and location of the call (see the sketch after this list).
  • The -d flag will enable warnings and also print out every line where an exception was raised.
  • re-raised
  • RUBYOPT=-d bundle exec rake test
  • The RUBYOPT environment variable will get applied to every Ruby program that is executed in this shell, even sub shells executed by rake.
  • @sharing.freeze
  • can't modify frozen Hash
  • where the first mutation happened
  • I hit Ctrl-T (sorry, this only works on OS X, you’ll need to use kill on Linux)
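
A sketch of these techniques (SomeClass is a stand-in for whatever you are tracing):

    # Where is a method defined? (nil for C-implemented methods)
    m = method(:puts)
    p m.source_location
    p m.super_method && m.super_method.source_location

    # Fire on every "call" event, printing method name and location.
    tracer = TracePoint.new(:call) do |tp|
      puts "#{tp.defined_class}##{tp.method_id}  #{tp.path}:#{tp.lineno}"
    end
    tracer.enable { SomeClass.new.some_method }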
張 旭

Probably Done Before: Visualizing Docker Containers and Images

  •  In my opinion, understanding how a technology works under the hood is the best way to achieve learning speed and to build confidence that you are using the tool in the correct way.
  • union view
    • 張 旭
       
      It stitches multiple image layers together so that reading them looks like reading a single image file.
  • The top-level layer may be read by a union-ing file system (AUFS on my docker implementation) to present a single cohesive view of all the changes as one read-only file system
  • it is nearly the same thing as an image, except that the top layer is read-write
  • A container is defined only as a read-write layer atop an image (of read-only layers itself).  It does not have to be running.
  • a running container
    • 張 旭
       
      I had this wrong all along! A container isn't only something that is running; anything with a read-write layer counts!
  • A running container is defined as a read-write "union view" and the isolated process-space and processes within
  • kernel-level technologies like cgroups, namespaces
  • The processes within this process-space may change, delete or create files within the "union view" file that will be captured in the read-write layer
  • there is no longer a running container
    • 張 旭
       
      After this command finishes, the running container stops, but the container itself is still there!
  • each layer contains a pointer to a parent layer using the Id
  • The 'docker create' command adds a read-write layer to the top stack based on the image id.  It does not run this container.
  • The command 'docker start' creates a process space around the union view of the container's layers.
  • can only be one process space per container.
  • the docker run command starts with an image, creates a container, and starts the container (see the shell sketch after this list)
  • 'git pull' (which is a combination of 'git fetch' and 'git merge')
  • 'docker ps' lists out the inventory of running containers on your system
  • 'docker ps -a' where the 'a' is short for 'all' lists out all the containers on your system, whether stopped or running.
  • Only those images that have containers attached to them or that have been pulled are considered top-level.
  • 'docker stop' issues a SIGTERM to a running container which politely stops all the processes in that process-space.
  • The result is a normal, but non-running, container
  • 'docker kill' issues a non-polite SIGKILL command to all the processes in a running container.
  • 'docker stop' and 'docker kill' which send actual UNIX signals to a running process
  • 'docker pause' uses a special cgroups feature to freeze/pause a running process-space
  • 'docker rm' removes the read-write layer that defines a container from your host system
  • It effectively deletes files
  • 'docker rmi' removes the read-layer that defines a "union view" of an image.
  • 'docker commit' takes a container's top-level read-write layer and burns it into a read-only layer.
  • turns a container (whether running or stopped) into an immutable image
  • uses the FROM directive in the Dockerfile file as the starting image and iteratively 1) runs (create and start) 2) modifies and 3) commits.
  • At each step in the iteration a new layer is created.
  • 'docker exec' command runs on a running container and executes a process in that running container's process space
  • 'docker inspect' fetches the metadata that has been associated with the top-layer of the container or image
  • 'docker save' creates a single tar file that can be used to import on a different host system
  • only be run on an image
  • 'docker export' command creates a tar file of the contents of the "union view" and flattens it for consumption for non-Docker usages
  • This command removes the metadata and the layers.  This command can only be run on containers.
  • 'docker history' command takes an image-id and recursively prints out the read-only layers
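
A shell sketch of the command relationships above (the container and image names are hypothetical):

    docker create --name web nginx   # add a read-write layer; nothing runs yet
    docker start web                 # wrap the union view in a process space
    # 'docker run nginx' would do both steps at once.

    docker ps                        # running containers only
    docker ps -a                     # all containers, stopped or running

    docker stop web                  # SIGTERM; leaves a normal, non-running container
    docker commit web web-snapshot   # burn the read-write layer into a read-only image
    docker rm web                    # delete the container's read-write layer
    docker rmi web-snapshot          # delete the image's read-only layer(s)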
張 旭

Active Record Validations - Ruby on Rails Guides

  • validates :name, presence: true
  • Validations are used to ensure that only valid data is saved into your database
  • Model-level validations are the best way to ensure that only valid data is saved into your database.
  • native database constraints
  • client-side validations
  • controller-level validations
  • Database constraints and/or stored procedures make the validation mechanisms database-dependent and can make testing and maintenance more difficult
  • Client-side validations can be useful, but are generally unreliable
  • combined with other techniques, client-side validation can be a convenient way to provide users with immediate feedback
  • it's a good idea to keep your controllers skinny
  • model-level validations are the most appropriate in most circumstances.
  • Active Record uses the new_record? instance method to determine whether an object is already in the database or not.
  • Creating and saving a new record will send an SQL INSERT operation to the database. Updating an existing record will send an SQL UPDATE operation instead. Validations are typically run before these commands are sent to the database
  • The bang versions (e.g. save!) raise an exception if the record is invalid.
  • save and update return false
  • create just returns the object
  • skip validations, and will save the object to the database regardless of its validity.
  • be used with caution
  • update_all
  • save also has the ability to skip validations if passed validate: false as argument.
  • save(validate: false)
  • valid? triggers your validations and returns true if no errors
  • After Active Record has performed validations, any errors found can be accessed through the errors.messages instance method
  • By definition, an object is valid if this collection is empty after running validations.
  • validations are not run when using new.
  • invalid? is simply the inverse of valid?.
  • To verify whether or not a particular attribute of an object is valid, you can use errors[:attribute].
  • only useful after validations have been run
  • Every time a validation fails, an error message is added to the object's errors collection,
  • All of them accept the :on and :message options, which define when the validation should be run and what message should be added to the errors collection if it fails, respectively.
  • validates that a checkbox on the user interface was checked when a form was submitted.
  • agree to your application's terms of service
  • 'acceptance' does not need to be recorded anywhere in your database (if you don't have a field for it, the helper will just create a virtual attribute).
  • It defaults to "1" and can be easily changed.
  • use this helper when your model has associations with other models and they also need to be validated
  • valid? will be called upon each one of the associated objects.
  • work with all of the association types
  • Don't use validates_associated on both ends of your associations.
    • 張 旭
       
      For validating associated objects, enabling it on one side of the association is enough!
  • each associated object will contain its own errors collection
  • errors do not bubble up to the calling model
  • when you have two text fields that should receive exactly the same content
  • This validation creates a virtual attribute whose name is the name of the field that has to be confirmed with "_confirmation" appended.
  • To require confirmation, make sure to add a presence check for the confirmation attribute
  • this set can be any enumerable object.
  • The exclusion helper has an option :in that receives the set of values that will not be accepted for the validated attributes.
  • :in option has an alias called :within
  • validates the attributes' values by testing whether they match a given regular expression, which is specified using the :with option.
  • attribute does not match the regular expression by using the :without option.
  • validates that the attributes' values are included in a given set
  • :in option has an alias called :within
  • specify length constraints
  • :minimum
  • :maximum
  • :in (or :within)
  • :is - The attribute length must be equal to the given value.
  • :wrong_length, :too_long, and :too_short options and %{count} as a placeholder for the number corresponding to the length constraint being used.
  • split the value in a different way using the :tokenizer option:
    • 張 旭
       
      Provide your own way of splitting the value when counting length
  • validates that your attributes have only numeric values
  • By default, it will match an optional sign followed by an integral or floating point number.
  • set :only_integer to true.
  • allows a trailing newline character.
  • :greater_than
  • :greater_than_or_equal_to
  • :equal_to
  • :less_than
  • :less_than_or_equal_to
  • :odd - Specifies the value must be an odd number if set to true.
  • :even - Specifies the value must be an even number if set to true.
  • validates that the specified attributes are not empty
  • if the value is either nil or a blank string
  • validate associated records whose presence is required, you must specify the :inverse_of option for the association
  • inverse_of
  • an association is present, you'll need to test whether the associated object itself is present, and not the foreign key used to map the association
  • false.blank? is true
  • validate the presence of a boolean field
  • ensure the value will NOT be nil
  • validates that the specified attributes are absent
  • not either nil or a blank string
  • be sure that an association is absent
  • false.present? is false
  • validate the absence of a boolean field you should use validates :field_name, exclusion: { in: [true, false] }.
  • validates that the attribute's value is unique right before the object gets saved
  • a :scope option that you can use to specify other attributes that are used to limit the uniqueness check
  • a :case_sensitive option that you can use to define whether the uniqueness constraint will be case sensitive or not.
  • There is no default error message for validates_with.
  • To implement the validate method, you must have a record parameter defined, which is the record to be validated.
  • the validator will be initialized only once for the whole application life cycle, and not on each validation run, so be careful about using instance variables inside it.
  • passes the record to a separate class for validation
  • use a plain old Ruby object
  • validates attributes against a block
  • The block receives the record, the attribute's name and the attribute's value. You can do anything you like to check for valid data within the block
  • will let validation pass if the attribute's value is blank?, like nil or an empty string
  • the :message option lets you specify the message that will be added to the errors collection when validation fails
  • skips the validation when the value being validated is nil
  • specify when the validation should happen
  • raise ActiveModel::StrictValidationFailed when the object is invalid
  • You can do that by using the :if and :unless options, which can take a symbol, a string, a Proc or an Array.
  • use the :if option when you want to specify when the validation should happen
  • using eval and needs to contain valid Ruby code.
  • Using a Proc object gives you the ability to write an inline condition instead of a separate method
  • have multiple validations use one condition, it can be easily achieved using with_options.
  • implement a validate method which takes a record as an argument and performs the validation on it
  • validates_with method
  • implement a validate_each method which takes three arguments: record, attribute, and value
  • combine standard validations with your own custom validators.
  • :expiration_date_cannot_be_in_the_past, :discount_cannot_be_greater_than_total_value (the first appears in the sketch after this list)
  • By default such validations will run every time you call valid?
  • errors[] is used when you want to check the error messages for a specific attribute.
  • Returns an instance of the class ActiveModel::Errors containing all errors.
  • lets you manually add messages that are related to particular attributes
  • using []= setter
  • errors[:base] is an array, you can simply add a string to it and it will be used as an error message.
  • use this method when you want to say that the object is invalid, no matter the values of its attributes.
  • clear all the messages in the errors collection
  • calling errors.clear upon an invalid object won't actually make it valid: the errors collection will now be empty, but the next time you call valid? or any method that tries to save this object to the database, the validations will run again.
  • the total number of error messages for the object.
  • .errors.full_messages.each
  • .field_with_errors
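
  A minimal sketch pulling together the length, numericality, inclusion, uniqueness, and custom-validator highlights above; the Order model and its attributes are hypothetical:

    class Order < ApplicationRecord
      # Hypothetical model and attributes, for illustration only.
      validates :code, length: { in: 6..12, too_short: "needs at least %{count} characters" }
      validates :quantity, numericality: { only_integer: true, greater_than: 0 }
      validates :terms_of_service, inclusion: { in: [true, false] }   # boolean "presence" check
      validates :email, uniqueness: { scope: :account_id, case_sensitive: false }
      validate  :expiration_date_cannot_be_in_the_past

      private

      def expiration_date_cannot_be_in_the_past
        if expiration_date.present? && expiration_date < Date.current
          errors.add(:expiration_date, "can't be in the past")
        end
      end
    end

    order = Order.new(quantity: 0)
    order.valid?                                        # => false
    order.errors[:quantity]                             # => ["must be greater than 0"]
    order.errors.full_messages.each { |msg| puts msg }
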
張 旭

Boosting your kubectl productivity ♦︎ Learnk8s - 0 views

  • kubectl is your cockpit to control Kubernetes.
  • kubectl is a client for the Kubernetes API
  • Kubernetes API is an HTTP REST API.
  • This API is the real Kubernetes user interface.
  • Kubernetes is fully controlled through this API
  • every Kubernetes operation is exposed as an API endpoint and can be executed by an HTTP request to this endpoint.
  • the main job of kubectl is to carry out HTTP requests to the Kubernetes API
  • Kubernetes maintains an internal state of resources, and all Kubernetes operations are CRUD operations on these resources.
  • Kubernetes is a fully resource-centred system
  • Kubernetes API reference is organised as a list of resource types with their associated operations.
  • This is how kubectl works for all commands that interact with the Kubernetes cluster.
  • kubectl simply makes HTTP requests to the appropriate Kubernetes API endpoints.
  • it's totally possible to control Kubernetes with a tool like curl by manually issuing HTTP requests to the Kubernetes API.
  • Kubernetes consists of a set of independent components that run as separate processes on the nodes of a cluster.
  • components on the master nodes
  • Storage backend: stores resource definitions (usually etcd is used)
  • API server: provides Kubernetes API and manages storage backend
  • Controller manager: ensures resource statuses match specifications
  • Scheduler: schedules Pods to worker nodes
  • component on the worker nodes
  • Kubelet: manages execution of containers on a worker node
  • triggers the ReplicaSet controller, which is a sub-process of the controller manager.
  • the scheduler, who watches for Pod definitions that are not yet scheduled to a worker node.
  • creating and updating resources in the storage backend on the master node.
  • The kubelet of the worker node your ReplicaSet Pods have been scheduled to instructs the configured container runtime (which may be Docker) to download the required container images and run the containers.
  • Kubernetes components (except the API server and the storage backend) work by watching for resource changes in the storage backend and manipulating resources in the storage backend.
  • However, these components do not access the storage backend directly, but only through the Kubernetes API.
    • 張 旭
       
      Brilliant: the components all communicate with one another via API calls. Good microservice behavior.
  • double usage of the Kubernetes API for internal components as well as for external users is a fundamental design concept of Kubernetes.
  • All other Kubernetes components and users read, watch, and manipulate the state (i.e. resources) of Kubernetes through the Kubernetes API
  • The storage backend stores the state (i.e. resources) of Kubernetes.
  • command completion is a shell feature that works by the means of a completion script.
  • A completion script is a shell script that defines the completion behaviour for a specific command. Sourcing a completion script enables completion for the corresponding command.
  • kubectl completion zsh
  • /etc/bash_completion.d directory (create it, if it doesn't exist)
  • source <(kubectl completion bash)
  • source <(kubectl completion zsh)
  • autoload -Uz compinit
    compinit
  • the API reference, which contains the full specifications of all resources.
  • kubectl api-resources
  • displays the resource names in their plural form (e.g. deployments instead of deployment). It also displays the shortname (e.g. deploy) for those resources that have one. Don't worry about these differences. All of these name variants are equivalent for kubectl.
  • .spec
  • custom columns output format comes in. It lets you freely define the columns and the data to display in them. You can choose any field of a resource to be displayed as a separate column in the output
  • kubectl get pods -o custom-columns='NAME:metadata.name,NODE:spec.nodeName'
  • kubectl explain pod.spec.
  • kubectl explain pod.metadata.
  • browse the resource specifications and try it out with any fields you like!
  • JSONPath is a language to extract data from JSON documents (it is similar to XPath for XML).
  • with kubectl explain, only a subset of the JSONPath capabilities is supported
  • Many fields of Kubernetes resources are lists, and this operator allows you to select items of these lists. It is often used with a wildcard as [*] to select all items of the list.
  • kubectl get pods -o custom-columns='NAME:metadata.name,IMAGES:spec.containers[*].image'
  • a Pod may contain more than one container.
  • The availability zones for each node are obtained through the special failure-domain.beta.kubernetes.io/zone label.
  • kubectl get nodes -o yaml
    kubectl get nodes -o json
  • The default kubeconfig file is ~/.kube/config
  • with multiple clusters, then you have connection parameters for multiple clusters configured in your kubeconfig file.
  • Within a cluster, you can set up multiple namespaces (a namespace is kind of "virtual" clusters within a physical cluster)
  • overwrite the default kubeconfig file with the --kubeconfig option for every kubectl command.
  • Namespace: the namespace to use when connecting to the cluster
  • a one-to-one mapping between clusters and contexts.
  • When kubectl reads a kubeconfig file, it always uses the information from the current context.
  • just change the current context in the kubeconfig file
  • to switch to another namespace in the same cluster, you can change the value of the namespace element of the current context
  • kubectl also provides the --cluster, --user, --namespace, and --context options that allow you to overwrite individual elements and the current context itself, regardless of what is set in the kubeconfig file.
  • for switching between clusters and namespaces is kubectx.
  • kubectl config get-contexts
  • just have to download the shell scripts named kubectl-ctx and kubectl-ns to any directory in your PATH and make them executable (for example, with chmod +x)
  • kubectl proxy
  • kubectl get roles
  • kubectl get pod
  • Kubectl plugins are distributed as simple executable files with a name of the form kubectl-x. The prefix kubectl- is mandatory,
  • To install a plugin, you just have to copy the kubectl-x file to any directory in your PATH and make it executable (for example, with chmod +x)
  • krew itself is a kubectl plugin
  • check out the kubectl-plugins GitHub topic
  • The executable can be of any type, a Bash script, a compiled Go program, a Python script, it really doesn't matter. The only requirement is that it can be directly executed by the operating system.
  • kubectl plugins can be written in any programming or scripting language.
  • you can write more sophisticated plugins with real programming languages, for example, using a Kubernetes client library. If you use Go, you can also use the cli-runtime library, which exists specifically for writing kubectl plugins.
  • a kubeconfig file consists of a set of contexts
  • changing the current context means changing the cluster, if you have only a single context per cluster.
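
  A short terminal session illustrating the commands highlighted above; pod, node, and context names are hypothetical:

    # Pod, node, and context names below are hypothetical.
    $ kubectl get pods -o custom-columns='NAME:metadata.name,NODE:spec.nodeName'
    NAME        NODE
    web-7d4f9   node-1

    # The [*] wildcard selects all items of a list (here: every container image).
    $ kubectl get pods -o jsonpath='{.items[*].spec.containers[*].image}'
    nginx:1.25

    $ kubectl explain pod.spec.containers          # browse the resource specification
    $ kubectl config get-contexts                  # list contexts in ~/.kube/config
    $ kubectl config use-context staging           # switch the current context
    $ kubectl config set-context --current --namespace=team-a   # switch namespace
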
張 旭

Template Designer Documentation - Jinja2 Documentation (2.10) - 0 views

  • A Jinja template doesn’t need to have a specific extension
  • A Jinja template is simply a text file
  • tags, which control the logic of the template
  • {% ... %} for Statements
  • {{ ... }} for Expressions to print to the template output
  • use a dot (.) to access attributes of a variable
  • the outer double-curly braces are not part of the variable, but the print statement.
  • If you access variables inside tags don’t put the braces around them.
  • If a variable or attribute does not exist, you will get back an undefined value.
  • the default behavior is to evaluate to an empty string if printed or iterated over, and to fail for every other operation.
  • if an object has an item and attribute with the same name. Additionally, the attr() filter only looks up attributes.
  • Variables can be modified by filters. Filters are separated from the variable by a pipe symbol (|) and may have optional arguments in parentheses.
  • Multiple filters can be chained
  • Tests can be used to test a variable against a common expression.
  • add is plus the name of the test after the variable.
  • to find out if a variable is defined, you can do name is defined, which will then return true or false depending on whether name is defined in the current template context.
  • strip whitespace in templates by hand. If you add a minus sign (-) to the start or end of a block (e.g. a For tag), a comment, or a variable expression, the whitespaces before or after that block will be removed
  • not add whitespace between the tag and the minus sign
  • mark a block raw
  • Template inheritance allows you to build a base “skeleton” template that contains all the common elements of your site and defines blocks that child templates can override.
  • The {% extends %} tag is the key here. It tells the template engine that this template “extends” another template.
  • access templates in subdirectories with a slash
  • can’t define multiple {% block %} tags with the same name in the same template
  • use the special self variable and call the block with that name
  • self.title()
  • super()
  • put the name of the block after the end tag for better readability
  • if the block is replaced by a child template, a variable would appear that was not defined in the block or passed to the context.
  • setting the block to “scoped” by adding the scoped modifier to a block declaration
  • If you have a variable that may include any of the following chars (>, <, &, or ") you SHOULD escape it unless the variable contains well-formed and trusted HTML.
  • Jinja2 functions (macros, super, self.BLOCKNAME) always return template data that is marked as safe.
  • With the default syntax, control structures appear inside {% ... %} blocks.
  • the dictsort filter
  • loop.cycle
  • Unlike in Python, it’s not possible to break or continue in a loop
  • use loops recursively
  • add the recursive modifier to the loop definition and call the loop variable with the new iterable where you want to recurse.
  • The loop variable always refers to the closest (innermost) loop.
  • whether the value changed at all,
  • use it to test if a variable is defined, not empty and not false
  • Macros are comparable with functions in regular programming languages.
  • If a macro name starts with an underscore, it’s not exported and can’t be imported.
  • pass a macro to another macro
  • caller()
  • a single trailing newline is stripped if present
  • other whitespace (spaces, tabs, newlines etc.) is returned unchanged
  • a block tag works in “both” directions. That is, a block tag doesn’t just provide a placeholder to fill - it also defines the content that fills the placeholder in the parent.
  • Python dicts are not ordered
  • caller(user)
  • call(user)
  • This is a simple dialog rendered by using a macro and a call block.
  • Filter sections allow you to apply regular Jinja2 filters on a block of template data.
  • Assignments at top level (outside of blocks, macros or loops) are exported from the template like top level macros and can be imported by other templates.
  • using namespace objects which allow propagating of changes across scopes
  • use block assignments to capture the contents of a block into a variable name.
  • The extends tag can be used to extend one template from another.
  • Blocks are used for inheritance and act as both placeholders and replacements at the same time.
  • The include statement is useful to include a template and return the rendered contents of that file into the current namespace
  • Included templates have access to the variables of the active context by default.
  • putting often used code into macros
  • imports are cached and imported templates don’t have access to the current template variables, just the globals by default.
  • Macros and variables starting with one or more underscores are private and cannot be imported.
  • By default, included templates are passed the current context and imported templates are not.
  • imports are often used just as a module that holds macros.
  • Integers and floating point numbers are created by just writing the number down
  • Everything between two brackets is a list.
  • Tuples are like lists that cannot be modified (“immutable”).
  • A dict in Python is a structure that combines keys and values.
  • // Divide two numbers and return the truncated integer result
  • The special constants true, false, and none are indeed lowercase
  • all Jinja identifiers are lowercase
  • (expr) group an expression.
  • The is and in operators support negation using an infix notation
  • in Perform a sequence / mapping containment test.
  • | Applies a filter.
  • ~ Converts all operands into strings and concatenates them.
  • use inline if expressions.
  • an attribute is always returned and items are not looked up.
  • default(value, default_value=u'', boolean=False): If the value is undefined it will return the passed default value, otherwise the value of the variable
  • dictsort(value, case_sensitive=False, by='key', reverse=False): Sort a dict and yield (key, value) pairs.
  • format(value, *args, **kwargs): Apply python string formatting on an object
  • groupby(value, attribute): Group a sequence of objects by a common attribute.
  • the attribute grouped by is stored in the grouper attribute and the list contains all the objects that have this grouper in common.
  • indent(s, width=4, first=False, blank=False, indentfirst=None): Return a copy of the string with each line indented by 4 spaces. The first line and blank lines are not indented by default.
  • join(value, d=u'', attribute=None): Return a string which is the concatenation of the strings in the sequence.
  • map(): Applies a filter on a sequence of objects or looks up an attribute.
  • pprint(value, verbose=False): Pretty print a variable. Useful for debugging.
  • reject(): Filters a sequence of objects by applying a test to each object, and rejecting the objects with the test succeeding.
  • replace(s, old, new, count=None): Return a copy of the value with all occurrences of a substring replaced with a new one.
  • round(value, precision=0, method='common'): Round the number to a given precision
  • even if rounded to 0 precision, a float is returned.
  • select(): Filters a sequence of objects by applying a test to each object, and only selecting the objects with the test succeeding.
  • sort(value, reverse=False, case_sensitive=False, attribute=None): Sort an iterable. By default it sorts ascending; if you pass it true as the first argument it will reverse the sorting.
  • striptags(value): Strip SGML/XML tags and replace adjacent whitespace by one space.
  • tojson(value, indent=None): Dumps a structure to JSON so that it's safe to use in <script> tags.
  • trim(value): Strip leading and trailing whitespace.
  • unique(value, case_sensitive=False, attribute=None): Returns a list of unique items from the given iterable
  • urlize(value, trim_url_limit=None, nofollow=False, target=None, rel=None): Converts URLs in plain text into clickable links.
  • defined(value): Return true if the variable is defined
  • in(value, seq): Check if value is in seq.
  • mapping(value): Return true if the object is a mapping (dict etc.).
  • number(value): Return true if the variable is a number.
  • sameas(value, other): Check if an object points to the same memory address as another object
  • undefined(value): Like defined() but the other way round.
  • A joiner is passed a string and will return that string every time it's called, except the first time (in which case it returns an empty string).
  • namespace(...): Creates a new container that allows attribute assignment using the {% set %} tag
  • The with statement makes it possible to create a new inner scope. Variables set within this scope are not visible outside of the scope.
  • activate and deactivate the autoescaping from within the templates
  • With both trim_blocks and lstrip_blocks enabled, you can put block tags on their own lines, and the entire block line will be removed when rendered, preserving the whitespace of the contents
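
  A small pair of templates combining inheritance, named blocks, filters, loops, and whitespace control from the highlights above; the file names and the users variable are hypothetical:

    {# base.html -- a hypothetical skeleton template #}
    <title>{% block title %}{% endblock %} - My Webpage</title>
    <div id="content">{% block content %}{% endblock %}</div>

    {# child.html -- extends the skeleton and overrides its blocks #}
    {% extends "base.html" %}
    {% block title %}Index{% endblock %}
    {% block content %}
      {%- for user in users | sort(attribute='name') -%}
        <p>{{ loop.index }}. {{ user.name | default('anonymous') }}</p>
      {%- endfor %}
    {% endblock content %}
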
張 旭

Queues - Laravel - The PHP Framework For Web Artisans - 0 views

  • Laravel queues provide a unified API across a variety of different queue backends, such as Beanstalk, Amazon SQS, Redis, or even a relational database.
  • The queue configuration file is stored in config/queue.php
  • a synchronous driver that will execute jobs immediately (for local use)
  • A null queue driver is also included which discards queued jobs.
  • In your config/queue.php configuration file, there is a connections configuration option.
  • any given queue connection may have multiple "queues" which may be thought of as different stacks or piles of queued jobs.
  • each connection configuration example in the queue configuration file contains a queue attribute.
  • if you dispatch a job without explicitly defining which queue it should be dispatched to, the job will be placed on the queue that is defined in the queue attribute of the connection configuration
  • pushing jobs to multiple queues can be especially useful for applications that wish to prioritize or segment how jobs are processed
  • specify which queues it should process by priority.
  • If your Redis queue connection uses a Redis Cluster, your queue names must contain a key hash tag.
  • ensure all of the Redis keys for a given queue are placed into the same hash slot
  • all of the queueable jobs for your application are stored in the app/Jobs directory.
  • Job classes are very simple, normally containing only a handle method which is called when the job is processed by the queue.
  • we were able to pass an Eloquent model directly into the queued job's constructor. Because of the SerializesModels trait that the job is using, Eloquent models will be gracefully serialized and unserialized when the job is processing.
  • When the job is actually handled, the queue system will automatically re-retrieve the full model instance from the database.
  • The handle method is called when the job is processed by the queue
  • The arguments passed to the dispatch method will be given to the job's constructor
  • delay the execution of a queued job, you may use the delay method when dispatching a job.
  • dispatch a job immediately (synchronously), you may use the dispatchNow method.
  • When using this method, the job will not be queued and will be run immediately within the current process
  • specify a list of queued jobs that should be run in sequence.
  • Deleting jobs using the $this->delete() method will not prevent chained jobs from being processed. The chain will only stop executing if a job in the chain fails.
  • this does not push jobs to different queue "connections" as defined by your queue configuration file, but only to specific queues within a single connection.
  • To specify the queue, use the onQueue method when dispatching the job
  • To specify the connection, use the onConnection method when dispatching the job
  • defining the maximum number of attempts on the job class itself.
  • to defining how many times a job may be attempted before it fails, you may define a time at which the job should timeout.
  • using the funnel method, you may limit jobs of a given type to only be processed by one worker at a time
  • using the throttle method, you may throttle a given type of job to only run 10 times every 60 seconds.
  • If an exception is thrown while the job is being processed, the job will automatically be released back onto the queue so it may be attempted again.
  • dispatch a Closure. This is great for quick, simple tasks that need to be executed outside of the current request cycle
  • When dispatching Closures to the queue, the Closure's code contents is cryptographically signed so it can not be modified in transit.
  • Laravel includes a queue worker that will process new jobs as they are pushed onto the queue.
  • once the queue:work command has started, it will continue to run until it is manually stopped or you close your terminal
  • queue workers are long-lived processes and store the booted application state in memory.
  • they will not notice changes in your code base after they have been started.
  • during your deployment process, be sure to restart your queue workers.
  • customize your queue worker even further by only processing particular queues for a given connection
  • The --once option may be used to instruct the worker to only process a single job from the queue
  • The --stop-when-empty option may be used to instruct the worker to process all jobs and then exit gracefully.
  • Daemon queue workers do not "reboot" the framework before processing each job.
  • you should free any heavy resources after each job completes.
  • Since queue workers are long-lived processes, they will not pick up changes to your code without being restarted.
  • restart the workers during your deployment process.
  • php artisan queue:restart
  • The queue uses the cache to store restart signals
  • the queue workers will die when the queue:restart command is executed, you should be running a process manager such as Supervisor to automatically restart the queue workers.
  • each queue connection defines a retry_after option. This option specifies how many seconds the queue connection should wait before retrying a job that is being processed.
  • The --timeout option specifies how long the Laravel queue master process will wait before killing off a child queue worker that is processing a job.
  • When jobs are available on the queue, the worker will keep processing jobs with no delay in between them.
  • While sleeping, the worker will not process any new jobs - the jobs will be processed after the worker wakes up again
  • the numprocs directive will instruct Supervisor to run 8 queue:work processes and monitor all of them, automatically restarting them if they fail.
  • Laravel includes a convenient way to specify the maximum number of times a job should be attempted.
  • define a failed method directly on your job class, allowing you to perform job specific clean-up when a failure occurs.
  • a great opportunity to notify your team via email or Slack.
  • php artisan queue:retry all
  • php artisan queue:flush
  • When injecting an Eloquent model into a job, it is automatically serialized before being placed on the queue and restored when the job is processed
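
  A condensed job class following the highlights above; the ProcessPodcast job and Podcast model are illustrative examples:

    <?php

    namespace App\Jobs;

    // The ProcessPodcast job and Podcast model are illustrative examples.
    use App\Podcast;
    use Illuminate\Bus\Queueable;
    use Illuminate\Contracts\Queue\ShouldQueue;
    use Illuminate\Foundation\Bus\Dispatchable;
    use Illuminate\Queue\InteractsWithQueue;
    use Illuminate\Queue\SerializesModels;

    class ProcessPodcast implements ShouldQueue
    {
        use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

        public $tries = 5;      // maximum number of attempts
        public $timeout = 120;  // seconds before the job should time out

        protected $podcast;

        public function __construct(Podcast $podcast)
        {
            // SerializesModels stores only the model identifier;
            // the full model is re-retrieved when the job runs.
            $this->podcast = $podcast;
        }

        public function handle()
        {
            // process the podcast...
        }

        public function failed(\Exception $exception)
        {
            // notify the team via email or Slack...
        }
    }

    // Dispatching with a delay, onto a named queue:
    // ProcessPodcast::dispatch($podcast)->delay(now()->addMinutes(10))->onQueue('processing');
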
張 旭

Storing Sessions in a Database, by Chris Shiflett - 0 views

  • to store sessions in a database rather than the filesystem
  • server affinity (methods that direct requests from the same client to the same server)
  • store sessions in a central database that is common to all servers
  • security concerns
  • PHP provides a function that lets you override the default session mechanism by specifying the names of your own functions for taking care of the distinct tasks
  • The handler PHP uses to handle data serialization is defined by the session.serialize_handler configuration directive. It is set to php by default.
  • REPLACE
  • REPLACE, which behaves exactly like INSERT, except that it handles cases where a record already exists with the same session identifier by first deleting that record.
  • the _write() function keeps the timestamp of the last access in the access column for each record, this can be used to determine which records to delete.
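
  A condensed sketch of the technique, assuming a sessions table with id, data, and access columns (table and connection details are hypothetical):

    <?php
    // Assumes a `sessions` table with `id`, `data`, and `access` columns.
    $db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

    session_set_save_handler(
        function () { return true; },                       // open
        function () { return true; },                       // close
        function ($id) use ($db) {                          // read
            $stmt = $db->prepare('SELECT data FROM sessions WHERE id = ?');
            $stmt->execute([$id]);
            $row = $stmt->fetch(PDO::FETCH_ASSOC);
            return $row ? $row['data'] : '';
        },
        function ($id, $data) use ($db) {                   // write
            // REPLACE behaves like INSERT, except it first deletes any
            // existing record with the same session identifier.
            $stmt = $db->prepare(
                'REPLACE INTO sessions (id, data, access) VALUES (?, ?, ?)');
            return $stmt->execute([$id, $data, time()]);
        },
        function ($id) use ($db) {                          // destroy
            return $db->prepare('DELETE FROM sessions WHERE id = ?')
                      ->execute([$id]);
        },
        function ($max) use ($db) {                         // gc
            // The access timestamp determines which records to delete.
            $stmt = $db->prepare('DELETE FROM sessions WHERE access < ?');
            return $stmt->execute([time() - $max]);
        }
    );
    session_start();
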
張 旭

Helm | - 0 views

  • A chart is a collection of files that describe a related set of Kubernetes resources.
  • A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
  • Charts are created as files laid out in a particular directory tree, then they can be packaged into versioned archives to be deployed.
  • A chart is organized as a collection of files inside of a directory.
  • values.yaml # The default configuration values for this chart
  • charts/ # A directory containing any charts upon which this chart depends.
  • templates/ # A directory of templates that, when combined with values, # will generate valid Kubernetes manifest files.
  • version: A SemVer 2 version (required)
  • apiVersion: The chart API version, always "v1" (required)
  • Every chart must have a version number. A version must follow the SemVer 2 standard.
  • non-SemVer names are explicitly disallowed by the system.
  • When generating a package, the helm package command will use the version that it finds in the Chart.yaml as a token in the package name.
  • the appVersion field is not related to the version field. It is a way of specifying the version of the application.
  • appVersion: The version of the app that this contains (optional). This needn't be SemVer.
  • If the latest version of a chart in the repository is marked as deprecated, then the chart as a whole is considered to be deprecated.
  • deprecated: Whether this chart is deprecated (optional, boolean)
  • one chart may depend on any number of other charts.
  • dependencies can be dynamically linked through the requirements.yaml file or brought in to the charts/ directory and managed manually.
  • the preferred method of declaring dependencies is by using a requirements.yaml file inside of your chart.
  • A requirements.yaml file is a simple file for listing your dependencies.
  • The repository field is the full URL to the chart repository.
  • you must also use helm repo add to add that repo locally.
  • helm dependency update and it will use your dependency file to download all the specified charts into your charts/ directory for you.
  • When helm dependency update retrieves charts, it will store them as chart archives in the charts/ directory.
  • Managing charts with requirements.yaml is a good way to easily keep charts updated, and also share requirements information throughout a team.
  • All charts are loaded by default.
  • The condition field holds one or more YAML paths (delimited by commas). If this path exists in the top parent’s values and resolves to a boolean value, the chart will be enabled or disabled based on that boolean value.
  • The tags field is a YAML list of labels to associate with this chart.
  • all charts with tags can be enabled or disabled by specifying the tag and a boolean value.
  • The --set parameter can be used as usual to alter tag and condition values.
  • Conditions (when set in values) always override tags.
  • The first condition path that exists wins and subsequent ones for that chart are ignored.
  • The keys containing the values to be imported can be specified in the parent chart’s requirements.yaml file using a YAML list. Each item in the list is a key which is imported from the child chart’s exports field.
  • specifying the key data in our import list, Helm looks in the exports field of the child chart for data key and imports its contents.
  • the parent key data is not contained in the parent’s final values. If you need to specify the parent key, use the ‘child-parent’ format.
  • To access values that are not contained in the exports key of the child chart’s values, you will need to specify the source key of the values to be imported (child) and the destination path in the parent chart’s values (parent).
  • To drop a dependency into your charts/ directory, use the helm fetch command
  • A dependency can be either a chart archive (foo-1.2.3.tgz) or an unpacked chart directory.
  • name cannot start with _ or .. Such files are ignored by the chart loader.
  • a single release is created with all the objects for the chart and its dependencies.
  • Helm Chart templates are written in the Go template language, with the addition of 50 or so add-on template functions from the Sprig library and a few other specialized functions
  • When Helm renders the charts, it will pass every file in that directory through the template engine.
  • Chart developers may supply a file called values.yaml inside of a chart. This file can contain default values.
  • Chart users may supply a YAML file that contains values. This can be provided on the command line with helm install.
  • When a user supplies custom values, these values will override the values in the chart’s values.yaml file.
  • Template files follow the standard conventions for writing Go templates
  • {{default "minio" .Values.storage}}
  • Values that are supplied via a values.yaml file (or via the --set flag) are accessible from the .Values object in a template.
  • pre-defined, are available to every template, and cannot be overridden
  • the names are case sensitive
  • Release.Name: The name of the release (not the chart)
  • Release.IsUpgrade: This is set to true if the current operation is an upgrade or rollback.
  • Release.Revision: The revision number. It begins at 1, and increments with each helm upgrade
  • Chart: The contents of the Chart.yaml
  • Files: A map-like object containing all non-special files in the chart.
  • Files can be accessed using {{index .Files "file.name"}} or using the {{.Files.Get name}} or {{.Files.GetString name}} functions.
  • .helmignore
  • access the contents of the file as []byte using {{.Files.GetBytes}}
  • Any unknown Chart.yaml fields will be dropped
  • Chart.yaml cannot be used to pass arbitrarily structured data into the template.
  • A values file is formatted in YAML.
  • A chart may include a default values.yaml file
  • be merged into the default values file.
  • The default values file included inside of a chart must be named values.yaml
  • accessible inside of templates using the .Values object
  • Values files can declare values for the top-level chart, as well as for any of the charts that are included in that chart’s charts/ directory.
  • Charts at a higher level have access to all of the variables defined beneath.
  • lower level charts cannot access things in parent charts
  • Values are namespaced, but namespaces are pruned.
  • the scope of the values has been reduced and the namespace prefix removed
  • Helm supports special “global” value.
  • a way of sharing one top-level variable with all subcharts, which is useful for things like setting metadata properties like labels.
  • If a subchart declares a global variable, that global will be passed downward (to the subchart’s subcharts), but not upward to the parent chart.
  • global variables of parent charts take precedence over the global variables from subcharts.
  • helm lint
  • A chart repository is an HTTP server that houses one or more packaged charts
  • Any HTTP server that can serve YAML files and tar files and can answer GET requests can be used as a repository server.
  • Helm does not provide tools for uploading charts to remote repository servers.
  • the only way to add a chart to $HELM_HOME/starters is to manually copy it there.
  • Helm provides a hook mechanism to allow chart developers to intervene at certain points in a release’s life cycle.
  • Execute a Job to back up a database before installing a new chart, and then execute a second job after the upgrade in order to restore data.
  • Hooks are declared as an annotation in the metadata section of a manifest
  • Hooks work like regular templates, but they have special annotations
  • pre-install
  • post-install: Executes after all resources are loaded into Kubernetes
  • pre-delete
  • post-delete: Executes on a deletion request after all of the release’s resources have been deleted.
  • pre-upgrade
  • post-upgrade
  • pre-rollback
  • post-rollback: Executes on a rollback request after all resources have been modified.
  • crd-install
  • test-success: Executes when running helm test and expects the pod to return successfully (return code == 0).
  • test-failure: Executes when running helm test and expects the pod to fail (return code != 0).
  • Hooks allow you, the chart developer, an opportunity to perform operations at strategic points in a release lifecycle
  • Tiller then loads the hook with the lowest weight first (negative to positive)
  • Tiller returns the release name (and other data) to the client
  • If the resources is a Job kind, Tiller will wait until the job successfully runs to completion.
  • if the job fails, the release will fail. This is a blocking operation, so the Helm client will pause while the Job is run.
  • If they have hook weights (see below), they are executed in weighted order. Otherwise, ordering is not guaranteed.
  • good practice to add a hook weight, and set it to 0 if weight is not important.
  • The resources that a hook creates are not tracked or managed as part of the release.
  • leave the hook resource alone.
  • To destroy such resources, you need to either write code to perform this operation in a pre-delete or post-delete hook or add "helm.sh/hook-delete-policy" annotation to the hook template file.
  • Hooks are just Kubernetes manifest files with special annotations in the metadata section
  • One resource can implement multiple hooks
  • no limit to the number of different resources that may implement a given hook.
  • When subcharts declare hooks, those are also evaluated. There is no way for a top-level chart to disable the hooks declared by subcharts.
  • Hook weights can be positive or negative numbers but must be represented as strings.
  • sort those hooks in ascending order.
  • Hook deletion policies
  • "before-hook-creation" specifies Tiller should delete the previous hook before the new hook is launched.
  • By default Tiller will wait for 60 seconds for a deleted hook to no longer exist in the API server before timing out.
  • Custom Resource Definitions (CRDs) are a special kind in Kubernetes.
  • The crd-install hook is executed very early during an installation, before the rest of the manifests are verified.
  • A common reason why the hook resource might already exist is that it was not deleted following use on a previous install/upgrade.
  • Helm uses Go templates for templating your resource files.
  • two special template functions: include and required
  • include function allows you to bring in another template, and then pass the results to other template functions.
  • The required function allows you to declare a particular values entry as required for template rendering.
  • If the value is empty, the template rendering will fail with a user submitted error message.
  • When you are working with string data, you are always safer quoting the strings than leaving them as bare words
  • Quote Strings, Don’t Quote Integers
  • when working with integers do not quote the values
  • env variables values which are expected to be string
  • to include a template, and then perform an operation on that template’s output, Helm has a special include function
  • The above includes a template called toYaml, passes it $value, and then passes the output of that template to the nindent function.
  • Go provides a way for setting template options to control behavior when a map is indexed with a key that’s not present in the map
  • The required function gives developers the ability to declare a value entry as required for template rendering.
  • The tpl function allows developers to evaluate strings as templates inside a template.
  • Rendering a external configuration file
  • (.Files.Get "conf/app.conf")
  • Image pull secrets are essentially a combination of registry, username, and password.
  • Automatically Roll Deployments When ConfigMaps or Secrets change
  • configmaps or secrets are injected as configuration files in containers
  • a restart may be required should those be updated with a subsequent helm upgrade
  • The sha256sum function can be used to ensure a deployment’s annotation section is updated if another file changes
  • checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
  • helm upgrade --recreate-pods
  • "helm.sh/resource-policy": keep
  • resources that should not be deleted when Helm runs a helm delete
  • this resource becomes orphaned. Helm will no longer manage it in any way.
  • create some reusable parts in your chart
  • In the templates/ directory, any file that begins with an underscore(_) is not expected to output a Kubernetes manifest file.
  • by convention, helper templates and partials are placed in a _helpers.tpl file.
  • The current best practice for composing a complex application from discrete parts is to create a top-level umbrella chart that exposes the global configurations, and then use the charts/ subdirectory to embed each of the components.
  • SAP’s Converged charts: These charts install SAP Converged Cloud a full OpenStack IaaS on Kubernetes. All of the charts are collected together in one GitHub repository, except for a few submodules.
  • Deis’s Workflow: This chart exposes the entire Deis PaaS system with one chart. But it’s different from the SAP chart in that this umbrella chart is built from each component, and each component is tracked in a different Git repository.
  • YAML is a superset of JSON
  • any valid JSON structure ought to be valid in YAML.
  • As a best practice, templates should follow a YAML-like syntax unless the JSON syntax substantially reduces the risk of a formatting issue.
  • There are functions in Helm that allow you to generate random data, cryptographic keys, and so on.
  • a chart repository is a location where packaged charts can be stored and shared.
  • A chart repository is an HTTP server that houses an index.yaml file and optionally some packaged charts.
  • Because a chart repository can be any HTTP server that can serve YAML and tar files and can answer GET requests, you have a plethora of options when it comes down to hosting your own chart repository.
  • It is not required that a chart package be located on the same server as the index.yaml file.
  • A valid chart repository must have an index file. The index file contains information about each chart in the chart repository.
  • The Helm project provides an open-source Helm repository server called ChartMuseum that you can host yourself.
  • $ helm repo index fantastic-charts --url https://fantastic-charts.storage.googleapis.com
  • A repository will not be added if it does not contain a valid index.yaml
  • add the repository to their helm client via the helm repo add [NAME] [URL] command with any name they would like to use to reference the repository.
  • Helm has provenance tools which help chart users verify the integrity and origin of a package.
  • Integrity is established by comparing a chart to a provenance record
  • The provenance file contains a chart’s YAML file plus several pieces of verification information
  • Chart repositories serve as a centralized collection of Helm charts.
  • Chart repositories must make it possible to serve provenance files over HTTP via a specific request, and must make them available at the same URI path as the chart.
  • We don’t want to be “the certificate authority” for all chart signers. Instead, we strongly favor a decentralized model, which is part of the reason we chose OpenPGP as our foundational technology.
  • The Keybase platform provides a public centralized repository for trust information.
  • A chart contains a number of Kubernetes resources and components that work together.
  • A test in a helm chart lives under the templates/ directory and is a pod definition that specifies a container with a given command to run.
  • The pod definition must contain one of the helm test hook annotations: helm.sh/hook: test-success or helm.sh/hook: test-failure
  • helm test
  • nest your test suite under a tests/ directory like <chart-name>/templates/tests/
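
  A condensed sketch tying several of the pieces above together; the chart name, version, and repository URL are illustrative:

    # requirements.yaml -- dynamically linked dependencies
    dependencies:
      - name: redis
        version: "5.0.0"
        repository: "https://example.com/charts"   # illustrative URL
        condition: redis.enabled
        tags:
          - cache
    ---
    # templates/post-install-job.yaml -- a hook declared via annotations
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: "{{ .Release.Name }}-post-install"
      annotations:
        "helm.sh/hook": post-install
        "helm.sh/hook-weight": "0"                  # weights must be strings
        "helm.sh/hook-delete-policy": before-hook-creation
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: post-install
              image: alpine:3.9
              command: ["sh", "-c", "echo release installed"]
    ---
    # templates/deployment.yaml (fragment) -- roll Pods when config changes
    # annotations:
    #   checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
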
張 旭

Understanding Nginx Server and Location Block Selection Algorithms | DigitalOcean - 0 views

  • A server block is a subset of Nginx’s configuration that defines a virtual server used to handle requests of a defined type. Administrators often configure multiple server blocks and decide which block should handle which connection based on the requested domain name, port, and IP address.
  • A location block lives within a server block and is used to define how Nginx should handle requests for different resources and URIs for the parent server. The URI space can be subdivided in whatever way the administrator likes using these blocks. It is an extremely flexible model.
  • Nginx logically divides the configurations meant to serve different content into blocks, which live in a hierarchical structure. Each time a client request is made, Nginx begins a process of determining which configuration blocks should be used to handle the request.
  • Nginx is one of the most popular web servers in the world. It can successfully handle high loads with many concurrent client connections, and can easily function as a web server, a mail server, or a reverse proxy server.
  • The main server block directives that Nginx is concerned with during this process are the listen directive, and the server_name directive.
  • The listen directive typically defines which IP address and port that the server block will respond to.
  • 0.0.0.0:8080 if Nginx is being run by a normal, non-root user
  • Nginx translates all “incomplete” listen directives by substituting missing values with their default values so that each block can be evaluated by its IP address and port.
  • In any case, the port must be matched exactly.
  • If there are multiple server blocks with the same level of specificity matching, Nginx then begins to evaluate the server_name directive of each server block.
  • Nginx will only evaluate the server_name directive when it needs to distinguish between server blocks that match to the same level of specificity in the listen directive.
  • Nginx checks the request’s “Host” header. This value holds the domain or IP address that the client was actually trying to reach.
  • Nginx will first try to find a server block with a server_name that matches the value in the “Host” header of the request exactly.
  • If no exact match is found, Nginx will then try to find a server block with a server_name that matches using a leading wildcard (indicated by a * at the beginning of the name in the config).
  • If no match is found using a leading wildcard, Nginx then looks for a server block with a server_name that matches using a trailing wildcard (indicated by a server name ending with a * in the config)
  • If no match is found using a trailing wildcard, Nginx then evaluates server blocks that define the server_name using regular expressions (indicated by a ~ before the name).
  • If no regular expression match is found, Nginx then selects the default server block for that IP address and port.
  • There can be only one default_server declaration per each IP address/port combination.
  • Location blocks live within server blocks (or other location blocks) and are used to decide how to process the request URI (the part of the request that comes after the domain name or IP address/port).
  • If no modifiers are present, the location is interpreted as a prefix match.
  • =: If an equal sign is used, this block will be considered a match if the request URI exactly matches the location given.
  • ~: If a tilde modifier is present, this location will be interpreted as a case-sensitive regular expression match.
  • ~*: If a tilde and asterisk modifier is used, the location block will be interpreted as a case-insensitive regular expression match.
  • ^~: If a carat and tilde modifier is present, and if this block is selected as the best non-regular expression match, regular expression matching will not take place.
  • Keep in mind that if this block is selected and the request is fulfilled using an index page, an internal redirect will take place to another location that will be the actual handler of the request
  • Keeping in mind the types of location declarations we described above, Nginx evaluates the possible location contexts by comparing the request URI to each of the locations.
  • Nginx begins by checking all prefix-based location matches (all location types not involving a regular expression).
  • First, Nginx looks for an exact match.
  • If no exact (with the = modifier) location block matches are found, Nginx then moves on to evaluating non-exact prefixes.
  • After the longest matching prefix location is determined and stored, Nginx moves on to evaluating the regular expression locations (both case sensitive and insensitive).
  • by default, Nginx will serve regular expression matches in preference to prefix matches.
  • regular expression matches within the longest prefix match will “jump the line” when Nginx evaluates regex locations.
  • The exceptions to the “only one location block” rule may have implications on how the request is actually served and may not align with the expectations you had when designing your location blocks.
  • The index directive always leads to an internal redirect if it is used to handle the request.
  • In the case above, if you really need the execution to stay in the first block, you will have to come up with a different method of satisfying the request to the directory.
  • one way of preventing an index from switching contexts, but it’s probably not useful for most configurations
  • the try_files directive. This directive tells Nginx to check for the existence of a named set of files or directories.
  • the rewrite directive. When using the last parameter with the rewrite directive, or when using no parameter at all, Nginx will search for a new matching location based on the results of the rewrite.
  • The error_page directive can lead to an internal redirect similar to that created by try_files.
  • when certain status codes are encountered.
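
  A compact configuration illustrating the selection rules above; the domains and paths are hypothetical:

    server {
        listen 80 default_server;                # fallback for unmatched Host headers
        server_name example.com *.example.com;   # hypothetical names

        # Exact match: selected immediately, search stops.
        location = /favicon.ico {
            return 204;
        }

        # ^~ prefix: if this is the longest matching prefix,
        # regular expression locations are skipped entirely.
        location ^~ /static/ {
            root /var/www;
        }

        # Case-insensitive regex: normally preferred over plain prefix matches.
        location ~* \.(jpg|png)$ {
            expires 30d;
        }

        # Plain prefix: used only when no regex matches.
        location / {
            try_files $uri $uri/ /index.html;
        }
    }
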