Larvata: Group items tagged "settings"


張 旭

Replication - MongoDB Manual - 0 views

  • A replica set in MongoDB is a group of mongod processes that maintain the same data set.
  • Replica sets provide redundancy and high availability, and are the basis for all production deployments.
  • With multiple copies of data on different database servers, replication provides a level of fault tolerance against the loss of a single database server.
  • replication can provide increased read capacity as clients can send read operations to different servers.
  • A replica set is a group of mongod instances that maintain the same data set.
  • A replica set contains several data bearing nodes and optionally one arbiter node.
  • one and only one member is deemed the primary node, while the other nodes are deemed secondary nodes.
  • A replica set can have only one primary capable of confirming writes with { w: "majority" } write concern; although in some circumstances, another mongod instance may transiently believe itself to also be primary.
  • The secondaries replicate the primary’s oplog and apply the operations to their data sets such that the secondaries’ data sets reflect the primary’s data set
  • add a mongod instance to a replica set as an arbiter. An arbiter participates in elections but does not hold data
  • An arbiter will always be an arbiter whereas a primary may step down and become a secondary and a secondary may become the primary during an election.
  • Secondaries replicate the primary’s oplog and apply the operations to their data sets asynchronously.
  • These slow oplog messages are logged for the secondaries in the diagnostic log under the REPL component with the text applied op: <oplog entry> took <num>ms.
  • Replication lag refers to the amount of time that it takes to copy (i.e. replicate) a write operation on the primary to a secondary.
  • When a primary does not communicate with the other members of the set for more than the configured electionTimeoutMillis period (10 seconds by default), an eligible secondary calls for an election to nominate itself as the new primary.
  • The replica set cannot process write operations until the election completes successfully.
  • The median time before a cluster elects a new primary should not typically exceed 12 seconds, assuming default replica configuration settings.
  • Factors such as network latency may extend the time required for replica set elections to complete, which in turn affects the amount of time your cluster may operate without a primary.
  • Your application connection logic should include tolerance for automatic failovers and the subsequent elections.
  • MongoDB drivers can detect the loss of the primary and automatically retry certain write operations a single time, providing additional built-in handling of automatic failovers and elections
  • By default, clients read from the primary [1]; however, clients can specify a read preference to send read operations to secondaries.
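As a concrete companion to the annotations above, here is a minimal sketch of the replication block of a mongod.conf file, assuming a replica set named rs0 (the name and bind address are illustrative only):

```yaml
# mongod.conf (minimal sketch; "rs0" and the bind address are illustrative)
replication:
  replSetName: rs0        # every member of the replica set must be started with the same name
net:
  port: 27017
  bindIp: 0.0.0.0         # illustration only; restrict and enable authentication in production
```

After the members are started with this configuration, the set is initiated once from the mongo shell with rs.initiate(); settings such as electionTimeoutMillis (10 seconds by default, as quoted above) live in the replica set configuration document applied with rs.reconfig(), not in this file.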
張 旭

Active Record Associations - Ruby on Rails Guides - 0 views

  • With Active Record associations, we can streamline these - and other - operations by declaratively telling Rails that there is a connection between the two models.
  • belongs_to has_one has_many has_many :through has_one :through has_and_belongs_to_many
  • an association is a connection between two Active Record models
  • Associations are implemented using macro-style calls, so that you can declaratively add features to your models
  • A belongs_to association sets up a one-to-one connection with another model, such that each instance of the declaring model "belongs to" one instance of the other model.
  • belongs_to associations must use the singular term.
  • belongs_to
  • A has_one association also sets up a one-to-one connection with another model, but with somewhat different semantics (and consequences).
  • This association indicates that each instance of a model contains or possesses one instance of another model
  • belongs_to
  • A has_many association indicates a one-to-many connection with another model.
  • This association indicates that each instance of the model has zero or more instances of another model.
  • belongs_to
  • A has_many :through association is often used to set up a many-to-many connection with another model
  • This association indicates that the declaring model can be matched with zero or more instances of another model by proceeding through a third model.
  • through:
  • through:
  • The collection of join models can be managed via the API
  • new join models are created for newly associated objects, and if some are gone their rows are deleted.
  • The has_many :through association is also useful for setting up "shortcuts" through nested has_many associations
  • A has_one :through association sets up a one-to-one connection with another model. This association indicates that the declaring model can be matched with one instance of another model by proceeding through a third model.
  • A has_and_belongs_to_many association creates a direct many-to-many connection with another model, with no intervening model.
  • id: false
  • The has_one relationship says that one of something is yours
  • using t.references :supplier instead.
  • declare a many-to-many relationship is to use has_many :through. This makes the association indirectly, through a join model
  • set up a has_many :through relationship if you need to work with the relationship model as an independent entity
  • set up a has_and_belongs_to_many relationship (though you'll need to remember to create the joining table in the database).
  • use has_many :through if you need validations, callbacks, or extra attributes on the join model
  • With polymorphic associations, a model can belong to more than one other model, on a single association.
  • belongs_to :imageable, polymorphic: true
  • a polymorphic belongs_to declaration as setting up an interface that any other model can use.
    • 張 旭
       
      The _id column records the foreign key id for the different types; the _type column records the table (class) name of the different types.
  • In designing a data model, you will sometimes find a model that should have a relation to itself
  • add a references column to the model itself
  • Controlling caching Avoiding name collisions Updating the schema Controlling association scope Bi-directional associations
  • All of the association methods are built around caching, which keeps the result of the most recent query available for further operations.
  • it is a bad idea to give an association a name that is already used for an instance method of ActiveRecord::Base. The association method would override the base method and break things.
  • You are responsible for maintaining your database schema to match your associations.
  • belongs_to associations you need to create foreign keys
  • has_and_belongs_to_many associations you need to create the appropriate join table
  • If you create an association some time after you build the underlying model, you need to remember to create an add_column migration to provide the necessary foreign key.
  • Active Record creates the name by using the lexical order of the class names
  • So a join between customer and order models will give the default join table name of "customers_orders" because "c" outranks "o" in lexical ordering.
  • For example, one would expect the tables "paper_boxes" and "papers" to generate a join table name of "papers_paper_boxes" because of the length of the name "paper_boxes", but it in fact generates a join table name of "paper_boxes_papers" (because the underscore '_' is lexicographically less than 's' in common encodings).
  • id: false
  • pass id: false to create_table because that table does not represent a model
  • By default, associations look for objects only within the current module's scope.
  • will work fine, because both the Supplier and the Account class are defined within the same scope.
  • To associate a model with a model in a different namespace, you must specify the complete class name in your association declaration:
  • class_name
  • class_name
  • Active Record provides the :inverse_of option
    • 張 旭
       
      This means that on the first comparison the two first_name values are the same; but after modifying first_name through the c instance, comparing again shows they are no longer equal, because the two are different objects in memory.
  • preventing inconsistencies and making your application more efficient
  • Every association will attempt to automatically find the inverse association and set the :inverse_of option heuristically (based on the association name)
  • In database terms, this association says that this class contains the foreign key.
  • In all of these methods, association is replaced with the symbol passed as the first argument to belongs_to.
  • (force_reload = false)
  • The association method returns the associated object, if any. If no associated object is found, it returns nil.
  • the cached version will be returned.
  • The association= method assigns an associated object to this object.
  • Behind the scenes, this means extracting the primary key from the associate object and setting this object's foreign key to the same value.
  • The build_association method returns a new object of the associated type
  • but the associated object will not yet be saved.
  • The create_association method returns a new object of the associated type
  • once it passes all of the validations specified on the associated model, the associated object will be saved
  • raises ActiveRecord::RecordInvalid if the record is invalid.
  • dependent
  • counter_cache
  • :autosave :class_name :counter_cache :dependent :foreign_key :inverse_of :polymorphic :touch :validate
  • finding the number of belonging objects more efficient.
  • Although the :counter_cache option is specified on the model that includes the belongs_to declaration, the actual column must be added to the associated model.
  • add a column named orders_count to the Customer model.
  • :destroy, when the object is destroyed, destroy will be called on its associated objects.
  • deleted directly from the database without calling their destroy method.
  • Rails will not create foreign key columns for you
  • The :inverse_of option specifies the name of the has_many or has_one association that is the inverse of this association
  • set the :touch option to :true, then the updated_at or updated_on timestamp on the associated object will be set to the current time whenever this object is saved or destroyed
  • specify a particular timestamp attribute to update
  • If you set the :validate option to true, then associated objects will be validated whenever you save this object
  • By default, this is false: associated objects will not be validated when this object is saved.
  • where includes readonly select
  • make your code somewhat more efficient
  • no need to use includes for immediate associations
  • will be read-only when retrieved via the association
  • The select method lets you override the SQL SELECT clause that is used to retrieve data about the associated object
  • using the association.nil?
  • Assigning an object to a belongs_to association does not automatically save the object. It does not save the associated object either.
  • In database terms, this association says that the other class contains the foreign key.
  • the cached version will be returned.
  • :as :autosave :class_name :dependent :foreign_key :inverse_of :primary_key :source :source_type :through :validate
  • Setting the :as option indicates that this is a polymorphic association
  • :nullify causes the foreign key to be set to NULL. Callbacks are not executed.
  • It's necessary not to set or leave :nullify option for those associations that have NOT NULL database constraints.
  • The :source_type option specifies the source association type for a has_one :through association that proceeds through a polymorphic association.
  • The :source option specifies the source association name for a has_one :through association.
  • The :through option specifies a join model through which to perform the query
  • more efficient by including representatives in the association from suppliers to accounts
  • When you assign an object to a has_one association, that object is automatically saved (in order to update its foreign key).
  • If either of these saves fails due to validation errors, then the assignment statement returns false and the assignment itself is cancelled.
  • If the parent object (the one declaring the has_one association) is unsaved (that is, new_record? returns true) then the child objects are not saved.
  • If you want to assign an object to a has_one association without saving the object, use the association.build method
  • collection(force_reload = false) collection<<(object, ...) collection.delete(object, ...) collection.destroy(object, ...) collection=(objects) collection_singular_ids collection_singular_ids=(ids) collection.clear collection.empty? collection.size collection.find(...) collection.where(...) collection.exists?(...) collection.build(attributes = {}, ...) collection.create(attributes = {}) collection.create!(attributes = {})
  • In all of these methods, collection is replaced with the symbol passed as the first argument to has_many, and collection_singular is replaced with the singularized version of that symbol.
  • The collection<< method adds one or more objects to the collection by setting their foreign keys to the primary key of the calling model
  • The collection.delete method removes one or more objects from the collection by setting their foreign keys to NULL.
  • objects will be destroyed if they're associated with dependent: :destroy, and deleted if they're associated with dependent: :delete_all
  • The collection.destroy method removes one or more objects from the collection by running destroy on each object.
  • The collection_singular_ids method returns an array of the ids of the objects in the collection.
  • The collection_singular_ids= method makes the collection contain only the objects identified by the supplied primary key values, by adding and deleting as appropriate
  • The default strategy for has_many :through associations is delete_all, and for has_many associations is to set the foreign keys to NULL.
  • The collection.clear method removes all objects from the collection according to the strategy specified by the dependent option
  • uses the same syntax and options as ActiveRecord::Base.find
  • The collection.where method finds objects within the collection based on the conditions supplied but the objects are loaded lazily meaning that the database is queried only when the object(s) are accessed.
  • The collection.build method returns one or more new objects of the associated type. These objects will be instantiated from the passed attributes, and the link through their foreign key will be created, but the associated objects will not yet be saved.
  • The collection.create method returns a new object of the associated type. This object will be instantiated from the passed attributes, the link through its foreign key will be created, and, once it passes all of the validations specified on the associated model, the associated object will be saved.
  • :as :autosave :class_name :dependent :foreign_key :inverse_of :primary_key :source :source_type :through :validate
  • :delete_all causes all the associated objects to be deleted directly from the database (so callbacks will not execute)
  • :nullify causes the foreign keys to be set to NULL. Callbacks are not executed.
  • where includes readonly select
  • :conditions :through :polymorphic :foreign_key
  • By convention, Rails assumes that the column used to hold the primary key of the association is id. You can override this and explicitly specify the primary key with the :primary_key option.
  • The :source option specifies the source association name for a has_many :through association.
  • You only need to use this option if the name of the source association cannot be automatically inferred from the association name.
  • The :source_type option specifies the source association type for a has_many :through association that proceeds through a polymorphic association.
  • The :through option specifies a join model through which to perform the query.
  • has_many :through associations provide a way to implement many-to-many relationships,
  • By default, this is true: associated objects will be validated when this object is saved.
  • where extending group includes limit offset order readonly select uniq
  • If you use a hash-style where option, then record creation via this association will be automatically scoped using the hash
  • The extending method specifies a named module to extend the association proxy.
  • Association extensions
  • The group method supplies an attribute name to group the result set by, using a GROUP BY clause in the finder SQL.
  • has_many :line_items, -> { group 'orders.id' }, through: :orders
  • more efficient by including line items in the association from customers to orders
  • The limit method lets you restrict the total number of objects that will be fetched through an association.
  • The offset method lets you specify the starting offset for fetching objects via an association
  • The order method dictates the order in which associated objects will be received (in the syntax used by an SQL ORDER BY clause).
  • Use the distinct method to keep the collection free of duplicates.
  • mostly useful together with the :through option
  • -> { distinct }
  • .all.inspect
  • If you want to make sure that, upon insertion, all of the records in the persisted association are distinct (so that you can be sure that when you inspect the association you will never find duplicate records), you should add a unique index on the table itself
  • unique: true
  • Do not attempt to use include? to enforce distinctness in an association.
  • multiple users could be attempting this at the same time
  • checking for uniqueness using something like include? is subject to race conditions
  • When you assign an object to a has_many association, that object is automatically saved (in order to update its foreign key).
  • If any of these saves fails due to validation errors, then the assignment statement returns false and the assignment itself is cancelled.
  • If the parent object (the one declaring the has_many association) is unsaved (that is, new_record? returns true) then the child objects are not saved when they are added
  • All unsaved members of the association will automatically be saved when the parent is saved.
  • assign an object to a has_many association without saving the object, use the collection.build method
  • collection(force_reload = false) collection<<(object, ...) collection.delete(object, ...) collection.destroy(object, ...) collection=(objects) collection_singular_ids collection_singular_ids=(ids) collection.clear collection.empty? collection.size collection.find(...) collection.where(...) collection.exists?(...) collection.build(attributes = {}) collection.create(attributes = {}) collection.create!(attributes = {})
  • If the join table for a has_and_belongs_to_many association has additional columns beyond the two foreign keys, these columns will be added as attributes to records retrieved via that association.
  • Records returned with additional attributes will always be read-only
  • If you require this sort of complex behavior on the table that joins two models in a many-to-many relationship, you should use a has_many :through association instead of has_and_belongs_to_many.
  • aliased as collection.concat and collection.push.
  • The collection.delete method removes one or more objects from the collection by deleting records in the join table
  • not destroy the objects
  • The collection.destroy method removes one or more objects from the collection by running destroy on each record in the join table, including running callbacks.
  • not destroy the objects.
  • The collection.clear method removes every object from the collection by deleting the rows from the joining table.
  • not destroy the associated objects.
  • The collection.find method finds objects within the collection. It uses the same syntax and options as ActiveRecord::Base.find.
  • The collection.where method finds objects within the collection based on the conditions supplied but the objects are loaded lazily meaning that the database is queried only when the object(s) are accessed.
  • The collection.exists? method checks whether an object meeting the supplied conditions exists in the collection.
  • The collection.build method returns a new object of the associated type.
  • the associated object will not yet be saved.
  • the associated object will be saved.
  • The collection.create method returns a new object of the associated type.
  • it passes all of the validations specified on the associated model
  • :association_foreign_key :autosave :class_name :foreign_key :join_table :validate
  • The :foreign_key and :association_foreign_key options are useful when setting up a many-to-many self-join.
  • Rails assumes that the column in the join table used to hold the foreign key pointing to the other model is the name of that model with the suffix _id added.
  • If you set the :autosave option to true, Rails will save any loaded members and destroy members that are marked for destruction whenever you save the parent object.
  • By convention, Rails assumes that the column in the join table used to hold the foreign key pointing to this model is the name of this model with the suffix _id added.
  • By default, this is true: associated objects will be validated when this object is saved.
  • where extending group includes limit offset order readonly select uniq
  • set conditions via a hash
  • In this case, using @parts.assemblies.create or @parts.assemblies.build will create assemblies where the factory column has the value "Seattle"
  • If you use a hash-style where, then record creation via this association will be automatically scoped using the hash
  • using a GROUP BY clause in the finder SQL.
  • Use the uniq method to remove duplicates from the collection.
  • assign an object to a has_and_belongs_to_many association, that object is automatically saved (in order to update the join table).
  • If any of these saves fails due to validation errors, then the assignment statement returns false and the assignment itself is cancelled.
  • If the parent object (the one declaring the has_and_belongs_to_many association) is unsaved (that is, new_record? returns true) then the child objects are not saved when they are added.
  • If you want to assign an object to a has_and_belongs_to_many association without saving the object, use the collection.build method.
  • Normal callbacks hook into the life cycle of Active Record objects, allowing you to work with those objects at various points
  • define association callbacks by adding options to the association declaration
  • Rails passes the object being added or removed to the callback.
  • stack callbacks on a single event by passing them as an array
  • If a before_add callback throws an exception, the object does not get added to the collection.
  • if a before_remove callback throws an exception, the object does not get removed from the collection
  • extend these objects through anonymous modules, adding new finders, creators, or other methods.
  • order_number
  • use a named extension module
  • proxy_association.owner returns the object that the association is a part of.
張 旭

Helm | - 0 views

  • Helm will figure out where to install Tiller by reading your Kubernetes configuration file (usually $HOME/.kube/config). This is the same file that kubectl uses.
  • kubectl cluster-info
  • Role-Based Access Control (RBAC) enabled
  • initialize the local CLI
  • install Tiller into your Kubernetes cluster
  • helm install
  • helm init --upgrade
  • By default, when Tiller is installed, it does not have authentication enabled.
  • helm repo update
  • Without a max history set the history is kept indefinitely, leaving a large number of records for helm and tiller to maintain.
  • helm init --upgrade
  • Whenever you install a chart, a new release is created.
  • one chart can be installed multiple times into the same cluster. And each can be independently managed and upgraded.
  • helm list function will show you a list of all deployed releases.
  • helm delete
  • helm status
  • you can audit a cluster’s history, and even undelete a release (with helm rollback).
  • the Helm server (Tiller).
  • The Helm client (helm)
  • brew install kubernetes-helm
  • Tiller, the server portion of Helm, typically runs inside of your Kubernetes cluster.
  • it can also be run locally, and configured to talk to a remote Kubernetes cluster.
  • Role-Based Access Control - RBAC for short
  • create a service account for Tiller with the right roles and permissions to access resources.
  • run Tiller in an RBAC-enabled Kubernetes cluster.
  • run kubectl get pods --namespace kube-system and see Tiller running.
  • helm inspect
  • Helm will look for Tiller in the kube-system namespace unless --tiller-namespace or TILLER_NAMESPACE is set.
  • For development, it is sometimes easier to work on Tiller locally, and configure it to connect to a remote Kubernetes cluster.
  • even when running locally, Tiller will store release configuration in ConfigMaps inside of Kubernetes.
  • helm version should show you both the client and server version.
  • Tiller stores its data in Kubernetes ConfigMaps, you can safely delete and re-install Tiller without worrying about losing any data.
  • helm reset
  • The --node-selectors flag allows us to specify the node labels required for scheduling the Tiller pod.
  • --override allows you to specify properties of Tiller’s deployment manifest.
  • helm init --override manipulates the specified properties of the final manifest (there is no “values” file).
  • The --output flag allows us to skip the installation of Tiller’s deployment manifest and simply output the deployment manifest to stdout in either JSON or YAML format.
  • By default, tiller stores release information in ConfigMaps in the namespace where it is running.
  • switch from the default backend to the secrets backend, you’ll have to do the migration for this on your own.
  • a beta SQL storage backend that stores release information in an SQL database (only postgres has been tested so far).
  • Once you have the Helm Client and Tiller successfully installed, you can move on to using Helm to manage charts.
  • Helm requires that kubelet have access to a copy of the socat program to proxy connections to the Tiller API.
  • A Release is an instance of a chart running in a Kubernetes cluster. One chart can often be installed many times into the same cluster.
  • helm init --client-only
  • helm init --dry-run --debug
  • A panic in Tiller is almost always the result of a failure to negotiate with the Kubernetes API server
  • Tiller and Helm have to negotiate a common version to make sure that they can safely communicate without breaking API assumptions
  • helm delete --purge
  • Helm stores some files in $HELM_HOME, which is located by default in ~/.helm
  • A Chart is a Helm package. It contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster.
  • Think of it like the Kubernetes equivalent of a Homebrew formula, an Apt dpkg, or a Yum RPM file.
  • A Repository is the place where charts can be collected and shared.
  • Set the $HELM_HOME environment variable
  • each time it is installed, a new release is created.
  • Helm installs charts into Kubernetes, creating a new release for each installation. And to find new charts, you can search Helm chart repositories.
  • chart repository is named stable by default
  • helm search shows you all of the available charts
  • helm inspect
  • To install a new package, use the helm install command. At its simplest, it takes only one argument: The name of the chart.
  • If you want to use your own release name, simply use the --name flag on helm install
  • additional configuration steps you can or should take.
  • Helm does not wait until all of the resources are running before it exits. Many charts require Docker images that are over 600M in size, and may take a long time to install into the cluster.
  • helm status
  • helm inspect values
  • helm inspect values stable/mariadb
  • override any of these settings in a YAML formatted file, and then pass that file during installation (a sketch of such a file follows this list).
  • helm install -f config.yaml stable/mariadb
  • --values (or -f): Specify a YAML file with overrides.
  • --set (and its variants --set-string and --set-file): Specify overrides on the command line.
  • Values that have been --set can be cleared by running helm upgrade with --reset-values specified.
  • Chart designers are encouraged to consider the --set usage when designing the format of a values.yaml file.
  • --set-file key=filepath is another variant of --set. It reads the file and uses its content as a value.
  • inject a multi-line text into values without dealing with indentation in YAML.
  • An unpacked chart directory
  • When a new version of a chart is released, or when you want to change the configuration of your release, you can use the helm upgrade command.
  • Kubernetes charts can be large and complex, Helm tries to perform the least invasive upgrade.
  • It will only update things that have changed since the last release
  • $ helm upgrade -f panda.yaml happy-panda stable/mariadb
  • deployment
  • If both are used, --set values are merged into --values with higher precedence.
  • The helm get command is a useful tool for looking at a release in the cluster.
  • helm rollback
  • A release version is an incremental revision. Every time an install, upgrade, or rollback happens, the revision number is incremented by 1.
  • helm history
  • a release name cannot be re-used.
  • you can roll back a deleted release, and have it re-activate.
  • helm repo list
  • helm repo add
  • helm repo update
  • The Chart Development Guide explains how to develop your own charts.
  • helm create
  • helm lint
  • helm package
  • Charts that are archived can be loaded into chart repositories.
  • chart repository server
  • Tiller can be installed into any namespace.
  • Limiting Tiller to only be able to install into specific namespaces and/or resource types is controlled by Kubernetes RBAC roles and rolebindings
  • Release names are unique PER TILLER INSTANCE
  • Charts should only contain resources that exist in a single namespace.
  • not recommended to have multiple Tillers configured to manage resources in the same namespace.
  • a client-side Helm plugin. A plugin is a tool that can be accessed through the helm CLI, but which is not part of the built-in Helm codebase.
  • Helm plugins are add-on tools that integrate seamlessly with Helm. They provide a way to extend the core feature set of Helm, but without requiring every new feature to be written in Go and added to the core tool.
  • Helm plugins live in $(helm home)/plugins
  • The Helm plugin model is partially modeled on Git’s plugin model
  • helm is referred to as the porcelain layer, with plugins being the plumbing.
  • helm plugin install https://github.com/technosophos/helm-template
  • command is the command that this plugin will execute when it is called.
  • Environment variables are interpolated before the plugin is executed.
  • The command itself is not executed in a shell. So you can’t oneline a shell script.
  • Helm is able to fetch Charts using HTTP/S
  • Variables like KUBECONFIG are set for the plugin if they are set in the outer environment.
  • In Kubernetes, granting a role to an application-specific service account is a best practice to ensure that your application is operating in the scope that you have specified.
  • restrict Tiller’s capabilities to install resources to certain namespaces, or to grant a Helm client running access to a Tiller instance.
  • Service account with cluster-admin role (see the manifest sketch after this list)
  • The cluster-admin role is created by default in a Kubernetes cluster
  • Deploy Tiller in a namespace, restricted to deploying resources only in that namespace
  • Deploy Tiller in a namespace, restricted to deploying resources in another namespace
  • When running a Helm client in a pod, in order for the Helm client to talk to a Tiller instance, it will need certain privileges to be granted.
  • SSL Between Helm and Tiller
  • The Tiller authentication model uses client-side SSL certificates.
  • creating an internal CA, and using both the cryptographic and identity functions of SSL.
  • Helm is a powerful and flexible package-management and operations tool for Kubernetes.
  • default installation applies no security configurations
  • with a cluster that is well-secured in a private network with no data-sharing or no other users or teams.
  • With great power comes great responsibility.
  • Choose the Best Practices you should apply to your helm installation
  • Role-based access control, or RBAC
  • Tiller’s gRPC endpoint and its usage by Helm
  • Kubernetes employ a role-based access control (or RBAC) system (as do modern operating systems) to help mitigate the damage that can be done if credentials are misused or bugs exist.
  • In the default installation the gRPC endpoint that Tiller offers is available inside the cluster (not external to the cluster) without authentication configuration applied.
  • Tiller stores its release information in ConfigMaps. We suggest changing the default to Secrets.
  • release information
  • charts
  • charts are a kind of package that not only installs containers you may or may not have validated yourself, but may also install into more than one namespace.
  • As with all shared software, in a controlled or shared environment you must validate all software you install yourself before you install it.
  • Helm’s provenance tools to ensure the provenance and integrity of charts
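As a sketch of the values-override workflow highlighted above (helm inspect values, then helm install -f config.yaml stable/mariadb), a config.yaml might look like the following; the key names are assumptions and should be checked against helm inspect values stable/mariadb:

```yaml
# config.yaml - hypothetical overrides for the stable/mariadb chart
# (key names are assumptions; verify with `helm inspect values stable/mariadb`)
mariadbUser: user0
mariadbDatabase: user0db
```

Passing this file with helm install -f config.yaml stable/mariadb overrides only these keys and leaves the chart's other defaults in place.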
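For the RBAC annotations above ("Service account with cluster-admin role"), this is a minimal sketch of the manifests commonly applied before running helm init --service-account tiller; it grants Tiller cluster-admin, which, per the security notes above, is only appropriate for a well-secured, single-team cluster:

```yaml
# tiller-rbac.yaml - minimal sketch: a Tiller service account bound to cluster-admin
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin        # created by default in a Kubernetes cluster
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```

The namespace-restricted deployments described above would instead bind the service account to a Role and RoleBinding scoped to the target namespace.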
張 旭

Kubernetes Deployments: The Ultimate Guide - Semaphore - 1 views

  • Continuous integration gives you confidence in your code. To extend that confidence to the release process, your deployment operations need to come with a safety belt.
  • these Kubernetes objects ensure that you can progressively deploy, roll back and scale your applications without downtime.
  • A pod is just a group of containers (it can be a group of one container) that run on the same machine, and share a few things together.
  • the containers within a pod can communicate with each other over localhost
  • From a network perspective, all the processes in these containers are local.
  • we can never create a standalone container: the closest we can do is create a pod, with a single container in it.
  • Kubernetes is a declarative system (as opposed to imperative systems).
  • All we can do, is describe what we want to have, and wait for Kubernetes to take action to reconcile what we have, with what we want to have.
  • In other words, we can say, “I would like a 40-feet long blue container with yellow doors“, and Kubernetes will find such a container for us. If it doesn’t exist, it will build it; if there is already one but it’s green with red doors, it will paint it for us; if there is already a container of the right size and color, Kubernetes will do nothing, since what we have already matches what we want.
  • The specification of a replica set looks very much like the specification of a pod, except that it carries a number, indicating how many replicas
  • What happens if we change that definition? Suddenly, there are zero pods matching the new specification.
  • the creation of new pods could happen in a more gradual manner.
  • the specification for a deployment looks very much like the one for a replica set: it features a pod specification, and a number of replicas.
  • Deployments, however, don’t create or delete pods directly.
  • When we update a deployment and adjust the number of replicas, it passes that update down to the replica set.
  • When we update the pod specification, the deployment creates a new replica set with the updated pod specification. That replica set has an initial size of zero. Then, the size of that replica set is progressively increased, while decreasing the size of the other replica set.
  • we are going to fade in (turn up the volume) on the new replica set, while we fade out (turn down the volume) on the old one.
  • During the whole process, requests are sent to pods of both the old and new replica sets, without any downtime for our users.
  • A readiness probe is a test that we add to a container specification.
  • Kubernetes supports three ways of implementing readiness probes: running a command inside a container; making an HTTP(S) request against a container; or opening a TCP socket against a container (see the deployment sketch after this list).
  • When we roll out a new version, Kubernetes will wait for the new pod to mark itself as “ready” before moving on to the next one.
  • If there is no readiness probe, then the container is considered as ready, as long as it could be started.
  • MaxSurge indicates how many extra pods we are willing to run during a rolling update, while MaxUnavailable indicates how many pods we can lose during the rolling update.
  • Setting MaxUnavailable to 0 means, “do not shutdown any old pod before a new one is up and ready to serve traffic“.
  • Setting MaxSurge to 100% means, “immediately start all the new pods“, implying that we have enough spare capacity on our cluster, and that we want to go as fast as possible.
  • kubectl rollout undo deployment web
  • the replica set doesn’t look at the pods’ specifications, but only at their labels.
  • A replica set contains a selector, which is a logical expression that “selects” (just like a SELECT query in SQL) a number of pods.
  • it is absolutely possible to manually create pods with these labels, but running a different image (or with different settings), and fool our replica set.
  • Selectors are also used by services, which act as the load balancers for Kubernetes traffic, internal and external.
  • internal IP address (denoted by the name ClusterIP)
  • during a rollout, the deployment doesn’t reconfigure or inform the load balancer that pods are started and stopped. It happens automatically through the selector of the service associated to the load balancer.
  • a pod is added as a valid endpoint for a service only if all its containers pass their readiness check. In other words, a pod starts receiving traffic only once it’s actually ready for it.
  • In blue/green deployment, we want to instantly switch over all the traffic from the old version to the new, instead of doing it progressively
  • We can achieve blue/green deployment by creating multiple deployments (in the Kubernetes sense), and then switching from one to another by changing the selector of our service
  • kubectl label pods -l app=blue,version=v1.5 status=enabled
  • kubectl label pods -l app=blue,version=v1.4 status-
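Tying together the readiness-probe and MaxSurge/MaxUnavailable annotations above, here is a minimal Deployment sketch. The name web matches the kubectl rollout undo deployment web command quoted above; the image and port are hypothetical:

```yaml
# deployment.yaml - minimal sketch (image and port are hypothetical)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0     # never shut down an old pod before a new one is ready
      maxSurge: 1           # allow at most one extra pod during the rollout
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0      # hypothetical image
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:                  # the HTTP(S) variant of a readiness probe
              path: /healthz
              port: 8080
            periodSeconds: 5
```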
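And for the blue/green annotations above, the switch-over happens in the Service selector rather than in the Deployment. A minimal sketch using the app=blue and status=enabled labels from the kubectl label commands quoted above (port numbers are hypothetical):

```yaml
# service.yaml - minimal sketch; traffic goes only to pods matching the selector
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP           # the internal IP address mentioned above
  selector:
    app: blue
    status: enabled         # flip this label (or the selector) to switch versions instantly
  ports:
    - port: 80
      targetPort: 8080
```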
張 旭

Understanding Nginx HTTP Proxying, Load Balancing, Buffering, and Caching | DigitalOcean - 0 views

  • allow Nginx to pass requests off to backend http servers for further processing
  • Nginx is often set up as a reverse proxy solution to help scale out infrastructure or to pass requests to other servers that are not designed to handle large client loads
  • explore buffering and caching to improve the performance of proxying operations for clients
  • Nginx is built to handle many concurrent connections at the same time.
  • provides you with flexibility in easily adding backend servers or taking them down as needed for maintenance
  • Proxying in Nginx is accomplished by manipulating a request aimed at the Nginx server and passing it to other servers for the actual processing
  • The servers that Nginx proxies requests to are known as upstream servers.
  • Nginx can proxy requests to servers that communicate using the http(s), FastCGI, SCGI, uwsgi, or memcached protocols through separate sets of directives for each type of proxy
  • When a request matches a location with a proxy_pass directive inside, the request is forwarded to the URL given by the directive
  • For example, when a request for /match/here/please is handled by this block, the request URI will be sent to the example.com server as http://example.com/match/here/please
  • The request coming from Nginx on behalf of a client will look different than a request coming directly from a client
  • Nginx gets rid of any empty headers
  • Nginx, by default, will consider any header that contains underscores as invalid. It will remove these from the proxied request
    • 張 旭
       
      Note here: if a header field name contains underscores, you have to configure Nginx to let it through.
  • The "Host" header is re-written to the value defined by the $proxy_host variable.
  • The upstream should not expect this connection to be persistent
  • Headers with empty values are completely removed from the passed request.
  • if your backend application will be processing non-standard headers, you must make sure that they do not have underscores
  • by default, this will be set to the value of $proxy_host, a variable that will contain the domain name or IP address and port taken directly from the proxy_pass definition
  • This is selected by default as it is the only address Nginx can be sure the upstream server responds to
  • (as it is pulled directly from the connection info)
  • $http_host: Sets the "Host" header to the "Host" header from the client request.
  • The headers sent by the client are always available in Nginx as variables. The variables will start with an $http_ prefix, followed by the header name in lowercase, with any dashes replaced by underscores.
  • preference to: the host name from the request line itself
  • set the "Host" header to the $host variable. It is the most flexible and will usually provide the proxied servers with a "Host" header filled in as accurately as possible
  • sets the "Host" header to the $host variable, which should contain information about the original host being requested
  • This variable takes the value of the original X-Forwarded-For header retrieved from the client and adds the Nginx server's IP address to the end.
  • The upstream directive must be set in the http context of your Nginx configuration.
  • http context
  • Once defined, this name will be available for use within proxy passes as if it were a regular domain name
  • By default, this is just a simple round-robin selection process (each request will be routed to a different host in turn)
  • Specifies that new connections should always be given to the backend that has the least number of active connections.
  • distributes requests to different servers based on the client's IP address.
  • mainly used with memcached proxying
  • As for the hash method, you must provide the key to hash against
  • Server Weight
  • Nginx's buffering and caching capabilities
  • Without buffers, data is sent from the proxied server and immediately begins to be transmitted to the client.
  • With buffers, the Nginx proxy will temporarily store the backend's response and then feed this data to the client
  • Nginx defaults to a buffering design
  • can be set in the http, server, or location contexts.
  • the sizing directives are configured per request, so increasing them beyond your need can affect your performance
  • When buffering is "off" only the buffer defined by the proxy_buffer_size directive will be used
  • A high availability (HA) setup is an infrastructure without a single point of failure, and your load balancers are a part of this configuration.
  • multiple load balancers (one active and one or more passive) behind a static IP address that can be remapped from one server to another.
  • Nginx also provides a way to cache content from backend servers
  • The proxy_cache_path directive must be set in the http context.
  • proxy_cache backcache;
    • 張 旭
       
      The backcache here is the backcache configured earlier in the text; it looks like each location can have its own cache.
  • The proxy_cache_bypass directive is set to the $http_cache_control variable. This will contain an indicator as to whether the client is explicitly requesting a fresh, non-cached version of the resource
  • any user-related data should not be cached
  • For private content, you should set the Cache-Control header to "no-cache", "no-store", or "private" depending on the nature of the data
張 旭

Production environment | Kubernetes - 0 views

  • to promote an existing cluster for production use
  • Separating the control plane from the worker nodes.
  • Having enough worker nodes available
  • You can use role-based access control (RBAC) and other security mechanisms to make sure that users and workloads can get access to the resources they need, while keeping workloads, and the cluster itself, secure. You can set limits on the resources that users and workloads can access by managing policies and container resources.
  • you need to plan how to scale to relieve increased pressure from more requests to the control plane and worker nodes or scale down to reduce unused resources.
  • Managed control plane: Let the provider manage the scale and availability of the cluster's control plane, as well as handle patches and upgrades.
  • The simplest Kubernetes cluster has the entire control plane and worker node services running on the same machine.
  • You can deploy a control plane using tools such as kubeadm, kops, and kubespray.
  • Secure communications between control plane services are implemented using certificates.
  • Certificates are automatically generated during deployment or you can generate them using your own certificate authority.
  • Separate and backup etcd service: The etcd services can either run on the same machines as other control plane services or run on separate machines
  • Create multiple control plane systems: For high availability, the control plane should not be limited to a single machine
  • Some deployment tools set up the Raft consensus algorithm to do leader election of Kubernetes services. If the primary goes away, another service elects itself and takes over.
  • Groups of zones are referred to as regions.
  • if you installed with kubeadm, there are instructions to help you with Certificate Management and Upgrading kubeadm clusters.
  • Production-quality workloads need to be resilient and anything they rely on needs to be resilient (such as CoreDNS).
  • Add nodes to the cluster: If you are managing your own cluster you can add nodes by setting up your own machines and either adding them manually or having them register themselves to the cluster’s apiserver.
  • Set up node health checks: For important workloads, you want to make sure that the nodes and pods running on those nodes are healthy.
  • Authentication: The apiserver can authenticate users using client certificates, bearer tokens, an authenticating proxy, or HTTP basic auth.
  • Authorization: When you set out to authorize your regular users, you will probably choose between RBAC and ABAC authorization.
  • Role-based access control (RBAC): Lets you assign access to your cluster by allowing specific sets of permissions to authenticated users. Permissions can be assigned for a specific namespace (Role) or across the entire cluster (ClusterRole).
  • Attribute-based access control (ABAC): Lets you create policies based on resource attributes in the cluster and will allow or deny access based on those attributes.
  • Set limits on workload resources
  • Set namespace limits: Set per-namespace quotas on things like memory and CPU
  • Prepare for DNS demand: If you expect workloads to massively scale up, your DNS service must be ready to scale up as well.
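For the "Set namespace limits" annotation above, a minimal ResourceQuota sketch (the namespace and values are hypothetical):

```yaml
# resource-quota.yaml - hypothetical per-namespace limits on CPU and memory
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a          # hypothetical namespace
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```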
張 旭

MongoDB Performance - MongoDB Manual - 0 views

  • MongoDB uses a locking system to ensure data set consistency. If certain operations are long-running or a queue forms, performance will degrade as requests and operations wait for the lock.
  • performance limitations as a result of inadequate or inappropriate indexing strategies, or as a consequence of poor schema design patterns.
  • performance issues may be temporary and related to abnormal traffic load.
  • Lock-related slowdowns can be intermittent.
  • If globalLock.currentQueue.total is consistently high, then there is a chance that a large number of requests are waiting for a lock.
  • If globalLock.totalTime is high relative to uptime, the database has existed in a lock state for a significant amount of time.
  • For write-heavy applications, deploy sharding and add one or more shards to a sharded cluster to distribute load among mongod instances.
  • Unless constrained by system-wide limits, the maximum number of incoming connections supported by MongoDB is configured with the maxIncomingConnections setting.
  • When logLevel is set to 0, MongoDB records slow operations to the diagnostic log at a rate determined by slowOpSampleRate.
  • At higher logLevel settings, all operations appear in the diagnostic log regardless of their latency with the following exception
  • Full Time Diagnostic Data Collection (FTDC) mechanism. FTDC data files are compressed, are not human-readable, and inherit the same file access permissions as the MongoDB data files.
  • mongod processes store FTDC data files in a diagnostic.data directory under the instance's storage.dbPath.
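Several of the settings quoted above (maxIncomingConnections, logLevel, slowOpSampleRate) map onto the YAML configuration file; a hedged sketch with values chosen purely for illustration:

```yaml
# mongod.conf excerpt (illustrative values only)
net:
  maxIncomingConnections: 65536   # upper bound on incoming connections, subject to system-wide limits
systemLog:
  verbosity: 0                    # logLevel 0: only slow operations reach the diagnostic log
operationProfiling:
  slowOpThresholdMs: 100          # operations slower than this are considered "slow"
  slowOpSampleRate: 1.0           # fraction of slow operations that are logged
```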
張 旭

Helm | - 0 views

  • A chart is a collection of files that describe a related set of Kubernetes resources.
  • A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
  • Charts are created as files laid out in a particular directory tree, then they can be packaged into versioned archives to be deployed.
  • A chart is organized as a collection of files inside of a directory.
  • values.yaml # The default configuration values for this chart
  • charts/ # A directory containing any charts upon which this chart depends.
  • templates/ # A directory of templates that, when combined with values, will generate valid Kubernetes manifest files.
  • version: A SemVer 2 version (required)
  • apiVersion: The chart API version, always "v1" (required)
  • Every chart must have a version number. A version must follow the SemVer 2 standard.
  • non-SemVer names are explicitly disallowed by the system.
  • When generating a package, the helm package command will use the version that it finds in the Chart.yaml as a token in the package name.
  • the appVersion field is not related to the version field. It is a way of specifying the version of the application.
  • appVersion: The version of the app that this contains (optional). This needn't be SemVer.
  • If the latest version of a chart in the repository is marked as deprecated, then the chart as a whole is considered to be deprecated.
  • deprecated: Whether this chart is deprecated (optional, boolean)
  • one chart may depend on any number of other charts.
  • dependencies can be dynamically linked through the requirements.yaml file or brought in to the charts/ directory and managed manually.
  • the preferred method of declaring dependencies is by using a requirements.yaml file inside of your chart (a sketch follows at the end of this list).
  • A requirements.yaml file is a simple file for listing your dependencies.
  • The repository field is the full URL to the chart repository.
  • you must also use helm repo add to add that repo locally.
  • helm dependency update and it will use your dependency file to download all the specified charts into your charts/ directory for you.
  • When helm dependency update retrieves charts, it will store them as chart archives in the charts/ directory.
  • Managing charts with requirements.yaml is a good way to easily keep charts updated, and also share requirements information throughout a team.
  • All charts are loaded by default.
  • The condition field holds one or more YAML paths (delimited by commas). If this path exists in the top parent’s values and resolves to a boolean value, the chart will be enabled or disabled based on that boolean value.
  • The tags field is a YAML list of labels to associate with this chart.
  • all charts with tags can be enabled or disabled by specifying the tag and a boolean value.
  • The --set parameter can be used as usual to alter tag and condition values.
  • Conditions (when set in values) always override tags.
  • The first condition path that exists wins and subsequent ones for that chart are ignored.
  • The keys containing the values to be imported can be specified in the parent chart’s requirements.yaml file using a YAML list. Each item in the list is a key which is imported from the child chart’s exports field.
  • specifying the key data in our import list, Helm looks in the exports field of the child chart for data key and imports its contents.
  • the parent key data is not contained in the parent’s final values. If you need to specify the parent key, use the ‘child-parent’ format.
  • To access values that are not contained in the exports key of the child chart’s values, you will need to specify the source key of the values to be imported (child) and the destination path in the parent chart’s values (parent).
  • To drop a dependency into your charts/ directory, use the helm fetch command
  • A dependency can be either a chart archive (foo-1.2.3.tgz) or an unpacked chart directory.
  • name cannot start with '_' or '.'; such files are ignored by the chart loader.
  • a single release is created with all the objects for the chart and its dependencies.
  • Helm Chart templates are written in the Go template language, with the addition of 50 or so add-on template functions from the Sprig library and a few other specialized functions
  • When Helm renders the charts, it will pass every file in that directory through the template engine.
  • Chart developers may supply a file called values.yaml inside of a chart. This file can contain default values.
  • Chart users may supply a YAML file that contains values. This can be provided on the command line with helm install.
  • When a user supplies custom values, these values will override the values in the chart’s values.yaml file.
  • Template files follow the standard conventions for writing Go templates
  • {{default "minio" .Values.storage}}
  • Values that are supplied via a values.yaml file (or via the --set flag) are accessible from the .Values object in a template.
  • pre-defined, are available to every template, and cannot be overridden
  • the names are case sensitive
  • Release.Name: The name of the release (not the chart)
  • Release.IsUpgrade: This is set to true if the current operation is an upgrade or rollback.
  • Release.Revision: The revision number. It begins at 1, and increments with each helm upgrade
  • Chart: The contents of the Chart.yaml
  • Files: A map-like object containing all non-special files in the chart.
  • Files can be accessed using {{index .Files "file.name"}} or using the {{.Files.Get name}} or {{.Files.GetString name}} functions.
  • .helmignore
  • access the contents of the file as []byte using {{.Files.GetBytes}}
  • Any unknown Chart.yaml fields will be dropped
  • Chart.yaml cannot be used to pass arbitrarily structured data into the template.
  • A values file is formatted in YAML.
  • A chart may include a default values.yaml file
  • be merged into the default values file.
  • The default values file included inside of a chart must be named values.yaml
  • accessible inside of templates using the .Values object
  • Values files can declare values for the top-level chart, as well as for any of the charts that are included in that chart’s charts/ directory.
  • Charts at a higher level have access to all of the variables defined beneath.
  • lower level charts cannot access things in parent charts
  • Values are namespaced, but namespaces are pruned.
  • the scope of the values has been reduced and the namespace prefix removed
  • Helm supports special “global” value.
  • a way of sharing one top-level variable with all subcharts, which is useful for things like setting metadata properties like labels.
  • If a subchart declares a global variable, that global will be passed downward (to the subchart’s subcharts), but not upward to the parent chart.
  • global variables of parent charts take precedence over the global variables from subcharts.
  • helm lint
  • A chart repository is an HTTP server that houses one or more packaged charts
  • Any HTTP server that can serve YAML files and tar files and can answer GET requests can be used as a repository server.
  • Helm does not provide tools for uploading charts to remote repository servers.
  • the only way to add a chart to $HELM_HOME/starters is to manually copy it there.
  • Helm provides a hook mechanism to allow chart developers to intervene at certain points in a release’s life cycle.
  • Execute a Job to back up a database before installing a new chart, and then execute a second job after the upgrade in order to restore data.
  • Hooks are declared as an annotation in the metadata section of a manifest
  • Hooks work like regular templates, but they have special annotations
  • pre-install
  • post-install: Executes after all resources are loaded into Kubernetes
  • pre-delete
  • post-delete: Executes on a deletion request after all of the release’s resources have been deleted.
  • pre-upgrade
  • post-upgrade
  • pre-rollback
  • post-rollback: Executes on a rollback request after all resources have been modified.
  • crd-install
  • test-success: Executes when running helm test and expects the pod to return successfully (return code == 0).
  • test-failure: Executes when running helm test and expects the pod to fail (return code != 0).
  • Hooks allow you, the chart developer, an opportunity to perform operations at strategic points in a release lifecycle
  • Tiller then loads the hook with the lowest weight first (negative to positive)
  • Tiller returns the release name (and other data) to the client
  • If the resource is a Job kind, Tiller will wait until the job successfully runs to completion.
  • if the job fails, the release will fail. This is a blocking operation, so the Helm client will pause while the Job is run.
  • If they have hook weights (see below), they are executed in weighted order. Otherwise, ordering is not guaranteed.
  • it is good practice to add a hook weight, and to set it to 0 if the weight is not important (a hook Job sketch follows this list).
  • The resources that a hook creates are not tracked or managed as part of the release.
  • leave the hook resource alone.
  • To destroy such resources, you need to either write code to perform this operation in a pre-delete or post-delete hook or add "helm.sh/hook-delete-policy" annotation to the hook template file.
  • Hooks are just Kubernetes manifest files with special annotations in the metadata section
  • One resource can implement multiple hooks
  • no limit to the number of different resources that may implement a given hook.
  • When subcharts declare hooks, those are also evaluated. There is no way for a top-level chart to disable the hooks declared by subcharts.
  • Hook weights can be positive or negative numbers but must be represented as strings.
  • sort those hooks in ascending order.
  • Hook deletion policies
  • "before-hook-creation" specifies Tiller should delete the previous hook before the new hook is launched.
  • By default Tiller will wait for 60 seconds for a deleted hook to no longer exist in the API server before timing out.
  • Custom Resource Definitions (CRDs) are a special kind in Kubernetes.
  • The crd-install hook is executed very early during an installation, before the rest of the manifests are verified.
  • A common reason why the hook resource might already exist is that it was not deleted following use on a previous install/upgrade.
  • Helm uses Go templates for templating your resource files.
  • two special template functions: include and required
  • include function allows you to bring in another template, and then pass the results to other template functions.
  • The required function allows you to declare a particular values entry as required for template rendering.
  • If the value is empty, the template rendering will fail with a user submitted error message.
  • When you are working with string data, you are always safer quoting the strings than leaving them as bare words
  • Quote Strings, Don’t Quote Integers
  • when working with integers do not quote the values
  • this also applies to env variable values, which are expected to be strings
  • to include a template, and then perform an operation on that template’s output, Helm has a special include function
  • The above includes a template called toYaml, passes it $value, and then passes the output of that template to the nindent function.
  • Go provides a way for setting template options to control behavior when a map is indexed with a key that’s not present in the map
  • The required function gives developers the ability to declare a value entry as required for template rendering.
  • The tpl function allows developers to evaluate strings as templates inside a template.
  • Rendering an external configuration file
  • (.Files.Get "conf/app.conf")
  • Image pull secrets are essentially a combination of registry, username, and password.
  • Automatically Roll Deployments When ConfigMaps or Secrets change
  • configmaps or secrets are injected as configuration files in containers
  • a restart may be required should those be updated with a subsequent helm upgrade
  • The sha256sum function can be used to ensure a deployment’s annotation section is updated if another file changes (see the Deployment sketch after this list)
  • checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
  • helm upgrade --recreate-pods
  • "helm.sh/resource-policy": keep
  • resources that should not be deleted when Helm runs a helm delete
  • this resource becomes orphaned. Helm will no longer manage it in any way.
  • create some reusable parts in your chart
  • In the templates/ directory, any file that begins with an underscore(_) is not expected to output a Kubernetes manifest file.
  • by convention, helper templates and partials are placed in a _helpers.tpl file.
  • The current best practice for composing a complex application from discrete parts is to create a top-level umbrella chart that exposes the global configurations, and then use the charts/ subdirectory to embed each of the components.
  • SAP’s Converged charts: These charts install SAP Converged Cloud, a full OpenStack IaaS, on Kubernetes. All of the charts are collected together in one GitHub repository, except for a few submodules.
  • Deis’s Workflow: This chart exposes the entire Deis PaaS system with one chart. But it’s different from the SAP chart in that this umbrella chart is built from each component, and each component is tracked in a different Git repository.
  • YAML is a superset of JSON
  • any valid JSON structure ought to be valid in YAML.
  • As a best practice, templates should follow a YAML-like syntax unless the JSON syntax substantially reduces the risk of a formatting issue.
  • There are functions in Helm that allow you to generate random data, cryptographic keys, and so on.
  • a chart repository is a location where packaged charts can be stored and shared.
  • A chart repository is an HTTP server that houses an index.yaml file and optionally some packaged charts.
  • Because a chart repository can be any HTTP server that can serve YAML and tar files and can answer GET requests, you have a plethora of options when it comes down to hosting your own chart repository.
  • It is not required that a chart package be located on the same server as the index.yaml file.
  • A valid chart repository must have an index file. The index file contains information about each chart in the chart repository.
  • The Helm project provides an open-source Helm repository server called ChartMuseum that you can host yourself.
  • $ helm repo index fantastic-charts --url https://fantastic-charts.storage.googleapis.com
  • A repository will not be added if it does not contain a valid index.yaml
  • add the repository to their helm client via the helm repo add [NAME] [URL] command with any name they would like to use to reference the repository.
  • Helm has provenance tools which help chart users verify the integrity and origin of a package.
  • Integrity is established by comparing a chart to a provenance record
  • The provenance file contains a chart’s YAML file plus several pieces of verification information
  • Chart repositories serve as a centralized collection of Helm charts.
  • Chart repositories must make it possible to serve provenance files over HTTP via a specific request, and must make them available at the same URI path as the chart.
  • We don’t want to be “the certificate authority” for all chart signers. Instead, we strongly favor a decentralized model, which is part of the reason we chose OpenPGP as our foundational technology.
  • The Keybase platform provides a public centralized repository for trust information.
  • A chart contains a number of Kubernetes resources and components that work together.
  • A test in a helm chart lives under the templates/ directory and is a pod definition that specifies a container with a given command to run.
  • The pod definition must contain one of the helm test hook annotations: helm.sh/hook: test-success or helm.sh/hook: test-failure
  • helm test
  • nest your test suite under a tests/ directory like <chart-name>/templates/tests/
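
To make the hook notes above concrete, here is a minimal sketch of a Job that runs as a pre-install/pre-upgrade hook, with a string weight and a deletion policy; the image and migration command are assumptions, not values from any real chart.

      # templates/db-migrate-job.yaml -- a sketch only; image and command are assumed
      apiVersion: batch/v1
      kind: Job
      metadata:
        name: "{{ .Release.Name }}-db-migrate"
        annotations:
          "helm.sh/hook": pre-install,pre-upgrade      # one resource can implement multiple hooks
          "helm.sh/hook-weight": "0"                   # weights are strings, sorted ascending
          "helm.sh/hook-delete-policy": before-hook-creation
      spec:
        template:
          spec:
            restartPolicy: Never
            containers:
              - name: migrate
                image: "myapp:1.0.0"                   # assumed application image
                command: ["./migrate.sh"]              # assumed migration entrypoint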
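
The checksum/config annotation quoted above is typically placed on the pod template of a Deployment, so that pods roll whenever the rendered configmap.yaml changes; a minimal sketch, with assumed names and values:

      # templates/deployment.yaml -- a sketch only; chart structure and values are assumed
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: "{{ .Release.Name }}-web"
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: "{{ .Release.Name }}-web"
        template:
          metadata:
            labels:
              app: "{{ .Release.Name }}-web"
            annotations:
              checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
          spec:
            containers:
              - name: web
                image: {{ .Values.image | quote }}     # assumed value; quote strings, not integers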
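
For the chart-test notes directly above, a minimal test pod sketch under templates/tests/; the service name and port it probes are assumptions:

      # templates/tests/test-connection.yaml -- run with `helm test <release>`
      apiVersion: v1
      kind: Pod
      metadata:
        name: "{{ .Release.Name }}-test-connection"
        annotations:
          "helm.sh/hook": test-success                 # pod must exit 0 for the test to pass
          "helm.sh/hook-delete-policy": before-hook-creation
      spec:
        restartPolicy: Never
        containers:
          - name: wget
            image: busybox
            command: ["wget", "-qO-", "{{ .Release.Name }}-web:80"]   # assumed service name and port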
張 旭

Auto DevOps | GitLab - 0 views

  • Auto DevOps provides pre-defined CI/CD configuration which allows you to automatically detect, build, test, deploy, and monitor your applications
  • Just push your code and GitLab takes care of everything else.
  • Auto DevOps will be automatically disabled on the first pipeline failure.
  • ...78 more annotations...
  • Your project will continue to use an alternative CI/CD configuration file if one is found
  • Auto DevOps works with any Kubernetes cluster;
  • using the Docker or Kubernetes executor, with privileged mode enabled.
  • Base domain (needed for Auto Review Apps and Auto Deploy)
  • Kubernetes (needed for Auto Review Apps, Auto Deploy, and Auto Monitoring)
  • Prometheus (needed for Auto Monitoring)
  • scrape your Kubernetes cluster.
  • project level as a variable: KUBE_INGRESS_BASE_DOMAIN
  • A wildcard DNS A record matching the base domain(s) is required
  • Once set up, all requests will hit the load balancer, which in turn will route them to the Kubernetes pods that run your application(s).
  • review/ (every environment starting with review/)
  • staging
  • production
  • need to define a separate KUBE_INGRESS_BASE_DOMAIN variable for all the above based on the environment.
  • Continuous deployment to production: Enables Auto Deploy with master branch directly deployed to production.
  • Continuous deployment to production using timed incremental rollout
  • Automatic deployment to staging, manual deployment to production
  • Auto Build creates a build of the application using an existing Dockerfile or Heroku buildpacks.
  • If a project’s repository contains a Dockerfile, Auto Build will use docker build to create a Docker image.
  • Each buildpack requires certain files to be in your project’s repository for Auto Build to successfully build your application.
  • Auto Test automatically runs the appropriate tests for your application using Herokuish and Heroku buildpacks by analyzing your project to detect the language and framework.
  • Auto Code Quality uses the Code Quality image to run static analysis and other code checks on the current code.
  • Static Application Security Testing (SAST) uses the SAST Docker image to run static analysis on the current code and checks for potential security issues.
  • Dependency Scanning uses the Dependency Scanning Docker image to run analysis on the project dependencies and checks for potential security issues.
  • License Management uses the License Management Docker image to search the project dependencies for their license.
  • Vulnerability Static Analysis for containers uses Clair to run static analysis on a Docker image and checks for potential security issues.
  • Review Apps are temporary application environments based on the branch’s code so developers, designers, QA, product managers, and other reviewers can actually see and interact with code changes as part of the review process. Auto Review Apps create a Review App for each branch. Auto Review Apps will deploy your app to your Kubernetes cluster only. When no cluster is available, no deployment will occur.
  • The Review App will have a unique URL based on the project ID, the branch or tag name, and a unique number, combined with the Auto DevOps base domain.
  • Review apps are deployed using the auto-deploy-app chart with Helm, which can be customized.
  • Your apps should not be manipulated outside of Helm (using Kubernetes directly).
  • Dynamic Application Security Testing (DAST) uses the popular open source tool OWASP ZAProxy to perform an analysis on the current code and checks for potential security issues.
  • Auto Browser Performance Testing utilizes the Sitespeed.io container to measure the performance of a web page.
  • add the paths to a file named .gitlab-urls.txt in the root directory, one per line.
  • After a branch or merge request is merged into the project’s default branch (usually master), Auto Deploy deploys the application to a production environment in the Kubernetes cluster, with a namespace based on the project name and unique project ID
  • Auto Deploy doesn’t include deployments to staging or canary by default, but the Auto DevOps template contains job definitions for these tasks if you want to enable them.
  • Apps are deployed using the auto-deploy-app chart with Helm.
  • For internal and private projects a GitLab Deploy Token will be automatically created, when Auto DevOps is enabled and the Auto DevOps settings are saved.
  • If the GitLab Deploy Token cannot be found, CI_REGISTRY_PASSWORD is used. Note that CI_REGISTRY_PASSWORD is only valid during deployment.
  • If present, DB_INITIALIZE will be run as a shell command within an application pod as a helm post-install hook.
  • a post-install hook means that if any deploy succeeds, DB_INITIALIZE will not be processed thereafter.
  • DB_MIGRATE will be run as a shell command within an application pod as a helm pre-upgrade hook.
    • 張 旭
       
      If the project type is different, you have to look up how the buildpacks invoke that command, for example Laravel's migration.
    • 張 旭
       
      If the image is built from your own Dockerfile, it seems you can ignore the buildpacks approach entirely.
  • Once your application is deployed, Auto Monitoring makes it possible to monitor your application’s server and response metrics right out of the box.
  • annotate the NGINX Ingress deployment to be scraped by Prometheus using prometheus.io/scrape: "true" and prometheus.io/port: "10254"
  • If you are also using Auto Review Apps and Auto Deploy and choose to provide your own Dockerfile, make sure you expose your application to port 5000 as this is the port assumed by the default Helm chart.
  • While Auto DevOps provides great defaults to get you started, you can customize almost everything to fit your needs; from custom buildpacks, to Dockerfiles, Helm charts, or even copying the complete CI/CD configuration into your project to enable staging and canary deployments, and more.
  • If your project has a Dockerfile in the root of the project repo, Auto DevOps will build a Docker image based on the Dockerfile rather than using buildpacks.
  • Auto DevOps uses Helm to deploy your application to Kubernetes.
  • Bundled chart - If your project has a ./chart directory with a Chart.yaml file in it, Auto DevOps will detect the chart and use it instead of the default one.
  • Create a project variable AUTO_DEVOPS_CHART with the URL of a custom chart to use or create two project variables AUTO_DEVOPS_CHART_REPOSITORY with the URL of a custom chart repository and AUTO_DEVOPS_CHART with the path to the chart.
  • make use of the HELM_UPGRADE_EXTRA_ARGS environment variable to override the default values in the values.yaml file in the default Helm chart.
  • specify the use of a custom Helm chart per environment by scoping the environment variable to the desired environment.
    • 張 旭
       
      Auto DevOps is essentially a ready-made .gitlab-ci.yml that has already been written for you and handed over ready to use.
  • Your additions will be merged with the Auto DevOps template using the behaviour described for include (see the .gitlab-ci.yml sketch after this list)
  • copy and paste the contents of the Auto DevOps template into your project and edit this as needed.
  • In order to support applications that require a database, PostgreSQL is provisioned by default.
  • Set up the replica variables using a project variable and scale your application by just redeploying it!
  • You should not scale your application using Kubernetes directly.
  • Some applications need to define secret variables that are accessible by the deployed application.
  • Auto DevOps detects variables where the key starts with K8S_SECRET_ and make these prefixed variables available to the deployed application, as environment variables.
  • Auto DevOps pipelines will take your application secret variables to populate a Kubernetes secret.
  • Environment variables are generally considered immutable in a Kubernetes pod.
  • if you update an application secret without changing any code then manually create a new pipeline, you will find that any running application pods will not have the updated secrets.
  • Variables with multiline values are not currently supported
  • The normal behavior of Auto DevOps is to use Continuous Deployment, pushing automatically to the production environment every time a new pipeline is run on the default branch.
  • If STAGING_ENABLED is defined in your project (e.g., set STAGING_ENABLED to 1 as a CI/CD variable), then the application will be automatically deployed to a staging environment, and a production_manual job will be created for you when you’re ready to manually deploy to production.
  • If CANARY_ENABLED is defined in your project (e.g., set CANARY_ENABLED to 1 as a CI/CD variable) then two manual jobs will be created: canary which will deploy the application to the canary environment production_manual which is to be used by you when you’re ready to manually deploy to production.
  • If INCREMENTAL_ROLLOUT_MODE is set to manual in your project, then instead of the standard production job, 4 different manual jobs will be created: rollout 10% rollout 25% rollout 50% rollout 100%
  • The percentage is based on the REPLICAS variable and defines the number of pods you want to have for your deployment.
  • To start a job, click on the play icon next to the job’s name.
  • Once you get to 100%, you cannot scale down, and you’d have to roll back by redeploying the old version using the rollback button in the environment page.
  • With INCREMENTAL_ROLLOUT_MODE set to manual and with STAGING_ENABLED
  • not all buildpacks support Auto Test yet
  • When a project has been marked as private, GitLab’s Container Registry requires authentication when downloading containers.
  • Authentication credentials will be valid while the pipeline is running, allowing for a successful initial deployment.
  • After the pipeline completes, Kubernetes will no longer be able to access the Container Registry.
  • We strongly advise using GitLab Container Registry with Auto DevOps in order to simplify configuration and prevent any unforeseen issues.
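
As a rough sketch of the customization points above (extending the template via include, STAGING_ENABLED, INCREMENTAL_ROLLOUT_MODE, and K8S_SECRET_ variables), the .gitlab-ci.yml below uses illustrative values only:

      # .gitlab-ci.yml -- extends the Auto DevOps template rather than replacing it
      include:
        - template: Auto-DevOps.gitlab-ci.yml

      variables:
        STAGING_ENABLED: "1"                     # deploy to staging, add a manual production job
        INCREMENTAL_ROLLOUT_MODE: manual         # rollout 10% / 25% / 50% / 100% manual jobs
        K8S_SECRET_DATABASE_URL: $DATABASE_URL   # assumed CI variable; exposed to the app as DATABASE_URL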
張 旭

Database Profiler - MongoDB Manual - 0 views

  • The database profiler collects detailed information about Database Commands executed against a running mongod instance.
  • The profiler writes all the data it collects to the system.profile collection, a capped collection in the admin database.
  • db.setProfilingLevel(2)
  • ...10 more annotations...
  • The slowms and sampleRate profiling settings are global. When set, these settings affect all databases in your process.
  • db.setProfilingLevel(1, { slowms: 20 })
  • db.setProfilingLevel(0, { slowms: 20 })
  • show profile
  • The system.profile collection is a capped collection with a default size of 1 megabyte.
  • By default, sampleRate is set to 1.0, meaning all slow operations are profiled.
  • When logLevel is set to 0, MongoDB records slow operations to the diagnostic log at a rate determined by slowOpSampleRate.
  • The slowms field indicates operation time threshold, in milliseconds, beyond which operations are considered slow.
  • You cannot enable profiling on a mongos instance.
  • profiler logs information about database operations in the system.profile collection.
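
A short mongo-shell sketch of the profiler settings quoted above; the 20 ms threshold and 0.5 sample rate are arbitrary example values:

      // enable profiling for slow operations only, with a threshold and a sampling rate
      db.setProfilingLevel(1, { slowms: 20, sampleRate: 0.5 })
      db.getProfilingStatus()                                      // confirm the current settings
      // inspect the most recent profiled operations
      db.system.profile.find().sort({ ts: -1 }).limit(5).pretty()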
張 旭

Ingress - Kubernetes - 0 views

  • An API object that manages external access to the services in a cluster, typically HTTP.
  • load balancing
  • SSL termination
  • ...62 more annotations...
  • name-based virtual hosting
  • Edge router: A router that enforces the firewall policy for your cluster.
  • Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model.
  • Service: A Kubernetes Service that identifies a set of Pods using label selectors. (A Service is a way to expose an application running on a set of Pods as a network service; labels tag objects with identifying attributes that are meaningful and relevant to users.)
  • Services are assumed to have virtual IPs only routable within the cluster network.
  • Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
  • Traffic routing is controlled by rules defined on the Ingress resource.
  • An Ingress can be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name based virtual hosting.
  • Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
  • You must have an ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
  • As with all other Kubernetes resources, an Ingress needs apiVersion, kind, and metadata fields
  • Ingress frequently uses annotations to configure some options depending on the Ingress controller,
  • Ingress resource only supports rules for directing HTTP traffic.
  • An optional host.
  • A list of paths
  • A backend is a combination of Service and port names
  • has an associated backend
  • Both the host and path must match the content of an incoming request before the load balancer directs traffic to the referenced Service.
  • HTTP (and HTTPS) requests to the Ingress that matches the host and path of the rule are sent to the listed backend.
  • A default backend is often configured in an Ingress controller to service any requests that do not match a path in the spec.
  • An Ingress with no rules sends all traffic to a single default backend.
  • Ingress controllers and load balancers may take a minute or two to allocate an IP address.
  • A fanout configuration routes traffic from a single IP address to more than one Service, based on the HTTP URI being requested.
  • nginx.ingress.kubernetes.io/rewrite-target: /
  • describe ingress
  • get ingress
  • Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address.
  • route requests based on the Host header.
  • an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being required.
  • secure an Ingress by specifying a Secret (which stores sensitive information, such as passwords, OAuth tokens, and ssh keys) that contains a TLS private key and certificate.
  • Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination.
  • An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others.
  • persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can instead get these features through the load balancer used for a Service.
  • review the controller specific documentation to see how they handle health checks
  • edit ingress
  • After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.
  • kubectl replace -f on a modified Ingress YAML file.
  • Node: A worker machine in Kubernetes, part of a cluster.
  • in most common Kubernetes deployments, nodes in the cluster are not part of the public internet.
  • Edge router: A router that enforces the firewall policy for your cluster.
  • a gateway managed by a cloud provider or a physical piece of hardware.
  • Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model.
  • Service: A Kubernetes Service that identifies a set of Pods using label selectors.
  • An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
  • An Ingress does not expose arbitrary ports or protocols.
  • You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
  • The name of an Ingress object must be a valid DNS subdomain name
  • The Ingress spec has all the information needed to configure a load balancer or proxy server.
  • Ingress resource only supports rules for directing HTTP(S) traffic.
  • An Ingress with no rules sends all traffic to a single default backend and .spec.defaultBackend is the backend that should handle requests in that case.
  • If defaultBackend is not set, the handling of requests that do not match any of the rules will be up to the ingress controller
  • A common usage for a Resource backend is to ingress data to an object storage backend with static assets.
  • Exact: Matches the URL path exactly and with case sensitivity.
  • Prefix: Matches based on a URL path prefix split by /. Matching is case sensitive and done on a path element by element basis.
  • multiple paths within an Ingress will match a request. In those cases precedence will be given first to the longest matching path.
  • Hosts can be precise matches (for example “foo.bar.com”) or a wildcard (for example “*.foo.com”).
  • No match, wildcard only covers a single DNS label
  • Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configuration including the name of the controller that should implement the class.
  • secure an Ingress by specifying a Secret that contains a TLS private key and certificate.
  • The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress point (traffic to the Service and its Pods is in plaintext).
  • TLS will not work on the default rule because the certificates would have to be issued for all the possible sub-domains.
  • hosts in the tls section need to explicitly match the host in the rules section.
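
A minimal Ingress sketch tying together the host rule, path/backend, IngressClass, and TLS notes above; the host name, Secret, and Service are assumptions:

      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: app-ingress
      spec:
        ingressClassName: nginx              # assumes an IngressClass named "nginx" exists
        tls:
          - hosts:
              - app.example.com              # must match the host in the rules section
            secretName: app-example-tls      # Secret holding the TLS key and certificate
        rules:
          - host: app.example.com
            http:
              paths:
                - path: /
                  pathType: Prefix
                  backend:
                    service:
                      name: app              # assumed Service name
                      port:
                        number: 80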
張 旭

MongoDB Performance Tuning: Everything You Need to Know - Stackify - 0 views

  • db.serverStatus().globalLock
  • db.serverStatus().locks
  • globalLock.currentQueue.total: This number can indicate a possible concurrency issue if it’s consistently high. This can happen if a lot of requests are waiting for a lock to be released.
  • ...35 more annotations...
  • globalLock.totalTime: If this is higher than the total database uptime, the database has been in a lock state for too long.
  • Unlike relational databases such as MySQL or PostgreSQL, MongoDB uses JSON-like documents for storing data.
  • Databases operate in an environment that consists of numerous reads, writes, and updates.
  • When a lock occurs, no other operation can read or modify the data until the operation that initiated the lock is finished.
  • locks.deadlockCount: Number of times the lock acquisitions have encountered deadlocks
  • Is the database frequently locking from queries? This might indicate issues with the schema design, query structure, or system architecture.
  • For version 3.2 on, WiredTiger is the default.
  • MMAPv1 locks whole collections, not individual documents.
  • WiredTiger performs locking at the document level.
  • When the MMAPv1 storage engine is in use, MongoDB will use memory-mapped files to store data.
  • All available memory will be allocated for this usage if the data set is large enough.
  • db.serverStatus().mem
  • mem.resident: Roughly equivalent to the amount of RAM in megabytes that the database process uses
  • If mem.resident exceeds the value of system memory and there’s a large amount of unmapped data on disk, we’ve most likely exceeded system capacity.
  • If the value of mem.mapped is greater than the amount of system memory, some operations will experience page faults.
  • The WiredTiger storage engine is a significant improvement over MMAPv1 in performance and concurrency.
  • By default, MongoDB will reserve 50 percent of the available memory for the WiredTiger data cache.
  • wiredTiger.cache.bytes currently in the cache – This is the size of the data currently in the cache.
  • wiredTiger.cache.tracked dirty bytes in the cache – This is the size of the dirty data in the cache.
  • we can look at the wiredTiger.cache.bytes read into cache value for read-heavy applications. If this value is consistently high, increasing the cache size may improve overall read performance.
  • check whether the application is read-heavy. If it is, increase the size of the replica set and distribute the read operations to secondary members of the set.
  • write-heavy, use sharding within a sharded cluster to distribute the load.
  • Replication is the propagation of data from one node to another
  • Replication sets handle this replication.
  • Sometimes, data isn’t replicated as quickly as we’d like.
  • a particularly thorny problem if the lag between a primary and secondary node is high and the secondary becomes the primary
  • use the db.printSlaveReplicationInfo() or the rs.printSlaveReplicationInfo() command to see the status of a replica set from the perspective of the secondary member of the set.
  • shows how far behind the secondary members are from the primary. This number should be as low as possible.
  • monitor this metric closely.
  • watch for any spikes in replication delay.
  • Always investigate these issues to understand the reasons for the lag.
  • One replica set is primary. All others are secondary.
  • it’s not normal for nodes to change back and forth between primary and secondary.
  • use the profiler to gain a deeper understanding of the database’s behavior.
  • Enabling the profiler can affect system performance, due to the additional activity.
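
A quick mongo-shell sketch of reading the serverStatus and replication fields discussed above; only the field names come from the notes, the rest is illustrative:

      db.serverStatus().globalLock.currentQueue.total                     // requests queued behind locks
      db.serverStatus().mem                                               // resident / mapped memory (MMAPv1)
      db.serverStatus().wiredTiger.cache["bytes currently in the cache"]  // WiredTiger cache usage
      rs.printSlaveReplicationInfo()                                      // replication lag per secondary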
張 旭

Override Files - Configuration Language - Terraform by HashiCorp - 0 views

  • In both the required_version and required_providers settings, each override constraint entirely replaces the constraints for the same component in the original block.
  • If the base block and the override block both set required_version, then the constraints in the base block are entirely ignored.
  • Terraform normally loads all of the .tf and .tf.json files within a directory and expects each one to define a distinct set of configuration objects.
  • ...14 more annotations...
  • If two files attempt to define the same object, Terraform returns an error.
  • a human-edited configuration file in the Terraform language native syntax could be partially overridden using a programmatically-generated file in JSON syntax.
  • Terraform has special handling of any configuration file whose name ends in _override.tf or _override.tf.json
  • Terraform initially skips these override files when loading configuration, and then afterwards processes each one in turn (in lexicographical order).
  • merges the override block contents into the existing object.
  • Over-use of override files hurts readability, since a reader looking only at the original files cannot easily see that some portions of those files have been overridden without consulting all of the override files that are present.
  • When using override files, use comments in the original files to warn future readers about which override files apply changes to each block.
  • A top-level block in an override file merges with a block in a normal configuration file that has the same block header.
  • Within a top-level block, an attribute argument within an override block replaces any argument of the same name in the original block (see the sketch after this list).
  • Within a top-level block, any nested blocks within an override block replace all blocks of the same type in the original block.
  • The contents of nested configuration blocks are not merged.
  • If more than one override file defines the same top-level block, the overriding effect is compounded, with later blocks taking precedence over earlier blocks
  • The settings within terraform blocks are considered individually when merging.
  • If the required_providers argument is set, its value is merged on an element-by-element basis, which allows an override block to adjust the constraint for a single provider without affecting the constraints for other providers.
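
A minimal two-file sketch of the merge behaviour described above; the resource type and values are assumptions:

      # main.tf (base configuration; the AWS resource and values are assumed)
      resource "aws_instance" "web" {
        ami           = "ami-0123456789abcdef0"
        instance_type = "t2.micro"
      }

      # web_override.tf -- processed after main.tf: instance_type is replaced,
      # while ami is carried over unchanged from the base block
      resource "aws_instance" "web" {
        instance_type = "t2.large"
      }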
張 旭

Kubernetes Components | Kubernetes - 0 views

  • A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications
  • Every cluster has at least one worker node.
  • The control plane manages the worker nodes and the Pods in the cluster.
  • ...29 more annotations...
  • The control plane's components make global decisions about the cluster
  • Control plane components can be run on any machine in the cluster.
  • for simplicity, set up scripts typically start all control plane components on the same machine, and do not run user containers on this machine
  • The API server is the front end for the Kubernetes control plane.
  • kube-apiserver is designed to scale horizontally—that is, it scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances.
  • Kubernetes cluster uses etcd as its backing store, make sure you have a back up plan for those data.
  • watches for newly created Pods with no assigned node, and selects a node for them to run on.
  • Factors taken into account for scheduling decisions include: individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.
  • each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
  • Node controller
  • Job controller
  • Endpoints controller
  • Service Account & Token controllers
  • The cloud controller manager lets you link your cluster into your cloud provider's API, and separates out the components that interact with that cloud platform from components that only interact with your cluster.
  • If you are running Kubernetes on your own premises, or in a learning environment inside your own PC, the cluster does not have a cloud controller manager.
  • An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
  • The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.
  • The kubelet doesn't manage containers which were not created by Kubernetes.
  • kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
  • kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
  • kube-proxy uses the operating system packet filtering layer if there is one and it's available.
  • Kubernetes supports several container runtimes: Docker, containerd, CRI-O, and any implementation of the Kubernetes CRI (Container Runtime Interface).
  • Addons use Kubernetes resources (DaemonSet, Deployment, etc) to implement cluster features
  • namespaced resources for addons belong within the kube-system namespace.
  • all Kubernetes clusters should have cluster DNS,
  • Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services.
  • Containers started by Kubernetes automatically include this DNS server in their DNS searches.
  • Container Resource Monitoring records generic time-series metrics about containers in a central database, and provides a UI for browsing that data.
  • A cluster-level logging mechanism is responsible for saving container logs to a central log store with search/browsing interface.
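
On a running cluster, the components and addons described above can be inspected with ordinary read-only kubectl commands (what is visible varies by distribution; managed clusters usually hide the control plane pods):

      kubectl get nodes                                     # the worker machines in the cluster
      kubectl get pods --namespace kube-system              # control plane and addon pods, where exposed
      kubectl get daemonsets,deployments -n kube-system     # kube-proxy, cluster DNS, and other addons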
張 旭

Deploy Replica Set With Keyfile Authentication - MongoDB Manual - 0 views

  • Keyfiles are bare-minimum forms of security and are best suited for testing or development environments.
  • With keyfile authentication, each mongod instance in the replica set uses the contents of the keyfile as the shared password for authenticating other members in the deployment.
  • On UNIX systems, the keyfile must not have group or world permissions.
  • ...3 more annotations...
  • Copy the keyfile to each server hosting the replica set members.
  • the user running the mongod instances is the owner of the file and can access the keyfile.
  • For each member in the replica set, start the mongod with either the security.keyFile configuration file setting or the --keyFile command-line option.
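
A minimal sketch of those steps; the keyfile path, ownership, and replica set name are illustrative assumptions:

      # a sketch only; path, ownership, and replica set name are assumed
      openssl rand -base64 756 > /etc/mongod.key
      chmod 400 /etc/mongod.key                    # no group or world permissions
      chown mongodb:mongodb /etc/mongod.key        # owned by the user running mongod

      # copy the keyfile to every member, then start each mongod with it
      mongod --replSet rs0 --keyFile /etc/mongod.key --bind_ip localhost,mongo1.example.com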
張 旭

Service | Kubernetes - 0 views

  • Each Pod gets its own IP address
  • Pods are nonpermanent resources.
  • Kubernetes Pods are created and destroyed to match the state of your cluster
  • ...23 more annotations...
  • In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service).
  • The set of Pods targeted by a Service is usually determined by a selector
  • If you're able to use Kubernetes APIs for service discovery in your application, you can query the API server for Endpoints, that get updated whenever the set of Pods in a Service changes.
  • A Service in Kubernetes is a REST object, similar to a Pod.
  • The name of a Service object must be a valid DNS label name
  • Kubernetes assigns this Service an IP address (sometimes called the "cluster IP"), which is used by the Service proxies
  • A Service can map any incoming port to a targetPort. By default and for convenience, the targetPort is set to the same value as the port field.
  • The default protocol for Services is TCP
  • As many Services need to expose more than one port, Kubernetes supports multiple port definitions on a Service object. Each port definition can have the same protocol, or a different one.
  • Because this Service has no selector, the corresponding Endpoints object is not created automatically. You can manually map the Service to the network address and port where it's running, by adding an Endpoints object manually
  • Endpoint IP addresses cannot be the cluster IPs of other Kubernetes Services
  • Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP
  • ClusterIP: Exposes the Service on a cluster-internal IP.
  • NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
  • LoadBalancer: Exposes the Service externally using a cloud provider's load balancer
  • ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
  • You can also use Ingress to expose your Service. Ingress is not a Service type, but it acts as the entry point for your cluster.
  • If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767).
  • The default for --nodeport-addresses is an empty list. This means that kube-proxy should consider all available network interfaces for NodePort.
  • you need to take care of possible port collisions yourself. You also have to use a valid port number, one that's inside the range configured for NodePort use.
  • Service is visible as <NodeIP>:spec.ports[*].nodePort and .spec.clusterIP:spec.ports[*].port
  • Choosing this value makes the Service only reachable from within the cluster.
  • NodePort: Exposes the Service on each Node's IP at a static port
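
A minimal NodePort Service sketch matching the port notes directly above; the selector, ports, and nodePort value are assumptions:

      apiVersion: v1
      kind: Service
      metadata:
        name: my-service
      spec:
        type: NodePort
        selector:
          app: my-app               # Pods labelled app=my-app back this Service
        ports:
          - protocol: TCP
            port: 80                # cluster-internal port (clusterIP:port)
            targetPort: 8080        # container port on the Pods
            nodePort: 30080         # must fall inside --service-node-port-range (default 30000-32767)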
張 旭

The for_each Meta-Argument - Configuration Language | Terraform | HashiCorp Developer - 0 views

  • A given resource or module block cannot use both count and for_each
  • The for_each meta-argument accepts a map or a set of strings, and creates an instance for each item in that map or set
  • each.key — The map key (or set member) corresponding to this instance.
  • ...10 more annotations...
  • each.value — The map value corresponding to this instance. (If a set was provided, this is the same as each.key.)
  • for_each keys cannot be the result of (or rely on the result of) impure functions, including uuid, bcrypt, or timestamp, as their evaluation is deferred during the main evaluation step.
  • The value used in for_each is used to identify the resource instance and will always be disclosed in UI output, which is why sensitive values are not allowed.
  • if you would like to call keys(local.map), where local.map is an object with sensitive values (but non-sensitive keys), you can create a value to pass to for_each with toset([for k,v in local.map : k]).
  • for_each can't refer to any resource attributes that aren't known until after a configuration is applied (such as a unique ID generated by the remote API when an object is created).
  • The for_each argument does not implicitly convert lists or tuples to sets.
  • Transform a multi-level nested structure into a flat list by using nested for expressions with the flatten function.
  • Instances are identified by a map key (or set member) from the value provided to for_each
  • Within nested provisioner or connection blocks, the special self object refers to the current resource instance, not the resource block as a whole.
  • Conversion from list to set discards the ordering of the items in the list and removes any duplicate elements.
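
A minimal for_each sketch using the hashicorp/local provider; the map keys and file contents are assumptions:

      resource "local_file" "greeting" {
        for_each = {
          en = "hello"
          fr = "bonjour"
        }

        filename = "${path.module}/${each.key}.txt"   # each.key is the map key ("en", "fr")
        content  = each.value                         # each.value is the map value
      }

      # instances are addressed by key, e.g. local_file.greeting["en"]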
張 旭

Active Record Validations - Ruby on Rails Guides - 0 views

  • validates :name, presence: true
  • Validations are used to ensure that only valid data is saved into your database
  • Model-level validations are the best way to ensure that only valid data is saved into your database.
  • ...117 more annotations...
  • native database constraints
  • client-side validations
  • controller-level validations
  • Database constraints and/or stored procedures make the validation mechanisms database-dependent and can make testing and maintenance more difficult
  • Client-side validations can be useful, but are generally unreliable
  • combined with other techniques, client-side validation can be a convenient way to provide users with immediate feedback
  • it's a good idea to keep your controllers skinny
  • model-level validations are the most appropriate in most circumstances.
  • Active Record uses the new_record? instance method to determine whether an object is already in the database or not.
  • Creating and saving a new record will send an SQL INSERT operation to the database. Updating an existing record will send an SQL UPDATE operation instead. Validations are typically run before these commands are sent to the database
  • The bang versions (e.g. save!) raise an exception if the record is invalid.
  • save and update return false
  • create just returns the object
  • skip validations, and will save the object to the database regardless of its validity.
  • be used with caution
  • update_all
  • save also has the ability to skip validations if passed validate: false as argument.
  • save(validate: false)
  • valid? triggers your validations and returns true if no errors
  • After Active Record has performed validations, any errors found can be accessed through the errors.messages instance method
  • By definition, an object is valid if this collection is empty after running validations.
  • validations are not run when using new.
  • invalid? is simply the inverse of valid?.
  • To verify whether or not a particular attribute of an object is valid, you can use errors[:attribute].
  • only useful after validations have been run
  • Every time a validation fails, an error message is added to the object's errors collection,
  • All of them accept the :on and :message options, which define when the validation should be run and what message should be added to the errors collection if it fails, respectively.
  • validates that a checkbox on the user interface was checked when a form was submitted.
  • agree to your application's terms of service
  • 'acceptance' does not need to be recorded anywhere in your database (if you don't have a field for it, the helper will just create a virtual attribute).
  • It defaults to "1" and can be easily changed.
  • use this helper when your model has associations with other models and they also need to be validated
  • valid? will be called upon each one of the associated objects.
  • work with all of the association types
  • Don't use validates_associated on both ends of your associations.
    • 張 旭
       
      For validating associated objects, enabling the validation on one side of the association is enough!
  • each associated object will contain its own errors collection
  • errors do not bubble up to the calling model
  • when you have two text fields that should receive exactly the same content
  • This validation creates a virtual attribute whose name is the name of the field that has to be confirmed with "_confirmation" appended.
  • To require confirmation, make sure to add a presence check for the confirmation attribute
  • this set can be any enumerable object.
  • The exclusion helper has an option :in that receives the set of values that will not be accepted for the validated attributes.
  • :in option has an alias called :within
  • validates the attributes' values by testing whether they match a given regular expression, which is specified using the :with option.
  • attribute does not match the regular expression by using the :without option.
  • validates that the attributes' values are included in a given set
  • :in option has an alias called :within
  • specify length constraints
  • :minimum
  • :maximum
  • :in (or :within)
  • :is - The attribute length must be equal to the given value.
  • :wrong_length, :too_long, and :too_short options and %{count} as a placeholder for the number corresponding to the length constraint being used.
  • split the value in a different way using the :tokenizer option:
    • 張 旭
       
      You can supply your own way of splitting the value before its length is counted.
  • validates that your attributes have only numeric values
  • By default, it will match an optional sign followed by an integral or floating point number.
  • set :only_integer to true.
  • allows a trailing newline character.
  • :greater_than
  • :greater_than_or_equal_to
  • :equal_to
  • :less_than
  • :less_than_or_equal_to
  • :odd - Specifies the value must be an odd number if set to true.
  • :even - Specifies the value must be an even number if set to true.
  • validates that the specified attributes are not empty
  • if the value is either nil or a blank string
  • validate associated records whose presence is required, you must specify the :inverse_of option for the association
  • inverse_of
  • an association is present, you'll need to test whether the associated object itself is present, and not the foreign key used to map the association
  • false.blank? is true
  • validate the presence of a boolean field
  • ensure the value will NOT be nil
  • validates that the specified attributes are absent
  • not either nil or a blank string
  • be sure that an association is absent
  • false.present? is false
  • validate the absence of a boolean field you should use validates :field_name, exclusion: { in: [true, false] }.
  • validates that the attribute's value is unique right before the object gets saved
  • a :scope option that you can use to specify other attributes that are used to limit the uniqueness check
  • a :case_sensitive option that you can use to define whether the uniqueness constraint will be case sensitive or not.
  • There is no default error message for validates_with.
  • To implement the validate method, you must have a record parameter defined, which is the record to be validated.
  • the validator will be initialized only once for the whole application life cycle, and not on each validation run, so be careful about using instance variables inside it.
  • passes the record to a separate class for validation
  • use a plain old Ruby object
  • validates attributes against a block
  • The block receives the record, the attribute's name and the attribute's value. You can do anything you like to check for valid data within the block
  • will let validation pass if the attribute's value is blank?, like nil or an empty string
  • the :message option lets you specify the message that will be added to the errors collection when validation fails
  • skips the validation when the value being validated is nil
  • specify when the validation should happen
  • raise ActiveModel::StrictValidationFailed when the object is invalid
  • You can do that by using the :if and :unless options, which can take a symbol, a string, a Proc or an Array.
  • use the :if option when you want to specify when the validation should happen
  • using eval and needs to contain valid Ruby code.
  • Using a Proc object gives you the ability to write an inline condition instead of a separate method
  • have multiple validations use one condition, it can be easily achieved using with_options.
  • implement a validate method which takes a record as an argument and performs the validation on it
  • validates_with method
  • implement a validate_each method which takes three arguments: record, attribute, and value
  • combine standard validations with your own custom validators.
  • validate :expiration_date_cannot_be_in_the_past, :discount_cannot_be_greater_than_total_value (custom validation methods; see the sketch after this list)
  • By default such validations will run every time you call valid?
  • errors[] is used when you want to check the error messages for a specific attribute.
  • Returns an instance of the class ActiveModel::Errors containing all errors.
  • lets you manually add messages that are related to particular attributes
  • using []= setter
  • errors[:base] is an array, you can simply add a string to it and it will be used as an error message.
  • use this method when you want to say that the object is invalid, no matter the values of its attributes.
  • clear all the messages in the errors collection
  • calling errors.clear upon an invalid object won't actually make it valid: the errors collection will now be empty, but the next time you call valid? or any method that tries to save this object to the database, the validations will run again.
  • the total number of error messages for the object.
  • .errors.full_messages.each
  • .field_with_errors
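
A short sketch pulling together several of the helpers quoted above (presence, format, numericality, a custom validation method, and the errors collection); the model and attribute names are assumptions:

      # a sketch only; Order, its columns, and the messages shown are assumed
      class Order < ApplicationRecord
        validates :customer_email, presence: true,
                                   format: { with: /\A[^@\s]+@[^@\s]+\z/ }
        validates :quantity, numericality: { only_integer: true, greater_than: 0 }
        validate  :expiration_date_cannot_be_in_the_past

        private

        def expiration_date_cannot_be_in_the_past
          if expiration_date.present? && expiration_date < Date.current
            errors.add(:expiration_date, "can't be in the past")
          end
        end
      end

      order = Order.new(quantity: 0)
      order.valid?                 # => false; runs every validation
      order.errors[:quantity]      # => ["must be greater than 0"]
      order.errors.full_messages   # => human-readable messages for all attributes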
張 旭

Managing files | Django documentation | Django - 0 views

  • By default, Django stores files locally, using the MEDIA_ROOT and MEDIA_URL settings.
  • use a FileField or ImageField, Django provides a set of APIs you can use to deal with that file.
  • Behind the scenes, Django delegates decisions about how and where to store files to a file storage system.
  • ...1 more annotation...
  • Django uses a django.core.files.File instance any time it needs to represent a file.
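
A minimal sketch of the FileField/ImageField API described above; the model and upload path are assumptions, and ImageField additionally requires Pillow:

      # a sketch only; Car, its fields, and the upload path are assumed
      from django.db import models

      class Car(models.Model):
          name = models.CharField(max_length=100)
          photo = models.ImageField(upload_to="cars/")   # saved under MEDIA_ROOT/cars/

      # car.photo.url is built from MEDIA_URL; car.photo.path from MEDIA_ROOT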
張 旭

DNS Records: an Introduction - 0 views

  • reading from right to left
  • top-level domain, or TLD
  • first-level subdomains plus their TLDs (example.com) are referred to as “domains.”
  • ...37 more annotations...
  • Name servers host a domain’s DNS information in a text file called the zone file
  • Start of Authority (SOA) records
  • You’ll want to specify at least two name servers. That way, if one of them is down, the next one can continue to serve your DNS information.
  • Every domain’s zone file contains the admin’s email address, the name servers, and the DNS records.
  • a zone file, which lists domains and their corresponding IP addresses (and a few other things)
  • TLD nameserver
  • ISPs cache a lot of DNS information after they’ve looked it up the first time
  • Usually caching is a good thing, but it can be a problem if you’ve recently made a change to your DNS information
  • An A record matches up a domain (or subdomain) to an IP address
  • point different subdomains to different IP addresses
  • An AAAA record is just like an A record, but for IPv6 IP addresses.
  • An AXFR record is a type of DNS record used for DNS replication
  • used on a slave DNS server to replicate the zone file from a master DNS server
  • DNS Certification Authority Authorization uses DNS to allow the holder of a domain to specify which certificate authorities are allowed to issue certificates for that domain.
  • A CNAME record or Canonical Name record matches up a domain (or subdomain) to a different domain.
  • You should not use a CNAME record for a domain that gets email, because some mail servers handle mail oddly for domains with CNAME records
  • the target domain for a CNAME record should have a normal A-record resolution
  • a CNAME record does not function the same way as a URL redirect
  • A DKIM record or domain keys identified mail record displays the public key for authenticating messages that have been signed with the DKIM protocol
  • An MX record or mail exchange record sets the mail delivery destination for a domain (or subdomain).
  • Ideally, an MX record should point to a domain that is also the hostname for its server.
  • Your MX records don’t necessarily have to point to your Linode. If you’re using a third-party mail service, like Google Apps, you should use the MX records they provide.
  • Lower numbers have a higher priority
  • NS records or name server records set the nameservers for a domain (or subdomain).
  • You can also set up different nameservers for any of your subdomains.
  • The order of NS records does not matter; DNS requests are sent randomly to the different servers, and if one host fails to respond, another one will be queried.
  • A PTR record or pointer record matches up an IP address to a domain (or subdomain), allowing reverse DNS queries to function.
  • PTR records are usually set with your hosting provider. They are not part of your domain’s zone file.
  • An SOA record or Start of Authority record labels a zone file with the name of the host where it was originally created.
  • The administrative email address is written with a period (.) instead of an at symbol (@).
  • The single nameserver mentioned in the SOA record is considered the primary master for the purposes of Dynamic DNS and is the server where zone file changes get made before they are propagated to all other nameservers.
  • An SPF record or Sender Policy Framework record lists the designated mail servers for a domain (or subdomain).
  • An SPF record for your domain tells other receiving mail servers which outgoing server(s) are valid sources of email, so they can reject spoofed email from your domain that has originated from unauthorized servers.
  • Your SPF record will have a domain or subdomain, type (which is TXT, or SPF if your name server supports it), and text (which starts with “v=spf1” and contains the SPF record settings).
  • An SRV record or service record matches up a specific service that runs on your domain (or subdomain) to a target domain.
  • A TXT record or text record provides information about the domain in question to other resources on the Internet.
  • One common use of the TXT record is to create an SPF record on nameservers that don’t natively support SPF.
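
A minimal zone-file sketch showing several of the record types above in one place; the domain, addresses, and serial number are illustrative assumptions:

      ; a sketch only; domain, addresses, and serial are assumed
      $TTL 86400
      example.com.      IN  SOA   ns1.example.com. admin.example.com. (
                                  2024010101 ; serial
                                  14400      ; refresh
                                  3600       ; retry
                                  604800     ; expire
                                  300 )      ; negative-cache TTL
      example.com.      IN  NS    ns1.example.com.
      example.com.      IN  NS    ns2.example.com.
      example.com.      IN  A     203.0.113.10
      www.example.com.  IN  CNAME example.com.
      example.com.      IN  MX    10 mail.example.com.
      mail.example.com. IN  A     203.0.113.20
      example.com.      IN  TXT   "v=spf1 mx -all"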