
Larvata group: items tagged "design"

crazylion lee

dbpatterns - create, share, explore database patterns - 0 views

  •  
    "Dbpatterns is a service that allows you to create, share, explore database models on the web."
crazylion lee

Desktop Neo - rethinking the desktop interface for productivity. - 0 views

  •  
    "rethinking the desktop interface for productivity."
crazylion lee

Notes / Better code with an inversion of control container - Icelab, an Australian desi... - 0 views

  •  
    "Better code with an inversion of control container"
張 旭

Containers Vs. Config Management - 0 views

  • With configuration management systems, you write code that describes how you want some component of your systems to be installed and configured, and when you execute the code on your server, it should end up in the desired state.
  • building a hosting platform that is capable of a lot of things that system administrators used to do manually
  • if you have to build modules on deployment via bundler or npm or similar, it can be incredibly slow to run, taking minutes or longer in some cases
  • pulling from git is slow.
  • deploying with configuration management tools is a pain in the ass and error prone.
  • Support for containers has existed in the Linux kernel since version 2.6.24 when cgroup support was added
  • All of the logic that used to live in your cookbooks/playbooks/manifests/etc now lives in a Dockerfile that resides directly in the repository for the application it is designed to build
  • All of the dependencies of the application are bundled with the container which means no need to build on the fly on every server during deployment.
  • Containers bring standardization which allows for systems like centralized logging, monitoring, and metrics to easily snap into place no matter what is running in the container.
  • Dockerfiles do not give you the same level of control over configuration as your application transitions between environments, like dev, staging, and production.
  • You may even need to have different Dockerfiles for each environment in certain cases.
  • configuration management systems now have hooks for docker integration.
  • Config management will only be used to install Docker, an orchestration system, configure PAM/SSH auth, and tune OS sysctl values.
  •  
    "With configuration management systems, you write code that describes how you want some component of your systems to be installed and configured, and when you execute the code on your server, it should end up in the desired state."
張 旭

Active Record Migrations - Ruby on Rails Guides - 0 views

    • 張 旭
       
      The migration that corresponds to belongs_to and has_many associations.
    • 張 旭
       
      What is the corresponding migration for has_and_belongs_to_many?
  • add_column and remove_column
  • allowing your schema and changes to be database independent.
  • each migration as being a new 'version' of the database
  • each migration modifies it to add or remove tables, columns, or entries
  • Active Record will also update your db/schema.rb file to match the up-to-date structure of your database.
  • A primary key column called id will also be added implicitly, as it's the default primary key for all Active Record models
  • roll this migration back, it will remove the table
  • timestamps macro adds two columns, created_at and updated_at
  • On databases that support transactions with statements that change the schema, migrations are wrapped in a transaction
  • reversible
  • use up and down instead of change
  • Migrations are stored as files in the db/migrate directory, one for each migration class.
  • a UTC timestamp identifying
  • Rails uses this timestamp to determine which migration should be run and in what order
  • "AddXXXToYYY" or "RemoveXXXFromYYY"
  • use a Ruby DSL
  • column type as references
  • part_number:string:index
  • a migration to remove a column
  • "CreateXXX"
  • change_column_null
  • AddUserRefToProducts
  • :references
  • produce join tables if JoinTable is part of the name
  • CreateJoinTable
  • The model and scaffold generators will create migrations appropriate for adding a new model.
  • enclosed by curly braces and follow the field type
  • create_table
  • By default, create_table will create a primary key called id
  • add an index on the new column
  • when using MySQL, the default is ENGINE=InnoDB
  • create_join_table creates an HABTM (has and belongs to many) join table
  • To customize the name of the table, provide a :table_name option:
  • create_join_table also accepts a block
  • change_table, used for changing existing tables
  • remove
  • rename
  • add_column
  • change_column
  • remove_column
  • change_column_default
  • place an SQL fragment in the :options option.
  • limit
  • precision
  • scale
  • polymorphic
  • default
  • index
  • add_foreign_key
  • Active Record only supports single column foreign keys.
  • use the old style of migration using up and down methods instead of the change method.
  • .connection.execute
  • change_table is also reversible, as long as the block does not call change, change_default or remove.
  • remove_column is reversible if you supply the column type as the third argument
  • Complex migrations may require processing that Active Record doesn't know how to reverse
  • reversible
  • Using reversible will ensure that the instructions are executed in the right order too.
  • add_column, add_foreign_key, add_index, add_reference, add_timestamps, change_column_default (must supply a :from and :to option), change_column_null, create_join_table, create_table, disable_extension, drop_join_table, drop_table (must supply a block), enable_extension, remove_column (must supply a type), remove_foreign_key (must supply a second table), remove_index, remove_reference, remove_timestamps, rename_column, rename_index, rename_table
  • :column_options option
  • have the option :null set to false by default
  • By default, the name of the join table comes from the union of the first two arguments provided to create_join_table
  • in alphabetical order
  • change_column command is irreversible.
    • 張 旭
       
      The referencing model comes first, the referenced one second: A references B.
  • If the column names can not be derived from the table names, you can use the :column and :primary_key options.
  • figure out the column name
  • foreign key for a specific column
  • foreign key by name
    • 張 旭
       
      I don't quite get the difference between the :column and :name options; they seem basically the same.
  • Active Record knows how to reverse the migration automatically
    • 張 旭
       
      Using the built-in methods makes it easier for Rails to roll back automatically.
    • 張 旭
       
      Except for a few special change_* and remove_* methods.
  • should use reversible or write the up and down methods instead of using the change method
  • If your migration is irreversible, you should raise ActiveRecord::IrreversibleMigration from your down method.
  • DontUseConstraintForZipcodeValidationMigration
  • rails db:migrate
  • the db:migrate task also invokes the db:schema:dump task, which will update your db/schema.rb file to match the structure of your database.
  • specify a target version
  • all migrations up to and including 20080906120000
  • run the down method on all the migrations down to, but not including, 20080906120000
  • rails db:rollback
  • db:migrate:redo task is a shortcut for doing a rollback and then migrating back up again
    • 張 旭
       
      Older versions still use rake!
  • STEP parameter
  • db:setup task will create the database, load the schema and initialize it with the seed data
  • db:reset task will drop the database and set it up again. This is functionally equivalent to rails db:drop db:setup.
  • run a specific migration up or down, the db:migrate:up and db:migrate:down
  • the RAILS_ENV environment variable
  • db:migrate VERBOSE=false will suppress all output.
  • If you have already run the migration, then you cannot just edit the migration and run the migration again: Rails thinks it has already run the migration and so will do nothing when you run rails db:migrate.
  • must rollback the migration (for example with bin/rails db:rollback), edit your migration and then run rails db:migrate to run the corrected version.
  • editing existing migrations is not a good idea.
  • should write a new migration that performs the changes you require
  • revert method can be helpful when writing a new migration to undo previous migrations in whole or in part
  • require_relative
  • revert
  • They are not designed to be edited, they just represent the current state of the database.
  • What are Schema Files for?
  • Schema files are also useful if you want a quick look at what attributes an Active Record object has
  • annotate_models gem automatically adds and updates comments at the top of each model summarizing the schema if you desire that functionality.
  • database-independent
  • multiple databases
  • db/schema.rb cannot express database specific items such as triggers, stored procedures or check constraints
  • you can execute custom SQL statements, the schema dumper cannot reconstitute those statements from the database
  • db:structure:dump
    • 張 旭
       
      The price of a database-independent schema is that some database-specific features, such as triggers, cannot be expressed. In short, if your migrations contain raw SQL, set the schema dumper to :sql instead of the default :ruby.
  • set in config/application.rb by the config.active_record.schema_format setting, which may be either :sql or :ruby.
  • check them into source control.
  • db/schema.rb contains the current version number of the database
  • Validations such as validates :foreign_key, uniqueness: true are one way in which models can enforce data integrity
  • The :dependent option on associations allows models to automatically destroy child objects when the parent is destroyed.
  • Migrations can also be used to add or modify data
  • Initial
  • To add initial data after a database is created, Rails has a built-in 'seeds' feature that makes the process quick and easy.
  • db/seeds.rb
  • rails db:seed
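
  A minimal sketch tying several of the annotated methods together (model, column names, and Rails version are illustrative; in a real project each class would live in its own timestamped file under db/migrate):

      class CreateProducts < ActiveRecord::Migration[5.2]
        def change
          create_table :products do |t|                # an `id` primary key is added implicitly
            t.string     :part_number, index: true
            t.references :supplier, foreign_key: true  # pairs with `belongs_to :supplier` on the model
            t.timestamps                               # created_at and updated_at
          end
        end
      end

      class AddPriceToProducts < ActiveRecord::Migration[5.2]
        def change
          add_column :products, :price, :decimal, precision: 8, scale: 2
          change_column_default :products, :price, from: nil, to: 0   # :from/:to keep it reversible
        end
      end

  Run with rails db:migrate; rails db:rollback reverses the most recent migration.
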
張 旭

pre-commit - 0 views

  • a multi-language package manager for pre-commit hooks
  • pre-commit is specifically designed to not require root access
  • We copied and pasted unwieldy bash scripts from project to project and had to manually change the hooks to work for different project structures.
  • adding pre-commit plugins to your project is done with the .pre-commit-config.yaml configuration file.
  • The pre-commit config file describes what repositories and hooks are installed.
  • This configuration says to download the pre-commit-hooks project and run its trailing-whitespace hook
  •  
    "a multi-language package manager for pre-commit hooks"
張 旭

Boosting your kubectl productivity ♦︎ Learnk8s - 0 views

  • kubectl is your cockpit to control Kubernetes.
  • kubectl is a client for the Kubernetes API
  • Kubernetes API is an HTTP REST API.
  • This API is the real Kubernetes user interface.
  • Kubernetes is fully controlled through this API
  • every Kubernetes operation is exposed as an API endpoint and can be executed by an HTTP request to this endpoint.
  • the main job of kubectl is to carry out HTTP requests to the Kubernetes API
  • Kubernetes maintains an internal state of resources, and all Kubernetes operations are CRUD operations on these resources.
  • Kubernetes is a fully resource-centred system
  • Kubernetes API reference is organised as a list of resource types with their associated operations.
  • This is how kubectl works for all commands that interact with the Kubernetes cluster.
  • kubectl simply makes HTTP requests to the appropriate Kubernetes API endpoints.
  • it's totally possible to control Kubernetes with a tool like curl by manually issuing HTTP requests to the Kubernetes API.
  • Kubernetes consists of a set of independent components that run as separate processes on the nodes of a cluster.
  • components on the master nodes
  • Storage backend: stores resource definitions (usually etcd is used)
  • API server: provides Kubernetes API and manages storage backend
  • Controller manager: ensures resource statuses match specifications
  • Scheduler: schedules Pods to worker nodes
  • component on the worker nodes
  • Kubelet: manages execution of containers on a worker node
  • triggers the ReplicaSet controller, which is a sub-process of the controller manager.
  • the scheduler, which watches for Pod definitions that are not yet scheduled to a worker node.
  • creating and updating resources in the storage backend on the master node.
  • The kubelet of the worker node your ReplicaSet Pods have been scheduled to instructs the configured container runtime (which may be Docker) to download the required container images and run the containers.
  • Kubernetes components (except the API server and the storage backend) work by watching for resource changes in the storage backend and manipulating resources in the storage backend.
  • However, these components do not access the storage backend directly, but only through the Kubernetes API.
    • 張 旭
       
      Impressive: the components all communicate with each other through API calls, which is good microservice behaviour.
  • double usage of the Kubernetes API for internal components as well as for external users is a fundamental design concept of Kubernetes.
  • All other Kubernetes components and users read, watch, and manipulate the state (i.e. resources) of Kubernetes through the Kubernetes API
  • The storage backend stores the state (i.e. resources) of Kubernetes.
  • command completion is a shell feature that works by the means of a completion script.
  • A completion script is a shell script that defines the completion behaviour for a specific command. Sourcing a completion script enables completion for the corresponding command.
  • kubectl completion zsh
  • /etc/bash_completion.d directory (create it, if it doesn't exist)
  • source <(kubectl completion bash)
  • source <(kubectl completion zsh)
  • autoload -Uz compinit; compinit
  • the API reference, which contains the full specifications of all resources.
  • kubectl api-resources
  • displays the resource names in their plural form (e.g. deployments instead of deployment). It also displays the shortname (e.g. deploy) for those resources that have one. Don't worry about these differences. All of these name variants are equivalent for kubectl.
  • .spec
  • custom columns output format comes in. It lets you freely define the columns and the data to display in them. You can choose any field of a resource to be displayed as a separate column in the output
  • kubectl get pods -o custom-columns='NAME:metadata.name,NODE:spec.nodeName'
  • kubectl explain pod.spec.
  • kubectl explain pod.metadata.
  • browse the resource specifications and try it out with any fields you like!
  • JSONPath is a language to extract data from JSON documents (it is similar to XPath for XML).
  • with kubectl explain, only a subset of the JSONPath capabilities is supported
  • Many fields of Kubernetes resources are lists, and this operator allows you to select items of these lists. It is often used with a wildcard as [*] to select all items of the list.
  • kubectl get pods -o custom-columns='NAME:metadata.name,IMAGES:spec.containers[*].image'
  • a Pod may contain more than one container.
  • The availability zones for each node are obtained through the special failure-domain.beta.kubernetes.io/zone label.
  • kubectl get nodes -o yaml kubectl get nodes -o json
  • The default kubeconfig file is ~/.kube/config
  • with multiple clusters, then you have connection parameters for multiple clusters configured in your kubeconfig file.
  • Within a cluster, you can set up multiple namespaces (a namespace is a kind of "virtual cluster" within a physical cluster)
  • overwrite the default kubeconfig file with the --kubeconfig option for every kubectl command.
  • Namespace: the namespace to use when connecting to the cluster
  • a one-to-one mapping between clusters and contexts.
  • When kubectl reads a kubeconfig file, it always uses the information from the current context.
  • just change the current context in the kubeconfig file
  • to switch to another namespace in the same cluster, you can change the value of the namespace element of the current context
  • kubectl also provides the --cluster, --user, --namespace, and --context options that allow you to overwrite individual elements and the current context itself, regardless of what is set in the kubeconfig file.
  • for switching between clusters and namespaces is kubectx.
  • kubectl config get-contexts
  • just have to download the shell scripts named kubectl-ctx and kubectl-ns to any directory in your PATH and make them executable (for example, with chmod +x)
  • kubectl proxy
  • kubectl get roles
  • kubectl get pod
  • Kubectl plugins are distributed as simple executable files with a name of the form kubectl-x. The prefix kubectl- is mandatory,
  • To install a plugin, you just have to copy the kubectl-x file to any directory in your PATH and make it executable (for example, with chmod +x)
  • krew itself is a kubectl plugin
  • check out the kubectl-plugins GitHub topic
  • The executable can be of any type, a Bash script, a compiled Go program, a Python script, it really doesn't matter. The only requirement is that it can be directly executed by the operating system.
  • kubectl plugins can be written in any programming or scripting language.
  • you can write more sophisticated plugins with real programming languages, for example, using a Kubernetes client library. If you use Go, you can also use the cli-runtime library, which exists specifically for writing kubectl plugins.
  • a kubeconfig file consists of a set of contexts
  • changing the current context means changing the cluster, if you have only a single context per cluster.
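
  A few of the commands discussed above, collected as a sketch (context, namespace, and column choices are illustrative):

      # enable completion for the current bash session
      source <(kubectl completion bash)

      # custom columns: one line per Pod with its node and all container images
      kubectl get pods -o custom-columns='NAME:metadata.name,NODE:spec.nodeName,IMAGES:spec.containers[*].image'

      # browse the resource specification interactively
      kubectl explain pod.spec.containers

      # inspect and switch kubeconfig contexts (cluster + user + namespace)
      kubectl config get-contexts
      kubectl config use-context staging-cluster
      kubectl config set-context --current --namespace=team-a
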
張 旭

phusion/baseimage-docker - 1 views

    • 張 旭
       
      By default, Docker runs the given COMMAND as PID 1 and stops the container as soon as that command finishes, so no other daemons get to run; baseimage solves this problem.
    • crazylion lee
       
      Nicely done!
  • docker exec
  • Through SSH
  • docker exec -t -i YOUR-CONTAINER-ID bash -l
  • Login to the container
  • Baseimage-docker only advocates running multiple OS processes inside a single container.
  • Password and challenge-response authentication are disabled by default. Only key authentication is allowed.
  • A tool for running a command as another user
  • The Docker developers advocate the philosophy of running a single logical service per container. A logical service can consist of multiple OS processes.
  • All syslog messages are forwarded to "docker logs".
  • Baseimage-docker advocates running multiple OS processes inside a single container, and a single logical service can consist of multiple OS processes.
  • Baseimage-docker provides tools to encourage running processes as different users
  • sometimes it makes sense to run multiple services in a single container, and sometimes it doesn't.
  • Splitting your logical service into multiple OS processes also makes sense from a security standpoint.
  • using environment variables to pass parameters to containers is very much the "Docker way"
  • Baseimage-docker provides a facility to run a single one-shot command, while solving all of the aforementioned problems
  • the shell script must run the daemon without letting it daemonize/fork it.
  • All executable scripts in /etc/my_init.d, if this directory exists. The scripts are run in lexicographic order.
  • variables will also be passed to all child processes
  • Environment variables on Unix are inherited on a per-process basis
  • there is no good central place for defining environment variables for all applications and services
  • centrally defining environment variables
  • One of the ideas behind Docker is that containers should be stateless, easily restartable, and behave like a black box.
  • a one-shot command in a new container
  • immediately exit after the command exits,
  • However the downside of this approach is that the init system is not started. That is, while invoking COMMAND, important daemons such as cron and syslog are not running. Also, orphaned child processes are not properly reaped, because COMMAND is PID 1.
  • add additional daemons (e.g. your own app) to the image by creating runit entries.
  • Nginx is one such example: it removes all environment variables unless you explicitly instruct it to retain them through the env configuration option.
  • Mechanisms for easily running multiple processes, without violating the Docker philosophy
  • Ubuntu is not designed to be run inside Docker
  • According to the Unix process model, the init process -- PID 1 -- inherits all orphaned child processes and must reap them
  • Syslog-ng seems to be much more stable
  • cron daemon
  • Rotates and compresses logs
  • /sbin/setuser
  • A tool for installing apt packages that automatically cleans up after itself.
  • a single logical service inside a single container
  • A daemon is a program which runs in the background of its system, such as a web server.
  • The shell script must be called run, must be executable, and is to be placed in the directory /etc/service/<NAME>. runsv will switch to the directory and invoke ./run after your container starts.
  • If any script exits with a non-zero exit code, the booting will fail.
  • If your process is started with a shell script, make sure you exec the actual process, otherwise the shell will receive the signal and not your process.
  • any environment variables set with docker run --env or with the ENV command in the Dockerfile, will be picked up by my_init
  • not possible for a child process to change the environment variables of other processes
  • they will not see the environment variables that were originally passed by Docker.
  • We ignore HOME, SHELL, USER and a bunch of other environment variables on purpose, because not ignoring them will break multi-user containers.
  • my_init imports environment variables from the directory /etc/container_environment
  • /etc/container_environment.sh - a dump of the environment variables in Bash format.
  • modify the environment variables in my_init (and therefore the environment variables in all child processes that are spawned after that point in time), by altering the files in /etc/container_environment
  • my_init only activates changes in /etc/container_environment when running startup scripts
  • environment variables don't contain sensitive data, then you can also relax the permissions
  • Syslog messages are forwarded to the console
  • syslog-ng is started separately before the runit supervisor process, and shutdown after runit exits.
  • RUN apt-get update && apt-get upgrade -y -o Dpkg::Options::="--force-confold"
  • /sbin/my_init --skip-startup-files --quiet --
  • By default, no keys are installed, so nobody can login
  • provide a pregenerated, insecure key (PuTTY format)
  • RUN /usr/sbin/enable_insecure_key
  • docker run YOUR_IMAGE /sbin/my_init --enable-insecure-key
  • RUN cat /tmp/your_key.pub >> /root/.ssh/authorized_keys && rm -f /tmp/your_key.pub
  • The default baseimage-docker installs syslog-ng, cron and sshd services during the build process
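
  A minimal runit service sketch following the rules quoted above (daemon name, user, and flags are illustrative); the file goes to /etc/service/myapp/run and must be executable:

      #!/bin/sh
      # runsv switches into /etc/service/myapp and invokes ./run
      # exec so the daemon, not the shell, receives signals, and do not let it daemonize/fork
      exec /sbin/setuser appuser /usr/bin/myapp --no-daemon >> /var/log/myapp.log 2>&1
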
張 旭

Baseimage-docker: A minimal Ubuntu base image modified for Docker-friendliness - 0 views

  • We encourage you to use multiple processes.
  • Baseimage-docker is a special Docker image that is configured for correct use within Docker containers.
  • When your Docker container starts, only the CMD command is run.
  • You're not running them, you're only running your app.
  • You have Ubuntu installed in Docker. The files are there. But that doesn't mean Ubuntu's running as it should.
  • The only processes that will be running inside the container is the CMD command, and all processes that it spawns.
  • A proper Unix system should run all kinds of important system services.
  • Ubuntu is not designed to be run inside Docker
  • When a system is started, the first process in the system is called the init process, with PID 1. The system halts when this process halts.
  • Runit (written in C) is much lighter weight than supervisord (written in Python).
  • Docker runs fine with multiple processes in a container.
  • Baseimage-docker encourages you to run multiple processes through the use of runit.
  • If your init process is your app, then it'll probably only shut down itself, not all the other processes in the container.
  • a Docker container, which is a locked down environment with e.g. no direct access to many kernel resources.
  • Used for service supervision and management.
  • A custom tool for running a command as another user.
  • add additional daemons (e.g. your own app) to the image by creating runit entries.
  • write a small shell script which runs your daemon, and runit will keep it up and running for you, restarting it when it crashes, etc.
  • the shell script must run the daemon without letting it daemonize/fork it.
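
  A sketch of a Dockerfile built on baseimage-docker that adds your own daemon as a runit entry, as described above (image tag, service name, and script path are illustrative):

      FROM phusion/baseimage:0.11                # illustrative tag
      CMD ["/sbin/my_init"]                      # my_init is PID 1: it reaps orphans and starts runit services

      RUN mkdir -p /etc/service/myapp
      COPY myapp-run.sh /etc/service/myapp/run   # the small shell script that execs your daemon
      RUN chmod +x /etc/service/myapp/run

      # clean up APT when done
      RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
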
張 旭

The Twelve-Factor App - 0 views

  • Keep development, staging, and production as similar as possible
  • Developers write code, ops engineers deploy it.
  • The twelve-factor app is designed for continuous deployment by keeping the gap between development and production small
  • Backing services, such as the app’s database, queueing system, or cache, is one area where dev/prod parity is important
  • The twelve-factor developer resists the urge to use different backing services between development and production, even when adapters theoretically abstract away any differences in backing services.
  • declarative provisioning tools such as Chef and Puppet combined with light-weight virtual environments such as Docker and Vagrant allow developers to run local environments which closely approximate production environments.
  • all deploys of the app (developer environments, staging, production) should be using the same type and version of each of the backing services.
  •  
    "as similar as possible "
張 旭

Template Designer Documentation - Jinja2 Documentation (2.10) - 0 views

  • A Jinja template doesn’t need to have a specific extension
  • A Jinja template is simply a text file
  • tags, which control the logic of the template
  • {% ... %} for Statements
  • {{ ... }} for Expressions to print to the template output
  • use a dot (.) to access attributes of a variable
  • the outer double-curly braces are not part of the variable, but the print statement.
  • If you access variables inside tags don’t put the braces around them.
  • If a variable or attribute does not exist, you will get back an undefined value.
  • the default behavior is to evaluate to an empty string if printed or iterated over, and to fail for every other operation.
  • if an object has an item and attribute with the same name. Additionally, the attr() filter only looks up attributes.
  • Variables can be modified by filters. Filters are separated from the variable by a pipe symbol (|) and may have optional arguments in parentheses.
  • Multiple filters can be chained
  • Tests can be used to test a variable against a common expression.
  • add is plus the name of the test after the variable.
  • to find out if a variable is defined, you can do name is defined, which will then return true or false depending on whether name is defined in the current template context.
  • strip whitespace in templates by hand. If you add a minus sign (-) to the start or end of a block (e.g. a For tag), a comment, or a variable expression, the whitespaces before or after that block will be removed
  • not add whitespace between the tag and the minus sign
  • mark a block raw
  • Template inheritance allows you to build a base “skeleton” template that contains all the common elements of your site and defines blocks that child templates can override.
  • The {% extends %} tag is the key here. It tells the template engine that this template “extends” another template.
  • access templates in subdirectories with a slash
  • can’t define multiple {% block %} tags with the same name in the same template
  • use the special self variable and call the block with that name
  • self.title()
  • super()
  • put the name of the block after the end tag for better readability
  • if the block is replaced by a child template, a variable would appear that was not defined in the block or passed to the context.
  • setting the block to “scoped” by adding the scoped modifier to a block declaration
  • If you have a variable that may include any of the following chars (>, <, &, or ") you SHOULD escape it unless the variable contains well-formed and trusted HTML.
  • Jinja2 functions (macros, super, self.BLOCKNAME) always return template data that is marked as safe.
  • With the default syntax, control structures appear inside {% ... %} blocks.
  • the dictsort filter
  • loop.cycle
  • Unlike in Python, it’s not possible to break or continue in a loop
  • use loops recursively
  • add the recursive modifier to the loop definition and call the loop variable with the new iterable where you want to recurse.
  • The loop variable always refers to the closest (innermost) loop.
  • whether the value changed at all,
  • use it to test if a variable is defined, not empty and not false
  • Macros are comparable with functions in regular programming languages.
  • If a macro name starts with an underscore, it’s not exported and can’t be imported.
  • pass a macro to another macro
  • caller()
  • a single trailing newline is stripped if present
  • other whitespace (spaces, tabs, newlines etc.) is returned unchanged
  • a block tag works in “both” directions. That is, a block tag doesn’t just provide a placeholder to fill - it also defines the content that fills the placeholder in the parent.
  • Python dicts are not ordered
  • caller(user)
  • call(user)
  • This is a simple dialog rendered by using a macro and a call block.
  • Filter sections allow you to apply regular Jinja2 filters on a block of template data.
  • Assignments at top level (outside of blocks, macros or loops) are exported from the template like top level macros and can be imported by other templates.
  • using namespace objects which allow propagating of changes across scopes
  • use block assignments to capture the contents of a block into a variable name.
  • The extends tag can be used to extend one template from another.
  • Blocks are used for inheritance and act as both placeholders and replacements at the same time.
  • The include statement is useful to include a template and return the rendered contents of that file into the current namespace
  • Included templates have access to the variables of the active context by default.
  • putting often used code into macros
  • imports are cached and imported templates don’t have access to the current template variables, just the globals by default.
  • Macros and variables starting with one or more underscores are private and cannot be imported.
  • By default, included templates are passed the current context and imported templates are not.
  • imports are often used just as a module that holds macros.
  • Integers and floating point numbers are created by just writing the number down
  • Everything between two brackets is a list.
  • Tuples are like lists that cannot be modified (“immutable”).
  • A dict in Python is a structure that combines keys and values.
  • // Divide two numbers and return the truncated integer result
  • The special constants true, false, and none are indeed lowercase
  • all Jinja identifiers are lowercase
  • (expr) group an expression.
  • The is and in operators support negation using an infix notation
  • in Perform a sequence / mapping containment test.
  • | Applies a filter.
  • ~ Converts all operands into strings and concatenates them.
  • use inline if expressions.
  • always an attribute is returned and items are not looked up.
  • default(value, default_value=u'', boolean=False): If the value is undefined it will return the passed default value, otherwise the value of the variable
  • dictsort(value, case_sensitive=False, by='key', reverse=False): Sort a dict and yield (key, value) pairs.
  • format(value, *args, **kwargs): Apply python string formatting on an object
  • groupby(value, attribute): Group a sequence of objects by a common attribute.
  • grouping by is stored in the grouper attribute and the list contains all the objects that have this grouper in common.
  • indent(s, width=4, first=False, blank=False, indentfirst=None): Return a copy of the string with each line indented by 4 spaces. The first line and blank lines are not indented by default.
  • join(value, d=u'', attribute=None): Return a string which is the concatenation of the strings in the sequence.
  • map(): Applies a filter on a sequence of objects or looks up an attribute.
  • pprint(value, verbose=False): Pretty print a variable. Useful for debugging.
  • reject(): Filters a sequence of objects by applying a test to each object, and rejecting the objects with the test succeeding.
  • replace(s, old, new, count=None): Return a copy of the value with all occurrences of a substring replaced with a new one.
  • round(value, precision=0, method='common'): Round the number to a given precision
  • even if rounded to 0 precision, a float is returned.
  • select(): Filters a sequence of objects by applying a test to each object, and only selecting the objects with the test succeeding.
  • sort(value, reverse=False, case_sensitive=False, attribute=None): Sort an iterable. Per default it sorts ascending; if you pass it true as the first argument it will reverse the sorting.
  • striptags(value): Strip SGML/XML tags and replace adjacent whitespace by one space.
  • tojson(value, indent=None): Dumps a structure to JSON so that it’s safe to use in <script> tags.
  • trim(value): Strip leading and trailing whitespace.
  • unique(value, case_sensitive=False, attribute=None): Returns a list of unique items from the given iterable
  • urlize(value, trim_url_limit=None, nofollow=False, target=None, rel=None): Converts URLs in plain text into clickable links.
  • defined(value): Return true if the variable is defined
  • in(value, seq): Check if value is in seq.
  • mapping(value): Return true if the object is a mapping (dict etc.).
  • number(value): Return true if the variable is a number.
  • sameas(value, other): Check if an object points to the same memory address as another object
  • undefined(value): Like defined() but the other way round.
  • A joiner is passed a string and will return that string every time it’s called, except the first time (in which case it returns an empty string).
  • namespace(...): Creates a new container that allows attribute assignment using the {% set %} tag
  • The with statement makes it possible to create a new inner scope. Variables set within this scope are not visible outside of the scope.
  • activate and deactivate the autoescaping from within the templates
  • With both trim_blocks and lstrip_blocks enabled, you can put block tags on their own lines, and the entire block line will be removed when rendered, preserving the whitespace of the contents
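
  A small sketch combining several of the constructs above: inheritance, blocks, filters, and a loop (template and variable names are illustrative):

      {# base.html #}
      <title>{% block title %}My Site{% endblock %}</title>
      <body>{% block content %}{% endblock %}</body>

      {# child.html #}
      {% extends "base.html" %}
      {% block title %}{{ page_title | default('Untitled') | title }}{% endblock %}
      {% block content %}
        <ul>
        {%- for user in users if user.active %}
          <li>{{ loop.index }}. {{ user.name }}</li>
        {%- endfor %}
        </ul>
      {% endblock %}
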
張 旭

DNS Records: an Introduction - 0 views

  • reading from right to left
  • top-level domain, or TLD
  • first-level subdomains plus their TLDs (example.com) are referred to as “domains.”
  • Name servers host a domain’s DNS information in a text file called the zone file
  • Start of Authority (SOA) records
  • You’ll want to specify at least two name servers. That way, if one of them is down, the next one can continue to serve your DNS information.
  • Every domain’s zone file contains the admin’s email address, the name servers, and the DNS records.
  • a zone file, which lists domains and their corresponding IP addresses (and a few other things)
  • TLD nameserver
  • ISPs cache a lot of DNS information after they’ve looked it up the first time
  • Usually caching is a good thing, but it can be a problem if you’ve recently made a change to your DNS information
  • An A record matches up a domain (or subdomain) to an IP address
  • point different subdomains to different IP addresses
  • An AAAA record is just like an A record, but for IPv6 IP addresses.
  • An AXFR record is a type of DNS record used for DNS replication
  • used on a slave DNS server to replicate the zone file from a master DNS server
  • DNS Certification Authority Authorization uses DNS to allow the holder of a domain to specify which certificate authorities are allowed to issue certificates for that domain.
  • A CNAME record or Canonical Name record matches up a domain (or subdomain) to a different domain.
  • You should not use a CNAME record for a domain that gets email, because some mail servers handle mail oddly for domains with CNAME records
  • the target domain for a CNAME record should have a normal A-record resolution
  • a CNAME record does not function the same way as a URL redirect
  • A DKIM record or domain keys identified mail record displays the public key for authenticating messages that have been signed with the DKIM protocol
  • An MX record or mail exchange record sets the mail delivery destination for a domain (or subdomain).
  • Ideally, an MX record should point to a domain that is also the hostname for its server.
  • Your MX records don’t necessarily have to point to your Linode. If you’re using a third-party mail service, like Google Apps, you should use the MX records they provide.
  • Lower numbers have a higher priority
  • NS records or name server records set the nameservers for a domain (or subdomain).
  • You can also set up different nameservers for any of your subdomains.
  • The order of NS records does not matter; DNS requests are sent randomly to the different servers, and if one host fails to respond, another one will be queried.
  • A PTR record or pointer record matches up an IP address to a domain (or subdomain), allowing reverse DNS queries to function.
  • PTR records are usually set with your hosting provider. They are not part of your domain’s zone file.
  • An SOA record or Start of Authority record labels a zone file with the name of the host where it was originally created.
  • The administrative email address is written with a period (.) instead of an at symbol (@).
  • The single nameserver mentioned in the SOA record is considered the primary master for the purposes of Dynamic DNS and is the server where zone file changes get made before they are propagated to all other nameservers.
  • An SPF record or Sender Policy Framework record lists the designated mail servers for a domain (or subdomain).
  • An SPF record for your domain tells other receiving mail servers which outgoing server(s) are valid sources of email, so they can reject spoofed email from your domain that has originated from unauthorized servers.
  • Your SPF record will have a domain or subdomain, type (which is TXT, or SPF if your name server supports it), and text (which starts with “v=spf1” and contains the SPF record settings).
  • An SRV record or service record matches up a specific service that runs on your domain (or subdomain) to a target domain.
  • A TXT record or text record provides information about the domain in question to other resources on the Internet.
  • One common use of the TXT record is to create an SPF record on nameservers that don’t natively support SPF.
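
  A condensed zone-file sketch showing several of the record types described above (domain, addresses, and mail host are illustrative; note the admin email admin@example.com written with a period in the SOA record):

      example.com.       IN  SOA    ns1.example.com. admin.example.com. ( 2019010101 7200 3600 1209600 300 )
      example.com.       IN  NS     ns1.example.com.
      example.com.       IN  NS     ns2.example.com.
      example.com.       IN  A      203.0.113.10
      www.example.com.   IN  CNAME  example.com.
      example.com.       IN  MX     10 mail.example.com.
      mail.example.com.  IN  A      203.0.113.20
      example.com.       IN  TXT    "v=spf1 mx a:mail.example.com -all"
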
張 旭

Intro to deployment strategies: blue-green, canary, and more - DEV Community - 0 views

  • using a service-oriented architecture and microservices approach, developers can design a code base to be modular.
  • Modern applications are often distributed and cloud-based
  • different release cycles for different components
  • the abstraction of the infrastructure layer, which is now considered code. Deployment of a new application may require the deployment of new infrastructure code as well.
  • "big bang" deployments update whole or large parts of an application in one fell swoop.
  • Big bang deployments required the business to conduct extensive development and testing before release, often associated with the "waterfall model" of large sequential releases.
  • Rollbacks are often costly, time-consuming, or even impossible.
  • In a rolling deployment, an application’s new version gradually replaces the old one.
  • new and old versions will coexist without affecting functionality or user experience.
  • Each container is modified to download the latest image from the app vendor’s site.
  • two identical production environments work in parallel.
  • Once the testing results are successful, application traffic is routed from blue to green.
  • In a blue-green deployment, both systems use the same persistence layer or database back end.
  • You can use the primary database (used by blue) for write operations and the secondary (used by green) for read operations.
  • Blue-green deployments rely on traffic routing.
  • long TTL values can delay these changes.
  • The main challenge of canary deployment is to devise a way to route some users to the new application.
  • Using application logic to unlock new features for specific users and groups.
  • With CD, the CI-built code artifact is packaged and always ready to be deployed in one or more environments.
  • Use Build Automation tools to automate environment builds
  • Use configuration management tools
  • Enable automated rollbacks for deployments
  • An application performance monitoring (APM) tool can help your team monitor critical performance metrics including server response times after deployments.
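
  One simple way to route a small share of users to the new version, as the canary notes describe, is weighted load balancing; a sketch in nginx terms (upstream hosts and weights are illustrative):

      upstream app {
          server app-v1.internal:8080 weight=9;   # 90% of traffic stays on the current release
          server app-v2.internal:8080 weight=1;   # 10% canary traffic goes to the new release
      }
      server {
          listen 80;
          location / {
              proxy_pass http://app;
          }
      }
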
張 旭

What's the difference between Prometheus and Zabbix? - Stack Overflow - 0 views

  • Zabbix has core written in C and webUI based on PHP
  • Zabbix stores data in RDBMS (MySQL, PostgreSQL, Oracle, sqlite) of user's choice.
  • Prometheus uses its own database embedded into backend process
  • Zabbix by default uses a "pull" model, where the server connects to agents on each monitored machine; the agents periodically gather the info and send it to the server.
  • Prometheus prefers the "pull" model, in which the server gathers info from client machines.
  • Prometheus requires an application to be instrumented with Prometheus client library (available in different programming languages) for preparing metrics.
  • expose metrics for Prometheus (similar to "agents" for Zabbix)
  • Zabbix uses its own tcp-based communication protocol between agents and a server.
  • Prometheus uses HTTP with protocol buffers (+ text format for ease of use with curl).
  • Prometheus offers a basic tool for exploring gathered data and visualizing it in simple graphs on its native server, and also offers a minimal dashboard builder, PromDash. But Prometheus is designed to be supported by modern visualization tools like Grafana.
  • Prometheus offers solution for alerting that is separated from its core into Alertmanager application.
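
  A minimal sketch of instrumenting an application with the official Python client, which exposes an HTTP /metrics endpoint for the Prometheus server to pull from (the port and metric name are illustrative):

      from prometheus_client import Counter, start_http_server
      import random
      import time

      REQUESTS = Counter("myapp_requests_total", "Total requests handled")  # hypothetical metric

      if __name__ == "__main__":
          start_http_server(8000)       # Prometheus scrapes http://<host>:8000/metrics
          while True:
              REQUESTS.inc()            # count some simulated work
              time.sleep(random.random())
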
張 旭

Auto DevOps | GitLab - 0 views

  • Auto DevOps provides pre-defined CI/CD configuration which allows you to automatically detect, build, test, deploy, and monitor your applications
  • Just push your code and GitLab takes care of everything else.
  • Auto DevOps will be automatically disabled on the first pipeline failure.
  • Your project will continue to use an alternative CI/CD configuration file if one is found
  • Auto DevOps works with any Kubernetes cluster;
  • using the Docker or Kubernetes executor, with privileged mode enabled.
  • Base domain (needed for Auto Review Apps and Auto Deploy)
  • Kubernetes (needed for Auto Review Apps, Auto Deploy, and Auto Monitoring)
  • Prometheus (needed for Auto Monitoring)
  • scrape your Kubernetes cluster.
  • project level as a variable: KUBE_INGRESS_BASE_DOMAIN
  • A wildcard DNS A record matching the base domain(s) is required
  • Once set up, all requests will hit the load balancer, which in turn will route them to the Kubernetes pods that run your application(s).
  • review/ (every environment starting with review/)
  • staging
  • production
  • need to define a separate KUBE_INGRESS_BASE_DOMAIN variable for all the above based on the environment.
  • Continuous deployment to production: Enables Auto Deploy with master branch directly deployed to production.
  • Continuous deployment to production using timed incremental rollout
  • Automatic deployment to staging, manual deployment to production
  • Auto Build creates a build of the application using an existing Dockerfile or Heroku buildpacks.
  • If a project’s repository contains a Dockerfile, Auto Build will use docker build to create a Docker image.
  • Each buildpack requires certain files to be in your project’s repository for Auto Build to successfully build your application.
  • Auto Test automatically runs the appropriate tests for your application using Herokuish and Heroku buildpacks by analyzing your project to detect the language and framework.
  • Auto Code Quality uses the Code Quality image to run static analysis and other code checks on the current code.
  • Static Application Security Testing (SAST) uses the SAST Docker image to run static analysis on the current code and checks for potential security issues.
  • Dependency Scanning uses the Dependency Scanning Docker image to run analysis on the project dependencies and checks for potential security issues.
  • License Management uses the License Management Docker image to search the project dependencies for their license.
  • Vulnerability Static Analysis for containers uses Clair to run static analysis on a Docker image and checks for potential security issues.
  • Review Apps are temporary application environments based on the branch’s code so developers, designers, QA, product managers, and other reviewers can actually see and interact with code changes as part of the review process. Auto Review Apps create a Review App for each branch. Auto Review Apps will deploy your app to your Kubernetes cluster only. When no cluster is available, no deployment will occur.
  • The Review App will have a unique URL based on the project ID, the branch or tag name, and a unique number, combined with the Auto DevOps base domain.
  • Review apps are deployed using the auto-deploy-app chart with Helm, which can be customized.
  • Your apps should not be manipulated outside of Helm (using Kubernetes directly).
  • Dynamic Application Security Testing (DAST) uses the popular open source tool OWASP ZAProxy to perform an analysis on the current code and checks for potential security issues.
  • Auto Browser Performance Testing utilizes the Sitespeed.io container to measure the performance of a web page.
  • add the paths to a file named .gitlab-urls.txt in the root directory, one per line.
  • After a branch or merge request is merged into the project’s default branch (usually master), Auto Deploy deploys the application to a production environment in the Kubernetes cluster, with a namespace based on the project name and unique project ID
  • Auto Deploy doesn’t include deployments to staging or canary by default, but the Auto DevOps template contains job definitions for these tasks if you want to enable them.
  • Apps are deployed using the auto-deploy-app chart with Helm.
  • For internal and private projects a GitLab Deploy Token will be automatically created, when Auto DevOps is enabled and the Auto DevOps settings are saved.
  • If the GitLab Deploy Token cannot be found, CI_REGISTRY_PASSWORD is used. Note that CI_REGISTRY_PASSWORD is only valid during deployment.
  • If present, DB_INITIALIZE will be run as a shell command within an application pod as a helm post-install hook.
  • a post-install hook means that if any deploy succeeds, DB_INITIALIZE will not be processed thereafter.
  • DB_MIGRATE will be run as a shell command within an application pod as a helm pre-upgrade hook.
    • 張 旭
       
      For a different project type, you have to check how the buildpacks invoke that command, e.g. Laravel's migrations.
    • 張 旭
       
      If the image is built from your own Dockerfile, it seems you can ignore how the buildpacks do it.
  • Once your application is deployed, Auto Monitoring makes it possible to monitor your application’s server and response metrics right out of the box.
  • annotate the NGINX Ingress deployment to be scraped by Prometheus using prometheus.io/scrape: "true" and prometheus.io/port: "10254"
  • If you are also using Auto Review Apps and Auto Deploy and choose to provide your own Dockerfile, make sure you expose your application to port 5000 as this is the port assumed by the default Helm chart.
  • While Auto DevOps provides great defaults to get you started, you can customize almost everything to fit your needs; from custom buildpacks, to Dockerfiles, Helm charts, or even copying the complete CI/CD configuration into your project to enable staging and canary deployments, and more.
  • If your project has a Dockerfile in the root of the project repo, Auto DevOps will build a Docker image based on the Dockerfile rather than using buildpacks.
  • Auto DevOps uses Helm to deploy your application to Kubernetes.
  • Bundled chart - If your project has a ./chart directory with a Chart.yaml file in it, Auto DevOps will detect the chart and use it instead of the default one.
  • Create a project variable AUTO_DEVOPS_CHART with the URL of a custom chart to use or create two project variables AUTO_DEVOPS_CHART_REPOSITORY with the URL of a custom chart repository and AUTO_DEVOPS_CHART with the path to the chart.
  • make use of the HELM_UPGRADE_EXTRA_ARGS environment variable to override the default values in the values.yaml file in the default Helm chart.
  • specify the use of a custom Helm chart per environment by scoping the environment variable to the desired environment.
    • 張 旭
       
      Auto DevOps is essentially a ready-made .gitlab-ci.yml that someone has already written for you.
  • Your additions will be merged with the Auto DevOps template using the behaviour described for include
  • copy and paste the contents of the Auto DevOps template into your project and edit this as needed.
  • In order to support applications that require a database, PostgreSQL is provisioned by default.
  • Set up the replica variables using a project variable and scale your application by just redeploying it!
  • You should not scale your application using Kubernetes directly.
  • Some applications need to define secret variables that are accessible by the deployed application.
  • Auto DevOps detects variables where the key starts with K8S_SECRET_ and make these prefixed variables available to the deployed application, as environment variables.
  • Auto DevOps pipelines will take your application secret variables to populate a Kubernetes secret.
  • Environment variables are generally considered immutable in a Kubernetes pod.
  • if you update an application secret without changing any code then manually create a new pipeline, you will find that any running application pods will not have the updated secrets.
  • Variables with multiline values are not currently supported
  • The normal behavior of Auto DevOps is to use Continuous Deployment, pushing automatically to the production environment every time a new pipeline is run on the default branch.
  • If STAGING_ENABLED is defined in your project (e.g., set STAGING_ENABLED to 1 as a CI/CD variable), then the application will be automatically deployed to a staging environment, and a production_manual job will be created for you when you’re ready to manually deploy to production.
  • If CANARY_ENABLED is defined in your project (e.g., set CANARY_ENABLED to 1 as a CI/CD variable), then two manual jobs will be created: canary, which deploys the application to the canary environment, and production_manual, which you use when you're ready to manually deploy to production.
  • If INCREMENTAL_ROLLOUT_MODE is set to manual in your project, then instead of the standard production job, 4 different manual jobs will be created: rollout 10%, rollout 25%, rollout 50%, and rollout 100%.
  • The percentage is based on the REPLICAS variable and defines the number of pods you want to have for your deployment.
  • To start a job, click on the play icon next to the job’s name.
  • Once you get to 100%, you cannot scale down, and you’d have to roll back by redeploying the old version using the rollback button in the environment page.
  • With INCREMENTAL_ROLLOUT_MODE set to manual and with STAGING_ENABLED
  • not all buildpacks support Auto Test yet
  • When a project has been marked as private, GitLab’s Container Registry requires authentication when downloading containers.
  • Authentication credentials will be valid while the pipeline is running, allowing for a successful initial deployment.
  • After the pipeline completes, Kubernetes will no longer be able to access the Container Registry.
  • We strongly advise using GitLab Container Registry with Auto DevOps in order to simplify configuration and prevent any unforeseen issues.
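
  A sketch of customizing Auto DevOps by including the template and setting a few of the variables mentioned above in .gitlab-ci.yml (the domain and secret value are illustrative; secrets are better set as CI/CD variables in the project settings):

      include:
        - template: Auto-DevOps.gitlab-ci.yml

      variables:
        KUBE_INGRESS_BASE_DOMAIN: "apps.example.com"   # a wildcard DNS A record must point here
        STAGING_ENABLED: "1"                           # deploy to staging, manual job for production
        K8S_SECRET_API_TOKEN: "change-me"              # exposed to the deployed app as API_TOKEN
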
張 旭

Forwarding (DNS and BIND, 4th Edition) - 0 views

  • Forwarders are also useful if you need to shunt name resolution to a particular name server
  • If you designate one or more servers at your site as forwarders, your name servers will send all their off-site queries to the forwarders first.
  • A primary master or slave name server's mode of operation changes slightly when it is configured to use a forwarder.
  • If a resolver requests records that are already in the name server's authoritative data or cached data, the name server answers with that information
  • if the records aren't in its database, the name server sends the query to a forwarder and waits a short period for an answer before resuming normal operation and contacting the remote name servers itself. What the name server is doing differently here is sending a recursive query to the forwarder, expecting it to find the answer.
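
  A minimal named.conf sketch of designating forwarders (the forwarder addresses are illustrative):

      options {
          forwarders { 192.0.2.1; 192.0.2.2; };   // send off-site queries to these servers first
          forward first;                          // fall back to iterative resolution if they don't answer
      };
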
張 旭

Understanding Nginx Server and Location Block Selection Algorithms | DigitalOcean - 0 views

  • A server block is a subset of Nginx’s configuration that defines a virtual server used to handle requests of a defined type. Administrators often configure multiple server blocks and decide which block should handle which connection based on the requested domain name, port, and IP address.
  • A location block lives within a server block and is used to define how Nginx should handle requests for different resources and URIs for the parent server. The URI space can be subdivided in whatever way the administrator likes using these blocks. It is an extremely flexible model.
  • Nginx logically divides the configurations meant to serve different content into blocks, which live in a hierarchical structure. Each time a client request is made, Nginx begins a process of determining which configuration blocks should be used to handle the request.
  • Nginx is one of the most popular web servers in the world. It can successfully handle high loads with many concurrent client connections, and can easily function as a web server, a mail server, or a reverse proxy server.
  • The main server block directives that Nginx is concerned with during this process are the listen directive, and the server_name directive.
  • The listen directive typically defines which IP address and port that the server block will respond to.
  • 0.0.0.0:8080 if Nginx is being run by a normal, non-root user
  • Nginx translates all “incomplete” listen directives by substituting missing values with their default values so that each block can be evaluated by its IP address and port.
  • In any case, the port must be matched exactly.
  • If there are multiple server blocks with the same level of specificity matching, Nginx then begins to evaluate the server_name directive of each server block.
  • Nginx will only evaluate the server_name directive when it needs to distinguish between server blocks that match to the same level of specificity in the listen directive.
  • Nginx checks the request’s “Host” header. This value holds the domain or IP address that the client was actually trying to reach.
  • Nginx will first try to find a server block with a server_name that matches the value in the “Host” header of the request exactly.
  • If no exact match is found, Nginx will then try to find a server block with a server_name that matches using a leading wildcard (indicated by a * at the beginning of the name in the config).
  • If no match is found using a leading wildcard, Nginx then looks for a server block with a server_name that matches using a trailing wildcard (indicated by a server name ending with a * in the config)
  • If no match is found using a trailing wildcard, Nginx then evaluates server blocks that define the server_name using regular expressions (indicated by a ~ before the name).
  • If no regular expression match is found, Nginx then selects the default server block for that IP address and port.
  • There can be only one default_server declaration per each IP address/port combination.
  • Location blocks live within server blocks (or other location blocks) and are used to decide how to process the request URI (the part of the request that comes after the domain name or IP address/port).
  • If no modifiers are present, the location is interpreted as a prefix match.
  • =: If an equal sign is used, this block will be considered a match if the request URI exactly matches the location given.
  • ~: If a tilde modifier is present, this location will be interpreted as a case-sensitive regular expression match.
  • ~*: If a tilde and asterisk modifier is used, the location block will be interpreted as a case-insensitive regular expression match.
  • ^~: If a caret and tilde modifier is present, and if this block is selected as the best non-regular expression match, regular expression matching will not take place.
  • Keep in mind that if this block is selected and the request is fulfilled using an index page, an internal redirect will take place to another location that will be the actual handler of the request
  • Keeping in mind the types of location declarations we described above, Nginx evaluates the possible location contexts by comparing the request URI to each of the locations.
  • Nginx begins by checking all prefix-based location matches (all location types not involving a regular expression).
  • First, Nginx looks for an exact match.
  • If no exact (with the = modifier) location block matches are found, Nginx then moves on to evaluating non-exact prefixes.
  • After the longest matching prefix location is determined and stored, Nginx moves on to evaluating the regular expression locations (both case sensitive and insensitive).
  • by default, Nginx will serve regular expression matches in preference to prefix matches.
  • regular expression matches within the longest prefix match will “jump the line” when Nginx evaluates regex locations.
  • The exceptions to the “only one location block” rule may have implications on how the request is actually served and may not align with the expectations you had when designing your location blocks.
  • The index directive always leads to an internal redirect if it is used to handle the request.
  • In the case above, if you really need the execution to stay in the first block, you will have to come up with a different method of satisfying the request to the directory.
  • one way of preventing an index from switching contexts, but it’s probably not useful for most configurations
  • the try_files directive. This directive tells Nginx to check for the existence of a named set of files or directories.
  • the rewrite directive. When using the last parameter with the rewrite directive, or when using no parameter at all, Nginx will search for a new matching location based on the results of the rewrite.
  • The error_page directive can lead to an internal redirect similar to that created by try_files.
  • when certain status codes are encountered.
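
  A sketch illustrating the selection rules above (server names, paths, and handlers are illustrative):

      server {
          listen 80 default_server;                # used when no server_name matches the Host header
          server_name example.com *.example.com;   # exact match wins, then leading wildcard

          location = /status {                     # exact URI match, checked first
              return 200 "ok";
          }
          location ^~ /static/ {                   # if this is the best prefix, regex matching is skipped
              root /var/www;
          }
          location ~* \.(jpg|png)$ {               # case-insensitive regex, preferred over plain prefixes
              expires 30d;
          }
          location / {                             # fallback prefix match
              try_files $uri $uri/ /index.html;    # may trigger an internal redirect
          }
      }
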
張 旭

Understanding the GitHub flow · GitHub Guides - 0 views

  • anything in the master branch is always deployable.
  • Your branch name should be descriptive
  • Commits also create a transparent history of your work that others can follow to understand what you've done and why.
  • each commit is considered a separate unit of change.
  • By writing clear commit messages, you can make it easier for other people to follow along and provide feedback.
  • Pull Requests initiate discussion about your commits.
  • If you're using a Fork & Pull Model, Pull Requests provide a way to notify project maintainers about the changes you'd like them to consider.
  • Pull Requests are designed to encourage and capture this type of conversation.
  • You can also continue to push to your branch in light of discussion and feedback about your commits.
  • If your branch causes issues, you can roll it back by deploying the existing master into production.
  • With GitHub, you can deploy from a branch for final testing in production before merging to master.
  • your changes have been verified in production, it is time to merge your code into the master branch.
  •  
    "anything in the master branch is always deployable."