Larvata / Group items tagged: writing

張 旭

ProxySQL Series : Percona Cluster/MariaDB Cluster (Galera) Read-write Split - Mydbops - 0 views

  • PXC / MariaDB clusters really work better with writes on a single node than with multi-node writes.
  • ProxySQL setup for a cluster in single-writer mode, which is the most recommended mode for a cluster, avoids write conflicts and split-brain scenarios.
  • ProxySQL listens on port 6032 for its admin interface and port 6033 for its MySQL interface by default (see the sketch below).
張 旭

Production Notes - MongoDB Manual - 0 views

  • mongod will not start if dbPath contains data files created by a storage engine other than the one specified by --storageEngine.
  • mongod must possess read and write permissions for the specified dbPath.
  • WiredTiger supports concurrent access by readers and writers to the documents in a collection
  • Journaling guarantees that MongoDB can quickly recover write operations that were written to the journal but not written to data files in cases where mongod terminated due to a crash or other serious failure.
  • To use read concern level of "majority", replica sets must use WiredTiger storage engine.
  • Write concern describes the level of acknowledgement requested from MongoDB for write operations.
  • With stronger write concerns, clients must wait after sending a write operation until MongoDB confirms the write operation at the requested write concern level.
  • By default, authorization is not enabled, and mongod assumes a trusted environment
  • The HTTP interface is disabled by default. Do not enable the HTTP interface in production environments.
  • Avoid overloading the connection resources of a mongod or mongos instance by adjusting the connection pool size to suit your use case.
  • ensure that each mongod or mongos instance has access to two real cores or one multi-core physical CPU.
  • The WiredTiger storage engine is multithreaded and can take advantage of additional CPU cores
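
A small PyMongo sketch of the write-concern and read-concern notes above; the host, database, and collection names are made up for illustration.

```python
# Read concern "majority" requires the WiredTiger storage engine; a
# stronger write concern makes the client wait until MongoDB acknowledges
# the write at the requested level.
from pymongo import MongoClient, WriteConcern
from pymongo.read_concern import ReadConcern

client = MongoClient("mongodb://db.example.com:27017")

orders = client.shop.get_collection(
    "orders",
    write_concern=WriteConcern(w="majority", j=True),  # journaled, majority-acknowledged
    read_concern=ReadConcern("majority"),
)

orders.insert_one({"sku": "abc-123", "qty": 1})
```
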
張 旭

Probably Done Before: Visualizing Docker Containers and Images - 0 views

  •  In my opinion, understanding how a technology works under the hood is the best way to achieve learning speed and to build confidence that you are using the tool in the correct way.
  • union view
    • 張 旭
       
      Chaining multiple image layers together so that reading them looks like reading just a single image file.
  • The top-level layer may be read by a union-ing file system (AUFS on my docker implementation) to present a single cohesive view of all the changes as one read-only file system
  • it is nearly the same thing as an image, except that the top layer is read-write
  • A container is defined only as a read-write layer atop an image (of read-only layers itself).  It does not have to be running.
  • a running container
    • 張 旭
       
      I had this wrong before! It is not only something that is running that counts as a container; anything with a read-write layer is one!
  • A running container is defined as a read-write "union view" and the isolated process-space and processes within
  • kernel-level technologies like cgroups, namespaces
  • The processes within this process-space may change, delete or create files within the "union view" file that will be captured in the read-write layer
  • there is no longer a running container
    • 張 旭
       
      After this command finishes executing, the running container stops, but the container itself is still there!
  • each layer contains a pointer to a parent layer using the Id
  • The 'docker create' command adds a read-write layer to the top stack based on the image id.  It does not run this container.
  • The command 'docker start' creates a process space around the union view of the container's layers.
  • can only be one process space per container.
  • the docker run command starts with an image, creates a container, and starts the container
  • 'git pull' (which is a combination of 'git fetch' and 'git merge')
  • 'docker ps' lists out the inventory of running containers on your system
  • 'docker ps -a' where the 'a' is short for 'all' lists out all the containers on your system, whether stopped or running.
  • Only those images that have containers attached to them or that have been pulled are considered top-level.
  • 'docker stop' issues a SIGTERM to a running container which politely stops all the processes in that process-space.
  • the result is a normal, but non-running, container
  • 'docker kill' issues a non-polite SIGKILL command to all the processes in a running container.
  • 'docker stop' and 'docker kill' which send actual UNIX signals to a running process
  • 'docker pause' uses a special cgroups feature to freeze/pause a running process-space
  • 'docker rm' removes the read-write layer that defines a container from your host system
  • It effectively deletes files
  • 'docker rmi' removes the read-layer that defines a "union view" of an image.
  • 'docker commit' takes a container's top-level read-write layer and burns it into a read-only layer.
  • turns a container (whether running or stopped) into an immutable image
  • uses the FROM directive in the Dockerfile file as the starting image and iteratively 1) runs (create and start) 2) modifies and 3) commits.
  • At each step in the iteration a new layer is created.
  • 'docker exec' command runs on a running container and executes a process in that running container's process space
  • 'docker inspect' fetches the metadata that has been associated with the top-layer of the container or image
  • 'docker save' creates a single tar file that can be used to import on a different host system
  • only be run on an image
  • 'docker export' command creates a tar file of the contents of the "union view" and flattens it for consumption for non-Docker usages
  • This command removes the metadata and the layers.  This command can only be run on containers.
  • 'docker history' command takes an image-id and recursively prints out the read-only layers
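
A rough sketch of the create / start / commit / rm lifecycle described above, using the Docker SDK for Python rather than the CLI; the image name, command, and tag are placeholders.

```python
import docker

client = docker.from_env()

# 'docker create': adds a read-write layer on top of the image's
# read-only layers, but does not start a process space.
container = client.containers.create("ubuntu:22.04", command="touch /hello")

# 'docker start': creates the process space around the union view.
container.start()
container.wait()            # the command exits; the container still exists

# 'docker ps -a': stopped containers are still containers.
print([c.name for c in client.containers.list(all=True)])

# 'docker commit': burn the container's read-write layer into a
# read-only layer, producing an immutable image.
image = container.commit(repository="hello-image", tag="v1")
print(image.id)

# 'docker rm': remove the read-write layer that defines the container.
container.remove()
```
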
張 旭

Logging Architecture | Kubernetes - 0 views

  • Application logs can help you understand what is happening inside your application
  • container engines are designed to support logging.
  • The easiest and most adopted logging method for containerized applications is writing to standard output and standard error streams.
  • In a cluster, logs should have a separate storage and lifecycle independent of nodes, pods, or containers. This concept is called cluster-level logging.
  • Cluster-level logging architectures require a separate backend to store, analyze, and query logs
  • Kubernetes does not provide a native storage solution for log data.
  • use kubectl logs --previous to retrieve logs from a previous instantiation of a container.
  • A container engine handles and redirects any output generated to a containerized application's stdout and stderr streams
  • The Docker JSON logging driver treats each line as a separate message.
  • By default, if a container restarts, the kubelet keeps one terminated container with its logs.
  • An important consideration in node-level logging is implementing log rotation, so that logs don't consume all available storage on the node
  • You can also set up a container runtime to rotate an application's logs automatically.
  • The two kubelet flags container-log-max-size and container-log-max-files can be used to configure the maximum size for each log file and the maximum number of files allowed for each container respectively.
  • The kubelet and container runtime do not run in containers.
  • On machines with systemd, the kubelet and container runtime write to journald. If systemd is not present, the kubelet and container runtime write to .log files in the /var/log directory.
  • System components inside containers always write to the /var/log directory, bypassing the default logging mechanism.
  • Kubernetes does not provide a native solution for cluster-level logging
  • Use a node-level logging agent that runs on every node.
  • implement cluster-level logging by including a node-level logging agent on each node.
  • the logging agent is a container that has access to a directory with log files from all of the application containers on that node.
  • the logging agent must run on every node, it is recommended to run the agent as a DaemonSet
  • Node-level logging creates only one agent per node and doesn't require any changes to the applications running on the node.
  • Containers write stdout and stderr, but with no agreed format. A node-level agent collects these logs and forwards them for aggregation.
  • Each sidecar container prints a log to its own stdout or stderr stream.
  • It is not recommended to write log entries with different formats to the same log stream
  • writing logs to a file and then streaming them to stdout can double disk usage.
  • If you have an application that writes to a single file, it's recommended to set /dev/stdout as the destination
  • it's recommended to use stdout and stderr directly and leave rotation and retention policies to the kubelet.
  • Using a logging agent in a sidecar container can lead to significant resource consumption. Moreover, you won't be able to access those logs using kubectl logs because they are not controlled by the kubelet.
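
As a small illustration of kubectl logs --previous, the same call can be made with the official Kubernetes Python client; the pod and namespace names here are assumptions.

```python
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() inside a pod
core = client.CoreV1Api()

# Logs from the currently running container instance.
current = core.read_namespaced_pod_log(name="my-app-pod", namespace="default")

# Logs from the previous instantiation of the container, which the kubelet
# keeps (one terminated container per restart) by default.
previous = core.read_namespaced_pod_log(
    name="my-app-pod", namespace="default", previous=True
)
print(previous or current)
```
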
張 旭

Replication - MongoDB Manual - 0 views

  • A replica set in MongoDB is a group of mongod processes that maintain the same data set.
  • Replica sets provide redundancy and high availability, and are the basis for all production deployments.
  • With multiple copies of data on different database servers, replication provides a level of fault tolerance against the loss of a single database server.
  • replication can provide increased read capacity as clients can send read operations to different servers.
  • A replica set is a group of mongod instances that maintain the same data set.
  • A replica set contains several data bearing nodes and optionally one arbiter node.
  • one and only one member is deemed the primary node, while the other nodes are deemed secondary nodes.
  • A replica set can have only one primary capable of confirming writes with { w: "majority" } write concern; although in some circumstances, another mongod instance may transiently believe itself to also be primary.
  • The secondaries replicate the primary’s oplog and apply the operations to their data sets such that the secondaries’ data sets reflect the primary’s data set
  • add a mongod instance to a replica set as an arbiter. An arbiter participates in elections but does not hold data
  • An arbiter will always be an arbiter whereas a primary may step down and become a secondary and a secondary may become the primary during an election.
  • Secondaries replicate the primary’s oplog and apply the operations to their data sets asynchronously.
  • These slow oplog messages are logged for the secondaries in the diagnostic log under the REPL component with the text applied op: <oplog entry> took <num>ms.
  • Replication lag refers to the amount of time that it takes to copy (i.e. replicate) a write operation on the primary to a secondary.
  • When a primary does not communicate with the other members of the set for more than the configured electionTimeoutMillis period (10 seconds by default), an eligible secondary calls for an election to nominate itself as the new primary.
  • The replica set cannot process write operations until the election completes successfully.
  • The median time before a cluster elects a new primary should not typically exceed 12 seconds, assuming default replica configuration settings.
  • Factors such as network latency may extend the time required for replica set elections to complete, which in turn affects the amount of time your cluster may operate without a primary.
  • Your application connection logic should include tolerance for automatic failovers and the subsequent elections.
  • MongoDB drivers can detect the loss of the primary and automatically retry certain write operations a single time, providing additional built-in handling of automatic failovers and elections
  • By default, clients read from the primary [1]; however, clients can specify a read preference to send read operations to secondaries.
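
A minimal PyMongo sketch of connecting to such a replica set, opting into retryable writes and a secondary read preference; the host names and replica set name are placeholders.

```python
from pymongo import MongoClient

client = MongoClient(
    "mongodb://node1.example.com,node2.example.com,node3.example.com"
    "/?replicaSet=rs0",
    retryWrites=True,                      # retry certain writes once after a failover
    readPreference="secondaryPreferred",   # default is to read from the primary
)

db = client.get_database("app")
db.events.insert_one({"type": "login"})    # acknowledged by the current primary
print(db.events.count_documents({}))       # may be served by a secondary
```
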
crazylion lee

Writing a Lexer in Go with LexMachine - 0 views

  •  
    "This article is about lexmachine, a library I wrote to help you write great lexers in Go. If you are looking to write a golang lexer or a lexer in golang this article is for you."
crazylion lee

Introduction | MaintainableCSS - an approach to writing modular, scalable and maintaina... - 0 views

  •  
    "MaintainableCSS is an approach to architecting and writing CSS that helps you and your team write modular, scalable and maintainable code. "
張 旭

Scalable architecture without magic (and how to build it if you're not Google) - DEV Co... - 0 views

  • Don’t mess up write-first and read-first databases.
  • keep them stateless.
  • you should know how to make a scalable setup on bare metal.
  • Different programming languages are for different tasks.
  • Go or C which are compiled to run on bare metal.
  • To run NodeJS on multiple cores, you have to use something like PM2, but to do this you have to keep your code stateless.
  • Python has a very rich and sugary syntax that's great for working with data while keeping your code small and expressive.
  • SQL is almost always slower than NoSQL
  • databases are often read-first or write-first
  • write-first, just like Cassandra.
  • store all of your data in your databases and leave nothing at the backend
  • Functional code is stateless by default
  • It’s better to go for stateless right from the very beginning.
  • deliver exactly the same responses for the same requests.
  • Sessions? Store them in Redis and allow all of your servers to access it.
  • Only the first user will trigger a data query, and all others will be receiving exactly the same data straight from the RAM
  • never, never cache user input
  • Only the server output should be cached
  • Varnish is a great cache option that works with HTTP responses, so it may work with any backend.
  • a rate limiter – if not enough time has passed since the last request, the incoming request is denied (a Redis-backed variant is sketched below)
  • different requests blasting every 10ms can bring your server down
  • Just set up entity relations and allow your database to calculate the foreign keys for you
  • the query planner will always be faster than your backend.
  • Backend should have different responsibilities: hashing, building web pages from data and templates, managing sessions and so on.
  • For anything related to data management or data models, move it to your database as procedures or queries.
  • a distributed database.
  • your code has to be stateless
  • Move anything related to the data to the database.
  • For load-balancing a database, go for a cluster.
  • DB is balancing requests, as well as your backend.
  • Users from different continents are separated with DNS.
  • Keep it scalable, keep it stateless.
crazylion lee

Tutorial series: learning how to write a 3D soft engine from scratch in C#, TypeScript ... - 0 views

  •  
    "Tutorial series: learning how to write a 3D soft engine from scratch in C#, TypeScript or JavaScript"
張 旭

Containers Vs. Config Management - 0 views

  • With configuration management systems, you write code that describes how you want some component of your systems to be installed and configured, and when you execute the code on your server, it should end up in the desired state.
  • building a hosting platform that is capable of a lot of things that system administrators used to do manually
  • building modules on deployment via bundler or npm or similar can be incredibly slow, taking minutes or longer in some cases
  • pulling from git is slow.
  • deploying with configuration management tools is a pain in the ass and error prone.
  • Support for containers has existed in the Linux kernel since version 2.6.24 when cgroup support was added
  • All of the logic that used to live in your cookbooks/playbooks/manifests/etc now lives in a Dockerfile that resides directly in the repository for the application it is designed to build
  • All of the dependencies of the application are bundled with the container which means no need to build on the fly on every server during deployment.
  • Containers bring standardization which allows for systems like centralized logging, monitoring, and metrics to easily snap into place no matter what is running in the container.
  • Dockerfiles do not give you the same level of control over configuration as your application transitions between environments, like dev, staging, and production.
  • You may even need to have different Dockerfiles for each environment in certain cases.
  • configuration management systems now have hooks for docker integration.
  • Config management will only be used to install Docker, an orchestration system, configure PAM/SSH auth, and tune OS sysctl values.
張 旭

Active Record Migrations - Ruby on Rails Guides - 0 views

    • 張 旭
       
       The migration that corresponds to belongs_to and has_many associations.
    • 張 旭
       
      What is the corresponding migration for has_and_belongs_to_many?
  • add_column and remove_column
  • allowing your schema and changes to be database independent.
  • each migration as being a new 'version' of the database
  • each migration modifies it to add or remove tables, columns, or entries
  • Active Record will also update your db/schema.rb file to match the up-to-date structure of your database.
  • A primary key column called id will also be added implicitly, as it's the default primary key for all Active Record models
  • roll this migration back, it will remove the table
  • timestamps macro adds two columns, created_at and updated_at
  • On databases that support transactions with statements that change the schema, migrations are wrapped in a transaction
  • reversible
  • use up and down instead of change
  • Migrations are stored as files in the db/migrate directory, one for each migration class.
  • a UTC timestamp identifying
  • Rails uses this timestamp to determine which migration should be run and in what order
  • "AddXXXToYYY" or "RemoveXXXFromYYY"
  • use a Ruby DSL
  • column type as references
  • part_number:string:index
  • a migration to remove a column
  • "CreateXXX"
  • change_column_null
  • AddUserRefToProducts
  • :references
  • produce join tables if JoinTable is part of the name
  • CreateJoinTable
  • The model and scaffold generators will create migrations appropriate for adding a new model.
  • enclosed by curly braces and follow the field type
  • create_table
  • By default, create_table will create a primary key called id
  • add an index on the new column
  • when using MySQL, the default is ENGINE=InnoDB
  • create_join_table creates an HABTM (has and belongs to many) join table
  • To customize the name of the table, provide a :table_name option:
  • create_join_table also accepts a block
  • change_table, used for changing existing tables
  • remove
  • rename
  • add_column
  • change_column
  • remove_column
  • change_column_default
  • place an SQL fragment in the :options option.
  • limit
  • precision
  • scale
  • polymorphic
  • default
  • index
  • add_foreign_key
  • Active Record only supports single column foreign keys.
  • use the old style of migration using up and down methods instead of the change method.
  • .connection.execute
  • change_table is also reversible, as long as the block does not call change, change_default or remove.
  • remove_column is reversible if you supply the column type as the third argument
  • Complex migrations may require processing that Active Record doesn't know how to reverse
  • reversible
  • Using reversible will ensure that the instructions are executed in the right order too.
  • add_column, add_foreign_key, add_index, add_reference, add_timestamps, change_column_default (must supply a :from and :to option), change_column_null, create_join_table, create_table, disable_extension, drop_join_table, drop_table (must supply a block), enable_extension, remove_column (must supply a type), remove_foreign_key (must supply a second table), remove_index, remove_reference, remove_timestamps, rename_column, rename_index, rename_table
  • :column_options option
  • have the option :null set to false by default
  • By default, the name of the join table comes from the union of the first two arguments provided to create_join_table
  • in alphabetical order
  • change_column command is irreversible.
    • 張 旭
       
      The referencing object comes first, the referenced object second: A references B.
  • If the column names can not be derived from the table names, you can use the :column and :primary_key options.
  • figure out the column name
  • foreign key for a specific column
  • foreign key by name
    • 張 旭
       
      I don't see the difference between the :column and :name usages; they seem basically the same.
  • Active Record knows how to reverse the migration automatically
    • 張 旭
       
      Using the built-in methods makes it easier for Rails to roll back automatically.
    • 張 旭
       
      Except for a few special change_* and remove_* methods.
  • should use reversible or write the up and down methods instead of using the change method
  • If your migration is irreversible, you should raise ActiveRecord::IrreversibleMigration from your down method.
  • DontUseConstraintForZipcodeValidationMigration
  • rails db:migrate
  • the db:migrate task also invokes the db:schema:dump task, which will update your db/schema.rb file to match the structure of your database.
  • specify a target version
  • all migrations up to and including 20080906120000
  • run the down method on all the migrations down to, but not including, 20080906120000
  • rails db:rollback
  • db:migrate:redo task is a shortcut for doing a rollback and then migrating back up again
    • 張 旭
       
      Older versions still use rake!
  • STEP parameter
  • db:setup task will create the database, load the schema and initialize it with the seed data
  • db:reset task will drop the database and set it up again. This is functionally equivalent to rails db:drop db:setup.
  • run a specific migration up or down, the db:migrate:up and db:migrate:down
  • the RAILS_ENV environment variable
  • db:migrate VERBOSE=false will suppress all output.
  • If you have already run the migration, then you cannot just edit the migration and run the migration again: Rails thinks it has already run the migration and so will do nothing when you run rails db:migrate.
  • must rollback the migration (for example with bin/rails db:rollback), edit your migration and then run rails db:migrate to run the corrected version.
  • editing existing migrations is not a good idea.
  • should write a new migration that performs the changes you require
  • revert method can be helpful when writing a new migration to undo previous migrations in whole or in part
  • require_relative
  • revert
  • They are not designed to be edited, they just represent the current state of the database.
  • What are Schema Files for?
  • Schema files are also useful if you want a quick look at what attributes an Active Record object has
  • annotate_models gem automatically adds and updates comments at the top of each model summarizing the schema if you desire that functionality.
  • database-independent
  • multiple databases
  • db/schema.rb cannot express database specific items such as triggers, stored procedures or check constraints
  • you can execute custom SQL statements, the schema dumper cannot reconstitute those statements from the database
  • db:structure:dump
    • 張 旭
       
      The price of a database-independent schema is that some database-specific features, such as triggers, cannot be expressed. Simply put, if you write raw SQL in your migrations, the schema dumper should be set to :sql instead of the default :ruby.
  • set in config/application.rb by the config.active_record.schema_format setting, which may be either :sql or :ruby.
  • check them into source control.
  • db/schema.rb contains the current version number of the database
  • Validations such as validates :foreign_key, uniqueness: true are one way in which models can enforce data integrity
  • The :dependent option on associations allows models to automatically destroy child objects when the parent is destroyed.
  • Migrations can also be used to add or modify data
  • Initial
  • To add initial data after a database is created, Rails has a built-in 'seeds' feature that makes the process quick and easy.
  • db/seeds.rb
  • rails db:seed
張 旭

How to Write a Git Commit Message - 1 views

  • a well-crafted Git commit message is the best way to communicate context about a change to fellow developers (and indeed to their future selves).
  • A diff will tell you what changed, but only the commit message can properly tell you why.
  • a commit message shows whether a developer is a good collaborator
  • a well-cared for log is a beautiful and useful thing
  • Reviewing others’ commits and pull requests becomes something worth doing, and suddenly can be done independently.
  • Understanding why something happened months or years ago becomes not only possible but efficient.
  • how to write an individual commit message.
  • Markup syntax, wrap margins, grammar, capitalization, punctuation.
  • What should it not contain?
  • issue tracking IDs
  • pull request numbers
  • The seven rules of a great Git commit message
  • Use the body to explain what and why vs. how
  • Use the imperative mood in the subject line
  • it’s a good idea to begin the commit message with a single short (less than 50 character) line summarizing the change, followed by a blank line and then a more thorough description.
  • forces the author to think for a moment about the most concise way to explain what’s going on.
  • If you’re having a hard time summarizing, you might be committing too many changes at once.
  • shoot for 50 characters, but consider 72 the hard limit
  • Imperative mood just means “spoken or written as if giving a command or instruction”.
  • Git itself uses the imperative whenever it creates a commit on your behalf.
  • when you write your commit messages in the imperative, you’re following Git’s own built-in conventions.
  • A properly formed Git commit subject line should always be able to complete the following sentence: If applied, this commit will <your subject line here>
  • explaining what changed and why
  • Code is generally self-explanatory in this regard (and if the code is so complex that it needs to be explained in prose, that’s what source comments are for).
  • there are tab completion scripts that take much of the pain out of remembering the subcommands and switches.
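
A short example message following the rules quoted above (roughly 50-character imperative subject, blank line, wrapped body explaining what and why); the scenario and issue reference are hypothetical.

```text
Refactor session handling to avoid stale logins

Users stayed logged in after their account was deactivated because the
session cache was never invalidated. Clear the cache entry on
deactivation and shorten the session TTL so revocation takes effect
within minutes.

Resolves: #123
```
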
張 旭

Deploying Rails Apps, Part 6: Writing Capistrano Tasks - Vladi Gleba - 0 views

  • we can write our own tasks to help us automate various things.
  • organizing all of the tasks here under a namespace
  • upload a file from our local computer.
  • learn about is SSHKit and the various methods it provides
  • SSHKit was actually developed and released with Capistrano 3, and it’s basically a lower-level tool that provides methods for connecting and interacting with remote servers
  • on(): specifies the server to run on
  • within(): specifies the directory path to run in
  • with(): specifies the environment variables to run with
  • run on the application server
  • within the path specified
  • with certain environment variables set
  • execute(): the workhorse that runs the commands on your server
  • upload(): uploads a file from your local computer to your remote server
  • capture(): executes a command and returns its output as a string
    • 張 旭
       
      capture() runs on the remote server.
  • upload() has the bang symbol (!) because that’s how it’s defined in SSHKit, and it’s just a convention letting us know that the method will block until it finishes.
  • But in order to ensure rake runs with the proper environment variables set, we have to use rake as a symbol and pass db:seed as a string
  • This format will also be necessary whenever you’re running any other Rails-specific commands that rely on certain environment variables being set
  • I recommend you take a look at SSHKit’s example page to learn more
  • make sure we pushed all our local changes to the remote master branch
  • run this task before Capistrano runs its own deploy task
  • actually creates three separate tasks
  • I created a namespace called deploy to contain these tasks since that’s what they’re related to.
  • we’re using the callbacks inside a namespace to make sure Capistrano knows which tasks the callbacks are referencing.
  • custom recipe (a Capistrano term meaning a series of tasks)
  • /shared: holds files and directories that persist throughout deploys
  • When you run cap production deploy, you’re actually calling a Capistrano task called deploy, which then sequentially invokes other tasks
  • your favorite browser (I hope it’s not Internet Explorer)
  • Deployment is hard and takes a while to sink in.
  • the most important thing is to not get discouraged
  • I didn’t want other people going through the same thing
張 旭

Syntax - Configuration Language | Terraform | HashiCorp Developer - 0 views

  • the native syntax of the Terraform language, which is a rich language designed to be relatively easy for humans to read and write.
  • Terraform's configuration language is based on a more general language called HCL, and HCL's documentation usually uses the word "attribute" instead of "argument."
  • A particular block type may have any number of required labels, or it may require none
  • After the block type keyword and any labels, the block body is delimited by the { and } characters
  • Identifiers can contain letters, digits, underscores (_), and hyphens (-). The first character of an identifier must not be a digit, to avoid ambiguity with literal numbers.
  • The # single-line comment style is the default comment style and should be used in most cases.
  • The idiomatic style is to use the Unix convention
  • Indent two spaces for each nesting level.
  • align their equals signs
  • Use empty lines to separate logical groups of arguments within a block.
  • Use one blank line to separate the arguments from the blocks.
  • "meta-arguments" (as defined by the Terraform language semantics)
  • Avoid separating multiple blocks of the same type with other blocks of a different type, unless the block types are defined by semantics to form a family.
  • Resource names must start with a letter or underscore, and may contain only letters, digits, underscores, and dashes.
  • Each resource is associated with a single resource type, which determines the kind of infrastructure object it manages and what arguments and other attributes the resource supports.
  • Each resource type is implemented by a provider, which is a plugin for Terraform that offers a collection of resource types.
  • By convention, resource type names start with their provider's preferred local name.
  • Most publicly available providers are distributed on the Terraform Registry, which also hosts their documentation.
  • The Terraform language defines several meta-arguments, which can be used with any resource type to change the behavior of resources.
  • use precondition and postcondition blocks to specify assumptions and guarantees about how the resource operates.
  • Some resource types provide a special timeouts nested block argument that allows you to customize how long certain operations are allowed to take before being considered to have failed.
  • Timeouts are handled entirely by the resource type implementation in the provider
  • Most resource types do not support the timeouts block at all.
  • A resource block declares that you want a particular infrastructure object to exist with the given settings.
  • Destroy resources that exist in the state but no longer exist in the configuration.
  • Destroy and re-create resources whose arguments have changed but which cannot be updated in-place due to remote API limitations.
  • Expressions within a Terraform module can access information about resources in the same module, and you can use that information to help configure other resources. Use the <RESOURCE TYPE>.<NAME>.<ATTRIBUTE> syntax to reference a resource attribute in an expression.
  • resources often provide read-only attributes with information obtained from the remote API; this often includes things that can't be known until the resource is created, like the resource's unique random ID.
  • data sources, which are a special type of resource used only for looking up information.
  • some dependencies cannot be recognized implicitly in configuration.
  • local-only resource types exist for generating private keys, issuing self-signed TLS certificates, and even generating random ids.
  • The behavior of local-only resources is the same as all other resources, but their result data exists only within the Terraform state.
  • The count meta-argument accepts a whole number, and creates that many instances of the resource or module.
  • count.index — The distinct index number (starting with 0) corresponding to this instance.
  • the count value must be known before Terraform performs any remote resource actions. This means count can't refer to any resource attributes that aren't known until after a configuration is applied
  • Within nested provisioner or connection blocks, the special self object refers to the current resource instance, not the resource block as a whole.
  • This was fragile, because the resource instances were still identified by their index instead of the string values in the list.
crazylion lee

Rocket: Web Framework for Rust - 1 views

shared by crazylion lee on 24 Dec 16
  •  
    "Rocket is a web framework for Rust that makes it simple to write fast web applications without sacrificing flexibility or type safety. All with minimal code."
crazylion lee

crystal-lang/crystal: The Crystal Programming Language - 1 views

  •  
    "Crystal is a programming language with the following goals: Have a syntax similar to Ruby (but compatibility with it is not a goal) Statically type-checked but without having to specify the type of variables or method arguments. Be able to call C code by writing bindings to it in Crystal. Have compile-time evaluation and generation of code, to avoid boilerplate code. Compile to efficient native code. "
crazylion lee

Scalable C (in progress) - GitBook - 0 views

  •  
    "In this book I'll explain "Scalable C," which kicks C into the 21st Century. We use actors, message passing, code generation, and other tricks. I've been writing C for 30 years. It's never been this fun and productive. - Pieter Hintjens"
crazylion lee

GNU Octave - 0 views

  •  
    "GNU Octave is a high-level interpreted language, primarily intended for numerical computations. It provides capabilities for the numerical solution of linear and nonlinear problems, and for performing other numerical experiments. It also provides extensive graphics capabilities for data visualization and manipulation. Octave is normally used through its interactive command line interface, but it can also be used to write non-interactive programs. The Octave language is quite similar to Matlab so that most programs are easily portable."
張 旭

Serverless Architectures - 0 views

  • Serverless was first used to describe applications that significantly or fully depend on 3rd party applications / services (‘in the cloud’) to manage server-side logic and state.
  • ‘rich client’ applications (think single page web apps, or mobile apps) that use the vast ecosystem of cloud accessible databases (like Parse, Firebase), authentication services (Auth0, AWS Cognito), etc.
  • ‘(Mobile) Backend as a Service’
  • Serverless can also mean applications where some amount of server-side logic is still written by the application developer but unlike traditional architectures is run in stateless compute containers that are event-triggered, ephemeral (may only last for one invocation), and fully managed by a 3rd party.
  • 'Functions as a Service'
  • AWS Lambda is one of the most popular implementations of FaaS at present,
  • A good example is Auth0 - they started initially with BaaS ‘Authentication as a Service’, but with Auth0 Webtask they are entering the FaaS space.
  • a typical ecommerce app
  • a backend data-processing service
  • with zero administration.
  • FaaS offerings do not require coding to a specific framework or library.
  • Horizontal scaling is completely automatic, elastic, and managed by the provider
  • Functions in FaaS are triggered by event types defined by the provider.
  • a FaaS-supported message broker
  • from a deployment-unit point of view FaaS functions are stateless.
  • allowed the client direct access to a subset of our database
  • deleted the authentication logic in the original application and have replaced it with a third party BaaS service
  • The client is in fact well on its way to becoming a Single Page Application.
  • implement a FaaS function that responds to http requests via an API Gateway
  • port the search code from the Pet Store server to the Pet Store Search function
  • replaced a long lived consumer application with a FaaS function that runs within the event driven context
  • server applications - is a key difference when comparing with other modern architectural trends like containers and PaaS
  • the only code that needs to change when moving to FaaS is the ‘main method / startup’ code, in that it is deleted, and likely the specific code that is the top-level message handler (the ‘message listener interface’ implementation), but this might only be a change in method signature
  • With FaaS you need to write the function ahead of time to assume parallelism
  • Most providers also allow functions to be triggered as a response to inbound http requests, typically in some kind of API gateway
  • you should assume that for any given invocation of a function none of the in-process or host state that you create will be available to any subsequent invocation.
  • FaaS functions are either naturally stateless
  • store state across requests or for further input to handle a request.
  • certain classes of long lived task are not suited to FaaS functions without re-architecture
  • if you were writing a low-latency trading application you probably wouldn’t want to use FaaS systems at this time
  • An API Gateway is an HTTP server where routes / endpoints are defined in configuration and each route is associated with a FaaS function.
  • API Gateway will allow mapping from http request parameters to inputs arguments for the FaaS function
  • API Gateways may also perform authentication, input validation, response code mapping, etc.
  • the Serverless Framework makes working with API Gateway + Lambda significantly easier than using the first principles provided by AWS.
  • Apex - a project to ‘Build, deploy, and manage AWS Lambda functions with ease.'
  • 'Serverless' to mean the union of a couple of other ideas - 'Backend as a Service' and 'Functions as a Service'.
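
A hedged Python sketch of an AWS-Lambda-style FaaS function of the kind described above: event-triggered, stateless between invocations, and fronted by an API Gateway route. The event shape handling and the hard-coded search result are assumptions for illustration, not the article's code.

```python
import json

def handler(event, context):
    """Entry point invoked by the platform (for example behind an API Gateway route).

    Any in-process state created here must be assumed gone on the next
    invocation, so durable state belongs in an external store.
    """
    query = event.get("queryStringParameters") or {}
    term = query.get("q", "")

    # A real function would look the term up in a managed datastore here.
    results = [{"name": "Fluffy"}] if term == "pet" else []

    return {
        "statusCode": 200,
        "body": json.dumps({"query": term, "results": results}),
    }
```

There is no 'main method / startup' code: the provider loads the module, wires the trigger, and scales the number of concurrent invocations on its own.
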