
Larvata / Group items tagged: system


張 旭

Automated Docker-based Rails deployments - 0 views

  • how to automate the whole deployment process with a real-world app
  • use Unicorn as our webserver
  • "This is the third post in a series of 3 on how my company moved its infrastructure from PaaS to Docker based deployment."
張 旭

Getting Started with Rails - Ruby on Rails Guides - 0 views

  • A controller's purpose is to receive specific requests for the application.
  • Routing decides which controller receives which requests
  • The view should just display that information
  • view templates are written in a language called ERB (Embedded Ruby) which is converted by the request cycle in Rails before being sent to the user.
  • Each action's purpose is to collect information to provide it to a view.
  • A view's purpose is to display this information in a human readable format.
  • routing file which holds entries in a special DSL (domain-specific language) that tells Rails how to connect incoming requests to controllers and actions.
  • You can create, read, update and destroy items for a resource and these operations are referred to as CRUD operations
  • A controller is simply a class that is defined to inherit from ApplicationController.
  • If not found, then it will attempt to load a template called application/new. It looks for one here because the PostsController inherits from ApplicationController
  • :formats specifies the format of template to be served in response. The default format is :html, and so Rails is looking for an HTML template.
  • :handlers, is telling us what template handlers could be used to render our template.
  • When you call form_for, you pass it an identifying object for this form. In this case, it's the symbol :post. This tells the form_for helper what this form is for.
  • the action attribute for the form points at /posts/new
  • When a form is submitted, the fields of the form are sent to Rails as parameters.
  • parameters can then be referenced inside the controller actions, typically to perform a particular task
  • params method is the object which represents the parameters (or fields) coming in from the form.
  • Active Record is smart enough to automatically map column names to model attributes,
  • Rails uses rake commands to run migrations, and it's possible to undo a migration after it's been applied to your database
  • every Rails model can be initialized with its respective attributes, which are automatically mapped to the respective database columns.
  • migration creates a method named change which will be called when you run this migration.
  • The action defined in this method is also reversible, which means Rails knows how to reverse the change made by this migration, in case you want to reverse it later
  • Migration filenames include a timestamp to ensure that they're processed in the order that they were created.
  • @post.save returns a boolean indicating whether the model was saved or not.
  • prevents an attacker from setting the model's attributes by manipulating the hash passed to the model.
  • If you want to link to an action in the same controller, you don't need to specify the :controller option, as Rails will use the current controller by default.
  • inherits from ActiveRecord::Base
  • Active Record supplies a great deal of functionality to your Rails models for free, including basic database CRUD (Create, Read, Update, Destroy) operations, data validation, as well as sophisticated search support and the ability to relate multiple models to one another.
  • Rails includes methods to help you validate the data that you send to models
  • Rails can validate a variety of conditions in a model, including the presence or uniqueness of columns, their format, and the existence of associated objects.
  • redirect_to will tell the browser to issue another request.
  • rendering is done within the same request as the form submission
  • Each request for a comment has to keep track of the post to which the comment is attached, thus the initial call to the find method of the Post model to get the post in question.
  • pluralize is a rails helper that takes a number and a string as its arguments. If the number is greater than one, the string will be automatically pluralized.
  • The render method is used so that the @post object is passed back to the new template when it is rendered.
  • The method: :patch option tells Rails that we want this form to be submitted via the PATCH HTTP method which is the HTTP method you're expected to use to update resources according to the REST protocol.
  • it accepts a hash containing the attributes that you want to update.
  • field_with_errors. You can define a CSS rule to make such fields stand out.
  • belongs_to :post, which sets up an Active Record association
  • creates comments as a nested resource within posts
  • call destroy on Active Record objects when you want to delete them from the database.
  • Rails allows you to use the dependent option of an association to achieve this.
  • store all external data as UTF-8
  • you're better off ensuring that all external data is UTF-8
  • use UTF-8 as the internal storage of your database
  • Rails defaults to converting data from your database into UTF-8 at the boundary.
  • By default, forms built with the form_for helper are sent via POST; pass method: :patch when the form should update a resource.
  • The :method and :'data-confirm' options are used as HTML5 attributes so that when the link is clicked, Rails will first show a confirm dialog to the user, and then submit the link with method delete. This is done via the JavaScript file jquery_ujs which is automatically included into your application's layout (app/views/layouts/application.html.erb) when you generated the application.
  • Without this file, the confirmation dialog box wouldn't appear.
  • just defines the partial template we want to render
  • As the render method iterates over the @post.comments collection, it assigns each comment to a local variable named the same as the partial.
  • use the authentication system
  • require and permit (see the controller sketch after this list)
  • the method is often made private to make sure it can't be called outside its intended context.
  • standard CRUD actions in each controller in the following order: index, show, new, edit, create, update and destroy; they must be placed before any private or protected method in the controller in order to work.
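
Pulling several of these notes together, here is a minimal Ruby sketch of such a controller: strong parameters via require and permit, save returning a boolean, redirect_to versus render, and the private helper placed after the public actions. The Post fields match the guide's example; the method bodies are illustrative, not the guide's verbatim code.

    class PostsController < ApplicationController
      def new
        @post = Post.new
      end

      def create
        @post = Post.new(post_params)
        if @post.save                 # save returns true or false
          redirect_to @post           # tells the browser to issue another request
        else
          render 'new'                # re-renders within the same request,
        end                           # passing @post back to the template
      end

      private

      # Strong parameters: require and permit prevent an attacker from
      # setting arbitrary model attributes via the submitted hash.
      # Private, so it cannot be reached as an action from outside.
      def post_params
        params.require(:post).permit(:title, :text)
      end
    end
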
張 旭

Managing files | Django documentation | Django - 0 views

  • By default, Django stores files locally, using the MEDIA_ROOT and MEDIA_URL settings.
  • use a FileField or ImageField, Django provides a set of APIs you can use to deal with that file.
  • Behind the scenes, Django delegates decisions about how and where to store files to a file storage system.
  • Django uses a django.core.files.File instance any time it needs to represent a file.
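
A minimal sketch in Django's terms, a model with an ImageField plus the File API around it (the Car model mirrors the documentation's example; the lookup values are illustrative):

    from django.db import models

    class Car(models.Model):
        name = models.CharField(max_length=255)
        # Files land under MEDIA_ROOT/cars/ and are served under MEDIA_URL.
        # ImageField requires the Pillow package.
        photo = models.ImageField(upload_to='cars')

    # The field exposes django.core.files.File APIs on instances:
    #   car = Car.objects.get(name='57 Chevy')
    #   car.photo.name  ->  'cars/chevy.jpg'
    #   car.photo.url   ->  '/media/cars/chevy.jpg'
    #   car.photo.size  ->  size in bytes
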
張 旭

Docker ARG, ENV and .env - a Complete Guide · vsupalov.com - 1 views

  • understand and use Docker build-time variables, environment variables and docker-compose templating the right way.
  • ARG is only available during the build of a Docker image (RUN etc), not after the image is created and containers are started from it (ENTRYPOINT, CMD).
  • ENV values are available to containers, but also RUN-style commands during the Docker build starting with the line where they are introduced.
  • If you set an environment variable in an intermediate container using bash (RUN export VARI=5 && …), it will not persist into the next command.
  • An env_file is a convenient way to pass many environment variables to a single command in one batch.
  • It is not to be confused with a .env file: note the dot in front of env (.env), not an "env_file".
  • If you have a file named .env in your project, it’s only used to put values into the docker-compose.yml file which is in the same folder. Those are used with Docker Compose and Docker Stack.
  • Just type docker-compose config. This way you’ll see how the docker-compose.yml file content looks after the substitution step has been performed without running anything else.
  • ARG are also known as build-time variables. They are only available from the moment they are ‘announced’ in the Dockerfile with an ARG instruction up to the moment when the image is built.
  • Running containers can’t access values of ARG variables.
  • ENV variables are also available during the build, as soon as you introduce them with an ENV instruction. However, unlike ARG, they are also accessible by containers started from the final image.
  • ENV values can be overridden when starting a container,
  • If you don’t provide a value to expected ARG variables which don’t have a default, you’ll get an error message.
  • args block
  • You can use ARG to set the default values of ENV vars.
  • dynamic on-build env values
  • 1. Provide values one by one
  • 2. Pass environment variable values from your host
  • 3. Take values from a file (env_file)
  • for each RUN statement, a new container is launched from an intermediate image.
  • An image is saved by the end of the command, but environment variables do not persist that way.
  • The precedence, from strongest to weakest: values the containerized application sets itself, values from single environment entries, values from the env_file(s), and finally Dockerfile defaults.
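
A minimal Dockerfile sketch of the ARG/ENV distinction above (the variable names, values, and image tag are hypothetical):

    FROM alpine:3.10
    ARG APP_VERSION=latest     # build-time only; gone once containers start
    ENV APP_ENV=production     # available to RUN during build AND to containers
    RUN echo "building $APP_VERSION for $APP_ENV"

    # Provide the ARG at build time:
    #   docker build --build-arg APP_VERSION=1.2.3 -t myimage .
    # Override the ENV when starting a container:
    #   docker run -e APP_ENV=staging myimage
    # Preview docker-compose.yml after .env substitution, without running anything:
    #   docker-compose config
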
張 旭

Using Workflows to Schedule Jobs - CircleCI - 1 views

  • A workflow is a set of rules for defining a collection of jobs and their run order.
  • Schedule workflows for jobs that should only run periodically.
  • run multiple jobs in parallel
  • rerun just the failed job
  • Builds without workflows require a build job.
  • Refer to the YAML Anchors/Aliases documentation for information about how to alias and reuse syntax to keep your .circleci/config.yml file small.
  • workflow orchestration with two parallel jobs
  • jobs run according to configured requirements, each job waiting to start until the required job finishes successfully
  • requires: key
  • fans out to run a set of acceptance test jobs in parallel, and finally fans in to run a common deploy job.
  • Holding a Workflow for a Manual Approval
  • Workflows can be configured to wait for manual approval of a job before continuing to the next job
  • add a job to the jobs list with the key type: approval
  • approval is a special job type that is only available to jobs under the workflow key
  • The name of the job to hold is arbitrary - it could be wait or pause, for example, as long as the job has a type: approval key in it.
  • schedule a workflow to run at a certain time for specific branches.
  • The triggers key is only added under your workflows key
  • using cron syntax to represent Coordinated Universal Time (UTC) for specified branches.
  • By default, a workflow is triggered on every git push
  • the commit workflow has no triggers key and will run on every git push
  • The nightly workflow has a triggers key and will run on the specified schedule
  • Cron step syntax (for example, */1, */20) is not supported.
  • use a context to share environment variables
  • use the same shared environment variables when initiated by a user who is part of the organization.
  • CircleCI does not run workflows for tags unless you explicitly specify tag filters.
  • CircleCI branch and tag filters support the Java variant of regex pattern matching.
  • Each workflow has an associated workspace which can be used to transfer files to downstream jobs as the workflow progresses.
  • The workspace is an additive-only store of data.
  • Jobs can persist data to the workspace
  • Downstream jobs can attach the workspace to their container filesystem.
  • Attaching the workspace downloads and unpacks each layer based on the ordering of the upstream jobs in the workflow graph.
  • Workflows that include jobs running on multiple branches may require data to be shared using workspaces
  • To persist data from a job and make it available to other jobs, configure the job to use the persist_to_workspace key.
  • Files and directories named in the paths: property of persist_to_workspace will be uploaded to the workflow’s temporary workspace relative to the directory specified with the root key.
  • Configure a job to get saved data by configuring the attach_workspace key.
  • persist_to_workspace
  • attach_workspace
  • To rerun only a workflow’s failed jobs, click the Workflows icon in the app and select a workflow to see the status of each job, then click the Rerun button and select Rerun from failed.
  • if you do not see your workflows triggering, a configuration error is preventing the workflow from starting.
  • check your Workflows page of the CircleCI app (not the Job page)
張 旭

Virtual Private Cloud (VPC)  |  Virtual Private Cloud  |  Google Cloud - 0 views

  • A single Google Cloud VPC can span multiple regions without communicating across the public Internet.
  • Google Cloud VPCs let you increase the IP space of any subnets without any workload shutdown or downtime.
  • Get private access to Google services, such as storage, big data, analytics, or machine learning, without having to give your service a public IP address.
  • Enable dynamic Border Gateway Protocol (BGP) route updates between your VPC network and your non-Google network with our virtual router.
  • Configure a VPC Network to be shared across several projects in your organization.
  • Hosting globally distributed multi-tier applications by creating a VPC with subnets.
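
A brief gcloud sketch of two of the operations above (the network and subnet names are hypothetical):

    # Create a custom-mode VPC network
    gcloud compute networks create my-vpc --subnet-mode=custom

    # Create a subnet, then later widen its range in place, with no downtime
    gcloud compute networks subnets create my-subnet \
        --network=my-vpc --region=us-central1 --range=10.0.0.0/24
    gcloud compute networks subnets expand-ip-range my-subnet \
        --region=us-central1 --prefix-length=20
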
張 旭

Overview of Virtual Private Cloud  |  VPC  |  Google Cloud - 0 views

  • Think of a VPC network the same way you'd think of a physical network, except that it is virtualized within GCP.
  • VPC networks are logically isolated from each other in GCP.
  • The network connects the resources to each other and to the Internet.
張 旭

Notes on Horizontal Database Sharding · ScienJus's Blog - 0 views

  • Horizontal sharding (also called horizontal database partitioning) means taking data that would otherwise live in a single database and spreading it, by some strategy, across multiple databases whose table structures are identical. Each database then holds far less data, and the databases can be deployed on different physical servers, so in theory the database can scale out without limit.
  • When you hit your first database performance problem, the first remedy to consider should be read/write splitting: send all writes to the primary database and all reads to the replicas.
  • configure one primary with multiple replicas
  • In a master-slave setup, all operations are still handled by the primary, which then replicates to the standby; the standby only takes over after the primary goes down.
  • In general, read/write splitting plus caching covers the vast majority of cases, and requires almost no changes at the business layer.
  • Vertical partitioning puts tables that have nothing to do with each other business-wise into separate databases. After the split, tables in different databases can no longer be joined in one query, but the load is spread out and each database can do its own read/write splitting.
  • Horizontal table splitting creates multiple tables with the same structure to share the data, so each table holds fewer rows and queries speed up.
  • Table splitting, however, stays within a single database on a single machine; if the server itself is the bottleneck, splitting tables won't help noticeably.
  • After table splitting, the sub-tables can still be queried together with UNION and similar commands; after database sharding they cannot.
  • Range-based sharding on id, e.g. rows with id 1-20k stored in database A and rows with id 20k-40k in database B.
  • Hash-based sharding on id, e.g. rows with id % 2 == 0 stored in database A and rows with id % 2 == 1 in database B.
  • Time-range sharding: most software shares one trait, namely that the newer the data, the more likely it is to be touched, while old data is almost never accessed; so shard by insertion time (also called hot/cold separation).
  • Lookup-table sharding: maintain an extra lookup table that maps each id to the database node holding it. The upside is simple, flexible logic with no migration problems; the downside is that every query needs an extra hit on the lookup table, so the lookup table is usually cached in memory.
  • Sharding by geographic location
  • Sharding strategies
  • When designing tables, most people make the primary key an auto-incrementing int to guarantee unique ids. But because auto_increment is bound to a single table, each shard's auto-increment ids are independent after the split, so primary-key collisions are bound to happen.
  • Many people still want primary keys that, even if not contiguous, are at least ordered integers, which is useful for sorting and the like. That requires implementing your own id-generation algorithm, typically using a Unix timestamp to keep ids ordered, mixed with something like a MAC address to keep them unique.
  • Once sharded, most join queries have to be replaced by at least two queries: first fetch the matching ids from the related table, then fetch the main records by id from the corresponding database.
  • As the last-resort answer to database performance bottlenecks, horizontal sharding does solve the problem, but it makes the business logic much more complex (mainly redundant association tables and fields, and keeping that redundant data updated), and it raises the hard problem of distributed transactions. So don't attempt database sharding until you really have to.
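
A minimal Ruby sketch of the hash-based strategy above (the shard names and the modulus are hypothetical):

    # Route a record to one of N shards by hashing its id.
    SHARDS = %w[db_a db_b].freeze

    def shard_for(id)
      SHARDS[id % SHARDS.size]   # id % 2 == 0 -> "db_a", id % 2 == 1 -> "db_b"
    end

    shard_for(42)  # => "db_a"
    shard_for(7)   # => "db_b"
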
張 旭

A few ways to execute commands remotely using SSH · zaiste.net - 0 views

  • Using heredoc is probably the most convenient way to execute multi-line commands on a remote machine
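
For instance, a short sketch of the heredoc approach (the host and commands are hypothetical):

    # Quoting 'EOF' keeps variables from expanding on the local machine.
    ssh user@example.com << 'EOF'
      cd /var/www/app
      git pull origin master
      sudo systemctl restart app
    EOF
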
張 旭

Business-IT integration, through the lens of The Phoenix Project - Zhihu - 0 views

  • Your value is measured by how much IT can take part in the business side's systems, help the business units, and even influence them. This work counts as urgent and important.
  • "IT-internal projects": some IT departments are very keen on these, partly, in my view, because they are the kind of thing IT finds fun or is good at.
  • Important-but-not-urgent work, such as seriously researching and building the foundational environment for DevOps.
  • Changes may be unavoidable, but I personally think they should be reduced as much as possible (or at least made predictable), and the change process should be automated.
  • The most easily overlooked category is "unplanned work", which steals our time. It corresponds to what we usually describe as urgent-but-unimportant, or neither-urgent-nor-important, matters.
  • the Theory of Constraints (TOC)
  • He is too good, so he disdains writing documentation; he is too important, so he can change things at will without following process; he is too busy, so many things queue up waiting for his time.
  • You do need talent like Brent; the point is that Brent has become the team's constraint. How well you use him decides whether the work succeeds (build good mechanisms so such people can work better, instead of being spent on low-value tasks), and helping him rise to a new level (or growing more Brents) is the only long-term answer.
  • The fundamental difference between "business-IT integration" and the traditional model is the heavy use of automation to cut out intermediate steps.
  • trust among development, testing, operations, and security inside the IT department
  • Step one: straighten out the forward flow of work, from product concept, design, development, and testing through operations to the customer.
  • Step two: working backward from the customer, build a healthy and efficient feedback flow; I summarize this as fast trial-and-error and iteration.
  • Step three, I think, hits the nail on the head: business-IT integration is all well and good, but however sound the process, however powerful the tools, and however supportive the leadership, without a company culture that every employee buys into it will all degenerate into an empty formality. Talking innovation without trust is, in the end, nonsense.
  • Each department cares only about its own small goals and takes pride in how much it gets done, regardless of what any of it means for the company's overall objectives.
張 旭

Step outside operations to do operations well (a review of The Phoenix Project) - 0 views

  • 1. Business projects: needs raised by business units such as product development or sales, e.g. launching a new product, implementing for a customer, or planning a big promotion like Singles' Day; this work tends to be systemic and needs close cooperation across departments. 2. Internal projects: infrastructure work the ops team builds around business projects, such as deployment automation, multi-environment builds, continuous delivery, and monitoring/alerting. 3. Changes: modifications to operational components requested by other departments; compared with business projects, changes are usually scattered, such as granting permissions, opening ports, or provisioning machines, so keep a record of every operation and leave an audit trail. 4. Unplanned work: handling problems nobody could foresee, i.e. "firefighting".
  • The more unplanned work there is, the more it eats into the time for the other three kinds of work, and the more that work piles up.
  • When ops' internal infrastructure can't keep up, you cannot think through and fix unplanned work at the root, and a vicious cycle sets in: unfinished work (the book likes to call this stuff "work in progress") piles up and piles up until it crushes people.
  • Periodic work is also very important to operations: regularly inspecting components, load testing, randomly breaking the distributed system the way Chaos Monkey does, disaster drills, backup drills, and so on. These can also be filed under internal projects.
  • Once the pipeline is visualized, besides eliminating the bottleneck node, other things such as task priorities, dependencies between tasks, and how many tasks each batch resolves are also surfaced and get addressed.
  • Feedback and tracing to the source. We need to know how each stage of every task pipeline performs, in efficiency and quality, and then work out improvements together with the departments involved in that stage. For example, if deployment takes too long, optimize the packaging process and the middleware architecture the deployment depends on together with the development team; if there are too many production incidents, join forces with development and QA to examine where those incidents originate and devise solutions, and so on.
  • Step three is to build a cultural mechanism that keeps this way of working going, achieving continuous improvement and getting better and better!
張 旭

Variables - Ansible Documentation - 0 views

  • with the last listed variables winning prioritization
  • anything that goes into “role defaults” (the defaults folder inside the role) is the most malleable and easily overridden.
  • Anything in the vars directory of the role overrides previous versions of that variable in namespace.
  • with command line -e extra vars always winning
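
A minimal sketch of that precedence order (the role, variable, and values are hypothetical):

    # roles/myrole/defaults/main.yml -- "role defaults", most easily overridden
    greeting: "hello from defaults"

    # roles/myrole/vars/main.yml -- overrides the default above in the namespace
    greeting: "hello from vars"

    # Command line -e extra vars always win:
    #   ansible-playbook site.yml -e greeting="hello from extra vars"
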
張 旭

How services work | Docker Documentation - 0 views

  • a service is the image for a microservice within the context of some larger application.
  • When you create a service, you specify which container image to use and which commands to execute inside running containers.
  • an overlay network for the service to connect to other services in the swarm
  • In the swarm mode model, each task invokes exactly one container
  • A task is analogous to a “slot” where the scheduler places a container.
  • A task is the atomic unit of scheduling within a swarm.
  • A task is a one-directional mechanism. It progresses monotonically through a series of states: assigned, prepared, running, etc.
  • Docker swarm mode is a general purpose scheduler and orchestrator.
  • Hypothetically, you could implement other types of tasks such as virtual machine tasks or non-containerized process tasks.
  • If all nodes are paused or drained, and you create a service, it is pending until a node becomes available.
  • reserve a specific amount of memory for a service.
  • impose placement constraints on the service
  • As the administrator of a swarm, you declare the desired state of your swarm, and the manager works with the nodes in the swarm to create that state.
  • two types of service deployments, replicated and global.
  • A global service is a service that runs one task on every node.
  • Good candidates for global services are monitoring agents, anti-virus scanners, or other types of containers that you want to run on every node in the swarm.
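
A few docker service create sketches of the concepts above (the service and image names are hypothetical):

    # Replicated service: the scheduler places 3 tasks, one container per task
    docker service create --name web --replicas 3 nginx

    # Reserve memory and impose a placement constraint
    docker service create --name api \
        --reserve-memory 512m \
        --constraint 'node.role == worker' \
        myorg/api

    # Global service: one task on every node (e.g. a monitoring agent)
    docker service create --name agent --mode global myorg/monitor
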
張 旭

Manage nodes in a swarm | Docker Documentation - 0 views

  • Drain means the scheduler doesn’t assign new tasks to the node. The scheduler shuts down any existing tasks and schedules them on an available node.
  • Reachable means the node is a manager node participating in the Raft consensus quorum. If the leader node becomes unavailable, the node is eligible for election as the new leader.
  • If a manager node becomes unavailable, you should either join a new manager node to the swarm or promote a worker node to be a manager.
  • docker node inspect self --pretty
  • docker node update --availability drain node
  • use node labels in service constraints
  • The labels you set for nodes using docker node update apply only to the node entity within the swarm
  • node labels can be used to limit critical tasks to nodes that meet certain requirements
  • promote a worker node to the manager role
  • demote a manager node to the worker role
  • If the last manager node leaves the swarm, the swarm becomes unavailable, requiring you to take disaster recovery measures.
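
Sketches of the commands those notes refer to (the node and service names are hypothetical):

    docker node inspect self --pretty                    # details of the current node
    docker node update --availability drain node-1       # stop scheduling tasks there
    docker node update --label-add region=east node-1    # swarm-scoped node label
    docker service create --name db \
        --constraint 'node.labels.region == east' postgres
    docker node promote node-1                           # worker -> manager
    docker node demote node-2                            # manager -> worker
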
張 旭

Manage swarm security with public key infrastructure (PKI) | Docker Documentation - 0 views

  • The nodes in a swarm use mutual Transport Layer Security (TLS) to authenticate, authorize, and encrypt the communications with other nodes in the swarm.
  • By default, the manager node generates a new root Certificate Authority (CA) along with a key pair, which are used to secure communications with other nodes that join the swarm.
  • The manager node also generates two tokens to use when you join additional nodes to the swarm: one worker token and one manager token.
  • Each time a new node joins the swarm, the manager issues a certificate to the node
  • By default, each node in the swarm renews its certificate every three months.
  • If a cluster CA key or a manager node is compromised, you can rotate the swarm root CA so that none of the nodes trust certificates signed by the old root CA anymore.
張 旭

Swarm task states | Docker Documentation - 0 views

  • Each service can start multiple tasks.
  • Tasks are execution units that run once to completion.
  • The task progresses forward through a number of states, and its state doesn’t go backward.
張 旭

Swarm Mode Cluster - Træfik - 0 views

  • The only requirement for Træfik to work with swarm mode is that it needs to run on a manager node
張 旭

Internet Gateways - Amazon Virtual Private Cloud - 0 views

  • to provide a target in your VPC route tables for internet-routable traffic
  • to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses
  • Ensure that instances in your subnet have a globally unique IP address (public IPv4 address, Elastic IP address, or IPv6 address)
  • To use an internet gateway, your subnet's route table must contain a route that directs internet-bound traffic to the internet gateway.
  • If your subnet is associated with a route table that has a route to an internet gateway, it's known as a public subnet.
  • To enable communication over the internet for IPv4, your instance must have a public IPv4 address or an Elastic IP address that's associated with a private IPv4 address on your instance.
  • Your instance is only aware of the private (internal) IP address space defined within the VPC and subnet
  • internet gateway logically provides the one-to-one NAT on behalf of your instance
  • To enable communication over the internet for IPv6, your VPC and subnet must have an associated IPv6 CIDR block, and your instance must be assigned an IPv6 address from the range of the subnet.
  • When you create a subnet, we automatically associate it with the main route table for the VPC.
  • the main route table doesn't contain a route to an internet gateway
  • Each instance that you launch into a VPC is automatically associated with its default security group.
  • a default security group allows no inbound traffic from the internet and allows all outbound traffic to the internet.
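
An AWS CLI sketch of wiring an internet gateway into a VPC (the resource IDs are placeholders):

    # Create an internet gateway and attach it to the VPC
    aws ec2 create-internet-gateway
    aws ec2 attach-internet-gateway --internet-gateway-id igw-0abc123 --vpc-id vpc-0abc123

    # Add a route so internet-bound IPv4 traffic uses the gateway; subnets on
    # this route table become "public" subnets
    aws ec2 create-route --route-table-id rtb-0abc123 \
        --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0abc123
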