
Larvata / Group items tagged: system


張 旭

Queues - Laravel - The PHP Framework For Web Artisans - 0 views

  • Laravel queues provide a unified API across a variety of different queue backends, such as Beanstalk, Amazon SQS, Redis, or even a relational database.
  • The queue configuration file is stored in config/queue.php
  • a synchronous driver that will execute jobs immediately (for local use)
  • A null queue driver is also included which discards queued jobs.
  • In your config/queue.php configuration file, there is a connections configuration option.
  • any given queue connection may have multiple "queues" which may be thought of as different stacks or piles of queued jobs.
  • each connection configuration example in the queue configuration file contains a queue attribute.
  • if you dispatch a job without explicitly defining which queue it should be dispatched to, the job will be placed on the queue that is defined in the queue attribute of the connection configuration
  • pushing jobs to multiple queues can be especially useful for applications that wish to prioritize or segment how jobs are processed
  • specify which queues it should process by priority.
  • If your Redis queue connection uses a Redis Cluster, your queue names must contain a key hash tag.
  • ensure all of the Redis keys for a given queue are placed into the same hash slot
  • all of the queueable jobs for your application are stored in the app/Jobs directory.
  • Job classes are very simple, normally containing only a handle method which is called when the job is processed by the queue.
  • we were able to pass an Eloquent model directly into the queued job's constructor. Because of the SerializesModels trait that the job is using, Eloquent models will be gracefully serialized and unserialized when the job is processing.
  • When the job is actually handled, the queue system will automatically re-retrieve the full model instance from the database.
  • The handle method is called when the job is processed by the queue
  • The arguments passed to the dispatch method will be given to the job's constructor
  • delay the execution of a queued job, you may use the delay method when dispatching a job.
  • dispatch a job immediately (synchronously), you may use the dispatchNow method.
  • When using this method, the job will not be queued and will be run immediately within the current process
  • specify a list of queued jobs that should be run in sequence.
  • Deleting jobs using the $this->delete() method will not prevent chained jobs from being processed. The chain will only stop executing if a job in the chain fails.
  • this does not push jobs to different queue "connections" as defined by your queue configuration file, but only to specific queues within a single connection.
  • To specify the queue, use the onQueue method when dispatching the job
  • To specify the connection, use the onConnection method when dispatching the job
  • defining the maximum number of attempts on the job class itself.
  • As an alternative to defining how many times a job may be attempted before it fails, you may define a time at which the job should timeout.
  • using the funnel method, you may limit jobs of a given type to only be processed by one worker at a time
  • using the throttle method, you may throttle a given type of job to only run 10 times every 60 seconds.
  • If an exception is thrown while the job is being processed, the job will automatically be released back onto the queue so it may be attempted again.
  • dispatch a Closure. This is great for quick, simple tasks that need to be executed outside of the current request cycle
  • When dispatching Closures to the queue, the Closure's code is cryptographically signed so that it cannot be modified in transit.
  • Laravel includes a queue worker that will process new jobs as they are pushed onto the queue.
  • once the queue:work command has started, it will continue to run until it is manually stopped or you close your terminal
  • queue workers are long-lived processes and store the booted application state in memory.
  • they will not notice changes in your code base after they have been started.
  • during your deployment process, be sure to restart your queue workers.
  • customize your queue worker even further by only processing particular queues for a given connection
  • The --once option may be used to instruct the worker to only process a single job from the queue
  • The --stop-when-empty option may be used to instruct the worker to process all jobs and then exit gracefully.
  • Daemon queue workers do not "reboot" the framework before processing each job.
  • you should free any heavy resources after each job completes.
  • Since queue workers are long-lived processes, they will not pick up changes to your code without being restarted.
  • restart the workers during your deployment process.
  • php artisan queue:restart
  • The queue uses the cache to store restart signals
  • Since the queue workers will die when the queue:restart command is executed, you should be running a process manager such as Supervisor to automatically restart the queue workers.
  • each queue connection defines a retry_after option. This option specifies how many seconds the queue connection should wait before retrying a job that is being processed.
  • The --timeout option specifies how long the Laravel queue master process will wait before killing off a child queue worker that is processing a job.
  • When jobs are available on the queue, the worker will keep processing jobs with no delay in between them.
  • While sleeping, the worker will not process any new jobs - the jobs will be processed after the worker wakes up again
  • the numprocs directive will instruct Supervisor to run 8 queue:work processes and monitor all of them, automatically restarting them if they fail.
  • Laravel includes a convenient way to specify the maximum number of times a job should be attempted.
  • define a failed method directly on your job class, allowing you to perform job specific clean-up when a failure occurs.
  • a great opportunity to notify your team via email or Slack.
  • php artisan queue:retry all
  • php artisan queue:flush
  • When injecting an Eloquent model into a job, it is automatically serialized before being placed on the queue and restored when the job is processed
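
Laravel's job classes and workers are PHP, so the snippet below is not Laravel code. It is only a hedged, language-agnostic sketch of the pattern the highlights describe (a producer serializing jobs onto named queues, and a long-lived worker that polls the higher-priority queue first), assuming a local Redis instance and the redis-py package:

```python
# Not Laravel: a minimal Python/Redis sketch of the dispatch/worker idea.
import json
import redis

r = redis.Redis()  # assumes Redis on localhost:6379

def dispatch(job_name, payload, queue="default"):
    """Producer: serialize the job and push it onto a named queue."""
    r.rpush(queue, json.dumps({"job": job_name, "payload": payload}))

def work(queues=("high", "default")):
    """Worker: long-lived loop; BLPOP checks the queues in order, so 'high' wins."""
    while True:
        item = r.blpop(queues, timeout=5)  # block until a job arrives on any queue
        if item is None:
            continue                       # nothing queued; keep polling
        queue_name, raw = item
        job = json.loads(raw)
        print(f"processing {job['job']} from {queue_name.decode()}: {job['payload']}")

dispatch("SendWelcomeEmail", {"user_id": 42})
```

In Laravel itself the producer side is the dispatch helper and the worker is php artisan queue:work, kept alive by a process manager such as Supervisor, as the highlights note.
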
chiehting

Top 5 Kubernetes Best Practices From Sandeep Dinesh (Google) - DZone Cloud - 0 views

  • Best Practices for Kubernetes
  • #1: Building Containers
  • Don’t Trust Arbitrary Base Images!
  • There’s a lot wrong with this: you could be using the wrong version of code that has exploits, has a bug in it, or worse it could have malware bundled in on purpose—you just don’t know.
  • Keep Base Images Small
  • Node.js, for example, includes an extra 600MB of libraries you don’t need.
  • Use the Builder Pattern
  • #2: Container Internals
  • Use a Non-Root User Inside the Container
  • Make the File System Read-Only
  • One Process per Container
  • Don’t Restart on Failure. Crash Cleanly Instead.
  • Log Everything to stdout and stderr
  • #3: Deployments
  • Use the “Record” Option for Easier Rollbacks
  • Use Weave Cloud
  • Use Sidecars for Proxies, Watchers, Etc.
  • Don’t Use Sidecars for Bootstrapping!
  • Don’t Use :Latest or No Tag
  • Readiness and Liveness Probes are Your Friend
  • #4: Services
  • Don’t Use type: LoadBalancer
  • Type: Nodeport Can Be “Good Enough”
  • Use Static IPs; They Are Free!
  • Map External Services to Internal Ones
  • #5: Application Architecture
  • Use Helm Charts
  • All Downstream Dependencies Are Unreliable
  • Use Plenty of Descriptive Labels
  • Make Sure Your Microservices Aren’t Too Micro
  • Use Namespaces to Split Up Your Cluster
  • Role-Based Access Control
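
A few of the container-internals points above (log everything to stdout/stderr, expose an endpoint that readiness and liveness probes can hit, and crash cleanly instead of restarting yourself) can be seen in a tiny service. This is only an illustrative sketch, not from the article; the port and the /healthz path are assumptions:

```python
# Minimal probe-friendly service: logs to stdout/stderr, exits cleanly on SIGTERM.
import signal
import sys
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":        # hypothetical probe path
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()
    # BaseHTTPRequestHandler already writes access logs to stderr, which is what
    # the container runtime collects, so no log files are opened here.

def shutdown(signum, frame):
    print("SIGTERM received, exiting cleanly", flush=True)  # let the orchestrator restart us
    sys.exit(0)

signal.signal(signal.SIGTERM, shutdown)
HTTPServer(("", 8080), Handler).serve_forever()
```
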
張 旭

The Twelve-Factor App - 0 views

  • software is commonly delivered as a service: called web apps, or software-as-a-service.
  • Use declarative formats for setup automation
  • offering maximum portability between execution environments
  • obviating the need for servers and systems administration
  • Minimize divergence between development and production
  • scale up without significant changes to tooling, architecture, or development practices
  • Ops engineers who deploy or manage such applications.
  • developer building applications which run as a service
  • One codebase
  • many deploys
  • in the environment
  • services as attached resources
  • Explicitly declare
  • separate build and run stages
  • stateless processes
  • Export services via port binding
  • Scale out
  • fast startup and graceful shutdown
  • as similar as possible
  • logs as event streams
  • admin/management tasks as one-off processes
  •  
    "software is commonly delivered as a service: called web apps, or software-as-a-service"
張 旭

DevOps - 0 views

  • For operations, passing knowledge on is extremely important, so it is well worth building an ops knowledge base. On one hand it helps with post-incident reviews; on the other it helps people who later join the ops team get up to speed quickly on the system's operational skills
  • The cloud platform supports DevOps mainly in the following 3 areas (the software tool carrying each capability is in parentheses): 1. IaaS-based self-service and environment orchestration (VMWare) 2. PaaS-based elastic scaling (K8s) 3. SaaS-based software services
  • Consider building your own private cloud, or at least a hybrid cloud.
  • Set up so-called private repositories on the internal network that act as proxies and sync with the public repositories outside.
  • It is hard to achieve DevOps to Production in the true sense.
  • Visualization is for showing, in real time, how the continuous delivery pipeline is executing along with the unit-test reports.
  • Chain automated tests into the continuous delivery pipeline, triggering them once deployment to the test environment succeeds.
  • The test stage also needs visualized test reports and result notifications.
  • Enterprise continuous delivery pipelines often fail to reach all the way to production.
  • Service Desk is not the name of a specific piece of software; it is the collective term in ITIL (the IT Infrastructure Library, which can be seen as the concrete implementation of ITSM) for the tools that carry change management and incident management.
  • Building the underlying cloud platform is the cornerstone of the entire DevOps infrastructure.
  • Architecture is not set in stone; it should keep evolving as real needs change, and capabilities should keep improving along with it.
  • Execution environments for parallel tests are generated on demand by the PaaS platform and destroyed automatically once the tests finish.
  • Even for near-identical projects, small differences in how they compile and build can easily lead to very different continuous delivery pipeline designs.
  •  
    "For operations, passing knowledge on is extremely important, so it is well worth building an ops knowledge base. On one hand it helps with post-incident reviews; on the other it helps people who later join the ops team get up to speed quickly on the system's operational skills."
張 旭

The Twelve-Factor App - 0 views

  • An app’s config is everything that is likely to vary between deploys (staging, production, developer environments, etc)
  • Resource handles
  • Credentials
  • Per-deploy values
  • strict separation of config from code.
  • Config varies substantially across deploys, code does not.
  • The codebase could be made open source at any moment, without compromising any credentials.
  • “config” does not include internal application config
  • stores config in environment variables (often shortened to env vars or env).
  • env vars are granular controls, each fully orthogonal to other env vars
  • They are never grouped together as “environments”
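
What "config in env vars" looks like in practice, as a minimal sketch (the variable names below are only examples):

```python
import os

# Each value is read from its own env var: granular, orthogonal, never grouped
# into a named "environment", and nothing here has to live in the codebase.
DATABASE_URL = os.environ["DATABASE_URL"]                     # required: fail loudly if unset
REDIS_URL = os.getenv("REDIS_URL", "redis://localhost:6379")  # optional, with a default
DEBUG = os.getenv("DEBUG", "false").lower() == "true"

print(f"debug={DEBUG}, redis={REDIS_URL}")
```
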
張 旭

Public Key Infrastructure (PKI) Overview - 0 views

  • A PKI allows you to bind public keys (contained in SSL certificates) with a person in a way that allows you to trust the certificate.
  • Public Key Infrastructures, like the one used to secure the Internet, most commonly use a Certificate Authority (also called a Registration Authority) to verify the identity of an entity and create unforgeable certificates.
  • An SSL Certificate Authority (also called a trusted third party or CA) is an organization that issues digital certificates to organizations or individuals after verifying their identity.
  • An SSL Certificate provides assurances that we are talking to the right server, but the assurances are limited.
  • In PKI, trust simply means that a certificate can be validated by a CA that is in our trust store.
  • An SSL Certificate in a PKI is a digital document containing a public key, entity information, and a digital signature from the certificate issuer.
  • it is much more practical and secure to establish a chain of trust to the Root certificate by signing an Intermediate certificate
  • A trust store is a collection of Root certificates that are trusted by default.
  • there are four primary trust stores that are relied upon for the majority of software: Apple, Microsoft, Chrome, and Mozilla.
  • a revocation system that allows a certificate to be listed as invalid if it was improperly issued or if the private key has been compromised.
  • Online Certificate Status Protocol (OCSP)
  • Certificate Revocation List (CRL)
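
To see the "digital document containing a public key, entity information, and a digital signature from the certificate issuer" concretely, a certificate's fields can be inspected directly. A small sketch using Python's ssl module and the third-party cryptography package (the hostname is just an example):

```python
import ssl
from cryptography import x509   # assumes the "cryptography" package is installed

# Fetch the server's certificate in PEM form and parse it.
pem = ssl.get_server_certificate(("example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode())

print("subject:", cert.subject.rfc4514_string())
print("issuer: ", cert.issuer.rfc4514_string())   # the CA or intermediate that signed it
print("valid:  ", cert.not_valid_before, "to", cert.not_valid_after)
```
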
張 旭

Docker for AWS persistent data volumes | Docker Documentation - 0 views

  • Cloudstor is a modern volume plugin built by Docker
  • Docker swarm mode tasks and regular Docker containers can use a volume created with Cloudstor to mount a persistent data volume.
  • Global shared Cloudstor volumes mounted by all tasks in a swarm service.
  • Workloads running in a Docker service that require access to low latency/high IOPs persistent storage, such as a database engine, can use a relocatable Cloudstor volume backed by EBS.
  • Each relocatable Cloudstor volume is backed by a single EBS volume.
  • If a swarm task using a relocatable Cloudstor volume gets rescheduled to another node within the same availability zone as the original node where the task was running, Cloudstor detaches the backing EBS volume from the original node and attaches it to the new target node automatically.
  • in a different availability zone,
  • Cloudstor transfers the contents of the backing EBS volume to the destination availability zone using a snapshot, and cleans up the EBS volume in the original availability zone.
  • Typically the snapshot-based transfer process across availability zones takes between 2 and 5 minutes unless the work load is write-heavy.
  • A swarm task is not started until the volume it mounts becomes available
  • Sharing/mounting the same Cloudstor volume backed by EBS among multiple tasks is not a supported scenario and leads to data loss.
  • a Cloudstor volume to share data between tasks, choose the appropriate EFS backed shared volume option.
  • When multiple swarm service tasks need to share data in a persistent storage volume, you can use a shared Cloudstor volume backed by EFS.
  • a volume and its contents can be mounted by multiple swarm service tasks without the risk of data loss
  • over NFS
  • the persistent data backed by EFS volumes is always available.
  • shared Cloudstor volumes only work in those AWS regions where EFS is supported.
張 旭

Glossary - CircleCI - 0 views

  • User authentication may use LDAP for an instance of the CircleCI application that is installed on your private server or cloud
  • The first user to log into a private installation of CircleCI
  • Contexts provide a mechanism for securing and sharing environment variables across projects.
  • The environment variables are defined as name/value pairs and are injected at runtime.
  • The CircleCI Docker Layer Caching feature allows builds to reuse Docker image layers
  • from previous builds.
  • Image layers are stored in separate volumes in the cloud and are not shared between projects.
  • Layers may only be used by builds from the same project.
  • Environment variables store customer data that is used by a project.
  • Defines the underlying technology to run a job.
  • machine to run your job inside a full virtual machine.
  • docker to run your job inside a Docker container with a specified image
  • A job is a collection of steps.
  • The first image listed in config.yml
  • A CircleCI project shares the name of the code repository for which it automates workflows, tests, and deployment.
  • must be added with the Add Project button
  • Following a project enables a user to subscribe to email notifications for the project build status and adds the project to their CircleCI dashboard.
  • A step is a collection of executable commands
  • Users must be added to a GitHub or Bitbucket org to view or follow associated CircleCI projects.
  • Users may not view project data that is stored in environment variables.  
  • A Workflow is a set of rules for defining a collection of jobs and their run order.
  • Workflows are implemented as a directed acyclic graph (DAG) of jobs for greatest flexibility.
  • referred to as Pipelines
  • A workspace is a workflows-aware storage mechanism.
  • A workspace stores data unique to the job, which may be needed in downstream jobs.
張 旭

Persisting Data in Workflows: When to Use Caching, Artifacts, and Workspaces - CircleCI - 0 views

  • Repeatability is also important
  • When a CI process isn’t repeatable you’ll find yourself wasting time re-running jobs to get them to go green.
  • Workspaces persist data between jobs in a single Workflow.
  • Caching persists data between the same job in different Workflow builds.
  • Artifacts persist data after a Workflow has finished
  • When a Workspace is declared in a job, one or more files or directories can be added. Each addition creates a new layer in the Workspace filesystem. Downstream jobs can then use this Workspace for their own needs or add more layers on top.
  • Unlike caching, Workspaces are not shared between runs, as they no longer exist once a Workflow is complete.
  • Caching lets you reuse the data from expensive fetch operations from previous jobs.
  • A prime example is package dependency managers such as Yarn, Bundler, or Pip.
  • Caches are global within a project; a cache saved on one branch will be used by others, so they should only be used for data that is OK to share across branches.
  • Artifacts are used for longer-term storage of the outputs of your build process.
  • If your project needs to be packaged in some form or fashion, say an Android app where the .apk file is uploaded to Google Play, that’s a great example of an artifact.
  •  
    "CircleCI 2.0 provides a number of different ways to move data into and out of jobs, persist data, and with the introduction of Workspaces, move data between jobs"
張 旭

Pre-Built CircleCI Docker Images - CircleCI - 0 views

  • typically extensions of official Docker images and include tools especially useful for CI/CD.
  • Convenience images are based on the most recently built versions of upstream images, so it is best practice to use the most specific image possible.
  • add -jessie or -stretch to the end of each of those containers to ensure you’re only using that version of the Debian base OS.
  • language images
  • service images
  • All images add a circleci user as a system user
  • A language image should be listed first under the docker key in your configuration, making it the primary container during execution.
  • For example, if you want to add browsers to the circleci/golang:1.9 image, use the circleci/golang:1.9-browsers image.
  • Service images are convenience images for services like databases
  • should be listed after language images so they become secondary service containers.
  • To speed up builds using RAM volume, add the -ram suffix to the end of a service image tag
  • All convenience images have been extended with additional tools.
  • all images include the following packages, installed via apt-get
  • Most CircleCI convenience images are Debian Jessie- or Stretch-based images; however, some extend Ubuntu-based images.
  • The following packages are installed via curl
張 旭

What's the difference between Prometheus and Zabbix? - Stack Overflow - 0 views

  • Zabbix has a core written in C and a web UI based on PHP
  • Zabbix stores data in RDBMS (MySQL, PostgreSQL, Oracle, sqlite) of user's choice.
  • Prometheus uses its own database embedded into backend process
  • Zabbix by default uses a "pull" model, in which a server connects to agents on each monitored machine; the agents periodically gather the info and send it to the server.
  • Prometheus prefers a "pull" model, in which the server gathers info from client machines.
  • Prometheus requires an application to be instrumented with Prometheus client library (available in different programming languages) for preparing metrics.
  • expose metrics for Prometheus (similar to "agents" for Zabbix)
  • Zabbix uses its own tcp-based communication protocol between agents and a server.
  • Prometheus uses HTTP with protocol buffers (+ text format for ease of use with curl).
  • Prometheus offers a basic tool for exploring gathered data and visualizing it in simple graphs on its native server, and also offers a minimal dashboard builder, PromDash. But Prometheus is designed to be supported by modern visualization tools like Grafana.
  • Prometheus offers solution for alerting that is separated from its core into Alertmanager application.
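
The "instrumented with a Prometheus client library" point can be sketched with the official Python client: the process exposes /metrics over HTTP and the Prometheus server pulls from it. The port and metric names below are arbitrary examples:

```python
import random
import time
from prometheus_client import Counter, Gauge, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
IN_FLIGHT = Gauge("app_in_flight_requests", "Requests currently being handled")

start_http_server(8000)   # serves /metrics; Prometheus scrapes this endpoint over HTTP

while True:
    with IN_FLIGHT.track_inprogress():
        REQUESTS.inc()
        time.sleep(random.random())   # stand-in for real work
```
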
張 旭

Ask HN: What are the best practises for using SSH keys? | Hacker News - 0 views

  • Make sure you use full disk encryption and never stand up from your machine without locking it, and make sure you keep your local machine patched.
  • I'm more focused on just stealing your keys from you regardless of length
  • attacks that aren't after your keys specifically, e.g. your home directory gets stolen.
  • ED25519 is more vulnerable to quantum computation than is RSA
  • best practice to be using a hardware token
  • to use a yubikey via gpg: with this method you use your gpg subkey as an ssh key
  • sit down and spend an hour thinking about your backup and recovery strategy first
  • never share a private keys between physical devices
  • allows you to revoke a single credential if you lose (control over) that device
  • If a private key ever turns up on the wrong machine, you *know* the key and both source and destination machines have been compromised.
  • centralized management of authentication/authorization
  • I have set up a VPS, disabled passwords, and set up a key with a passphrase to gain access. At this point my greatest worry is losing this private key, as that means I can't access the server. What is a reasonable way to back up my private key?
  • a mountable disk image that's encrypted
  • a system that can update/rotate your keys across all of your servers on the fly in case one is compromised or assumed to be compromised.
  • different keys for different purposes per client device
  • fall back to password plus OTP
  • relying completely on the security of your disk, against either physical or cyber.
  • It is better to use a different passphrase for each key but it is also less convenient unless you're using a password manager (personally, I'm using KeePass)
  • - RSA is pretty standard, and generally speaking is fairly secure for key lengths >=2048. RSA-2048 is the default for ssh-keygen, and is compatible with just about everything.
  • public-key authentication has the somewhat unexpected side effect of preventing MITM, per this security consulting firm
  • Disable passwords and only allow keys even for root with PermitRootLogin without-password
  • You should definitely use a different passphrase for keys stored on separate computers,
  •  
    "Make sure you use full disk encryption and never stand up from your machine without locking it, and make sure you keep your local machine patched"
張 旭

elabs/pundit: Minimal authorization through OO design and pure Ruby classes - 0 views

  • The class implements some kind of query method
  • Pundit will call the current_user method to retrieve what to send into this argument
  • put these classes in app/policies
  • in leveraging regular Ruby classes and object oriented design patterns to build a simple, robust and scalable authorization system
  • map to the name of a particular controller action
  • In the generated ApplicationPolicy, the model object is called record.
  • record
  • authorize
  • authorize would have done something like this: raise "not authorized" unless PostPolicy.new(current_user, @post).update?
  • pass a second argument to authorize if the name of the permission you want to check doesn't match the action name.
  • you can chain it
  • authorize returns the object passed to it
  • the policy method in both the view and controller.
  • have some kind of view listing records which a particular user has access to
  • ActiveRecord::Relation
  • Instances of this class respond to the method resolve, which should return some kind of result which can be iterated over.
  • scope.where(published: true)
    • 張 旭
       
      I think the gist is: an admin can see every post, while everyone else can only see posts with published = true
  • use this class from your controller via the policy_scope method:
  • PostPolicy::Scope.new(current_user, Post).resolve
  • policy_scope(@user.posts).each
  • This method will raise an exception if authorize has not yet been called.
  • verify_policy_scoped to your controller. This will raise an exception in the vein of verify_authorized. However, it tracks if policy_scope is used instead of authorize
  • need to conditionally bypass verification, you can use skip_authorization
  • skip_policy_scope
  • Having a mechanism that ensures authorization happens allows developers to thoroughly test authorization scenarios as units on the policy objects themselves.
  • Pundit doesn't do anything you couldn't have easily done yourself. It's a very small library, it just provides a few neat helpers.
  • all of the policy and scope classes are just plain Ruby classes
  • rails g pundit:policy post
  • define a filter that redirects unauthenticated users to the login page
  • fail more gracefully
  • raise Pundit::NotAuthorizedError, "must be logged in" unless user
  • having rails handle them as a 403 error and serving a 403 error page.
  • config.action_dispatch.rescue_responses["Pundit::NotAuthorizedError"] = :forbidden
  • with I18n to generate error messages
  • retrieve a policy for a record outside the controller or view
  • define a method in your controller called pundit_user
  • Pundit strongly encourages you to model your application in such a way that the only context you need for authorization is a user object and a domain model that you want to check authorization for.
  • Pundit does not allow you to pass additional arguments to policies
  • authorization is dependent on IP address in addition to the authenticated user
  • create a special class which wraps up both user and IP and passes it to the policy.
  • set up a permitted_attributes method in your policy
  • policy(@post).permitted_attributes
  • permitted_attributes(@post)
  • Pundit provides a convenient helper method
  • permit different attributes based on the current action,
  • If you have defined an action-specific method on your policy for the current action, the permitted_attributes helper will call it instead of calling permitted_attributes on your controller
  • If you don't have an instance for the first argument to authorize, then you can pass the class
  • restart the Rails server
  • Given there is a policy without a corresponding model / ruby class, you can retrieve it by passing a symbol
  • after_action :verify_authorized
  • It is not some kind of failsafe mechanism or authorization mechanism.
  • Pundit will work just fine without using verify_authorized and verify_policy_scoped
  •  
    "Minimal authorization through OO design and pure Ruby classes"
張 旭

我必须得告诉大家的MySQL优化原理 - 简书 - 0 views

  • Much of query optimization work is really just following a few principles so that MySQL's optimizer can run in the reasonable way you expect.
  • The MySQL client/server communication protocol is half-duplex: at any moment either the server is sending data to the client or the client is sending data to the server; the two cannot happen at the same time.
  • When a query statement is very long, you need to set the max_allowed_packet parameter.
  • If the query is simply too large, the server will refuse to accept more data and throw an error.
  • The data the server sends back to the user is usually large and is made up of multiple packets.
  • Reducing the size and number of packets exchanged is a very good habit.
  • In queries, avoid SELECT * where possible and add a LIMIT clause.
  • Before parsing a query, if the query cache is enabled, MySQL checks whether the statement hits data already in the query cache.
  • Any character-level difference between two queries (e.g. whitespace or comments) will cause a cache miss.
  • MySQL stores the cache in a reference table.
  • If a query contains any user-defined functions, stored functions, user variables, temporary tables, or system tables in the mysql database, its results will not be cached.
  • MySQL's query cache tracks every table a query touches; if any of those tables changes (data or structure), all cached data related to that table is invalidated.
  • On any write operation, MySQL must invalidate every cache entry for the affected table.
  • The query cache's extra overhead is not limited to writes; reads pay it too.
  • Every query statement must be checked before it starts, even if that SQL statement will never hit the cache.
  • If a query's result can be cached, the result is stored in the cache after execution finishes, which also costs extra system resources.
  • The query cache does not improve performance in every situation; both caching and invalidation add overhead.
  • Replace one large table with several smaller tables.
  • Use batch inserts instead of inserting single rows in a loop.
  • Keep the cache size within reasonable bounds.
  • You can use SQL_CACHE and SQL_NO_CACHE to control whether a particular query should be cached.
  • Do not enable the query cache lightly, especially for write-heavy applications.
  • Set query_cache_type to DEMAND; then only queries marked with SQL_CACHE go through the cache and all others skip it, giving you very fine-grained control over which queries get cached.
  • Preprocessing then further checks, according to MySQL's rules, whether the parse tree is valid, for example checking whether the tables and columns being queried actually exist.
  • A query can be executed in many different ways that all return the same result; the optimizer's job is to find the best execution plan among them.
  • Query the current session's last_query_cost value to see the cost MySQL computed for the current query.
  • It tries to predict the cost of a query under a given execution plan.
  • There are many reasons MySQL may choose the wrong execution plan.
  • MySQL only chooses the plan it believes has the lower cost, but a lower cost does not necessarily mean a shorter execution time.
  • Terminating a query early (for example, with LIMIT the query stops as soon as enough matching rows have been found).
  • To find a column's minimum value, if the column is indexed you only need to look at the leftmost end of the B+Tree index; conversely, the maximum can be found at the rightmost end.
  • After the parsing and optimization stages, MySQL generates an execution plan, and the query execution engine follows the plan's instructions step by step to produce the result.
  • Each table involved in a query is represented by a handler instance.
  • MySQL creates a handler instance for each table as early as the optimization stage, and the optimizer uses these instances' interfaces to obtain information about the tables.
  • Even if a query returns no data, MySQL still returns information about it, such as the number of rows the query affected and the execution time.
  • Returning the result set to the client is an incremental, step-by-step process.
  • Integer operations are cheaper than string operations, so store IP addresses as integers and times as DATETIME rather than as strings.
  • If you plan to create an index on a column, you should declare that column NOT NULL.
  • UNSIGNED disallows negative values and roughly doubles the upper limit for positive ones.
  • There is rarely a strong need for the DECIMAL type; even for financial data you can still use BIGINT.
  • ALTER TABLE on a large table is very time-consuming.
  • MySQL performs most table-structure changes by creating an empty table with the new structure, selecting all data out of the old table into the new one, and then dropping the old table.
  • Indexes are an important way to improve MySQL query performance, but too many indexes can cause excessive disk usage and memory consumption, hurting the application's overall performance.
  • Here "index" means the B-Tree index, currently the most common and effective index for looking up data in relational databases; most storage engines support it.
  • The B in B+Tree stands for balance.
  • A balanced binary tree must first satisfy the definition of a binary search tree, and in addition the heights of any node's two subtrees may not differ by more than 1.
  • Indexes are usually stored on disk in the form of index files.
  • How do we reduce the number of I/O accesses during a lookup?
  • By reducing the tree's depth: turn the binary tree into an m-way tree (a multiway search tree); the B+Tree is exactly such a tree.
  • A page is the logical block in which a computer manages storage; hardware and the OS typically divide main memory and disk storage into contiguous, equally sized blocks.
  • To stay balanced, a B+Tree may need many page splits for newly inserted values, and page splits require I/O; to reduce page splits as much as possible, the B+Tree also provides rotation operations similar to those of a balanced binary tree.
  • In most cases, creating separate indexes on multiple columns does not improve query performance; the reason is simply that MySQL does not know which index would make the query more efficient.
  • When several indexes are combined (multiple OR conditions), merging and sorting the result sets consumes a lot of CPU and memory, and the cost is even higher when some of those indexes have low selectivity and large amounts of data must be merged.
  • If EXPLAIN shows an index merge (Using union in the Extra column), you should take a careful look at whether the query and table structure are really optimal.
  • Index selectivity is the ratio of distinct index values to the total number of rows in the table.
  • A unique index has a selectivity of 1, which is the best possible selectivity and gives the best performance.
  • If an index contains, or in other words covers, the values of every column the query needs, there is no need to go back to the table, and this is called a covering index.
  • Operations that sort the result set.
  • Results produced by scanning in index order are naturally sorted.
  • If the type column in the EXPLAIN output is index, an index scan was used to do the sorting.
  • When designing indexes, the best case is an index that satisfies both the sorting and the query.
  • For example, if an index (A,B) already exists, creating an index on (A) alone is redundant.
  • For very small tables a simple full table scan is more efficient; for medium-to-large tables indexes are very effective; for very large tables the cost of building and maintaining indexes grows accordingly, and other techniques such as partitioned tables may work better.
  • Running EXPLAIN before handing code over for testing is a virtue.
  • If you need to count rows, just use COUNT(*); the intent is clear and the performance is better.
  • Running EXPLAIN does not actually execute the query, so its cost is very low.
  • Make sure the columns in ON and USING clauses are indexed, and take the join order into account when creating indexes.
  • Make sure any expression in GROUP BY or ORDER BY refers to columns from only one table; only then can MySQL possibly use an index to optimize it.
  • MySQL's join execution strategy is very simple: it performs a nested-loop join for every join, fetching rows one at a time from the first table, then looping over the next table to find matching rows, and so on until matching rows are found in every table; the columns the query needs are then returned from those matching rows.
  • The outermost query filters on a column of A, so even if A.c is indexed the join as a whole will not use it; looking at the inner query, an index on B.c clearly would speed things up, so you only need to create an index on the relevant column of the second table in the join order.
  • MySQL handles UNION by first creating a temporary table, inserting the results of each query into it, and then querying that table.
  • Use UNION ALL; without the ALL keyword, MySQL adds a DISTINCT option to the temporary table, forcing a uniqueness check on all of its data, which is very expensive.
  • Avoid stored procedures whenever possible.
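
Two of the checks above (index selectivity, i.e. distinct index values divided by total rows, and the near-free EXPLAIN) are easy to run by hand. A hedged sketch from Python using PyMySQL, where the connection settings, table, and column names are all assumptions:

```python
import pymysql  # assumes PyMySQL is installed and the credentials below exist

conn = pymysql.connect(host="127.0.0.1", user="app", password="secret", database="app")
with conn.cursor() as cur:
    # Index selectivity: the closer to 1, the more useful an index on this column.
    cur.execute("SELECT COUNT(DISTINCT email) / COUNT(*) FROM users")
    print("selectivity(email) =", cur.fetchone()[0])

    # EXPLAIN plans the query without executing it, so it is very cheap to run.
    cur.execute("EXPLAIN SELECT id FROM users WHERE email = %s", ("a@example.com",))
    for row in cur.fetchall():
        print(row)
conn.close()
```
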
張 旭

RolifyCommunity/rolify: Role management library with resource scoping - 0 views

  •  
    "resource"
張 旭

How To Use Bash's Job Control to Manage Foreground and Background Processes | DigitalOcean - 0 views

  • Most processes that you start on a Linux machine will run in the foreground. The command will begin execution, blocking use of the shell for the duration of the process.
  • By default, processes are started in the foreground. Until the program exits or changes state, you will not be able to interact with the shell.
  • stop the process by sending it a signal
  • ...17 more annotations...
  • Linux terminals are usually configured to send the "SIGINT" signal (typically signal number 2) to the current foreground process when the CTRL-C key combination is pressed.
  • Another signal that we can send is the "SIGTSTP" signal (typically signal number 20).
  • A background process is associated with the specific terminal that started it, but does not block access to the shell
  • start a background process by appending an ampersand character ("&") to the end of your commands.
  • type commands at the same time.
  • The [1] represents the command's "job spec" or job number. We can reference this with other job and process control commands, like kill, fg, and bg by preceding the job number with a percentage sign. In this case, we'd reference this job as %1.
  • Once the process is stopped, we can use the bg command to start it again in the background
  • By default, the bg command operates on the most recently stopped process.
  • Whether a process is in the background or in the foreground, it is rather tightly tied with the terminal instance that started it
  • When a terminal closes, it typically sends a SIGHUP signal to all of the processes (foreground, background, or stopped) that are tied to the terminal.
  • a terminal multiplexer
  • start it using the nohup command
  • appending output to ‘nohup.out’
  • pgrep -a
  • The disown command, in its default configuration, removes a job from the jobs queue of a terminal.
  • You can pass the -h flag to the disown process instead in order to mark the process to ignore SIGHUP signals, but to otherwise continue on as a regular job
  • The huponexit shell option controls whether bash will send its child processes the SIGHUP signal when it exits.
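
The signal behaviour described above (SIGINT on Ctrl-C; SIGHUP when the terminal closes, which nohup and disown -h survive by having the process ignore it) can be watched from a small script. Here is a sketch in Python for any Unix-like system:

```python
import signal
import sys
import time

def on_sigint(signum, frame):
    print("got SIGINT (Ctrl-C); exiting cleanly", flush=True)
    sys.exit(0)

def on_sighup(signum, frame):
    # nohup and `disown -h` achieve a similar effect by making the process ignore SIGHUP
    print("got SIGHUP (terminal closed); ignoring and carrying on", flush=True)

signal.signal(signal.SIGINT, on_sigint)
signal.signal(signal.SIGHUP, on_sighup)

while True:
    print("working...", flush=True)   # flush so output still appears when redirected to nohup.out
    time.sleep(5)
```
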