Larvata: group items tagged "queue"

張 旭

Queues - Laravel - The PHP Framework For Web Artisans - 0 views

  • Laravel queues provide a unified API across a variety of different queue backends, such as Beanstalk, Amazon SQS, Redis, or even a relational database.
  • The queue configuration file is stored in config/queue.php
  • a synchronous driver that will execute jobs immediately (for local use)
  • A null queue driver is also included which discards queued jobs.
  • In your config/queue.php configuration file, there is a connections configuration option.
  • any given queue connection may have multiple "queues" which may be thought of as different stacks or piles of queued jobs.
  • each connection configuration example in the queue configuration file contains a queue attribute.
  • if you dispatch a job without explicitly defining which queue it should be dispatched to, the job will be placed on the queue that is defined in the queue attribute of the connection configuration
  • pushing jobs to multiple queues can be especially useful for applications that wish to prioritize or segment how jobs are processed
  • specify which queues it should process by priority.
  • If your Redis queue connection uses a Redis Cluster, your queue names must contain a key hash tag.
  • ensure all of the Redis keys for a given queue are placed into the same hash slot
  • all of the queueable jobs for your application are stored in the app/Jobs directory.
  • Job classes are very simple, normally containing only a handle method which is called when the job is processed by the queue.
  • we were able to pass an Eloquent model directly into the queued job's constructor. Because of the SerializesModels trait that the job is using, Eloquent models will be gracefully serialized and unserialized when the job is processing.
  • When the job is actually handled, the queue system will automatically re-retrieve the full model instance from the database.
  • The handle method is called when the job is processed by the queue
  • The arguments passed to the dispatch method will be given to the job's constructor
  • To delay the execution of a queued job, you may use the delay method when dispatching it (see the job sketch after this list).
  • dispatch a job immediately (synchronously), you may use the dispatchNow method.
  • When using this method, the job will not be queued and will be run immediately within the current process
  • specify a list of queued jobs that should be run in sequence.
  • Deleting jobs using the $this->delete() method will not prevent chained jobs from being processed. The chain will only stop executing if a job in the chain fails.
  • this does not push jobs to different queue "connections" as defined by your queue configuration file, but only to specific queues within a single connection.
  • To specify the queue, use the onQueue method when dispatching the job
  • To specify the connection, use the onConnection method when dispatching the job
  • defining the maximum number of attempts on the job class itself.
  • As an alternative to defining how many times a job may be attempted before it fails, you may define a time at which the job should timeout.
  • using the funnel method, you may limit jobs of a given type to only be processed by one worker at a time
  • using the throttle method, you may throttle a given type of job to only run 10 times every 60 seconds.
  • If an exception is thrown while the job is being processed, the job will automatically be released back onto the queue so it may be attempted again.
  • dispatch a Closure. This is great for quick, simple tasks that need to be executed outside of the current request cycle
  • When dispatching Closures to the queue, the Closure's code content is cryptographically signed so it cannot be modified in transit.
  • Laravel includes a queue worker that will process new jobs as they are pushed onto the queue.
  • once the queue:work command has started, it will continue to run until it is manually stopped or you close your terminal
  • queue workers are long-lived processes and store the booted application state in memory.
  • they will not notice changes in your code base after they have been started.
  • during your deployment process, be sure to restart your queue workers.
  • customize your queue worker even further by only processing particular queues for a given connection
  • The --once option may be used to instruct the worker to only process a single job from the queue
  • The --stop-when-empty option may be used to instruct the worker to process all jobs and then exit gracefully.
  • Daemon queue workers do not "reboot" the framework before processing each job.
  • you should free any heavy resources after each job completes.
  • Since queue workers are long-lived processes, they will not pick up changes to your code without being restarted.
  • restart the workers during your deployment process.
  • php artisan queue:restart
  • The queue uses the cache to store restart signals
  • Since the queue workers will die when the queue:restart command is executed, you should be running a process manager such as Supervisor to automatically restart the queue workers.
  • each queue connection defines a retry_after option. This option specifies how many seconds the queue connection should wait before retrying a job that is being processed.
  • The --timeout option specifies how long the Laravel queue master process will wait before killing off a child queue worker that is processing a job.
  • When jobs are available on the queue, the worker will keep processing jobs with no delay in between them.
  • While sleeping, the worker will not process any new jobs - the jobs will be processed after the worker wakes up again
  • the numprocs directive will instruct Supervisor to run 8 queue:work processes and monitor all of them, automatically restarting them if they fail.
  • Laravel includes a convenient way to specify the maximum number of times a job should be attempted.
  • define a failed method directly on your job class, allowing you to perform job specific clean-up when a failure occurs.
  • a great opportunity to notify your team via email or Slack.
  • php artisan queue:retry all
  • php artisan queue:flush
  • When injecting an Eloquent model into a job, it is automatically serialized before being placed on the queue and restored when the job is processed
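
The bullets above mention job classes with a handle method, the SerializesModels trait, attempt/timeout limits, the failed hook, and the delay / onQueue / onConnection dispatch options. The following is a minimal sketch of how those pieces fit together, assuming Laravel 5.x-style conventions; the ProcessPodcast / Podcast names and the 'redis' / 'emails' values are illustrative placeholders, not taken from the annotations above.

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

// Hypothetical job class; lives in app/Jobs as described above.
class ProcessPodcast implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $tries = 5;        // maximum number of attempts
    public $timeout = 120;    // seconds before the job should time out

    public $podcast;

    // The Eloquent model is serialized by SerializesModels and
    // re-retrieved from the database when the job is handled.
    public function __construct(\App\Podcast $podcast)
    {
        $this->podcast = $podcast;
    }

    // Called when the job is processed by the queue worker.
    public function handle()
    {
        // ... process the podcast ...
    }

    // Job-specific clean-up when the job ultimately fails;
    // a good place to notify the team via email or Slack.
    public function failed(\Exception $exception)
    {
        // ...
    }
}

// Dispatching (e.g. from a controller):
// ProcessPodcast::dispatch($podcast)
//     ->onConnection('redis')          // a connection from config/queue.php
//     ->onQueue('emails')              // a queue within that connection
//     ->delay(now()->addMinutes(10));  // delay execution
```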
張 旭

Introduction To The Queue System - Diving Laravel - 0 views

  • Laravel is shipped with a built-in queue system that helps you run tasks in the background
  • The QueueManager is registered into the container and it knows how to connect to the different built-in queue drivers
  • for example when we called the Queue::push() method, what happened is that the manager selected the desired queue driver, connected to it, and called the push method on that driver.
  • All calls to methods that don't exist in the QueueManager class will be sent to the loaded driver
  • when you do Queue::push() you're actually calling the push method on the queue driver you're using (see the sketch below)
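
The forwarding described above is essentially PHP's magic __call: the manager resolves the configured driver and passes any unknown method call to it. Below is a rough, simplified sketch of that idea only; it is not Laravel's actual QueueManager code, and the class and config keys are illustrative.

```php
<?php

// Simplified illustration of a manager forwarding calls to a driver.
class SimpleQueueManager
{
    private array $connections = [];

    public function __construct(private array $config) {}

    // Resolve (and cache) a driver connection by name.
    public function connection(?string $name = null): object
    {
        $name = $name ?? $this->config['default'];

        return $this->connections[$name]
            ??= $this->resolve($this->config['connections'][$name]);
    }

    private function resolve(array $connectionConfig): object
    {
        // A real manager would build a RedisQueue, SqsQueue, DatabaseQueue, ...
        // from $connectionConfig; here a stub driver stands in for them.
        return new class {
            public function push(string $job, array $data = []): void
            {
                // push the job payload onto the backing store
            }
        };
    }

    // Any method that doesn't exist here (push, later, pop, ...) is
    // forwarded to the driver of the default connection.
    public function __call(string $method, array $parameters)
    {
        return $this->connection()->{$method}(...$parameters);
    }
}

// So a facade-style Queue::push(...) ends up as:
// $manager->push(...) -> __call('push', ...) -> $driver->push(...)
```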
crazylion lee

Homepage | Celery: Distributed Task Queue - 0 views

  •  
    "Celery is an asynchronous task queue/job queue based on distributed message passing. It is focused on real-time operation, but supports scheduling as well. The execution units, called tasks, are executed concurrently on a single or more worker servers using multiprocessing, Eventlet, or gevent. Tasks can execute asynchronously (in the background) or synchronously (wait until ready)."
張 旭

Queue Workers: How they work - Diving Laravel - 0 views

  • define workers as a simple PHP process that runs in the background with the purpose of extracting jobs from a storage space and running them with respect to several configuration options.
  • have to manually restart the worker to reflect any code change you made in your application.
  • avoiding booting up the whole app on every job
  • queue:work instructs Laravel to create an instance of your application and start executing jobs; this instance stays alive indefinitely, which means your Laravel application is booted only once when the command is run, and the same instance is used to execute your jobs (see the worker-loop sketch after this list)
  • This will start an instance of the application, process a single job, and then kill the script.
  • Using queue:listen ensures that a new instance of the app is created for every job, which means you don't have to manually restart the worker when you change your code, but it also means more server resources will be consumed.
  • the queue:listen command runs the WorkCommand inside a loop
  • The connection this worker will be pulling jobs from
  • The queue the worker will use to find jobs
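
One hedged way to picture the difference described above: queue:work boots the application once and loops over jobs in the same process, while queue:listen effectively re-runs the work command for every job so each job sees a freshly booted app. The sketch below only mirrors that behaviour; it is not Laravel's real WorkCommand, and the callables are stand-ins.

```php
<?php

// queue:work (daemon style): boot the application once, then loop forever.
function runWorkerDaemon(callable $bootApplication, callable $popJob): void
{
    $app = $bootApplication();      // happens exactly once

    while (true) {
        $job = $popJob($app);       // pull the next job from the connection/queue
        if ($job !== null) {
            $job->handle();         // booted application state is reused between jobs
        } else {
            sleep(3);               // nothing to do: sleep, then poll again
        }
    }
}

// queue:listen style: process one job per freshly booted application
// (slower, but it picks up code changes without a manual restart).
function runListener(callable $runWorkCommandOnce): void
{
    while (true) {
        $runWorkCommandOnce();      // boots the app, processes one job, exits
    }
}
```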
張 旭

MongoDB Performance - MongoDB Manual - 0 views

  • MongoDB uses a locking system to ensure data set consistency. If certain operations are long-running or a queue forms, performance will degrade as requests and operations wait for the lock.
  • performance limitations as a result of inadequate or inappropriate indexing strategies, or as a consequence of poor schema design patterns.
  • performance issues may be temporary and related to abnormal traffic load.
  • Lock-related slowdowns can be intermittent.
  • If globalLock.currentQueue.total is consistently high, then there is a chance that a large number of requests are waiting for a lock (see the check sketch after this list).
  • If globalLock.totalTime is high relative to uptime, the database has existed in a lock state for a significant amount of time.
  • For write-heavy applications, deploy sharding and add one or more shards to a sharded cluster to distribute load among mongod instances.
  • Unless constrained by system-wide limits, the maximum number of incoming connections supported by MongoDB is configured with the maxIncomingConnections setting.
  • When logLevel is set to 0, MongoDB records slow operations to the diagnostic log at a rate determined by slowOpSampleRate.
  • At higher logLevel settings, all operations appear in the diagnostic log regardless of their latency with the following exception
  • Full Time Diagnostic Data Collection (FTDC) mechanism. FTDC data files are compressed, are not human-readable, and inherit the same file access permissions as the MongoDB data files.
  • mongod processes store FTDC data files in a diagnostic.data directory under the instance's storage.dbPath.
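
A hedged sketch of reading the lock-queue metrics mentioned above from PHP, assuming the mongodb extension (ext-mongodb) is installed; the connection URI is a placeholder and the thresholds are left to the reader.

```php
<?php

// Requires the mongodb PHP extension. The URI is a placeholder.
$manager = new MongoDB\Driver\Manager('mongodb://127.0.0.1:27017');

// serverStatus exposes globalLock metrics, including currentQueue.total.
$cursor = $manager->executeCommand(
    'admin',
    new MongoDB\Driver\Command(['serverStatus' => 1])
);

$status = current($cursor->toArray());

$queued   = $status->globalLock->currentQueue->total ?? 0;
$lockTime = $status->globalLock->totalTime ?? 0;   // microseconds
$uptime   = $status->uptime ?? 0;                  // seconds

// A consistently high currentQueue.total suggests many requests are waiting
// on locks; compare globalLock.totalTime against uptime as described above.
printf("queued: %d, lock time: %d us, uptime: %d s\n", $queued, $lockTime, $uptime);
```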
張 旭

Chapter 06 - Introduction to Computer Science - Operating System Overview - 0 views

  • Programmers designed their computational software directly against the computer hardware; systems at the time had no concept of an "operating system", because the application and the operating system were designed together.
  • Because the computer has storage devices (whether hard disk or memory), the hardware runs a monitor program: users load their programs into the system in advance, the system stores each program in a queue, and when a program's turn comes it is read in and run by the CPU; once it finishes and its output goes to the printer, the job is discarded and the next program in the queue is read in, executing them one after another.
  • Separating the CPU from I/O
  • Using punch cards and a card reader, the code was read into the mainframe in one pass; you then waited for the machine to run, and the results were printed out. What if a card was punched incorrectly? You had to re-punch it and queue up again to run the program.
  • Allowing two or more processes to wait in memory for the CPU: once the CPU finishes one program, the second can be executed immediately, so performance is better.
  • A process that enters the interrupted (waiting) state is ignored by the CPU.
  • CPU scheduling
  • On early single-core CPUs, the CPU could run only one task at a time, so if several tasks had to run concurrently the CPU had to allocate a slice of running time to each; when a process reached its maximum time slice, the CPU put it back in the queue and let the next process run.
  • It feels as if the CPU runs all processes at once, but it does not: the CPU simply switches between the processes.
  • A time-sharing system is similar to a multiprogramming system, except that jobs are entered through terminals; the CPU switches between the users' operations, so each user feels as if they are operating the computer at the same time. That is time-sharing.
  • For early programmers, writing a program was hard work: you had to understand the hardware, choose a programming language based on that hardware, and then implement the computation, memory reads and writes, disk and display I/O, file access, and so on. Hardware, software, and I/O behaviour all had to be handled in your own code in one go.
  • After development of the unix system began in 1971, most later systems adopted unix concepts.
  • Hardware management is handed over to one body of code, and that code also provides a development interface.
  • Software engineers only need to follow the development interface defined by that code; once their software is finished, it can run on top of it.
  • Program execution
  • The operating system must place the programs handed over by users into memory and then, through CPU scheduling, continuously interleave them to complete each task.
  • The CPU interrupt mechanism
  • The CPU has many interrupt channels to peripheral hardware; when it receives an interrupt signal, it places the current process in a waiting state, lets the hardware finish the related task on its own, and then takes over the system again.
  • The memory management module
  • In the old environment, programmers had to estimate how much memory their program would use and assign the memory addresses themselves.
  • The system automatically detects and manages main-memory usage, preventing two processes from using the same memory address at the same time and corrupting each other's work.
  • The operating system kernel is also in memory, so this subsystem places the kernel in a protected memory segment that ordinary users cannot access directly.
  • Virtual memory
  • Data in main memory is not contiguous; like a disk, after repeated reads, deletes, and writes, the memory segments end up non-contiguous.
  • As long as the CPU reads from virtual memory, the memory management module fetches the data for it.
  • A process's data appears contiguous (the left side of the original figure), but it actually maps to main memory or other locations.
  • CPU scheduling
  • One of the key indicators of how good an operating system is! How to let the CPU finish all work as quickly as possible while multitasking; the algorithms here are an area where the major operating systems keep improving.
  • Disk access and file systems
  • The operating system must drive the disk (whether a traditional hard disk or an SSD), understand the file-system format on that disk, and then process data through the file-system subsystem.
  • Device drivers
  • The operating system must be able to accept drivers for hardware devices, so hardware manufacturers can release drivers (driver / modules) for the different operating systems; the OS simply loads the driver and can start using the hardware without being recompiled.
  • Network subsystem
  • User interface
  • The main principle behind modern CPU design: integrate multiple CPU cores into one CPU package (a single physical CPU), i.e. the multi-core direction of CPU manufacturing.
  • For a single-threaded program, a multi-core CPU will not necessarily run faster than a single-core one! Since only one process is running, a higher clock speed means the program finishes sooner.
  • Software splits a single job into several smaller tasks and hands them to different cores, so each core handles only a small piece of work; the clock speed does not need to be high, and as long as there are enough cores, the performance gain is obvious.
  • Since the CPU is controlled by the operating system, to make use of multi-core hardware both your operating system and your applications must be designed to support multiple cores!
  • So-called parallel processing lets a job be split into several parts that are handed to different CPUs to compute; a monitoring program then collects and consolidates the individual results within a certain time, subdivides and dispatches new small tasks again, and repeats this until the program finishes.
  • For Linux, most builds can support up to 4096 CPU cores.
  • Commercial banking mainframe Unix systems
張 旭

mqtt - 0 views

  • MQTT is a lightweight publish/subscribe messaging protocol. It is useful for low-power sensors
  • The MQTT protocol is based on the principle of publishing messages and subscribing to topics, or "pub/sub".
  • Multiple clients connect to a broker and subscribe to topics that they are interested in
  • Many clients may subscribe to the same topics
  • The broker and MQTT act as a simple, common interface for everything to connect to
  • Messages in MQTT are published on topics
  • there is no need to configure a topic; publishing on it is enough
  • Topics are treated as a hierarchy, using a slash (/) as a separator.
  • Clients can receive messages by creating subscriptions
  • A subscription may be to an explicit topic
  • Two wildcards are available, + or #.
  • # can be used as a wildcard for all remaining levels of hierarchy
  • + can be used as a wildcard for a single level of hierarchy (see the matching sketch after this list)
  • Zero length topic levels are valid, which can lead to some slightly non-obvious behaviour.
  • The QoS defines how hard the broker/client will try to ensure that a message is received.
  • Messages may be sent at any QoS level, and clients may attempt to subscribe to topics at any QoS level
  • the client chooses the maximum QoS it will receive
  • if a client is subscribed with QoS 2 and a message is published on QoS 0, the client will receive it on QoS 0.
  • 1: The broker/client will deliver the message at least once, with confirmation required.
  • All messages may be set to be retained.
  • the broker will keep the message even after sending it to all current subscribers
  • useful as a "last known good" mechanism
  • If clean session is set to false, then the connection is treated as durable
  • when the client disconnects, any subscriptions it has will remain and any subsequent QoS 1 or 2 messages will be stored until it connects again in the future
  • If clean session is true, then all subscriptions will be removed for the client when it disconnects
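
To make the + / # wildcard rules above concrete, here is a small standalone matcher. It is an illustrative sketch of the topic-matching rules only, not taken from any broker's code, and it ignores some spec edge cases (such as '#' also matching the parent level and the zero-length-level behaviour noted above).

```php
<?php

// Illustrative MQTT topic matching: '+' matches exactly one level,
// '#' matches all remaining levels.
function topicMatches(string $filter, string $topic): bool
{
    $filterLevels = explode('/', $filter);
    $topicLevels  = explode('/', $topic);

    foreach ($filterLevels as $i => $level) {
        if ($level === '#') {
            return true;                       // matches everything from here down
        }
        if (!isset($topicLevels[$i])) {
            return false;                      // topic is shorter than the filter
        }
        if ($level !== '+' && $level !== $topicLevels[$i]) {
            return false;                      // literal level mismatch
        }
    }

    // All filter levels consumed: the topic must not have extra levels left.
    return count($topicLevels) === count($filterLevels);
}

var_dump(topicMatches('sensors/+/temperature', 'sensors/livingroom/temperature')); // true
var_dump(topicMatches('sensors/#', 'sensors/livingroom/humidity'));                // true
var_dump(topicMatches('sensors/+', 'sensors/livingroom/humidity'));                // false
```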
張 旭

The Twelve-Factor App - 0 views

  • they can be started or stopped at a moment’s notice.
  • Processes should strive to minimize startup time
  • Processes shut down gracefully when they receive a SIGTERM signal from the process manager (see the sketch after this list).
  • returning the current job to the work queue
  • all jobs are reentrant, which typically is achieved by wrapping the results in a transaction, or making the operation idempotent
  • Processes should also be robust against sudden death, in the case of a failure in the underlying hardware.
  • a twelve-factor app is architected to handle unexpected, non-graceful terminations
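
A minimal sketch of the graceful-SIGTERM behaviour described above for a PHP worker process, assuming the pcntl extension is available; popNextJob and returnToQueue are hypothetical helpers standing in for your queue backend.

```php
<?php

// Minimal disposability sketch: stop cleanly on SIGTERM instead of dying mid-job.
pcntl_async_signals(true);

$shouldStop = false;

pcntl_signal(SIGTERM, function () use (&$shouldStop) {
    // Don't exit mid-job; just remember that we were asked to stop.
    $shouldStop = true;
});

while (!$shouldStop) {
    $job = popNextJob();          // hypothetical: pull a job from the work queue

    if ($job === null) {
        sleep(1);
        continue;
    }

    try {
        $job->handle();           // jobs should be reentrant / idempotent
    } catch (Throwable $e) {
        returnToQueue($job);      // hypothetical: release the job for retry
    }
}

// SIGTERM received: the loop exits after the in-flight job has finished.
exit(0);
```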
張 旭

The Twelve-Factor App - 0 views

  • Keep development, staging, and production as similar as possible
  • Developers write code, ops engineers deploy it.
  • The twelve-factor app is designed for continuous deployment by keeping the gap between development and production small
  • Backing services, such as the app’s database, queueing system, or cache, are one area where dev/prod parity is important
  • The twelve-factor developer resists the urge to use different backing services between development and production, even when adapters theoretically abstract away any differences in backing services.
  • declarative provisioning tools such as Chef and Puppet combined with light-weight virtual environments such as Docker and Vagrant allow developers to run local environments which closely approximate production environments.
  • all deploys of the app (developer environments, staging, production) should be using the same type and version of each of the backing services.
  •  
    "as similar as possible "
張 旭

How To Use Bash's Job Control to Manage Foreground and Background Processes | DigitalOcean - 0 views

  • Most processes that you start on a Linux machine will run in the foreground. The command will begin execution, blocking use of the shell for the duration of the process.
  • By default, processes are started in the foreground. Until the program exits or changes state, you will not be able to interact with the shell.
  • stop the process by sending it a signal
  • Linux terminals are usually configured to send the "SIGINT" signal (typically signal number 2) to the current foreground process when the CTRL-C key combination is pressed.
  • Another signal that we can send is the "SIGTSTP" signal (typically signal number 20).
  • A background process is associated with the specific terminal that started it, but does not block access to the shell
  • start a background process by appending an ampersand character ("&") to the end of your commands.
  • type commands at the same time.
  • The [1] represents the command's "job spec" or job number. We can reference this with other job and process control commands, like kill, fg, and bg by preceding the job number with a percentage sign. In this case, we'd reference this job as %1.
  • Once the process is stopped, we can use the bg command to start it again in the background
  • By default, the bg command operates on the most recently stopped process.
  • Whether a process is in the background or in the foreground, it is rather tightly tied with the terminal instance that started it
  • When a terminal closes, it typically sends a SIGHUP signal to all of the processes (foreground, background, or stopped) that are tied to the terminal.
  • a terminal multiplexer
  • start it using the nohup command
  • appending output to ‘nohup.out’
  • pgrep -a
  • The disown command, in its default configuration, removes a job from the jobs queue of a terminal.
  • You can pass the -h flag to the disown process instead in order to mark the process to ignore SIGHUP signals, but to otherwise continue on as a regular job
  • The huponexit shell option controls whether bash will send its child processes the SIGHUP signal when it exits.
張 旭

Logstash Alternatives: Pros & Cons of 5 Log Shippers [2019] - Sematext - 0 views

  • In this case, Elasticsearch. And because Elasticsearch can be down or struggling, or the network can be down, the shipper would ideally be able to buffer and retry
  • Logstash is typically used for collecting, parsing, and storing logs for future use as part of log management.
  • Logstash’s biggest con or “Achilles’ heel” has always been performance and resource consumption (the default heap size is 1GB).
  • This can be a problem for high traffic deployments, when Logstash servers would need to be comparable with the Elasticsearch ones.
  • Filebeat was made to be that lightweight log shipper that pushes to Logstash or Elasticsearch.
  • The main differences between Logstash and Filebeat are that Logstash has more functionality, while Filebeat uses fewer resources.
  • Filebeat is just a tiny binary with no dependencies.
  • For example, how aggressive it should be in searching for new files to tail and when to close file handles when a file didn’t get changes for a while.
  • For example, the apache module will point Filebeat to default access.log and error.log paths
  • Filebeat’s scope is very limited,
  • Initially it could only send logs to Logstash and Elasticsearch, but now it can send to Kafka and Redis, and in 5.x it also gains filtering capabilities.
  • Filebeat can parse JSON
  • you can push directly from Filebeat to Elasticsearch, and have Elasticsearch do both parsing and storing.
  • You shouldn’t need a buffer when tailing files because, just as Logstash, Filebeat remembers where it left off
  • For larger deployments, you’d typically use Kafka as a queue instead, because Filebeat can talk to Kafka as well
  • The default syslog daemon on most Linux distros, rsyslog can do so much more than just picking logs from the syslog socket and writing to /var/log/messages.
  • It can tail files, parse them, buffer (on disk and in memory) and ship to a number of destinations, including Elasticsearch.
  • rsyslog is the fastest shipper
  • Its grammar-based parsing module (mmnormalize) works at constant speed no matter the number of rules (we tested this claim).
  • use it as a simple router/shipper, any decent machine will be limited by network bandwidth
  • It’s also one of the lightest parsers you can find, depending on the configured memory buffers.
  • rsyslog requires more work to get the configuration right
  • the main difference between Logstash and rsyslog is that Logstash is easier to use while rsyslog is lighter.
  • rsyslog fits well in scenarios where you either need something very light yet capable (an appliance, a small VM, collecting syslog from within a Docker container).
  • rsyslog also works well when you need that ultimate performance.
  • syslog-ng as an alternative to rsyslog (though historically it was actually the other way around).
  • a modular syslog daemon, that can do much more than just syslog
  • Unlike rsyslog, it features a clear, consistent configuration format and has nice documentation.
  • Similarly to rsyslog, you’d probably want to deploy syslog-ng on boxes where resources are tight, yet you do want to perform potentially complex processing.
  • syslog-ng has an easier, more polished feel than rsyslog, but likely not that ultimate performance
  • Fluentd was built on the idea of logging in JSON wherever possible (which is a practice we totally agree with) so that log shippers down the line don’t have to guess which substring is which field of which type.
  • Fluentd plugins are in Ruby and very easy to write.
  • While you can send structured data through Fluentd, it's not made to have the flexibility of other shippers on this list (Filebeat excluded).
  • Fluent Bit, which is to Fluentd what Filebeat is to Logstash.
  • Fluentd is a good fit when you have diverse or exotic sources and destinations for your logs, because of the number of plugins.
  • Splunk isn’t a log shipper, it’s a commercial logging solution
  • Graylog is another complete logging solution, an open-source alternative to Splunk.
  • everything goes through graylog-server, from authentication to queries.
  • Graylog is nice because you have a complete logging solution, but it’s going to be harder to customize than an ELK stack.
  • it depends