Larvata: Group items tagged streaming

張 旭

Logging Architecture | Kubernetes - 0 views

  • Application logs can help you understand what is happening inside your application
  • container engines are designed to support logging.
  • The easiest and most adopted logging method for containerized applications is writing to standard output and standard error streams.
  • In a cluster, logs should have a separate storage and lifecycle independent of nodes, pods, or containers. This concept is called cluster-level logging.
  • Cluster-level logging architectures require a separate backend to store, analyze, and query logs
  • Kubernetes does not provide a native storage solution for log data.
  • use kubectl logs --previous to retrieve logs from a previous instantiation of a container.
  • A container engine handles and redirects any output generated to a containerized application's stdout and stderr streams
  • The Docker JSON logging driver treats each line as a separate message.
  • By default, if a container restarts, the kubelet keeps one terminated container with its logs.
  • An important consideration in node-level logging is implementing log rotation, so that logs don't consume all available storage on the node
  • You can also set up a container runtime to rotate an application's logs automatically.
  • The two kubelet flags container-log-max-size and container-log-max-files can be used to configure the maximum size for each log file and the maximum number of files allowed for each container respectively.
  • The kubelet and container runtime do not run in containers.
  • On machines with systemd, the kubelet and container runtime write to journald. If systemd is not present, the kubelet and container runtime write to .log files in the /var/log directory.
  • System components inside containers always write to the /var/log directory, bypassing the default logging mechanism.
  • Kubernetes does not provide a native solution for cluster-level logging
  • Use a node-level logging agent that runs on every node.
  • implement cluster-level logging by including a node-level logging agent on each node.
  • the logging agent is a container that has access to a directory with log files from all of the application containers on that node.
  • Because the logging agent must run on every node, it is recommended to run the agent as a DaemonSet
  • Node-level logging creates only one agent per node and doesn't require any changes to the applications running on the node.
  • Containers write to stdout and stderr, but with no agreed format. A node-level agent collects these logs and forwards them for aggregation.
  • Each sidecar container prints a log to its own stdout or stderr stream (a minimal sketch of this streaming-sidecar pattern follows this list).
  • It is not recommended to write log entries with different formats to the same log stream
  • writing logs to a file and then streaming them to stdout can double disk usage.
  • If you have an application that writes to a single file, it's recommended to set /dev/stdout as the destination
  • it's recommended to use stdout and stderr directly and leave rotation and retention policies to the kubelet.
  • Using a logging agent in a sidecar container can lead to significant resource consumption. Moreover, you won't be able to access those logs using kubectl logs because they are not controlled by the kubelet.
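
The streaming-sidecar pattern mentioned in the notes above boils down to a very small program: follow the application's log file and re-emit each line on the sidecar's own stdout, where the kubelet and container runtime pick it up. The sketch below is only an illustration of that idea, not code from the Kubernetes documentation; the log path /var/log/app/app.log is a hypothetical example.

```python
import sys
import time

def follow(path):
    """Yield lines as they are appended to the file, similar to `tail -f`."""
    with open(path, "r") as f:
        f.seek(0, 2)                 # start at the current end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)      # no new data yet, wait and retry
                continue
            yield line

for line in follow("/var/log/app/app.log"):
    sys.stdout.write(line)           # re-emit on the sidecar's own stdout
    sys.stdout.flush()               # keep the stream effectively unbuffered
```
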
張 旭

The Twelve-Factor App - 0 views

  • Logs are the stream of aggregated, time-ordered events collected from the output streams of all running processes and backing services.
  • Logs have no fixed beginning or end, but flow continuously as long as the app is operating.
  • each running process writes its event stream, unbuffered, to stdout.
  • long-term archival. These archival destinations are not visible to or configurable by the app, and instead are completely managed by the execution environment.
  • Most significantly, the stream can be sent to a log indexing and analysis system such as Splunk, or a general-purpose data warehousing system such as Hadoop/Hive.
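
The note above about writing the event stream, unbuffered, to stdout is the crux of the twelve-factor treatment of logs: the process itself never manages log files, routing, or archival. A minimal sketch, assuming an illustrative logger name and message:

```python
import logging
import sys

# Everything goes to stdout; the execution environment decides where the
# stream ends up (terminal, log router, archival, indexing system, ...).
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

log = logging.getLogger("payments")      # illustrative logger name
log.addHandler(handler)
log.setLevel(logging.INFO)

# StreamHandler flushes after every record, so the stream stays unbuffered.
log.info("order accepted")
```
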
crazylion lee

phanan/koel: A personal music streaming server that works. - 0 views

  • "A personal music streaming server that works"
crazylion lee

amznlabs/ion-java: Java streaming parser/serializer for Ion. - 0 views

  • "Java streaming parser/serializer for Ion."
crazylion lee

Instant.io - Streaming file transfer over WebTorrent - 0 views

shared by crazylion lee on 27 Sep 16
  • "Streaming file transfer over WebTorrent (torrents on the web)"
張 旭

kube-proxy | Kubernetes - 0 views

  • The Kubernetes network proxy runs on each node. This reflects services as defined in the Kubernetes API on each node and can do simple TCP, UDP, and SCTP stream forwarding or round robin TCP, UDP, and SCTP forwarding across a set of backends.
  • Service cluster IPs and ports are currently found through Docker-links-compatible environment variables specifying ports opened by the service proxy.
crazylion lee

Java Streams, Part 4: From concurrent to parallel - 0 views

  • "Understanding the factors influencing parallel performance"
crazylion lee

Riemann - A network monitoring system - 0 views

  • "Riemann aggregates events from your servers and applications with a powerful stream processing language. Send an email for every exception in your app. Track the latency distribution of your web app. See the top processes on any host, by memory and CPU. Combine statistics from every Riak node in your cluster and forward to Graphite. Track user activity from second to second."
crazylion lee

Libav - 0 views

shared by crazylion lee on 10 Feb 16
  • "Libav provides cross-platform tools and libraries to convert, manipulate and stream a wide range of multimedia formats and protocols."
crazylion lee

dularion/streama: It's like Netflix, but self-hosted! http://dularion.github.io/streama/ - 0 views

  • "It's like Netflix, but self-hosted! http://dularion.github.io/streama/"
crazylion lee

Software Library: Amiga : Free Texts : Download & Streaming : Internet Archive - 0 views

  • "Software Library: Amiga"
張 旭

The Twelve-Factor App - 0 views

  • software is commonly delivered as a service: called web apps, or software-as-a-service.
  • Use declarative formats for setup automation
  • offering maximum portability between execution environments
  • obviating the need for servers and systems administration
  • Minimize divergence between development and production
  • scale up without significant changes to tooling, architecture, or development practices
  • Ops engineers who deploy or manage such applications.
  • developer building applications which run as a service
  • One codebase
  • many deploys
  • in the environment
  • services as attached resources
  • Explicitly declare
  • separate build and run stages
  • stateless processes
  • Export services via port binding (see the sketch after this list)
  • Scale out
  • fast startup and graceful shutdown
  • as similar as possible
  • logs as event streams
  • admin/management tasks as one-off processes
張 旭

Replication - Redis - 0 views

  • leader follower (master-slave) replication
  • slave Redis instances to be exact copies of master instances.
  • The slave will automatically reconnect to the master every time the link breaks, and will attempt to be an exact copy of it regardless of what happens to the master.
  • the master keeps the slave updated by sending a stream of commands to the slave
  • When a partial resynchronization is not possible, the slave will ask for a full resynchronization.
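
A minimal sketch of configuring this from a client, assuming redis-py and illustrative host names: the REPLICAOF command (SLAVEOF on servers older than Redis 5) points a replica at a master, after which the master streams commands to it as described in the notes above.

```python
import redis

replica = redis.Redis(host="replica.internal", port=6379)   # hypothetical host

# Make this instance an exact copy of the given master.
replica.execute_command("REPLICAOF", "master.internal", "6379")

# The replica reconnects and resynchronizes on its own; INFO shows the state.
info = replica.info("replication")
print(info.get("role"), info.get("master_link_status"))
```
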
張 旭

How Percona XtraBackup Works - 0 views

  • Percona XtraBackup is based on InnoDB's crash-recovery functionality.
  • it performs crash recovery on the files to make them a consistent, usable database again
  • InnoDB maintains a redo log, also called the transaction log. This contains a record of every change to InnoDB data.
  • When InnoDB starts, it inspects the data files and the transaction log, and performs two steps. It applies committed transaction log entries to the data files, and it performs an undo operation on any transactions that modified data but did not commit.
  • Percona XtraBackup works by remembering the log sequence number (LSN) when it starts, and then copying away the data files.
  • Percona XtraBackup runs a background process that watches the transaction log files, and copies changes from it.
  • Percona XtraBackup needs to do this continually
  • Percona XtraBackup needs the transaction log records for every change to the data files since it began execution.
  • Percona XtraBackup uses Backup locks where available as a lightweight alternative to FLUSH TABLES WITH READ LOCK.
  • Locking is only done for MyISAM and other non-InnoDB tables after Percona XtraBackup finishes backing up all InnoDB/XtraDB data and logs.
  • xtrabackup tries to avoid backup locks and FLUSH TABLES WITH READ LOCK when the instance contains only InnoDB tables. In this case, xtrabackup obtains binary log coordinates from performance_schema.log_status
  • When backup locks are supported by the server, xtrabackup first copies InnoDB data, runs the LOCK TABLES FOR BACKUP and then copies the MyISAM tables.
  • the STDERR of xtrabackup is not written to any file. You will have to redirect it to a file, e.g., xtrabackup OPTIONS 2> backupout.log (a sketch of this follows the list)
  • During the prepare phase, Percona XtraBackup performs crash recovery against the copied data files, using the copied transaction log file. After this is done, the database is ready to restore and use.
  • the tools enable you to do operations such as streaming and incremental backups with various combinations of copying the data files, copying the log files, and applying the logs to the data.
  • To restore a backup with xtrabackup you can use the --copy-back or --move-back options.
  • you may have to change the files’ ownership to mysql before starting the database server, as they will be owned by the user who created the backup.
張 旭

Trunk-based Development | Atlassian - 0 views

  • Trunk-based development is a version control management practice where developers merge small, frequent updates to a core “trunk” or main branch.
  • Gitflow and trunk-based development. 
  • Gitflow, which was popularized first, is a stricter development model where only certain individuals can approve changes to the main code. This maintains code quality and minimizes the number of bugs.
  • Trunk-based development is a more open model since all developers have access to the main code. This enables teams to iterate quickly and implement CI/CD.
  • Developers can create short-lived branches with a few small commits compared to other long-lived feature branching strategies.
  • Gitflow is an alternative Git branching model that uses long-lived feature branches and multiple primary branches.
  • Gitflow also has separate primary branch lines for development, hotfixes, features, and releases.
  • Trunk-based development is far more simplified since it focuses on the main branch as the source of fixes and releases.
  • Trunk-based development eases the friction of code integration.
  • trunk-based development model reduces these conflicts.
  • Adding an automated test suite and code coverage monitoring for this stream of commits enables continuous integration.
  • When new code is merged into the trunk, automated integration and code coverage tests run to validate the code quality.
  • Trunk-based development strives to keep the trunk branch “green”, meaning it's ready to deploy at any commit.
  • With continuous integration, developers perform trunk-based development in conjunction with automated tests that run after each commit to the trunk.
  • If trunk-based development were like music, it would be a rapid staccato -- short, succinct notes in rapid succession, with the repository commits being the notes.
  • Instead of creating a feature branch and waiting to build out the complete specification, developers can instead create a trunk commit that introduces the feature flag, then push new trunk commits that build out the feature specification within the flag (a minimal sketch of such a flag guard follows this list).
  • Automated testing is necessary for any modern software project intending to achieve CI/CD.
  • Short running unit and integration tests are executed during development and upon code merge.
  • Automated tests provide a layer of preemptive code review.
  • Once a branch merges, it is best practice to delete it.
  • A repository with a large amount of active branches has some unfortunate side effects
  • Merge branches to the trunk at least once a day
  • The “continuous” in CI/CD implies that updates are constantly flowing.
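
The feature-flag note above can be sketched as follows: the incomplete feature lands on trunk behind a guard, and later trunk commits fill in the flagged path while the flag stays off for users. The flag name and the environment-variable mechanism are illustrative assumptions, not something prescribed by the article.

```python
import os

def new_checkout_enabled() -> bool:
    # In a real system this might come from a flag service or config store.
    return os.environ.get("FEATURE_NEW_CHECKOUT", "false").lower() == "true"

def new_checkout_flow(cart):
    raise NotImplementedError("built out by later trunk commits")

def legacy_checkout_flow(cart):
    return f"checked out {len(cart)} items"

def checkout(cart):
    if new_checkout_enabled():
        return new_checkout_flow(cart)   # the path still being built, commit by commit
    return legacy_checkout_flow(cart)    # the code path users see today
```
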