
Group items tagged: HTTP

張 旭

Full Cycle Developers at Netflix - Operate What You Build - 1 views

  • Researching issues felt like bouncing a rubber ball between teams, hard to catch the root cause and harder yet to stop from bouncing between one another.
  • In the past, Edge Engineering had ops-focused teams and SRE specialists who owned the deploy+operate+support parts of the software life cycle
  • hearing about those problems second-hand
  • ...17 more annotations...
  • devs could push code themselves when needed, and also were responsible for off-hours production issues and support requests
  • What were we trying to accomplish and why weren’t we being successful?
  • These specialized roles create efficiencies within each segment while potentially creating inefficiencies across the entire life cycle.
  • Grouping differing specialists together into one team can reduce silos, but having different people do each role adds communication overhead, introduces bottlenecks, and inhibits the effectiveness of feedback loops.
  • devops principles
  • the team that develops a system is also responsible for operating and supporting that system
  • Each development team owns deployment issues, performance bugs, capacity planning, alerting gaps, partner support, and so on.
  • Those centralized teams act as force multipliers by turning their specialized knowledge into reusable building blocks.
  • Communication and alignment are the keys to success.
  • Full cycle developers are expected to be knowledgeable and effective in all areas of the software life cycle.
  • ramping up on areas they haven’t focused on before
  • We run dev bootcamps and other forms of ongoing training to impart this knowledge and build up these skills
  • “how can I automate what is needed to operate this system?”
  • “what self-service tool will enable my partners to answer their questions without needing me to be involved?”
  • A full cycle developer thinks and acts like an SWE, SDET, and SRE. At times they create software that solves business problems, at other times they write test cases for that, and still other times they automate operational aspects of that system.
  • the need for continuous delivery pipelines, monitoring/observability, and so on.
  • Tooling and automation help to scale expertise, but no tool will solve every problem in the developer productivity and operations space
張 旭

Replication - MongoDB Manual - 0 views

  • A replica set in MongoDB is a group of mongod processes that maintain the same data set.
  • Replica sets provide redundancy and high availability, and are the basis for all production deployments.
  • With multiple copies of data on different database servers, replication provides a level of fault tolerance against the loss of a single database server.
  • ...18 more annotations...
  • replication can provide increased read capacity as clients can send read operations to different servers.
  • A replica set is a group of mongod instances that maintain the same data set.
  • A replica set contains several data bearing nodes and optionally one arbiter node.
  • one and only one member is deemed the primary node, while the other nodes are deemed secondary nodes.
  • A replica set can have only one primary capable of confirming writes with { w: "majority" } write concern; although in some circumstances, another mongod instance may transiently believe itself to also be primary.
  • The secondaries replicate the primary’s oplog and apply the operations to their data sets such that the secondaries’ data sets reflect the primary’s data set
  • add a mongod instance to a replica set as an arbiter. An arbiter participates in elections but does not hold data
  • An arbiter will always be an arbiter whereas a primary may step down and become a secondary and a secondary may become the primary during an election.
  • Secondaries replicate the primary’s oplog and apply the operations to their data sets asynchronously.
  • These slow oplog messages are logged for the secondaries in the diagnostic log under the REPL component with the text applied op: <oplog entry> took <num>ms.
  • Replication lag refers to the amount of time that it takes to copy (i.e. replicate) a write operation on the primary to a secondary.
  • When a primary does not communicate with the other members of the set for more than the configured electionTimeoutMillis period (10 seconds by default), an eligible secondary calls for an election to nominate itself as the new primary.
  • The replica set cannot process write operations until the election completes successfully.
  • The median time before a cluster elects a new primary should not typically exceed 12 seconds, assuming default replica configuration settings.
  • Factors such as network latency may extend the time required for replica set elections to complete, which in turn affects the amount of time your cluster may operate without a primary.
  • Your application connection logic should include tolerance for automatic failovers and the subsequent elections.
  • MongoDB drivers can detect the loss of the primary and automatically retry certain write operations a single time, providing additional built-in handling of automatic failovers and elections
  • By default, clients read from the primary; however, clients can specify a read preference to send read operations to secondaries.
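A minimal connection sketch of the client-side points above (automatic failover handling, retryable writes, secondary reads); the hostnames and the set name rs0 are illustrative, not from the manual excerpt:

    # Connect to the whole replica set: the driver discovers the current primary,
    # retries eligible writes once after a failover, and sends reads to a
    # secondary when one is available.
    mongosh "mongodb://mongo1.example.net:27017,mongo2.example.net:27017,mongo3.example.net:27017/?replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=true"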
張 旭

Deploy a Replica Set - MongoDB Manual - 0 views

  • Three member replica sets provide enough redundancy to survive most network partitions and other system failures.
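A sketch of how such a three-member set might be brought up; the hostnames, dbpath, and set name rs0 are illustrative:

    # On each of the three hosts, start a mongod that belongs to the same set
    mongod --replSet rs0 --port 27017 --dbpath /srv/mongodb/rs0 --bind_ip localhost,mongo1.example.net

    # From mongosh connected to one member, initiate the set with all three members
    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "mongo1.example.net:27017" },
        { _id: 1, host: "mongo2.example.net:27017" },
        { _id: 2, host: "mongo3.example.net:27017" }
      ]
    })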
張 旭

Internal/Membership Authentication - MongoDB Manual - 0 views

  • Require that members of replica sets and sharded clusters authenticate to each other.
  • Enabling internal authentication also enables client authorization.
張 旭

Convert a Standalone to a Replica Set - MongoDB Manual - 0 views

  • Restart the instance. Use the --replSet option to specify the name of the new replica set.
張 旭

Deploy Replica Set With Keyfile Authentication - MongoDB Manual - 0 views

  • Keyfiles are bare-minimum forms of security and are best suited for testing or development environments.
  • With keyfile authentication, each mongod instance in the replica set uses the contents of the keyfile as the shared password for authenticating other members in the deployment.
  • On UNIX systems, the keyfile must not have group or world permissions.
  • ...3 more annotations...
  • Copy the keyfile to each server hosting the replica set members.
  • Ensure that the user running the mongod instances is the owner of the file and can access the keyfile.
  • For each member in the replica set, start the mongod with either the security.keyFile configuration file setting or the --keyFile command-line option.
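A sketch of those steps; openssl is one common way to generate suitably random keyfile contents, and the paths and set name rs0 are illustrative:

    # Generate the keyfile and remove group/world permissions
    openssl rand -base64 756 > /srv/mongodb/keyfile
    chmod 400 /srv/mongodb/keyfile
    # ...copy the same file to every member's host, owned by the mongod user...

    # Start each member with the shared keyfile
    mongod --replSet rs0 --keyFile /srv/mongodb/keyfile --dbpath /srv/mongodb/rs0 --bind_ip localhost,mongo1.example.net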
crazylion lee

GitHub - fonoster/fonos: - 0 views

shared by crazylion lee on 29 Oct 20
  •  
    "Project Fonos is open-source telecommunications for the cloud. It helps VoIP integrators quickly deploy new networks and benefit from value-added services such as Programmable Voice, Messaging, and Video. This repository assembles the various components needed to deploy a telephony system at scale."
crazylion lee

Apache Helix - Near-Realtime Rsync Replicated File System - 1 views

  •  
    "Near-Realtime Rsync Replicated File System "
張 旭

Gracefully Shutdown Docker Container - Kakashi's Blog - 1 views

  • The initial idea is to make the application invoke the destructor of each component as soon as it receives specific signals such as SIGTERM or SIGINT
  • When you run a docker container, by default it gets its own PID namespace, which means the container's processes are isolated from other processes on your host.
  • Within that PID namespace, PID 1 has the important task of reaping zombie processes.
  • ...11 more annotations...
  • The shell form uses /bin/bash as PID 1 and runs your program as a subprocess.
  • When a signal is sent to a shell, the signal actually won’t be forwarded to subprocesses.
  • By using the exec form, we can run our program as PID1
  • if you use the exec form to run a shell script that spawns your application, remember to use the exec syscall to overwrite /usr/bin/bash, otherwise it will act as scenario 1
  • /bin/bash can handle reaping zombie processes
  • with Tini, SIGTERM properly terminates your process even if you didn’t explicitly install a signal handler for it.
  • run tini as PID1 and it will forward the signal for subprocesses.
  • tini is a signal proxy and it also can deal with zombie process issue automatically.
  • run your program with tini by passing --init flag to docker run
  • With docker stop, Docker waits 10s (by default) for the container to stop before killing it: the main process inside the container receives SIGTERM, then the Docker daemon waits 10s and sends SIGKILL to terminate the process.
  • docker kill kills running containers immediately; it's more like kill -9 or kill --SIGKILL
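A sketch pulling these points together: an exec-form ENTRYPOINT so the application runs as PID 1, or tini as PID 1 via --init; the image name and script path are illustrative:

    # Dockerfile: exec form, so the app receives SIGTERM directly as PID 1
    FROM python:3.12-slim
    COPY app.py /app/app.py
    ENTRYPOINT ["python", "/app/app.py"]

    # Or let tini be PID 1 to forward signals and reap zombies
    docker run --init myapp:latest

    # docker stop sends SIGTERM, then SIGKILL after the grace period (10s by default)
    docker stop -t 30 <container>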
crazylion lee

Little Snitch - 1 views

  •  
    "As soon as you're connected to the Internet, applications can potentially send whatever they want to wherever they want. Most often they do this to your benefit. But sometimes, like in case of tracking software, trojans or other malware, they don't. But you don't notice anything, because all of this happens invisibly under the hood."
張 旭

Understanding GitHub Actions - GitHub Docs - 0 views

  • A job is a set of steps that execute on the same runner. By default, a workflow with multiple jobs will run those jobs in parallel.
  • Workflows are made up of one or more jobs and can be scheduled or triggered by an event
  • An event is a specific activity that triggers a workflow.
  • ...8 more annotations...
  • configure a workflow to run jobs sequentially.
  • A step is an individual task that can run commands in a job. A step can be either an action or a shell command.
  • Each step in a job executes on the same runner, allowing the actions in that job to share data with each other.
  • Actions are standalone commands that are combined into steps to create a job.
  • Actions are the smallest portable building block of a workflow.
  • To use an action in a workflow, you must include it as a step.
  • You can use a runner hosted by GitHub, or you can host your own.
  • GitHub-hosted runners are based on Ubuntu Linux, Microsoft Windows, and macOS, and each job in a workflow runs in a fresh virtual environment.
張 旭

Services | GitLab - 0 views

  • The services keyword defines a Docker image that runs during a job, linked to the Docker image that the image keyword defines. This allows you to access the service image during build time.
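A minimal .gitlab-ci.yml sketch of the services keyword; the images and script are illustrative:

    test:
      image: ruby:3.2            # the image the job's script runs in
      services:
        - postgres:15            # service container linked to the job, reachable as "postgres"
      script:
        - bundle exec rake test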