Larvata / Group items tagged "package"

crazylion lee

GitHub - tidwall/evio: Fast event-loop networking for Go - 0 views

shared by crazylion lee on 05 Nov 17
  • "evio is an event loop networking framework that is fast and small. It makes direct epoll and kqueue syscalls rather than using the standard Go net package, and works in a similar manner as libuv and libevent."
張 旭

Dependency Lock File (.terraform.lock.hcl) - Configuration Language | Terraform | Hashi... - 0 views

  • Version constraints within the configuration itself determine which versions of dependencies are potentially compatible, but after selecting a specific version of each dependency Terraform remembers the decisions it made in a dependency lock file so that it can (by default) make the same decisions again in future.
  • At present, the dependency lock file tracks only provider dependencies.
  • Terraform does not remember version selections for remote modules, and so Terraform will always select the newest available module version that meets the specified version constraints.
  • The lock file is always named .terraform.lock.hcl, and this name is intended to signify that it is a lock file for various items that Terraform caches in the .terraform subdirectory of your working directory.
  • Terraform automatically creates or updates the dependency lock file each time you run the terraform init command.
  • You should include this file in your version control repository
  • If a particular provider has no existing recorded selection, Terraform will select the newest available version that matches the given version constraint, and then update the lock file to include that selection.
  • the "trust on first use" model
  • you can pre-populate checksums for a variety of different platforms in your lock file using the terraform providers lock command, which will then allow future calls to terraform init to verify that the packages available in your chosen mirror match the official packages from the provider's origin registry.
  • The h1: and zh: prefixes on these values represent different hashing schemes, each of which represents calculating a checksum using a different algorithm.
  • zh: is a mnemonic for "zip hash".
  • h1: is a mnemonic for "hash scheme 1", which is the current preferred hashing scheme.
  • To determine whether there still exists a dependency on a given provider, Terraform uses two sources of truth: the configuration itself, and the state.
  • "the overriding effect is compounded, with later blocks taking precedence over earlier blocks."
張 旭

Moving away from Alpine - DEV Community - 0 views

  • it’s a lot of work to get packages that are not readily available in the Alpine repository.
  • things compiled in Alpine won’t be usable on Ubuntu, for example, and vice versa.
  • the difficulty in pinning package versions in Alpine.
  • Developers rely heavily on app logs via syslog (mounted /dev/log) and Alpine uses busybox syslog by default.
  • Ubuntu officially launched minimal ubuntu images for cloud / container use
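The pinning problem in practice: Alpine's package index keeps only the latest build of a package per release branch, so a strict pin can break a later rebuild. A sketch with a hypothetical package version:

    FROM alpine:3.12
    # Strict pin: works until 12.18.3-r0 rotates out of the package index,
    # after which this apk add fails and the image no longer builds.
    RUN apk add --no-cache nodejs=12.18.3-r0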
crazylion lee

CocoaPods.org - 0 views

  • "CocoaPods is the dependency manager for Swift and Objective-C Cocoa projects. It has over ten thousand libraries and can help you scale your projects elegantly."
張 旭

Best practices for writing Dockerfiles - Docker Documentation - 0 views

  • Run only one process per container
  • use current Official Repositories as the basis for your image
  • put long or complex RUN statements on multiple lines separated with backslashes.
  • CMD instruction should be used to run the software contained by your image, along with any arguments
  • CMD should be given an interactive shell (bash, python, perl, etc)
  • COPY them individually, rather than all at once
  • COPY is preferred
  • using ADD to fetch packages from remote URLs is strongly discouraged
  • always use COPY
  • The best use for ENTRYPOINT is to set the image's main command, allowing that image to be run as though it was that command (and then use CMD as the default flags).
  • the image name can double as a reference to the binary as shown in the command above
  • ENTRYPOINT instruction can also be used in combination with a helper script
  • The VOLUME instruction should be used to expose any database storage area, configuration storage, or files/folders created by your docker container.
  • use USER to change to a non-root user
  • avoid installing or using sudo
  • avoid switching USER back and forth frequently.
  • always use absolute paths for your WORKDIR
  • ONBUILD is only useful for images that are going to be built FROM a given image
  • The “onbuild” image will fail catastrophically if the new build's context is missing the resource being added.
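A short Dockerfile pulling several of these guidelines together (base image, packages, and paths are illustrative):

    FROM ubuntu:20.04

    # Long RUN statements split across lines with backslashes
    RUN apt-get update && apt-get install -y \
        ca-certificates \
        python3 \
     && rm -rf /var/lib/apt/lists/*

    # Absolute path for WORKDIR; COPY files individually, and prefer COPY over ADD
    WORKDIR /app
    COPY app.py /app/

    # Run as a non-root user instead of installing sudo
    RUN useradd --system appuser
    USER appuser

    # ENTRYPOINT is the image's main command; CMD supplies overridable default flags
    ENTRYPOINT ["python3", "/app/app.py"]
    CMD ["--help"]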
張 旭

What is DevOps? | Atlassian - 0 views

  • DevOps is a set of practices that automates the processes between software development and IT teams, in order that they can build, test, and release software faster and more reliably.
  • increased trust, faster software releases, the ability to solve critical issues quickly, and better management of unplanned work.
  • bringing together the best of software development and IT operations.
  • DevOps is a culture, a movement, a philosophy.
  • a firm handshake between development and operations
  • DevOps isn’t magic, and transformations don’t happen overnight.
  • Infrastructure as code
  • Culture is the #1 success factor in DevOps.
  • Building a culture of shared responsibility, transparency and faster feedback is the foundation of every high performing DevOps team.
  •  'not our problem' mentality
  • DevOps is that change in mindset of looking at the development process holistically and breaking down the barrier between Dev and Ops.
  • Speed is everything.
  • A lack of automated test and review cycles blocks releases to production, and poor incident response time kills velocity and team confidence.
  • Open communication helps Dev and Ops teams swarm on issues, fix incidents, and unblock the release pipeline faster.
  • Unplanned work is a reality that every team faces–a reality that most often impacts team productivity.
  • “cross-functional collaboration.”
  • All the tooling and automation in the world are useless if they aren’t accompanied by a genuine desire on the part of development and IT/Ops professionals to work together.
  • DevOps doesn’t solve tooling problems. It solves human problems.
  • Forming project- or product-oriented teams to replace function-based teams is a step in the right direction.
  • sharing a common goal and having a plan to reach it together
  • join sprint planning sessions, daily stand-ups, and sprint demos.
  • DevOps culture across every department
  • open channels of communication, and talk regularly
  • DevOps isn’t one team’s job. It’s everyone’s job.
  • automation eliminates repetitive manual work, yields repeatable processes, and creates reliable systems.
  • Build, test, deploy, and provisioning automation
  • continuous delivery: the practice of running each code change through a gauntlet of automated tests, often facilitated by cloud-based infrastructure, then packaging up successful builds and promoting them up toward production using automated deploys.
  • automated deploys alert IT/Ops to server “drift” between environments, which reduces or eliminates surprises when it’s time to release.
  • “configuration as code.”
  • when DevOps uses automated deploys to send thoroughly tested code to identically provisioned environments, “Works on my machine!” becomes irrelevant.
  • A DevOps mindset sees opportunities for continuous improvement everywhere.
  • regular retrospectives
  • A/B testing
  • failure is inevitable. So you might as well set up your team to absorb it, recover, and learn from it (some call this “being anti-fragile”).
  • Postmortems focus on where processes fell down and how to strengthen them – not on which team member f'ed up the code.
  • Our engineers are responsible for QA, writing, and running their own tests to get the software out to customers.
  • How long did it take to go from development to deployment? 
  • How long does it take to recover after a system failure?
  • service level agreements (SLAs)
  • Devops isn't any single person's job. It's everyone's job.
  • DevOps is big on the idea that the same people who build an application should be involved in shipping and running it.
  • developers and operators pair with each other in each phase of the application’s lifecycle.
張 旭

Orbs, Jobs, Steps, and Workflows - CircleCI - 0 views

  • Orbs are packages of config that you either import by name or configure inline, to simplify, share, and reuse config within and across projects.
  • Jobs are a collection of Steps.
  • All of the steps in the job are executed in a single unit which consumes a CircleCI container from your plan while it’s running.
  • Workspaces persist data between jobs in a single Workflow.
  • Caching persists data between the same job in different Workflow builds.
  • Artifacts persist data after a Workflow has finished.
  • jobs can run using the machine executor, which enables reuse of recently used machine executor runs
  • or the docker executor, which can compose Docker containers to run your tests and any services they require
  • or the macos executor
  • Steps are a collection of executable commands which are run during a job
  • In addition to the run: key, keys for save_cache:, restore_cache:, deploy:, store_artifacts:, store_test_results:, and add_ssh_keys: are nested under Steps.
  • the checkout: key is required to check out your code
  • run: enables addition of arbitrary, multi-line shell command scripting
  • orchestrating job runs with parallel, sequential, and manual approval workflows.
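A minimal config sketch tying jobs, steps, caching, and workflows together (the image and cache key are illustrative):

    version: 2.1
    jobs:
      build:
        docker:                      # docker executor
          - image: cimg/node:14.17
        steps:
          - checkout                 # required to check out your code
          - restore_cache:
              keys:
                - deps-v1-{{ checksum "package-lock.json" }}
          - run: npm ci              # arbitrary shell command
          - save_cache:
              key: deps-v1-{{ checksum "package-lock.json" }}
              paths:
                - node_modules
    workflows:
      build-and-test:
        jobs:
          - build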
張 旭

phusion/baseimage-docker - 1 views

    • 張 旭
       
      By default, Docker treats the COMMAND passed to it as the PID 1 process; once that command finishes, the container exits, and no other daemons run. baseimage solves this problem.
    • crazylion lee
       
      Great stuff.
  • docker exec
  • Through SSH
  • docker exec -t -i YOUR-CONTAINER-ID bash -l
  • Login to the container
  • Baseimage-docker only advocates running multiple OS processes inside a single container.
  • Password and challenge-response authentication are disabled by default. Only key authentication is allowed.
  • A tool for running a command as another user
  • The Docker developers advocate the philosophy of running a single logical service per container. A logical service can consist of multiple OS processes.
  • All syslog messages are forwarded to "docker logs".
  • Baseimage-docker advocates running multiple OS processes inside a single container, and a single logical service can consist of multiple OS processes.
  • Baseimage-docker provides tools to encourage running processes as different users
  • sometimes it makes sense to run multiple services in a single container, and sometimes it doesn't.
  • Splitting your logical service into multiple OS processes also makes sense from a security standpoint.
  • using environment variables to pass parameters to containers is very much the "Docker way"
  • Baseimage-docker provides a facility to run a single one-shot command, while solving all of the aforementioned problems
  • the shell script must run the daemon without letting it daemonize or fork.
  • All executable scripts in /etc/my_init.d, if this directory exists. The scripts are run in lexicographic order.
  • variables will also be passed to all child processes
  • Environment variables on Unix are inherited on a per-process basis
  • there is no good central place for defining environment variables for all applications and services
  • centrally defining environment variables
  • One of the ideas behind Docker is that containers should be stateless, easily restartable, and behave like a black box.
  • a one-shot command in a new container
  • immediately exit after the command exits,
  • However the downside of this approach is that the init system is not started. That is, while invoking COMMAND, important daemons such as cron and syslog are not running. Also, orphaned child processes are not properly reaped, because COMMAND is PID 1.
  • add additional daemons (e.g. your own app) to the image by creating runit entries.
  • Nginx is one such example: it removes all environment variables unless you explicitly instruct it to retain them through the env configuration option.
  • Mechanisms for easily running multiple processes, without violating the Docker philosophy
  • Ubuntu is not designed to be run inside Docker
  • According to the Unix process model, the init process -- PID 1 -- inherits all orphaned child processes and must reap them
  • Syslog-ng seems to be much more stable
  • cron daemon
  • Rotates and compresses logs
  • /sbin/setuser
  • A tool for installing apt packages that automatically cleans up after itself.
  • a single logical service inside a single container
  • A daemon is a program which runs in the background of its system, such as a web server.
  • The shell script must be called run, must be executable, and is to be placed in the directory /etc/service/<NAME>. runsv will switch to the directory and invoke ./run after your container starts.
  • If any script exits with a non-zero exit code, the booting will fail.
  • If your process is started with a shell script, make sure you exec the actual process, otherwise the shell will receive the signal and not your process.
  • any environment variables set with docker run --env or with the ENV command in the Dockerfile, will be picked up by my_init
  • not possible for a child process to change the environment variables of other processes
  • they will not see the environment variables that were originally passed by Docker.
  • We ignore HOME, SHELL, USER and a bunch of other environment variables on purpose, because not ignoring them will break multi-user containers.
  • my_init imports environment variables from the directory /etc/container_environment
  • /etc/container_environment.sh - a dump of the environment variables in Bash format.
  • modify the environment variables in my_init (and therefore the environment variables in all child processes that are spawned after that point in time), by altering the files in /etc/container_environment
  • my_init only activates changes in /etc/container_environment when running startup scripts
  • environment variables don't contain sensitive data, then you can also relax the permissions
  • Syslog messages are forwarded to the console
  • syslog-ng is started separately before the runit supervisor process, and shutdown after runit exits.
  • RUN apt-get update && apt-get upgrade -y -o Dpkg::Options::="--force-confold"
  • /sbin/my_init --skip-startup-files --quiet --
  • By default, no keys are installed, so nobody can login
  • provide a pregenerated, insecure key (PuTTY format)
  • RUN /usr/sbin/enable_insecure_key
  • docker run YOUR_IMAGE /sbin/my_init --enable-insecure-key
  • RUN cat /tmp/your_key.pub >> /root/.ssh/authorized_keys && rm -f /tmp/your_key.pub
  • The default baseimage-docker installs syslog-ng, cron and sshd services during the build process
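Putting the runit pieces together: a daemon is added by dropping an executable run script under /etc/service/<NAME>. A sketch with a hypothetical myapp daemon (the base image tag is left to the project's releases):

    FROM phusion/baseimage:<tag>
    CMD ["/sbin/my_init"]                  # my_init runs as PID 1 and reaps orphans
    RUN mkdir -p /etc/service/myapp
    COPY myapp.sh /etc/service/myapp/run   # runit invokes ./run after the container starts
    RUN chmod +x /etc/service/myapp/run

with myapp.sh along these lines:

    #!/bin/sh
    # Run the daemon in the foreground; exec so signals reach the daemon, not this shell
    exec /sbin/setuser appuser /usr/bin/myapp >> /var/log/myapp.log 2>&1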
張 旭

Using Orbs - CircleCI - 0 views

  • Orbs enable you to share, standardize, and simplify config across your projects.
  • Jobs are composed of two parts: a set of steps, and the environment in which they should be executed.
  • Executors define the environment in which the steps of a job will be run.
  • Commands are reusable sets of steps that you can invoke with specific parameters within an existing job.
  • you can pass my-executor as the value of a name key under executor. This method is primarily employed when passing parameters to executor invocations.
  • Development orbs are mutable and expire after 90 days.
  • Production Orbs are immutable and durable.
  • CircleCI allows development orbs that have versions that start with dev:
  • Production orbs are immutable
  • Each installation of CircleCI, including circleci.com, has only one registry where orbs can be kept.
  • Organization Admins publish production orbs.
  • Organization members publish development orbs
  • You must invoke jobs in the workflow stanza of config.yml file, making sure to pass any necessary parameters as subkeys to the job.
  • When you declare an executor in a configuration outside of jobs, you can use these declarations for all jobs in the scope of that declaration, enabling you to reuse a single executor definition across multiple jobs.
  • Orbs are transparent - If you can execute an orb, you and anyone else can view the source of that orb.
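An illustrative config that imports an orb by name and reuses its executor and commands (the orb version is illustrative):

    version: 2.1
    orbs:
      node: circleci/node@4.7        # import an orb by name
    jobs:
      test:
        executor: node/default       # executor defined by the orb
        steps:
          - checkout
          - node/install-packages    # reusable command provided by the orb
          - run: npm test
    workflows:
      main:
        jobs:
          - test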
張 旭

Intro to deployment strategies: blue-green, canary, and more - DEV Community - 0 views

  • using a service-oriented architecture and microservices approach, developers can design a code base to be modular.
  • Modern applications are often distributed and cloud-based
  • different release cycles for different components
  • the abstraction of the infrastructure layer, which is now considered code. Deployment of a new application may require the deployment of new infrastructure code as well.
  • "big bang" deployments update whole or large parts of an application in one fell swoop.
  • Big bang deployments required the business to conduct extensive development and testing before release, often associated with the "waterfall model" of large sequential releases.
  • Rollbacks are often costly, time-consuming, or even impossible.
  • In a rolling deployment, an application’s new version gradually replaces the old one.
  • new and old versions will coexist without affecting functionality or user experience.
  • Each container is modified to download the latest image from the app vendor’s site.
  • two identical production environments work in parallel.
  • Once the testing results are successful, application traffic is routed from blue to green.
  • In a blue-green deployment, both systems use the same persistence layer or database back end.
  • You can use the primary database (blue) for write operations and the secondary (green) for read operations.
  • Blue-green deployments rely on traffic routing.
  • long TTL values can delay these changes.
  • The main challenge of canary deployment is to devise a way to route some users to the new application.
  • Use application logic to unlock new features for specific users and groups.
  • With CD, the CI-built code artifact is packaged and always ready to be deployed in one or more environments.
  • Use Build Automation tools to automate environment builds
  • Use configuration management tools
  • Enable automated rollbacks for deployments
  • An application performance monitoring (APM) tool can help your team monitor critical performance metrics including server response times after deployments.
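As one concrete way to route a slice of traffic to a canary, a load balancer can weight upstream servers. A hypothetical nginx sketch sending roughly 10% of requests to the new version:

    upstream app {
        server app-stable:8080 weight=9;   # ~90% of traffic
        server app-canary:8080 weight=1;   # ~10% to the canary
    }
    server {
        listen 80;
        location / {
            proxy_pass http://app;
        }
    }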
crazylion lee

taiko - 0 views

  • "An easy to use wrapper over Google Chrome's Puppeteer library."
張 旭

Open source load testing tool review 2020 - 0 views

  • Hey is a simple tool, written in Go, with good performance and the most common features you'll need to run simple static URL tests.
  • Hey supports HTTP/2, which neither Wrk nor Apachebench does
  • Apachebench is very fast, so often you will not need more than one CPU core to generate enough traffic
  • Hey has rate limiting, which can be used to run fixed-rate tests.
  • Vegeta was designed to be run on the command line; it reads from stdin a list of HTTP transactions to generate, and sends results in binary format to stdout,
  • Vegeta is a really strong tool that caters to people who want a tool to test simple, static URLs (perhaps API end points) but also want a bit more functionality.
  • Vegeta can even be used as a Golang library/package if you want to create your own load testing tool.
  • Wrk is so damn fast
  • being fast and measuring correctly is about all that Wrk does
  • k6 is scriptable in plain Javascript
  • k6 is average or better. In some categories (documentation, scripting API, command line UX) it is outstanding.
  • Jmeter is a huge beast compared to most other tools.
  • Siege is a simple tool, similar to e.g. Apachebench in that it has no scripting and is primarily used when you want to hit a single, static URL repeatedly.
  • A good way of testing the testing tools is to not test them on your code, but on some third-party thing that is sure to be very high-performing.
  • use a tool like e.g. top to keep track of Nginx CPU usage while testing. If you see just one process, and see it using close to 100% CPU, it means you could be CPU-bound on the target side.
  • If you see multiple Nginx processes but only one is using a lot of CPU, it means your load testing tool is only talking to that particular worker process.
  • Network delay is also important to take into account as it sets an upper limit on the number of requests per second you can push through.
  • If, say, the Nginx default page requires a transfer of 250 bytes to load, it means that if the servers are connected via a 100 Mbit/s link, the theoretical max RPS rate would be around 100,000,000 divided by 8 (bits per byte) divided by 250 => 100M/2000 = 50,000 RPS. Though that is a very optimistic calculation - protocol overhead will make the actual number a lot lower so in the case above I would start to get worried bandwidth was an issue if I saw I could push through max 30,000 RPS, or something like that.
  • Wrk managed to push through over 50,000 RPS and that made 8 Nginx workers on the target system consume about 600% CPU.
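For comparison, roughly equivalent invocations of three of the tools (URL, rates, and durations are illustrative):

    # hey: 50 concurrent workers, rate-limited per worker, for 30 seconds
    hey -z 30s -c 50 -q 100 http://localhost:8080/

    # vegeta: targets on stdin, binary results on stdout, piped into a report
    echo "GET http://localhost:8080/" | vegeta attack -rate=100 -duration=30s | vegeta report

    # wrk: 4 threads, 100 connections, 30 seconds
    wrk -t4 -c100 -d30s http://localhost:8080/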
張 旭

The differences between Docker, containerd, CRI-O and runc - Tutorial Works - 0 views

  • Docker isn’t the only container contender on the block.
  • Container Runtime Interface (CRI), which defines an API between Kubernetes and the container runtime
  • Open Container Initiative (OCI) which publishes specifications for images and containers.
  • for a lot of people, the name “Docker” itself is synonymous with the word “container”.
  • Docker created a very ergonomic (nice-to-use) tool for working with containers – also called docker.
  • docker is designed to be installed on a workstation or server and comes with a bunch of tools to make it easy to build and run containers as a developer, or DevOps person.
  • containerd: This is a daemon process that manages and runs containers.
  • runc: This is the low-level container runtime (the thing that actually creates and runs containers).
  • libcontainer, a native Go-based implementation for creating containers.
  • Kubernetes includes a component called dockershim, which allows it to support Docker.
  • Kubernetes prefers to run containers through any container runtime which supports its Container Runtime Interface (CRI).
  • Kubernetes will remove support for Docker directly, and prefer to use only container runtimes that implement its Container Runtime Interface.
  • Both containerd and CRI-O can run Docker-formatted (actually OCI-formatted) images, they just do it without having to use the docker command or the Docker daemon.
  • Docker images are actually images packaged in the Open Container Initiative (OCI) format.
  • CRI is the API that Kubernetes uses to control the different runtimes that create and manage containers.
  • CRI makes it easier for Kubernetes to use different container runtimes
  • containerd is a high-level container runtime that came from Docker, and implements the CRI spec
  • containerd was separated out of the Docker project, to make Docker more modular.
  • CRI-O is another high-level container runtime which implements the Container Runtime Interface (CRI).
  • The idea behind the OCI is that you can choose between different runtimes which conform to the spec.
  • runc is an OCI-compatible container runtime.
  • A reference implementation is a piece of software that has implemented all the requirements of a specification or standard.
  • runc provides all of the low-level functionality for containers, interacting with existing low-level Linux features, like namespaces and control groups.
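To make the layering concrete: containerd ships a ctr CLI that pulls and runs OCI images with no Docker daemon involved (the image and container ID are illustrative):

    # Pull an OCI image and run a container directly against containerd
    ctr images pull docker.io/library/alpine:latest
    ctr run --rm docker.io/library/alpine:latest demo echo "hello from containerd"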
張 旭

3. Hello, world! - Err 9.9.9 documentation - 0 views

  • The first is a Message object, which represents the full message object received by Errbot.
  • The second is a string (or a list, if using the split_args_with parameter of botcmd()) with the arguments passed to the command.
  • args would be the string “Mister Errbot”.
  • If you return None, Errbot will not respond with any kind of message when executing the command.
  • a file that ends with the extension .plug is used by Errbot to identify and load plugins.
  • The key Module should point to a module that Python can find and import.
  • The key Name should be identical to the name you gave to the class in your plugin file
  • The presence of __init__.py indicates that lib is a regular Python package.
  • the !status command,
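Putting those pieces together, a hello-world plugin is a Python class plus a .plug file (file and class names here are illustrative):

    # helloworld.py
    from errbot import BotPlugin, botcmd

    class HelloWorld(BotPlugin):
        """A minimal Errbot plugin."""

        @botcmd
        def hello(self, msg, args):
            # msg is the full Message object; args is the argument string
            return "Hello, world!"

and a minimal helloworld.plug so Errbot can identify and load it:

    [Core]
    Name = HelloWorld
    Module = helloworld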