Group items tagged HA

張 旭

Understanding the Nginx Configuration File Structure and Configuration Contexts | Digit... - 0 views

  • discussing the basic structure of an Nginx configuration file along with some guidelines on how to design your files
  • /etc/nginx/nginx.conf
  • In Nginx parlance, the areas that these brackets define are called "contexts" because they contain configuration details that are separated according to their area of concern
  • contexts can be layered within one another
  • if a directive is valid in multiple nested scopes, a declaration in a broader context will be passed on to any child contexts as default values.
  • The children contexts can override these values at will
  • Nginx will error out on reading a configuration file with directives that are declared in the wrong context.
  • The most general context is the "main" or "global" context
  • Any directive that exists entirely outside of these blocks is said to inhabit the "main" context
  • The main context represents the broadest environment for Nginx configuration.
  • The "events" context is contained within the "main" context. It is used to set global options that affect how Nginx handles connections at a general level.
  • Nginx uses an event-based connection processing model, so the directives defined within this context determine how worker processes should handle connections.
  • the connection processing method is automatically selected based on the most efficient choice that the platform has available
  • a worker will only take a single connection at a time
  • When configuring Nginx as a web server or reverse proxy, the "http" context will hold the majority of the configuration.
  • The http context is a sibling of the events context, so they should be listed side-by-side, rather than nested
  • fine-tune the TCP keep alive settings (keepalive_disable, keepalive_requests, and keepalive_timeout)
  • The "server" context is declared within the "http" context.
  • multiple declarations
  • each instance defines a specific virtual server to handle client requests
  • Each client request will be handled according to the configuration defined in a single server context, so Nginx must decide which server context is most appropriate based on details of the request.
  • listen: The ip address / port combination that this server block is designed to respond to.
  • server_name: This directive is the other component used to select a server block for processing.
  • "Host" header
  • configure files to try to respond to requests (try_files)
  • issue redirects and rewrites (return and rewrite)
  • set arbitrary variables (set)
  • Location contexts share many relational qualities with server contexts
  • multiple location contexts can be defined, each location is used to handle a certain type of client request, and each location is selected by virtue of matching the location definition against the client request through a selection algorithm
  • Location blocks live within server contexts and, unlike server blocks, can be nested inside one another.
  • While server contexts are selected based on the requested IP address/port combination and the host name in the "Host" header, location blocks further divide up the request handling within a server block by looking at the request URI
  • The request URI is the portion of the request that comes after the domain name or IP address/port combination.
  • New directives at this level allow you to reach locations outside of the document root (alias), mark the location as only internally accessible (internal), and proxy to other servers or locations (using http, fastcgi, scgi, and uwsgi proxying).
  • These can then be used to do A/B testing by providing different content to different hosts.
  • configures Perl handlers for the location they appear in
  • set the value of a variable depending on the value of another variable
  • used to map MIME types to the file extensions that should be associated with them.
  • this context defines a named pool of servers that Nginx can then proxy requests to
  • The upstream context should be placed within the http context, outside of any specific server contexts.
  • The upstream context can then be referenced by name within server or location blocks to pass requests of a certain type to the pool of servers that have been defined.
  • function as a high performance mail proxy server
  • The mail context is defined within the "main" or "global" context (outside of the http context).
  • Nginx has the ability to redirect authentication requests to an external authentication server
  • the if directive in Nginx will execute the instructions contained if a given test returns "true".
  • Since Nginx will test conditions of a request with many other purpose-made directives, if should not be used for most forms of conditional execution.
  • The limit_except context is used to restrict the use of certain HTTP methods within a location context.
  • The result of the above example is that any client can use the GET and HEAD verbs, but only clients coming from the 192.168.1.1/24 subnet are allowed to use other methods.
  • Many directives are valid in more than one context
  • it is usually best to declare directives in the highest context to which they are applicable, and overriding them in lower contexts as necessary.
  • Declaring at higher levels provides you with a sane default
  • Nginx already engages in a well-documented selection algorithm for things like selecting server blocks and location blocks.
  • instead of relying on rewrites to get a user supplied request into the format that you would like to work with, you should try to set up two blocks for the request, one of which represents the desired method, and the other that catches messy requests and redirects (and possibly rewrites) them to your correct block.
  • incorrect requests can get by with a redirect rather than a rewrite, which should execute with lower overhead.
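
Putting the annotations above together, here is a minimal sketch of the context hierarchy being described; the paths, ports, names and values are illustrative, not taken from the article.

    # main (global) context: directives outside of any block
    user  www-data;
    worker_processes  auto;

    events {
        # connection-handling options used by the worker processes
        worker_connections  1024;
    }

    http {
        # http-wide defaults, inherited by child contexts unless overridden
        keepalive_timeout  65;

        upstream app_servers {
            # named pool of servers that server/location blocks can proxy to
            server 10.0.0.11:8080;
            server 10.0.0.12:8080;
        }

        server {
            # one virtual server, selected by listen + server_name ("Host" header)
            listen       80;
            server_name  example.com;
            root         /var/www/html;

            location / {
                # location blocks divide request handling by request URI
                try_files $uri $uri/ =404;
            }

            location /app/ {
                proxy_pass http://app_servers;
            }
        }
    }
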
張 旭

An Introduction to HAProxy and Load Balancing Concepts | DigitalOcean - 0 views

  • HAProxy, which stands for High Availability Proxy
  • improve the performance and reliability of a server environment by distributing the workload across multiple servers (e.g. web, application, database).
  • ACLs are used to test some condition and perform an action (e.g. select a server, or block a request) based on the test result.
  • Access Control List (ACL)
  • ACLs allow flexible network traffic forwarding based on a variety of factors like pattern-matching and the number of connections to a backend
  • A backend is a set of servers that receives forwarded requests
  • adding more servers to your backend will increase your potential load capacity by spreading the load over multiple servers
  • mode http specifies that layer 7 proxying will be used
  • specifies the load balancing algorithm
  • health checks
  • A frontend defines how requests should be forwarded to backends
  • use_backend rules, which define which backends to use depending on which ACL conditions are matched, and/or a default_backend rule that handles every other case
  • A frontend can be configured to handle various types of network traffic
  • Load balancing this way will forward user traffic based on IP range and port
  • Generally, all of the servers in the web-backend should be serving identical content--otherwise the user might receive inconsistent content.
  • Using layer 7 allows the load balancer to forward requests to different backend servers based on the content of the user's request.
  • allows you to run multiple web application servers under the same domain and port
  • acl url_blog path_beg /blog matches a request if the path of the user's request begins with /blog.
  • Round Robin selects servers in turns
  • Selects the server with the least number of connections--it is recommended for longer sessions
  • This selects which server to use based on a hash of the source IP
  • ensure that a user will connect to the same server
  • require that a user continues to connect to the same backend server. This persistence is achieved through sticky sessions, using the appsession parameter in the backend that requires it.
  • HAProxy uses health checks to determine if a backend server is available to process requests.
  • The default health check is to try to establish a TCP connection to the server
  • If a server fails a health check, and therefore is unable to serve requests, it is automatically disabled in the backend
  • For certain types of backends, like database servers in certain situations, the default health check is insufficient to determine whether a server is still healthy.
  • However, your load balancer is a single point of failure in these setups; if it goes down or gets overwhelmed with requests, it can cause high latency or downtime for your service.
  • A high availability (HA) setup is an infrastructure without a single point of failure
  • a static IP address that can be remapped from one server to another.
  • If that load balancer fails, your failover mechanism will detect it and automatically reassign the IP address to one of the passive servers.
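
A minimal haproxy.cfg sketch of the ideas above - an ACL steering /blog traffic to its own backend, round-robin balancing, and per-server health checks. Section names, addresses and algorithms are illustrative, not from the article.

    frontend www
        bind *:80
        mode http
        # layer 7 routing: send /blog requests to a dedicated backend
        acl url_blog path_beg /blog
        use_backend blog-backend if url_blog
        default_backend web-backend

    backend web-backend
        mode http
        balance roundrobin
        # "check" enables the default health check (a TCP connection attempt)
        server web1 10.0.0.21:80 check
        server web2 10.0.0.22:80 check

    backend blog-backend
        mode http
        balance leastconn
        server blog1 10.0.0.31:80 check
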
張 旭

Open source load testing tool review 2020 - 0 views

  • Hey is a simple tool, written in Go, with good performance and the most common features you'll need to run simple static URL tests.
  • Hey supports HTTP/2, which neither Wrk nor Apachebench does
  • Apachebench is very fast, so often you will not need more than one CPU core to generate enough traffic
  • Hey has rate limiting, which can be used to run fixed-rate tests.
  • Vegeta was designed to be run on the command line; it reads from stdin a list of HTTP transactions to generate, and sends results in binary format to stdout,
  • Vegeta is a really strong tool that caters to people who want a tool to test simple, static URLs (perhaps API end points) but also want a bit more functionality.
  • Vegeta can even be used as a Golang library/package if you want to create your own load testing tool.
  • Wrk is so damn fast
  • being fast and measuring correctly is about all that Wrk does
  • k6 is scriptable in plain Javascript
  • k6 is average or better. In some categories (documentation, scripting API, command line UX) it is outstanding.
  • Jmeter is a huge beast compared to most other tools.
  • Siege is a simple tool, similar to e.g. Apachebench in that it has no scripting and is primarily used when you want to hit a single, static URL repeatedly.
  • A good way of testing the testing tools is to not test them on your code, but on some third-party thing that is sure to be very high-performing.
  • use a tool like e.g. top to keep track of Nginx CPU usage while testing. If you see just one process, and see it using close to 100% CPU, it means you could be CPU-bound on the target side.
  • If you see multiple Nginx processes but only one is using a lot of CPU, it means your load testing tool is only talking to that particular worker process.
  • Network delay is also important to take into account as it sets an upper limit on the number of requests per second you can push through.
  • If, say, the Nginx default page requires a transfer of 250 bytes to load, and the servers are connected via a 100 Mbit/s link, the theoretical max RPS would be around 100,000,000 / 8 (bits per byte) / 250 = 50,000 RPS. That is a very optimistic calculation - protocol overhead will make the actual number a lot lower - so in that case I would start to worry that bandwidth was the issue if I could push through at most around 30,000 RPS.
  • Wrk managed to push through over 50,000 RPS and that made 8 Nginx workers on the target system consume about 600% CPU.
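
For comparison, roughly equivalent invocations of the tools reviewed above, hammering one static URL for 30 seconds. The flags are quoted from memory of each tool's CLI, so treat them as an assumption and verify against each tool's --help.

    # wrk: 8 threads, 256 open connections, 30 seconds
    wrk -t8 -c256 -d30s http://target.example/

    # hey: 50 concurrent workers for 30 seconds (-q adds per-worker rate limiting)
    hey -z 30s -c 50 http://target.example/

    # vegeta: reads targets from stdin, fixed rate of 1000 requests per second
    echo "GET http://target.example/" | vegeta attack -rate=1000 -duration=30s | vegeta report

    # apachebench: 100000 requests, 100 concurrent connections
    ab -n 100000 -c 100 http://target.example/
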
張 旭

MetalLB, bare metal load-balancer for Kubernetes - 0 views

  • it allows you to create Kubernetes services of type “LoadBalancer” in clusters that don’t run on a cloud provider
  • In a cloud-enabled Kubernetes cluster, you request a load-balancer, and your cloud platform assigns an IP address to you.
  • MetalLB cannot create IP addresses out of thin air, so you do have to give it pools of IP addresses that it can use.
  • MetalLB lets you define as many address pools as you want, and doesn’t care what “kind” of addresses you give it.
  • Once MetalLB has assigned an external IP address to a service, it needs to make the network beyond the cluster aware that the IP “lives” in the cluster.
  • In layer 2 mode, one machine in the cluster takes ownership of the service, and uses standard address discovery protocols (ARP for IPv4, NDP for IPv6) to make those IPs reachable on the local network
  • From the LAN’s point of view, the announcing machine simply has multiple IP addresses.
  • In BGP mode, all machines in the cluster establish BGP peering sessions with nearby routers that you control, and tell those routers how to forward traffic to the service IPs.
  • Using BGP allows for true load balancing across multiple nodes, and fine-grained traffic control thanks to BGP’s policy mechanisms.
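
A minimal sketch of the configuration this implies, assuming a recent MetalLB release that is configured through CRDs (older releases used a ConfigMap instead); the pool name and address range are placeholders for addresses you actually control.

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: example-pool
      namespace: metallb-system
    spec:
      addresses:
        - 192.168.10.200-192.168.10.250
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: example-l2
      namespace: metallb-system
    spec:
      ipAddressPools:
        - example-pool

With this in place, a Service of type LoadBalancer gets an external IP from the pool, and in layer 2 mode one node answers ARP/NDP for it on the local network.
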
張 旭

Override Files - Configuration Language - Terraform by HashiCorp - 0 views

  • In both the required_version and required_providers settings, each override constraint entirely replaces the constraints for the same component in the original block.
  • If the base block and the override block both set required_version, the constraints in the base block are entirely ignored.
  • Terraform normally loads all of the .tf and .tf.json files within a directory and expects each one to define a distinct set of configuration objects.
  • If two files attempt to define the same object, Terraform returns an error.
  • a human-edited configuration file in the Terraform language native syntax could be partially overridden using a programmatically-generated file in JSON syntax.
  • Terraform has special handling of any configuration file whose name ends in _override.tf or _override.tf.json
  • Terraform initially skips these override files when loading configuration, and then afterwards processes each one in turn (in lexicographical order).
  • merges the override block contents into the existing object.
  • Over-use of override files hurts readability, since a reader looking only at the original files cannot easily see that some portions of those files have been overridden without consulting all of the override files that are present.
  • When using override files, use comments in the original files to warn future readers about which override files apply changes to each block.
  • A top-level block in an override file merges with a block in a normal configuration file that has the same block header.
  • Within a top-level block, an attribute argument within an override block replaces any argument of the same name in the original block.
  • Within a top-level block, any nested blocks within an override block replace all blocks of the same type in the original block.
  • The contents of nested configuration blocks are not merged.
  • If more than one override file defines the same top-level block, the overriding effect is compounded, with later blocks taking precedence over earlier blocks
  • The settings within terraform blocks are considered individually when merging.
  • If the required_providers argument is set, its value is merged on an element-by-element basis, which allows an override block to adjust the constraint for a single provider without affecting the constraints for other providers.
張 旭

Controllers | Kubernetes - 0 views

  • In robotics and automation, a control loop is a non-terminating loop that regulates the state of a system.
  • controllers are control loops that watch the state of your cluster, then make or request changes where needed
  • Each controller tries to move the current cluster state closer to the desired state.
  • A controller tracks at least one Kubernetes resource type.
  • The controller(s) for that resource are responsible for making the current state come closer to that desired state.
  • in Kubernetes, a controller will send messages to the API server that have useful side effects.
  • Built-in controllers manage state by interacting with the cluster API server.
  • By contrast with Job, some controllers need to make changes to things outside of your cluster.
  • the controller makes some change to bring about your desired state, and then reports current state back to your cluster's API server. Other control loops can observe that reported data and take their own actions.
  • As long as the controllers for your cluster are running and able to make useful changes, it doesn't matter if the overall state is stable or not.
  • Kubernetes uses lots of controllers that each manage a particular aspect of cluster state.
  • a particular control loop (controller) uses one kind of resource as its desired state, and has a different kind of resource that it manages to make that desired state happen.
  • There can be several controllers that create or update the same kind of object.
  • you can have Deployments and Jobs; these both create Pods. The Job controller does not delete the Pods that your Deployment created, because there is information (labels) the controllers can use to tell those Pods apart.
  • Kubernetes comes with a set of built-in controllers that run inside the kube-controller-manager.
張 旭

LXC vs Docker: Why Docker is Better | UpGuard - 0 views

  • LXC (LinuX Containers) is an OS-level virtualization technology that allows creation and running of multiple isolated Linux virtual environments (VE) on a single control host.
  • Docker, previously called dotCloud, was started as a side project and only open-sourced in 2013. It is really an extension of LXC’s capabilities.
  • run processes in isolation.
  • Docker is developed in the Go language and utilizes LXC, cgroups, and the Linux kernel itself. Since it’s based on LXC, a Docker container does not include a separate operating system; instead it relies on the operating system’s own functionality as provided by the underlying infrastructure.
  • Docker acts as a portable container engine, packaging the application and all its dependencies in a virtual container that can run on any Linux server.
  • In a VE there is no preloaded emulation manager software as there is in a VM.
  • In a VE, the application (or OS) is spawned in a container and runs with no added overhead, except for a usually minuscule VE initialization process.
  • LXC will boast bare metal performance characteristics because it only packages the needed applications.
  • the OS is also just another application that can be packaged too.
  • a VM, which packages the entire OS and machine setup, including hard drive, virtual processors and network interfaces. The resulting bloated mass usually takes a long time to boot and consumes a lot of CPU and RAM.
  • don’t offer some other neat features of VM’s such as IaaS setups and live migration.
  • Think of LXC as a supercharged chroot on Linux. It allows you to not only isolate applications, but even the entire OS.
  • Libvirt, which allows the use of containers through the LXC driver by connecting to 'lxc:///'.
  • The 'LXC' userspace tooling is not compatible with libvirt, but is more flexible, with more userspace tools.
  • Portable deployment across machines
  • Versioning: Docker includes git-like capabilities for tracking successive versions of a container
  • Component reuse: Docker allows building or stacking of already created packages.
  • Shared libraries: There is already a public registry (http://index.docker.io/ ) where thousands have already uploaded the useful containers they have created.
  • Docker has been taking the devops world by storm since its launch back in 2013.
  • LXC, while older, has not been as popular with developers as Docker has proven to be
  • LXC has a focus on sysadmins that's similar to solutions like the Solaris operating system with its Solaris Zones, Linux OpenVZ, and FreeBSD with its BSD Jails virtualization system
  • Although it started out being built on top of LXC, Docker later moved beyond LXC containers to its own execution environment called libcontainer.
  • Unlike LXC, which launches an operating system init for each container, Docker provides one OS environment, supplied by the Docker Engine
  • LXC tooling sticks close to what system administrators running bare metal servers are used to
  • The LXC command line provides essential commands that cover routine management tasks, including the creation, launch, and deletion of LXC containers.
  • Docker containers aim to be even lighter weight in order to support the fast, highly scalable, deployment of applications with microservice architecture.
  • With backing from Canonical, LXC and LXD have an ecosystem tightly bound to the rest of the open source Linux community.
  • Docker Swarm
  • Docker Trusted Registry
  • Docker Compose
  • Docker Machine
  • Kubernetes facilitates the deployment of containers in your data center by representing a cluster of servers as a single system.
  • Swarm is Docker’s clustering, scheduling and orchestration tool for managing a cluster of Docker hosts. 
  • rkt is a security minded container engine that uses KVM for VM-based isolation and packs other enhanced security features. 
  • Apache Mesos can run different kinds of distributed jobs, including containers. 
  • Elastic Container Service is Amazon’s service for running and orchestrating containerized applications on AWS
  • LXC offers the advantages of a VE on Linux, mainly the ability to isolate your own private workloads from one another. It is a cheaper and faster solution to implement than a VM, but doing so requires a bit of extra learning and expertise.
  • Docker is a significant improvement of LXC’s capabilities.
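
The LXC command line mentioned above stays close to plain system administration; a typical container lifecycle looks roughly like the following (container name and template arguments are illustrative).

    # create a container from the download template, then start it
    lxc-create -n web01 -t download -- -d ubuntu -r jammy -a amd64
    lxc-start -n web01
    lxc-ls --fancy          # list containers and their state
    lxc-attach -n web01     # get a shell inside the running container
    lxc-stop -n web01
    lxc-destroy -n web01
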
張 旭

Incremental Backup - 0 views

  • xtrabackup supports incremental backups, which means that they can copy only the data that has changed since the last backup.
  • You can perform many incremental backups between each full backup, so you can set up a backup process such as a full backup once a week and an incremental backup every day, or full backups every day and incremental backups every hour.
  • each InnoDB page contains a log sequence number, or LSN. The LSN is the system version number for the entire database. Each page’s LSN shows how recently it was changed.
  • In full backups, two types of operations are performed to make the database consistent: committed transactions are replayed from the log file against the data files, and uncommitted transactions are rolled back.
  • You should use the --apply-log-only option to prevent the rollback phase.
  • An incremental backup copies each page whose LSN is newer than the previous incremental or full backup’s LSN.
  • Incremental backups do not actually compare the data files to the previous backup’s data files.
  • you can use --incremental-lsn to perform an incremental backup without even having the previous backup, if you know its LSN
  • Incremental backups simply read the pages and compare their LSN to the last backup’s LSN.
  • without a full backup to act as a base, the incremental backups are useless.
  • The xtrabackup binary writes a file called xtrabackup_checkpoints into the backup’s target directory. This file contains a line showing the to_lsn, which is the database’s LSN at the end of the backup.
  • from_lsn is the starting LSN of the backup and for incremental it has to be the same as to_lsn (if it is the last checkpoint) of the previous/base backup.
  • If you do not use the --apply-log-only option to prevent the rollback phase, then your incremental backups will be useless.
  • run --prepare as usual, but prevent the rollback phase
  • If you restore it and start MySQL, InnoDB will detect that the rollback phase was not performed, and it will do that in the background, as it usually does for a crash recovery upon start.
  • xtrabackup --prepare --apply-log-only --target-dir=/data/backups/base \ --incremental-dir=/data/backups/inc1
  • The final data is in /data/backups/base, not in the incremental directory.
  • Do not run xtrabackup --prepare with the same incremental backup directory (the value of --incremental-dir) more than once.
  • xtrabackup --prepare --target-dir=/data/backups/base \ --incremental-dir=/data/backups/inc2
  • --apply-log-only should be used when merging all incrementals except the last one.
  • Even if the --apply-log-only was used on the last step, backup would still be consistent but in that case server would perform the rollback phase.
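
Putting the bullets above together, a weekly-full/daily-incremental cycle looks like this; the directory names follow the article's /data/backups example.

    # full (base) backup
    xtrabackup --backup --target-dir=/data/backups/base

    # incremental backups, each based on the previous one
    xtrabackup --backup --target-dir=/data/backups/inc1 --incremental-basedir=/data/backups/base
    xtrabackup --backup --target-dir=/data/backups/inc2 --incremental-basedir=/data/backups/inc1

    # prepare: use --apply-log-only on every step except the last incremental
    xtrabackup --prepare --apply-log-only --target-dir=/data/backups/base
    xtrabackup --prepare --apply-log-only --target-dir=/data/backups/base --incremental-dir=/data/backups/inc1
    xtrabackup --prepare --target-dir=/data/backups/base --incremental-dir=/data/backups/inc2

    # the restorable data ends up in /data/backups/base, not in the incremental directories
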
張 旭

Gracefully Shutdown Docker Container - Kakashi's Blog - 1 views

  • The initial idea is to make the application invoke the destructor of each component as soon as it receives specific signals such as SIGTERM and SIGINT
  • When you run a docker container, by default it has a PID namespace, which means the docker process is isolated from other processes on your host.
  • The PID namespace has an important task to reap zombie processes.
  • This uses /bin/bash as PID 1 and runs your program as a subprocess.
  • When a signal is sent to a shell, the signal actually won’t be forwarded to subprocesses.
  • By using the exec form, we can run our program as PID1
  • if you use the exec form to run a shell script that spawns your application, remember to use the exec syscall to overwrite /usr/bin/bash, otherwise it will behave as in scenario 1
  • /bin/bash can handle reaping zombie processes
  • with Tini, SIGTERM properly terminates your process even if you didn’t explicitly install a signal handler for it.
  • run tini as PID1 and it will forward the signal for subprocesses.
  • tini is a signal proxy and it also can deal with zombie process issue automatically.
  • run your program with tini by passing --init flag to docker run
  • use docker stop, docker will wait for 10s for stopping container before killing a process (by default). The main process inside the container will receive SIGTERM, then docker daemon will wait for 10s and send SIGKILL to terminate process.
  • docker kill terminates running containers immediately; it's more like kill -9 / kill -SIGKILL
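
A minimal Dockerfile sketch of the two working options described above (the binary name and base image are illustrative): the exec form makes the application itself PID 1 so it receives SIGTERM directly, while tini can serve as a tiny PID 1 that forwards signals and reaps zombies.

    # Option 1: exec form - the application runs as PID 1
    # (the shell form `CMD ./server` would make the shell PID 1 and swallow SIGTERM)
    FROM debian:bookworm-slim
    COPY server /usr/local/bin/server
    CMD ["/usr/local/bin/server"]

    # Option 2: tini as PID 1, forwarding signals to the application
    #   ADD https://github.com/krallin/tini/releases/download/v0.19.0/tini /tini
    #   RUN chmod +x /tini
    #   ENTRYPOINT ["/tini", "--"]
    #   CMD ["/usr/local/bin/server"]
    #
    # Or leave the image alone and let Docker inject tini at run time:
    #   docker run --init my-image
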
張 旭

Trunk-based Development | Atlassian - 0 views

  • Trunk-based development is a version control management practice where developers merge small, frequent updates to a core “trunk” or main branch.
  • Gitflow and trunk-based development. 
  • Gitflow, which was popularized first, is a stricter development model where only certain individuals can approve changes to the main code. This maintains code quality and minimizes the number of bugs.
  • Trunk-based development is a more open model since all developers have access to the main code. This enables teams to iterate quickly and implement CI/CD.
  • Developers can create short-lived branches with a few small commits compared to other long-lived feature branching strategies.
  • Gitflow is an alternative Git branching model that uses long-lived feature branches and multiple primary branches.
  • Gitflow also has separate primary branch lines for development, hotfixes, features, and releases.
  • Trunk-based development is far more simplified since it focuses on the main branch as the source of fixes and releases.
  • Trunk-based development eases the friction of code integration.
  • trunk-based development model reduces these conflicts.
  • Adding an automated test suite and code coverage monitoring for this stream of commits enables continuous integration.
  • When new code is merged into the trunk, automated integration and code coverage tests run to validate the code quality.
  • Trunk-based development strives to keep the trunk branch “green”, meaning it's ready to deploy at any commit.
  • With continuous integration, developers perform trunk-based development in conjunction with automated tests that run after each commit to the trunk.
  • If trunk-based development was like music it would be a rapid staccato -- short, succinct notes in rapid succession, with the repository commits being the notes.
  • Instead of creating a feature branch and waiting to build out the complete specification, developers can instead create a trunk commit that introduces the feature flag and pushes new trunk commits that build out the feature specification within the flag.
  • Automated testing is necessary for any modern software project intending to achieve CI/CD.
  • Short running unit and integration tests are executed during development and upon code merge.
  • Automated tests provide a layer of preemptive code review.
  • Once a branch merges, it is best practice to delete it.
  • A repository with a large amount of active branches has some unfortunate side effects
  • Merge branches to the trunk at least once a day
  • The “continuous” in CI/CD implies that updates are constantly flowing.
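
In day-to-day Git terms the workflow above boils down to something like the following; branch names are illustrative.

    git switch -c tiny-feature main     # short-lived branch off the trunk
    # ...a few small commits, gated behind a feature flag if the work is incomplete...
    git push -u origin tiny-feature     # open a small merge/pull request; CI runs the automated tests
    # after it merges (ideally the same day):
    git switch main && git pull
    git branch -d tiny-feature          # delete merged branches to keep the branch list clean
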
張 旭

Introducing CNAME Flattening: RFC-Compliant CNAMEs at a Domain's Root - 0 views

  • you can now safely use a CNAME record, as opposed to an A record that points to a fixed IP address, as your root record in CloudFlare DNS without triggering a number of edge case error conditions because you’re violating the DNS spec.
  • CNAME Flattening allowed us to use a root domain while still maintaining DNS fault-tolerance across multiple IP addresses.
  • Traditionally, the root record of a domain needed to point to an IP address (known as an A -- for "address" -- Record).
  • WordPlumblr allows its users to use custom domains that point to the WordPlumblr infrastructure
  • A CNAME is an alias. It allows one domain to point to another domain which, eventually if you follow the CNAME chain, will resolve to an A record and IP address.
  • For example, WordPlumblr might have assigned the CNAME 6equj5.wordplumblr.com for Foo.com. Foo.com and the other customers may have all initially resolved, at the end of the CNAME chain, to the same IP address.
  • you usually don't want to address memory directly but, instead, you set up a pointer to a block of memory where you're going to store something. If the operating system needs to move the memory around then it just updates the pointer to point to wherever the chunk of memory has been moved to.
  • CNAMEs work great for subdomains like www.foo.com or blog.foo.com. Unfortunately, they don't work for a naked domain like foo.com itself.
  • the DNS spec enshrined that the root record -- the naked domain without any subdomain -- could not be a CNAME.
  • Technically, the root could be a CNAME but the RFCs state that once a record has a CNAME it can't have any other entries associated with it
  • a way to support a CNAME at the root, but still follow the RFC and return an IP address for any query for the root record.
  • extended our authoritative DNS infrastructure to, in certain cases, act as a kind of DNS resolver.
  • if there's a CNAME at the root, rather than returning that record directly we recurse through the CNAME chain ourselves until we find an A Record.
  • allows the flexibility of having CNAMEs at the root without breaking the DNS specification.
  • We cache the CNAME responses -- respecting the DNS TTLs, just like a recursor should -- which means often we have the answer without having to traverse the chain.
  • CNAME flattening solved email resolution errors for us which was very key.
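
Conceptually, flattening turns the first record below (what the customer configures at the apex) into the second (what the authoritative server actually answers), with the CNAME chain chased and cached server-side. The TTL and address are documentation placeholders; the hostname reuses the article's WordPlumblr example.

    ; configured at the zone apex:
    example.com.      300  IN  CNAME  6equj5.wordplumblr.com.

    ; answered for an A query on the root, after resolving the chain:
    example.com.      300  IN  A      192.0.2.10
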
張 旭

Using cache in GitLab CI with Docker-in-Docker | $AYMDEV() - 0 views

  • optimize our images.
  • When you build an image, it is made of multiple layers: we add a layer per instruction.
  • If we build the same image again without modifying any file, Docker will use existing layers rather than re-executing the instructions.
  • an image is made of multiple layers, and we can accelerate its build by using layers cache from the previous image version.
  • by using Docker-in-Docker, we get a fresh Docker instance per job, whose local registry is empty.
  • docker build --cache-from "$CI_REGISTRY_IMAGE:latest" -t "$CI_REGISTRY_IMAGE:new-tag"
  • But if you maintain a CHANGELOG in this format, and/or your Git tags are also your Docker tags, you can get the previous version and use the cache from that image version.
  • script: - export PREVIOUS_VERSION=$(perl -lne 'print "v${1}" if /^##\s\[(\d\.\d\.\d)\]\s-\s\d{4}(?:-\d{2}){2}\s*$/' CHANGELOG.md | sed -n '2 p') - docker build --cache-from "$CI_REGISTRY_IMAGE:$PREVIOUS_VERSION" -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG" -f ./prod.Dockerfile .
  • « Docker layer caching » is enough to optimize the build time.
  • Cache in CI/CD is about saving directories or files across pipelines.
  • We're building a Docker image, so dependencies are installed inside a container. We can't cache a dependencies directory if it doesn't exist in the job workspace.
  • Dependencies will always be installed from a container but will be extracted by the GitLab Runner in the job workspace. Our goal is to send the cached version in the build context.
  • We set the directories to cache in the job settings with a key to share the cache per branch and stage.
  • - docker cp app:/var/www/html/vendor/ ./vendor
  • after_script
  • - docker cp app:/var/www/html/node_modules/ ./node_modules
  • To avoid old dependencies being mixed with the new ones, at the risk of keeping unused dependencies in the cache, which would make the cache and images heavier.
  • If you need to cache directories in testing jobs, it's easier: use volumes!
  • version your cache keys!
  • sharing Docker image between jobs
  • In every job, we automatically get artifacts from previous stages.
  • docker save $DOCKER_CI_IMAGE | gzip > app.tar.gz
  • I personally use the « push / pull » technique,
  • we docker push after the build, then we docker pull if needed in the next jobs.
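
A condensed .gitlab-ci.yml sketch combining both techniques above: --cache-from for Docker layer caching plus GitLab's cache: for the dependency directory extracted with docker cp. Image versions, the app container name and the vendor path follow the article's example or are otherwise illustrative.

    build:
      stage: build
      image: docker:24.0
      services:
        - docker:24.0-dind
      cache:
        key: "$CI_COMMIT_REF_SLUG-$CI_JOB_STAGE"
        paths:
          - vendor/
      before_script:
        - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
      script:
        # reuse layers from the previously pushed image
        - docker pull "$CI_REGISTRY_IMAGE:latest" || true
        - docker build --cache-from "$CI_REGISTRY_IMAGE:latest" -t "$CI_REGISTRY_IMAGE:latest" .
        - docker push "$CI_REGISTRY_IMAGE:latest"
        # create a container so the installed dependencies can be copied out
        - docker create --name app "$CI_REGISTRY_IMAGE:latest"
      after_script:
        # extract dependencies into the job workspace so GitLab can cache them
        - docker cp app:/var/www/html/vendor/ ./vendor
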
張 旭

Creating a cluster with kubeadm | Kubernetes - 0 views

  • (Recommended) If you have plans to upgrade this single control-plane kubeadm cluster to high availability you should specify the --control-plane-endpoint to set the shared endpoint for all control-plane nodes
  • set the --pod-network-cidr to a provider-specific value.
  • kubeadm tries to detect the container runtime by using a list of well known endpoints.
  • kubeadm uses the network interface associated with the default gateway to set the advertise address for this particular control-plane node's API server. To use a different network interface, specify the --apiserver-advertise-address=<ip-address> argument to kubeadm init
  • Do not share the admin.conf file with anyone and instead grant users custom permissions by generating them a kubeconfig file using the kubeadm kubeconfig user command.
  • The token is used for mutual authentication between the control-plane node and the joining nodes. The token included here is secret. Keep it safe, because anyone with this token can add authenticated nodes to your cluster.
  • You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.
  • Take care that your Pod network must not overlap with any of the host networks
  • Make sure that your Pod network plugin supports RBAC, and so do any manifests that you use to deploy it.
  • You can install only one Pod network per cluster.
  • The cluster created here has a single control-plane node, with a single etcd database running on it.
  • The node-role.kubernetes.io/control-plane label is such a restricted label and kubeadm manually applies it using a privileged client after a node has been created.
  • By default, your cluster will not schedule Pods on the control plane nodes for security reasons.
  • kubectl taint nodes --all node-role.kubernetes.io/control-plane-
  • remove the node-role.kubernetes.io/control-plane:NoSchedule taint from any nodes that have it, including the control plane nodes, meaning that the scheduler will then be able to schedule Pods everywhere.
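
Pulling those flags together, a typical init invocation looks something like this; the endpoint, CIDR and address are placeholders to adapt to your own network and CNI plugin. The kubeconfig copy afterwards is the usual step from the kubeadm documentation.

    kubeadm init \
      --control-plane-endpoint "cluster-endpoint.example.com:6443" \
      --pod-network-cidr "10.244.0.0/16" \
      --apiserver-advertise-address "192.168.0.10"

    # then, as a regular user, copy the generated admin kubeconfig:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
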
張 旭

Container Runtimes | Kubernetes - 0 views

  • Kubernetes releases before v1.24 included a direct integration with Docker Engine, using a component named dockershim. That special direct integration is no longer part of Kubernetes
  • You need to install a container runtime into each node in the cluster so that Pods can run there.
  • Kubernetes 1.26 requires that you use a runtime that conforms with the Container Runtime Interface (CRI).
  • On Linux, control groups are used to constrain resources that are allocated to processes.
  • Both kubelet and the underlying container runtime need to interface with control groups to enforce resource management for pods and containers and set resources such as cpu/memory requests and limits.
  • When the cgroupfs driver is used, the kubelet and the container runtime directly interface with the cgroup filesystem to configure cgroups.
  • The cgroupfs driver is not recommended when systemd is the init system
  • When systemd is chosen as the init system for a Linux distribution, the init process generates and consumes a root control group (cgroup) and acts as a cgroup manager.
  • Two cgroup managers result in two views of the available and in-use resources in the system.
  • Changing the cgroup driver of a Node that has joined a cluster is a sensitive operation. If the kubelet has created Pods using the semantics of one cgroup driver, changing the container runtime to another cgroup driver can cause errors when trying to re-create the Pod sandbox for such existing Pods. Restarting the kubelet may not solve such errors.
  • The approach to mitigate this instability is to use systemd as the cgroup driver for the kubelet and the container runtime when systemd is the selected init system.
  • Kubernetes 1.26 defaults to using v1 of the CRI API. If a container runtime does not support the v1 API, the kubelet falls back to using the (deprecated) v1alpha2 API instead.
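
In practice, aligning on systemd means setting the driver on both sides; a sketch assuming containerd as the runtime (the TOML section path follows containerd's documented CRI plugin layout).

    # kubelet: KubeletConfiguration (for example, passed via kubeadm init --config)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd

    # containerd: in /etc/containerd/config.toml
    #   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    #     SystemdCgroup = true
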
張 旭

What ChatOps Solutions Should You Use Today? | PäksTech - 0 views

shared by 張 旭 on 16 Feb 22
  • The big elephant in the room is of course Hubot, which now hasn’t seen new commits in over three years.
  • Botkit bots are written in JavaScript and they run on Node.js
  • Errbot is a chatbot written in Python, it comes with a ton of features, and it is extendable with custom plugins.
  • by default they react to !commands in your chatroom. Commands can also trigger on regular expression matches, with or without a bot prefix.
  • Errbot also supports Markdown responses with Jinja2 templating.
  • Errbot supports webhooks; It has a small web server that can translate endpoints to your custom plugins.
  • It’s recommended that you configure this behind a web server such as nginx or Apache.
  • It works with the If This Then That (IFTTT) principle, meaning that you define a set of rules that the system then uses to take action.
  • Lita is a chat bot written in Ruby. Like the other bots I’ve mentioned, it is also open source and supports different chat platforms via plugins.
  • Gort is a newer entrant to the ChatOps space. As the name suggests it has been written in Go, and it is still under active development.
  • can persist information in databases, supports advanced parsers, and is extendable with custom skills.
張 旭

Java microservices architecture by example - 0 views

  • A microservices architecture is a particular case of a service-oriented architecture (SOA)
  • What sets microservices apart is the extent to which these modules are interconnected.
  • Every server comprises just one certain business process and never consists of several smaller servers.
  • Microservices also bring a set of additional benefits, such as easier scaling, the possibility to use multiple programming languages and technologies, and others.
  • Java is a frequent choice for building a microservices architecture as it is a mature language tested over decades and has a multitude of microservices-favorable frameworks, such as legendary Spring, Jersey, Play, and others.
  • A monolithic architecture keeps it all simple. An app has just one server and one database.
  • All the connections between units are inside-code calls.
  • split our application into microservices and got a set of units completely independent for deployment and maintenance.
  • Each of microservices responsible for a certain business function communicates either via sync HTTP/REST or async AMQP protocols.
  • ensure seamless communication between newly created distributed components.
  • The gateway became an entry point for all clients’ requests.
  • We also set up the Zuul 2 framework for our gateway service so that the application could leverage the benefits of non-blocking HTTP calls.
  • we've implemented the Eureka server as our server discovery that keeps a list of utilized user profile and order servers to help them discover each other.
  • We also have a message broker (RabbitMQ) as an intermediary between the notification server and the rest of the servers to allow async messaging in-between.
  • microservices can definitely help when it comes to creating complex applications that deal with huge loads and need continuous improvement and scaling.
張 旭

A Guide to Testing Rails Applications - Ruby on Rails Guides - 0 views

  • Rails tests can also simulate browser requests and thus you can test your application's response without having to test it through your browser.
  • your tests will need a database to interact with as well.
  • By default, every Rails application has three environments: development, test, and production
  • models directory is meant to hold tests for your models
  • controllers directory is meant to hold tests for your controllers
  • integration directory is meant to hold tests that involve any number of controllers interacting
  • Fixtures are a way of organizing test data; they reside in the fixtures folder
  • The test_helper.rb file holds the default configuration for your tests
  • Fixtures allow you to populate your testing database with predefined data before your tests run
  • Fixtures are database independent written in YAML.
  • one file per model.
  • Each fixture is given a name followed by an indented list of colon-separated key/value pairs.
  • Keys which resemble YAML keywords such as 'yes' and 'no' are quoted so that the YAML Parser correctly interprets them.
  • define a reference node between two different fixtures.
  • ERB allows you to embed Ruby code within templates
  • The YAML fixture format is pre-processed with ERB when Rails loads fixtures.
  • Rails by default automatically loads all fixtures from the test/fixtures folder for your models and controllers test.
  • Fixtures are instances of Active Record.
  • access the object directly
  • test_helper.rb specifies the default configuration to run our tests. This is included with all the tests, so any methods added to this file are available to all your tests.
  • test with method names prefixed with test_.
  • An assertion is a line of code that evaluates an object (or expression) for expected results.
  • bin/rake db:test:prepare
  • Every test contains one or more assertions. Only when all the assertions are successful will the test pass.
  • rake test command
  • run a particular test method from the test case by running the test and providing the test method name.
  • The . (dot) above indicates a passing test. When a test fails you see an F; when a test throws an error you see an E in its place.
  • we first wrote a test which fails for a desired functionality, then we wrote some code which adds the functionality and finally we ensured that our test passes. This approach to software development is referred to as Test-Driven Development (TDD).
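
A small illustrative fixture file in the format described above (the model and attributes are hypothetical). Because fixtures are pre-processed with ERB, repetitive rows can also be generated in a loop, and inside a test each row is available as an Active Record instance, e.g. users(:david).

    # test/fixtures/users.yml - one fixture file per model
    david:
      name: David
      email: david@example.com

    steve:
      name: Steve
      email: steve@example.com

    <% 1.upto(3) do |i| %>
    user_<%= i %>:
      name: User <%= i %>
      email: user<%= i %>@example.com
    <% end %>
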
張 旭

Slimming Down Your Models and Controllers with Concerns, Service Objects, and Tableless... - 1 views

  • The single responsibility principle asserts that every class should have exactly one responsibility. In other words, each class should be concerned about one unique nugget of functionality
  • fat models are a little better than fat controllers
  • when every bit of functionality has been encapsulated into its own object, you find yourself repeating code a lot less.
  • objects with a single responsibility can easily be unit tested
  • Just because a vanilla Rails 4 app directory contains models, views, controllers, and helpers does not mean you are restricted to those four domains.
張 旭

Service objects in Rails will help you design clean and maintainable code. Here's how. - 0 views

  • Services have the benefit of concentrating the core logic of the application in a separate object, instead of scattering it around controllers and models.
  • Additional initialize arguments might include other context information if applicable.
  • And as programmers, we know that when something can go wrong, sooner or later it will!
  • we need a way to signal success or failure when using a service
  • what ActiveRecord save method uses
  • if the service's role is to create or update Rails models, it makes sense to return such an object as the result.
  • utility objects to signal success or error
  • services will be used on the boundary between user interface and application
  • All the business logic is encapsulated in services and models
  • how we can use Service Objects, Status Objects and Rails’s Responders to produce a nice, consistent API
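
A bare-bones sketch of the pattern (class, directory and attribute names are invented, not from the article): a service exposing a single call method and returning a small result object that signals success or failure, mirroring how ActiveRecord#save reports its outcome.

    # app/services/create_order.rb
    class CreateOrder
      Result = Struct.new(:success, :order, :errors) do
        def success?
          success
        end
      end

      def initialize(user, params)
        @user = user
        @params = params
      end

      def call
        order = @user.orders.build(@params)
        if order.save
          Result.new(true, order, [])
        else
          Result.new(false, order, order.errors.full_messages)
        end
      end
    end

    # in a controller:
    #   result = CreateOrder.new(current_user, order_params).call
    #   result.success? ? redirect_to(result.order) : render(:new)
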
張 旭

Specification - Swagger - 0 views

shared by 張 旭 on 29 Jul 16
  • A list of parameters that are applicable for all the operations described under this path.
  • MUST NOT include duplicated parameters
  • this field SHOULD be less than 120 characters.
  • Unique string used to identify the operation.
  • The id MUST be unique among all operations described in the API.
  • A list of MIME types the operation can consume.
  • A list of MIME types the operation can produce
  • A unique parameter is defined by a combination of a name and location.
  • There can be one "body" parameter at most.
  • Required. The list of possible responses as they are returned from executing this operation.
  • The transfer protocol for the operation. Values MUST be from the list: "http", "https", "ws", "wss".
  • Declares this operation to be deprecated. Usage of the declared operation should be refrained. Default value is false.
  • A declaration of which security schemes are applied for this operation.
  • A unique parameter is defined by a combination of a name and location.
  • Path
  • Query
  • Header
  • Body
  • Form
  • Required. The location of the parameter. Possible values are "query", "header", "path", "formData" or "body".
  • the parameter value is actually part of the operation's URL
  • Parameters that are appended to the URL
  • The payload that's appended to the HTTP request.
  • Since there can only be one payload, there can only be one body parameter.
  • The name of the body parameter has no effect on the parameter itself and is used for documentation purposes only
  • body and form parameters cannot exist together for the same operation
  • This is the only parameter type that can be used to send files, thus supporting the file type.
  • If the parameter is in "path", this property is required and its value MUST be true.
  • default value is false.
  • The schema defining the type used for the body parameter.
  • The value MUST be one of "string", "number", "integer", "boolean", "array" or "file"
  • Default value is false
  • Required if type is "array". Describes the type of items in the array.
  • Determines the format of the array if type array is used
  • enum
  • pattern
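
A minimal Swagger 2.0 fragment exercising the fields annotated above; the path, operationId and response descriptions are invented for illustration.

    paths:
      /users/{id}:
        get:
          summary: Fetch a single user      # summary SHOULD stay under 120 characters
          operationId: getUser              # must be unique across all operations
          produces:
            - application/json
          parameters:
            - name: id
              in: path                      # path parameters must set required: true
              required: true
              type: string
          responses:
            '200':
              description: The requested user
            '404':
              description: User not found
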