Group items tagged: case

張 旭

How Percona XtraBackup Works - 0 views

  • Percona XtraBackup is based on InnoDB's crash-recovery functionality.
  • it performs crash recovery on the files to make them a consistent, usable database again
  • InnoDB maintains a redo log, also called the transaction log. This contains a record of every change to InnoDB data.
  • When InnoDB starts, it inspects the data files and the transaction log, and performs two steps. It applies committed transaction log entries to the data files, and it performs an undo operation on any transactions that modified data but did not commit.
  • Percona XtraBackup works by remembering the log sequence number (LSN) when it starts, and then copying away the data files.
  • Percona XtraBackup runs a background process that watches the transaction log files, and copies changes from it.
  • Percona XtraBackup needs to do this continually
  • Percona XtraBackup needs the transaction log records for every change to the data files since it began execution.
  • Percona XtraBackup uses Backup locks where available as a lightweight alternative to FLUSH TABLES WITH READ LOCK.
  • Locking is only done for MyISAM and other non-InnoDB tables after Percona XtraBackup finishes backing up all InnoDB/XtraDB data and logs.
  • xtrabackup tries to avoid backup locks and FLUSH TABLES WITH READ LOCK when the instance contains only InnoDB tables. In this case, xtrabackup obtains binary log coordinates from performance_schema.log_status
  • When backup locks are supported by the server, xtrabackup first copies InnoDB data, runs the LOCK TABLES FOR BACKUP and then copies the MyISAM tables.
  • the STDERR of xtrabackup is not written to any file. You will have to redirect it to a file, e.g., xtrabackup OPTIONS 2> backupout.log
  • During the prepare phase, Percona XtraBackup performs crash recovery against the copied data files, using the copied transaction log file. After this is done, the database is ready to restore and use.
  • the tools enable you to do operations such as streaming and incremental backups with various combinations of copying the data files, copying the log files, and applying the logs to the data.
  • To restore a backup with xtrabackup you can use the --copy-back or --move-back options.
  • you may have to change the files’ ownership to mysql before starting the database server, as they will be owned by the user who created the backup (an end-to-end sketch of backup, prepare and restore follows this list).
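
A minimal end-to-end sketch of the workflow described above, assuming a datadir of /var/lib/mysql and a backup directory of /data/backups/base (both paths, and the omitted connection options, are illustrative):

    # Full backup; redirect STDERR to a log file, since xtrabackup does not write it anywhere itself
    xtrabackup --backup --target-dir=/data/backups/base 2> /data/backups/backupout.log

    # Prepare: run crash recovery against the copied files so they become consistent
    xtrabackup --prepare --target-dir=/data/backups/base

    # Restore (server stopped, datadir empty), then hand ownership back to mysql
    xtrabackup --copy-back --target-dir=/data/backups/base
    chown -R mysql:mysql /var/lib/mysql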
張 旭

Incremental Backup - 0 views

  • xtrabackup supports incremental backups, which means that they can copy only the data that has changed since the last backup.
  • You can perform many incremental backups between each full backup, so you can set up a backup process such as a full backup once a week and an incremental backup every day, or full backups every day and incremental backups every hour.
  • each InnoDB page contains a log sequence number, or LSN. The LSN is the system version number for the entire database. Each page’s LSN shows how recently it was changed.
  • In full backups, two types of operations are performed to make the database consistent: committed transactions are replayed from the log file against the data files, and uncommitted transactions are rolled back.
  • You should use the --apply-log-only option to prevent the rollback phase.
  • An incremental backup copies each page whose LSN is newer than the previous incremental or full backup’s LSN.
  • Incremental backups do not actually compare the data files to the previous backup’s data files.
  • you can use --incremental-lsn to perform an incremental backup without even having the previous backup, if you know its LSN
  • Incremental backups simply read the pages and compare their LSN to the last backup’s LSN.
  • without a full backup to act as a base, the incremental backups are useless.
  • The xtrabackup binary writes a file called xtrabackup_checkpoints into the backup’s target directory. This file contains a line showing the to_lsn, which is the database’s LSN at the end of the backup.
  • from_lsn is the starting LSN of the backup and for incremental it has to be the same as to_lsn (if it is the last checkpoint) of the previous/base backup.
  • If you do not use the --apply-log-only option to prevent the rollback phase, then your incremental backups will be useless.
  • run --prepare as usual, but prevent the rollback phase
  • If you restore it and start MySQL, InnoDB will detect that the rollback phase was not performed, and it will do that in the background, as it usually does for a crash recovery upon start.
  • xtrabackup --prepare --apply-log-only --target-dir=/data/backups/base --incremental-dir=/data/backups/inc1
  • The final data is in /data/backups/base, not in the incremental directory.
  • Do not run xtrabackup --prepare with the same incremental backup directory (the value of --incremental-dir) more than once.
  • xtrabackup --prepare --target-dir=/data/backups/base --incremental-dir=/data/backups/inc2
  • --apply-log-only should be used when merging all incrementals except the last one.
  • Even if --apply-log-only was used on the last step, the backup would still be consistent, but in that case the server would perform the rollback phase at startup (the full chain is sketched after this list).
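
Putting the commands above together, a hedged sketch of a base backup, two incrementals, and the prepare steps (directories are illustrative):

    # Base backup, then two incrementals, each taken against the previous one
    xtrabackup --backup --target-dir=/data/backups/base
    xtrabackup --backup --target-dir=/data/backups/inc1 --incremental-basedir=/data/backups/base
    xtrabackup --backup --target-dir=/data/backups/inc2 --incremental-basedir=/data/backups/inc1

    # Prepare: use --apply-log-only for the base and every incremental except the last
    xtrabackup --prepare --apply-log-only --target-dir=/data/backups/base
    xtrabackup --prepare --apply-log-only --target-dir=/data/backups/base --incremental-dir=/data/backups/inc1
    xtrabackup --prepare --target-dir=/data/backups/base --incremental-dir=/data/backups/inc2

    # The merged, restorable data now lives in /data/backups/base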
張 旭

How to Benchmark Performance of MySQL & MariaDB Using SysBench | Severalnines - 1 views

  • SysBench is a C binary which uses Lua scripts to execute benchmarks
  • support for parallelization in the Lua scripts; multiple queries can be executed in parallel
  • by default, it ships with benchmarks which cover most of the cases - OLTP workloads, read-only or read-write, primary key lookups and primary key updates.
  • SysBench is not a tool which you can use to tune configurations of your MySQL servers (unless you prepared Lua scripts with a custom workload or your workload happens to be very similar to the benchmark workloads that SysBench comes with)
  • what it is great for is comparing the performance of different hardware.
  • Every new server acquired should go through a warm-up period during which you will stress it to pinpoint potential hardware defects
  • by executing an OLTP workload which overloads the server, or you can also use dedicated benchmarks for CPU, disk and memory.
  • bulk_insert.lua. This test can be used to benchmark the ability of MySQL to perform multi-row inserts.
  • All oltp_* scripts share a common table structure. The first two of them (oltp_delete.lua and oltp_insert.lua) execute single DELETE and INSERT statements.
  • oltp_point_select, oltp_update_index and oltp_update_non_index. These will execute a subset of queries - primary key-based selects, index-based updates and non-index-based updates.
  • you can run different workload patterns using the same benchmark.
  • Warmup helps to identify “regular” throughput by executing the benchmark for a predefined time, allowing the cache, buffer pools, etc. to warm up.
  • By default SysBench will attempt to execute queries as fast as possible. To simulate slower traffic this option may be used. You can define here how many transactions should be executed per second.
  • SysBench gives you ability to generate different types of data distribution.
  • decide if SysBench should use prepared statements (as long as they are available in the given datastore - for MySQL it means PS will be enabled by default) or not.
  • sysbench ./sysbench/src/lua/oltp_read_write.lua help
  • By default, SysBench will attempt to execute queries in explicit transaction. This way the dataset will stay consistent and not affected: SysBench will, for example, execute INSERT and DELETE on the same row, making sure the data set will not grow (impacting your ability to reproduce results).
  • specify error codes from MySQL which SysBench should ignore (and not kill the connection).
  • the two most popular benchmarks - OLTP read only and OLTP read/write.
  • 1 million rows will result in ~240 MB of data. Ten tables with 1,000,000 rows each equals ~2.4 GB
  • by default, SysBench looks for the ‘sbtest’ schema, which has to exist before you prepare the data set. You may have to create it manually (see the sketch after this list).
  • pass ‘--histogram’ argument to SysBench
  • ~48 GB of data (20 tables, 10,000,000 rows each).
  • if you don’t understand why the performance was what it was, you may draw incorrect conclusions from the benchmarks.
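
A minimal sketch of the OLTP read/write benchmark discussed above; the host, credentials and sizes are illustrative, and the sbtest schema must exist before the prepare step:

    # Create the schema SysBench expects by default
    mysql -h 10.0.0.1 -u sbtest -p -e "CREATE DATABASE IF NOT EXISTS sbtest"

    # Load 10 tables x 1,000,000 rows (~2.4 GB of data)
    sysbench oltp_read_write --db-driver=mysql --mysql-host=10.0.0.1 \
        --mysql-user=sbtest --mysql-password=secret --mysql-db=sbtest \
        --tables=10 --table-size=1000000 prepare

    # Run for 5 minutes with 16 threads, reporting every second and printing latency histograms
    sysbench oltp_read_write --db-driver=mysql --mysql-host=10.0.0.1 \
        --mysql-user=sbtest --mysql-password=secret --mysql-db=sbtest \
        --tables=10 --table-size=1000000 --threads=16 --time=300 \
        --report-interval=1 --histogram run

    # Drop the test tables afterwards
    sysbench oltp_read_write --db-driver=mysql --mysql-host=10.0.0.1 \
        --mysql-user=sbtest --mysql-password=secret --mysql-db=sbtest \
        --tables=10 cleanup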
張 旭

Full Cycle Developers at Netflix - Operate What You Build - 1 views

  • Researching issues felt like bouncing a rubber ball between teams, hard to catch the root cause and harder yet to stop from bouncing between one another.
  • In the past, Edge Engineering had ops-focused teams and SRE specialists who owned the deploy+operate+support parts of the software life cycle
  • hearing about those problems second-hand
  • devs could push code themselves when needed, and also were responsible for off-hours production issues and support requests
  • What were we trying to accomplish and why weren’t we being successful?
  • These specialized roles create efficiencies within each segment while potentially creating inefficiencies across the entire life cycle.
  • Grouping differing specialists together into one team can reduce silos, but having different people do each role adds communication overhead, introduces bottlenecks, and inhibits the effectiveness of feedback loops.
  • devops principles
  • the team that develops a system should also be responsible for operating and supporting that system
  • Each development team owns deployment issues, performance bugs, capacity planning, alerting gaps, partner support, and so on.
  • Those centralized teams act as force multipliers by turning their specialized knowledge into reusable building blocks.
  • Communication and alignment are the keys to success.
  • Full cycle developers are expected to be knowledgeable and effective in all areas of the software life cycle.
  • ramping up on areas they haven’t focused on before
  • We run dev bootcamps and other forms of ongoing training to impart this knowledge and build up these skills
  • “how can I automate what is needed to operate this system?”
  • “what self-service tool will enable my partners to answer their questions without needing me to be involved?”
  • A full cycle developer thinks and acts like an SWE, SDET, and SRE. At times they create software that solves business problems, at other times they write test cases for that, and still other times they automate operational aspects of that system.
  • the need for continuous delivery pipelines, monitoring/observability, and so on.
  • Tooling and automation help to scale expertise, but no tool will solve every problem in the developer productivity and operations space
張 旭

Replication - MongoDB Manual - 0 views

  • A replica set in MongoDB is a group of mongod processes that maintain the same data set.
  • Replica sets provide redundancy and high availability, and are the basis for all production deployments.
  • With multiple copies of data on different database servers, replication provides a level of fault tolerance against the loss of a single database server.
  • replication can provide increased read capacity as clients can send read operations to different servers.
  • A replica set is a group of mongod instances that maintain the same data set (a minimal initiation example follows this list).
  • A replica set contains several data bearing nodes and optionally one arbiter node.
  • one and only one member is deemed the primary node, while the other nodes are deemed secondary nodes.
  • A replica set can have only one primary capable of confirming writes with { w: "majority" } write concern; although in some circumstances, another mongod instance may transiently believe itself to also be primary.
  • The secondaries replicate the primary’s oplog and apply the operations to their data sets such that the secondaries’ data sets reflect the primary’s data set
  • add a mongod instance to a replica set as an arbiter. An arbiter participates in elections but does not hold data
  • An arbiter will always be an arbiter whereas a primary may step down and become a secondary and a secondary may become the primary during an election.
  • Secondaries replicate the primary’s oplog and apply the operations to their data sets asynchronously.
  • These slow oplog messages are logged for the secondaries in the diagnostic log under the REPL component with the text applied op: <oplog entry> took <num>ms.
  • Replication lag refers to the amount of time that it takes to copy (i.e. replicate) a write operation on the primary to a secondary.
  • When a primary does not communicate with the other members of the set for more than the configured electionTimeoutMillis period (10 seconds by default), an eligible secondary calls for an election to nominate itself as the new primary.
  • The replica set cannot process write operations until the election completes successfully.
  • The median time before a cluster elects a new primary should not typically exceed 12 seconds, assuming default replica configuration settings.
  • Factors such as network latency may extend the time required for replica set elections to complete, which in turn affects the amount of time your cluster may operate without a primary.
  • Your application connection logic should include tolerance for automatic failovers and the subsequent elections.
  • MongoDB drivers can detect the loss of the primary and automatically retry certain write operations a single time, providing additional built-in handling of automatic failovers and elections
  • By default, clients read from the primary [1]; however, clients can specify a read preference to send read operations to secondaries.
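
As a rough illustration of the topology above, a three-member replica set can be initiated from any one member; the host names and the set name rs0 are illustrative, and each mongod must already be running with --replSet rs0:

    # Initiate the replica set from one member; an election then picks the primary
    mongosh --host mongo1:27017 --eval '
      rs.initiate({
        _id: "rs0",
        members: [
          { _id: 0, host: "mongo1:27017" },
          { _id: 1, host: "mongo2:27017" },
          { _id: 2, host: "mongo3:27017" }
        ]
      })'

    # Inspect member states (PRIMARY/SECONDARY) and replication progress
    mongosh --host mongo1:27017 --eval 'rs.status()'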
張 旭

Helm | Built-in Objects - 0 views

  • The built-in values always begin with a capital letter.
  • use only initial lower case letters in order to distinguish local names from those built-in.
  • Files.Get is a function for getting a file by name
  • While you cannot use it to access templates, you can use it to access other files in the chart.
  • Release: This object describes the release itself.
  • Values: Values passed into the template from the values.yaml file (the rendering sketch after this list shows how these objects get populated)
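
A quick, hedged way to see these objects being filled in is to render a chart locally (the chart directory ./mychart and the key image.tag are assumptions):

    # .Release.Name becomes "demo", .Values merges values.yaml with the --set override,
    # and templates inside the chart may read bundled files through .Files.Get
    helm template demo ./mychart --set image.tag=1.2.3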
張 旭

Helm | Values Files - 0 views

shared by 張 旭 on 02 Oct 21
  • if this is a subchart, the values.yaml file of a parent chart
  • Individual parameters passed with --set
  • The list above is in order of specificity: values.yaml is the default, which can be overridden by a parent chart's values.yaml, which can in turn be overridden by a user-supplied values file, which can in turn be overridden by --set parameters.
  • --set has a higher precedence than the default values.yaml file
  • Values files can contain more structured content
  • If you need to delete a key from the default values, you may override the value of the key to be null, in which case Helm will remove the key from the overridden values merge (see the sketch after this list).
  • Kubernetes would then fail because you cannot declare more than one livenessProbe handler.
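
A hedged sketch of the precedence rules above; the chart, file and key names are illustrative:

    # Precedence (lowest to highest): chart values.yaml < parent chart values < -f file < --set
    helm install myapp ./mychart -f custom-values.yaml --set replicaCount=3

    # Overriding a key with null removes it from the merged values, e.g. to drop a default
    # livenessProbe handler before supplying a different one
    helm install myapp ./mychart --set livenessProbe.httpGet=null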
張 旭

Helm | Template Functions and Pipelines - 0 views

  • When injecting strings from the .Values object into the template, we ought to quote these strings.
  • Helm has over 60 available functions. Some of them are defined by the Go template language itself. Most of the others are part of the Sprig template library
  • the "Helm template language" as if it is Helm-specific, it is actually a combination of the Go template language, some extra functions, and a variety of wrappers to expose certain objects to the templates.
  • Drawing on a concept from UNIX, pipelines are a tool for chaining together a series of template commands to compactly express a series of transformations.
  • the default function: default DEFAULT_VALUE GIVEN_VALUE
  • all static default values should live in the values.yaml, and should not be repeated using the default command (otherwise they would be redundant).
  • the default command is perfect for computed values, which cannot be declared inside values.yaml.
  • When lookup returns an object, it will return a dictionary.
  • The synopsis of the lookup function is lookup apiVersion, kind, namespace, name -> resource or resource list
  • When no object is found, an empty value is returned. This can be used to check for the existence of an object.
  • The lookup function uses Helm's existing Kubernetes connection configuration to query Kubernetes.
  • Helm is not supposed to contact the Kubernetes API Server during a helm template or a helm install|upgrade|delete|rollback --dry-run, so the lookup function will return an empty list (i.e. dict) in such a case.
  • the operators (eq, ne, lt, gt, and, or and so on) are all implemented as functions. In pipelines, operations can be grouped with parentheses ((, and )).
張 旭

Syntax - Configuration Language | Terraform | HashiCorp Developer - 0 views

  • the native syntax of the Terraform language, which is a rich language designed to be relatively easy for humans to read and write.
  • Terraform's configuration language is based on a more general language called HCL, and HCL's documentation usually uses the word "attribute" instead of "argument."
  • A particular block type may have any number of required labels, or it may require none
  • After the block type keyword and any labels, the block body is delimited by the { and } characters
  • Identifiers can contain letters, digits, underscores (_), and hyphens (-). The first character of an identifier must not be a digit, to avoid ambiguity with literal numbers.
  • The # single-line comment style is the default comment style and should be used in most cases.
  • The idiomatic style is to use the Unix convention
  • Indent two spaces for each nesting level.
  • align their equals signs
  • Use empty lines to separate logical groups of arguments within a block.
  • Use one blank line to separate the arguments from the blocks.
  • "meta-arguments" (as defined by the Terraform language semantics)
  • Avoid separating multiple blocks of the same type with other blocks of a different type, unless the block types are defined by semantics to form a family.
  • Resource names must start with a letter or underscore, and may contain only letters, digits, underscores, and dashes.
  • Each resource is associated with a single resource type, which determines the kind of infrastructure object it manages and what arguments and other attributes the resource supports.
  • Each resource type is implemented by a provider, which is a plugin for Terraform that offers a collection of resource types.
  • By convention, resource type names start with their provider's preferred local name.
  • Most publicly available providers are distributed on the Terraform Registry, which also hosts their documentation.
  • The Terraform language defines several meta-arguments, which can be used with any resource type to change the behavior of resources.
  • use precondition and postcondition blocks to specify assumptions and guarantees about how the resource operates.
  • Some resource types provide a special timeouts nested block argument that allows you to customize how long certain operations are allowed to take before being considered to have failed.
  • Timeouts are handled entirely by the resource type implementation in the provider
  • Most resource types do not support the timeouts block at all.
  • A resource block declares that you want a particular infrastructure object to exist with the given settings.
  • Destroy resources that exist in the state but no longer exist in the configuration.
  • Destroy and re-create resources whose arguments have changed but which cannot be updated in-place due to remote API limitations.
  • Expressions within a Terraform module can access information about resources in the same module, and you can use that information to help configure other resources. Use the <RESOURCE TYPE>.<NAME>.<ATTRIBUTE> syntax to reference a resource attribute in an expression (a console-based sketch follows this list).
  • resources often provide read-only attributes with information obtained from the remote API; this often includes things that can't be known until the resource is created, like the resource's unique random ID.
  • data sources, which are a special type of resource used only for looking up information.
  • some dependencies cannot be recognized implicitly in configuration.
  • local-only resource types exist for generating private keys, issuing self-signed TLS certificates, and even generating random ids.
  • The behavior of local-only resources is the same as all other resources, but their result data exists only within the Terraform state.
  • The count meta-argument accepts a whole number, and creates that many instances of the resource or module.
  • count.index — The distinct index number (starting with 0) corresponding to this instance.
  • the count value must be known before Terraform performs any remote resource actions. This means count can't refer to any resource attributes that aren't known until after a configuration is applied
  • Within nested provisioner or connection blocks, the special self object refers to the current resource instance, not the resource block as a whole.
  • This was fragile, because the resource instances were still identified by their index instead of the string values in the list.
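
A shell-level sketch of the reference syntax above, assuming a configuration that declares resource "aws_instance" "web" with count set and that has already been applied; terraform console evaluates expressions against the current state:

    # <RESOURCE TYPE>.<NAME>[<INDEX>].<ATTRIBUTE>, since count creates indexed instances
    echo 'aws_instance.web[0].id'         | terraform console
    echo 'aws_instance.web[0].private_ip' | terraform console
    # Without count there is a single instance and the index is omitted: aws_instance.web.id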
張 旭

Data Sources - Configuration Language | Terraform | HashiCorp Developer - 0 views

  • Each provider may offer data sources alongside its set of resource types.
  • When distinguishing from data resources, the primary kind of resource (as declared by a resource block) is known as a managed resource.
  • Each data resource is associated with a single data source, which determines the kind of object (or objects) it reads and what query constraint arguments are available.
  • Terraform reads data resources during the planning phase when possible, but announces in the plan when it must defer reading resources until the apply phase to preserve the order of operations.
  • local-only data sources exist for rendering templates, reading local files, and rendering AWS IAM policies.
  • As with managed resources, when count or for_each is present it is important to distinguish the resource itself from the multiple resource instances it creates. Each instance will separately read from its data source with its own variant of the constraint arguments, producing an indexed result.
  • Data instance arguments may refer to computed values, in which case the attributes of the instance itself cannot be resolved until all of its arguments are defined.
張 旭

Cluster Networking - Kubernetes - 0 views

  • Networking is a central part of Kubernetes, but it can be challenging to understand exactly how it is expected to work
  • Highly-coupled container-to-container communications
  • Pod-to-Pod communications
  • this is the primary focus of this document
    • 張 旭
       
      Cluster networking is mainly concerned with: Pod-to-Pod connectivity.
  • Pod-to-Service communications
  • External-to-Service communications
  • Kubernetes is all about sharing machines between applications.
  • sharing machines requires ensuring that two applications do not try to use the same ports.
  • Dynamic port allocation brings a lot of complications to the system
  • Every Pod gets its own IP address
  • do not need to explicitly create links between Pods
  • almost never need to deal with mapping container ports to host ports.
  • Pods can be treated much like VMs or physical hosts from the perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration.
  • pods on a node can communicate with all pods on all nodes without NAT (a quick check is sketched at the end of this list)
  • agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node
  • pods in the host network of a node can communicate with all pods on all nodes without NAT
  • If your job previously ran in a VM, your VM had an IP and could talk to other VMs in your project. This is the same basic model.
  • containers within a Pod share their network namespaces - including their IP address
  • containers within a Pod can all reach each other’s ports on localhost
  • containers within a Pod must coordinate port usage
  • “IP-per-pod” model.
  • request ports on the Node itself which forward to your Pod (called host ports), but this is a very niche operation
  • The Pod itself is blind to the existence or non-existence of host ports.
  • AOS is an Intent-Based Networking system that creates and manages complex datacenter environments from a simple integrated platform.
  • Cisco Application Centric Infrastructure offers an integrated overlay and underlay SDN solution that supports containers, virtual machines, and bare metal servers.
  • AOS Reference Design currently supports Layer-3 connected hosts that eliminate legacy Layer-2 switching problems.
  • The AWS VPC CNI offers integrated AWS Virtual Private Cloud (VPC) networking for Kubernetes clusters.
  • users can apply existing AWS VPC networking and security best practices for building Kubernetes clusters.
  • Using this CNI plugin allows Kubernetes pods to have the same IP address inside the pod as they do on the VPC network.
  • The CNI allocates AWS Elastic Networking Interfaces (ENIs) to each Kubernetes node and using the secondary IP range from each ENI for pods on the node.
  • Big Cloud Fabric is a cloud native networking architecture, designed to run Kubernetes in private cloud/on-premises environments.
  • Cilium is L7/HTTP aware and can enforce network policies on L3-L7 using an identity based security model that is decoupled from network addressing.
  • CNI-Genie is a CNI plugin that enables Kubernetes to simultaneously have access to different implementations of the Kubernetes network model in runtime.
  • CNI-Genie also supports assigning multiple IP addresses to a pod, each from a different CNI plugin.
  • cni-ipvlan-vpc-k8s contains a set of CNI and IPAM plugins to provide a simple, host-local, low latency, high throughput, and compliant networking stack for Kubernetes within Amazon Virtual Private Cloud (VPC) environments by making use of Amazon Elastic Network Interfaces (ENI) and binding AWS-managed IPs into Pods using the Linux kernel’s IPvlan driver in L2 mode.
  • to be straightforward to configure and deploy within a VPC
  • Contiv provides configurable networking
  • Contrail, based on Tungsten Fabric, is a truly open, multi-cloud network virtualization and policy management platform.
  • DANM is a networking solution for telco workloads running in a Kubernetes cluster.
  • Flannel is a very simple overlay network that satisfies the Kubernetes requirements.
  • Any traffic bound for that subnet will be routed directly to the VM by the GCE network fabric.
  • sysctl net.ipv4.ip_forward=1
  • Jaguar provides overlay network using vxlan and Jaguar CNIPlugin provides one IP address per pod.
  • Knitter is a network solution which supports multiple networking in Kubernetes.
  • Kube-OVN is an OVN-based kubernetes network fabric for enterprises.
  • Kube-router provides a Linux LVS/IPVS-based service proxy, a Linux kernel forwarding-based pod-to-pod networking solution with no overlays, and iptables/ipset-based network policy enforcer.
  • If you have a “dumb” L2 network, such as a simple switch in a “bare-metal” environment, you should be able to do something similar to the above GCE setup.
  • Multus is a Multi CNI plugin to support the Multi Networking feature in Kubernetes using CRD based network objects in Kubernetes.
  • NSX-T can provide network virtualization for a multi-cloud and multi-hypervisor environment and is focused on emerging application frameworks and architectures that have heterogeneous endpoints and technology stacks.
  • NSX-T Container Plug-in (NCP) provides integration between NSX-T and container orchestrators such as Kubernetes
  • Nuage uses the open source Open vSwitch for the data plane along with a feature rich SDN Controller built on open standards.
  • OpenVSwitch is a somewhat more mature but also complicated way to build an overlay network
  • OVN is an opensource network virtualization solution developed by the Open vSwitch community.
  • Project Calico is an open source container networking provider and network policy engine.
  • Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet
  • Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking.
  • Calico can also be run in policy enforcement mode in conjunction with other networking solutions such as Flannel, aka canal, or native GCE, AWS or Azure networking.
  • Romana is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network
  • Weave Net runs as a CNI plug-in or stand-alone. In either version, it doesn’t require any configuration or extra code to run, and in both cases, the network provides one IP address per pod - as is standard for Kubernetes.
  • The network model is implemented by the container runtime on each node.
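
A quick, hedged way to observe the IP-per-pod model described above in a test cluster (image names are illustrative):

    # Start a pod and read the IP assigned to it
    kubectl run web --image=nginx
    kubectl wait --for=condition=Ready pod/web
    POD_IP=$(kubectl get pod web -o jsonpath='{.status.podIP}')

    # From another pod, scheduled on any node, reach that IP directly: no NAT, no host-port mapping
    kubectl run client --rm -it --image=busybox --restart=Never -- wget -qO- "http://$POD_IP"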
張 旭

Java microservices architecture by example - 0 views

  • A microservices architecture is a particular case of a service-oriented architecture (SOA)
  • What sets microservices apart is the extent to which these modules are interconnected.
  • Every server implements just one specific business process and never consists of several smaller servers.
  • Microservices also bring a set of additional benefits, such as easier scaling, the possibility to use multiple programming languages and technologies, and others.
  • Java is a frequent choice for building a microservices architecture as it is a mature language tested over decades and has a multitude of microservices-favorable frameworks, such as legendary Spring, Jersey, Play, and others.
  • A monolithic architecture keeps it all simple. An app has just one server and one database.
  • All the connections between units are inside-code calls.
  • we split our application into microservices and got a set of units completely independent for deployment and maintenance.
  • Each of microservices responsible for a certain business function communicates either via sync HTTP/REST or async AMQP protocols.
  • ensure seamless communication between newly created distributed components.
  • The gateway became an entry point for all clients’ requests.
  • We also set up the Zuul 2 framework for our gateway service so that the application could leverage the benefits of non-blocking HTTP calls.
  • we've implemented the Eureka server as our server discovery that keeps a list of utilized user profile and order servers to help them discover each other.
  • We also have a message broker (RabbitMQ) as an intermediary between the notification server and the rest of the servers to allow async messaging in-between.
  • microservices can definitely help when it comes to creating complex applications that deal with huge loads and need continuous improvement and scaling.