"Project Fonos is open-source telecommunications for the cloud. It helps VoIP integrators quickly deploy new networks and benefit from value-added services such as Programmable Voice, Messaging, and Video. This repository assembles the various components needed to deploy a telephony system at scale."
"Co" stands for cooperation. A coroutine is asked to (or better, expected to) willingly suspend its execution to give other coroutines a chance to execute too. So a coroutine is about sharing CPU time voluntarily, so that others can use the same resource it is using.
A thread, on the other hand, does not need to suspend its own execution. Being suspended is completely transparent to the thread: the operating system's scheduler, driven by hardware timer interrupts, forces the thread to pause and later resume.
Because only one coroutine runs at a time on a given thread, coroutines are not executed concurrently, and preemption-style race conditions cannot occur between them (though logical races at suspension points are still possible).
Concurrency is the separation of tasks to provide interleaved execution.
Parallelism is the simultaneous execution of multiple pieces of work in order to increase speed.
With threads, the operating system switches running threads preemptively according to its scheduler, which is an algorithm in the operating system kernel.
With coroutines, the programmer and the programming language determine when to switch coroutines.
In contrast to threads, which are pre-emptively scheduled by the operating system, coroutine switches are cooperative, meaning the programmer (and possibly the programming language and its runtime) controls when a switch will happen.
Coroutines are a form of sequential processing: only one is executing at any given time.
Threads are (at least conceptually) a form of concurrent processing: multiple threads may be executing at any given time.
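The voluntary hand-off described above can be sketched with Ruby's Fiber, a built-in coroutine primitive (the variable names here are illustrative): the fiber runs only until it calls `Fiber.yield`, and nothing preempts it in between.

```ruby
log = []

# A coroutine that willingly suspends itself after each step.
worker = Fiber.new do
  3.times do |i|
    log << "step #{i}"
    Fiber.yield          # voluntarily hand control back to the caller
  end
end

# The caller decides when the coroutine runs again — no preemption.
3.times do
  worker.resume          # run the fiber until its next Fiber.yield
  log << "main"
end

log  # => ["step 0", "main", "step 1", "main", "step 2", "main"]
```

The interleaving is fully deterministic because every switch point is written explicitly in the code, which is exactly what distinguishes coroutines from preemptively scheduled threads.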
Service objects have the benefit of concentrating the core logic of the application in a separate object, instead of scattering it across controllers and models.
Additional initialize arguments might include other context information if applicable.
And as programmers, we know that when something can go wrong, sooner or later it will!
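A minimal sketch of such a service object (all class, method, and argument names here are hypothetical, not from a real application): the core logic lives in a single `call` method, additional context arrives through `initialize`, and failures are returned as data instead of leaking exceptions to the caller.

```ruby
# Hypothetical service object — names are illustrative only.
class ChargeOrder
  def initialize(order, gateway:)
    @order = order       # additional context passed via initialize
    @gateway = gateway
  end

  # Single public entry point: keeps the core logic in one place,
  # out of controllers and models.
  def call
    @gateway.charge(@order[:amount])
    { success: true }
  rescue StandardError => e
    # When something can go wrong, sooner or later it will —
    # so the failure is captured and returned as a result object.
    { success: false, error: e.message }
  end
end

# Stub gateway so the sketch runs without a real payment provider.
FakeGateway = Struct.new(:should_fail) do
  def charge(amount)
    raise "card declined" if should_fail
  end
end

result = ChargeOrder.new({ amount: 100 }, gateway: FakeGateway.new(false)).call
# result => { success: true }
```

Returning a result hash (rather than raising) is one common convention; some codebases prefer a dedicated result object or raising domain-specific errors instead.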
Services are an abstract way of exposing an application running on a set of pods as a network service.
Pods are ephemeral: when they die, they are not resurrected. Instead, a controller (such as a Deployment) creates replacement pods on the same node or on a new node once a pod dies.
A service provides a single, stable point of access and allows you to dynamically reach a group of replica pods, even as individual pods come and go.
For internal application access within a Kubernetes cluster, ClusterIP is the preferred method. To expose a service to external network requests, NodePort, LoadBalancer, and Ingress are the available options.
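A ClusterIP Service for internal access might look like the following sketch (the name, labels, and ports are placeholders, not from a real deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service        # hypothetical service name
spec:
  type: ClusterIP          # the default: an internal-only virtual IP
  selector:
    app: web               # routes to pods labeled app=web
  ports:
  - port: 80               # port exposed on the cluster-internal IP
    targetPort: 8080       # container port on the backing pods
```

Because the selector matches pods by label rather than by identity, the Service keeps working as pods die and are replaced.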
Kubernetes Ingress is an API object that provides routing rules to manage external users' access to the services in a Kubernetes cluster, typically via HTTPS/HTTP.
Beyond basic routing, Ingress can also provide content-based routing, support for multiple protocols, and authentication.
Ingress is made up of an Ingress API object and the Ingress Controller.
Kubernetes Ingress is an API object that describes the desired state for exposing services to the outside of the Kubernetes cluster.
An Ingress Controller reads and processes the Ingress Resource information and usually runs as pods within the Kubernetes cluster.
If Kubernetes Ingress is the API object that provides routing rules to manage external access to services, Ingress Controller is the actual implementation of the Ingress API.
The Ingress Controller is usually a load balancer for routing external traffic to your Kubernetes cluster and is responsible for L4-L7 network services.
Layer 7 (L7) refers to the application layer of the OSI model: external connections are load-balanced across pods based on the contents of the requests.
If Kubernetes Ingress is a computer, then the Ingress Controller is the programmer using the computer and taking action.
Ingress Rules are a set of rules for processing inbound HTTP traffic. An Ingress with no rules sends all traffic to a single default backend service.
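An Ingress Resource with one routing rule and a default backend might be sketched as follows (the hostnames, service names, and ports are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress      # hypothetical name
spec:
  defaultBackend:            # traffic matching no rule goes here
    service:
      name: fallback-service
      port:
        number: 80
  rules:
  - host: app.example.com    # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
```

This object only describes the desired routing; nothing happens until an Ingress Controller in the cluster reads it and configures the actual load balancer.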
The Ingress Controller is an application that runs in a Kubernetes cluster and configures an HTTP load balancer according to Ingress Resources.
The load balancer can be a software load balancer running in the cluster or a hardware or cloud load balancer running externally.
ClusterIP is the preferred option for internal service access and uses an internal IP address to reach the service.
A NodePort is not a machine but a Service type: it exposes a service on a static port number on every node (VM or physical machine) in the cluster, making the service reachable at any node's IP address on that port.
A NodePort would typically be used to expose a single service, with no load-balancing requirements across multiple services.
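The NodePort variant differs from a ClusterIP Service only in its `type` and the extra static port opened on each node; a sketch with placeholder names and ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport         # hypothetical name
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80                 # cluster-internal port (ClusterIP still exists)
    targetPort: 8080         # container port on the pods
    nodePort: 30080          # static port opened on every node (default range 30000-32767)
```

If `nodePort` is omitted, Kubernetes picks a free port from the allowed range automatically.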
Ingress enables you to consolidate the traffic-routing rules into a single resource and runs as part of a Kubernetes cluster.
An application is accessed from the Internet via Port 80 (HTTP) or Port 443 (HTTPS), and Ingress is an object that allows access to your Kubernetes services from outside the Kubernetes cluster.
To implement Ingress, you need to configure an Ingress Controller in your cluster—it is responsible for processing Ingress Resource information and allowing traffic based on the Ingress Rules.