automatically discover services on the Docker host and let Træfik reconfigure itself when containers are created (or shut down), so HTTP traffic can be routed accordingly.
use Træfik as a layer-7 load balancer with SSL termination for a set of micro-services used to run a web application.
Docker containers can only communicate with each other over TCP when they share at least one network.
Under the hood, Docker creates iptables rules so containers can't reach other containers unless you explicitly allow it, for example by attaching them to a shared network.
Træfik can listen to Docker events and reconfigure its own internal configuration when containers are created (or shut down).
Enable the Docker provider and listen for container events on the Docker unix socket we've mounted earlier.
Enable automatic request and configuration of SSL certificates using Let's Encrypt.
These certificates will be stored in the acme.json file, which you can back up yourself and store off-premises.
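As a rough sketch of how this can be wired up with docker-compose (assuming Træfik 1.7; the e-mail address, file paths, and the external web network are placeholders you would adjust):

```yaml
version: "3"

services:
  traefik:
    image: traefik:1.7
    command:
      # Docker provider: read configuration from container labels and watch for events
      - "--docker"
      - "--docker.watch"
      # Only route containers that explicitly opt in with traefik.enable=true
      - "--docker.exposedByDefault=false"
      # HTTP and HTTPS entrypoints
      - "--entryPoints=Name:http Address::80"
      - "--entryPoints=Name:https Address::443 TLS"
      - "--defaultEntryPoints=http,https"
      # Let's Encrypt: request certificates automatically and store them in acme.json
      - "--acme"
      - "--acme.email=you@example.com"
      - "--acme.entryPoint=https"
      - "--acme.httpChallenge"
      - "--acme.httpChallenge.entryPoint=http"
      - "--acme.storage=/acme.json"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # The Docker socket lets Træfik listen to container events
      - /var/run/docker.sock:/var/run/docker.sock
      # Persist issued certificates so they can be backed up and survive restarts
      - ./acme.json:/acme.json
    networks:
      - web

networks:
  web:
    external: true
```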
none of the application containers publishes any ports to the host -- everything is routed through Docker networks.
Thanks to Docker labels, we can tell Træfik how to create its internal routing configuration.
container labels and service labels
With the traefik.enable label, we tell Træfik to include this container in its internal configuration.
With the traefik.docker.network label, we tell Træfik to use the web network to route HTTP traffic to this container.
Service labels allow managing many routes for the same container.
When both container labels and service labels are defined, container labels are only used as default values for missing service labels; no frontend or backend is created from container labels alone.
In the example, two service names are defined: basic and admin.
Together they create two frontends and two backends.
Always specify the correct port where the container expects HTTP traffic using the traefik.port label.
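Putting container labels and service labels together, a sketch of an application service in docker-compose might look like this (the image name, hostnames, and ports are illustrative):

```yaml
services:
  app:
    image: my-org/my-app            # illustrative image
    networks:
      - web
    labels:
      # Include this container in Træfik's internal configuration
      - traefik.enable=true
      # Route HTTP traffic to this container over the shared web network
      - traefik.docker.network=web
      # Container labels act only as defaults for the service labels below
      - traefik.port=8080
      # "basic" service: the public frontend/backend pair
      - traefik.basic.frontend.rule=Host:app.example.com
      - traefik.basic.port=8080
      # "admin" service: a second frontend/backend pair on another hostname and port
      - traefik.admin.frontend.rule=Host:admin.app.example.com
      - traefik.admin.port=9000
```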
Keep in mind that all containers placed in the same network as Træfik are automatically reachable from the outside world.
With the traefik.frontend.auth.basic label, it's possible for Træfik to provide an HTTP basic-auth challenge for the endpoints you provide the label for.
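For example (the user name and htpasswd-generated hash are placeholders; note that $ characters have to be doubled in docker-compose so they aren't treated as variables):

```yaml
labels:
  # Protect this frontend with HTTP basic auth; generate a real hash with: htpasswd -nb admin <password>
  - traefik.frontend.auth.basic=admin:$$apr1$$examplehash$$replaceme
```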
There's a lot wrong with this: you could be pulling a version of the code that has known exploits, has a bug in it, or, worse, has malware bundled in on purpose. You just don't know.
Keep Base Images Small
The full Node.js image, for example, includes an extra 600 MB of libraries you don't need.
the infrastructure layer is now abstracted and treated as code; deploying a new application may require deploying new infrastructure code as well.
"big bang" deployments update whole or large parts of an application in one fell swoop.
Big bang deployments required the business to conduct extensive development and testing before release, often associated with the "waterfall model" of large sequential releases.
Rollbacks are often costly, time-consuming, or even impossible.
In a rolling deployment, an application’s new version gradually replaces the old one.
new and old versions will coexist without affecting functionality or user experience.
Each container is modified to download the latest image from the app vendor’s site.
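As an illustration (the names, replica count, and image tag are hypothetical), Kubernetes expresses a rolling deployment declaratively in a Deployment's update strategy:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # hypothetical application name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1              # at most one extra Pod during the rollout
      maxUnavailable: 1        # at most one Pod taken down at a time
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-org/my-app:2.0.0   # the new version gradually replaces the old one
          ports:
            - containerPort: 8080
```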
In blue-green deployment, two identical production environments work in parallel.
Once the testing results are successful, application traffic is routed from blue to green.
In a blue-green deployment, both systems use the same persistence layer or database back end.
You can have the blue environment use the primary database for write operations and the green environment use the secondary for read operations.
Blue-green deployments rely on traffic routing.
If traffic is switched by updating DNS records, long TTL values can delay these changes.
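One hedged sketch of the traffic switch in Kubernetes: keep the blue and green Deployments side by side and repoint a Service's selector from the blue track to the green one (all names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    track: green      # switch this from "blue" to "green" to route traffic to the new environment
  ports:
    - port: 80
      targetPort: 8080
```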
The main challenge of canary deployment is to devise a way to route some users to the new application.
Using application logic to unlock new features for specific users and groups.
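As one simple sketch of the routing piece (assuming Kubernetes without a service mesh, and with illustrative names): run a small canary Deployment next to the stable one behind the same Service, so the replica ratio roughly determines the share of traffic hitting the new version.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-canary
spec:
  replicas: 1                    # e.g. 1 canary Pod next to 9 stable Pods ~= 10% of traffic
  selector:
    matchLabels:
      app: my-app
      track: canary
  template:
    metadata:
      labels:
        app: my-app              # shares the "app: my-app" label the Service selects on
        track: canary
    spec:
      containers:
        - name: my-app
          image: my-org/my-app:2.1.0-rc1   # candidate version under evaluation
```

The Service would select only on app: my-app, so requests are spread across both tracks; routing specific users or groups instead requires application logic or a smarter router.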
With CD, the CI-built code artifact is packaged and always ready to be deployed in one or more environments.
Use Build Automation tools to automate environment builds
Use configuration management tools
Enable automated rollbacks for deployments
An application performance monitoring (APM) tool can help your team monitor critical performance metrics including server response times after deployments.
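A hypothetical sketch of such a pipeline as a GitHub Actions workflow (the image name, test command, and deploy step are placeholders):

```yaml
name: ci-cd
on:
  push:
    branches: [main]

jobs:
  build-test-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # CI: build and test a versioned artifact on every push
      - run: docker build -t my-org/my-app:${{ github.sha }} .
      - run: make test                    # placeholder for the project's test suite
      # CD: the tested image is pushed and is always ready to be deployed
      # (assumes registry credentials are already configured)
      - run: docker push my-org/my-app:${{ github.sha }}
      # Deploy (placeholder); rolling back means re-deploying a previously pushed SHA
      - run: kubectl set image deployment/my-app my-app=my-org/my-app:${{ github.sha }}
```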
In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service).
The set of Pods targeted by a Service is usually determined by a selector.
If you're able to use Kubernetes APIs for service discovery in your application, you can query the API server for Endpoints, which get updated whenever the set of Pods in a Service changes.
A Service in Kubernetes is a REST object, similar to a Pod.
The name of a Service object must be a valid DNS label name.
Kubernetes assigns this Service an IP address (sometimes called the "cluster IP"), which is used by the Service proxies.
A Service can map any incoming port to a targetPort. By default and for convenience, the targetPort is set to the same value as the port field.
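For example, a Service along the lines of the one in the Kubernetes documentation, selecting Pods labelled app=MyApp and mapping incoming port 80 to targetPort 9376:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp          # the set of Pods this Service targets
  ports:
    - protocol: TCP
      port: 80          # port exposed on the Service's cluster IP
      targetPort: 9376  # port the Pods actually listen on
```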
The default protocol for Services is TCP
As many Services need to expose more than one port, Kubernetes supports multiple port definitions on a Service object. Each port definition can have the same protocol, or a different one.
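For example, the same Service with two named ports (names and numbers are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - name: http        # port names are required when more than one port is defined
      protocol: TCP
      port: 80
      targetPort: 9376
    - name: https
      protocol: TCP
      port: 443
      targetPort: 9377
```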
Because this Service has no selector, the corresponding Endpoints object is not created automatically. You can map the Service to the network address and port where it's running by adding an Endpoints object manually.
Endpoint IP addresses cannot be the cluster IPs of other Kubernetes Services
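A sketch of a selector-less Service paired with a manually managed Endpoints object (the IP address is just an example backend outside the cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service      # must match the name of the Service
subsets:
  - addresses:
      - ip: 192.0.2.42  # example backend; must not be the cluster IP of another Service
    ports:
      - port: 9376
```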
Kubernetes ServiceTypes allow you to specify what kind of Service you want.
The default is ClusterIP
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer.
ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
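For example, an ExternalName Service that resolves to an external hostname (the names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: my.database.example.com   # clients resolving my-service get a CNAME to this host
```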
You can also use Ingress to expose your Service. Ingress is not a Service type, but it acts as the entry point for your cluster.
If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by the --service-node-port-range flag (default: 30000-32767).
The default for --nodeport-addresses is an empty list. This means that kube-proxy should consider all available network interfaces for NodePort.
you need to take care of possible port collisions yourself.
You also have to use a valid port number, one that's inside the range configured for NodePort use.
The Service is visible as <NodeIP>:spec.ports[*].nodePort and .spec.clusterIP:spec.ports[*].port.
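A minimal NodePort sketch (the nodePort field is optional; if you pick one yourself it has to sit in the configured range and avoid collisions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
    - port: 80          # reachable inside the cluster as <clusterIP>:80
      targetPort: 9376  # port the Pods listen on
      nodePort: 30007   # reachable from outside as <NodeIP>:30007
```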
Microservices also bring a set of additional benefits, such as easier scaling and the possibility to use multiple programming languages and technologies.
Java is a frequent choice for building a microservices architecture, as it is a mature language tested over decades and has a multitude of microservices-friendly frameworks, such as the legendary Spring, Jersey, and Play.
A monolithic architecture keeps it all simple. An app has just one server and one database.
All the connections between units are in-code method calls.
We split our application into microservices and got a set of units that are completely independent for deployment and maintenance.
Each microservice is responsible for a certain business function and communicates either via synchronous HTTP/REST or asynchronous AMQP protocols.
We had to ensure seamless communication between the newly created distributed components.
The gateway became the entry point for all client requests.
We also set up the Zuul 2 framework for our gateway service so that the application could leverage the benefits of non-blocking HTTP calls.
We implemented a Eureka server for service discovery; it keeps a list of the available user profile and order services and helps them discover each other.
We also have a message broker (RabbitMQ) as an intermediary between the notification service and the rest of the services, allowing asynchronous messaging between them.
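As a rough, hypothetical sketch of how these pieces could be wired together with docker-compose (the image names and ports are placeholders, not the project's actual code):

```yaml
version: "3"

services:
  gateway:                      # Zuul 2 API gateway: single entry point for all clients
    image: my-org/gateway
    ports:
      - "8080:8080"
    depends_on:
      - eureka

  eureka:                       # Eureka server for service discovery
    image: my-org/eureka-server
    ports:
      - "8761:8761"

  user-profile-service:         # registers with Eureka, talks REST to other services
    image: my-org/user-profile-service
    depends_on:
      - eureka

  order-service:
    image: my-org/order-service
    depends_on:
      - eureka
      - rabbitmq

  notification-service:         # consumes async messages from RabbitMQ
    image: my-org/notification-service
    depends_on:
      - rabbitmq

  rabbitmq:                     # message broker for async AMQP messaging
    image: rabbitmq:3-management
    ports:
      - "15672:15672"
```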
microservices can definitely help when it comes to creating complex applications that deal with huge loads and need continuous improvement and scaling.