Topic: Microservices, Docker and Kubernetes
Monolithic Architecture vs. Microservices Architecture
Microservices separate business-logic functions. Instead of one big program (the monolithic approach), several smaller applications (microservices) are used. They communicate with each other through well-defined APIs, usually over HTTP.
A monolithic architecture is a tiered, layered approach: one big single application contains the UI, the core business logic, and the data access layer, all communicating with one single giant database where everything is stored.
In a microservices architecture, when a user interacts with the UI it triggers independent microservices, each using a shared or its own database, each fulfilling one specific business function or element of the data access layer.
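As a concrete illustration of services communicating over well-defined HTTP APIs, here is a minimal sketch using only the Python standard library; the service name, route, port, and payload are invented for this example:

```python
# A minimal sketch of two "microservices" talking over HTTP.
# Service name, port, route, and payload are illustrative assumptions.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# "Invoice" microservice: one small, single-purpose HTTP API.
class InvoiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"order_id": 42, "total": 19.99}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 8081), InvoiceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another service (or the UI) consumes it through the well-defined API.
with urlopen("http://127.0.0.1:8081/invoice/42") as resp:
    invoice = json.load(resp)

print(invoice["total"])  # 19.99
server.shutdown()
```

Each such service could be developed, deployed, and scaled independently, which is the core idea of the architecture.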
Microservices are popular today.
The Advantages of using Microservices:
- Microservices can be programmed in any programming language best suited for the purpose, as long as they can use a common API to communicate with each other and/or with the UI. As discussed, HTTP is the most commonly used.
- Works well with smaller teams, each assigned to and responsible for a single function (microservice), instead of one big team working on an overly complex single application that does and contains everything.
- Easier fault isolation: the microservices architecture breaks a single monolithic application into a smaller set of single functions, making it easier to troubleshoot what went wrong and where. It also provides a certain degree of fault tolerance, because a failure in one microservice does not always affect the functionality or user experience of the application as a whole, as long as the failed service is not part of some critical core functionality, and depending, of course, on how much the other microservices depend on the one that failed.
For example, if the failed microservice is the one that prints the invoice once the user completes an order via the UI, this single fault, while inconvenient, does not make the application as a whole non-functional; the failure affects only part of its functionality. Other microservices might also offer an alternative, such as emailing the generated invoice for the user's order, so while the invoice-printing microservice is down, the user still has another way to achieve the same or a similar result.
One microservice can also sometimes take over another microservice's function on failure, providing even more fault tolerance.
- Pairs well with container architectures: each microservice can be containerized on its own.
- Dynamically scalable, both up and down. In the era of cloud computing, where on-demand instances are expensive, this lets companies save a lot of money.
The Disadvantages of using Microservices:
- Complex networking: going from one logically laid-out application to X number of microservices, with or without their own databases, and with a sometimes confusing web of communication between them, does not always make troubleshooting simple unless you have grown familiar with the design and inner workflows of the application (for instance, through experience as an application support engineer for that specific application).
Imagine tracing the flow through the steps from start to finish across such a complex web of networking.
- Requires extra knowledge of and familiarity with topics such as the architecture (how you will build it out piece by piece), containerization, and container orchestration (Kubernetes, for example).
- Overhead: the knowledge mentioned above, plus overhead on the infrastructure side, as databases and servers need to be spun up instantly.
What is Docker?
As we saw in the episodes on virtualization (Episodes 14-15), Docker is a type 2 virtualization in the form of a PaaS (Platform as a Service), which delivers software in packages called containers using OS-level virtualization.
Docker is an application build and deployment tool. It is based on the idea that you can package your code with its dependencies into a deployable unit called a container. Containers have been around for quite some time.
Some say they were developed by Sun Microsystems and released as part of Solaris 10 in 2005 as Zones; others state that BSD Jails were the first container technology.
Another definition: Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications, whether on laptops, data center VMs, or in the cloud.
Containers are a way to package software in a format that can run isolated on a shared operating system. Unlike VMs, containers do not bundle a full operating system; only the libraries and settings required to make the software work are included. This makes for efficient, lightweight, self-contained systems and guarantees that software will always run the same, regardless of where it is deployed.
The software that hosts the containers is called Docker Engine.
With Docker and containerization, the old issue of "the application ran fine on my computer but it doesn't on client X's machine" goes away: as long as the client machine has Docker, the application will run just the same way it did on the developer's machine, because the whole thing, with all its necessary libraries and bits and pieces, comes packaged in container form.
A Dockerfile:
- Describes the build process for an image: e.g. my app will be based on Node; copy my files here, run npm install, and then npm run when the image is run
- Can be run to automatically create an image
- Contains all the commands necessary to build the image and run your application
A Docker container is the runtime instance of a Docker image, built from a Dockerfile.
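The Node-based build process sketched above could look like the following Dockerfile; the base image tag, file layout, and npm script are illustrative assumptions, not a prescribed setup:

```dockerfile
# Base the image on an official Node runtime (illustrative tag)
FROM node:18-alpine
WORKDIR /app
# Copy dependency manifests first so the install layer can be cached
COPY package*.json ./
RUN npm install
# Copy the rest of the application source
COPY . .
# Command executed when a container is started from this image
CMD ["npm", "run", "start"]
```

Running `docker build -t myapp .` in the directory containing this file produces an image; `docker run myapp` then starts a container from it.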
The Docker software as a service offering consists of three components:
Software: The Docker daemon, called dockerd, is a persistent process that manages Docker containers and handles container objects. The daemon listens for requests sent via the Docker Engine API. The Docker client program, called docker, provides a command-line interface that allows users to interact with Docker daemons.
Objects: Docker objects are various entities used to assemble an application in Docker. The main classes of Docker objects are images, containers, and services.
- A Docker container is a standardized, encapsulated environment that runs applications. A container is managed using the Docker API or CLI.
- A Docker image is a read-only template used to build containers. Images are used to store and ship applications.
- A Docker service allows containers to be scaled across multiple Docker daemons. The result is known as a swarm, a set of cooperating daemons that communicate through the Docker API.
Registries: A Docker registry is a repository for Docker images. Docker clients connect to registries to download ("pull") images for use or upload ("push") images that they have built. Registries can be public or private. Two main public registries are Docker Hub and Docker Cloud. Docker Hub is the default registry where Docker looks for images. Docker registries also allow the creation of notifications based on events.
Docker Compose is a tool for defining and running multi-container Docker applications. It uses YAML files to configure the application's services and performs the creation and start-up process of all the containers with a single command. The docker-compose CLI utility allows users to run commands on multiple containers at once, for example, building images, scaling containers, running containers that were stopped, and more. Commands related to image manipulation, or user-interactive options, are not relevant in Docker Compose because they address one container. The docker-compose.yml file is used to define an application's services and includes various configuration options. For example, the build option defines configuration options such as the Dockerfile path, the command option allows one to override default Docker commands, and more.
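A minimal docker-compose.yml sketch, assuming a hypothetical two-service application: a web service built from a local Dockerfile plus a Postgres database. Service names, image tags, ports, and the password are illustrative:

```yaml
version: "3.8"
services:
  web:
    build: .            # built from the Dockerfile in this directory
    ports:
      - "8080:3000"     # host:container port mapping
    depends_on:
      - db              # start the database first
  db:
    image: postgres:15  # pulled from the default registry
    environment:
      POSTGRES_PASSWORD: example   # illustrative only, not for production
```

A single `docker-compose up` then builds, creates, and starts both containers together.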
Docker Swarm is, in simple words, Docker's open-source container orchestration platform (we will see more about orchestration below).
What is Container Orchestration
Container orchestration automates the deployment, management, scaling, and networking of containers. Enterprises that need to deploy and manage hundreds or thousands of Linux® containers and hosts can benefit from container orchestration.
Container orchestration can be used in any environment where you use containers. It can help you to deploy the same application across different environments without needing to redesign it. And microservices in containers make it easier to orchestrate services, including storage, networking, and security.
Containers give your microservice-based apps an ideal application deployment unit and self-contained execution environment. They make it possible to run multiple parts of an app independently in microservices, on the same hardware, with much greater control over individual pieces and life cycles.
Some popular options are Kubernetes, Docker Swarm, and Apache Mesos.
Managing the lifecycle of containers with orchestration also supports DevOps teams, who integrate it into CI/CD (continuous integration / continuous delivery) workflows. Along with application programming interfaces (APIs) and DevOps practices, containerized microservices are the foundation for cloud-native applications.
What is container orchestration used for?
Use container orchestration to automate and manage tasks such as:
- Provisioning and deployment
- Configuration and scheduling
- Resource allocation
- Container availability
- Scaling or removing containers based on balancing workloads across your infrastructure
- Load balancing and traffic routing
- Monitoring container health
- Configuring applications based on the container in which they will run
- Keeping interactions between containers secure
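Several of the tasks listed above show up directly in a single Kubernetes manifest. Below is a hedged sketch of a Deployment with a replica count (scaling), resource requests (resource allocation), and a liveness probe (health monitoring); the name, image, and endpoints are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: invoice-service          # illustrative name
spec:
  replicas: 3                    # scaling: desired number of containers
  selector:
    matchLabels:
      app: invoice
  template:
    metadata:
      labels:
        app: invoice
    spec:
      containers:
        - name: invoice
          image: example/invoice:1.0   # hypothetical image
          resources:
            requests:                  # resource allocation
              cpu: "100m"
              memory: "128Mi"
          livenessProbe:               # container health monitoring
            httpGet:
              path: /healthz           # assumed health endpoint
              port: 8080
```

The orchestrator continuously reconciles the actual state (running containers) with this desired state, restarting or rescheduling containers as needed.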
What is Kubernetes?
Kubernetes is an open-source container-orchestration system for automating application deployment, scaling, and management.It was originally designed by Google, and is now maintained by the Cloud Native Computing Foundation. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”.It works with a range of container tools, including Docker.Many cloud services offer a Kubernetes-based platform or infrastructure as a service (PaaS or IaaS) on which Kubernetes can be deployed as a platform-providing service. Many vendors also provide their own branded Kubernetes distributions.
A Kubernetes cluster can be deployed on either physical or virtual machines.
Kubernetes Architecture has the following main components:
- Master nodes
- Worker/Slave nodes
Kubernetes Cluster is a set of node machines for running containerized applications.
The master node is responsible for the management of the Kubernetes cluster. It is mainly the entry point for all administrative tasks. There can be more than one master node in the cluster to provide fault tolerance.
As you can see in the above diagram, the master node has various components like API Server, Controller Manager, Scheduler and ETCD.
- API Server: The API server is the entry point for all the REST commands used to control the cluster.
- Controller Manager: A daemon that regulates the Kubernetes cluster and manages different non-terminating control loops.
- Scheduler: The scheduler schedules the tasks to slave nodes. It stores the resource usage information for each slave node.
- ETCD: ETCD is a simple, distributed, consistent key-value store. It’s mainly used for shared configuration and service discovery.
Worker nodes contain all the necessary services to manage the networking between the containers, communicate with the master node, and assign resources to the scheduled containers.
As you can see in the above diagram, the worker node has various components like Docker Container, Kubelet, Kube-proxy, and Pods.
- Docker Container: Docker runs on each of the worker nodes and runs the configured pods.
- Kubelet: Kubelet gets the configuration of a Pod from the API server and ensures that the described containers are up and running.
- Kube-proxy: Kube-proxy acts as a network proxy and a load balancer for a service on a single worker node.
- Pods: A pod is one or more containers that logically run together on a node.
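A minimal Pod manifest ties these components together; the name and image below are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod        # illustrative name
spec:
  containers:
    - name: hello
      image: nginx:1.25  # hypothetical choice of image
      ports:
        - containerPort: 80
```

Applying it with `kubectl apply -f pod.yaml` sends it to the API server; the scheduler assigns it to a worker node, where the kubelet ensures the container is up and running.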
To get started with Kubernetes development, you can use Minikube. Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, macOS, and Windows systems. The Minikube CLI provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete.