What is a Monolithic Architecture?
A monolithic architecture is a traditional approach to designing software where an entire
application is built as a single, indivisible unit. In this architecture, all the different
components of the application, such as the user interface, business logic, and data access
layer, are tightly integrated and deployed together.
• This means that any changes or updates to the application require modifying and
redeploying the entire monolith.
• Monolithic architectures are often characterized by their simplicity and ease of
development, especially for small to medium-sized applications.
• However, they can become complex and difficult to maintain as the size and
complexity of the application grow.
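To make the idea concrete, here is a minimal sketch of a monolith in Python; the module name, data, and function names are invented for this example. The presentation, business logic, and data access layers all live in one codebase and ship as a single deployable unit, so changing any layer means redeploying the whole program.
```python
# monolith.py - a minimal sketch of a monolithic layout (names and data are
# invented for illustration). All layers live in one codebase and one process.

# Data access layer: an in-memory dict stands in for the real database.
_DB = {"users": {1: "Ada", 2: "Grace"}}

def find_user(user_id):
    return _DB["users"].get(user_id)

# Business logic layer
def greeting_for(user_id):
    name = find_user(user_id)
    if name is None:
        raise ValueError(f"unknown user: {user_id}")
    return f"Hello, {name}!"

# Presentation layer (a CLI stands in for the user interface)
if __name__ == "__main__":
    print(greeting_for(1))  # changing any layer means redeploying the whole program
```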
Advantages of using a Monolithic Architecture
Below are the key advantages of monolithic architecture:
• Simplicity
o With a monolithic architecture, all the code for your
application is in one place. This makes it easier to understand
how the different parts of your application work together.
o It also simplifies the development process since developers
don’t need to worry about how different services
communicate with each other.
• Development Speed
o Since all the parts of your application are tightly integrated,
it’s faster to develop new features.
o Developers can make changes across the codebase in one place, without having to coordinate releases or API contracts with separate services.
o This can lead to quicker development cycles and faster time-
to-market for new features.
• Deployment
o Deploying a monolithic application is simpler because you
only need to deploy one artifact.
o This makes it easier to manage deployments and reduces the
risk of deployment errors.
o Additionally, since all the code is in one place, it’s easier to
roll back changes if something goes wrong during
deployment.
• Debugging
o Debugging and tracing issues in a monolithic application is
often easier because everything is connected and in one place.
o Developers can use tools to trace the flow of execution
through the application, making it easier to identify and fix
bugs.
Disadvantages of using a Monolithic Architecture
• Complexity
o As a monolithic application grows, it becomes more complex
and harder to manage.
o This complexity can make it difficult for developers to
understand how different parts of the application interact,
leading to longer development times and increased risk of
errors.
• Scalability
o Monolithic applications can be challenging to scale,
especially when certain components need to handle a large
volume of traffic.
o Since all parts of the application are tightly coupled, scaling
one component often requires scaling the entire application,
which can be inefficient and costly.
• Technology Stack
o In a monolithic architecture, all parts of the application share
the same technology stack.
o This can limit the flexibility of the development team, as they
are restricted to using the same technologies for all
components of the application.
• Deployment
o Deploying a monolithic application can be a complex and
time-consuming process.
o Since the entire application is deployed as a single unit, any
change to the application requires deploying the entire
monolith, which can lead to longer deployment times and
increased risk of deployment errors.
• Fault Tolerance
o In a monolithic architecture, there is no isolation between
components.
o This means that if a single component fails, it can bring down
the entire application.
o This lack of fault tolerance can make monolithic applications
more susceptible to downtime and reliability issues.
What is a Microservices Architecture?
In a microservices architecture, an application is built as a collection of small, independent
services, each representing a specific business capability. These services are loosely
coupled and communicate with each other over a network, often using lightweight
protocols like HTTP or messaging queues.
• Each service is responsible for a single functionality or feature of the application
and can be developed, deployed, and scaled independently.
• The Microservice architecture has a significant impact on the relationship
between the application and the database.
• Instead of sharing a single database with other microservices, each microservice has its own database. This often results in some duplication of data, but a database per microservice is essential if you want to benefit from this architecture, as it ensures loose coupling (see the sketch after this list).
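As a rough sketch of these ideas, the example below shows one small "orders" service written in Python with Flask (assumed to be installed). The service owns its own data store and fetches user data from a separate, hypothetical "users" service over HTTP rather than reading a shared database; the ports, URLs, and field names are illustrative assumptions.
```python
# orders_service.py - a minimal sketch of one microservice (Flask assumed installed).
# The companion "users" service, its URL, and all field names are hypothetical.
import json
from urllib.request import urlopen

from flask import Flask, jsonify

app = Flask(__name__)

# Each microservice owns its own data store; a dict stands in for its private database.
ORDERS = {1: {"id": 1, "user_id": 42, "item": "keyboard"}}

USERS_SERVICE_URL = "http://localhost:5001"  # hypothetical separate service


@app.route("/orders/<int:order_id>")
def get_order(order_id):
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify({"error": "order not found"}), 404
    # Data owned by another service is fetched over HTTP, not via a shared database.
    with urlopen(f"{USERS_SERVICE_URL}/users/{order['user_id']}") as resp:
        user = json.loads(resp.read())
    return jsonify({"order": order, "user": user})


if __name__ == "__main__":
    app.run(port=5002)  # each service is deployed and scaled on its own
```
Scaling the orders service then simply means running more copies of this one process behind a load balancer, without touching the users service.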
Advantages of using a Microservices Architecture
• Scalability: Microservices allow for individual components of an application to
be scaled independently based on demand. This means that you can scale only
the parts of your application that need to handle more traffic, rather than scaling
the entire application.
• Flexibility: Microservices enable teams to use different technologies and
programming languages for different services based on their specific
requirements. This flexibility allows teams to choose the best tool for the job,
rather than being limited to a single technology stack.
• Resilience: Since microservices are decoupled from each other, a failure in one
service does not necessarily impact the entire application. This improves the
overall resilience of the application and reduces the risk of downtime.
• Agility: Microservices enable teams to independently develop, test, deploy, and
scale services, allowing for faster development cycles and quicker time-to-
market for new features.
• Easier Maintenance: With microservices, it’s easier to understand, update, and
maintain the codebase since each service is smaller and focused on a specific
functionality. This can lead to faster development and debugging times.
• Technology Diversity: Different services in a microservices architecture can
use different technologies, frameworks, and databases based on their specific
requirements. This allows for greater flexibility and innovation in technology
choices.
Disadvantages of using a Microservices Architecture
• Complexity: Managing a large number of microservices can be complex. It
requires careful coordination between teams and can result in a more complex
deployment and monitoring environment.
• Increased Overhead: With microservices, there is overhead associated with
managing the communication between services, such as network latency and
serialization/deserialization of data. This can impact the performance of the
application.
• Deployment Complexity: Deploying and managing a large number of
microservices can be complex. It requires a robust deployment pipeline and
automated tools to ensure that updates are deployed smoothly and without
downtime.
• Monitoring and Debugging: Monitoring and debugging microservices can be
more challenging compared to monolithic applications. Since each service is
independent, tracing issues across multiple services can be complex.
• Cost: While microservices offer scalability and flexibility, they can also increase
costs, especially in terms of infrastructure and operational overhead. Managing a
large number of services can require more resources and investment in tools and
infrastructure.
• Testing: Testing microservices can be more complex compared to monolithic
applications. It requires a comprehensive testing strategy that covers integration
testing between services, as well as unit testing within each service.
Differences between Monolithic and Microservices Architecture
Below are the differences between the Monolithic and Microservices architectures:
Aspect | Monolithic Architecture | Microservice Architecture
Architecture | Single-tier architecture | Multi-tier architecture
Size | Large, with all components tightly coupled | Small, loosely coupled components
Deployment | Deployed as a single unit | Individual services can be deployed independently
Scalability | Horizontal scaling can be challenging | Easier to scale horizontally
Development | Simpler initially | More complex due to managing multiple services
Technology | Limited technology choices | Freedom to choose the best technology for each service
Fault Tolerance | Entire application may fail if a part fails | Individual services can fail without affecting others
Maintenance | Easier to maintain due to its simplicity | Requires more effort to manage multiple services
Flexibility | Less flexible, as all components are tightly coupled | More flexible, as components can be developed, deployed, and scaled independently
Communication | Communication between components is faster | Communication may be slower due to network calls
Virtual machines: virtual computers within computers
A virtual machine, commonly shortened to just VM, is no different from any other physical
computer, such as a laptop, smartphone, or server. It has a CPU, memory, disks to store your
files, and can connect to the internet if needed. While the parts that make up your computer
(called hardware) are physical and tangible, VMs are often thought of as virtual computers or
software-defined computers within physical servers, existing only as code.
How does a virtual machine work?
Virtualization is the process of creating a software-based, or "virtual" version of a computer,
with dedicated amounts of CPU, memory, and storage that are "borrowed" from a physical
host computer—such as your personal computer— and/or a remote server—such as a server
in a cloud provider's datacenter. A virtual machine is a computer file, typically called an
image, that behaves like an actual computer. It can run in a window as a separate computing
environment, often to run a different operating system—or even to function as the user's
entire computer experience—as is common on many people's work computers. The virtual
machine is partitioned from the rest of the system, meaning that the software inside a VM
can't interfere with the host computer's primary operating system.
What are VMs used for?
Here are a few ways virtual machines are used:
• Building and deploying apps to the cloud.
• Trying out a new operating system (OS), including beta releases.
• Spinning up a new environment to make it simpler and quicker for developers to run
dev-test scenarios.
• Backing up your existing OS.
• Accessing virus-infected data or running an old application by installing an older OS.
• Running software or apps on operating systems that they weren't originally intended
for.
What are the benefits of using VMs?
While virtual machines run like individual computers with individual operating
systems and applications, they have the advantage of remaining completely
independent of one another and the physical host machine. A piece of software called
a hypervisor, or virtual machine manager, lets you run different operating systems on
different virtual machines at the same time. This makes it possible to run Linux VMs,
for example, on a Windows OS, or to run an earlier version of Windows on a more current Windows OS.
And, because VMs are independent of each other, they're also extremely portable. You
can move a VM on a hypervisor to another hypervisor on a completely different
machine almost instantaneously.
Because of their flexibility and portability, virtual machines provide many benefits,
such as:
• Cost savings—running multiple virtual environments from one piece of infrastructure
means that you can drastically reduce your physical infrastructure footprint. This
boosts your bottom line—decreasing the need to maintain nearly as many servers and
saving on maintenance costs and electricity.
• Agility and speed—Spinning up a VM is relatively easy and quick and is much
simpler than provisioning an entire new environment for your developers.
Virtualization makes the process of running dev-test scenarios a lot quicker.
• Lowered downtime—because VMs are portable and easy to move from one hypervisor to another on a different machine, they are a great solution for backup in the event the host goes down unexpectedly.
• Scalability—VMs allow you to more easily scale your apps by adding more physical
or virtual servers to distribute the workload across multiple VMs. As a result, you can increase the availability and performance of your apps.
• Security benefits—because each virtual machine runs its own guest operating system, isolated from the host, you can run apps of questionable security inside a VM without putting your host operating system at risk. VMs also allow for better security forensics and are often used to safely study computer viruses, isolating them so they cannot harm the host computer.
What is a hypervisor?
A hypervisor is software that you can use to run multiple virtual machines on a single
physical machine. Every virtual machine has its own operating system and applications. The
hypervisor allocates the underlying physical computing resources such as CPU and memory
to individual virtual machines as required. Thus, it supports the optimal use of physical IT
infrastructure.
A hypervisor is a form of virtualization software used in Cloud hosting to divide and
allocate the resources on various pieces of hardware. The program which provides
partitioning, isolation, or abstraction is called a virtualization hypervisor. The hypervisor is
a hardware virtualization technique that allows multiple guest operating systems (OS) to
run on a single host system at the same time. A hypervisor is sometimes also called a virtual machine manager (VMM).
Types of Hypervisor –
TYPE-1 Hypervisor:
The hypervisor runs directly on the underlying host system. It is also known as a “Native
Hypervisor” or “Bare metal hypervisor”. It does not require any base server operating
system. It has direct access to hardware resources. Examples of Type 1 hypervisors include
VMware ESXi, Citrix XenServer, and Microsoft Hyper-V.
Pros & Cons of Type-1 Hypervisor:
Pros: These hypervisors are very efficient because they have direct access to the physical hardware resources (CPU, memory, network, and physical storage). They also strengthen security, because there is no intermediate host operating system layer that an attacker could compromise.
Cons: One problem with Type-1 hypervisors is that they usually need a dedicated, separate machine to run on, together with a means of managing the different VMs and controlling the host hardware resources.
TYPE-2 Hypervisor:
A Type-2 hypervisor runs on top of a host operating system and is also known as a "Hosted Hypervisor". Such hypervisors do not run directly on the underlying hardware; instead, they run as an application on a host system (physical machine). Basically, the software is installed on an operating system, and the hypervisor asks that operating system to make hardware calls. Examples of Type 2 hypervisors include VMware Player and Parallels Desktop. Hosted hypervisors are often found on endpoints such as PCs. The Type-2 hypervisor is very useful for engineers and security analysts (for checking malware, malicious source code, and newly developed applications).
Pros & Cons of Type-2 Hypervisor:
Pros: These hypervisors allow quick and easy access to a guest operating system alongside the running host machine. They usually come with additional useful features for guest machines, and such tools enhance coordination between the host machine and the guest machine.
Cons: Because there is no direct access to the physical hardware resources, these hypervisors lag behind Type-1 hypervisors in performance. There are also potential security risks: an attacker who exploits a weakness in the host operating system can also gain access to the guest operating systems.
Choosing the right hypervisor:
Type 1 hypervisors offer much better performance than Type 2 ones because there’s no
middle layer, making them the logical choice for mission-critical applications and
workloads. But that’s not to say that hosted hypervisors don’t have their place – they’re
much simpler to set up, so they’re a good bet if, say, you need to deploy a test environment
quickly. One of the best ways to determine which hypervisor meets your needs is to
compare their performance metrics. These include CPU overhead, the amount of maximum
host and guest memory, and support for virtual processors. The following factors should be
examined before choosing a suitable hypervisor:
1. Understand your needs: The company and its applications are the reason for the data
center (and your job). Besides your company’s needs, you (and your co-workers in IT) also
have your own needs. Needs for a virtualization hypervisor are:
a. Flexibility
b. Scalability
c. Usability
d. Availability
e. Reliability
f. Efficiency
g. Reliable support
2. The cost of a hypervisor: For many buyers, the toughest part of choosing a hypervisor
is striking the right balance between cost and functionality. While a number of entry-level
solutions are free, or practically free, the prices at the opposite end of the market can be
staggering. Licensing frameworks also vary, so it’s important to be aware of exactly what
you’re getting for your money.
3. Virtual machine performance: Virtual systems should meet or exceed the performance
of their physical counterparts, at least in relation to the applications within each server.
Everything beyond meeting this benchmark is profit.
4. Ecosystem: It's tempting to overlook the role of a hypervisor's ecosystem – that is, the availability of documentation, support, training, third-party developers and consultancies, and so on – but this ecosystem plays a large part in determining whether or not a solution is cost-effective in the long term.
5. Test for yourself: You can gain basic experience from your existing desktop or laptop.
You can run both VMware vSphere and Microsoft Hyper-V in either VMware Workstation
or VMware Fusion to create a nice virtual learning and testing environment.
HYPERVISOR REFERENCE MODEL:
Three main modules coordinate in order to emulate the underlying hardware (a toy sketch follows the list below):
1. DISPATCHER:
The dispatcher behaves like the entry point of the monitor and reroutes the
instructions of the virtual machine instance to one of the other two modules.
2. ALLOCATOR:
The allocator is responsible for deciding the system resources to be provided to
the virtual machine instance. This means that whenever a virtual machine tries to execute an instruction that would change the machine resources associated with it, the dispatcher invokes the allocator.
3. INTERPRETER:
The interpreter module consists of interpreter routines. These are executed whenever a virtual machine executes a privileged instruction.
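The toy Python sketch below shows, in the spirit of this reference model, how a dispatcher might route guest instructions to the allocator or the interpreter. The instruction names and the VM representation are invented purely for illustration; this is not a real hypervisor.
```python
# A toy model of the dispatcher/allocator/interpreter split; names are hypothetical.
from dataclasses import dataclass


@dataclass
class Instruction:
    name: str
    privileged: bool = False         # needs emulation by the interpreter
    changes_resources: bool = False  # needs a decision from the allocator


class Allocator:
    def handle(self, vm, instr):
        # Decide which physical resources the VM instance should receive.
        print(f"[allocator] adjusting resources of {vm} for '{instr.name}'")


class Interpreter:
    def handle(self, vm, instr):
        # Emulate a privileged instruction on behalf of the guest.
        print(f"[interpreter] emulating privileged '{instr.name}' for {vm}")


class Dispatcher:
    """Entry point of the monitor: reroutes guest instructions to the other modules."""

    def __init__(self):
        self.allocator = Allocator()
        self.interpreter = Interpreter()

    def dispatch(self, vm, instr):
        if instr.changes_resources:
            self.allocator.handle(vm, instr)
        elif instr.privileged:
            self.interpreter.handle(vm, instr)
        # ordinary instructions would run directly on the hardware


if __name__ == "__main__":
    d = Dispatcher()
    d.dispatch("vm-1", Instruction("grow_memory", changes_resources=True))
    d.dispatch("vm-1", Instruction("load_page_table", privileged=True))
```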