DevOps Unit-II Final
The following are the different phases of the DevOps lifecycle and how work proceeds at each stage.
1. Plan: In this stage, teams identify the project requirements, scope, and objectives and collect end-
user feedback. They create a project roadmap to maximize the business value and deliver the
desired product during this stage. Tools such as Jira, Asana, and Trello are commonly used for
managing and tracking project progress. This ensures that every member of the team is
in agreement regarding its goals and outcomes.
2. Code: The code development takes place at this stage. The team focuses on the creation and
organization of the source code. The development teams use tools and plugins like Git to
streamline the development process, which helps them avoid security flaws and poor coding
practices.
3. Build: During the build phase, the code is compiled into executable artifacts (e.g., exe, war, or jar files). This
stage frequently involves managing dependencies, compiling code, and packaging it for
deployment. In this stage, once developers finish their task, they commit the code to the shared
code repository, and builds are produced using tools like Maven, Gradle, Ant, and Jenkins. These tools guarantee
that building is done consistently and efficiently.
4. Test: During the testing phase, a combination of automated and manual testing is conducted to
detect and resolve any defects prior to deployment. Once the build is ready, it is deployed to the
test environment first to perform several types of testing, such as user acceptance testing, security testing,
integration testing, and performance testing, using tools like JUnit, Selenium, and TestNG to
ensure software quality.
5. Release: At this phase, the build is ready to be deployed to the production environment. During the
release phase, the software is prepared for deployment by managing versioning and bundling the
source code into deliverables. Once the build passes all tests, the operations team schedules the
releases or deploys multiple releases to production, depending on the organizational needs. The
process of release can be automated using release management tools, including Jenkins and
GitLab CI/CD.
6. Deploy: In this stage, Infrastructure-as-Code helps build the production environment and then
releases the build with the help of different tools. During the Deployment phase, the procedure
of transferring applications is streamlined across multiple environments, such as development,
staging, and production. CD tools streamline and automate the process of deploying software,
allowing for quick and smooth updates. Widely used tools such as Docker, Kubernetes, and
Ansible provide dependable and scalable deployment procedures.
7. Operate: The release is now live and in use by customers. Operations encompass the tasks of
monitoring infrastructure, implementing updates, managing incidents, and handling server configuration and
provisioning using tools like Chef. Platforms such as Kubernetes and Docker Swarm, and cloud
services like AWS and Azure, aid in the efficient management and scaling of operations.
8. Monitor: In this stage, the DevOps pipeline is monitored based on data collected from customer
behavior and application performance, and the overall health of the system is observed continuously
in real time. Monitoring the entire environment helps teams find the bottlenecks impacting the
development and operations teams’ productivity. Tools such as Prometheus, Grafana, and the
ELK Stack (Elasticsearch, Logstash, Kibana) are commonly used for this purpose.
To achieve business agility with the DevOps lifecycle, organizations should follow the following best
practices:
✓ Collaborate and Communicate: Communication and collaboration between development and
operations teams are essential to ensure the success of the DevOps lifecycle. Both teams must
work together to define the requirements, design the architecture, and plan the delivery process.
✓ Automate: Automating the software delivery process can help organizations achieve faster time-
to-market and better quality. Automation can include testing, building, and deploying software.
✓ Continuous Integration and Continuous Delivery (CI/CD): CI/CD is a software engineering
practice that aims to provide rapid, continuous delivery of code changes to production. CI/CD
pipelines automate the testing and deployment of code changes.
✓ Infrastructure as Code (IaC): IaC enables organizations to automate the creation and
management of infrastructure. IaC is used to define infrastructure in code and enables
organizations to deploy infrastructure consistently across environments.
✓ Monitor and Measure: Organizations should continuously monitor and measure the
performance of their software products. Monitoring can include the performance of applications,
infrastructure, and user experience. Organizations should use metrics to understand the
performance of their software products and identify areas for improvement.
✓ Security: Security is an essential consideration throughout the DevOps lifecycle. Organizations
should implement security practices in all stages of the lifecycle to reduce security risks. This
includes securing code, integrating security testing into the development process, and
implementing security measures in production environments.
✓ Feedback and Continuous Improvement: DevOps teams should continuously seek feedback
from users, stakeholders, and other teams to improve the software delivery process. They should
use feedback to identify areas for improvement and make changes to the process accordingly.
✓ Agile Development Methodologies: Agile development methodologies enable organizations to
deliver software products quickly and adapt to changing requirements. Agile methodologies
emphasize collaboration, flexibility, and feedback, making them well-suited to the DevOps
lifecycle.
By following the above best practices, organizations can achieve business agility with the DevOps lifecycle. They can
respond quickly to changing market conditions, reduce time to market, and improve the quality of their
software products.
DevOps Importance and Benefits:
1. Speed: Accelerated delivery allows businesses to adapt to changes in the market and to grow more
efficiently while driving business results by innovating faster for end customers.
2. Time to Market: This is all about reducing the time taken to deliver changes in the product to
customers. It also demands an increase in release frequency and pace to incorporate
innovations and improvements into the product, which enables businesses to respond faster to market
needs. CI and CD are two practices used to automate the software development and release process.
3. Reliability: The quality of changes to software includes:
✓ Application updates
✓ Modifications to infrastructure
✓ Reliability defined from the end-user experience perspective
DevOps adoption makes the delivery process repeatable, thereby making delivery more predictable
and increasing the probability of successful delivery.
4. Scale: Automation and an optimized development and delivery process make it possible to scale
the size of functional releases on demand. Agile and DevOps adoption ensures an increase in
delivery without impacting delivery quality and timelines.
5. Improved Collaboration: Teams improve collaboration by understanding and sharing
the delivery responsibilities. Teams’ independent workflows can be combined in an agile process
with DevOps. Better collaboration addresses inefficiencies in the delivery pipeline, which in turn saves
time through seamless handover of software between teams.
6. Contribution to Quality Assurance: DevOps significantly affects quality assurance in
information systems by integrating the development, operations, and support teams with the customer.
By bringing these parties closer together using tools and improved cooperation, the QA process becomes
more predictable. The DevOps process increases day-to-day data gathering, which helps
enhance delivery quality.
7. Contribution to Services: DevOps principles can significantly affect the outcome of implementing
service management frameworks. Service management frameworks rely on the extent of
cooperation between development and operations teams. Business models can be aligned to service
delivery models such as SaaS, and organizations can offer services to others, which opens new
business opportunities.
8. Contribution to Information System Development: DevOps brought major changes to the
process of developing information systems. It removed communication barriers and gaps
between teams and end users. CI/CD makes it possible for end users to validate the software more
frequently than before and to refine the product to match exact requirements.
7 C’s of DevOps Life Cycle for Business Agility:
Everything is continuous in DevOps from planning to monitoring. So, let’s break down the entire
lifecycle into seven phases where continuity is at its core. The 7Cs in DevOps are the fundamental
principles that organizations must adhere to successfully deploy and control DevOps techniques. Any
phase in the lifecycle can iterate throughout the projects multiple times until it’s finished. The following
are the various phases of 7Cs of the DevOps life cycle:
1. Continuous Development: This phase plays a pivotal role in delineating the vision for the entire
software development cycle. It primarily focuses on project planning and coding. During this phase,
project requirements are gathered and discussed with stakeholders. Moreover, the product backlog is
also maintained based on customer feedback which is broken down into smaller releases and milestones
for continuous software development. Once the team agrees upon the business needs, the development
team starts coding for the desired requirements. It’s a continuous process where developers are required
to code whenever any changes occur in the project requirement or in case of any performance issues.
Tools Used: There are no specific tools for planning, but the development team requires some tools for
code maintenance. GitLab, Git, TFS, SVN (Subversion), Mercurial, and Bitbucket are a few tools used
for version control, while Jira and Confluence support planning and collaboration. Also, tools like Ant, Maven, and Gradle can be used in this phase for
building/packaging the code into an executable file that can be forwarded to any of the next phases.
Many companies prefer agile practices for collaboration and use Scrum, Lean, and Kanban. Among all
those tools, Git and Jira are the most popular ones used for complex projects and for outstanding
collaboration between teams while developing.
2. Continuous Integration:
Continuous integration is the heart of the entire DevOps lifecycle. It is a software development practice
in which developers are required to commit changes to the source code more frequently. This may be on
a daily or weekly basis. Every commit is then built, which allows early detection of problems if they
are present. Building code involves not only compilation but also unit testing,
integration testing, code review, and packaging.
The code supporting new functionality is continuously integrated with the existing code. This step makes
integration a continuous approach where code is tested at every commit. Moreover, the tests needed are
also planned in this phase.
Tools Used: Jenkins (open-source tool), Bamboo, GitLab CI, TeamCity, Travis CI, CircleCI and Buddy
(commercial tools) are a few DevOps tools used to make the project workflow smooth and more
productive. For example, Jenkins is a popular tool used in this phase. Whenever there is a change in the
Git repository, Jenkins fetches the updated code and prepares a build of that code, which is an
executable file in the form of a WAR or JAR. This build is then forwarded to the test server or the production
server.
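To make the idea of building and testing every commit concrete, here is a small, hedged sketch of the kind of JUnit 5 unit test a CI server such as Jenkins would run as part of each build. The PriceCalculator class and its behaviour are invented for illustration and are not part of any particular project.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// Hypothetical class under test: applies a percentage discount to a price.
class PriceCalculator {
    double applyDiscount(double price, double discountPercent) {
        if (discountPercent < 0 || discountPercent > 100) {
            throw new IllegalArgumentException("discount must be between 0 and 100");
        }
        return price - (price * discountPercent / 100.0);
    }
}

// Unit tests that a CI job would execute on every commit (e.g. via a Maven "mvn test" build step).
class PriceCalculatorTest {

    @Test
    void appliesTenPercentDiscount() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(90.0, calculator.applyDiscount(100.0, 10.0), 0.0001);
    }

    @Test
    void rejectsInvalidDiscount() {
        PriceCalculator calculator = new PriceCalculator();
        assertThrows(IllegalArgumentException.class,
                () -> calculator.applyDiscount(100.0, 150.0));
    }
}
```

In a Maven- or Gradle-based project, tests like these would typically run automatically as part of the build triggered by each commit, so a failing test stops the pipeline before the artifact reaches later stages.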
3. Continuous Testing:
In this phase, quality analysts continuously test the developed software for bugs and issues. In case of
a bug or an error, the code is sent back to the integration phase for modification. Docker containers can
be used to simulate the test environment.
Automation testing saves a lot of time and effort in executing the tests compared to doing so manually.
Apart from that, report generation is a big plus. The task of evaluating the test cases that failed in a
test suite gets simpler. Also, the execution of the test cases can be scheduled at predefined times.
After testing, the code is continuously integrated with the existing code.
Tools Used: JUnit, Selenium, TestNG, and TestSigma are a few DevOps tools for continuous testing.
Selenium is the most popular open-source automation testing tool, and TestNG generates the reports.
This entire testing phase can be automated with the help of a Continuous Integration tool such as Jenkins.
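As a concrete illustration of an automated check in this phase, the sketch below is a minimal Selenium WebDriver test written in Java. It assumes the Selenium library and a ChromeDriver installation are available; the URL and the expected title are placeholder values for the example.

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class SmokeTest {
    public static void main(String[] args) {
        // Assumes chromedriver is installed and available on the PATH.
        WebDriver driver = new ChromeDriver();
        try {
            // Placeholder URL: point this at the application deployed in the test environment.
            driver.get("https://example.com");

            // A very small "smoke" check: the page title should match what we expect.
            String title = driver.getTitle();
            if (!title.contains("Example")) {
                throw new AssertionError("Unexpected page title: " + title);
            }
            System.out.println("Smoke test passed, title = " + title);
        } finally {
            // Always close the browser, even if the check fails.
            driver.quit();
        }
    }
}
```

In practice such checks would be wrapped in a JUnit or TestNG test class so that the CI tool can collect and report the results.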
4. Continuous Deployment:
This phase is the crucial and most active one in the DevOps lifecycle, where final code is deployed on
production servers. The continuous deployment includes configuration management to make the
deployment of code on servers accurate and smooth. Development teams release the code to servers and
schedule the updates for servers, keeping the configurations consistent throughout the production
process. Containerization tools also help in the deployment process by providing consistency across
development, testing, production, and staging environments. This practice made the continuous delivery
of new features in production possible.
Tools Used: Ansible, Puppet, and Chef are the configuration management tools that make the
deployment process smooth and consistent throughout the production process. Docker and Vagrant are
other DevOps tools widely used for handling the scalability of the continuous deployment process.
5. Continuous Feedback
Continuous feedback came into existence to analyze and improve the application code. The application
development is consistently improved by analyzing the results from the operations of the software.
During this phase, customer behavior is evaluated regularly on each release to improve future releases
and deployments. Businesses can opt for either a structured or an unstructured approach to gather feedback.
In the structured approach, feedback is collected through surveys and questionnaires. In contrast, in the
unstructured approach, feedback is received through social media platforms. Overall, this phase is
quintessential in making continuous delivery possible to introduce a better version of the application.
Tools Used: Pendo is a product analytics tool used to collect customer reviews and insights. Qentelli’s
TED is another tool used primarily for tracking the entire DevOps process to gather actionable insights
for bugs and flaws.
6. Continuous Monitoring
During this phase, the application’s functionality and features are monitored continuously to detect
system errors such as low memory, non-reachable server, etc. This process helps the IT team quickly
identify issues related to app performance and the root cause behind it. If IT teams find any critical issue,
the application goes through the entire DevOps cycle again to find the solution. However, the security
issues can be detected and resolved automatically during this phase.
Tools Used: Nagios, Kibana, Splunk, PagerDuty, ELK Stack, New Relic, and Sensu are a few DevOps
tools used to make the continuous monitoring process fast and straightforward.
7. Continuous Operations
The last phase in the DevOps lifecycle is crucial for reducing the planned downtime, such as scheduled
maintenance. All DevOps operations are based on continuity, with complete automation of the release
process, allowing the organization to continuously accelerate the overall time to market. Generally,
developers are required to take the server offline to make the updates, which increases the downtime and
might even cost a significant loss to the company. Eventually, continuous operation automates the
process of launching the app and its updates. It uses container management systems like Kubernetes and
Docker to eliminate downtime. These container management tools help simplify the process of building,
testing, and deploying the application on multiple environments. The key objective of this phase is to
boost the application’s uptime to ensure uninterrupted services. Through continuous operations,
developers save time that can be used to accelerate the application’s time-to-market.
Tools Used: Kubernetes and Docker Swarm are the container orchestration tools used for the high
availability of the application and to make the deployment faster.
Agile Methodology:
Agile methodology is a flexible approach to project management and software development
that prioritizes adaptability, cooperation, and meeting customer needs. The process involves
creating iterative advancements using small, easily manageable increments referred to as sprints or
iterations.
i. Analyze: During the analyze phase, teams collect specifications, establish objectives, and
analyze user demands, providing an accurate concept of the product that needs to be constructed.
ii. Plan: During the planning phase, the team breaks down the project into activities that can be
easily handled, analyzes the expected timeframes, and determines the priority of features
depending on their value and complexity.
iii. Design: Design includes the generation of blueprints and models that depict the look and feel of
the software, ensuring that it meets the requirements of users and project objectives.
iv. Develop: The develop phase contains the process of programming and gradually constructing
the software, with regular releases of functional and practical features.
v. Test: Testing is an essential component of agile methodology, involving an ongoing method of
identifying and fixing bugs throughout the development cycle. It ensures that the final product
satisfies the quality requirements.
vi. Review: Regular reviews enable teams to evaluate their work, pinpoint areas that require
development, and make essential modifications to the project plan and features.
vii. Deploy: During the deployment phase, the product is released and is made available for its users.
viii. Feedback: Agile methodology promotes the adoption of continuous feedback loops to receive
input from users and stakeholders. This feedback is used to determine future iterations and
enhancements, ensuring that the software remains adaptable for evolving needs and
requirements.
2.2 DEVOPS AND CONTINUOUS TESTING
• Continuous Testing is a key component of the DevOps approach. It involves the use of
automated testing tools and practices to test software throughout the development lifecycle, from
development to production. The primary goal is to provide fast and reliable feedback on the
quality of the software, which enables teams to catch defects early in the development process,
reducing the time and cost of fixing them and the risk of issues in production. Continuous
testing is an integral part of the continuous integration pipeline with Agile and DevOps. The
process of Continuous Integration and Delivery requires Continuous Testing.
• Continuous Testing refers to the execution of automated tests, including unit tests, integration
tests, and end-to-end tests that are carried out at regular intervals every time code changes are
made. These tests are automated and run frequently, providing rapid feedback on the quality of
the code. Continuous Testing was introduced initially with the intention of reducing the time
taken to provide feedback to developers.
• Continuous testing also helps to improve the overall quality of the software. By ensuring that
software is thoroughly tested, teams can identify and address issues before they reach production,
reducing the risk of downtime or other issues that can impact the user experience.
How does Continuous Testing play a vital role in DevOps?
It’s evident that every software application is built uniquely, and it needs to be updated regularly to meet
end-user requirements. Earlier, as the development and deployment process were rigid, changing and
deploying features required a considerable amount of time. This is because projects earlier had definite
timelines for development and QA phases, and the codebase was transferred between teams.
However, with the Agile approach becoming mainstream, making changes even in real-time has become
more convenient, primarily due to Continuous Testing and the CI/CD pipeline. This is because the code
is continually moving from Development -> Testing -> Deployment Stages.
With an Agile mindset at the core, Continuous Testing in DevOps helps teams uncover critical
bugs in the initial stages. This ensures that the risk of critical bugs is mitigated beforehand, saving
the cost of bug fixing in later stages.
Continuous Testing encourages automating tests wherever possible throughout the development cycle.
Doing so ensures that teams evaluate the code validity and overall quality of the software at each stage.
These insights can help organizations identify whether the software is ready to go through the delivery
pipeline.
Difference between continuous testing and conventional testing:
The conventional method of testing relies heavily on manual intervention. The project has to be
transferred across the development and the testing teams. A typical project is made up of distinct
Development and Quality Assurance (QA) phases. Quality assurance teams always need additional time
to assure the highest level of quality. The aim is to prioritize quality over the project schedule. However,
the organization expects faster software delivery to the end user.
Continuous testing varies from conventional testing mostly in its scheduling, frequency, and automation.
Conventional testing always takes place at predetermined stages, such as post-development or pre-
release, and it frequently requires manual procedures. On the other hand, continuous testing is seamlessly
incorporated into each phase of development, performing automated tests constantly and delivering
prompt feedback. By adopting this approach, the chance of issues can be reduced because it allows quick
identification of issues. However, conventional testing methods may identify problems at later stages,
resulting in deeper issues and increased expenses. Continuous testing is tightly integrated with Agile and
DevOps methodologies, facilitating rapid iterations and improving cooperation between the
development, testing, and operations teams.
The benefits of Continuous Testing in DevOps are:
• Early Issue Detection: Continuous testing enables early detection of defects and issues, reducing
the cost and effort required for fixing them later in the development process.
• Faster Time-to-Market: By providing rapid feedback on code changes, continuous testing
accelerates the development cycle, allowing teams to release high-quality software more quickly.
• Improved Software Quality: Continuous testing ensures that software meets quality standards
at every stage of development, resulting in more reliable and robust applications.
• Cost Efficiency: Catching defects early in the development process reduces the overall cost of
fixing them and minimizes the risk of expensive rework or post-production issues.
• Enhanced Collaboration: Continuous testing fosters collaboration among developers, testers,
and operations teams, promoting better communication and alignment throughout the
development lifecycle.
• Increased Confidence: With automated tests running continuously, stakeholders gain
confidence in the software's stability and functionality, leading to higher levels of trust in the
final product.
Challenges of Continuous Testing
• Initial Setup Complexity: Implementing continuous testing requires setting up automated
testing frameworks, integrating them with CI/CD pipelines, and establishing robust test
environments, which can be complex and time-consuming.
• Maintenance Overhead: Continuous testing environments and automated test suites require
ongoing maintenance to keep them up-to-date with changes in the application, APIs, and
infrastructure, adding to the workload of development teams.
• Resource Intensiveness: Running tests continuously can consume significant computational
resources and may require scaling infrastructure to accommodate testing needs, leading to
increased costs.
• Test Data Management: Creating and managing test data that accurately reflects real-world
scenarios can be challenging, particularly when dealing with sensitive or complex data sets.
• Dependency Management: Continuous testing relies on various external dependencies such as
third-party APIs, services, and environments, which can introduce complexities and
dependencies that need to be managed effectively.
Continuous Testing Tools: Here is a curated list of best Continuous Testing Tools
1. Selenium: Selenium is an open-source software testing tool. It supports all the leading browsers
like Firefox, Chrome, IE, and Safari.
2. Jenkins: Jenkins is a Continuous Integration tool written in Java. It can be configured via a GUI
or console commands.
3. QuerySurge: QuerySurge is the smart data testing solution that is the first-of-its-kind full
DevOps solution for continuous data testing.
4. Travis: Travis CI is a continuous testing tool hosted on GitHub. It offers hosted and on-premises
variants, supports a variety of languages, and provides good documentation.
Benefits of Continuous Testing:
• Find errors: Ensure as many errors are found before being released to production
• Test early and often: Tested throughout the development, delivery, testing, and deployment
cycles
• Accelerate testing: Run parallel performance tests to increase testing execution speed
• Earn customer loyalty: Accomplish continuous improvement and quality
• Automation: Automate your test cases to decrease time spent testing
• Increase release rate: Speed up delivery to production and release faster
• Reduce business risks: Assess potential problems before they become an actual problem
• DevOps: Incorporates into your DevOps processes smoothly
• Communication transparency: Eliminate silos between the development, testing, and
operations teams
2.3 DEVOPS INFLUENCE ON ARCHITECTURE: INTRODUCING
SOFTWARE ARCHITECTURE
Introducing software architecture is a critical aspect of DevOps, as it lays the foundation for a resilient,
scalable, and maintainable system. There are various non-functional requirements associated with
DevOps with respect to software architecture. Two of them are:
• Frequent, Small Deployments: The architecture should allow small, frequent updates to be
deployed easily. This reduces the risk of errors and allows quicker responses to changes or issues.
• Improved Quality with respect to Changes: The architecture must ensure the quality of the
changes. This means that dedicated systems are involved to ensure that new updates do not
introduce problems. Moreover, such an approach minimizes the occurrence of unexpected
problems thereby minimizing the rollback activities.
Such drawbacks exist in architectures like monolithic architecture, where all parts of the application are
tightly connected. In such a setup, even a small change requires rebuilding and redeploying the entire
system, which can be time-consuming and risky. Instead, DevOps encourages modular architectures,
where different parts of the system can be updated independently, making deployments faster and safer.
Software architecture refers to the high-level design of a system, including its structure, components,
and interactions between them. In DevOps, software architecture is typically developed collaboratively
by multiple teams, including developers, operations personnel, and security professionals.
The DevOps model goes through several phases, these are:
✓ Planning, Identify and Track
✓ Development Phase
✓ Testing Phase
✓ Deployment Phase
✓ Management Phase
There are several key principles that should be considered when designing software architecture in a
DevOps context, including:
• Scalability: The architecture should be designed to accommodate growth and changing needs,
including increased usage, new features, and changing requirements.
• Resilience: The architecture should be designed to withstand failures and disruptions, including
hardware failures, software bugs, and security breaches (a small retry sketch follows this list).
• Maintainability: The architecture should be designed to be easy to maintain and update over
time, including adding new features or fixing bugs.
• Security: The architecture should be designed with security in mind, including authentication,
access control, and data encryption.
• Flexibility: The architecture should be designed to be flexible and adaptable, including the ability
to integrate with third-party systems and technologies.
By following these principles and other best practices, DevOps teams can create software architectures
that are robust, scalable, and maintainable, and that can support the needs of their organization over time.
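As referenced under the Resilience principle above, the following sketch shows one common resilience technique: retrying an operation that may fail transiently, with a back-off delay between attempts. It is a minimal plain-Java illustration; the simulated flaky operation and the retry limits are assumptions chosen for the example.

```java
import java.util.concurrent.Callable;

public class RetryExample {

    // Retries the given operation up to maxAttempts times, doubling the delay after each failure.
    static <T> T withRetry(Callable<T> operation, int maxAttempts, long initialDelayMillis) throws Exception {
        long delay = initialDelayMillis;
        Exception lastFailure = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return operation.call();
            } catch (Exception e) {
                lastFailure = e;
                System.out.println("Attempt " + attempt + " failed: " + e.getMessage());
                if (attempt < maxAttempts) {
                    Thread.sleep(delay);   // back off before the next attempt
                    delay *= 2;
                }
            }
        }
        throw lastFailure; // all attempts exhausted
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Hypothetical flaky operation: fails twice, then succeeds, standing in for a remote call.
        String result = withRetry(() -> {
            calls[0]++;
            if (calls[0] < 3) {
                throw new RuntimeException("temporary network error");
            }
            return "report data";
        }, 3, 200);
        System.out.println("Succeeded with: " + result);
    }
}
```

Real systems typically delegate this kind of logic to libraries or platform features (timeouts, circuit breakers, automatic failover), but the underlying idea is the same: failures are expected and handled rather than allowed to take the whole system down.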
Benefits of DevOps Architecture: A properly implemented DevOps approach comes with a number of
benefits. These include the following that we selected to highlight:
✓ Decrease Cost: The primary concern for businesses is operational cost, and DevOps helps
organizations keep their costs low. Because efficiency gets a boost with DevOps practices,
software production increases and businesses see decreases in the overall cost of production.
✓ Increased Productivity and Release Time: With shorter development cycles and streamlined
processes, teams are more productive and software is deployed more quickly.
✓ Customers are Served: User experience, and by design user feedback, is important to the
DevOps process. By gathering information from clients and acting on it, those who practice
DevOps ensure that clients’ wants and needs are honored, and customer satisfaction reaches new
highs.
✓ It Gets More Efficient with Time: DevOps simplifies the development lifecycle, which in
previous iterations had been increasingly complex. This ensures greater efficiency throughout a
DevOps organization, as does the fact that gathering requirements also gets easier.
2.4 THE MONOLITHIC SCENARIO
The Monolithic Scenario is a traditional approach to software architecture where all components of an
application are tightly integrated into a single, self-contained unit. In this scenario, all features of an
application are developed, tested, and deployed together as a single package.
A monolithic architecture is a single-tiered, traditional, unified architecture for designing a software
application. In context, monolithic refers to “composed all in one”, “unable to change”, and “too large”.
Monolithic applications are self-contained, complex, and tightly coupled. It is simple to develop, deploy,
test, and scale horizontally.
Fig: Monolithic
Monolithic software is designed to be self-contained, wherein the program's components or functions
are tightly coupled rather than loosely coupled, like in modular software programs. Monolithic
applications are single-tiered, which means multiple components are combined into one large
application. Consequently, they tend to have large codebases, which can be cumbersome to manage over
time.
Web applications usually consist of three parts:
1. Front-end, client-side application written in HTML, JavaScript, jQuery, AngularJS, React JS, etc.
2. Back-end, server-side application which contains business logic written in Java, PHP, Python,
Ruby, C#, F#, or some other language.
3. Database of the whole web application, to store the data using MS SQL Server, MySQL, MongoDB, etc.
All of these parts are closely coupled and frequently communicate with each other. Hence the whole
web application works as a monolith where every part is dependent on others.
Let's suppose we have a large web application with many different functions. We also have a static
website inside the application. The entire web application is deployed as a single Java EE (Enterprise
Edition) application archive. So, when we need to fix a spelling mistake in the static website, we need
to rebuild the entire web application archive and redeploy it again. This process is not only time-
consuming but also risky, because even minor changes could potentially affect other parts of the
application; it also makes the system unreliable and makes it difficult to advance and adopt new tools and technologies.
What happens when we want to correct a spelling mistake? Let's take a look:
1. Identify the mistake to be fixed, and locate the corresponding code in the codebase.
2. Create a new branch (a copy of the code) in the version control system that mirrors what is currently
in production.
3. Make the correction in this branch and create a new build of the application with this change.
4. Deploy the updated build to the production environment.
Risks Involved: Even though fixing a small issue might seem straightforward, it carries significant risks
in a monolithic architecture:
a) The entire system is critical to business operations, so any error during deployment could lead to
a loss of revenue.
b) The complexity of the system means that changes, even those that seem trivial, require thorough
checks to ensure that new problems do not arise. This includes not just making the change but
also verifying that it does not have unintended side effects, such as altering version numbers or
causing other issues.
Complexity of Changes: Most changes are more complex than simple fixes, requiring careful
consideration of the entire deployment process. This often includes manual checks and validations,
especially in monolithic systems, where even small updates can have far-reaching consequences.
In recent years, there has been a shift away from the monolithic scenario towards more modular, service-
oriented architectures that are better suited to the needs of modern, cloud-based systems. However, many
organizations still use monolithic architectures for legacy applications or for simpler systems that do not
require the scalability and flexibility of a microservices architecture.
In a DevOps context, the monolithic scenario can present several challenges, including:
✓ Limited scalability: Monolithic architectures can be difficult to scale, as all components are
tightly coupled together and cannot be easily scaled independently.
✓ Limited flexibility: Monolithic architectures can be difficult to update or modify, as changes to
one component may impact other components.
✓ Limited resilience: Monolithic architectures can be more prone to failures or disruptions, as all
components are tightly coupled together and a failure in one component can impact the entire
system.
To address these challenges, DevOps teams can use techniques such as modular design, containerization,
and automated testing and deployment to make monolithic architectures more scalable, flexible, and
resilient. For example, by using containers to isolate components, teams can more easily scale individual
components independently, and by using automated testing and deployment, teams can reduce the risk
of errors or disruptions during updates or changes to the system.
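To give a small code-level picture of what modular design means here, the hedged sketch below contrasts a tightly coupled call with a call made through an interface. The class and interface names are invented for illustration; the point is that once ReportModule depends only on the ExchangeRateSource interface, the rate-lookup component can be changed, tested, or deployed separately without rebuilding the reporting code.

```java
// Monolithic, tightly coupled style: the report code creates and calls a concrete class directly,
// so any change to BankRateService forces a rebuild of everything that uses it.
class BankRateService {
    double currentRate(String currency) {
        return 1.08; // placeholder value
    }
}

class TightlyCoupledReport {
    String priceInEuros(double dollars) {
        return String.format("%.2f EUR", dollars * new BankRateService().currentRate("EUR"));
    }
}

// Modular style: the dependency is expressed as an interface, and the reporting module
// only knows about the interface, not the implementation behind it.
interface ExchangeRateSource {
    double currentRate(String currency);
}

class ReportModule {
    private final ExchangeRateSource rates;

    ReportModule(ExchangeRateSource rates) {
        this.rates = rates;
    }

    String priceInEuros(double dollars) {
        return String.format("%.2f EUR", dollars * rates.currentRate("EUR"));
    }
}

public class ModularityDemo {
    public static void main(String[] args) {
        // Any implementation can be plugged in: a stub for tests, or a separately deployed service in production.
        ExchangeRateSource stubRates = currency -> 0.92; // fixed rate for the demo
        System.out.println(new ReportModule(stubRates).priceInEuros(100.0));
    }
}
```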
Benefits of monolithic architecture:
There are benefits to monolithic architectures, which is why many applications are still created using
this development paradigm. For one, monolithic programs may have better throughput than modular
applications. They may also be easier to test and debug because, with fewer elements, there are fewer
testing variables and scenarios that come into play.
At the beginning of the software development lifecycle, it is usually easier to go with the monolithic
architecture since development can be simpler during the early stages.
• Simplicity of development. The monolithic approach is a standard way of building applications. No
additional knowledge is required. All source code is located in one place which can be quickly
understood.
• Simplicity of debugging. The debugging process is simple because all code is located in one place.
• Simplicity of testing. You test only one service without any dependencies. Everything is usually
clear.
• Simplicity of deployment. Only one deployment unit (e.g. jar file) should be deployed. There are
no dependencies. Everything exists and changes in one place.
• Simplicity of application evolution. Basically, the application does not have any limitations from a
business logic perspective. If you need some data for a new feature, it is already there.
• Simplicity in onboarding new team members. The source code is located in one place. New team
members can easily debug a functional flow and get familiar with the application.
That said, the monolithic approach is usually better for simple, lightweight applications. For more
complex applications with frequent expected code changes or evolving scalability requirements, this
approach is not suitable.
Drawbacks of monolithic architecture:
Generally, monolithic architectures suffer from drawbacks that can delay application development and
deployment. These drawbacks become especially significant when the product's complexity increases or
when the development team grows in size.
• Slow speed of development. The simplest disadvantage relates to CI/CD pipeline. Imagine the
monolith that contains a lot of services. Each service in this monolith is covered with tests that are
executed for each Pull Request. Even for a small change in a source code, you should wait a lot of
time for your pipeline to succeed.
• High code coupling. Even if you keep a clear service structure inside your repository, the code tends
to become tightly coupled over time. As a result, the system becomes harder to understand, especially for new team members.
• Testing becomes harder. Even a small change can negatively affect the system. As a result, the
regression for full monolithic service is required.
• Performance issues. In cases of performance issues, you can only scale the whole monolithic service,
not the individual parts that need it.
• The cost of infrastructure. In cases of performance issues, you should scale the whole monolithic
service. It brings additional costs for application operability.
• Lack of flexibility. Using monolithic architecture, you are tied to the technologies that are used
inside your monolith.
2.5 ARCHITECTURE RULES OF THUMB
“Architecture Rules of Thumb” are general principles, guidelines, or best practices widely used in
a specific area; they can help DevOps teams make important architectural decisions for their systems. They
are not strict rules, but practical suggestions for achieving positive results in system design and
implementation.
These rules are based on experience and industry best practices, and can help teams avoid common
pitfalls and ensure that their systems are robust, scalable, and maintainable. There are several architecture
rules of thumb in DevOps that can help teams to design, deploy, and manage applications more
efficiently. Some examples of Architecture Rules of Thumb that are influenced by DevOps include:
1. Automation First: Automate as much as possible in the deployment process. This includes tasks
such as testing, provisioning, and deployment, to minimize the risk of human error and reduce
manual work.
2. Modularity: Design systems in a modular fashion (i.e., break software into smaller modules that work
independently), enabling easier maintenance and updates.
3. Scalability: Ensure that architectures can handle increased load without sacrificing performance.
4. Resilience: Resilience means building systems that can handle failures by including backup
plans and automatic failover.
5. Infrastructure as Code: Use infrastructure as code (IaC) to define and manage infrastructure
and configuration, making it more predictable, repeatable, and easier to maintain.
6. Microservices Architecture: Use a microservices architecture to break down applications into
smaller, more manageable components. This makes it easier to deploy and scale applications
independently, and allows for faster development cycles.
7. Continuous Integration and Continuous Delivery: Adopt continuous integration and delivery
(CI/CD) practices to automate the software delivery pipeline. This includes tasks such as testing,
building, and deployment, to enable faster feedback loops and reduce the risk of errors.
8. Monitoring and Logging: Build applications with observability in mind, which means they are
designed to be monitored and analyzed in real time. This includes using tools such as log
aggregation, metrics collection, and distributed tracing (a small metrics sketch follows this list).
9. Security by Design: Incorporate security into every stage of the software development lifecycle.
This includes practices such as code scanning, vulnerability testing, and access control.
By following these rules of thumb and other best practices, DevOps teams can create systems that are more robust,
resilient, scalable, secure, and maintainable, while also reducing the risk of downtime, data loss, or
security breaches.
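As referenced in the Monitoring and Logging rule above, here is a minimal metrics-collection sketch. It assumes the Micrometer library (micrometer-core) is on the classpath; the meter names and the simulated request handling are illustrative only.

```java
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class ObservabilityDemo {
    public static void main(String[] args) {
        // In a real service the registry would usually export to a backend such as Prometheus;
        // SimpleMeterRegistry keeps the example self-contained.
        MeterRegistry registry = new SimpleMeterRegistry();

        Counter requests = Counter.builder("http.requests")
                .description("Number of handled requests")
                .register(registry);
        Timer latency = Timer.builder("http.request.latency")
                .description("Request handling time")
                .register(registry);

        // Simulate handling a few requests and recording metrics for each one.
        for (int i = 0; i < 3; i++) {
            latency.record(() -> {
                requests.increment();
                // ... request handling work would happen here ...
            });
        }

        System.out.println("requests handled = " + requests.count());
        System.out.println("mean latency (ms) = "
                + latency.mean(java.util.concurrent.TimeUnit.MILLISECONDS));
    }
}
```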
DevOps emphasizes certain rules of thumb in software architecture, such as separation of concerns,
high cohesion and low coupling. These principles help ensure that different parts of the system can be
developed, tested, and deployed independently, without affecting the overall functionality of the system.
2.6 THE SEPARATION OF CONCERNS
• The separation of concerns, introduced by computer scientist Edsger Dijkstra, means that
different aspects of a system are managed separately. The separation of concerns is a design principle
in computer science that refers to breaking down a complex system into smaller, more
manageable parts or components, with each component focused on a specific task or
responsibility. The idea is to separate different aspects of a system's functionality so that each can
be developed, tested, and maintained independently. The overall goal of separation of concerns
is to establish a well-organized system where each part fulfills a meaningful and intuitive role
while maximizing its ability to adapt to change.
• The separation of concerns is particularly important in software engineering because it can help
to improve the quality, maintainability, and scalability of a software system. By dividing a system
into smaller, more manageable components, developers can work on individual parts of the
system without worrying about how changes to one component will affect others.
• The separation of concerns in DevOps is an important principle that emphasizes the need to
separate the concerns of different teams involved in software development and operations. It
aims to promote collaboration and efficient workflows among teams with different roles and
responsibilities.
• This separation helps to reduce complexity, improve maintainability, and enhance reusability. In
software development, concerns can be categorized into different areas, such as business logic,
data persistence, user interface, security, and performance.
• Separating these concerns into different components allows developers to work on each
component independently, without affecting the other parts of the system.
Ex1: In the context of DevOps, there are generally three main teams involved: Development,
Operations, and Quality Assurance. Each team has a specific set of responsibilities, and
the separation of concerns helps to ensure that these responsibilities are clearly defined and
that each team can focus on its own tasks without interfering with the tasks of others. For
example, the development team is responsible for writing code and implementing new
features, while the operations team is responsible for deploying and maintaining the
software in production environments. The quality assurance team, on the other hand, is
responsible for testing and ensuring the quality of the software.
Ex2: In a web application, the user interface can be separated from the business logic, allowing
developers to focus on the presentation layer without affecting the core functionality of the
application. Similarly, the data persistence layer can be separated from the business logic,
allowing developers to work on the storage and retrieval of data without affecting how the
data is used within the application.
• The separation of concerns design principle is used in many software development
methodologies, such as Model-View-Controller (MVC), Service Oriented Architecture (SOA),
and Microservices Architecture.
• Model-View-Controller (MVC) architecture is a common example: the model represents the data and business
logic, the view handles the presentation of data to the user, and the controller handles user input
and manages the interaction between the model and view (a small sketch follows these points).
• Overall, the separation of concerns is a powerful tool for managing complexity and improving
the design and quality of software systems.
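As mentioned in the MVC point above, the following is a minimal sketch of the Model-View-Controller separation in Java. The task-list domain is invented purely for illustration; the point is that the data (model), the presentation (view), and the input handling (controller) live in separate classes that can evolve independently.

```java
import java.util.ArrayList;
import java.util.List;

// Model: holds the data and business logic, knows nothing about how it is displayed.
class TaskModel {
    private final List<String> tasks = new ArrayList<>();

    void addTask(String task) {
        if (task == null || task.isBlank()) {
            throw new IllegalArgumentException("task must not be empty");
        }
        tasks.add(task);
    }

    List<String> getTasks() {
        return List.copyOf(tasks);
    }
}

// View: only responsible for presentation; could be swapped for a web page or GUI
// without touching the model.
class TaskView {
    void render(List<String> tasks) {
        System.out.println("Current tasks:");
        tasks.forEach(t -> System.out.println(" - " + t));
    }
}

// Controller: handles "user input" and coordinates the model and the view.
class TaskController {
    private final TaskModel model;
    private final TaskView view;

    TaskController(TaskModel model, TaskView view) {
        this.model = model;
        this.view = view;
    }

    void userAddsTask(String task) {
        model.addTask(task);
        view.render(model.getTasks());
    }
}

public class MvcDemo {
    public static void main(String[] args) {
        TaskController controller = new TaskController(new TaskModel(), new TaskView());
        controller.userAddsTask("Write unit tests");
        controller.userAddsTask("Deploy to staging");
    }
}
```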
Advantages of Separation of concerns: Separation of Concerns implemented in software architecture
would have several advantages:
✓ Lack of duplication and singularity of purpose of the individual components render the overall
system easier to maintain.
✓ The system becomes more stable as a byproduct of the increased maintainability.
✓ The strategies required to ensure that each component only concerns itself with a single set of
cohesive responsibilities often result in natural extensibility points.
✓ The decoupling which results from requiring components to focus on a single purpose leads to
components which are more easily reused in other systems, or different contexts within the same
system.
✓ The increase in maintainability and extensibility can have a major impact on the marketability
and adoption rate of the system.
There are several flavors of Separation of Concerns: Horizontal Separation, Vertical Separation, Data
Separation, and Aspect Separation.
COUPLING:
• Coupling refers to the degree of dependency between two modules. We always want low
coupling between modules. Again, we can see coupling as another aspect of the principle of the
separation of concerns.
• Coupling is a design concept in computer science that refers to the degree to which two or more
components or modules are interconnected or dependent on each other. The principle of coupling
is important because it affects the complexity, maintainability, and scalability of software
systems. A high level of coupling means that the components are tightly connected and rely on
each other extensively, while a low level of coupling means that the components are loosely
connected and do not depend on each other as much.
• High coupling can make a system more difficult to understand and modify, and can lead to
unexpected consequences when changes are made to one component that affect others. Low
coupling, on the other hand, can make a system easier to modify and maintain, and can improve
its ability to scale and evolve over time.
• Systems with high cohesion and low coupling would automatically have separation of concerns,
and vice versa; that is, low cohesion and high coupling make the code complex and error-prone.
• There are different types of coupling, including content coupling, common coupling, control
coupling, stamp coupling, data coupling, and temporal coupling. Data coupling is considered the
weakest type of coupling because it only requires data to be passed between components, while
content coupling is considered the strongest type of coupling because components share data and
implementation details.
• In summary, the principle of coupling is an important design principle that emphasizes the need
to minimize the interdependencies between components or modules in a software system, in order
to make it more modular, maintainable, and scalable, as the short sketch after these points illustrates.
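The short sketch below, referenced above, contrasts a tightly coupled design, where one class depends directly on another class's internal fields, with data coupling, where only the needed values are passed. The Order and invoice-printer classes are invented for illustration.

```java
// Tight coupling: the printer reaches directly into Order's internal fields,
// so any change to Order's internals breaks the printer.
class Order {
    double price;      // exposed internals (poor practice, shown for contrast)
    double taxRate;
}

class TightlyCoupledInvoicePrinter {
    void print(Order order) {
        // Depends on Order's internal representation.
        System.out.printf("Total: %.2f%n", order.price * (1 + order.taxRate));
    }
}

// Data coupling: only the values that are needed are passed in,
// so the printer does not care how or where they are stored.
class InvoicePrinter {
    void print(double total) {
        System.out.printf("Total: %.2f%n", total);
    }
}

public class CouplingDemo {
    public static void main(String[] args) {
        Order order = new Order();
        order.price = 100.0;
        order.taxRate = 0.18;

        new TightlyCoupledInvoicePrinter().print(order);               // tightly coupled to Order's fields
        new InvoicePrinter().print(order.price * (1 + order.taxRate)); // loosely coupled: plain data only
    }
}
```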
Difference between Cohesion and Coupling: The major difference between cohesion and coupling is
that cohesion deals with the interconnection between the elements of the same module. But, coupling
deals with the interdependence between software modules.
Cohesion | Coupling
Cohesion is defined as the degree of relationship between elements of the same module. | Coupling is defined as the degree of interdependence between the modules.
It’s an intra-module approach. | It’s an inter-module approach.
High cohesion is preferred due to improved focus on a particular task. | Low coupling is preferred as it results in less dependency between the modules.
Cohesion is used to indicate a module’s relative functional strength. | Coupling is used to indicate the relative independence among the modules.
In cohesion, the module focuses on a particular task. | In coupling, a particular module is connected to other modules.
Cohesion is also known by the name ‘Intra-module Binding’. | Coupling is also known by the name ‘Inter-module Binding’.
Cohesion is the concept of intra-module. | Coupling is the concept of inter-module.
Cohesion represents the relationship within a module. | Coupling represents the relationships between modules.
Increasing cohesion is good for software. | Increasing coupling is avoided for software.
Cohesion represents the functional strength of modules. | Coupling represents the independence among modules.
Highly cohesive modules give the best software. | Loosely coupled modules give the best software.
In cohesion, the module focuses on a single thing. | In coupling, modules are connected to the other modules.
Cohesion is created within the same module. | Coupling is created between two different modules.
There are six types of cohesion: functional, procedural, temporal, sequential, layer, and communication cohesion. | There are six types of coupling: common, external, control, stamp, data, and content coupling.
Some basic functionalities of database migration systems are,
• Database Versions: Similar to software versions, databases also have versions too. When a
change is performed with respect to the structure of a database, a new version of it is created.
• Changesets: These are the individual instructions that tell the database about the ways of
implementing a change. For example, one changeset might say "create a new table called
'customer'," and another might say "add a column called 'address' to the 'customer' table."
• Migration Tools: Tools like Flyway or Liquibase help in managing these changes by keeping
track of different database versions. They use a set of instructions (called changesets) to update the
database. For example, there can be two changesets where one adds a new table and another adds
a new column to an existing table.
• Version Control: When a tool like Flyway or Liquibase is used, it records changesets that have
already been applied to the database in a special table. In this way, the tool knows about the
changes that are still needed to be applied the next time it runs.
Using a migration tool like Flyway or Liquibase ensures that database changes are applied consistently
across different environments (like development, testing, and production). This approach helps prevent
errors and keeps the database structure aligned with the needs of the application.
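As a hedged sketch of how such a migration tool can be driven from code, the example below uses Flyway's Java API. It assumes flyway-core and a suitable JDBC driver are on the classpath; the connection URL, credentials, and migration file names are placeholders.

```java
import org.flywaydb.core.Flyway;

public class MigrateDatabase {
    public static void main(String[] args) {
        // Placeholder connection details; in practice these come from environment-specific configuration.
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:postgresql://localhost:5432/appdb", "app_user", "secret")
                // Versioned SQL changesets live here, e.g.
                //   V1__create_customer_table.sql
                //   V2__add_address_column_to_customer.sql
                .locations("classpath:db/migration")
                .load();

        // Flyway records applied versions in its schema history table and
        // applies only the changesets that have not been run yet.
        flyway.migrate();
    }
}
```

The same call is typically wired into the CI/CD pipeline (or run at application start-up) so that every environment is brought to the same schema version automatically.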
A database schema is a blueprint or plan that defines the logical structure of a database, including the
tables, columns, data types, constraints, relationships, and other characteristics of the data stored in the
database. In other words, a database schema describes how the data is organized and how it can be
accessed and manipulated. A database schema is typically created during the design phase of a database
application and is based on the requirements and specifications of the application. The schema can be
implemented using various database management systems (DBMS), such as MySQL, Oracle, SQL
Server, and PostgreSQL.
Database migrations, also known as schema migrations, database schema migrations, or simply
migrations, are controlled sets of changes developed to modify the structure of the objects within a
relational database. Migrations help transition database schemas from their current state to a new desired
state, whether that involves adding tables and columns, removing elements, splitting fields, or changing
types and constraints.
Managing these changes is crucial in a DevOps environment to ensure that updates happen smoothly
without affecting the functionality of the application. Challenges in updating a database usually arise
because a database carries not only data but also structure, so it cannot simply be stopped and replaced
during an upgrade without considering its current state. Whenever a change is performed, the state of the
application changes along with its database structure, and these changes are seen as different
versions.
Handling Database Migrations in DevOps: Here are some best practices for handling database
migrations in DevOps:
1. Use a version control system (VCS) for database scripts: Store all database migration scripts
in a VCS, such as Git or SVN, to keep track of changes, collaborate with other team members,
and easily roll back changes if necessary.
2. Use a Continuous Integration/Continuous Deployment (CI/CD) pipeline: Use a CI/CD
pipeline to automate the process of building, testing, and deploying applications and database
changes. This helps to ensure that database migrations are consistently applied across all
environments and reduces the risk of errors and downtime.
3. Automate database migrations: Use a tool like Flyway or Liquibase to automate database
migrations across different environments, including development, testing, staging, and
production. This helps to reduce errors and ensure consistency.
4. Test database migrations thoroughly: Perform thorough testing of the migration scripts before
applying them to production, including unit testing, integration testing, and end-to-end testing.
5. Plan for rollback: Plan for the possibility of a migration failure and have a rollback plan in place
to quickly revert to the previous version of the database.
6. Use environment-specific configuration files: Use environment-specific configuration files to
manage database connection settings for different environments, such as development, testing,
staging, and production.
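To illustrate the last point about environment-specific configuration, here is a small sketch that reads database connection settings from environment variables, so the same code can run against development, testing, staging, or production databases. The variable names and default values are assumptions made for the example, and the appropriate JDBC driver is assumed to be on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class DatabaseConfig {

    // Reads an environment variable, falling back to a default for local development.
    private static String envOrDefault(String name, String defaultValue) {
        String value = System.getenv(name);
        return (value != null && !value.isEmpty()) ? value : defaultValue;
    }

    public static Connection open() throws SQLException {
        // Each environment (dev, test, staging, production) sets its own values
        // for these variables; the code itself never changes.
        String url = envOrDefault("DB_URL", "jdbc:postgresql://localhost:5432/appdb");
        String user = envOrDefault("DB_USER", "app_user");
        String password = envOrDefault("DB_PASSWORD", "secret");
        return DriverManager.getConnection(url, user, password);
    }

    public static void main(String[] args) throws SQLException {
        try (Connection connection = open()) {
            System.out.println("Connected to: " + connection.getMetaData().getURL());
        }
    }
}
```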
Database Migration Challenges: DB migration has been a common practice for years. However, that
does not change that it requires careful consideration due to the complex nature of its data migration
steps. Some key challenges companies encounter while migrating their data include:
1. Data Loss: The most common issue firms face is data loss during DB migration. During the
planning stage, testing for data loss or data corruption is crucial to verify whether the complete data set
was migrated during the migration process.
2. Data Security: Data is a business’s most valuable asset. Therefore, its security is of utmost
importance. Before the DB migration process occurs, data encryption should be a top priority.
3. Difficulty during planning: Large companies usually have disparate databases in different
departments of the companies. During the planning stage of database migration, locating these
databases and planning how to convert all schemas and normalize data is a common challenge.
4. Migration strategy: A common question asked is how to do DB migration. Companies miss out
on some crucial aspects and use a database migration strategy that is not suitable for their
company. Therefore, it is necessary to conduct ample research before DB migration occurs.
Why Use Database Migration: The common reasons for using database migration are:
• Upgrading to the latest version of the database software to improve security and compliance
• Moving existing data to a new database to reduce cost, enhance reliability, improve performance,
and achieve scalability.
• Moving from an on-premise database to a cloud-based database for better scalability and lower
costs.
• Merge data from several databases into a single database for a unified data view post-merger
Even though they are essential, data migration projects can be very complex. Data migration requires
downtime, which may lead to interruption in data management operations.
How to do Database Migrations:
DB migration is a multi-step process that starts with assessing the source system and finishes at testing
the migration design and replicating it to the product build. It is essential to have an appropriate database
migration strategy and the right DB migration tools to make the process more efficient. Let’s take a look
at the different steps to understand how to do database migration:
1. Understanding the Source Database: A vital data migration step is to understand the source
data that will populate your target database before starting any database migration project.
✓ What is the size of the source database? The size and complexity of the database you
are trying to migrate will determine the scope of your migration project. This will also
determine the time and computing resources required to transfer the data.
✓ Does the database contain ‘large’ tables? If your source database contains tables that
have millions of rows, you might want to use a tool with the capability to load data in
parallel.
✓ What kind of data types will be involved? If you migrate data between different
database engines (for example, from SQL Server to Oracle), you will need schema
conversion capabilities to successfully execute your DB migration project.
2. Assessing the Data: This step involves a more granular assessment of the data you want to
migrate. Profile the source data and define data quality rules to remove inconsistencies, duplicate
values, and incorrect information. Data profiling at an early stage of the migration helps mitigate
the risk of delays, budget overruns, and even complete failures, while the data quality rules
validate the data and improve its accuracy, resulting in an efficient DB migration (a profiling
sketch appears after this list).
3. Converting Database Schema: Heterogeneous migrations, which move data between different
database engines, are more complex than homogeneous migrations. While schemas for
heterogeneous database migrations can be converted manually, this is often very
resource-intensive and time-consuming. Therefore, using a data migration tool with schema
conversion capability can help expedite the process and migrate data to the new database.
4. Testing the Migration Build: It’s a good idea to adopt an iterative approach to testing a
migration build. You can start with a small subset of your data, profile it, and convert its schema
instead of running a full migration exercise at once. This will help you ensure that all mappings,
transformations, and data quality rules are working as intended. Once you have tested a subset
on your database migration tool, you can increase the data volume gradually and build a single
workflow.
5. Executing the Migration: Most companies plan migration projects for when they can afford
downtime, e.g., on weekends or a public holiday. That said, it is now more important than ever
to plan DB migrations so that interruptions to everyday data management processes are
minimized or eliminated outright. This can be achieved with paid and free database migration
tools that offer data synchronization or Change Data Capture (CDC) functionality. Using these
tools, you can perform the initial load and then capture any changes made during or after it.
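As a small illustration of steps 1 and 2, the following sketch profiles a PostgreSQL source database over JDBC and flags tables whose row counts suggest they may need parallel loading. The connection URL, credentials, and the one-million-row threshold are placeholder assumptions; other database engines expose similar catalog views under different names.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Illustrative sketch for steps 1-2: profile the source database by
// listing its tables and approximate row counts so unusually large
// tables can be flagged for parallel loading. Connection details are
// placeholders; the PostgreSQL JDBC driver must be on the classpath.
public class SourceDbProfiler {

    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://source-host:5432/legacydb"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, "report_user", "secret")) {
            // PostgreSQL keeps approximate row counts in pg_stat_user_tables;
            // other engines have comparable catalog views.
            String sql = "SELECT relname, n_live_tup "
                       + "FROM pg_stat_user_tables ORDER BY n_live_tup DESC";
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    long rows = rs.getLong("n_live_tup");
                    String flag = rows > 1_000_000 ? "  <-- consider parallel load" : "";
                    System.out.printf("%-40s %,12d%s%n",
                            rs.getString("relname"), rows, flag);
                }
            }
        }
    }
}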
Once all the data has been migrated to the new database successfully, a retirement policy needs to be
developed for the old database, if required. In addition, systems need to be put into place to validate and
monitor the quality of the data transferred to the target database.
By following the best practices below, organizations can manage database migrations in a way that
minimizes risk, ensures consistency across environments, and reduces downtime and errors.
• Automate migrations: Use tools like Liquibase or Flyway to automate database migrations.
These tools help ensure that schema changes are applied consistently across all environments
and reduce the risk of errors or downtime during migrations (a minimal sketch using Flyway's
Java API follows this list).
• Test thoroughly: Test database migrations thoroughly before deploying them to production. Use
automated testing tools to ensure that schema changes do not cause data loss or other unintended
consequences.
• Use rolling updates: Use rolling updates to apply schema changes to databases in a controlled
and gradual manner. This can help to reduce the risk of downtime or errors during migrations.
• Coordinate with other teams: Coordinate with other teams, such as operations and database
administrators, to ensure that database migrations are executed properly and do not conflict with
other changes being made to the system.
• Monitor performance: Monitor database performance before and after migrations to ensure that
the system is running smoothly and that there are no unexpected bottlenecks or issues.
• Backup and recovery: Backup the database before migrating to a new schema, and have a
recovery plan in place in case of any issues during the migration process.
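The first practice can be wired directly into a deployment pipeline. The sketch below uses Flyway's Java API to apply any pending versioned SQL migrations at deploy time; the JDBC URL and credentials are placeholders, and the same result can be achieved with the Flyway command line or its Maven/Gradle plugins.

import org.flywaydb.core.Flyway;

// Minimal sketch of automating schema migrations with Flyway's Java API.
// Versioned SQL scripts (V1__create_tables.sql, V2__add_index.sql, ...)
// live under src/main/resources/db/migration; Flyway records which ones
// have already been applied and runs only the new ones.
public class MigrateOnDeploy {

    public static void main(String[] args) {
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:postgresql://localhost:5432/appdb", // placeholder
                            "app_user", "secret")
                .load();

        // Applies pending migrations in order; safe to run on every deploy.
        flyway.migrate();
    }
}

Running the same step against development, staging, and production keeps their schemas in step, which is exactly the consistency the first bullet calls for.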
Benefits of Database Migration:
• Migrations are helpful because they allow database schemas to evolve as requirements change.
• DB migration can reduce costs, for example by retiring expensive legacy infrastructure or
moving to cheaper cloud-based storage.
• Database migration helps move data from an outdated legacy system to modernized software.
• Database migration helps unify disparate data so that it is accessible to different teams and
applications through a single, consistent view.
Tools used in Database Migration:
✓ Liquibase: Liquibase is an open-source database migration tool that helps to manage and
automate the process of database schema changes. It allows developers to track, version, and
deploy changes to database schemas in a consistent and repeatable way, which can help to
improve the quality and reliability of database-driven applications.
✓ Flyway: Flyway is an open-source database migration tool that automates the process of database
schema changes. It allows developers to version, track, and migrate database changes in a simple,
repeatable, and automated way.
ROLLING UPGRADES:
Rolling upgrades refer to the process of updating a system gradually, without causing any downtime for
users. This method is useful when continuous service availability is crucial. In a rolling upgrade, only
part of the system is updated at a time. For example, consider a system running on two servers behind a
load balancer that distributes user traffic between them. A typical rolling upgrade involves the following
steps:
1. Initial Setup: The system starts with both servers running the current version of the software.
The load balancer directs user traffic to both servers equally.
2. Database Migration: First, a database migration is applied that introduces new fields
(like splitting a "name" field into "first name" and "surname") while keeping the old field intact.
This ensures compatibility with the old software version (a migration sketch appears after this
section).
3. Update One Server: The load balancer is configured to stop sending traffic to one of the servers.
This server is then updated to the new version of the software that uses the new database fields.
4. Switch Traffic: The load balancer is adjusted to route traffic to the updated server. The other
server is taken offline for its update.
5. Complete the Upgrade: After the second server is updated, both servers are brought back
online running the new version. The old database field can now be safely removed if it is no
longer needed.
This process allows the system to be upgraded without interrupting service, as users continue to interact
with at least one server while the other is being updated. Rolling upgrades are particularly useful for
large-scale systems where downtime must be minimized.
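The backward-compatible schema change described in step 2 can be sketched as a Flyway Java-based migration (a plain SQL script would serve equally well). The customer table, its columns, and the name-splitting logic are illustrative assumptions; the key point is that the old field is kept until both servers run the new version.

package db.migration;

import java.sql.Statement;
import org.flywaydb.core.api.migration.BaseJavaMigration;
import org.flywaydb.core.api.migration.Context;

// Sketch of the backward-compatible migration from step 2 of the rolling
// upgrade. New columns are added and backfilled while the old "name"
// column is kept, so servers still running the old version keep working.
// The old column is dropped only in a later migration, once both servers
// run the new code. Table and column names are illustrative.
public class V2__Split_name_into_first_and_surname extends BaseJavaMigration {

    @Override
    public void migrate(Context context) throws Exception {
        try (Statement stmt = context.getConnection().createStatement()) {
            stmt.execute("ALTER TABLE customer ADD COLUMN first_name TEXT");
            stmt.execute("ALTER TABLE customer ADD COLUMN surname TEXT");
            // Best-effort backfill: split the existing full name on the
            // first space (PostgreSQL's split_part function).
            stmt.execute("UPDATE customer "
                       + "SET first_name = split_part(name, ' ', 1), "
                       + "    surname    = split_part(name, ' ', 2) "
                       + "WHERE first_name IS NULL");
        }
    }
}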
MANUAL INSTALLATION:
In DevOps context, manual installation refers to setting up software on a server without automation tools.
The process typically involves installing necessary software packages using command-line instructions.
For instance, consider the manual installation process on Red Hat Linux.
1. Installing PostgreSQL: The PostgreSQL relational database is installed using the command
"dnf install postgresql". This command checks for the already existing installation of the
requested database on the system. It retrieves and installs the required packages from a remote
repository if there is no existing versions installed.
2. Installing NGINX: Similarly, the NGINX web server is installed with the command "dnf install
nginx". NGINX is often used as a frontend web server due to its performan. e advantages and
lower memory usage compared to other servers. It is also commonly used for tasks like SSL
acceleration and load balancing.
3. Running the Application: After the necessary software (like PostgreSQL and NGINX) is
installed, the application's code is built and run. For example, a JVM application (such as one
written in Clojure) might be built using a tool like Leiningen and then executed to start the service.
Even though this process is manual, some steps are already automated by the package manager (such as
dnf). The package manager handles tasks like checking dependencies and ensuring that compatible
versions of software are installed. Manual installation is often the first step before automating the
process, ensuring that everything works correctly before moving on to automated deployments.
MICROSERVICES:
• A microservices architecture structures a large application as a collection of small, independently
deployable services, each responsible for a single business capability. For example, an online
retail application might be split into services such as:
✓ Payment application
✓ Allocating delivery to a logistics partner
✓ An upselling application that uses data analytics and machine learning
(Figure: microservices architecture)
• DevOps, on the other hand, is a set of practices that emphasize collaboration and
communication between development and operations teams to automate and streamline the
software delivery process. DevOps aims to create a culture of continuous integration and
continuous delivery (CI/CD) to enable faster and more reliable software releases.
• When used together, microservices and DevOps can enable faster development and
deployment cycles, as well as greater flexibility and scalability. By breaking down a large
application into smaller, independent services, teams can work on different components of
the application in parallel, allowing for faster development and testing. Meanwhile, DevOps
practices can enable rapid and automated deployment of these services, allowing teams to
quickly iterate and release new features.
• However, implementing microservices in a DevOps environment can also introduce new
challenges. Teams must carefully manage the dependencies between different microservices,
as well as ensure that each service meets the necessary quality and performance standards.
Additionally, implementing DevOps practices requires a significant investment in
automation and tooling, which can be challenging for smaller organizations or teams with
limited resources.
Key Features of Microservices:
The following are the key features of microservices:
✓ Loosely Coupled: Each service is independent, so changes to one service won’t break others.
✓ High Cohesion: Every microservice focuses on a single, specific task (like payment processing
or user management).
✓ Independent Deployment: Developers can deploy updates to one service without touching the
rest of the system.
✓ Communication via APIs: Services interact with each other through lightweight protocols, such
as RESTful APIs or gRPC (a small example appears after this list).
✓ Containerization: Microservices are usually deployed in containers (like Docker) to ensure
consistency across different environments.
✓ Managed by Orchestration Tools: Tools like Kubernetes manage the deployment, scaling, and
coordination of services, ensuring the application runs smoothly.
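To illustrate communication via APIs, the sketch below shows one service calling another over HTTP using the JDK's built-in HttpClient. The payment-service host name, the /payments path, and the JSON payload are invented for the example and do not refer to a real API.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Illustrative sketch of "communication via APIs": an order service calling
// a payment service over a lightweight RESTful interface. Host, path, and
// payload are placeholders.
public class PaymentClient {

    private static final HttpClient client = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        String json = "{\"orderId\": \"42\", \"amount\": 199.99}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://payment-service:8080/payments")) // placeholder
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println("Payment service replied: "
                + response.statusCode() + " " + response.body());
    }
}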
Advantages of microservices architecture: Microservices architecture presents developers and
engineers with a number of benefits that monoliths cannot provide. Here are a few of the most notable.
1. Less development effort: Smaller development teams can work in parallel on different
components to update existing functionalities. This makes it significantly easier to identify hot
services, scale them independently from the rest of the application, and improve them.
2. Improved scalability: Each service is launched and scaled independently and can be developed
in a different language or technology; because services communicate over standard interfaces,
DevOps teams can choose the most efficient tech stack for each service without worrying about
whether the stacks will work well together.
3. Independent deployment: Each microservice constituting an application needs to be a full stack.
This enables microservices to be deployed independently at any point. Since microservices are
granular in nature, development teams can work on one microservice, fix errors, then redeploy it
without redeploying the entire application.
4. Error isolation: In monolithic applications, the failure of even a small component of the overall
application can make it inaccessible. In some cases, determining the error could also be tedious.
With microservices, isolating the problem-causing component is easy since the entire application
is divided into standalone, fully functional software units. If errors occur, other non-related units
will still continue to function.
5. Integration with various tech stacks: With microservices, developers have the freedom to pick
the tech stack best suited for one particular microservice and its functions. Instead of opting for
one standardized tech stack encompassing all of an application’s functions, they have complete
control over their options.
Disadvantages of Microservice Architecture:
1. Building microservice-based web applications requires highly experienced, and therefore
expensive, resources.
2. Building microservice-based web applications requires web developers, cloud architects,
DevOps engineers, quality analysts, project managers, business analysts, product managers and
many other roles defined in a scaled agile framework.
3. Services are typically hosted using Docker and Kubernetes, which adds operational complexity.
4. Making changes to a service can become very difficult when it is widely used by other services.
DATA TIER:
The data tier in DevOps refers to the layer of the application architecture that is responsible for storing,
retrieving, and processing data. It is typically composed of databases, data warehouses, and data
processing systems that manage large amounts of structured and unstructured data.
The data tier consists of a database and a program for managing read and write access to a database. This
tier may also be referred to as the storage tier and can be hosted on-premises or in the cloud.
The data tier is an important component of any software system and is critical to the success of DevOps.
Effective management of the data tier is essential for ensuring data availability, reliability, and
scalability.
In DevOps, the data tier is considered an important aspect of the overall application architecture and is
typically managed as part of the DevOps process. This includes:
1. Data management and migration: Ensuring that data is properly managed and migrated as part
of the software delivery pipeline.
2. Data backup and recovery: Implementing data backup and recovery strategies to ensure that
data can be recovered in case of failures or disruptions (a small backup sketch appears after this
list).
3. Data security: Implementing data security measures to protect sensitive information and comply
with regulations.
4. Data performance optimization: Optimizing data performance to ensure that applications and
services perform well, even with large amounts of data.
5. Data integration: Integrating data from multiple sources to provide a unified view of data and
support business decisions.
By integrating data management into the DevOps process, teams can ensure that data is properly
managed and protected, and that data-driven applications and services perform well and deliver value to
customers.
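As a small example of item 2, the sketch below automates a pre-migration backup by invoking PostgreSQL's pg_dump from a pipeline step. The host, database name, role, and the BACKUP_PASSWORD environment variable are placeholder assumptions; in practice the credentials would come from a secret store rather than source code.

import java.io.IOException;
import java.time.LocalDate;
import java.util.List;

// Hedged sketch of automating a backup before a migration by invoking
// PostgreSQL's pg_dump as an external process. Host, database, and role
// are placeholders; pg_dump must be installed on the machine running this.
public class NightlyBackup {

    public static void main(String[] args) throws IOException, InterruptedException {
        String outFile = "appdb-" + LocalDate.now() + ".sql";

        ProcessBuilder pb = new ProcessBuilder(List.of(
                "pg_dump",
                "-h", "db-host",      // placeholder host
                "-U", "backup_user",  // placeholder role
                "-f", outFile,
                "appdb"));            // placeholder database name

        // Pass the password via the environment, if one was provided.
        String password = System.getenv("BACKUP_PASSWORD");
        if (password != null) {
            pb.environment().put("PGPASSWORD", password);
        }
        pb.inheritIO(); // show pg_dump output in the pipeline log

        int exitCode = pb.start().waitFor();
        if (exitCode != 0) {
            throw new IllegalStateException("Backup failed, exit code " + exitCode);
        }
        System.out.println("Backup written to " + outFile);
    }
}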
Importance of Planning the Data Tier: It's essential to carefully design the data tier to ensure it meets
the overall scalability, availability, and performance requirements of the application. Consider the
following factors for planning the data tier:
✓ Database Technology: Selecting the right database type (relational vs. NoSQL) based on the
specific needs of each service.
✓ Performance Requirements: Ensuring that the data layer can handle the expected load and data
access patterns.
✓ Availability: Implementing strategies for data backup, replication, and recovery to prevent data
loss.
When it comes to the data tier in a microservices architecture:
• In a microservices architecture, each microservice is designed to perform a specific business
function or service, and is responsible for its own data storage and management. This means that
the data tier in a microservices architecture is decentralized, with each microservice having its
own data storage and management system.
• Decentralizing the data tier in this way offers a number of benefits. First, it allows for greater
flexibility in terms of data storage and management, as each microservice can choose the best
data storage and management system for its specific needs. This can include both traditional
relational databases, as well as newer NoSQL databases and other data storage solutions.
• Additionally, decentralizing the data tier can help to improve scalability and performance. By
distributing the data across multiple microservices, each microservice can be optimized for
performance and scalability, allowing the system as a whole to handle higher volumes of traffic
and data.
• However, managing data in a decentralized data tier can also present some challenges. For
example, ensuring data consistency and maintaining data integrity can become more complex
when data is distributed across multiple microservices. Synchronizing data between
microservices can become complicated. There is no shared database to ensure that all services
are working with the same data. Organizations need to carefully design their data architecture
and implement appropriate data management strategies to ensure that data is accurate and
consistent across the entire system.
• In summary, in a microservices architecture, the data tier is decentralized, with each microservice
responsible for its own data storage and management. This approach offers benefits in terms of
flexibility, scalability, and performance, but requires careful planning and management to ensure
data consistency and integrity.
There are several key considerations that DevOps teams need to keep in mind:
• Data storage: In a microservices architecture, each service may have its own database, which
can lead to data redundancy and inconsistency. To address this, DevOps teams can use techniques
such as data replication, event-driven architectures, and data aggregation to ensure that data is
consistent and up-to-date across all services.
• Data access: In a microservices architecture, different services may require access to the same
data, which can lead to data silos and duplication. To address this, DevOps teams can use
techniques such as API gateways, data virtualization, and service meshes to provide secure and
efficient access to data across all services.
• Data security: In a microservices architecture, each service may have its own security model,
which can make it difficult to ensure consistent and effective data security across all services. To
address this, DevOps teams can use techniques such as identity and access management, data
encryption, and security testing to ensure that data is protected throughout the system.
By considering these and other factors when designing and implementing the data tier in a microservices
architecture, DevOps teams can create systems that are more scalable, flexible, and resilient, and that
can support the needs of their organization over time.
Key Features of Microservices and the Data Tier: The following are the key features of microservices
with respect to the data tier:
✓ Independent Databases: Each microservice typically has its own database. This allows services
to be developed, deployed, and scaled independently, and ensures that the data storage can be
optimized for the specific needs of that service. For example, a User Service might use a
relational database (like PostgreSQL) to manage user accounts, while a Product Service might
use a NoSQL database (like MongoDB) to handle product catalog data, which is more flexible
for handling various product attributes.
✓ Flexibility and Scalability: By allowing each microservice to choose its own data store,
organizations can select the best database technology suited for each service's requirements. This
flexibility enhances overall system scalability. For example, a Recommendation Service could
use an in-memory data store (like Redis) for fast access to frequently requested data, while an
Order Service can rely on a durable, transactional database.
CONWAY'S LAW:
• Conway's Law is a principle first introduced by computer scientist Melvin Conway in 1968. It
states that the structure of the organization that designs a piece of software ends up being copied
in the structure of the software itself.
• For example, if different teams in a company don’t communicate well, their software parts may
also not work smoothly together.
• The three-tier pattern, for instance, mirrors the way many organizations’ IT departments are
structured:
✓ The database administrator team, or DBA team for short
✓ The backend developer team
✓ The frontend developer team
✓ The operations team
Well, that makes four teams, but we can see the resemblance clearly between the architectural
pattern and the organization.
• The primary goal of DevOps is to bring different roles together, preferably in cross-functional
teams. If Conway's Law holds true, the organization of such teams would be mirrored in their
designs. The microservice pattern happens to mirror a cross-functional team quite closely.
• For example, if an organization is divided into separate functional teams (such as a team for
front-end development, a team for back-end development, and a team for database management),
the resulting software system is likely to reflect this structure, with separate components for each
area of functionality.
DEVOPS, ARCHITECTURE, AND RESILIENCE:
• Integration Points and Failures: We want to be able to deploy new code quickly, but we also
want our software to be reliable. Microservices have more integration points between systems
and therefore a higher possibility of failure than monolithic systems.
• Importance of Automated Testing: Automated testing is very important with DevOps so that
the changes we deploy are of good quality and can be relied upon. This is, however, not a solution
to the problem of services that suddenly stop working for other reasons. Since we have more
running services with the microservice pattern, it is statistically more likely for a service to fail.
• Monitoring and Response: We can partially mitigate this problem by making an effort to
monitor the services and take appropriate action when something fails. This should preferably be
automated. In our customer database example, we can employ the following strategy:
✓ We use two application servers that both run our application.
✓ The application offers a special monitoring interface via JsonRest.
✓ A monitoring daemon periodically polls this monitoring interface.
✓ If a server stops working, the load balancer is reconfigured such that the offending server
is taken out of the server pool.
• Service-Specific Monitoring: General system monitoring (like checking CPU, memory, or disk
space) may not be enough to ensure a service is functioning properly. For example, a service
might appear to be running but be unable to access its database due to a configuration error. A
service-specific monitoring interface can check database connectivity and report its status (a
minimal sketch appears after this list).
• Standardization of Monitoring Interfaces: It is beneficial for an organization to standardize
the format of these health checks to ensure consistency across different services. This helps in
effectively using the available monitoring tools.
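A service-specific monitoring interface of the kind described above can be very small. The sketch below exposes a /health endpoint with the JDK's built-in HTTP server and reports, as JSON, whether the service can reach its database. The port, JDBC URL, and credentials are placeholders, and a real service would typically use its framework's health-check support instead; the monitoring daemon (or the load balancer's health check) polls this URL and removes the server from the pool when it stops reporting "UP".

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.sql.Connection;
import java.sql.DriverManager;

// Sketch of a service-specific monitoring interface: a /health endpoint
// that reports whether the service can reach its database.
public class HealthEndpoint {

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);

        server.createContext("/health", exchange -> {
            String status = databaseReachable() ? "UP" : "DOWN";
            byte[] body = ("{\"status\":\"" + status + "\"}")
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders("UP".equals(status) ? 200 : 503, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });

        server.start();
        System.out.println("Health check available at http://localhost:8081/health");
    }

    private static boolean databaseReachable() {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/customerdb", "app", "secret")) { // placeholder
            return conn.isValid(2); // 2-second timeout
        } catch (Exception e) {
            return false;
        }
    }
}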
In summary, DevOps, architecture, and resilience are all critical components of building reliable and
high-performing software systems. By applying DevOps principles to architecture design, teams can
build more resilient and adaptable systems that can respond quickly to changing business needs and
unexpected events.
• DevOps: DevOps is a set of practices that emphasizes collaboration and communication between
development and operations teams to automate and streamline the software delivery process. The
goal is to enable faster and more reliable software releases through continuous integration and
continuous delivery (CI/CD) practices.
• Architecture: Architecture refers to the design and structure of a software system. A well-
designed architecture can improve the system's scalability, resilience, and adaptability to
changing business needs. For example, a microservices architecture can enable teams to work on
different components of the system in parallel, allowing for faster development and testing.
• Resilience: Resilience refers to the ability of a software system to continue functioning even in
the face of unexpected events or failures. A resilient system can recover quickly from failures
and minimize the impact on end-users. DevOps practices and architecture design can both
contribute to resilience by enabling teams to identify and respond to failures quickly, and by
building in redundancy and failover mechanisms to ensure that the system can continue
functioning even if certain components fail.