UNIT 13 SOFTWARE PROCESS IMPROVEMENT
Structure
13.0 Introduction
13.1 Objectives
13.2 Software Engineering Institute Capability Maturity Model Integrated (SEI CMMi)
13.3 Software Process Optimization through First Time Right Framework
13.3.1 Requirements Related Optimizations
13.3.2 Architecture and Design Related Optimizations
13.3.3 Security Related Optimizations
13.3.4 Testing Related Optimizations
13.3.5 Development Related Optimizations
13.3.6 DevOps Related Optimizations
13.3.7 Infrastructure Related Optimizations
13.3.8 Project Management Related Optimizations
13.3.9 Governance Related Optimizations
13.3.10 Tools
13.4 Software Process Optimization for Validation Phase
13.4.1 Performance Testing Tools
13.4.2 Performance Monitoring and Notification
13.4.3 Performance Monitoring Tools
13.5 SEI CMM Process Examples
13.5.1 Requirements Development
13.5.2 Technical Solution
13.5.3 Requirements Management
13.5.4 Product Integration
13.5.5 Project Planning
13.6 Summary
13.7 Solutions/Answers
13.8 Further Readings
13.0 INTRODUCTION
13.1 OBJECTIVES

13.2 SOFTWARE ENGINEERING INSTITUTE CAPABILITY MATURITY MODEL INTEGRATED (SEI CMMi)
We have depicted the various levels of the SEI CMMi in Figure 13.2. The figure elaborates all the SEI CMMi levels and the key focus areas and processes at each of the levels. Level 1, the "Initial" stage, is the as-is stage of the organization. Level 2 is called "Managed", wherein the key focus area is basic project management. At this stage, we optimize and improve the processes related to requirements management, project planning, project monitoring and control, supplier agreement management, measurement and analysis, process and product quality assurance, and configuration management.
Level 3 is the "Defined" stage, wherein we standardize the processes. We standardize the processes related to requirements development, technical solution, product integration, verification, validation, risk management, organizational process focus, organizational process definition, integrated project management, and decision analysis and resolution.
Level 4 is the "Quantitatively Managed" stage, where quantitative management is the main focus area. We aim to optimize processes such as organizational process performance and quantitative project management.
The final level, level 5, is the "Optimizing" stage, wherein we focus on continuous process improvement through processes such as organizational innovation and deployment, and causal analysis and resolution.
We look at examples for the various SEI CMMi levels in a later section.
13.3 SOFTWARE PROCESS OPTIMIZATION THROUGH FIRST TIME RIGHT FRAMEWORK

We have detailed the first time right (FTR) framework that incorporates the best practices at various phases of software engineering. As part of the FTR framework, we explain the key best practices, methods, tools and improvement techniques during the SDLC phases, including requirements gathering, architecture and design, security, testing, development, DevOps, infrastructure, project management and governance.
13.3.1 Requirements Related Optimizations

In many scenarios, requirement gaps snowball into production issues. Hence it is crucial to plug all the gaps in the requirements stage. We propose the below-given optimizations during the requirements elaboration stage:
Requirement traceability matrix: The business analysts and the project manager should jointly own the requirement traceability matrix, which maps each use case/Jira story to the corresponding test case, code artefact and release details to ensure that there are no gaps in the delivery (see the traceability-check sketch after this list).
Metrics and SLA definition and sign-off: We need to quantify the functional and non-functional requirements with accurate metrics and SLAs. For instance, a requirement like "performance should be good" is vague and ambiguous; we should formally quantify it as "the total page load time for the home page should be less than 2 seconds with a maximum concurrent user load of 100 users in the North American geography". We need to define the metrics related to application response time, availability, scalability and security, and get a formal sign-off from the business.
NFR definition and sign-off: All the non-functional requirements such as
security, performance, scalability, accessibility, multi-lingual capability and such
should be properly defined and signed off by the business.
Business stakeholder engagement: The business stakeholders should be actively engaged throughout the requirement elaboration phase. Without active stakeholder engagement we risk missing requirements and facing challenges in getting the sign-off.
Prototype demos: We should prepare the mockups/prototypes iteratively and do frequent demos to showcase the design, user journeys and multi-device experience. This helps us pro-actively solicit feedback from all the stakeholders and incorporate it.
Design thinking approach: Leverage the design thinking approach to empathize with the user and iteratively define the optimal solution while exploring the alternatives. Use ideation and prototyping sessions to test the solutions.
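To make the traceability matrix actionable, a minimal Python sketch of an automated gap check is given below. The in-memory matrix, story IDs and field names are illustrative assumptions; in practice this data would be pulled from Jira or ALM tooling.

requirement_traceability = {
    "STORY-101": {"test_cases": ["TC-11", "TC-12"],
                  "code_artefacts": ["LoginService.java"],
                  "release": "R1.0"},
    "STORY-102": {"test_cases": [],
                  "code_artefacts": ["CartService.java"],
                  "release": None},
}

def find_traceability_gaps(matrix):
    # Return stories that lack test cases, code artefacts or a release mapping.
    gaps = {}
    for story, links in matrix.items():
        missing = [field for field in ("test_cases", "code_artefacts", "release")
                   if not links.get(field)]
        if missing:
            gaps[story] = missing
    return gaps

print(find_traceability_gaps(requirement_traceability))
# Prints: {'STORY-102': ['test_cases', 'release']}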
13.3.2 Architecture and Design Related Optimizations

During the architecture and design phase, we have to ensure the following for delivering the first time right design and architecture:
Patterns and best practices based design: The architect has to identify all the application architecture patterns, industry best practices and design patterns applicable to the solution. As part of this exercise, the architect has to identify the various layers, components and their responsibilities.
Tools, frameworks, package evaluation and fitment: The architect has to evaluate the market-leading products, frameworks and open source libraries that best fit the requirements. Leveraging proven frameworks, tools and open source libraries greatly contributes to improved productivity, turnaround time and quality. For example, as part of the architecture phase, the architect has to critically evaluate technologies such as Angular vs React vs Vue for the UI; Spring Boot vs serverless services; SQL vs NoSQL; mobile web vs hybrid app vs native app; PaaS vs IaaS vs SaaS; and so on.
Standards and architecture principles definition: The architect has to define the applicable standards for the solution and for the business domain. At this stage the architect has to define the architecture principles such as headless design, stateless integration, token based security etc. A few application domains need to implement specific standards for regulatory compliance (such as HIPAA for the health care domain).
Optimal NFR and integration design: The architect has to design the application to satisfy all the specified NFRs (performance, scalability, availability and such) and the integration SLAs.
Identification of automation tools: The architect has to identify all the
automation tools that can be used for the project. This includes automation tools
for code review, IDE, functional testing and such.
Feasibility validation PoCs: For complex requirements, we need to carry out the feasibility assessment through proof-of-concepts (PoCs). This helps us to finalize the tool, technology and integration method, and to assess the scalability and performance of the chosen approach (a sample PoC benchmark sketch is given after this list).
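A feasibility or performance PoC can often start as a small benchmark harness. The sketch below, using only the Python standard library, measures the average response time of a candidate endpoint against an assumed 2-second SLA; the URL and sample count are illustrative assumptions.

import time
import urllib.request

def average_response_time(url, samples=5):
    # Fetch the URL repeatedly and return the average response time in seconds.
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as response:
            response.read()
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

if __name__ == "__main__":
    avg = average_response_time("https://example.com/")  # hypothetical endpoint
    print(f"Average response time: {avg:.3f}s (target SLA: < 2.0s)")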
As part of process improvement and optimization, we need to track the trends along various dimensions such as architecture shift, technology shift, integration shift and others, and mark the recommended tenets for the solution. We have depicted the sample architecture trends in Figure 13.4.
13.3.3 Security Related Optimizations

On the security front, we need to continuously monitor and audit critical security events (such as failed login attempts, password change events, impersonation events, role change events etc.) and set up a notification infrastructure for the same, as sketched below. We should also carry out penetration testing and vulnerability scanning iteratively to discover security vulnerabilities early.
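As a minimal illustration of security event monitoring, the Python sketch below counts failed login attempts per user and raises an alert above a threshold. The log record format ("FAILED_LOGIN <user>") and the threshold are assumptions for illustration; a real setup would parse actual audit logs and route alerts to a notification channel.

from collections import Counter

FAILED_LOGIN_THRESHOLD = 5  # assumed alerting threshold

def scan_auth_log(lines):
    # Count failed login attempts per user from 'FAILED_LOGIN <user>' records.
    failures = Counter()
    for line in lines:
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "FAILED_LOGIN":
            failures[parts[1]] += 1
    return [user for user, count in failures.items()
            if count >= FAILED_LOGIN_THRESHOLD]

sample_log = ["FAILED_LOGIN alice"] * 6 + ["FAILED_LOGIN bob"]
for user in scan_auth_log(sample_log):
    print(f"ALERT: repeated failed logins for account '{user}'")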
13.3.4 Testing Related Optimizations

Comprehensive validation is one of the key success factors for delivering the first time right deliverable. The validation team needs to follow the below-given optimizations:
Automated testing: The testing team has to use automated tools such as Apache JMeter or Selenium to automate the regression scenarios and other functional test scenarios and improve the overall productivity (see the Selenium sketch after this list).
Continuous iterative testing: The testing has to be a continuous and iterative process across all sprints. This helps us to discover defects at an early stage.
Testing metrics definition: The testing team has to define the quality metrics
such as defect rate, defect slippage rate, defect density and track the metrics with
each sprint.
Dashboard based quality monitoring: The test lead should pro-actively monitor all the testing metrics in a dashboard and notify the project manager in case of any critical violations.
Sprint-wise early UAT: We have noticed that involving the business stakeholders in the UAT phase with each sprint delivery helps us to uncover any business-related gaps early. Additionally, the team can incorporate the feedback in the next sprint.
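The sketch below shows a minimal automated regression check using the Selenium Python bindings (pip install selenium). The target URL, the expected page title and the availability of Chrome with its driver are illustrative assumptions.

from selenium import webdriver

def test_home_page_loads():
    # Smoke test: the home page loads and exposes the expected title.
    driver = webdriver.Chrome()  # assumes Chrome and its driver are available
    try:
        driver.get("https://example.com/")  # hypothetical application URL
        assert "Example" in driver.title, f"Unexpected title: {driver.title}"
    finally:
        driver.quit()

if __name__ == "__main__":
    test_home_page_loads()
    print("Home page smoke test passed")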
13.3.5 Development Related Optimizations

A big chunk of the responsibility for a first time right deliverable lies with the development team. We design and implement the below-given optimizations for the development team:
Code checklist and guidelines: Developers should use the code guidelines, naming conventions and coding best practices in a checklist format. Before the start of the project, the architect, along with the project manager, has to create the checklists, naming conventions, best practices and such. Some of the most common code checklists are as follows:
o Language specific coding best practices
o Performance checklist
o Security coding checklist
o Code naming conventions
o Design checklist
Automated and Peer Review process: Developers should use automated static
code analyzers such as PMD/SonarQube to ensure the code quality. The
developers should also get their code reviewed by their peers and leads on a
frequent basis.
Code reusability: Developers should actively explore the code reusability
opportunities in the following order:
o Look for reusable libraries, code modules, components offered by the
underlying platform/product/framework/accelerators.
o Explore the availability of reusable libraries, code modules, components
at the organization level.
o Look for approved open source libraries that satisfy the functionality.
o If nothing is available, develop the code in modular way so that it can be
reused.
Performance driven development: Performance should not be an afterthought,
but it must be implemented from the very early stages. Developers need to pro-
actively carry out the performance testing of the integrated code to ensure that
their code is free of memory leaks, integration bottlenecks and others. We have
detailed the performance based design and web optimization framework in the
NFR section.
Optimal code coverage: Developers should ensure that their unit test cases provide more than 90% code coverage. This ensures high code quality with minimal defects. Developers should also try to use an automated tool for generating unit test cases (a minimal unit-test sketch is given at the end of this section).
Quality gating criteria: We need to define multi-level code quality gating criteria as follows (this could also serve as a code check-in checklist):
o Developer-level code quality: The developer has to use the defined
coding checklist and naming conventions to adhere to those guidelines.
The developer can also use the IDE for the same.
o Automated Local code quality analyzer: The developer has to use the
static code analyzers such as PMD, SonarQube to analyze all coding
issues and fix the major and critical issues.
o Manual code review: The developers can request for peer review and
lead review of the code. Once all of the above is completed, the
developers can check in the code to the source control system.
o Integrated code review: We could set up a Jenkins job with SonarQube to continuously review the integrated code and generate the report. The Jenkins job can notify the developers and project managers of major and critical violations.
Automated unit testability: Developers should use automated unit test generators (such as EvoSuite or Randoop) to improve the quality of the code and the productivity of the developer.
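A minimal unit-test sketch illustrating the coverage goal is given below. It assumes the pytest and coverage packages; running "coverage run -m pytest" followed by "coverage report" shows the percentage against the 90% target. The discount function is a hypothetical example, not from the unit.

import pytest

def apply_discount(price, percent):
    # Hypothetical production function under test.
    if price < 0 or not 0 <= percent <= 100:
        raise ValueError("invalid price or discount percent")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_happy_path():
    assert apply_discount(100.0, 10) == 90.0

def test_apply_discount_rejects_bad_input():
    # Covering the error branch keeps the coverage figure high.
    with pytest.raises(ValueError):
        apply_discount(-1, 10)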
13.3.6 DevOps Related Optimizations

DevOps defines a set of tools and processes to ensure the proper delivery and release of the application. Given below are the key optimizations from a DevOps standpoint:
Release management: The DevOps setup provides an opportunity for setting up an automated release management system.
Automated testing: We can configure unit test jobs and functional testing jobs (such as Selenium) to test the code from the source control system.
Automated deployment: We can use Jenkins pipelines and deployment jobs to automate the deployment.
Continuous build: We can enable continuous builds to catch any errors in the integrated code.
Deployment pipelines: The pipelines enable and automate the code deployment.
Source control management: As part of the DevOps process we also define the source control management processes related to pull requests, approvals, code check-in, code merges and such.
Automated reporting and project health notification: We could configure the project health notification and reporting features to trigger a notification in case of any SLA violation (see the Jenkins health-check sketch below).
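As a sketch of automated project health notification, the Python snippet below polls the Jenkins JSON API for the result of a job's last build. The Jenkins URL and job name are hypothetical, and a real setup would route the alert to email, pager or an incident management system.

import json
import urllib.request

JENKINS_JOB_URL = "https://jenkins.example.com/job/my-app"  # hypothetical

def last_build_status(job_url):
    # Return the result of the job's last build, e.g. 'SUCCESS' or 'FAILURE'.
    with urllib.request.urlopen(f"{job_url}/lastBuild/api/json") as response:
        return json.load(response).get("result")

if __name__ == "__main__":
    status = last_build_status(JENKINS_JOB_URL)
    if status != "SUCCESS":
        # In practice this would go to email, pager or incident tooling.
        print(f"ALERT: last build finished with status {status}")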
13.3.7 Infrastructure Related Optimizations

A right-sized infrastructure is critical for optimal application delivery. In order to deliver the first time right solution, the infrastructure related optimizations are given below:
Health check monitoring setup: We need to set up a pro-active healthcheck/heartbeat monitoring infrastructure to continuously monitor the availability of the servers (a heartbeat-monitoring sketch is given at the end of this section).
Proper capacity sizing: All the server and network capacity should be sized
based on the user load and related non-functional requirements (such as
scalability, availability, performance). Based on the availability requirements, we
should also setup the disaster recovery (DR) and the corresponding sync jobs.
Automated alerts and notification: We should be able to configure notification
triggers and the monitoring setup should notify the administrators in case of SLA
violation.
Monitoring dashboard: The monitoring dashboard should provide a snapshot of
all key parameters such as availability percentage, performance,
CPU/memory/network utilization, request rate and such.
Availability reports: The monitoring setup should be able to generate on-demand monitoring reports for various infrastructure-related parameters.
SLA based real time monitoring: We should also setup the real-time application
monitoring infrastructure across various geographies to understand the
performance and availability issues.
Single Point of Failure (SPOF) avoidance: Ensure that none of the systems in the end-to-end request processing pipeline forms a bottleneck leading to a single point of failure (SPOF). We can ensure high availability through cluster setup, redundancy setup, DR setup, regular backups and other means.
Additionally, we should set up automatic elastic scalability, optimal load balancing and CDN based caching, based on the application requirements.
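A minimal heartbeat-monitoring sketch using the Python standard library is given below. The endpoint list, polling interval and timeout are illustrative assumptions; production setups would use a dedicated monitoring tool and a proper alerting channel.

import time
import urllib.request

ENDPOINTS = ["https://app.example.com/health"]  # hypothetical health URLs
POLL_INTERVAL_SECONDS = 60

def is_healthy(url):
    # Return True if the endpoint answers with HTTP 200 within the timeout.
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            return response.status == 200
    except OSError:  # covers connection errors and timeouts
        return False

while True:  # run as a simple monitoring daemon
    for endpoint in ENDPOINTS:
        if not is_healthy(endpoint):
            print(f"ALERT: {endpoint} failed its health check")
    time.sleep(POLL_INTERVAL_SECONDS)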
13.3.10 Tools
We have listed in Table 13.1 the various tools that can be used for implementing the first time right (FTR) framework.
Table 13.1: FTR Tools

Tool Category | Open Source/Commercial Tool(s)
Web page analysis tools (HTML analysis, performance benchmarking, improvement guidelines) | Yahoo YSlow, Google PageSpeed, HTTPWatch, Dynatrace AJAX Edition
Page development tools (analysis of page load times, asset size, asset load times and such) | Firebug, Google Chrome Developer toolbar, Fiddler, HTTP Archive, CSSLint, JSLint, W3 CSS Validator, W3 HTML Validator
Asset merging and minification tools (JS/CSS minification) | Yahoo UI (YUI) minifier, JSMini, JSCompress
Page performance testing tools (load simulation) | JMeter, LoadUI, Grinder, Selenium
Image compression tools | PNGCrush, Smush.it, ImgMin, JPEGmini
Web server plugins (for automatic compression, minification, merging, placement, caching etc.) | mod_pagespeed, mod_cache, mod_spdy, mod_expires, mod_gzip
Website performance testing | GTmetrix, Pingdom
Synthetic monitoring (transaction simulation and performance statistics) | WebPagetest, Dynatrace Synthetic Monitoring
CDN | Akamai, CloudFlare, KeyCDN
Web analytics (track user behavior, performance reporting) | Google Analytics, Omniture, Piwik
CSS optimization tools | CSS Sprites, SpriteMe, SpritePad
Bottleneck analysis (dependency and bottleneck analysis) | WebProphet, WProf
Real User Monitoring (RUM) (monitoring and bottleneck analysis) | New Relic, Dynatrace, Gomez
Network analysis (network traffic, HTTP headers, request/responses, protocol analysis) | Wireshark, Charles Proxy
Application Performance Monitoring (APM) (layer-wise monitoring of application code) | New Relic, Dynatrace Monitoring, Nagios
13.4 SOFTWARE PROCESS OPTIMIZATION FOR VALIDATION PHASE

In this section we discuss the software process optimization for the performance validation phase. A robust web performance test methodology defines the sequence and activities for the various performance testing phases. We have detailed the performance test methodology in Figure 13.7.
During the requirement analysis phase, the performance testing team gathers the complete performance test needs and documents the requirements. During this phase, the testing team assesses the architecture and design documents. The team also analyzes the business critical and performance critical transactions. The performance validation team gets sign-off on the performance-related NFRs (such as peak user load, peak content volume and such) and the performance-related SLAs (timing metrics for the defined NFRs) during this phase. The team identifies the key performance indicators (KPIs), such as average page response time across all geographies, availability metrics, performance metrics, performance SLA adherence rate and such. Some of the common performance related questions are given below:
Which pages receive the highest traffic?
How many pages have acceptable performance against the configured performance thresholds?
Which pages have bad performance?
What is the caching strategy used?
During the planning and design phase, the performance testing team breaks down the scope into business transactions that need separate performance scripts. We also characterize the workload model based on the load numbers we have gathered. The performance testing team creates the overall performance test plan. The performance environment is set up after all the performance testing tools are finalized. The performance testing team then develops the performance test scripts.
During the execution phase, the performance test team executes all the scripts and records the various timing metrics defined in the requirements analysis phase. Various performance tests such as load testing, stress testing and endurance testing are carried out in this phase. Automation scripts are used to automatically execute the performance tests after each major release. The results are then analyzed and documented, as sketched below.
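The result-analysis step typically reduces the recorded response times to the timing metrics defined earlier. The Python sketch below computes the average and the 90th percentile from a set of sample timings (illustrative values) and checks them against an assumed 2-second SLA.

import statistics

# Illustrative response times (in milliseconds) recorded during a test run.
response_times_ms = [850, 920, 1010, 1200, 1430, 990, 870, 1110, 1560, 940]

average = statistics.mean(response_times_ms)
p90 = statistics.quantiles(response_times_ms, n=10)[-1]  # ~90th percentile

print(f"Average: {average:.0f} ms, 90th percentile: {p90:.0f} ms")
print("SLA met" if p90 < 2000 else "SLA violated")  # assumed 2-second SLA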
Reports are published in the reporting and recommendation phase. The performance testing team recommends a "go" or "no-go" decision based on the overall performance and the adherence of the performance results to the defined SLAs. Performance testing and reporting are done iteratively for the various sprints of the release.
13.4.1 Performance Testing Tools

Table 13.2 lists the key tools that can be used for web performance testing.
13.4.2 Performance Monitoring and Notification

A robust monitoring and alerting setup should be able to capture system metrics (CPU, memory), log metrics (system logs, application logs), errors, performance and such. The monitoring setup should also be flexible enough to monitor various systems such as the Linux OS, database servers and stand-alone servers. The alert and notification setup should be flexible enough to send notifications to various channels such as email, pager, incident management system and such. We should be able to configure the performance thresholds and resource utilization thresholds that trigger the notifications; a minimal threshold-check sketch is given below.
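The sketch below uses the psutil package (pip install psutil) to check resource utilization against configured thresholds. The CPU and memory thresholds are illustrative assumptions, and the print statement stands in for a real notification channel.

import psutil

CPU_THRESHOLD_PERCENT = 80      # assumed threshold
MEMORY_THRESHOLD_PERCENT = 85   # assumed threshold

def check_resource_thresholds():
    # Return alert messages for any resource breaching its threshold.
    alerts = []
    cpu = psutil.cpu_percent(interval=1)
    memory = psutil.virtual_memory().percent
    if cpu > CPU_THRESHOLD_PERCENT:
        alerts.append(f"CPU utilization at {cpu:.0f}%")
    if memory > MEMORY_THRESHOLD_PERCENT:
        alerts.append(f"Memory utilization at {memory:.0f}%")
    return alerts

for alert in check_resource_thresholds():
    print(f"ALERT: {alert}")  # stand-in for email/pager/incident notification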
A comprehensive monitoring tool should support these features:
Core Monitoring capabilities
o Resource (CPU, Memory) monitoring
o Network (Router, Switch, Firewall etc.) Monitoring
o Server Monitoring
o Windows Event Log Monitoring
o Applications Monitoring
o Virtual instances
Application Server monitoring capabilities
o Database Monitoring
o Web Page, Web Server/ Web Services Monitoring
o Middleware Monitoring
o Custom Application Monitoring
Reporting Dashboard
o Business service management views
o Comprehensive dashboard
o Real Time trends and availability of devices
o Events and Correlated Alarms
Reporting
o Standard daily, weekly, monthly, quarterly, and yearly reports
13.4.3 Performance Monitoring Tools

We could leverage many open source and commercial tools for performance monitoring. Table 13.3 lists the popular performance monitoring tools.
13.5 SEI CMM PROCESS EXAMPLES

In this section we look at improving the processes at the various SEI CMMi levels. We have taken the focus areas and the best practices for each of the processes.
13.5.3 Requirements Management

This is part of the level 2 "Managed" stage, wherein the expectation is to manage the requirements and resolve any inconsistencies across the various requirements.
13.5.5 Project Planning

This is part of the level 2 "Managed" stage, wherein the expectation is to define an accurate project plan for managing the project.
13.6 SUMMARY
In this unit, we discussed the main features of the SEI CMMi. We looked at the five levels of the SEI CMMi methodology. Level 1 is called "Initial". Level 2 is termed "Managed", where the main focus area is basic project management. Process standardization is the focus area of level 3, "Defined". The focus area of level 4, "Quantitatively Managed", is quantitative management, and continuous process improvement is the focus area of level 5, "Optimizing". We then discussed the first time right (FTR) framework, which covers optimizations in the areas of requirements, architecture and design, security, testing, development, DevOps, infrastructure, project management and governance. We also looked at the tools used in the FTR framework. We then took a deep dive into the software process optimization for the performance validation phase. Finally, we looked at five examples of SEI CMMi process optimization related to requirements development, technical solution, product integration, project planning and requirements management.
13.7 SOLUTIONS/ANSWERS
13.8 FURTHER READINGS

Shivakumar, S. K. (2018). Complete Guide to Digital Project Management. Apress.

CMMI Product Team (2002). CMMI for Systems Engineering/Software Engineering/Integrated Product and Process Development/Supplier Sourcing, Version 1.1, Continuous Representation. CMU/SEI.

Conradi, R., & Fuggetta, A. (2002). Improving software process improvement. IEEE Software, 19(4), 92-99.
[Link]
[Link]