
BUILDING IN SECURITY AT AGILE SPEED

JAMES RANSOME
BROOK S. E. SCHOENFIELD
Material in this book is taken from the following books written by one or both of the authors, with permission from Taylor &
Francis Group:

Core Software Security: Security at the Source / ISBN: 9781466560956 / 2013


Securing Systems: Applied Security Architecture and Threat Models / ISBN: 9781482233971 / 2015
Secrets of a Cyber Security Architect / ISBN: 9781498741996 / 2019

CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2021 by Taylor & Francis Group, LLC


CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works

Printed on acid-free paper

International Standard Book Number-13:


978-0-367-43326-0 (Hardback)
978-1-032-01005-2 (Paperback)
978-1-003-00245-1 (eBook)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to
publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials
or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged, please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any
form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming,
and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400.
CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have
been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com

and the CRC Press Web site at


http://www.crcpress.com
Dedications

This book is dedicated to my parents, who taught me how to think outside the box, and to my teachers
at Almond Avenue School, who taught me to both focus and multi-task at the same time, all at a very
young age. This has served me well in this journey we call life.
— James Ransome

A succeeding generation (to me) of software security specialists are well into their careers, people like
(to name just a sample) Luis Servin, Damilare Fagbemi, Sandeep Kumar Singh, Subir Biswas, François
Proulx, Zoe Braiterman, Jonathan Marcil, Kim Wuyts, and including, I’m proud to say, my daughter,
Allison Schoenfield. You must carry this work forward, critique it, refine it, recreate it. It is to you, and
to those that follow whom you will teach, coach, and mentor, that I dedicate this work. Use what works;
throw out what doesn’t; reinvent as needed. However you shape and execute your version of software
security, you must continue; our digital world is counting on you.
— Brook S. E. Schoenfield

Contents

Dedications v

Contents vii

Foreword by Dr. David Brumley xiii

Preface xv

Acknowledgments xvii

About the Authors xix

Chapter 1: Setting the Stage 1


1.1 Introduction 1
1.2 Current Events 3
1.3 The State of Software Security 5
1.4 What Is Secure Software? 12
1.5 Developing an SDL Model That Can Work with Any Development
Methodology 17
1.5.1 Our Previous Secure Development Lifecycle Design and
Methodology 19
1.5.2 Mapping the Security Development Lifecycle (SDL) to the
Software Development Life Cycle (SDLC) 21
1.5.3 Software Development Methodologies 21
1.6 The Progression from Waterfall and Agile to Scrum: A Management
Perspective 30
1.6.1 DevOps and CI/CD 34
1.6.2 Cloud Services 36
1.6.3 Platform Services 36
1.6.4 Automation 37
1.6.5 General Testing and Quality Assurance 37
1.6.6 Security Testing 38


1.6.7 DevSecOps 38
1.6.8 Education 38
1.6.9 Architects and Principal Engineers 39
1.6.10 Pulling It All Together Using Visual Analogies 39
1.6.11 DevOps Best Practices 44
1.6.12 Optimizing Your Team Size 45
1.7 Chapter Summary 46

Chapter 2: Software Development Security Management in an Agile World 47


2.1 Introduction 47
2.2 Building and Managing the DevOps Software Security Organization 48
2.2.1 Use of the Term DevSecOps 48
2.2.2 Product Security Organizational Structure 48
2.2.3 Software Security Program Management 55
2.2.4 Software Security Organizational Realities and Leverage 56
2.2.5 Software Security Organizational and People Management Tips 58
2.3 Security Tools, Automation, and Vendor Management 59
2.3.1 Security Tools and Automation 60
2.3.2 DevOps Tools: Going Beyond the SDL 70
2.3.3 Vendor Management 70
2.4 DevOps Security Incident Response 76
2.4.1 Internal Response to Defects and Security Vulnerabilities in
Your Source Code 77
2.4.2 External Response to Security Vulnerabilities Discovered in
Your Product Source Code 78
2.4.3 Post-Release PSIRT Response 78
2.4.4 Optimizing Post-Release Third-Party Response 81
2.4.5 Key Success Factors 82
2.5 Security Training Management 83
2.6 Security Budget Management 86
2.6.1 Preparing and Delivering the Budget Message 86
2.6.2 Other Things to Consider When Preparing Your Budget 87
2.7 Security Governance, Risk, and Compliance (GRC) Management 88
2.7.1 SDL Coverage of Relevant Regulations, Certifications, and
Compliance Frameworks 90
2.7.2 Third-Party Reviews 91
2.7.3 Post-Release Certifications 92
2.7.4 Privacy 93
2.8 Security Metrics Management 100
2.8.1 The Importance of Metrics 100
2.8.2 SDL Specific Metrics 102
2.8.3 Additional Security Metrics Focused on Optimizing Your
DevOps Environment 103

2.9 Mergers and Acquisitions (M&A) Management 106


2.9.1 Open Source M&A Considerations 107
2.10 Legacy Code Management 107
2.11 Chapter Summary 110

Chapter 3: A Generic Security Development Lifecycle (SDL) 111


3.1 Introduction 111
3.2 Build Software Securely 120
3.2.1 Produce Secure Code 121
3.2.2 Manual Code Review 126
3.2.3 Static Analysis 128
3.2.4 Third-Party Code Assessment 132
3.2.5 Patch (Upgrade or Fix) Issues Identified in Third-Party Code 133
3.3 Determining the Right Activities for Each Project 135
3.3.1 The SDL Determining Questions 135
3.4 Architecture and Design 151
3.5 Testing 160
3.5.1 Functional Testing 161
3.5.2 Dynamic Testing 162
3.5.3 Attack and Penetration Testing 166
3.5.4 Independent Testing 168
3.6 Assess and Threat Model Build/Release/Deploy/Operate Chain 169
3.7 Agile: Sprints 169
3.8 Key Success Factors and Metrics 173
3.8.1 Secure Coding Training Program 173
3.8.2 Secure Coding Frameworks (APIs) 173
3.8.3 Manual Code Review 173
3.8.4 Independent Code Review and Testing (by Experts or
Third Parties) 174
3.8.5 Static Analysis 174
3.8.6 Risk Assessment Methodology 174
3.8.7 Integration of SDL with SDLC 174
3.8.8 Development of Architecture Talent 174
3.8.9 Metrics 175
3.9 Chapter Summary 175

Chapter 4: Secure Design through Threat Modeling 185


4.1 Threat Modeling Is Foundational 185
4.2 Secure Design Primer 188
4.3 Analysis Technique 193
4.3.1 Before the Threat Model 199
4.3.2 Pre-Analysis Knowledge 200

4.3.3 ATASM Process 203


4.3.4 Target System Discovery 206
4.4 A Short “How To” Primer 207
4.4.1 Enumerate CAV 207
4.4.2 Structure, Detail, and Abstraction 216
4.4.3 Rating Risk 219
4.4.4 Identifying Defenses 224
4.5 Threat Model Automation 233
4.6 Chapter Summary 235

Chapter 5: Enhancing Software Development Security Management in an Agile World 237

5.1 Introduction 237
5.2 Building and Managing the DevOps Software Security Organization 240
5.2.1 Continuous and Integrated Security 240
5.2.2 Security Mindset versus Dedicated Security Organization 241
5.2.3 Optimizing Security to Prevent Real-World Threats 243
5.3 Security Tools, Automation, and Vendor Management 254
5.3.1 Static Application Security Testing (SAST) 255
5.3.2 Dynamic Analysis Security Testing (DAST) 255
5.3.3 Fuzzing and Continuous Delivery 256
5.3.4 Unit and Functional Testing 256
5.3.5 Integration Testing 256
5.3.6 Automate Red Team Testing 256
5.3.7 Automate Pen Testing 256
5.3.8 Vulnerability Management 257
5.3.9 Automated Configuration Management 257
5.3.10 Software Composition Analysis 257
5.3.11 Bug Bounty Programs 258
5.3.12 Securing Your Continuous Delivery Pipeline 258
5.3.13 Vendor Management 259
5.4 DevOps Security Incident Response 260
5.4.1 Organizational Structure 260
5.4.2 Proactive Hunting 261
5.4.3 Continuous Detection and Response 261
5.4.4 Software Bill of Materials 262
5.4.5 Organizational Management 263
5.5 Security Training Management 265
5.5.1 People 265
5.5.2 Process 266
5.5.3 Technology 267
5.6 Security Budget Management 268

5.7 Security Governance, Risk, and Compliance (GRC) Management 269


5.8 Security Metrics Management 269
5.9 Mergers and Acquisitions (M&A) Management 271
5.10 Legacy Code Management 273
5.10.1 Security Issues 273
5.10.2 Legal and Compliance Issues 273
5.11 Chapter Summary 274

Chapter 6: Culture Hacking 277


6.1 Introduction 277
6.2 Culture Must Shift 278
6.3 Hack All Levels 280
6.3.1 Executive Support 280
6.3.2 Mid-Management Make or Break 281
6.3.3 Accept All Help 281
6.4 Trust Developers 282
6.5 Build a Community of Practice 285
6.6 Threat Model Training Is for Everyone 288
6.7 Audit and Security Are Not the Same Thing 290
6.8 An Organizational Management Perspective 293
6.8.1 Security Cultural Change 293
6.8.2 Security Incident Response 295
6.8.3 Security Training 296
6.8.4 Security Technical Debt (Legacy Software) 297
6.9 Summary/Conclusion 298

Appendix A: The Generic Security Development Lifecycle 301

Index 309
Foreword

Software is changing the trajectory at which the world operates. From driverless cars to cryptocurrency,
software reimagines possibilities. With software standing at the core of everything we do, we find ourselves pushing out code faster than ever. Current estimates show that there are over 111 billion lines of
new code written per year. And our fixation on rapidly developing the latest technology has positioned
security to be in the way, coming at a “cost.”
It’s time to redefine the way we view security and see security as a value, not a cost. When we trust
software, new economies are created. For example, today’s e-commerce market is possible because of
cryptographic protocols and software implementations for Transport Layer Security (TLS) to secure
our browsers’ connections. As a result, today’s online economy is the size of Spain’s gross domestic
product (GDP). Security is an enabler for innovation. You cannot have innovation without security.
Today, however, we’re finding ourselves paying the price for neglecting this foundational element
for innovation in favor of speed. Headline-worthy breaches and hacking demonstrations remind us
of the debt we’ve accrued and must pay off. Although some may perceive this as a setback, I see the
paradigm shift from software development to secure software development as pivotal to the new wave of
innovation yet to come. I believe that the implementation of DevSecOps as a formal process within
organizations will make a mark on history, multiplying not only the scale but also the speed at which
we will be able to push out ingenious solutions.
What do organizations that excel at creating trustworthy software do differently? Consider Google
Chrome™. Chrome is open source, meaning everyone has complete access to comb through the code to
find new vulnerabilities. Chrome is a highly valued target. A single vulnerability could affect millions
of users.
Google’s Chrome development team has set up security as a process, not an end product. There are
four fundamental steps:

1. Design your code with security in mind.


2. Find a bug or vulnerability in your code.
3. Fix the vulnerability, testing to make sure the patch is fit for purpose.
4. Field the fix into operations.

Google builds security into Chrome with its sandbox design. Chrome's sandboxes limit the ability of any one component, for example, the movie player, to compromise the rest of Chrome. A secure SDLC
goes beyond design and includes security testing. Chrome leverages tens of thousands of CPU cores to
continuously probe for new vulnerabilities using fuzzing. If a vulnerability is found, they can field a fix
to over 90% of the world within 30 days. All automatically, without a user having to click “upgrade.”
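The continuous fuzzing the Foreword describes can be illustrated with a deliberately small sketch. This is not Chrome's actual infrastructure, which runs at massive scale on dedicated fuzzing systems; the `parse_header` function, its planted length-check bug, and the mutation strategy below are all invented for this toy example. It shows only the core loop: mutate an input, run the target, and collect the inputs that crash it.

```python
import random

def parse_header(data: bytes) -> int:
    """Toy target with a planted bug: a hypothetical stand-in for code
    under test. It reads a 2-byte big-endian length prefix but trusts it
    blindly instead of validating it against the actual payload size."""
    if len(data) < 2:
        raise ValueError("too short")          # graceful rejection
    length = int.from_bytes(data[:2], "big")
    payload = data[2:2 + length]
    if length > 0 and len(payload) < length:
        raise IndexError("declared length exceeds payload")  # stands in for a crash
    return length

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Randomly flip a bit, insert a byte, or delete a byte."""
    data = bytearray(seed)
    choice = rng.randrange(3)
    pos = rng.randrange(len(data)) if data else 0
    if choice == 0 and data:
        data[pos] ^= 1 << rng.randrange(8)     # bit flip
    elif choice == 1:
        data.insert(pos, rng.randrange(256))   # insert a random byte
    elif data:
        del data[pos]                          # delete a byte
    return bytes(data)

def fuzz(seed: bytes, iterations: int = 10_000, rng_seed: int = 1) -> list[bytes]:
    """Mutate the seed repeatedly and collect inputs that 'crash' the
    parser. ValueError is an expected rejection; IndexError is the bug."""
    rng = random.Random(rng_seed)
    crashers = []
    for _ in range(iterations):
        candidate = mutate(seed, rng)
        try:
            parse_header(candidate)
        except ValueError:
            pass                               # handled input, not a bug
        except IndexError:
            crashers.append(candidate)         # record the crashing input
    return crashers

crashers = fuzz(b"\x00\x04ABCD")
print(f"found {len(crashers)} crashing inputs")
```

Real fuzzers add coverage feedback, corpus management, and crash deduplication on top of this loop, but the find-a-bug step of the four-step process above reduces to exactly this kind of automated probing.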


In Building In Security at Agile Speed, Dr. James Ransome and Brook S. E. Schoenfield have distilled the best security lifecycle practices into one comprehensive book. They show both "the how" and "the why." The two juggernauts combine their industry experience to demystify continuous
integration and continuous delivery (CI/CD), DevSecOps, and continuous security so that it no longer
remains an art that only the elite few know how to implement. The pair share my vision to democratize
this knowledge and give back to the software security community that has given so much to us.
The new possibilities of technology hinge on our ability to execute the principles outlined in
Building In Security at Agile Speed. With the knowledge we need to set forth, the future of technology
is within reach, but it remains up to you—all of us—to courageously grasp for it.
I look to the future with confidence as we collaborate to secure the world's software.
Dr. David Brumley
CEO and Founder
ForAllSecure, Inc.

About David Brumley:

Dr. David Brumley, CEO of ForAllSecure, is a tenured professor at Carnegie Mellon University with an
expertise in software security. With over 20 years of cybersecurity experience in academia and practice,
Dr. Brumley is the author of over 50 publications in computer security, receiving multiple best paper
awards. He is also a founder and faculty sponsor for the competitive hacking team Plaid Parliament of
Pwning (PPP), which is internationally ranked and the most winning DEFCON team ever.
Dr. Brumley’s honors include being selected for the 2010 DARPA CSSP program and 2013
DARPA Information Science and Technology Advisory Board, a 2010 NSF CAREER award, a 2010
US Presidential Early Career Award for Scientists and Engineers (PECASE) from President Obama
(the highest award in the United States for early career scientists), and a 2013 Sloan Foundation award.
Preface

The age of software-driven machines has taken significant leaps over the last few years. Human tasks
such as those of fighter pilots, stock exchange floor traders, surgeons, and industrial production and
power plant operators that are critical to the operation of weapons systems, medical systems, and key
elements of our national infrastructure, have been, or are rapidly being, taken over by software. This is a revolutionary step: the machine's brain and nervous system are now controlled by software-driven programs that take over complex, nonrepetitive tasks formerly requiring the human mind. This has resulted in a paradigm shift in the way states, militaries, criminals, activists,
and other adversaries can attempt to destroy, modify, or influence countries, infrastructures, societies,
and cultures. This is true even for corporations, as we have seen increasing cases of cyber corporate
espionage over the years. The previous use of large armies, expensive and devastating weapons systems
and platforms, armed robberies, the physical stealing of information, violent protests, and armed insurrection is quickly being replaced by what is called cyber warfare, crime, and activism.
In the end, the cyber approach may have just as profound effects as the techniques used before in
that the potential exploit of software vulnerabilities could result in:

• Entire or partial infrastructures taken down, including power grids, nuclear power plants, communication mediums, and emergency response systems
• Chemical plants modified to create large-yield explosions and/or highly toxic clouds
• Remote control, modification, or disablement of critical weapon systems or platforms
• Disablement or modification of surveillance systems
• Criminal financial exploitation and blackmail
• Manipulating financial markets and investments
• Murder or harm to humans through the modification of medical support systems or devices,
surgery schedules, or pharmaceutical prescriptions
• Political insurrection and special interest influence through the modification of voting software, blackmail, or brand degradation through website defacement or underlying Web application takedown or destruction

A side effect of the cyber approach is that it has given us the ability to do everything listed above at a scale, distance, and anonymity previously unthought of, from jurisdictionally protected locations, through remote exploitations and attacks. This gives governments, criminal groups, and activists the ability to act through proxies for the prime perpetrators in order to avoid responsibility, detection, and political fallout.
Although there is much publicity regarding network security, the real Achilles heel is the (insecure)
software that provides the potential ability for total control and/or modification of a target, as described


above. The criticality of software security, as we move quickly toward this new age in which tasks previously relegated to the human mind are taken over by software-driven machines, cannot be overstated. It is for this reason that we have written this book. In contrast, and for the foreseeable future,
software programs will be written by humans. This also means new software will keep building on
legacy code or software that was written prior to security being taken seriously or before sophisticated
attacks became prevalent. As long as humans write the programs, the key to successful software security lies in making the software development process more efficient and effective.
We, the authors, James and Brook, have spent days drawing on whiteboards, hours sitting in cafés
with strategy documents and diagrams spread over a couple of tables (much to the horror of the barista
and waitstaff), coffees in hand, trying to divine the most effective methods to deliver software security.
We’ve wondered why some tactic was as successful as it appeared to have been. We’ve scratched our collective head trying to understand why a method that looked great on paper failed: Was our execution
faulty? Were our assumptions false? Was there some as yet undefined cultural element that we hadn’t
accounted for? Did we just not explain well enough what we believed could work?
This book is surely an update and rework of Core Software Security: Security at the Source.* As such,
some of that book’s material has been reprinted as is, or slightly reworked, sometimes with our updating
comments surrounding the quoted material. Since writing that book, each of us has continued to write
about, present about, and teach our latest understandings. We wanted to pull those single-subject works
from each of us into our holistic views on software security. You will see numerous quotes herein pulled
from these intervening publications. Worth mentioning up front are two of Brook’s books that were
published since 2014: Securing Systems: Applied Security Architecture and Threat Models,† and Secrets of
a Cyber Security Architect.‡
However, this book also includes a great deal of entirely new material. There have been sea changes
in software development, architecture, and operation since 2013–2014, during which we were planning, then drafting, Core Software Security. Furthermore, and perhaps just as importantly, we learned a
lot. Our practices have changed to account for the profound changes that were underway in 2014, but
which have manifested, in fact, pretty much taken over since then. Some of our methods have been
refined. Some have been shifted to new paradigms. In addition, we have a few new tricks up our collective sleeves that we felt needed to be put into print for others to use, or, at least, react to.
James and Brook have never shied away from conflict, so long as the resulting discussion is constructive and in the service of improvement. In many ways, this book is about our striving for “continuous improvement.” Hence, what we say in this book should not be taken as the final word, but rather, as
our point-in-time reflection on where we are, where we might (should?) go, and how we could get there.
Although this book focuses on people, processes, and technological approaches to software security,
we believe that the people element of software security is still the most important part to manage as
long as software is developed, managed, and exploited by humans. What follows is a step-by-step process for software security that is relevant to today’s technical, operational, business, and development
environments with a focus on what humans can do to control and manage the process in the form of
best practices and metrics. We will always have security issues, but this book should help to minimize
them when software is finally released or deployed. We hope you enjoy our book as much as we have
enjoyed writing it.

* Ransome, J. and Misra, A. (2014). Core Software Security: Security at the Source. Boca Raton (FL): CRC Press/
Taylor & Francis Group.
† Schoenfield, B. S. E. (2015). Securing Systems: Applied Security Architecture and Threat Models. Boca Raton (FL):
CRC Press/Taylor & Francis Group.
‡ Schoenfield, B. (2019). Secrets of a Cyber Security Architect. Boca Raton, (FL): Auerbach Publications/Taylor &
Francis Group.
Acknowledgments

Without the confidence of CRC Press/Taylor & Francis Group and our editor, John Wyzalek, this book
would have been no more than a gleam in the authors’ eyes. John found a way to realize our vision. We
thank John for his support and for his commitment to the project.
This book could not have been delivered without the guidance and patience of DerryField Publishing Services as we put together this manuscript. Please accept our heartfelt gratitude.
Copyediting, typesetting, and page proofing are key to delivery of any book. Marje Pollack has
once again navigated the authors’ idiosyncrasies. This book was particularly difficult to typeset due to
the interlacing of pieces of other works with new text. We hope that Marje’s concepts will create an
intuitive flow for its readers. Susan Culligan provided much-needed reviews and critiques.
Theron R. Shreve (Director, DerryField Publishing Services) has once again provided invaluable
assistance. We might very well still be struggling through some of the finer points for references if not
for his input. Once again, Theron, we’ve danced the book publishing dance to completion. Thanks for
working with us.
— James Ransome and Brook S. E. Schoenfield

I would like to thank all my former employees, co-workers, team members, and business partners who
have helped me manage application and product security and keep software secure since 1997. A special
thanks to Dr. David Brumley for writing the Foreword for this book on a subject and message for which
we both share a passion and want to get out to both practitioners and decision makers alike. And last,
but not least, a thanks to my co-author, Brook S. E. Schoenfield, who has joined me on this journey to
prove there is another, faster, and more efficient way to architect, implement, and manage software security than the current status quo. As Will Roper, the Assistant Secretary for Acquisition in the U.S.
Air Force so aptly said: “We can build the best airplanes and satellites, but we will lose if we can’t update
the software at the speed of relevance in this century.”*
— James Ransome

A book like this cannot, if it is to reflect real-world practice, spring forth from the imaginations of the
authors. This work is firmly grounded in each author’s extensive experience—in security, in software
security, in experience gained from developing and maintaining software, and, most importantly, in

* Christopherson, A. (2019). “Faster, Smarter: Speed Is Key in Acquisition Reform.” Retrieved from https://www.wpafb.af.mil/News/Article-Display/Article/1771763/faster-smarter-speed-is-key-in-acquisition-reform/


experiences derived from working collectively with sometimes thousands of others. That “collective”
must be recognized.
The conclusions expressed herein are firmly based upon the experiences and practices of the developers, managers, project managers, Scrum Masters, directors, executives, product managers, product owners, teachers, facilitators, and, of course, security practitioners, who ultimately make up any effective software security program. Software security cannot be imposed. It can only be earned through the
inspired and driven efforts of many people from varied roles and specialties working together toward
common goals. Failure to acknowledge the universe of contributions would erase enormous amounts of
hard work. But it would also paint a false view of reality. A great many of our colleagues have contributed to what you will find here. We will remain eternally grateful for your collaboration. Unfortunately,
the list of names could easily fill the pages of the book.
Of particular note are the several hundred security architects with whom we worked at McAfee,
Inc. (which became Intel Security, which became McAfee, LLC), who gave their best efforts to execute
our Security Development Lifecycle (SDL). These “security champions” delivered, but they also critiqued and helped to refine that SDL, which then was foundational to this SDL.
Mentioned in this book, but worth acknowledging here, are Robert Hale and Catherine Blackadar
Nelson. We might have struggled to plumb the software development assumptions that have lain buried
beneath extant SDLs if we had not participated in that SDL research at Intel, Inc., in 2015–2016. I did
the technical work, yes. But James and the then Vice President of Quality at Intel Security, Noopur
Davis, and I thoroughly and regularly analyzed the results from that project as the work proceeded.
Much of the foundation of a truly generic SDL springs directly from that effort.
In the years since leaving McAfee, LLC, both of the authors have had opportunities to prove that
the work we generated together wasn’t a fluke of circumstance. Every software security program that
I’ve been a part of continues to provide validation as well as refinement. My recent teams’ contributions
also receive our hearty thanks.
No book can be complete without a dash of project management. James is far better at this task
than I. Without his keeping us both on task, the book might not have come together quite as quickly
and easily as it has. This book represents yet another journey through the trials and tribulations of
software security that we’ve taken together.
Dr. James Ransome has long since earned my enduring appreciation of his skills, insight, determination, and perseverance. I’m incredibly lucky to call James “co-author,” and luckier still, “friend.”
Importantly, no book can be completed without the support of family. I once again must thank my
spouse, Cynthia, for enduring yet one more book on security.
— Brook S. E. Schoenfield
About the Authors

Dr. James Ransome is the Chief Scientist for CyberPhos, an early-stage cybersecurity startup, and
continues to do ad hoc consulting. He also serves on the Board of Directors for the Bay Area CSO
Council. Most recently, Dr. Ransome was the Senior Director, Security Development Lifecycle (SDL)
Engineering, in the Intel Product Security and Assurance, Governance and Operations (IPAS GO)
Group, where he led and developed a team of SDL engineers, architects, and product security experts
that implemented and drove security practices across all of Intel. Prior to that, he was the Senior Director
of Product Security and PSIRT at Intel Security and McAfee, LLC. Over a six-year period, he built,
managed, and enhanced a developer-centric, self-sustaining, and scalable software security program,
with an extended team of 120 software security architects embedded in each product team. All of this
was a result of implementing and enhancing the model described in his most recent book, Core Software
Security: Security at the Source, which has become a standard reference for many corporate security
leaders who are responsible for developing their own SDLs. His career is marked by leadership positions
in the private and public industries, having served in Chief Information Security Officer (CISO) roles
at Applied Materials, Autodesk, and Qwest Communications, and four chief security officer (CSO)
roles at Pilot Network Services, Exodus Communications, Exodus Communications—A Cable and
Wireless Company, and the Cisco Collaborative Software Group.
Prior to entering the corporate world in 1997, Dr. Ransome retired from 23 years of government service, having served in various roles supporting the U.S. intelligence community, federal law enforcement,
and the Department of Defense. Key positions held include Weapons Platoon Sergeant (U.S. Marine
Corps), U.S. Federal Special Agent—Foreign Counter-Intelligence (NCIS), Retired Commander and
Intelligence Officer, U.S. Navy, Retired Scientist-Geospatial Intelligence Analyst for WMD and DOE
Nuclear Emergency Search Team (NEST) Key Leader, and Threat Assessment Analyst at Lawrence
Livermore National Laboratory.
Dr. Ransome holds a PhD (https://nsuworks.nova.edu/gscis_etd/790/) in Information Systems, specializing in Information Security; a Master of Science degree in Information Systems; and graduate certificates in International Business and International Affairs. He developed and tested a security
model, architecture, and leading practices for converged wired and wireless network security for his
doctoral dissertation. This work became the baseline for the Getronics Wireless Integrated Security,
Design, Operations, and Management Solution, of which Dr. Ransome was a co-architect. This resulted in an increase of over 45 million USD in revenue for Getronics and Cisco within a two-year period.
Building in Security at Agile Speed is Dr. Ransome’s 14th book and the 12th on cybersecurity. His
last book, Core Software Security: Security at the Source, became a standard reference for many corporate


security leaders who are responsible for developing their own Security Development Lifecycles and
Product Security Programs.
Dr. Ransome was an Adjunct Professor for Nova Southeastern University’s Graduate School of
Computer and Information Sciences Information Security Program, designated a National Center of
Academic Excellence in Information Assurance Education by the U.S. National Security Agency and
U.S. Department of Homeland Security, where he taught Applied Cryptography, Advanced Network
Security, and Information Security Management. He received the 2005 Nova Southeastern University
Distinguished Alumni Achievement Award. Dr. Ransome is a member of Upsilon Pi Epsilon, the
International Honor Society for the Computing and Information Disciplines. He is also a Certified
Information Security Manager (CISM), a Certified Information Systems Security Professional (CISSP),
and a Ponemon Institute Distinguished Fellow.
— James Ransome, PhD, CISSP, CISM

Brook S. E. Schoenfield is the author of Secrets of a Cyber Security Architect, Securing Systems: Applied
Security Architecture and Threat Models, and Chapter 9: Applying the SDL Framework to the Real
World in Core Software Security: Security at the Source. He has been published by CRC Press, Auerbach,
SANS Institute, Cisco, SAFECode, and the IEEE. Occasionally, he even posts to his security architec­
ture blog, brookschoenfield.com.
Brook helps organizations achieve their software security goals, with a particular focus on secure
design. He provides his clients with technical leadership and support and mentorship to client leaders.
He has held security architecture leadership positions at high-tech enterprises for 20 years. Prior to
security, he held leadership positions for about 10 years of his 20-year software development career. He has
helped hundreds of people in their journey to becoming security architects. Several thousand people
have taken his participatory threat modeling classes.
Brook has presented and taught at conferences such as RSA, BSIMM, OWASP, AppSec, and
SANS What Works Summits and guest lectured at universities on subjects within security architecture,
including threat models, DevOps security, information security risk, and other aspects of secure design
and software security.
Brook lives in the Rocky Mountains of Montana, USA. When he’s not thinking about, practicing,
writing about, and speaking on secure design and software security, he can be found telemark skiing,
hiking, and fly fishing in his beloved mountains, exploring new cooking techniques, or playing various
styles of guitar—from jazz to percussive fingerstyle.
Brook is an inveterate and unrepentant Dodgers* fan.
— Brook S. E. Schoenfield, MBA

* And baseball, in general.


Chapter 1
Setting the Stage

1.1 Introduction

What we didn’t realize while we were drafting Core Software Security: Security at the Source * was that we
were on the cusp of a confluence of a host of software development shifts. At the time, what we dealt
with seemed, perhaps, to be disparate threads. But these were, in fact, highly interdependent and inter­
acting strands that would transform software development. It is in the face of this that we’ve decided to
write another software security book.
We understand that there exist a multitude of software security books. In fact, there are many very
good ones. Including our own more recent publications, there are books devoted to each and every
aspect of secure development—from designing secure software through the testing of it. There are
books about the management of software security programs, to be sure. But none of these bring these
pieces together into that necessary synthesis that deals with software practices of today and, at the same
time, represents a dynamic union of technical, cultural, organizational, and managerial skills.
Trust us, software security requires all of its aspects to be individually successful; however, the
aspects aren’t independent, but rather are highly interdependent and interactive. Software security is a
collection of intersecting, mutually supportive activities, while it also requires cultural change, process,
and organizational muscle to succeed in today’s heterogeneous environments.
Besides, we’ve learned a lot in the intervening years about how to build continuous software security
programs based upon and natively integrating with Agile methods. This includes the way that these
programs are managed, as well as how to build a security development lifecycle (SDL) that doesn’t
interfere with how people are actually creating and then fielding software. Our aim is for security to
be foundational to software creation, as well as for security practices (the SDL) to be a part of the warp
and weft of software development.
As Brook’s Developer-centric Security Manifesto states:

• Enable development teams to be creative and to innovate


• Ensure that developers have as much specificity as required to “deliver security correctly”

* Ransome, J. and Misra, A. (2014). Core Software Security: Security at the Source. Boca Raton (FL): CRC Press/
Taylor & Francis Group.


• Build tools for developers to check for correctness


• Deeply participate such that security earns its “rightful place”
• “Prove the value” of security processes and tools*

The authors are hardly alone among security practitioners focused on working with and empower­
ing development versus imposing security on top of it. At the time of this writing, in talking with our
peers, we believe that we still belong to a minority. Enabling development to take charge of security
through the support and specialized knowledge of security folks unfortunately remains an emergent
practice, rather than the norm.
Many of the tectonic shifts that we’ve seen in the years since writing Core Software Security were
already underway. However, none of these shifts had profoundly changed the way that much of soft­
ware was built. Although some development teams might adopt Agile software practices in order to
improve their process, few large organizations were in the process of adopting Agile as an organization-
wide standard. It is true that smaller organizations, especially startups, might be fully Agile; however,
much of this was seen in the software development world as “boutique.”
Continuous integration and continuous delivery (CI/CD) and DevOps were approaches that some
cloud native teams were discovering while other teams, focused on other types of targets, were looking
on in amazement and wonder at the boost in productivity these enabled. But it wasn’t obvious then how
these technologies and approaches might be applied in other contexts and toward other software targets.
As Agile became more and more prevalent, or was just in the process of organization-wide adop­
tion, software security practitioners began to take notice. Indeed, during the drafting of Core Software
Security, two of the authors were engaged in an organizational shift to Agile Scrum. It should be no
surprise that Agile and iterative development, in general, deserved some focus in the book. In hindsight,
we now believe that the software security program that James and Brook were leading at the time pro­
duced one of the industry’s first fully Agile SDLs.
Still, much of the software security industry continued building SDLs aligned with Waterfall
development methods. The prevalent published SDLs at the time were all linear, conceived in phases,
assuming first idea, then requirements, then coding, followed by testing and other verification,
which, if passed, signaled willingness to release. These “phases” follow each other in an orderly fashion,
the next not starting until the previous has been completed fully. (As we shall see, this tendency towards
linearity that does not match how software is actually built continues to plague software security prac­
tice.) But that’s not how software is built today.
All of the software development practices that we now think of as “normal” and “typical” were
emerging when Core Software Security was published. At that time, many of these were considered by
large organizations as “only for special circumstances or small efforts.” Today, we only find Waterfall
development confined to particular contexts to which it is well suited: where coding and compilation
are expensive and where design misses can be catastrophic (e.g., firmware). Some form of Agile, whether
Scrum or another iterative method, has become widespread in organizations large and small, new and
old. CI/CD and DevOps methods are also in wide use, even in organizations or contexts that have little
to do with cloud development (where these approaches got started). DevOps is a worldwide movement.
Public cloud use is normal for at least some types of applications and data. Although some
organizations do maintain private clouds for particular needs, such as compliance or control, those
same organizations also use public clouds. Public cloud use is not at all unusual any longer, but rather
expected. Long since, most security practitioners have stopped worrying about whether “data are safe
in the cloud,” and, rather, focus on what data are in the cloud and how will they be secured. That’s

* First published at http://brookschoenfield.com/?page_id=256; republished in Schoenfield, B. (2019). Secrets of a Cyber Security Architect. Boca Raton (FL): Auerbach Publications/Taylor & Francis Group, p. 177.

because just about everybody’s data are in a cloud, probably many clouds. It’s no longer a matter of “if”
or “whether” but rather “where” and “how to secure.”
Software development has been seeking its new “normal” for some time. We believe that security
must meet and enable that “normal” if we are to stop the seemingly never-ending release of software
issues that plague our digital lives. Each of the authors has field-tested the advice and techniques laid
out here—both together and individually. We have seen fairly significant improvements unfold through
our efforts. Still, taken as a whole, the software industry continues to stumble through issue after issue,
allowing compromise after compromise. We know that we can do better. The transformation won’t
be easy; it will take concerted effort, focused on that which developers can and will do, coupled with
significant organizational support, including security and a somewhat shifted security mindset.

1.2 Current Events

Over the last 10 years or so,* there has been a profound and revolutionary shift in the way that software
is produced, employed, and maintained: This paradigm shift has come to be known as “DevOps”—
that is, “development plus (and through) operations” united into as seamless and holistic a practice as
can be fostered by those involved.
At the same time, a platform for running code and storing data has matured—that is, “the cloud”:
public, private, and hybrid. Code still runs on a person’s device, be that a phone, a tablet, a laptop, or
desktop computer (all of which are still sold and used). But, increasingly, many of the functions that
used to stand alone on a device are in some way, and typically, many ways, tied to services and func­
tionality that runs within a cloud, quite often, a public, commercial cloud offering. There are many
compelling reasons to structure (architect) functionality in this way, among them:

• On-demand server workloads (expansion and contraction of compute resources based upon need)
• Ease of operating, maintaining, and updating cloud software
• Maturity of cloud services (allowing operators to delegate some portion of maintenance duties to
the cloud provider, including maintenance for security)
• Availability and stability
• Global distribution
• Continuity requirements
• High resource computation needs (e.g., artificial intelligence and machine learning)
• Larger data sets and their manipulation
• Minimizing compute resources needed from devices (often using battery power)

There are computer tasks that are handled much better in a cloud than on a constrained device.
The nearly “always connected” state of most devices fosters an architecture that can take advantage of
the characteristics of each environment while, at the same time, enabling sufficient inter-environment
communications to make cloud service integration seem nearly seamless.
In short, the paradigms for producing and operating software, as well as the way that functionality
is architected, have been through fairly profound sea changes such that if security is going to be built
and then run effectively, security techniques, tools, and operations must match, and, in fact, integrate
easily, fully, and relatively painlessly with the ways that software is currently built and run (and on into
the foreseeable future, if current trends continue).

* Patrick Debois is credited with coining the term “DevOps” in 2009 for a conference, DevOpsDays. Kim, G.,
Humble, J., Debois, P., and Willis, J. (2016). The DevOps Handbook. Portland (OR): IT Revolution Press, p. xiii.

DevOps involves a continuation of the Agile Manifesto*: Developers must have control over their
work via high trust working environments, coupled to high velocity, rapid delivery tools and methods.
[DevOps is creating]: “safe systems of work and enabling small teams to quickly and independently
develop and validate code that can be safely deployed to customers.”†
DevOps includes a mental and cultural shift toward removing artificial barriers between the vari­
ous technical specializations:

• Creators
• Designers
• Programmers
• Validators
• Operators
• Monitors

In the past, it was not atypical for creators to toss concepts over to designers, who generated plans and
specifications that programmers implemented and then tossed to the “quality people” (validators). Once the
software passed its tests, operators were supposed to release the software onto whatever infrastructure might
be required (if any), while someone, somewhere, watched feedback data to ensure that the software was
running as specified (monitors).
DevOps throws a spanner into the works with the attitude that everyone involved, whatever their
specialty skills, has essentially the same goal: good software that provides value to its owners and users.
There are dependencies. An idea isn’t implementable unless it can fit into existing structures (architec­
ture) and can actually be implemented into working code. Since errors are a very consistent product
of building software, implementers and designers provide a significant contribution to the testing of
the software. At the same time, testers will have expertise in the most effective validation techniques.
Implementers need feedback from operators and monitors so that the software will run effectively, while
at the same time, returning clear information about inconsistencies and unintentional behaviors (i.e.,
“bugs”) so that these can be removed. Going forward, each discipline has input to all the others and
must incorporate the dependencies from other knowledge domains. Ergo, rid development of “artificial
knowledge domain walls.”
At the same time, the ability to write code implementing complex deployment and runtime environments
has exploded. “Operators” are, increasingly, “coders,” not system administrators. The days of specialty
administrators grinding through long series of commands to build environments or bring hosts
online are long gone. The vast majority of those actions can be coded such that deterministic logical
conditions trigger software runs. These runs are often highly elastic, in that as load increases, the virtual
runtime environments expand to meet demand, or release resources when no longer needed—that is,
the software runs on and utilizes cloud capabilities. This is all coded; no human interaction is required
after operations code is considered stable and working.
One set of coders may program architecture documents, whereas another generates functional logic,
and another, deploy and run code. “Programmer” no longer solely refers to someone generating func­
tional code; nearly every role may require coding, though the languages and work products differ.
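The deterministic, code-driven elasticity described above can be pictured with a small sketch. The function below is entirely hypothetical; its name, thresholds, and proportional rule are our own illustration, not any real orchestration platform's API. It shows only the kind of scaling decision that operations code now makes without a human in the loop.

```python
# Illustrative only: a toy autoscaling rule of the kind an operations coder
# might express in place of manual administration. The names and thresholds
# here are hypothetical, not drawn from any real platform.

def desired_replicas(current: int, cpu_utilization: float,
                     target: float = 0.6, minimum: int = 2,
                     maximum: int = 20) -> int:
    """Return the replica count that would bring average CPU near the target,
    clamped to configured bounds. Deterministic: same inputs, same answer."""
    if cpu_utilization <= 0:
        return minimum
    # Scale proportionally to observed load, as common autoscalers do.
    proposed = round(current * (cpu_utilization / target))
    return max(minimum, min(maximum, proposed))
```

Under load, say four replicas running at 90% CPU against a 60% target, the rule expands the pool to six; when utilization falls, resources are released with no administrator involved.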
Although DevOps and Agile principles are certainly not adhered to religiously everywhere, even
fairly command and control organizations have adopted some of the techniques associated with Agile:

• Small, independently operating development teams.


• Discrete, easily understandable, and sizeable tasks.

* The Agile Manifesto can be retrieved from https://agilemanifesto.org/principles.html


† Kim, G., Humble, J., Debois, P., and Willis, J. (2016). The DevOps Handbook. p. xvi.

• Regular, usually daily, team meetings.


• A queue of work items and some process for choosing what to work on next.
• Relatively short delivery schedules, often a few weeks.
• Distinct and discrete deliverable chunks or items (“Minimum Releasable Increment” or “Mini­
mum Viable Product”).
• Teams retain some development process control, at minimum control over who works on what
and how algorithms are chosen.
• At the end of a cycle of development, the team reviews the last cycle for challenges and potential
improvements to the work and processes. The team then chooses one or a few of the identified
improvements to attempt in the next cycle of development.

Our list should not be taken as a definitive expression of an Agile software development process.
Please note that we observe these process approaches widely adopted in many organizations. The list
above should be viewed as having descended from Agile practices.
Many development organizations are shortening their delivery schedules and making use of DevOps
or DevOps-like pipelines to build, release, and run software. Again, we have observed this trend even
influencing relatively older software development organizations building legacy applications.
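Such a pipeline can be sketched in miniature. The stage names and checks below are our own illustration, not any vendor's API; the point is only that security work (static analysis, dependency audits) runs as ordinary pipeline stages alongside build and test, rather than as a separate, after-the-fact gate.

```python
# A minimal, hypothetical sketch of a delivery pipeline in which security
# checks run as first-class stages alongside build and test. Stage names
# and checks are illustrative only, not drawn from any real CI/CD product.

from typing import Callable, Dict, List, NamedTuple

class StageResult(NamedTuple):
    stage: str
    passed: bool

def run_pipeline(stages: Dict[str, Callable[[], bool]]) -> List[StageResult]:
    """Run every stage in order and record its outcome."""
    return [StageResult(name, check()) for name, check in stages.items()]

results = run_pipeline({
    "build": lambda: True,
    "unit-tests": lambda: True,
    "static-analysis": lambda: True,     # e.g., an inline SAST scan
    "dependency-audit": lambda: True,    # third-party component risk
    "deploy-to-staging": lambda: True,
})
releasable = all(result.passed for result in results)
```

A real pipeline would short-circuit on failure, run stages in parallel where possible, and publish artifacts; the shape, with security checks as ordinary stages, is what matters here.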
It bears repeating: Software development has experienced, and continues to experience, a significant paradigm shift.

1.3 The State of Software Security

At the same time, software security industry practices have made relatively incremental improvements.
For instance, if one were to survey the published SDLs, there is a strong tendency to represent secure
development in a linear fashion—security activities proceeding from planning and design through testing
and release in an orderly sequence. Even Agile SDL presentations try to flatten out the iteration in an
effort to order (and make understandable) what might seem as a relatively chaotic process when viewed
from the outside (although there is nothing inherently chaotic about Agile development).
But these attempts to provide order through linearity are, in the authors’ opinion, a mistake.
First and foremost, since developers often work on tasks with a lot of parallelism and a lot of feed­
back between different mini efforts, a linear representation doesn’t map to what’s actually going on
during development. In fact, our experiences suggest that where there is a large discrepancy between
the expression of SDL timing and the process by which software is actually developed, developers either
outright ignore or give short shrift to the security requirements and tasks because these appear to devel­
opers as nonsensical, unimplementable, or worse, completely irrelevant to their development practices.
Second, extant SDLs typically remain coupled to an underlying software development life cycle
(SDLC) to which the security activities and their timing are tied. In any environment where more than
a single SDLC is employed, some set of developers will feel left out, or worse, alienated, once again
leading to a sense of SDL irrelevancy.
The integration lessons expressed in this book have been hard won over many years of building, and
then maintaining software security programs. We, the authors, have each built programs independently,
as well as two software security programs at two very different organizations together. Our errors, and
our ultimate successes, are founded upon the thousands of dedicated developers who’ve been honest and
vulnerable enough to share what works, what doesn’t, and their willingness to reach with us for better
solutions that both deliver measurable security and are achievable across a gamut of software develop­
ment practices and styles. Yes, we’ve made a lot of mistakes. And, equally so, this work is the direct result
of identifying solutions not for but with our engineering partners. We’ve listened; this work is the result.
We will return to the question of generic SDL below.

Although software development has been undergoing a sea change over the last 10 years, we
don’t mean to imply that the security industry, and especially security tools, have been standing still.
However, the changes in tooling have largely been incremental rather than revolutionary.* The changes
in approach and tooling associated with DevOps have been revolutionary: Software is simply not pro­
duced, deployed, and then maintained in the same way that it had been in the not very distant past.
Vulnerability analysis of various kinds (Chapter 3 will detail various approaches and technologies)
has certainly been improving its reliability, offering developers different analysis options—from analy­
sis as a part of the code-writing process through lengthy, full-build analysis. Vendors have been experi­
menting with combinations of source code analysis (static) and dynamic analysis† of a running program
to improve analysis fidelity. Still, none of these improvements precisely matches continuous delivery
and partition/elastic, code-driven load paradigms, as are typically seen in cloud forms of DevOps.
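As a deliberately toy illustration of the static (source) analysis idea, consider the scan below. Real SAST tools parse syntax trees and trace data flow across an entire build; this line-by-line pattern match, with made-up rules, conveys only the core notion of flagging risky constructs without executing the code.

```python
# A deliberately tiny illustration of what static (source) analysis does:
# flag risky constructs without executing the code. The patterns below are
# invented for this sketch; real SAST tools are vastly more sophisticated.

import re
from typing import List, Tuple

RISKY_PATTERNS = {
    r"\beval\(": "dynamic code evaluation",
    r"\bstrcpy\(": "unbounded copy, possible buffer overflow",
    r"password\s*=\s*[\"']": "hard-coded credential",
}

def scan_source(source: str) -> List[Tuple[int, str]]:
    """Return (line_number, finding) pairs for each risky match."""
    findings = []
    for number, line in enumerate(source.splitlines(), start=1):
        for pattern, description in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((number, description))
    return findings

sample = 'password = "hunter2"\nresult = eval(user_input)\n'
findings = scan_source(sample)
```

On the two-line sample, the scan reports a hard-coded credential on line 1 and dynamic code evaluation on line 2.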
The second problem that needs to be addressed is the continuing SDL to SDLC coupling that
we repeatedly find in the published SDLs. Even the SDL that we proposed in Core Software Security
(Chapter 9 is devoted to it) suffers from implied linearity, as well as from attempting to cover just Waterfall
and Agile SDLCs.
It may be worth noting that we were in the midst of an Agile transformation at our day job while we
were drafting Core Software Security. We were most certainly thinking hard about security in an Agile
context. It is possible that McAfee’s Agile SDL was among the first truly Agile SDLs. Perhaps the first?
Still, it was hard for us to divorce ourselves from accounting for the SDLC that lay before us at
work: pure Waterfall and every shade of Agile implementation imaginable. But there are more SDLCs
than Waterfall and Agile. In addition, looking at Figure 1.1 (reprinted from Core Software Security), you
will see some progression from design through validation implied, even though we were trying with the
circles not to imply a strong linear progression.
But as we shall see, in Agile and, especially, DevOps, the activities listed in Figure 1.1 can be happening
in parallel with the others; these SDLC methods value the benefits resulting from extreme
parallelism. There is simply no benefit derived from forcing all design and planning to occur before a
line of code can be generated. Likewise, there isn’t an overarching value to waiting until most of the
code has been written before performing validations. All can be executing simultaneously, in small
increments, and providing feedback to improve each iteration based upon learning gathered from each
incremental task.
This is not to suggest that one need plan nothing before starting to code. That is an oft-quoted
myth: “Planning (or design) is dead. Just code.” It remains true that a lack of planning leads to poor
architectures that become brittle and fragile, that is, cannot be changed without profound impacts,
some of which invariably include security weaknesses.
However, equally untrue is the also oft-implemented security preference (sometimes codified into
inflexible policies) that require all security planning to be done early, preferably before much coding
has occurred. Usually, wherever unchangeable security requirements are delivered all at once, especially
in an Agile (iterative development) context, the security requirements will not keep pace with other
shifts during development. This nearly always results in requirements left “on the shop floor,” unimple­
mented because the stated requirement no longer matched the realities of what was being built and
could no longer be implemented as specified. As we shall see in some detail, security requirements are
no different from any other specification: all early specifications benefit from refinement, pivots, and
learning that are the result of an iterative approach. Security has no special attribute making it somehow

* Of course, security tool vendors are free to disagree with us: tool evolution vs. revolution. Our intention is not
to dismiss the efforts of security tool creators, but rather to direct the reader to a set of problems that we believe
need to be addressed, as expressed in this, the first chapter of our book.
† We return to the various types of security analysis in subsequent chapters.

Figure 1.1 Consensus SDL. (Source: Reproduced from Schoenfield, B. [2014]. “Applying the SDL
Framework to the Real World.” In Ransome, J. and Misra, A. Core Software Security: Security at the
Source. Boca Raton [FL]: CRC Press/Taylor & Francis Group, Ch. 9, Figure 9.1, p. 258, with permission.)

different from other design domains: security can be just as flexible and adaptive as, for instance,
usability or performance.
In short, although the world of developers has become ever more iterative, parallel, and continuous,
security’s processes (SDL, if you will) have remained at least somewhat bound by Waterfall, one-task-at­
a-time thinking. Partly, this is a function of trying to express a highly cross-dependent and overlapping
set of tasks (SDL consensus) in some understandable manner. Abstractions tend toward the linear for
ease of comprehension.
The Microsoft “Simplified” Security Development Lifecycle (SDL):

• Provide Training
• Define Security Requirements
• Define Metrics and Compliance Reporting
• Perform Threat Modeling
• Establish Design Requirements
• Define and Use Cryptography Standards
• Manage the Security Risk of Using Third-Party Components
• Use Approved Tools
• Perform Static Analysis Security Testing (SAST)
• Perform Dynamic Analysis Security Testing (DAST)
• Perform Penetration Testing
• Establish a Standard Incident Response Process*

While we were creating the first draft of this book (2020), Microsoft revised their SDL once again
(activity names are quoted above).

* © Microsoft 2020. Retrieved from https://www.microsoft.com/en-us/securityengineering/sdl/practices#practice7



Previous versions (which we were going to quote as examples) retained an implied linearity to SDL
activities. The reader’s first encounter with Microsoft’s SDL activities on the Microsoft SDL site
reveals no overt ordering with respect to SDLC timing or flows. Like our generic SDL, activities are
grouped around essential SDLC general tasks, just as DevOps acknowledges that “Planning,” that is,
what we term “Design,” requires some different mental approaches from coding, although the two, as
we’ve noted several times, are intimately related.
Despite the implied grouping (nothing is explained in these terms on the Microsoft site) into
design/plan, code generation, and verification activities, no time-bound ordering is implied or explicit.
Microsoft SDL no longer makes any assumption about the ordering of the activities in the SDL’s pre­
sentation. This is in tune with our generic SDL, more closely following today’s software development
practices. Each security task must be taken in its proper relationship to development activities; assump­
tions about ordering only serve to reduce developers’ creativity while solving problems, be those secu­
rity, privacy, or otherwise. The activity explanations have been numbered for convenience: 1–12. Our
quote, above, removes the numbering as an irrelevant presentation device.
As presented, the various aspects of secure design activities have been placed together. Likewise, veri­
fication tasks are grouped. This is useful but does not imply that design precedes coding and verification.
It should be obvious to any engineer that one cannot test without code. Furthermore, unless one knows
at least something about what one will build (design), it’s far more difficult to code it. But, as we’ve
repeated, these are interacting aspects of holistic software development, not discrete, separate activities
that must be fully completed before the next “phase” of software development (strict Waterfall SDLC).
Microsoft was one of the first organizations to attempt to create and then implement software secu­
rity. In our humble opinion, as an organization, Microsoft continues to lead the industry with cutting-
edge software security practices in just about as mature a form as we can find. Once again, Microsoft
is, at the same time as we are, removing Waterfall SDLC artifacts from their SDL.
We have reproduced our version of Adobe®’s Secure Product Lifecycle (SPLC) in Figure 1.2. The
Adobe activity names have been placed into a circle similar to the way that Adobe describes their SPLC
(equivalent to SDL).* Looking at Adobe’s published SPLC, we believe that it might prove instructive to
illustrate some of the problems that we deem essential to address if security is to be fully integrated
into the many different variations of SDLC, including Agile and DevOps processes, and for architec­
tures that span devices, cloud services, and backend cloud (or server) processing.
Although we have been told numerous times that much development at Adobe is Agile, you may
note that the SDL is presented as a circle, with one activity flowing into the next, implying, of course,
an orderly succession of SDL tasks. Training is first, although this isn’t really true: Training is expected
to be repeated and constantly available. Once trained, Requirements and Planning begins, which leads
to Design. Threat modeling is subsumed into the Design bucket. The explanation, “builds defenses
against potential threats directly into the initial design of new products and services”† is Adobe’s
description of threat modeling. Development and Testing then proceed based upon Requirements,
Design, and the threat model that was implied as a part of Design.
The Adobe SPLC circle is meant to indicate that tasks feed each other continuously, that develop­
ment isn’t completed but rather a never-ending loop throughout the life of the product. A circle is a
better paradigm than, perhaps, the older, linear ordering. Ordering and timing are necessarily implied
by the Adobe model, nonetheless.
Iteration and parallelism have been collapsed out of Adobe’s SPLC. There are other conceptual
errors baked into the activities. For instance, privacy must be taken up both during requirements and
planning and as a fundamental part of design activities, whereas privacy engineering will be addressed

* “Adobe® Secure Engineering Overview,” p. 2. Retrieved from https://www.adobe.com/content/dam/cc/en


/security/pdfs/adobe-secure-engineering-wp.pdf
† Ibid., p. 3.

Figure 1.2 Adobe Secure Product Lifecycle (SPLC). (Source: Redrawn from “Adobe ® Secure
Engineering Overview,” p. 2. Retrieved from https://www.adobe.com/content/dam/cc/en/security
/pdfs/adobe-secure-engineering-wp.pdf)

both in the design and through implementation and validation. Threats to privacy will be a part of
the threat model. Privacy requirements and behavioral attributes must be “engineered,” that is, imple­
mented and tested, like any other set of features. But privacy is only mentioned as a training item and
in requirements in Adobe’s “Secure Engineering Overview.”
Likewise, although a threat model is an implied deliverable in design (and threat landscape is men­
tioned in the Requirements explanation), the analysis technique actually underpins much software
security practice, and relevant to the SDL, threat modeling underpins all of secure design (as we shall
see in Chapter 4). We cannot design security without imagining what attacks will most likely be pro­
mulgated against the software and then specifying those defenses that are believed to be necessary to
prevent compromise. In essence, that’s threat modeling, whatever SDL “task” one chooses to name it.
It has become apparent to the authors that threat modeling is an essential technique that must be
applied repeatedly starting at idea conception, and then will underlie many design and specification
decisions throughout development, including which validations may be necessary to either prove or
correct the threat model. We will dive into threat modeling and secure design in subsequent chapters.
Suffice it to note here that threat modeling mustn’t take place at some particular “perfect moment.”
10 Building In Security at Agile Speed

Rather, it is an analysis technique that gets applied repeatedly as security needs are refined as structures
(architecture), designs, implementation, and testing unfold.
We encourage the reader to analyze other published SDLs for their assumptions, their implications
with respect to DevOps, and their continuous, Agile development practices. Those who undertake
a little light research will find any number of linear statements that appear to assume or couple to
Waterfall development. Some organizations have moved beyond Waterfall to circles, and similar repre­
sentations. We offer a model free of most, if not all, of these implications.
Of course, one could start with Microsoft’s SDL. They continue to be among the leaders in soft­
ware security, as is Adobe, which is why we chose to reproduce the high-level activities of each of them.
However, the Microsoft SDL is tied to Microsoft’s products (as it must be) and Adobe’s to their product
portfolio. Thus, either SDL (and any other company’s published set) may not be as appropriate as an
SDL designed, from the bottom up, to be generic. As you work through this book, you will find that our
generic SDL activities match, perhaps not in precise name but certainly in intention, the vast majority of
published SDLs. That is because the generic SDL found here is based upon a consensus SDL drawn from
wide-ranging research across published SDLs and experienced and published development practices.
James and Brook (the authors) spent hours at a whiteboard early in 2014 mapping not just SDL
tasks, but also how they relate to each other, their dependencies and preconditions. Although some SDL
tasks are relatively independent, many rely on outputs from other tasks in order to begin or to complete
properly. Capturing those relations is key to understanding how an SDL must flow.
Figure 1.3 was our early attempt at capturing the dependencies between SDL tasks. The diagram
is heavily weighted to Waterfall SDLC and, thus, must not be regarded as representing our generic
SDL. However, the diagram fairly accurately describes which SDL activities must receive the output
of other activities. You may notice that the dependencies are not trivial and are also not particularly
straightforward.* A few attempts at following arrows from tasks on the left through to tasks on the right
should be enough to highlight the problems that originate from a failure to understand the relationships
between SDL tasks. Most SDL tasks are not independent and discrete, although often, SDL tasks are
conceptualized in this way.
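The dependency relationships described above can be made concrete with a small sketch. The task names and edges below are our own illustrative assumptions, not a reproduction of Figure 1.3; the point is only that once dependencies are captured as data, a valid ordering falls out of the graph rather than from an assumed SDLC timeline.

```python
from graphlib import TopologicalSorter

# Hypothetical, simplified SDL task dependencies (illustrative only).
# Keys depend on the tasks named in their value sets.
dependencies = {
    "threat_model": {"architecture"},
    "security_requirements": {"threat_model"},
    "secure_design": {"architecture", "threat_model"},
    "secure_coding": {"training"},
    "static_analysis": {"secure_coding"},
    "pen_testing": {"secure_design", "static_analysis", "functional_testing"},
}

# Any valid execution order must respect every dependency edge.
order = list(TopologicalSorter(dependencies).static_order())
print(order)
```

Note that the ordering produced is one of many valid orderings; independent tasks (e.g., training and architecture) may proceed in parallel, which is exactly why an SDL should state dependencies rather than a fixed sequence.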
For about 18 months (starting sometime in 2015), Bob Hale, Principal Engineer at Intel† (now
retired); Catherine Blackadar Nelson, Senior Security Researcher at Intel (now at Google); and Brook
(then a Principal Engineer at Intel Security) undertook a study of most of the published SDLs to look
for commonalities and to identify differences. The team also had access to a prepublication draft of
ISO’s 27034 Application Security standard.‡
Unfortunately, the products of that study group belong to Intel and have never been published. We
cannot provide the resulting SDL here. That Intel project produced what may well have been
the world’s first truly generic SDL based on a consensus analysis of a survey of extant SDLs. That SDL
ensured that it did not favor any particular method of development and could readily be applied across
various SDLCs. Once the team believed that they had an SDL, several development teams piloted the
SDL and helped to refine it. The software in the pilot included differing runtime stacks and involved
varying architectures, from firmware through operating systems, and included applications and cloud
infrastructure code.

* Our diagram does not attempt to capture relationships at all, only dependencies. The question we were
answering is, “Are there SDLC or SDL activity(ies)’ outputs without which an SDL task cannot begin?” It
must be noted that besides dependencies, the execution of some SDL activities affects others, as well as strict
dependency upon outputs.
† Intel’s Principal Engineer may be thought of as essentially equivalent to other organizations’ Distinguished
Engineer. Promotion requires that the candidate meet technical, leadership, and strategic criteria, which are
evaluated by a board of peers: other Principal Engineers. The title is not an honorific, but rather, a demonstration
of technical excellence and depth, strategic delivery, and organizational leadership, as well as requiring continued
leadership in each of these arenas.
‡ Intel was a contributor and sponsor of the standard.
Figure 1.3 SDL Task Dependencies.

Although the resulting SDL cannot be reproduced, that work greatly influences what you will find
in this book. Based upon what we learned building a truly generic SDL, such an SDL must:

• Be free from timing expectations based upon an assumed SDLC
• Express industry consensus on a comprehensive set of state-of-the-art software security practices
• State the necessary preconditions before any particular SDL task may be started, including any
dependencies on other SDL task outputs
• Describe those conditions that must be met to consider the task complete
• Explain the output(s), if any, of each task
• Describe any conditions that may trigger a refinement or review of the task’s outputs

When each task’s preconditions, task dependencies, completion requirements, outputs, and triggers
are fully described, the SDL is then freed from assumptions about and bindings to any particular SDLC.
What became obvious to us as we surveyed each SDL is that there exists a firm industry consensus
on what high-level activities constitute a robust and complete SDL. Chapter 3 will dive into the details
of the consensus SDL. At this point, it’s important to note that although each SDL may name activities
differently, and they most certainly do, the actual tasks are in fact the same, usually differing only in
name, or how a particular security task is divided or set of tasks is combined. Based upon our 18 months
of study, there is considerable industry consensus on how to best deliver software that exhibits security
behaviors. (We explain “security behaviors” below.)

1.4 What Is Secure Software?

Given that there exists a consensus (even if not entirely formalized) on what constitutes a reasonable set
of security activities—a consensus SDL—what is it that the SDL is trying to achieve? Unfortunately,
ask 10 people what they believe constitutes “secure software” and you’re likely to receive some appar­
ently divergent answers, maybe even 10 different ones.
In Secrets of a Cyber Security Architect, Brook wrote the following in an attempt to define a set of
criteria for describing “secure software”:

What are the behaviors that secure systems must exhibit? How is a “secure system” defined?
Over the years that I’ve been practicing, as I open a discussion about security with development
teams, I’ve noticed that quite often (not every time, but regularly), team members will
immediately jump to one of four aspects of software security:

• Protection of data (most often via encryption techniques)
• Implementation errors (most often, coding securely)
• Authentication and/or authorization of users of the system
• Network-level protection mechanisms

This set of responses has been remarkably stable for the last nearly 20 years, which is interesting
to ponder all by itself. Despite the dramatic shift in attacker capabilities and techniques over the
last 20 years—a huge shift in attacker objectives—developers seem to be thinking about one of
the above aspects of the security picture. I don’t know why development has not kept pace with
the expansion of adversarial thinking, but apparently it hasn’t (though, of course, my evidence
here is completely anecdotal and not at all scientifically validated).
Lately in my threat modeling classes (and sometimes other presentations), I’ve been polling
my audiences about what jumps first to mind when I say, “software security.” Not surprisingly,
members of my audiences typically find themselves considering one of the above categories
unless a participant has broader security exposure. My informal polls underline a need to
establish a baseline definition of just what software security must include, the field’s breadth,
its scope.
To address the challenge that development teams often lack a sufficiently complete picture of
what software security entails, as well as to provide a set of secure design goals, I came up with
the following secure software principles. “Secure” software must:

• Be free from implementation errors that can be maliciously manipulated: ergo, vulnerabilities
• Have the security features that stakeholders require for intended use cases
• Be self-protective; resist the types of attacks that will likely be attempted against the software
• In the event of a failure, must “fail well”—that is, fail in such a manner as to minimize
consequences of successful attack
• Install with sensible, “closed” defaults

The foregoing are the attributes that “secure software” displays, to one extent or another, as it
runs. These principles are aspirational, in that no running system will exhibit these behaviors
perfectly; these cannot be implemented to perfection. Indeed, so far as exploitable conditions
are concerned, whether from implementation, from a failure to identify the correct security
requirements, or a failure to design what will be [implemented] correctly, software, at its current
state of the art, will contain errors—bugs, if you will. Some of those errors are likely to have
unintended security consequences—that is, vulnerabilities allowing adversaries leverage or
access of one kind or another. This truism is simply a fact of building software, like it or not.
Then there is the matter of security context and desired security defensive state: a system
or organization’s security posture. Not every system is expected to resist every attack, every
adversary, every level of adversary sophistication and level of effort that can be expended (given
the universe of various threat agents).
Hence, presence and robustness of the above secure software behaviors must vary, system to
system, implementation to implementation.
Still, I expect software to account for the above behaviors, even if by consciously accepting
the risks generated by a considered absence or weakness of one or more of the principles given
above. My software principles are meant to drive secure design decisions, to be goals to reach
for. None of these principles is built as stated. These principles don’t tell you how to protect a
credential that must be held by a system. Rather, from these principles, design choices can be
evaluated. These are guideposts, not design standards. (Secrets, pp. 29–30)

As Brook described in the quotation above, “secure” software (a meaningless statement by itself) exhibits
a fairly distinct set of behaviors. The degree to which the software must or should behave in these
“secure” manners is a matter of risk analysis and decision making: No software (nor any other human
creation) is fully “self-protective.”
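Two of the principles quoted above, “fail well” and sensible “closed” defaults, can be illustrated with a short sketch. The function name and configuration shape here are our own hypothetical choices, not a prescribed API; the point is that both an absent setting and an error in reading the setting resolve to the safe, disabled state.

```python
def debug_endpoint_enabled(config: dict) -> bool:
    """Closed default: if the setting is absent or malformed, stay disabled."""
    try:
        return bool(config["features"]["debug_endpoint"]) and config["env"] != "production"
    except (KeyError, TypeError):
        # "Fail well": an error while reading configuration must not
        # silently enable a sensitive capability.
        return False

print(debug_endpoint_enabled({}))  # False: nothing configured, so closed
print(debug_endpoint_enabled({"features": {"debug_endpoint": True}, "env": "dev"}))  # True
```

The design choice worth noticing is that the exception handler returns `False` rather than re-raising or defaulting open: a failure in the security check itself is treated as a denial.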
Brook has noted from many a conference stage and in Secrets of a Cyber Security Architect that
“whatever can be engineered by humans can be reverse engineered by humans.” Given enough access
and resources, pretty much any defense can be surmounted. In cryptography, strength is measured in
the number of years of computation that are required to break it. That is, it is assumed that encryption
can be broken; it’s just a matter of time and resources. If the amount of time is beyond a human’s typical
working life span, the encryption is considered sufficient for a particular set of purposes.*

* Obviously, different encryption algorithms and key sizes have varying strengths. One must match the
encryption’s strength to needs. Usually, this is done via a risk analysis.
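A rough back-of-the-envelope calculation shows why strength is measured in years. The guess rate below (10¹² keys per second) is an arbitrary assumption chosen for illustration, not a claim about any real attacker’s capability.

```python
# Illustration of "strength is measured in years of computation."
GUESSES_PER_SECOND = 1e12           # assumed attacker capability
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def years_to_exhaust(key_bits: int) -> float:
    """Expected years to try half of a key space by brute force."""
    return (2 ** (key_bits - 1)) / GUESSES_PER_SECOND / SECONDS_PER_YEAR

for bits in (56, 80, 128):
    print(bits, f"{years_to_exhaust(bits):.3g} years")
```

At this assumed rate, a 56-bit key falls in hours, while a 128-bit key requires on the order of 10¹⁸ years: far beyond any working life span, and hence sufficient for most purposes.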

The amount of protection provided by an entity’s (a system, a distinct piece of software, an organiza­
tion) collection of security defenses is very much like cryptography: Effectiveness can only be measured
in attacker cost; there is no bullet-proof defense. Furthermore, much of those defenses will be imple­
mented through software, which we have already noted must be considered flawed (i.e., “has bugs”).
We might restate our guiding assumption as, “Any defense created by humans can be circumvented by
humans.” There is no perfectly safe security. Besides, “safe” is a subjective quality.
And so it is with each of secure software’s behavioral characteristics: What level of adherence to our
software security principles is sufficient given the purposes to which the software will be put and for
the level of attack and adversary sophistication that the software must resist in order to remain usable?
In addition, we must ask what is sufficient to avoid impacting the software’s owners and users in ways
from which they cannot readily recover.
The degree to which software adheres to these attributes is contextual and generally locally unique.
It would be foolish for any author (pundit, expert) to declare that they know precisely what level of self-
protection or failure resilience, as well as which security features every piece of software must adhere to.
A tiny utility that I write for my own use and whose code and execution are confined to my presumably
reasonably protected personal machine has a vastly different security posture from a public, commercial
cloud offering. These two cases exist in entirely different orders of security posture magnitude and thus
are not playing in even related leagues, other than both consisting of software.
How security principles are achieved is the purpose that an SDL is supposed to deliver. Of course,
it’s not quite that simple: Some parts of the SDL are intended to foster secure designs but are not, in
and of themselves, secure designs. Likewise, activities in the SDL should reduce implementation errors
but are not secure code. And so forth. The SDL is the blueprint for what sort of activities and skills will
be required to achieve the correct levels of secure behavior. That is, the SDL is intended to tell us what
we have to do to build software that will exhibit the correct amounts of each of the secure software
principles given previously.
But an SDL is not the set of skills; it is our pointer to the correct skills. What to execute and how
to achieve the goals of activities exist at a different level of implementation when building a secure
software program.
For instance, secure coding requires an understanding of what types of coding errors attackers
might leverage. The set of errors will be language dependent: Languages that allow programmatic
manipulation of memory have classes of attacks that are far more difficult to make in languages where
memory usage is opaque to the language. In the same vein, a failure to choose secure design patterns
leads to weaker security postures, that is, may offer attackers leverage.
In short: An SDL sets out that set of processes, activities, and timings that, taken together, provide
developers and system stakeholders a coordinated and holistic blueprint for achieving the security behav­
iors we have listed above—the set of activities that is most likely to achieve the desired security posture.
There are some intricacies that many extant SDLs fail to explain. For instance, some activities are
dependent upon the initiation, completion, or availability of work products that occur during develop­
ment. Some activities are dependent upon a work product having been started. Others require the
completion of tasks. In contrast to SDL activities whose timing is dependent upon the initiation or
completion of development tasks, there are other SDL activities that start as soon as development
produces code. These activities then proceed alongside of, in concert with, and perhaps are an integral
part of the development process from the point at which code begins to be produced. An SDL activity’s
dependencies tend to be unique to it.
An obvious example would be verification. If no code has yet been generated, the code cannot be
verified for correctness.* Likewise, if no major structures have been identified, then it will be difficult

* In test-driven SDLCs, however, the verification test is written before coding functionality. Even with this
paradigm, the test isn’t run until there’s functionality on which the test can be run.
to understand at what system points attackers will begin their attacks: a key factor essential to threat
modeling analysis. For manual penetration (pen) testing, typically, most of the development work and
SDL activities should already have been completed. Skilled, manual penetration testing is usually most
effective at the point at which nearly everything else in a cycle of change has been completed; penetra­
tion testing, thus, is highly dependent on the completion of most development.
Interestingly, pen testing can be independent of the remainder of the SDL. In our experience, that
is not the most effective approach. (We will describe effective conditions for pen testing in a subsequent
chapter.) We much prefer integrated pen testing that takes input from the preceding SDL activities
and offers significant feedback to earlier SDL activities. However, pen testing does not require that any
other SDL activities occurred previously.
Hence, an SDL must not only set out discrete activities and requirements, but it must also explain
how the activities and requirements work in concert and how they fit together and support each other:
their dependencies and relative timings.
We have not said “absolute timings” intentionally. An SDL that explicitly declares the timing of
activities becomes tied to the processes through which software will be developed, that is, the specific
Software Development Life Cycle (SDLC) that will be employed. A general-purpose SDL cannot be
tied to any particular SDLC, as outlined previously in this chapter. As stated, timing assumptions
restrict the application of the SDL to particular forms of SDLCs. Doing so then obviates the SDL’s use
for alternative SDLCs.
We believe, based upon work done for our previous employers (as described above), that it is not
only possible but demonstrably achievable to create and use an SDL that isn’t coupled to a particular SDLC. To
deliver such a general-purpose SDL, it must contain:

• A comprehensive set of security activities believed to express industry consensus on delivering software security
• An accompanying and integrated set of requirements that explains why and when to execute the
activities
• Processes that explain how to execute the activities and requirements
• The set of prerequisite work-products that must be achieved before starting each activity and
requirement
• The work-product(s) expected from execution and fulfillment of activities and requirements
• Any conditions that, when these exist, require the beginning of, and re-evaluation of, activities
and their outputs
• A definition of done that explicitly states when an activity may be considered to have been com­
pleted and its security requirements fulfilled

The preceding bullet points describe, at a high level, a set of requirements that, if fulfilled, will meet
the SDL needs of software development under any SDLC. Such an SDL will be fully independent of
any particular SDLC and can be executed as an integral part of every SDLC.
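The bullet points above amount to a schema for describing any SDL activity without reference to an SDLC’s calendar. A sketch of that schema follows; the field names and the sample activity are our own illustrative choices, not a standard.

```python
from dataclasses import dataclass

@dataclass
class SDLActivity:
    """One SDL activity, described independently of any SDLC's timing.

    Field names are illustrative, mirroring the requirements listed above.
    """
    name: str
    prerequisites: set[str]   # work products required before starting
    outputs: set[str]         # work products this activity yields
    triggers: set[str]        # conditions forcing re-evaluation of outputs
    definition_of_done: str   # explicit completion criteria

    def can_start(self, available_work_products: set[str]) -> bool:
        # The activity may begin once every prerequisite work product exists,
        # regardless of where the SDLC happens to be in its schedule.
        return self.prerequisites <= available_work_products

threat_model = SDLActivity(
    name="threat modeling",
    prerequisites={"system structure (architecture) sketch"},
    outputs={"prioritized threats", "required defenses"},
    triggers={"new attack surface", "architecture change"},
    definition_of_done="all credible attack vectors analyzed and dispositioned",
)

print(threat_model.can_start({"system structure (architecture) sketch"}))  # True
print(threat_model.can_start(set()))                                       # False
```

Because readiness is expressed as “these work products exist,” not “this phase has arrived,” the same description serves Waterfall, Agile, and DevOps flows alike.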
As we have already noted, there exists a consensus among software security practitioners on that set
of activities that, taken together, comprise the best opportunity to achieve software security objectives,
given the current (as of this writing) state of our art.
There are differences in the names given to various activities. There are differences in granularity of
activities: Some published SDLs tease apart activities that others lump together. This is particularly true
for design-time activities wherein SDL framers attempt to provide linear order to work that, in reality,
often doesn’t require any particular or strict ordering.
We will dig into the nonlinearity of today’s software development methods and the required, inte­
gral security work in fair detail in subsequent chapters. As we will see later, attempts to turn what must
be an integral process into discrete (and individually named) activities are a common error that separates
(and for some SDLCs makes nearly irrelevant) the SDL from software development with which SDL
activities must integrate—the seeming need to carefully order SDL activities obviates that very integra­
tion that is necessary for achieving software security.
“Flattening” iterative SDLC activities, which often occur in parallel, draws a false demarcation
between security and development, which is quite counterproductive. In our experience, when the SDL
is no longer expressed as a highly defined progression of tasks, SDLC integration becomes easy, natural,
and organic. Developers readily (and, importantly, eagerly) add security tasks as an integral part of the
fabric of software development, which reduces to near zero opportunities for nonproductive process
friction. In the chapters devoted to program building, we take up this subject in much greater detail. At
this point, we set the context for the SDL that is offered in this book.
Establishing the set of activities that will deliver needed security postures through development isn’t
sufficient. An SDL must also contain explicit requirements about where and at what points during any
change to execute each relevant SDL activity. A generic SDL needn’t attempt to predict particular tim­
ings, nor do requirements need to tie themselves to assumptions about the SDLC.
It actually turns out that triggers for executing even the most craft-rich activities in the SDL are
quite deterministic. We published several triggers throughout Chapter 9 in Core Software Security—in
particular, Section 9.2.1 The Seven Determining Questions. We will reiterate those later; none of them
has changed, though we are going to reshape the secure design activities around threat modeling—a
subsequent understanding that we’ve gained since publishing Core Software Security. At this point in
this work, it should be sufficient to understand that activity execution signposts can and have been
defined without coupling those to any particular SDLC. It isn’t necessary to definitively sequence the
SDL in order to give developers security requirements.
Some SDL activities are stand-alone: These can be executed without reference to other activities.
For instance, secure coding depends upon language-specific training on errors that must be avoided
and those coding patterns, in that particular language and runtime environment that offer the least
amount of (or better, no) adversary leverage. The only dependency, assuming training, is that there’s
code to generate. That’s a coupling with SDLC activity, but it has no additional dependency on other
SDL activities (other than the aforementioned training).
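To make “language-specific” concrete: each language has its own classes of attacker-leverageable errors and its own safer idioms. The following Python sketch, with hypothetical function names, contrasts a classic injection-prone pattern with the idiom secure-coding training would teach for that language.

```python
import sqlite3

# Minimal in-memory table for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

def lookup_unsafe(name: str):
    # Vulnerable pattern: string interpolation lets input rewrite the query.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name: str):
    # Safer idiom: a parameterized query treats input strictly as data.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(lookup_unsafe(payload))  # returns every row: the injection succeeded
print(lookup_safe(payload))    # returns []: input treated as a literal name
```

A C or C++ curriculum would instead emphasize memory-safety patterns; the dependency structure is identical: training in the relevant language, then code to apply it to.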
On the other hand, skilled penetration testing isn’t particularly effective until a set of changes is
pretty near completion—the software running as it will when released. Penetration testing doesn’t neces­
sarily depend upon completion of all other relevant SDL activities. Many organizations wrongly assume
that skilled manual (penetration) testing is the most important activity, maybe the only one needed.
That’s a big mistake, frankly, since if so, one has used the most expensive, most unique and boutique
tool to find issues that could have been identified much earlier and less expensively, or avoided entirely.
Hence, the ideal skilled, manual testing activity ought to (in our strong opinion) depend upon
the successful completion of all previous relevant and applicable SDL activities. Penetration testing
ought to be used as the proving (or not) of the previous SDL work. Penetration testing shouldn’t be the
catchall, single activity to achieve desired security objectives: Skilled penetration testing occurs much
too late for that.
Likewise, it’s usually a mistake to generate specific security designs before the right security structure is in place; that is, before the levels of secure structuring (architecture) activity have been successfully completed and specific algorithms are tried (or chosen, depending upon SDLC methods). Design (security and
other) depends upon relatively completed structural understanding. There’s also an important feedback
loop between these two: Design constraints may very well (should) influence structure (architecture)
choices. These two activities are interdependent and, in some SDLC methods, conjoined tightly; there
may be little need for division between them. Hence, the SDL must explicitly state how architecture
and design interact or are joined or there will be misunderstanding, or worse, friction around execution.
Finally, every SDL activity must absolutely tell developers when they can consider the activity
completed and what “success” looks like, how to measure effectiveness, what the activity is expected to
achieve, and how to prove its objectives have been achieved. Those activities whose execution depends
(sometimes heavily) on experience and human analysis, that is, “craft” as much as engineering, can
appear to be an endless time sink. This problem can bedevil those whose role is to ensure that develop­
ment completes—on time and under budget (often project management and similar roles).
We honor the many (hundreds) of people who tirelessly organize and drive what can seem from
the outside to be a chaotic system: software development. Security must not add to that burden. The
SDL we offer here has clear objectives for each activity. For activities such as threat modeling, we have
previously published a fairly clear “Definition of Done,” both as a blog post and in Secrets of a Cyber
Security Architect (Appendix E, pp. 203–206), which we will reiterate here. Every SDL activity must
contain conditions by which its completion can be established. There must be some way to measure at
what level the activity has been effective or successful.
In summary, we believe, based upon our extensive experience, that an SDL can be achieved that
is not tightly coupled to any particular SDLC and which does not express itself in a way that makes
integration into software development difficult, even impossible. We’ve seen it. We’ve lived security as
a part of the fabric of development; we have shifted development organizations from security avoidance
to a culture of security, what Noopur Davis* calls, “culture hacking.” We know in our bones that not
only are these goals possible, they are readily achievable.
The journey isn’t easy. Local variations and unique requirements always exist. Different business
and organizational goals require different levels of security; there is no “one-size-fits-all” security pos­
ture. Ivory tower security pronouncements are exactly that: not real-world needs.
Still, by using a generic, consensus-based SDL, adapted to local needs, we have seen organizations
achieve their security goals. We’ve measured the decline of preventable security leaks in the software
for which we’ve been responsible. There is little in our career lives that is more satisfying. In this book,
we hope to share our technical, process, and organizational methods so that you, too, can reap some of
these rewards.

1.5 Developing an SDL Model That Can Work with Any Development Methodology

Ensuring that everyone touching the product development lifecycle has the knowledge they need to
support an organization’s software security process is a fundamental challenge for any organization
committed to software security success. The goal is to remove the pain that organizations face in
developing a custom program of their own amid resource constraints and knowledge vacuums. Developers
are often under intense pressure to deliver more features on time and under budget. Few developers
get the time to review their code for potential security vulnerabilities. When they do get the time, they
often don’t have secure-code training and lack the automated tools, embedded processes and proce­
dures, and resources to prevent hackers from using hundreds of common exploit techniques to trigger
malicious attacks.
Unfortunately, before DevOps and DevSecOps, many companies thought it made more business
sense not to produce secure software products than it did to produce them. Any solution needs to
address this as a fundamental market failure instead of simply wishing it were not true. If security is to
be a business goal, then it needs to make business sense. In the end, security requirements are in fact
the same as any business goals and should be addressed as equally important. Employers should expect
their employees to take pride in and own a certain level of responsibility for their work. And employees
should expect their employers to provide the tools and training they need to get the job done. With

* Noopur Davis, Executive Vice President, Chief Product and Information Security Officer, Comcast, Inc.
these expectations established and goals agreed on, perhaps the software industry can do a better job of
strengthening the security of its products by reducing software vulnerabilities.
A new security model for DevSecOps is needed that requires new mindsets, processes, and tools to
adhere to the collaborative, agile nature of DevOps. The primary focus is creating new solutions for
complex software development processes within an agile and collaborative framework. To migrate to
a new model, you must fully understand your current process and lifecycle so that you can bridge the
traditional gaps between the software development, operations, and security teams. This should include
focus on shared responsibility of security tasks during all phases of the delivery process. This will result
in positive outcomes for the business as a consequence of combining development, security, and opera­
tions teams; shortening feedback loops; reducing incidents; and improving security.
Reinventing how you perform your SDL will be key to your success in optimizing security in an
agile and DevOps environment. The goals of SDL are twofold: The first goal is to reduce the number of
security vulnerabilities and privacy problems; the second goal is to reduce the severity of the vulnerabili­
ties that remain (ergo, “security technical debt”). Although this SDL may look similar to other SDLs
you have seen, our approach to implementing this SDL not only brings the tasks and organizational
responsibilities back into the SDL but also keeps the centralized software security group and engineer­
ing software development teams empowered to own the security process for the products for which they
are directly responsible.
Given the continued pressure to do more with less, we don’t believe most organizations will have
the luxury of having most of the elements that we include in “post-release support” as separate organiza­
tions. This has been typical in the past, but we believe that organizations will need to find innovative ways to include these elements as part of their overall software security program in order to leverage available resources. Most important to the SDL are the organizational structure, people, and
process required to deliver it, both effectively and efficiently, while maximizing the return on invest­
ment (ROI) for security in the post-release environment.
Inevitably, teams feel overwhelmed. It seems like even more has been placed on already heav­
ily weighted shoulders. Immediately, the smart folks will ask, “What do I HAVE to do?” Since that
depends, the next query will be, “Then what’s the minimum?” We are getting away from all of that
without sacrificing any security task.
Software developers know how to write software in a way that provides a high level of security and
robustness. So why don’t software developers practice these techniques? The model in this book will
answer this question in two parts:

1. Software is determined to be secure as a result of an analysis of how the program is to be used, under what conditions, and the security requirements it must meet in the environment in which
it is to be deployed. The SDL must also extend beyond the release of the product: if the assumptions underlying the software, and the requirements they previously implied, do not hold in an unplanned operational environment, the software may no longer be secure, and the SDL process may start over in part or as a whole if a complete product redesign is required. In this
sense, the authors establish the need for accurate and meaningful security requirements and
the metrics to govern them, as well as examples of how to develop them. It also assumes that
the security requirements are not all known prior to the development process and describes the
process by which they are derived, analyzed, and validated.
2. Software executives, leaders, and managers must support the robust coding practices and security enhancements required by a business-relevant SDL, as well as the staffing, scheduling, budgeting, and resource allocations required for this
type of work. Part of this model covers the process, requirements, and management of metrics
for people in these roles so they can accurately assess the impact and resources required for an
SDL that is relevant to and works best in their organization and environment. The model is
Setting the Stage 19

approached and designed from real-life, on-the-ground challenges and experiences; the authors
describe how to think about issues in order to develop effective approaches and manage them as
a business process.

Because security is integrated tightly into every part of an Agile process, some of the engagement
activities typically found in Waterfall SDLs are no longer required. Security is built into the process from the conception of a product and subsequently throughout. In fact, early security require­
ments gathering must be accomplished for a completed Plan of Intent. This portion of the process
equates to the engagement question, “[A]rchitecture is a complete redesign or is entirely new?” Similarly,
once an architecture runway is initiated, the security architect should be included on the architecture
team that shapes the architecture so that it fosters the appropriate security features that will be needed.
The security portion of an Agile process assumes an iterative architecture and design process. There
is no tension between iteration and refinement on the one hand and security on the other within this
process. Security does not attempt to “bound” the iterative process. Rather, since security expertise is
integral to iteration and refinement, a secure design will be a natural result of the Agile process. In this
way, security becomes Agile—that is, able to quickly account for changes. This approach produces flex­
ible, nimble security architecture.
A high-level abstraction (design, build, and verify) remains relatively stable across methodologies, although Agile approaches shift some parts of these into an iterative, parallel set of ongoing and repeated tasks. Since the entire Agile process emphasizes iteration, the usual SDL activities also must
iterate or be left in the dust. Architecture before formal Sprints begin is not meant to be a completed
process but rather a gateway that seeds, informs, and empowers the iterative design that will take place
during Sprints. This alone is radically different from the way architecture has been perceived as a dis­
crete and independent process. Rather, many formally discrete SDL tasks are taking place in parallel
during any particular Sprint. Secure design, secure coding, manual code review, or any number of test­
ing approaches that don’t require a completed, more or less holistic piece of software are all taking place
at the same time and by the same team. The foregoing implies that security can’t jump in to interject
pronouncements and then jump back out until some later governance step where judgment occurs
about whether the security plan has been carried out correctly. The very nature of agile implies that
plans will change based on newly acquired information. Our approach, then, is for security to benefit
from that iterative process.
Previously, in Core Software Security, we provided a detailed overview of the Waterfall model we
modified to create our first rendition of an agile development process. We also used it to move the
security responsibilities into the development group before DevOps and DevSecOps became a model
to do this. Before we move on to describe a model that can work with any development methodology,
it is important to go back to where we started and the baseline upon which we built what we talk about
in future chapters. A DevSecOps approach drives technical, operational, and social changes that help
organizations address security threats more effectively, in real time. This gives security a “seat at the table” with the development and operations teams and makes it a contributor to speed of delivery rather than a hindrance. The next few sections describe the evolution of software development practices to include
a previous Waterfall-type SDL model that we used in an Agile environment followed by a newer model
fully optimized for the DevOps environment that can work with any development methodology.

1.5.1 Our Previous Secure Development Lifecycle Design and Methodology


We start this section by introducing the concept of overcoming the challenges of making software
secure through the use of an SDL, as described in Core Software Security. Software security has evolved
at a rapid pace since that book was published.

We will move quickly from a review of our previous design in this section to an approach better
aligned with Agile methods and a design that is more appropriate for a DevOps environment through­
out the remainder of the book.
Further discussions of the models, methodologies, tools, human talent, and metrics for managing
and overcoming the challenges to make software secure can be found later in this book.
It should be noted that there is still a need for better static and dynamic testing tools and a formal­
ized security methodology integrated into SDLCs that is within the reach of a majority of software
development organizations. In the past decade or so, the predominant SDL models have been out of
reach for all but the most resource-rich companies. Our goal in this book is similar to our previous
book: to create an SDL based on leveraging resources and best practices rather than requiring resources
that are out of reach for a majority of software security teams.

1.5.1.1 Overcoming Challenges in Making Software Secure

SDLs are the key step in the evolution of software security and have helped to bring attention to the
need to build security into the SDLC. In the past, software product stakeholders did not view software
security as a high priority. It was believed that a secure network infrastructure would provide the level
of protection needed against malicious attacks. In recent history, network security alone has proved
inadequate against such attacks. Users have been successful in penetrating valid authenticated channels
through techniques such as cross-site scripting (XSS), Structured Query Language (SQL) injection,
and buffer overflow exploitation. In such cases, system assets were compromised, and both data and
organizational integrity were damaged. The security industry has tried to solve software security prob­
lems through stopgap measures. First came platform security (OS security), then network/perimeter
security, and, now, application security. We need defense-in-depth to protect our assets, but, fundamentally, each of these exploits targets a software security flaw that must be remediated through a software security approach (an SDL) that tightly integrates with, and is organic to, the SDLC that developers use.
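As a concrete illustration of why these are software flaws rather than network flaws, consider SQL injection, one of the techniques named above. The sketch below (Python with the standard-library sqlite3 module; the table, function names, and payload are purely illustrative, not from this book's SDL) shows how a string-built query lets attacker input through an otherwise valid authenticated channel, and how a parameterized query, a standard secure-coding control, remediates the flaw at the source:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # VULNERABLE: attacker-controlled input is concatenated into the SQL text,
    # so the input is parsed as SQL rather than treated as data.
    return conn.execute(
        "SELECT name FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the input is bound as a value and never parsed as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

payload = "nobody' OR '1'='1"            # classic injection payload
print(find_user_unsafe(conn, payload))   # every row leaks: [('alice',), ('bob',)]
print(find_user_safe(conn, payload))     # no match: []
```

No firewall can distinguish the two requests; only a fix in the code itself, of the kind an SDL institutionalizes, closes the hole.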
We might call this integration software security’s “SDLC approach,” or something like “developer­
centric security.” Whatever we term it, the critical concept is that an easily implementable SDL must
support the development methods (SDLC) in use by developers. Ours constitutes an about-face from
software security approaches that force developers to also learn and accommodate security approaches
that conflict with their SDLC.
The SDL has, as its base, components comprising all of the activities and security controls needed to develop software that is compliant with industry and government standards and hardened to best practices. A knowledgeable staff, along with secure software policies and controls, is required in order to truly prevent, identify, and mitigate exploitable vulnerabilities within developed systems.
Failing to meet even the least of the activities found within the secure SDLC provides an opportunity for the misuse of system assets by both insider and outsider threats. Security is not simply a network requirement; it is now an information technology (IT) requirement, which includes the development of all software intended to distribute, store, and manipulate information. Organizations must implement the highest standards of development in order to ensure the highest quality of products for their customers and the lives that those products protect.
Implementation of a secure SDLC program ensures that security is inherent in good enterprise software design and development, not an afterthought added later in production. Taking this approach yields tangible benefits, such as ensuring that all software releases meet minimum security criteria and that all stakeholders support and enforce security guidelines. Eliminating software risk early in the development cycle, when vulnerabilities are easier and less expensive to fix, also gives information security teams a systematic way to collaborate with development teams throughout the process.

1.5.2 Mapping the Security Development Lifecycle (SDL) to the Software Development Life Cycle (SDLC)
Whatever form of SDL you use, whether it is one that already exists, one you developed yourself, or
a combination of both, you must map it to your current SDLC to be effective. Figure 1.4 (formerly
Figure 2.4) is an SDL activity and best practices model that the authors have developed and mapped to
the typical SDLC phases. Each SDL activity and best practice is based on real-world experience, and
examples from the authors show the reader that security can be built into each of the SDLC phases—a
mapping of security to the SDLC, if you will. If security is built [as a core part of development tasks],
then the software has a higher probability of being secure by default, and later software changes are less
likely to compromise overall security. Another benefit of this mapping is that you will have presumably
worked with the owner(s) and stakeholders of the SDL, which will serve to build buy-in, efficiency, and
achievable security in both the operational and business processes of the SDLC and will include the
developers, product and program managers, business managers, and executives. (As stated previously,
the model in Figure 1.4 is based on the years of experience, research, and stakeholder/customer inter­
action shared between the authors in the field of software and information security.)
Each phase of the SDL in Figure 1.4 was described in great detail in Core Software Security and was
broken up as shown in Figures 1.5–1.10 below.

1.5.3 Software Development Methodologies


In Core Software Security, we discussed the various SDLC models and provided a visual overview of
our mapping of our SDL model to a generic SDLC. It should be noted, however, that multiple software
development methodologies are used within the various SDLC models. Every software development
methodology approach acts as a basis for applying specific frameworks to develop and maintain soft­
ware and is less concerned with the technical side than with the organizational aspects of the process
of creating software. Principal among these development methodologies are the Waterfall model and
Agile, together with their many variants and spin-offs. The Waterfall model is the oldest and most
well-known software development methodology. The distinctive feature of the Waterfall model is its sequential, step-by-step progression through phases, beginning with requirements. Agile methodologies are gaining popularity in industry, although they comprise a mix of traditional and new software development practices. You may see
Agile or traditional Waterfall or maybe a hybrid of the two. We have chosen to give a high-level descrip­
tion of the Waterfall and Agile development models and a variant or two of each as an introduction to
software development methodologies.

1.5.3.1 Waterfall Development

Waterfall development (see Figure 1.11) is another name for the more traditional approach to soft­
ware development. This approach is typically higher risk, more costly, and less efficient than the Agile
approach, which is discussed later in this chapter. The Waterfall approach uses requirements that are
already known, each stage is signed off before the next commences, and it requires extensive documen­
tation because it is the primary communication mechanism throughout the process. Although most
development organizations have already moved toward Agile methods, the Waterfall method may still
be used when requirements are fully understood and not complex. Since the plan is not to revisit a phase
using this methodology once it is completed, it is imperative that you do it right the first time: There is
generally no second chance.
Figure 1.4 Mapping the Security Development Lifecycle (SDL) to the Software Development Life Cycle (SDLC). (Source: Reproduced from Ransome, J.
and Misra, A. [2014]. Core Software Security: Security at the Source. Boca Raton [FL]: CRC Press/Taylor & Francis Group, p. 46, with permission.)
Figure 1.5 Security Assessment (A1): SDL Activity and Best Practice. (Source: Reproduced from Ransome, J. and Misra, A. [2014]. Core Software
Security: Security at the Source. Boca Raton [FL]: CRC Press/Taylor & Francis Group, p. 47, with permission.)

Figure 1.6 Architecture (A2): SDL Activity and Best Practice. (Source: Reproduced from Ransome, J. and Misra, A. [2014]. Core Software Security:
Security at the Source. Boca Raton [FL]: CRC Press/Taylor & Francis Group, p. 47, with permission.)
Figure 1.7 Design and Development (A3): SDL Activity and Best Practice. (Source: Reproduced from Ransome, J. and Misra, A. [2014]. Core Software
Security: Security at the Source. Boca Raton [FL]: CRC Press/Taylor & Francis Group, p. 48, with permission.)

Figure 1.8 Design and Development (A4): SDL Activity and Best Practice. (Source: Reproduced from Ransome, J. and Misra, A. [2014]. Core Software
Security: Security at the Source. Boca Raton [FL]: CRC Press/Taylor & Francis Group, p. 48, with permission.)
Figure 1.9 Ship (A5): SDL Activity and Best Practice. (Source: Reproduced from Ransome, J. and Misra, A. [2014]. Core Software Security: Security
at the Source. Boca Raton [FL]: CRC Press/Taylor & Francis Group, p. 49, with permission.)

Figure 1.10 Post-Release Support (PRSA1-5): SDL Activity and Best Practice. (Source: Reproduced from Ransome, J. and Misra, A. [2014]. Core
Software Security: Security at the Source. Boca Raton [FL]: CRC Press/Taylor & Francis Group, p. 49, with permission.)

Plan → Build → Test → Review → Deploy

Figure 1.11 Waterfall Software Development Methodology. (Source: Reproduced from Ransome, J.
and Misra, A. [2014]. Core Software Security: Security at the Source. Boca Raton [FL]: CRC Press/Taylor
& Francis Group, p. 51, with permission.)

Although Waterfall development methodologies vary, they tend to be similar in that practitio­
ners try to keep to the initial plan, do not have working software until very late in the cycle, assume
they know everything upfront, minimize changes through a change control board (i.e., assume that
change is bad and can be controlled), put most responsibility on the project manager (PM), optimize
conformance to schedule and budget, generally use weak controls, and allow realization of value only
upon completion. They are driven by a PM-centric approach under the belief that if the processes in
the plan are followed, then everything will work as planned. In today’s development environment,
most of the aforementioned items are considered negative attributes of the Waterfall methodology
and are just a few of the reasons that industry is moving toward Agile development methodologies.
The Waterfall approach may be looked on as an assembly-line approach, which may be excellent when
applied properly to hardware but which has shortcomings in comparison to Agile when it comes to
software development.

1.5.3.2 Iterative Waterfall Development

The iterative Waterfall development model (see Figure 1.12) is an improvement on the standard
Waterfall model. This approach carries less risk than a traditional Waterfall approach but is riskier and
less efficient than the Agile approach. In the iterative Waterfall method, the overall project is divided
into various phases, each executed using the traditional Waterfall method. Dividing larger projects into
smaller identifiable phases results in a smaller scope of work for each phase, and the end deliverable of
each phase can be reviewed and improved, if necessary, before moving to the next phase. Overall risk
is thus reduced.
The iterative method has demonstrated a marked improvement over the traditional Waterfall
method. You are more likely to face an Agile approach to software development rather than either a
standard or an iterative Waterfall methodology in today’s environment.

1.5.3.3 Agile Development

The Agile approach is based on both iterative and incremental development methods. Requirements
and solutions evolve through collaboration among self-organizing, cross-functional teams, and a

Plan → Build → Test → Review → Deploy
Plan → Build → Test → Review → Deploy

Figure 1.12 Iterative Waterfall Software Development Methodology. (Source: Reproduced from
Ransome, J. and Misra, A. [2014]. Core Software Security: Security at the Source. Boca Raton [FL]: CRC
Press/Taylor & Francis Group, p. 52, with permission.)

solution resulting from every iteration is reviewed and refined regularly throughout the process. The
Agile method is a time-boxed iterative approach that facilitates a rapid and flexible response to change,
which, in turn, encourages evolutionary development and delivery while promoting adaptive planning,
development, teamwork, collaboration, and process adaptability throughout the lifecycle of the project.
Tasks are broken into small increments that require minimal planning. These iterations have short time
frames called “time boxes” that can last from one to four weeks. Multiple iterations may be required to
release a product or new features. A cross-functional team is responsible for all software development
functions in each iteration, including planning, requirements analysis, design, coding, unit testing, and
acceptance testing. An Agile project is typically cross-functional, and its self-organizing teams operate independently of any corporate hierarchy and of the other corporate roles of individual team members, who decide among themselves how to meet each iteration’s requirements. This allows the project to adapt to changes
quickly and minimizes overall risk. The goal is to have an available release at the end of the iteration,
and a working product is demonstrated to stakeholders at the end of each iteration.

1.5.3.4 Scrum

Scrum (see Figure 1.13) is an iterative and incremental Agile software development method for manag­
ing software projects and product or application development. Scrum adopts an empirical approach,
accepting that the problem cannot be fully understood or defined and focusing instead on maximizing
the team’s ability to deliver quickly and to respond to emerging requirements. This is accomplished
through the use of co-located, self-organizing teams in which all disciplines can be represented. In
contrast to traditional planned or predictive methodologies, this concept facilitates the ability to handle
churn resulting from customers that change the requirements during project development. The basic
unit of development for Scrum is called a “Sprint,” and a Sprint can last from one week to one month.
Figure 1.13 Scrum Software Development Methodology. (Source: Reproduced from Ransome, J. and Misra, A. [2014]. Core Software Security:
Security at the Source. Boca Raton [FL]: CRC Press/Taylor & Francis Group, p. 54, with permission.)

Each Sprint is time-boxed so that finished portions of a product are completed on time. A prioritized list of requirements is derived from the product backlog; any items not completed during the Sprint are left out and returned to the product backlog. The team demonstrates the software after
each Sprint is completed. Generally accepted value-added attributes of Scrum include its use of adaptive planning; that it requires feedback from working software early, during the first Sprint (typically two weeks), and often thereafter; that it stresses the maximization of good change, such as focusing on maximizing learning throughout the project; that it puts most responsibility on small, dedicated, tight-thinking adaptive teams that plan and re-plan their own work; that it has strong and frequent controls; that it optimizes business value, time to market, and quality; and that it supports the realization of value earlier, potentially after every Sprint.

1.5.3.5 Lean Development

In our experience, for those of you who have recently moved from or are in the process of moving from a
Waterfall methodology for software development, Scrum is the most likely variant of Agile that you will
encounter. Lean (see Figure 1.14) is another methodology that is gaining popularity and is thus worth
mentioning. Unfortunately, there are many definitions of Lean, which is a methodology that is evolving
in many directions. Although Lean is similar to Scrum in that it focuses on features rather than groups
of features, it takes this idea one step further in that, in its simplest form, you select, plan, develop,
test, and deploy one feature before you select, plan, develop, test, and deploy the next feature. The
objective is to further isolate risk to the level of an individual feature. This isolation has the advantage
of focusing on eliminating “waste,” when possible, and doing nothing unless it is absolutely necessary
or relevant. Lean development can be summarized by seven principles based on Lean manufacturing
principle concepts: (1) eliminate waste, (2) amplify learning, (3) decide as late as possible, (4) deliver
as fast as possible, (5) empower the team, (6) build integrity in, and (7) see the whole. One of the key
elements of Lean development is to provide a model in which you can see the whole, even when your
developers are scattered across multiple locations and contractors. Although still considered related to
Agile by many in the community, lean software development is a related discipline rather than a specific
subset of Agile.

Plan → Build → Test → Review (repeated, one feature at a time)

Figure 1.14 Lean Software Development Methodology. (Source: Reproduced from Ransome, J. and
Misra, A. [2014]. Core Software Security: Security at the Source. Boca Raton [FL]: CRC Press/Taylor &
Francis Group, p. 55, with permission.)

1.6 The Progression from Waterfall and Agile to Scrum: A Management Perspective
Up until just a few years ago, most software development projects were created using the Waterfall
method, as described in Core Software Security. Typically, complex Gantt charts laying out every step, milestone, and delivery date in great detail are used to manage the Waterfall process and convince
management that there is complete control and predictability for a project. This type of process tries to
restrict change and in-process creativity. It doesn’t account for and adapt to the unknown or support
short-term course corrections. Unfortunately, we still see teams using these charts and methodology to
manage their projects in agile and just-in-time environments, which is a recipe for disaster. Compared
to today’s practices, this is a very slow process that can result in delays of months or even years and in significant budget overruns.
Jeff Sutherland first created Scrum with Ken Schwaber, in 1993, as a faster, more reliable, and more
effective way to create software in the tech industry.* Jeff states the following in his recent book:

The term “Agile” dates back to a 2001 conclave where I and sixteen other leaders in software development
wrote up what has become known as the “Agile Manifesto.” It declared the following values: people
over processes; products that actually work over documenting what that product is supposed to do;
collaborating with customers over negotiating with them; and responding to change over following a
plan. Scrum is the framework I built to put those values into practice. There is no methodology.†

Scrum provides a framework that enables teams to optimize their speed and quality through self-organization and through awareness of what they have created and how. The process underlying
Scrum includes:

• The identification of incremental goals to be completed in a fixed length of time, a sequential part of the product build.
• A daily short sync meeting (typically 15 minutes or less) called a Daily Standup to ensure that you
are headed in the right direction efficiently and effectively.
• Setting goals and systematically working out how to get there and, even more important, identifying what is stopping the team from getting there.
• Teams that are cross-functional, autonomous, and empowered.
• The accommodation of necessary changes through the daily management of backlogs and use of
short development cycles called “Sprints.”

Sprints are short, typically two- to four-week cycles (although Sprint length varies considerably
depending upon need and type of software to be built). Each cycle begins with a meeting to plan the
Sprint, where the work to be completed is decided. This is also the meeting where security for the prod­
uct and any adjacencies it may have are defined. In this regard, security is no different from other prop­
erties that the software must exhibit, that is, a level of performance or ease of use. The prioritized list of things to be accomplished is written on sticky notes and put on a wall or work board. Those working
with the Agile SDLC are likely to also employ Lean to manage work items. Any items remaining that
are still necessary to be completed as part of the product development are then put into a backlog to be
completed in a later Sprint. To avoid distracting the team or interfering with accomplishing the goals
set forth in the Sprint, the tasks are now locked in and nothing else can be added by anyone outside

* Sutherland, J. (2014). SCRUM: The Art of Doing Twice the Work in Half the Time. New York (NY): The Crown
Publishing Group, p. vii.
† Ibid., pp. 12–13.

the team. Interrupting this flow can significantly slow down the process and defeats the purpose of using Scrum.
The first thing you need to do when you’re implementing Scrum is to create a Backlog. The Product
Backlog is an ordered list of everything that is known to be needed for the product and the single source
of requirements for any changes to be made to the product. Initially, it lays out the known and best-
understood requirements. A Product Backlog remains dynamic and is never complete. It is constantly
changing to identify what the product needs (including new items) and evolves as the product and
the environment in which it will be used evolves. The Product Owner is responsible for the Product
Backlog, including its content, availability, and ordering.
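The mechanics just described, an ordered, never-complete list whose content and ordering the Product Owner controls, can be sketched as a tiny data structure. This is a minimal illustration in Python only; the class, method, and item names are our own invention, not part of any Scrum tool or of this book's SDL:

```python
from dataclasses import dataclass, field

@dataclass
class BacklogItem:
    title: str
    priority: int  # lower number = higher priority

@dataclass
class ProductBacklog:
    """A minimal Product Backlog: an ordered list the Product Owner keeps current."""
    items: list = field(default_factory=list)

    def add(self, title, priority):
        # The backlog is never complete; new items arrive as the product evolves.
        self.items.append(BacklogItem(title, priority))
        self.items.sort(key=lambda item: item.priority)

    def top(self, n):
        # The highest-priority items seed the next Sprint planning meeting.
        return [item.title for item in self.items[:n]]

backlog = ProductBacklog()
backlog.add("User login", 1)
backlog.add("Password reset", 3)
backlog.add("Threat model login flow", 2)  # security work is ordered like any other item
print(backlog.top(2))  # ['User login', 'Threat model login flow']
```

The point of the sketch is the ordering discipline: security items enter and compete for priority exactly as any other backlog item does.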
Establishing a work rhythm and consistency are also important, and Sprints are set for specific
lengths of time throughout the development process. For example, if a decision is made for a two-week
Sprint, then each Sprint will remain two weeks in length for the entire product-development process—
not a one-week, followed by a three-week, and then a two-week Sprint, and so on.
As with anything in the development process, fix any mistakes or bugs you have identified immedi­
ately. Jeff Sutherland, the co-creator of Scrum, shares that in his experience, fixing it later can take you
more than twenty times longer than if you fix it now:

It took twenty-four times longer. If a bug was addressed on the day it was created, it would take an
hour to fix; three weeks later, it would take twenty-four hours. It didn’t even matter if the bug was
big or small, complicated or simple—it always took twenty-four times longer three weeks later. As
you can imagine, every software developer in the company was soon required to test and fix their code
on the same day.*

Kaizen is the Japanese word for improvement and is used commonly in Scrum to identify what is
also called the “Sprint Retrospective,” conducted at the end of each Sprint to identify what went right,
what could have gone better, and what can be made better in the next Sprint. What can potentially be
shipped to customers for feedback is also identified at this stage.
Most importantly, rather than seeking someone to blame for mistakes, this is where the team takes responsibility for its process and outcomes, seeking solutions as a team and acting on them to make changes as needed.
In this sense, Kaizen is identifying what will actually change the process and make it better the
next time. If you are familiar with Deming’s PDCA (plan–do–check–act) cycle,† this is the “Check”
part and why it is so important to make sure to set up the ability to get to the “Act” step. Each Sprint
should identify and address at least one improvement or kaizen and make it the most important thing
to accomplish in the next Sprint.
The Scrum team divides the work into functional increments called “user stories” that, when implemented, contribute to the overall product value. This is done in consultation with the customer or product owner. The elements of the user stories are captured using the INVEST checklist. The acronym INVEST serves as a checklist for quickly evaluating user stories and originated in an article by Bill Wake, which also repurposed the acronym SMART (Specific, Measurable, Achievable, Relevant, Time-boxed) for tasks resulting from the technical decomposition of user stories.‡ Successful completion of the criteria in the INVEST checklist tells you whether a story is ready, as shown below:

* Ibid., p. 100.
† The W. Edwards Deming Institute. (2020). “PDSA Cycle.” Retrieved from https://deming.org/explore/p-d-s-a
‡ Agile Alliance. (2020). “Glossary Definition of INVEST.” Retrieved from https://www.agilealliance.org
/glossary/invest/#q=~(infinite~false~filters~(postType~(~’page~’post~’aa_book~’aa_event_session~’aa_
experience_report~’aa_glossary~’aa_research_paper~’aa_video)~tags~(~’invest))~searchTerm~’~sort~false~sort
Direction~’asc~page~1)
32 Building In Security at Agile Speed

Independent. The story must be actionable and “completable” on its own. It shouldn’t be
inherently dependent on another story.
Negotiable. Until it’s actually being done, it needs to be able to be rewritten. Allowance for
change is built in.
Valuable. It actually delivers value to a customer or user or stakeholder.
Estimable. You have to be able to size it.
Small. The story needs to be small enough to be able to estimate and plan for easily. Preferably,
this should be accomplished in a single Sprint. If it is too big, rewrite it or break it down into
smaller stories.
Testable. The story must have at least one test it is supposed to pass in order to be complete.
Write the test before you do the story.*
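To make the checklist concrete, here is a rough sketch in Python (the field names, point threshold, and readiness logic are our own illustration, not part of Scrum; “Negotiable” is omitted because it describes how the team treats the story, not the story record itself):

```python
from dataclasses import dataclass, field

@dataclass
class UserStory:
    """A user story carrying the attributes the INVEST checklist asks about."""
    description: str
    depends_on: list = field(default_factory=list)        # Independent
    value_statement: str = ""                             # Valuable
    estimate_points: int = 0                              # Estimable
    acceptance_tests: list = field(default_factory=list)  # Testable

    def is_ready(self, max_points: int = 8) -> list:
        """Return the INVEST criteria the story still fails (empty = ready)."""
        failures = []
        if self.depends_on:
            failures.append("Independent: depends on other stories")
        if not self.value_statement:
            failures.append("Valuable: no stated value")
        if self.estimate_points <= 0:
            failures.append("Estimable: not yet sized")
        if self.estimate_points > max_points:
            failures.append("Small: too big for one Sprint; split it")
        if not self.acceptance_tests:
            failures.append("Testable: write at least one test first")
        return failures

story = UserStory("As a user, I can reset my password",
                  value_statement="Reduces support calls",
                  estimate_points=3,
                  acceptance_tests=["reset email arrives within 1 minute"])
print(story.is_ready())  # [] means the story is ready
```

A team would never need such tooling to apply INVEST, of course; the sketch simply shows that each criterion is a concrete, checkable property of the story.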

Each user story is written on an index card or sticky note as a brief descriptive sentence reminding the team of its value. This makes it easier for teams to collaborate in making collective scheduling decisions by moving the sticky notes or cards around on a work board. A user story should not be confused with a Use Case. A Use Case is a description of all the ways an end user wants to “use” a system. Use Cases capture all the possible interactions between the user and the system that result in the user achieving the goal. The things that can go wrong along the way and prevent the user from achieving the goal are also captured as Use Cases.
Traditional Waterfall teams could fail and wonder what went wrong by waiting too long to get
actionable feedback from each other, the business, and the market. In contrast, when Scrum is man­
aged correctly, things that could result in failure are visible and addressed quickly. Make work visible
through the use of a work board with sticky notes that show all the work that needs to be done, what is
being worked on, and what is actually done. It should be updated every day, and everyone should see it.
An “Epic” is a collection of related user stories that cannot be completed within a single iteration. Although the team decides how each task will be accomplished, the Epic typically defines what will be accomplished: the business value.
The most important thing in a Sprint is deciding what you are going to do first. We believe Jeff Sutherland puts it best:

The key, though, is what you decide to do first. The questions you need to ask are: what are the items
that have the biggest business impact, that are most important to the customer, that can make the
most money, and are the easiest to do? You have to realize that there are a whole bunch of things on
that list that you will never get to, but you want to get to the things that deliver the most value with
the lowest risk first. With Scrum’s incremental development and delivery, you want to begin with the
things that will immediately create revenue, effectively “ de-risking” the project. And you want to do
that on the features level.†
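That ordering rule can be sketched in a few lines (the items, scores, and scoring formula below are our own invention for illustration): rank Backlog items so that high value at low effort and risk rises to the top:

```python
# Hypothetical Backlog items: (name, business_value, effort, risk), each scored 1-10.
backlog = [
    ("single sign-on",      9, 5, 3),
    ("dark mode",           3, 2, 1),
    ("payment integration", 10, 8, 6),
    ("export to CSV",       6, 2, 2),
]

def priority(item):
    name, value, effort, risk = item
    # The highest value for the lowest effort and risk floats to the top.
    return value / (effort + risk)

ranked = sorted(backlog, key=priority, reverse=True)
for name, *_ in ranked:
    print(name)
```

With these made-up scores, the modest but cheap “export to CSV” outranks the lucrative but heavy “payment integration,” which is exactly the de-risking effect Sutherland describes.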

The Pareto Principle (also known as the 80/20 rule), when applied to software development, implies
that 80 percent of the value is in 20 percent of the features. The 80 percent is considered wasted effort
and a challenge for development teams to manage. In the days when the Waterfall development process
was dominant, teams didn’t know what that 20 percent was until product development was completed.
One of the biggest challenges in Scrum is determining how you build that 20 percent first. However,

* Sutherland, J. (2014). SCRUM: The Art of Doing Twice the Work in Half the Time. p. 137.
† Ibid., p. 174.
Setting the Stage 33

it should be noted that security often doesn’t fit the 80/20 rule: the 20 percent of features that carry the most value are not necessarily the 20 percent most likely to be exploited. We discuss this further, as well as how security gets onto the “will do” list, later in the book. Unfortunately, there is often a misunderstanding that security is “nonfunctional.” Later in this book, we explain how security fits into the critical 20% and how security must be viewed as bringing value to a product.
There are three roles in Scrum:

Team Member: Part of the team doing the work. Although it is management’s responsibility to set the
strategic goals, in Scrum it is the team members’ responsibility to decide how they’re going to do the
work and how to reach those goals.

Scrum Master: The Scrum Master is responsible for the “how”: how fast the team is going, how much faster it can get there, and how the team can figure out how to do the work better. As team leader, the Scrum Master is responsible for the following:

• Act as a leader for the software development team
• Oversee design, implementation, QA, and validation of programming code and products
• Create project outlines and timelines and distribute responsibilities to team members
• Facilitate daily scrums, stand-ups, and meetings to monitor project progress and resolve any
issues the team may be experiencing
• Shape team behavior through excellent management via the agile method
• Perform reviews on software development team members
• Remove project obstacles and develop solutions with the team
• Ensure milestones are reached and deadlines are met throughout the project lifecycle
• Build strong relationships with stakeholders, application users, and program owners
• Document progress and communicate to upper management and stakeholders
• Take responsibility for successful product delivery *

Product Owner: The Product Owner is responsible for the “what”: defining what the work should be. They are the leader responsible for maximizing the value of the products created by a Scrum development team. This is a multifunctional role that includes being a business strategist, product designer, market analyst, customer liaison, and project manager. They are also the owner of the Backlog, principally what is in it and what order the items are in. It is important that the team trusts the Product Owner to prioritize the Backlog correctly.
As the team’s representative to the customer, it is ideal if the Product Owner has a stronger background in Product Marketing than in engineering, since they will be working with the customer a significant amount of the time. This requires time spent obtaining input from the people who will use, perhaps purchase, or otherwise take ownership of the use of the software. It includes customers’ feedback to the team at every Sprint as to whether the product is delivering value, and their feedback on what should be in the latest incremental release. This feedback will, of course, drive the content of the Backlog.
The Product Owner requires a different set of skills from the Scrum Master, as they are accountable for translating the team’s productivity into value. This requires that the Product Owner be not only a domain expert but also have enough knowledge of the market to know what will make a difference. A key requirement for their success is that they be empowered to make decisions. Without this

* Zip Recruiter. (2020). “Scrum Master Job Description Sample Template.” Portions of this list have been reproduced
from https://www.ziprecruiter.com/blog/scrum-master-job-description-sample-template/. This website uses data
from GeoNames.org. The data is licensed under the Creative Commons Attribution 3.0 License.

empowerment, they will likely fail, regardless of how great a background they have for the role. They must balance this power: although they are responsible for the team’s outcomes, they must let the team make its own decisions. To be respected and accountable as the Backlog owner, they must be available to the team on a regular basis to communicate what needs to be done and why. This will typically require daily communication with the Scrum team. Discussing the working increments lets the Product Owner see how people react to them and how much value they do or do not create, so that any needed change is implemented in the next Sprint. Innovation and adaptation are facilitated by this constant feedback cycle. As a result, value can be measured.
One key value of Scrum is giving the team the ability to manage change. This depends on the team’s ability to acknowledge uncertainty: its current ranking of tasks by value is only valid at that particular moment, because continuous change is inherent to product development. The Product Owner acts as the gatekeeper, identifying and prioritizing constantly changing market needs and ranking items by their value, thus avoiding the “everything is a top priority” syndrome and the associated continuous scope creep, which is a death knell to the Scrum process.
If you limit the number of changes, you will limit the costs associated with them. The increased development costs resulting from unplanned, and likely unnecessary, disruptive pre- and post-release change requests have led many companies that develop software to set up Change Control Boards. If managed correctly, Scrum will significantly reduce, and possibly eliminate, these types of change requests.
Later in the book, we describe how security is decided upon through Scrum, as well as how our fully integrated SDL works in the process. The Product Owner’s relationship with customers includes understanding and then advocating for stakeholder security needs. We have successfully put this into practice twice together, and it works!

1.6.1 DevOps and CI/CD


DevOps is more of an engineering cultural change than a process change as it prioritizes people over
process and process over tooling. Building a culture of trust, collaboration, and continuous improve­
ment enables the acceleration of the SDLC. As you may have seen, the Agile movement led a cultural
shift from command and control to team empowerment. DevOps moves that shift forward and beyond
coding to a holistic view of software development and operations.
DevOps also brings coding to Operations, breaking down the differences and silos between “creators” and “maintainers.” The technology changes just as much as, and alongside, the culture. Ultimately, this causes development and security to work together to the benefit of all. It is accomplished by eliminating bottlenecks, increasing engineer empowerment and collaboration, and reducing interpersonal communication issues, resulting in increased team productivity. DevOps puts
technology second to people and processes and focuses on engineers and how they can better work
together to produce great software. A key element of DevOps success is gaining respect and shared
understanding by unifying each department of the business and achieving a healthy working relation­
ship and collaboration with your partners and stakeholders. This removes roadblocks and increases
cooperation, resulting in improved speed of delivery for the development teams. Another positive result
is that the business will see fewer customer complaints, faster delivery of new features, and improved
reliability of existing services.
Emily Freeman provides an excellent and succinct overview of continuous integration, delivery, and
deployment in her book “DevOps”:

Continuous integration: Teams that practice continuous integration (CI) merge code changes
back into the master or development branch as often as possible. CI typically utilizes an
integration tool to validate the build and run automated tests against the new code. The process
of CI allows developers on a team to work on the same area of the codebase while keeping
changes minimal and avoiding massive merge conflicts.
Continuous delivery: Continuous delivery (CD) is a step up from CI in that developers treat
every change to the code as deliverable. However, in contrast to continuous deployment, a
release must be triggered by a human, and the change may not be immediately delivered to an
end user. Instead, deployments are automated and developers can merge and deploy their code
with a single button. By making small, frequently delivered iterations, the team ensures that
they can easily troubleshoot changes.
Continuous deployment: Continuous deployment takes continuous delivery even one step
further. Every change that passes the entire production release pipeline is deployed. That’s
right: The code is put directly into production. Continuous deployment eliminates human
intervention from the deployment process and requires a thoroughly automated test suite.*

Continuous deployment also usually employs A/B (even A/B/C . . .) testing, so that changes that don’t prove out, or for which there are multiple competing implementations, can be deployed and monitored, with the results fed back to pivot to the best implementation choice. As we shall see, security implementation can benefit greatly from an A/B/C testing strategy.
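One common way to implement such bucketing, sketched here with hypothetical variant names, is to hash a stable user identifier so that each user consistently lands in the same variant across deployments:

```python
import hashlib

VARIANTS = ["A", "B", "C"]  # competing implementations under test

def assign_variant(user_id: str, variants=VARIANTS) -> str:
    """Deterministically map a user to a variant using a stable hash."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always lands in the same bucket, deploy after deploy,
# so metrics gathered per variant remain comparable over time.
print(assign_variant("user-42"))
```

Because assignment is a pure function of the user ID, no assignment state needs to be stored, which is one reason hash-based bucketing is popular in continuous-deployment pipelines.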
DevOps fills a gap that existed in Agile by addressing the conflict between the developer’s technical
skill sets and specialties and the skill sets represented by operations specialists. Eliminating the practice
of developers handing off code to operations personnel to deploy and support helps in breaking down
these pre-existing silos.
Some even suggest that automation should replace operations specialists. The term used for this
is no operations (NoOps). Although some operational tasks can be done through automation, the
operations specialists still have skill sets that typical developers do not have—for example, software
infrastructure, which includes experience in system administration and hosting software. Ops special­
ists apply their specialized knowledge to the code that will operate software. Operations in DevOps
apply these skills through code in the new pipeline tools; rarely do Operations have to get into systems manually. Operations and core infrastructure knowledge is critical to the engineering team’s success; therefore, we don’t see the need for operations specialists going away; the rules of engagement, roles, and responsibilities just need to be modified and managed differently than in the past. It is important
to note that operations people are coders, too: In DevOps, many roles, perhaps most, produce code of
one kind or another.
NoOps focuses on automating everything related to operations to include the deployment pro­
cesses, monitoring, and application management. Its goal is to eliminate the need for developers to ever
have to interact with operations specialists, which is in direct opposition to DevOps, which promotes
seamless interaction and working relationships between developers and operations. NoOps focuses on
managing specific issues such as infrastructure and deployment pipelines through the use of automated
software solutions, whereas DevOps provides a holistic approach that is focused on people, processes,
and technology. Operations specialists are the most adept at automating daily required tasks as well as
architecting systems with complex infrastructure components.
Another way to mitigate the divide between developers and operations specialists is for each side to
provide high-level training to familiarize them with each other’s disciplines. They won’t be experts in
each other’s field but it will help the interaction between the two teams. For example, operations can
teach the developers about infrastructure, and, conversely, the developers can teach the operations team

* Freeman, E. (2019). DevOps for Dummies. Hoboken (NJ): John Wiley & Sons, pp. 141–143.

source control and the critical aspects of specific languages that may affect elements of operations and
associated architectures.
Software deployments are the most common action in software development that causes service disruptions and site outages. Before DevOps, developers typically deployed new code to release new features in a siloed environment, a result of the disconnect between the development and operations teams. In a non-DevOps environment, developers typically pass their code over to an operations team that does not understand the code’s infrastructure requirements or how the code will run on the targeted infrastructure. The assumption is that the operations team will deploy the code and ensure that it runs perfectly. Since operations teams are rewarded for optimizing uptime, availability, and reliability, animosity between the two teams arises when the code is poorly written, because the operations team will be blamed for something they had no control over. James and Brook cannot count the number of times that siloed developers or security folk have made faulty assumptions about infrastructure that operations cannot ever meet. Readers can perhaps imagine the ensuing chaos as each side defends what it has built. Hence the need for better communication and collaboration between the two teams, which is one of the key goals of DevOps.

1.6.2 Cloud Services


Some also believe that NoOps can be facilitated through cloud services. Cloud services can empower
developers to take more ownership of their components by abstracting complex operations architec­
ture in a way that makes that architecture easier for developers to work with, but it cannot completely
replace all of the operations staff. It does, however, reduce the size of the operations staff needed and
frees up the more experienced operations team members to work on more proactive solutions. When the development and operations teams are aligned and have common goals, they will collaborate well together while still functioning independently. As with many things in today’s tech world, the biggest and hardest challenges are associated with human behavior, not technology. DevOps attempts to solve these problems.

1.6.3 Platform Services


Beyond the hardware, cloud providers offer Platform as a Service (PaaS) environments of the kinds operations teams typically provide, such as development, quality assurance (QA), user-acceptance testing, staging, and production. The staging environment provides an area in which developers can test their code, without risking unintended effects from tests in a production environment and without worrying about the infrastructure it runs on, before the final release of the product is ready. This, of course, accelerates the development, testing, and release of code. PaaS recreates the infrastructure resources that the code will run on, such as servers, storage, databases, middleware, and networks, as well as tools that enable the development team to operate as an operations function. The PaaS ability to automate, control, and track code in both simulated and operational environments will ultimately and significantly change the structure and function of operations teams going forward.
Operations specialists and their architect partners will still be needed. The question is whether they will be part of the PaaS provider or internal to the development team’s own organization. At a minimum, there should be operations specialists co-located with the team that runs the SDLC. Later in the book,
we discuss how cloud providers take responsibility for the lower-stack security maintenance. We also
describe the demarcation of the line of responsibility between what security the cloud provider provides
and handles versus what the cloud consumer must handle.
Rather than involving operations after code release, DevOps involves operations as part of the development process, so that together developers and operations can properly plan and design infrastructure to support the code. This is reflected in the current DevOps literature when the phrase “moving left”
is used for functions such as operations, security, and quality assurance. This refers to the standard
graphic visualizing the DevOps cycle, as seen in Figure 1.20.*

1.6.4 Automation
For a team to be successful in automating any process, it must first understand the manual process for the issue it is trying to solve. If the manual process fails, you will only automate, and possibly amplify, that failure. You must also gauge whether automation is appropriate and the most cost-effective and efficient way to address the issue. The point is to keep the process continuous without slowing development, not to use tools just for the sake of using tools. Efficiency is imperative for successful software development, particularly when it comes to DevOps and CI/CD.
One critical element of DevOps and CI/CD that is distinctly different from the Waterfall approach to software development is the continuous use of an automated test suite throughout the entire development process. Continuous testing is better known in CI/CD than it is in DevOps.† This is addressed in greater detail later in this chapter.

1.6.5 General Testing and Quality Assurance


Due to the complexity and continuous changes that DevOps and continuous integration face in current
and future working environments, automating testing is mandatory, and manual testing will be rare.
Humans and budgets just can’t absorb the complexity and workloads required in current and future
working environments. As with the operations teams, automation and cloud services will change how
QA teams are used and where they are positioned in the organization. In traditional organizations,
QA teams owned the testing environment and conducted code reviews after the development team
had turned in their pre-release code to be tested in the QA testing environment. In today’s DevOps
environment, many QA teams no longer own the testing environment, and they are concerned that
automation will replace their function. If the QA function wants to survive, they must transition from
manual testing to becoming experts in automated testing and continuous integration. They must be
more like software development engineers and write automated tests and serve as experts in testing
practices, procedures, and approaches. These tests include performance and stability testing as well as
tests for load and security. Load tests will simulate a large number of users or data that will stress the
system, and security tests will eliminate vulnerabilities or, at least, reduce potential impact to survivable
levels in each release.
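As a toy sketch of such a load test (the simulated request, worker count, and percentile choice are placeholders, not a real load-testing tool), fire many concurrent “user” requests and check a latency percentile:

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_request() -> float:
    """Stand-in for a real call to the system under test; returns latency in seconds."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.001, 0.01))  # pretend work
    return time.perf_counter() - start

def run_load_test(users: int = 50) -> float:
    """Run `users` concurrent requests and return the 95th-percentile latency."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = sorted(pool.map(lambda _: simulated_request(), range(users)))
    return latencies[int(0.95 * (len(latencies) - 1))]

p95 = run_load_test()
print(f"p95 latency: {p95:.4f}s")
```

A real load test would drive the deployed system over the network and gate the release on an agreed latency budget; the structure, though, is the same: concurrent simulated users, recorded latencies, and a percentile threshold.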
Continuous testing starts in the development stage and continues throughout the development process, shortening your cycles and enabling development teams to iterate rapidly. An organization just starting to develop its DevOps capabilities should not underestimate the amount of work required to build a robust testing program and the associated security gates and pipelines. It should consult with others who have been successful in similar environments and then make a realistic plan to improve and adapt.
In DevOps, quality and security testing is everybody’s job—most importantly, that of the developers.
Developing good code includes the building in of quality and security.

* We address the implicit linearity of “shift left” thinking later as we discuss SDL activity triggers and timing.
† Many DevOps implementations, but not all, make use of CI/CD. Although not necessarily the same and
certainly not equivalent, modern software development can pick and choose what will be most effective for the
software that is to be built.

1.6.6 Security Testing


Security tests cover network security and system security as well as client-side and server-side application security. These include both tests that occur in QA and tests performed by the development teams as part of the SDL. Software patches (updates) are used to fix security issues discovered post-release. Minor
changes typically contain new features, whereas major updates may not be backward compatible and
include code that can break previous versions. Either way, it is imperative that these issues are discov­
ered and mitigated to the maximum extent possible during the pre-release development process. If a
software product is released and found to be untested and/or full of security issues, this can have a
serious and likely permanent impact on your reputation as well as consequences for security or privacy
noncompliance. If a product or application security team member is not in the “go/no go” product-
release meeting and not empowered to contribute to a release decision, your organization has failed to
take security seriously and is just a “paper tiger.” A more detailed overview of security testing within the
SDLC and SDL follows later in the book.
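As one small example of an automated security test (the required-header list below is illustrative, not exhaustive), a release gate might assert that every HTTP response carries a baseline set of security headers:

```python
# Baseline headers every response should carry (an illustrative subset).
REQUIRED_HEADERS = {
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
}

def missing_security_headers(response_headers: dict) -> set:
    """Return the baseline security headers absent from a response."""
    present = {name.title() for name in response_headers}  # headers are case-insensitive
    return {h for h in REQUIRED_HEADERS if h.title() not in present}

# A response captured from a staging deployment (hypothetical values).
staging_response = {
    "Content-Type": "text/html",
    "Strict-Transport-Security": "max-age=31536000",
    "X-Content-Type-Options": "nosniff",
}
print(missing_security_headers(staging_response))  # Content-Security-Policy is missing
```

Checks like this run on every build, so a regression that drops a security header fails the pipeline long before release, rather than surfacing in a post-release audit.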

1.6.7 DevSecOps
DevSecOps is about everyone in the SDLC being responsible for security, with the goal of bringing
operations and development functions together with security functions by embedding security in every
part of the development process. This minimizes vulnerabilities and brings security closer to business
objectives, and it includes the automation of core security tasks by embedding security controls and
processes early in the DevOps workflow. Automation also reduces the need for security architects to
manually configure security consoles.
Security’s traditional reputation for bolting itself on at the end of the process and being a roadblock to innovation and on-time delivery has become even more challenging to overcome with the advent of DevOps and CI/CD. Developers don’t like discovering that their code is insecure and needs to be fixed only after they’ve completed it. It’s important to assess and respond to threats before they become security incidents. That is why there is a lot of talk about “shifting security left,”* which ensures that security is built into the DevOps development lifecycle early and not at the end. Security in DevOps should be part of the process—that is, DevSecOps—and arguably not even a separate term, to signify that security is truly built in.
Integrating security into DevOps to deliver DevSecOps requires new mindsets, processes, and tools. Making security truly everybody’s responsibility requires tying it to your governance, risk, and security policies by building those policies into your DevOps process. One of the goals of this book and the generic SDL is to assist in the success of DevSecOps by reducing or even eliminating friction between security tasks and development. Ultimately, combining development, security, and operations teams will shorten feedback loops, reduce incidents, and improve security through shared responsibility. To discover and mitigate security threats at every point in the development life cycle, a member (or members) of the security team should be given a seat at the table at each stage of the DevOps process.
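As a minimal illustration of embedding such a control (the severity scale, threshold, and scanner output below are hypothetical), a CI stage can block a release when a scan reports findings at or above a chosen severity:

```python
# A sketch of an automated security gate for a CI pipeline. A real gate
# would consume the output of actual SAST/DAST or dependency-scanning tools.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate_passes(findings, block_at="high") -> bool:
    """Fail the build if any finding meets or exceeds the blocking severity."""
    threshold = SEVERITY_RANK[block_at]
    return all(SEVERITY_RANK[f["severity"]] < threshold for f in findings)

scan_results = [  # hypothetical scanner output for one build
    {"id": "SQLI-1", "severity": "high"},
    {"id": "HARDCODED-KEY-2", "severity": "medium"},
]
print("release" if gate_passes(scan_results) else "block")
```

Encoding the policy as code in the pipeline is what makes security “everybody’s responsibility” in practice: the threshold is visible, versioned, and enforced identically on every change.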

1.6.8 Education
Ongoing education is another key component of the process for teams that produce great software.
This is particularly important for DevOps, where interdisciplinary knowledge about software, tools,

* We will comment on the use of “shift left” as a conceptual paradigm later in the book.

hardware, networks, and other technology and trends is key for success. In addition to developing code,
your developers should be a knowledge resource that, if nurtured, can provide years of valuable advice
and guidance to both your engineering team and the company they work for. Training can include
outside reading, discussions with other developers, going to conferences, and taking courses.
However, a single break in focus can sideline a developer for hours. The challenge for an engineering manager is the balancing act of not unnecessarily taking away the time team members need for developing software, which requires intensely focused work, while ensuring that they continue their education at the appropriate times. It is also important to support your developers by blocking off part of the budget for continuing education.
More about education and awareness follows later in this book.

1.6.9 Architects and Principal Engineers


It is important to give developers a path to promotion while retaining great talent. Unfortunately, there are typically only two promotion paths for developers within an engineering organization: management or engineering. In many cases, this leaves engineers who want a raise or a new title no choice but to pursue a management position. There are exceptions, but this is typically a disaster for engineers who lack the people, communication, business, and other skills required to be a successful manager. Even worse, in many of these cases they find themselves in a job they really don’t want, which ultimately creates performance issues and diminishes the morale of the team working for them.
The career path for development engineers should lead toward becoming an Architect or Principal Engineer, in the senior ranks of the organization, after they reach the highest grade available on the development team. Architects and Principal Engineers influence how the system is structured, which features are prioritized, and how code is standardized. They ensure reusability and shape how the engineering team will tackle the work ahead of it. These positions require knowledge in many areas, along with the experience to know what works and what doesn’t. Most important to development teams, they bring their critical, extensive experience to architectural and code reviews. In addition, they
are able to interact with the operations specialists and others to assess issues before the code is integrated
into the larger codebase as well as what may occur in the operational environments in which the soft­
ware is to be deployed. All will benefit from this shared knowledge. We offer both roles—Architect and
Principal Engineer—as distinct. Architects must have some aptitude for abstraction and structure and
have a fairly high tolerance for interaction with others, sometimes involving conflict. But we also need
a growth path for those who prefer to stick to technology, and who may have a lower tolerance for inter­
action. Those with a strong engineering focus must have a way to grow, and a promotion path, or they
will either lose motivation or, worse, they will leave. Technical security leader roles may have “security” added to their title for clarification: “Lead Security Architect” and “Principal Security Engineer.” We will detail these security leader roles in Chapter 2.

1.6.10 Pulling It All Together Using Visual Analogies


1.6.10.1 Tetris Analogy for Agile, Scrum, and CI/CD

The Agile SDLC model is a combination of iterative and incremental process models that focus on process
adaptability and customer satisfaction through the rapid delivery of a working software product. Agile
methods break the product into small incremental builds, which are provided in iterations. Each
iteration typically lasts from about one to three weeks. Every iteration involves cross-functional teams
working simultaneously on various areas, such as:
40 Building In Security at Agile Speed

• Planning
• Requirements Analysis
• Design
• Coding
• Unit Testing
• Acceptance Testing
• Security

At the end of the iteration, a working product is displayed to the customer and important stakeholders.
The first analogy is the use of the timeless game of Tetris™ as a visual example of the operational
interactions that take place when using Agile, Scrum, and CI/CD development practices. First, a
description of the electronic Tetris board and how it is played:

The game is played on a board that has 20 rows and 10 columns. The object of Tetris is to last
as long as possible before the screen fills up with tetrominoes (Figure 1.18). To do this, you must
assemble the tetrominoes to form one or more rows of blocks that span the entire playing field,
called a line clear. When you do so, the row will disappear, causing the ones above it to settle.

First, each of the tetrominoes shown in Figure 1.15 is assigned a cross-functional team name, as
depicted in Figure 1.16:

Figure 1.15 Tetris™ Tetrominoes.

Figure 1.16 Tetrominoes Assigned Cross-Functional Team Names.

Next, the tetrominoes are used to show the iterations that occur in the Sprints within the Scrum
development process (Figure 1.17).
Pulling it all together by looking through Figures 1.15–1.18, you should see the following:

• Most of the major decisions are made up front, covering all components (analogous to
Agile/Scrum planning and design sessions). It should be noted that some Scrum teams prefer to
design as work items during Sprints; not “all” or “most” design is necessarily done before a cycle
of Sprints. It depends, and is sometimes up to the team.
• Implementation of each component/requirement in each Sprint is maximized for speed and changed
rapidly, when needed, to optimize the fit necessary to complete one or more lines (analogous to
the Agile SDLC/Scrum/Sprints and CI/CD).


Figure 1.17 Tetrominoes in Sprint Iterations within the Scrum Development Process.

Figure 1.18 Tetris™ as a Visual Analogy of the Complete Agile, Scrum, and CI/CD Development Process.

• Each completed row disappears and total focus is on remaining lines (analogous to Sprints and
CI/CD).
• Note that security (the blue tetromino) is cycled through each area/step of the development iteration,
which provides the ability to build security in at agile speed.
• The overall Tetris game is analogous to CI/CD.
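The line-clear mechanic that the analogy rests on can be sketched in a few lines of code; the board representation, names, and sample rows below are purely illustrative:

```python
# A board is a list of rows, top to bottom; a cell is 1 (filled) or 0 (empty).
BOARD_COLUMNS = 10

def clear_lines(board):
    """Remove every full row (a "line clear") and let the rows above settle.

    Returns the settled board, padded with fresh empty rows on top so the
    board keeps its height, along with the number of rows cleared.
    """
    remaining = [row for row in board if not all(row)]
    cleared = len(board) - len(remaining)
    empty_rows = [[0] * BOARD_COLUMNS for _ in range(cleared)]
    return empty_rows + remaining, cleared

board = [
    [0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
    [1] * BOARD_COLUMNS,               # a complete line: it will be cleared
    [1, 1, 0, 1, 1, 1, 1, 1, 1, 1],
]
settled, cleared = clear_lines(board)
# cleared == 1; the remaining partial rows settle beneath a new empty row
```

As with a completed Sprint increment, a cleared line leaves the board: it no longer demands attention, and the team’s focus moves entirely to the lines that remain.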

1.6.10.2 Nesting Doll Analogy for Solution Development

Solutions in software development usually comprise a system of products that interact with each other
to solve a specific customer business problem. Products and product platforms are the building blocks
of the solution, which may also require a combination of hardware, software, and services. A suite of
products that are designed to work together and have been tested many times gives customers confidence
that their business outcomes will be achieved. This requires a common understanding that an SDLC and SDL
are to be used across all the products and platforms that make up the solution. This will also

Figure 1.19 Nesting Doll Analogy Visual for Product Solutions.


Setting the Stage 43

require seasoned architects and/or principal engineers with multidisciplinary experience in software,
hardware, networks, and infrastructures to properly assess all of the contingencies that exist
in these environments.
In the nesting doll analogy (Figure 1.19), think of each nesting shell as a secured application or
product that can be secured individually or as a collective nest of dolls forming a solution as part of an
integrated approach to security. Each application or product will require a completed “full or partial
SDLC” (Agile/Scrum/Sprints) that can be stand-alone or nested together as a solution.

1.6.10.3 DevOps Operations “Moving Down” Rather Than Moving Left: Kettlebell Analogy

As mentioned earlier in this chapter, the phrase “moving left” is used in regard to teams such as operations,
security, and QA. The idea simply refers to moving the work completed by these teams leftward
in the development pipeline (that is, earlier in the process), enabled by automation, cloud, and, specifically,
PaaS services. This means that the services shown in the typical figure eight graphic for the DevOps
process, on the left side of Figure 1.20, are to be moved to the left. Although not all components
of these functions will move, a significant amount will be provided by automation and cloud
services. Essentially, the center of mass of the figure eight diagram will move to the left.

Figure 1.20 Traditional DevOps Graphic versus the Kettlebell.

Rather than using the figure eight diagram, we like to use the kettlebell analogy (Figures 1.20 and
1.21). “With a thick handle and off-set center of mass, the design of the kettlebell is unique and carries
with it unique benefits and also some challenges. Traditional dumbbells and barbells tend to center the
weight with your hand, but a kettlebell’s center of mass is about six to eight inches from the handle, and
that changes depending on what exercise you are performing.”* Essentially, with the advent of DevOps,
the center of mass or “heavy lifting” will occur on the development side of the house. Therefore, we like
the analogy of a kettlebell rather than the figure eight diagram.

* Jones, B. (2020). “Understanding Center of Mass in Kettlebell Training.” Retrieved from https://www.strong
first.com/understanding-the-center-of-mass-in-kettlebell-training/

Figure 1.21 Kettlebell Visual Analogy for DevOps.

1.6.11 DevOps Best Practices


Many sources list best practices for DevOps. We found the most succinct and most
relevant to our discussion to be those that Cesar Abeid of LiquidPlanner® has identified in his online
paper titled “8 Best Practices for Managing DevOps Projects,” seven of which follow:

Minimum viable product: A minimum viable product focuses on high-return and low-risk
projects, and can be measured and improved upon with each iteration. Putting this into action
is a way to create fast, small products that can be deployed quickly and measured for feedback.
Use the right tools: Because DevOps is all about collaboration, communication, the removal
of silos and unnecessary overhead, it’s extremely important to use tools that will facilitate
these principles. Success in DevOps is typically very much connected to the tools used. When
considering tools for your DevOps project, look for solutions that will simplify aspects of
configuration management, application deployment, monitoring, and version control.
Eliminate silos: DevOps has to do with flow and integration, which means development and
operations move quickly and in a horizontal manner. Silos, on the other hand, are vertical and
walled in.
Reduce handoffs: DevOps projects, on the other hand, see projects as a continuous flow
from beginning to end. By minimizing handoffs, discrete steps tend to disappear, facilitating a
DevOps culture.
Create real-time project visibility: In order to maximize flow and integration, everyone who’s
part of a DevOps system needs to know where the project stands. Creating real-time project
visibility can be done by using the right tools and encouraging all involved to engage in a
centralized way.
Reduce overhead: Work and resources saved from processing overhead can be redirected to
increase productivity and collaboration.

Manage change collaboratively: Effective change management can be a struggle for any
project. Having a systematic way to approach change management is critical.*

1.6.12 Optimizing Your Team Size


In 1975, Fred Brooks published a book titled The Mythical Man-Month, in which he coined the phrase
“adding manpower to a late software project makes it later,” also known as “Brooks’s Law.”† Being late is
unacceptable and costly when it comes to software development.
Speed and predictable software delivery times are absolutely imperative to be competitive. This
requires the right process to optimize the time and quality of delivery for a software product, discussed
earlier in this chapter, as well as an optimized team size. Some key concepts to consider in optimizing
your team size are below:

Groups made up of three to seven people required about 25 percent of the effort of groups of nine to
twenty to get the same amount of work done. This result recurred over hundreds and hundreds of
projects. That very large groups do less seems to be an ironclad rule of human nature.‡

In 2001, Nelson Cowan of the University of Missouri wondered whether that magic rule of seven
was really true and conducted a wide survey of all the new research on the topic. It turns out that the
number of items one can retain in short-term memory isn’t seven. It’s four.§

So, there’s a hardwired limit to what our brain can hold at any one time. Which leads us back to
Brooks. When he tried to figure out why adding more people to a project made it take longer, he
discovered two reasons. The first is the time it takes to bring people up to speed. As you’d expect,
bringing a new person up to speed slows down everyone else. The second reason has to do not only
with how we think but, quite literally, with what our brains are capable of thinking. The number of
communication channels increases dramatically with the number of people, and our brains just can’t
handle it. If you want to calculate the impact of group size, you take the number of people on a team,
multiply by “that number minus one,” and divide by two. Communication channels = n (n − 1) / 2.
So, for example, if you have five people, you have ten channels. Six people, fifteen channels. Seven,
twenty-one. Eight, twenty-eight. Nine, thirty-six. Ten, forty-five. Our brains simply can’t keep up
with that many people at once. We don’t know what everyone is doing. And we slow down as we try
to figure it out.¶

As you can see from the above, optimizing team size matters, especially when it comes to the complex and
dynamic nature of software development. Team size optimization has become even more important
since the advent of Agile, Scrum, DevOps, and CI/CD and their influence on software development.
Typically, seven team members are ideal, plus or minus two, depending on the complexity and any
additional special skills needed for the project. The team must have every skill needed to complete a
project as well as the freedom and autonomy to make decisions on how they take action and improvise.
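The communication-channel formula quoted above is simple to put into code; the helper below is our own illustrative sketch, not drawn from any cited source:

```python
def communication_channels(team_size: int) -> int:
    """Pairwise communication channels in a team: n(n - 1) / 2."""
    return team_size * (team_size - 1) // 2

for n in range(5, 11):
    print(n, "people:", communication_channels(n), "channels")
# 5 people yield 10 channels; 6 yield 15; 7 yield 21; 8 yield 28;
# 9 yield 36; 10 yield 45 -- matching the figures quoted from Sutherland.
```

The quadratic growth is the point: moving from seven to ten team members more than doubles the channels each member must track, which is why seven, plus or minus two, is a practical ceiling.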

* Abeid, C. (2020). “8 Best Practices for Managing DevOps Projects.” Retrieved from https://www.liquidplanner
.com/blog/8-best-practices-for-managing-devops-projects/
† Brooks, F. (1995 [1975]). The Mythical Man-Month. Essays on Software Engineering. New York (NY): Addison-
Wesley Professional.
‡ Sutherland, J. (2014). SCRUM: The Art of Doing Twice the Work in Half the Time. p. 59.
§ Cowan, N. (2001). “The Magical Number 4 in Short-Term Memory: A Reconsideration of Mental Storage
Capacity.” Behavioral and Brain Sciences, vol. 24, pp. 87–185.
¶ Sutherland, J. (2014). SCRUM: The Art of Doing Twice the Work in Half the Time. p. 60.

1.7 Chapter Summary

Hopefully, we have set the stage for a generic security development lifecycle (SDL). In this chapter,
we’ve laid our foundation: Software development has shifted paradigms dramatically. Agile, CI/CD,
DevOps, and cloud use have each contributed to the shift—changing culture, process, and technology.
Security and, especially, software security practices haven’t entirely kept pace with the changes,
unfortunately.
We reviewed the current background of constant attack, with the repeated major compromises with
which our digital lives exist. Software weaknesses are what attackers leverage; this will be obvious to
most readers. One of our most important protective measures will be to reduce weaknesses to survivable
levels—the main goal of an SDL. We presented five software security principles against which to
measure the success of an SDL and which we have used to align and guide software security practices.
We’ve outlined the main goals of a generic SDL, and why there continue to be problems with SDLs
that are tightly coupled to a software development life cycle (SDLC) and which assume SDLC linearity;
a linear, waterfall SDLC has become the less-common case. Agile, continuous practices, and DevOps
exacerbate the problems of a linear SDL. The way forward, as we discovered in our SDL research at
Intel, is to eliminate SDLC dependencies in the SDL so that it will become workable for all SDLC
timings and orderings. This becomes particularly true, we hope you’ll agree, after our review of Agile/
Scrum practices.
The authors, among other practitioners, have now implemented, led, lobbied, and stumped for a
developer-centric view of software security, which we might call “the SDLC security approach.” We presented
to you the Developer-centric Security Manifesto as a starting point. Empowering developers to
own security tasks, while at the same time leveraging skilled security practitioners to guide and support
that ownership, will be the most effective and scalable approach. We have tasted the sweet results from
this approach in our programs; we have seen the reductions in released issues that such an approach
delivers. A key tenet of an SDLC approach to software security is an SDL that enables developers to
identify and build security as a natural, primary task of building software. To accomplish integral security,
the SDL must also reduce friction against the execution of the SDL’s tasks.
But the SDL is not the only ingredient for success. Ultimately, software security, as we have continued
to note, is a people problem. Building and running a successful program will take investment,
time, energy, and deft management and organization skills. Thus, the management of software security
comprises a key element of this book.
