Security Without Obscurity
Public Key Infrastructure (PKI) is an operational ecosystem that employs key management, cryptography, information technology (IT), information security (cybersecurity), policy and practices, legal matters (law, regulatory, contractual, privacy), and business rules (processes and procedures). A properly managed PKI requires all of these disparate disciplines to function together – coherently, efficiently, effectually, and successfully. Clearly defined roles and responsibilities, separation of duties, documentation, and communications are critical aspects of a successful operation. PKI is not just about certificates; rather, it can be the technical foundation for the elusive "crypto-agility," which is the ability to manage cryptographic transitions. The second quantum revolution has begun, quantum computers are coming, and post-quantum cryptography (PQC) transitions will become PKI operations' business as usual.
Jeff Stapleton is the author of the Security Without Obscurity five-book series (CRC Press). He has
over 30 years’ cybersecurity experience, including cryptography, key management, PKI, biometrics,
and authentication. Jeff has participated in developing dozens of ISO, ANSI, and X9 security
standards for the financial services industry. He has been an architect, assessor, auditor, author, and
subject matter expert. His 30-year career includes Citicorp, MasterCard, RSA Security, KPMG,
Innové, USAF Crypto Modernization Program Office, Cryptographic Assurance Services (CAS),
Bank of America, and Wells Fargo Bank. He has worked with most of the payment brands, including
MasterCard, Visa, American Express, and Discover. His areas of expertise include payment systems,
cryptography, PKI, PQC, key management, biometrics, IAM, privacy, and zero trust architecture
(ZTA). Jeff holds Bachelor of Science and Master of Science degrees in computer science from
the University of Missouri. He was an instructor at Washington University (St. Louis) and was an
adjunct professor at the University of Texas at San Antonio (UTSA).
W. Clay Epstein currently operates a cybersecurity consulting company, Steintech LLC, specializing in cybersecurity, encryption technologies, PKI, and digital certificates. He has international
experience developing and managing public key infrastructures primarily for the financial services
industry. Clay has worked as an independent Cybersecurity and PKI consultant for the past 11 years.
Previously, Clay was the VP and Technical Manager at Bank of America responsible for the Bank’s
global Public Key Infrastructure and Cryptography Engineering Group. Prior to Bank of America,
Clay was CIO and Head of Operations at Venafi, a certificate and encryption key management
company. Prior to Venafi, Clay was Senior Vice President of Product and Technology at Identrus, a
global identity management network based on PKI for international financial institutions. Previously,
Clay also served as Head of eCommerce Technologies for Australia and New Zealand Banking
Group (ANZ) and was the CTO for Digital Signature Trust Co. Clay holds a Bachelor of Science in
Computer Science degree from the University of Utah and a Master of Business Administration in
Management Information Systems degree from Westminster College.
Security Without Obscurity
A Guide to PKI Operations
Second Edition
Bibliography
B.1 ASC X9 Financial Services
B.2 European Telecommunication Standards Institute (ETSI)
B.3 Internet Engineering Task Force (IETF)
B.4 International Organization for Standardization (ISO)
B.5 National Institute of Standards and Technology (NIST)
B.6 Public Key Cryptography Standards (PKCS)
B.7 Miscellaneous
Index
Preface
Most of the books on public key infrastructure (PKI) seem to focus on asymmetric cryptography,
X.509 certificates, certificate authority (CA) hierarchies, or certificate policy (CP), and certificate
practice statements (CPS). While algorithms, certificates, and theoretical policies and practices are
all excellent discussions, the real-world issues for operating a commercial or private CA can be
overwhelming. Pragmatically, a PKI is an operational system that employs asymmetric cryptography, information technology, operating rules, physical and logical security, and legal matters. Much like any technology, cryptography, in general, undergoes changes: sometimes evolutionary, sometimes dramatic, and sometimes unnoticed. Any of these can cause a major impact, which can
have an adverse effect on a PKI’s operational stability. Business requirements can also change such
that old rules must evolve to newer rules, or current rules must devolve to address legal issues such
as lawsuits, regulatory amendments, or contractual relationships. This book provides a no-nonsense
approach and realistic guide for operating a PKI system.
In addition to discussions on PKI best practices, this book also contains warnings against PKI
bad practices. Scattered throughout the book are anonymous case studies identifying good or bad
practices. These highlighted bad practices are based on real-world scenarios from the authors' experiences. Often bad things are done with good intentions but cause bigger problems than the original
one being solved.
As with most new technologies, PKI has survived its period of inflated expectations, struggled
through its disappointment phase, and eventually gained widespread industry adoption. Today, PKI,
as a cryptographic technology, is embedded in hardware, firmware, and software throughout an
enterprise in almost every infrastructure or application environment. However, it now struggles with
apathetic mismanagement and new vulnerabilities. Moore’s law continues to erode cryptographic
strengths, and, in response, keys continue to get larger and protocols get more complicated. Furthermore, attackers are becoming more sophisticated, poking holes in cryptographic protocols, which demands continual reassessments and improvements. Consequently, managing PKI systems has become problematic. The authors offer a combined knowledge of over 50 years in developing PKI-related policies, standards, practices, procedures, and audits with in-depth experience in
designing and operating various commercial and private PKI systems.
Errata
Our sincere thanks to readers who spotted a few errors in previous books, and recognition to our reviewers for this book: Ralph S. Poore, Bill Poletti, and Richard M. Borden. Also, special appreciation to Ralph, who has steadfastly reviewed multiple drafts of every book and provided excellent comments, not only on the mundane typos and punctuation but, far more importantly, brilliant discourse on a wide variety of information security topics including my favorites: cryptography and key management. Without him, there would be far more errors.
Hopefully, there are no errors in this book: A Guide to PKI Operations, Second Edition.
1 Introduction
Public key infrastructure (PKI) is an operational system that employs key management, cryptography, information technology (IT), information security (cybersecurity), legal matters, and business rules as shown in the Venn diagram in Figure 1.1 called PKI Cryptonomics.1 While certainly
there are business, legal, security, technology, and cryptography areas within any organization that
function outside of a PKI, the fact is that a properly managed PKI requires all of these disparate
disciplines to function effectively. The lack of one or more of these factors can undermine a PKI’s
effectiveness and efficiency. Furthermore, all of these disciplines must interact and complement
each other within a PKI framework.
Cryptography includes asymmetric and symmetric encryption algorithms, digital signature algorithms, hash algorithms, key lifecycle management controls, cryptographic modules, and security protocols. Asymmetric cryptography provides digital signatures, encryption, and key management capabilities to establish symmetric keys. Symmetric cryptography provides legacy key management, encryption, and Message Authentication Codes (MAC). Legacy key management suffers from a fundamental problem, how to securely establish an initial key, which asymmetric cryptography overcomes. Hash algorithms enable keyed Hash MAC (HMAC) and digital signatures. Key
lifecycle management includes generation, distribution, installation, usage, backup and recovery,
expiration, revocation, termination, and archive phases. Cryptographic modules include software
libraries and specialized hardware. Security protocols establish cryptographic keys and enable data
protection schemes.
IT involves mainframes, midrange, personal computers, mobile devices, local area networks,
wide area networks, the Internet, applications, browsers, operating systems, and network devices.
Application server platforms include hardware, firmware, operating systems, and software applications. End-user platforms include hardware, firmware, operating systems, applications, browsers, widgets, and applets. Networks include switches, routers, firewalls, network appliances, and network protocols.
Information security, also called cybersecurity, involves entity identification, authentication,
authorization, accountability, access controls, activity monitoring, event logging, information analytics, incident response, and security forensics. Logging tracks event information such as who,
what, where, when, how, and hopefully why. Many aspects of security rely on cryptography and key
management for data confidentiality, data integrity, and data authenticity. For details, refer to the
book Security Without Obscurity: A Guide to Confidentiality, Integrity, and Authentication [B.7.2].
Legal matters address privacy, intellectual property, representations, warranties, disclaimers,
liabilities, indemnities, terms, termination, notices, dispute resolution, national and international
governing law, and compliance. While many of these issues may be incorporated into the CPS or
related agreements such as the CSA or RPA for public consumption, others might be addressed in
separate documents such as contractual master agreements, statements of work, or an organization’s
standard terms and conditions. The application environments, third-party business relationships,
and geopolitical locations often influence the various legal implications.
Business rules involve roles and responsibilities for the registration authority (RA), the certificate authority (CA), subscribers (also known as key owners), relying parties, applications, fees, revenues, risk management, and fraud prevention. RA services include certificate requests, revocation requests, and status queries. CA services include certificate issuance, certificate revocation list (CRL) issuance, and Online Certificate Status Protocol (OCSP) updates. The CA business rules are defined in a variety of documents including a certificate practice statement (CPS), certificate subscriber agreement (CSA), and relying party agreement (RPA). Furthermore, the CA will typically provide some documents for public consumption, some proprietary with disclosure under a nondisclosure agreement (NDA) or contract, while others are confidential for internal use only.
PKI is also a subset of the broader field of cryptography, which in turn is a subset of general
mathematics, also shown in Figure 1.1. Cryptography has its own unique set of risks, rules, and
mathematics based on number theory, which essentially creates its own set of operating laws, and so
can be called cryptonomics. PKI Cryptonomics grows as more applications, cloud services, and microservices incorporating cryptography become available online, including mobile apps and the Internet of Things (IoT). Cryptonomics also suffers from a type of inflation as Moore's law continues to increase computational and storage capabilities [B.7.1]. NIST describes "cryptographic strength" as the amount of work either to break a cryptographic algorithm or to determine a key, which is expressed as the number of bits relative to the key size [B.5.19]. Simplistically, as computers get faster, keys must get larger. Cryptonomics further demonstrates lifecycle differences for various symmetric, asymmetric, and now post-quantum cryptography (PQC) algorithms. Accordingly, PKI Cryptonomics embodies all of the general cryptography characteristics plus the additional managerial traits and issues from the IT, business, and legal domains.
For the purposes of this book, cryptographic agility (Crypto-Agility) is defined as the capability of a PKI to easily switch between cryptographic algorithms, encryption key strengths, and certificate contents in response to changing system and enterprise needs. See Section 7.7 "Crypto-Agility," for details.
The X9.24–2 [B.1.2] defines Crypto-Agility as “a practice used to ensure a rapid response to a
cryptographic threat. The goal of Crypto-Agility is to adapt to an alternative cryptographic standard
quickly without making any major changes to infrastructure.”
This is a good working definition that focuses on a PKI's "infrastructure." The PKI infrastructure is an important part but not the only part of an overall PKI operation. Other critical components of Crypto-Agility include the PMA's approval process and Legal's input. Another essential component is the risk analysis of any changes that a new cryptographic algorithm may introduce. All of these components, together with a responsive infrastructure, are required for a PKI to operate in a crypto-agile environment.
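To make this concrete, the following minimal sketch in Python (ours, with hypothetical names; it is not drawn from X9.24-2 or from later chapters of this book) shows one common crypto-agility pattern: applications request algorithms through a central registry keyed by policy name, so retiring an algorithm, say during a PQC transition, becomes a configuration change rather than a code change.

# Minimal crypto-agility sketch (hypothetical names): applications ask for an
# algorithm by policy name, so swapping algorithms is a registry update.
import hashlib
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class AlgorithmPolicy:
    name: str          # policy label, e.g., "digest-current"
    algorithm: str     # concrete algorithm identifier
    factory: Callable  # constructor for the underlying primitive

REGISTRY = {
    "digest-current": AlgorithmPolicy("digest-current", "SHA-256", hashlib.sha256),
    "digest-next": AlgorithmPolicy("digest-next", "SHA3-256", hashlib.sha3_256),
}

def digest(policy_name: str, data: bytes) -> bytes:
    """Resolve the policy at call time so a registry change retires an algorithm everywhere."""
    policy = REGISTRY[policy_name]
    return policy.factory(data).digest()

# Transition: repoint "digest-current" at the next algorithm; callers are unchanged.
REGISTRY["digest-current"] = REGISTRY["digest-next"]
print(digest("digest-current", b"example").hex())

The same indirection applies equally to signature algorithms, key strengths, and certificate profiles, which is why the PMA approval, legal review, and risk analysis noted above govern the registry rather than each individual application.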
Also, for this book, a Cloud PKI is defined as either the migration of a PKI-enabled application
or the relocation of the PKI itself to a cloud environment. See Section 7.8 “Cloud PKI,” for further
discussion. Note that a Cloud PKI can be operated with in-house resources or in conjunction with
cloud provider resources.
This book provides realistic guidelines for operating and managing PKI systems, including policy, standards, practices, and procedures. Various PKI roles and responsibilities are identified, security and operational considerations are discussed, incident management is reviewed, and aspects of governance, compliance, and risks (GCR) are presented. This book also both discusses PKI best practices and provides warnings against PKI bad practices based on anonymous real-world examples drawn from the authors' personal experiences.
The scope of this book includes both X.509 certificates and other PKI credentials, as well as various certificate authority (CA) and registration authority (RA) hierarchies, including pyramid architectures, hub and spoke topologies, bridge models, and cross-certification schemes. Furthermore, this book addresses PKI policy, standards, practices, and procedures, including certificate policy and certificate practice statements (CP and CPSs).
The intended audience for this book includes technicians, administrators, managers, auditors,
corporate lawyers, security professionals, and senior-level management, including chief information
officers (CIOs), chief technology officers (CTOs), and chief information security officers (CISOs).
Each of these roles can benefit from this book in a variety of ways. A common benefit to the reader
is general education and awareness of PKI systems, cryptography, and key management issues.
• Senior managers are responsible for making strategic decisions and supervising managers. The more senior managers know about the underlying technologies supported by their
subordinate managers and their respective teams, the better they can manage the overall
business. Making unknowledgeable strategic decisions is a bad practice that can adversely
affect staffing, systems, and profitability.
• Managers are responsible for making tactical decisions and supervising personnel including technicians, administrators, auditors, and security professionals. The more managers
know about the underlying technologies supported by their teams, the better they can
manage their staff. Making unknowledgeable tactical decisions is a bad practice that can
adversely affect staffing, systems, and productivity.
• Administrators are responsible for installing, configuring, running, monitoring, and
eventually decommissioning network or system components inclusive of hardware
devices, firmware modules, and software elements. Extensive knowledge of PKI systems
is needed by administrators to properly manage cryptographic features, functionality, and
associated cryptographic keys.
• Auditors are responsible for verifying compliance with information and security policy,
standards, practices, and procedures. Risk can be better identified and managed when the
evaluator has a solid understanding of general cryptography, specifically PKI, and related
key management. Furthermore, compliance includes not just the organization’s internal
security requirements but also external influences such as regulations, international and
national laws, and contractual responsibilities.
• Security professionals are responsible for assisting other teams and developing security
policy, standards, practices, and procedures. However, security professionals are expected
to have an overall knowledge base with expertise typically in only a few areas, so this
book can be used as a reference guide and refresher material. For example, the (ISC)2 CISSP® program is organized into eight domains with cryptography included in one of them; cryptography and key management is no longer its own domain area, and the CISSP certification only requires emphasis in two of the domains.
• Corporate lawyers are responsible for legal practices and interpretations within an organization. Consequently, having a basic understanding of PKI enhances their ability to advise on warranty claims, disclaimers, limitations, liabilities, and indemnifications associated with the issuance of and reliance on public key certificates.
• Subscribers are responsible for protecting their cryptographic keys and using them properly in accordance with CA and RA policies, standards, practices, agreements, and procedures. Accordingly, having both a fundamental understanding of all PKI roles and
responsibilities and specific awareness of the subscriber duties is important.
• Relying parties are responsible for validating and using digital certificates and their corresponding certificate chain in accordance with CA policy, standards, practices, agreements, and procedures. Hence, having both a fundamental understanding of all PKI roles
and responsibilities and knowledge of the relying party obligations is paramount.
In addition to operational responsibilities, senior managers and managers need to address risk
and compliance issues. Compliance might focus primarily on regulatory or other federal, state, or
local laws, but contractual obligations must also be addressed. Relative risks also need to be evaluated for determining acceptable losses due to fraud, identity theft, data theft, potential legal actions,
or other cost factors. Risks might be detectable or preventable, compensating controls might help
mitigate issues, or backend controls might lessen the overall impacts.
The chapters in this book are organized as a learning course and as a reference guide. As
a learning course approach, each chapter builds on the information provided in the previous
chapters. The reader can progress from chapter to chapter to build their PKI knowledge. From
a reference guide perspective, the chapters are structured by major topics for easy lookup and
refresher material.
• Chapter 1, “Introduction,” provides an overview of the book and introduces security basics
and associated standards organizations as background for Chapter 2, “Cryptography
Basics,” and Chapter 3, “PKI Building Blocks.”
• Chapter 2, "Cryptography Basics," provides details on symmetric and asymmetric cryptographic solutions for encryption, authentication, nonrepudiation, and cryptographic
modules. The cryptography details establish connections between the associated security
areas for confidentiality, integrity and authentication, and nonrepudiation.
• Chapter 3, “PKI Building Blocks,” builds on the general cryptography basics provided
in Chapter 2, “Cryptography Basics,” providing details of PKI standards, protocols, and
architectural components. The PKI building blocks provide a knowledge base for the
remainder of the chapters.
• Chapter 4, “PKI Management and Security,” builds on the information provided in
Chapter 3, "PKI Building Blocks," to establish the groundwork for documenting and managing the PKI system and application components. The methodology is organized in a
top–down approach, beginning with policy, then practices, and finally procedures.
• Chapter 5, "PKI Roles and Responsibilities," identifies the various PKI roles and corresponding responsibilities including positions, functions, jobs, tasks, and applications. The
information presented in this chapter provides a reference baseline used in subsequent
chapters.
• Chapter 6, “Security Considerations,” discusses various security considerations that
affect the operational environments of the CA, the RA, the subscriber, and the relying
party. Subscribers and relying parties include individuals using workstation applications
or server applications running on midrange or mainframe platforms.
• Chapter 7, "Operational Considerations," discusses operational issues for PKI roles identified in Chapter 5, "PKI Roles and Responsibilities," balanced against security issues
described in Chapter 6, “Security Considerations.”
• Chapter 8, "Incident Management," discusses various types of events and methodologies for handling atypical or unusual occurrences that have security or PKI operational
implications. Incidents can originate from insider threats or external attacks, system or
application misconfigurations, software or hardware failures, zero-day faults, and other
vulnerabilities. Components of incident management include monitoring, response, discovery, reporting, and remediation.
• Chapter 9, "PKI Governance, Risk, and Compliance," addresses organizational issues such as management support, security completeness, and audit independence. Management support includes the policy authority for managing the CP and CPS, facility and environmental controls, asset inventory, application development, system administration, vendor relationships, business continuity, and disaster recovery. Security completeness includes planning and execution, data classification, personnel reviews, physical controls, cybersecurity, incident response, access controls, monitoring, and event logs. Audit independence includes assessments, evaluations, and reviews that are reported to senior management without undue influence from operational teams.
• Chapter 10, “PKI Industry,” provides an overview of the industry including standards and
groups that manage or influence PKI operations, with special attention on the Financial
PKI.
The remainder of this chapter introduces security disciplines (i.e., confidentiality, integrity, authentication, authorization, accountability, nonrepudiation) and provides background on various standards organizations (e.g., ANSI, ISO, NIST) and related information security standards focusing on cryptography and PKI. These discussions establish connections with symmetric and asymmetric cryptographic methods and the associated standards referenced throughout the book, in particular Chapter 3, "PKI Building Blocks," along with a historical perspective of cryptography and PKI.
• Confidentiality is the set of security controls to protect data from unauthorized access when in transit, process, or storage. Transit occurs when data is transmitted between two points. Process occurs when data is resident in the memory of a device. Storage occurs when data is stored on stationary or removable media. Note that the states of transit, process, and storage align with the PCI Data Security Standard (DSS)2 for cardholder data.
• Integrity is the set of security controls to protect data from undetected modification or
substitution when in transit or storage. Data in process is a necessary unstable state where
data is being intentionally changed; however, this is where software assurance plays an
important role.
• Authentication is the set of security controls to verify an entity and potentially grant
access to system resources, including data and applications. There is a distinction between
regular data and authentication data, also called authentication credentials. Authorization
and accountability are sometimes included as part of authentication controls, but for the
purposes of this book we treat all three of these security areas as separate controls.
• Authorization is the set of security controls to validate permissions for accessing system
or network resources to read or write data, or to read, write, or execute software. Such
approvals are based on authentication occurring first to verify the requestor, followed by
authorization to validate that the requestor is allowed access.
• Accountability is the set of controls to monitor and analyze the three "W" facts: who did what and when they did it. This includes log management for generating, collecting, storing, examining, alerting, and reporting events.
• Nonrepudiation is the set of security controls to prevent refusal by one party to acknowledge an agreement or action claimed by another party. This is accomplished by having both data integrity and authenticity provable to an independent third party (see the signature sketch following this list).
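As an illustration of provable integrity and authenticity, here is a minimal sketch (ours, using the widely deployed third-party Python cryptography package; it is not an excerpt from this book's later chapters): a signer uses an Ed25519 private key, and anyone holding the public key, including an independent third party, can verify the signature.

# Minimal nonrepudiation sketch using Ed25519 digital signatures.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

message = b"Transfer 100 units to account 42"

# Signer (e.g., a subscriber) holds the private key and signs the message.
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(message)

# Any relying party or third party with the public key can verify the signature.
public_key = private_key.public_key()
try:
    public_key.verify(signature, message)
    print("Signature valid: integrity and origin are provable")
except InvalidSignature:
    print("Signature invalid: message or signature was altered")

In a PKI, the certificate issued by the CA is what binds the public key to the signer's identity, which is what makes the verification meaningful to an independent third party.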
Confidentiality can only be partially achieved using access controls. System administrators can restrict access to files (data in storage) and systems (data in process), and under certain conditions limit access to network components (data in transit). Files or memory might contain system
configurations, cryptographic parameters, or application data. However, only encryption can provide confidentiality for data in transit between communication nodes. Additionally, encryption can also offer confidentiality for data in storage and in some cases even data in process. Because of the role data encryption keys play in providing confidentiality, data encryption keys must also be
protected from disclosure or unauthorized access; otherwise, if the key is compromised, then the
encrypted data must be viewed as similarly compromised.
Integrity can be achieved using various comparison methods between what is expected (or sent)
versus what is retrieved (or received). Typically, integrity is validated using an integrity check value
(ICV), which might be derived cryptographically or otherwise. In general, the ICV is calculated
when a file is about to be written or a message is about to be sent, and it is verified when the file
is read or the message is received. If the previous (written or sent) ICV does not match the current (read or received) ICV, then the file (or the ICV) has been modified or substituted and is untrustworthy. However, a noncryptographic ICV can simply be recalculated to disguise the attack, whereas the use of a cryptographic key to generate and validate the ICV deters the attacker, who must first obtain the
cryptographic key in order to recalculate a valid ICV for the changed file or message.
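As a concrete example, the following minimal sketch (ours, using only the Python standard library) computes a keyed HMAC as a cryptographic ICV; without the key, an attacker who changes the data cannot recalculate a matching value.

# Minimal cryptographic ICV sketch: HMAC-SHA256 over the data.
# Uses only the Python standard library.
import hashlib
import hmac

key = b"shared-secret-key"  # in practice, a properly managed symmetric key

def compute_icv(data: bytes) -> bytes:
    """Calculate the ICV when the file is written or the message is sent."""
    return hmac.new(key, data, hashlib.sha256).digest()

def verify_icv(data: bytes, icv: bytes) -> bool:
    """Recalculate and compare the ICV when the file is read or received."""
    return hmac.compare_digest(compute_icv(data), icv)

original = b"account=42&amount=100"
icv = compute_icv(original)
print(verify_icv(original, icv))                  # True: unmodified
print(verify_icv(b"account=42&amount=999", icv))  # False: modified or substituted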
Authentication methods for individuals and devices differ. Individual authentication methods
include the three basic techniques of something you know (called knowledge factors), something
you have (possession factors), and something you are (biometrics), and arguably a fourth method,
something you control (cryptographic keys), often included as a possession factor. Conversely, device authentication can only use possession or cryptography factors, as devices cannot "remember" passwords or demonstrate biological characteristics. Some hardware devices have unique
magnetic, electromechanical, or electromagnetic characteristics that can be used for recognition.
Regardless, once an entity has been authenticated and its identity has been verified, the associated permissions are used to determine its authorization status. Furthermore, logs are generated by various network appliances, system components, and application software for accountability.
All the authentication methods have the prerequisite that an initial authentication must be
achieved before the authentication credential can be established. The individual or device must be
identified to an acceptable level of assurance before a password can be set, a physical token can be
issued, a person can be enrolled into a biometric system, or a cryptographic key can be exchanged.
If the wrong entity is initially registered, then all subsequent authentications become invalid. Similarly, if a digital certificate is issued to the wrong entity, then all succeeding authentications are based on a false assumption. Thus, if "Alice" registers as "Bob" or if Bob's authentication credential is stolen or his identity covertly reassigned to Alice's authentication credential, then basically
“Alice” becomes “Bob,” resulting in a false-positive authentication.
Nonrepudiation methods are a combination of cryptographic, operational, business, and legal
processes. Cryptographic techniques include controlling keys over the management lifecycle.
Operational procedures include information security controls over personnel and system resources.
Business rules include roles and responsibilities, regulatory requirements, and service agreements
between participating parties. Legal considerations include warranties, liabilities, dispute resolution, and evidentiary rules for testimony and expert witnesses.
For more details on confidentiality, integrity, authentication, and nonrepudiation, refer to the
book Security Without Obscurity: A Guide to Confidentiality, Integrity, and Authentication [B.7.2].
The book organizes security controls into three major areas, confidentiality, integrity, and authentication, which were chosen as the primary categories to discuss information security. While data
confidentiality and integrity are fundamental information security controls, other controls such
as authentication, authorization, and accountability (AAA) are equally important. The book also
addresses nonrepudiation and privacy to help round out the overall discussion of information security controls. And since cryptography is used in almost every information security control such as
data encryption, message authentication, or digital signatures, the book dedicates an entire chapter
to key management, one of the most important controls for any information security program and
unfortunately one of the most overlooked areas.
PKI incorporates both symmetric and asymmetric cryptography along with many other security
controls. Symmetric cryptography includes data encryption, message integrity and authentication,
and key management. Asymmetric cryptography includes data encryption, message integrity and
authentication, nonrepudiation, and key management. Asymmetric key management is often used
to establish symmetric keys. However, symmetric cryptography has been around for a long while,
whereas asymmetric cryptography is rather recent. Figure 1.2 shows a comparison between the history of symmetric and asymmetric cryptography.
In the known history of the world, cryptography has been around for at least 4,000 years beginning with the substitution of hieroglyphics by the Egyptian priesthood to convey secret messages [B.7.3]. The top arrow shows the 4,000-year history of symmetric cryptography. Interestingly, the earliest known human writings are Sumerian pictograms and cuneiform in Mesopotamia, which are only a thousand years older than cryptography. Substitution ciphers are a form of symmetric cryptography since the mapping between cleartext and ciphertext characters represents the symmetric key. For example, a simple substitution cipher maps the English alphabet of 26 letters to the numbers 1–26, and the key would be the relative position of each letter, for example, A is 1, B is 2, and so on.
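To illustrate, here is a minimal sketch (ours) of a shift-style substitution cipher in the spirit of the Caesar cipher discussed next; the shift amount acts as the shared symmetric key, and the same key both encrypts and decrypts.

# Minimal substitution (shift) cipher sketch: the shift value is the symmetric key.
# Classical ciphers like this are trivially breakable and shown only for illustration.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encrypt(cleartext: str, key: int) -> str:
    """Substitute each letter with the letter `key` positions later, wrapping around."""
    return "".join(ALPHABET[(ALPHABET.index(c) + key) % 26] for c in cleartext)

def decrypt(ciphertext: str, key: int) -> str:
    """Reverse the substitution using the same key, the defining symmetric property."""
    return "".join(ALPHABET[(ALPHABET.index(c) - key) % 26] for c in ciphertext)

ciphertext = encrypt("ATTACKATDAWN", 3)  # Caesar traditionally shifted by three
print(ciphertext)                        # DWWDFNDWGDZQ
print(decrypt(ciphertext, 3))            # ATTACKATDAWN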
Better-known symmetric algorithms include the Caesar cipher,3 used by Julius Caesar in Rome;
the Jefferson wheel,4 designed by Thomas Jefferson but still used as late as World War I; and the
infamous German Enigma machine,5 ultimately defeated by the Allies of World War II. Furthermore, World War II ushered in modern cryptography, which today includes symmetric algorithms (e.g., DES, RC4, and AES) and asymmetric algorithms (e.g., DH, RSA, and ECC).
In comparison to symmetric ciphers, asymmetric cryptography has only been around since the 1970s, depicted by the bottom arrow showing its short 50-year history. So, in summary, symmetric cryptography has been around for thousands of years whereas asymmetric cryptography has only been around for decades.
In order for the wide variety of symmetric and asymmetric algorithms to be properly implemented, and for the two cryptography disciplines to properly interoperate within secure protocols, applications, and business environments, standardization is necessary. Standards can provide consistency and continuity across product lines, among vendors, and for intra- or inter-industry segments. Many different participants rely on standards.
• Developers and manufacturers rely on standards to build secure software, firmware, and
hardware products.
• Implementers and service providers rely on standards for products and protocols to
securely interoperate.
• Managers and administrators rely on the underlying standards to safely operate applications and systems that incorporate cryptographic solutions.
• Auditors and security professionals expect applications to execute properly and to be managed properly according to industry standards.
Standards are a big part of security, cryptography, and consequently PKI Cryptonomics, so before delving deeper into cryptography, a basic understanding of the standards organizations and the standardization process will help. The next section provides a selected overview of PKI-related standards and the organizations that are accredited to develop industry standards.
• TC68 – One of the many groups within ISO is Technical Committee 68 Financial Services,9
whose scope is standardization in the field of banking, securities, and other financial
services. Membership consists of 34 participating countries, 50 observing countries,
and 8 liaison relationships to other standards groups. This committee has three standing
subcommittees: SC2 Information Security, SC8 Reference Data, and SC9 Information
Exchange. The committee has several Advisory Groups (AG) and Study Groups (SG)
including SG5 Central Bank Digital Currency (CBDC).
• JTC1 – Another ISO group is Joint Technical Committee One,10 whose scope is standardization in the field of information technology. Membership consists of 34 participating countries, 66 observing countries, and 31 liaison relationships to other standards
groups. This committee has 22 subcommittees, including SC27 Information Security,
Cybersecurity, and Privacy Protection, whose focus includes PKI.
• NIST – The National Institute of Standards and Technology11 develops Federal Information
Processing Standards (FIPS) and Special Publications, along with various other technical
specifications.12 All of these are U.S. government documents and are not affiliated with
ANSI or ISO. However, the Information Technology Laboratory (ITL) within NIST is an ANSI-accredited standards developer. NIST also works with the National Security
Agency13 (NSA) within the Department of Defense14 (DoD) on many standards issues,
including cryptography and key management.
• ITU – The International Telecommunication Union15 (ITU) is the United Nations specialized agency in the field of telecommunications and Information and Communications Technologies (ICT), whose developer group is designated as ITU-T for Telecommunications Recommendations.16
ITU-T Recommendations include the "X" series for data networks, open system communications, and security, which includes the recommendation X.509 Information Technology – Open Systems Interconnection – The Directory: Public Key and Attribute Certificate Frameworks, more commonly known as X.509 certificates. While most PKI-enabled applications support X.509 certificates, there are other PKI formatted credentials in use today, such as PGP-type products, EMV
payment smart cards, and numerous others.
Within the realm of less formal standards groups, there are several notable organizations that
have developed numerous cryptography standards that either extend or rely on PKI.
• IEEE – The Institute of Electrical and Electronics Engineers17 (IEEE), pronounced "Eye-triple-E," has roots as far back as 1884 when electrical professionalism was new. IEEE
supports a wide variety of standards topics and has liaison status with ISO. Topics include
communications, such as the 802.11 Wireless standards, and cryptography, such as the
1363 Public Key Cryptography standards. Note that the IEEE Standards Association, an
arm of IEEE, is an ANSI-accredited Standards Developing Organization (SDO).
• OASIS – The Organization for the Advancement of Structured Information Standards18
(OASIS), originally founded as “SGML Open” in 1993, is a nonprofit consortium that has
liaison status with ISO. OASIS focuses on Extensible Markup Language (XML) standards,
including the Digital Signature Services (DSS), Key Management Interoperability Protocol
(KMIP) specification, and Security Assertion Markup Language (SAML) specifications.
The Internet Architecture Board21 (IAB), which formally established the IETF in 1986, is responsible for defining the overall architecture of the Internet, providing guidance and broad direction to the IETF. The Internet Engineering Steering Group22 (IESG) is responsible for technical management of IETF activities and the Internet standards process. In cooperation with the IETF, IAB, and
IESG, the Internet Assigned Numbers Authority23 (IANA) is in charge of all “unique parameters”
on the Internet, including Internet Protocol (IP) addresses.
• PCI SSC – The Payment Card Industry Security Standards Council24 (PCI SSC) is a global
forum established in 2006 by the payment brands: American Express, Discover Financial
Services, JCB International, MasterCard Worldwide, and Visa Incorporated. China UnionPay (CUP) joined PCI in 2020. PCI develops standards and supports assessment programs
and training, but compliance enforcement of issuers, acquirers, and merchants is solely
the responsibility of the brands. The PCI standards include the Data Security Standard
(DSS), Point-to-Point Encryption (P2PE), and Personal Identification Number (PIN).
The PCI programs include Qualified Security Assessors (QSA), Qualified PIN Assessors
(QPA), Approved Scanning Vendors (ASV), Internal Security Assessors (ISA), and others. The PCI SSC is not affiliated with ISO but is an active member of ASC X9.
• RSA Labs – RSA Laboratories is the research center of RSA, the Security Division of
DELL EMC, and the security research group within the EMC Innovation Network. The
group was established in 1991 within RSA Data Security, the company founded by the
inventors (Rivest, Shamir, Adleman) of the RSA public key cryptosystem. RSA Labs
developed one of the earliest suites of specifications for using asymmetric cryptography,
the Public Key Cryptography Standards (PKCS).
While this discussion of standards organizations is not all-encompassing, it does provide an equitable overview of groups that have developed or maintain cryptography standards. These groups and their associated cryptography standards, specifications, and reports are referenced throughout this book. Technical and security standards are developed at various levels.
• Algorithm standards define cryptographic techniques such as key establishment, for example, X9.42 using discrete logarithm cryptography (e.g., Diffie–Hellman), X9.44 using integer factorization cryptography (e.g., RSA), and X9.63 using elliptic curve cryptography (ECC).
• Application standards could define how cryptographic algorithms or protocols are used
in a particular technological environment to protect information, authenticate users, or
secure systems and networks. For example, X9.84 defines security requirements for protecting biometric information (e.g., fingerprints, voiceprints), and X9.95 provides requirements and methods for using trusted time stamps. While neither standard deals with
business processes for using such technologies, they do address how to make business
applications more secure.
• Management standards often provide evaluation criteria for performing assessments to
identify security gaps and determine remediation. They may also offer exemplars for
policy or practice statements. In addition to policy or practice examples, management
standards might also provide guidelines for security procedures. Examples include X9.79
and ISO 21188 Public Key Infrastructure (PKI) Practices and Policy Framework.
Ironically, many consider security standards as best practices, such that any subset of a standard is sufficient; however, most standards represent minimally acceptable requirements. Standards
include mandatory requirements using the reserved word “shall” and recommendations using the
term “should” to provide additional guidance. The “shall” statements represent the minimal controls
and the “should” statements are the best practices.
The standardization process varies slightly between organizations, but many are similar or equivalent. For example, the ASC X9 procedures are modeled after the ISO processes, so we will selectively compare the two. The six stages of an ISO standard are shown in Figure 1.4. The process
begins when a new work item proposal (NWIP) is submitted to one of the ISO technical committees
for consideration as a new standard. The NWIP might be an existing national standard submitted for
internationalization or a proposition of a brand-new standard. The NWIP ballot requires a majority
approval of the technical committee membership, and at least five countries must commit to actively
participate. Once approved, an ISO number is allocated and the work item is assigned to either an
existing workgroup within a subcommittee or possibly a new workgroup is created. Occasionally, a
new subcommittee might be created.
Once the workgroup is established, a working draft (WD) is created. The goal of the WD is to
identify the technical aspects of the draft standard. When the workgroup determines the WD is
ready to be promoted, it can issue a committee draft (CD) ballot. If the CD fails the ballot, then the
workgroup resolves the ballot comments and can attempt another CD ballot. Otherwise, the draft is
considered a CD, and the workgroup continues to develop the remaining technical and supportive
material. When the workgroup determines the CD is ready for promotion, it can issue a draft international standard (DIS) ballot. If the DIS ballot fails, the workgroup can resolve the comments and
attempt another DIS ballot; otherwise, the DIS is considered stable and is prepared for its Final DIS
(or FDIS) ballot. Upon completion of the FDIS ballot, an ISO audit is performed to ensure the ISO
procedures were followed, and the International Standard (IS) is published. Multiple CD ballots or
DIS ballots are permitted; however, sometimes interest wanes to the point that an insufficient number of ISO members remain active, and the work item is eventually canceled.
The equivalent six stages of an X9 standard are shown in Figure 1.5. Similar to ISO, the X9
process begins when a new work item (NWI) with at least five member sponsors is submitted for consideration as a new standard. The NWI ballot is at the full X9 committee level, and if approved,
an X9 number is allocated and the work item is assigned to either an existing workgroup within a
subcommittee or possibly a new workgroup is created. Again, comparable to the ISO process, a new
subcommittee might be created.
The draft standard is similar to the ISO WD, and once the workgroup decides the standard is
ready, it is issued for a subcommittee (e.g., X9A, X9B, X9C, X9D, or X9F) ballot. If the ballot is
disapproved, then the workgroup resolves the comments and a subcommittee recirculation ballot is
issued, especially when technical changes are made. But even if the ballot is approved, the workgroup still resolves comments. Regardless, once the subcommittee ballot is approved, the draft
standard is then submitted for its full X9 committee ballot, the same ballot process as for the original
NWI ballot. If the X9 ballot is disapproved, the workgroup resolves the comments and a committee
recirculation ballot is issued. And again, even if the ballot is approved the workgroup still resolves
comments. Once the committee ballot is approved, the final draft standard is submitted to ANSI for
a public comment period. All public comments, if any, are resolved by the workgroup, and the final
draft is submitted again to ANSI for an audit to ensure the ANSI procedures were followed, and
finally the American National Standard is published.
The ISO and ANSI consensus process helps ensure industry representation by interested participants. The standards process is often referred to as the "alphabet soup," which is understandable due
to the many organizations and numbers. However, the standards process is important for industry
consistency, continuity, and interoperability. This is especially important for cryptography and PKI
Cryptonomics as parties need to establish trusted relationships. If one party cannot trust the cryptographic keys and particularly the public key certificates of the other, then the data and interactions
cannot likewise be trusted.
NOTES
1. Cryptonomics originated in the first edition Security Without Obscurity: A Guide to PKI Operations in 2016
but since then has been adopted by others.
2. Payment Card Industry (PCI) Security Standards Council (SSC). Data Security Standard (PCI DSS v3.1):
Requirements and security assessment procedures, www.pcisecuritystandards
3. https://en.wikipedia.org/wiki/Caesar_cipher
4. www.monticello.org/site/research-and-collections/wheel-cipher
5. www.bbc.co.uk/history/worldwars/wwtwo/enigma_01.shtml
6. International Organization for Standardization, www.iso.org.
7. International Committee for Information Technology Standards, www.incits.org.
8. Accredited Standards Committee X9, www.x9.org.
9. ISO TC68, www.iso.org/committee/49650.html
10. www.iso.org/committee/45020.html
11. National Institute for Standards and Technology, www.nist.gov
12. Computer Security Resource Center of NIST, csrc.nist.gov/publications
13. National Security Agency, www.nsa.gov
• Algorithms are a set of rules for mathematical calculations or other problem-solving operations including cryptographic processes. Ciphers are a method for making information
secret, which includes encryption algorithms, but not all cryptographic algorithms are
necessarily ciphers, and some provide authentication, nonrepudiation, or key management
services.
• Encryption is the process of transforming original information called “cleartext” to an
undecipherable format called “ciphertext,” which provides data confidentiality. The reverse
process is called "decryption," which changes ciphertext back to cleartext for recovering the original information. The cryptographic key is commonly called the encryption key regardless of whether the cleartext is being encrypted or the ciphertext is being decrypted.
Without access to the encryption key, cleartext cannot be encrypted and ciphertext cannot
be decrypted by any other party.
TABLE 2.1
Security and Cryptography Basics

Security Basics | Cryptography Basics          | Rationale
Confidentiality | Encryption                   | Confidentiality and encryption controls are essentially equivalent methods.
Integrity       | Cryptographic Signature      | Cryptographic signatures include symmetric and asymmetric signatures.
Authentication  | Cryptographic Authentication | Cryptographic authentication is based on security protocols using cryptographic methods.
Authorization   | Cryptographic Credential     | Cryptographic credentials include X.509 public key and attribute credentials.
Accountability  | Log Management               | Log management is an information technology control applied to cryptographic methods.
(none)          | Key Management               | Key management is specific to cryptography.
(none)          | Cryptographic Module         | Cryptographic hardware and software modules are specific to cryptography.
Nonrepudiation  | Nonrepudiation               | Nonrepudiation is based on digital signatures.
In general, cryptographic algorithms are divided into symmetric and asymmetric algorithms. Symmetric ciphers use the same key to encrypt and decrypt data, whereas asymmetric ciphers use two different keys. Figure 2.1 shows a functional diagram with two inputs (parameters and key) and one output (results). The nomenclature and graphics used in Figure 2.1 apply to all diagrams in this book. The function is a cryptographic operation based on a cryptographic algorithm; for
example, a cryptographic algorithm might include encryption and decryption functions. The request
parameters typically include a command, the data, and associated options. The cryptographic key
instructs the algorithm as to how it manipulates the input data. The results provide the output data
and usually include a return code indicating success or an error.
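The following minimal sketch (ours; the names are hypothetical and mirror the Figure 2.1 description rather than any real cryptographic API) models this functional pattern: request parameters and a key go in, and results with a return code come out.

# Minimal sketch of the Figure 2.1 functional pattern: (parameters, key) -> results.
# Names are hypothetical; a real module would dispatch to actual algorithms.
from dataclasses import dataclass

@dataclass
class Request:
    command: str   # e.g., "encrypt" or "decrypt"
    data: bytes    # the input data to operate on
    options: dict  # associated options, e.g., mode or padding

@dataclass
class Result:
    return_code: int  # 0 for success, nonzero for an error
    output: bytes     # the output data produced by the function

def crypto_function(request: Request, key: bytes) -> Result:
    """The key instructs the algorithm how to manipulate the input data."""
    if request.command not in ("encrypt", "decrypt"):
        return Result(return_code=1, output=b"")  # error: unknown command
    # Placeholder transform (XOR with a repeating key) standing in for a real cipher.
    transformed = bytes(b ^ key[i % len(key)] for i, b in enumerate(request.data))
    return Result(return_code=0, output=transformed)

result = crypto_function(Request("encrypt", b"cleartext", {}), key=b"k3y")
print(result.return_code, result.output.hex())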
Symmetric and asymmetric algorithms and functions are discussed in Section 2.1, "Encryption," Section 2.2, "Authentication," and Section 2.3, "Nonrepudiation." Where, when, and how cryptographic keys are stored is covered in Section 2.4, "Key Management." Where and how functions are accessed by applications when using cryptographic algorithms are discussed in Section 2.5, "Cryptographic Modules." This book refers to various standards and specifications that define cryptographic algorithms, modes of operation, and associated options, but it does not provide process details or descriptions of the underlying mathematics [B.7.4].
2.1 ENCRYPTION
When describing a symmetric algorithm, sometimes called a symmetric cipher, it is easiest to consider the overall process between a sender and a receiver. Figure 2.2 shows a sender encrypting the
cleartext and sending the ciphertext to the receiver and then a receiver decrypting the ciphertext to
recover the original cleartext. Both parties use the same cryptographic algorithm and key, but the
sender uses the encrypt function, while the receiver uses the decrypt function. Since the same key is
used for both encryption and decryption, this is called a symmetric algorithm with a symmetric key.
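As a worked example, this minimal sketch (ours, using the third-party Python cryptography package) walks through the sender-receiver flow with AES-128 in GCM mode: both sides hold the same key, the sender encrypts, and the receiver decrypts.

# Minimal symmetric sender/receiver sketch: AES-128 in GCM mode.
# Requires the third-party "cryptography" package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Both parties must share this key in advance, and must share it securely.
key = AESGCM.generate_key(bit_length=128)

# Sender: encrypt the cleartext; the nonce must be unique per message.
nonce = os.urandom(12)
cleartext = b"attack at dawn"
ciphertext = AESGCM(key).encrypt(nonce, cleartext, None)

# Receiver: decrypt with the same key (and nonce) to recover the cleartext.
recovered = AESGCM(key).decrypt(nonce, ciphertext, None)
assert recovered == cleartext
print(recovered.decode())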
Furthermore, any interloper without access to the same symmetric key cannot decrypt the
ciphertext, and therefore the encryption provides data confidentiality. An outsider might attempt an
exhaustive key attack, that is, try every possible key until the right one is determined. There are no
protections against an exhaustive attack except for the huge number of possible keys provided by
the size of the symmetric key used to encrypt the cleartext; given enough time and resources, the
symmetric key can theoretically be found.
In fact, there is always a nonzero probability that a key can be found, and it is actually possible but highly unlikely that the first guess happens to be the correct key. In comparison, a winning lottery ticket with 6 out of a possible 100 numbers is 100 × 99 × 98 × 97 × 96 × 95 = 858,277,728,000, or one chance in about 858 billion (a billion is 10^9, or 9 zeroes), which is better than one chance in a trillion (10^12, or a 1 with 12 zeroes after it). A modern symmetric algorithm using a 128-bit key has more than 340 × 10^36 possible keys, which is a number that has 27 more zeroes than the lowly lottery ticket's chance of winning.
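The arithmetic is easy to reproduce; this short sketch (ours) computes both numbers and compares their magnitudes.

# Reproduce the keyspace comparison from the text using Python's big integers.
lottery_odds = 100 * 99 * 98 * 97 * 96 * 95  # ordered picks of 6 from 100 numbers
keyspace_128 = 2 ** 128                      # possible 128-bit symmetric keys

print(f"{lottery_odds:,}")    # 858,277,728,000 (about 858 billion)
print(f"{keyspace_128:.3e}")  # about 3.403e+38, i.e., 340 x 10^36
print(len(str(keyspace_128)) - len(str(lottery_odds)))  # 27 more digits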
Thus, the huge number of possible symmetric keys typically makes an exhaustive attack impractical. In addition, if the key can be found within someone's lifetime, the value of the recovered
cleartext is likely expired or useless. For example, if it takes 30 years to determine a key used to
encrypt a password that was only valid for 30 days, the attack is essentially a waste of time. Furthermore, changing the key periodically is a common countermeasure, such that the time needed to
exhaustively determine a symmetric key often far exceeds its actual lifetime.
However, there are far easier attacks to get the symmetric key. For example, in order for the
sender and receiver to exchange the encrypted information (ciphertext) and decrypt it, they must
first exchange the symmetric key, which, if not done in a secure manner, might allow an outsider to
get the key. Furthermore, in order for the sender to encrypt and the receiver to decrypt, they must
have access to the same key. Inadequate key storage protection might allow an outsider to hack into
the systems and get the key or an unauthorized insider to access the systems and provide the key to
the outsider. Related vulnerabilities, threats, and risks are discussed in more detail in Section 2.4.
However, insider threats are more likely to occur than outsider threats. The Ponemon Institute2 and
Raytheon released a 2014 study on insider threats including both adversarial and inadvertent security breaches:
• Eighty-eight percent of respondents recognize insider threats as a cause for alarm but have
difficulty identifying specific threatening actions by insiders.
• Sixty-five percent of respondents say privileged users look at information because they
are curious.
• Forty-two percent of the respondents are not confident that they have enterprise-wide visibility for privileged user access.
The 88% of respondents who recognize insider threats as a cause for alarm seem justified by
other studies such as the Carnegie Mellon University and Software Engineering Institute study
on insider threats in the U.S. Financial Services Sector,3 which provides a damage comparison
between internal and external attacks. The study was based on 80 fraud cases between 2005 and
2012. On average, the actual internal damage exceeded $800,000, while the average internal
potential was closer to $900,000. In comparison, the actual average external damage was over
$400,000, and the potential average external damage was closer to $600,000. The study indicates
that insider damage is doubled (or 100% greater), while its potential is a third higher (or about
33% greater).
When describing an asymmetric algorithm, also called an “asymmetric cipher,” it is easier to
consider the overall process between a sender and a receiver, as we did for symmetric algorithms.
Figure 2.3 shows a sender encrypting cleartext using the receiver’s public key, sending the ciphertext
to the receiver, and then the receiver decrypting the ciphertext using its private key to recover the
cleartext. The receiver generates an asymmetric key pair consisting of the private key and the public
key and sends its public key to the sender. Similar to symmetric ciphers, both parties use the same
cryptographic algorithm, but in this example, each uses a different key. Since different keys are
used, the public key for encryption and the private key for decryption, this is called an asymmetric
algorithm with an asymmetric key pair.
The cleartext encrypted using the public key can only be decrypted using the corresponding
private key. Thus, any sender with a copy of the public key can encrypt cleartext, but only the
receiver can decrypt the ciphertext. Consequently, any interloper with a copy of the receiver’s public key
cannot decrypt the ciphertext, so data confidentiality is maintained. Furthermore, the distribution
of the asymmetric public key does not require secrecy as does a symmetric key. On the other
hand, the asymmetric private key is as vulnerable to outside hackers or insider threats as any
symmetric key. In general, the same key management controls used for symmetric keys are equally
needed for asymmetric private keys. However, as we dig deeper into PKI operations further in
the book, it will become readily apparent that the strongest controls are needed for asymmetric
private keys.
With respect to asymmetric public keys, we mentioned earlier that they do not need to be kept
secret; however, they do require authenticity. Otherwise, without the sender being able to
authenticate the receiver’s public key, asymmetric cryptography is susceptible to a “man-in-the-middle”
(MITM) attack as shown in Figure 2.4. Consider the same scenario of a sender sending ciphertext
to the receiver by encrypting the cleartext with a public key. The MITM attack occurs when an
interloper convinces the sender that its public key belongs to the receiver. The sender encrypts the
cleartext with the interloper’s public key, thinking it is using the receiver’s public key, and sends
the ciphertext to the receiver, which the interloper intercepts. The interloper decrypts the cipher-
text using its private key to recover the original cleartext, then re-encrypts the cleartext using the
actual receiver’s public key and sends the ciphertext onto the receiver. The receiver decrypts the
ciphertext using its private key to recover the original cleartext, unaware that the interloper has
accessed the message.
The sender falsely believes it is securely exchanging information with the receiver as shown
by the dotted line, but the interloper is the man in the middle. Furthermore, the interloper not only
sees the original cleartext from the sender but also changes the cleartext such that the sender sends
one message but the receiver gets a completely different message, and neither of them is aware of
the switch. Often, asymmetric encryption is not used for exchanging data but rather for exchanging
symmetric keys, which are subsequently used to encrypt and decrypt data. So, a MITM attack would
capture the symmetric key again allowing the interloper to encrypt and decrypt information. To
avoid a MITM attack, the sender must be able to validate the public key owner, namely, the receiver,
which is discussed in Section 2.2.
2.2 AUTHENTICATION
We now continue the sender and receiver discussion from Section 2.1, “Encryption,” when using
symmetric and asymmetric ciphers for authentication and integrity.
Figure 2.5 shows a sender generating a Message Authentication Code (MAC) using a symmetric
encryption function. The MAC is generated using a specific cipher mode of operation called cipher
block chaining (CBC). Basically, the cleartext message is divided into chunks of data, each chunk is
encrypted such that the ciphertext from the previous encryption is used with the next encryption, and
the MAC is the first half of the last ciphertext block [B.4.9] and [B.4.3]. Both parties use the same
cryptographic algorithm, the same symmetric key, and the same encrypt function.
The sender generates the MAC from the cleartext and sends both the cleartext message and the
MAC to the receiver. The receiver generates its own MAC from the cleartext message and compares
the received MAC to the newly generated MAC. If the two match, then the receiver has validated
that the cleartext message (and the MAC) has not been altered, and therefore the message integrity
has been verified. Furthermore, since only the sender and the receiver have access to the MAC key
and the receiver did not send the cleartext message, the MAC also provides message authentication.
Additional services are discussed in Section 2.3, “Nonrepudiation.”
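The following is a minimal sketch of the CBC-MAC-style computation described above, written in Python with the pyca/cryptography package (both are our assumptions; the book does not prescribe an implementation, and modern designs favor CMAC or HMAC over raw CBC-MAC):

```python
# Illustrative CBC-MAC sketch only; not the book's reference code.
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def cbc_mac(key: bytes, message: bytes) -> bytes:
    # Pad the cleartext into whole 128-bit blocks ("chunks of data").
    padder = padding.PKCS7(128).padder()
    data = padder.update(message) + padder.finalize()
    # CBC chaining: each block's ciphertext feeds the next encryption.
    encryptor = Cipher(algorithms.AES(key), modes.CBC(b"\x00" * 16)).encryptor()
    ciphertext = encryptor.update(data) + encryptor.finalize()
    return ciphertext[-16:][:8]   # first half of the last ciphertext block

key = os.urandom(16)              # shared symmetric MAC key
msg = b"wire transfer: $100 to account 42"
mac = cbc_mac(key, msg)           # sender side

# Receiver recomputes its own MAC from the cleartext and compares.
assert mac == cbc_mac(key, msg)
```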
Figure 2.6 shows a sender generating a keyed Hash Message Authentication Code (HMAC)
using a hash algorithm instead of a symmetric cipher. In general, a hash function maps data of an
arbitrarily large size to data of a smaller fixed size, where slight input differences yield large output
differences. A cryptographic hash function must be able to withstand certain types of cryptanalytic
attacks. At a minimum, cryptographic hash functions must have the following properties:
• Preimage resistance is with regard to the hash result: given a hash value h, it must be
infeasible to find any message m such that h = hash (m). This resistance deters an attacker
from recovering a message from its hash value.
• Second preimage resistance is with regard to one known message: given a message m, it
must be infeasible to find a different message g that yields the same hash result as
m, where h = hash (m) = hash (g). This resistance deters an attacker from finding a second
message that might be used to counterfeit the first message.
• Collision resistance is with regard to two unknown messages: it must be infeasible to find
any two messages m and g that yield the same hash results h, where h = hash (m) = hash
(g). This resistance is analogous to the birthday paradox, finding two individuals with the
same birthday within a room of people.
If any of these conditions are not met, then an interloper might be able to determine another
cleartext message and substitute it for the original message. A sender or a receiver might also be able
to determine another message, claim it was the original message, and so repudiate the message.
Similar to the MAC, the HMAC algorithm [B.5.7] incorporates a symmetric key with the
cryptographic hash function used by the sender and the receiver.
The sender generates the HMAC from the cleartext and sends both the cleartext message and the
HMAC to the receiver. The receiver generates its own HMAC from the cleartext message and
compares the received HMAC to the newly generated HMAC. If the two match, then the receiver has
validated that the cleartext message (and the HMAC) has not been altered, and therefore the
message integrity has been verified. Since only the sender and the receiver have access to the HMAC
key, and the receiver did not send the cleartext message, the HMAC also provides message
authentication. Additional services are discussed in Section 2.3, “Nonrepudiation.”
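Unlike CBC-MAC, HMAC needs only a hash function and is available in most standard libraries; the sketch below uses Python's built-in hmac module (the language choice is our assumption):

```python
import hashlib
import hmac
import os

key = os.urandom(32)                    # shared symmetric HMAC key
message = b"ship 100 units to warehouse 7"

tag = hmac.new(key, message, hashlib.sha256).digest()   # sender side

# Receiver recomputes the HMAC and compares in constant time.
expected = hmac.new(key, message, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)
```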
Figure 2.7 shows a sender generating a digital signature, sometimes just called a signature, using
a combination of a hash function and a digital signature algorithm.
The differences between asymmetric encryption (see Section 2.1) and digital signatures include
the following:
• The sender uses its own private key to generate the signature versus the receiver’s public
key to encrypt data.
• The receiver uses the sender’s public key to verify the signature versus its own private key
to decrypt the data.
Similar to HMAC, the sender generates a hash of the cleartext, but then uses its private key to
generate the signature, so the sender is often called the signer. The cleartext and signature are sent to the
receiver, who then generates its own hash of the cleartext, and then uses the sender’s public key to
verify the signature. In fact, any receiver who has a copy of the sender’s public key can verify the
sender’s signature. Furthermore, only the sender can generate the signature as only it has access to
its private key.
The sender (or signer) generates a hash from the cleartext and uses its asymmetric private key to
generate the digital signature. The cleartext message and the digital signature are sent to the receiver.
The receiver generates its own hash from the cleartext message and verifies the signature using
the sender’s public key. If the signature verifies, then the receiver has validated that the cleartext
message has not been altered, and therefore the message integrity has been verified. Since only the
sender has access to the private key, the signature also provides message authentication. Additional
services are discussed in Section 2.3.
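As a concrete illustration of sign-then-verify, here is a minimal Python sketch using Ed25519 from the pyca/cryptography package (our choice of algorithm and library; the book does not mandate either):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # held only by the sender (signer)
public_key = private_key.public_key()        # distributed to any receiver

message = b"quarterly report v1"
signature = private_key.sign(message)        # hashing is part of signing

# Any receiver with the public key can verify; raises InvalidSignature
# if either the message or the signature has been altered.
public_key.verify(signature, message)
```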
As mentioned in Section 2.1, “Encryption,” asymmetric public keys do not need to be kept
secret; however, they do require authenticity. Otherwise, without the receiver being able to
authenticate the sender’s public key, asymmetric cryptography is susceptible to a MITM attack as shown
in Figure 2.8. Consider the same scenario of a sender sending a signed message to the receiver by
hashing the cleartext and generating the signature using its private key. A MITM attack occurs when
an interloper convinces the receiver that its public key belongs to the sender. The sender generates
the digital signature using its private key, which the interloper cannot counterfeit. However, the
interloper intercepts the cleartext message and generates a new digital signature using its private
key. The interloper sends the re-signed message to the receiver. The receiver uses the interloper’s
public key, believing it is the sender’s public key, and successfully verifies the digital signature.
The receiver thinks it has verified a signed message from the sender, unaware that the interloper
is the man in the middle. The interloper can modify or substitute the cleartext message such that the
sender signs one message but the receiver verifies another, and neither of them is aware of the switch.
To avoid a MITM attack, the receiver must be able to validate the public key owner, namely, the
sender. This is accomplished by encapsulating the sender’s public key in a PKI credential, commonly
called a public key certificate, sometimes called a digital certificate or just a certificate, signed and issued
by a trusted third party (TTP) called a certificate authority (CA). The standards that define the
certificate contents and the CA operations are discussed in Section 3.1, “PKI Standards Organizations,”
but here, we introduce the basic concepts regarding the authenticity and integrity of the public key.
Let’s review Figure 2.7, where the sender signs a cleartext message using its asymmetric private
key and sends the original cleartext and signature to the receiver for verification using the sender’s
asymmetric public key. Let’s also reconsider Table 2.2 using the same scenario. It is important to
recognize that there are two very different signatures – the signature for the cleartext message generated
by the sender and the signature for the certificate generated by the issuer. And as we will explain,
the receiver will need to verify both signatures in the overall authentication and integrity validation.
• Subject name is the public key owner’s name, in this case the sender.
• Subject public key is the subject’s public key, in this case the sender’s public key.
• Issuer name is the certificate signature signer’s name; in this case, we introduce another
entity, the CA.
• Issuer signature is the issuer’s digital signature, in this case the CA’s signature generated
by the CA’s asymmetric private key. This signature encapsulates the certificate information
like an envelope; that is, the subject, the public key, and the issuer fields are hashed,
and the issuer’s asymmetric private key is used to generate the certificate signature.
Furthermore, the receiver is the relying party using the sender’s public key to verify the sender’s
signature on the cleartext message, but not until after the receiver uses the CA’s public key to verify
the signature on the certificate. As shown in Table 2.2, the certificate signature is over the
certificate fields, which provides cryptographic bindings between the sender’s name and public key and
between the sender’s name and the CA’s name. Thus, the receiver can rely on the sender’s public key
to verify the cleartext message because the sender’s certificate has been validated using the CA’s
public key (Figure 2.9). However, the validation process for verifying the sender’s certificate and related
CA certificates, called the certificate chain, shown in Figure 2.10, is discussed in the succeeding text.
Since hashing the cleartext is always part of generating the digital signature, for the remainder of
this chapter, the hash step will not be shown. We also introduce two important concepts for
confirming a digital signature – verification versus validation – which are illustrated in the following
discussion of Figure 2.10.
TABLE 2.2
Selected Certificate Fields

Certificate Field   | Description
Subject name        | Subject is the public key owner’s name.
Subject public key  | Public key is the subject’s public key.
Issuer name         | Issuer is the TTP signer’s name.
Issuer signature    | Signature is the TTP digital signature of the certificate.
Figure 2.10 demonstrates a signed cleartext message, the signer’s certificate, an issuer CA
certificate, and a root CA certificate. In order for the relying party to verify the signed message, it
must validate the certificate chain, consisting of the subject (signer), issuer CA, and root CA
certificates.
However, in order to validate the certificate chain, the relying party must first determine the
certificate chain. This is accomplished by walking the certificate chain in the following manner – refer
to the dotted line from left to right:
(1) The relying party matches the signer’s name with the subject name in the signer’s
certificate to find the signer’s public key. This presumes the relying party has a means by which
to identify the signer’s name, also called an identity.
(2) The issuer CA name in the signer’s certificate is then matched with the issuer CA name
in the issuer CA certificate to find the issuer CA’s public key.
(3) The root CA name in the issuer CA certificate is then matched with the root CA name in
the root CA certificate to find the root CA’s public key.
Once the certificate chain is determined and each participant’s public key has likewise been
established, the relying party can then walk the chain in reverse and verify the signature on each
certificate in the following manner – refer to the solid line from right to left:
1. The root CA public key is used to verify the signature on the root CA certificate. It is
important to note that since the root CA is the apex of the hierarchy, there is no other
entity to sign the root CA certificate, and therefore it is a self-signed certificate. Root
CA certificates are installed in a reliable and secure storage, often called a trust store, to
ensure and maintain their authenticity and integrity. Once the certificate signature has been
verified, its public key can be used to verify the next certificate in the chain.
2. The root CA public key is then used to verify the signature on the issuer CA certificate,
such that once the certificate signature has been verified, its public key can be used to
verify the next certificate in the chain.
3. The issuer CA public key is then used to verify the signature on the subject certificate,
such that once the certificate signature has been verified, its public key can be used to
verify the next certificate in the chain.
The subject public key can then be used to verify the signature on the cleartext message, which
authenticates the signer and provides message integrity. It is important to note that Table 2.2
and Figure 2.10 do not list all of the fields in a standard certificate. Individual certificate
verification and certificate chain validation are more complex than discussed so far. In addition to
signatures, validity dates and certificate status must also be checked. Refer to Chapter 3, “PKI
Building Blocks,” for certificate formats and Chapter 5, “PKI Roles and Responsibilities,” for
more details.
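To make the two-pass chain walk concrete, the following is a toy sketch in Python using the pyca/cryptography package with Ed25519 (the Cert structure, the names, and the algorithm are our illustrative assumptions; this is not a production X.509 validator, and it omits the validity date and revocation checks noted above):

```python
from dataclasses import dataclass
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)

@dataclass
class Cert:
    subject: str
    public_key: Ed25519PublicKey
    issuer: str
    signature: bytes = b""

    def tbs(self) -> bytes:
        """The 'to be signed' bytes: subject name, public key, issuer name."""
        raw = self.public_key.public_bytes(
            serialization.Encoding.Raw, serialization.PublicFormat.Raw)
        return self.subject.encode() + raw + self.issuer.encode()

def make_cert(subject: str, subject_pub, issuer: str, issuer_priv) -> Cert:
    cert = Cert(subject, subject_pub, issuer)
    cert.signature = issuer_priv.sign(cert.tbs())   # issuer signs the fields
    return cert

root_priv = Ed25519PrivateKey.generate()
ca_priv = Ed25519PrivateKey.generate()
signer_priv = Ed25519PrivateKey.generate()

root = make_cert("Root CA", root_priv.public_key(), "Root CA", root_priv)  # self-signed
ca = make_cert("Issuer CA", ca_priv.public_key(), "Root CA", root_priv)
signer = make_cert("Sender", signer_priv.public_key(), "Issuer CA", ca_priv)

# Walk the chain in reverse (steps 1-3): root, then issuer CA, then subject.
root.public_key.verify(root.signature, root.tbs())    # root from the trust store
root.public_key.verify(ca.signature, ca.tbs())
ca.public_key.verify(signer.signature, signer.tbs())

# Finally, the subject public key verifies the signed cleartext message.
message = b"cleartext message"
msg_signature = signer_priv.sign(message)
signer.public_key.verify(msg_signature, message)
```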
Except for root CA certificates that are installed and protected in trust stores to ensure and
maintain their authenticity and integrity, subject self-signed certificates are unreliable. An example is
shown in Figure 2.11 with the signer sending a signed cleartext message. The left side shows the
legitimate sender signing the cleartext that can be verified using the sender’s self-signed certificate.
However, on the right side, an interloper has re-signed a modified cleartext and substituted a counterfeit
self-signed certificate that contains the sender’s name but the interloper’s public key. The relying
party verifies the signature on the substituted cleartext, unknowingly using the interloper’s
certificate. Both the interloper’s certificate and signed message verify because they were signed by the
interloper’s private key, but the relying party cannot distinguish between the sender’s real certificate
and the interloper’s certificate. Thus, the content of a subject self-signed certificate is unreliable.
The discussions on certificate verification and certificate chain validation employed examples
where the subject’s public key was used to verify signed cleartext messages. It is important to
recognize that a subject’s asymmetric private key and certificate can be used for many other purposes,
including data encryption discussed in Section 2.1, “Encryption,” key encryption discussed in
Section 2.4, “Key Management,” code signing, e-mails, and time stamps. Digital signatures for
achieving nonrepudiation are discussed in Section 2.3, “Nonrepudiation,” and cryptographic modules to
protect asymmetric keys are discussed in Section 2.5, “Cryptographic Modules.”
2.3 NONREPUDIATION
As described earlier, nonrepudiation is the process of deriving a nonrepudiation token from
original information, usually called a signature. First, let’s discuss why some of the other integrity and
authentication mechanisms cannot provide nonrepudiation services.
Let’s reconsider Figure 2.5, where the sender sends both the cleartext message and the MAC to
the receiver for message integrity. Since only the sender and the receiver have access to the MAC
key and the receiver did not send the cleartext message, the MAC also provides message
authentication. However, since both the sender and the receiver have access to the MAC key and can generate
the MAC, either might have also created the message, so the authentication and integrity are not
provable to an independent third party, and therefore the MAC cannot provide nonrepudiation.
Arguably, a symmetric cipher scheme might be used for nonrepudiation. For example, as shown in
Figure 2.12, if an authentication system enforced the use of two separate MAC keys, one for sending
and the other for receiving, then this might meet third-party provability. But, unlike asymmetric ciphers,
the key separation controls are not inherent within the cryptography but rather part of the application.
Thus, the MAC keys are generated and exchanged with send-only and receive-only controls strictly
enforced. For example, Party A might generate its send-only key, and when exchanged with Party B,
the key is installed as a receive-only key. Likewise, Party B would generate a different send-only key,
and when exchanged with Party A, the key is installed as a receive-only key. Clearly, additional access
and execution controls are needed for this scenario. Regardless, while the cryptography scheme is an
important aspect of the overall nonrepudiation services, it is not the only consideration.
Likewise, let’s reconsider Figure 2.6, where the sender sends both the cleartext and HMAC to the
receiver for message integrity. Since only the sender and the receiver have access to the HMAC key and
the receiver did not send the cleartext message, the HMAC also provides message authentication.
However, since both the sender and the receiver have access to the HMAC key and can generate the HMAC,
either might have also created the message, so the authentication and integrity are not provable to an
independent third party, and therefore the HMAC cannot provide nonrepudiation. Arguably, using two
separate HMAC keys, one for sending and the other for receiving, might meet third-party provability,
but again the cryptographic scheme is not the only consideration for nonrepudiation services.
Now let’s reconsider Figure 2.7, where the sender sends both the cleartext and signature to the
receiver for message integrity. Since only the sender has access to the private key, the signature also
provides message authentication. Because the signature can only be generated using the private
key, the authentication and integrity are provable to an independent third party, and therefore the
signature can provide nonrepudiation. But once again, the cryptographic scheme is not the only
consideration for nonrepudiation services.
Another method involves a combination of cryptographic hashes and digital signatures shown in
Figure 2.13. Here, we provide an overview of trusted time stamp technology and the use of a Time Stamp
Authority (TSA). The first step occurs when Party A generates a hash of its cleartext, denoted hash(C),
and provides it to the TSA, requesting a Time Stamp Token (TST). The TSA operates independently of
Party A as either an external organization or a service encapsulated within a cryptographic module.
Furthermore, the TSA never has access to the original cleartext, only the hash(C) of the cleartext [B.1.15].
The second step takes place when the TSA generates a Time Stamp Token (TST) by adding a time
stamp to the submitted hash(C) and generating a signature over both elements. The TSA generates
its own hash of the hash(C) and the time stamp and applies its private key to generate the signature.
The time stamp is from a clock source that is calibrated to the International Timing Authority (ITA).
Calibration means the time difference between the TSA clock and the ITA clock is registered; the
clocks are not synchronized, as that would change the TSA clock. Other cryptographic mechanisms
such as hash chains, MAC, or HMAC might be used; however, for the purposes of this book, we
only refer to digital signatures. The TSA returns the TST to Party A.
The third step happens when Party A provides its cleartext message and the corresponding TST
to Party B. Party B regenerates hash(C) from the cleartext and matches it to the hash(C) in the TST.
Party B then verifies the TST signature using the TSA’s public key. The TSA signature
cryptographically binds the original hash(C) to the time stamp such that when Party B validates the TST, the
following has been proved:
• The TST signature authenticates that the TST was generated by the TSA.
• The TST hash(C) confirms the integrity of the cleartext provided by Party A.
• The TST time stamp confirms the reliability of the cleartext to a verifiable time source.
The cleartext created by Party A might be content for a website, a legal document, a financial
statement, or even executable code. Party B can then be assured that whatever the nature of the
cleartext, its integrity is provable to a verifiable point in time. Furthermore, if Party A signed the cleartext
such that the hash(C) includes the cleartext and the digital signature, then Party B not only has
assurance of the cleartext but can also prove that Party A signed the cleartext no later than the time stamp.
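The three steps can be sketched end to end; the snippet below is a toy model in Python (Ed25519 and a POSIX clock are our assumptions; real TSAs follow standards such as RFC 3161 and use calibrated clock sources):

```python
import hashlib
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

tsa_priv = Ed25519PrivateKey.generate()   # held inside the TSA
tsa_pub = tsa_priv.public_key()           # known to relying parties

# Step 1: Party A submits only hash(C), never the cleartext itself.
cleartext = b"contract v3"
hash_c = hashlib.sha256(cleartext).digest()

# Step 2: the TSA binds hash(C) to a time stamp with its signature.
timestamp = str(int(time.time())).encode()
tst_signature = tsa_priv.sign(hash_c + timestamp)

# Step 3: Party B regenerates hash(C) and verifies the TSA's binding.
assert hashlib.sha256(cleartext).digest() == hash_c
tsa_pub.verify(tst_signature, hash_c + timestamp)
```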
However, all of these methods are from a cryptographic viewpoint of nonrepudiation, and there
are additional operational and legal considerations. From an operational perspective, controls over the
symmetric or asymmetric private keys are paramount. Controls over the application using the keys are
likewise as important. Finally, the reliability and robustness of the protocol are also essential. Many
such operational considerations might undermine the effectiveness of a nonrepudiation service.
From a legal perspective, the nonrepudiation service needs to provide a dispute resolution process
and, if not settled, disputes need to be resolved in litigation or arbitration. For the latter case, the
nonrepudiation token (e.g., digital signature) needs to be admissible per the established rules
of evidence. The operational controls discussed earlier need to be discoverable and confirmed. This
might include interviewing witnesses involved in the dispute or expert testimony with regard to the
nonrepudiation services or general cryptography. Legal issues regarding nonrepudiation are
discussed further in Chapter 9, “PKI Governance, Risk, and Compliance.”
2.4 KEY MANAGEMENT
As discussed in Section 2.1, “Encryption,” symmetric ciphers use the same key for related functions,
and asymmetric ciphers use different keys (public and private) for related functions. For symmetric
ciphers, we discussed the following cryptographic operations:
• Encryption and decryption of data using a symmetric data encryption key (DEK).
• Generation and verification of a Message Authentication Code (MAC) using a symmetric
MAC key.
• Generation and verification of a keyed Hash Message Authentication Code (HMAC)
using a symmetric HMAC key.
For asymmetric ciphers, we discussed the following cryptographic operations:
• Encryption and decryption of data using an asymmetric key pair, where the data are
encrypted using the public key and decrypted using the associated private key.
• Signature generation and verification using an asymmetric key pair, where the signature
is generated using the private key and verified using the corresponding public key.
In addition to the aforementioned operations, we now discuss additional key management functions
for both symmetric and asymmetric algorithms:
• Encryption and decryption of other keys using a symmetric key encryption key (KEK).
• Encryption and decryption of symmetric keys using an asymmetric key pair, where the
key is encrypted using the public key and decrypted using the associated private key.
• Negotiation of symmetric keys using asymmetric key pairs; both parties use their private
key, their public key, and the public key of the other party to derive a shared value.
In general, when a symmetric key is generated by one party and transmitted as ciphertext to the other
party, this is called key transport. Conversely, when both sides negotiate the symmetric key without
having to transmit it as ciphertext, this is called key agreement. Key transport and key agreement are
collectively called key establishment. Figure 2.14 shows the encryption, transmission, and
decryption of a symmetric key.
The sender generates a random symmetric key, encrypts it using a symmetric KEK, and
transmits the ciphertext to the receiver. The receiver decrypts the ciphertext using the same KEK to
recover the newly generated symmetric key. The KEK would have been previously established
between the sender and the receiver and used exclusively for key encryption. There are several
methods to establish a KEK, including an asymmetric key transport or key agreement protocol, or
manual methods including key components or key shares, commonly called key splits in this book.
Each of these methods has strengths and weaknesses, which are discussed in more detail further on.
The most important aspect of establishing a KEK, especially when using manual methods, is having
documented procedures with good practices that are followed and kept as an audit log for
subsequent review and confirmation.
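For illustration, symmetric key transport under a pre-established KEK might look like the following Python sketch using AES key wrap (our choice of wrap mode and library; the book does not mandate either):

```python
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

kek = os.urandom(32)   # previously established key encryption key (KEK)
dek = os.urandom(16)   # freshly generated symmetric working key

wrapped = aes_key_wrap(kek, dek)            # sender: the key travels as ciphertext
recovered = aes_key_unwrap(kek, wrapped)    # receiver: recover the working key
assert recovered == dek
```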
The purpose of the exchanged symmetric key is indeterminate without a bilateral agreement
between the sender and the receiver or additional information transmitted along with the cipher-
text. Such information about the exchanged symmetric key might be included as part of a key
exchange protocol. For example, in Figure 2.12, we identified the scenario of exchanging not
only a MAC key but also the key’s directional usage (send vs. receive) as part of its control
mechanisms. Some key establishment protocols provide explicit key exchange parameters, while
others are implicit. Some data objects such as digital certificates or encrypted key blocks include
explicit key usage information that applications are expected to obey. Some cryptographic
products enforce key usage controls, while others do not. And sometimes the key exchange purpose
is merely a contractual agreement between the participating parties with manual procedures that
might require dual controls.
Figure 2.15 also depicts the encryption, transmission, and decryption of a symmetric key.
Examples of key transport schemes include RSA Optimal Asymmetric Encryption Padding (OAEP) and
the RSA Key Encapsulation Mechanism and Key Wrapping Scheme (KEM-KWS) methods [B.1.6]
based on the paper “A Method for Obtaining Digital Signatures and Public-Key Cryptosystems”
[B.7.5], published in 1978.
The sender generates a random symmetric key, encrypts it using the asymmetric public key of
the receiver, and transmits the ciphertext to the receiver. The receiver decrypts the ciphertext using
its asymmetric private key to recover the newly generated symmetric key. The receiver’s public
key would have been previously provided to the sender and used exclusively for key transport. Not
shown is the encapsulation of the receiver’s public key in a certificate. As discussed in Section 2.2,
“Authentication,” without the use of a digital certificate, the key transport is also vulnerable to a
MITM attack, as shown in Figure 2.16. The presumption is the sender erroneously uses the
interloper’s public key instead of the receiver’s public key.
The sender generates a random symmetric key, encrypts it using the interloper’s public key
believing it was the receiver’s public key, and transmits the ciphertext to the receiver. The ciphertext
is intercepted and decrypted using the interloper’s private key. At this point, the interloper now has
a copy of the symmetric key. Using the receiver’s public key, the interloper might re-encrypt the
symmetric key or generate and encrypt another symmetric key, and then forward the ciphertext onto
the receiver. Regardless, the receiver decrypts the ciphertext using its private key to recover the
symmetric key. The sender and the receiver are under the false impression that they have securely
exchanged a symmetric key without realizing the interloper is in the middle eavesdropping on any
and all encrypted communications.
Figure 2.17 depicts a simplistic view of establishing a symmetric key without using key
transport. Each party has its own asymmetric private key and public key certificate. The two parties
must also exchange certificates. In general, each party using their own asymmetric key pair, and the
public key of the other party, can mathematically compute a common shared secret from which a
symmetric key is derived. The shared secret cannot be computed without access to one of the private
keys. Thus, the symmetric key is not actually exchanged between the two parties. Examples of key
agreement methods include Diffie–Hellman [B.1.5] based on the original paper “New Directions in
Cryptography” [B.7.6], published in 1976, and elliptic curve cryptography (ECC) schemes [B.1.10]
based on the papers “Uses of Elliptic Curves in Cryptography” [B.7.7] and “Elliptic Curve
Cryptosystems” [B.7.8], independently authored two years apart.
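A minimal key agreement sketch in Python with X25519 (one modern Diffie–Hellman variant; the algorithm and the HKDF derivation step are our assumptions) shows both parties deriving the same symmetric key without ever transmitting it:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

a_priv = X25519PrivateKey.generate()   # Party A's key pair
b_priv = X25519PrivateKey.generate()   # Party B's key pair

# Each party combines its own private key with the other's public key.
shared_a = a_priv.exchange(b_priv.public_key())
shared_b = b_priv.exchange(a_priv.public_key())
assert shared_a == shared_b            # the common shared secret

# Derive the working symmetric key from the shared secret.
hkdf = HKDF(algorithm=hashes.SHA256(), length=16, salt=None, info=b"demo kek")
symmetric_key = hkdf.derive(shared_a)
```

In practice, each public key would be validated via its certificate first; otherwise the exchange is exposed to the MITM scenario described above.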
The details of the key agreement process will vary depending on the algorithm employed. Some
algorithms only require the private key of one and the public key of the other. Other algorithms use
the key pair of one and the public key of the other. Yet others allow ephemeral key pairs, generated
temporarily to increase the overall entropy of the symmetric key. In addition to the asymmetric
keys, there are also domain parameters and other values that both parties need to agree upon for the
crypto mathematics to operate successfully. These details are beyond the scope of this book but are
available in various standards [B.1.5], [B.1.6], and [B.1.10] and specifications.
What we can say about symmetric and asymmetric keys is that there are security controls that
must be in place in order for the overall system to be considered trustworthy. In general, we can treat
symmetric keys and asymmetric private keys in a similar manner utilizing strong security controls
around protecting the confidentiality of these keys. Note that asymmetric public keys have different
requirements. We can summarize the primary key management controls: key confidentiality, key
integrity, key authentication, and key authorization.
Key confidentiality: Symmetric and asymmetric private keys must be kept secret. Unlike data
confidentiality where authorized entities are permitted to know the information, keys cannot be
known by anyone. The primary key confidentiality control is the use of cryptographic hardware
modules discussed in Section 2.5, “Cryptographic Modules.” Some folks dispute this rule, claiming
that key owners or system administrators are “authorized” to see or know keys; however, controls
intended to prevent unauthorized access can fail, be circumvented, or individuals can be coerced.
The better approach is to manage symmetric and asymmetric private keys such that they are never
stored or displayed as cleartext. We summarize the key confidentiality controls as follows:
1. Symmetric keys and private asymmetric keys are used as cleartext inside cryptographic
hardware modules but are not stored as cleartext outside the crypto module.
2. Symmetric keys and private asymmetric keys are stored as ciphertext outside
cryptographic hardware modules.
3. Symmetric keys and private asymmetric keys are stored as key splits outside
cryptographic hardware modules, managed using dual control and split knowledge. Each key
split is managed by different individuals under supervision by a security officer to avoid
collusion between the individuals.
It is important to recognize that key confidentiality does not apply to asymmetric public keys as
knowledge of the public key does not reveal any information about the corresponding private key.
Thus, the strength of PKI is that anyone can use the public key without endangering the security of
the private key.
Key integrity: The integrity of any key must be maintained, including symmetric keys,
asymmetric private keys, and asymmetric public keys. Controls must be in place to prevent the
modification or substitution of any cryptographic key. More importantly, controls must
also be in place to detect any adversarial or inadvertent modification or substitution. Any
unintentional change must return an error and prevent the key from being used.
Key authentication: The authenticity of any key must be verified, including symmetric keys,
asymmetric private keys, and public keys. Controls must be in place to prevent any illicit
keys from entering the system. In addition, systems must have the capability to validate
the key. Any failed validation must return an error and prevent the key from being used.
Key authorization: The use of symmetric and asymmetric private keys must be restricted to
authorized entities. For example, individuals enter passwords to unlock keys, or
applications are assigned processor identifiers with access to system files. Any unauthorized
attempts must return an error and either trigger an alert for incident response or prevent
the key from being used after some maximum number of attempts.
All four of these principal controls apply to the overall key management lifecycle. There are
many key management models defined in many standards and specifications, but for the purposes
of this book, we refer to the American National Standard for PKI Asymmetric Key Management
[B.1.14] depicted in Figure 2.18. The lifecycle is represented as a state transition diagram with seven
nodes: key generation, key distribution, key usage, key backup, key revocation, key termination,
and key archive. The lifecycle is applicable for symmetric and asymmetric keys. The number of
transition arrows has been simplified for our discussion. Depending on the application environment
and events that drive the transitions, the cryptographic key state might not occur in every node.
Key Generation: This is the state when and where keys are generated for a specific purpose.
Symmetric keys are basically random numbers, but asymmetric key pairs are more complex,
incorporating random numbers and prime numbers in their creation. Keys might not be
generated at the location or in the equipment where they are to be used, and therefore might need to be distributed.
Key Distribution: This is the state when keys are transferred from where the keys were
generated to the location or equipment where the keys are to be used. The distribution
method might be accomplished using manual or automated processes, and depending on
the method might occur in seconds, hours, or days.
Key Usage: This is the state when and where the keys are used for their intended purpose
as an input into the cryptographic function. The key usage occurs within a cryptographic
module that might be software or hardware, as discussed in Section 2.5, “Cryptographic
Modules.” In addition, the keys might be backed up and recovered to maintain system
reliability.
Key Backup: This is the state when and where the keys are backed up for recovery purposes
when the keys in use are inadvertently lost. When the keys are backed up they are often
distributed from the usage to the backup locations, and when keys are recovered they are
again distributed from the backup to the usage locations. Keys might also be backed up
during distribution from generation to usage.
Key Revocation: This is the state where keys are removed from usage before their assigned
expiration date due to operational or security issues. Operational issues include service
cessation, merger, acquisition, or product end of life. Security issues include known or
suspected system hacks, data breaches, or key compromises.
Key Termination: This is the state where keys are removed from usage or revocation when
reaching their assigned expiration date. Essentially, every instance of the key is erased
from all systems, including key backup, with the special exception of key archival.
Termination might take seconds, hours, days, or longer to securely purge all key instances.
Key Archive: This is the state when and where keys are stored beyond their usage period
for purposes of post-usage verification of information previously protected with the key.
Archived keys are never reused in production systems but rather they are only utilized in
archival systems.
Note that key escrow is not shown in Figure 2.18. Key escrow is a term borrowed from money
escrow where funds are held by a third party for future dispersals such as mortgage insurance or
property taxes and from software escrow where the source code is held by a third party in the event
the provider goes bankrupt. Key escrow is the practice of storing keys for the explicit purpose of
providing them to a third party. Third-party examples include government regulations, law
enforcement in the event of a subpoena, or other legal obligations. In practice, key escrow is rarely used,
and it would not be an independent key state, rather it would more likely be part of key backup or
key archive to meet whatever escrow requirements might be imposed on the organization.
Now let’s turn our attention to key management lifecycle controls. There are many standards and
other resources that discuss key management controls in minute detail. It is not the intent of this
book to reiterate what is already available. However, for continuity with later chapters, Table 2.3
provides a summary of controls. Each of the controls is discussed in more detail with references to
relevant standards.
Crypto software: This control refers to cryptographic software modules that are basically software
libraries specializing in performing cryptographic functions that are accessible using cryptographic
application programming interfaces (crypto APIs). Some operating systems provide native
cryptographic capabilities, some programming languages provide additional cryptographic libraries,
and some systems need to incorporate third-party cryptographic toolkits. Cryptographic software
modules are discussed in Section 2.5, “Cryptographic Modules.”

TABLE 2.3
Key Management Lifecycle Controls

Lifecycle        | Key Confidentiality                                          | Key Integrity                | Key Authentication      | Key Authorization
Key Generation   | Crypto software, crypto hardware                             | Key fingerprint, certificate | Originator ID           | Access controls
Key Distribution | Key encryption, crypto hardware, key splits                  | Key fingerprint, certificate | Originator ID           | Access controls, dual control, split knowledge
Key Usage        | Key encryption, crypto software, crypto hardware             | Key fingerprint, certificate | Source ID, requestor ID | Access controls
Key Backup       | Key encryption, crypto hardware, key splits                  | Key fingerprint, certificate | Source ID               | Access controls, dual control, split knowledge
Key Recovery     | Key encryption, crypto hardware, key splits                  | Key fingerprint, certificate | Requestor ID            | Access controls, dual control, split knowledge
Key Revocation   | Key encryption, crypto software, crypto hardware, key splits | Key fingerprint, certificate | Source ID               | Access controls, dual control
Key Termination  | Key encryption, crypto software, crypto hardware, key splits | Key fingerprint, certificate | Source ID               | Access controls, dual control
Key Archive      | Key encryption, crypto hardware, key splits                  | Key fingerprint, certificate | Source ID               | Access controls, dual control
Key encryption: This control refers to the storage or transmission of cryptographic keys as
ciphertext. The cryptographic keys are encrypted using a key encryption key (KEK).
KEKs are used solely to encrypt other keys; they are never used for any data functions
such as data encryption, MAC, HMAC, or digital signatures. Key encryption is used for
storage or transmission of other keys as ciphertext that provides key confidentiality.
Crypto hardware: This control refers to cryptographic hardware modules that consist of
software, firmware, and hardware components that specialize in performing cryptographic
functions accessible via API calls. Application developers incorporate cryptographic
hardware modules, commonly called hardware security modules (HSMs), from
vendors who manufacture cryptographic devices. Note that HSM vendors undergo
mergers and acquisitions just like any other technology industry, with products that suffer
end-of-life time frames. Cryptographic hardware modules are discussed in Section 2.5,
“Cryptographic Modules.”
Key splits: This control refers to various techniques for dividing up keys into multiple parts.
One common method is key components, and another is key shares. We use the term
“splits” to refer to either method. Key splits provide key confidentiality.
Key components: This method uses the binary operator “exclusive or” often denoted by the
abbreviation XOR or the ⊕ symbol. In general, 2 bits are combined such that if they are
the same, the result is a binary 0; otherwise, if they are different, the result is a binary 1,
that is: 0 ⊕ 0 = 0, 0 ⊕ 1 = 1, 1 ⊕ 0 = 1, and 1 ⊕ 1 = 0. For example, two individuals can
each have a key component that is a binary string such that neither knows the other’s key
component, but when the two strings are combined using XOR, a key is formed. Knowing
a bit in one string but not knowing the corresponding bit in the other string does not reveal
any information about the bit in the key.
Key components can support any number of strings, but to recover the key, all of them must be
available. For instance, if the key is split into five key components where each is assigned to
different individuals, then all five key component holders are needed to recover the key.
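A minimal Python sketch of two-component XOR splitting follows (illustrative only; a real key ceremony uses a certified cryptographic module, not application code):

```python
import os

key = os.urandom(16)                    # the AES-128 key to be split
component1 = os.urandom(16)             # random component for custodian 1
component2 = bytes(k ^ c for k, c in zip(key, component1))  # custodian 2

# Either component alone reveals nothing; XOR of both recovers the key.
recovered = bytes(a ^ b for a, b in zip(component1, component2))
assert recovered == key
```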
In practice, key components are typically written on paper and kept locked up by the key
component holder (also called a key component custodian). Unlike passwords, key components are too
long for an average person to remember. For example, using hexadecimal notation, an AES-128 key
component might be a 32-digit string such as the hypothetical value 0123 4567 89AB CDEF FEDC
BA98 7654 3210, where each “hex” digit represents 4 bits. The key component is entered manually by each
component holder, outside the view of the others, until all of the components have been entered. The key
components are then locked up securely. This key management process is often supervised by a
security officer and sometimes observed by an auditor [B.1.2].
Key shares: This method uses an algebraic equation, such as Shamir’s Secret Sharing [B.7.9], such
that the key is divided into M parts but only N parts are needed to recover the key. This is called an N
of M scheme, where N < M. For example, 3 of 7 might be chosen where the key is divided into M = 7
parts and assigned to seven individuals, where only N = 3 individuals are needed to recover the key. The
formula to calculate the possible combinations of N holders from M available individuals is
M! / (N! × (M − N)!), where “!” is the factorial function. So, for N = 3 and M = 7, we have
7! = 7 × 6 × 5 × 4 × 3 × 2 × 1 = 5040, 3! = 3 × 2 × 1 = 6, and (7 − 3)! = 4 × 3 × 2 × 1 = 24, such that
5040/(6 × 24) = 5040/144 = 35 possible combinations. Thus, there are 35 ways for 3 of 7 key shares to
recover the key. Unlike key components, key shares are large numbers that are too big to use paper,
so it is a common practice for the key shares to be written to removable media, where each device is
assigned to a key share holder. The devices are often PIN activated such that mere possession is
insufficient to enter the key share. Each device is inserted into a device reader that uses the key
shares to solve the equation and recover the key.
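The combinatorial arithmetic above can be checked directly with a quick Python snippet using the standard library:

```python
from math import comb, factorial

assert factorial(7) == 5040
assert factorial(3) == 6
assert factorial(7 - 3) == 24
assert comb(7, 3) == 5040 // (6 * 24) == 35   # 35 ways to pick 3 of 7 holders
```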
Key fingerprint: This control is a generic term that refers to a cryptographically derived
checksum that provides a relatively unique reference value per key. Fingerprints provide key integrity and
a means to verify a key value without having to display the actual key.
• For symmetric keys, the historical method is a key check value (KCV) that is created by
encrypting a string of binary zeroes and commonly using the leftmost six digits as the
KCV [B.1.2].
• For asymmetric keys, the common method is a hash of each key, one for the public key
and another for the private key. Note that the key fingerprint algorithm may differ per
vendor or crypto product but SHA1 is commonly used. Regardless of the algorithm, the
key fingerprint can be used to verify the key stored in an HSM or when using key splits.
The recovered key might have a fingerprint as well as each key split.
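As an illustration of the KCV convention in the first bullet above, the following Python sketch encrypts a zero block under AES and keeps the leftmost six hex digits (the library choice is ours):

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
block = encryptor.update(b"\x00" * 16) + encryptor.finalize()

kcv = block.hex().upper()[:6]   # leftmost six hex digits identify the key
# The KCV confirms a loaded key matches without ever displaying the key itself.
```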
Certificate: This control refers to digital certificates introduced in Section 2.2, “Authentication.”
As discussed earlier, the certificate signature is over all of the certificate fields, which provides
cryptographic bindings between the sender’s name, public key, and the CA name. The certificate
provides integrity of the public key. Certificates can also have a fingerprint, called a thumbprint,
which is a hash of the whole certificate, including its digital signature.
Originator ID: This control refers to the key generation system. Tracking the origin of the key
helps to maintain the overall system authentication.
Source ID: This control refers to the key usage system. Tracking the source of the
cryptographic function and the associated key helps to maintain the overall system authentication.
Requestor ID: This control refers to the application accessing the cryptographic function.
Tracking which requestor accessed the cryptographic function and the associated key
helps to maintain the overall system integrity.
Access controls: These controls refer to the permissions granted to the originator, source, and
requestors. Requestors include applications, end users, and system administrators. Access
controls restrict who can perform key establishment, key installation, key activation, key
backup and recovery, key revocation, key expiration, rekey, key archive, and other
cryptographic functions.
Dual control with split knowledge: This control refers to the use of key splits for key distribution
or key backup and recovery. Assigning different splits to different individuals is the essence of split
knowledge, as no one person knows the value of the actual key. Requiring more than one person for
a cryptographic function is the basis for dual control.
Certificate management is a subset of key management and follows the same patterns shown in
Figure 2.18. The certificate contains information about the public key and indicates information
about the corresponding private key. In addition to the certificate fields listed in Table 2.2, there
are other basic certificate fields and certificate extensions. Another important certificate field is the
validity element, which contains the start “not before” and end “not after” dates. Figure 2.10 shows
how to walk up the chain and verify each certificate coming down the chain, but in addition, the
validity dates of each certificate must also be checked. If the current date is not between the start
and end dates, then the validation fails. Thus, the application performing the validation must be
cognizant of the current date and time with an accurate clock.
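A date-window check for each certificate in the chain might look like this small sketch (the field names are illustrative, not a real library API):

```python
from datetime import datetime, timezone

def within_validity(not_before: datetime, not_after: datetime) -> bool:
    # The validating application needs an accurate clock, as noted above.
    now = datetime.now(timezone.utc)
    return not_before <= now <= not_after
```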
The certificate extensions are related to the X.509 v3 format. Each extension has an object identifier
(OID), which asserts its presence, and the corresponding content. Other essential fields are the
certificate revocation list (CRL) distribution points and authority information access extensions. The
CRL extension provides a link to the CRL, and the authority extension provides a link to the Online
Certificate Status Protocol (OCSP) service. If a certificate is revoked prior to its expiration (“not
after”) date, the CRL or an OCSP responder provides a mechanism to check the certificate status. If any of
the certificates within the chain have been revoked, then the validation fails.
2.5 CRYPTOGRAPHIC MODULES
Security requirements for cryptographic modules are built on two foundational definitions:
• Cryptographic module: The set of hardware, software, and/or firmware that implements
approved security functions (including cryptographic algorithms and key generation) and
is contained within the cryptographic boundary.
• Cryptographic boundary: An explicitly defined continuous perimeter that establishes the
physical bounds of a cryptographic module and contains all the hardware, software, and/
or firmware components of a cryptographic module.
FIPS 140 was redesignated in April 1982 from Federal Standard 1027, General Security
Requirements for Equipment Using the Data Encryption Standard. This standard specified the minimum
general security requirements for implementing the Data Encryption Standard (DES) algorithm
[B.5.1] in a telecommunications environment.
FIPS 140–1 was revised in January 1994 and renamed Security Requirements for Cryptographic
Modules. This standard introduced four security levels from lowest to highest: level 1, level 2, level
3, and level 4. This version became effective six months later, in June 1994, and the previous version
was deprecated three years afterward, in June 1997.
FIPS 140–2 [B.5.3] was revised in May 2001, becoming effective six months later, in November
2001, with the previous version deprecated afterward. Products certified per FIPS 140–1 were
moved to a historical list.
FIPS 140–3 [B.5.36] was revised in March 2019, becoming effective six months later, in
September 2019, with the previous version subsequently deprecated. This version refers to ISO/IEC
19790 [B.4.12] and ISO/IEC 24759 [B.4.13]. Products certified per FIPS 140–2 were moved to a
historical list.
While there are international and country-specific standards and validation programs, for the
purposes of this book, we refer to FIPS 140–3 as the de facto standard. This is a common approach
supported by many others. Another recognized standard is the Common Criteria, transformed into ISO/IEC
15408 Information Technology – Security Techniques – Evaluation Criteria for IT Security [B.4.7],
but this standard provides a security language to develop either security targets or protection
profiles (PPs) for laboratory evaluations. Examples include the German cryptographic modules, security
level “enhanced” [B.7.10], and the European Union cryptographic module for CSP signing operations
[B.7.11]. For the financial services industry, there is also ISO 13491 Banking – Secure Cryptographic
Devices (Retail), but unlike the CMVP, there is no corresponding validation program. Regardless,
FIPS 140–3 defines four security levels (level 1 is the lowest) with 11 distinct security areas.
For example, the August 2015 CMVP list has over 2,200 product certificates issued by NIST to more
than 400 manufacturers. FIPS 140–2 Security Levels 1 and 2 were designed for software modules,
whereas levels 3 and 4 were designed for hardware modules. Roughly 67% of the products are listed as
hardware, 29% as software, and 3% as firmware. However, only 1% of the hardware modules are rated
at level 4 and 26% are rated at level 3; 61% are rated at level 2 and 12% are rated at level 1. Thus, the
majority (73%) of cryptographic hardware modules are rated at a security level typical of a cryptographic
software module. The reader should always check the security level of the FIPS certification.
The previous edition of this book discussed the FIPS 140–2 security areas, and while this edition
discusses the FIPS 140–3 security areas, the ISO/IEC 19790 international standard has reorganized
the security areas. Table 2.4 provides a summary comparison of the FIPS 140–2 versus the
FIPS 140–3 security areas.
The following sections provide a summary of the 11 FIPS 140–3 security areas.