CISSP Domain 6 Objectives
they include a variety of tools, such as vulnerability assessments, penetration tests, software testing,
audits, and other control validation
Every org should have a security assessment and testing program defined and operational
Security assessments: comprehensive reviews of the security of a system, application, or other tested
environment
during a security assessment, a trained information security professional performs a risk assessment that
identifies vulnerabilities in the tested environment that may allow a compromise and makes
recommendations for remediation, as needed
a security assessment includes the use of security testing tools, but goes beyond scanning and manual
penetration tests
the main work product of a security assessment is normally an assessment report addressed to
management that contains the results of the assessment in nontechnical language and concludes with
specific recommendations for improving the security of the tested environment
An organization’s audit strategy will depend on its size, industry, financial status and other factors
a small non-profit, a small private company and a small public company will have different requirements
and goals for their audit strategies
the audit strategy should be assessed and tested regularly to ensure that the organization is not doing a
disservice to itself with the current strategy
there are three types of audit strategies: internal, external, and third-party
Software testing verifies that code functions as designed and doesn't contain security flaws
Security management needs to perform a variety of activities to properly oversee the information security program
Log reviews, especially for admin activities, ensure that systems are not misused
Account management reviews ensure that only authorized users have access to information and systems
Backup verification ensures that the org's data protection process is working properly
Key performance and risk indicators provide a high-level view of security program effectiveness
Artifact: piece of evidence such as text, or a reference to a resource which is submitted in response to a question
Assessment: testing or evaluation of controls to understand which are implemented correctly, operating as
intended and producing the desired outcome in meeting the security or privacy requirements of a system or org
Audit: process of reviewing a system for compliance against a standard or baseline (e.g. audit of security
controls, baselines, financial records) can be formal and independent, or informal/internal
Chaos Engineering: discipline of experiments on a software system in production to build confidence in the
system's capabilities to withstand turbulent/unexpected conditions
Code testing suite: usually used to validate function, statement, branch and condition coverage
Compliance Calendar: tracks an org's audits, assessments, required filings, due dates, and related activities
Compliance Tests: an evaluation that determines if an org's controls are being applied according to management
policies and procedures
Penetration Testing/Ethical Penetration Testing: security testing and assessment where testers actively
attempt to circumvent/defeat a system's security features; typically constrained by contracts to stay within
specified Rules of Engagement (RoE)
Functional order of controls: deter, deny, detect, delay, determine, and decide
Fuzzing: uses modified inputs to test software performance under unexpected circumstances; mutation fuzzing
modifies known inputs to generate synthetic inputs that may trigger unexpected behavior; generational fuzzing
develops inputs based on models of expected inputs to perform the same task
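A minimal sketch of mutation fuzzing (the byte-flip count and iteration limit are arbitrary assumptions, and the parser in the usage note is a hypothetical stand-in for the software under test):

```python
import random

def mutate(seed, n_flips=3):
    # Mutation fuzzing: start from a known-good input and flip a few
    # randomly chosen bytes to randomly chosen values.
    data = bytearray(seed)
    for _ in range(n_flips):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def fuzz(target, seed, iterations=100):
    # Feed mutated inputs to the target and record any input that
    # triggers an unhandled exception (unexpected behavior).
    crashes = []
    for _ in range(iterations):
        case = mutate(seed)
        try:
            target(case)
        except Exception as exc:
            crashes.append((case, exc))
    return crashes
```

e.g. fuzz(parser, b"GET /index.html HTTP/1.1") returns the crashing inputs, which can then be kept as regression test cases; generational fuzzing would instead build inputs from a model of the expected format.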
IAM system: identity and access management system combines lifecycle management and monitoring tools to
ensure that identity and authorization are properly handled throughout an org
ITSM: IT Service Management tools include change management and associated approval tracking
Judgement Sampling: AKA purposive or authoritative sampling, a non-probability sampling technique where
members are chosen only on the basis of the researcher's knowledge and judgement
Misuse Case Testing: testing strategy from a hostile actor's point of view, attempting to lead to integrity failures,
malfunctions, or other security or safety compromises
Mutation testing: modifies a program in small ways and then tests that mutant to determine if it
behaves as it should or if it fails; the technique is used to design and test software through mutation
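A toy illustration of the idea (add, the hand-made mutant, and the two-case suite are all hypothetical): a useful test suite should "kill" the mutant, i.e. the mutant should fail at least one test.

```python
def add(a, b):
    return a + b

def add_mutant(a, b):
    # Mutant: the '+' operator deliberately mutated to '-'.
    return a - b

def suite_passes(fn):
    # The existing test suite, expressed as a predicate over the
    # function under test; a surviving mutant means the suite is
    # too weak to detect this class of defect.
    return fn(2, 3) == 5 and fn(0, 0) == 0
```

Here suite_passes(add) is True while suite_passes(add_mutant) is False, so the mutant is killed; note that the fn(0, 0) case alone would not have caught it.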
Plan of Action and Milestones (POA&M): a document identifying tasks to be accomplished, including details,
resources, milestones, and completion target dates
RUM: real user monitoring is a passive monitoring technique that records user interaction with an app or system to
ensure performance and proper app behavior; often used as a predeployment process using the actual user
interface
RoE: Rules of Engagement, set of rules/constraints/boundaries that establish limits of participant activity; in
ethical pen testing, an RoE defines the scope of testing and establishes liability limits for both testers and the
sponsoring org or system owners
SCE: Script Check Engine is designed to make scripts interoperable with security policy definitions
Statistical Sampling: process of selecting subsets of examples from a population with the objective of estimating
properties of the total population
Substantive Test: testing technique used by an auditor to obtain the audit evidence in order to support the
auditor's opinion
Testing: process of exercising one or more assessment objects (activities or mechanisms) under specified
conditions to compare actual to expected behavior
Trust Services Criteria (TSC): used by an auditor when evaluating the suitability of the design and operating
effectiveness of controls relevant to the security, availability, or processing integrity of information and systems or
the confidentiality or privacy of the info processed by the entity
6.1.1 Internal
An organization’s security staff can perform security tests and assessments, and the results are meant for
internal use only, designed to evaluate controls with an eye toward finding potential improvements
An internal audit strategy should be aligned to the organization’s business and day-to-day operations
e.g. a publicly traded company will have a more rigorous internal auditing strategy than a privately
held company
Designing the audit strategy should include laying out applicable regulatory requirements and compliance
goals
Internal audits are performed by an organization’s internal audit staff and are typically intended for internal
audiences, and management use
6.1.2 External
An external audit strategy should complement the internal strategy, providing regular checks to ensure
that procedures are being followed and the organization is meeting its compliance goals
External audits are performed by an outside auditing firm
these audits have a high degree of external validity because the auditors performing the
assessment theoretically have no conflict of interest with the org itself
audits by these firms are generally considered acceptable by most investors and governing bodies
third-party audit reporting is generally intended for the org's governing body
6.1.3 Third-party
6.2.1 Vulnerability assessment
Vulnerabilities: weaknesses in systems and security controls that might be exploited by a threat
Vulnerability assessments: examining systems for these weaknesses
The goal of a vulnerability assessment is to identify elements in an environment that are not adequately
protected -- and not necessarily from a technical perspective; you can also assess the vulnerability of
physical security or the external reliance on power, for instance
can include personnel testing, physical testing, system and network testing, and other facilities
tests
Vulnerability assessments are some of the most important testing tools in the information security
professional’s toolkit
Security Content Automation Protocol (SCAP): provides a common framework for discussion and
facilitation of automation of interactions between different security systems (sponsored by NIST)
SCAP components related to vulnerability assessments:
Common Vulnerabilities and Exposures (CVE): provides a naming system for describing
security vulnerabilities
Common Vulnerability Scoring System (CVSS): provides a standardized scoring
system for describing the severity of security vulnerabilities; it includes metrics and calc
tools for exploitability, impact, how mature exploit code is, and how vulnerabilities can be
remediated, and a means to score vulns against users' unique requirements
Common Configuration Enumeration (CCE): provides a naming system for system
config issues
Common Platform Enumeration (CPE): provides a naming system for operating
systems, applications, and devices
Extensible Configuration Checklist Description Format (XCCDF): provides a language
for specifying security checklists
Open Vulnerability and Assessment Language (OVAL): provides a language for
describing security testing procedures; used to describe the security condition of a system
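As a concrete example tied to the CVSS entry above, the CVSS v3.1 specification maps numeric base scores to qualitative severity ratings; a small sketch:

```python
def cvss_severity(base_score):
    # Qualitative severity rating scale from the CVSS v3.1 specification:
    # 0.0 None, 0.1-3.9 Low, 4.0-6.9 Medium, 7.0-8.9 High, 9.0-10.0 Critical.
    if not 0.0 <= base_score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if base_score == 0.0:
        return "None"
    if base_score <= 3.9:
        return "Low"
    if base_score <= 6.9:
        return "Medium"
    if base_score <= 8.9:
        return "High"
    return "Critical"
```

e.g. cvss_severity(9.8) returns "Critical"; vulnerability scanners commonly use these bands to prioritize remediation queues.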
Vulnerability scans automatically probe systems, applications, and networks looking for weaknesses that
could be exploited by an attacker
flaws may include missing patches, misconfigurations, or faulty code
Four main categories of vulnerability scans:
network discovery scans
network vulnerability scans
web application vulnerability scans
database vulnerability scans
Authenticated scans: (AKA credentialed security scan) involves conducting vulnerability assessments
and security checks on a network, system, or application using valid credentials; this approach enables
the scanner to simulate the actions of an authenticated user, allowing it to access deeper layers of the
target system, gather more information, and provide a more accurate assessment of vulnerabilities; often
uses a read-only account to access configuration files
6.2.2 Penetration testing
Penetration tests go beyond vulnerability testing techniques because they actually attempt to exploit
systems
Vulnerability management programs take the results of the tests as inputs and then implement a risk
management process for identified vulnerabilities
NIST defines the penetration testing process as consisting of four phases:
planning: includes agreement on the scope of the test and the rules of engagement
ensures that both the testing team and management are in agreement about the nature of
the test and that it is explicitly authorized
information gathering and discovery: uses manual and automated tools to collect information
about the target environment
basic reconnaissance (website mapping)
network discovery
testers probe for system weaknesses using network, web and db vuln scans
attack: seeks to use manual and automated exploit tools to attempt to defeat system security
step where pen testing goes beyond vuln scanning as vuln scans don’t attempt to actually
exploit detected vulns
reporting: summarizes the results of the pen testing and makes recommendations for
improvements to system security
tests are normally categorized into three groups:
white-box penetration test:
provides the attackers with detailed information about the systems they target
this bypasses many of the reconnaissance steps that normally precede attacks, shortening
the time of the attack and increasing the likelihood that it will find security flaws
these tests are sometimes called "known environment" tests
in white-box testing, the tester has access to the source code and performs testing from a
developer's perspective
gray-box penetration test:
AKA partial knowledge tests, these are sometimes chosen to balance the advantages
and disadvantages of white- and black-box penetration tests
this is particularly common when black-box results are desired but costs or time constraints
mean that some knowledge is needed to complete the testing
these tests are sometimes called "partially known environment" tests
in gray-box testing, the tester evaluates software from a user perspective but has access to
the source code
black-box penetration test:
does not provide attackers with any information prior to the attack
this simulates an external attacker trying to gain access to information about the business
and technical environment before engaging in an attack
these tests are sometimes called "unknown environment" tests
Code review and testing is "one of the most critical components of a software testing program"
These procedures provide third-party reviews of the work performed by developers before moving code
into a production environment, possibly discovering security, performance, or reliability flaws in apps
before they go live and negatively impact business operations
In code review, AKA peer review, developers other than the one who wrote the code review it for defects;
code review can be a formal or informal validation process
Fagan inspections: the most formal code review process follows six steps:
1. planning
2. overview
3. preparation
4. inspection
5. rework
6. follow-up
Entry criteria are the criteria or requirements which must be met to enter a specific process
Exit criteria are the criteria or requirements which must be met to complete a specific process
Static application security testing (SAST): evaluates the security of software without running it by
analyzing either the source code or the compiled application; code reviews are an example of static app
security testing
Dynamic application security testing (DAST): evaluates the security of software in a runtime
environment and is often the only option for organizations deploying applications written by someone else
Misuse case testing: AKA abuse case testing - used by software testers to evaluate the vulnerability of
their software to known risks; focuses on behaviors that are not what the org desires or that are counter to
the proper function of a system/app
In misuse case testing, testers first enumerate the known misuse cases, then attempt to exploit those use
cases with manual or automated attack techniques
A test coverage analysis is used to estimate the degree of testing conducted against new software; to
provide insight into how well testing covered the use cases that an app is being tested for
Test coverage: number of use cases tested / total number of use cases
requires enumerating possible use cases (a difficult task), and anyone using test
coverage calculations should understand the process used to develop the input values
Five common criteria used for test coverage analysis:
branch coverage: has every IF statement been executed under all IF and ELSE conditions?
condition coverage: has every logical test in the code been executed under all sets of inputs?
functional coverage: has every function in the code been called and returned results?
loop coverage: has every loop in the code been executed under conditions that cause code
execution multiple times, only once, and not at all?
statement coverage: has every line of code been executed during the test?
Test coverage report: measures how many of the test cases have been completed; is used to provide
test metrics when using test cases
Interface testing assesses the performance of modules against the interface specs to ensure that they will
work together properly when all the development efforts are complete
Interface testing essentially assesses the interaction between components and users with API testing,
user interface testing, and physical interface testing
Three types of interfaces should be tested:
application programming interfaces (APIs): offer a standardized way for code modules to
interact and may be exposed to the outside world through web services
should test APIs to ensure they enforce all security requirements
user interfaces (UIs): examples include graphical user interfaces (GUIs) and command-line
interfaces
UIs provide end users with the ability to interact with the software, and tests should
include reviews of all UIs
physical interfaces: exist in some apps that manipulate machinery, logic controllers, or
other objects
software testers should pay careful attention to physical interfaces because of the
potential consequences if they fail
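A sketch of API interface testing with negative cases (the transfer endpoint is entirely hypothetical; the point is that tests must verify the API enforces its security requirements, not just the happy path):

```python
def transfer(amount, authorized):
    # Hypothetical API under test: must enforce authorization and
    # input validation before doing any work.
    if not authorized:
        raise PermissionError("caller is not authorized")
    if amount <= 0:
        raise ValueError("amount must be positive")
    return "ok"

def test_api_enforces_security():
    # Negative test cases: each invalid request must be rejected
    # with the expected error, never silently accepted.
    for amount, authorized, expected in [
        (100, False, PermissionError),  # unauthorized caller
        (-5, True, ValueError),         # negative amount
        (0, True, ValueError),          # zero amount
    ]:
        try:
            transfer(amount, authorized)
            raise AssertionError("API accepted an invalid request")
        except expected:
            pass
    assert transfer(100, True) == "ok"  # happy path still works
```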
Also see OWASP API security top 10 (https://owasp.org/API-Security/editions/2023/en/0x11-t10/)
Breach and attack simulation (BAS): platforms that seek to automate some aspects of penetration
testing
The BAS platform does not actually wage attacks; it conducts automated testing of security controls to
identify deficiencies
A BAS system combines red team (attack) and blue team (defense) techniques together with automation
to simulate advanced persistent threats (and other advanced threat actors) running against the
environment
Designed to inject threat indicators onto systems and networks in an effort to trigger other security
controls (e.g. place a suspicious file on a server)
detection and prevention controls should immediately detect and/or block this traffic as potentially
malicious
See:
OWASP Web Security Testing Guide
OSSTMM (Open Source Security Testing Methodology Manual)
NIST 800-115
FedRAMP Penetration Test Guidance
PCI DSS Information Supplemental on Penetration Testing
Orgs should create and maintain compliance plans documenting each of their regulatory obligations and
map those to the specific security controls designed to satisfy each objective
Compliance checks are an important part of security testing and assessment programs for regulated firms:
these checks verify that all of the controls listed in a compliance plan are functioning properly and are
effectively meeting regulatory requirements
Account management reviews ensure that users only retain authorized permissions and that unauthorized
modifications do not occur
Full review of accounts: time-consuming to review all, and often done only for highly privileged accounts
Organizations that don’t have time to conduct a full review process may use sampling, but only if sampling
is truly random
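Such random sampling can be sketched as follows (the 10% fraction is an arbitrary assumption; seed exists only to make audit runs reproducible):

```python
import random

def sample_accounts(accounts, fraction=0.1, seed=None):
    # random.sample gives every account an equal chance of selection,
    # unlike judgment sampling, which reflects the reviewer's bias.
    rng = random.Random(seed)
    k = max(1, round(len(accounts) * fraction))
    return rng.sample(accounts, k)
```

Each review cycle draws a fresh sample, so over time every account has a realistic chance of being inspected.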
Adding accounts: should be a well-defined process, and users should sign an AUP
Adding, removing, and modifying accounts and permissions should be carefully controlled and
documented
Accounts that are no longer needed should be suspended
ISO 9000 standards use a Plan-Do-Check-Act loop
plan: foundation of everything in the ISMS, determines goals and drives policies
do: security operations
check: security assessment and testing (this objective)
act: formally do the management review
Key Performance Indicators (KPIs): measures that show how an ISMS is performing relative to its
stated goals
Choose the factors that can show the state of security
Define baselines for some (or better yet all) of the factors
Develop a plan for periodically capturing factor values (use automation!)
Analyze and interpret the data and report the results
Key metrics or KPIs that should be monitored by security managers may vary from org to org, but could
include:
number of open vulns
time to resolve vulns
vulnerability/defect recurrence
number of compromised accounts
number of software flaws detected in pre-production scanning
repeat audit findings
user attempts to visit known malicious sites
Develop a dashboard of metrics and track them
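One of the metrics above, time to resolve vulns, sketched as a simple calculation over finding records (the (opened, closed) date-string shape is an assumed data model, not a standard format):

```python
from datetime import datetime
from statistics import mean

def mean_days_to_resolve(findings):
    # findings: list of (opened, closed) ISO-8601 date strings;
    # findings still open (closed is None) are excluded from the metric.
    durations = [
        (datetime.fromisoformat(c) - datetime.fromisoformat(o)).days
        for o, c in findings
        if c is not None
    ]
    return mean(durations) if durations else 0.0
```

Automating this kind of calculation against the vulnerability tracker's export is what makes the metric cheap enough to put on a dashboard.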
Managers should periodically inspect the results of backups to verify that the process functions effectively
and meets the organization’s data protection needs
this might include reviewing logs, inspecting hash values, or requesting an actual restore of a
system or file
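Inspecting hash values can be sketched as below (SHA-256 and the chunk size are arbitrary choices; a hash match shows this copy is intact, not that the backup process as a whole is sound):

```python
import hashlib

def file_sha256(path, chunk_size=65536):
    # Stream the file through SHA-256 so large backups are never
    # loaded into memory all at once.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(source_path, backup_path):
    # A backup copy is verified when its hash matches the source's.
    return file_sha256(source_path) == file_sha256(backup_path)
```

A periodic restore test remains necessary alongside hash checks, since a bit-perfect copy of an unusable source is still unusable.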
Business Continuity (BC): the processes used by an organization to ensure, holistically, that its vital
business processes remain unaffected or can be quickly restored following a serious incident
Disaster Recovery (DR): a subset of BC that focuses on restoring information systems after a disaster
These processes need to be periodically assessed, and regular testing of disaster recovery and business
continuity controls provides organizations with assurance that they are effectively protected against
disruptions to business ops
Protection of life is of the utmost importance and should be dealt with first before attempting to save
material things
The goal of the analysis process is to proceed logically from facts to actionable info
A list of vulns and policy exceptions is of little value to business leaders unless it's used in context, so
once all results have been analyzed, you're ready to start writing the official report
The analysis process leads to valuable results only if they are actionable
6.4.1 Remediation
Most vulnerabilities in the average org come not from software defects but from misconfigured systems,
inadequate policies, unsound business processes, or unaware staff
Vuln remediation should include all stakeholders, not just IT
While conducting security testing, cybersecurity pros may discover previously unknown vulns that they
are unable to correct immediately (perhaps implementing compensating controls in the interim)
Ethical disclosure: the idea that security pros who detect a vuln have a responsibility to report it to the
vendor, providing them with enough time to patch or remediate
the disclosure should be made privately to the vendor providing reasonable amount of time to
correct
if the vuln is not corrected, then public disclosure of the vuln is warranted, such that other
professionals can make informed decisions about future use of the product(s)
6.5.2 External
An external audit (sometimes called a second-party audit) is one conducted by (or on behalf of) a
business partner
External audits are tied to contracts; by definition, an external audit should be scoped to include only the
contractual obligations of an organization
6.5.3 Third-party
Third-party audits are often needed to demonstrate compliance with some government regulation or
industry standard
Advantages of having a third-party audit an organization:
they likely have breadth of experience auditing many types of systems, across many types of
organizations
they are not affected by internal dynamics or org politics
Disadvantage of using a third-party auditor:
cost: third-party auditors are going to be much more costly than internal teams; this means that the
organization is not likely to conduct audits as frequently
internal resources are still required to assist or accompany auditors, to answer questions and
guide