Exam Code: JY02
Exam Name: WGU Managing Cloud Security (JY02)
Associate Certification: Courses and Certificates
Samples: 12 Q&As
Save 40% on Full Managing Cloud Security Exam Dumps with
Coupon “40PASS”
Managing Cloud Security exam dumps provide the most effective material
to study and review all key WGU Managing Cloud Security (JY02) topics.
By thoroughly practicing with Managing Cloud Security exam dumps, you
can build confidence and pass the exam in a shorter time.
Practice Managing Cloud Security exam online questions below.
1. An organization is sharing personal information that is defined in its privacy policy with a
trusted third party.
What else should the organization communicate to the trusted third party about the personal
information?
A. The results of the organization's most recent privacy audit
B. A notice of any contractual obligations that do not align with the privacy policy
C. A copy of federal privacy laws regarding unauthorized data disclosure
D. The organization's privacy policy and handling practices
Answer: D
Explanation:
When sharing personal data with a trusted third party, organizations must ensure that the
recipient understands and adheres to the organization’s privacy policy and handling practices.
This ensures consistent treatment of personal information across entities and aligns with
consent provided by individuals.
Audit results and contractual notices are internal matters, while federal laws define obligations
but do not substitute for organizational policies. By explicitly sharing policies and practices,
organizations reinforce accountability and ensure compliance with privacy regulations such as
GDPR, HIPAA, or CCPA.
This communication sets expectations for data use, retention, and disclosure. It also provides a
defensible framework in case of regulatory inquiries, showing that due diligence was performed
when transferring data to third parties.
2. Which cloud computing service model allows customers to run their own application code
without configuring the server environment?
A. Data science as a service (DSaaS)
B. Infrastructure as a service (IaaS)
C. Software as a service (SaaS)
D. Platform as a service (PaaS)
Answer: D
Explanation:
Platform as a Service (PaaS) allows customers to focus on writing and deploying code without
managing the underlying infrastructure. The provider manages the operating system, runtime,
and middleware, enabling faster development cycles and reduced administrative overhead.
IaaS would require the customer to configure servers and operating systems, SaaS provides
ready-to-use applications, and DSaaS is a specialized category for analytics.
By abstracting the infrastructure, PaaS accelerates innovation and reduces operational burden
but also limits flexibility in some cases. Security responsibilities under PaaS focus on application-
level controls, while the provider handles infrastructure-level protections.
3. Which activity is within the scope of the cloud provider’s role in the chain of custody?
A. Setting data backup and recovery policies
B. Collecting and preserving digital evidence
C. Initiating and executing incident response
D. Classifying and analyzing data
Answer: B
Explanation:
In cloud environments, the provider’s role in the chain of custody primarily involves collecting
and preserving digital evidence when incidents or investigations occur. Because providers
manage the infrastructure, they have direct access to logs, storage systems, and virtual
machines necessary for evidence collection.
Backup policies and incident response may involve collaboration, but they remain customer
responsibilities in many service models. Data classification and analysis are business-driven
tasks, which customers must handle.
Providers must ensure that evidence collection is forensically sound and documented properly
to maintain legal admissibility. This responsibility is critical in maintaining trust and ensuring
compliance with laws and contractual obligations. It reinforces the shared responsibility model
by clearly defining which aspects of digital forensics belong to the provider.
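A core part of forensically sound evidence handling is recording a cryptographic digest at collection time so that any later alteration is detectable. A minimal Python sketch, with a hypothetical log excerpt standing in for real evidence:

```python
import hashlib

def evidence_digest(data: bytes) -> str:
    """Return a SHA-256 digest recorded at collection time.

    Logging this digest alongside who collected the evidence and when
    lets investigators later prove the bytes were not altered.
    """
    return hashlib.sha256(data).hexdigest()

# A hypothetical log excerpt collected as evidence; record its digest.
log_excerpt = b"2024-01-01T00:00:00Z auth failure for user admin"
original = evidence_digest(log_excerpt)

# Any later verification recomputes the digest and compares.
assert evidence_digest(log_excerpt) == original
```

In practice the digest, collector identity, and timestamp are all logged together, forming the documented chain that supports legal admissibility.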
4. An internal developer deploys a new customer information system at a company. The system
has an updated graphical interface with new fields.
Which type of functional testing ensures that the graphical interface used by employees to input
customer data behaves as the employees need it to?
A. Load testing
B. Regression testing
C. Security testing
D. Acceptance testing
Answer: D
Explanation:
Acceptance testing evaluates whether the system meets user requirements and performs as
expected in real-world conditions. In this case, employees need the graphical interface to work
properly for customer data entry. Acceptance testing confirms usability, accuracy, and
functionality from the end user’s perspective.
Load testing measures performance under stress, regression testing checks for errors
introduced by new changes, and security testing ensures system defenses. These are valuable,
but they do not validate end-user satisfaction and workflow alignment.
Acceptance testing is the final validation step before production deployment. It ensures that
updates deliver intended business value and user experience. By involving employees in
acceptance testing, organizations ensure successful adoption of new systems.
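Acceptance criteria can often be expressed as executable checks written from the user's point of view. A minimal sketch, assuming a hypothetical `validate_customer_form` function standing in for the new interface's input handling; the field names are illustrative, not from any real system:

```python
def validate_customer_form(fields: dict) -> list[str]:
    """Return user-facing error messages (an empty list means accepted)."""
    errors = []
    if not fields.get("name", "").strip():
        errors.append("Customer name is required.")
    if "@" not in fields.get("email", ""):
        errors.append("A valid email address is required.")
    return errors

# Acceptance-style checks phrased as what an employee needs to do:
# entering complete data succeeds, and missing data yields clear feedback.
assert validate_customer_form({"name": "Ada", "email": "ada@example.com"}) == []
assert "Customer name is required." in validate_customer_form({"email": "x@example.com"})
```

Real acceptance testing also involves employees exercising the interface directly, but encoding their requirements as checks like these keeps the criteria explicit.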
5. Developers need to be aware of a common application programming interface (API) threat
that occurs when attackers send malicious code through a form input to a web application so
that it may then be executed.
Which type of attack represents this API threat?
A. On-path
B. Injection
C. Credential
D. Denial-of-service
Answer: B
Explanation:
The described scenario is an injection attack. Injection occurs when unvalidated input, such as
SQL commands, script code, or OS instructions, is sent to an application through API forms or
parameters. If the application fails to sanitize input, the attacker's code may be executed with
the privileges of the application.
On-path attacks intercept communication, credential attacks target authentication, and denial-of-
service floods services. None involve code execution via unvalidated input.
Injection is a top risk in the OWASP API Security Top 10. Developers must implement input
validation, parameterized queries, and least privilege principles to mitigate this risk. API
gateways and WAFs provide additional layers of protection but cannot replace secure coding
practices.
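The difference between vulnerable and safe query construction can be shown with Python's built-in sqlite3 module; the table and payload below are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

malicious = "x' OR '1'='1"  # classic injection payload from a form field

# Unsafe: concatenating user input into SQL lets the payload rewrite the query.
unsafe_sql = "SELECT name FROM users WHERE name = '" + malicious + "'"
assert conn.execute(unsafe_sql).fetchall() == [("alice",)]  # payload matched every row

# Safe: a parameterized query treats the input as data, never as SQL.
safe_rows = conn.execute("SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()
assert safe_rows == []  # no user is literally named "x' OR '1'='1"
```

The parameterized form neutralizes the payload because the driver binds it as a value rather than interpreting it as query syntax.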
6. After selecting a new vendor, what should an organization do next as part of the vendor
onboarding process?
A. It should terminate the relationship with the vendor and dissolve technical agreements, data
transfers, and other connections with the vendor.
B. It should monitor the practices of the vendor by performing audits and confirming that the
vendor is meeting its contractual agreements.
C. It should evaluate and determine whether the vendor meets the organization's requirements
by evaluating its security policies.
D. It should confirm contractual details and arrange other details such as technical agreements,
data transfers, and encryption standards with the vendor.
Answer: D
Explanation:
Once a vendor has been chosen, the onboarding phase requires confirming contractual details
and
arranging technical agreements. This includes specifying encryption standards, data transfer
methods, SLAs, and compliance responsibilities. These discussions establish a clear foundation
for the partnership.
Auditing and monitoring occur later, during ongoing vendor management. Evaluating
requirements and policies occurs earlier, during vendor selection. Terminating a relationship is
an offboarding activity, not onboarding.
Clarifying technical and contractual details at onboarding ensures a secure, compliant, and
efficient partnership. It reduces risks of miscommunication and enforces accountability from the
beginning.
7. An organization designing a data center wants the ability to quickly create and shut down
virtual systems based on demand.
Which concept describes this capability?
A. Resource scheduling
B. High availability
C. Ephemeral computing
D. Maintenance mode
Answer: C
Explanation:
The capability to rapidly create and destroy virtual systems as demand fluctuates is known as
ephemeral computing. These short-lived resources are provisioned automatically when needed
and decommissioned when demand subsides.
Resource scheduling helps allocate resources but does not imply temporary lifespans. High
availability ensures continuous service, and maintenance mode is used for administrative tasks.
Ephemeral computing is central to elasticity in cloud environments, reducing costs and
improving scalability. For example, containers or serverless functions may run only while
needed and then disappear. This model optimizes utilization, lowers expenses, and supports
modern application architectures that demand agility.
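The provision-then-guaranteed-teardown lifecycle of ephemeral resources can be sketched with a Python context manager; the provisioning calls here are stand-ins, not a real provider SDK:

```python
from contextlib import contextmanager

ACTIVE = []  # stand-in for the provider's inventory of running instances

@contextmanager
def ephemeral_vm(name: str):
    """Provision a short-lived instance and guarantee its teardown."""
    ACTIVE.append(name)          # stand-in for "create instance"
    try:
        yield name
    finally:
        ACTIVE.remove(name)      # stand-in for "terminate instance"

with ephemeral_vm("worker-1") as vm:
    assert vm in ACTIVE          # exists only while demand (the block) lasts
assert ACTIVE == []              # destroyed automatically on exit
```

The same create/yield/destroy pattern underlies autoscaling groups, containers, and serverless runtimes: the resource's lifetime is bound to the work, not the other way around.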
8. An organization is evaluating which cloud computing service model it should implement. It is
considering either platform as a service (PaaS) or software as a service (SaaS).
Which risk associated with SaaS can the organization avoid by choosing PaaS?
A. Vendor lock-out
B. Vendor lock-in
C. Personnel threat
D. Natural disaster
Answer: B
Explanation:
With SaaS, applications are delivered entirely by the provider, and customers have little to no
control over the underlying platform or data portability. This creates a higher risk of vendor lock-
in, as migrating away from one SaaS provider to another may require reworking applications or
losing features.
In contrast, PaaS gives customers more flexibility by allowing them to build, deploy, and
manage their own applications while relying on standardized frameworks and platforms.
Because applications are customer-managed, switching providers or migrating workloads can
be easier compared to SaaS.
Vendor lock-out, personnel threats, and natural disasters are risks in any service model. The
key differentiator here is portability and flexibility. Choosing PaaS reduces dependence on a
single provider’s application features, thereby lowering vendor lock-in risk while still offloading
infrastructure management.
9. Which data destruction technique involves encrypting the data, followed by encrypting the
resulting keys with a different engine, and then destroying the keys resulting from the second
encryption round?
A. One-way hashing
B. Degaussing
C. Overwriting
D. Cryptographic erasure
Answer: D
Explanation:
Cryptographic erasure is a secure data sanitization technique that relies on encryption. The
process involves encrypting the data, encrypting the keys with a second layer, and then
destroying the encryption keys. Without the keys, the encrypted data becomes unreadable and
is effectively destroyed, even though the storage media remains intact.
One-way hashing is used for password storage, not full data destruction. Degaussing is for
magnetic media, and overwriting involves physically writing new data over existing sectors.
Cryptographic erasure is widely used in cloud environments where physical media cannot be
easily destroyed or reclaimed by customers. It ensures compliance with data retention and
privacy regulations while maintaining environmental sustainability by allowing reuse of storage
hardware.
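The encrypt-data, wrap-key, destroy-key sequence can be sketched in Python. The XOR cipher below is a toy stand-in for a real engine such as AES and must not be used for actual protection:

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy XOR stream for illustration only; real erasure uses AES or similar."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = b"customer records"
dek = os.urandom(16)                  # data encryption key (first engine)
ciphertext = xor_cipher(secret, dek)  # what is actually stored on the media

kek = os.urandom(16)                  # key encryption key (second engine)
wrapped_dek = xor_cipher(dek, kek)    # only the wrapped key is kept

# Cryptographic erasure: destroy the KEK. The wrapped DEK, and therefore
# the ciphertext, can no longer be recovered; the media itself is untouched.
del kek
assert ciphertext != secret           # data on the media is unreadable without keys
```

The media keeps holding ciphertext, but with both key layers gone the data is effectively destroyed, which is why this works even on storage the customer can never physically access.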
10. In most redundant array of independent disks (RAID) configurations, data is stored across
different disks.
Which method of storing data is described?
A. Striping
B. Archiving
C. Mapping
D. Crypto-shredding
Answer: A
Explanation:
The method described is striping, which is a technique used in RAID configurations to improve
performance and distribute risk. Striping involves splitting data into smaller segments and
writing those segments across multiple disks simultaneously. For example, if a file is divided into
four parts, each part is written to a separate disk in the RAID array.
This parallelism enhances input/output (I/O) performance because multiple drives can be
accessed
at once. It also provides resilience depending on the RAID level. While striping by itself (RAID 0)
increases performance but not redundancy, when combined with mirroring or parity (e.g., RAID
5 or RAID 10), it offers both speed and fault tolerance.
The purpose of striping in the data management context is to optimize how data is stored,
accessed, and protected. It is fundamentally different from archiving, mapping, or crypto-
shredding, as those serve different objectives (long-term storage, logical placement, or secure
deletion). Striping is central to high-performance storage systems and supports availability in
mission-critical environments.
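The round-robin placement that striping performs can be modeled in a few lines of Python; this is a minimal model of RAID 0, with no parity or redundancy:

```python
def stripe(data: bytes, disks: int, block: int = 4) -> list[bytes]:
    """Split data into fixed-size blocks written round-robin across disks."""
    stripes = [bytearray() for _ in range(disks)]
    for i in range(0, len(data), block):
        stripes[(i // block) % disks].extend(data[i:i + block])
    return [bytes(s) for s in stripes]

# Block "ABCD" goes to disk 0, "EFGH" to disk 1, "IJKL" back to disk 0, ...
assert stripe(b"ABCDEFGHIJKL", disks=2) == [b"ABCDIJKL", b"EFGH"]
```

Because consecutive blocks land on different disks, a read of the whole file can drive both disks in parallel, which is the performance benefit striping provides.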
11. As part of training to help data center engineers understand the different attack vectors
that affect infrastructure, the engineers work through a presentation on access and availability
attacks. Part of the lab requires them to identify different threat vectors by name.
Which threat prohibits the use of data by preventing access to it?
A. Brute force
B. Encryption
C. Rainbow tables
D. Denial of service
Answer: D
Explanation:
The described threat is a Denial of Service (DoS) attack. In security contexts, a DoS attack aims
to make a system, application, or data unavailable to legitimate users by overwhelming
resources. Unlike brute force or rainbow table attacks, which target authentication mechanisms,
or encryption, which is a defensive control, DoS focuses on disrupting availability, the "A" in the
Confidentiality, Integrity, Availability (CIA) triad.
DoS can be executed in many ways: flooding a network with traffic, exhausting server memory,
or overwhelming application processes. When scaled by multiple coordinated systems, it
becomes a Distributed Denial of Service (DDoS) attack. In either case, the effect is the
same: authorized users cannot access critical data or services.
For cloud environments, where service uptime is crucial, DoS protections such as rate limiting,
auto-scaling, and upstream filtering are essential. Training data center engineers to recognize
DoS helps them understand the importance of resilience strategies and ensures continuity
planning includes availability safeguards.
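One of the mitigations mentioned above, rate limiting, is commonly implemented as a token bucket. A minimal Python sketch:

```python
import time

class TokenBucket:
    """A minimal token-bucket rate limiter, one common DoS mitigation.

    Requests consume tokens; tokens refill at a fixed rate, so a flood
    beyond the sustained rate is rejected instead of exhausting the server.
    """
    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens added per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, capacity=5)
burst = [bucket.allow() for _ in range(20)]   # a sudden flood of 20 requests
assert burst.count(True) <= 6                 # only roughly the burst capacity passes
```

Production systems apply this kind of limiting at the API gateway or load balancer, combined with auto-scaling and upstream filtering for volumetric attacks.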
12. Which concept focuses on operating highly available workloads in the cloud?
A. Resource hierarchy
B. Security
C. Operational excellence
D. Reliability
Answer: D
Explanation:
Reliability in cloud design ensures workloads can recover quickly from disruptions and continue
operating as expected. This concept focuses on high availability, fault tolerance, and disaster
recovery. Reliability requires implementing redundancy, backup strategies, and robust
monitoring.
Security ensures data protection, operational excellence covers continuous improvement, and
resource hierarchy refers to organizational structures, but none focus specifically on availability
and resilience.
By prioritizing reliability, organizations design cloud architectures capable of withstanding
failures at multiple layers: compute, storage, networking, and even regions. This design
principle ensures customer trust and compliance with service-level agreements.
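One concrete reliability building block is retrying transient failures with exponential backoff. A minimal sketch, with a stand-in flaky operation in place of a real service call:

```python
import time

def call_with_retries(operation, attempts: int = 3, base_delay: float = 0.01):
    """Retry a flaky operation with exponential backoff.

    A small resilience pattern behind reliable workloads: transient
    failures are absorbed instead of propagating to users.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise                                # give up after the last attempt
            time.sleep(base_delay * (2 ** attempt))  # back off before retrying

# A stand-in operation that fails twice before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient fault")
    return "ok"

assert call_with_retries(flaky) == "ok"
assert calls["n"] == 3  # two failures absorbed, third attempt succeeded
```

Paired with redundancy and monitoring, patterns like this keep brief faults in one layer from becoming user-visible outages.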