CEHv13 Module 02 — Detailed Step-by-Step Lab Sheets
Important: All exercises must be performed only in a controlled lab environment: your own VMs,
intentionally vulnerable targets you own (e.g., Metasploitable, DVWA), or systems for which you have
explicit written permission from the owner. Never run these steps against third‑party systems without authorization.
Each lab below contains: Objective, Prerequisites, Tools, Step-by-step procedure (commands /
actions), Expected output / observations, Notes & safety, and Assessment tasks.
Lab 1 — Passive OSINT Recon (Search engines & footprints)
Objective: Gather public information about a lab target using passive OSINT sources.
Prerequisites: A test domain/service you control (e.g., lab.example.com).
Tools: Web browser, search engines (Google/Bing), Archive.org, LinkedIn, job boards.
Step-by-step procedure
1. Choose a lab target domain: lab.example.com .
2. Perform broad queries in search engines, for example:
"lab.example.com"
site:lab.example.com "login"
site:lab.example.com "@lab.example.com"
3. Record discovered hostnames, public documents, blog posts, and exposed data (save URLs to a text file osint-findings.txt ).
4. Use Archive.org (Wayback Machine) to view historical versions of pages — note changed pages or exposed config files in older snapshots (a command-line option is sketched after this list).
5. Search LinkedIn for employees at the lab organization and note job titles that reveal technology stacks (e.g., "AWS", "React", "Apache").
6. Cross-check public-facing email addresses and infer the likely email format (e.g., first.last@lab.example.com).
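For step 4, the Wayback Machine also exposes a public CDX API that can be queried from the command line. The following is a minimal sketch, assuming the web.archive.org CDX endpoint is reachable from your lab network; the url/limit/fl parameters are standard CDX options, but treat the results only as leads:
# List up to 20 archived captures of the lab domain (timestamp + original URL)
# and append them to the findings file for later review.
curl -s "http://web.archive.org/cdx/search/cdx?url=lab.example.com/*&limit=20&fl=timestamp,original" >> osint-findings.txt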
Expected observations
• List of public pages, subdomains, and possibly public documents. Example saved lines in osint-findings.txt :
http://lab.example.com/reports/Q1-2025.pdf
subdomain.lab.example.com
Contact: contact@lab.example.com
Notes & safety
• Only collect data that is intentionally public. Do not attempt to access restricted or password-
protected areas.
Assessment
• Produce a one-paragraph OSINT summary listing 5 public findings and hypothesize one likely
employee email format.
Lab 2 — Google Dorking & Targeted Document Discovery
Objective: Use search operators to discover exposed files and pages on your lab host.
Prerequisites: Domain lab.example.com with a few deliberately uploaded documents (PDF, DOCX)
for testing.
Tools: Web browser, search engines.
Step-by-step procedure
1. In a search engine, try these operators (replace domain):
site:lab.example.com filetype:pdf
site:lab.example.com inurl:admin
site:lab.example.com "index of"
site:lab.example.com "password" OR "credential"
2. Save useful queries and results into dorking-results.txt (a quick liveness check for the saved URLs is sketched after this list).
3. For any discovered documents, download them only if they are explicitly public and meant for testing. Inspect metadata locally (see Lab 6 for metadata extraction).
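If dorking-results.txt holds one URL per line, a simple curl loop can confirm which saved results are still live on your lab host. A minimal sketch (HEAD requests only, and only against URLs you are authorized to probe):
# Print an HTTP status code next to each saved URL
while read -r url; do
  curl -s -o /dev/null -I -w "%{http_code} $url\n" "$url"
done < dorking-results.txt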
Example dorks and what they find
• site:lab.example.com filetype:docx — public Word documents.
• site:lab.example.com intitle:"index of" — directory listings (if enabled).
Expected observations
• A list of URLs pointing to files like lab.example.com/uploads/employee-list.xlsx .
Notes & safety
• Do not download or open sensitive files from real third‑party domains. Always work with your
lab-hosted content.
Assessment
• Which dork delivered the most relevant results and why? Provide the dork and three sample
results from your lab host.
Lab 3 — WHOIS & Domain Reconnaissance
Objective: Retrieve registration and administrative data for the lab domain and map related domains.
Prerequisites: Domain name under your control or a purposely registered test domain.
Tools: whois (Linux), web WHOIS services.
Step-by-step procedure (Linux)
1. Run WHOIS on your domain:
whois lab.example.com
2. Save the raw output to a file:
whois lab.example.com > whois-output.txt
3. Identify fields: Registrar, Registration Date, Expiry Date, Name Servers, Registrant Contact (if public).
4. Search for related domains sharing similar registrant patterns (manually review the whois output for contacts or use web services in a lab setting); a grep sketch for pulling the key fields follows this list.
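A small sketch for extracting the step 3 fields from the saved file in one pass (WHOIS field labels vary between registries, so adjust the pattern to match your output):
# Extract the most commonly needed fields from the saved WHOIS output
grep -Ei 'registrar:|creation date:|expiry date:|name server:|registrant' whois-output.txt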
Expected output (example fragment)
Registrar: ExampleRegistrar, Inc.
Creation Date: 2024-07-01
Registry Expiry Date: 2026-07-01
Name Server: ns1.lab.example.com
Notes & safety
• GDPR/redaction may hide registrant details; this is expected for many modern domains.
Assessment
• List the registrar, creation date, and name servers from whois-output.txt . Note any
registration anomalies.
Lab 4 — DNS Enumeration & Zone Recon (Passive → Active)
Objective: Enumerate DNS records and discover subdomains using passive and active techniques.
Prerequisites: Lab domain with multiple DNS records and subdomains configured.
Tools: dig , host , nslookup , certificate transparency queries (online), openssl for TLS
inspection.
Step-by-step procedure
1. Request common DNS records with dig :
dig A lab.example.com +noall +answer
dig MX lab.example.com +noall +answer
dig NS lab.example.com +noall +answer
Save results:
dig A lab.example.com +noall +answer > dig-A.txt
2. Enumerate subdomains (active dictionary‑based brute force only in your lab). Use a small wordlist and host in a loop:
for s in www mail ftp admin dev; do host $s.lab.example.com || true; done
3. Inspect TLS certificates for hostnames (passive); a certificate transparency lookup is sketched after this list:
openssl s_client -connect lab.example.com:443 -servername lab.example.com </dev/null 2>/dev/null | openssl x509 -text -noout | grep -A2 "Subject Alternative Name"
4. Note TXT records (SPF, DMARC):
dig TXT lab.example.com +short
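The Tools list mentions certificate transparency queries; one passive option is the public crt.sh service, sketched below. This assumes crt.sh is reachable and jq is installed, and the JSON field names follow crt.sh's current output, so treat the results as leads:
# List certificate names logged for the lab domain and its subdomains
curl -s "https://crt.sh/?q=%25.lab.example.com&output=json" | jq -r '.[].name_value' | sort -u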
Expected observations
• A list of A records, MX hostnames, NS servers, TXT values (SPF). Example dig output line:
lab.example.com. 300 IN A 203.0.113.5
Notes & safety
• Avoid aggressive subdomain brute forcing against third-party domains. Use small lists and only
on lab targets.
Assessment
• Produce a DNS summary: A records, MX, NS, discovered subdomains, and any suspicious/dangling CNAMEs.
Lab 5 — Website Fingerprinting & Technology Discovery
Objective: Identify web server software, frameworks, and exposed endpoints used by a lab website.
Prerequisites: Test webserver with various pages and a sitemap/robots file.
Tools: Browser DevTools, curl , wget , openssl , optional whatweb or wappalyzer (both can be
more active — run in lab only).
Step-by-step procedure
1. Inspect headers with curl :
curl -I https://lab.example.com/
Save to a file:
curl -I https://lab.example.com/ > headers.txt
Look for headers like Server: , X-Powered-By: .
2. Retrieve robots and sitemap:
curl -sL https://lab.example.com/robots.txt
curl -sL https://lab.example.com/sitemap.xml
3. View page HTML source in a browser and search for comments and generator meta tags (e.g., <!-- WP version --> , <meta name="generator" content="WordPress 6.2"/> ); a command-line equivalent is sketched after this list.
4. Passive TLS/cert check (similar to Lab 4) to find SAN hostnames.
5. (Optional, active) Run whatweb if installed in the lab:
whatweb https://lab.example.com/
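As a command-line complement to step 3, a small sketch that surfaces generator meta tags and HTML comments from the homepage (plain grep patterns, so anything split across lines will be missed):
# Fetch the homepage and print generator tags and HTML comments
curl -s https://lab.example.com/ | grep -iE 'name="generator"|<!--' | head -n 20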
Expected observations
• HTTP headers with server info; robots/sitemap entries; HTML meta generator tags.
Example curl -I output snippet:
HTTP/1.1 200 OK
Server: Apache/2.4.52 (Ubuntu)
X-Powered-By: PHP/7.4.3
Notes & safety
• Headers can be misleading (deliberately obfuscated). Treat as clues rather than certainties.
Assessment
• Produce a short report listing the server banner, CMS (if detectable), and three URLs from robots/
sitemap to check further.
Lab 6 — Website Mirroring & Local Analysis (Safe)
Objective: Mirror a lab website for offline analysis and search mirrored files for artifacts and metadata.
Prerequisites: Lab website you control.
Tools: wget , file , strings , exiftool (for document metadata), grep / ripgrep .
Step-by-step procedure
1. Mirror the site locally with wget (only your lab site):
mkdir -p ~/lab-mirror
cd ~/lab-mirror
wget --mirror --convert-links --adjust-extension --page-requisites --no-parent https://lab.example.com/
2. List mirrored files:
find . -type f | sed -n '1,50p'
3. Search for likely sensitive keywords in mirrored files:
grep -RIn "password\|secret\|apikey\|backup" . | sed -n '1,50p'
4. Extract metadata from documents (only from files you own); a batch version for many documents is sketched after this list:
exiftool path/to/document.pdf
5. Inspect binary artifacts with file and strings if necessary:
file assets/image1
strings assets/binaryfile | head -n 50
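For step 4, when the mirror contains many documents, exiftool can report metadata for all of them in one pass. A minimal sketch, assuming exiftool is installed and every file under the mirror is your own:
# Report Author/Creator/Producer metadata for every PDF and Office document in the mirror
find . -type f \( -name '*.pdf' -o -name '*.docx' -o -name '*.xlsx' \) -exec exiftool -Author -Creator -Producer {} +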
Expected observations
• Local copies of HTML, JS, CSS, and any public documents; examples of metadata output from
exiftool showing Author or Creator fields.
Notes & safety
• Mirroring can generate significant load; throttle with --wait=1 and --limit-rate=50k when mirroring larger sites, even in the lab.
Assessment
• Submit a list of 5 artifacts you found in the mirror (e.g., backup.zip , config.bak , Author
field in report.pdf ) and remediation suggestions.
Lab 7 — Email Harvesting & Social Engineering Surface (Ethical)
Objective: Build a role-based contact list and map social-engineering surface for a lab org.
Prerequisites: Publicly available posts and a lab domain.
Tools: Browser, manual spreadsheet (CSV), public directories.
Step-by-step procedure
1. Compile a CSV contacts.csv with columns: Name, Role, Email, Source .
2. Harvest addresses from public pages (contact pages, staff directories) and from downloaded documents (Lab 6 metadata); a grep sketch over the Lab 6 mirror follows this list.
3. Infer email formats by matching known addresses: for example, if you find jane.doe@lab.example.com and john.smith@lab.example.com, hypothesize the format first.last@lab.example.com.
4. Categorize contacts into roles: IT , HR , Finance , Executive .
5. For each role, note the potential social engineering impact (e.g., Finance could authorize payments).
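For step 2, addresses that appear anywhere in the Lab 6 mirror can be collected with a simple grep. A sketch, assuming the mirror lives in ~/lab-mirror and the output file contacts-raw.txt is just a working name:
# Pull unique lab-domain email addresses out of the mirrored files
grep -RhoE '[A-Za-z0-9._%+-]+@lab\.example\.com' ~/lab-mirror | sort -u >> contacts-raw.txt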
Expected observations
• A CSV with role-based emails and inferred formats; sample row (illustrative):
Jane Doe,Finance,jane.doe@lab.example.com,Staff directory page
Notes & safety
• Do not send phishing emails or attempt to trick real users. This lab is for mapping only.
Assessment
• Produce a 1-page memo describing the top 3 role targets for a simulated phishing exercise and
recommended mitigations (e.g., MFA for finance emails).
Lab 8 — Footprinting Countermeasures & Hardening Checklist
Objective: Convert findings into prioritized mitigations and a remediation plan.
Prerequisites: Outputs from Labs 1–7.
Tools: Text editor, spreadsheet or ticketing template.
Step-by-step procedure
1. Gather findings from previous labs into a single spreadsheet with columns: Finding, Severity (Low/Med/High), Evidence, Remediation, Owner, ETA (a sample header row appears after this list).
2. For each finding, assign remediation steps. Examples:
• Exposed document: remove it from the public webroot; move it to an internal file server; rotate any leaked credentials.
• Open directory listing: disable directory indexes; add an index.html .
• Public mail server: ensure SPF/DKIM/DMARC are configured correctly.
3. Create an executive one-page summary: 3 highest-priority issues, business impact, recommended next steps.
4. Create a technical checklist for ops teams with exact commands/config snippets where appropriate.
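If the step 1 spreadsheet is kept as CSV, the header row and one entirely illustrative entry might look like this:
Finding,Severity,Evidence,Remediation,Owner,ETA
"Directory listing enabled on /uploads/",High,"dorking-results.txt","Disable directory indexes; add index.html","Web ops","2 weeks"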
Example remediation snippets
• Disable directory listing (Apache httpd.conf ):
<Directory /var/www/html>
Options -Indexes
</Directory>
• Basic SPF TXT record example (lab only):
"v=spf1 ip4:203.0.113.5 -all"
Expected output
• A prioritized spreadsheet and a one-page executive summary PDF.
Notes & safety
• Avoid exposing remediation details for production in public reports; keep them within
authorized teams.
Assessment
• Produce the spreadsheet with at least 8 findings and show remediation, owner, and ETA for the
top 3.
Appendix — Useful commands summary
• whois domain.tld
• dig A domain.tld +noall +answer
• dig MX domain.tld +short
• curl -I https://domain.tld
• openssl s_client -connect domain.tld:443 -servername domain.tld </dev/null | openssl x509 -text -noout
• wget --mirror --convert-links --adjust-extension --page-requisites --no-parent https://domain.tld/
• exiftool file.pdf
• grep -RIn "password\|secret\|apikey" .