Engagement types (black / grey / white box)
Black-box
Little or no prior knowledge. Tester performs full recon (e.g. external test with only the org name, or internal test with no IP list). Closest to a real attacker's perspective; can miss issues that require internal/design context.
Grey-box
Some information provided up front: in-scope IPs/ranges, low-priv credentials, app/network diagrams. Simulates insider or post-breach; less time on recon, more on misconfigs and exploitation.
White-box
Full access to design, source, configs, credentials. Aim is maximum finding coverage; least representative of a real attacker's perspective.
Choose scope and rules of engagement (e.g. no DoS, no phishing) per SOW and get them in writing.
Security Assessment Types
Vulnerability Assessment
Identify and categorize known weaknesses via scanning + validation. Little to no manual exploitation.
Checklist-driven, scanner-heavy, results in remediation plan. Appropriate for all orgs.
Penetration Test
Simulated attack to determine if/how a network can be penetrated. Manual + automated.
Requires signed legal scope, medium-to-high security maturity orgs. Goes beyond scanning into exploitation, lateral movement, post-exploitation.
Security Audit
Externally mandated compliance check (government, industry).
Not voluntary — driven by regulation (PCI DSS, HIPAA, etc.).
Bug Bounty
Public program inviting researchers to find vulns for payment.
Large orgs with high maturity; need a dedicated triage team. Usually no automated scanning allowed.
Red Team
Evasive black-box attack simulation by experienced operators. Goal-oriented (e.g. reach a critical DB).
Only reports the chain that reached the objective, not every finding.
Purple Team
Red + Blue working together. Blue observes/provides input during red team campaigns.
Collaborative; improves detection and response in real-time.
Pentester specializations: Application (web apps, APIs, mobile, thick-client, source code review), Network/Infrastructure (networking devices, servers, AD, scanners like Nessus alongside manual testing), Physical (door bypass, tailgating, vent crawling), Social Engineering (phishing, vishing, pretexting).
Vulnerability Assessment vs. Penetration Test
A VA goes through a checklist: Do we meet this standard? Do we have this config? The assessor runs a vuln scan, validates critical/high/medium findings to rule out false positives, but does not pursue priv esc, lateral movement, or post-exploitation.
A pentest simulates a real attack. It includes manual techniques beyond what scanners find. Only appropriate after some VAs have been conducted and fixes applied.
They complement each other. Orgs should run VAs continuously and pentests annually or semi-annually.
Compliance Standards
PCI DSS (Payment Card Industry Data Security Standard)
Orgs that store/process/transmit cardholder data (banks, online stores).
Requires internal + external scanning. Cardholder Data Environment (CDE) must be segmented from the regular network.
HIPAA (Health Insurance Portability and Accountability Act)
Healthcare — protects patient data.
Risk assessment + vulnerability identification required for accreditation.
FISMA (Federal Information Security Management Act)
U.S. government operations and information.
Requires documented vulnerability management program.
ISO 27001
International information security management.
Requires quarterly internal + external scans.
Pentesting Standards
PTES (Penetration Testing Execution Standard): Pre-engagement → Intel Gathering → Threat Modeling → Vuln Analysis → Exploitation → Post-Exploitation → Reporting
OSSTMM (Open Source Security Testing Methodology Manual): 5 channels — Human Security, Physical Security, Wireless, Telecommunications, Data Networks
NIST SP 800-115: Planning → Discovery → Attack → Reporting
Key Risk Terminology
Threat — a process or actor that could exploit a vulnerability.
Exploit — code or technique that takes advantage of a vulnerability. Sources: Exploit-DB, Rapid7 DB, GitHub.
Risk — the possibility of harm from a threat exploiting a vulnerability. Measured by likelihood × impact.
Risk matrix (likelihood × impact):

                    Low Impact    Medium Impact   High Impact
High Likelihood     Medium (3)    High (4)        Critical (5)
Medium Likelihood   Low (2)       Medium (3)      High (4)
Low Likelihood      Lowest (1)    Low (2)         Medium (3)
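The likelihood × impact matrix above can be sketched as a small lookup table. This is a hypothetical helper — the `RISK_MATRIX` and `risk_rating` names are illustrative, not from any standard or library:

```python
# Illustrative encoding of the likelihood x impact risk matrix above.
RISK_MATRIX = {
    ("high", "low"): (3, "Medium"),
    ("high", "medium"): (4, "High"),
    ("high", "high"): (5, "Critical"),
    ("medium", "low"): (2, "Low"),
    ("medium", "medium"): (3, "Medium"),
    ("medium", "high"): (4, "High"),
    ("low", "low"): (1, "Lowest"),
    ("low", "medium"): (2, "Low"),
    ("low", "high"): (3, "Medium"),
}

def risk_rating(likelihood: str, impact: str) -> tuple:
    """Look up the (score, label) cell for a likelihood/impact pair."""
    return RISK_MATRIX[(likelihood.lower(), impact.lower())]

print(risk_rating("High", "High"))  # (5, 'Critical')
```

A dict keyed on both axes keeps the table flat and makes the mapping explicit, which is handy when a report template needs the numeric score and the label together.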
CVSS Scoring
CVSS v3.1 Calculator — scores range 0–10 based on three metric groups:
Base Metrics (characteristics of the vuln itself):
Exploitability: Attack Vector, Attack Complexity, Privileges Required, User Interaction
Impact: Confidentiality, Integrity, Availability (CIA triad)
Temporal Metrics (change over time):
Exploit Code Maturity: Unproven → PoC → Functional → High
Remediation Level: Official Fix → Temporary Fix → Workaround → Unavailable
Report Confidence: Unknown → Reasonable → Confirmed
Environmental Metrics (org-specific context):
Modified Base Metrics adjusted by the org's CIA requirements (Not Defined / Low / Medium / High).
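CVSS v3.1 also defines a qualitative severity scale for the resulting score (None 0.0, Low 0.1–3.9, Medium 4.0–6.9, High 7.0–8.9, Critical 9.0–10.0). A minimal sketch of that mapping:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.1 score to its qualitative severity rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_severity(9.8))  # Critical
```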
Microsoft DREAD is a complementary model that rates five categories on a 10-point scale each: Damage Potential, Reproducibility, Exploitability, Affected Users, Discoverability.
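One common way to combine DREAD ratings — assuming each category is scored 0–10 — is a simple average; the `dread_score` helper below is illustrative, not an official Microsoft API:

```python
def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Average the five DREAD categories (each rated 0-10) into one score."""
    ratings = [damage, reproducibility, exploitability,
               affected_users, discoverability]
    if any(not 0 <= r <= 10 for r in ratings):
        raise ValueError("each DREAD category is rated 0-10")
    return sum(ratings) / len(ratings)

print(dread_score(8, 10, 7, 10, 10))  # 9.0
```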
CVE Lifecycle
OVAL (Open Vulnerability and Assessment Language) provides XML-based definitions for detecting vulns without exploitation. Four definition classes: Vulnerability, Compliance, Inventory, Patch. ID format: oval:org.mitre.oval:obj:1116.
CVE ID assignment stages:
Confirm the issue is a vulnerability (code exploitable, impacts CIA) and no existing CVE covers it.
Contact the affected vendor (good-faith responsible disclosure).
If vendor is a CNA, they assign the CVE. Otherwise use a third-party CNA.
Fall back to the CVE Web Form.
Receive confirmation email; provide additional info if requested.
CVE ID assigned (not yet public).
Public disclosure once all parties are aware.
Announce — ensure each CVE maps to a distinct vulnerability.
Provide details for the official NVD listing.
Responsible disclosure means working with the vendor to ensure a patch is available before public announcement, preventing zero-day exploitation.