The Compliance Test vs The Real Test
Most organisations have had a penetration test. Far fewer have had a penetration test that would catch a real attacker.
The difference comes down to scope, methodology, and what happens after the report lands.
Anatomy of a Checkbox Pen Test
- Automated scanner run against production IP range
- Report lists CVEs with CVSS scores
- No business context — a vulnerability in your payment service gets the same priority score as one in your marketing microsite
- Remediation guidance is generic ("update the package")
- No retest to validate fixes
This is what you get from most compliance-motivated engagements. It satisfies the auditor. It does not meaningfully improve your security posture.
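The prioritization problem above can be made concrete. A minimal sketch (hypothetical findings and impact weights, not output from any real scanner) of why sorting by CVSS alone misleads:

```python
# Two hypothetical findings: a critical-rated CVE on a low-value asset,
# and a lower-rated CVE on the payment service. All values are illustrative.
findings = [
    {"host": "marketing-microsite", "cve": "CVE-2023-0001", "cvss": 9.1, "impact": "low"},
    {"host": "payment-service", "cve": "CVE-2023-0002", "cvss": 7.4, "impact": "critical"},
]

# Checkbox report: rank by CVSS alone.
by_cvss = sorted(findings, key=lambda f: f["cvss"], reverse=True)

# Context-aware triage: weight severity by what the asset actually protects.
impact_weight = {"low": 0.2, "medium": 0.6, "high": 1.0, "critical": 1.5}
by_risk = sorted(
    findings,
    key=lambda f: f["cvss"] * impact_weight[f["impact"]],
    reverse=True,
)

print([f["host"] for f in by_cvss])  # ['marketing-microsite', 'payment-service']
print([f["host"] for f in by_risk])  # ['payment-service', 'marketing-microsite']
```

The same two findings change order entirely once business context enters the score, which is exactly the judgment a checkbox report never makes.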
Anatomy of an Adversary Simulation
- Starts with threat modelling: who are your realistic adversaries and what do they want?
- Attack paths are goal-oriented (e.g., "exfiltrate customer PII" or "access payment processing environment")
- Combines automated tooling with manual exploitation attempts
- Tests detection capability — do your alerts fire?
- Social engineering components (phishing, pretexting) are included
- Full attack narrative in the report, not just a vulnerability list
- Debrief with the engineering team to transfer knowledge, not just findings
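Goal-oriented attack paths can be reasoned about as a graph search: footholds and assets are nodes, plausible pivots are edges, and the tester asks whether any chain reaches the objective. A minimal sketch, with an entirely hypothetical environment (none of these asset names come from the text):

```python
from collections import deque

# Hypothetical environment: edges are pivots an attacker could plausibly make.
edges = {
    "phished-workstation": ["internal-wiki", "file-share"],
    "file-share": ["ci-runner"],          # credentials stored in build scripts
    "ci-runner": ["payment-processing"],  # deploy key reaches the target env
    "internal-wiki": [],
}

def attack_path(foothold, objective):
    """Breadth-first search for the shortest pivot chain to the objective."""
    queue = deque([[foothold]])
    seen = {foothold}
    while queue:
        path = queue.popleft()
        if path[-1] == objective:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # objective unreachable from this foothold

print(attack_path("phished-workstation", "payment-processing"))
# ['phished-workstation', 'file-share', 'ci-runner', 'payment-processing']
```

This is the shape of the "attack narrative" a good report tells: initial foothold, each pivot, final impact, rather than a flat list of nodes with scores.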
The Question to Ask Your Provider
"Walk me through a test you ran last year where you achieved your primary objective. What was the initial foothold, what was the pivot path, and what was the final impact?"
If they can't answer this with a specific story, they're running automated scans.
What Actually Changes After a Good Test
A good penetration test changes your detection capability, not just your patch list. The best outcomes include new detection rules in your SIEM, improved incident response runbooks, and architectural changes that reduce attack surface — not just patched CVEs.
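To make "new detection rules" concrete: a hedged sketch of the kind of rule a good test leaves behind, here flagging bulk customer-record reads that bypass the application tier. Every field name and threshold is illustrative, not tied to any particular SIEM:

```python
# Hypothetical detection rule derived from a test that exfiltrated PII via
# direct database access. Field names and the row threshold are assumptions.
def detects_bulk_pii_read(event):
    return (
        event.get("actor_type") == "service_account"
        and event.get("table") == "customers"
        and event.get("rows_returned", 0) > 10_000
        and event.get("source") != "app-server"  # direct DB access, not via the app
    )

# A log event resembling the attack path the testers actually used.
event = {
    "actor_type": "service_account",
    "table": "customers",
    "rows_returned": 250_000,
    "source": "bastion-host",
}
print(detects_bulk_pii_read(event))  # True
```

The point is provenance: the rule encodes a path an attacker actually walked during the engagement, so it catches the technique rather than one CVE.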