Core argument
CISOs are increasingly measured not by the security they implement, but by the breaches they fail to prevent. Most cybersecurity investments create a false sense of protection because they’re never truly tested under realistic conditions.
Zero trust introduced new controls, but the new wave of agentic AI solutions will again fundamentally challenge how we apply security to the data we can access.
Key angles:
- The “security perimeter is no more” phenomenon – tools are deployed but never validated, or lack visibility altogether
- Current security investments are struggling to maintain effectiveness against the rate of AI tool change
- Compliance checkboxes with annual checks are completely insufficient to safeguard businesses, users and data
- The rhetorical question: “When was the last time you actually proved your security works?”
The Monday Morning Question
The CISO has just presented the quarterly security update.
- Slides full of green checkmarks.
- Compliance frameworks: implemented. EDR solution: deployed.
- Security awareness training: 94% completion rate.
- Multi-factor authentication: rolled out and enforced across the estate.
The board nods approvingly. Cybersecurity budget approved. Threat level – increasing.
Here’s the question nobody asked: “Can you prove any of this actually works?”
The Comfort of Untested Assumptions
In 2025, a UK manufacturing services firm discovered they’d been breached for 27 months.
- They had invested over £20 million in security tooling in the preceding two years.
- Their security posture assessment showed “maturing” across most frameworks.
- Their penetration test six months earlier had found only minor issues, most remediated.
- They were ransomed in 2025 affecting the business and multiple suppliers.
So what went wrong? Nothing was ever truly tested.
Legacy systems were operating with multiple vulnerabilities. There were warning signs. Targeted malicious activities increased considerably in 2023, with multiple dark web data leaks confirmed in 2024. This was a sustained, state-orchestrated event.
Multiple EDR solutions were deployed but alerting had been tuned down to reduce “noise.” Their SIEM was collecting logs but nobody had verified it could detect the emerging attack patterns that were occurring.
Whilst the incident response plan was considered comprehensive on paper, it had never been executed fully or under pressure. Their backup solution had never been tested against ransomware encryption.
If this company spent millions building better cybersecurity, how did attackers execute such a devastating event? The gaps were neither known nor tested.
Related Resource: You’ve Been Breached. Can You Answer these Three Questions?
Reality Check: The Incident Response Testing Gap
Recent attack learnings indicate:
- Average time to detect a breach: 194 days (all industry verticals)
- Percentage of organisations that fully test incident response plans annually: 32%
- Organisations that have validated immutable backup restoration under ransomware attack conditions: Less than 15%
- Mean time between comprehensive security validations: 14 months
Meanwhile, attacks occur constantly and are rapidly evolving. Automated scanners probe for vulnerabilities 24/7.
Vendor vulnerabilities are now immediately exploited in attacker campaigns seeking to establish backdoors prior to remediation and hardening.
Phishing campaigns may test your users weekly, but fewer than 25% of organisations actually follow up with higher-risk and repeat-offending users. Gaps may not be visible, or may not be in your direct control.
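Following up on repeat clickers is mostly a data problem. As a minimal sketch – assuming a per-campaign export of clicked users from your phishing platform (the field layout and user names here are illustrative) – flagging repeat offenders can be as simple as:

```python
from collections import Counter

def repeat_clickers(campaign_results, threshold=2):
    """Flag users who clicked in `threshold` or more phishing simulations.

    campaign_results: list of (campaign_id, clicked_user) pairs, e.g. from
    your phishing platform's CSV export (field names are illustrative).
    """
    clicks = Counter(user for _, user in campaign_results)
    return sorted(u for u, n in clicks.items() if n >= threshold)

# Illustrative export covering three monthly simulations
results = [
    ("2025-01", "alice"), ("2025-01", "bob"),
    ("2025-02", "alice"), ("2025-03", "alice"),
    ("2025-03", "carol"),
]
print(repeat_clickers(results))  # → ['alice']
```

The point is not the code but the process: a list like this should feed targeted follow-up training, not just sit in a campaign report.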
The point is your security posture is constantly changing and so must your defences.
AI, and specifically agentic solutions, challenge existing approaches, which cannot distinguish between trusted user instructions and untrusted retrieved or transmitted data. Configuration drifts continuously, patches lag, and new attack vectors morph.
You’re testing annually. They’re testing continuously. Gaps exist!
The Three Illusions of Security Confidence
Illusion #1: The Compliance Shield
“We’re ISO 27001 certified, SOC 2 compliant, and Cyber Essentials Plus accredited.”
Excellent. That proves you have policies, procedures, and controls documented. It proves you passed an audit at a specific point in time.
It doesn’t prove those controls work tomorrow. Or that they’d work under actual attack conditions. Or that your team can execute them when systems are failing and executives are under stress.
Compliance is a floor. It’s a minimum baseline, not a security strategy or a shield, yet the two routinely get confused. Do not let “we’re compliant” and “we’ve invested in security” slide into “we must be secure.”
Compliance tells you which controls you should have and follow. It does not mean everyone does, or that those controls actually work.
Illusion #2: The Vendor Presence
Security vendors make you feel protected.
Most gaps exist between what security solutions promise and how they are actually implemented. Dashboards glow green with reassuring metrics. But a control is not effective because it reports success; it is effective only when it performs under attack.
Vendor demonstrations are often run in controlled environments against known attack patterns. Real breaches are messy, novel, and deliberately designed to evade the deployed tools, which attackers typically identify before they begin.
During red teaming exercises, we often find endpoint and email security services missing or inactive, real-time scanning configured to exclude file types and scopes “for performance or compatibility reasons,” and alerts routing to a mailbox that is unmonitored out of hours.
The tools may detect threats brilliantly, but they are neither complete nor integrated. In practice, multiple events arrive by email while monitoring is inconsistent and not 24 hours a day. This introduces more gaps.
Illusion #3: The Annual Pen Test Safety Blanket
Most penetration tests remain annual events. They typically come back clean or with only minor findings. The exec team breathes easier. “We’re secure for another year.”
Most companies breached in 2025 had similar penetration test results. Except:
- The pen test was scoped to avoid business disruption or excluded systems whether technical debt or development related (so critical points of weakness weren’t really tested)
- It was scheduled weeks in advance (so IT hardened defences and expectations beforehand)
- It tested whether vulnerabilities existed and could be exploited, but as a standalone event, not as part of a wider distraction-led campaign – and not, for example, how fast critical vendor zero-day vulnerabilities get patched
- It gave you a point-in-time snapshot that was outdated the moment new solutions were deployed and required new policies or controls
Bottom line – Real attackers don’t schedule appointments. They don’t avoid production, development or legacy systems that were never fully deprecated. They don’t stop when they find the first vulnerability; they weaponise it and move laterally. Without detection.
A clean pen test proves firewall controls are in place and working. It doesn’t prove you’d notice someone climbing over the walls at 3 AM, or exploiting a critical new vulnerability to gain persistent access – the most likely attack vector.
How to Think Differently
Next time you’re reviewing security posture, try these questions:
“If we were breached right now, how would we know?”
Listen carefully to the answer. If it starts with “Our SIEM would alert…” ask for evidence that the SIEM has detected a real or simulated breach in the past 90 days.
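Turning “when did the SIEM last prove itself?” into evidence is straightforward once detection dates are recorded. A minimal sketch – assuming you can export the date of the last verified (real or simulated) detection per control from your SIEM’s case records; the control names are illustrative:

```python
from datetime import date

def stale_detections(last_validated, today, max_age_days=90):
    """Return controls whose last verified detection is older than
    max_age_days, mapped to their age in days."""
    return {control: (today - last).days
            for control, last in last_validated.items()
            if (today - last).days > max_age_days}

# Illustrative case-record export: control -> date of last proven detection
validated = {
    "edr_malware_detonation": date(2025, 9, 30),
    "siem_bruteforce_rule":   date(2024, 11, 2),
    "dlp_exfil_alert":        date(2025, 1, 15),
}
print(stale_detections(validated, today=date(2025, 10, 1)))
```

Anything in that output has no recent evidence of working – exactly the gap the question is designed to expose.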
“When did we last test that our backups would survive a ransomware attack?”
Not “it’s immutable, we’re fine” or “when did we last back up data?” but “when did we last prove we could restore encrypted systems?” Less than 10% of organisations have ever tested this scenario.
It is the most likely major cyber event scenario. No one wants to find out during an actual incident that their backups were in scope for encryption too, or that keys have been stolen and changed.
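Proving a restore works means comparing restored data byte-for-byte against known-good hashes, not just checking that a restore job completed. A minimal sketch of that verification step – the directory names and the `shutil.copytree` stand-in for the real restore are illustrative:

```python
import hashlib
import pathlib
import shutil
import tempfile

def sha256_tree(root):
    """Map each file's relative path under root to its SHA-256 digest."""
    root = pathlib.Path(root)
    return {p.relative_to(root).as_posix():
            hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(root.rglob("*")) if p.is_file()}

def verify_restore(source, restored):
    """True only if the restored tree is byte-identical to the source."""
    return sha256_tree(source) == sha256_tree(restored)

# Drill: snapshot hashes before backup, restore to a clean location, compare.
with tempfile.TemporaryDirectory() as tmp:
    src = pathlib.Path(tmp, "live"); src.mkdir()
    (src / "ledger.csv").write_text("account,balance\n")
    restored = pathlib.Path(tmp, "restored")
    shutil.copytree(src, restored)        # stand-in for your restore step
    print(verify_restore(src, restored))  # prints True
```

Run the same comparison against a restore performed from the immutable copy, on an isolated host, and you have evidence rather than an assumption.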
“What percentage of our security controls have been validated in the last 90 days?”
If the answer is less than 50%, you’re operating behind the curve: the attack landscape is changing faster than you are.
What We See in the Field
Working with SMBs across the UK and Europe, we consistently see the same patterns: organisations invest in security, but rarely invest in proving it works.
The reasons are familiar: no resourced time, constant distractions, no clear ownership, single points of failure, and business involvement or downtime that was never agreed.
A financial services business recently asked us to assess the security posture ahead of a potential acquisition.
On paper, everything looked good. In practice they wanted validation:
- A “next-gen firewall” was running with a critical vulnerability that had been missed in patching, sitting in monitor-only mode and not integrated with the SIEM solution.
- MFA was enabled and enforced for users and for VPN access, but disabled for an admin group
- Security awareness training was completed, but a clearly higher-risk group of users had a 47% click rate against targeted phishing, with no follow-up
- Their incident response plan referenced tools they’d decommissioned 18 months earlier. Some of these deprecated, non-production systems had not been fully removed.
Nothing malicious. Just the natural entropy of security programs that aren’t continuously validated. But all exploitable potential gaps.
Their IT team was solid. The issue is that nobody had ever asked them to prove the security worked, only to implement it.
The Path Forward: From Trust to Evidence
The good news: fixing this doesn’t require major change or a doubled budget. It requires a shift in mindset from “we have security” to “we can prove security.” In a world of agentic AI, this will become the norm.
So try these changes:
- Treating security like aviation safety: Pilots don’t just read emergency procedure manuals, they practice in flight simulators and review before every flight. Your security team needs the same.
- Embracing continuous validation: Move away from annual tests, to ongoing automated checks that critical controls are functioning and performing.
- Measuring what matters: Stop tracking “number of vulnerabilities patched” and start tracking “time to detect simulated breach attempts,” automating notification and patching of high and critical vulnerabilities.
- Accepting that breach is inevitable: The question isn’t “can we prevent attacks?” or “is our posture good?” but “can we detect and respond faster than attackers can cause damage?”
- Knowing what data has left the company today: Most companies fail to correlate lower-level security risks before data exfiltration has taken place. Simple steps to understand unexpected data movements can significantly improve event detection and response.
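Continuous validation, in its simplest form, is a scheduled harness that runs named control checks and records when each last passed. A minimal sketch – the check functions here are stubs standing in for real probes (e.g. “does a benign test detection raise a SIEM alert end to end?”), and one is hard-coded to fail for illustration:

```python
import datetime

# Stubs standing in for real control probes; names are illustrative.
def check_edr_alerting():   return True
def check_backup_restore(): return False   # illustrative failure
def check_mfa_admins():     return True

CHECKS = {
    "edr_alerting":   check_edr_alerting,
    "backup_restore": check_backup_restore,
    "mfa_admins":     check_mfa_admins,
}

def run_validation():
    """Run every control check; return failures with a timestamp so the
    evidence trail answers 'when was this last proven?'"""
    ran_at = datetime.datetime.now(datetime.timezone.utc).isoformat()
    failures = [name for name, check in CHECKS.items() if not check()]
    return {"ran_at": ran_at, "failures": failures}

result = run_validation()
print(result["failures"])  # anything listed needs remediation, not a dashboard
```

Scheduled daily or weekly, a harness like this turns “we have security” into a dated record of “we proved security” – and surfaces drift before an attacker does.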
A company’s security can and will be tested by anyone, and by every flavour of malicious AI. Weaknesses, whatever and wherever they are, will be found. It is better to ask the questions that reveal gaps, backed by testing evidence, than to build false confidence.
Optimal security is continuous and supported by evidence. As a CISO, you should expect to be challenged on this – and not be offended when you are.
Related Resource: Cybersecurity ROI Business Guide
The 15-Minute Proof Test
Before reading the next article, do this: Ask your security team to demonstrate, right now, without preparation, that they can detect a compromised admin account.
Not in theory. Have them show you the alert firing and follow the response through to the end. If they can’t, you’ve just identified your first gap. And that’s just the beginning. A socially engineered attack is the more likely route in, so perhaps that is your first pilot-training scenario.
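The detection logic behind that drill doesn’t need to be exotic. As a minimal sketch of the kind of rule a SIEM should be firing – flag an admin login from a source IP never seen for that account, or outside working hours – over synthetic auth-log events (account names, IPs and field layout are all illustrative):

```python
from datetime import datetime

# Illustrative baseline: source IPs previously seen per admin account
KNOWN_ADMIN_IPS = {"alice.admin": {"10.0.0.5"}}

def suspicious_admin_logins(events, work_start=7, work_end=19):
    """Flag admin logins from an unseen source IP or outside working hours.
    Event fields mirror a typical auth log; names are illustrative."""
    flagged = []
    for e in events:
        hour = datetime.fromisoformat(e["time"]).hour
        new_ip = e["ip"] not in KNOWN_ADMIN_IPS.get(e["user"], set())
        if new_ip or hour < work_start or hour >= work_end:
            flagged.append(e)
    return flagged

events = [
    {"user": "alice.admin", "ip": "10.0.0.5",    "time": "2025-10-01T09:12:00"},
    {"user": "alice.admin", "ip": "203.0.113.9", "time": "2025-10-01T03:40:00"},
]
print(len(suspicious_admin_logins(events)))  # the 03:40 login from a new IP
```

The 15-minute test isn’t whether a rule like this exists on paper – it’s whether the alert actually fires, reaches a monitored queue, and triggers a response.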
TL;DR
| Security Area | Most Organisations Measure | What Actually Matters |
| --- | --- | --- |
| EDR | Installed & licensed | Time to detect simulated compromise |
| Backups | Immutable & configured | Proven restore under ransomware |
| SIEM | Logs collected | Alerts tested in last 90 days |
| IR Plan | Documented | Executed under pressure |
If this raised uncomfortable questions, that’s the point. Continuous Security Validation begins with visibility.
We offer independent Security Validation Reviews designed to test whether your critical controls, detection, response, backup recovery, work under realistic conditions.