The SMB Guide to Security Metrics That Matter
I see a lot of security reports full of numbers that don’t matter.
“We blocked 50,000 threats last month.” Great. What does that tell you about your security posture? Not much, actually.
Here’s how to measure security in ways that actually inform decisions.
The Problem with Common Metrics
Vanity metrics: Numbers that look impressive but don’t indicate security health.
- Threats blocked (mostly automated noise)
- Emails filtered (ditto)
- Scans completed (activity, not outcome)
Lagging indicators: Metrics that tell you about the past, not the present.
- Number of breaches (you want zero, but that doesn’t mean you’re secure)
- Insurance claims (same problem)
Incomplete metrics: Numbers that capture part of the picture.
- Patch compliance (for systems you know about)
- MFA coverage (for accounts you’ve inventoried)
Better metrics are leading indicators that predict security outcomes and cover your actual environment.
Metrics That Actually Matter
1. Coverage Metrics
MFA coverage rate
What it measures: Percentage of accounts protected by MFA.
Why it matters: Credential theft is the top attack vector. MFA is the top defence. Gaps in coverage are exploitable.
How to measure:
- Azure AD/Entra ID: MFA registration status report
- Google Workspace: 2-Step Verification enrollment report
- Calculate: (Accounts with MFA / Total accounts) x 100
Target: 100% for all accounts with access to sensitive systems. 95%+ overall.
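The coverage calculation above is simple enough to script against an exported account list. A minimal sketch, assuming a hypothetical export where each record carries an `mfa_enabled` flag (adapt the field names to whatever your identity provider's report actually produces):

```python
# Sketch: computing MFA coverage from an exported account list.
# The records and field names here are hypothetical -- adapt them
# to your identity provider's actual report format.

def mfa_coverage(accounts):
    """Return MFA coverage as a percentage of total accounts."""
    if not accounts:
        return 0.0
    covered = sum(1 for a in accounts if a["mfa_enabled"])
    return covered / len(accounts) * 100

accounts = [
    {"user": "alice", "mfa_enabled": True},
    {"user": "bob", "mfa_enabled": True},
    {"user": "carol", "mfa_enabled": False},
    {"user": "dave", "mfa_enabled": True},
]

print(f"MFA coverage: {mfa_coverage(accounts):.1f}%")  # MFA coverage: 75.0%
```

The same pattern works for endpoint and backup coverage: numerator from the tool's report, denominator from your inventory.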
Endpoint protection coverage
What it measures: Percentage of endpoints with current protection.
Why it matters: Unprotected endpoints are entry points.
How to measure:
- Endpoint protection console: device coverage report
- Compare enrolled devices to known device inventory
Target: 100% of known devices. If you’re below 95%, find out why.
Backup coverage and success rate
What it measures: Systems backed up successfully vs systems that should be backed up.
Why it matters: Backups are your ransomware recovery strategy. Gaps mean data loss.
How to measure:
- Backup tool reports: success/failure rates
- Compare backed-up systems to critical system inventory
Target: 100% coverage of critical systems. 99%+ success rate.
2. Time-Based Metrics
Patch latency
What it measures: How long between vulnerability disclosure and patch application.
Why it matters: The ACSC's Essential Eight specifies patching timeframes by severity. Attackers exploit known vulnerabilities quickly.
How to measure:
- Vulnerability scanner: days since patch available
- Patch management tool: deployment timelines
Target: Critical vulnerabilities within 48 hours. High within two weeks.
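If your patch management tool exports release and deployment dates, latency per patch is a one-line calculation. A sketch with hypothetical records:

```python
# Sketch: patch latency in days per deployment. The records are
# hypothetical exports from a patch management tool.
from datetime import date

deployments = [
    {"patch": "KB-1", "released": date(2024, 6, 1), "applied": date(2024, 6, 2)},
    {"patch": "KB-2", "released": date(2024, 6, 3), "applied": date(2024, 6, 10)},
]

# Days between the patch becoming available and it being applied
latencies = {d["patch"]: (d["applied"] - d["released"]).days for d in deployments}
print(latencies)  # {'KB-1': 1, 'KB-2': 7}
```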
Mean time to detect (MTTD)
What it measures: How long threats go undetected in your environment.
Why it matters: Faster detection means less damage.
How to measure:
- For detected incidents: Time from initial compromise to detection
- Requires incident investigation to determine
Target: Depends on threat type. Days rather than weeks or months.
Mean time to respond (MTTR)
What it measures: How long from detection to containment.
Why it matters: Quick response limits damage.
How to measure:
- Incident records: Detection time to containment time
Target: Hours for significant threats, not days.
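Both time-based metrics fall out of the same incident records. A minimal sketch, assuming hypothetical timestamps for compromise, detection, and containment (in practice these would come from incident investigation, as noted above):

```python
# Sketch: deriving MTTD and MTTR from incident records. Timestamps
# and field names are illustrative; real data would come from your
# ticketing system or SIEM.
from datetime import datetime
from statistics import mean

incidents = [
    {"compromised": datetime(2024, 3, 1, 9, 0),
     "detected":    datetime(2024, 3, 2, 9, 0),
     "contained":   datetime(2024, 3, 2, 13, 0)},
    {"compromised": datetime(2024, 4, 10, 8, 0),
     "detected":    datetime(2024, 4, 10, 20, 0),
     "contained":   datetime(2024, 4, 11, 2, 0)},
]

def hours(delta):
    return delta.total_seconds() / 3600

# MTTD: compromise -> detection; MTTR: detection -> containment
mttd = mean(hours(i["detected"] - i["compromised"]) for i in incidents)
mttr = mean(hours(i["contained"] - i["detected"]) for i in incidents)
print(f"MTTD: {mttd:.1f} h, MTTR: {mttr:.1f} h")  # MTTD: 18.0 h, MTTR: 5.0 h
```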
3. Vulnerability Metrics
Vulnerability age
What it measures: How long known vulnerabilities remain unpatched.
Why it matters: Old vulnerabilities get exploited. Age correlates with risk.
How to measure:
- Vulnerability scanner: age of open vulnerabilities
- Group by severity and track age distribution
Target: No critical vulnerabilities older than 48 hours. No high vulnerabilities older than 14 days.
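Checking scanner output against those age targets is easy to automate. A sketch with hypothetical scanner records and the thresholds above:

```python
# Sketch: flagging open vulnerabilities that exceed the age targets.
# The records are hypothetical scanner output.
from datetime import date

MAX_AGE_DAYS = {"critical": 2, "high": 14}  # 48 hours / 14 days

vulns = [
    {"id": "CVE-A", "severity": "critical", "opened": date(2024, 5, 1)},
    {"id": "CVE-B", "severity": "high", "opened": date(2024, 5, 10)},
]

def overdue(vulns, today):
    """Return vulnerabilities older than the target for their severity."""
    return [v for v in vulns
            if (today - v["opened"]).days > MAX_AGE_DAYS.get(v["severity"], 30)]

for v in overdue(vulns, date(2024, 5, 20)):
    print(v["id"], v["severity"])  # CVE-A critical
```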
Vulnerability density
What it measures: Vulnerabilities per system or per category.
Why it matters: Identifies problem areas needing attention.
How to measure:
- Vulnerability scanner: group findings by system/application
Target: Trend downward over time.
4. Access Metrics
Privileged account count
What it measures: Number of accounts with administrative access.
Why it matters: More privileged accounts = more attack surface.
How to measure:
- AD/Entra ID: count accounts in privileged groups
- Review quarterly
Target: Minimum necessary. Should be explainable.
Orphaned account count
What it measures: Accounts for former employees still active.
Why it matters: Orphaned accounts are takeover targets.
How to measure:
- Compare account list to HR employee list
- Identify accounts without corresponding employees
Target: Zero. Any orphaned accounts should be disabled immediately.
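The comparison between directory and HR list is a set difference. A sketch with illustrative names:

```python
# Sketch: finding orphaned accounts by comparing the directory export
# to the HR employee list. Names are illustrative.

active_accounts = {"alice", "bob", "carol", "eve"}
current_employees = {"alice", "bob", "carol"}

# Accounts with no corresponding current employee
orphaned = active_accounts - current_employees
print(sorted(orphaned))  # ['eve']
```

In practice the hard part is matching identifiers between the two systems, not the comparison itself.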
Access review completion rate
What it measures: Percentage of required access reviews completed on time.
Why it matters: Reviews catch unnecessary access. Incomplete reviews leave gaps.
How to measure:
- Track scheduled reviews vs completed reviews
Target: 100% on time.
5. Awareness Metrics
Phishing simulation click rate
What it measures: Percentage of employees who click simulated phishing links.
Why it matters: Directly measures susceptibility to the primary attack vector.
How to measure:
- Phishing simulation platform: campaign results
Target: Industry average is ~15-20%. Aim for under 10% and trending down.
Suspicious email reporting rate
What it measures: How often employees report suspicious emails.
Why it matters: Reporting is positive security behaviour. High reporting suggests engaged employees.
How to measure:
- Count reports to phishing mailbox
- Calculate per employee per month
Target: Trending upward. Some is good; none suggests employees aren’t engaged.
Building a Metrics Dashboard
Keep it simple:
Don’t track 50 metrics. Track 5-10 that matter.
Suggested starter set:
- MFA coverage rate
- Endpoint protection coverage
- Patch latency (critical vulnerabilities)
- Backup success rate
- Phishing click rate
Add as you mature:
- Privileged account count
- Vulnerability age distribution
- Access review completion
- Mean time to detect
- Suspicious email reporting
Visualisation:
- Trend over time (are you improving?)
- Current state vs target (are you meeting goals?)
- Red/yellow/green status (quick at-a-glance assessment)
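Even the red/yellow/green view can start as a few lines of script before you invest in dashboarding. A sketch with illustrative metrics and targets (a spreadsheet works just as well):

```python
# Sketch: a minimal red/yellow/green status check against targets.
# Metric names, values, and thresholds are illustrative.

TARGETS = {
    "mfa_coverage_pct": 95,
    "endpoint_coverage_pct": 95,
    "backup_success_pct": 99,
}

current = {
    "mfa_coverage_pct": 87,
    "endpoint_coverage_pct": 96,
    "backup_success_pct": 99.5,
}

def status(value, target):
    """Green at/above target, yellow within 10% of it, red below that."""
    if value >= target:
        return "GREEN"
    if value >= target * 0.9:
        return "YELLOW"
    return "RED"

for name, target in TARGETS.items():
    print(f"{name}: {current[name]} (target {target}) -> {status(current[name], target)}")
```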
Reporting Frequency
Real-time: Critical alerts that need immediate attention (major incidents, critical vulnerabilities discovered).
Weekly: Operational metrics for IT/security team (patch status, backup status, alert volume).
Monthly: Management summary of security posture (key metrics, trends, issues requiring attention).
Quarterly: Strategic review for leadership (security program progress, risk trends, investment needs).
Using Metrics for Decisions
Metrics should drive action. Some examples:
MFA coverage at 87%: Action: Identify accounts without MFA. Investigate why. Remediate.
Patch latency increasing: Action: Review patching process. Identify bottlenecks. Improve automation.
Phishing click rate not improving: Action: Review training effectiveness. Consider different approaches. Target repeat clickers.
Privileged accounts growing: Action: Review admin access grants. Apply principle of least privilege.
If you can’t tie a metric to a potential action, question whether it’s worth tracking.
Common Pitfalls
Measuring what’s easy, not what’s important:
It’s easy to report on tool output. It’s harder to measure actual security outcomes. Focus on outcomes.
Gaming the metrics:
When metrics become targets, people optimise for the metric, not the outcome. A patch compliance rate that excludes “exceptions” isn’t useful.
Too many metrics:
Dashboard overload leads to nothing being reviewed. Fewer, meaningful metrics beat many meaningless ones.
No context:
A number without context is useless. 95% sounds good. Is it better than last month? Is it meeting your target? Context matters.
Manual collection:
If metrics require significant manual effort, they won’t be maintained. Automate where possible.
Getting Started
This week:
- Identify what metrics you currently have available
- Determine what reports your tools can generate
- List the metrics that would be most valuable
This month:
- Configure automated reports for key metrics
- Create a simple dashboard (even a spreadsheet)
- Establish baseline values
Ongoing:
- Monthly review of metrics
- Quarterly assessment of whether you’re tracking the right things
- Adjust as your security program matures
Working with Specialists
For businesses wanting help establishing security metrics, specialist firms such as AI consultants in Melbourne can:
- Assess current measurement capabilities
- Design appropriate metrics programs
- Implement automated collection and reporting
- Establish governance and review processes
The investment in proper measurement pays back through better security decisions and clearer communication with stakeholders.
Metrics for Different Audiences
IT Team:
- Operational metrics (patch status, protection coverage)
- Detailed breakdowns
- Action-oriented
Management:
- Summary metrics (overall posture, trends)
- Comparisons to targets
- Risk-focused
Board/Leadership:
- High-level risk indicators
- Strategic trends
- Investment justification
Tailor reporting to audience needs.
Industry Benchmarks
Where to find them:
- Verizon Data Breach Investigations Report
- ASD/ACSC reports
- Ponemon Institute research
- Your cyber insurer may have data
- Industry associations
Use carefully:
Benchmarks provide context but aren’t targets. Your organisation has a unique risk profile. What matters is improving your posture over time.
Final Thought
The goal of security metrics isn’t to generate reports. It’s to make better decisions.
Measure what matters. Track trends over time. Use data to prioritise action.
Working with specialists like Team400 can help establish metrics programs that provide genuine value. But the fundamental principle is accessible to any business: measure the things that tell you whether you’re secure, not just the things your tools easily produce.
What you measure is what you manage. Make sure you’re measuring the right things.