tl;dr

If you want the shortlist first, these are the 19 KPIs covered in this post.

  • % of applicable requirements fully documented: Shows whether the team has identified applicable requirements and documented the controls.
  • % of applicable requirements implemented and rolled out: Tracks whether documented controls have actually been put into practice.
  • Mean time to rollout: Gives a rough indicator of implementation speed and possible completion timelines.
  • Number of audit findings (by severity): Highlights weaknesses and where the ISMS needs improvement.
  • Number of incidents: A direct signal of whether protections are holding up over time.
  • % of assets or asset groups with identified and documented risks: Measures how broadly risk management covers the asset inventory.
  • Total risk score: Shows how the overall risk landscape is changing.
  • Total residual risk score: Indicates remaining exposure after mitigations are in place.
  • Total score of mitigations: Shows progress in implementing controls that reduce risk.
  • Mean time to resolve issues, incidents, or non-conformities: Measures how quickly the organization closes issues once found.
  • Time to implement corrective actions post-incident: Tracks how fast the organization applies fixes after incidents.
  • Uptime or availability: A broadly understood technical signal of service reliability.
  • Number of OFIs (opportunities for improvement) identified: Shows whether the team is actively finding opportunities for improvement.
  • Number of OFIs implemented: Measures whether improvement plans are actually being delivered.
  • Average implementation time for OFIs: Indicates how quickly improvement work moves from idea to execution.
  • % of recommendations from previous audits implemented: Shows whether audit feedback is being addressed.
  • Time or effort spent on internal and external audits: Helps quantify the governance overhead of the program.
  • Time spent collecting and organizing evidence: Exposes a major source of audit friction and manual work.
  • Number of controls automated: Helps track where repetitive control work has been reduced.

Introduction

Information security and compliance are still too often treated as a cost center. A lot of money, time, and effort goes in, and the organization feels like all it gets back is a certification badge or a checkbox on a website.

That framing makes it harder to get people to prioritize security. To change that, teams need to show progress, show value, and make the work visible. That is where metrics help.

This list is not meant to suggest that every team should track all 19 metrics at once. It is a practical menu. Pick the KPIs that fit your current stage, your maturity level, and the decisions you need to support.

KPIs to report when implementing a new framework

Adopting a new framework is a large effort whether the organization already has an information security program or is building one more or less from scratch. Stakeholders often ask some version of “are we there yet?” because framework implementation can look like a black box with only two visible states: done or not done.

It does not have to work that way. Requirements move through a pipeline from interpretation to documentation to rollout into day-to-day behavior or technical controls. If you measure those stages, progress becomes easier to communicate.

% of applicable requirements fully documented

This is the starting point. First you decide which requirements apply to the organization. Then you design and document the controls needed to cover them.

This metric helps answer whether the work has at least been understood and translated into documented controls.
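The calculation itself is simple. As a minimal sketch, assuming a requirement register with per-requirement applicability and documentation flags (the `Requirement` structure and field names here are illustrative, not from any specific tool):

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    id: str
    applicable: bool   # does this requirement apply to our scope?
    documented: bool   # do we have documented controls covering it?

def pct_documented(reqs):
    """Share of applicable requirements with documented controls."""
    applicable = [r for r in reqs if r.applicable]
    if not applicable:
        return 0.0
    documented = sum(r.documented for r in applicable)
    return round(100 * documented / len(applicable), 1)

reqs = [
    Requirement("A.5.1", applicable=True, documented=True),
    Requirement("A.5.2", applicable=True, documented=False),
    Requirement("A.5.3", applicable=False, documented=False),  # out of scope
    Requirement("A.5.4", applicable=True, documented=True),
]
print(pct_documented(reqs))  # 2 of 3 applicable -> 66.7
```

Note that requirements marked not applicable are excluded from the denominator, so scoping decisions directly shape the number you report.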

% of applicable requirements implemented and rolled out

Documentation alone is not implementation. A control is only fully in place once it has been rolled out and embedded into operational reality.

This matters because many controls require more than writing something down:

  • policy changes need coordination across teams
  • technical changes take implementation time
  • the organization may not have equal capacity for every change at once

This metric helps explain why some controls sit in progress for a while and where bottlenecks are forming.

Mean time to rollout

Mean time to rollout gives you a rough sense of how quickly requirements are moving through the implementation pipeline. In some organizations this can support realistic completion forecasting.

It is only worth tracking if it is easy enough to measure and meaningful in your environment. If collecting it becomes a reporting burden of its own, it loses value.
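If you do track it, the arithmetic is just the average duration between two dates you likely already record. A sketch, assuming you log when a control was documented and when its rollout completed (the dates here are made up):

```python
from datetime import date

def mean_time_to_rollout(rollouts):
    """Average days from documentation to completed rollout."""
    durations = [(done - started).days for started, done in rollouts]
    return sum(durations) / len(durations)

rollouts = [
    (date(2024, 1, 10), date(2024, 2, 9)),   # 30 days
    (date(2024, 1, 15), date(2024, 3, 5)),   # 50 days
    (date(2024, 2, 1),  date(2024, 2, 11)),  # 10 days
]
print(mean_time_to_rollout(rollouts))  # 30.0
```

Only completed rollouts are counted here; items still in flight would need a separate in-progress view.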

KPIs to measure operational effectiveness of the ISMS

Once the framework is in place, the question changes. The issue is no longer whether the controls are documented or rolled out, but whether the ISMS is doing its job.

At this stage, the core questions are:

  • how well is the ISMS protecting assets
  • how well is it identifying issues
  • how quickly are issues being resolved
  • how much organizational effort does all of that require

Number of audit findings, split by severity

Audits are one of the clearest tests of whether the system is working. A good auditor usually finds something. That is normal.

What matters is the shape of the findings:

  • how many there are
  • how severe they are
  • whether the trend is improving over time

This metric helps tell a more useful story than a simple “passed” or “failed” view.
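Splitting findings by severity is a one-line aggregation once findings are recorded with a severity field. A sketch with hypothetical finding records and severity labels:

```python
from collections import Counter

findings = [
    {"id": "F-01", "severity": "minor"},
    {"id": "F-02", "severity": "major"},
    {"id": "F-03", "severity": "minor"},
    {"id": "F-04", "severity": "observation"},
]

# Count findings per severity level for the audit-cycle report
by_severity = Counter(f["severity"] for f in findings)
print(dict(by_severity))  # {'minor': 2, 'major': 1, 'observation': 1}
```

Comparing these counts cycle over cycle is what turns a raw audit result into a trend.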

Number of incidents

This is one of the most direct lagging indicators in the whole program. In plain terms, you want incidents to be rare and ideally trending downward.

It can also be useful to split incidents by type so you can see whether confidentiality, integrity, or availability issues are driving the pattern.

% of assets or asset groups with identified and documented risks

Framework compliance does not automatically mean broad or mature risk coverage. Many organizations begin by assessing the most critical assets, which is reasonable, but over time risk management should extend further across the inventory.

This KPI shows whether the ISMS is gaining both breadth and depth.

Total risk score

Risk never stays still. New systems appear, vendors change, threats shift, and the environment evolves. Because of that, the total risk score rarely moves in one neat direction forever.

Even so, this metric is useful in stakeholder conversations because it shows the scale of the problem space and helps explain ongoing workload and resource needs.

Total residual risk score

This is one of the strongest KPIs in the entire list. If a team can only choose a small handful of metrics, residual risk should be near the top.

Why it matters:

  • it reflects risk after mitigations are applied
  • it shows the organization’s actual remaining exposure
  • it makes progress visible over time
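One common (though by no means the only) way to compute it is to scale each risk's inherent score by the estimated effectiveness of its mitigations. The scoring model below is an illustrative assumption, not a standard formula:

```python
def residual_risk(likelihood, impact, mitigation_effect):
    """Inherent risk (likelihood x impact) reduced by mitigation
    effectiveness, where mitigation_effect is in the range 0..1."""
    inherent = likelihood * impact
    return inherent * (1 - mitigation_effect)

risks = [
    # (likelihood 1-5, impact 1-5, mitigation effectiveness 0..1)
    (4, 5, 0.6),
    (3, 3, 0.0),   # no mitigation yet
    (2, 4, 0.5),
]
total_residual = round(sum(residual_risk(*r) for r in risks), 2)
print(total_residual)  # 8 + 9 + 4 = 21.0
```

Whatever model you pick, keep it stable between reporting periods so the trend stays comparable.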

Total score of mitigations

If total risk is rising because the environment is becoming more complex, teams still need a way to show that their work is creating value. That is where mitigation progress becomes useful.

This metric helps show whether the team is actively building and improving controls even when the external risk environment is getting tougher.

Mean time to resolve issues, incidents, or non-conformities

This measures response speed after an issue has been identified. The specific issue type can vary depending on what the team wants to improve:

  • audit findings
  • non-conformities
  • incidents
  • breach response follow-up

It is a useful operational KPI because it turns vague frustration into something concrete and assignable.
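Because the issue types differ so much in typical effort, it often helps to compute the mean per type rather than one blended number. A sketch, assuming issues are logged with a type plus open and close dates (all values here are illustrative):

```python
from datetime import date
from collections import defaultdict

issues = [
    ("incident",       date(2024, 3, 1),  date(2024, 3, 4)),
    ("incident",       date(2024, 3, 10), date(2024, 3, 12)),
    ("non-conformity", date(2024, 2, 1),  date(2024, 2, 21)),
]

# Group resolution durations (in days) by issue type
durations = defaultdict(list)
for kind, opened, closed in issues:
    durations[kind].append((closed - opened).days)

mttr = {kind: sum(d) / len(d) for kind, d in durations.items()}
print(mttr)  # {'incident': 2.5, 'non-conformity': 20.0}
```

A blended average across all types would hide the fact that non-conformities are taking an order of magnitude longer than incidents here.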

Time to implement corrective actions post-incident

Incident response is not only about containment. It is also about learning and fixing what caused the issue or allowed it to happen.

This metric focuses specifically on how fast the organization converts incident lessons into corrective action.

Uptime or availability

Availability is one of the easiest technical metrics for non-security stakeholders to understand. It is also often already reflected in service level agreements.

It is useful because it gives a plain-language signal that the environment is operating reliably before a more visible failure occurs.
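The underlying calculation is just downtime against the measurement window. A minimal sketch (the 43-minute downtime figure is an invented example, roughly what a "three nines" month allows):

```python
def availability_pct(total_minutes, downtime_minutes):
    """Uptime as a percentage of the measurement window."""
    up = total_minutes - downtime_minutes
    return round(100 * up / total_minutes, 3)

# A 30-day month with 43 minutes of total downtime
window = 30 * 24 * 60  # 43200 minutes
print(availability_pct(window, 43))  # 99.9
```

The same formula works per service or aggregated, as long as you are explicit about the measurement window and what counts as downtime.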

Continuous improvement KPIs

An ISMS cannot stay static. Assets change, threats change, frameworks change, and the business changes. Continuous improvement is not optional, and in some frameworks, such as ISO 27001, it is explicitly expected.

Number of OFIs identified

OFIs, or opportunities for improvement, show whether the team is actively finding ways to strengthen the program rather than only reacting to failures.

This is a healthy indicator when you want to build a culture of continuous improvement.

Number of OFIs implemented

Identification alone does not create value. This metric shows whether the organization follows through and turns improvements into delivered changes.

Average implementation time for OFIs

This tells you how long it takes to move from improvement idea to completed implementation. It becomes especially useful when improvement work depends on collaboration with other departments and starts getting stuck.

% of recommendations from previous audits implemented

This metric shows whether audit feedback is being absorbed and acted on. It is also a practical readiness indicator for the next audit cycle.

Efficiency KPIs for the ISMS and audits

One of the most common pain points in GRC programs is the amount of time audits consume. That time comes out of the same budget of attention and effort that should also be going to meaningful security work.

If the organization wants to reduce wasted effort, it needs to measure where that effort is going.

Time or effort spent on internal and external audits

At minimum, count the time spent:

  • coordinating with auditors
  • preparing for audits
  • handling internal audit-related discussions
  • following up on audit requests

The point is not perfect precision. The point is consistent measurement so you can see trends and identify opportunities to reduce overhead.

Time spent collecting and organizing evidence

For many teams, this is one of the most time-consuming parts of audit work. Measuring it often reveals just how much manual effort is hidden in evidence gathering and evidence organization.

That makes it a valuable KPI when you are trying to justify:

  • better tooling
  • process improvements
  • integration work
  • a more structured GRC platform

Honorable mention

Number of controls automated

This KPI comes up often because automation is heavily marketed in compliance and GRC. It can be useful, but only when the return is real.

Automation is worth measuring when:

  • manual upkeep is clearly too heavy
  • the control is repetitive enough to justify automation
  • the saved effort is meaningfully larger than the implementation effort

Automating controls purely to inflate a number is not useful.

Summary

An information security program is not a one-time project. It is an ongoing system that has to be implemented, operated, improved, and defended over time.

The best KPIs are the ones that help the organization make better decisions. Choose metrics that match the stage you are in, make progress visible, and show where effort is paying off or getting stuck. When used that way, KPIs stop being vanity reporting and become part of how you run a stronger, more adaptive ISMS.