How the After Action Readiness Score Is Computed
A transparent methodology for cyber crisis readiness assessment
After Action | Version 1.0 | April 2026
Executive Summary
The After Action Readiness Score (AARS) is a single 0–100 number that quantifies an organization's operational readiness to respond to a cyber crisis. Unlike survey-based maturity assessments, the AARS is computed from observed behavior during live exercises — actual decisions made by actual people under time pressure — not from self-reported checklists.
This whitepaper documents the full methodology in plain language, including every formula, constant, and framework reference. It is suitable for:
- Executives and boards evaluating cyber investment
- Cyber insurance carriers assessing risk and pricing coverage
- Auditors and regulators requiring defensible scoring
- Procurement teams comparing readiness assessment vendors
- Academic researchers studying organizational crisis response
The methodology is open for inspection, but the implementation is proprietary. Every constant in this document is drawn directly from After Action's production code.
1. Design Principles
The AARS was designed around four principles:
1.1 Measure behavior, not intent
Most cyber maturity frameworks rely on self-assessment. The questions are aspirational ("Do you have an incident response plan?") and the answers are optimistic. Real crisis response is not predicted by policy documents — it is predicted by how people behave when they are tired, uncertain, and under executive scrutiny.
The AARS is computed from data generated during realistic exercise simulations where participants make real-time decisions, receive pressure injects, and coordinate across roles. The score reflects how the team actually performs, not how they describe themselves.
1.2 Framework-anchored
Every component of the score maps to at least one established framework:
- NIST SP 800-61r3 (Incident Response)
- NIST CSF 2.0 (Cybersecurity Framework)
- MITRE ATT&CK
- SOC 2 Trust Services Criteria
- ISO 27001:2022
- HIPAA Security Rule
- PCI-DSS 4.0
- CIS Controls v8
- CMMC 2.0
This means every prescription generated alongside the score cites a specific control ID that auditors, carriers, and regulators can verify.
1.3 Deterministic and auditable
The AARS is computed by a rule-based engine, not a language model. Same input always produces the same output. Every assumption is explicit. Every constant in the scoring model is documented. This is a non-negotiable requirement for insurance carriers and regulated industries.
1.4 Dynamic but stable
The AARS includes both static industry benchmarks (based on published research and domain expertise) and dynamic benchmarks that improve as more organizations are scored on the platform. When fewer than five samples exist for an industry, static benchmarks are used. With five or more samples, dynamic benchmarks replace the static baseline, weighted by recency.
2. The Eight Capability Dimensions
The AARS decomposes readiness into eight capability areas, each weighted according to its contribution to crisis response effectiveness. Weights were calibrated against NIST SP 800-61r3 and validated through retrospective analysis of real incidents.
| # | Dimension | Weight | Framework Anchor |
|---|---|---|---|
| 1 | Detection & Monitoring | 15% | NIST CSF DE.CM, DE.AE |
| 2 | Crisis Communications | 15% | NIST CSF RS.CO, RC.CO |
| 3 | Containment & Response | 15% | NIST CSF RS.MI, RS.AN |
| 4 | Operational Recovery | 15% | NIST CSF RC.RP |
| 5 | Decision Speed | 10% | NIST SP 800-61r3 §3 |
| 6 | Executive Alignment | 10% | NIST CSF GV.OV |
| 7 | Incident Command | 10% | NIST SP 800-61r3 §2 |
| 8 | Regulatory Readiness | 10% | NIST CSF GV.PO |
The four 15%-weighted dimensions cover the core phases of incident response (detect → communicate → contain → recover). The four 10%-weighted dimensions cover supporting capabilities that multiply effectiveness.
Each dimension has a descriptive definition in the platform:
- Detection & Monitoring — Ability to identify threats and anomalies
- Crisis Communications — Internal and external communication during incidents
- Containment & Response — Speed and effectiveness of threat containment
- Operational Recovery — Business continuity and restoration capability
- Decision Speed — Executive response time under pressure
- Executive Alignment — Leadership coordination and governance
- Incident Command — Clarity of roles and escalation procedures
- Regulatory Readiness — Compliance notification and legal preparedness
3. The Scoring Formula
3.1 Gap-based penalty model
Each capability area starts at 100 points. Gaps identified during the exercise reduce the score.
Severity penalty schedule:
| Severity | Penalty |
|---|---|
| Critical | 25 points |
| High | 15 points |
| Medium | 8 points |
| Low | 3 points |
| Info | 0 points |
Remediated gaps: If a gap has been marked as remediated in the gap tracker, its penalty is reduced by 50%. This rewards organizations that close the loop on findings, and ensures the score reflects current posture rather than historical failure.
Gap category to capability area mapping:
Each gap is assigned a category during the exercise, which maps to a capability area:
| Gap Category | Capability Area |
|---|---|
| detection | Detection & Monitoring |
| escalation | Incident Command |
| containment | Containment & Response |
| communications | Crisis Communications |
| recovery | Operational Recovery |
| legal_regulatory | Regulatory Readiness |
| process | Executive Alignment |
| technology | Containment & Response |
| personnel | Incident Command |
| documentation | Regulatory Readiness |
Unmapped gaps: If a gap's category does not match any entry in the table above, its penalty is distributed equally across all eight capability areas (penalty ÷ 8). This ensures no penalty is lost.
3.2 Capability area score
For each of the eight areas:
area_score = max(0, min(100, 100 - Σ penalties))
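The per-area computation in §3.1–3.2 can be sketched in TypeScript. The types, field names, and function names below are illustrative assumptions for exposition, not the production schema; only the numeric constants and the mapping table come from this document.

```typescript
// Severity penalties and category→area mapping from §3.1 of this whitepaper.
type Severity = "critical" | "high" | "medium" | "low" | "info";

interface Gap {
  severity: Severity;
  category: string;   // e.g. "detection", "escalation", "process", …
  remediated: boolean;
}

const PENALTY: Record<Severity, number> = {
  critical: 25, high: 15, medium: 8, low: 3, info: 0,
};

const CATEGORY_TO_AREA: Record<string, string> = {
  detection: "detection_monitoring",
  escalation: "incident_command",
  containment: "containment_response",
  communications: "communication_readiness",
  recovery: "operational_recovery",
  legal_regulatory: "regulatory_readiness",
  process: "executive_alignment",
  technology: "containment_response",
  personnel: "incident_command",
  documentation: "regulatory_readiness",
};

const AREAS = [
  "detection_monitoring", "communication_readiness", "containment_response",
  "operational_recovery", "decision_speed", "executive_alignment",
  "incident_command", "regulatory_readiness",
];

function areaScores(gaps: Gap[]): Record<string, number> {
  // Every area starts at 100; penalties accumulate per area.
  const penalties: Record<string, number> = Object.fromEntries(
    AREAS.map(a => [a, 0] as [string, number]),
  );
  for (const gap of gaps) {
    // Remediated gaps are penalized at half rate (§3.1).
    const p = PENALTY[gap.severity] * (gap.remediated ? 0.5 : 1);
    const area = CATEGORY_TO_AREA[gap.category];
    if (area) {
      penalties[area] += p;
    } else {
      // Unmapped gaps spread their penalty evenly across all eight areas.
      for (const a of AREAS) penalties[a] += p / AREAS.length;
    }
  }
  // Clamp and round to the integer range 0–100 (§3.5).
  return Object.fromEntries(
    AREAS.map(a =>
      [a, Math.round(Math.max(0, Math.min(100, 100 - penalties[a])))] as [string, number],
    ),
  );
}
```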
3.3 Adjustments
Four of the eight areas have additional adjustments to reward or penalize specific behaviors:
Operational Recovery — BIA (Business Impact Analysis) adjustment:
if biaContext:
coverage = functionsWithRpoRto / totalFunctions
adjustment = +round(coverage * 10)
if criticalFunctions > 0 and coverage < 1:
adjustment = adjustment - 5
else:
adjustment = -3 (no BIA program at all)
Rationale: an organization with defined RPO/RTO targets for its critical business functions is structurally better prepared to recover. An organization that has documented targets but hasn't covered all critical functions has a known gap.
Decision Speed — session timing adjustment:
decision_base = normalized_confidence // confidence collected on a 1–5 scale, normalized to 0–100
if session_duration / planned_duration <= 1.0:
adjustment = +10
elif session_duration / planned_duration > 1.5:
adjustment = -10
Rationale: teams that complete the exercise on schedule demonstrate effective decision-making under pressure. Teams that run more than 50% over the planned time are struggling.
Executive Alignment — confidence bonus:
if avg_confidence >= 70/100:
adjustment = +5
Rationale: high collective confidence at the executive level is a measurable signal of aligned leadership.
All other areas: no adjustments, pure gap-penalty model.
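The three adjustments in §3.3 can be expressed as small helper functions. The input shapes and function names below are assumptions for illustration; the adjustment values (+10/−5/−3, ±10, +5) are taken from this document.

```typescript
// Illustrative input shape for the BIA adjustment (field names assumed).
interface BiaContext {
  totalFunctions: number;
  functionsWithRpoRto: number;
  criticalFunctions: number;
}

function recoveryAdjustment(bia: BiaContext | null): number {
  if (!bia) return -3;                      // no BIA program at all
  const coverage = bia.functionsWithRpoRto / bia.totalFunctions;
  let adj = Math.round(coverage * 10);      // up to +10 for full RPO/RTO coverage
  if (bia.criticalFunctions > 0 && coverage < 1) adj -= 5; // known critical gap
  return adj;
}

function decisionSpeedAdjustment(sessionMin: number, plannedMin: number): number {
  const ratio = sessionMin / plannedMin;
  if (ratio <= 1.0) return 10;              // finished on schedule
  if (ratio > 1.5) return -10;              // more than 50% over planned time
  return 0;                                 // modest overrun: no adjustment
}

function executiveAlignmentAdjustment(avgConfidence: number): number {
  return avgConfidence >= 70 ? 5 : 0;       // confidence on a 0–100 scale
}
```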
3.4 Overall score
overall = Σ (area_score[i] × area_weight[i])
Where area_weight comes from the table in Section 2. The weights sum to 1.0, so the overall score remains in the 0–100 range.
3.5 Clamping
All intermediate and final scores are clamped to the integer range 0–100. A theoretically extreme penalty cannot drive a score below 0, and no bonus can push it above 100.
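The roll-up in §3.4–3.5 is a weighted sum followed by a clamp. A minimal sketch, using the weights from the Section 2 table (the function name and map keys are illustrative):

```typescript
// Capability weights from Section 2 (sum = 1.00).
const WEIGHTS: Record<string, number> = {
  detection_monitoring: 0.15,
  communication_readiness: 0.15,
  containment_response: 0.15,
  operational_recovery: 0.15,
  decision_speed: 0.10,
  executive_alignment: 0.10,
  incident_command: 0.10,
  regulatory_readiness: 0.10,
};

function overallScore(areaScores: Record<string, number>): number {
  let total = 0;
  for (const [area, weight] of Object.entries(WEIGHTS)) {
    total += (areaScores[area] ?? 0) * weight;
  }
  // Clamp to the integer range 0–100 (§3.5).
  return Math.max(0, Math.min(100, Math.round(total)));
}
```

Because the weights sum to 1.0, uniform area scores pass through unchanged: eight areas at 80 yield an overall score of 80.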
4. Industry Benchmarks
Raw scores are meaningless without a reference point. The AARS is always reported alongside an industry benchmark, enabling comparisons like "your score of 62 is 6 points below the financial services industry average."
4.1 Static baselines
When fewer than five After Action-scored organizations exist in an industry, the benchmark is drawn from published research on that sector's typical cyber maturity. These baselines are calibrated against the IBM Cost of a Data Breach annual reports, sector-specific regulatory guidance, and After Action's own field experience.
Static industry benchmarks (overall score):
| Industry | Benchmark |
|---|---|
| Technology | 71 |
| Financial services | 68 |
| Energy | 60 |
| Government | 58 |
| Healthcare | 55 |
| Default (uncategorized) | 62 |
Per-capability benchmarks are also defined for each industry. For example, financial services has a regulatory readiness benchmark of 74 (highest) and a decision speed benchmark of 64 (lower, reflecting the pace of bureaucratic decision-making in that sector).
4.2 Dynamic benchmarks
Once five or more organizations in an industry have received AARS scores, the static benchmark is replaced with the real mean of those organizations. This is the After Action platform's compounding advantage: the more organizations it scores, the more accurate its benchmarks become.
Dynamic benchmarks track:
- Mean score per capability area
- Standard deviation (for percentile rank computation)
- Sample size
- Trend direction (is the industry itself improving?)
Example: If 47 healthcare organizations have scored, the platform will compute the real mean (say, 57.3) and report a client's score against that — plus a percentile rank like "above 62% of healthcare peers."
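One way to derive a percentile rank from the tracked mean and standard deviation is to assume scores are approximately normal. The sketch below uses the Abramowitz & Stegun 7.1.26 erf approximation (max error ≈ 1.5 × 10⁻⁷); the normality assumption and function names are illustrative, not a statement of the production method.

```typescript
// Polynomial approximation of the error function (Abramowitz & Stegun 7.1.26).
function erf(x: number): number {
  const sign = x < 0 ? -1 : 1;
  x = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * x);
  const y = 1 - (((((1.061405429 * t - 1.453152027) * t) + 1.421413741) * t
    - 0.284496736) * t + 0.254829592) * t * Math.exp(-x * x);
  return sign * y;
}

// Percentile rank of a score against a benchmark distribution (mean, stddev),
// assuming an approximately normal distribution of peer scores.
function percentileRank(score: number, mean: number, stddev: number): number {
  if (stddev === 0) return 50;            // degenerate sample: no spread
  const z = (score - mean) / stddev;
  return Math.round(100 * 0.5 * (1 + erf(z / Math.SQRT2)));
}
```

A score one standard deviation above the peer mean lands at roughly the 84th percentile.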
4.3 Handling small samples
When the sample size is exactly 5 (the threshold), the dynamic benchmark is weighted 50/50 against the static baseline to avoid abrupt shifts. The dynamic weight grows linearly with sample size until it reaches 100% at 20 samples.
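The static-to-dynamic blend described above can be sketched as follows (the function name is illustrative; the 5-sample threshold, 50/50 starting weight, and 20-sample full-weight point are from this section):

```typescript
// Blend a static industry baseline with the observed dynamic mean (§4.3).
function blendedBenchmark(staticBase: number, dynamicMean: number, n: number): number {
  if (n < 5) return staticBase;           // below threshold: static only
  if (n >= 20) return dynamicMean;        // enough data: fully dynamic
  // Linear ramp: dynamic weight 0.5 at n = 5, growing to 1.0 at n = 20.
  const w = 0.5 + 0.5 * (n - 5) / 15;
  return Math.round(w * dynamicMean + (1 - w) * staticBase);
}
```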
5. Prescriptive Recommendations
A score alone is not actionable. The AARS engine pairs every capability area with a tiered set of prescriptive recommendations.
5.1 Tier selection
For each capability area, the engine selects one of four tiers based on the area score:
| Tier | Condition |
|---|---|
| Critical | score < 40 |
| High | score < 60, or (score < benchmark - 15) |
| Moderate | score < 80, or (score < benchmark) |
| On-track | score ≥ 80 and ≥ benchmark |
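The tier table reads as a top-down cascade where the first matching condition wins. A minimal sketch (tier labels and function name are illustrative):

```typescript
type Tier = "critical" | "high" | "moderate" | "on_track";

// Tier selection from §5.1: conditions are evaluated top-down; first match wins.
function selectTier(score: number, benchmark: number): Tier {
  if (score < 40) return "critical";
  if (score < 60 || score < benchmark - 15) return "high";
  if (score < 80 || score < benchmark) return "moderate";
  return "on_track";                  // score ≥ 80 and ≥ benchmark
}
```

Note that benchmark-relative conditions can pull an otherwise healthy score down a tier: a score of 82 against a benchmark of 90 lands at "moderate", not "on-track".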
5.2 Prescription content
Each (area, tier) combination has a pre-defined prescription containing:
- Headline — one-sentence framing of the problem
- Prescription — paragraph explaining the operational risk
- 3–5 action items — specific, concrete, measurable steps
- Framework reference — NIST CSF, CIS, ISO 27001, or SOC 2 control IDs
5.3 Example
Detection & Monitoring at "Critical" tier:
Detection Capability Gap — Immediate Action Required
Your organization cannot reliably detect threats before they escalate. Without functional detection, adversaries operate unimpeded. Invest in foundational monitoring before any other security initiative.
Actions:
- Deploy EDR on all endpoints within 30 days — prioritize crown jewel systems
- Establish 24/7 log monitoring with a minimum 90-day retention policy
- Implement network anomaly detection at egress points
- Conduct a purple team exercise to validate detection coverage against the top 10 MITRE techniques
Framework: NIST CSF DE.CM / DE.AE • MITRE ATT&CK TA0007
Every prescription in the library follows this structure, covering eight capability areas × four tiers = 32 distinct prescription packets. Each one is hand-written by domain experts.
6. Validation and Limitations
6.1 What the AARS does not measure
The AARS is deliberately narrow. It does not measure:
- Technology maturity in isolation — a client could have the best SIEM in the world and still score poorly if their team doesn't know how to use it under pressure
- Compliance-only posture — passing a SOC 2 audit does not guarantee crisis readiness
- Threat landscape per se — the AARS measures readiness to respond, not likelihood of attack
These gaps are intentional. Other dimensions of cyber risk are measured by complementary engines in the After Action platform (see the mispriced-risk, risk-quantification, and business-impact modules).
6.2 Limitations of exercise-based measurement
The AARS is computed from exercise behavior. This has limitations:
- Sample size: a single exercise may not capture every capability. A team might happen to get a scenario that plays to their strengths.
- Scenario bias: the scenario selected influences which gaps surface.
- Hawthorne effect: people behave better when they know they are being observed.
To mitigate these, the After Action platform:
- Recommends running at least two exercises per year (four for high-risk industries)
- Rotates scenarios across adversary types (ransomware, insider threat, supply chain, cloud compromise, social engineering)
- Uses multi-exercise trend analysis (see the trends module) to identify systemic gaps that persist across scenarios
- Measures response consistency under fatigue by tracking confidence degradation over inject order
6.3 Calibration and review
All constants in the scoring formula (severity penalties, capability weights, benchmark values, prescription templates) are reviewed quarterly by After Action's domain experts and updated as needed. Changes are documented in the platform changelog and communicated to carrier partners.
7. Using the AARS for Specific Purposes
7.1 Insurance carrier pricing
Carriers can use the AARS in three ways:
- Premium discount eligibility — organizations scoring 70+ (Tier "Good") qualify for 8–15% discounts in the After Action premium calculator. Scores of 85+ ("Excellent") qualify for 15–25% discounts.
- Mispriced risk detection — combined with the mispriced-risk engine, carriers can identify organizations where current premium is misaligned with actual posture.
- Renewal risk assessment — score trends over time predict which clients are drifting into higher-risk territory.
Discount ranges are calibrated against published insurance industry loss ratios and are conservative by design.
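The discount bands above can be summarized as a simple lookup. This is an illustrative sketch only; the actual premium calculator applies carrier-specific factors within these ranges.

```typescript
// Map an AARS score to its §7.1 discount eligibility band [min, max].
function discountBand(score: number): [number, number] {
  if (score >= 85) return [0.15, 0.25];   // "Excellent"
  if (score >= 70) return [0.08, 0.15];   // "Good"
  return [0, 0];                          // not discount-eligible
}
```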
7.2 Board reporting
Boards should interpret the AARS as:
- Above 85: Strong position. Maintain through regular exercises. Consider showcasing in earnings calls and ESG reports.
- 70–84: Good baseline. Focus investment on closing the remaining capability gaps flagged as "high" or "moderate" tier.
- 55–69: Adequate. Significant improvement opportunities exist. Treat the prescriptive recommendations as a 12-month operational roadmap.
- 40–54: Below average. This is a governance-level concern. Commission an enterprise risk review.
- Below 40: Critical. Assume an active breach within 12 months and invest accordingly.
7.3 Regulatory submissions
The AARS is suitable as supporting evidence for:
- SEC cyber disclosure filings (Regulation S-K Item 106 and Form 8-K Item 1.05)
- HIPAA Security Rule §164.308(a)(1)(ii)(A) (risk analysis)
- NYDFS 500.2 / 500.9 (cybersecurity program / risk assessment)
- GLBA Safeguards Rule §314.4(b) (written risk assessment)
Every prescription in the AARS engine cites a specific control ID, which maps cleanly to regulatory audit findings.
8. Technical Implementation
8.1 Pure function architecture
The AARS scoring engine is implemented as a single pure function in TypeScript:
function calculateReadinessScore(input: ReadinessInput): ReadinessResult
- No database dependency — all data is passed as function arguments
- No network dependency — no API calls, no external services
- No LLM dependency — deterministic rule engine, no generative models
- Runs in <100ms — suitable for client-side or edge deployment
This architecture means the AARS can be computed anywhere: server-side during exercise debriefs, client-side in real-time dashboards, embedded in carrier risk models, or called from third-party tools via an SDK.
8.2 Testing
The AARS engine is covered by 100+ unit tests in the After Action platform, validating:
- Known inputs produce expected outputs (regression coverage)
- Edge cases (0 gaps, 100 gaps, missing fields, extreme confidence values)
- Benchmark lookup for every supported industry
- Prescription tier selection for boundary conditions
- BIA adjustment edge cases (no BIA, partial BIA, full coverage)
8.3 Versioning
The AARS methodology is versioned. Scores computed under different versions are not directly comparable. The current version is 1.0. Any future changes will be documented in a published changelog and version numbers will be embedded in all exported scores and certificates.
9. Licensing and Reproducibility
The methodology described in this whitepaper is disclosed for transparency. The implementation is proprietary and protected as a trade secret.
Commercial use requires a licensing agreement with After Action. Permitted uses under the platform Terms of Service include:
- Viewing and receiving your organization's own AARS score
- Exporting your score and prescriptions for internal use
- Sharing your certificate and score with insurance carriers and auditors
Prohibited uses include:
- Reimplementing the scoring engine using these formulas
- Using the methodology to produce a competitive scoring product
- Licensing or sublicensing the methodology to third parties
For commercial licensing inquiries, contact licensing@afteraction.dev.
10. Contact and Updates
This whitepaper will be updated whenever the AARS methodology changes materially. The current version is always available at:
https://afteraction.dev/whitepapers/readiness-score
Questions and feedback:
- General inquiries: hello@afteraction.dev
- Technical questions: engineering@afteraction.dev
- Carrier partnerships: partnerships@afteraction.dev
- Licensing: licensing@afteraction.dev
Appendix A — Full Constants Reference
For completeness, all numeric constants used in the AARS scoring formula are reproduced here:
Severity penalties
critical: 25
high: 15
medium: 8
low: 3
info: 0
Capability weights (sum = 1.00)
detection_monitoring: 0.15
communication_readiness: 0.15
containment_response: 0.15
operational_recovery: 0.15
decision_speed: 0.10
executive_alignment: 0.10
incident_command: 0.10
regulatory_readiness: 0.10
BIA adjustments (operational recovery)
full_rpo_rto_coverage_bonus: up to +10
uncovered_critical_penalty: -5
no_bia_program_penalty: -3
Timing adjustments (decision speed)
on_time_bonus: +10 (session ≤ planned duration)
significantly_over: -10 (session > 1.5 × planned)
Executive confidence bonus
high_confidence_bonus: +5 (avg confidence ≥ 70/100)
Remediation multiplier
remediated_gap_penalty_reduction: 0.5 (remediated gaps penalized at half rate)
Dynamic benchmark threshold
min_sample_size_for_dynamic: 5
full_weight_sample_size: 20
Score clamping
min_score: 0
max_score: 100
Appendix B — Framework Control References
Every prescription in the AARS engine cites specific controls. A partial mapping:
Detection & Monitoring
- NIST CSF DE.CM-1 through DE.CM-8
- NIST CSF DE.AE-2, DE.AE-5
- MITRE ATT&CK TA0007 (Discovery)
- CIS Control 8 (Audit Log Management)
- SOC 2 CC7.2
Crisis Communications
- NIST CSF RS.CO-1 through RS.CO-5
- NIST SP 800-61r3 §3.2.7 (Communication)
- HIPAA §164.404 (Notification)
- GDPR Articles 33–34
- SOC 2 CC2.2
Containment & Response
- NIST CSF RS.MI-1 through RS.MI-3
- NIST CSF RS.AN-1 through RS.AN-5
- NIST SP 800-61r3 §3.3 (Containment)
- CIS Control 17 (Incident Response Management)
Operational Recovery
- NIST CSF RC.RP-1
- NIST CSF RC.CO-1, RC.CO-2
- ISO 27001:2022 A.5.29–A.5.30 (Business Continuity, ICT Readiness)
- SOC 2 A1.2
Decision Speed & Executive Alignment
- NIST CSF GV.OV-1 through GV.OV-3
- NIST SP 800-61r3 §2.3 (Coordination)
- COBIT 2019 APO12
Incident Command
- NIST SP 800-61r3 §2.4 (Incident Handler Communications)
- CIS Control 17
- ISO 27001:2022 A.5.24–A.5.28 (Incident Management)
Regulatory Readiness
- NIST CSF GV.PO
- HIPAA §164.404
- GDPR Article 33
- PCI-DSS 4.0 Requirement 12.10
- NYDFS 500.17
This whitepaper describes the After Action Readiness Score methodology as implemented in the After Action platform version 1.0 (April 2026). Constants and formulas are subject to periodic revision. Consult the latest version of this document at https://afteraction.dev/whitepapers/readiness-score before quoting it in external contexts.
© 2024-2026 After Action. All rights reserved. Methodology disclosed for transparency. Implementation is proprietary trade secret.