FAIR Risk Quantification at After Action
How we turn exercise gaps into annualized loss expectancy
After Action | Version 1.0 | April 2026
Executive Summary
Most cyber risk conversations happen in words: "critical gap", "high exposure", "significant risk". These descriptions are useful for prioritization but useless for budget decisions. A board asking "how much should we spend on endpoint detection?" needs a dollar answer, not an adjective.
The Factor Analysis of Information Risk (FAIR) methodology, originally developed by Jack Jones while CISO at Nationwide and later adopted by The Open Group as an open standard, converts qualitative cyber risk into quantitative dollar ranges, and is compatible with the ISO 31000 risk management framework. After Action implements a full FAIR engine with Monte Carlo simulation (10,000 iterations per scenario), integrated directly with the exercise gap data collected during tabletop exercises.
This whitepaper documents the methodology, the Monte Carlo implementation, and the exercise-to-FAIR bridging logic that makes it all automatic.
1. Why FAIR
1.1 The problem with qualitative scoring
Most cyber risk assessments produce color-coded matrices: red/amber/green, 1-5 scales, "high/medium/low". These are useful for communicating relative priority but fail when executives ask:
- "How much annual loss does this gap actually create?"
- "If we fix this, what's our ROI?"
- "At what point does the cost of mitigation exceed the expected loss?"
- "What's our 90th percentile worst-case scenario?"
These are budget questions, not color questions. They need money answers.
1.2 FAIR answers these questions
FAIR decomposes cyber risk into measurable, composable sub-factors and produces a probability distribution of financial loss. The output is a dollar range with confidence intervals: "90% confident that annual loss from this scenario falls between $220K and $1.8M, with an expected value of $640K."
That's a number a CFO can act on.
1.3 FAIR is an international standard
- Compatible with the ISO 31000 risk management framework
- Adopted by The Open Group as the O-RT (Risk Taxonomy) and O-RA (Risk Analysis) standards in 2013
- Used by Fortune 500 risk committees, cyber insurance carriers, and government agencies
Using FAIR isn't experimental. It's what mature risk programs already do.
2. The FAIR Ontology
FAIR breaks down cyber risk into a hierarchy of factors:
Risk (Annualized Loss Expectancy)
├── Loss Event Frequency (LEF)
│   ├── Threat Event Frequency (TEF) — attempts per year
│   └── Vulnerability — probability an attempt succeeds
└── Loss Magnitude (per event)
    ├── Primary Loss — direct costs
    │   (detection, response, recovery, lost revenue during downtime)
    └── Secondary Loss — indirect costs
        (fines, lawsuits, reputation, customer churn)
        × Secondary Loss Event Frequency (probability secondary losses occur)
Core formula
LEF = TEF × Vulnerability
Expected loss per event = PrimaryLoss + (SecondaryLEF × SecondaryLoss)
ALE = LEF × Expected loss per event
Example
A ransomware scenario:
- TEF: 0.5 attempts per year (1 in 2 years)
- Vulnerability: 0.4 (40% chance an attempt succeeds given current controls)
- LEF: 0.5 × 0.4 = 0.2 events per year (1 every 5 years)
- Primary Loss: $250K (incident response, recovery, downtime)
- Secondary LEF: 0.3 (30% chance secondary losses occur — regulatory fines, lawsuits)
- Secondary Loss: $1,500K
- Expected loss per event: $250K + (0.3 × $1,500K) = $700K
- ALE: 0.2 × $700K = $140K per year
A board can budget against $140K. A board cannot budget against "high".
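The arithmetic above can be checked with a few lines of TypeScript. This is a minimal sketch: `PointScenario` and `pointEstimateALE` are illustrative names, not the shipped API in src/lib/risk-quantification.ts.

```typescript
interface PointScenario {
  tef: number;           // threat events per year
  vulnerability: number; // probability an attempt succeeds
  primaryLoss: number;   // direct dollars per event
  secondaryLef: number;  // probability secondary losses occur
  secondaryLoss: number; // dollars when secondary losses occur
}

// Point-estimate ALE, mirroring the core formula above:
// ALE = (TEF × Vulnerability) × (Primary + SecondaryLEF × Secondary)
function pointEstimateALE(s: PointScenario): number {
  const lef = s.tef * s.vulnerability;
  const lossPerEvent = s.primaryLoss + s.secondaryLef * s.secondaryLoss;
  return lef * lossPerEvent;
}

// The ransomware example: 0.2 events/year × $700K per event
const ale = pointEstimateALE({
  tef: 0.5,
  vulnerability: 0.4,
  primaryLoss: 250_000,
  secondaryLef: 0.3,
  secondaryLoss: 1_500_000,
}); // ≈ $140K per year
```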
3. Monte Carlo Simulation
3.1 Point estimates are wrong
The example above uses point estimates. In reality, every factor is uncertain:
- TEF might be 0.3 to 0.8 attempts per year, not exactly 0.5
- Vulnerability might be 0.25 to 0.55, not exactly 0.4
- Primary loss ranges wildly based on how bad the incident is
Multiplying uncertain numbers compounds the uncertainty. The correct output isn't a single dollar figure — it's a distribution.
3.2 Beta-PERT distribution
For each factor, After Action accepts a Beta-PERT distribution defined by four parameters:
- min — optimistic bound
- mostLikely — expected value
- max — pessimistic bound
- confidence — 1-5, how tightly to cluster around mostLikely
Beta-PERT is the industry standard for expert-elicited estimates because it's flexible enough to model asymmetric distributions (the "long tail" of worst-case scenarios) while being simple enough for domain experts to parameterize.
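A Beta-PERT sample can be drawn by mapping the three-point estimate to a Beta distribution and rescaling. The sketch below assumes the classic PERT shape weighting (lambda = 4) and omits the confidence parameter; all names are illustrative, not the shipped samplePERT implementation.

```typescript
interface PertRange {
  min: number;
  mostLikely: number;
  max: number;
}

// Standard normal via Box-Muller.
function gaussian(): number {
  const u = 1 - Math.random(); // (0, 1], avoids log(0)
  const v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

// Marsaglia-Tsang gamma sampler, valid for shape >= 1
// (PERT shape parameters are always >= 1, so this suffices here).
function sampleGamma(shape: number): number {
  const d = shape - 1 / 3;
  const c = 1 / Math.sqrt(9 * d);
  for (;;) {
    let x: number, v: number;
    do {
      x = gaussian();
      v = 1 + c * x;
    } while (v <= 0);
    v = v * v * v;
    const u = Math.random();
    if (u < 1 - 0.0331 * x ** 4) return d * v;
    if (Math.log(u) < 0.5 * x * x + d * (1 - v + Math.log(v))) return d * v;
  }
}

function samplePERT({ min, mostLikely, max }: PertRange): number {
  if (max === min) return min; // degenerate range
  const lambda = 4; // classic PERT weighting of the mode
  const alpha = 1 + (lambda * (mostLikely - min)) / (max - min);
  const beta = 1 + (lambda * (max - mostLikely)) / (max - min);
  // Beta(alpha, beta) sample as Ga / (Ga + Gb), rescaled to [min, max]
  const ga = sampleGamma(alpha);
  const gb = sampleGamma(beta);
  return min + (ga / (ga + gb)) * (max - min);
}
```

With lambda = 4, the distribution's mean is (min + 4 × mostLikely + max) / 6, which is why PERT estimates weight the mode so heavily.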
3.3 The simulation loop
For each scenario, After Action runs 10,000 iterations:
for (let i = 0; i < 10_000; i++) {
  const tef = samplePERT(scenario.tef);
  const vuln = samplePERT(scenario.vulnerability);
  const lef = tef * vuln;

  const primary = samplePERT(scenario.primaryLoss);
  const secondaryLef = samplePERT(scenario.secondaryLef);
  const secondary = samplePERT(scenario.secondaryLoss);
  const totalLossPerEvent = primary + secondaryLef * secondary;

  ale[i] = lef * totalLossPerEvent;
}
3.4 Outputs
After 10,000 iterations we have a population of 10,000 possible annual loss values. From this we compute:
- Mean — expected annual loss (the "headline" number)
- p10 — optimistic scenario (10% of iterations are lower)
- p50 (median) — the middle outcome
- p90 — pessimistic scenario (10% of iterations are higher)
- Max — worst observed iteration
- Loss exceedance curve — P(loss > X) for any X
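The statistics above can all be read off the sorted simulation output. The sketch below uses a nearest-rank percentile; `summarize` is an illustrative helper, and the production engine may interpolate percentiles differently.

```typescript
// Summary statistics over the simulated annual-loss values.
function summarize(ale: number[]) {
  const sorted = [...ale].sort((a, b) => a - b);
  const n = sorted.length;

  // Nearest-rank percentile: p10 means 10% of iterations are lower.
  const pct = (p: number) => sorted[Math.min(n - 1, Math.floor((p / 100) * n))];

  const mean = sorted.reduce((s, x) => s + x, 0) / n;

  // One point on the loss exceedance curve: P(loss > x).
  const exceedance = (x: number) => sorted.filter((v) => v > x).length / n;

  return {
    mean,
    p10: pct(10),
    p50: pct(50),
    p90: pct(90),
    max: sorted[n - 1],
    exceedance,
  };
}
```

Evaluating `exceedance` over a grid of loss thresholds yields the full curve plotted in the UI.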
3.5 Loss exceedance curve
The exceedance curve is the single most useful output for executive conversations. It reads:
"There's a 23% chance our annual loss from this scenario exceeds $500K. There's a 5% chance it exceeds $2M. There's a 0.5% chance it exceeds $10M."
Boards can look at the curve and decide where they're comfortable. If they accept up to a 10% chance of $1M loss, they can see exactly how much additional control spend would shift the curve.
4. Exercise-to-FAIR Bridging
4.1 The problem FAIR doesn't solve
FAIR assumes you can parameterize the factors accurately. In practice, parameterizing TEF, Vulnerability, Primary Loss, and Secondary Loss is hard. Most organizations don't have the data and can't afford to hire a FAIR consultant ($50K-$150K per engagement).
4.2 What After Action solves
After Action's exerciseToFairScenarios() function converts exercise gaps into FAIR scenarios automatically:
- Groups open gaps by category (detection, containment, recovery, etc.)
- Looks up default PERT ranges for each category (calibrated from industry loss data)
- Adjusts the ranges based on gap severity (critical severity → higher vulnerability)
- Scales loss magnitudes by industry multiplier and employee count
- Returns a fully parameterized set of FAIR scenarios ready for simulation
No consultant required. A completed exercise becomes a quantified annual loss expectancy in roughly 300 milliseconds.
4.3 Severity multipliers [ASSUMPTION]
critical: 1.5x vulnerability multiplier, 1.3x loss multiplier
high: 1.25x vulnerability, 1.15x loss
medium: 1.0x (baseline)
low: 0.8x vulnerability, 0.9x loss
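Applying these multipliers is a matter of scaling each PERT range, with one subtlety: scaled probabilities must be capped at 1.0. The sketch below uses the [ASSUMPTION] multiplier values above; `scalePert` and the table name are illustrative, not the shipped API.

```typescript
type Severity = "critical" | "high" | "medium" | "low";

// The [ASSUMPTION] severity multipliers from section 4.3.
const SEVERITY_MULTIPLIERS: Record<Severity, { vuln: number; loss: number }> = {
  critical: { vuln: 1.5, loss: 1.3 },
  high: { vuln: 1.25, loss: 1.15 },
  medium: { vuln: 1.0, loss: 1.0 },
  low: { vuln: 0.8, loss: 0.9 },
};

interface PertRange {
  min: number;
  mostLikely: number;
  max: number;
}

// Scale a PERT range by k, capping each bound (probabilities stay <= 1).
function scalePert(r: PertRange, k: number, cap = Infinity): PertRange {
  return {
    min: Math.min(cap, r.min * k),
    mostLikely: Math.min(cap, r.mostLikely * k),
    max: Math.min(cap, r.max * k),
  };
}

// Example: a critical detection gap raises vulnerability by 1.5x;
// the max bound (0.7 × 1.5 = 1.05) is capped at 1.0.
const vuln = scalePert(
  { min: 0.2, mostLikely: 0.4, max: 0.7 },
  SEVERITY_MULTIPLIERS.critical.vuln,
  1,
);
```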
4.4 Category → default PERT ranges [ASSUMPTION]
Each gap category has a default scenario template with PERT-distributed parameters. For example, a "detection" category gap maps to:
scenarioName: "Undetected intrusion leading to data exfiltration"
tef: { min: 0.3, mostLikely: 0.6, max: 1.2 }
vulnerability: { min: 0.2, mostLikely: 0.4, max: 0.7 }
primaryLoss: { min: $150K, mostLikely: $400K, max: $1.2M }
secondaryLoss: { min: $300K, mostLikely: $1M, max: $4M }
secondaryLef: { min: 0.2, mostLikely: 0.4, max: 0.7 }
These values are calibrated against published breach cost data (IBM/Ponemon) and updated quarterly.
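Encoded as data, the detection template above might look like the following. This is an [ASSUMPTION] about shape only: `FairScenarioTemplate` is an illustrative type, not the shipped schema, and the numbers are simply those listed above.

```typescript
interface PertRange {
  min: number;
  mostLikely: number;
  max: number;
}

interface FairScenarioTemplate {
  scenarioName: string;
  tef: PertRange;           // threat events per year
  vulnerability: PertRange; // probability an attempt succeeds
  primaryLoss: PertRange;   // dollars
  secondaryLoss: PertRange; // dollars
  secondaryLef: PertRange;  // probability secondary losses occur
}

// The "detection" category default from section 4.4.
const DETECTION_DEFAULT: FairScenarioTemplate = {
  scenarioName: "Undetected intrusion leading to data exfiltration",
  tef: { min: 0.3, mostLikely: 0.6, max: 1.2 },
  vulnerability: { min: 0.2, mostLikely: 0.4, max: 0.7 },
  primaryLoss: { min: 150_000, mostLikely: 400_000, max: 1_200_000 },
  secondaryLoss: { min: 300_000, mostLikely: 1_000_000, max: 4_000_000 },
  secondaryLef: { min: 0.2, mostLikely: 0.4, max: 0.7 },
};
```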
5. Breach Cost Regression Model
As an independent sanity check on the FAIR output, After Action runs a parallel breach cost regression model based on the IBM Cost of a Data Breach methodology.
5.1 Base cost
base_per_record_cost = $165 (2024-2025 average)
base_cost_floor = $500,000 (minimum for small orgs)
5.2 Industry multipliers [ASSUMPTION]
From IBM Ponemon sector reports:
| Industry | Multiplier |
|---|---|
| Healthcare | 1.55 |
| Financial Services | 1.40 |
| Pharmaceuticals | 1.30 |
| Energy | 1.20 |
| Technology | 1.15 |
| Retail | 0.95 |
| Manufacturing | 0.95 |
| Government | 0.90 |
| Hospitality | 0.80 |
| Default | 1.00 |
5.3 Breach type multipliers [ASSUMPTION]
| Type | Multiplier |
|---|---|
| Malicious outsider | 1.25 |
| Malicious insider | 1.15 |
| Accidental | 0.85 |
| Lost device | 0.75 |
5.4 Cost breakdown [ASSUMPTION]
Per IBM Ponemon research:
- Detection & escalation: 29%
- Notification: 6%
- Post-breach response: 27%
- Lost business: 38% (the hidden cost most executives underestimate)
5.5 Formula
records = provided_count || (employee_count × 50) || 10000
per_record = base_per_record_cost × industry_mult × breach_type_mult
total = max(base_cost_floor, records × per_record)
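The regression can be sketched directly from the formula above. The multiplier tables are the [ASSUMPTION] values from sections 5.2-5.3, and `estimateBreachCostSketch` with its input shape is an illustrative name, not the shipped estimateBreachCost signature.

```typescript
const BASE_PER_RECORD = 165;      // 2024-2025 average, per section 5.1
const BASE_COST_FLOOR = 500_000;  // minimum for small orgs

// [ASSUMPTION] industry multipliers from section 5.2.
const INDUSTRY_MULT: Record<string, number> = {
  healthcare: 1.55, financialServices: 1.4, pharmaceuticals: 1.3,
  energy: 1.2, technology: 1.15, retail: 0.95, manufacturing: 0.95,
  government: 0.9, hospitality: 0.8,
};

// [ASSUMPTION] breach type multipliers from section 5.3.
const BREACH_TYPE_MULT: Record<string, number> = {
  maliciousOutsider: 1.25, maliciousInsider: 1.15,
  accidental: 0.85, lostDevice: 0.75,
};

interface BreachInput {
  recordCount?: number;
  employeeCount?: number;
  industry: string;
  breachType: string;
}

function estimateBreachCostSketch(input: BreachInput): number {
  // Fallback chain from the formula: explicit record count,
  // else 50 records per employee, else a 10,000-record default.
  const records =
    input.recordCount ??
    (input.employeeCount ? input.employeeCount * 50 : 10_000);
  const perRecord =
    BASE_PER_RECORD *
    (INDUSTRY_MULT[input.industry] ?? 1.0) *     // unknown industry → 1.00
    (BREACH_TYPE_MULT[input.breachType] ?? 1.0);
  return Math.max(BASE_COST_FLOOR, records * perRecord);
}

// Example: a 40,000-record healthcare breach by a malicious outsider:
// 40,000 × $165 × 1.55 × 1.25, roughly $12.8M.
const cost = estimateBreachCostSketch({
  recordCount: 40_000,
  industry: "healthcare",
  breachType: "maliciousOutsider",
});
```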
When the FAIR output and the breach cost regression agree within ±50%, the numbers are defensible. When they diverge significantly, something's wrong with the input data and a human should review.
6. Implementation
6.1 Pure function architecture
The FAIR engine is a single file: src/lib/risk-quantification.ts. It exports:
- quantifyRisk(scenario, iterations) — runs Monte Carlo, returns the full result
- estimateBreachCost(input) — runs the regression model
- exerciseToFairScenarios(data) — bridges exercise data to FAIR scenarios
- compareScenarios(results) — side-by-side comparison of current vs. remediated
- samplePERT(range) — the Beta-PERT sampling primitive
Every function is pure (no DB, no network, no LLM). 10,000 iterations complete in ~300ms.
6.2 Surfacing in the platform
The FAIR engine is surfaced at /app/client/risk-quantification. Clients see:
- 4 StatCards: expected annual loss, 90th percentile, 10th percentile, scenario count
- Independent breach cost regression sanity check
- Per-scenario risk cards ranked by mean ALE
- Inline loss exceedance curve SVG for the top scenario
7. Licensing
The FAIR methodology itself is public. After Action's implementation, including:
- The exercise-to-FAIR bridge (exerciseToFairScenarios())
- The calibrated category defaults
- The severity-adjusted multipliers
- The integration with the readiness scoring engine
- The Monte Carlo simulation code
...is proprietary trade secret. Commercial licensing available via licensing@afteraction.dev.
© 2024-2026 After Action. FAIR is a methodology published by The Open Group. After Action's implementation is proprietary. Contact licensing@afteraction.dev for commercial terms.