
FAIR vs Monte Carlo Cyber Risk Quantification: A Methodology Guide

Cyber risk quantification presentations often pose "FAIR vs Monte Carlo" as a methodology choice. The framing is wrong: FAIR and Monte Carlo are complementary, not competing. FAIR is what to measure; Monte Carlo is how to calculate. The best modern CRQ practice uses both, often layered with calibrated estimation techniques from Hubbard Decision Research. Here's how the pieces fit together.

By Nick Shevelyov · 9 min read

FAIR and Monte Carlo: complementary, not competing

FAIR (Factor Analysis of Information Risk) is a methodology for decomposing cyber risk into measurable components. Monte Carlo is a calculation technique that handles uncertainty in those components by running probabilistic simulations. They serve different purposes in the same risk-quantification workflow.

The "FAIR vs Monte Carlo" framing comes from older CRQ debates where some practitioners ran point-estimate FAIR (no Monte Carlo) and others ran scenario-based Monte Carlo (without FAIR's structural rigor). Modern enterprise CRQ has converged on combining both — FAIR provides the input decomposition, Monte Carlo runs the math on probabilistic inputs. Treating them as alternatives obscures the actual methodology choice.

The actual methodology choices that matter:

  • How to structure risk decomposition (FAIR is the dominant answer)
  • How to calculate outcomes (point estimates vs Monte Carlo simulation)
  • How to estimate the inputs (subjective expert opinion, calibrated estimation, empirical data)
  • How to handle uncertainty (hidden in point estimates, acknowledged with distributions, or treated explicitly via calibration and value-of-information analysis)

The rest of this guide walks through each piece — what FAIR provides, what Monte Carlo provides, how to combine them, and where Hubbard's methodology adds rigor on top.

FAIR: the methodology

FAIR is an open methodology for decomposing cyber risk, created by Jack Jones, stewarded by the FAIR Institute, and adopted as a standard by The Open Group. The framework provides structural definitions for the inputs that combine to produce annual loss expectancy.

The FAIR decomposition

FAIR breaks risk into a hierarchy of measurable components:

  • Threat Event Frequency (TEF) — how often the threat actor attempts the attack
  • Vulnerability — the fraction of attack attempts that succeed
  • Loss Event Frequency (LEF) = TEF × Vulnerability
  • Primary Loss Magnitude — direct response, recovery, replacement costs
  • Secondary Loss Magnitude — regulatory fines, customer notification, reputation damage, litigation
  • Probable Loss Magnitude (PLM) = Primary + Secondary
  • Annual Loss Expectancy (ALE) = LEF × PLM

Each component has a precise definition that forces analysts to think systematically. The decomposition is the value: it ensures every risk-relevant factor gets explicitly considered, and it makes the analysis auditable (each input can be sourced, defended, and challenged).
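
To make the arithmetic concrete, here is a minimal point-estimate walk-through of the hierarchy; all numbers are illustrative, not benchmarks:

```python
# Point-estimate FAIR walk-through (illustrative numbers only).
tef = 12                 # Threat Event Frequency: attack attempts per year
vulnerability = 0.6      # fraction of attempts that succeed
lef = tef * vulnerability            # Loss Event Frequency = 7.2 events/year

primary = 150_000        # response, recovery, replacement costs per event
secondary = 100_000      # fines, notification, reputation, litigation per event
plm = primary + secondary            # Probable Loss Magnitude = $250K per event

ale = lef * plm                      # Annual Loss Expectancy = $1,800,000
print(f"LEF = {lef:.1f}/yr, PLM = ${plm:,}, ALE = ${ale:,.0f}")
```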

Strengths of FAIR

  • Open standard. The methodology is openly documented by the FAIR Institute and adopted as a formal standard by The Open Group; it's not proprietary to any vendor.
  • Auditability. Every input is explicit and defensible. Risk committees, auditors, and underwriters can challenge specific components.
  • Industry alignment. Most credible CRQ platforms implement FAIR-based input models. Using FAIR makes your analysis comparable to industry peers.
  • Discipline. The structured decomposition prevents hand-waving. You can't hide imprecision behind aggregate statements.

Limitation: point estimates produce false precision

Basic FAIR analysis treats each input as a single number: Threat Event Frequency = 12 per year, Vulnerability = 0.6, and so on. In reality, each input carries uncertainty. TEF might be anywhere between 5 and 25 per year, with a most likely value around 12. Treating uncertain inputs as precise produces ALE estimates that look more confident than the evidence supports.

This is the limitation Monte Carlo addresses.

Monte Carlo: the calculation technique

Monte Carlo simulation handles input uncertainty by treating each FAIR component as a probability distribution rather than a point estimate. Instead of "TEF is 12 per year," the input becomes "TEF is most likely 12, with 90% confidence between 5 and 25, following a PERT distribution."

The simulation runs thousands of trials — typically 10,000 to 100,000. Each trial draws different values from each input distribution and computes the resulting ALE for that scenario. The output is a distribution of possible annual losses, not a single number.
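
A minimal simulation sketch, assuming illustrative (minimum, most likely, maximum) inputs and sampling the PERT via its beta-distribution form; the numbers are arbitrary and won't reproduce the example percentiles quoted below:

```python
# Minimal FAIR + Monte Carlo sketch (illustrative inputs only).
import numpy as np

rng = np.random.default_rng(7)
TRIALS = 50_000

def pert(low, mode, high, size, lam=4.0):
    """Sample a modified-PERT distribution via its scaled-beta form."""
    a = 1 + lam * (mode - low) / (high - low)
    b = 1 + lam * (high - mode) / (high - low)
    return low + rng.beta(a, b, size) * (high - low)

# Each FAIR input as a (min, most likely, max) distribution.
tef  = pert(5, 12, 25, TRIALS)                   # threat events per year
vuln = pert(0.3, 0.6, 0.9, TRIALS)               # fraction that succeed
plm  = pert(50_000, 250_000, 2_000_000, TRIALS)  # loss per event, USD

ale = tef * vuln * plm   # per-trial ALE: (TEF x Vulnerability) x PLM

for p in (50, 75, 95, 99):
    print(f"{p}th percentile annual loss: ${np.percentile(ale, p):,.0f}")
```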

What Monte Carlo output looks like

Output is typically expressed in percentiles:

  • 50th percentile (median): $300K — half of simulations produced losses below this; half above
  • 75th percentile: $1.2M — three quarters of simulations produced losses below this
  • 95th percentile: $4.0M — only 5% of simulations produced losses above this (the tail-risk number)
  • 99th percentile: $8.5M — extreme tail loss

The 50th percentile gives the typical-year loss; the 95th and 99th give tail-risk visibility: the catastrophic-case losses that deterministic ALE estimates obscure. Note that in fat-tailed distributions the median sits well below the mean, so it should not be read as the expected (average) loss.

Why tail risk matters in cyber

Cyber loss distributions are fat-tailed. Most years produce no major incidents; some years produce catastrophic losses. A point-estimate ALE of $300K hides the fact that the 95th percentile loss might be $4M, and a single bad year can break the company. Risk decisions that rely only on expected loss systematically underweight the catastrophic scenarios that drive most insurance claims and most company failures.

Monte Carlo output makes tail risk visible. Boards can see "the median annual loss is $300K, but there is a 5% chance of losing more than $4M in any given year." That visibility changes risk decisions, typically toward more conservative tail-risk hedging: higher insurance limits, more redundancy investment, more pre-positioned incident response.

FAIR + Monte Carlo: the modern standard

The modern enterprise CRQ standard combines both:

  1. FAIR provides the structural decomposition — what inputs to measure, how they combine into ALE
  2. Monte Carlo handles the calculation — running simulations on probabilistic versions of each FAIR input

The combined output is FAIR-traceable (you can defend each component) and distributional (you get percentile loss output, not a single point estimate). Most credible enterprise CRQ platforms implement this combined approach: Safe Security, Kovrr, vCSO.ai's Theodolite, and others use FAIR-derived input models with Monte Carlo simulation engines.

The integration is clean from a methodology perspective. Each FAIR input gets specified as a distribution, typically a PERT (a re-parameterized beta) defined by minimum, most likely, and maximum values. The simulation runs on those distributions, and the output preserves the FAIR decomposition: you can see how each input distribution contributed to the resulting loss distribution, as in the sketch below.
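
As a hedged illustration of that traceability (a simple rank-correlation sensitivity pass, not any particular platform's method), reusing the tef, vuln, plm, and ale arrays from the simulation sketch above:

```python
# Rough sensitivity pass: Spearman rank correlation between each FAIR
# input's samples and the simulated ALE. Reuses arrays from the
# Monte Carlo sketch above.
from scipy.stats import spearmanr

for name, samples in [("TEF", tef), ("Vulnerability", vuln), ("PLM", plm)]:
    rho, _ = spearmanr(samples, ale)
    print(f"{name}: rank correlation with ALE = {rho:+.2f}")
```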

Hubbard's extension: calibrated estimation

Doug Hubbard's methodology, articulated in "How to Measure Anything in Cybersecurity Risk" (co-authored with Richard Seiersen), extends FAIR + Monte Carlo with three additions:

Calibrated probability assessment

The biggest weakness in any subjective-input methodology is human bias in estimation. Most people are systematically overconfident — they produce 90% confidence intervals that are correct only 50% of the time. Hubbard's calibration training (a few hours of structured practice) measurably reduces this overconfidence, producing intervals that are accurate at the stated confidence level.

Calibrated estimators produce dramatically better Monte Carlo inputs than uncalibrated ones. The methodology argument is that subjective inputs can be made statistically reliable — and when they are, the resulting analysis is materially more trustworthy.
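
A toy illustration of what calibration scoring measures; the quiz data below is fabricated for the example:

```python
# Toy calibration check: for each question the estimator gave a 90%
# interval (low, high); score how often the true answer landed inside.
# Uncalibrated estimators typically land far below 90%.
quiz = [  # (low, high, true_value) -- fabricated data
    (5, 25, 12), (100, 400, 650), (0.2, 0.6, 0.5),
    (1_000, 9_000, 14_000), (2, 8, 3), (10, 40, 55),
    (0.1, 0.4, 0.3), (300, 900, 700), (50, 150, 220), (1, 5, 2),
]
hits = sum(low <= true <= high for low, high, true in quiz)
print(f"Stated confidence: 90% | observed hit rate: {hits / len(quiz):.0%}")
```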

Explicit uncertainty handling

Hubbard's methodology treats uncertainty as information, not noise. A wide confidence interval isn't a measurement failure — it's accurate signal that the input is genuinely uncertain. The methodology preserves this uncertainty through to the output, rather than collapsing it into confident-looking point estimates.

Value-of-information analysis

Some measurements are worth more than they cost; others aren't. Hubbard's value-of-information analysis quantifies the expected dollar benefit of reducing uncertainty in any specific input. This lets organizations focus measurement effort on the inputs where reduced uncertainty would most change risk decisions — and skip the rest.
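
A minimal sketch of the core calculation, the expected value of perfect information (EVPI), reusing the ale samples from the simulation sketch above; the $400K control cost is a hypothetical decision on the table:

```python
# EVPI sketch: choose between a control that (in this toy model)
# eliminates the modeled loss, or accepting the loss. Reuses `ale`
# from the Monte Carlo sketch above; the control cost is hypothetical.
import numpy as np

control_cost = 400_000
cost_committed_now = min(ale.mean(), control_cost)             # decide up front
cost_with_perfect_info = np.minimum(ale, control_cost).mean()  # decide per scenario
evpi = cost_committed_now - cost_with_perfect_info
print(f"EVPI (the most any measurement could be worth): ${evpi:,.0f}")
```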

vCSO.ai's Theodolite implements this combined FAIR + Monte Carlo + Hubbard approach via methodology partnership with Hubbard Decision Research. The result is CRQ output suitable for board-grade risk reporting, cyber insurance underwriting conversations, and audit-grade defensibility.

How to choose the right approach

The methodology choice depends on the maturity of your CRQ practice and the decisions the output has to support.

Starting out: basic FAIR with point estimates

For organizations just beginning CRQ, basic FAIR with point estimates produces useful output quickly. The methodology is simpler to teach, the math runs in spreadsheets, and the output is sufficient for early-stage budget conversations. Acknowledged limitation: false precision in point-estimate output. Acceptable trade-off when the alternative is no quantification at all.

Maturing: FAIR + Monte Carlo

Once basic FAIR has produced wins and the team has internalized the methodology, graduate to FAIR + Monte Carlo. The shift exposes the uncertainty buried in earlier point estimates and produces output that captures both expected and tail-risk loss. This is the mainstream enterprise CRQ standard.

Audit-grade: FAIR + Monte Carlo + Hubbard calibration

For organizations where CRQ output drives high-stakes decisions — board oversight, cyber insurance underwriting, M&A diligence, regulatory inquiries — adding Hubbard-style calibrated estimation materially improves output reliability. The investment in calibration training and explicit uncertainty handling pays off when the output has to defend against expert challenge.

Always: scenario-based stress testing alongside

Quantitative methods handle expected and tail-risk loss for "normal" risk distributions. They don't handle truly catastrophic scenarios well: a successful nation-state attack, a well-placed insider acting at the worst possible moment, a CEO-level fraud event. Scenario-based stress testing complements quantification: model the catastrophic scenarios deterministically as separate analyses, alongside the Monte Carlo distributions for normal risk.


vCSO.ai is the operator-led cybersecurity advisory firm of Nick Shevelyov, who spent 15 years as Chief Security Officer at Silicon Valley Bank. Theodolite, vCSO.ai's security platform, implements FAIR + Monte Carlo + Hubbard methodology in production cyber risk quantification across CSPM, DSPM, sensitive data discovery, and risk-based vulnerability findings. For the broader CRQ framing, see our cyber risk quantification guide; for the foundational ALE math, see our annual loss expectancy calculator.

Questions & answers

What is the difference between FAIR and Monte Carlo cyber risk quantification?

FAIR (Factor Analysis of Information Risk) is a methodology — a structured framework for decomposing cyber risk into measurable components. Monte Carlo is a calculation technique — running thousands of probabilistic simulations to model loss distributions. They're not alternatives; they work together. Most modern enterprise CRQ uses FAIR for the input model and Monte Carlo for the math. Pure FAIR with point estimates is simpler but loses tail-risk visibility; FAIR + Monte Carlo produces both expected and percentile loss output.

Is FAIR Monte Carlo?

No, but they complement each other. FAIR is the conceptual framework that decomposes risk into Threat Event Frequency, Vulnerability, Loss Event Frequency, and Probable Loss Magnitude. Monte Carlo is the simulation technique that handles uncertainty in those input components by running thousands of scenarios with probabilistic inputs. Most credible enterprise CRQ tools use FAIR-derived input decomposition with Monte Carlo simulation for the calculation layer.

What is FAIR methodology in cybersecurity?

FAIR (Factor Analysis of Information Risk) is the dominant cybersecurity risk quantification methodology: an open standard stewarded by the FAIR Institute and adopted by The Open Group. It provides the structured decomposition Threat Event Frequency × Vulnerability = Loss Event Frequency, and Loss Event Frequency × Probable Loss Magnitude = Annual Loss Expectancy. The methodology is well-documented, auditable, and widely used in risk-informed cybersecurity programs. Most credible CRQ tools implement FAIR-based input models.

What is Monte Carlo simulation in cybersecurity?

Monte Carlo simulation runs thousands of scenarios with probabilistic inputs to produce loss distributions rather than single-point estimates. In cybersecurity, each input (Threat Event Frequency, Vulnerability, Loss Magnitude) gets a probability distribution rather than a fixed number; the simulation runs 10,000–100,000 trials drawing different values from each distribution; the output is a distribution of annual losses expressed in percentiles (50th: $300K, 75th: $1.2M, 95th: $4M). This captures both expected loss and tail-risk loss.

Which is better, FAIR or Monte Carlo?

Wrong question; they're complementary, not competing. FAIR is "what to measure" (the structural decomposition of risk); Monte Carlo is "how to calculate" (the simulation technique). The best CRQ practice uses FAIR for the input model and Monte Carlo for the math. Pure FAIR with point estimates produces clean methodology but false precision; pure Monte Carlo without FAIR's structural decomposition lacks rigor. Combined, they produce an auditable input methodology with probabilistic output.

What is Hubbard cyber risk methodology?

Doug Hubbard's methodology, described in "How to Measure Anything in Cybersecurity Risk," extends FAIR + Monte Carlo with calibrated probability assessment, explicit uncertainty handling, and value-of-information analysis. The methodology emphasizes that subjective inputs can be made statistically reliable through calibration training, that uncertainty is information (not noise), and that some measurements are worth more than their cost. vCSO.ai's Theodolite implements this combined FAIR + Monte Carlo + Hubbard approach via methodology partnership with Hubbard Decision Research.

Ready to turn this into a working plan?

Nick's team helps growth-stage companies, PE/VC sponsors, and cybersecurity product teams translate security questions into board-ready decisions. First call is strategy, not vendor pitch.