Incident Response Plan Template

An incident response plan defines how your organization detects, contains, and recovers from security incidents. This guide walks through the six phases of the incident response lifecycle, what a working template should include, and the mistakes that turn plans into shelfware.

By Nick Shevelyov · 12 min read

What an incident response plan covers

An incident response plan (IRP) is a documented set of procedures that defines how an organization identifies, contains, eradicates, and recovers from cybersecurity incidents. It is not a technical runbook — it is an organizational document that assigns authority, establishes communication chains, and ensures the right people make the right decisions under pressure.

The plan sits at the intersection of security operations, legal compliance, and executive decision-making. Without one, organizations improvise — and improvisation during a breach produces delayed notifications, evidence spoliation, inconsistent messaging, and regulatory exposure.

Every major compliance framework requires a documented IRP: SOC 2 (CC7.3-CC7.5), ISO 27001 (A.16), NIST CSF (RS.RP), PCI-DSS (Requirement 12.10), HIPAA (164.308(a)(6)), and SEC cybersecurity disclosure rules. The question is not whether you need one — it is whether the one you have will work when tested.

The 6 phases of incident response

NIST SP 800-61 (Computer Security Incident Handling Guide) defines the incident response lifecycle in four phases: Preparation; Detection and Analysis; Containment, Eradication, and Recovery; and Post-Incident Activity. The widely taught six-phase model, popularized by SANS, unpacks the combined middle phase into Containment, Eradication, and Recovery, and renames Post-Incident Activity as Lessons Learned. Most frameworks, including ISO 27035 and CISA guidance, map to these phases with minor variations in terminology. The phases are sequential in theory but overlapping in practice.

1. Preparation

Preparation is everything that happens before an incident occurs. This phase determines whether the remaining five phases will function under pressure or collapse on first contact. Preparation includes:

  • Establishing and training the incident response team (IRT)
  • Deploying detection and monitoring tools (SIEM, EDR, NDR)
  • Documenting escalation paths and decision authorities
  • Pre-contracting external resources: outside counsel, forensics firm, crisis communications
  • Configuring evidence preservation capabilities (log retention, forensic imaging tools)
  • Running tabletop exercises to validate the plan

2. Detection and Analysis

Detection is identifying that an incident may be occurring. Analysis is determining whether the alert is a true positive, what type of incident it is, and what scope is affected. This phase is where most time is spent — the median dwell time (time from compromise to detection) across industries is still measured in days, not hours.

  • Triage alerts from SIEM, EDR, and user reports
  • Classify the incident type: malware, unauthorized access, data exfiltration, denial of service, insider threat
  • Determine initial scope: which systems, data, and users are affected
  • Assign severity level (critical / high / medium / low) based on data sensitivity and business impact
  • Begin the incident log — timestamped, factual, and maintained throughout

3. Containment

Containment stops the incident from spreading while preserving evidence for investigation. There are two containment strategies: short-term (immediate isolation to stop active damage) and long-term (sustained controls that allow investigation to proceed without re-infection).

  • Short-term: network isolation, account disabling, firewall rule changes, DNS sinkholing
  • Long-term: rebuilding credentials, segmenting affected network zones, deploying additional monitoring on adjacent systems
  • Evidence preservation: forensic images before wiping, log snapshots, memory captures

The containment decision often requires executive input — isolating a production database stops revenue, not just the attacker. The IRP must pre-define who has authority to make that call.

4. Eradication

Eradication removes the root cause of the incident from the environment. This is not the same as containment — containment stops the bleeding; eradication removes the source. Activities include:

  • Removing malware, backdoors, and persistence mechanisms
  • Patching the vulnerability that was exploited
  • Resetting compromised credentials across all affected systems
  • Validating that the attacker's access paths are fully closed
  • Scanning adjacent systems for indicators of compromise (IOCs)

5. Recovery

Recovery restores affected systems to normal operation and verifies their integrity before returning them to production. The recovery phase is where organizations often move too fast — restoring from a compromised backup, or putting systems back online before confirming eradication.

  • Restore systems from verified clean backups
  • Rebuild compromised hosts from known-good images
  • Monitor restored systems for signs of re-infection (elevated logging for 30-90 days)
  • Validate data integrity — ensure no unauthorized modifications persist
  • Gradually restore user access with enhanced monitoring

6. Lessons Learned

The lessons-learned phase is the most skipped and the most valuable. Within 5-10 business days of incident closure, the IRT should conduct a structured post-incident review covering:

  • Timeline reconstruction: what happened, when, and how it was detected
  • What worked well in the response
  • What failed or caused delays
  • Specific recommendations for plan, process, or technology changes
  • Updated threat intelligence to feed back into the Preparation phase

Document the findings in a post-incident report. Update the IRP based on what the incident revealed. Organizations that skip this phase repeat the same response failures on the next incident.

What a template should include

A working incident response plan template covers these sections. The document should be self-contained — anyone on the IRT should be able to open it during an incident and find what they need without referencing external documents.

  1. Purpose and scope — what the plan covers, which systems and data are in scope, and what constitutes an "incident" versus a routine event
  2. Definitions — incident severity levels, incident types, key terms (breach vs. incident, containment vs. eradication)
  3. Incident response team roster — names, roles, contact information (including personal cell numbers), and alternates
  4. Escalation matrix — who gets notified at each severity level, within what timeframe, and through what channel
  5. Phase-by-phase procedures — what happens during each of the six phases, with decision points and checklists
  6. Communication plan — internal and external communication templates, spokesperson designation, media holding statements
  7. Regulatory reporting matrix — applicable laws and frameworks, notification deadlines, responsible party, and template language
  8. Evidence handling procedures — chain of custody requirements, forensic imaging standards, log preservation policies
  9. External resource contacts — pre-contracted outside counsel, forensics firm, insurance carrier, law enforcement contacts (FBI, CISA)
  10. Appendices — incident report template, post-incident review template, communication templates, technical runbooks for common scenarios
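An escalation matrix (item 4 above) is simple enough to express as structured data, which also makes it testable. The sketch below is hypothetical: the role names, timings, and channels are placeholders that a real IRP would fill in during the Preparation phase.

```python
# Hypothetical escalation matrix: severity level -> who is notified,
# how fast, and over what channel. All values are placeholders.
ESCALATION_MATRIX = {
    "critical": {
        "notify": ["incident_commander", "executive_team", "legal_counsel"],
        "within_minutes": 15,
        "channel": "secure_messaging",
    },
    "high": {
        "notify": ["incident_commander", "technical_lead"],
        "within_minutes": 60,
        "channel": "secure_messaging",
    },
    "medium": {
        "notify": ["technical_lead"],
        "within_minutes": 240,
        "channel": "ticket",
    },
    "low": {
        "notify": ["on_call_engineer"],
        "within_minutes": 1440,
        "channel": "ticket",
    },
}

def notification_targets(severity: str) -> list[str]:
    """Return who must be notified for a given severity level."""
    return ESCALATION_MATRIX[severity]["notify"]
```

Encoding the matrix this way lets a tabletop exercise verify it mechanically: for any severity, there is exactly one answer to "who gets the call, and how fast."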

Roles and responsibilities

Ambiguity about who does what during an incident is among the most common failure modes. The IRP must assign clear roles with explicit authority boundaries. The following roles apply to most organizations, scaled up or down based on company size.

Incident Commander

The IC owns the response end-to-end. This is typically the CISO, fractional CISO, or VP of Security. The IC makes containment decisions, authorizes external notifications, and serves as the single point of authority during the incident. In organizations that rely on a fractional CISO for strategic oversight, that person fills this role.

Technical Lead

The technical lead directs the hands-on investigation and remediation. This is usually a senior engineer, IT director, or SOC manager. They coordinate forensic analysis, containment actions, and system restoration under the IC's authority.

Legal Counsel

Legal counsel advises on notification obligations, privilege considerations, regulatory exposure, and law enforcement engagement. Ideally, this is outside counsel with cybersecurity incident experience — invoking attorney-client privilege early protects investigation findings from discovery.

Communications Lead

The communications lead manages all internal and external messaging: employee notifications, customer communications, media inquiries, and social media monitoring. This role prevents conflicting statements from multiple spokespeople — a common source of reputational damage during public breaches.

Business Unit Liaison

For incidents affecting specific business units (e.g., customer data in a SaaS product, financial data in a fintech platform), a business unit liaison translates technical findings into business impact and coordinates customer-facing decisions.

Communication plan

The communication plan is the component most organizations leave until last and regret first. Breach communications happen under time pressure, legal scrutiny, and emotional stress. Pre-drafted templates and pre-designated spokespeople are not optional.

Internal communication

  • Incident response team: immediate notification via pre-defined channel (secure messaging, not email if email may be compromised)
  • Executive team: briefed within 1-2 hours of confirmed incident; provided with talking points for board inquiries
  • Employees: notified with appropriate scope — what happened, what they should or should not do, who to contact with questions
  • Board of directors: briefed according to the organization's governance charter; typically within 24 hours for critical incidents

External communication

  • Affected individuals: notification per applicable state and federal breach notification laws, using pre-approved template language
  • Regulators: notification within framework-specific deadlines (see regulatory reporting section below)
  • Law enforcement: FBI IC3, local field office, and/or CISA depending on incident type and severity
  • Cyber insurance carrier: notification within policy-required timeframes (typically 24-72 hours); failure to notify promptly can void coverage
  • Media: holding statement prepared in advance; all inquiries routed to designated spokesperson
  • Customers and partners: proactive notification if their data or services are affected, using pre-drafted templates

Communication principles

Three rules govern all incident communications: (1) say only what you know, not what you suspect; (2) route all external statements through legal review before release; (3) designate one spokesperson — conflicting public statements from multiple executives compound reputational damage.

Regulatory reporting requirements

Regulatory notification deadlines are the highest-stakes element of the communication plan. Missing a deadline can convert a manageable incident into a regulatory enforcement action. The IRP should include a reporting matrix that covers every applicable jurisdiction and framework.

Key US requirements

  • State breach notification laws: all 50 states plus DC require notification to affected individuals; deadlines range from 30 days (Colorado, Florida) to 90 days, with some states requiring "without unreasonable delay"
  • SEC (public companies): material cybersecurity incidents must be disclosed on Form 8-K within four business days of determining materiality
  • HIPAA (healthcare): notification to HHS and affected individuals within 60 days; breaches affecting 500+ individuals require media notice
  • PCI-DSS (payment card data): immediate notification to acquiring bank and card brands
  • GLBA (financial services): notification to primary federal regulator "as soon as possible" and no later than 36 hours after determination
  • CISA (critical infrastructure): covered entities must report incidents within 72 hours and ransom payments within 24 hours under CIRCIA

International requirements

  • GDPR (EU/EEA): 72 hours to notify the supervisory authority; affected individuals must be notified "without undue delay" if the breach poses high risk
  • PIPEDA (Canada): report to the Privacy Commissioner and notify affected individuals "as soon as feasible"
  • UK GDPR: 72 hours to the ICO; follows the same structure as EU GDPR

The IRP should include a regulatory matrix that maps each applicable requirement to: (1) the triggering condition, (2) the notification deadline, (3) the responsible party on the IRT, and (4) template notification language. Review the matrix with legal counsel during the Preparation phase — not during an active incident.
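The four-field matrix described above can be kept as structured data so the soonest deadline is always computable. The sketch below is illustrative: the deadline figures reflect the frameworks discussed in this section, but the triggering conditions, owner roles, and template paths are placeholders to be confirmed with legal counsel.

```python
# Illustrative regulatory reporting matrix. Triggers, owners, and
# template paths are placeholders, not legal advice.
REGULATORY_MATRIX = [
    {
        "framework": "GDPR",
        "trigger": "personal data of EU/EEA residents affected",
        "deadline_hours": 72,
        "owner": "legal_counsel",
        "template": "templates/gdpr_supervisory_authority.md",
    },
    {
        "framework": "HIPAA",
        "trigger": "unsecured PHI affected",
        "deadline_hours": 60 * 24,  # 60 days
        "owner": "legal_counsel",
        "template": "templates/hhs_breach_notice.md",
    },
    {
        "framework": "SEC 8-K",
        "trigger": "incident determined material (public company)",
        "deadline_hours": 4 * 24,  # four business days
        "owner": "cfo",
        "template": "templates/form_8k_item_1_05.md",
    },
]

def due_notifications(triggered: set[str]) -> list[dict]:
    """Return matrix rows whose framework has been triggered,
    soonest deadline first."""
    rows = [r for r in REGULATORY_MATRIX if r["framework"] in triggered]
    return sorted(rows, key=lambda r: r["deadline_hours"])
```

During an incident, the responsible party feeds in the triggered frameworks and works the resulting list top-down, shortest deadline first.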

Testing and tabletop exercises

An untested incident response plan is a hypothesis. Testing validates that the plan works under simulated pressure — that contact information is current, escalation paths function, decision authorities are clear, and the team can execute the documented procedures.

Tabletop exercises

A tabletop exercise is a facilitated discussion-based simulation where the IRT walks through a realistic incident scenario. The facilitator presents injects (new information, complications, media inquiries) at timed intervals and observes how the team responds. Tabletop exercises are the most efficient way to test an IRP because they require no live systems and minimal scheduling overhead, and can be completed in 2-4 hours.

Technical simulations

Technical simulations test the operational components of the plan: can the SOC detect the simulated attack? Can the team execute containment procedures? Do the forensic tools work as documented? These are more resource-intensive than tabletops but reveal technical gaps that discussion-based exercises miss.

Testing cadence

  • Tabletop exercises: at least annually; twice per year recommended, with one executive-level exercise and one IRT-focused exercise
  • Technical simulations: annually, or after major infrastructure changes
  • Plan review: after every real incident, after organizational changes (mergers, leadership transitions), and at least annually
  • Contact list validation: quarterly — phone numbers change, people leave, and outdated contacts during an incident waste critical hours
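The quarterly contact-list check in the cadence above is easy to automate. The sketch below is a hypothetical example: the roster field names, the 90-day window, and the sample dates are assumptions, not taken from any specific tool.

```python
from datetime import date, timedelta

# Flag any IRT roster entry whose details were last verified more
# than 90 days ago. The window and field names are placeholders.
VERIFICATION_WINDOW = timedelta(days=90)

def stale_contacts(roster: list[dict], today: date) -> list[str]:
    """Return names whose contact info is overdue for re-verification."""
    return [
        entry["name"]
        for entry in roster
        if today - entry["last_verified"] > VERIFICATION_WINDOW
    ]

# Example roster with illustrative verification dates.
roster = [
    {"name": "Incident Commander", "last_verified": date(2025, 1, 10)},
    {"name": "Technical Lead", "last_verified": date(2025, 5, 2)},
]
overdue = stale_contacts(roster, today=date(2025, 6, 1))
```

Running a check like this on a schedule turns "quarterly validation" from a calendar reminder into an enforced control.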

Common mistakes

The following failure patterns appear repeatedly across organizations that have an IRP on paper but discover it does not work during a real incident.

  • Writing the plan and never testing it. The most common mistake. Plans degrade over time — contact information becomes stale, documented procedures no longer match the infrastructure, and new team members have never seen the document. Annual tabletop exercises are the minimum viable testing cadence.
  • No pre-contracted external resources. Negotiating an engagement letter with a forensics firm during an active breach adds days to the response. Outside counsel, forensics, and crisis communications should be contracted before an incident occurs — ideally through cyber insurance panel relationships.
  • Storing the plan only on the network. If ransomware encrypts your file shares, the incident response plan is encrypted with them. Maintain offline copies: printed binders, encrypted USB drives, and a copy accessible through a channel that does not depend on corporate infrastructure.
  • No clear decision authority for containment. Containment decisions — isolating a production system, disabling executive accounts, shutting down a revenue-generating service — require pre-authorized decision rights. If the on-call engineer has to wake up three VPs to get permission to isolate a server, the attacker has hours of additional dwell time.
  • Treating the plan as a security-team document. Incident response is cross-functional. Legal, communications, HR, and executive leadership all have roles. If they have not reviewed the plan and participated in a tabletop, they will improvise — and improvised breach communications are how companies end up in the news for the wrong reasons.
  • Skipping the lessons-learned phase. After the adrenaline subsides, teams move on. But the post-incident review is where the plan improves. Organizations that skip it make the same response mistakes on the next incident.
  • No alignment with the risk assessment. The IRP should be informed by the organization's risk assessment — the scenarios the plan prepares for should reflect the risks the assessment identified as highest priority. If the risk assessment flagged ransomware and supply chain compromise as top risks, but the IRP only has a generic "malware" playbook, there is a disconnect.

vCSO.ai provides fractional CISO services that include incident response planning, tabletop exercises, and breach readiness for growth-stage companies and PE/VC portfolio operators.

Questions & answers

How long should an incident response plan be?

A practical IRP is typically 20-40 pages. The core plan — roles, escalation paths, phase-by-phase procedures — should fit in 15-20 pages. Appendices (contact lists, communication templates, regulatory reporting matrices) add the rest. Anything over 60 pages is unlikely to be read under pressure. The test is whether your on-call engineer can find the right escalation path within 60 seconds of opening the document.

What is the difference between an incident response plan and a disaster recovery plan?

An incident response plan governs the detection, containment, and investigation of security events — breaches, ransomware, unauthorized access. A disaster recovery plan governs restoring IT systems and data after any disruption, including natural disasters, hardware failures, and outages. They overlap during incidents that require system restoration, but a DR plan does not cover forensic investigation, legal notification, or attacker containment. Most organizations need both, and the IRP should reference the DR plan as a downstream dependency during the Recovery phase.

How often should an incident response plan be tested?

At minimum, annually. Best practice is twice per year: one full tabletop exercise with executive participation and one technical simulation with the incident response team. Plans should also be reviewed and updated after every real incident, any major infrastructure change, a merger or acquisition, or a change in regulatory requirements. Untested plans fail under pressure — the goal is to discover gaps in a controlled setting, not during a live breach.

Do small companies need a formal incident response plan?

Yes. Company size does not change the legal obligation to respond appropriately to a breach. All 50 US states plus DC have breach notification laws that apply regardless of company size. SOC 2, ISO 27001, PCI-DSS, and HIPAA all require documented incident response procedures. The plan can be simpler — fewer roles, shorter contact lists — but skipping it entirely means improvising during the most consequential hours of a security event.

What are the 6 phases of incident response?

The six-phase model, built on NIST SP 800-61 and popularized by SANS, covers: (1) Preparation — policies, tools, training, and team readiness before an incident occurs. (2) Detection and Analysis — identifying that an incident is happening and determining its scope. (3) Containment — stopping the incident from spreading while preserving evidence. (4) Eradication — removing the threat actor, malware, or vulnerability from the environment. (5) Recovery — restoring systems to normal operation and verifying integrity. (6) Lessons Learned — documenting what happened, what worked, what failed, and updating the plan accordingly.

Who should be on the incident response team?

At minimum: an incident commander (usually the CISO or fractional CISO), a technical lead from engineering or IT, a legal representative (in-house counsel or outside cyber counsel on retainer), a communications lead (PR or executive team member), and an HR representative for insider threat scenarios. Larger organizations add forensics specialists, business unit liaisons, and a dedicated scribe. External parties — outside counsel, forensics firms, insurance carrier — should be pre-contracted, not sourced during an active incident.

What regulatory reporting deadlines apply after a breach?

Deadlines vary by jurisdiction and framework. GDPR requires notification to the supervisory authority within 72 hours. US state laws range from 30 to 90 days, with some states (like Colorado and Florida) requiring notification within 30 days. HIPAA requires notification within 60 days. SEC rules require material cybersecurity incident disclosure within four business days of materiality determination. PCI-DSS requires immediate notification to the acquiring bank. The IRP should include a regulatory reporting matrix that maps each applicable requirement to a specific deadline and responsible party.

Should the incident response plan include ransomware-specific procedures?

Yes. Ransomware is the most common incident type that exercises every phase of the IRP simultaneously — detection, containment, eradication, recovery, legal notification, and executive decision-making (including the pay/don't-pay question). Ransomware-specific additions should cover: offline backup verification procedures, cryptocurrency wallet readiness (if the organization's policy allows payment as a last resort), OFAC sanctions screening before any payment, law enforcement notification (FBI IC3), and insurance carrier notification within policy-required timeframes.

Ready to turn this into a working plan?

Nick's team helps growth-stage companies, PE/VC sponsors, and cybersecurity product teams translate security questions into board-ready decisions. First call is strategy, not vendor pitch.
