Crisis Response Action Plan Template: 72-Hour to 30-Day Format

A crisis response action plan operates on a different time horizon than other action plans. The first 72 hours determine how much damage the crisis does. The 30 days that follow determine whether the organisation absorbs the lessons or repeats the mistake. The plan structure has to support both: tight role-based command for the acute phase, structured remediation and learning for the recovery phase. This page covers the incident command framework that prevents confusion in hour one, a worked example for a customer-facing data incident, the role assignments that should be defined before the crisis happens, the communication cadence, and the day-4-to-30 recovery workstreams that turn an incident into organisational learning.

Updated 11 May 2026

Why Crisis Plans Are Different

Most action plans assume time to think, deliberate, and refine. Crisis plans operate under the opposite assumption: decisions have to be made under time pressure, with incomplete information, and with consequences that compound by the hour. The plan that works in this environment is largely defined before the crisis happens, so that during the crisis the team is executing rather than designing. The role assignments, communication templates, escalation paths, and decision-making authority all have to be predefined; trying to negotiate them in hour one of a real incident produces the worst outcomes.

The Incident Command System (ICS), administered in the US by FEMA as part of the National Incident Management System (NIMS), is the canonical reference framework. Originally developed for wildfire response, it is now widely adopted across emergency services, technology operations, and corporate crisis management. The framework's core insight is that role-based response with predefined authorities removes the meta-conversation about who decides what, freeing the team to focus on the actual response. FEMA's NIMS resources document the framework in detail.

The corporate version of incident command is widely used in technology operations (Google's SRE practice and many SaaS companies have well-documented incident response programs). The same discipline transfers cleanly to crises beyond system outages: data breaches, regulatory incidents, executive departures, public-relations crises, customer safety events. The plan structure below is adapted for the broader corporate context while preserving the role discipline that makes the framework work.

The Four Roles That Must Be Defined Before the Crisis

01

Incident Commander

The single point of decision authority during the crisis. Holds the role for a defined shift (typically 8-12 hours) before handoff. Runs the response cadence, makes the operational calls, and is the named accountable individual to the executive team. Should be senior enough to make real decisions but operational enough to engage with detail. Often a VP-level engineer for technical incidents, a VP-level operator for business incidents.

02

Operations Lead

Owns the technical or operational execution of the response. Coordinates the team doing the actual remediation work. Reports up to the incident commander, down to the response team. For a data incident this is typically a security or engineering lead; for a PR incident a communications lead; for an HR incident an HR lead. The operations lead has the deepest functional context.

03

Communications Lead

Owns all external and internal messaging during the crisis. Drafts customer communications, regulator notifications, and internal updates. Coordinates with the incident commander on what gets communicated and to whom. Often a communications team member but can be a senior product or marketing lead with crisis comms experience. The communications lead's authority and the incident commander's authority must be clearly demarcated.

04

Scribe

Records every decision, action, and communication during the response. The scribe role seems administrative and is often skipped or under-resourced; this is a mistake. The scribe's record becomes the foundation of the post-incident review, the regulatory documentation, and the organisational learning. Without a dedicated scribe, the timeline of what actually happened becomes contested after the fact, often expensively.
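A minimal sketch of how these four roles might be predefined in code before any incident, assuming a hypothetical on-call registry; the rotation names, shift lengths, and authority statements are illustrative placeholders, not a prescription:

from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    INCIDENT_COMMANDER = "incident_commander"
    OPERATIONS_LEAD = "operations_lead"
    COMMUNICATIONS_LEAD = "communications_lead"
    SCRIBE = "scribe"

@dataclass(frozen=True)
class RoleAssignment:
    role: Role
    oncall_rotation: str      # name of the on-call rotation that fills the role (hypothetical)
    shift_hours: int          # handoff boundary for the role holder
    decision_authority: str   # what this role may decide without escalation

# Defined before the crisis, so hour one is execution rather than negotiation.
CRISIS_ROLES = [
    RoleAssignment(Role.INCIDENT_COMMANDER, "eng-vp-oncall", 12,
                   "All operational calls during the incident"),
    RoleAssignment(Role.OPERATIONS_LEAD, "security-lead-oncall", 12,
                   "Sequencing of technical remediation work"),
    RoleAssignment(Role.COMMUNICATIONS_LEAD, "comms-oncall", 24,
                   "Wording and timing of external and internal messaging"),
    RoleAssignment(Role.SCRIBE, "eng-manager-oncall", 12,
                   "None; records decisions rather than making them"),
]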

Worked Example: Customer-Facing Data Incident

Incident: SaaS company discovers at 09:14 on a Tuesday that a configuration error has exposed a subset of customer data (estimated 3,400 customer records) to other customers within the platform for an unknown period (initial assessment: 48 hours).

Severity: P0 (highest). Customer data exposure triggers regulatory notification timelines (GDPR 72-hour rule, US state breach notification laws).
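To make the notification arithmetic concrete, a small sketch that derives the GDPR deadline from the discovery timestamp; the date is illustrative, and the 48-hour submission point mirrors the timeline below:

from datetime import datetime, timedelta, timezone

# Illustrative discovery time (a Tuesday, 09:14, treated as UTC for simplicity).
discovered_at = datetime(2026, 5, 12, 9, 14, tzinfo=timezone.utc)

# GDPR Article 33: notify the supervisory authority without undue delay and,
# where feasible, within 72 hours of becoming aware of the breach.
gdpr_deadline = discovered_at + timedelta(hours=72)

# In this worked example the regulator notification goes out at hour 48.
submitted_at = discovered_at + timedelta(hours=48)

print(f"Notification deadline: {gdpr_deadline:%A %H:%M UTC}")
print(f"Margin remaining at submission: {gdpr_deadline - submitted_at}")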

Hour 1: Incident Declared and Roles Assigned (09:14-10:14)

  • 09:18 - Incident declared. Severity P0. Slack channel created.
  • 09:22 - Incident commander assigned (VP Engineering on-call).
  • 09:25 - Operations lead (Security Lead), Communications lead (CMO), Scribe (Eng Manager) assigned.
  • 09:30 - Containment begins: configuration rolled back, customer access logs preserved.
  • 09:45 - Initial scope assessment: 3,400 records confirmed, 48-hour exposure window confirmed.
  • 10:00 - Executive team and General Counsel notified. CEO joins as senior stakeholder (not incident commander).
  • 10:14 - First standup: confirmed scope, containment in progress, 60-minute update cadence locked in.

Hours 2-24: Acute Response

  • Containment verified, technical root cause identified (config change deployed Sunday).
  • Forensic investigation begins: which data was actually accessed, by which other customers, what was viewed or downloaded.
  • Customer notification draft prepared by Communications lead, reviewed by General Counsel.
  • Regulatory notification prepared (GDPR 72-hour clock running from incident discovery).
  • Internal all-hands held at hour 6 to brief the company.
  • Incident commander handoff at hour 12 to Director of Engineering.
  • Hour-by-hour standup cadence maintained through hour 24.

Hours 25-72: Stabilisation and Notification

  • Detailed forensic findings: 12 customers actually viewed exposed data, downloads in 4 cases.
  • Affected-customer communication sent at hour 36, with offer of credit monitoring service.
  • GDPR regulator notification submitted at hour 48 (within 72-hour window).
  • US state breach notifications prepared per state-specific timelines.
  • Standup cadence stepped back to every 4 hours, then twice daily by day 3.
  • Press statement prepared and held; not published unless the story breaks externally.

Days 4-30: Recovery and Learning

  • Days 4-7: Customer communication follow-ups. Regulator dialogue continues. Internal team retro held day 5.
  • Days 8-14: Technical remediation deepens (configuration change controls, automated detection of similar exposures).
  • Days 15-21: Customer trust rebuilding. CEO calls to top 20 affected customers. Public-facing transparency report drafted.
  • Days 22-30: Full post-incident review with all stakeholders. Plan updates published. Team training on the updated response playbook. Documented organisational learning.

The plan converts a chaotic situation into a sequenced response with named accountability at every step. The first hour is the most important and the most tightly defined; by minute 60 the response team has clear roles, containment in progress, and a standup cadence locked in. The recovery phase is where the long-term value is built: customer trust, organisational learning, and the documented playbook updates that prevent the next similar incident from being as expensive.
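A sketch of how the stepping-back cadence could be generated mechanically; the breakpoints follow the cadence used in this example (hourly through hour 24, every 4 hours into day 2, twice daily through day 3, daily thereafter), and the helper itself is hypothetical rather than part of any standard tooling:

from datetime import datetime, timedelta, timezone

def standup_schedule(declared_at: datetime, horizon_days: int = 7) -> list[datetime]:
    """Generate standup times with a cadence that relaxes as the incident stabilises."""
    schedule = []
    t = declared_at
    end = declared_at + timedelta(days=horizon_days)
    while t <= end:
        schedule.append(t)
        elapsed = t - declared_at
        if elapsed < timedelta(hours=24):
            t += timedelta(hours=1)       # acute phase: hourly standups
        elif elapsed < timedelta(hours=48):
            t += timedelta(hours=4)       # stabilisation: every 4 hours
        elif elapsed < timedelta(days=3):
            t += timedelta(hours=12)      # twice daily by day 3
        else:
            t += timedelta(hours=24)      # daily through the recovery week
    return schedule

# Usage: an incident declared at 09:18 produces hourly standups for the first day,
# then progressively fewer as the cadence steps back.
declared = datetime(2026, 5, 12, 9, 18, tzinfo=timezone.utc)
print(len(standup_schedule(declared)))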

5 Crisis Response Mistakes

No predefined roles

When a crisis hits and the team has to debate who is incident commander, who owns communications, who is the scribe, the first hour is lost to meta-discussion. Roles must be predefined in the plan, with named on-call rotations and clear authorities. The hour saved by predefined roles is often the hour that determines the crisis's blast radius.

CEO as incident commander

The CEO often wants to be in the room and may have the most authority, but a CEO acting as incident commander skews the response toward executive-level decisions and away from operational coordination. Better pattern: CEO as senior stakeholder providing executive air cover and external relationship management; a VP-level operator as incident commander managing the actual response.

Inconsistent communications

Without a designated communications lead, multiple people send messages to customers, the press, and regulators with subtly different framing. The contradictions get noticed, often by lawyers, and become expensive. The communications lead is non-optional, and their authority over messaging needs to be respected even when other senior people have opinions.

Skipping the scribe role

The scribe seems administrative and is the easiest role to skip during the actual crisis. This produces incomplete records, contested timelines after the fact, and weak post-incident reviews. A dedicated scribe with one job, recording everything, is one of the cheapest, highest-value roles in the response.

No real recovery phase

Many organisations declare the crisis over once the acute phase ends and skip the days-4-to-30 recovery phase. This leaves the underlying cause unresolved, customer trust unrebuilt, and organisational learning uncaptured. The acute phase contains damage; the recovery phase prevents repetition. Skipping the recovery is what guarantees a similar incident recurs.

Frequently Asked Questions

What separates a crisis from a normal operational issue?
Three thresholds. First, urgency: the issue requires action within hours, not days. Second, scope: the impact extends to customers, regulators, the public, or major internal systems, not just one team. Third, novelty: the standard playbooks do not apply directly. Issues meeting all three are crises and need an explicit crisis response plan; issues meeting only one or two can usually be handled within normal operational rhythms.
What is the incident command framework and why use it?
Incident command is a role-based response framework originally developed for emergency services and now widely used in technology operations and crisis management. It assigns specific roles (incident commander, operations lead, communications lead, scribe) with predefined responsibilities, so that in the first hour of a crisis nobody is debating who decides what. The framework's value is in the predictability it brings to chaotic situations. The US FEMA Incident Command System is the canonical reference.
How long should an incident commander hold the role?
For the duration of the acute phase (typically 12-72 hours), with handoffs at predefined shift boundaries to prevent burnout. The incident commander makes the operational decisions, runs the cadence of stand-ups, and is the single point of accountability for resolving the crisis. Holding the role longer than 12 hours without a handoff degrades decision quality. The handoff itself is structured: explicit transfer of context, current state, decisions made, and open questions to the incoming commander.
How often should the response team meet during the acute phase?
Hourly during the first 6-12 hours, then every 2-4 hours through hour 24, then twice daily through day 3, then daily through day 7. The cadence intentionally compresses early because the situation is changing quickly and decisions made on stale information are dangerous. As the situation stabilises, the cadence steps back. The structure prevents the dual failure modes of meeting too rarely (decisions made in isolation) and meeting too constantly (no time to actually execute).
What should the communication strategy include?
Audience-specific messaging on three tracks. Internal stakeholders need accurate operational information (executive team, board, affected internal teams). External stakeholders need appropriate transparency (customers, regulators, the public). Media and social channels need controlled messaging. Each track has a designated owner and a designated approver, both named in the plan before the crisis. Crisis communications drafted in real time without these named roles produce inconsistent, contradictory, or ill-considered messaging that compounds the original crisis.
What does the day-4-to-30 recovery phase cover?
Three workstreams. First, technical or operational remediation of the underlying cause (the corrective action plan in CAPA terms). Second, customer or stakeholder remediation (communication, reparations, trust rebuilding). Third, organisational learning (post-incident review, plan updates, training based on the incident). The recovery phase is where the real value of the crisis response is built; the acute phase contains the damage, the recovery phase prevents repetition.
