Sample debrief — illustrative output
This is a pre-rendered example of what your team sees after running an exercise. The decisions, scoring, costings and runbook below are illustrative outputs from a representative ransomware run. Sign up to run your own scenario and download the full debrief pack.
Solid response with refinements
Decision quality: Sound decision-making. This rating reflects how your team made decisions under pressure — every scenario is winnable and losable; what matters is the reasoning behind each call.

The team made defensible calls under pressure with manageable downside. (Path taken: 4 cautious, 2 balanced calls.)
Cumulative-damage rules & triggering metrics
Cumulative gains on threat containment (+40) and a final public trust of 72% put you on this outcome path.
Cumulative metric damage
Each cell sums every decision impact on that metric. The engine bands the final value into healthy / strained / critical to drive the outcome rating.
| Metric | Start → end | Total losses | Total gains | Net | Final band |
|---|---|---|---|---|---|
| Public trust | 60% → 72% | −7 | +19 | +12 | Healthy (≥70%) |
| Operational capacity | 80% → 75% | −17 | +12 | −5 | Healthy (≥70%) |
| Threat containment | 40% → 80% | 0 | +40 | +40 | Healthy (≥70%) |
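The summing-and-banding step above can be sketched as follows. This is a minimal illustration, not the engine's code: the 70% healthy threshold comes from the table's band labels, while the strained/critical cut-offs are assumptions.

```python
# Band thresholds: only the 70% "healthy" floor appears in the report;
# the 40% strained/critical boundary is an illustrative assumption.
def band(final_value: float) -> str:
    """Band a final metric value (0-100) into an outcome label."""
    if final_value >= 70:
        return "healthy"
    if final_value >= 40:
        return "strained"
    return "critical"

def settle(start: float, impacts: list[float]) -> tuple[float, str]:
    """Sum every decision impact onto the starting value, then band it."""
    final = start + sum(impacts)
    return final, band(final)
```

For example, replaying the containment impacts from the timeline (`settle(40, [8, 14, 2, 10, 0, 6])`) lands on 80 and a healthy band, matching the table's final row.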
Decisions that moved the needle
- #3 Notify the regulator within the 72-hour window (+6)
- #5 Issue a holding statement now, full statement at 17:00 (+5)
- #4 Refuse to pay; rebuild from verified backups (+4)
- #6 Phased restoration with continuous monitoring (+12)
- #2 Disconnect the affected segment from the network (−10)
- #4 Refuse to pay; rebuild from verified backups (−5)
- #2 Disconnect the affected segment from the network (+14)
- #4 Refuse to pay; rebuild from verified backups (+10)
- #1 Treat as a credible incident immediately (+8)
Key triggering metrics for this outcome
- Public trust ended at 72% (healthy (≥70%)) — net +12 from 60%.
- Operational capacity ended at 75% (healthy (≥70%)) — net −5 from 80%.
- Threat containment ended at 80% (healthy (≥70%)) — net +40 from 40%.
How your choices produced the rating
Each decision contributes a score: cautious (+2), balanced (+1), aggressive (−1). The running ratio against the maximum possible determines the rating band.
| # | Decision | Choice | Risk | Score | Running | Rating so far |
|---|---|---|---|---|---|---|
| 1 | Suspicious email reported by finance team | A: Treat as a credible incident immediately | low | +2 | 2/2 | Excellent |
| 2 | Encrypted files spreading on the file server | B: Disconnect the affected segment from the network | medium | +1 | 3/4 | Excellent |
| 3 | Regulator notification window opens | A: Notify the regulator within the 72-hour window | low | +2 | 5/6 | Excellent |
| 4 | Attacker demands ransom in cryptocurrency | C: Refuse to pay; rebuild from verified backups | medium | +1 | 6/8 | Excellent |
| 5 | Press desk asks for a statement | B: Issue a holding statement now, full statement at 17:00 | low | +2 | 8/10 | Excellent |
| 6 | Service restoration vs full security review | A: Phased restoration with continuous monitoring | low | +2 | 10/12 | Excellent |
The same scoring drives the per-path outcome variants — change one decision and the rating column above shows where the path would have diverged.
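The running-ratio calculation above can be sketched directly. The weights (cautious +2, balanced +1, aggressive −1) and the per-decision maximum of +2 come from the text; the function name and shape are illustrative.

```python
# Weights taken from the report's scoring rules.
WEIGHTS = {"cautious": 2, "balanced": 1, "aggressive": -1}
MAX_PER_DECISION = 2  # best achievable score per call (cautious)

def running_ratio(choices: list[str]) -> list[tuple[int, int]]:
    """Return (running score, running maximum) after each decision.

    The rating band is derived from score/maximum; the mapping from
    ratio to labels like "Excellent" is not specified in the report.
    """
    out: list[tuple[int, int]] = []
    score = 0
    for i, choice in enumerate(choices, start=1):
        score += WEIGHTS[choice]
        out.append((score, i * MAX_PER_DECISION))
    return out
```

Replaying the table's path (cautious, balanced, cautious, balanced, cautious, cautious) reproduces the running column: 2/2, 3/4, 5/6, 6/8, 8/10, 10/12.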
How your decisions moved the situation
What you did well
The judgement calls your team made under pressure that moved things in the right direction — and the worse outcomes those calls quietly avoided.
Avoided outcomes
The worst-case consequences your decisions prevented — based on the alternative paths you didn't take and where the metrics finished.
Avoided uncontrolled threat spread
High severity · Worst case prevented: If containment had collapsed below 20%, the attacker would have continued lateral movement — more systems compromised, longer dwell time, and a far larger forensic and recovery bill.
Instead: you held containment at 80%, cutting off enough of the attack surface to limit the blast radius.
Avoided a public trust collapse
Medium severity · Worst case prevented: Below 20% trust, customers, regulators and the press start setting the narrative for you — refunds, churn, regulator scrutiny, and brand damage that takes years to rebuild.
Instead: you finished on 72% trust (started at 60%), keeping room for the organisation to lead its own communications.
Avoided operational standstill
Medium severity · Worst case prevented: Below 20% operational capacity, services degrade to the point where staff are firefighting unguided and recovery starts from zero — every hour compounds the lost revenue and goodwill.
Instead: you held operational capacity at 75%, keeping enough of the business running to support a structured response.
Confidence evolution
Self-rated confidence per decision (1–5). A rising line means the team grew into the situation.
Across 6 of 6 decisions the team rated themselves, confidence started at High (4/5), ended at Very high (5/5), and averaged 4.3/5 (peak 5/5, trough 4/5). Confidence held broadly steady, edging up at the final calls rather than eroding as events unfolded.
Decision timeline
When each call was made, how long it took, and which choices drove the biggest swings. Highlighted rows are the key calls.
| # | Elapsed | Took | Decision | Risk | Net impact | Conf. |
|---|---|---|---|---|---|---|
| 1 | 04:00 | 04:00 | Treat as a credible incident immediately | low | +6 | 4/5 |
| 2 | 12:00 | 08:00 | Disconnect the affected segment from the network | medium | +1 | 4/5 |
| 3 | 20:00 | 08:00 | Notify the regulator within the 72-hour window | low | +8 | 5/5 |
| 4 | 30:00 | 10:00 | Refuse to pay; rebuild from verified backups | medium | +9 | 4/5 |
| 5 | 40:00 | 10:00 | Issue a holding statement now, full statement at 17:00 | low | +5 | 4/5 |
| 6 | 55:00 | 15:00 | Phased restoration with continuous monitoring | low | +22 | 5/5 |
#2 at 12:00 — “Disconnect the affected segment from the network” (+1 net, 27 total swing)
#6 at 55:00 — “Phased restoration with continuous monitoring” (+22 net, 22 total swing)
#4 at 30:00 — “Refuse to pay; rebuild from verified backups” (+9 net, 19 total swing)
6 decisions over 55:00, averaging 09:10 per call. 6 of 6 decisions had a net positive impact on the headline metrics. The quickest call (#1, 04:00) was "Treat as a credible incident immediately"; the longest deliberation (#6, 15:00) was "Phased restoration with continuous monitoring". The biggest swings came from #2, #6, #4 — these are the calls that moved the headline metrics most.
Total crisis cost
Scale: Mid-market · Eight categories of exposure modelled from your final metrics, sector, scenario type and time on incident. Heuristic estimates intended to provoke debrief discussion — not actuarial figures.
The total splits roughly evenly between immediate cash burn (55%) and long-tail damage (45%) on the cash-vs-trust axis. The single biggest line is direct response (decisions).
≈ 1.7% of annual revenue — a material but recoverable hit.
Sum of the 6 decisions the team committed to during the exercise.
Which committed decision do you think was the best value for money — and which the worst?
Were any of these spends avoidable with earlier action?
How this £ was calculated
- Decisions committed: 6
- Sum of decision costs: £350,500
- Scale band: Mid-market
Sector exposure (1.6×) and a 20% trigger drive likely ICO/sector-regulator action plus external counsel costs. The trigger is raised to the power 1.6 (^1.6), so moderate breaches produce moderate fines, reflecting the ICO's actual enforcement curve. Capped at the greater of the ICO's £17.5m statutory maximum or 4% of annual revenue (GDPR Art. 83(5)). Not a guaranteed fine — a planning estimate.
Were notification clocks acknowledged early, or only when prompted?
Who would have signed off the regulator-facing narrative?
How this £ was calculated
- Max regulatory ceiling: £4,000,000
- Trigger strength: 20%
- Sector regulatory weight: 1.60×
- Scenario-type weight: 1.40×
- Effective regulatory weight: 1.60×
- Statutory cap: £17,500,000
- Containment factor (live): 32%
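The regulatory heuristic described above can be sketched as a one-liner. Only the exponent (1.6), the weights, the ceiling and the statutory cap are taken from the report; how they combine is an assumption for illustration.

```python
def regulatory_exposure(ceiling: float, trigger: float,
                        reg_weight: float, statutory_cap: float) -> float:
    """Planning estimate of regulatory cost, not a predicted fine.

    trigger is a 0-1 breach-severity fraction; raising it to the power
    1.6 makes moderate breaches cost proportionately less, mimicking
    the enforcement curve the report mentions. The result is capped at
    the statutory maximum.
    """
    estimate = ceiling * reg_weight * (trigger ** 1.6)
    return min(estimate, statutory_cap)
```

With the values shown (£4m ceiling, 20% trigger, 1.60× weight), this sketch yields an estimate well under the £17.5m cap; doubling the trigger raises the estimate more than twofold, which is the point of the superlinear exponent.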
Incident response retainer, forensic investigation, and system restoration — sized by containment gap (20%) and incident type.
Which technical debt got exposed by this incident?
What would the team rebuild differently if budget were no object?
How this £ was calculated
- Technical baseline: £240,000
- Containment factor (live): 32%
- Containment gap (final): 20%
- Scenario technical weight: 1.50×
Remaining categories (5) · £461k
External comms agency, media monitoring, and paid recovery campaigns to restore brand trust over 6–12 months.
Which moment in the exercise did the most reputational damage?
Was there a missed opportunity to take control of the public story?
How this £ was calculated
- Baseline budget: £150,000
- Trust damage: 28%
- Scenario reputation weight: 1.20×
Trust drop of 28% drives roughly 0.18% of annual revenue lost over the next 12 months in your sector.
What single action would have halved the trust drop?
Who was responsible for protecting customer perception during the incident?
How this £ was calculated
- Trust damage: 28%
- Sector churn weight: 1.30×
- Effective churn: 0.18% of ARR
- Annual revenue: £75,000,000
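The churn line above can be sketched from the figures shown. The 0.5%-of-ARR-per-100%-trust-damage base rate is an assumption reverse-engineered from the displayed values (28% damage × 1.3 sector weight → ~0.18% of ARR); treat it as illustrative, not the engine's real constant.

```python
# Assumed base rate: fraction of ARR lost per 100% trust damage.
BASE_CHURN_RATE = 0.005

def churn_cost(trust_damage: float, sector_weight: float,
               annual_revenue: float) -> float:
    """Revenue lost over the next 12 months from the trust drop."""
    effective_churn = trust_damage * sector_weight * BASE_CHURN_RATE
    return effective_churn * annual_revenue
```

Plugging in the report's inputs (0.28 damage, 1.3× weight, £75m ARR) gives roughly £136k, consistent with a 0.18% effective churn.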
~56 staff × 13h overtime, plus employee assistance and projected turnover replacement costs.
Did anyone check on the responders' welfare during the exercise?
Who was running on adrenaline by the end — and what's the plan for them next time?
How this £ was calculated
- Affected staff: 56
- Overtime hours: 13 h
- Loaded hourly rate: £65/h
- Overtime cost: £48,685
- EAP / counselling: £25,000
- Turnover risk: £56,000
- Scenario welfare weight: 1.00×
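The staff-welfare line combines the three components listed above. A minimal sketch, assuming a simple additive structure (the inputs are the report's; the rounded 13 h figure gives a slightly lower overtime total than the report's £48,685, which was computed from unrounded hours):

```python
def welfare_cost(staff: int, overtime_hours: float, hourly_rate: float,
                 eap: float, turnover_risk: float,
                 scenario_weight: float = 1.0) -> float:
    """Overtime plus EAP/counselling plus projected turnover replacement,
    scaled by the scenario welfare weight."""
    overtime = staff * overtime_hours * hourly_rate
    return (overtime + eap + turnover_risk) * scenario_weight
```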
Estimated 28% increase on cyber/crisis cover at next renewal (baseline £120k/yr).
Do you actually know what your cyber policy excludes?
Would this incident trigger a premium reset at your next renewal?
How this £ was calculated
- Baseline cyber premium: £120,000/yr
- Uplift: 28%
- Worse of containment gap / trust damage: 28%
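The insurance uplift above appears to apply the worse of the containment gap and the trust damage to the baseline premium. A sketch under that assumption (the direct percentage-for-percentage mapping is inferred from the figures shown, not confirmed):

```python
def premium_uplift(baseline: float, containment_gap: float,
                   trust_damage: float) -> float:
    """Extra annual premium at next renewal: baseline scaled by the
    worse (larger) of the two 0-1 damage fractions."""
    return baseline * max(containment_gap, trust_damage)
```

With a £120k baseline, a 20% containment gap and 28% trust damage, the uplift is £33,600/yr.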
Approx 0.13 days of degraded operations at £300k/day, scaled by your final operational capacity (75%).
What would have brought operations back online faster?
Did anyone own the call to degrade or restore service, or did it drift?
How this £ was calculated
- Daily revenue: £300,000/day
- Ops damage: 25%
- Downtime days: 0.13 days
- Time on incident: 55.0 min (0.92 h)
Cost accrual timeline
Cumulative £ per category, minute-by-minute, replaying each decision at the time you actually committed to it. As containment improves, the per-minute crisis bleed slows and category curves flatten.
- #1 · 4m · Treat as a credible incident immediately (+8 containment)
- #2 · 12m · Disconnect the affected segment from the network (+14 containment)
- #3 · 20m · Notify the regulator within the 72-hour window (+2 containment)
- #4 · 30m · Refuse to pay; rebuild from verified backups (+10 containment)
- #5 · 40m · Issue a holding statement now, full statement at 17:00 (0 containment)
- #6 · 55m · Phased restoration with continuous monitoring (+6 containment)
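The accrual replay described above can be sketched as a simple minute-by-minute loop: per-minute bleed proportional to the uncontained fraction, with containment stepping up at the minute each decision was committed. The linear bleed model and the rate are illustrative assumptions, not the engine's cost model.

```python
def accrue(minutes: int, steps: dict[int, float],
           start_containment: float, bleed_per_minute: float) -> float:
    """Total cost accrued over `minutes`.

    steps maps minute -> containment gain (0-1 fractions) at the moment
    a decision was committed. As containment rises, the per-minute
    bleed (and so the curve's slope) falls.
    """
    containment = start_containment
    total = 0.0
    for m in range(minutes):
        containment += steps.get(m, 0.0)
        total += bleed_per_minute * (1.0 - containment)
    return total
```

Replaying this run's containment gains (`{4: 0.08, 12: 0.14, 20: 0.02, 30: 0.10, 55: 0.06}` from a 40% start) accrues noticeably less than taking no action at all, which is exactly the flattening the timeline shows.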
Per-decision response cost (6)
Debrief discussion
Use these questions to guide your team's debrief while the exercise is still fresh.
Ethical principles — self-check
Is what I am considering consistent with these ethical principles?
What would my colleagues, customers and stakeholders expect of me in this situation?
What does my organisation expect of me in this situation?
Is this action likely to reflect positively on the organisation? Will it affect stakeholder trust?
Could I explain my action or decision to the board, regulators or affected stakeholders?
Questions for supervisors / facilitators
Did you recognise and acknowledge instances of initiative or good decisions?
Did you recognise, question and challenge instances of poor decision making?
Can you relate the decision making to the organisation's ethical principles?
Are there any opportunities for organisational learning?
Lessons learned
A handful of decisions traded resilience for speed — review whether that trade was deliberate.
Document what worked so it can be repeated in future exercises.
Identify the one decision you would re-take, and rehearse the alternative.
Supporting your staff
Major incidents take a personal toll. Even staff who weren't directly affected may feel exposed, blamed, or burnt out. Building staff support into the response — not just the debrief — protects wellbeing and makes the team more resilient for the next event.
During the incident
- Make rest and handover mandatory — rotate responders so no one works more than a single shift without a break.
- Feed people. Order food in, keep water and caffeine available, and protect time for meals away from screens.
- Name a single point of contact for staff questions so the response team isn't interrupted constantly.
- Be explicit that nobody will be blamed for the initial incident (e.g. clicking a phishing link). Blame slows reporting on the next event.
- Communicate clearly and often, even when there's nothing new — silence breeds rumour and anxiety.
In the first 72 hours after
- Hold a short, structured 'hot debrief' focused on what happened and what people need now — not on judgement.
- Offer time off in lieu for those who worked extended hours, and protect it (don't pull people back into BAU immediately).
- Signpost your Employee Assistance Programme (EAP), occupational health, and any internal mental-health first-aiders.
- Watch for warning signs: sleep disruption, withdrawal, irritability, or staff repeatedly replaying the incident.
- Thank people specifically and publicly. Generic 'thanks team' messages land flat after a hard week.
Longer term
- Run a 'cold debrief' 2–4 weeks later when emotions have settled and lessons can be captured calmly.
- Review whether on-call rotas, response retainers, or staffing levels need to change so the same people aren't always carrying the load.
- Update the staff member who triggered the incident (if any) on what changed as a result — closes the loop and reinforces no-blame culture.
- Track whether anyone leaves the team in the 6 months after the incident — burnout often surfaces late.
- Build staff support into your incident response plan as a named workstream, with an owner, not as an afterthought.
Download your report before leaving
Aggregated session data (scenario, metrics, and decisions) is saved anonymously to help improve future exercises. The full detailed report — including participant names, role assignments, the complete decision journey, and the post-incident review questions — is only available as a PDF or Word download. When you close this page, that detail is lost. Download the report now to keep a complete record.
Take this sample debrief with you
A watermarked sample of the post-exercise debrief — same format your team gets after a real run. PDF for circulation, CSV for analytics.
Sample data is illustrative — figures and outcomes are pre-scripted, not generated from a live exercise.
Like what you see?
Run your own scenario to generate a debrief tailored to your team — including PDF, Word, PowerPoint and CSV exports.
The full pack bundles your team-dynamics observations, team-layer scores, debrief responses, media training notes, and any tagged timeline moments into one printable handout.