
Methods Note: NGO Security Risk Management ROI Calculator

Version: 2.1.0 (Hybrid Draft)
Status: Draft for RFQ Submission
Date: 2025-10-17
Estimated Reading Time: 30–35 minutes

Table of Contents

  1. Introduction
  2. Methodological Framework
  3. Expected Annual Loss Methodology
  4. Risk Reduction Rates
  5. Financial Metrics
  6. Qualitative Valuation
  7. Scenario Logic and Workflow
  8. Worked Example
  9. Edge Cases and Data Quality
  10. Assumptions, Limitations, and Future Refinement
  11. Validation Roadmap
  12. References

1. Introduction

1.1 Purpose

This Methods Note documents the methodological foundation for the NGO Security Risk Management (SRM) ROI Calculator being developed for the Global Interagency Security Forum (GISF). It defines every equation, assumption, parameter range, and decision rule so that the calculator can be replicated, audited, and trusted by security managers, finance teams, and external reviewers.

1.2 Scope

This document covers:
  • Formulas for Expected Annual Loss (EAL), Return on Investment (ROI), Net Present Value (NPV), and Payback Period
  • The historical-cost baseline methodology and scenario fallback used to generate EAL
  • Reduction-rate guidance for common security interventions, including methodological justifications
  • Integration of the Qualitative Impact Index (QII) with financial outputs, including full checklists and scoring logic
  • Data requirements, validation rules, edge-case handling, and pilot validation roadmap
This document does not include:
  • Full user interface specifications (see plain-language tasks in .specify/memory/backlog.md and the terminology matrix in apps/docs/content-references.mdx Section 6)
  • Data collection templates (see apps/docs/pilot-pack.mdx)
  • Software implementation details (see source code comments and technical docs)

1.3 Reporting Decision Guide

Use the quick decision guide below to determine which outputs to emphasise with stakeholders:
START

Do you have reliable historical incident cost data?
  NO → Use scenario presets and report QII with qualitative narrative
  YES → Report Historical-Cost ROI + QII
Guidance
  • Financial ROI + QII (recommended default): Present quantitative savings alongside qualitative outcomes.
  • Qualitative-only reporting: When quantitative data are incomplete, document the QII evidence notes so improvements remain transparent.

2. Methodological Framework

2.1 Standards and Reference Points

  • ISO 31000:2018 Risk Management Guidelines1 – Emphasises evidence-based decision-making and transparency in documenting risk treatments.
  • Humanitarian Financial Stewardship – Prefers demonstrated impact grounded in actual spend rather than speculative forecasts; aligns with practice in donor audits and post-incident reviews.
  • Conventional Financial Analysis – ROI, NPV, and Payback formulas follow accepted corporate finance practice, with a zero-discount convention to reflect humanitarian ethics.
  • Professional Judgment with Validation – Initial parameter estimates use conservative professional judgment and are explicitly flagged for refinement during Stage 5 pilots.

2.2 Calculation Flow

  1. Gather Baseline Loss – Collect historical incident costs (1–3 years) or select a scenario preset when data are unavailable.
  2. Select Reduction Rate – Choose the expected percentage reduction based on the planned intervention (Section 4).
  3. Compute Intervention EAL – Apply the reduction to the baseline to obtain the Expected Annual Loss after the investment.
  4. Calculate Financial Metrics – Translate annual savings into ROI, payback period, and NPV using the formulas in Section 5.
  5. Document Qualitative Outcomes – Capture qualitative benefits via the Qualitative Impact Index to complement financial metrics.

2.3 Design Principles

  • KISS First: The default workflow delivers results in minutes with only two numeric inputs (historical cost, reduction rate).
  • Evidence-Lite, Evidence-Honest: When precise data are unavailable, conservative estimates are surfaced with clear labels and guidance.
  • Transparency: All assumptions are visible in the calculator and repeatable from this Methods Note.
  • Optional Granularity: An advanced ARO × SLE mode remains available for organisations that require incident-level decomposition and aligns with legacy workflows.

3. Expected Annual Loss Methodology

3.1 Baseline EAL from Historical Costs

The primary estimate of Expected Annual Loss is derived from recent incident costs actually incurred by the organisation.
EAL_baseline = Σ(incident costs over last N years) / N
Where:
  • N is 1–3 years based on available records (minimum 1 year required).
  • Incident costs include all direct and indirect expenses (evacuation, medical, programme disruption, legal, reputational response).
Why this is defensible
  • Uses “known actuals” rather than speculative forecasts.
  • Aligns with financial audit practice: prior-year losses are the best available proxy for near-term expectations when environment and exposure remain similar.
  • Keeps cognitive load low for non-financial security managers.
Normalising for unusual years
  • If a year is anomalously low (no incidents) or high (major crisis), encourage a 3-year average.
  • When operations are scaling up or down, adjust with an exposure factor:
EAL_adjusted = EAL_baseline × (Projected Exposure / Historical Exposure)
Exposure can be measured using staff-days, site-months, or similar volume measures; defaults remain at 1.0 when no adjustment is supplied.
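As a concrete illustration, the baseline and exposure-adjustment formulas can be expressed as a short TypeScript sketch. Function names and signatures are illustrative only and do not represent the calculator's internal API.

```typescript
/**
 * Baseline Expected Annual Loss from historical incident costs.
 * Illustrative sketch only; names do not mirror the calculator's code.
 */
function baselineEAL(annualIncidentCosts: number[]): number {
  if (annualIncidentCosts.length < 1) {
    throw new Error("At least one year of incident costs is required.");
  }
  if (annualIncidentCosts.some((c) => c < 0)) {
    throw new Error("Incident costs must be non-negative.");
  }
  const total = annualIncidentCosts.reduce((sum, c) => sum + c, 0);
  return total / annualIncidentCosts.length;
}

/** Optional exposure adjustment; the ratio defaults to 1.0 when not supplied. */
function adjustForExposure(
  ealBaseline: number,
  projectedExposure?: number,
  historicalExposure?: number
): number {
  if (!projectedExposure || !historicalExposure) return ealBaseline;
  return ealBaseline * (projectedExposure / historicalExposure);
}

// Example: the FY2022–FY2024 figures from Section 3.1.1
const eal = baselineEAL([50_000, 45_000, 55_000]); // 50,000
// Hypothetical staff-day figures used only to show the exposure scaling
const ealScaled = adjustForExposure(eal, 12_000, 10_000); // 60,000 (20% more exposure)
```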

3.1.1 Historical Baseline Calculation

Consider an organisation with the following verified incident spend:
| Financial Year | Recorded Incident Costs (USD) | Notes |
| --- | --- | --- |
| FY2024 | $50,000 | Evacuation, medical, and programme delays following two incidents |
| FY2023 | $45,000 | Vehicle theft, emergency procurement of radios, temporary hibernation |
| FY2022 | $55,000 | Compound intrusion escalating to relocation |
Step-by-step:
  1. Validate inputs – confirm all values are non-negative and represent full incident costs.
  2. Average – EAL_baseline = (50,000 + 45,000 + 55,000) / 3 = 50,000.
  3. Document rationale – log the data source (finance ledger + security incident log) for audit trail.
Choosing the number of years (N).
  • Use three years whenever data exist; this smooths spikes caused by rare, high-severity incidents.
  • Use two years when one year is unavailable or clearly anomalous.
  • Fall back to one year only when no additional data exist—flag the baseline as provisional and revisit once more data become available.
  • If a known change in exposure occurred (programme doubled or halved), scale the historical average using the exposure adjustment shown earlier.
Partial-year handling. When only part-year cost data are available, annualise the partial value before averaging (e.g., multiply the cost per month by 12) and record the assumption. If FY2023 data were missing, the calculator automatically averages the remaining years: (50,000 + 55,000) / 2 = 52,500. This mirrors the helper function calculateHistoricalBaselineEAL in the codebase and ensures the numerical result is reproducible from the documentation. Conversely, if only FY2024 is available, EAL_baseline = 50,000, but the scenario should note the short observation window.

Relationship to the ARO × SLE approach. The historical average is mathematically equivalent to an aggregate of incident-level ARO × SLE values when historical incidents are categorised and multiplied by their unit costs. Section 3.4 provides the incident-level method; both approaches should converge when the same data underpin them. Differences greater than 10% warrant a documented explanation (data gaps, one-off events, etc.).
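A minimal TypeScript sketch of the partial-year and missing-year handling described above follows. The helper names are hypothetical; the production calculateHistoricalBaselineEAL may differ in shape.

```typescript
/** Annualise a part-year cost figure before averaging (months must be 1–12). */
function annualisePartialYear(costToDate: number, monthsObserved: number): number {
  if (monthsObserved <= 0 || monthsObserved > 12) {
    throw new Error("monthsObserved must be between 1 and 12.");
  }
  return (costToDate / monthsObserved) * 12;
}

/** Average only the years that have data; null marks a missing year. */
function averageAvailableYears(costsByYear: Array<number | null>): number {
  const available = costsByYear.filter((c): c is number => c !== null);
  if (available.length === 0) {
    throw new Error("No historical cost data available; use a scenario preset.");
  }
  return available.reduce((sum, c) => sum + c, 0) / available.length;
}

// FY2023 missing: (50,000 + 55,000) / 2 = 52,500, matching the example above
const provisionalBaseline = averageAvailableYears([50_000, null, 55_000]);
```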

3.2 Handling Limited Data: Scenario Fallback

If the organisation cannot access historical cost data, the calculator provides scenario presets based on operational context. Values are intentionally conservative and will be validated with pilot NGOs.
| Scenario | Description | Baseline EAL | Typical Range | Notes |
| --- | --- | --- | --- | --- |
| Low-Risk Stable | Urban or well-governed contexts with few incidents | $15,000 | $10k–$25k | Typical for HQ or low-threat country offices |
| Medium-Risk Conflict-Affected | Mixed stability, sporadic violence, reliance on local partners | $75,000 | $45k–$105k | Represents many field programmes in fragile states |
| High-Risk Active Conflict | Persistent insecurity, frequent incidents, remote operations | $200,000 | $120k–$280k | Assumes high reliance on security logistics |
Users can accept the preset, override the value, or revisit when better data emerge. Scenario definitions and ranges will be refined using Stage 5 pilot feedback.
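For implementers, the presets above lend themselves to a simple configuration structure. The sketch below mirrors the table values; the field names and identifiers are illustrative assumptions, not the calculator's actual schema.

```typescript
interface ScenarioPreset {
  id: string;
  label: string;
  baselineEAL: number;            // USD per year
  typicalRange: [number, number]; // USD per year, low–high
  notes: string;
}

// Values mirror Section 3.2; structure and field names are illustrative only.
const SCENARIO_PRESETS: ScenarioPreset[] = [
  {
    id: "low-risk-stable",
    label: "Low-Risk Stable",
    baselineEAL: 15_000,
    typicalRange: [10_000, 25_000],
    notes: "Typical for HQ or low-threat country offices",
  },
  {
    id: "medium-risk-conflict-affected",
    label: "Medium-Risk Conflict-Affected",
    baselineEAL: 75_000,
    typicalRange: [45_000, 105_000],
    notes: "Represents many field programmes in fragile states",
  },
  {
    id: "high-risk-active-conflict",
    label: "High-Risk Active Conflict",
    baselineEAL: 200_000,
    typicalRange: [120_000, 280_000],
    notes: "Assumes high reliance on security logistics",
  },
];
```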

3.3 Intervention EAL and Annual Savings

Once a baseline is set (historical data or scenario preset), the Expected Annual Loss after the investment is calculated by applying the selected reduction rate r.
EAL_intervention = EAL_baseline × (1 - r)
Annual Savings = EAL_baseline - EAL_intervention
Annual savings represent the recurring financial benefit attributed to the security investment.

3.3.1 Intervention EAL Calculation (Example and Rate Clamping)

Continuing the example baseline (EAL_baseline = $50,000):
  1. Select reduction rate – choose a comprehensive SRM programme with moderate confidence (r = 0.50).
  2. Apply formula
    • EAL_intervention = 50,000 × (1 - 0.50) = 25,000
    • Annual Savings = 50,000 - 25,000 = 25,000
  3. Clamp unrealistic inputs – reduction rates are limited to 0–95% in the implementation. ISO 31000-aligned risk reviews emphasise residual risk; setting a 95% ceiling avoids implying total elimination of threat and prevents negative EAL values. Any analyst-supplied rate above 95% is automatically capped and should trigger additional justification in the assumption log (e.g., “eliminates a single hazard entirely while others remain”).
  4. Record assumptions – document the intervention category, confidence level, and evidence (e.g., GISF case studies) for audit purposes.
These steps mirror the calculateInterventionEAL helper and provide transparency for reviewers replicating the calculation.
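A compact TypeScript sketch of the clamping and savings logic is shown below. It follows the formulas in Section 3.3 and the 95% ceiling described above; the function names are illustrative and may not match the production calculateInterventionEAL helper.

```typescript
/** Reduction rates are clamped to 0–95% to preserve residual risk. */
const MAX_REDUCTION_RATE = 0.95;

function interventionEAL(ealBaseline: number, reductionRate: number): number {
  const clamped = Math.min(Math.max(reductionRate, 0), MAX_REDUCTION_RATE);
  return ealBaseline * (1 - clamped);
}

function annualSavings(ealBaseline: number, reductionRate: number): number {
  return ealBaseline - interventionEAL(ealBaseline, reductionRate);
}

// Section 3.3.1 example: $50,000 baseline with a 50% reduction → $25,000 savings
const savings = annualSavings(50_000, 0.5); // 25,000
```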

3.4 Advanced Option: Incident-Level ARO × SLE

Organisations that prefer incident-level modelling will be able to enable an Advanced Mode in a future release. The current Stage 1 workflow focuses on the simplified historical-cost path, with Advanced Mode surfaced in documentation so analysts can reconcile the two approaches manually. Advanced Mode applies the classic formula:
EAL = Σ (ARO_i × SLE_i)
Use this when reliable frequency (ARO) and impact (SLE) estimates exist for individual incident categories. Advanced Mode results should reconcile with the historical baseline whenever the underlying data sets are consistent. To validate, total the incident-level EAL and compare it to the historical baseline for the same period; investigate and document any variance greater than 10%.
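The reconciliation rule can be expressed as a short sketch, assuming a simple incident-category structure (the types and names below are illustrative).

```typescript
/** Incident-level EAL (Advanced Mode) and reconciliation with the historical baseline. */
interface IncidentCategory {
  name: string;
  aro: number; // annualised rate of occurrence (events per year)
  sle: number; // single loss expectancy (USD per event)
}

function incidentLevelEAL(categories: IncidentCategory[]): number {
  return categories.reduce((sum, c) => sum + c.aro * c.sle, 0);
}

/** Returns true when the two estimates differ by more than the ±10% tolerance. */
function needsReconciliation(advancedEAL: number, historicalEAL: number): boolean {
  if (historicalEAL === 0) return advancedEAL !== 0;
  return Math.abs(advancedEAL - historicalEAL) / historicalEAL > 0.10;
}
```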

3.5 Parameter Guidance for Advanced Mode

Although the historical-cost workflow is the default, Advanced Mode still relies on traditional risk parameters. Analysts using this path should observe the following guardrails.
Annualised Rate of Occurrence (ARO)
  • Range: ≥ 0.00 (no upper limit).
  • Rare events (ARO < 1) should be expressed as “once every X years” (ARO = 1 / X).
  • Frequent events (ARO ≥ 1) can be recorded as incidents per year (monthly = 12, weekly = 52).
  • Very high values (>10) often indicate operational costs rather than stochastic risk; double-check whether the incident should be budgeted directly.
Single Loss Expectancy (SLE)
  • Range: ≥ $0.
  • Include both direct costs (damage, medical, ransom) and indirect costs (downtime, reputational recovery).
  • When historical data are sparse, supplement with insurance estimates, sector benchmarks, or the severity helper in Section 6.7.
Time Horizon
  • Allowed range: 1–10 years.
  • Match horizon to asset lifespan (e.g., 5 years for vehicles, 10 years for facilities) and funding certainty.
All assumptions used in Advanced Mode must be documented in the calculator’s assumption log, including data sources or expert judgments applied.

4. Risk Reduction Rates

4.1 Quick-Select Reduction Table

Reduction rates estimate how much a security investment is expected to lower annual incident costs. These values are conservative professional judgments grounded in GISF research, Humanitarian Outcomes datasets, and practitioner interviews. They will be tested and refined during Stage 5 pilots.
| Intervention Type | Conservative | Moderate | Optimistic | Typical Deployment Notes |
| --- | --- | --- | --- | --- |
| Physical Security (guarding, barriers, safe rooms) | 20% | 30% | 50% | Rapid effect once infrastructure is in place; effectiveness tied to compliance and maintenance. |
| Training (HEAT, security awareness, protocols) | 15% | 25% | 40% | Benefits scale with refresher frequency and leadership uptake; expect 3–6 month ramp-up. |
| Technology (tracking, communications, monitoring) | 25% | 40% | 60% | Requires integration with SOPs; best results when hardware + analytics + response protocols align. |
| Comprehensive SRM Programme (combined measures) | 35% | 50% | 70% | Represents layered interventions (training + tech + governance); assumes strong management sponsorship. |
How users apply the table
  1. Select the intervention category that best matches the planned investment.
  2. Choose Conservative, Moderate, or Optimistic based on confidence in implementation quality and contextual fit.
  3. The calculator displays the underlying percentage and allows fine-tuning (e.g., adjusting 25% to 27%).
  4. Record a short rationale for governance and future refinement.
Real-world illustrations
  • Physical Security (30% moderate) – e.g., compound hardening plus vetted guarding at a provincial office reduced burglary and vandalism costs by one-third within six months.
  • Training (25% moderate) – multi-cohort HEAT rollout with quarterly refreshers reduced incident-triggered downtime by an estimated 25%.
  • Technology (40% moderate) – vehicle tracking and redundant comms lowered convoy losses by roughly two-fifths during a 12‑month pilot.
  • Comprehensive Programme (50% moderate) – integrated governance, training, and technology bundle (paired with community acceptance work) halved annualised security losses across three operating regions.

4.2 Evidence and Methodological Basis

The quick-select table above is anchored in empirical ranges originally documented through incident-level analysis. When organisations need more granular justification—especially for Advanced Mode inputs—use the evidence table below.
| Intervention Category | Typical ARO Reduction | Typical SLE Reduction | Timeline to Full Effect | Methodological Basis |
| --- | --- | --- | --- | --- |
| Security training & protocols | 20–40% | 10–20% | 3–6 months | GISF 2024 member survey (n=32); ALNAP learning briefs on HEAT effectiveness. |
| Physical security infrastructure | 10–30% | 20–40% | 1–3 months | Humanitarian Outcomes incident review; GISF Field Security Handbook. |
| Cyber/communications technology | 30–60% | 20–50% | Immediate–3 months | Humanitarian Outcomes 2023 AWSD trend analysis; NGO cybersecurity audits. |
| Security personnel & monitoring | 20–50% | 15–30% | 6–12 months | FAIR-adapted risk models; academic studies on man-guarding deterrence. |
| Community acceptance strategies | 30–50% | 10–25% | 12–24 months | CDA collaborative learning projects; GISF “Acceptance as a Security Strategy” report. |
Using both tables together
  • The quick-select reduction percentage is the headline number applied in the calculator.
  • The evidence table provides the narrative justification and can be used to decompose reductions into frequency (ARO) vs impact (SLE) components when stakeholders require transparency.
  • All selected rates must be documented in the calculator’s assumption log with reference to data source (historical performance, literature citation, expert interview).

4.3 Sensitivity Guidance

Because reduction rates carry uncertainty, organisations should present optimistic and conservative brackets during decision meetings, mirroring finance review practice. The Stage 1 interface provides professional-judgment presets (conservative/moderate/optimistic) plus a custom percentage input. Teams can manually enter ±10% scenarios using the custom field and document their rationale in the assumption log.

5. Financial Metrics

5.1 Annual Savings and Benefit Stream

Annual savings calculated in Section 3.3 are treated as a recurring benefit across the analysis horizon (default three years). Qualitative benefits (Section 6) are tracked separately and are not monetised unless the organisation supplies defensible proxies.

5.2 Return on Investment (ROI)

ROI = ((Σ Annual Savings) - Total Investment Costs) / Total Investment Costs
  • Summation typically runs over a 3-year horizon with 0% discounting.
  • ROI is expressed as a decimal or percentage (multiply by 100).
  • Positive ROI indicates cost savings exceed investment costs.

5.2.1 ROI Worked Example

Using the savings from Section 3.3.1:
  • Annual Savings: $25,000
  • Time Horizon: 3 years
  • Investment Costs: Year 1 = $40,000, Year 2 = $12,000, Year 3 = $12,000
Steps:
  1. Aggregate savings: Σ Savings = 25,000 × 3 = 75,000.
  2. Aggregate costs: Σ Costs = 40,000 + 12,000 + 12,000 = 64,000.
  3. ROI calculation: ROI = (75,000 - 64,000) / 64,000 = 0.1719 (17.19%).
Interpretation: The investment returns approximately $1.17 for each $1 spent across three years. Present both ROI and the underlying savings/costs to maintain transparency with governance forums.
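The same arithmetic can be checked with a brief TypeScript sketch (illustrative names, not the calculator's API):

```typescript
/** Undiscounted ROI over the analysis horizon, per the formula in Section 5.2. */
function roi(annualSavings: number, horizonYears: number, costsByYear: number[]): number {
  const totalSavings = annualSavings * horizonYears;
  const totalCosts = costsByYear.reduce((sum, c) => sum + c, 0);
  if (totalCosts === 0) throw new Error("Total investment costs must be greater than zero.");
  return (totalSavings - totalCosts) / totalCosts;
}

// Section 5.2.1: $25,000 savings over 3 years against $40k + $12k + $12k in costs
const exampleRoi = roi(25_000, 3, [40_000, 12_000, 12_000]); // ≈ 0.1719 (17.19%)
```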

5.3 Payback Period

Identify the first year where cumulative savings equal or exceed cumulative costs.
Payback Year = smallest Y where Σ_{t=1..Y} Savings_t ≥ Σ_{t=1..Y} Costs_t
If payback occurs mid-year, interpolate linearly, where Remaining Costs are cumulative costs through Year Y minus cumulative savings through Year Y − 1:
Payback = (Y - 1) + (Remaining Costs / Savings in Year Y)
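A sketch of the payback logic, including the linear interpolation above, is shown below. It assumes constant annual savings, no further costs beyond the listed years, and illustrative names.

```typescript
/** Payback period in years, or null if payback is not reached within the horizon. */
function paybackYears(
  annualSavings: number,
  costsByYear: number[],
  maxYears = 10
): number | null {
  if (annualSavings <= 0) return null;
  let cumulativeSavings = 0;
  let cumulativeCosts = 0;
  for (let year = 1; year <= maxYears; year++) {
    cumulativeCosts += costsByYear[year - 1] ?? 0; // no costs after the listed years
    const savingsThroughPriorYear = cumulativeSavings;
    cumulativeSavings += annualSavings;
    if (cumulativeSavings >= cumulativeCosts) {
      const remainingCosts = cumulativeCosts - savingsThroughPriorYear;
      return (year - 1) + remainingCosts / annualSavings;
    }
  }
  return null;
}

// Section 8 example: $37,500 savings vs $90k / $45k / $25k costs → ≈ 4.27 years
const payback = paybackYears(37_500, [90_000, 45_000, 25_000]);
```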

5.4 Net Present Value (NPV) and Discounting Rationale

The calculator applies a 0% discount rate to reflect humanitarian ethics (future lives have equal value).
NPV = Σ_{t=1..T} (Savings_t - Costs_t) / (1 + r)^t  with r = 0%
Ethical and practical justification
  1. Lives have equal value across time: A beneficiary protected in Year 3 has the same intrinsic worth as one protected in Year 1.
  2. Mission alignment: Unlike for-profit enterprises maximising financial returns (which use 7–10% discount rates reflecting opportunity cost of capital), NGOs maximise human welfare.
  3. Philosophical foundation: Ramsey (1928) condemned discounting future welfare as “ethically indefensible.” 2
  4. Humanitarian precedent: The Stern Review (2006) used near-zero discounting (1.4%) to avoid devaluing future lives in climate policy. 3
  5. Intergenerational equity: Zero discounting ensures current investments benefit future staff, communities, and beneficiaries without temporal discrimination.
  6. Practical safeguard: Leaving discounting decisions to end users risks inconsistent application; the calculator instead provides a neutral baseline and guidance for external re-application if required.
Organisations that must apply financial discounting (e.g., donor-driven cost of capital) can export the cash-flow table and apply their preferred rate externally. Section 5.5 provides horizon sensitivity to support those discussions.
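For organisations re-applying a discount rate externally, the exported cash-flow table reduces to the following calculation; this is a sketch with an explicit rate parameter, and at r = 0% it reproduces the calculator's undiscounted result.

```typescript
/** Net present value of the savings-minus-costs stream; the calculator fixes rate = 0. */
function npv(
  annualSavings: number,
  costsByYear: number[],
  horizonYears: number,
  rate = 0
): number {
  let total = 0;
  for (let t = 1; t <= horizonYears; t++) {
    const netBenefit = annualSavings - (costsByYear[t - 1] ?? 0);
    total += netBenefit / Math.pow(1 + rate, t);
  }
  return total;
}

// With r = 0 this is simply cumulative savings minus cumulative costs.
const npvZero = npv(25_000, [40_000, 12_000, 12_000], 3); // 75,000 − 64,000 = 11,000
```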

5.5 Time Horizon Sensitivity

Security investments often last beyond a three-year planning window. Extending the analysis horizon captures long-term payback and ROI.
| Horizon (years) | Cumulative Savings ($) | Payback Timing | ROI (undiscounted) | Interpretation |
| --- | --- | --- | --- | --- |
| 3 (default) | 3 × Annual Savings | Payback may not occur | ROI may remain negative | Appropriate for short grants or pilots. |
| 5 | 5 × Annual Savings | Often reaches payback | ROI typically positive if savings stable | Aligns with vehicle/asset lifespans. |
| 10 | 10 × Annual Savings | Payback achieved early | ROI markedly positive | Use when interventions deliver enduring benefits (infrastructure, governance). |
When presenting results, disclose the analysis horizon and offer an extended-horizon scenario if the investment continues delivering savings beyond Year 3.

6. Qualitative Valuation

6.1 Framework Overview

Quantitative savings rarely capture the full value of security investments. The Qualitative Impact Index (QII) documents how SRM enables mission delivery across four outcome dimensions:
  1. Access to Environment (AE) – ability to reach and operate in critical locations.
  2. Operational Continuity (OC) – reduced downtime and faster recovery after incidents.
  3. Community Acceptance (CA) – strengthened social licence and community support.
  4. Staff Wellbeing (SW) – improved morale, retention, and duty of care.
The QII is intentionally lightweight, evidence-driven, and auditable. It aligns with ISO 31000’s call for transparent decision-making, Social Return on Investment (SROI) practice4, OECD-DAC evaluation criteria5, and ALNAP/GISF guidance on enabling operations in high-risk contexts.

6.2 Dimension Overview and Weighting

Each dimension captures a distinct pathway through which SRM investments create value. Default weights are 0.25 per dimension (summing to 1.0). Teams may customise weights when organisational priorities differ; all changes must be justified in the calculator’s notes. Weights are renormalised automatically to maintain a total of 1.0.

6.3 Scoring Workflow

Each dimension follows three quick steps during review meetings or focus groups:
  1. Regression check: If deterioration is observed, select the regression statement, record evidence, and assign score 0 (skip improvement statements).
  2. Improvement checklist: Tick each statement that accurately describes the past 12 months relative to baseline. Prompts rely on operational knowledge rather than formal datasets.
  3. Map ticks to score: Convert the number of improvement statements ticked to the 0–5 scale using Table 6.1.
Table 6.1 – Checklist to score mapping
| Regression selected? | Improvement boxes ticked | Assigned score |
| --- | --- | --- |
| Yes | N/A (skip checklist) | 0 |
| No | 0 boxes | 1 |
| No | 1 box | 2 |
| No | 2 boxes | 3 |
| No | 3 boxes | 4 |
| No | 4 boxes | 5 |
Score 1 represents “no observed improvement” (status quo). Scores 2–5 capture escalating levels of demonstrable improvement. Any regression resets the score to 0 regardless of improvements in other areas.
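Table 6.1 reduces to a small mapping function. The sketch below is illustrative and simply encodes the regression override and the ticks-to-score rule.

```typescript
/** Table 6.1 mapping: regression forces 0; otherwise 0–4 ticks map to scores 1–5. */
function dimensionScore(regressionSelected: boolean, improvementTicks: number): number {
  if (regressionSelected) return 0;
  const ticks = Math.min(Math.max(Math.trunc(improvementTicks), 0), 4);
  return ticks + 1;
}

dimensionScore(true, 3);  // 0 – regression overrides any improvements
dimensionScore(false, 0); // 1 – status quo, no observed improvement
dimensionScore(false, 4); // 5 – all improvement statements evidenced
```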

6.4 Dimension Checklists

Teams may adjust wording to local context while preserving intent. Each statement is supported by evidence notes (meeting minutes, incident logs, survey excerpts). Guidance for documenting evidence, including example notes, appears in the Pilot Pack (Appendix B).

Access to Environment (AE)

Regression check: ☐ We lost access to critical locations we previously served (e.g., repeated mission cancellations or newly closed districts).
Improvement checks (tick all that apply):
  • ☐ We regained access to at least one high-priority location that had been closed.
  • ☐ The regained access has been sustained for six months or more without emergency waivers.
  • ☐ Most critical locations are now reachable on routine rotas with predictable approvals.
  • ☐ We reached additional communities or beneficiaries because of the improved access.

Operational Continuity (OC)

Regression check: ☐ Security-related downtime increased or we experienced more suspensions than our baseline period.
Improvement checks (tick all that apply):
  • ☐ We had fewer security-driven suspensions or closures than last year.
  • ☐ After incidents, teams resumed work within planned timelines (e.g., back on schedule within the expected window).
  • ☐ Contingency plans or backup sites kept essential services running during disruptions.
  • ☐ We met donor/programme deliverables despite security incidents, without accumulating large backlogs.

Community Acceptance (CA)

Regression check: ☐ Community-triggered incidents, access denials, or complaints increased.
Improvement checks (tick all that apply):
  • ☐ Community complaints or incident reports linked to acceptance issues dropped noticeably.
  • ☐ We have a regular forum or liaison mechanism that community representatives attend.
  • ☐ Community leaders helped resolve at least one security or access issue before it escalated.
  • ☐ Recent feedback (surveys, partner letters, third-party reviews) points to strong support or trust.

Staff Wellbeing (SW)

Regression check: ☐ Security-related stress, turnover, or medical leave worsened.
Improvement checks (tick all that apply):
  • ☐ Security-related turnover, sick leave, or exceptional R&R requests decreased.
  • ☐ Uptake of wellbeing resources (counselling, peer support, check-ins) increased.
  • ☐ Supervisors run routine wellbeing check-ins after incidents or high-stress periods.
  • ☐ Staff surveys or pulse checks report feeling safer and better supported.

6.5 Aggregation and Reporting

For period t, the calculator stores each dimension’s score score_d,t and weight w_d.
QII_t = Σ_d (w_d × score_d,t)
Reporting should present:
  • Financial outputs: ROI, NPV, payback, annual savings.
  • Qualitative summary: QII score, dimension-level scores, and concise evidence notes explaining which statements were selected.
  • Confidence signals: Optional High / Medium / Low tags to flag anecdotal evidence or data gaps.
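The aggregation formula, including the automatic weight renormalisation noted in Section 6.2, can be sketched as follows (types and names are illustrative):

```typescript
/** Weighted QII per Section 6.5; weights are renormalised to sum to 1.0. */
interface DimensionResult {
  dimension: "AE" | "OC" | "CA" | "SW";
  weight: number; // default 0.25
  score: number;  // 0–5 from Table 6.1
}

function qii(results: DimensionResult[]): number {
  const weightTotal = results.reduce((sum, r) => sum + r.weight, 0);
  if (weightTotal <= 0) throw new Error("Weights must sum to a positive value.");
  return results.reduce((sum, r) => sum + (r.weight / weightTotal) * r.score, 0);
}

// Section 8 example: equal weights, scores 4 / 3 / 2 / 3 → QII = 3.0
const exampleQII = qii([
  { dimension: "AE", weight: 0.25, score: 4 },
  { dimension: "OC", weight: 0.25, score: 3 },
  { dimension: "CA", weight: 0.25, score: 2 },
  { dimension: "SW", weight: 0.25, score: 3 },
]);
```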

6.6 Limitations and Mitigations

  • Evidence gaps: Record qualitative notes for each score and track follow-up actions when stronger evidence is desired.
  • Double counting risk: Keep the four dimensions conceptually distinct so the same improvement is not scored twice.
  • Scenario spillover: When qualitative gains also affect EAL (e.g., better acceptance reduces incidents), record the change in only one part of the model and note the rationale.
  • Periodic review: Refresh qualitative scores at least annually to ensure they reflect current realities and to capture regression when environments deteriorate.

6.7 Optional SLE Severity Helper (Advanced Mode)

When organisations enable Advanced Mode and supply their annual operating budget, the calculator can assist with Single Loss Expectancy estimates using budget-relative severity bands. This mirrors the helper introduced in v1.4 and remains available for analysts who prefer structured scaffolding.
| Severity | Budget % | Description | Example Incidents |
| --- | --- | --- | --- |
| Negligible | 0.1–0.5% | Minor incidents with limited organisational impact | Minor property loss, short IT disruption |
| Minor | 0.5–2% | Localised incidents requiring standard response | Office burglary, short evacuation |
| Moderate | 2–6% | Significant incidents requiring management attention | Vehicle theft, data breach, extended facility closure |
| Severe | 6–20% | Major incidents affecting operations significantly | Staff injury requiring evacuation, major system compromise |
| Critical | 20%+ | Catastrophic incidents threatening organisational viability | Kidnapping with ransom, catastrophic facility destruction |
Calculator behaviour
  • When an annual budget is provided, the SLE input offers the severity dropdown with pre-computed mid-point values; users can override with custom amounts at any time.
  • Without budget data, the standard numeric SLE input is displayed.
  • The helper is a guide only—historical incident costs, insurance data, or expert judgment should take precedence whenever available.
See Data Schema & Codebook Section 3.3.1 for the organisation context fields that activate this helper.
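A sketch of the severity-helper logic follows. The mid-point convention and the 40% working upper bound for the open-ended Critical band are assumptions for illustration, not the implemented behaviour.

```typescript
type Severity = "negligible" | "minor" | "moderate" | "severe" | "critical";

/** Budget-relative bands from the table above, expressed as fractions of annual budget. */
const SEVERITY_BANDS: Record<Severity, [number, number]> = {
  negligible: [0.001, 0.005],
  minor: [0.005, 0.02],
  moderate: [0.02, 0.06],
  severe: [0.06, 0.20],
  critical: [0.20, 0.40], // open-ended "20%+" band; 40% is an assumed working upper bound
};

/** Suggested SLE as the mid-point of the band; users can always override. */
function suggestedSLE(annualBudget: number, severity: Severity): number {
  const [low, high] = SEVERITY_BANDS[severity];
  return annualBudget * ((low + high) / 2);
}

// e.g., a $5m annual budget and a "moderate" incident suggests an SLE of $200,000
const sle = suggestedSLE(5_000_000, "moderate");
```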

7. Scenario Logic and Workflow

  1. Quick-Start (Default):
    • Prompt 1: “Do you have the total cost of security incidents for the last year (or average of last 2–3 years)?”
    • Prompt 2: “Choose the expected reduction from your planned investment.”
    • Output: Baseline EAL, Intervention EAL, Savings, ROI, Payback, QII placeholder.
  2. Scenario Fallback:
    • If the user selects “No data,” display the scenario cards from Section 3.2.
    • Allow optional override and collect a confidence tag (High / Medium / Low).
  3. Exposure Adjustment (Optional):
    • If operations are expected to scale materially, prompt for exposure ratios (e.g., projected staff-days ÷ historical staff-days) and apply the multiplier from Section 3.1.
    • Hidden by default to keep the quick-start path uncluttered.
  4. Advanced Mode (Optional):
    • Toggle reveals incident-level entry for ARO and SLE.
    • Results reconcile with the primary method to maintain trust with analysts who prefer granular modelling.
  5. Results Presentation:
    • Dashboard shows annual savings, ROI, payback, and qualitative highlights.
    • Sensitivity control (±10%) illustrates how results change if reduction rates vary.

8. Worked Example

Context: Medium-sized NGO operating in a conflict-affected region planning to invest in comprehensive SRM improvements.
  1. Historical Costs: $225,000 total over the last 3 years → `EAL_baseline = 225,000 / 3 = 75,000`.
  2. Reduction Rate: Select “Moderate – Comprehensive SRM (50%)”.
  3. Intervention EAL: 75,000 × (1 - 0.50) = $37,500.
  4. Annual Savings: 75,000 - 37,500 = $37,500.
  5. Investment Costs: Year 1 = $90,000; Year 2 = $45,000; Year 3 = $25,000 (training refresh).
  6. ROI (3-year horizon):
    • Total Savings (3 years) = $112,500
    • Total Costs = $160,000
    • ROI = (112,500 - 160,000) / 160,000 = -0.297 (−29.7%)
  7. Payback:
    • Year 1 savings = 37,500 (costs exceed benefits)
    • Year 2 cumulative savings = 75,000 (below cumulative costs of 135,000)
    • Year 3 cumulative savings = 112,500 (below total costs of 160,000)
    • Payback occurs in Year 5 if savings remain constant (160,000 ÷ 37,500 ≈ 4.3 years).
  8. Extended Horizon Sensitivity:
    • 5-year savings: $187,500 → ROI = (187,500 - 160,000) / 160,000 = 17.2%
    • 10-year savings: $375,000 → ROI = 134.4%
    • Communicate both 3-year baseline and extended horizon to decision-makers.
  9. QII:
    • Access (tick 3 statements) = score 4
    • Continuity (tick 2 statements) = score 3
    • Acceptance (tick 1 statement) = score 2
    • Staff wellbeing (tick 2 statements) = score 3
    • Weighted QII = 0.25 × (4 + 3 + 2 + 3) = 3.0.
Reporting Narrative: Even with a negative three-year ROI, the investment halves expected losses and materially improves access and wellbeing. Leadership should consider the extended payback horizon alongside qualitative gains.

9. Edge Cases and Data Quality

  • Zero incidents recorded: When users enter $0 historical costs, the calculator prompts them to (a) select a scenario preset or (b) manually enter a proxy baseline. If they proceed with $0, the tool sets EAL_baseline = 0, Savings = 0, and ROI = -100% to avoid division errors, while displaying a red “Data required” badge.
  • One-off catastrophic year: Users can apply either a three-year rolling average or a manual cap. When EAL_capped is supplied, the calculator records EAL_baseline = MIN(avg_3yr, EAL_capped) and logs the override note in the audit trail. Dashboard badges flag capped baselines in amber.
  • Rapid scaling (exposure change): Provide projected and historical exposure values (e.g., staff-days). The tool applies EAL_adjusted = EAL_baseline × (Projected / Historical) and stores the ratio for audit. Sensitivity output shows both original and adjusted baselines.
  • Low confidence in reduction rate: Selecting the “Low confidence” toggle displays ±10% scenario bands next to the main result and annotates the export. This ensures decision-makers see conservative and optimistic variants.
  • Switching to Advanced Mode: The calculator sums incident-level Σ(ARO_i × SLE_i) and compares it with the baseline. Variance beyond ±10% surfaces a warning requiring users to reconcile assumptions before proceeding.

10. Assumptions, Limitations, and Future Refinement

  1. Historical Stability: Assumes incident patterns in the near future resemble recent history unless exposure changes are documented.
  2. Single Reduction Rate: Applies one rate across all incident types; further granularity is available only in Advanced Mode.
  3. Conservative Estimates: Reduction ranges are intentionally modest; Stage 5 pilots will validate and adjust them.
  4. Zero Discounting: Prioritises ethical parity over the financial cost of capital. Organisations needing discounting can apply it externally.
  5. Qualitative-Quantitative Separation: Prevents double counting but requires disciplined storytelling to integrate both dimensions.

11. Validation Roadmap

  • Scenario Presets: Validate against pilot NGO incident datasets. Adjust baseline values if variance exceeds ±30%; publish updated ranges in Methods Note v2.2.
  • Reduction Rates: Conduct rapid expert elicitation (5–10 SRM managers) and reconcile with pilot outcomes. Update table bands if consensus differs by >10 percentage points.
  • Qualitative Framework: Gather user feedback on checklist clarity and scoring consistency; refine prompts and add sector-specific examples as needed.
  • Calculator Outputs: Compare results with manual spreadsheet replications during pilot debriefs to confirm formula accuracy.

12. References

  • ISO 31000:2018 – Risk Management Guidelines.
  • Ramsey, F. P. (1928). A Mathematical Theory of Saving. Economic Journal, 38(152).
  • Stern Review (2006). The Economics of Climate Change. HM Treasury.
  • Hubbard, D. (2009). The Failure of Risk Management. Wiley.
  • Taleb, N. (2007). The Black Swan. Random House.
  • GISF (2025). Call for Proposals: Development of an ROI Tool for NGO Security Risk Management.
  • Humanitarian Outcomes (2023). Aid Worker Security Report.
  • CDA Collaborative Learning Projects (2022). Community Engagement for Security Access.
  • ALNAP (2021). Security Risk Management in Practice.
  • Nicholls, J., Lawlor, E., Neitzert, E., & Goodspeed, T. (2012). A Guide to Social Return on Investment. Social Value UK.
  • OECD (2019). Better Criteria for Better Evaluation: Revised Evaluation Criteria Definitions and Principles for Use. OECD Publishing.

Document Control

Version: 2.1.0
Status: Draft for RFQ Submission
Date: 2025-10-17
Next Review: 2026-01-10 (quarterly review cycle)
Change Log
| Date | Version | Changes | Author |
| --- | --- | --- | --- |
| 2025-10-17 | 2.1.0 | Hybrid rewrite combining historical-cost workflow with full qualitative and parameter documentation; restored decision guide, SLE severity helper, and validation roadmap. | Project Team |
| 2025-10-17 | 2.0.0 | Historical-cost baseline prototype with simplified documentation (superseded by v2.1.0). | Project Team |
| 2025-10-15 | 1.4.2 | Clarified Staff Wellbeing as required core dimension, aligned qualitative guidance with implemented features, and linked SLE severity helper to Data Schema context fields. | Project Team |
| 2025-10-14 | 1.4.1 | Stage 1 alignment: refreshed documentation, validation fixtures, and application UI to reflect checklist scoring plus evidence notes. | Project Team |
| 2025-10-10 | 1.0 | Initial release – comprehensive methodological framework. | Project Team |
Approval Sign-Off
  • ✅ Technical Lead – Formula accuracy verified
  • ⏳ Product Owner – RFQ Stage 1 requirements fulfilled (pending review)
  • ⏳ GISF Stakeholder – Methodological transparency confirmed (pending review)

Footnotes

  1. International Organization for Standardization (2018), ISO 31000: Risk Management — Guidelines.
  2. Frank P. Ramsey (1928), “A Mathematical Theory of Saving,” The Economic Journal, 38(152), pp. 543–559.
  3. Nicholas Stern (2006), The Economics of Climate Change: The Stern Review, HM Treasury.
  4. Nicholls, J., Lawlor, E., Neitzert, E., & Goodspeed, T. (2012), A Guide to Social Return on Investment. Social Value UK.
  5. OECD (2019), Better Criteria for Better Evaluation: Revised Evaluation Criteria Definitions and Principles for Use.