Methods Note: NGO Security Risk Management ROI Calculator
Version: 2.1.0 (Hybrid Draft)
Status: Draft for RFQ Submission
Date: 2025-10-17
Estimated Reading Time: 30–35 minutes
Table of Contents
- Introduction
- Methodological Framework
- Expected Annual Loss Methodology
- Risk Reduction Rates
- Financial Metrics
- Qualitative Valuation
- Scenario Logic and Workflow
- Worked Example
- Edge Cases and Data Quality
- Assumptions, Limitations, and Future Refinement
- Validation Roadmap
- References
1. Introduction
1.1 Purpose
This Methods Note documents the methodological foundation for the NGO Security Risk Management (SRM) ROI Calculator being developed for the Global Interagency Security Forum (GISF). It defines every equation, assumption, parameter range, and decision rule so that the calculator can be replicated, audited, and trusted by security managers, finance teams, and external reviewers.
1.2 Scope
This document covers:
- Formulas for Expected Annual Loss (EAL), Return on Investment (ROI), Net Present Value (NPV), and Payback Period
- The historical-cost baseline methodology and scenario fallback used to generate EAL
- Reduction-rate guidance for common security interventions, including methodological justifications
- Integration of the Qualitative Impact Index (QII) with financial outputs, including full checklists and scoring logic
- Data requirements, validation rules, edge-case handling, and pilot validation roadmap
1.3 Reporting Decision Guide
Use the quick decision guide below to determine which outputs to emphasise with stakeholders:
- Financial ROI + QII (recommended default): Present quantitative savings alongside qualitative outcomes.
- Qualitative-only reporting: When quantitative data are incomplete, document the QII evidence notes so improvements remain transparent.
2. Methodological Framework
2.1 Standards and Reference Points
- ISO 31000:2018 Risk Management Guidelines 1 – Emphasises evidence-based decision-making and transparency in documenting risk treatments.
- Humanitarian Financial Stewardship – Prefers demonstrated impact grounded in actual spend rather than speculative forecasts; aligns with practice in donor audits and post-incident reviews.
- Conventional Financial Analysis – ROI, NPV, and Payback formulas follow accepted corporate finance practice, with a zero-discount convention to reflect humanitarian ethics.
- Professional Judgment with Validation – Initial parameter estimates use conservative professional judgment and are explicitly flagged for refinement during Stage 5 pilots.
2.2 Calculation Flow
1. Gather Baseline Loss – Collect historical incident costs (1–3 years) or select a scenario preset when data are unavailable.
2. Select Reduction Rate – Choose the expected percentage reduction based on the planned intervention (Section 4).
3. Compute Intervention EAL – Apply the reduction to the baseline to obtain the Expected Annual Loss after the investment.
4. Calculate Financial Metrics – Translate annual savings into ROI, payback period, and NPV using the formulas in Section 5.
5. Document Qualitative Outcomes – Capture qualitative benefits via the Qualitative Impact Index to complement financial metrics.
2.3 Design Principles
- KISS First: The default workflow delivers results in minutes with only two numeric inputs (historical cost, reduction rate).
- Evidence-Lite, Evidence-Honest: When precise data are unavailable, conservative estimates are surfaced with clear labels and guidance.
- Transparency: All assumptions are visible in the calculator and repeatable from this Methods Note.
- Optional Granularity: An advanced ARO × SLE mode remains available for organisations that require incident-level decomposition and aligns with legacy workflows.
3. Expected Annual Loss Methodology
3.1 Baseline EAL from Historical Costs
The primary estimate of Expected Annual Loss is derived from recent incident costs actually incurred by the organisation:
EAL_baseline = (Sum of recorded incident costs over the last N years) / N
where N is 1–3 years based on available records (minimum 1 year required).
- Incident costs include all direct and indirect expenses (evacuation, medical, programme disruption, legal, reputational response).
- Uses “known actuals” rather than speculative forecasts.
- Aligns with financial audit practice: prior-year losses are the best available proxy for near-term expectations when environment and exposure remain similar.
- Keeps cognitive load low for non-financial security managers.
- If a year is anomalously low (no incidents) or high (major crisis), encourage a 3-year average.
- When operations are scaling up or down, adjust with an exposure factor:
EAL_adjusted = EAL_baseline × (Projected Exposure / Historical Exposure)
3.1.1 Historical Baseline Calculation
Consider an organisation with the following verified incident spend:
| Financial Year | Recorded Incident Costs (USD) | Notes |
|---|---|---|
| FY2024 | $50,000 | Evacuation, medical, and programme delays following two incidents |
| FY2023 | $45,000 | Vehicle theft, emergency procurement of radios, temporary hibernation |
| FY2022 | $55,000 | Compound intrusion escalating to relocation |
- Validate inputs – confirm all values are non-negative and represent full incident costs.
- Average – EAL_baseline = (50,000 + 45,000 + 55,000) / 3 = 50,000.
- Document rationale – log the data source (finance ledger + security incident log) for audit trail.
3.1.2 Choosing the Averaging Window (N)
- Use three years whenever data exist; this smooths spikes caused by rare, high-severity incidents.
- Use two years when one year is unavailable or clearly anomalous.
- Fall back to one year only when no additional data exist—flag the baseline as provisional and revisit once more data become available.
- If a known change in exposure occurred (programme doubled or halved), scale the historical average using the exposure adjustment shown earlier.
For example, if FY2023 were excluded as anomalous, the two-year average would be EAL_baseline = (50,000 + 55,000) / 2 = 52,500. This mirrors the helper function calculateHistoricalBaselineEAL in the codebase and ensures the numerical result is reproducible from the documentation. Conversely, if only FY2024 is available, EAL_baseline = 50,000 but the scenario should note the short observation window.
Relationship to the ARO × SLE approach. The historical average is mathematically equivalent to an aggregate of incident-level ARO × SLE values when historical incidents are categorised and multiplied by their unit costs. Section 3.4 provides the incident-level method; both approaches should converge when the same data underpin them. Differences greater than 10% warrant a documented explanation (data gaps, one-off events, etc.).
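To make the baseline arithmetic reproducible outside the tool, the sketch below shows one way the calculateHistoricalBaselineEAL logic could be written. The exact signature in the codebase may differ; the optional exposureRatio parameter is an illustrative addition reflecting the exposure adjustment described above.

```typescript
/**
 * Minimal sketch of the historical-baseline calculation (Section 3.1).
 * The production helper may differ in signature; this version averages
 * 1-3 years of verified incident costs and optionally applies the
 * exposure adjustment from Section 3.1.
 */
export function calculateHistoricalBaselineEAL(
  annualIncidentCosts: number[], // 1-3 years of verified incident spend (USD)
  exposureRatio: number = 1      // projected exposure / historical exposure
): number {
  if (annualIncidentCosts.length < 1 || annualIncidentCosts.length > 3) {
    throw new Error("Provide between 1 and 3 years of incident costs.");
  }
  if (annualIncidentCosts.some((c) => c < 0) || exposureRatio <= 0) {
    throw new Error("Costs must be non-negative and the exposure ratio positive.");
  }
  const average =
    annualIncidentCosts.reduce((sum, c) => sum + c, 0) / annualIncidentCosts.length;
  return average * exposureRatio;
}

// Worked example from Section 3.1.1: (50,000 + 45,000 + 55,000) / 3 = 50,000
const baseline = calculateHistoricalBaselineEAL([50_000, 45_000, 55_000]); // 50,000
```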
3.2 Handling Limited Data: Scenario Fallback
If the organisation cannot access historical cost data, the calculator provides scenario presets based on operational context. Values are intentionally conservative and will be validated with pilot NGOs. An illustrative preset structure follows the table.
| Scenario | Description | Baseline EAL | Typical Range | Notes |
|---|---|---|---|---|
| Low-Risk Stable | Urban or well-governed contexts with few incidents | $15,000 | up to $25k | Typical for HQ or low-threat country offices |
| Medium-Risk Conflict-Affected | Mixed stability, sporadic violence, reliance on local partners | $75,000 | up to $105k | Represents many field programmes in fragile states |
| High-Risk Active Conflict | Persistent insecurity, frequent incidents, remote operations | $200,000 | up to $280k | Assumes high reliance on security logistics |
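A minimal sketch of how these presets could be held as configuration data is shown below; the interface and field names are assumptions for illustration, while the baseline values come from the table above.

```typescript
// Illustrative representation of the Section 3.2 scenario presets.
// Field names are assumptions; baseline values come from the table above.
interface ScenarioPreset {
  id: string;
  description: string;
  baselineEAL: number; // USD per year, conservative default
}

const SCENARIO_PRESETS: ScenarioPreset[] = [
  { id: "low-risk-stable", description: "Urban or well-governed contexts with few incidents", baselineEAL: 15_000 },
  { id: "medium-risk-conflict-affected", description: "Mixed stability, sporadic violence, reliance on local partners", baselineEAL: 75_000 },
  { id: "high-risk-active-conflict", description: "Persistent insecurity, frequent incidents, remote operations", baselineEAL: 200_000 },
];
```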
3.3 Intervention EAL and Annual Savings
Once a baseline is set (historical data or scenario preset), the Expected Annual Loss after the investment is calculated by applying the selected reduction rate r:
EAL_intervention = EAL_baseline × (1 − r)
Annual Savings = EAL_baseline − EAL_intervention
3.3.1 Intervention EAL Calculation (Example and Rate Clamping)
Continuing the example baseline (EAL_baseline = $50,000):
- Select reduction rate – choose a comprehensive SRM programme with moderate confidence (r = 0.50).
- Apply formula – EAL_intervention = 50,000 × (1 - 0.50) = 25,000; Annual Savings = 50,000 - 25,000 = 25,000.
- Clamp unrealistic inputs – reduction rates are limited to 0–95% in the implementation. ISO 31000-aligned risk reviews emphasise residual risk; setting a 95% ceiling avoids implying total elimination of threat and prevents negative EAL values. Any analyst-supplied rate above 95% is automatically capped and should trigger additional justification in the assumption log (e.g., “eliminates a single hazard entirely while others remain”).
- Record assumptions – document the intervention category, confidence level, and evidence (e.g., GISF case studies) for audit purposes.
These steps mirror the calculateInterventionEAL helper and provide transparency for reviewers replicating the calculation.
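A minimal sketch of that helper, assuming a simplified signature, is shown below; the 0–95% clamp mirrors the rate-clamping rule described above.

```typescript
// Sketch of the intervention-EAL step (Section 3.3). The production
// calculateInterventionEAL helper may differ; the clamp reflects the
// 0-95% rule described in Section 3.3.1.
const MAX_REDUCTION_RATE = 0.95;

export function calculateInterventionEAL(
  baselineEAL: number,
  reductionRate: number // e.g. 0.50 for a 50% expected reduction
): { interventionEAL: number; annualSavings: number; appliedRate: number } {
  const appliedRate = Math.min(Math.max(reductionRate, 0), MAX_REDUCTION_RATE);
  const interventionEAL = baselineEAL * (1 - appliedRate);
  return { interventionEAL, annualSavings: baselineEAL - interventionEAL, appliedRate };
}

// Example from Section 3.3.1: 50,000 × (1 − 0.50) = 25,000 intervention EAL,
// leaving annual savings of 25,000.
const result = calculateInterventionEAL(50_000, 0.5);
```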
3.4 Advanced Option: Incident-Level ARO × SLE
Organisations that prefer incident-level modelling will be able to enable an Advanced Mode in a future release. The current Stage 1 workflow focuses on the simplified historical-cost path, with Advanced Mode surfaced in documentation so analysts can reconcile the two approaches manually. Advanced Mode applies the classic formula EAL = ARO × SLE, summed across incident categories, wherever frequency (ARO) and impact (SLE) estimates exist for individual incident categories. Advanced Mode results should reconcile with the historical baseline whenever the underlying data sets are consistent. To validate, total the incident-level EAL and compare it to the historical baseline for the same period; investigate and document any variance greater than 10%.
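The sketch below illustrates one way this reconciliation check could be implemented; the interface and function names are assumptions, not the tool's actual API.

```typescript
// Sketch of the Advanced Mode reconciliation check (Section 3.4): sum the
// incident-level ARO × SLE values and compare them with the historical
// baseline, flagging variances above 10%. Names are illustrative.
interface IncidentCategory {
  name: string;
  aro: number; // annualised rate of occurrence (incidents per year)
  sle: number; // single loss expectancy (USD per incident)
}

export function reconcileAdvancedMode(
  categories: IncidentCategory[],
  historicalBaselineEAL: number
): { advancedEAL: number; variance: number; needsExplanation: boolean } {
  const advancedEAL = categories.reduce((sum, c) => sum + c.aro * c.sle, 0);
  const variance =
    historicalBaselineEAL === 0
      ? Infinity
      : Math.abs(advancedEAL - historicalBaselineEAL) / historicalBaselineEAL;
  return { advancedEAL, variance, needsExplanation: variance > 0.1 };
}
```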
3.5 Parameter Guidance for Advanced Mode
Although the historical-cost workflow is the default, Advanced Mode still relies on traditional risk parameters. Analysts using this path should observe the following guardrails; a validation sketch follows these lists.
Annualised Rate of Occurrence (ARO)
- Range: ≥ 0.00 (no upper limit).
- Rare events (ARO < 1) should be expressed as “once every X years” (ARO = 1 / X).
- Frequent events (ARO ≥ 1) can be recorded as incidents per year (monthly = 12, weekly = 52).
- Very high values (>10) often indicate operational costs rather than stochastic risk; double-check whether the incident should be budgeted directly.
Single Loss Expectancy (SLE)
- Range: ≥ $0.
- Include both direct costs (damage, medical, ransom) and indirect costs (downtime, reputational recovery).
- When historical data are sparse, supplement with insurance estimates, sector benchmarks, or the severity helper in Section 6.7.
Analysis Horizon
- Allowed range: 1–10 years.
- Match horizon to asset lifespan (e.g., 5 years for vehicles, 10 years for facilities) and funding certainty.
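The sketch below illustrates these guardrails as simple validation helpers; the function names and warning messages are assumptions for illustration only.

```typescript
// Illustrative validation of Advanced Mode inputs (Section 3.5): ARO >= 0,
// SLE >= 0, horizon between 1 and 10 years, with the "once every X years"
// conversion for rare events. Helper names are assumptions.
export function aroFromRecurrenceInterval(yearsBetweenEvents: number): number {
  if (yearsBetweenEvents <= 0) throw new Error("Interval must be positive.");
  return 1 / yearsBetweenEvents; // e.g. once every 4 years -> ARO = 0.25
}

export function validateAdvancedInputs(aro: number, sle: number, horizonYears: number): string[] {
  const warnings: string[] = [];
  if (aro < 0) warnings.push("ARO must be zero or greater.");
  if (aro > 10) warnings.push("ARO above 10 suggests a recurring operational cost, not stochastic risk.");
  if (sle < 0) warnings.push("SLE must be zero or greater.");
  if (horizonYears < 1 || horizonYears > 10) warnings.push("Analysis horizon must be between 1 and 10 years.");
  return warnings;
}
```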
4. Risk Reduction Rates
4.1 Quick-Select Reduction Table
Reduction rates estimate how much a security investment is expected to lower annual incident costs. These values are conservative professional judgments grounded in GISF research, Humanitarian Outcomes datasets, and practitioner interviews. They will be tested and refined during Stage 5 pilots.
| Intervention Type | Conservative | Moderate | Optimistic | Typical Deployment Notes |
|---|---|---|---|---|
| Physical Security (guarding, barriers, safe rooms) | 20% | 30% | 50% | Rapid effect once infrastructure is in place; effectiveness tied to compliance and maintenance. |
| Training (HEAT, security awareness, protocols) | 15% | 25% | 40% | Benefits scale with refresher frequency and leadership uptake; expect 3–6 month ramp-up. |
| Technology (tracking, communications, monitoring) | 25% | 40% | 60% | Requires integration with SOPs; best results when hardware + analytics + response protocols align. |
| Comprehensive SRM Programme (combined measures) | 35% | 50% | 70% | Represents layered interventions (training + tech + governance); assumes strong management sponsorship. |
How to use the table:
- Select the intervention category that best matches the planned investment.
- Choose Conservative, Moderate, or Optimistic based on confidence in implementation quality and contextual fit.
- The calculator displays the underlying percentage and allows fine-tuning (e.g., adjusting 25% to 27%).
- Record a short rationale for governance and future refinement.
Illustrative examples at moderate rates:
- Physical Security (30% moderate) – e.g., compound hardening plus vetted guarding at a provincial office reduced burglary and vandalism costs by one-third within six months.
- Training (25% moderate) – multi-cohort HEAT rollout with quarterly refreshers reduced incident-triggered downtime by an estimated 25%.
- Technology (40% moderate) – vehicle tracking and redundant comms lowered convoy losses by roughly two-fifths during a 12‑month pilot.
- Comprehensive Programme (50% moderate) – integrated governance, training, and technology bundle (paired with community acceptance work) halved annualised security losses across three operating regions.
4.2 Evidence and Methodological Basis
The quick-select table above is anchored in empirical ranges originally documented through incident-level analysis. When organisations need more granular justification—especially for Advanced Mode inputs—use the evidence table below.
| Intervention Category | Typical ARO Reduction | Typical SLE Reduction | Timeline to Full Effect | Methodological Basis |
|---|---|---|---|---|
| Security training & protocols | 20–40% | 10–20% | 3–6 months | GISF 2024 member survey (n=32); ALNAP learning briefs on HEAT effectiveness. |
| Physical security infrastructure | 10–30% | 20–40% | 1–3 months | Humanitarian Outcomes incident review; GISF Field Security Handbook. |
| Cyber/communications technology | 30–60% | 20–50% | Immediate–3 months | Humanitarian Outcomes 2023 AWSD trend analysis; NGO cybersecurity audits. |
| Security personnel & monitoring | 20–50% | 15–30% | 6–12 months | FAIR-adapted risk models; academic studies on man-guarding deterrence. |
| Community acceptance strategies | 30–50% | 10–25% | 12–24 months | CDA collaborative learning projects; GISF “Acceptance as a Security Strategy” report. |
- The quick-select reduction percentage is the headline number applied in the calculator.
- The evidence table provides the narrative justification and can be used to decompose reductions into frequency (ARO) vs impact (SLE) components when stakeholders require transparency.
- All selected rates must be documented in the calculator’s assumption log with reference to data source (historical performance, literature citation, expert interview).
4.3 Sensitivity Guidance
Because reduction rates carry uncertainty, organisations should present optimistic and conservative brackets during decision meetings, mirroring finance review practice. The Stage 1 interface provides professional-judgment presets (conservative/moderate/optimistic) plus a custom percentage input. Teams can manually enter ±10% scenarios using the custom field and document their rationale in the assumption log.
5. Financial Metrics
5.1 Annual Savings and Benefit Stream
Annual savings calculated in Section 3.3 are treated as a recurring benefit across the analysis horizon (default three years). Qualitative benefits (Section 6) are tracked separately and are not monetised unless the organisation supplies defensible proxies.
5.2 Return on Investment (ROI)
ROI = (Σ Savings − Σ Costs) / Σ Costs
- Summation typically runs over a 3-year horizon with 0% discounting.
- ROI is expressed as a decimal or percentage (multiply by 100).
- Positive ROI indicates cost savings exceed investment costs.
5.2.1 ROI Worked Example
Using the savings from Section 3.3.1:
- Annual Savings: $25,000
- Time Horizon: 3 years
- Investment Costs: Year 1 = $40,000, Year 2 = $12,000, Year 3 = $12,000
- Aggregate savings: Σ Savings = 25,000 × 3 = 75,000.
- Aggregate costs: Σ Costs = 40,000 + 12,000 + 12,000 = 64,000.
- ROI calculation: ROI = (75,000 - 64,000) / 64,000 = 0.1719 (17.19%).
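The sketch below replicates this ROI arithmetic; the function name and signature are illustrative rather than the calculator's actual implementation.

```typescript
// Sketch of the undiscounted ROI calculation (Section 5.2), replicating the
// worked example: savings of 25,000/year over 3 years against total costs of
// 40,000 + 12,000 + 12,000 give ROI ≈ 17.19%.
export function calculateROI(
  annualSavings: number,
  horizonYears: number,
  yearlyCosts: number[]
): number {
  const totalSavings = annualSavings * horizonYears;
  const totalCosts = yearlyCosts.reduce((sum, c) => sum + c, 0);
  if (totalCosts === 0) throw new Error("Total costs must be greater than zero.");
  return (totalSavings - totalCosts) / totalCosts;
}

const roi = calculateROI(25_000, 3, [40_000, 12_000, 12_000]); // ≈ 0.1719 (17.19%)
```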
5.3 Payback Period
Identify the first year in which cumulative savings equal or exceed cumulative costs.
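A minimal sketch of this payback rule, assuming an illustrative signature, is shown below.

```typescript
// Sketch of the payback rule in Section 5.3: the first year in which
// cumulative savings equal or exceed cumulative costs. Returns null when
// payback is not reached within the horizon. Names are illustrative.
export function calculatePaybackYear(annualSavings: number, yearlyCosts: number[]): number | null {
  let cumulativeSavings = 0;
  let cumulativeCosts = 0;
  for (let year = 1; year <= yearlyCosts.length; year++) {
    cumulativeSavings += annualSavings;
    cumulativeCosts += yearlyCosts[year - 1];
    if (cumulativeSavings >= cumulativeCosts) return year;
  }
  return null; // payback not reached within the analysis horizon
}

// Section 5.2.1 figures (assuming costs of 40,000, 12,000 and 12,000 in Years 1-3):
// cumulative savings of 75,000 first exceed cumulative costs of 64,000 in Year 3.
const paybackYear = calculatePaybackYear(25_000, [40_000, 12_000, 12_000]); // 3
```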
5.4 Net Present Value (NPV) and Discounting Rationale
With discount rate d over a horizon of T years, NPV = Σ (Savings_t − Costs_t) / (1 + d)^t for t = 1…T. The calculator applies a 0% discount rate, so NPV reduces to cumulative savings minus cumulative costs. This zero-discount convention reflects humanitarian ethics (future lives have equal value):
- Lives have equal value across time: A beneficiary protected in Year 3 has the same intrinsic worth as one protected in Year 1.
- Mission alignment: Unlike for-profit enterprises maximising financial returns (which use 7–10% discount rates reflecting opportunity cost of capital), NGOs maximise human welfare.
- Philosophical foundation: Ramsey (1928) condemned discounting future welfare as “ethically indefensible.” 2
- Humanitarian precedent: The Stern Review (2006) used near-zero discounting (1.4%) to avoid devaluing future lives in climate policy. 3
- Intergenerational equity: Zero discounting ensures current investments benefit future staff, communities, and beneficiaries without temporal discrimination.
- Practical safeguard: Leaving discounting decisions to end users risks inconsistent application; the calculator instead provides a neutral baseline and guidance for external re-application if required.
5.5 Time Horizon Sensitivity
Security investments often last beyond a three-year planning window. Extending the analysis horizon captures long-term payback and ROI.
| Horizon (years) | Cumulative Savings ($) | Payback Timing | ROI (undiscounted) | Interpretation |
|---|---|---|---|---|
| 3 (default) | 3 × Annual Savings | Payback may not occur | ROI may remain negative | Appropriate for short grants or pilots. |
| 5 | 5 × Annual Savings | Often reaches payback | ROI typically positive if savings stable | Aligns with vehicle/asset lifespans. |
| 10 | 10 × Annual Savings | Payback achieved early | ROI markedly positive | Use when interventions deliver enduring benefits (infrastructure, governance). |
6. Qualitative Valuation
6.1 Framework Overview
Quantitative savings rarely capture the full value of security investments. The Qualitative Impact Index (QII) documents how SRM enables mission delivery across four outcome dimensions:
- Access to Environment (AE) – ability to reach and operate in critical locations.
- Operational Continuity (OC) – reduced downtime and faster recovery after incidents.
- Community Acceptance (CA) – strengthened social licence and community support.
- Staff Wellbeing (SW) – improved morale, retention, and duty of care.
6.2 Dimension Overview and Weighting
Each dimension captures a distinct pathway through which SRM investments create value. Default weights are 0.25 per dimension (summing to 1.0). Teams may customise weights when organisational priorities differ; all changes must be justified in the calculator’s notes. Weights are renormalised automatically to maintain a total of 1.0.
6.3 Scoring Workflow
Each dimension follows three quick steps during review meetings or focus groups:
1. Regression check: If deterioration is observed, select the regression statement, record evidence, and assign score 0 (skip improvement statements).
2. Improvement checklist: Tick each statement that accurately describes the past 12 months relative to baseline. Prompts rely on operational knowledge rather than formal datasets.
3. Map ticks to score: Convert the number of improvement statements ticked to the 0–5 scale using Table 6.1 (a code sketch of this mapping follows the table).
Table 6.1 – Mapping checklist results to dimension scores
| Regression selected? | Improvement boxes ticked | Assigned score |
|---|---|---|
| Yes | N/A (skip checklist) | 0 |
| No | 0 boxes | 1 |
| No | 1 box | 2 |
| No | 2 boxes | 3 |
| No | 3 boxes | 4 |
| No | 4 boxes | 5 |
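The mapping in Table 6.1 can be expressed as a small helper; the sketch below is illustrative and the function name is an assumption.

```typescript
// Sketch of the Table 6.1 mapping: a regression selection forces a score of 0;
// otherwise the number of improvement boxes ticked (0-4) maps to a 1-5 score.
export function dimensionScore(regressionSelected: boolean, improvementTicks: number): number {
  if (regressionSelected) return 0;
  const ticks = Math.min(Math.max(Math.floor(improvementTicks), 0), 4);
  return ticks + 1; // 0 ticks -> 1, 4 ticks -> 5
}
```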
6.4 Dimension Checklists
Teams may adjust wording to local context while preserving intent. Each statement is supported by evidence notes (meeting minutes, incident logs, survey excerpts). Guidance for documenting evidence, including example notes, appears in the Pilot Pack (Appendix B).
Access to Environment (AE)
Regression check: ☐ We lost access to critical locations we previously served (e.g., repeated mission cancellations or newly closed districts).
Improvement checks (tick all that apply):
- ☐ We regained access to at least one high-priority location that had been closed.
- ☐ The regained access has been sustained for six months or more without emergency waivers.
- ☐ Most critical locations are now reachable on routine rotas with predictable approvals.
- ☐ We reached additional communities or beneficiaries because of the improved access.
Operational Continuity (OC)
Regression check: ☐ Security-related downtime increased or we experienced more suspensions than our baseline period.
Improvement checks (tick all that apply):
- ☐ We had fewer security-driven suspensions or closures than last year.
- ☐ After incidents, teams resumed work within planned timelines (e.g., back on schedule within the expected window).
- ☐ Contingency plans or backup sites kept essential services running during disruptions.
- ☐ We met donor/programme deliverables despite security incidents, without accumulating large backlogs.
Community Acceptance (CA)
Regression check: ☐ Community-triggered incidents, access denials, or complaints increased.
Improvement checks (tick all that apply):
- ☐ Community complaints or incident reports linked to acceptance issues dropped noticeably.
- ☐ We have a regular forum or liaison mechanism that community representatives attend.
- ☐ Community leaders helped resolve at least one security or access issue before it escalated.
- ☐ Recent feedback (surveys, partner letters, third-party reviews) points to strong support or trust.
Staff Wellbeing (SW)
Regression check: ☐ Security-related stress, turnover, or medical leave worsened.
Improvement checks (tick all that apply):
- ☐ Security-related turnover, sick leave, or exceptional R&R requests decreased.
- ☐ Uptake of wellbeing resources (counselling, peer support, check-ins) increased.
- ☐ Supervisors run routine wellbeing check-ins after incidents or high-stress periods.
- ☐ Staff surveys or pulse checks report feeling safer and better supported.
6.5 Aggregation and Reporting
For period t, the calculator stores each dimension’s score score_d,t and weight w_d, and aggregates them into the composite index QII_t = Σ_d (w_d × score_d,t), with weights renormalised to sum to 1.0 (a sketch of this aggregation follows the reporting list below). Reports pair the following outputs:
- Financial outputs: ROI, NPV, payback, annual savings.
- Qualitative summary: QII score, dimension-level scores, and concise evidence notes explaining which statements were selected.
- Confidence signals: Optional High / Medium / Low tags to flag anecdotal evidence or data gaps.
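The sketch below shows one way the aggregation and weight renormalisation could be implemented; the interface and field names are assumptions for illustration.

```typescript
// Sketch of the QII aggregation (Section 6.5): weights are renormalised to
// sum to 1.0, then QII_t = Σ_d (w_d × score_d,t). Field names are assumptions.
interface DimensionResult {
  dimension: "AE" | "OC" | "CA" | "SW";
  weight: number; // default 0.25 each
  score: number;  // 0-5 from Table 6.1
}

export function calculateQII(results: DimensionResult[]): number {
  const totalWeight = results.reduce((sum, r) => sum + r.weight, 0);
  if (totalWeight <= 0) throw new Error("Weights must sum to a positive value.");
  return results.reduce((sum, r) => sum + (r.weight / totalWeight) * r.score, 0);
}

// Worked example (Section 8): equal weights and scores 4, 3, 2, 3 give QII = 3.0.
const qii = calculateQII([
  { dimension: "AE", weight: 0.25, score: 4 },
  { dimension: "OC", weight: 0.25, score: 3 },
  { dimension: "CA", weight: 0.25, score: 2 },
  { dimension: "SW", weight: 0.25, score: 3 },
]);
```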
6.6 Limitations and Mitigations
- Evidence gaps: Record qualitative notes for each score and track follow-up actions when stronger evidence is desired.
- Double counting risk: Keep the four dimensions conceptually distinct so the same improvement is not scored twice.
- Scenario spillover: When qualitative gains also affect EAL (e.g., better acceptance reduces incidents), record the change in only one part of the model and note the rationale.
- Periodic review: Refresh qualitative scores at least annually to ensure they reflect current realities and to capture regression when environments deteriorate.
6.7 Optional SLE Severity Helper (Advanced Mode)
When organisations enable Advanced Mode and supply their annual operating budget, the calculator can assist with Single Loss Expectancy estimates using budget-relative severity bands. This mirrors the helper introduced in v1.4 and remains available for analysts who prefer structured scaffolding; an illustrative sketch follows the notes below.
| Severity | Budget % | Description | Example Incidents |
|---|---|---|---|
| Negligible | 0.1–0.5% | Minor incidents with limited organisational impact | Minor property loss, short IT disruption |
| Minor | 0.5–2% | Localised incidents requiring standard response | Office burglary, short evacuation |
| Moderate | 2–6% | Significant incidents requiring management attention | Vehicle theft, data breach, extended facility closure |
| Severe | 6–20% | Major incidents affecting operations significantly | Staff injury requiring evacuation, major system compromise |
| Critical | 20%+ | Catastrophic incidents threatening organisational viability | Kidnapping with ransom, catastrophic facility destruction |
- When an annual budget is provided, the SLE input offers the severity dropdown with pre-computed mid-point values; users can override with custom amounts at any time.
- Without budget data, the standard numeric SLE input is displayed.
- The helper is a guide only—historical incident costs, insurance data, or expert judgment should take precedence whenever available.
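The sketch below illustrates how budget-relative suggestions could be derived from the severity bands. The band midpoints and names are assumptions (the Critical band is open-ended, so a nominal 20% floor is used), and users should always be able to override the suggestion.

```typescript
// Sketch of the budget-relative SLE helper (Section 6.7). Band midpoints are
// assumptions derived from the table above; the Critical band has no upper
// bound, so a nominal 20% floor is used and should be overridden by judgment.
type Severity = "Negligible" | "Minor" | "Moderate" | "Severe" | "Critical";

const SEVERITY_BAND_MIDPOINTS: Record<Severity, number> = {
  Negligible: 0.003, // midpoint of 0.1-0.5% of annual budget
  Minor: 0.0125,     // midpoint of 0.5-2%
  Moderate: 0.04,    // midpoint of 2-6%
  Severe: 0.13,      // midpoint of 6-20%
  Critical: 0.20,    // open-ended band; treat as a floor, not an estimate
};

export function suggestSLE(annualBudget: number, severity: Severity): number {
  if (annualBudget <= 0) throw new Error("Annual budget must be positive.");
  return annualBudget * SEVERITY_BAND_MIDPOINTS[severity];
}

// Illustrative example: a $5m organisation considering a "Moderate" incident
// receives a suggested SLE of 5,000,000 × 0.04 = 200,000, which users may override.
const suggested = suggestSLE(5_000_000, "Moderate");
```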
7. Scenario Logic and Workflow
- Quick-Start (Default):
- Prompt 1: “Do you have the total cost of security incidents for the last year (or average of last 2–3 years)?”
- Prompt 2: “Choose the expected reduction from your planned investment.”
- Output: Baseline EAL, Intervention EAL, Savings, ROI, Payback, QII placeholder.
- Scenario Fallback:
- If the user selects “No data,” display the scenario cards from Section 3.2.
- Allow optional override and collect a confidence tag (High / Medium / Low).
- Exposure Adjustment (Optional):
- If operations are expected to scale materially, prompt for exposure ratios (e.g., projected staff-days ÷ historical staff-days) and apply the multiplier from Section 3.1.
- Hidden by default to keep the quick-start path uncluttered.
- Advanced Mode (Optional):
- Toggle reveals incident-level entry for ARO and SLE.
- Results reconcile with the primary method to maintain trust with analysts who prefer granular modelling.
- Results Presentation:
- Dashboard shows annual savings, ROI, payback, and qualitative highlights.
- Sensitivity control (±10%) illustrates how results change if reduction rates vary.
8. Worked Example
Context: Medium-sized NGO operating in a conflict-affected region planning to invest in comprehensive SRM improvements.
- Historical Costs: recent incident costs average $75,000 per year, giving EAL_baseline = 75,000.
- Reduction Rate: Select “Moderate – Comprehensive SRM (50%)”.
- Intervention EAL: 75,000 × (1 - 0.50) = $37,500.
- Annual Savings: 75,000 - 37,500 = $37,500.
- Investment Costs: Year 1 = $90,000; Year 2 = $45,000; Year 3 = $25,000 (training refresh).
- ROI (3-year horizon):
- Total Savings (3 years) = $112,500
- Total Costs = $160,000
- ROI = (112,500 - 160,000) / 160,000 = -0.297 (−29.7%)
- Payback:
- Year 1 savings = 37,500 (costs exceed benefits)
- Year 2 cumulative savings = 75,000 (below cumulative costs of 135,000)
- Year 3 cumulative savings = 112,500 (below total costs of 160,000)
- Payback occurs in Year 5 if savings remain constant (160,000 ÷ 37,500 ≈ 4.3 years).
- Extended Horizon Sensitivity:
- 5-year savings: $187,500 → ROI = (187,500 - 160,000) / 160,000 = 17.2%
- 10-year savings: $375,000 → ROI = 134.4%
- 5-year savings: $187,500 → ROI =
- QII:
- Access (tick 3 statements) = score 4
- Continuity (tick 2 statements) = score 3
- Acceptance (tick 1 statement) = score 2
- Staff wellbeing (tick 2 statements) = score 3
- Weighted QII = 0.25 × (4 + 3 + 2 + 3) = 3.0.
9. Edge Cases and Data Quality
- Zero incidents recorded: When users enter $0 historical costs, the calculator prompts them to (a) select a scenario preset or (b) manually enter a proxy baseline. If they proceed with $0, the tool sets EAL_baseline = 0, Savings = 0, and ROI = -100% to avoid division errors, while displaying a red “Data required” badge.
- One-off catastrophic year: Users can apply either a three-year rolling average or a manual cap. When EAL_capped is supplied, the calculator records EAL_baseline = MIN(avg_3yr, EAL_capped) and logs the override note in the audit trail. Dashboard badges flag capped baselines in amber.
- Rapid scaling (exposure change): Provide projected and historical exposure values (e.g., staff-days). The tool applies EAL_adjusted = EAL_baseline × (Projected / Historical) and stores the ratio for audit. Sensitivity output shows both original and adjusted baselines.
- Low confidence in reduction rate: Selecting the “Low confidence” toggle displays ±10% scenario bands next to the main result and annotates the export. This ensures decision-makers see conservative and optimistic variants.
- Switching to Advanced Mode: The calculator sums incident-level Σ(ARO_i × SLE_i) and compares it with the baseline. Variance beyond ±10% surfaces a warning requiring users to reconcile assumptions before proceeding.
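The sketch below illustrates how these edge-case rules could be combined when resolving a baseline; the field and function names are assumptions for illustration.

```typescript
// Sketch of the Section 9 edge-case rules: a $0 baseline yields a "Data
// required" flag; a manual cap takes the minimum of the 3-year average and
// the cap; exposure scaling multiplies the baseline. Names are illustrative.
interface BaselineOptions {
  threeYearAverage: number;
  manualCap?: number;         // optional EAL_capped override
  projectedExposure?: number; // e.g. projected staff-days
  historicalExposure?: number;
}

export function resolveBaseline(opts: BaselineOptions): { eal: number; flags: string[] } {
  const flags: string[] = [];
  let eal = opts.threeYearAverage;

  if (eal === 0) flags.push("Data required: baseline is $0; select a preset or proxy.");
  if (opts.manualCap !== undefined) {
    eal = Math.min(eal, opts.manualCap);
    flags.push("Baseline capped; override logged in audit trail.");
  }
  if (opts.projectedExposure && opts.historicalExposure) {
    eal *= opts.projectedExposure / opts.historicalExposure;
    flags.push("Exposure adjustment applied.");
  }
  return { eal, flags };
}
```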
10. Assumptions, Limitations, and Future Refinement
- Historical Stability: Assumes incident patterns in the near future resemble recent history unless exposure changes are documented.
- Single Reduction Rate: Applies one rate across all incident types; further granularity is available only in Advanced Mode.
- Conservative Estimates: Reduction ranges are intentionally modest; Stage 5 pilots will validate and adjust them.
- Zero Discounting: Prioritises ethical parity over the financial cost of capital. Organisations needing discounting can apply it externally.
- Qualitative-Quantitative Separation: Prevents double counting but requires disciplined storytelling to integrate both dimensions.
11. Validation Roadmap
- Scenario Presets: Validate against pilot NGO incident datasets. Adjust baseline values if variance exceeds ±30%; publish updated ranges in Methods Note v2.2.
- Reduction Rates: Conduct rapid expert elicitation (5–10 SRM managers) and reconcile with pilot outcomes. Update table bands if consensus differs by >10 percentage points.
- Qualitative Framework: Gather user feedback on checklist clarity and scoring consistency; refine prompts and add sector-specific examples as needed.
- Calculator Outputs: Compare results with manual spreadsheet replications during pilot debriefs to confirm formula accuracy.
12. References
- ISO 31000:2018 – Risk Management Guidelines.
- Ramsey, F. P. (1928). A Mathematical Theory of Saving. Economic Journal, 38(152).
- Stern Review (2006). The Economics of Climate Change. HM Treasury.
- Hubbard, D. (2009). The Failure of Risk Management. Wiley.
- Taleb, N. (2007). The Black Swan. Random House.
- GISF (2025). Call for Proposals: Development of an ROI Tool for NGO Security Risk Management.
- Humanitarian Outcomes (2023). Aid Worker Security Report.
- CDA Collaborative Learning Projects (2022). Community Engagement for Security Access.
- ALNAP (2021). Security Risk Management in Practice.
- Nicholls, J., Lawlor, E., Neitzert, E., & Goodspeed, T. (2012). A Guide to Social Return on Investment. Social Value UK.
- OECD (2019). Better Criteria for Better Evaluation: Revised Evaluation Criteria Definitions and Principles for Use. OECD Publishing.
Document Control
Version: 2.1.0
Status: Draft for RFQ Submission
Date: 2025-10-17
Next Review: 2026-01-10 (quarterly review cycle)
Change Log
| Date | Version | Changes | Author |
|---|---|---|---|
| 2025-10-17 | 2.1.0 | Hybrid rewrite combining historical-cost workflow with full qualitative and parameter documentation; restored decision guide, SLE severity helper, and validation roadmap. | Project Team |
| 2025-10-17 | 2.0.0 | Historical-cost baseline prototype with simplified documentation (superseded by v2.1.0). | Project Team |
| 2025-10-15 | 1.4.2 | Clarified Staff Wellbeing as required core dimension, aligned qualitative guidance with implemented features, and linked SLE severity helper to Data Schema context fields. | Project Team |
| 2025-10-14 | 1.4.1 | Stage 1 alignment: refreshed documentation, validation fixtures, and application UI to reflect checklist scoring plus evidence notes. | Project Team |
| 2025-10-10 | 1.0 | Initial release – comprehensive methodological framework. | Project Team |
Approvals
- ✅ Technical Lead – Formula accuracy verified
- ⏳ Product Owner – RFQ Stage 1 requirements fulfilled (pending review)
- ⏳ GISF Stakeholder – Methodological transparency confirmed (pending review)
Footnotes
- International Organization for Standardization (2018), ISO 31000: Risk Management — Guidelines. ↩
- Frank P. Ramsey (1928), “A Mathematical Theory of Saving,” The Economic Journal, 38(152), pp. 543–559. ↩
- Nicholas Stern (2006), The Economics of Climate Change: The Stern Review, HM Treasury. ↩
- Nicholls, J., Lawlor, E., Neitzert, E., & Goodspeed, T. (2012), A Guide to Social Return on Investment. Social Value UK. ↩
- OECD (2019), Better Criteria for Better Evaluation: Revised Evaluation Criteria Definitions and Principles for Use. ↩