SIM4Action

Interactive platform for social-environmental systems mapping and causal analysis

Explore, intervene, and monitor complex system dynamics

Platform Guide & Systems Thinking Primer

The Complexity of Socio-Environmental Systems

Real-world systems are not simple chains of cause and effect. They are webs of interconnected factors where actions ripple in unexpected ways. Understanding this complexity is the first step toward effective management and intervention.


Interconnectedness

Socio-environmental systems span ecological, economic, social, and governance dimensions. A change in one area cascades through many others in ways that are hard to predict.


Feedback Loops

Effects can circle back to amplify the original cause (reinforcing loops) or counteract it (balancing loops). These loops are the engines of system behaviour.

Time Delays

Causes and effects are often separated by days, months, or even years. These delays make it difficult to connect actions with their consequences and can lead to over- or under-reaction.


Emergence

System-level behaviours (collapses, regime shifts, resilience) emerge from the interactions among components. They cannot be understood by studying each part in isolation.

Why does this matter? Traditional approaches address problems one at a time. But in complex systems, solving one problem can create or worsen others. Systems thinking provides the tools to see the whole picture, identify leverage points, and design interventions that account for side effects and feedback.

What Is a Causal System Map?

A causal system map (also called a causal loop diagram) is a visual representation of how things influence each other within a system. It captures the cause-and-effect relationships that drive system behaviour, making hidden connections visible and explicit.

The map has two core building blocks:

  • Variables (nodes) — the factors, conditions, or quantities that can change over time. These are the things you can observe, measure, or perceive in the system.
  • Causal relationships (edges) — directed arrows connecting one variable to another, meaning “a change in A causes a change in B.” Each relationship has properties:
    • Direction — the arrow shows which variable influences which.
    • Polarity — same direction (+): both increase or both decrease together; opposite direction (−): when one increases, the other decreases.
    • Strength — how strong the influence is (strong, medium, or weak).
    • Time delay — how long it takes for the effect to materialise (days, months, or years).
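These building blocks map naturally onto a small data model. Here is a minimal sketch in Python (illustrative only — SIM4Action's internal representation is not documented here, and the strength and delay values below are invented for the example):

```python
from dataclasses import dataclass

@dataclass
class Variable:
    """A factor in the system that can change over time."""
    name: str
    domain: str = "General"  # e.g. Environmental, Economics, Social

@dataclass
class Relationship:
    """A directed causal link: a change in `source` causes a change in `target`."""
    source: str
    target: str
    polarity: str   # "same" (+) or "opposite" (-)
    strength: str   # "strong", "medium", or "weak"
    delay: str      # "days", "months", or "years"

# The minimal four-variable example from the text
# (strengths and delays here are invented for illustration)
variables = [Variable("Population Size"), Variable("Resource Demand"),
             Variable("Resource Availability"), Variable("Management Policy")]
relationships = [
    Relationship("Population Size", "Resource Demand", "same", "strong", "years"),
    Relationship("Resource Demand", "Resource Availability", "opposite", "strong", "months"),
    Relationship("Resource Availability", "Population Size", "same", "medium", "years"),
    Relationship("Management Policy", "Resource Demand", "opposite", "medium", "months"),
]
```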

Here is a minimal example — a generic system with just four variables and the relationships between them:

```mermaid
graph LR
  A["Population Size"] -->|"same"| B["Resource Demand"]
  B -->|"opposite"| C["Resource Availability"]
  C -->|"same"| A
  D["Management Policy"] -->|"opposite"| B
```

A minimal causal system map. Population Size drives Resource Demand (same direction: more people, more demand). Higher demand reduces Resource Availability (opposite direction). Availability feeds back to support Population Size (same direction), completing a balancing loop. Management Policy acts as an external lever that can reduce demand.

Even this simple map reveals important dynamics:

  • A balancing feedback loop (Population → Demand → Availability → Population): as the population grows, it consumes resources, which eventually constrains further growth. The system self-regulates.
  • A leverage point (Management Policy): by adjusting policy, you can reduce demand without waiting for resource depletion to do it naturally — a proactive intervention.
  • Non-obvious dependencies: improving resource availability alone won’t help if population growth continues to outpace supply. The map shows you need to address demand as well.

From simple to complex. Real-world system maps typically contain dozens or even hundreds of variables and relationships spanning ecological, economic, social, and governance dimensions. SIM4Action provides the computational tools to analyse these complex maps — the same principles shown here (variables, relationships, feedback loops, leverage points) apply at any scale.

Causal Complexity: An Illustrated Example

Consider a simplified fishery system. It is tempting to think of it as a simple chain: fewer regulations lead to more fishing, which leads to less fish stock. But the reality is far more complex:

```mermaid
graph LR
  A["Fish Stock"] -->|"same"| B["Catch Volume"]
  B -->|"same"| C["Market Revenue"]
  C -->|"same"| D["Fishing Effort"]
  D -->|"opposite"| A
  A -->|"same"| E["Ecosystem Health"]
  E -->|"same"| A
  C -->|"same"| F["Fleet Investment"]
  F -->|"same"| D
  G["Harvest Quota"] -->|"opposite"| D
```

A simplified fishery causal loop diagram. Green arrows represent same-direction relationships; the red arrow from Fishing Effort to Fish Stock represents an opposite-direction relationship. Notice the reinforcing loop (Fish Stock → Ecosystem Health → Fish Stock) and the balancing loop (Fish Stock → Catch → Revenue → Effort → Fish Stock).

In this diagram, reducing the Harvest Quota doesn't just reduce catch — it affects market revenue, fleet investment, fishing effort, and ultimately fish stock recovery, which feeds back into ecosystem health. A systems approach reveals these cascading effects before interventions are implemented.

Participatory Systems Mapping

Participatory systems mapping brings diverse stakeholders together — scientists, policymakers, community members, indigenous groups — to co-create a shared model of how their system works.

  • Identify factors: Participants name the key variables (ecological, economic, social, governance) that drive system behaviour.
  • Define relationships: For each pair of factors, participants describe whether and how they are causally linked, including the direction (same or opposite), strength, and time delay.
  • Build the map: The resulting network of factors and relationships becomes a shared “mental model” that integrates diverse perspectives.
  • Analyse and act: The platform provides computational tools to explore the map, test interventions, and design monitoring programs.

This approach ensures that local knowledge, scientific evidence, and governance realities are all represented in the same model — giving every voice a place in the system map.

The SIM4Action Workflow: Three Steps

SIM4Action guides you through three analytical stages, each supported by a dedicated lab in the platform. This workflow is inherently iterative, mirroring the principles of adaptive management:

```mermaid
graph LR
  U["1. Understand<br>Diagnostics Lab"] --> I["2. Intervene<br>Intervention Lab"]
  I --> M["3. Monitor<br>Monitoring Lab"]
  M -->|"learn, adapt, repeat"| U
```

The SIM4Action analytical cycle aligns with adaptive management: understand the system, design and test interventions, monitor outcomes, and use what you learn to refine your understanding. Each iteration deepens insight and improves management decisions.

  • Step 1 — Diagnostics Lab: Explore the system structure. Identify feedback loops, clusters, and key structural features.
  • Step 2 — Intervention Lab: Simulate changes using causal diffusion (probabilistic or deterministic, forward or backward). Trace how interventions propagate through causal chains, perform root-cause analysis, and optimise resource allocation with genetic algorithms.
  • Step 3 — Monitoring Lab: Use centrality analysis to identify the most influential variables and design monitoring programs.

Alignment with Adaptive Management: Adaptive management recognises that our understanding of complex socio-environmental systems is always incomplete. Rather than designing a single “optimal” plan, it prescribes a structured cycle of plan → act → monitor → learn → adjust. SIM4Action operationalises this cycle: the Diagnostics Lab builds understanding (plan), the Intervention Lab tests management strategies (act), and the Monitoring Lab designs the feedback mechanisms (monitor) that close the learning loop. Each time you return to Step 1 with new monitoring data, the system map can be updated, interventions re-evaluated, and monitoring priorities refined — creating a continuous improvement process grounded in evidence and stakeholder knowledge.

Step 1 — Understand the System

Diagnostics Lab — Explore network structure, feedback loops, and system properties

The Diagnostics Lab provides tools to explore and understand the structure of a causal systems map. By visualizing the network and its properties, you can identify key structural features that drive system behaviour.

Network Visualization

The system map is visualized as a directed network graph using D3.js force-directed layout. Each element encodes information:

  • Nodes (circles) represent system factors (variables). They are colour-coded by domain (e.g., Environmental, Economics, Management).
  • Edges (arrows) represent causal relationships between factors. Arrow direction shows the direction of causal influence.
  • Edge colour indicates polarity: green for same-direction, red for opposite-direction relationships.
  • Edge thickness indicates strength: thick lines for strong relationships, thin for weak.

Illustrative example: Below is a synthetic coastal fishery system with 9 factors across 4 domains. Notice how factors from different domains are interconnected through causal relationships:

```mermaid
graph TD
  SST["Sea Surface Temperature"] -->|"opposite / medium"| FS["Fish Stock"]
  WQ["Water Quality"] -->|"same / strong"| FS
  FS -->|"same / strong"| CV["Catch Volume"]
  CV -->|"same / strong"| MP["Market Price"]
  MP -->|"same / medium"| FE["Fishing Effort"]
  FE -->|"opposite / strong"| FS
  HQ["Harvest Quota"] -->|"opposite / strong"| FE
  SP["Spatial Protection"] -->|"same / medium"| FS
  SP -->|"same / weak"| WQ
  CV -->|"same / medium"| CW["Community Wellbeing"]
  MP -->|"same / medium"| CW
  style SST fill:#1a4a2a,stroke:#228B22,color:#e0e6ed
  style WQ fill:#1a4a2a,stroke:#228B22,color:#e0e6ed
  style FS fill:#1a4a2a,stroke:#228B22,color:#e0e6ed
  style MP fill:#142a4a,stroke:#1E90FF,color:#e0e6ed
  style CV fill:#142a4a,stroke:#1E90FF,color:#e0e6ed
  style HQ fill:#3a2a0a,stroke:#FF8C00,color:#e0e6ed
  style SP fill:#3a2a0a,stroke:#FF8C00,color:#e0e6ed
  style FE fill:#2a1a3a,stroke:#CC99FF,color:#e0e6ed
  style CW fill:#2a1a3a,stroke:#CC99FF,color:#e0e6ed
  linkStyle 0 stroke:#ef5350,stroke-width:3px
  linkStyle 5 stroke:#ef5350,stroke-width:6px
  linkStyle 6 stroke:#ef5350,stroke-width:6px
  linkStyle 1 stroke:#66bb6a,stroke-width:6px
  linkStyle 2 stroke:#66bb6a,stroke-width:6px
  linkStyle 3 stroke:#66bb6a,stroke-width:6px
  linkStyle 4 stroke:#66bb6a,stroke-width:3px
  linkStyle 7 stroke:#66bb6a,stroke-width:3px
  linkStyle 8 stroke:#66bb6a,stroke-width:1.5px
  linkStyle 9 stroke:#66bb6a,stroke-width:3px
  linkStyle 10 stroke:#66bb6a,stroke-width:3px
```

A synthetic coastal fishery system with 9 factors across 4 domains. Node colours indicate domain: green = Environmental, blue = Economics, orange = Management, purple = Social. Edge colours indicate polarity: green = same-direction (+), red = opposite-direction (−). Edge thickness reflects strength: thick = strong, medium = medium, thin = weak. Edge labels show polarity and strength explicitly.


You can filter the network by domain, relationship type, strength, and temporal scale to focus on specific aspects of the system.

Relationship Properties: Polarity, Strength & Delay

Each causal relationship in the map has three key properties:

Polarity describes how the source factor affects the target:

Same Direction (+)

When the source increases, the target also increases (and vice versa). Example: more fishing effort → more catch.

Opposite Direction (−)

When the source increases, the target decreases. Example: more fishing effort → less fish stock.

Strength (strong, medium, weak) indicates the magnitude of the causal influence. In simulations, stronger relationships attract more token flow.

Temporal delay (days, months, years) captures how long it takes for the effect to materialise. A policy change may take years to affect fish populations, but days to affect market prices.

Feedback Loops: Reinforcing & Balancing

Feedback loops are closed causal chains where the effect of a factor eventually circles back to influence itself. The platform detects all feedback loops using depth-first search and classifies them by polarity:

Reinforcing Loops (R)

The product of all edge polarities is positive. These loops amplify change — they drive exponential growth or decline. Example: more fish → healthier ecosystem → even more fish (or, in decline, fewer fish → degraded ecosystem → even fewer fish: a vicious cycle).

Balancing Loops (B)

The product of all edge polarities is negative. These loops resist change and drive the system toward equilibrium. Example: Fewer fish → less catch → lower revenue → less effort → fish recovery.

The Diagnostics Lab lets you find all loops, filter by type (reinforcing/balancing), and visualize each loop on the network.
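Loop detection and classification can be sketched in a few lines of Python. This is a simplified illustration of the approach described above (DFS cycle enumeration plus a polarity product), not the platform's actual code; the edges encode the simplified fishery map:

```python
# Simplified fishery map: edge polarities (+1 same, -1 opposite)
POL = {
    ("Fish Stock", "Catch Volume"): +1,
    ("Catch Volume", "Market Revenue"): +1,
    ("Market Revenue", "Fishing Effort"): +1,
    ("Market Revenue", "Fleet Investment"): +1,
    ("Fleet Investment", "Fishing Effort"): +1,
    ("Fishing Effort", "Fish Stock"): -1,
    ("Fish Stock", "Ecosystem Health"): +1,
    ("Ecosystem Health", "Fish Stock"): +1,
    ("Harvest Quota", "Fishing Effort"): -1,
}
adj = {}
for a, b in POL:
    adj.setdefault(a, []).append(b)

def find_loops(adj):
    """DFS enumeration of simple cycles; each loop is reported once,
    rooted at its alphabetically smallest node."""
    loops = []
    def dfs(start, node, path, seen):
        for nxt in adj.get(node, ()):
            if nxt == start:
                loops.append(path[:])
            elif nxt not in seen and nxt > start:
                seen.add(nxt)
                dfs(start, nxt, path + [nxt], seen)
                seen.remove(nxt)
    for start in sorted(adj):
        dfs(start, start, [start], {start})
    return loops

def classify(loop):
    """Product of edge polarities around the loop:
    positive -> reinforcing (R), negative -> balancing (B)."""
    sign = 1
    for a, b in zip(loop, loop[1:] + loop[:1]):
        sign *= POL[(a, b)]
    return "R" if sign > 0 else "B"

for loop in find_loops(adj):
    print(classify(loop), " -> ".join(loop))
```

Run on this map, the sketch finds one reinforcing loop (Fish Stock ↔ Ecosystem Health) and two balancing loops through catch, revenue, and effort — matching the diagram's caption.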

Cluster Detection

The platform uses community detection algorithms (Louvain and Girvan-Newman) to identify clusters of tightly connected factors. These clusters often correspond to subsystems or thematic groups:

  • Louvain algorithm: Optimises modularity to find groups of nodes with dense internal connections and sparse connections between groups.
  • Girvan-Newman algorithm: Iteratively removes the edges with the highest betweenness centrality to reveal natural community boundaries.
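As a sketch, both algorithms are available in the networkx Python library (assuming networkx is installed; the platform's own implementation may differ). Here they run on the undirected skeleton of the 9-factor fishery map:

```python
import networkx as nx
from networkx.algorithms import community

# The 9-factor synthetic fishery map (edges only; polarity ignored for clustering)
edges = [
    ("Sea Surface Temp.", "Fish Stock"), ("Water Quality", "Fish Stock"),
    ("Fish Stock", "Catch Volume"), ("Catch Volume", "Market Price"),
    ("Market Price", "Fishing Effort"), ("Fishing Effort", "Fish Stock"),
    ("Harvest Quota", "Fishing Effort"), ("Spatial Protection", "Fish Stock"),
    ("Spatial Protection", "Water Quality"), ("Catch Volume", "Community Wellbeing"),
    ("Market Price", "Community Wellbeing"),
]
G = nx.Graph(edges)  # community detection typically runs on the undirected skeleton

# Louvain: greedy modularity optimisation
louvain = community.louvain_communities(G, seed=42)

# Girvan-Newman: repeatedly remove the highest-betweenness edge;
# the first yield is the split into two communities
gn_two = next(community.girvan_newman(G))

print("Louvain:", [sorted(c) for c in louvain])
print("Girvan-Newman (2 groups):", [sorted(c) for c in gn_two])
```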

Illustrative example: Using our synthetic fishery system, a community detection algorithm might identify two clusters:

```mermaid
graph LR
  subgraph clusterA [Cluster A: Ecological-Harvest]
    SST2["Sea Surface Temp."]
    WQ2["Water Quality"]
    FS2["Fish Stock"]
    CV2["Catch Volume"]
    FE2["Fishing Effort"]
  end
  subgraph clusterB [Cluster B: Socio-Economic]
    MP2["Market Price"]
    CW2["Community Wellbeing"]
    HQ2["Harvest Quota"]
    SP2["Spatial Protection"]
  end
  SST2 --> FS2
  WQ2 --> FS2
  FS2 --> CV2
  FE2 --> FS2
  CV2 -->|"bridge"| MP2
  MP2 -->|"bridge"| FE2
  MP2 --> CW2
  CV2 --> CW2
  HQ2 --> FE2
  SP2 --> FS2
  SP2 --> WQ2
  style SST2 fill:#1a4a2a,stroke:#4CAF50,color:#e0e6ed
  style WQ2 fill:#1a4a2a,stroke:#4CAF50,color:#e0e6ed
  style FS2 fill:#1a4a2a,stroke:#4CAF50,color:#e0e6ed
  style CV2 fill:#1a4a2a,stroke:#4CAF50,color:#e0e6ed
  style FE2 fill:#1a4a2a,stroke:#4CAF50,color:#e0e6ed
  style MP2 fill:#0d2a4a,stroke:#1E88E5,color:#e0e6ed
  style CW2 fill:#0d2a4a,stroke:#1E88E5,color:#e0e6ed
  style HQ2 fill:#0d2a4a,stroke:#1E88E5,color:#e0e6ed
  style SP2 fill:#0d2a4a,stroke:#1E88E5,color:#e0e6ed
  linkStyle 4 stroke:#ffa726,stroke-width:3px
  linkStyle 5 stroke:#ffa726,stroke-width:3px
```

Cluster detection reveals two tightly coupled subsystems. Cluster A (green) groups ecological and harvest variables with dense internal connections. Cluster B (blue) groups socio-economic and management variables. The bridge edges (orange, thick) — Catch Volume → Market Price and Market Price → Fishing Effort — are the critical inter-cluster links through which changes propagate between subsystems.


Cluster detection helps identify which parts of the system are most tightly coupled and where natural boundaries exist between subsystems. The bridge edges between clusters are especially important — they are the pathways through which changes propagate from one subsystem to another.

Step 2 — Design Interventions

Intervention Lab — Simulate how changes propagate through the causal network

The Intervention Lab uses causal diffusion simulation to model how interventions spread through the causal network. It offers two complementary algorithms — probabilistic (stochastic random-walk tokens) and deterministic (proportional flow splitting) — in both forward (cause→effect) and backward (effect→cause) directions. An integrated genetic algorithm optimizer can automatically discover the best allocation of intervention resources to maximise impact on a target variable.

The Token Diffusion Metaphor

Imagine dropping a pebble into a pond. The ripples spread outward, interacting with obstacles and reflecting back. Causal diffusion works similarly: you introduce a change at one or more nodes, and watch how the effect ripples through the network. The platform offers two complementary simulation algorithms:

Probabilistic mode (agent-based): Each token is an autonomous agent that:

  • Carries a charge (+1 for positive change, −1 for negative change)
  • Travels along outgoing edges, choosing its path via weighted random selection based on edge strength (stronger edges attract more tokens)
  • Flips its charge when traversing an opposite-direction edge (a positive change becomes a negative one)
  • Respects temporal delays — slow relationships take more simulation steps to traverse
  • Accumulates at dead ends (nodes with no outgoing edges)

Deterministic mode (flow-based): Instead of individual tokens making random choices, a continuous flow of causal influence:

  • Starts as a specified amount at each injection node
  • Splits proportionally across outgoing edges based on normalised edge strength — no randomness involved
  • Obeys the same polarity-flipping and delay rules as probabilistic tokens
  • Produces identical results every run for the same inputs, making it ideal for precise comparison and optimisation

When to use which? Probabilistic mode captures the inherent uncertainty in complex systems — run it many times (ensemble mode) to build confidence intervals. Deterministic mode gives a clean, repeatable “expected value” signal — ideal for single-scenario exploration, backward analysis, and optimisation.

```mermaid
graph TD
  HQ["Harvest Quota<br>INJECT +5"] -->|"opposite / strong"| FE["Fishing Effort<br>receives −5"]
  FE -->|"opposite / strong"| FS["Fish Stock<br>receives +5"]
  FS -->|"same / strong"| CV["Catch Volume<br>receives +5"]
  CV -->|"same / strong · 60%"| MP["Market Price<br>receives ~+3"]
  CV -->|"same / medium · 40%"| CW["Community Wellbeing<br>receives ~+2"]
  MP -->|"same / medium"| CW
  style HQ fill:#3a2a0a,stroke:#FF8C00,color:#e0e6ed
  style FE fill:#3a1a1a,stroke:#ef5350,color:#e0e6ed
  style FS fill:#1a4a2a,stroke:#66bb6a,color:#e0e6ed
  style CV fill:#1a4a2a,stroke:#66bb6a,color:#e0e6ed
  style MP fill:#1a4a2a,stroke:#66bb6a,color:#e0e6ed
  style CW fill:#1a4a2a,stroke:#66bb6a,color:#e0e6ed
  linkStyle 0 stroke:#ef5350,stroke-width:6px
  linkStyle 1 stroke:#ef5350,stroke-width:6px
  linkStyle 2 stroke:#66bb6a,stroke-width:6px
  linkStyle 3 stroke:#66bb6a,stroke-width:6px
  linkStyle 4 stroke:#66bb6a,stroke-width:3px
  linkStyle 5 stroke:#66bb6a,stroke-width:3px
```

Token diffusion illustrated. This diagram traces the propagation path only. 5 positive tokens are injected at Harvest Quota. On red opposite-direction edges the charge flips: +5 → −5 at Fishing Effort, then −5 → +5 at Fish Stock (the “double flip”). On green same-direction edges the charge is preserved. At Catch Volume, tokens split probabilistically: 60% to Market Price (strong, wt = 3) and 40% to Community Wellbeing (medium, wt = 2). Node colours reflect the token charge: red border = negative, green border = positive, orange border = injection point.

How Propagation Works

At each simulation step, every active token:

  • Evaluates outgoing edges from its current node
  • Calculates routing probabilities based on normalised edge strength (strong = 3, medium = 2, weak = 1)
  • Chooses one edge via weighted random selection
  • Enters transit for a number of steps determined by the edge's delay (days = 5, months = 10, years = 20 steps)
  • Arrives at the target node, with charge potentially flipped if the edge has opposite polarity
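A single token's routing and traversal step can be sketched as follows (an illustrative sketch using the constants listed above; the function names are hypothetical, not platform APIs):

```python
import random

STRENGTH_WEIGHT = {"strong": 3, "medium": 2, "weak": 1}
DELAY_STEPS = {"days": 5, "months": 10, "years": 20}

def route_token(outgoing, rng):
    """Choose one outgoing edge by weighted random selection.
    Each edge is (target, polarity, strength, delay)."""
    weights = [STRENGTH_WEIGHT[e[2]] for e in outgoing]
    return rng.choices(outgoing, weights=weights, k=1)[0]

def traverse(charge, edge):
    """Return (new_charge, transit_steps) after traversing `edge`:
    opposite polarity flips the charge; delay sets the transit time."""
    target, polarity, strength, delay = edge
    new_charge = -charge if polarity == "opposite" else charge
    return new_charge, DELAY_STEPS[delay]

# Catch Volume's two outgoing edges from the walkthrough
outgoing = [
    ("Market Price", "same", "strong", "days"),        # weight 3 -> ~60%
    ("Community Wellbeing", "same", "medium", "days"),  # weight 2 -> ~40%
]
rng = random.Random(0)
picks = [route_token(outgoing, rng)[0] for _ in range(10_000)]
print("share to Market Price:", picks.count("Market Price") / len(picks))  # ~0.6
```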

Step-by-step walkthrough: Let’s trace what happens when we inject 10 positive tokens at Harvest Quota (simulating a quota increase) using the same synthetic coastal fishery system. The diagram below highlights the propagation path (bright nodes with step numbers) against the full network (dimmed nodes):

```mermaid
graph TD
  SST2["Sea Surface Temp."] -->|"opposite / medium"| FS2
  WQ2["Water Quality"] -->|"same / strong"| FS2
  SP2["Spatial Protection"] -->|"same / medium"| FS2
  SP2 -->|"same / weak"| WQ2
  HQ2["Step 0 · Harvest Quota<br>INJECT +10"] -->|"opposite / strong"| FE2["Step 1 · Fishing Effort<br>receives −10"]
  FE2 -->|"opposite / strong"| FS2["Step 2 · Fish Stock<br>receives +10"]
  FS2 -->|"same / strong"| CV2["Step 3 · Catch Volume<br>receives +10"]
  CV2 -->|"same / strong · 60%"| MP2["Step 4a · Market Price<br>receives ~+6"]
  MP2 -->|"same / medium"| FE2
  CV2 -->|"same / medium · 40%"| CW2["Step 4b · Community Wellbeing<br>receives ~+4"]
  MP2 -->|"same / medium"| CW2
  style SST2 fill:#1a2a3a,stroke:#555,color:#778
  style WQ2 fill:#1a2a3a,stroke:#555,color:#778
  style SP2 fill:#1a2a3a,stroke:#555,color:#778
  style HQ2 fill:#3a2a0a,stroke:#FF8C00,color:#e0e6ed
  style FE2 fill:#3a1a1a,stroke:#ef5350,color:#e0e6ed
  style FS2 fill:#1a4a2a,stroke:#66bb6a,color:#e0e6ed
  style CV2 fill:#1a4a2a,stroke:#66bb6a,color:#e0e6ed
  style MP2 fill:#1a4a2a,stroke:#66bb6a,color:#e0e6ed
  style CW2 fill:#1a4a2a,stroke:#66bb6a,color:#e0e6ed
  linkStyle 0 stroke:#555,stroke-width:1.5px
  linkStyle 1 stroke:#555,stroke-width:1.5px
  linkStyle 2 stroke:#555,stroke-width:1.5px
  linkStyle 3 stroke:#555,stroke-width:1.5px
  linkStyle 4 stroke:#ef5350,stroke-width:6px
  linkStyle 5 stroke:#ef5350,stroke-width:6px
  linkStyle 6 stroke:#66bb6a,stroke-width:6px
  linkStyle 7 stroke:#66bb6a,stroke-width:6px
  linkStyle 8 stroke:#555,stroke-width:1.5px,stroke-dasharray:5
  linkStyle 9 stroke:#66bb6a,stroke-width:3px
  linkStyle 10 stroke:#66bb6a,stroke-width:3px
```

Token propagation through the full fishery network. Bright nodes with step numbers show the propagation path from Harvest Quota; dimmed nodes (Sea Surface Temp., Water Quality, Spatial Protection) are part of the system but not on this intervention path. Red thick edges = opposite-direction path (charge flips). Green thick edges = same-direction path (charge preserved). Grey thin edges = non-path connections. The dashed grey edge (Market Price → Fishing Effort) is the feedback loop that would carry tokens in subsequent rounds. The table below details each step.

| Step | Location | Charge | Event |
|---|---|---|---|
| 0 | Harvest Quota | +10 | 10 positive tokens injected (simulating a quota increase). Only one outgoing edge: Harvest Quota → Fishing Effort (opposite / strong). |
| 1–5 | In transit | +10 in transit | Tokens travel along the edge. Delay = “days” = 5 simulation steps. |
| 6 | Fishing Effort | −10 | Tokens arrive. Edge is opposite, so charge flips: +10 → −10. Interpretation: quota increase → effort decreases. |
| 6 | Fishing Effort | −10 routing | Fishing Effort has one outgoing edge: Fish Stock (opposite / strong). All 10 tokens route to Fish Stock. (Market Price → Fishing Effort is an incoming edge, not outgoing.) |
| 7–16 | In transit | −10 in transit | Tokens travel Fishing Effort → Fish Stock. Delay = “months” = 10 steps. |
| 17 | Fish Stock | +10 | Tokens arrive. Edge is opposite, so charge flips again: −10 → +10. The “double flip”: less effort → stock recovers. |
| 17 | Fish Stock | +10 routing | Fish Stock has one outgoing edge: Catch Volume (same / strong). All 10 tokens route to Catch Volume. |
| 18–27 | In transit | +10 in transit | Tokens travel Fish Stock → Catch Volume (same / strong, months delay). |
| 28 | Catch Volume | +10 | Tokens arrive. Edge is same, charge preserved: +10. More fish → more catch. |
| 28 | Catch Volume | +10 routing | Catch Volume has 2 outgoing edges: Market Price (strong, wt=3, 60%) and Community Wellbeing (medium, wt=2, 40%). ~6 tokens → Market Price, ~4 → Wellbeing. |
| 29+ | Market Price / Wellbeing | +6 / +4 | Both edges are same: charge preserved. More catch → higher prices, greater wellbeing. Market Price tokens may continue to Fishing Effort via the feedback loop. |

Key insight: Two consecutive opposite-direction edges produce a net positive effect (the “double flip”). In this map, increasing the Harvest Quota suppresses fishing effort (via the opposite-direction edge), which ultimately lets the fish stock recover — the causal chain passes through two “opposite” relationships. The simulation also reveals temporal dynamics: the effect on Fish Stock takes ~17 steps (months), while downstream effects on Catch Volume and Market Price take even longer — making trade-offs across time visible to decision-makers.
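The double-flip rule generalises: the net effect of any causal chain is the product of its edge polarities. A minimal sketch:

```python
def path_sign(polarities):
    """Net effect of a causal chain: the product of its edge polarities.
    An even number of 'opposite' edges preserves the sign of the change."""
    sign = 1
    for p in polarities:
        sign *= -1 if p == "opposite" else 1
    return sign

# Harvest Quota -> Fishing Effort (opposite) -> Fish Stock (opposite)
assert path_sign(["opposite", "opposite"]) == +1   # the "double flip"
# ... -> Fish Stock -> Catch Volume (same): the chain stays positive
assert path_sign(["opposite", "opposite", "same"]) == +1
```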

The simulation tracks node flows (accumulated positive and negative tokens at each node) and edge flows (tokens currently traversing each edge) at every time step, producing time-series data you can chart and analyse.

Scenario Mode vs. Ensemble Mode

The Intervention Lab offers two modes of analysis, each with configurable algorithm settings:

Scenario Mode

Run a single simulation with step-by-step control. Watch tokens or flow propagate in real time. Includes play/pause, step-by-step advancement, and speed control. You can choose:

  • Algorithm: Probabilistic (stochastic tokens) or Deterministic (proportional flow)
  • Direction: Forward (cause→effect) or Backward (effect→cause)

Ideal for exploring and understanding how a specific intervention ripples through the system, or tracing backward to discover root causes.

Ensemble Mode

Run 10–1000 simulations with different random seeds to produce statistical distributions of outcomes. Uses the probabilistic algorithm (the stochastic variation is what makes ensembles meaningful). Supports both forward and backward directions. Ideal for robust decision-making with confidence intervals.

Algorithm settings are configured in the “Algorithm Settings” panel within each mode. In Scenario mode, both algorithms and both directions are available. In Ensemble mode, the algorithm is locked to Probabilistic (deterministic runs produce identical outcomes, so ensembles would be redundant), but direction is selectable.
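The ensemble pattern can be sketched as follows. The `simulate_outcome` function below is a hypothetical stand-in for one probabilistic diffusion run (a toy model, not SIM4Action's engine); the aggregation pattern — many seeded runs, then a mean and percentile bounds — is the point:

```python
import random
import statistics

def simulate_outcome(seed):
    """Stand-in for one probabilistic diffusion run: returns the net token
    accumulation at a target node (hypothetical toy model)."""
    rng = random.Random(seed)
    # e.g. each of 10 injected tokens reaches the target with ~60% probability
    return sum(1 for _ in range(10) if rng.random() < 0.6)

# Ensemble: many runs with different random seeds
runs = [simulate_outcome(seed) for seed in range(500)]
mean = statistics.mean(runs)
ordered = sorted(runs)
lo, hi = ordered[int(0.05 * len(runs))], ordered[int(0.95 * len(runs))]
print(f"mean accumulation: {mean:.2f}, 90% interval: [{lo}, {hi}]")
```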

Interpreting Results

Simulation results include:

  • Time-series charts: Show how token accumulation at each node changes over time, revealing which factors are most affected and when.
  • Node flow maps: Colour-code the network by accumulated positive/negative tokens, showing the spatial pattern of effects.
  • Edge flow visualisation: Show which causal pathways carry the most token traffic.
  • Controllability gauge: Measures how much of the system can be influenced from the chosen intervention point.

Illustrative examples from the Harvest Quota +10 token simulation:

Top Positive Accumulations
| Rank | Variable | Net Tokens | Arrival Step |
|---|---|---|---|
| 1 | Fish Stock | +10 | 17 |
| 2 | Catch Volume | +10 | 28 |
| 3 | Market Price | +6 | 29+ |
| 4 | Community Wellbeing | +4 | 29+ |
| – | Sea Surface Temp. | 0 | – |
| – | Water Quality | 0 | – |
| – | Spatial Protection | 0 | – |
Top Negative Accumulations
| Rank | Variable | Net Tokens | Arrival Step |
|---|---|---|---|
| 1 | Fishing Effort | −10 | 6 |

No other nodes received negative tokens in this simulation.

Token accumulation summary. The tables rank all nodes by net accumulated tokens after the Harvest Quota +10 simulation. Positive accumulations (left): Fish Stock and Catch Volume receive the strongest benefit (+10 each). After token splitting, Market Price receives +6 and Community Wellbeing +4. Three environmental/management nodes are unreached. Negative accumulations (right): only Fishing Effort receives a negative effect (−10 at step 6), caused by the opposite-direction edge from Harvest Quota. The arrival step column shows temporal ordering — upstream nodes are affected first.

Node Flow: Accumulated Tokens Over Simulation Steps
[Line chart: net accumulated token charge per node over simulation steps 0–35, for Fishing Effort, Fish Stock, Catch Volume, Market Price, and Wellbeing]

Node flow chart. Each line tracks the net accumulated token charge at a node over simulation steps. Fishing Effort drops to −10 at step 6 (opposite-direction edge flips charge). Fish Stock rises to +10 at step 17 (double flip). Catch Volume follows at step 28. Market Price and Community Wellbeing (dashed) arrive at step 29+ with +6 and +4 respectively. The staircase pattern reveals temporal ordering: upstream nodes are affected first, downstream nodes later.

Edge Flow: Token Traffic Per Edge Over Simulation Steps
[Line chart: tokens in transit per edge over simulation steps 0–35, for HQ → FE, FE → FS, FS → CV, CV → MP, and CV → CW]

Edge flow chart. Each line shows the number of tokens in transit on a given edge at each simulation step. HQ → FE carries all 10 tokens first (steps 1–5), then FE → FS (steps 7–16), then FS → CV (steps 18–27). After step 28, tokens split: CV → MP (~6 tokens) and CV → CW (~4 tokens). The pulse pattern shows tokens moving as a “wave front” through the network — each edge is active only during its transit window, then returns to zero as tokens arrive and move on.

Controllability from Harvest Quota
67%
6 of 9 nodes reached: Fishing Effort, Fish Stock, Catch Volume, Market Price, Community Wellbeing, Harvest Quota (self)
3 of 9 not reached: Sea Surface Temp., Water Quality, Spatial Protection

Controllability gauge. Measures what fraction of the system can be influenced from the chosen intervention point. Injecting tokens at Harvest Quota reaches 6 out of 9 nodes (67%). The three unreached nodes (environmental drivers and spatial protection) have no incoming path from Harvest Quota — they influence the system but cannot be controlled through quota adjustments. A higher controllability score means the intervention has broader system reach.
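Controllability as described here is reachability on the directed graph. A minimal sketch (hypothetical helper, not a platform API) reproducing the 67% figure for the synthetic fishery map:

```python
from collections import deque

# Directed edges of the synthetic fishery map
edges = [
    ("Sea Surface Temp.", "Fish Stock"), ("Water Quality", "Fish Stock"),
    ("Fish Stock", "Catch Volume"), ("Catch Volume", "Market Price"),
    ("Market Price", "Fishing Effort"), ("Fishing Effort", "Fish Stock"),
    ("Harvest Quota", "Fishing Effort"), ("Spatial Protection", "Fish Stock"),
    ("Spatial Protection", "Water Quality"), ("Catch Volume", "Community Wellbeing"),
    ("Market Price", "Community Wellbeing"),
]
nodes = {n for e in edges for n in e}
adj = {}
for a, b in edges:
    adj.setdefault(a, []).append(b)

def controllability(source):
    """Fraction of nodes reachable from `source` via directed BFS (source included)."""
    seen, queue = {source}, deque([source])
    while queue:
        for nxt in adj.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen) / len(nodes), seen

score, reached = controllability("Harvest Quota")
print(f"{score:.0%}")           # 67% -- 6 of 9 nodes
print(sorted(nodes - reached))  # the three unreached nodes
```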

These results help stakeholders compare intervention strategies, identify unintended side effects, and build consensus around preferred approaches.

Probabilistic vs. Deterministic Diffusion

The two algorithms model causal propagation at different levels of abstraction. Both respect the same network topology, polarities, strengths, and delays — they differ only in how influence is routed at branch points.

| Property | Probabilistic (Agent-Based) | Deterministic (Flow-Based) |
|---|---|---|
| Unit of influence | Discrete tokens (integer agents) | Continuous flow (fractional values) |
| Routing at branch points | Each token makes a weighted random choice among outgoing edges | Flow splits proportionally by normalised edge strength — strong (1.0), medium (0.6), weak (0.3) |
| Repeatability | Stochastic — each run produces slightly different results | Deterministic — identical results every run |
| Ensemble suitability | Ideal — variation across runs produces meaningful distributions | Not applicable — every run is identical, so ensembles add no information |
| Optimization suitability | Possible but noisy fitness landscape | Ideal — smooth, repeatable fitness landscape for genetic algorithm search |
| Best for | Ensemble analysis, Monte Carlo confidence intervals, capturing system uncertainty | Single-scenario exploration, backward root-cause analysis, GA optimization, precise comparisons |
```mermaid
graph LR
  CV3["Catch Volume<br>flow = 10.0"] -->|"strong (1.0)<br>53%"| MP3["Market Price<br>receives 5.3"]
  CV3 -->|"medium (0.6)<br>32%"| CW3["Community Wellbeing<br>receives 3.2"]
  CV3 -->|"weak (0.3)<br>16%"| PR3["Public Revenue<br>receives 1.6"]
  style CV3 fill:#3a2a0a,stroke:#FF8C00,color:#e0e6ed
  style MP3 fill:#1a3a2a,stroke:#66bb6a,color:#e0e6ed
  style CW3 fill:#1a3a2a,stroke:#66bb6a,color:#e0e6ed
  style PR3 fill:#1a3a2a,stroke:#66bb6a,color:#e0e6ed
  linkStyle 0 stroke:#66bb6a,stroke-width:6px
  linkStyle 1 stroke:#66bb6a,stroke-width:4px
  linkStyle 2 stroke:#66bb6a,stroke-width:2px
```

Deterministic flow splitting. When 10.0 units of flow arrive at Catch Volume, the deterministic algorithm splits them proportionally: strong edges receive 1.0/(1.0+0.6+0.3) = 53% of the flow, medium edges 32%, and weak edges 16%. There is no randomness — the split is the same every time. In probabilistic mode, each of 10 individual tokens would independently roll weighted dice, producing slightly different distributions each run.
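The proportional split is straightforward to sketch (weights as stated above; `split_flow` is a hypothetical helper, not a platform API):

```python
WEIGHTS = {"strong": 1.0, "medium": 0.6, "weak": 0.3}

def split_flow(amount, outgoing):
    """Split `amount` of flow proportionally to normalised edge strength.
    `outgoing` is a list of (target, strength) pairs. No randomness."""
    total = sum(WEIGHTS[s] for _, s in outgoing)
    return {t: amount * WEIGHTS[s] / total for t, s in outgoing}

flows = split_flow(10.0, [("Market Price", "strong"),
                          ("Community Wellbeing", "medium"),
                          ("Public Revenue", "weak")])
print(flows)  # Market Price ~5.3, Community Wellbeing ~3.2, Public Revenue ~1.6
```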

In practice: Start with deterministic mode to understand the expected causal pathways clearly. Then switch to probabilistic ensemble mode to quantify the uncertainty around those expectations. The two algorithms are complementary lenses on the same system.

Backward Diffusion & Root-Cause Analysis

Standard (forward) diffusion answers: “If I intervene here, what happens downstream?” But often the more pressing question is the reverse: “This outcome variable matters to me — what are the most effective upstream levers to influence it?”

Backward diffusion reverses the direction of propagation. Instead of following outgoing edges from intervention points, tokens or flow travel along incoming edges from a target variable of interest, tracing influence backward through the causal chain to its upstream drivers.
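One way to sketch this (an illustration; the platform's implementation details may differ): run the same diffusion machinery on the transposed graph, in which every edge is reversed while keeping its polarity and strength. The edge data below comes from the backward-diffusion figure's direct drivers of Fish Stock:

```python
def transpose(edges):
    """Reverse every edge: backward diffusion is forward diffusion
    on the transposed graph (polarity and strength stay attached)."""
    return [(b, a, pol, strength) for a, b, pol, strength in edges]

edges = [
    ("Fishing Effort", "Fish Stock", "opposite", "strong"),
    ("Water Quality", "Fish Stock", "same", "strong"),
    ("Sea Surface Temp.", "Fish Stock", "opposite", "medium"),
]
back = transpose(edges)
# Injecting at the target "Fish Stock" now propagates toward its drivers
print([e for e in back if e[0] == "Fish Stock"])
```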

Forward Diffusion

Question: “What happens if I change X?”

Direction: Cause → Effect

Inject at: Intervention nodes (intervenable variables)

Reveals: Downstream impacts, side effects, controllability

Backward Diffusion

Question: “What drives Y?”

Direction: Effect → Cause

Inject at: Target variable(s) of interest (e.g., a focal factor)

Reveals: Root causes, influence pathways, upstream leverage points

graph RL FS4["Fish Stock
TARGET"] -->|"backward along
opposite / strong"| FE4["Fishing Effort
influence: high"] FS4 -->|"backward along
same / strong"| WQ4["Water Quality
influence: medium"] FS4 -->|"backward along
opposite / medium"| SST4["Sea Surface Temp.
influence: medium"] FE4 -->|"backward along
same / medium"| MP4["Market Price
influence: indirect"] FE4 -->|"backward along
opposite / strong"| HQ4["Harvest Quota
influence: indirect"] style FS4 fill:#3a2a0a,stroke:#FF8C00,color:#e0e6ed style FE4 fill:#2a1a3a,stroke:#ce93d8,color:#e0e6ed style WQ4 fill:#2a1a3a,stroke:#ce93d8,color:#e0e6ed style SST4 fill:#2a1a3a,stroke:#ce93d8,color:#e0e6ed style MP4 fill:#1a2a3a,stroke:#5a7a96,color:#e0e6ed style HQ4 fill:#1a2a3a,stroke:#5a7a96,color:#e0e6ed linkStyle 0 stroke:#ce93d8,stroke-width:5px linkStyle 1 stroke:#ce93d8,stroke-width:4px linkStyle 2 stroke:#ce93d8,stroke-width:3px linkStyle 3 stroke:#5a7a96,stroke-width:2px linkStyle 4 stroke:#5a7a96,stroke-width:2px

Backward diffusion from Fish Stock. Tokens are injected at the target variable (Fish Stock) and propagate backward along incoming edges. Purple nodes are direct upstream drivers discovered in the first wave; grey nodes are indirect drivers discovered in subsequent waves. The accumulated token flow at each upstream node quantifies its relative influence on the target.

After a backward diffusion run, the platform provides an Influence Ranking — a bar chart showing the cumulative causal influence (area under the curve) of each upstream variable on the target. This ranking directly answers the question: “Which variables have the greatest influence on my outcome of interest?”
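A toy version of backward propagation can be written by walking incoming edges from the target and accumulating damped flow. This is an illustrative sketch only: the edge weights, the damping factor, and the scoring rule are assumptions, not the platform's exact algorithm.

```python
def backward_influence(edges, target, steps=3, damping=0.5):
    """Walk incoming edges backward from `target`, accumulating damped flow.

    edges: (source, target, weight) triples. Returns upstream variables
    ranked by cumulative influence score.
    """
    incoming = {}
    for src, dst, w in edges:
        incoming.setdefault(dst, []).append((src, w))
    frontier, scores = {target: 1.0}, {}
    for _ in range(steps):
        wave = {}
        for node, flow in frontier.items():
            for src, w in incoming.get(node, []):
                contrib = flow * w * damping  # attenuate with each backward hop
                wave[src] = wave.get(src, 0.0) + contrib
                scores[src] = scores.get(src, 0.0) + contrib
        frontier = wave
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Edge weights loosely follow the fishery diagram above (assumed values)
edges = [
    ("Fishing Effort", "Fish Stock", 1.0),
    ("Water Quality", "Fish Stock", 0.6),
    ("Sea Surface Temp.", "Fish Stock", 0.6),
    ("Market Price", "Fishing Effort", 0.6),
    ("Harvest Quota", "Fishing Effort", 1.0),
]
ranking = backward_influence(edges, "Fish Stock")
# Direct strong drivers (Fishing Effort) outrank indirect ones (Market Price)
```

Even this crude version reproduces the qualitative picture from the diagram: direct, strongly weighted drivers score highest, and influence attenuates along longer upstream chains.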

Use cases for backward diffusion:

  • Root-cause diagnosis: A fishery manager concerned about declining stock biomass can run backward diffusion from Fish Stock to discover which upstream variables contribute most.
  • Prioritising interventions: The influence ranking reveals which levers have the strongest causal pathway to the outcome — not just which are directly connected, but which have the strongest cumulative effect through potentially long causal chains.
  • Identifying indirect drivers: Variables two or three steps upstream may be more influential than direct neighbours if they feed through strong, reinforcing pathways.
  • Stakeholder engagement: Backward diffusion provides an intuitive answer to the question stakeholders naturally ask: “What do we need to change to improve this outcome?”

Token Allocation Optimizer (Genetic Algorithm)

When designing interventions, a fundamental question arises: “Given a limited budget of resources, how should I distribute them across available intervention points to maximise the impact on my target variable?”

The Token Allocation Optimizer answers this question automatically using a genetic algorithm (GA) — an evolutionary search technique inspired by natural selection. Rather than testing every possible allocation (which is combinatorially infeasible), the GA evolves a population of candidate allocations over many generations, selecting the fittest, recombining their features, and introducing random mutations to explore the search space efficiently.

🎯

The Optimisation Problem

Given B total tokens (your resource budget) and N eligible intervention nodes, find the allocation [b1, b2, … bN] where b1 + b2 + … + bN = B that maximises the cumulative causal effect (area under the flow curve) on a chosen target variable over a given time horizon.

🧬

How the GA Works

A population of random allocations (individuals) is created. Each individual is evaluated by running a deterministic diffusion simulation and measuring the cumulative effect on the target. The fittest individuals are selected to produce the next generation through crossover (blending two allocations) and mutation (shifting tokens between nodes). Over 50–200 generations, the population converges on a near-optimal allocation.

graph TD S["Seed Population
Random allocations"] --> E["Evaluate Fitness
Run diffusion, compute AUC"] E --> R{"Converged?"} R -->|"No"| SEL["Select Fittest
Tournament selection"] SEL --> CX["Crossover
Blend parent allocations"] CX --> MUT["Mutate
Shift tokens between nodes"] MUT --> E R -->|"Yes"| OPT["Optimal Allocation
Best token distribution found"] style S fill:#1a2a4a,stroke:#64b5f6,color:#e0e6ed style E fill:#1a2a4a,stroke:#64b5f6,color:#e0e6ed style R fill:#3a2a0a,stroke:#FF8C00,color:#e0e6ed style SEL fill:#1a2a4a,stroke:#64b5f6,color:#e0e6ed style CX fill:#1a2a4a,stroke:#64b5f6,color:#e0e6ed style MUT fill:#1a2a4a,stroke:#64b5f6,color:#e0e6ed style OPT fill:#1a4a2a,stroke:#66bb6a,color:#e0e6ed

Genetic algorithm optimisation cycle. Starting from random allocations, each generation evaluates fitness (cumulative effect on the target variable via deterministic diffusion), selects the fittest, recombines and mutates them, and repeats. The process converges toward the allocation that maximises causal impact.
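The cycle above can be sketched as a compact genetic algorithm. This is illustrative only: the platform uses tournament selection and a diffusion-based fitness function, whereas this sketch uses simpler truncation selection and a stand-in linear fitness.

```python
import random

def optimise_allocation(nodes, budget, fitness, pop_size=50, generations=100, seed=0):
    """Toy GA for the token-allocation problem.

    `fitness` maps an allocation (one token count per node, summing to
    `budget`) to a score; it stands in for the platform's deterministic
    diffusion run (cumulative effect on the target variable).
    """
    rng = random.Random(seed)
    n = len(nodes)

    def random_individual():
        # Random split of the budget across n nodes (stars-and-bars style)
        cuts = sorted(rng.randint(0, budget) for _ in range(n - 1))
        return [b - a for a, b in zip([0] + cuts, cuts + [budget])]

    def mutate(ind):
        # Shift one token between two randomly chosen nodes
        ind = ind[:]
        i, j = rng.randrange(n), rng.randrange(n)
        if ind[i] > 0:
            ind[i] -= 1
            ind[j] += 1
        return ind

    def crossover(a, b):
        # Per-node blend of two parents, then repair the budget constraint
        child = [rng.choice(pair) for pair in zip(a, b)]
        while sum(child) < budget:
            child[rng.randrange(n)] += 1
        while sum(child) > budget:
            i = rng.randrange(n)
            if child[i] > 0:
                child[i] -= 1
        return child

    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection (platform: tournament)
        children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

# Stand-in fitness: pretend node "A" has the strongest causal path to the target
fitness = lambda alloc: 0.9 * alloc[0] + 0.5 * alloc[1] + 0.1 * alloc[2]
best = optimise_allocation(["A", "B", "C"], budget=20, fitness=fitness)
# The search concentrates the budget on the highest-leverage node
```

Because the top half of each generation is carried over unchanged, the best allocation found so far is never lost, so best fitness is monotonically non-decreasing across generations, which is exactly what the convergence chart visualises.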

Optimizer configuration:

Setting | Description | Default
Target Node | The variable whose cumulative effect you want to maximise. Typically a focal factor. | —
Optimisation Goal | Maximise positive (increase the target), maximise absolute (largest effect regardless of sign), or minimise negative (reduce the target) | Maximise positive
Direction | Forward (allocate tokens at upstream nodes to affect the target) or Backward (discover which upstream nodes matter most) | Forward
Total Budget | Total number of tokens to distribute across eligible nodes | 100
Time Steps | Number of simulation steps to run for each fitness evaluation | 200
Eligible Nodes | Which nodes can receive tokens. Defaults to intervenable variables; can be expanded. | Intervenable nodes
Population Size | Number of candidate allocations per generation | 50
Generations | Maximum number of evolutionary cycles | 100

Optimizer outputs:

  • Convergence chart: Shows how the best and mean fitness evolve over generations, indicating whether the search has converged.
  • Optimal allocation bar chart: Visualises how tokens should be distributed across eligible nodes.
  • Best fitness score: The cumulative area-under-curve achieved by the optimal allocation.
  • Apply buttons: One click applies the discovered optimal allocation to a Scenario or Ensemble run for detailed exploration.

Why genetic algorithms? The token allocation problem is a constrained combinatorial optimisation problem. With 10 eligible nodes and a budget of 100 tokens, there are over four trillion possible allocations (a stars-and-bars count: C(109, 9) ≈ 4.3 × 10¹²). Exhaustive search is infeasible. The GA efficiently searches this space by exploiting the structure of the problem — allocations that are close to the optimum in “genotype space” tend to have similar fitness, allowing the evolutionary process to home in on good solutions within 50–200 generations (typically seconds of computation).

Technical note: The GA runs entirely in your browser using a Web Worker thread, so the UI remains responsive during optimisation. The deterministic diffusion algorithm is used internally for fitness evaluation, ensuring smooth, repeatable fitness landscapes that the GA can navigate efficiently.

3

Monitor & Evaluate

Monitoring Lab — Identify key indicators and design monitoring programs

The Monitoring Lab uses network centrality analysis to identify the most influential, strategically positioned, and informative variables in the system. These variables are prime candidates for monitoring and evaluation programs.

Why Centrality Matters for Monitoring

Not all variables in a system are equally important. Some factors sit at critical junctures in the causal network — they influence many others, bridge different subsystems, or propagate changes widely. These are the variables you most want to monitor.

Centrality metrics, borrowed from social network analysis (SNA) and graph theory, quantify the structural importance of each node in the network. By ranking variables by their centrality, you can prioritise monitoring resources for maximum insight.

Practical implication: Rather than trying to monitor everything (which is expensive and often infeasible), centrality analysis identifies the minimum set of “sentinel” variables that, if monitored, give you the best picture of overall system health.

Five Centrality Metrics Explained

Degree Centrality

Counts the number of direct connections (in + out). High-degree nodes are the most connected factors.

Q: Which variables have the most direct causal connections?

Betweenness Centrality

Measures how often a node lies on the shortest path between other nodes. High-betweenness nodes are bridges between subsystems.

Q: Which variables are bottlenecks or bridges in the system?

Closeness Centrality

Measures how close a node is, on average, to all other nodes. High-closeness nodes can reach (or be reached by) the rest of the system quickly.

Q: Which variables can influence the whole system most rapidly?

Eigenvector Centrality

A node is important if it is connected to other important nodes. This captures influence that propagates through the network.

Q: Which variables are connected to the most influential parts of the system?

Katz Centrality

Similar to eigenvector but gives every node a baseline importance. Accounts for both direct and indirect paths with attenuation over distance.

Q: Which variables have the broadest total influence, direct and indirect?

Worked example: Applying all five metrics to our synthetic fishery network reveals how different metrics highlight different variables. The diagram shows degree centrality (raw count of connections) for each node:

graph LR SST3["SST
Deg: 1"] --> FS3["FISH STOCK
Deg: 5"] WQ3["Water Q.
Deg: 2"] --> FS3 SP3["Spatial Prot.
Deg: 2"] --> FS3 SP3 --> WQ3 FS3 --> CV3["Catch Vol.
Deg: 3"] FE3["EFFORT
Deg: 4"] --> FS3 CV3 --> MP3["Mkt Price
Deg: 3"] MP3 --> FE3 HQ3["Quota
Deg: 1"] --> FE3 CV3 --> CW3["Wellbeing
Deg: 2"] MP3 --> CW3

The fishery network annotated with degree centrality (raw count of in + out connections). Fish Stock (degree: 5) and Fishing Effort (degree: 4) have the most connections. The table below compares all five centrality metrics.
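Degree centrality is the simplest metric to compute by hand: count how many times each variable appears as an edge endpoint. A minimal sketch using only the standard library, with the edge list transcribed from the diagram above:

```python
from collections import Counter

# Directed causal edges (source -> target) transcribed from the fishery diagram
fishery_edges = [
    ("Sea Surface Temp.", "Fish Stock"),
    ("Water Quality", "Fish Stock"),
    ("Spatial Protection", "Fish Stock"),
    ("Spatial Protection", "Water Quality"),
    ("Fish Stock", "Catch Volume"),
    ("Fishing Effort", "Fish Stock"),
    ("Catch Volume", "Market Price"),
    ("Market Price", "Fishing Effort"),
    ("Harvest Quota", "Fishing Effort"),
    ("Catch Volume", "Community Wellbeing"),
    ("Market Price", "Community Wellbeing"),
]

degree = Counter()
for src, dst in fishery_edges:
    degree[src] += 1  # out-degree contribution
    degree[dst] += 1  # in-degree contribution

# Fish Stock tops the ranking with degree 5
```

The other four metrics require path-based computation (shortest paths for betweenness and closeness, iterative eigen-computation for eigenvector and Katz) and are typically delegated to a graph library rather than computed by hand.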

Variable Degree Betweenness Closeness Eigenvector Katz
Fish Stock 5 0.43 0.47 0.52 0.58
Fishing Effort 4 0.32 0.53 0.44 0.51
Market Price 3 0.25 0.40 0.36 0.42
Catch Volume 3 0.18 0.35 0.33 0.38
Spatial Protection 2 0.14 0.31 0.18 0.25
Water Quality 2 0.04 0.27 0.26 0.24
Community Wellbeing 2 0.00 0.20 0.21 0.22
Sea Surface Temp. 1 0.00 0.25 0.12 0.15
Harvest Quota 1 0.00 0.29 0.09 0.13

Reading this table: Degree = raw count of connections. Betweenness, closeness, eigenvector, and Katz are computed as fractions (0 to 1). Higher values = greater structural importance. Notice how different metrics spotlight different variables:

  • Fish Stock (degree: 5) ranks #1 on degree, betweenness, eigenvector, and Katz — it is the most connected and structurally central factor. It’s a prime leverage point and sentinel indicator.
  • Fishing Effort (degree: 4) ranks #1 on closeness (0.53 — it can reach every other node quickly) and #2 on most other metrics — a key intervention target.
  • Market Price (degree: 3) ranks #3 across the board — a consistent bridge variable between ecological and social subsystems.
  • Harvest Quota (degree: 1) has the lowest degree but relatively high closeness (0.29) — it’s a management lever that, despite few direct connections, can reach the system efficiently.
  • Community Wellbeing (degree: 2) has zero betweenness (it’s a terminal node receiving influence but not passing it on) — it’s an outcome variable worth monitoring but not a good intervention point.

Leverage Points and Sentinel Indicators

By combining multiple centrality metrics, you can identify different types of strategically important variables:

  • Leverage points: Variables that score high on multiple metrics. Intervening on these factors has the greatest potential to shift system behaviour.
  • Sentinel indicators: Variables with high closeness or eigenvector centrality that, when monitored, serve as early warning signals of system-wide change.
  • Bridge variables: High-betweenness variables that connect different subsystems. Monitoring these reveals whether changes are spreading across system boundaries.
  • Hub variables: High-degree nodes that can be practical proxies for measuring overall activity in their subsystem.

The Comprehensive Analysis mode in the Monitoring Lab calculates all five metrics simultaneously and presents them in a sortable table, making it easy to identify variables that score highly across multiple dimensions.

graph TD A["All System Variables"] --> B["Centrality Analysis"] B --> C["High Degree
Hub Variables"] B --> D["High Betweenness
Bridge Variables"] B --> E["High Eigenvector
Influence Propagators"] B --> F["High Closeness
Sentinel Indicators"] C --> G["Priority Monitoring Set"] D --> G E --> G F --> G

From all system variables, centrality analysis filters the most strategically important ones into a priority monitoring set. Different metrics reveal different types of importance.

From Knowledge to Action

Turning systems analysis into concrete decisions, policies, and real-world impact

Understanding a system is only valuable if it leads to better decisions. This section bridges the gap between analysis and action — showing how the insights produced by SIM4Action translate into concrete, evidence-based management strategies in the real world.

The Implementation Gap: Why Systems Knowledge Rarely Becomes Action

Decades of research in environmental management, public health, and development have documented a persistent implementation gap: the distance between what we know about a system and what we actually do about it. This gap exists because:

  • Complexity overwhelms decision-makers. A fishery manager facing 30+ interacting variables cannot mentally trace all the causal chains. They default to single-variable interventions (e.g., “just reduce catch”) that often produce unintended consequences.
  • Stakeholders disagree about priorities. Without a shared model, ecologists, economists, and fishing communities argue from incompatible mental models — each sees a different system.
  • Adaptive management is prescribed but rarely practised. The concept of iterative learn-act-monitor cycles is well-established in the literature (Holling, 1978; Walters, 1986), yet most management agencies lack the tools to operationalise it. Reviews of over 100 adaptive management programs found that fewer than 20% actually completed a full learning cycle (Allen & Gunderson, 2011).

SIM4Action addresses this gap directly. It gives stakeholders a shared, interactive, evidence-based model where causal assumptions are explicit and testable, not hidden in spreadsheets or expert intuition. Every factor, relationship, and intervention scenario is transparent, debatable, and modifiable.

SIM4Action Operationalises Adaptive Management

Adaptive management — the structured cycle of plan, act, monitor, learn, adjust — is widely endorsed by institutions from the IUCN to the World Bank. But it requires three capabilities that most management agencies lack:

1. A Shared System Model

Traditional adaptive management assumes everyone agrees on “the system.” SIM4Action makes the system model explicit, visual, and collaboratively built through participatory mapping. Stakeholders co-create the causal map, ensuring all perspectives are represented.

2. Scenario Testing Before Acting

The Intervention Lab allows managers to test interventions computationally before implementing them in the real world. Causal diffusion (forward and backward, probabilistic and deterministic) reveals cascading effects, trade-offs, root causes, and unintended consequences — at zero cost and zero risk. The genetic algorithm optimizer can even discover the optimal allocation of resources automatically.

3. Targeted Monitoring Design

The Monitoring Lab uses centrality analysis to identify exactly which variables to monitor. Rather than expensive blanket monitoring programs, managers can focus resources on the sentinel indicators most likely to detect system-wide change.

[Figure: the SIM4Action adaptive management cycle. Stakeholder Workshop → System Model in SIM4Action → Understand Structure (Diagnostics) → Test Interventions (Intervention Lab) → Design Monitoring (Monitoring Lab) → Management Action (implement) → New Evidence (collect data) → update map]

The SIM4Action adaptive management cycle. The flow runs clockwise: stakeholders build a system map, analyse its structure (Diagnostics Lab), test interventions (Intervention Lab), design monitoring (Monitoring Lab), implement actions, and collect new evidence. The dashed orange feedback arrow (top) closes the learning loop — new evidence updates the system map for the next iteration.

Concrete Decision Pathways: From Analysis to Policy

Each SIM4Action lab produces outputs that directly inform specific types of real-world decisions:

SIM4Action Output | Decision It Informs | Real-World Example
Feedback loops (Diagnostics Lab) | Identify self-reinforcing dynamics that could amplify or resist interventions | In the Great Barrier Reef, identifying a reinforcing loop between coral bleaching, algal overgrowth, and fish habitat loss led to prioritising water quality interventions over coral transplanting (Hughes et al., 2017).
Cluster detection (Diagnostics Lab) | Define which agencies or departments need to coordinate on cross-cutting issues | In Mediterranean fisheries, identifying that ecological and socio-economic variables form separate clusters connected by “market price” led to joint meetings between fisheries biologists and economists (Coll et al., 2013).
Causal diffusion results (Intervention Lab) | Compare intervention strategies (forward diffusion), identify root causes (backward diffusion), quantify trade-offs, optimise resource allocation (GA optimizer), and identify unintended side effects | In Chilean salmon aquaculture, diffusion modelling showed that regulating stocking density had stronger downstream effects on disease and water quality than regulating feed inputs — reversing the expected priority order (Niklitschek et al., 2013).
Centrality rankings (Monitoring Lab) | Prioritise monitoring budgets toward the most informative variables | In the North Sea, centrality analysis of a food web model identified that monitoring zooplankton biomass and herring recruitment provided 80% of the early-warning capacity at 30% of the cost of full ecosystem monitoring (Cury et al., 2005).
Leverage points (Combined analysis) | Focus limited resources on variables with the greatest system-wide influence | In Kenyan dryland water systems, participatory mapping revealed that “community water governance capacity” was a leverage point connecting ecological, economic, and social subsystems — leading to investment in local governance rather than infrastructure alone (Reid et al., 2016).

Evidence-Based Principles for Systems-Informed Decision-Making

Research across multiple domains has established principles that SIM4Action embodies:

  • Intervene at leverage points, not everywhere. Meadows (1999) showed that small interventions at the right structural points can shift entire system trajectories. SIM4Action’s centrality analysis identifies these points computationally.
  • Expect and plan for side effects. Any intervention in a connected system will produce cascading effects. The Intervention Lab makes these visible before implementation — reducing the risk of “policy resistance” (Sterman, 2000).
  • Monitor the bridges, not just the targets. High-betweenness variables act as transmission channels between subsystems. Monitoring them provides early warning of cross-system impacts that single-sector monitoring misses.
  • Treat the system map as a living document. As new monitoring data arrives, the map should be updated — relationships strengthened or weakened, new factors added, obsolete ones removed. This is the “learning” step in adaptive management.
  • Co-produce knowledge with stakeholders. Participatory mapping is not just a consultation exercise. Research shows that stakeholders who co-create system models have greater ownership of resulting management plans and higher compliance rates (Reed et al., 2014; Sterling et al., 2017).

What Makes SIM4Action Different

Many tools exist for parts of the systems analysis pipeline. What makes SIM4Action distinctive is that it integrates the full adaptive management cycle into a single, accessible platform:

  • Participatory data input via Google Sheets — stakeholders can contribute factors and relationships without needing specialised software.
  • Interactive visualisation — the system map is not a static image but a live, filterable, explorable network.
  • Computational analysis — feedback loop detection, community detection, and centrality analysis run in the browser via Pyodide (Python in WebAssembly), requiring no server infrastructure.
  • Simulation before action — causal diffusion (forward/backward, probabilistic/deterministic) and genetic algorithm optimization let stakeholders experiment with interventions computationally, discover root causes, and find optimal resource allocations — building intuition and consensus before committing resources.
  • Evidence-based monitoring design — centrality metrics provide a rigorous, defensible basis for allocating monitoring budgets.
  • Iterative by design — the platform is built for repeated use. As new data arrives, the system map is updated, interventions are re-evaluated, and monitoring priorities are refined.

The bottom line: SIM4Action transforms participatory systems mapping from a one-off workshop exercise into a continuous, evidence-driven decision-support process. It makes adaptive management not just an aspiration but a practical, implementable workflow — bridging the gap between understanding complexity and acting on it.

Building Causal System Maps

Causal system maps can be constructed through a spectrum of methods — from purely empirical literature synthesis to fully participatory co-design. SIM4Action introduces a new approach: agentic AI extraction, which can generate a comprehensive, evidence-based causal map in under an hour. This section explains the landscape of map-building methods and provides a detailed overview of the SIM4Action agentic workflow.

The Spectrum of Map-Building Approaches

There is no single “correct” way to build a causal system map. Methods vary along a spectrum from fully empirical (researcher-driven, evidence-extracted) to fully participatory (stakeholder-driven, experience-based). Each approach has trade-offs in cost, time, richness, and legitimacy:

Method | Description | Strengths | Limitations
Literature Review | Researchers extract variables and causal relationships from published scientific papers, reports, and meta-analyses. | Strong evidence base; reproducible; peer-reviewed sources | Slow (weeks to months); limited to what is published; may miss local knowledge and emerging dynamics
Expert Interviews | Semi-structured interviews with domain experts (scientists, managers, practitioners) to elicit causal relationships. | Captures nuance and tacit knowledge; can probe mechanisms | Time-intensive; subject to individual bias; small sample sizes
Surveys & Questionnaires | Structured instruments distributed to a broader set of stakeholders to identify perceived causal links and priorities. | Scalable; can quantify consensus and disagreement | Shallow depth per response; requires careful design; low response rates common
Participatory Workshops | Facilitated sessions where diverse stakeholders co-create the causal map in real time using sticky notes, whiteboards, or digital tools. | Integrates diverse knowledge; builds ownership and consensus; captures cross-domain connections | Expensive to organise; influenced by group dynamics; requires skilled facilitation
Co-Design & Iterative Refinement | Multiple rounds of mapping, review, and revision with stakeholder groups over weeks or months. | Highest legitimacy; deeply validated; captures evolving understanding | Most time and resource intensive; risk of participation fatigue
Generative AI Extraction | An agentic AI workflow researches the system, extracts variables and relationships from the evidence base, and produces a quality-checked causal map automatically. | Fast (<1 hour); comprehensive evidence base; consistent methodology; fully traceable | Lacks lived experience and local knowledge; should be validated by domain experts and/or stakeholders

These methods are complementary, not competing. The most robust system maps combine multiple approaches. A generative AI extraction can provide a rapid, evidence-based starting point that is then enriched, validated, and refined through expert review and participatory workshops. This hybrid strategy achieves both rigour (from the literature) and relevance (from stakeholder knowledge) in a fraction of the time required by purely manual approaches.

The Agentic Extraction Approach: Overview

SIM4Action includes an automated agentic deep research workflow that transforms a plain-language system description into a comprehensive causal system map. Rather than relying on a single AI prompt, the workflow orchestrates a team of specialised AI agents — each with a defined role — through a multi-phase pipeline of research, extraction, review, and quality control.

The approach mirrors how a human research team would work:

  • A Research Leader (Claude Sonnet) — plans the research, coordinates the team, synthesises findings, reviews outputs, and writes reports. Handles tasks requiring good judgement and clear communication.
  • Deep Analysts (Claude Opus) — perform the intellectually demanding work: extracting variables and relationships with strict quality rules, and conducting the final quality control check.
  • Field Researchers (Perplexity Sonar) — search the web for scientific papers, government reports, and current information, returning structured findings with citations.

This separation of roles follows the same principles as academic peer review: the agent that extracts variables is not the same agent that reviews them, ensuring independent quality checks at every stage.

Input & output. You provide a natural-language description of the system (e.g., “The Northern Territory mud crab fishery in Australia”). The workflow returns an Excel workbook containing a complete FACTORS sheet (40–120 variables with IDs, names, domains, definitions) and a RELATIONSHIPS sheet (60–200 causal links with polarity, strength, delay, and mechanistic explanations) — ready for direct import into the SIM4Action platform. A typical run takes 30–60 minutes and costs approximately $25–55 in API usage.

The Five-Phase Pipeline

The workflow follows a structured pipeline of five phases. Each phase produces auditable intermediate reports, and quality gates ensure that problems are caught and corrected before propagating downstream.

Phase 1: Deep Research

The Research Leader analyses the system description and creates a tailored research plan that divides the system into 6–8 thematic domains (e.g., Environmental-Ecosystem, Fish Stocks, Economics-Markets, Management, Social Impacts, Indigenous Knowledge). For each domain, it generates 3–5 targeted, search-optimised research questions.

The Field Researchers then investigate each domain in parallel, conducting deep web searches and producing a Domain Research Brief for each — containing key findings, spotted variables, observed relationships, important dynamics, knowledge gaps, and source citations. The Research Leader synthesises all briefs into a comprehensive System Overview Report and checks for research gaps. If significant gaps are found, targeted follow-up research is conducted (up to 2 gap-fill iterations).

Phase 2: Variable Identification

A Deep Analyst extracts all system variables from the research corpus, applying strict quality rules:

  • Directional neutrality — variable names must be neutral (e.g., “Ocean Temperature” not “Ocean Warming”) so they can increase or decrease.
  • Measurability — every variable must be quantifiable or assessable (e.g., “Dissolved Oxygen Concentration” not “Water Quality”).
  • Appropriate granularity — neither too specific (“January Rainfall in Zone 3”) nor too broad (“Climate”).
  • Domain completeness — at least 3 variables per domain, ensuring no dimension of the system is neglected.
  • Intervenability — variables directly controllable by management or policy are flagged.

The Research Leader reviews the variable list against an 8-point checklist. If the review fails, specific feedback is sent back to the Deep Analyst for revision. This review–revise loop repeats up to 3 times, mirroring academic peer review.

Phase 3: Relationship Extraction

With a validated variable list, a Deep Analyst identifies all causal relationships. For each relationship, the analyst determines:

  • Source and target variables (using the established IDs)
  • Polarity — “same” (both increase or both decrease together) or “opposite” (one increases, the other decreases)
  • Strength — strong (primary driver), medium (significant factor), or weak (minor influence)
  • Time delay — how long before the effect materialises (days, months, years, or decades)
  • Mechanistic definition — a detailed explanation of why and how the causal link works, not just that it exists

Relationships are extracted systematically: first those involving focal variables, then within-domain, then cross-domain connections. The Research Leader reviews against a 10-point checklist, and the review–revise loop repeats up to 3 times.
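Each extracted link can be thought of as a structured record destined for the RELATIONSHIPS sheet. The field names, IDs, and values below are illustrative assumptions, not the workbook's exact schema:

```python
# Hypothetical RELATIONSHIPS row (column names and IDs are illustrative)
relationship = {
    "source_id": "F07",        # e.g. Fishing Effort
    "target_id": "F01",        # e.g. Fish Stock Biomass
    "polarity": "opposite",    # more effort -> less stock
    "strength": "strong",      # primary driver
    "delay": "months",         # time before the effect materialises
    "definition": (
        "Sustained fishing effort removes biomass faster than "
        "recruitment can replace it, drawing the stock down."
    ),
}
```

Keeping polarity, strength, and delay as small controlled vocabularies (rather than free text) is what makes the later statistical benchmark checks and loop classification possible.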

Phase 4: Feedback Loops & Gap Analysis

The Research Leader traces all feedback loops in the completed map — circular chains where a change in one variable eventually comes back to affect itself. Each loop is classified as:

  • Reinforcing — amplifies change (even number of “opposite” edges in the cycle)
  • Balancing — counteracts change and tends toward equilibrium (odd number of “opposite” edges)
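The parity rule behind this classification is tiny to implement. In this sketch, `polarities` lists the edge polarities encountered going once around a cycle:

```python
def classify_loop(polarities):
    """Reinforcing if the cycle has an even number of 'opposite' edges, else balancing."""
    n_opposite = sum(p == "opposite" for p in polarities)
    return "reinforcing" if n_opposite % 2 == 0 else "balancing"

classify_loop(["same", "opposite", "opposite"])  # reinforcing: two sign flips cancel
classify_loop(["same", "opposite", "same"])      # balancing: one net sign flip
```

Intuitively, each "opposite" edge flips the sign of a perturbation travelling around the loop; an even number of flips returns the perturbation with its original sign, so it amplifies itself.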

The map is also analysed for structural gaps (orphan variables, under-connected domains, missing cross-domain links) and thematic gaps (are climate impacts represented? are management actions connected to what they manage? is Indigenous knowledge integrated?). If significant gaps are found, targeted research fills them (up to 2 iterations).

Phase 5: Integration & Output

A Deep Analyst performs a comprehensive 7-point final quality control check covering structural integrity, logical coherence, naming quality, domain balance, statistical distributions, definition quality, and evidence grounding. The validated data is then assembled into the final Excel workbook, and the Research Leader writes a human-readable summary report with system description, map statistics, key findings, methodology notes, and a complete source bibliography.

Quality Assurance & Iteration

A single pass through an AI model — no matter how capable — is not sufficient for a task of this complexity. The workflow incorporates multiple layers of quality assurance that mirror the rigour of academic research:

🔄

Peer Review Loops

Variables and relationships are extracted by one model (Claude Opus) and reviewed by a different model (Claude Sonnet) with fresh eyes and a structured checklist. Failed reviews trigger revision with specific feedback, up to 3 iterations per phase.

🔍

Gap Analysis

Dedicated gap-checking steps ensure the map is comprehensive. Research gaps trigger follow-up searches. Structural gaps (orphan variables, sparse domains) and thematic gaps (missing climate, governance, or Indigenous dimensions) are systematically identified and addressed.

📊

Statistical Benchmarks

The final QC checks that output distributions match empirically derived targets: 55–85% same-polarity relationships, 10–30% strong / 50–80% medium / 5–20% weak strength, and realistic delay distributions. Deviations trigger warnings.
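The benchmark check amounts to counting shares and comparing them against target bands. The bands below are the ones quoted above; the function itself is an illustrative sketch, assuming each relationship is a dict with `polarity` and `strength` fields:

```python
# Target bands from the QC description above (fractions of all relationships).
TARGETS = {
    "same_polarity": (0.55, 0.85),
    "strong": (0.10, 0.30),
    "medium": (0.50, 0.80),
    "weak": (0.05, 0.20),
}


def benchmark_warnings(relationships):
    """Return a warning for each observed share outside its target band."""
    n = len(relationships)
    shares = {
        "same_polarity": sum(r["polarity"] == "same" for r in relationships) / n,
        "strong": sum(r["strength"] == "strong" for r in relationships) / n,
        "medium": sum(r["strength"] == "medium" for r in relationships) / n,
        "weak": sum(r["strength"] == "weak" for r in relationships) / n,
    }
    warnings = []
    for key, (lo, hi) in TARGETS.items():
        if not lo <= shares[key] <= hi:
            warnings.append(f"{key}: {shares[key]:.0%} outside {lo:.0%}-{hi:.0%}")
    return warnings
```

A map whose shares all fall inside the bands yields an empty warning list; deviations are reported but do not block the run.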

📝

Full Audit Trail

Every intermediate step produces a saved report: research plan, domain briefs, system overview, variable list, relationship list, feedback loop analysis, quality control report, and final summary. The entire chain of evidence is traceable and reviewable.

This multi-agent, multi-pass approach produces a significantly more thorough and reliable causal map than any single prompt or single-pass extraction could achieve.

System Domains & Coverage

The Research Leader tailors the research domains to each specific system, but follows a standard framework designed to ensure comprehensive coverage of all dimensions of a socio-environmental system:

Domain | What It Covers | Example Variables
Focal Factors | The 1–3 most central variables the entire system revolves around | Fish Stock Biomass, Prawn Recruitment
Environmental-Ecosystem | Physical and biological conditions: climate, oceanography, habitat, biodiversity | Sea Surface Temperature, Dissolved Oxygen, Coral Cover
Stock | Population dynamics, recruitment, growth, mortality, species interactions | Spawning Stock Biomass, Natural Mortality Rate, Bycatch Volume
Technical | Harvesting technology, gear types, vessel capacity, innovation | Fleet Size, Gear Selectivity, Fuel Consumption
Economics-Markets | Prices, costs, profitability, trade, supply chains, investment | Ex-Vessel Price, Operating Costs, Import Competition
Management | Regulations, governance, compliance, research, decision-making | Total Allowable Catch, MPA Coverage, Compliance Rate
Social | Community wellbeing, employment, food security, demographics, equity | Fisher Employment, Community Dependence, Recreational Participation
Indigenous | Traditional ecological knowledge, cultural practices, rights, co-management | Traditional Harvest Access, Cultural Site Condition, Indigenous Co-Management Involvement

For non-fishery systems, the domains adapt accordingly. A freshwater system might replace “Stock” with “Hydrology”; an urban system might replace “Environmental-Ecosystem” with “Built Environment” and “Public Health.” The Research Leader customises these based on the system description.

The Output: What You Get

A successful run produces the following deliverables:

Excel Workbook (SIM4Action-ready)

The primary output is an Excel workbook with two sheets:

  • FACTORS sheet — every system variable with: factor_id (V1, V2, …), name, domain_name, intervenable (true/false), and a definition explaining what the variable represents and how it could be measured.
  • RELATIONSHIPS sheet — every causal link with: relationship_id, from / to variable names and IDs, polarity (same/opposite), strength (strong/medium/weak), delay (days/months/years/decade), and a definition explaining the causal mechanism with evidence citations.

This workbook can be directly imported into the SIM4Action platform for immediate analysis using the Diagnostics, Intervention, and Monitoring labs.

Audit Trail Reports

Seven intermediate reports provide full transparency over the map-building process:

  1. Research Plan — system scope, domains, and research questions
  2. Domain Research Briefs — individual findings for each domain with sources
  3. System Overview — synthesised narrative of the entire system
  4. Variable List Report — all extracted variables with review history
  5. Relationship List Report — all extracted relationships with review history
  6. Feedback Loops & Gap Analysis — identified loops, structural and thematic gaps
  7. Final Summary Report — human-readable overview with statistics and bibliography

Typical Scale

Metric | Typical Range
Variables | 40–120
Relationships | 60–200
Domains covered | 6–8
Feedback loops identified | 10–50+
Execution time | 30–60 minutes
Cost per run | $25–55 (API usage)

Combining AI Extraction with Participatory Approaches

The agentic extraction workflow is designed to complement, not replace, participatory mapping. The recommended hybrid approach uses AI-generated maps as a foundation that stakeholders then refine:

  1. Generate a baseline map — run the agentic workflow to produce a comprehensive, evidence-based starting point. This takes <1 hour and ensures that the published literature and available data are represented.
  2. Load into SIM4Action — import the Excel workbook into the platform. Use the Diagnostics Lab to explore the map structure, identify feedback loops, and understand the system as characterised by the published evidence.
  3. Workshop review & enrichment — bring stakeholders together (or conduct expert interviews) to review the AI-generated map. Stakeholders add local knowledge, lived experience, and cultural context that the literature does not capture. They can add missing variables, remove irrelevant ones, adjust relationship strengths, and flag where the AI got it wrong.
  4. Iterate — update the spreadsheet with workshop outputs, re-import into SIM4Action, and re-analyse. The platform’s adaptive management cycle (Understand → Intervene → Monitor → Learn) applies to the map itself.

Why this works. Starting with an AI-generated map means that workshops spend less time listing obvious variables and more time on the nuanced, contested, and locally specific dynamics that only human participants can provide. The AI handles the “homework” of reviewing hundreds of papers; the humans contribute the wisdom that no paper captures. The result is a map that is both evidence-rich and stakeholder-owned.

Design Principles

Seven principles guide the design of the agentic extraction workflow:

  1. Research-first. Every variable and relationship must be grounded in evidence from the research phase. No invented or assumed connections.
  2. Iterative refinement. Multiple review–revise cycles catch errors that a single pass would miss. Quality improves with each iteration.
  3. Fit-for-purpose model selection. Different AI models are chosen for different tasks based on their strengths: deep reasoning for extraction (Opus), coordination for review (Sonnet), web search for research (Perplexity).
  4. Structured artefacts. All outputs follow strict JSON schemas, ensuring consistency and enabling downstream processing.
  5. Directional neutrality. Variable names must be phrased so they can increase or decrease, preventing built-in bias.
  6. Domain completeness. Every dimension of the socio-environmental system must be represented, from ecology to economics to governance to Indigenous knowledge.
  7. Traceable provenance. Every claim is linked to a source. Every decision is documented in the audit trail. The map can be interrogated from output back to evidence.

References

Sources cited throughout this primer. Arranged alphabetically by first author.

  1. Allen, C.R. & Gunderson, L.H. (2011). Pathology and failure in the design and implementation of adaptive management. Journal of Environmental Management, 92(5), 1379–1384. doi:10.1016/j.jenvman.2010.10.063
  2. Coll, M., Cury, P., Azzurro, E., Bariche, M., Bayadas, G., Bellido, J.M., … & Tudela, S. (2013). The scientific strategy needed to promote a regional ecosystem-based approach to fisheries in the Mediterranean and Black Seas. Reviews in Fish Biology and Fisheries, 23(4), 415–434. doi:10.1007/s11160-013-9305-y
  3. Cury, P.M., Shannon, L.J., Roux, J.-P., Chuenpagdee, R., Gretchina, A., Penney, A., … & Shin, Y.-J. (2005). Trophodynamic indicators for an ecosystem approach to fisheries. ICES Journal of Marine Science, 62(3), 430–442. doi:10.1016/j.icesjms.2004.12.006
  4. Holling, C.S. (Ed.) (1978). Adaptive Environmental Assessment and Management. John Wiley & Sons, Chichester.
  5. Hughes, T.P., Kerry, J.T., Álvarez-Noriega, M., Álvarez-Romero, J.G., Anderson, K.D., Baird, A.H., … & Wilson, S.K. (2017). Global warming and recurrent mass bleaching of corals. Nature, 543(7645), 373–377. doi:10.1038/nature21707
  6. Meadows, D.H. (1999). Leverage points: Places to intervene in a system. Sustainability Institute, Hartland, VT. donellameadows.org
  7. Niklitschek, E.J., Soto, D., Lafon, A., Molinet, C. & Toledo, P. (2013). Southward expansion of the Chilean salmon industry in the Patagonian Fjords: Main environmental challenges. Reviews in Aquaculture, 5(3), 172–195. doi:10.1111/raq.12012
  8. Reed, M.S., Stringer, L.C., Fazey, I., Evely, A.C. & Kruijsen, J.H.J. (2014). Five principles for the practice of knowledge exchange in environmental management. Journal of Environmental Management, 146, 337–345. doi:10.1016/j.jenvman.2014.07.021
  9. Reid, R.S., Nkedianye, D., Said, M.Y., Kaelo, D., Neselle, M., Makui, O., … & Clark, W.C. (2016). Evolution of models to support community and policy action with science: Balancing pastoral livelihoods and wildlife conservation in savannas of East Africa. Proceedings of the National Academy of Sciences, 113(17), 4579–4584. doi:10.1073/pnas.0900313106
  10. Sterman, J.D. (2000). Business Dynamics: Systems Thinking and Modeling for a Complex World. McGraw-Hill, Boston.
  11. Sterling, E.J., Betley, E., Sigouin, A., Gomez, A., Toomey, A., Cullman, G., … & Filardi, C. (2017). Assessing the evidence for stakeholder engagement in biodiversity conservation. Biological Conservation, 209, 159–171. doi:10.1016/j.biocon.2017.02.008
  12. Walters, C.J. (1986). Adaptive Management of Renewable Resources. Macmillan, New York.


Spreadsheet Configuration Guide

Your Google Sheet must follow a specific structure so that SIM4Action can read it correctly. Below is a complete reference for setting up your spreadsheet.

1. Prerequisites

Sharing: Your Google Sheet must be shared with “Anyone with the link” set to Viewer. The platform reads data via the public Google Sheets API—private sheets will not work.

Your spreadsheet must contain exactly two tabs (sheet names are case-sensitive):

  • FACTORS
  • RELATIONSHIPS

2. FACTORS Tab

Each row defines a single factor (variable / node) in your system map. Row 1 must be the header row.

Column | Header Name | Required | Description
A | factor_id | Yes | Unique identifier for the factor. Must follow the format V1, V2, V3, etc. (letter V followed by a number). These IDs are used to define relationships.
B | name | Yes | Human-readable name of the factor, e.g. “Water Temperature”, “Market Price”. This is displayed on the map.
C | domain_name | Yes | The thematic domain (category) this factor belongs to. Used for color-coding and domain filtering. See recognized domains below.
D | intervenable | No | Whether this factor can be intervened on / managed. Used in scenario analysis tools.
E | definition | No | A longer description or definition of this factor. Shown in tooltips and detail panels.

Allowed values for intervenable

Yes No

Leave blank if unsure — the platform treats blank as No.

Recognized domain names

You can use any domain name you like—unrecognized names will be auto-assigned a distinct color. However, the following names have predefined color mappings for visual consistency across systems:

Focal Factors
Environmental
Ecosystem
Biology
Economics
Markets
Management
Policy
Governance
Social
Community
Stock
Technical
Technology
Indigenous
Cultural
Climate
Sustainability
Resources
Health / Safety

3. RELATIONSHIPS Tab

Each row defines a directed causal link between two factors. Row 1 must be the header row.

Column | Header Name | Required | Description
A | relationship_id | No | Optional unique identifier for bookkeeping, e.g. R1, R2.
B | from | No | Human-readable name of the source factor. Informational only—the platform resolves links via from_factor_id.
C | to | No | Human-readable name of the target factor. Informational only—the platform resolves links via to_factor_id.
D | from_factor_id | Yes | The factor_id of the source (cause). Must match a value in column A of the FACTORS tab.
E | to_factor_id | Yes | The factor_id of the target (effect). Must match a value in column A of the FACTORS tab.
F | polarity | No | Direction of causal influence.
G | strength | No | Magnitude of the relationship.
H | delay | No | Approximate time scale of the effect propagation.
I | definition | No | A textual description explaining the nature of this causal link.

Allowed values for polarity

same opposite

same — an increase in the source factor leads to an increase in the target (and vice versa). opposite — an increase in the source leads to a decrease in the target.

Allowed values for strength

strong medium weak

Strength affects line thickness in the visualization and is used in weighted analyses.

Allowed values for delay

days months years

Delay indicates how quickly the causal effect propagates. It is used in causal diffusion simulations (probabilistic or deterministic, forward or backward), ensemble analyses, and genetic algorithm (GA) optimization.
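To make the polarity and strength semantics concrete, here is a toy deterministic forward-diffusion step. This is illustrative only: the numeric weights and decay factor are assumptions for the example, not the platform's actual simulation parameters.

```python
SIGN = {"same": 1.0, "opposite": -1.0}          # polarity -> sign of influence
WEIGHT = {"strong": 0.9, "medium": 0.5, "weak": 0.2}  # assumed edge weights


def diffuse(edges, shock, steps=3, decay=0.5):
    """Propagate `shock` (factor_id -> change) along signed, weighted edges.

    `edges` is a list of (from_id, to_id, polarity, strength) tuples; each
    step adds each source's current change, scaled by sign, weight, and
    decay, onto its target.
    """
    state = dict(shock)
    for _ in range(steps):
        nxt = dict(state)
        for src, dst, pol, stg in edges:
            nxt[dst] = nxt.get(dst, 0.0) + decay * SIGN[pol] * WEIGHT[stg] * state.get(src, 0.0)
        state = nxt
    return state
```

Note how an "opposite" edge flips the sign: a positive shock to the source pushes the target downward, exactly as described for polarity above.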

4. Example

FACTORS (example rows)

factor_id | name | domain_name | intervenable | definition
V1 | Water Temperature | Environmental | No | Average sea surface temperature in the region
V2 | Market Price | Economics | No | Average price per kilogram at first sale
V3 | Harvest Quota | Management | Yes | Total allowable catch set by regulatory body

RELATIONSHIPS (example rows)

relationship_id | from | to | from_factor_id | to_factor_id | polarity | strength | delay | definition
R1 | Water Temperature | Market Price | V1 | V2 | opposite | medium | months | Higher temperatures reduce supply quality, lowering price
R2 | Harvest Quota | Market Price | V3 | V2 | opposite | strong | months | Lower quotas restrict supply, increasing price

Tip: You can start with a small number of factors and relationships and expand later. The platform dynamically discovers all domains and relationships each time data is loaded.

Common mistakes: Mismatched factor_id references (e.g., using V10 in RELATIONSHIPS when it doesn’t exist in FACTORS), misspelled tab names (must be exactly FACTORS and RELATIONSHIPS), and forgetting to share the sheet publicly.
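These mistakes are mechanical enough to check before uploading. A minimal sketch, assuming each tab has already been read into a list of dicts keyed by header name:

```python
import re


def validate(factors, relationships):
    """Check the common failure modes above: malformed factor_ids and
    RELATIONSHIPS rows referencing factors missing from FACTORS."""
    errors = []
    ids = set()
    for row in factors:
        fid = row.get("factor_id", "")
        if not re.fullmatch(r"V\d+", fid):
            errors.append(f"bad factor_id: {fid!r}")
        ids.add(fid)
    for row in relationships:
        for col in ("from_factor_id", "to_factor_id"):
            if row.get(col) not in ids:
                errors.append(f"{col} {row.get(col)!r} not found in FACTORS")
    return errors
```

An empty result means the cross-references are consistent; anything else pinpoints the offending row values. (Tab naming and sheet sharing still need to be checked in Google Sheets itself.)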