Interactive platform for social-environmental systems mapping and causal analysis
Explore, intervene, and monitor complex system dynamics
Real-world systems are not simple chains of cause and effect. They are webs of interconnected factors where actions ripple in unexpected ways. Understanding this complexity is the first step toward effective management and intervention.
Socio-environmental systems span ecological, economic, social, and governance dimensions. A change in one area cascades through many others in ways that are hard to predict.
Effects can circle back to amplify the original cause (reinforcing loops) or counteract it (balancing loops). These loops are the engines of system behaviour.
Causes and effects are often separated by days, months, or even years. These delays make it difficult to connect actions with their consequences and can lead to over- or under-reaction.
System-level behaviours (collapses, regime shifts, resilience) emerge from the interactions among components. They cannot be understood by studying each part in isolation.
Why does this matter? Traditional approaches address problems one at a time. But in complex systems, solving one problem can create or worsen others. Systems thinking provides the tools to see the whole picture, identify leverage points, and design interventions that account for side effects and feedback.
A causal system map (also called a causal loop diagram) is a visual representation of how things influence each other within a system. It captures the cause-and-effect relationships that drive system behaviour, making hidden connections visible and explicit.
The map has two core building blocks: factors (the system's variables, drawn as nodes) and the causal relationships between them (drawn as arrows).
Here is a minimal example — a generic system with just four variables and the relationships between them:
A minimal causal system map. Population Size drives Resource Demand (same direction: more people, more demand). Higher demand reduces Resource Availability (opposite direction). Availability feeds back to support Population Size (same direction), completing a balancing loop. Management Policy acts as an external lever that can reduce demand.
Even this simple map reveals important dynamics: a balancing feedback loop that regulates population, and an external policy lever that can shift the balance.
From simple to complex. Real-world system maps typically contain dozens or even hundreds of variables and relationships spanning ecological, economic, social, and governance dimensions. SIM4Action provides the computational tools to analyse these complex maps — the same principles shown here (variables, relationships, feedback loops, leverage points) apply at any scale.
Consider a simplified fishery system. It is tempting to think of it as a simple chain: fewer regulations lead to more fishing, which leads to less fish stock. But the reality is far more complex:
A simplified fishery causal loop diagram. Green arrows represent same-direction relationships; the red arrow from Fishing Effort to Fish Stock represents an opposite-direction relationship. Notice the reinforcing loop (Fish Stock → Ecosystem Health → Fish Stock) and the balancing loop (Fish Stock → Catch → Revenue → Effort → Fish Stock).
In this diagram, reducing the Harvest Quota doesn't just reduce catch — it affects market revenue, fleet investment, fishing effort, and ultimately fish stock recovery, which feeds back into ecosystem health. A systems approach reveals these cascading effects before interventions are implemented.
Participatory systems mapping brings diverse stakeholders together — scientists, policymakers, community members, indigenous groups — to co-create a shared model of how their system works.
This approach ensures that local knowledge, scientific evidence, and governance realities are all represented in the same model — giving every voice a place in the system map.
SIM4Action guides you through three analytical stages, each supported by a dedicated lab in the platform. This workflow is inherently iterative, mirroring the principles of adaptive management:
The SIM4Action analytical cycle aligns with adaptive management: understand the system, design and test interventions, monitor outcomes, and use what you learn to refine your understanding. Each iteration deepens insight and improves management decisions.
Alignment with Adaptive Management: Adaptive management recognises that our understanding of complex socio-environmental systems is always incomplete. Rather than designing a single “optimal” plan, it prescribes a structured cycle of plan → act → monitor → learn → adjust. SIM4Action operationalises this cycle: the Diagnostics Lab builds understanding (plan), the Intervention Lab tests management strategies (act), and the Monitoring Lab designs the feedback mechanisms (monitor) that close the learning loop. Each time you return to Step 1 with new monitoring data, the system map can be updated, interventions re-evaluated, and monitoring priorities refined — creating a continuous improvement process grounded in evidence and stakeholder knowledge.
Diagnostics Lab — Explore network structure, feedback loops, and system properties
The Diagnostics Lab provides tools to explore and understand the structure of a causal systems map. By visualizing the network and its properties, you can identify key structural features that drive system behaviour.
The system map is visualized as a directed network graph using a D3.js force-directed layout. Each visual element encodes information: node colour indicates domain, edge colour indicates polarity, and edge thickness reflects relationship strength.
Illustrative example: Below is a synthetic coastal fishery system with 9 factors across 4 domains. Notice how factors from different domains are interconnected through causal relationships:
A synthetic coastal fishery system with 9 factors across 4 domains. Node colours indicate domain: green = Environmental, blue = Economics, orange = Management, purple = Social. Edge colours indicate polarity: green = same-direction (+), red = opposite-direction (−). Edge thickness reflects strength: thick = strong, medium = medium, thin = weak. Edge labels show polarity and strength explicitly.
You can filter the network by domain, relationship type, strength, and temporal scale to focus on specific aspects of the system.
Each causal relationship in the map has three key properties:
Polarity describes how the source factor affects the target:
When the source increases, the target also increases (and vice versa). Example: more fishing effort → more catch.
When the source increases, the target decreases. Example: more fishing effort → less fish stock.
Strength (strong, medium, weak) indicates the magnitude of the causal influence. In simulations, stronger relationships attract more token flow.
Temporal delay (days, months, years) captures how long it takes for the effect to materialise. A policy change may take years to affect fish populations, but days to affect market prices.
Feedback loops are closed causal chains where the effect of a factor eventually circles back to influence itself. The platform detects all feedback loops using depth-first search and classifies them by polarity:
The product of all edge polarities is positive. These loops amplify change — they drive exponential growth or decline. Example: more fish → healthier ecosystem → more fish (the Fish Stock → Ecosystem Health → Fish Stock loop), which amplifies recovery when stock is rising and accelerates collapse when it is falling.
The product of all edge polarities is negative. These loops resist change and drive the system toward equilibrium. Example: Fewer fish → less catch → lower revenue → less effort → fish recovery.
The Diagnostics Lab lets you find all loops, filter by type (reinforcing/balancing), and visualize each loop on the network.
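The loop-finding step can be sketched in a few lines. This is an illustration, not the platform's implementation: a depth-first search that enumerates simple cycles in a small directed graph (each cycle counted once, anchored at its alphabetically smallest node) and classifies each by the product of its edge polarities. The example graph encodes the two fishery loops described above.

```python
def find_loops(graph):
    """Enumerate simple cycles in a small directed graph and classify them.

    graph maps node -> list of (target, polarity), polarity +1 (same) / -1 (opposite).
    Each cycle is reported once, anchored at its alphabetically smallest node;
    self-loops are ignored. Returns a list of (cycle_nodes, kind).
    """
    loops = []

    def dfs(start, node, path, sign, visited):
        for target, pol in graph.get(node, []):
            if target == start and path:
                # Closed a cycle: positive polarity product = reinforcing.
                kind = "reinforcing" if sign * pol > 0 else "balancing"
                loops.append((path + [node], kind))
            elif target not in visited and target > start:
                dfs(start, target, path + [node], sign * pol, visited | {target})

    for start in sorted(graph):
        dfs(start, start, [], 1, {start})
    return loops

# The two loops from the fishery example above.
graph = {
    "Fish Stock": [("Catch", +1), ("Ecosystem Health", +1)],
    "Catch": [("Revenue", +1)],
    "Revenue": [("Effort", +1)],
    "Effort": [("Fish Stock", -1)],
    "Ecosystem Health": [("Fish Stock", +1)],
}
loops = find_loops(graph)
# Two loops: one reinforcing (Fish Stock <-> Ecosystem Health, all-positive product)
# and one balancing (Fish Stock -> Catch -> Revenue -> Effort -> Fish Stock,
# one negative edge makes the product negative).
```

The single opposite-polarity edge (Effort → Fish Stock) is what turns the harvest chain into a balancing loop.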
The platform uses community detection algorithms (Louvain and Girvan-Newman) to identify clusters of tightly connected factors. These clusters often correspond to subsystems or thematic groups:
Illustrative example: Using our synthetic fishery system, a community detection algorithm might identify two clusters:
Cluster detection reveals two tightly coupled subsystems. Cluster A (green) groups ecological and harvest variables with dense internal connections. Cluster B (blue) groups socio-economic and management variables. The bridge edges (orange, thick) — Catch Volume → Market Price and Market Price → Fishing Effort — are the critical inter-cluster links through which changes propagate between subsystems.
Cluster detection helps identify which parts of the system are most tightly coupled and where natural boundaries exist between subsystems. The bridge edges between clusters are especially important — they are the pathways through which changes propagate from one subsystem to another.
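The role of bridge edges can be shown with a toy sketch. This is not the platform's Louvain implementation — it only illustrates the end state of Girvan–Newman, which repeatedly removes the highest-betweenness (bridge) edges until the network falls apart into clusters. The node groupings and edge list below are assumed for illustration.

```python
def components(nodes, edges):
    """Connected components of an undirected graph (edges as unordered pairs)."""
    neigh = {n: set() for n in nodes}
    for a, b in edges:
        neigh[a].add(b)
        neigh[b].add(a)
    comps, seen = [], set()
    for n in nodes:
        if n in seen:
            continue
        comp, stack = set(), [n]
        while stack:
            m = stack.pop()
            if m not in comp:
                comp.add(m)
                stack.extend(neigh[m] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# Assumed toy groupings: an ecological cluster, a socio-economic cluster,
# and one bridge edge (Catch Volume -- Market Price) connecting them.
nodes = ["Fish Stock", "Catch Volume", "Ecosystem Health",
         "Market Price", "Fishing Effort", "Revenue"]
intra = [("Fish Stock", "Catch Volume"), ("Fish Stock", "Ecosystem Health"),
         ("Market Price", "Fishing Effort"), ("Market Price", "Revenue")]
bridge = [("Catch Volume", "Market Price")]

whole = components(nodes, intra + bridge)  # one connected system
split = components(nodes, intra)           # bridge removed -> two clusters
```

Removing the single bridge edge splits the system into its two subsystems — which is why changes can only propagate between them through that link.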
Intervention Lab — Simulate how changes propagate through the causal network
The Intervention Lab uses causal diffusion simulation to model how interventions spread through the causal network. It offers two complementary algorithms — probabilistic (stochastic random-walk tokens) and deterministic (proportional flow splitting) — in both forward (cause→effect) and backward (effect→cause) directions. An integrated genetic algorithm optimizer can automatically discover the best allocation of intervention resources to maximise impact on a target variable.
Imagine dropping a pebble into a pond. The ripples spread outward, interacting with obstacles and reflecting back. Causal diffusion works similarly: you introduce a change at one or more nodes, and watch how the effect ripples through the network. The platform offers two complementary simulation algorithms:
Probabilistic mode (agent-based): Each token is an autonomous agent that carries a positive or negative charge, travels along edges (flipping its charge on opposite-polarity edges), and makes a strength-weighted random choice at each branch point.
Deterministic mode (flow-based): Instead of individual tokens making random choices, a continuous flow of causal influence splits proportionally by normalised edge strength at every branch point.
When to use which? Probabilistic mode captures the inherent uncertainty in complex systems — run it many times (ensemble mode) to build confidence intervals. Deterministic mode gives a clean, repeatable “expected value” signal — ideal for single-scenario exploration, backward analysis, and optimisation.
Token diffusion illustrated. This diagram traces the propagation path only. 5 positive tokens are injected at Harvest Quota. On red opposite-direction edges the charge flips: +5 → −5 at Fishing Effort, then −5 → +5 at Fish Stock (the “double flip”). On green same-direction edges the charge is preserved. At Catch Volume, tokens split probabilistically: 60% to Market Price (strong, wt = 3) and 40% to Community Wellbeing (medium, wt = 2). Node colours reflect the token charge: red border = negative, green border = positive, orange border = injection point.
At each simulation step, every active token advances along its current edge, flips its charge if it arrives via an opposite-polarity edge, and selects its next outgoing edge by a strength-weighted random draw.
Step-by-step walkthrough: Let’s trace what happens when we inject 10 positive tokens at Harvest Quota (simulating a quota increase) using the same synthetic coastal fishery system. The diagram below highlights the propagation path (bright nodes with step numbers) against the full network (dimmed nodes):
Token propagation through the full fishery network. Bright nodes with step numbers show the propagation path from Harvest Quota; dimmed nodes (Sea Surface Temp., Water Quality, Spatial Protection) are part of the system but not on this intervention path. Red thick edges = opposite-direction path (charge flips). Green thick edges = same-direction path (charge preserved). Grey thin edges = non-path connections. The dashed grey edge (Market Price → Fishing Effort) is the feedback loop that would carry tokens in subsequent rounds. The table below details each step.
| Step | Location | Charge | Event |
|---|---|---|---|
| 0 | Harvest Quota | +10 | 10 positive tokens injected (simulating a quota increase). Only one outgoing edge: Harvest Quota → Fishing Effort (opposite / strong). |
| 1–5 | In transit | +10 in transit | Tokens travel along the edge. Delay = “days” = 5 simulation steps. |
| 6 | Fishing Effort | −10 | Tokens arrive. Edge is opposite, so charge flips: +10 → −10. Interpretation: quota increase → effort decreases. |
| 6 | Fishing Effort | −10 routing | Fishing Effort has one outgoing edge: Fish Stock (opposite / strong). All 10 tokens route to Fish Stock. (Market Price → Fishing Effort is an incoming edge, not outgoing.) |
| 7–16 | In transit | −10 in transit | Tokens travel Fishing Effort → Fish Stock. Delay = “months” = 10 steps. |
| 17 | Fish Stock | +10 | Tokens arrive. Edge is opposite, so charge flips again: −10 → +10. The “double flip”: less effort → stock recovers. |
| 17 | Fish Stock | +10 routing | Fish Stock has one outgoing edge: Catch Volume (same / strong). All 10 tokens route to Catch Volume. |
| 18–27 | In transit | +10 in transit | Tokens travel Fish Stock → Catch Volume (same / strong, months delay). |
| 28 | Catch Volume | +10 | Tokens arrive. Edge is same, charge preserved: +10. More fish → more catch. |
| 28 | Catch Volume | +10 routing | Catch Volume has 2 outgoing edges: Market Price (strong, wt=3, 60%) and Community Wellbeing (medium, wt=2, 40%). ~6 tokens → Market Price, ~4 → Wellbeing. |
| 29+ | Market Price / Wellbeing | +6 / +4 | Both edges are same: charge preserved. More catch → higher prices, greater wellbeing. Market Price tokens may continue to Fishing Effort via the feedback loop. |
Key insight: Two consecutive opposite-direction edges produce a net positive effect (the “double flip”). In this synthetic map, increasing the harvest quota ultimately leads to fish stock recovery because the causal chain passes through two “opposite” relationships. The simulation also reveals temporal dynamics: the effect on Fish Stock takes ~17 steps (months), while downstream effects on Catch Volume and Market Price take even longer — making trade-offs across time visible to decision-makers.
The simulation tracks node flows (accumulated positive and negative tokens at each node) and edge flows (tokens currently traversing each edge) at every time step, producing time-series data you can chart and analyse.
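The walkthrough table can be reproduced with a small event-driven sketch. This is an illustration under stated assumptions — the edge list is restricted to the traced path, the Market Price → Fishing Effort feedback edge is omitted so the run terminates, and the unspecified final-hop delay is set to 1 step — not the platform's algorithm.

```python
import heapq

# Each entry: (target, polarity +1 same / -1 opposite, delay in steps, flow share).
EDGES = {
    "Harvest Quota":  [("Fishing Effort",      -1,  5, 1.0)],
    "Fishing Effort": [("Fish Stock",          -1, 10, 1.0)],
    "Fish Stock":     [("Catch Volume",        +1, 10, 1.0)],
    "Catch Volume":   [("Market Price",        +1,  1, 0.6),   # strong, 60%
                       ("Community Wellbeing", +1,  1, 0.4)],  # medium, 40%
}

def simulate(start, amount):
    """Deterministic diffusion: a packet departing at step t arrives at
    t + delay + 1, flips sign on opposite edges, and splits by flow share."""
    net, first_arrival = {}, {}
    queue = [(0, start, float(amount))]
    while queue:
        t, node, charge = heapq.heappop(queue)
        net[node] = net.get(node, 0.0) + charge
        first_arrival.setdefault(node, t)
        for target, polarity, delay, share in EDGES.get(node, []):
            heapq.heappush(queue, (t + delay + 1, target, charge * share * polarity))
    return net, first_arrival

net, arrived = simulate("Harvest Quota", 10)
# Fishing Effort: -10 at step 6; Fish Stock: +10 at step 17 (the double flip);
# Catch Volume: +10 at step 28; then the 60/40 split gives Market Price +6
# and Community Wellbeing +4.
```

The arrival steps (6, 17, 28) match the walkthrough table, and the final split reproduces the +6/+4 accumulations.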
The Intervention Lab offers two modes of analysis, each with configurable algorithm settings:
Run a single simulation with step-by-step control. Watch tokens or flow propagate in real time. Includes play/pause, step-by-step advancement, and speed control. You can choose the algorithm (probabilistic or deterministic) and the propagation direction (forward or backward).
Ideal for exploring and understanding how a specific intervention ripples through the system, or tracing backward to discover root causes.
Run 10–1000 simulations with different random seeds to produce statistical distributions of outcomes. Uses the probabilistic algorithm (the stochastic variation is what makes ensembles meaningful). Supports both forward and backward directions. Ideal for robust decision-making with confidence intervals.
Algorithm settings are configured in the “Algorithm Settings” panel within each mode. In Scenario mode, both algorithms and both directions are available. In Ensemble mode, the algorithm is locked to Probabilistic (deterministic runs produce identical outcomes, so ensembles would be redundant), but direction is selectable.
Simulation results include token accumulation rankings, node and edge flow charts, and a controllability score:
Illustrative examples from the Harvest Quota +10 token simulation:
Top Positive Accumulations

| Rank | Variable | Net Tokens | Arrival Step |
|---|---|---|---|
| 1 | Fish Stock | +10 | 17 |
| 2 | Catch Volume | +10 | 28 |
| 3 | Market Price | +6 | 29+ |
| 4 | Community Wellbeing | +4 | 29+ |
| — | Sea Surface Temp. | 0 | — |
| — | Water Quality | 0 | — |
| — | Spatial Protection | 0 | — |
Top Negative Accumulations

| Rank | Variable | Net Tokens | Arrival Step |
|---|---|---|---|
| 1 | Fishing Effort | −10 | 6 |

No other nodes received negative tokens in this simulation.
Token accumulation summary. The tables rank all nodes by net accumulated tokens after the Harvest Quota +10 simulation. Positive accumulations (left): Fish Stock and Catch Volume receive the strongest benefit (+10 each). Market Price receives +6 and Community Wellbeing +4 after token splitting. Three environmental/management nodes are unreached. Negative accumulations (right): only Fishing Effort receives a negative effect (−10 at step 6), caused by the opposite-direction edge from Harvest Quota. The arrival step column shows temporal ordering — upstream nodes are affected first.
Node flow chart. Each line tracks the net accumulated token charge at a node over simulation steps. Fishing Effort drops to −10 at step 6 (opposite-direction edge flips charge). Fish Stock rises to +10 at step 17 (double flip). Catch Volume follows at step 28. Market Price and Community Wellbeing (dashed) arrive at step 29+ with +6 and +4 respectively. The staircase pattern reveals temporal ordering: upstream nodes are affected first, downstream nodes later.
Edge flow chart. Each line shows the number of tokens in transit on a given edge at each simulation step. HQ → FE carries all 10 tokens first (steps 1–5), then FE → FS (steps 7–16), then FS → CV (steps 18–27). After step 28, tokens split: CV → MP (~6 tokens) and CV → CW (~4 tokens). The pulse pattern shows tokens moving as a “wave front” through the network — each edge is active only during its transit window, then returns to zero as tokens arrive and move on.
Controllability gauge. Measures what fraction of the system can be influenced from the chosen intervention point. Injecting tokens at Harvest Quota reaches 6 out of 9 nodes (67%). The three unreached nodes (environmental drivers and spatial protection) have no incoming path from Harvest Quota — they influence the system but cannot be controlled through quota adjustments. A higher controllability score means the intervention has broader system reach.
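The controllability score is a simple reachability fraction, sketched here over the walkthrough's edge list (only the links needed for reachability from Harvest Quota; the full map's extra links add no path from the quota to the three unreached nodes, per the caption above).

```python
def controllability(edges, start, all_nodes):
    """Fraction of the system reachable from an intervention point
    by following outgoing causal edges (the start node counts as reached)."""
    seen, stack = {start}, [start]
    while stack:
        node = stack.pop()
        for target in edges.get(node, []):
            if target not in seen:
                seen.add(target)
                stack.append(target)
    return len(seen) / len(all_nodes)

edges = {
    "Harvest Quota":  ["Fishing Effort"],
    "Fishing Effort": ["Fish Stock"],
    "Fish Stock":     ["Catch Volume"],
    "Catch Volume":   ["Market Price", "Community Wellbeing"],
    "Market Price":   ["Fishing Effort"],
}
all_nodes = ["Harvest Quota", "Fishing Effort", "Fish Stock", "Catch Volume",
             "Market Price", "Community Wellbeing",
             "Sea Surface Temp.", "Water Quality", "Spatial Protection"]

score = controllability(edges, "Harvest Quota", all_nodes)  # 6 of 9 nodes
```

The traversal reaches 6 of the 9 nodes, reproducing the 67% gauge reading.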
These results help stakeholders compare intervention strategies, identify unintended side effects, and build consensus around preferred approaches.
The two algorithms model causal propagation at different levels of abstraction. Both respect the same network topology, polarities, strengths, and delays — they differ only in how influence is routed at branch points.
| Property | Probabilistic (Agent-Based) | Deterministic (Flow-Based) |
|---|---|---|
| Unit of influence | Discrete tokens (integer agents) | Continuous flow (fractional values) |
| Routing at branch points | Each token makes a weighted random choice among outgoing edges | Flow splits proportionally by normalised edge strength — strong (1.0), medium (0.6), weak (0.3) |
| Repeatability | Stochastic — each run produces slightly different results | Deterministic — identical results every run |
| Ensemble suitability | Ideal — variation across runs produces meaningful distributions | Not applicable — every run is identical, so ensembles add no information |
| Optimization suitability | Possible but noisy fitness landscape | Ideal — smooth, repeatable fitness landscape for genetic algorithm search |
| Best for | Ensemble analysis, Monte Carlo confidence intervals, capturing system uncertainty | Single-scenario exploration, backward root-cause analysis, GA optimization, precise comparisons |
Deterministic flow splitting. When 10.0 units of flow arrive at Catch Volume, the deterministic algorithm splits them proportionally: strong edges receive 1.0/(1.0+0.6+0.3) ≈ 53% of the flow, medium edges ≈ 32%, and weak edges ≈ 16%. There is no randomness — the split is the same every time. In probabilistic mode, each of 10 individual tokens would independently roll weighted dice, producing slightly different distributions each run.
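The deterministic split described above is a one-line normalisation, sketched here with the strength weights stated in the comparison table:

```python
# Normalised strength weights from the algorithm comparison above.
WEIGHTS = {"strong": 1.0, "medium": 0.6, "weak": 0.3}

def split_flow(amount, strengths):
    """Deterministic proportional split of incoming flow across outgoing edges,
    by normalised edge strength. No randomness: identical on every run."""
    total = sum(WEIGHTS[s] for s in strengths)
    return [amount * WEIGHTS[s] / total for s in strengths]

shares = split_flow(10.0, ["strong", "medium", "weak"])
# strong: 10*1.0/1.9 ~ 5.26 units, medium ~ 3.16, weak ~ 1.58
```

Because the split is a pure function of the edge strengths, repeated runs always agree — which is exactly what makes this mode suitable for optimisation.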
In practice: Start with deterministic mode to understand the expected causal pathways clearly. Then switch to probabilistic ensemble mode to quantify the uncertainty around those expectations. The two algorithms are complementary lenses on the same system.
Standard (forward) diffusion answers: “If I intervene here, what happens downstream?” But often the more pressing question is the reverse: “This outcome variable matters to me — what are the most effective upstream levers to influence it?”
Backward diffusion reverses the direction of propagation. Instead of following outgoing edges from intervention points, tokens or flow travel along incoming edges from a target variable of interest, tracing influence backward through the causal chain to its upstream drivers.
Question: “What happens if I change X?”
Direction: Cause → Effect
Inject at: Intervention nodes (intervenable variables)
Reveals: Downstream impacts, side effects, controllability
Question: “What drives Y?”
Direction: Effect → Cause
Inject at: Target variable(s) of interest (e.g., a focal factor)
Reveals: Root causes, influence pathways, upstream leverage points
Backward diffusion from Fish Stock. Tokens are injected at the target variable (Fish Stock) and propagate backward along incoming edges. Purple nodes are direct upstream drivers discovered in the first wave; grey nodes are indirect drivers discovered in subsequent waves. The accumulated token flow at each upstream node quantifies its relative influence on the target.
After a backward diffusion run, the platform provides an Influence Ranking — a bar chart showing the cumulative causal influence (area under the curve) of each upstream variable on the target. This ranking directly answers the question: “Which variables have the greatest influence on my outcome of interest?”
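Mechanically, backward diffusion amounts to reversing every edge and propagating as usual. A minimal sketch, using the walkthrough's edge list, traces the "waves" of drivers upstream of Fish Stock (direct drivers first, then indirect ones):

```python
def reverse_edges(edges):
    """Flip every edge so influence can be traced from effect back to cause."""
    rev = {}
    for src, targets in edges.items():
        for dst in targets:
            rev.setdefault(dst, []).append(src)
    return rev

def upstream_waves(edges, target):
    """Breadth-first waves of drivers: wave 0 = direct causes of the target,
    wave 1 = their causes, and so on."""
    rev = reverse_edges(edges)
    seen, waves, frontier = {target}, [], [target]
    while frontier:
        nxt = []
        for node in frontier:
            for src in rev.get(node, []):
                if src not in seen:
                    seen.add(src)
                    nxt.append(src)
        if nxt:
            waves.append(nxt)
        frontier = nxt
    return waves

edges = {
    "Harvest Quota":  ["Fishing Effort"],
    "Fishing Effort": ["Fish Stock"],
    "Fish Stock":     ["Catch Volume"],
    "Catch Volume":   ["Market Price", "Community Wellbeing"],
    "Market Price":   ["Fishing Effort"],
}
waves = upstream_waves(edges, "Fish Stock")
# Wave 0: Fishing Effort (direct driver); wave 1: Harvest Quota and Market Price;
# wave 2: Catch Volume (reached through the feedback loop).
```

In the full platform the backward run also carries token flow, so each discovered driver gets a quantitative influence score rather than just a wave number.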
Use cases for backward diffusion include root-cause analysis, mapping influence pathways, and identifying upstream leverage points for a target variable.
When designing interventions, a fundamental question arises: “Given a limited budget of resources, how should I distribute them across available intervention points to maximise the impact on my target variable?”
The Token Allocation Optimizer answers this question automatically using a genetic algorithm (GA) — an evolutionary search technique inspired by natural selection. Rather than testing every possible allocation (which is combinatorially infeasible), the GA evolves a population of candidate allocations over many generations, selecting the fittest, recombining their features, and introducing random mutations to explore the search space efficiently.
Given B total tokens (your resource budget) and N eligible intervention nodes, find the allocation [b1, b2, … bN] where b1 + b2 + … + bN = B that maximises the cumulative causal effect (area under the flow curve) on a chosen target variable over a given time horizon.
A population of random allocations (individuals) is created. Each individual is evaluated by running a deterministic diffusion simulation and measuring the cumulative effect on the target. The fittest individuals are selected to produce the next generation through crossover (blending two allocations) and mutation (shifting tokens between nodes). Over 50–200 generations, the population converges on the optimal allocation.
Genetic algorithm optimisation cycle. Starting from random allocations, each generation evaluates fitness (cumulative effect on the target variable via deterministic diffusion), selects the fittest, recombines and mutates them, and repeats. The process converges toward the allocation that maximises causal impact.
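The evolutionary cycle can be sketched as a self-contained toy. Everything here is an illustrative stand-in: the fitness function is a simple linear score per node (the real optimizer runs a deterministic diffusion for each evaluation), and the operators are the plainest possible selection, crossover, and one-token mutation.

```python
import random

def optimise(budget, n_nodes, fitness, pop_size=30, generations=60, seed=42):
    """Toy genetic algorithm over token allocations that sum to `budget`.
    `fitness` stands in for the deterministic diffusion run used by the platform."""
    rng = random.Random(seed)

    def random_alloc():
        # Random composition of the budget across n_nodes via sorted cut points.
        cuts = sorted(rng.randint(0, budget) for _ in range(n_nodes - 1))
        return [b - a for a, b in zip([0] + cuts, cuts + [budget])]

    def mutate(alloc):
        alloc = alloc[:]
        i, j = rng.randrange(n_nodes), rng.randrange(n_nodes)
        if alloc[i] > 0:            # shift one token between nodes
            alloc[i] -= 1
            alloc[j] += 1
        return alloc

    def crossover(a, b):
        child = [rng.choice(pair) for pair in zip(a, b)]
        while sum(child) > budget:  # repair: child must spend exactly the budget
            i = rng.randrange(n_nodes)
            if child[i] > 0:
                child[i] -= 1
        while sum(child) < budget:
            child[rng.randrange(n_nodes)] += 1
        return child

    pop = [random_alloc() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]   # elitism keeps the best allocations
        pop = elite + [mutate(crossover(rng.choice(elite), rng.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

# Hypothetical per-token effect of each node on the target (assumed numbers):
effect = [0.2, 1.0, 0.5]
fitness = lambda alloc: sum(t * e for t, e in zip(alloc, effect))
best = optimise(budget=20, n_nodes=3, fitness=fitness)
```

With a linear fitness the true optimum is all-in on the highest-effect node; the GA homes in on (or near) that allocation while always respecting the budget constraint.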
Optimizer configuration:
| Setting | Description | Default |
|---|---|---|
| Target Node | The variable whose cumulative effect you want to maximise. Typically a focal factor. | — |
| Optimisation Goal | Maximise positive (increase the target), maximise absolute (largest effect regardless of sign), or minimise negative (reduce the target) | Maximise positive |
| Direction | Forward (allocate tokens at upstream nodes to affect the target) or Backward (discover which upstream nodes matter most) | Forward |
| Total Budget | Total number of tokens to distribute across eligible nodes | 100 |
| Time Steps | Number of simulation steps to run for each fitness evaluation | 200 |
| Eligible Nodes | Which nodes can receive tokens. Defaults to intervenable variables; can be expanded. | Intervenable nodes |
| Population Size | Number of candidate allocations per generation | 50 |
| Generations | Maximum number of evolutionary cycles | 100 |
Optimizer outputs include the best allocation found and its cumulative effect on the target variable.
Why genetic algorithms? The token allocation problem is a constrained combinatorial optimisation problem. With 10 eligible nodes and a budget of 100 tokens, there are over four trillion possible allocations (C(109, 9) ≈ 4.3 × 10¹²). Exhaustive search is infeasible. The GA efficiently searches this space by exploiting the structure of the problem — allocations that are close to the optimum in “genotype space” tend to have similar fitness, allowing the evolutionary process to home in on good solutions within 50–200 generations (typically seconds of computation).
Technical note: The GA runs entirely in your browser using a Web Worker thread, so the UI remains responsive during optimisation. The deterministic diffusion algorithm is used internally for fitness evaluation, ensuring smooth, repeatable fitness landscapes that the GA can navigate efficiently.
Monitoring Lab — Identify key indicators and design monitoring programs
The Monitoring Lab uses network centrality analysis to identify the most influential, strategically positioned, and informative variables in the system. These variables are prime candidates for monitoring and evaluation programs.
Not all variables in a system are equally important. Some factors sit at critical junctures in the causal network — they influence many others, bridge different subsystems, or propagate changes widely. These are the variables you most want to monitor.
Centrality metrics, borrowed from social network analysis (SNA) and graph theory, quantify the structural importance of each node in the network. By ranking variables by their centrality, you can prioritise monitoring resources for maximum insight.
Practical implication: Rather than trying to monitor everything (which is expensive and often infeasible), centrality analysis identifies the minimum set of “sentinel” variables that, if monitored, give you the best picture of overall system health.
Counts the number of direct connections (in + out). High-degree nodes are the most connected factors.
Q: Which variables have the most direct causal connections?
Measures how often a node lies on the shortest path between other nodes. High-betweenness nodes are bridges between subsystems.
Q: Which variables are bottlenecks or bridges in the system?
Measures how close a node is, on average, to all other nodes. High-closeness nodes can reach (or be reached by) the rest of the system quickly.
Q: Which variables can influence the whole system most rapidly?
A node is important if it is connected to other important nodes. This captures influence that propagates through the network.
Q: Which variables are connected to the most influential parts of the system?
Similar to eigenvector but gives every node a baseline importance. Accounts for both direct and indirect paths with attenuation over distance.
Q: Which variables have the broadest total influence, direct and indirect?
Worked example: Applying all five metrics to our synthetic fishery network reveals how different metrics highlight different variables. The diagram shows degree centrality (raw count of connections) for each node:
The fishery network annotated with degree centrality (raw count of in + out connections). Fish Stock (degree: 5) and Fishing Effort (degree: 4) have the most connections. The table below compares all five raw centrality metrics.
| Variable | Degree | Betweenness | Closeness | Eigenvector | Katz |
|---|---|---|---|---|---|
| Fish Stock | 5 | 0.43 | 0.47 | 0.52 | 0.58 |
| Fishing Effort | 4 | 0.32 | 0.53 | 0.44 | 0.51 |
| Market Price | 3 | 0.25 | 0.40 | 0.36 | 0.42 |
| Catch Volume | 3 | 0.18 | 0.35 | 0.33 | 0.38 |
| Spatial Protection | 2 | 0.14 | 0.31 | 0.18 | 0.25 |
| Water Quality | 2 | 0.04 | 0.27 | 0.26 | 0.24 |
| Community Wellbeing | 2 | 0.00 | 0.20 | 0.21 | 0.22 |
| Sea Surface Temp. | 1 | 0.00 | 0.25 | 0.12 | 0.15 |
| Harvest Quota | 1 | 0.00 | 0.29 | 0.09 | 0.13 |
Reading this table: Degree = raw count of connections. Betweenness, closeness, eigenvector, and Katz are computed as fractions (0 to 1). Higher values = greater structural importance. Notice how different metrics spotlight different variables: Fish Stock leads on degree, betweenness, eigenvector, and Katz, while Fishing Effort has the highest closeness — it can reach (or be reached by) the rest of the system fastest.
By combining multiple centrality metrics, you can identify different types of strategically important variables:
The Comprehensive Analysis mode in the Monitoring Lab calculates all five metrics simultaneously and presents them in a sortable table, making it easy to identify variables that score highly across multiple dimensions.
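Two of these metrics are easy to sketch directly. The example below computes raw degree and Katz centrality (by fixed-point iteration) over only the edges traced in the walkthrough — the full synthetic map has additional links, so these counts differ from the illustrative table above.

```python
def degree(adj, nodes):
    """Raw degree centrality: out-links plus in-links."""
    deg = {n: len(adj.get(n, [])) for n in nodes}
    for targets in adj.values():
        for t in targets:
            deg[t] += 1
    return deg

def katz(adj, nodes, alpha=0.1, beta=1.0, iterations=100):
    """Katz centrality by fixed-point iteration: x_i = beta + alpha * sum of x_j
    over in-neighbours j. Every node gets the beta baseline; influence arriving
    over longer paths is attenuated by powers of alpha."""
    x = {n: beta for n in nodes}
    for _ in range(iterations):
        nxt = {n: beta for n in nodes}
        for src, targets in adj.items():
            for t in targets:
                nxt[t] += alpha * x[src]
        x = nxt
    return x

# Walkthrough edges only (a subset of the full synthetic fishery map).
adj = {
    "Harvest Quota":  ["Fishing Effort"],
    "Fishing Effort": ["Fish Stock"],
    "Fish Stock":     ["Catch Volume"],
    "Catch Volume":   ["Market Price", "Community Wellbeing"],
    "Market Price":   ["Fishing Effort"],
}
nodes = list(adj) + ["Community Wellbeing"]

deg = degree(adj, nodes)  # e.g. Fishing Effort: 1 out + 2 in = 3
k = katz(adj, nodes)      # source-only nodes sit at the beta baseline
```

Note how Harvest Quota, which has no incoming edges, stays at the Katz baseline — it influences the system but nothing in the map influences it.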
From all system variables, centrality analysis filters the most strategically important ones into a priority monitoring set. Different metrics reveal different types of importance.
Turning systems analysis into concrete decisions, policies, and real-world impact
Understanding a system is only valuable if it leads to better decisions. This section bridges the gap between analysis and action — showing how the insights produced by SIM4Action translate into concrete, evidence-based management strategies in the real world.
Decades of research in environmental management, public health, and development have documented a persistent implementation gap: the distance between what we know about a system and what we actually do about it. The gap persists because causal understanding typically remains fragmented across disciplines, hidden in spreadsheets, or locked in expert intuition rather than shared with decision-makers.
SIM4Action addresses this gap directly. It gives stakeholders a shared, interactive, evidence-based model where causal assumptions are explicit and testable, not hidden in spreadsheets or expert intuition. Every factor, relationship, and intervention scenario is transparent, debatable, and modifiable.
Adaptive management — the structured cycle of plan, act, monitor, learn, adjust — is widely endorsed by institutions from the IUCN to the World Bank. But it requires three capabilities that most management agencies lack:
Traditional adaptive management assumes everyone agrees on “the system.” SIM4Action makes the system model explicit, visual, and collaboratively built through participatory mapping. Stakeholders co-create the causal map, ensuring all perspectives are represented.
The Intervention Lab allows managers to test interventions computationally before implementing them in the real world. Causal diffusion (forward and backward, probabilistic and deterministic) reveals cascading effects, trade-offs, root causes, and unintended consequences — at zero cost and zero risk. The genetic algorithm optimizer can even discover the optimal allocation of resources automatically.
The Monitoring Lab uses centrality analysis to identify exactly which variables to monitor. Rather than expensive blanket monitoring programs, managers can focus resources on the sentinel indicators most likely to detect system-wide change.
The SIM4Action adaptive management cycle. The flow runs clockwise: stakeholders build a system map, analyse its structure (Diagnostics Lab), test interventions (Intervention Lab), design monitoring (Monitoring Lab), implement actions, and collect new evidence. The dashed orange feedback arrow (top) closes the learning loop — new evidence updates the system map for the next iteration.
Each SIM4Action lab produces outputs that directly inform specific types of real-world decisions:
| SIM4Action Output | Decision It Informs | Real-World Example |
|---|---|---|
| Feedback loops (Diagnostics Lab) | Identify self-reinforcing dynamics that could amplify or resist interventions | In the Great Barrier Reef, identifying a reinforcing loop between coral bleaching, algal overgrowth, and fish habitat loss led to prioritising water quality interventions over coral transplanting (Hughes et al., 2017). |
| Cluster detection (Diagnostics Lab) | Define which agencies or departments need to coordinate on cross-cutting issues | In Mediterranean fisheries, identifying that ecological and socio-economic variables form separate clusters connected by “market price” led to joint meetings between fisheries biologists and economists (Coll et al., 2013). |
| Causal diffusion results (Intervention Lab) | Compare intervention strategies (forward diffusion), identify root causes (backward diffusion), quantify trade-offs, optimise resource allocation (GA optimizer), and identify unintended side effects | In Chilean salmon aquaculture, diffusion modelling showed that regulating stocking density had stronger downstream effects on disease and water quality than regulating feed inputs — reversing the expected priority order (Niklitschek et al., 2013). |
| Centrality rankings (Monitoring Lab) | Prioritise monitoring budgets toward the most informative variables | In the North Sea, centrality analysis of a food web model identified that monitoring zooplankton biomass and herring recruitment provided 80% of the early-warning capacity at 30% of the cost of full ecosystem monitoring (Cury et al., 2005). |
| Leverage points (Combined analysis) | Focus limited resources on variables with the greatest system-wide influence | In Kenyan dryland water systems, participatory mapping revealed that “community water governance capacity” was a leverage point connecting ecological, economic, and social subsystems — leading to investment in local governance rather than infrastructure alone (Reid et al., 2016). |
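The forward-diffusion idea can be sketched in a few lines of Python. The edges below echo the salmon-aquaculture example but are invented for illustration, and the attenuation factor and update rule are assumptions rather than SIM4Action's actual algorithm: a signed pulse starts at the intervened variable and weakens as it travels outward.

```python
# Edge: (source, target, sign, weight); sign +1 = same direction, -1 = opposite.
# Variables and weights are hypothetical.
EDGES = [
    ("Stocking Density", "Disease Prevalence", +1, 0.8),
    ("Disease Prevalence", "Fish Survival", -1, 0.7),
    ("Stocking Density", "Water Quality", -1, 0.6),
    ("Water Quality", "Fish Survival", +1, 0.5),
]

def forward_diffusion(edges, start, pulse=1.0, steps=3, decay=0.5):
    """Propagate a signed pulse from `start`, attenuating at each hop."""
    impact = {start: pulse}
    frontier = {start: pulse}
    for _ in range(steps):
        nxt = {}
        for src, dst, sign, weight in edges:
            if src in frontier:
                delta = frontier[src] * sign * weight * decay
                nxt[dst] = nxt.get(dst, 0.0) + delta
        for node, delta in nxt.items():
            impact[node] = impact.get(node, 0.0) + delta
        frontier = nxt
    return impact

result = forward_diffusion(EDGES, "Stocking Density")
```

Running this shows both pathways from stocking density converging on fish survival with a net negative effect, which is the kind of cascading trade-off the Intervention Lab surfaces.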
Research across multiple domains has established principles that SIM4Action embodies:
Many tools exist for parts of the systems analysis pipeline. What makes SIM4Action distinctive is that it integrates the full adaptive management cycle into a single, accessible platform:
The bottom line: SIM4Action transforms participatory systems mapping from a one-off workshop exercise into a continuous, evidence-driven decision-support process. It makes adaptive management not just an aspiration but a practical, implementable workflow — bridging the gap between understanding complexity and acting on it.
Causal system maps can be constructed through a spectrum of methods — from purely empirical literature synthesis to fully participatory co-design. SIM4Action introduces a new approach: agentic AI extraction, which can generate a comprehensive, evidence-based causal map in under an hour. This section explains the landscape of map-building methods and provides a detailed overview of the SIM4Action agentic workflow.
There is no single “correct” way to build a causal system map. Methods vary along a spectrum from fully empirical (researcher-driven, evidence-extracted) to fully participatory (stakeholder-driven, experience-based). Each approach has trade-offs in cost, time, richness, and legitimacy:
| Method | Description | Strengths | Limitations |
|---|---|---|---|
| Literature Review | Researchers extract variables and causal relationships from published scientific papers, reports, and meta-analyses. | Strong evidence base; reproducible; peer-reviewed sources | Slow (weeks to months); limited to what is published; may miss local knowledge and emerging dynamics |
| Expert Interviews | Semi-structured interviews with domain experts (scientists, managers, practitioners) to elicit causal relationships. | Captures nuance and tacit knowledge; can probe mechanisms | Time-intensive; subject to individual bias; small sample sizes |
| Surveys & Questionnaires | Structured instruments distributed to a broader set of stakeholders to identify perceived causal links and priorities. | Scalable; can quantify consensus and disagreement | Shallow depth per response; requires careful design; low response rates common |
| Participatory Workshops | Facilitated sessions where diverse stakeholders co-create the causal map in real time using sticky notes, whiteboards, or digital tools. | Integrates diverse knowledge; builds ownership and consensus; captures cross-domain connections | Expensive to organise; influenced by group dynamics; requires skilled facilitation |
| Co-Design & Iterative Refinement | Multiple rounds of mapping, review, and revision with stakeholder groups over weeks or months. | Highest legitimacy; deeply validated; captures evolving understanding | Most time and resource intensive; risk of participation fatigue |
| Generative AI Extraction | An agentic AI workflow researches the system, extracts variables and relationships from the evidence base, and produces a quality-checked causal map automatically. | Fast (<1 hour); comprehensive evidence base; consistent methodology; fully traceable | Lacks lived experience and local knowledge; should be validated by domain experts and/or stakeholders |
These methods are complementary, not competing. The most robust system maps combine multiple approaches. A generative AI extraction can provide a rapid, evidence-based starting point that is then enriched, validated, and refined through expert review and participatory workshops. This hybrid strategy achieves both rigour (from the literature) and relevance (from stakeholder knowledge) in a fraction of the time required by purely manual approaches.
SIM4Action includes an automated agentic deep research workflow that transforms a plain-language system description into a comprehensive causal system map. Rather than relying on a single AI prompt, the workflow orchestrates a team of specialised AI agents — each with a defined role — through a multi-phase pipeline of research, extraction, review, and quality control.
The approach mirrors how a human research team would work:
This separation of roles follows the same principles as academic peer review: the agent that extracts variables is not the same agent that reviews them, ensuring independent quality checks at every stage.
Input & output. You provide a natural-language description of the system (e.g., “The Northern Territory mud crab fishery in Australia”). The workflow returns an Excel workbook containing a complete FACTORS sheet (40–120 variables with IDs, names, domains, definitions) and a RELATIONSHIPS sheet (60–200 causal links with polarity, strength, delay, and mechanistic explanations) — ready for direct import into the SIM4Action platform. A typical run takes 30–60 minutes and costs approximately $25–55 in API usage.
The workflow follows a structured pipeline of five phases. Each phase produces auditable intermediate reports, and quality gates ensure that problems are caught and corrected before propagating downstream.
The Research Leader analyses the system description and creates a tailored research plan that divides the system into 6–8 thematic domains (e.g., Environmental-Ecosystem, Fish Stocks, Economics-Markets, Management, Social Impacts, Indigenous Knowledge). For each domain, it generates 3–5 targeted, search-optimised research questions.
The Field Researchers then investigate each domain in parallel, conducting deep web searches and producing a Domain Research Brief for each — containing key findings, spotted variables, observed relationships, important dynamics, knowledge gaps, and source citations. The Research Leader synthesises all briefs into a comprehensive System Overview Report and checks for research gaps. If significant gaps are found, targeted follow-up research is conducted (up to 2 gap-fill iterations).
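The plan-then-parallel-research pattern can be sketched as follows. Here `research_domain` is a hypothetical stand-in for a Field Researcher agent (the real workflow performs deep web searches via LLM calls); the brief's fields follow the Domain Research Brief contents described above.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative domains from a research plan; the real plan is system-specific.
DOMAINS = ["Environmental-Ecosystem", "Fish Stocks", "Economics-Markets", "Management"]

def research_domain(domain):
    """Stand-in for a Field Researcher agent returning a Domain Research Brief."""
    return {
        "domain": domain,
        "key_findings": [],
        "variables": [],
        "relationships": [],
        "knowledge_gaps": [],
        "sources": [],
    }

def run_research_phase(domains):
    """Investigate all domains in parallel; briefs then go to the Research Leader."""
    with ThreadPoolExecutor(max_workers=len(domains)) as pool:
        briefs = list(pool.map(research_domain, domains))
    return briefs

briefs = run_research_phase(DOMAINS)
```

The Research Leader's synthesis and gap-check step would then consume `briefs`, triggering follow-up research when `knowledge_gaps` remain.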
A Deep Analyst extracts all system variables from the research corpus, applying strict quality rules:
The Research Leader reviews the variable list against an 8-point checklist. If the review fails, specific feedback is sent back to the Deep Analyst for revision. This review–revise loop repeats up to 3 times, mirroring academic peer review.
With a validated variable list, a Deep Analyst identifies all causal relationships. For each relationship, the analyst determines the direction of causality, the polarity (same or opposite), the strength (strong, medium, or weak), the time delay (days to decades), and a mechanistic explanation grounded in the evidence.
Relationships are extracted systematically: first those involving focal variables, then within-domain, then cross-domain connections. The Research Leader reviews against a 10-point checklist, and the review–revise loop repeats up to 3 times.
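The review–revise control flow used in these phases can be sketched as a simple loop. The `extract` and `review` callables below are stand-ins for the Deep Analyst and Research Leader agents; the cap of three rounds matches the workflow description.

```python
MAX_REVISIONS = 3  # the workflow caps review-revise iterations at three

def review_revise(extract, review, max_rounds=MAX_REVISIONS):
    """Run an extraction step, then an independent review; revise on failure.

    `extract(feedback)` returns a draft; `review(draft)` returns
    (passed, feedback). Both are stand-ins for separate AI agents.
    """
    feedback = None
    for round_no in range(1, max_rounds + 1):
        draft = extract(feedback)
        passed, feedback = review(draft)
        if passed:
            return draft, round_no
    return draft, max_rounds  # best effort after exhausting revisions

# Toy stubs: the review fails once, then passes on the revised draft.
attempts = []
def extract(feedback):
    attempts.append(feedback)
    return {"variables": len(attempts)}
def review(draft):
    return (draft["variables"] >= 2, "add more variables")

result, rounds = review_revise(extract, review)
```

The key design point, per the workflow, is that `extract` and `review` are different agents, so the check is independent rather than self-graded.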
The Research Leader traces all feedback loops in the completed map — circular chains where a change in one variable eventually comes back to affect itself. Each loop is classified as reinforcing (the chain amplifies the original change) or balancing (the chain counteracts it).
The map is also analysed for structural gaps (orphan variables, under-connected domains, missing cross-domain links) and thematic gaps (are climate impacts represented? are management actions connected to what they manage? is Indigenous knowledge integrated?). If significant gaps are found, targeted research fills them (up to 2 iterations).
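Loop tracing and classification can be illustrated with the minimal four-variable map from the introduction. A loop is reinforcing when it contains an even number of "opposite" links and balancing otherwise; the cycle-enumeration below is a naive sketch suitable only for small maps.

```python
# The minimal example map: Population Size -> Resource Demand ->
# Resource Availability -> Population Size, plus a policy lever.
CAUSAL_LINKS = [
    ("Population Size", "Resource Demand", "same"),
    ("Resource Demand", "Resource Availability", "opposite"),
    ("Resource Availability", "Population Size", "same"),
    ("Management Policy", "Resource Demand", "opposite"),
]

def find_cycles(edges):
    """Naively enumerate simple cycles in a small directed causal map."""
    adj = {}
    for src, dst, pol in edges:
        adj.setdefault(src, []).append((dst, pol))
    cycles = []
    def dfs(start, node, path, seen):
        for nxt, pol in adj.get(node, []):
            if nxt == start:
                cycles.append(path + [(node, nxt, pol)])
            elif nxt not in seen and nxt > start:  # canonical start avoids duplicates
                dfs(start, nxt, path + [(node, nxt, pol)], seen | {nxt})
    for start in sorted(adj):
        dfs(start, start, [], {start})
    return cycles

def classify_loop(loop_edges):
    """Even number of 'opposite' links => reinforcing; odd => balancing."""
    opposites = sum(1 for _, _, pol in loop_edges if pol == "opposite")
    return "reinforcing" if opposites % 2 == 0 else "balancing"
```

On this map the code finds exactly one loop, with a single "opposite" link, so it is classified as balancing — matching the description of the minimal example.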
A Deep Analyst performs a comprehensive 7-point final quality control check covering structural integrity, logical coherence, naming quality, domain balance, statistical distributions, definition quality, and evidence grounding. The validated data is then assembled into the final Excel workbook, and the Research Leader writes a human-readable summary report with system description, map statistics, key findings, methodology notes, and a complete source bibliography.
A single pass through an AI model — no matter how capable — is not sufficient for a task of this complexity. The workflow incorporates multiple layers of quality assurance that mirror the rigour of academic research:
Variables and relationships are extracted by one model (Claude Opus) and reviewed by a different model (Claude Sonnet) with fresh eyes and a structured checklist. Failed reviews trigger revision with specific feedback, up to 3 iterations per phase.
Dedicated gap-checking steps ensure the map is comprehensive. Research gaps trigger follow-up searches. Structural gaps (orphan variables, sparse domains) and thematic gaps (missing climate, governance, or Indigenous dimensions) are systematically identified and addressed.
The final QC checks that output distributions match empirically derived targets: 55–85% same-polarity relationships, 10–30% strong / 50–80% medium / 5–20% weak strength, and realistic delay distributions. Deviations trigger warnings.
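A distribution check of this kind could be sketched as follows, using the target ranges quoted above (the warning format and data structures are illustrative, not SIM4Action's internals).

```python
from collections import Counter

# Empirically derived target ranges from the final QC step.
TARGETS = {
    "polarity_same": (0.55, 0.85),
    "strength_strong": (0.10, 0.30),
    "strength_medium": (0.50, 0.80),
    "strength_weak": (0.05, 0.20),
}

def qc_distributions(relationships):
    """Warn when observed polarity/strength shares fall outside target ranges."""
    n = len(relationships)
    pol = Counter(r["polarity"] for r in relationships)
    stg = Counter(r["strength"] for r in relationships)
    observed = {
        "polarity_same": pol["same"] / n,
        "strength_strong": stg["strong"] / n,
        "strength_medium": stg["medium"] / n,
        "strength_weak": stg["weak"] / n,
    }
    warnings = [
        f"{key}: {value:.2f} outside target {lo}-{hi}"
        for key, value in observed.items()
        for lo, hi in [TARGETS[key]]
        if not lo <= value <= hi
    ]
    return observed, warnings
```

A map whose shares fall inside every range passes silently; any deviation produces a warning for human review rather than a hard failure, matching the behaviour described above.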
Every intermediate step produces a saved report: research plan, domain briefs, system overview, variable list, relationship list, feedback loop analysis, quality control report, and final summary. The entire chain of evidence is traceable and reviewable.
This multi-agent, multi-pass approach produces a significantly more thorough and reliable causal map than any single prompt or single-pass extraction could achieve.
The Research Leader tailors the research domains to each specific system, but follows a standard framework designed to ensure comprehensive coverage of all dimensions of a socio-environmental system:
| Domain | What It Covers | Example Variables |
|---|---|---|
| Focal Factors | The 1–3 most central variables the entire system revolves around | Fish Stock Biomass, Prawn Recruitment |
| Environmental-Ecosystem | Physical and biological conditions: climate, oceanography, habitat, biodiversity | Sea Surface Temperature, Dissolved Oxygen, Coral Cover |
| Stock | Population dynamics, recruitment, growth, mortality, species interactions | Spawning Stock Biomass, Natural Mortality Rate, Bycatch Volume |
| Technical | Harvesting technology, gear types, vessel capacity, innovation | Fleet Size, Gear Selectivity, Fuel Consumption |
| Economics-Markets | Prices, costs, profitability, trade, supply chains, investment | Ex-Vessel Price, Operating Costs, Import Competition |
| Management | Regulations, governance, compliance, research, decision-making | Total Allowable Catch, MPA Coverage, Compliance Rate |
| Social | Community wellbeing, employment, food security, demographics, equity | Fisher Employment, Community Dependence, Recreational Participation |
| Indigenous | Traditional ecological knowledge, cultural practices, rights, co-management | Traditional Harvest Access, Cultural Site Condition, Indigenous Co-Management Involvement |
For non-fishery systems, the domains adapt accordingly. A freshwater system might replace “Stock” with “Hydrology”; an urban system might replace “Environmental-Ecosystem” with “Built Environment” and “Public Health.” The Research Leader customises these based on the system description.
A successful run produces the following deliverables:
The primary output is an Excel workbook with two sheets:
- FACTORS: factor_id (V1, V2, …), name, domain_name, intervenable (true/false), and a definition explaining what the variable represents and how it could be measured.
- RELATIONSHIPS: relationship_id, from / to variable names and IDs, polarity (same/opposite), strength (strong/medium/weak), delay (days/months/years/decade), and a definition explaining the causal mechanism with evidence citations.

This workbook can be directly imported into the SIM4Action platform for immediate analysis using the Diagnostics, Intervention, and Monitoring labs.
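A minimal validation pass over imported relationship rows might look like this. The exact column names (`from_id`, `to_id`, etc.) are assumptions based on the schema described above, not SIM4Action's documented import format.

```python
# Assumed required columns for the RELATIONSHIPS sheet (illustrative names).
RELATIONSHIP_COLUMNS = {
    "relationship_id", "from_id", "to_id",
    "polarity", "strength", "delay", "definition",
}

def validate_rows(rows, required, allowed_values=None):
    """Check each row for required columns and allowed categorical values."""
    errors = []
    for i, row in enumerate(rows):
        missing = required - row.keys()
        if missing:
            errors.append(f"row {i}: missing {sorted(missing)}")
        for col, allowed in (allowed_values or {}).items():
            if col in row and row[col] not in allowed:
                errors.append(f"row {i}: {col}={row[col]!r} not in {sorted(allowed)}")
    return errors
```

A check like this catches schema drift (a missing column, an unexpected polarity value) before the map reaches the analysis labs.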
Seven intermediate reports provide full transparency over the map-building process: the research plan, domain research briefs, system overview, variable list, relationship list, feedback loop analysis, and quality control report. Typical run statistics:
| Metric | Typical Range |
|---|---|
| Variables | 40–120 |
| Relationships | 60–200 |
| Domains covered | 6–8 |
| Feedback loops identified | 10–50+ |
| Execution time | 30–60 minutes |
| Cost per run | $25–55 (API usage) |
The agentic extraction workflow is designed to complement, not replace, participatory mapping. The recommended hybrid approach uses AI-generated maps as a foundation that stakeholders then refine:
Why this works. Starting with an AI-generated map means that workshops spend less time listing obvious variables and more time on the nuanced, contested, and locally specific dynamics that only human participants can provide. The AI handles the “homework” of reviewing hundreds of papers; the humans contribute the wisdom that no paper captures. The result is a map that is both evidence-rich and stakeholder-owned.
Seven principles guide the design of the agentic extraction workflow:
Sources cited throughout this primer. Arranged alphabetically by first author.