Chapter 11 Glossary of Key DAG Concepts

A simple guide for readers new to causal diagrams

11.1 Directed Acyclic Graph (DAG)

A DAG is a map of cause and effect.

  • Nodes = events or conditions
  • Arrows = what causes what
  • Directed = arrows point forward in time
  • Acyclic = arrows never loop back

A DAG shows how one thing leads to another.
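
For readers comfortable with a few lines of code, a DAG is also a very small data structure: a list of nodes plus a list of directed arrows. The sketch below uses the Python networkx library; the node names and arrows are one illustrative reading of the war model in this book, not its definitive specification.

```python
# A DAG as a data structure: nodes plus directed arrows.
# Node names and arrows are illustrative, loosely following the war model.
import networkx as nx

dag = nx.DiGraph()
dag.add_edges_from([
    ("Z attacks I", "Alliance pressure"),    # exposure -> mediator
    ("Alliance pressure", "U attacks"),      # mediator -> mediator
    ("U attacks", "Joint attack"),           # mediator -> outcome
    ("Alliance structure", "Z attacks I"),   # confounder -> exposure
    ("Alliance structure", "U attacks"),     # confounder -> U's response
])

# "Acyclic": no chain of arrows ever loops back to where it started.
print(nx.is_directed_acyclic_graph(dag))   # True
# Everything downstream of the exposure:
print(nx.descendants(dag, "Z attacks I"))
```

Every concept in this glossary is, at bottom, a statement about nodes and arrows in a structure like this one.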


11.2 Node

A node is the basic building block of a DAG. It represents a single event, condition, decision, or state of the world — the “dot” in the diagram.

11.2.1 ✔ What a node can represent

  • An action (e.g., “Z attacks I”)
  • A condition (e.g., “Alliance pressure”)
  • A behavior (e.g., “U mobilizes”)
  • A political factor (e.g., “Domestic politics”)
  • A measurable variable (e.g., “Smoking”)
  • An outcome (e.g., “Lung cancer”)

11.2.2 ✔ Why nodes matter

Every other concept in a DAG — exposure, outcome, mediator, confounder, proxy, collider — is simply a node playing a specific causal role. Nodes are the nouns of causal diagrams; arrows are the verbs.


11.3 Exposure (Cause)

The starting point — the thing that might cause something else.
Example: Smoking

In the war model:
Z attacks I is the exposure.


11.4 Outcome (Effect)

The result we care about.
Example: Lung cancer

In the war model:
Joint attack is the outcome.


11.5 Mediator

A mediator sits in the middle of a causal chain.
It explains how or why the exposure leads to the outcome.

Example:
Exercise → Weight loss → Lower blood pressure
Weight loss is the mediator.

In the war model:
Alliance pressure and U’s attack are mediators.


11.6 Confounder

A confounder is a variable that causes both the exposure and the outcome.
It creates a backdoor path that can distort the true causal effect.

11.6.1 ✔ A confounder must:

  1. Cause the exposure
  2. Cause the outcome
  3. Not be caused by the exposure

11.6.2 Example (Confounder case)

Alcohol → Smoking → Lung cancer
Alcohol → Lung cancer

Alcohol is a confounder of the smoking → lung cancer relationship.

11.6.3 Regression clue

If you adjust for smoking and alcohol still shows a residual odds ratio (OR) greater than 1,
→ alcohol has its own causal effect
→ alcohol is a confounder.
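
To make this clue concrete, here is a minimal simulation sketch in Python. All of the numbers are invented for illustration; the only point is that when alcohol has its own arrow into lung cancer, its odds ratio stays above 1 even after smoking is held fixed.

```python
# Invented-numbers sketch: alcohol as a CONFOUNDER of smoking -> lung cancer.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

alcohol = rng.random(n) < 0.30                                       # Alcohol use
smoking = rng.random(n) < (0.20 + 0.40 * alcohol)                    # Alcohol -> Smoking
cancer  = rng.random(n) < (0.02 + 0.10 * smoking + 0.05 * alcohol)   # both cause cancer

def odds_ratio(exposed, diseased):
    a = np.sum( exposed &  diseased)
    b = np.sum( exposed & ~diseased)
    c = np.sum(~exposed &  diseased)
    d = np.sum(~exposed & ~diseased)
    return (a * d) / (b * c)

print("Crude alcohol OR:", round(odds_ratio(alcohol, cancer), 2))
for smokes in (False, True):
    keep = smoking == smokes
    label = "smokers" if smokes else "non-smokers"
    print(f"Alcohol OR among {label}:", round(odds_ratio(alcohol[keep], cancer[keep]), 2))
# The within-stratum ORs stay clearly above 1:
# alcohol has its own effect, so it behaves as a confounder.
```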

In the war model:
Alliance structure is a confounder because it affects both Z’s attack and U’s response.


11.7 Proxy

A proxy is a variable that predicts the outcome but does not cause it.
It is downstream of the true cause.

11.7.1 ✔ A proxy is:

  • Downstream of the true cause
  • Associated with the outcome
  • Not itself causal

11.7.2 Example (Proxy case)

Smoking → Alcohol
Smoking → Lung cancer

Alcohol drinkers show a higher crude risk of lung cancer only because smokers tend to drink more;
alcohol itself does not cause lung cancer.

Alcohol is a proxy, not a confounder.

11.7.3 Regression clue

If you adjust for smoking and alcohol’s OR becomes 1.0,
→ alcohol has no independent effect
→ alcohol is a proxy.
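
The same kind of sketch, again with invented numbers, shows the proxy pattern: smoking drives both drinking and cancer, alcohol has no arrow of its own, and its crude odds ratio collapses toward 1 once smoking is held fixed.

```python
# Invented-numbers sketch: alcohol as a PROXY for smoking.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

smoking = rng.random(n) < 0.25                       # Smoking
alcohol = rng.random(n) < (0.20 + 0.40 * smoking)    # Smoking -> Alcohol (a marker)
cancer  = rng.random(n) < (0.02 + 0.10 * smoking)    # Smoking -> Cancer; alcohol has no arrow

def odds_ratio(exposed, diseased):
    a = np.sum( exposed &  diseased)
    b = np.sum( exposed & ~diseased)
    c = np.sum(~exposed &  diseased)
    d = np.sum(~exposed & ~diseased)
    return (a * d) / (b * c)

print("Crude alcohol OR:", round(odds_ratio(alcohol, cancer), 2))   # well above 1
for smokes in (False, True):
    keep = smoking == smokes
    label = "smokers" if smokes else "non-smokers"
    print(f"Alcohol OR among {label}:", round(odds_ratio(alcohol[keep], cancer[keep]), 2))  # ~1.0
# Once smoking is held fixed, alcohol adds nothing: it is only a proxy.
```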

In the war model:
U’s mobilization is a proxy: it predicts escalation but does not itself cause it.


11.8 Effect Modifier

An effect modifier changes the strength of a causal relationship.

Example:
Exercise lowers blood pressure more in older adults than in younger adults.

In conflict:
Relative power can modify how strongly an attack leads to escalation.
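
A minimal sketch, with invented blood-pressure numbers, shows what effect modification looks like in data: the same exposure produces a different effect size in each subgroup.

```python
# Invented-numbers sketch of effect modification (age modifies the exercise effect).
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

older    = rng.random(n) < 0.5        # the effect modifier
exercise = rng.random(n) < 0.5        # the exposure
# Exercise lowers blood pressure by about 5 mmHg in younger adults
# and by about 12 mmHg in older adults (plus noise).
drop = np.where(older, 12.0, 5.0)
blood_pressure = 130 + 10 * older - drop * exercise + rng.normal(0, 8, n)

for group, name in ((~older, "younger"), (older, "older")):
    effect = (blood_pressure[group & ~exercise].mean()
              - blood_pressure[group & exercise].mean())
    print(f"Average drop with exercise, {name} adults: {effect:.1f} mmHg")
# Same exposure, same outcome, but a different effect size in each group.
```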


11.9 Collider

A collider is a node where two arrows collide: two independent causes point into the same effect.

You can think of it as a “co-incidence” node: conditioning on it creates a coincidental (spurious) association between its causes.

Example:
Talent → Getting hired ← Hard work
“Getting hired” is a collider.

Conditioning on a collider creates spurious associations.

📦 Collider Bias: The Hiring Example
Why conditioning on a collider creates fake relationships

A collider is a node where two arrows collide — two different causes point into the same effect. When we condition on a collider (for example, by restricting our analysis to people who share that effect), we accidentally create a spurious association between its causes, even if they are completely unrelated in reality.

The DAG

Talent → Getting hired ← Hard work
  • Talent does not cause hard work.
  • Hard work does not cause talent.
  • They can be completely independent in the general population.
  • Both increase the chance of getting hired.

“Getting hired” is the collider.

What happens when we condition on the collider?

If we look only at people who were hired, we create a false relationship:

  • Among the hired, someone with high talent may need less hard work to get hired.
  • Someone with lower talent may need more hard work to get hired.

This produces a negative association between talent and hard work within the hired group, even though no such relationship exists in the full population.

Why this matters

This example shows the essence of collider bias:

  • Before conditioning:
    Talent ⟂ Hard work (independent)

  • After conditioning on “Getting hired”:
    Talent ↔︎ Hard work (spurious association)

The two causes appear related only because we restricted our view to the outcome they both influence.

General lesson

Avoid conditioning on a collider. It creates relationships that do not exist and can mislead causal interpretation.
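
A small simulation makes this tangible. In the sketch below (invented numbers), talent and hard work are generated independently, hiring depends on both, and the correlation between them is computed before and after restricting to the hired group.

```python
# Invented-numbers sketch of collider bias (the hiring example).
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

talent    = rng.normal(size=n)     # independent of hard work by construction
hard_work = rng.normal(size=n)
hired = (talent + hard_work + rng.normal(size=n)) > 1.5   # both raise the chance of hiring

print("Correlation, full population:",
      round(np.corrcoef(talent, hard_work)[0, 1], 2))                 # about 0.00
print("Correlation, hired only:     ",
      round(np.corrcoef(talent[hired], hard_work[hired])[0, 1], 2))   # clearly negative
# Restricting to the hired (conditioning on the collider) manufactures
# a negative association that does not exist in the full population.
```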

In conflict: If two countries both respond to a shared threat, the “response” node becomes a collider.

Collider Bias: The Shared‑Threat Example (War & Peace Edition)
How conditioning on a collider can create fake relationships between countries

In conflict analysis, a collider appears when two different causes point into the same event. If we restrict our attention to that event — for example, by analyzing only cases where a certain response occurred — we can accidentally create a spurious association between the causes, even if they are unrelated in reality.

The DAG

Country Z perceives threat → Joint military response ← Country U perceives threat
  • Z’s threat perception does not cause U’s threat perception.
  • U’s threat perception does not cause Z’s threat perception.
  • They can be completely independent (different intelligence sources, different borders, different histories).
  • Both can independently trigger a joint military response (e.g., coordinated mobilization, joint patrols, alliance activation).

“Joint military response” is the collider.

What happens when we condition on the collider?

If we look only at cases where Z and U both responded militarily, we create a false relationship:

  • If Z perceived a high threat, U may have needed only a moderate threat to join the response.
  • If Z perceived a low threat, U may have needed a high threat to justify joining.

Within the selected group (“countries that responded”), Z’s and U’s threat perceptions appear negatively associated, even though they are not related in the general population. This is pure collider bias.

Why this matters for conflict analysis

Before conditioning:
Z’s threat perception ⟂ U’s threat perception
(independent)

After conditioning on “joint response”:
Z’s threat perception ↔︎ U’s threat perception
(spurious association)

A researcher might mistakenly conclude:

  • “Z and U coordinate threat assessments,” or
  • “Z’s threat perception predicts U’s threat perception,” or
  • “U only responds when Z is highly threatened,”

even though none of these are true.

The false relationship arises only because we restricted our analysis to cases where both countries responded — the collider.

General lesson

Avoid conditioning on a collider such as:

  • “countries that mobilized,”
  • “alliances that activated,”
  • “conflicts that escalated,”
  • “cases where both sides retaliated.”

These restrictions can create fake causal patterns that do not exist in the real world.


11.10 Conditioning

To “condition” on a variable means to restrict your analysis to a certain value of that variable, or to adjust for it in a statistical model.

  • Conditioning on a confounder is good (it blocks bias).
  • Conditioning on a collider is bad (it creates bias).

11.11 Backdoor Path

An unwanted path that connects the exposure and the outcome through a shared cause, creating a biased association.
Confounders open backdoor paths.

Example:
Smoking → Lung cancer
Smoking → Carrying a lighter

Carrying a lighter looks associated with lung cancer, but the lighter is not a cause. The association travels along the backdoor path Carrying a lighter ← Smoking → Lung cancer, which smoking opens.


11.12 Blocking a Path

You block a causal path by adjusting for the right variable.

  • Adjusting for confounders → good (blocks a biasing backdoor path)
  • Adjusting for colliders → bad (opens a spurious path)
  • Intervening on mediators → changes the mechanism itself

In the war model, blocking the path
Z attacks → U attacks → Joint attack
is the key to de‑escalation.


11.13 Intervention

A deliberate change to a node in the DAG.

Examples:
- Ban smoking → reduces lung cancer
- Add peacekeepers → reduces escalation
- Create mediation triggers → reduces alliance pressure

Interventions cut or redirect arrows.
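
In graph terms, an intervention can be sketched as “graph surgery”: the arrows pointing into the intervened node are cut, because its value is now set from outside rather than by its usual causes. Here is a minimal sketch, reusing the illustrative networkx DAG from Section 11.1.

```python
# Minimal sketch of an intervention as graph surgery on the illustrative DAG.
import networkx as nx

dag = nx.DiGraph([
    ("Z attacks I", "Alliance pressure"),
    ("Alliance pressure", "U attacks"),
    ("U attacks", "Joint attack"),
    ("Alliance structure", "Z attacks I"),
    ("Alliance structure", "U attacks"),
])

def intervene(graph, node):
    """Return a copy of the DAG with every arrow into `node` removed."""
    g = graph.copy()
    g.remove_edges_from(list(g.in_edges(node)))
    return g

# Example: outside mediation fixes "Alliance pressure" regardless of what Z does.
after = intervene(dag, "Alliance pressure")
print(list(after.in_edges("Alliance pressure")))   # [] : the arrow from Z is cut
```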


Every DAG concept used in Chapters 1–4 is now defined

The four chapters rely on a specific, bounded set of causal‑diagram concepts:

Core structural concepts
- Nodes
- Arrows
- Directed
- Acyclic
- Exposure
- Outcome

Causal‑path concepts
- Mediator
- Confounder
- Proxy
- Effect modifier
- Collider
- Conditioning
- Backdoor path
- Blocking a path
- Intervention

Interpretive concepts
- Escalation pathway
- Resolution pathway
- Causal architecture
- Leverage points


This book is aimed at the general public — people who may have:

  • never seen a DAG
  • never taken epidemiology
  • never studied causal inference

The glossary explains every concept with:

  • simple language
  • intuitive examples
  • parallel epidemiology and conflict‑analysis illustrations
  • no math
  • no jargon beyond what is necessary

This means a reader can move through the four chapters without ever feeling lost.


The first four chapters do not require:

  • d‑separation
  • front‑door adjustment
  • instrumental variables
  • selection bias
  • transportability
  • counterfactual notation
  • structural equations
  • identifiability theory

These are powerful tools, but they belong to a more advanced text.

This glossary is intentionally scoped to the concepts that appear in:

  • the War‑DAG
  • the De‑escalation DAG
  • the Peace‑DAG
  • the General Conflict‑to‑Peace DAG

Nothing is missing for those chapters.