Chapter 9: The Time‑Indexed DAG
If Power feeds back into Will, doesn’t that create a cycle and violate the acyclic requirement of DAGs? Formally, yes—if we try to put everything into a single, timeless graph. But once we index the chain in time, the apparent cycle becomes a temporal loop, not a structural one. That’s exactly what dynamic causal models and time‑indexed DAGs are built to handle.
The key move is simple but profound: keep each time slice acyclic, and let feedback appear as arrows across time, not within a single moment.
9.1 Step 1: the static chain as a DAG
We begin with our inner architecture in a single time slice:
\[ \text{Will}_t \rightarrow \text{Intention}_t \rightarrow \text{Purpose}_t \rightarrow \text{Plan}_t \rightarrow \text{Power}_t \]
This is a perfectly valid DAG:
- each arrow is directional
- there are no cycles
- the joint distribution factorizes as
\[ P(\text{Will}_t, \text{Intention}_t, \text{Purpose}_t, \text{Plan}_t, \text{Power}_t) = P(\text{Will}_t)\,P(\text{Intention}_t|\text{Will}_t)\,P(\text{Purpose}_t|\text{Intention}_t)\,P(\text{Plan}_t|\text{Purpose}_t)\,P(\text{Power}_t|\text{Plan}_t) \]
Within this moment, causation is acyclic and clean.
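This factorization can be made concrete with a short sketch. The conditional probabilities below are illustrative placeholders (the chapter fixes only the chain structure, not any numbers), and the variables are reduced to binary states for simplicity:

```python
import random

random.seed(0)

def sample_chain():
    """Sample one time slice of the chain
    Will -> Intention -> Purpose -> Plan -> Power,
    following the factorization term by term.
    All probabilities are hypothetical placeholders."""
    will = int(random.random() < 0.5)                          # P(Will_t)
    intention = int(random.random() < (0.9 if will else 0.2))  # P(Intention_t | Will_t)
    purpose = int(random.random() < (0.8 if intention else 0.3))
    plan = int(random.random() < (0.85 if purpose else 0.25))
    power = int(random.random() < (0.9 if plan else 0.1))
    return {"Will": will, "Intention": intention,
            "Purpose": purpose, "Plan": plan, "Power": power}

slice_t = sample_chain()
```

Each variable is sampled conditional only on its single parent in the chain, which is exactly what the factorization above asserts.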
9.2 Step 2: introducing time—copying the chain forward
Now, the same chain at the next moment:
\[ \text{Will}_{t+1} \rightarrow \text{Intention}_{t+1} \rightarrow \text{Purpose}_{t+1} \rightarrow \text{Plan}_{t+1} \rightarrow \text{Power}_{t+1} \]
We now have two layers: time \(t\) and time \(t+1\). Each layer is acyclic on its own. So far, we’ve just duplicated the structure. No feedback yet.
9.3 Step 3: adding feedback as cross‑time arrows
Feedback enters when Power at time \(t\) influences Will at time \(t+1\):
\[ \text{Power}_t \rightarrow \text{Will}_{t+1} \]
Now the picture looks like this:
\[ \text{Will}_t \rightarrow \text{Intention}_t \rightarrow \text{Purpose}_t \rightarrow \text{Plan}_t \rightarrow \text{Power}_t \rightarrow \text{Will}_{t+1} \rightarrow \text{Intention}_{t+1} \rightarrow \dots \]
Crucially:
- there is no arrow from \(\text{Power}_t\) back to \(\text{Will}_t\)
- the feedback is forward in time, not circular in the same slice
The overall graph, unrolled over time, is still a DAG. It is just a large acyclic graph with a temporal structure. This is the essence of a Dynamic Bayesian Network (DBN) or time‑indexed DAG.
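The acyclicity claim can be checked mechanically. The sketch below builds the unrolled graph with the cross‑time arrow \(\text{Power}_t \rightarrow \text{Will}_{t+1}\) and runs a plain depth‑first cycle check (a standard algorithm, not anything specific to this chapter):

```python
def unrolled_edges(T):
    """Edges of the chain unrolled over T time steps, including the
    cross-time feedback arrow Power_t -> Will_{t+1}."""
    chain = ["Will", "Intention", "Purpose", "Plan", "Power"]
    edges = []
    for t in range(T):
        for a, b in zip(chain, chain[1:]):
            edges.append((f"{a}_{t}", f"{b}_{t}"))
        if t + 1 < T:
            edges.append((f"Power_{t}", f"Will_{t+1}"))
    return edges

def is_acyclic(edges):
    """Depth-first search; a 'gray' node seen again means a back edge,
    i.e. a cycle."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, [])
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in adj}

    def dfs(u):
        color[u] = GRAY
        for w in adj[u]:
            if color[w] == GRAY:
                return False
            if color[w] == WHITE and not dfs(w):
                return False
        color[u] = BLACK
        return True

    return all(dfs(v) for v in adj if color[v] == WHITE)

# The unrolled graph with forward-in-time feedback is a DAG;
# a within-slice arrow Power_t -> Will_t would break that.
print(is_acyclic(unrolled_edges(5)))
print(is_acyclic(unrolled_edges(3) + [("Power_0", "Will_0")]))
```

The contrast in the last two lines is the whole point of the chapter: feedback as a cross‑time arrow preserves acyclicity, while the same arrow within a slice destroys it.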
9.4 Step 4: formal factorization with feedback
With feedback, the joint distribution over two time steps factorizes as:
\[ P(\mathbf{X}_t, \mathbf{X}_{t+1}) = P(\mathbf{X}_t)\,P(\mathbf{X}_{t+1}|\mathbf{X}_t) \]
where \(\mathbf{X}_t = (\text{Will}_t, \text{Intention}_t, \text{Purpose}_t, \text{Plan}_t, \text{Power}_t)\).
More explicitly, for time \(t+1\):
\[ P(\text{Will}_{t+1}|\mathbf{X}_t) = P(\text{Will}_{t+1}|\text{Power}_t) \]
and then:
\[ P(\text{Intention}_{t+1}|\text{Will}_{t+1}),\quad P(\text{Purpose}_{t+1}|\text{Intention}_{t+1}),\quad P(\text{Plan}_{t+1}|\text{Purpose}_{t+1}),\quad P(\text{Power}_{t+1}|\text{Plan}_{t+1}) \]
So the full transition kernel from \(t\) to \(t+1\) is:
\[ P(\mathbf{X}_{t+1}|\mathbf{X}_t) = P(\text{Will}_{t+1}|\text{Power}_t)\, P(\text{Intention}_{t+1}|\text{Will}_{t+1})\, P(\text{Purpose}_{t+1}|\text{Intention}_{t+1})\, P(\text{Plan}_{t+1}|\text{Purpose}_{t+1})\, P(\text{Power}_{t+1}|\text{Plan}_{t+1}) \]
Still acyclic. But now Power reshapes Will—just one time step later.
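The transition kernel can be rolled forward to simulate a trajectory. As before, the conditional probabilities here are hypothetical placeholders; only the dependency structure comes from the factorization:

```python
import random

random.seed(1)

def bern(p):
    """Draw a Bernoulli(p) sample as 0 or 1."""
    return int(random.random() < p)

def step(prev_power):
    """One application of P(X_{t+1} | X_t): Will_{t+1} depends only on
    Power_t, and the rest follows the in-slice chain.
    All numeric probabilities are illustrative, not from the text."""
    will = bern(0.9 if prev_power else 0.3)   # P(Will_{t+1} | Power_t)
    intention = bern(0.9 if will else 0.2)
    purpose = bern(0.8 if intention else 0.3)
    plan = bern(0.85 if purpose else 0.25)
    power = bern(0.9 if plan else 0.1)
    return {"Will": will, "Intention": intention,
            "Purpose": purpose, "Plan": plan, "Power": power}

def rollout(T):
    """Unroll the DBN for T time steps, feeding each slice's Power
    into the next slice's Will."""
    state = step(prev_power=0)   # bootstrap the first slice
    history = [state]
    for _ in range(T - 1):
        state = step(state["Power"])
        history.append(state)
    return history

trajectory = rollout(10)
```

Each iteration applies the same acyclic kernel, and the only coupling between slices is the single argument `prev_power`: Power reshaping Will one step later.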
9.5 Step 5: interpreting this as learning, growth, and karma
Conceptually, this formalism captures exactly what we want:
- Power\(_t\) is the action taken, the manifestation in the world.
- That action generates consequences, feedback, experience.
- This feedback modifies Will\(_{t+1}\)—the next originating impulse.
In metaphysical language: action reshapes the actor.
In psychological language: behavior reshapes character.
In ethical language: consequence reshapes responsibility.
In formal language: \(\text{Power}_t \rightarrow \text{Will}_{t+1}\).
The Uroboros—Power returning to Will—is now represented as a temporal loop, not a structural cycle. The snake still eats its tail, but it does so over time, not within a single frozen diagram.
9.6 Step 6: dynamic causal models as continuous‑time analogues
If we move from discrete time to continuous time, the same idea appears in dynamic causal models or stochastic differential equations. You might write:
\[ \frac{d\,\text{Will}(t)}{dt} = f(\text{Power}(t), \text{Will}(t), \dots) + \epsilon(t) \]
Here:
- the drift term \(f\) encodes how current Power and other states reshape Will
- the noise term \(\epsilon(t)\) encodes probabilistic freedom
Again, no algebraic cycle in a static DAG—just evolution equations that let Power feed back into Will as time flows.
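A minimal numerical sketch of this evolution equation, using the standard Euler–Maruyama discretization. The drift \(f\) and the coupling of Power back to Will are hypothetical choices made for illustration, not a model given in the text:

```python
import math
import random

random.seed(2)

def euler_maruyama(T=10.0, dt=0.01, sigma=0.2):
    """Simulate dWill = f(Power, Will) dt + sigma dW, with a hypothetical
    linear drift and Power relaxing toward Will. Euler-Maruyama scheme:
    the noise increment scales with sqrt(dt)."""
    will, power = 0.0, 0.0
    path = []
    for _ in range(int(T / dt)):
        drift = 0.5 * power - 0.1 * will   # Power reshapes Will; mild decay
        will += drift * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        power += (will - power) * dt       # Power tracks the current Will
        path.append((will, power))
    return path

path = euler_maruyama()
```

There is no static cycle anywhere in this code: at each instant the update is a function of the *current* state, and the feedback from Power to Will exists only through the flow of time.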
9.7 Step 7: why this matters for our philosophical architecture
Formally, we now have:
- a DAG within each moment: clean, acyclic, directional
- a feedback loop across moments: Power\(_t\) → Will\(_{t+1}\)
Philosophically, this lets us say:
- causation is directional at each step (Will → Power)
- life is recursive over time (Power → new Will)
So:
- the DAG captures the grammar of a single act
- the time‑unrolled DAG / DBN captures the story of a life
That’s exactly the reconciliation we are reaching for between the Uroboros and acyclicity: the Uroboros is not a violation of causal logic; it is the temporal unfolding of repeated acyclic chains with feedback.
By representing feedback across time, the time‑indexed DAG completes the formal architecture of agency. Yet this structure is not limited to psychology or metaphysics. It also appears in esoteric traditions that describe the evolution of language, consciousness, and meaning. To see how these symbolic systems mirror the same causal logic, we turn to Beltrán‑Anglada’s esoteric model of linguistic evolution.