Epistemic Decay Dashboard
Interactive demonstration. Corpus integrity I(t) decay is empirically calibrated from Phase 3a experiments (GPT-2 124M, 5 replicate tracks, R² = 0.98). Governance efficacy parameters are theoretical projections pending Phase 3b validation. Orange diamonds show empirical observations with 95% CI.
GOVERNED SCENARIO — FINAL GENERATION
I(t) Integrity
---
P(t) Provenance
---
B(t) Bias
---
Q(n) Quality
---
M(t) Misinfo
---
E(t) Error
---
GOVERNANCE EFFECT (governed − baseline) ▲ green = helped ▼ red = worsened
Δ I
---
Δ P
---
Δ B
---
Δ Q
---
Δ M
---
Δ E
---
Red: uncontrolled baseline. Blue: governed scenario. Orange diamonds: Phase 3a empirical mean (5 GPT-2 tracks, 95% CI). Dashed lines: floor (0.468) and collapse threshold (0.3).
▼ About This Model
Research Program
This simulator implements the Model Autophagy decay model, a quantitative framework for understanding epistemic decay in recursive AI training ecosystems. The model formalizes how LLMs degrade when iteratively trained on their own outputs, and how governance interventions can slow (but not reverse) the resulting decay.

Empirical Basis
The integrity decay trajectory I(t) is calibrated from Phase 3a controlled experiments: GPT-2 (124M parameters) fine-tuned across 10 recursive generations with increasing synthetic contamination (S(t) ramped from 0.10 to 0.80). Five independent replicate tracks (seeds 42-46) produced 55 observations. The exponential fit I(t) = 0.532 · exp(-1.93t) + 0.468 achieves R² = 0.98.

What Is Calibrated vs. Projected
| Parameter | Status | Value |
|---|---|---|
| I(t) decay rate α | ✓ Empirical | 1.93 /gen |
| I(t) floor | ✓ Empirical | 0.468 |
| B(t) decline rate | ✓ Empirical | 0.002/gen |
| FIF (feedback amplification) | ✓ Empirical | 1.55 |
| M(t) propagation σ_M | ● Theoretical | 0.12 |
| E(t) propagation σ_E | ● Theoretical | 0.08 |
| Governance efficacy (η_G1, η_G2) | ● Theoretical | Pending Phase 3b |
| Q(t) degradation (ψ, θ_mid) | ● Theoretical | 3.0, 1.5 |
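The empirically calibrated trajectory above can be reproduced directly from the fitted parameters. A minimal sketch (only α, the amplitude, and the floor come from Phase 3a; the governance couplings and other dashboard metrics are not modeled here):

```python
import math

# Phase 3a empirical fit: I(t) = A * exp(-alpha * t) + floor
A = 0.532      # amplitude above the floor, so I(0) = A + FLOOR = 1.0
ALPHA = 1.93   # empirical decay rate per generation
FLOOR = 0.468  # empirical integrity floor

def integrity(t: float) -> float:
    """Corpus integrity I(t) after t recursive training generations."""
    return A * math.exp(-ALPHA * t) + FLOOR

# Integrity across the first six generations; the curve approaches the
# floor within a fraction of a percent by generation 3-4.
trajectory = [round(integrity(t), 3) for t in range(6)]
```

Because the fitted floor (0.468) sits above the dashboard's collapse threshold (0.3), this calibrated curve alone never crosses the threshold; threshold crossings in the baseline scenario come from the additional couplings the simulator layers on top.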
Limitations
Training corpus was locally generated (not real Wikipedia/PubMed). I(t) was measured via a log-perplexity ratio (revised from QA accuracy during execution). Single-epoch fine-tuning may underestimate production decay rates. Governance slider effects use simplified coupling approximations.

Citation & Source
Rutherford, D. A. (2026). Model Autophagy: Quantifying Epistemic Decay and Governance Intervention in Recursive AI Training Ecosystems. [Manuscript in preparation].

Repository: github.com/darutherford/model-autophagy
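The Limitations note that I(t) was measured via a log-perplexity ratio rather than QA accuracy. One plausible form of that metric is sketched below; this is purely illustrative, and the paper's exact definition may differ. Here the score is the reference model's mean per-token negative log-likelihood divided by the generation-t model's, both evaluated on the same held-out real text:

```python
def mean_nll(token_logprobs):
    """Mean negative log-likelihood per token (equal to log-perplexity)."""
    return -sum(token_logprobs) / len(token_logprobs)

def integrity_ratio(ref_logprobs, gen_logprobs):
    """Illustrative I(t): reference log-perplexity over generation-t
    log-perplexity on identical held-out text. Equals 1.0 when the
    generation-t model matches the reference; falls toward 0 as the
    degraded model's NLL on real text grows."""
    return mean_nll(ref_logprobs) / mean_nll(gen_logprobs)
```

A ratio defined this way is scale-free across evaluation sets, which makes trajectories comparable between replicate tracks.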