Two-timescale dynamics
A fast loop continuously adapts parameters within the current structure. A slow loop performs discrete structural edits such as split, merge, add, retire, or no-op.
Theoretical strategy
A Teleodynamic AI is not a larger parameter store. It is a control regime where structure, parameters, and resource state co-evolve under endogenous viability pressure.
A workable Teleodynamic AI maintains its own organization and adds representation only when predictive gain repays the complexity, energy, and maintenance cost of the edit.
The engineering test is simple to state and hard to satisfy: the learner must modify its hypothesis class using an internal viability signal, not a human-authored early-stop schedule or a global training plan disguised as agency.
Structure, parameters, and resource budget are all state variables. A structural action is justified only when the system can afford it and the expected local gain exceeds the ongoing cost.
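This affordability test can be sketched in a few lines. The sketch below is a minimal illustration, not a prescribed interface; `SystemState`, `is_justified`, and all field names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class SystemState:
    resource: float         # R(t), the internal resource budget
    viability_floor: float  # minimum R(t) the system must preserve

def is_justified(state: SystemState, action_cost: float,
                 expected_gain: float, ongoing_cost: float) -> bool:
    """A structural action is justified only when the system can afford
    it (paying the cost keeps R(t) above the viability floor) and the
    expected local gain exceeds the ongoing cost of the edit."""
    affordable = state.resource - action_cost >= state.viability_floor
    worthwhile = expected_gain > ongoing_cost
    return affordable and worthwhile
```

Note that affordability and worthwhileness are separate gates: a profitable edit is still refused when paying for it would breach the viability floor.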
| Level | Dynamic pattern | AI interpretation | Design warning |
|---|---|---|---|
| Homeodynamic | Passive dissipation toward equilibrium. | Loss, uncertainty, compute, and memory decay when no work is done. | A scheduler that merely cools training is not agency. |
| Morphodynamic | Self-organizing patterns under energy flow. | Embeddings, clusters, features, and internal regularities form under data pressure. | Pattern formation without maintenance remains ordinary adaptive learning. |
| Teleodynamic | Reciprocal constraints that maintain the conditions for their continuation. | Structures alter future affordances, resource state gates actions, and useful organization stabilizes through no-op, merge, retire, or growth. | Missing resource closure collapses the system back into externally managed optimization. |
- R(t) is inside the system: it is replenished by predictive success, decays over time, and is charged for actions and maintenance.
- Each slow-loop candidate is judged by predictive loss, complexity delta, and energy cost; no global optimum is assumed.
- No-op is an explicit action: growth stops when no affordable edit improves local viability enough to pay for itself.
- The system instruments under-structuring, teleodynamic growth, and over-structuring from error-complexity trajectories.
- Every slow-loop decision records trigger, alternatives, R before and after, cost, expected gain, and final justification.
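One way to read the ledger rules above is as a single update per fast-loop tick. The function and rate constants below are illustrative assumptions, not part of the strategy itself.

```python
def update_resource(r: float, error_reduction: float, *,
                    gain_rate: float = 1.0, decay: float = 0.01,
                    action_cost: float = 0.0, maintenance: float = 0.0) -> float:
    """One tick of the internal ledger R(t): replenished by predictive
    success, decayed over time, charged for actions and maintenance."""
    r += gain_rate * max(error_reduction, 0.0)  # gain from error reduction
    r *= 1.0 - decay                            # passive decay toward zero
    return r - action_cost - maintenance        # explicit charges
```

Because decay and maintenance apply on every tick, R(t) falls by default; only sustained predictive success keeps the system above its viability floor.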
The slow loop chooses the action with the lowest expected local loss among those it can afford. Feasibility is resource-gated, so the current organization determines which future edits can even be considered. The decision rule: minimize expected L_local subject to R(t) being high enough to pay the declared action cost.
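The decision rule can be sketched as resource-gated argmin selection. `Candidate` and `choose_action` are names invented for this sketch; no-op is modeled as a zero-cost candidate so the feasible set is never empty.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str                # "split", "merge", "add", "retire", "no-op"
    expected_l_local: float  # expected local loss if the edit is applied
    cost: float              # declared action cost charged to R(t)

def choose_action(candidates: list[Candidate], r: float,
                  viability_floor: float) -> Candidate:
    """Feasibility is resource-gated: an edit is considered only if
    paying its cost keeps R(t) above the viability floor. Among the
    feasible actions, the lowest expected L_local wins."""
    feasible = [c for c in candidates if r - c.cost >= viability_floor]
    return min(feasible, key=lambda c: c.expected_l_local)
```

When R(t) is low, the gated search degrades gracefully to no-op rather than forcing an unaffordable edit.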
- Fast loop: runs inference and optimizer updates on the current structure S. It can use ordinary optimizers, natural gradient methods, or domain-specific update rules.
- Resource ledger: maintains R(t), the viability floor, gain from error reduction, decay, action costs, maintenance burden, and the uncertainty reserve.
- Slow loop: proposes structural operators, estimates local loss and cost, applies only feasible actions, and lets no-op win when growth is not affordable.
- Structure graph: stores structures as nodes and dependencies as edges. Nodes not participating in closed maintenance cycles become prune or retire candidates.
- Audit log: creates the self-model as append-only records of triggers, candidates, R movement, action choice, phase state, and justification.
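The audit log admits a particularly small sketch, since its one invariant is that records are appended and never rewritten. The record fields mirror the list above; the class and field names are illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SlowLoopRecord:
    """One slow-loop decision, immutable once written."""
    trigger: str
    alternatives: tuple[str, ...]
    r_before: float
    r_after: float
    cost: float
    expected_gain: float
    action: str
    phase: str          # e.g. "under-structuring", "growth", "over-structuring"
    justification: str

class SelfModel:
    """Append-only log: records can be added and read, never edited."""
    def __init__(self) -> None:
        self._log: list[SlowLoopRecord] = []

    def append(self, record: SlowLoopRecord) -> None:
        self._log.append(record)

    def history(self) -> tuple[SlowLoopRecord, ...]:
        return tuple(self._log)  # copy out; callers cannot mutate the log
```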
Keep the structural operator set small, reversible, and domain-specific. The first goal is inspectable maintenance, not open-ended novelty.
| Operator | Trigger | Declared cost | Reversal or guard |
|---|---|---|---|
| Split | Persistent high entropy or confusion inside one class. | New active unit, new parameters, review burden. | Merge if children do not produce sustained loss reduction. |
| Merge | Redundant units with overlapping evidence and low disagreement. | Rewrite references and revalidate traces. | Split again if post-merge uncertainty rises. |
| Add | A new operator, distinction, submodel, or glyph relation is expected to pay for itself. | Activation cost, memory, latency, governance review. | Retire if utilization stays low. |
| Retire | Structure has sustained low utility or breaks closure. | Migration and fallback evidence. | Reactivate if novelty reopens the distinction. |
| No-op | No affordable edit improves L_local. | Maintenance only. | Growth can restart after novelty or resource recovery. |
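A reversal guard such as the one on the Split row might be evaluated over a trailing loss window. The window length and threshold below are placeholder values, and `split_guard` is a name invented for this sketch.

```python
def split_guard(loss_history: list[float], window: int = 5,
                min_reduction: float = 0.05) -> str:
    """Merge children back if the split did not produce sustained loss
    reduction over the review window; keep them otherwise."""
    if len(loss_history) < window + 1:
        return "keep"  # not enough evidence to judge the edit yet
    before = loss_history[-(window + 1)]  # loss just before the window
    recent = loss_history[-window:]       # losses inside the window
    sustained = all(before - l >= min_reduction for l in recent)
    return "keep" if sustained else "merge"
```

Requiring the reduction on every point in the window, not just the average, is what makes the gain "sustained": one good step followed by regression still triggers the merge.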
- Under-structuring: error remains high while complexity stays low. The system needs affordable distinctions, not more tuning on the same collapsed structure.
- Teleodynamic growth: error falls faster than cost rises. Structural edits are being repaid by predictive gain and future viability.
- Over-structuring: complexity rises without error gain. Merge, retire, freeze, or no-op should begin winning the slow loop.
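These three regimes can be read off a pair of trailing deltas, one for error and one for complexity. The thresholds and the delta convention (negative means falling) are assumptions of this sketch.

```python
def classify_phase(d_error: float, d_complexity: float,
                   eps: float = 1e-3) -> str:
    """Classify the regime from error/complexity deltas over a window."""
    if d_error >= -eps and d_complexity <= eps:
        return "under-structuring"    # error not falling, complexity flat
    if d_error < -eps and -d_error > d_complexity:
        return "teleodynamic-growth"  # error falls faster than cost rises
    if d_complexity > eps and d_error >= -eps:
        return "over-structuring"     # complexity rises without error gain
    return "ambiguous"                # mixed signal; defer to no-op
```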
This strategy does not claim consciousness, certification, conformance, exact glyph translation, or a production runtime. It defines a testable architecture target for systems that grow and prune structure under resource closure.