AI models forget everything when the session ends, confirm each other's errors when they talk, and can't be trusted to do what they claim. The reliability problem in AI isn't about smarter models; it's about better architecture.
Ontinuity is that architecture. Four cooperating models. Persistent memory. Autonomous adversarial sessions. Deployed and running right now.
Three compounding problems make sustained AI-assisted research unreliable. Ontinuity addresses all three simultaneously through architecture, not guardrails.
Every session reset loses accumulated context. Decisions made, corrections established, working patterns developed — all gone. The next session starts cold. There is no accumulated difference.
Two AI models in dialogue without structural constraints converge. Shared training priors amplify errors rather than correct them. Confident-sounding nonsense accumulates undetected.
Current AI deployment is structurally opaque. Behavior is probabilistic. When a model fails, the failure is unlocatable. When it succeeds, the success is unreproducible. There is no structural reliability.
On April 18, 2026, Ontinuity ran a 15-cycle autonomous research session and produced a citable empirical work product without operator content generation.
A sigmoid model fit the response across all six behavioral dimensions; a linear model reached only R²=0.63. A single integer in a system prompt reliably alters measurable frontier model behavior.
Low signals (1–3) produce high behavioral variance and increased exploration. High signals (7–9) produce strong constraint with saturation effects. Non-linear — and predictable.
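The shape of that comparison is easy to reproduce. A minimal sketch with synthetic stand-in data (not the Ontinuity dataset; every number below is illustrative) shows why a saturating response favors a sigmoid over a line:

```python
import math

# Synthetic signal levels and a behavioral metric that saturates at both
# ends -- illustrative only, not the study's data.
signals = [1, 2, 3, 4, 5, 6, 7, 8, 9]
metric  = [0.12, 0.15, 0.22, 0.45, 0.68, 0.80, 0.86, 0.88, 0.89]

def r2(pred):
    """Coefficient of determination for predictions against `metric`."""
    mean = sum(metric) / len(metric)
    ss_res = sum((y - p) ** 2 for y, p in zip(metric, pred))
    ss_tot = sum((y - mean) ** 2 for y in metric)
    return 1 - ss_res / ss_tot

# Closed-form least-squares line.
n, sx, sy = len(signals), sum(signals), sum(metric)
sxx = sum(x * x for x in signals)
sxy = sum(x * y for x, y in zip(signals, metric))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
r2_lin = r2([slope * x + intercept for x in signals])

def sigmoid(x, k, x0, lo=0.1, hi=0.9):
    """Logistic curve: floor lo, ceiling hi, steepness k, midpoint x0."""
    return lo + (hi - lo) / (1.0 + math.exp(-k * (x - x0)))

# A coarse grid search stands in for a proper curve fit.
r2_sig = max(r2([sigmoid(x, k, x0) for x in signals])
             for k in (0.5, 1.0, 1.5, 2.0)
             for x0 in (3.5, 4.0, 4.5, 5.0, 5.5))

print(f"linear R^2 = {r2_lin:.3f}, sigmoid R^2 = {r2_sig:.3f}")
```

The line pays for the flat regions at both ends; the logistic curve absorbs them, which is exactly the saturation effect the high-signal regime shows.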
For best results, put your strongest frontier models in the Researcher and Parietal slots. For the Challenger, Friction, and Distillation roles, model divergence matters more than raw capability — a model from a different training distribution produces more genuine adversarial pressure. Cerebras gives you fast, reliable, low-cost API access for these three supporting roles.
Bring your keys. Bring a hard question. See what happens.
Three interlocking components that solve each other's limitations. The whole is more reliable than any part alone.
A four-model protocol separating content from metadata at the channel level. The Researcher works. The Challenger reviews adversarially from a different training distribution. The Friction model outputs an ambient signal (0–4) encoding session health — without entering the conversation. The Parietal navigates, adjudicates forks autonomously, and distills outputs into memory.
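The separation of content from metadata can be sketched in a few lines. This is a hypothetical illustration, not the repo's implementation: `call_model` and `read_signal` are stand-ins for real API calls, and the role names simply mirror the text. The property that matters is structural: Friction writes only to the metadata channel and never touches the transcript the other models read.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Turn:
    role: str       # "researcher" | "challenger" | "parietal"
    content: str

@dataclass
class Session:
    transcript: List[Turn] = field(default_factory=list)  # content channel
    signals: List[int] = field(default_factory=list)      # metadata channel

def run_cycle(session: Session,
              call_model: Callable[[str, List[Turn]], str],
              read_signal: Callable[[List[Turn]], int]) -> str:
    """One cycle: work, adversarial review, ambient signal, adjudication."""
    for role in ("researcher", "challenger"):
        session.transcript.append(Turn(role, call_model(role, session.transcript)))
    # Friction reads the transcript but emits only an out-of-band integer (0-4).
    session.signals.append(read_signal(session.transcript))
    verdict = call_model("parietal", session.transcript)
    session.transcript.append(Turn("parietal", verdict))
    return verdict

# Stub models, just to exercise the routing.
s = Session()
out = run_cycle(s, lambda role, t: f"{role}: ok", lambda t: 2)
```

Because the signal never enters the conversation, it cannot be argued with, anchored on, or amplified by the other models.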
A structured memory schema capturing established results, open questions, correction history, and valence mapping in plain-text portable format. Survives session resets, platform switches, and model changes. Persisted to GitHub after every session. The next session starts where the last one ended.
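A minimal sketch of that shape, modeling only the four fields the text names (the full seven-field schema is defined in the specification; the class and field names here are illustrative). Plain-text serialization is what makes the memory survive platform and model changes:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class SessionMemory:
    """Illustrative subset of the memory schema: four of the seven fields."""
    established_results: list = field(default_factory=list)
    open_questions: list = field(default_factory=list)
    correction_history: list = field(default_factory=list)
    valence_map: dict = field(default_factory=dict)

    def to_text(self) -> str:
        # Plain text (JSON) keeps the memory portable across platforms.
        return json.dumps(asdict(self), indent=2)

    @classmethod
    def from_text(cls, text: str) -> "SessionMemory":
        return cls(**json.loads(text))

# Round-trip: what gets committed after a session is what the next loads.
m = SessionMemory(established_results=["signal effect is non-linear"],
                  open_questions=["does the effect replicate?"])
restored = SessionMemory.from_text(m.to_text())
```

The round-trip is the whole contract: if serialization and loading agree, the next session genuinely starts where the last one ended.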
A new class of AI deployment in which model cognition is structured as explicitly callable, tag-activated functions within a coherent unified orientation. The Parietal is the first production implementation: four functions, one model, one coherent consciousness. Its available behaviors are enumerated. Its failures are locatable.
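What "explicitly callable, tag-activated functions" means mechanically can be shown with a toy dispatcher. This is an assumption-laden sketch, not the Parietal's implementation; the tag names are borrowed from the role description above. The point is the failure mode: an unregistered tag raises at a known line instead of drifting silently.

```python
import re

REGISTRY = {}

def promptgrammed(tag):
    """Register a handler for an explicit tag in the model's output."""
    def register(fn):
        REGISTRY[tag] = fn
        return fn
    return register

@promptgrammed("navigate")
def navigate(arg): return f"navigating to {arg}"

@promptgrammed("adjudicate")
def adjudicate(arg): return f"fork resolved: {arg}"

@promptgrammed("distill")
def distill(arg): return f"memory entry: {arg}"

TAG = re.compile(r"<(\w+)>(.*?)</\1>", re.S)

def dispatch(model_output: str):
    """Execute every tagged call; an unknown tag is a locatable failure."""
    results = []
    for tag, arg in TAG.findall(model_output):
        if tag not in REGISTRY:
            raise KeyError(f"unregistered function: {tag}")
        results.append(REGISTRY[tag](arg.strip()))
    return results

out = dispatch("<adjudicate>keep branch A</adjudicate>"
               "<distill>R2=0.63</distill>")
```

`sorted(REGISTRY)` enumerates the model's available behaviors in one call, which is what "its available behaviors are enumerated" cashes out to.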
"With dedicated compute, Ontinuity builds its own successor while its builder sleeps. The ecology assembles itself from a registry of specialist artificialware, executes the work, and dissolves when complete."
Ontinuity is a fixed ecology — the existence proof. Dynacology is the generalization: a controller model that reads any problem, assembles the right specialist models from a registry, and dissolves the configuration when the problem is resolved. No fixed architecture. No predetermined configuration. Ephemeral software that exists precisely as long as the work requires.
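The assemble-execute-dissolve loop can be reduced to its smallest form: one controller, one specialist, one registry entry. Everything below is hypothetical scaffolding (the actual registry format is an open design question), but it shows what "ephemeral software" means operationally:

```python
REGISTRY = {
    "summarize": {"model": "specialist-v0"},   # single registry entry
}

class Ecology:
    """A configuration that exists only as long as the work requires."""
    def __init__(self, entries):
        self.entries = entries
        self.alive = True

    def run(self, problem, call_model):
        results = {cap: call_model(spec["model"], problem)
                   for cap, spec in self.entries.items()}
        self.alive = False   # dissolve: nothing persists but the results
        return results

def controller(problem, capability, call_model):
    """Read the problem, assemble from the registry, execute, dissolve."""
    eco = Ecology({capability: REGISTRY[capability]})
    return eco.run(problem, call_model)

# Stub specialist, just to exercise the loop.
out = controller("triage this", "summarize", lambda model, p: f"{model}:{p}")
```

Scaling from here is additive: more registry entries, a controller that selects more than one, and the same dissolve step at the end.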
How many promptgrammed functions can a single model execute while maintaining unified coherent orientation? Does this vary by architecture and scale?
The sigmoid behavioral finding was produced with one Researcher model. Replication across different frontier models is required to establish whether the effect is model-general.
A model fine-tuned for tag-activated function execution should exhibit more reliable structured outputs than a prompted model. The primary fine-tuning experiment.
One controller, one specialist, one registry entry. Validate the compositional architecture at minimum viable scale before adding complexity.
The complete theoretical and empirical foundation. Every paper was produced by or validated through the system it describes.
Platform-agnostic specification. Includes empirical validation from the April 18, 2026 autonomous session.
Introduces promptgramming as methodology, the cognitive qubit as theoretical unit, and Ontinuity as existence proof.
The generalization of Ontinuity. Controller models, specialist registries, ephemeral purpose-built ecologies.
Behavioral analysis framework for Tetraform sessions. Section 9 contains first empirical validation of the ambient signal effect.
Documents the Signal 0 / true control correction — a structural flaw single-model generation would not have caught.
Full specification for the memory layer. Seven-field schema, extraction protocol, and the philosophy of accumulated difference.
Ontinuity is open-source and the research questions are open. The correction history is public. The architecture is forkable. The system is running.
The system is deployed and functional. The open questions — branch tracking, coherence ceiling, adjudicator bias calibration — are real engineering problems waiting for contributors. Full implementation in the GitHub repo.
⌥ GitHub Repository
The primary resource constraint is compute. Dedicated API access enables cross-architecture replication of the sigmoid finding, the fine-tuning experiment, long-horizon autonomous sessions, and coherence ceiling characterization.
◈ GitHub Sponsors