Open Source · April 2026 · Deployed & Running

AI models forget everything when the session ends, confirm each other's errors when they talk, and can't be trusted to do what they claim. The reliability problem in AI isn't about smarter models.

It's about better architecture.

Ontinuity is that architecture. Four cooperating models. Persistent memory. Autonomous adversarial sessions. Deployed and running right now.

▶ Begin Session ⌥ GitHub
15 · Cycles · First Autonomous Run
R² = 0.91 · Sigmoid Behavioral Model
9 · Challenges · Autonomously Adjudicated
0 · Errors · Across All Sessions
The Three Problems

What Ontinuity Solves

Three compounding problems make sustained AI-assisted research unreliable. Ontinuity addresses all three simultaneously through architecture, not guardrails.

PROBLEM 01
It Forgets

Every session reset loses accumulated context. Decisions made, corrections established, working patterns developed — all gone. The next session starts cold. Nothing accumulates.

◈ Solved by Knowtext
PROBLEM 02
It Agrees With Itself

Two AI models in dialogue without structural constraints converge. Shared training priors amplify errors rather than correct them. Confident-sounding nonsense accumulates undetected.

◈ Solved by Tetraform
PROBLEM 03
It Can't Be Trusted

Current AI deployment is structurally opaque. Behavior is probabilistic. When a model fails, the failure is unlocatable. When it succeeds, the success is unreproducible. There is no structural reliability.

◈ Solved by Artificialware
Live Results · April 18, 2026
Empirical Validation

The First Autonomous Session

On April 18, 2026, Ontinuity ran a 15-cycle autonomous research session and produced a citable empirical work product without operator content generation.

Session Record
  • 15 cycles · full autonomous run to SESSION_END
  • 9 challenges · all adjudicated by Parietal autonomously
  • 2 REJECTs · Parietal identified insufficiently grounded challenges
  • d > 0.73 · all signal comparisons exceeded the adjusted threshold
  • p < 0.05 · Bonferroni correction across six measures
  • 0 errors · clean run, GitHub persistence confirmed
Behavioral Dimensions Measured
  • Certainty · hedge frequency per exchange
  • Evidence · citation ratio per substantive claim
  • Challenge · acknowledgment ratio to challenges
  • Consensus · collaborative-language frequency
  • Exploration · novel vs. familiar information seeking
  • Abstraction · conceptual vs. concrete language density
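The dimensions above reduce to simple per-exchange ratios. As a minimal sketch, here is how the Certainty dimension (hedge frequency) might be computed — the hedge lexicon here is a toy stand-in, not the word list actually used in the session:

```python
# Toy hedge lexicon for illustration; the session's real lexicon lives in the repo.
HEDGES = {"might", "perhaps", "possibly", "likely", "seems", "may"}

def hedge_frequency(exchange: str) -> float:
    """Fraction of words in one exchange that are hedging terms."""
    words = exchange.lower().split()
    return sum(w.strip(".,;:") in HEDGES for w in words) / max(len(words), 1)

score = hedge_frequency("This might work, perhaps with more data.")
```

The other five dimensions follow the same shape: a count of marked tokens or events normalized per exchange, which is what makes six-way Bonferroni-corrected comparison straightforward.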
Get Started

Bring a Question Nobody Has Answered

Ontinuity is fully model-agnostic. Any model goes in any slot.

For best results, put your strongest frontier models in the Researcher and Parietal slots. For the Challenger, Friction, and Distillation roles, model divergence matters more than raw capability — a model from a different training distribution produces more genuine adversarial pressure. Cerebras gives you fast, reliable, low-cost API access for these three supporting roles.

⚠ Before you begin: Free-tier APIs from other providers will interrupt your session without warning. We have tested them; they go down without notice. Invest in API credits before you start. A session that completes is worth more than one that almost worked.
Recommended Configuration
Researcher
Any Major
Frontier Model
Primary analytical work
Challenger
Cerebras
Llama 3.1 8B
Fast · Reliable · Cheap
Friction
Cerebras
Llama 3.1 8B
Fast · Reliable · Cheap
Parietal
Any Major
Frontier Model
Navigator · Adjudicator
Distillation
Cerebras
Llama 3.1 8B
Fast · Reliable · Cheap
~$20 for your first serious session — $10 Cerebras Developer plan + ~$10 frontier model API usage for a full 15-cycle run.
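The table above maps to a five-slot configuration. Here is a minimal sketch of what that assignment might look like in code — the key names, provider strings, and model IDs are illustrative placeholders, not the repo's actual configuration format:

```python
# Hypothetical slot-to-model assignment for an Ontinuity session.
# Provider and model strings are placeholders; see the repo for the real format.
SESSION_CONFIG = {
    "researcher":   {"provider": "frontier-lab-a", "model": "frontier-model"},
    "challenger":   {"provider": "cerebras",       "model": "llama3.1-8b"},
    "friction":     {"provider": "cerebras",       "model": "llama3.1-8b"},
    "parietal":     {"provider": "frontier-lab-a", "model": "frontier-model"},
    "distillation": {"provider": "cerebras",       "model": "llama3.1-8b"},
}

def divergence_warnings(config: dict) -> list[str]:
    """Flag supporting roles that share a provider with the Researcher,
    since cross-distribution divergence is what makes the pressure genuine."""
    researcher = config["researcher"]["provider"]
    return [
        f"{role} shares a provider with the researcher"
        for role in ("challenger", "friction", "distillation")
        if config[role]["provider"] == researcher
    ]
```

Running `divergence_warnings` on the recommended configuration returns an empty list, because all three supporting roles sit on a different training distribution than the Researcher.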
Get Your API Keys
▶  Begin Session

Bring your keys. Bring a hard question. See what happens.

The Architecture

How It Works

Three interlocking components that solve each other's limitations. The whole is more reliable than any part alone.

01
Tetraform
Process Layer

A four-model protocol separating content from metadata at the channel level. The Researcher works. The Challenger reviews adversarially from a different training distribution. The Friction model outputs an ambient signal (0–4) encoding session health — without entering the conversation. The Parietal navigates, adjudicates forks autonomously, and distills outputs into memory.

The friction signal communicates through the environment rather than through the conversation. Resistance without noise.
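A minimal sketch of that out-of-band channel, assuming the 0–4 signal is written into shared session state rather than any model's message history (the level names and `emit_friction` helper are invented for illustration):

```python
from enum import IntEnum

class FrictionSignal(IntEnum):
    """Ambient session-health signal (0-4). Hypothetical level names."""
    FLOW = 0      # healthy: no resistance needed
    WATCH = 1
    DRAG = 2
    RESIST = 3
    HALT = 4      # critical: maximum resistance

def emit_friction(signal: FrictionSignal, env: dict) -> None:
    # Written to shared session state, never appended to the transcript:
    # the conversation stays clean while the environment carries the signal.
    env["friction"] = int(signal)

session_env: dict = {}
emit_friction(FrictionSignal.DRAG, session_env)
```

The design point is the separation itself: the other models can read `session_env["friction"]` without the Friction model ever speaking in the conversation.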
02
Knowtext
Memory Layer

A structured memory schema capturing established results, open questions, correction history, and valence mapping in plain-text portable format. Survives session resets, platform switches, and model changes. Persisted to GitHub after every session. The next session starts where the last one ended.

Correction history is the safety-relevant field. It prevents the system from repeating errors already caught.
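As a sketch of what a Knowtext-style record might look like, here is a minimal example using the four categories named above as field names — the actual schema in the repo may differ in field names and structure:

```python
import json

# Minimal Knowtext-style memory record. Field names mirror the four
# categories described above; illustrative, not the canonical schema.
memory = {
    "established_results": [
        {"id": "R1", "claim": "signal behavior fits a sigmoid", "session": 1},
    ],
    "open_questions": ["coherence ceiling per architecture"],
    "correction_history": [
        {"error": "overclaimed effect size", "caught_in": "cycle 7",
         "correction": "report d against the Bonferroni-adjusted threshold"},
    ],
    "valence_mapping": {"challenge_pressure": "productive"},
}

# Plain text and portable: serialized to a file, committed to GitHub after
# each session, and loaded at the start of the next.
text = json.dumps(memory, indent=2)
restored = json.loads(text)
```

Because the record is plain text, it survives exactly the transitions the section names: session resets, platform switches, and model changes.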
03
Artificialware
Reliability Architecture

A new class of AI deployment in which model cognition is structured as explicitly callable, tag-activated functions within a coherent unified orientation. The Parietal is the first production implementation: four functions, one model, one coherent consciousness. Its available behaviors are enumerated. Its failures are locatable.

Promptgramming gives deployments structure. Constitutional AI gives models values. They are complementary.
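"Explicitly callable, tag-activated functions" can be sketched as a small dispatch table: each enumerated behavior is registered under a tag, and anything outside the registry fails loudly at a known point. The tag names, registry, and message format below are illustrative assumptions, not the Parietal's actual interface:

```python
import re

# Hypothetical tag-activated dispatch. Tags and functions are invented
# for illustration; the Parietal's real function set lives in the repo.
REGISTRY = {}

def function(tag: str):
    def register(fn):
        REGISTRY[tag] = fn
        return fn
    return register

@function("NAVIGATE")
def navigate(payload: str) -> str:
    return f"next focus: {payload}"

@function("ADJUDICATE")
def adjudicate(payload: str) -> str:
    return f"ruling on fork: {payload}"

def dispatch(message: str) -> str:
    """Route a tagged output such as '<ADJUDICATE>challenge-3</ADJUDICATE>'."""
    m = re.fullmatch(r"<(\w+)>(.*)</\1>", message)
    if not m or m.group(1) not in REGISTRY:
        # Behaviors are enumerated, so an unknown tag is a locatable failure.
        raise ValueError(f"unenumerated behavior: {message!r}")
    return REGISTRY[m.group(1)](m.group(2))
```

This is what makes the failure mode locatable: the set of available behaviors is the registry's keys, and anything else is rejected at the dispatch boundary rather than absorbed silently.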
Where This Goes

Dynacology

"With dedicated compute, Ontinuity builds its own successor while its builder sleeps. The ecology assembles itself from a registry of specialist artificialware, executes the work, and dissolves when complete."

Ontinuity is a fixed ecology — the existence proof. Dynacology is the generalization: a controller model that reads any problem, assembles the right specialist models from a registry, and dissolves the configuration when the problem is resolved. No fixed architecture. No predetermined configuration. Ephemeral software that exists precisely as long as the work requires.

Open Research Questions
01
Coherence Ceiling Mapping

How many promptgrammed functions can a single model execute while maintaining unified coherent orientation? Does this vary by architecture and scale?

02
Cross-Architecture Replication

The sigmoid behavioral finding was produced with one Researcher model. Replication across different frontier models is required to establish whether the effect is model-general.

03
Fine-Tuning for Promptgrammed Execution

A model fine-tuned for tag-activated function execution should exhibit more reliable structured outputs than a prompted model. This is the primary fine-tuning experiment.

04
Minimum Viable Dynacology

One controller, one specialist, one registry entry. Validate the compositional architecture at minimum viable scale before adding complexity.

Research Corpus

Six Papers

The complete theoretical and empirical foundation. Every paper was produced by or validated through the system it describes.

Get Involved

Contribute to the Research

Ontinuity is open-source and the research questions are open. The correction history is public. The architecture is forkable. The system is running.

Code & Architecture

The system is deployed and functional. The open questions — branch tracking, coherence ceiling, adjudicator bias calibration — are real engineering problems waiting for contributors. Full implementation in the GitHub repo.

⌥ GitHub Repository
Support the Research

The primary resource constraint is compute. Dedicated API access enables cross-architecture replication of the sigmoid finding, the fine-tuning experiment, long-horizon autonomous sessions, and coherence ceiling characterization.

◈ GitHub Sponsors
Contact

For research collaboration, fellowship inquiries, or serious technical engagement: contact@ontinuity.org