Small models.
Absolute precision.
71 languages. 26 scripts. Three tokenizers. Two new models arriving.
AENEA is a family of small language models designed from first principles for factual determinism. Prelude-1 proved the thesis. Now we're building Prelude-2 (multilingual reasoning) and Overture-1 (advanced reasoning & code) on the QT_V.2 tokenizer family.
From first commit to launch
AENEA began in August 2025 with a simple question: what happens when you treat data quality as architecture, not preprocessing? Six months later, we have our answer.
AENEA Model Family
Prelude-1 proved the thesis. Prelude-2 brings multilingual reasoning on the QT_V.2 96K tokenizer. Overture-1 adds advanced reasoning and code generation on the QT_V.2 Code 114K tokenizer.
Prelude-1
Prelude-2
Overture-1
Why smaller models can think bigger
Most parameters in large models are wasted — compensating for noisy data, fragmented representations, and training regimes that fight themselves. We start from the opposite premise.
Ultra-Clean Data
The Quartz v7.3 pipeline removes encoding artefacts, vandalism, and noise across 71 languages and 26 script families. Every malformed token is a wrinkle in the loss landscape — we iron them out before training begins.
Coherent Geometry
Architectures designed so representations built during one training phase remain geometrically compatible with the next. Knowledge encodes cleanly and manifests back into language without distortion.
Multi-Epoch Depth
Three passes over curated data. The first epoch builds the map. The second smooths its wrinkles. The third polishes the routes between internal representation and fluent generation.
Simple to use, powerful underneath
AENEA models ship as standard checkpoints compatible with common inference frameworks. Load one, prompt it, generate: the engineering complexity is in the training, not the interface.
Prelude-2 and Overture-1 will support all 71 training languages out of the box, with the QT_V.2 tokenizer family providing efficient encoding across every script.
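The load-prompt-generate flow can be sketched with the Hugging Face `transformers` API. This is a minimal sketch under assumptions: the checkpoint id `aenea/prelude-1` is a hypothetical placeholder, and the published repository name may differ.

```python
# Minimal inference sketch using the Hugging Face transformers API.
# NOTE: "aenea/prelude-1" is a hypothetical checkpoint id for illustration;
# substitute the real repository name once the model is published.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "aenea/prelude-1"  # hypothetical placeholder

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Load the checkpoint, encode a prompt, and decode the completion."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("The capital of France is"))
```

Because the checkpoint follows the standard `AutoModel` layout, the same three calls (tokenize, generate, decode) should apply unchanged across Prelude-1, Prelude-2, and Overture-1.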
What's coming
Prelude-1 is released. The QT_V.2 tokenizer family is live. Now we're building the next generation of models.
Prelude-1 Base
QT_V.2 Tokenizer Family
Prelude-2
Overture-1
Three pillars, one architecture
AENEA Global Ltd builds vertically integrated AI infrastructure. The model family, the data stack, and the research division are all designed to reinforce each other.
AENEA
Quartz
Crassus
The future is multilingual
Prelude-1 proved that precision beats parameter count. Prelude-2 and Overture-1 speak 71 languages. Follow our journey.