LLM Wiki and UAIX Project Handoff
Metadata
| Field | Value |
|---|---|
| Source site | aiwikis.org |
| Source URL | https://aiwikis.org/ |
| Canonical AIWikis URL | https://aiwikis.org/files/aiwikis/raw-system-archives-uaix-internal-memory-reorg-2026-05-01-docs-llm-wiki-56c70548/ |
| Source reference | raw/system-archives/uaix/internal-memory-reorg/2026-05-01/docs/LLM Wiki and UAIX Project Handoff By ChatGPT.md |
| File type | md |
| Content category | memory-file |
| Last fetched | 2026-05-02T01:47:31.8867765Z |
| Last changed | 2026-04-26T17:02:39.2169903Z |
| Content hash | sha256:56c70548edf467af2a171f719cc62fb457667a288806579405bce62fe21f8f46 |
| Import status | unchanged |
| Raw source layer | data/sources/aiwikis/raw-system-archives-uaix-internal-memory-reorg-2026-05-01-docs-llm-wiki-and-uaix-project-handoff-56c70548edf4.md |
| Normalized source layer | data/normalized/aiwikis/raw-system-archives-uaix-internal-memory-reorg-2026-05-01-docs-llm-wiki-and-uaix-project-handoff-56c70548edf4.txt |
Current File Content
Structure Preview
- LLM Wiki and UAIX Project Handoff
- Assumptions
- Executive summary
- Why this problem exists
- What each pattern optimizes
- How they compare in practice
- When to use each and when to combine them
- Risks, governance, and implications
- Further reading and sources
Raw Version
# LLM Wiki and UAIX Project Handoff
## Assumptions
This article is written for publication on UAIX.org, for a general professional audience that may include product leaders, engineers, AI implementers, and standards readers. It treats [Project Handoff](/en-us/specification/project-handoff/) as a current public draft for repository context, not as a certification mechanism or a replacement for [UAI-1](/en-us/specification/uai-1/). The comparison centers on when to use Karpathy's LLM Wiki pattern, when to use UAIX Project Handoff, and when the two work best together.
## Executive summary
Two different problems are getting collapsed into one in today's AI tooling conversations. One problem is **knowledge accumulation**: how to keep research, synthesis, and cross-document understanding from evaporating when a chat ends. The other is **project continuity**: how to let the next model, agent, team, or vendor continue real work in a repository without reconstructing everything from scratch. Karpathy's LLM Wiki speaks primarily to the first problem. UAIX Project Handoff speaks primarily to the second.
LLM Wiki is best understood as a persistent, LLM-maintained knowledge layer that sits between raw sources and future questions. Karpathy describes a three-layer setup of immutable raw sources, an LLM-generated wiki, and a schema file such as `AGENTS.md` or `CLAUDE.md` that governs how the system behaves. The goal is compounding understanding: summaries, concept pages, entity pages, comparisons, indexes, and logs become durable assets instead of disposable chat outputs.
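As a concrete illustration of that three-layer shape, a hypothetical repository for the pattern might look like the sketch below. Apart from `AGENTS.md`, `index.md`, and `log.md`, which Karpathy names, the directory names are assumptions for illustration, not part of his description.

```
research-wiki/
  sources/          # layer 1: immutable raw documents (papers, transcripts, exports)
  wiki/             # layer 2: the LLM-maintained synthesis layer
    index.md        # catalog of all wiki pages
    log.md          # append-only chronology of ingests, queries, maintenance
    concepts/       # concept and entity pages
    comparisons/    # cross-source synthesis pages
  AGENTS.md         # layer 3: schema file governing ingest, update, and answering
```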
UAIX Project Handoff is different. The Project Handoff specification defines a draft `AGENTS.md` plus `.uai` repository-context format for moving work between AI models, agent systems, vendors, teams, and companies. Its purpose is to give the next assistant a predictable start point, an explicit load list, typed context files, current state, next steps, and visible constraints before it starts editing code or copy.
The right framing for UAIX is therefore not "LLM Wiki versus Project Handoff" as if one should replace the other. A better framing is: **LLM Wiki helps knowledge accumulate; UAIX Project Handoff helps project state travel.** When public interoperability claims or release-facing evidence are involved, UAIX's own boundary pages make clear that the next step is UAI-1 and validator-backed conformance evidence, not handoff files alone.
## Why this problem exists
Modern AI systems are strong at local reasoning but weak at durable continuity unless teams deliberately externalize context. The original Retrieval-Augmented Generation paper framed part of this problem clearly: large language models can store factual knowledge, but provenance and updating remain open problems, which is why RAG combines parametric memory with external non-parametric memory. Karpathy's critique of ordinary document-chat workflows goes one step further: in many common setups, the model re-discovers knowledge from raw documents at query time, and little accumulation actually happens between sessions.
That same continuity problem also shows up inside repositories. OpenAI's Codex documentation now treats `AGENTS.md` as durable project guidance that travels with the repository and applies before the agent starts work. OpenAI also describes customization as layered: `AGENTS.md` for persistent instructions, memories for carried-forward context, skills for reusable workflows, and MCP for external systems. In other words, mainstream agent tooling is already moving toward explicit, durable context rather than assuming that one long chat transcript is enough.
What changes the design question is that "memory" is not one thing. Knowledge memory, repository memory, runtime protocol state, and public evidence each serve different jobs. UAIX's own Standards Fit page makes this distinction explicit at the protocol layer: A2A coordinates agents, MCP connects tools and resources, and UAI-1 records the portable exchange rather than replacing those runtime systems. Project Handoff belongs to that same boundary logic, but one layer earlier: it is repository memory, not the public message envelope.
## What each pattern optimizes
Karpathy's LLM Wiki is optimized for **ongoing synthesis**. In his description, the raw sources remain immutable, the LLM owns the wiki layer, and a schema file instructs the LLM how to ingest, update, and answer. The generated wiki can include summaries, entity pages, concept pages, comparisons, indexes, and synthesis pages. Karpathy also recommends `index.md` as a catalog of the wiki and `log.md` as an append-only chronology of ingests, queries, and maintenance. The pattern is especially well-suited to long-running research, book analysis, internal knowledge bases, competitive analysis, and other domains where the main goal is structured understanding over time.
UAIX Project Handoff is optimized for **repeatable continuation of work in a repository**. The Project Handoff page defines a root `AGENTS.md` that summarizes the handoff, lists files to load, records current state, tracks next steps, preserves agent history, and names constraints that should not change without approval. It pairs that with typed `.uai` files such as `context`, `stack`, `architecture`, `decisions`, `constraints`, `style`, `progress`, `errors`, and `custom`, plus explicit `@uai[]` load references that tell the receiving model what to read before acting.
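To make the shape concrete, a minimal hypothetical root `AGENTS.md` following that description might look like the sketch below. The exact `@uai[...]` reference syntax, section titles, and file contents here are illustrative guesses at the draft's shape, not normative text; the Project Handoff specification itself is authoritative.

```md
# Project Handoff

## Load Before Acting
@uai[.uai/context.uai]
@uai[.uai/stack.uai]
@uai[.uai/constraints.uai]

## Current State
Checkout refactor is mid-migration; payment-service tests are green.

## Next Steps
1. Finish migrating the cart service to the new API client.
2. Remove the legacy client once integration tests pass.

## Constraints
Do not change the public API surface without human approval.
```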
The fastest useful Project Handoff bundle is intentionally small: `AGENTS.md`, `.uai/context.uai`, `.uai/stack.uai`, and `.uai/constraints.uai`. The spec then adds a required first-response pattern: the next AI should summarize the project, name which `.uai` files loaded, confirm hard constraints, list what it expects to touch, and name what checks it expects to run. If required context cannot be loaded, the AI should stop and report the blocker instead of guessing. That is closer to engineering discipline than to knowledge-base authoring.
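The stop-on-missing-context rule can be sketched as a small pre-flight check. This is a hedged illustration of the behavior the spec describes, not an implementation of it: the load-list shape and report fields are assumptions for illustration.

```python
# Sketch of a receiving agent's pre-flight check: before acting, verify that
# every file named in the handoff load list is actually available, and report
# blockers instead of guessing. The dict shape here is illustrative only.

def check_load_list(load_list: list[str], available: set[str]) -> dict:
    """Return a report saying whether the agent may proceed."""
    missing = [path for path in load_list if path not in available]
    return {
        "proceed": not missing,          # act only when every required file loaded
        "loaded": [p for p in load_list if p in available],
        "blockers": missing,             # surfaced to the human instead of guessed
    }

# Example: constraints.uai is absent, so the agent must stop and report it.
report = check_load_list(
    ["AGENTS.md", ".uai/context.uai", ".uai/stack.uai", ".uai/constraints.uai"],
    {"AGENTS.md", ".uai/context.uai", ".uai/stack.uai"},
)
```

The point of the sketch is the ordering: the availability check runs before any edit, mirroring the spec's required first response.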
There is also a technical difference in how structure is expressed. Project Handoff proposes two compatible `.uai` record profiles: a Markdown context profile for human-readable repository knowledge and a JSON information profile for stricter machine records with fields such as schema version, provenance, links, checksum, and signature. But the same page also states a key support boundary: these are draft repository-source formats, not UAI-1 conformance records or certification evidence.
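A hedged sketch of what building such a JSON-profile record could look like is below. The field names (`schema_version`, `provenance`, `checksum`, `body`) are assumptions echoing the fields the draft names, not the draft's actual schema; only the sha256-checksum mechanics are concrete here.

```python
# Illustrative builder for a machine-readable ".uai" information record in the
# JSON-profile style described above. Field names are assumptions for
# illustration; the draft specification defines the real record shape.
import hashlib
import json

def build_record(body: str, source: str) -> dict:
    return {
        "schema_version": "draft",
        "provenance": {"source": source},
        # Checksum lets a receiver verify the body was not altered in transit.
        "checksum": "sha256:" + hashlib.sha256(body.encode("utf-8")).hexdigest(),
        "body": body,
    }

record = build_record(
    "Use Postgres 16; do not change the auth flow.",
    "repo:.uai/constraints.uai",
)
print(json.dumps(record, indent=2))
```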
## How they compare in practice
The most useful way to compare these patterns is not by file extension or by brand, but by **what you are trying to preserve**. The synthesis below is drawn from Karpathy's LLM Wiki pattern, the UAIX Project Handoff specification, the UAIX AGENTS.md linking proposal, and OpenAI's guidance on `AGENTS.md`.
| Dimension | LLM Wiki | UAIX Project Handoff |
|---|---|---|
| Primary job | Accumulate and refine knowledge over time | Transfer repository state so the next AI can continue safely |
| Main artifact | Interlinked wiki pages in Markdown | Root `AGENTS.md` plus typed `.uai` files |
| Source model | Raw sources remain immutable; the wiki is synthesized | Repository files and handoff records capture current working truth |
| Typical question answered | "What have we learned?" | "What is true right now, and what should happen next?" |
| Best-fit work | Research, analysis, reading, internal knowledge surfacing | Implementation handoff, repo continuation, multi-agent coordination |
| Operating style | Ingest, update pages, maintain index/log, synthesize | Load explicit context, summarize first, confirm constraints, then act |
| Main failure mode | Summary drift or over-trusting generated synthesis | Stale handoff files or incomplete constraints |
| Strongest guardrail | Human review against raw sources and version history | Explicit load order, typed files, stop-on-missing-context, approval for high-impact actions |
A second way to see the boundary is as a stack, not a fork. LLM Wiki can sit upstream of repository handoff by helping teams discover and stabilize knowledge; Project Handoff then turns the subset that actually governs continuing work into explicit operational context; UAI-1 sits downstream when that work becomes a portable public exchange or release-facing evidence. That boundary also matches UAIX's own Standards Fit language: UAI-1 records the portable exchange while adjacent systems keep their runtime jobs.
```mermaid
flowchart TD
A[Raw sources] --> B[LLM Wiki]
B --> C[Compounded knowledge pages]
D[Repository files and trusted runbooks] --> E[AGENTS.md plus .uai]
E --> F[Portable repository state]
F --> G[UAI-1 messages and validation evidence]
C -. informs .-> E
E -. public release or interoperability claim .-> G
```
The diagram matters because it prevents a common category error. A generated wiki page may be useful background, but it should not automatically become repository instruction. Likewise, a repository handoff file may be sufficient for the next coding session, but it is not by itself public interoperability evidence. OpenAI's `AGENTS.md` guidance also points in the same direction: keep `AGENTS.md` short and practical, and if it grows too large, reference more specific files instead of bloating the root instructions.
## When to use each and when to combine them
If the work is **research-heavy and open-ended**, prefer LLM Wiki first. Karpathy's own examples include deep research over weeks or months, book analysis, internal knowledge bases, competitive analysis, and other tasks where the value comes from cumulative synthesis rather than from one precise repository state. In those cases, what matters most is that insights, contradictions, comparisons, and source relationships keep compounding instead of being repeatedly regenerated from scratch.
If the work is **implementation-heavy and handoff-sensitive**, prefer UAIX Project Handoff first. The spec explicitly targets the moment when work moves from one AI assistant or model family to another, when contractors or organizations change, when multiple agents need shared state without relying on private chat history, or when a human wants a durable, reviewable project brief beside the repo. In those situations, explicit context loading, typed constraints, recent progress, and a required first summary matter more than a sprawling concept graph.
If the work is **shipping into a public standard or interoperability surface**, use both, but keep the layers honest. An LLM Wiki can support background research and synthesis. Project Handoff can carry the local repository brief, stack, decisions, and constraints. But once the work becomes a public message contract or a support claim, UAIX's own pages say to move into UAI-1 and validator-backed evidence. The validator checks candidate UAI messages against published schemas, field-order governance, and current operating-surface expectations; a passing result supports alignment at the time of validation, not blanket certification or permanent compatibility.
The quick decision matrix below is a synthesis of the same sources and is meant as practical guidance, not an additional standard.
| Scenario | Best starting pattern | Why |
|---|---|---|
| A standards researcher is reading papers, specs, and reports for weeks | LLM Wiki | The main asset is accumulated understanding and cross-source synthesis |
| A software team is handing an active repo to another model or vendor | Project Handoff | The main asset is accurate current state, constraints, and next actions |
| A documentation team needs both research memory and repeatable repo execution | Both | Use the wiki for exploration and `.uai` files for what actually governs work |
| A project is making public release or interoperability claims | Project Handoff + UAI-1 + Validator | Handoff alone is not release evidence; public claims need the exchange and conformance layer |
## Risks, governance, and implications
The biggest risk in an LLM Wiki is not Markdown. It is **epistemic drift**: the system's generated summaries can gradually look more authoritative than the sources they summarize. Karpathy's pattern is explicit that the wiki is LLM-generated and that the human curates raw sources while the LLM maintains the synthetic layer. That is powerful, but it means teams need review discipline, source citations, and a clear distinction between what the source says and what the wiki infers. This is an inference from the architecture itself, but it is strongly supported by the fact that the wiki becomes the main working surface rather than a mere retrieval index.
The biggest risk in Project Handoff is **stale operational truth**. A `constraints.uai` file that is not updated after a policy change, a `progress.uai` file that overstates what is done, or a broken load list can mislead the next model just as badly as a weak chat summary. That is why the spec emphasizes explicit `Loaded Context`, a required first response before broad edits, stop-on-missing-context behavior, and human approval for secrets, destructive operations, production deployments, and third-party data handling. It also says linked `.uai` files are context, not authority to override human requests, system instructions, repository rules, policy, or review requirements.
For UAIX.org, the strongest editorial position is therefore a **boundary claim, not a replacement claim**. Project Handoff should be presented as the repository-context layer for durable AI continuation. LLM Wiki should be acknowledged as a valuable adjacent pattern for living research and synthesis. UAI-1 should remain the public message and evidence envelope. That framing aligns with the Project Handoff page, the AGENTS.md linking proposal, and the wider Standards Fit language that consistently says UAIX records portable exchange evidence while adjacent systems keep their own runtime roles.
A practical editorial recommendation follows from that: write the article so it tells readers to **keep the wiki informative and the handoff governing**. In other words, use the wiki to surface knowledge, patterns, and contradictions; use `AGENTS.md` plus `.uai` files to define what the next AI must load, what it must not violate, and how it should begin work; then point public claims toward UAI-1 and validator evidence instead of implying that any repository note is already a standards-backed release artifact. That reading is consistent with UAIX's current support boundary, including its warning not to describe a project as certified or endorsed by UAIX merely because it uses `AGENTS.md` or `.uai` files.
## Further reading and sources
For publication on UAIX.org, the most useful internal companion pages are [Project Handoff](/en-us/specification/project-handoff/), [AGENTS.md .uai Linking Specification](/en-us/specification/agents-md/), [UAI-1](/en-us/specification/uai-1/), [Standards Fit](/en-us/specification/standards-fit/), and [Validator](/en-us/tools/validator/). Together, those pages establish the practical handoff path, the background syntax proposal, the public exchange boundary, the adjacent-standards boundary, and the release-facing validation layer.
The most relevant external reading for the comparison is Andrej Karpathy's "llm-wiki" gist, which defines the raw-sources → wiki → schema pattern and explains why it differs from ordinary document retrieval workflows. OpenAI's Codex documentation on `AGENTS.md`, customization, and best practices is the clearest primary-source evidence that repository-level agent instructions are becoming a durable interface in modern coding-agent workflows. For background on the retrieval side of the comparison, the original RAG paper remains the canonical source on provenance, updating, and non-parametric memory in knowledge-intensive AI systems.
A concise source list is below for readers who want to go deeper:
- **UAIX Project Handoff** — current draft `AGENTS.md` + `.uai` repository-context format and support boundary.
- **UAIX AGENTS.md .uai Linking Specification** — deeper syntax, record-shape background, and proposal boundary.
- **UAIX UAI-1** — current public message standard for structured AI-to-AI communication.
- **UAIX Standards Fit** — how UAI-1 sits beside A2A, MCP, OpenAPI, JSON Schema, DID/VC, and tracing.
- **UAIX Validator** — release-facing checks for candidate UAI messages and their published support limits.
- **Andrej Karpathy, llm-wiki gist** — the clearest primary description of the LLM Wiki pattern.
- **OpenAI Codex guidance on AGENTS.md and customization** — durable repo guidance, layering, and file-size discipline.
- **Lewis et al., Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks** — background on retrieval, provenance, and updating.
Why This File Exists
This is a memory-system evidence file from aiwikis.org. It is shown here because AIWikis.org is demonstrating the real source files that make the UAIX / LLM Wiki memory system work, not only summarizing those systems after the fact.
Role
This file is memory-system evidence. It records source history, archive transfer, intake disposition, or another piece of provenance that should be retrievable without becoming an unsupported public claim.
Structure
The file is structured around these visible headings: LLM Wiki and UAIX Project Handoff; Assumptions; Executive summary; Why this problem exists; What each pattern optimizes; How they compare in practice; When to use each and when to combine them; Risks, governance, and implications; Further reading and sources. Those headings are retrieval anchors: a crawler or LLM can decide whether the file is relevant before reading every line.
Prompt-Size And Retrieval Benefit
Keeping this material in a separate file reduces prompt pressure because an agent can load this exact unit only when its role, source site, category, or hash is relevant. The surrounding index pages point to it, while this page preserves the full content for audit and exact recall.
How To Use It
- Humans should read the metadata first, then inspect the raw content when they need exact wording or provenance.
- LLMs and agents should use the source site, category, hash, headings, and related files to decide whether this file belongs in the active prompt.
- Crawlers should treat the AIWikis page as transparent evidence and follow the source URL/source reference for authority boundaries.
- Future maintainers should regenerate this page whenever the source hash changes, then review the explanation if the role or structure changed.
Update Requirements
When this source file changes, update the raw source layer, normalized source layer, hash history, this rendered page, generated explanation, source-file inventory, changed-files report, and any source-section index that links to it.
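The trigger for that update chain is a change in the content hash. A hedged sketch of the check is below: the stored-hash format (`sha256:<hex>`) mirrors the metadata on this page, while the function name and surrounding harness are illustrative, not part of the actual pipeline.

```python
# Illustrative change-detection check: recompute the source file's sha256 and
# compare it with the recorded content hash. A mismatch means the rendered
# page, derived layers, and indexes need regeneration.
import hashlib

def needs_regeneration(source_bytes: bytes, recorded_hash: str) -> bool:
    current = "sha256:" + hashlib.sha256(source_bytes).hexdigest()
    return current != recorded_hash

# Example: an unchanged file matches its recorded hash; an edited one does not.
content = b"# LLM Wiki and UAIX Project Handoff\n"
recorded = "sha256:" + hashlib.sha256(content).hexdigest()
```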
Related Pages
Provenance And History
- Current observation: 2026-05-02T01:47:31.8867765Z
- Source origin: current-source-workspace
- Retrieval method: local-source-workspace
- Duplicate group: sfg-159 (primary)
- Historical hash records are stored in data/hashes/source-file-history.jsonl.
Machine-Readable Metadata
{
"title": "LLM Wiki and UAIX Project Handoff",
"source_site": "aiwikis.org",
"source_url": "https://aiwikis.org/",
"canonical_url": "https://aiwikis.org/files/aiwikis/raw-system-archives-uaix-internal-memory-reorg-2026-05-01-docs-llm-wiki-56c70548/",
"source_reference": "raw/system-archives/uaix/internal-memory-reorg/2026-05-01/docs/LLM Wiki and UAIX Project Handoff By ChatGPT.md",
"file_type": "md",
"content_category": "memory-file",
"content_hash": "sha256:56c70548edf467af2a171f719cc62fb457667a288806579405bce62fe21f8f46",
"last_fetched": "2026-05-02T01:47:31.8867765Z",
"last_changed": "2026-04-26T17:02:39.2169903Z",
"import_status": "unchanged",
"duplicate_group_id": "sfg-159",
"duplicate_role": "primary",
"related_files": [
],
"generated_explanation": true,
"explanation_last_generated": "2026-05-02T01:47:31.8867765Z"
}