Layered Publishing Architecture with LLM Wiki, UAI Memory, Project Handoff, and Claude Code
Metadata
| Field | Value |
|---|---|
| Source site | aiwikis.org |
| Source URL | https://aiwikis.org/ |
| Canonical AIWikis URL | https://aiwikis.org/files/aiwikis/raw-system-archives-llmwikis-source-site-report-preservation-2026-05-01-9b58902d/ |
| Source reference | raw/system-archives/llmwikis/source-site-report-preservation/2026-05-01/agent-file-handoff/Archive/2026-05-01/Improvement/llmwikis-integration-promoted/Layered Publishing Architecture with LLM and Claude Code.md |
| File type | md |
| Content category | memory-file |
| Last fetched | 2026-05-02T01:47:31.8867765Z |
| Last changed | 2026-05-01T17:58:03.5739599Z |
| Content hash | sha256:9b58902dbdb401ddb5f58700f18f73b47cecba8a41c58305730d8f0fb9e72136 |
| Import status | unchanged |
| Raw source layer | data/sources/aiwikis/raw-system-archives-llmwikis-source-site-report-preservation-2026-05-01-agent-file-handoff-archi-9b58902dbdb4.md |
| Normalized source layer | data/normalized/aiwikis/raw-system-archives-llmwikis-source-site-report-preservation-2026-05-01-agent-file-handoff-archi-9b58902dbdb4.txt |
Current File Content
Structure Preview
- Layered Publishing Architecture with LLM Wiki, UAI Memory, Project Handoff, and Claude Code
- Executive summary
- Goals and use cases
- Architecture and integration patterns
- Comparison table of alternatives and trade-offs
- Implementation blueprint
- Step-by-step flow
- Claude compatibility layer
- CLAUDE.md
- Claude Code
- Project-scoped MCP configuration
- Hooks-driven orchestration
- Local promotion and validation script
- Example: build a publication-evidence record only for reviewed public claims.
- Workflow sequence
- Data models and artifact schemas
- Sample
.uai/context.uai - LLMWikis Publication Program
- What This Is
- Goals
- Current Scope
- Out of Scope
- Sample llmwikis page frontmatter
- Entity relationship diagram
Raw Version
# Layered Publishing Architecture with LLM Wiki, UAI Memory, Project Handoff, and Claude Code
## Executive summary
The strongest architecture for publishing durable, reviewable content to **llmwikis.org** is a **layered model** rather than a single-memory or single-agent model. In that model, **LLM Wiki** is the deep, long-lived knowledge system; **UAI AI Memory** is the compact, portable working packet; **UAI Project Handoff** is the transfer packet for ownership or execution changes; **Claude Code** is the execution and orchestration layer; and **UAI-1 plus the UAIX validator** provide auditable exchange and evidence when claims or artifacts must be portable beyond a single runtime. This is also the exact separation the official UAIX and LLMWikis materials advocate: LLM Wiki remains expansive and informative, while accepted operational truth is promoted into compact UAI artifacts and, where needed, UAI-1 evidence.
That layered approach is preferable to relying on Claude Code’s native memory alone. Claude Code’s persistent mechanisms are **CLAUDE.md** and **auto memory**; both are loaded as context, not enforced policy, and auto memory is **machine-local**, per repository, with only the first 200 lines or 25 KB of `MEMORY.md` loaded at session start. Anthropic explicitly notes that Claude Code reads `CLAUDE.md`, **not `AGENTS.md`**, although a `CLAUDE.md` file can import `AGENTS.md` so both systems share instructions. That makes Claude’s native memory useful for convenience and local continuity, but insufficient as the cross-team, cross-vendor, reviewable source of truth for publication work and developer handoff.
The main implementation implication is that the **officially documented UAIX public APIs are currently strongest for discovery, OpenAPI export, validation, and mock exchange**, not for writing or mutating AI Memory / Project Handoff state directly. The documented machine routes include `catalog`, `discovery`, `adoption-kit`, `openapi.json`, `validate`, `mock-exchange`, and related evidence routes. The public AI Memory and Project Handoff materials emphasize **file-based starter bundles**, manifests, `AGENTS.md`, `readme.human`, and `.uai` files; they also explicitly keep broader SDK / CLI / hosted-generator claims out of current support. Accordingly, the most rigorous design is to keep **write paths local and repository-based**, and use UAIX’s public routes for **validation and evidence**, while treating any higher-level memory or publication “API” described below as a **local adapter assumption** rather than an official UAIX write surface.
For **llmwikis.org**, the publication model should be **review-gated**, not autonomous live publishing. The public handbook defines strong metadata, trust labels, review dates, source-policy boundaries, agent rules, and security/privacy rules; it also states that AI-assisted drafts and notes are **not public truth by default**, and the current public handbook does **not** claim open editing, public MCP, certification, or automatic publication. In practical terms, the best publication workflow is: Claude drafts or updates wiki-page markdown, humans review, llmwikis metadata and citations are checked, then a maintainer merges or publishes.
The rest of this report therefore recommends a **governed, repository-first publication pipeline**: use LLM Wiki for research and institutional memory; promote reviewed facts into UAI AI Memory or Project Handoff; let Claude Code operate through `CLAUDE.md`, MCP, hooks, and optionally the Agent SDK or GitHub Actions; validate public exchange claims with UAI-1 routes; and publish only review-approved markdown that satisfies LLMWikis metadata, trust, and source-policy rules.
## Goals and use cases
The official materials describe a clean division of labor among the components in this stack. **LLM Wiki** is for durable organizational knowledge: policies, architecture, runbooks, decisions, glossary pages, and long source summaries. **UAI AI Memory** is a lightweight, file-based, reviewable packet for “current enough to act on” context. **Project Handoff** is the specific transfer pattern that coordinates `AGENTS.md`, `readme.human`, and typed `.uai` files. Claude Code is the agentic coding layer that can read code, edit files, run commands, connect to tools through MCP, and automate lifecycle events through hooks.
The resulting use cases map naturally onto the requested publication scenario.
| Goal or use case | Primary artifact | Why this layer is the best fit |
|---|---|---|
| Knowledge base augmentation | LLM Wiki pages, source summaries, index/log updates | LLMWikis positions the wiki as the durable, human-readable, machine-consumable knowledge system with explicit ownership, trust labels, metadata, and governance. |
| Collaborative editing | llmwikis markdown plus review workflow | LLMWikis requires metadata, owner roles, review cycles, and human review for sensitive or authoritative changes. |
| Project continuity across sessions | UAI AI Memory plus Claude `CLAUDE.md` | UAIX says AI Memory is the compact operating memory future actors should load; Anthropic says Claude Code’s native memory is local context, not durable cross-team truth. |
| Developer or vendor handoff | UAI Project Handoff | Project Handoff packages current state, constraints, owners, checks, and transfer context through `AGENTS.md`, `readme.human`, and `.uai` files. |
| Public support or interoperability claims | UAI-1 messages plus validator output | UAIX positions UAI-1 as the portable public exchange and evidence layer; validator-backed support evidence is part of the release path. |
| Automated drafting, editing, and packaging | Claude Code CLI / Agent SDK / GitHub Actions | Claude Code supports tool use, MCP, hooks, project memory, session resume, and GitHub Actions built on the Agent SDK. |
The most important design principle is **promotion discipline**. UAIX says LLM Wiki memory stays background until reviewed and promoted into named UAI package files, docs, code, release notes, roadmap state, or machine artifacts. LLMWikis says AI-assisted drafts and notes are not public truth by default, and that sources, trust labels, freshness, and owner/reviewer signals must stay visible. Together, those rules create a publication model that is strong for both AI productivity and editorial rigor.
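The promotion-discipline rule can be sketched as a small gate. The helper below is hypothetical, not part of any documented UAIX surface; the blocked status names come from the trust states used elsewhere in this report, while the reviewed status names (`authoritative`, `reviewed`) are illustrative assumptions.

```python
# Hypothetical promotion gate: wiki-derived facts are background by default
# and may enter a UAI memory packet only when their source page carries a
# reviewed trust status. Status vocabularies here are assumptions.

REVIEWED_STATUSES = {"authoritative", "reviewed"}
BLOCKED_STATUSES = {"working-draft", "proposal", "needs-review", "historical"}

def promotable(fact: dict) -> bool:
    """A fact is promotable only when its source page is reviewed."""
    status = fact.get("page_status", "")
    if status in BLOCKED_STATUSES:
        return False
    return status in REVIEWED_STATUSES

def promote(facts: list[dict]) -> list[dict]:
    """Filter wiki-derived facts down to the reviewed subset."""
    return [fact for fact in facts if promotable(fact)]
```

A human review step would still sit in front of any write that this filter permits; the gate only prevents unreviewed background from being treated as decisive truth.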
## Architecture and integration patterns
A practical architecture for this stack has five layers: **sources**, **deep wiki**, **portable memory / handoff**, **agent orchestration**, and **publication / evidence**. LLMWikis’ architecture guidance explicitly separates raw sources from synthesized wiki pages, and makes navigation and lifecycle state explicit. UAIX adds the portable packet layer for accepted operational truth and exchange evidence. Anthropic adds the execution layer through Claude Code, MCP, hooks, the Agent SDK, and GitHub Actions.
```mermaid
flowchart LR
A[Raw sources and research inputs] --> B[LLM Wiki deep memory]
B --> C[Reviewed promotion step]
C --> D[UAI AI Memory bundle]
C --> E[UAI Project Handoff packet]
D --> F[Claude Code or Agent SDK]
E --> F
F --> G[Local build and publication adapter]
G --> H[llmwikis.org content PR or CMS draft]
F --> I[UAI-1 record]
I --> J[UAIX validator and conformance evidence]
subgraph Governance
K[Metadata, trust labels, review dates]
L[Human approval gates]
end
B --- K
D --- L
E --- L
H --- L
J --- L
```
The cleanest Claude integration point is to treat **`AGENTS.md` as the canonical handoff front door** for multi-agent portability, and then let **`CLAUDE.md` import it** for Claude Code compatibility. Anthropic documents that Claude Code reads `CLAUDE.md`, not `AGENTS.md`, but supports `@AGENTS.md` imports. That means a repo can keep UAI Project Handoff conventions without maintaining parallel, drifting instruction files.
The best tool-connection pattern is typically **project-scoped MCP** plus **hooks**. Anthropic documents project-scoped MCP servers in `.mcp.json`, designed for version control and team sharing, with approval prompts before use. Anthropic also documents hooks as shell commands, HTTP endpoints, or LLM prompts that receive JSON on lifecycle events; `PreToolUse` can allow, deny, ask, or defer tool calls and even modify inputs, while `PostToolUse` and `Stop` are good places to trigger memory updates, lint, or publication-stage packaging.
The **public UAIX API pattern** is a read/validate/evidence pattern. UAIX documents `catalog`, `discovery`, `adoption-kit`, `openapi.json`, `validate`, and `mock-exchange` routes, and provides code examples in curl, PowerShell, Python, and TypeScript for consuming them. UAI-1 complements MCP and OpenAPI rather than replacing them: MCP handles runtime tool integration; UAI-1 handles the portable public record and validator-backed evidence.
The **publication pattern for llmwikis.org** should be a docs-as-code path with explicit metadata, review, and source policy. LLMWikis’ implementation checklist calls for a repository or wiki platform with exportable files and history, metadata on each page, named owners, trust labels, sensitivity labels, agent instructions, linting, and human review for AI-generated edits before connecting retrieval systems. That is a strong match for a content PR workflow or a CMS-draft workflow fed by repository files.
### Comparison table of alternatives and trade-offs
The table below is a synthesis of the documented capabilities and limits of UAIX, LLMWikis, and Anthropic’s Claude tooling.
| Option | Strengths | Weaknesses | Best fit |
|---|---|---|---|
| **Layered stack: LLM Wiki + UAI Memory + Project Handoff + Claude Code** | Best separation between deep knowledge, portable action context, and agent execution; strongest auditability and publication discipline | More moving parts and more governance work | Recommended default for llmwikis.org publication and cross-team handoff |
| **Claude Code only: `CLAUDE.md` + auto memory** | Fastest to start; excellent local productivity | Auto memory is machine-local and limited at startup; not a portable team handoff format | Solo or short-lived projects |
| **LLM Wiki + Claude Code, no UAI bundles** | Strong deep knowledge base and governed docs | Weaker “current working packet” and weaker structured transfer boundary | Internal documentation-heavy teams with low handoff complexity |
| **UAI Memory / Project Handoff + docs-as-code, no full LLM Wiki** | Strong portable continuity and transfer packet | Poorer institutional memory and source-synthesis depth | Smaller engineering orgs or bounded delivery pipelines |
| **Anthropic Agent SDK or Managed Agents + custom DB memory** | Higher automation potential; SDK and hosted agent paths are supported by Anthropic | More engineering and operational burden; custom memory semantics become your responsibility | Larger product teams building dedicated agent platforms |
## Implementation blueprint
This blueprint assumes three things that are **not currently specified in the public docs**: first, that you control a repository or CMS draft path for llmwikis.org; second, that you will implement a **local memory / publication adapter** for write operations; and third, that Claude is being run via Claude Code CLI, Claude Code in CI, or the Agent SDK. Those are reasonable implementation assumptions because the public docs emphasize file-based UAI bundles and reviewable publication structure, while the documented machine routes are discovery/validation oriented rather than bundle-mutation oriented.
### Step-by-step flow
A robust implementation sequence is:
1. Create the **LLM Wiki starter structure** with `README.md`, `INDEX.md`, `GOVERNANCE.md`, `TRUST_MODEL.md`, policy pages, agent instructions, operations pages, and decision logs.
2. Create the **UAI AI Memory** starter, then add **Project Handoff** files when ownership or delivery responsibility moves. Minimum useful handoff includes `AGENTS.md`, `readme.human`, `.uai/context.uai`, `.uai/stack.uai`, and `.uai/constraints.uai`.
3. Add `CLAUDE.md` that imports `AGENTS.md`, so Claude Code loads the same handoff truth.
4. Add a **project-scoped MCP server** in `.mcp.json` for your local “memory/publish adapter” service. Use this service to expose tools like `search_wiki`, `promote_to_memory`, `draft_handoff`, and `create_publication_candidate`.
5. Add **hooks** for `PostToolUse`, `Stop`, or `TaskCompleted` to lint metadata, update changelogs, or stage publication candidates. Use `PreToolUse` to guard dangerous writes or publishing actions.
6. Where public support or interoperability claims are involved, generate a **UAI-1 record** and validate it against the UAIX validator.
7. Publish to llmwikis.org only after source-policy, metadata, trust-label, and reviewer checks pass.
### Claude compatibility layer
```markdown
# CLAUDE.md
@AGENTS.md
## Claude Code
- Treat AGENTS.md and loaded .uai files as the decisive working packet.
- Treat wiki/ content as background until facts are reviewed and promoted.
- Before editing public content, verify:
1. page metadata,
2. trust status,
3. citation coverage,
4. human review requirement,
5. publish target.
- For llmwikis.org content, prefer narrow diffs and draft PRs over direct publish.
```
Anthropic explicitly documents this `CLAUDE.md` import pattern for repositories that already use `AGENTS.md`.
### Project-scoped MCP configuration
```json
{
"mcpServers": {
"uai-memory": {
"type": "http",
"url": "https://memory.example.internal/mcp",
"headers": {
"Authorization": "Bearer ${UAI_MEMORY_TOKEN}"
}
},
"llmwiki-publisher": {
"type": "http",
"url": "https://publisher.example.internal/mcp",
"headersHelper": "/opt/bin/get-llmwiki-auth-headers.sh"
}
}
}
```
This is an **implementation assumption**, not an official UAIX or LLMWikis endpoint definition. It follows Anthropic’s documented MCP patterns for project-scoped `.mcp.json`, HTTP transport, headers, and dynamic header helpers.
### Hooks-driven orchestration
```json
{
"$schema": "https://json.schemastore.org/claude-code-settings.json",
"hooks": {
"PostToolUse": [
{
"matcher": "Write|Edit",
"hooks": [
{
"type": "http",
"url": "https://memory.example.internal/hooks/post-edit"
}
]
}
],
"PreToolUse": [
{
"matcher": "Write",
"hooks": [
{
"type": "http",
"url": "https://memory.example.internal/hooks/pre-write"
}
]
}
],
"Stop": [
{
"matcher": ".*",
"hooks": [
{
"type": "http",
"url": "https://publisher.example.internal/hooks/session-stop"
}
]
}
]
}
}
```
This configuration is justified by Anthropic’s documented hook lifecycle, HTTP hook JSON handling, and `PreToolUse` decision control. Because hooks run with the user’s permissions when they are command hooks, and HTTP hooks still influence execution, they should be treated as privileged integration points.
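Server-side, the `pre-write` endpoint above would apply a policy before a write is allowed. The sketch below is a hypothetical decision function for such an endpoint; the field names (`tool_name`, `tool_input`, `file_path`) and the simplified `decision`/`reason` response shape are assumptions, not the exact Claude Code hook schema.

```python
# Hypothetical PreToolUse policy for a hook endpoint: deny direct writes
# into publication paths unless human review has been recorded. The event
# and response shapes are simplified assumptions.

PROTECTED_PREFIXES = ("wiki/", "public/")

def pre_write_decision(event: dict, review_approved: bool = False) -> dict:
    """Decide whether a Write into a protected path may proceed."""
    tool = event.get("tool_name", "")
    path = event.get("tool_input", {}).get("file_path", "")
    if tool == "Write" and path.startswith(PROTECTED_PREFIXES) and not review_approved:
        return {
            "decision": "deny",
            "reason": f"{path} requires human review before publish",
        }
    return {"decision": "allow", "reason": ""}
```

Keeping the policy on the server, rather than in shell hooks, matches the report’s preference for narrow HTTP adapters with server-side authorization and audit logging.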
### Local promotion and validation script
```python
import json
import pathlib
import urllib.request
ROOT = pathlib.Path(".")
WIKI_PAGE = ROOT / "wiki" / "architecture" / "SYSTEM_OVERVIEW.md"
MEMORY_FILE = ROOT / ".uai" / "context.uai"
def promote_summary_to_memory(summary_text: str) -> None:
    """Append a reviewed fact to the local UAI memory file."""
    existing = MEMORY_FILE.read_text(encoding="utf-8")
    block = f"\n## Promoted Facts\n- {summary_text.strip()}\n"
    MEMORY_FILE.write_text(existing + block, encoding="utf-8")

def fetch_uai_catalog() -> dict:
    """Read the documented UAIX catalog route (read-only)."""
    with urllib.request.urlopen("https://uaix.org/wp-json/uaix/v1/catalog") as resp:
        return json.load(resp)

def validate_message(message: dict) -> dict:
    """POST a UAI-1 message to the documented UAIX validate route."""
    payload = json.dumps({"message": message, "format": "result"}).encode("utf-8")
    req = urllib.request.Request(
        "https://uaix.org/wp-json/uaix/v1/validate",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
# Example: build a publication-evidence record only for reviewed public claims.
message = {
"uai_version": "1.0",
"profile": "uai.intent.request.v1",
"message_id": "msg-llmwikis-2026-05-01-001",
"source": {"type": "agent", "id": "claude-code", "label": "Claude Code"},
"target": {"type": "service", "id": "llmwikis-publisher", "label": "LLMWikis Publisher"},
"conversation": {"conversation_id": "conv-llmwikis-pub-001", "sequence": 1},
"delivery": {"mode": "sync", "priority": "routine", "reply_requested": True, "ack_required": False},
"trust": {"channel": "private-api", "auth_scheme": "bearer-token"},
"body": {
"intent": "publish-draft",
"subject": "llmwikis-page",
"requested_profile": "uai.intent.response.v1",
"parameters": {"path": "comparisons/llm-wiki-plus-uai.md"},
"constraints": ["reviewed-only", "no-secrets", "citation-required"],
"response_profile": "uai.intent.response.v1"
}
}
```
The `validate` call is directly grounded in UAIX’s documented routes and example code. The publication message itself is an **architectural example**, not an official published llmwikis or UAIX write contract.
### Workflow sequence
```mermaid
sequenceDiagram
participant Author as Human maintainer
participant Claude as Claude Code
participant Wiki as LLM Wiki repo
participant Memory as UAI Memory files
participant Handoff as Project Handoff files
participant UAIX as UAIX validator
participant Publish as llmwikis publication adapter
Author->>Claude: Research topic and draft content
Claude->>Wiki: Read smallest useful page set
Wiki-->>Claude: Reviewed pages plus trust labels
Claude->>Memory: Promote only accepted current facts
Claude->>Handoff: Update AGENTS.md / .uai where project truth changed
Claude->>UAIX: Validate public exchange or publication evidence
UAIX-->>Claude: Conformance result
Claude->>Publish: Create PR or CMS draft with metadata and citations
Publish-->>Author: Human review request
Author->>Publish: Approve and publish
```
## Data models and artifact schemas
UAIX and LLMWikis together imply three distinct schema families: **UAI exchange records**, **UAI memory / handoff artifacts**, and **LLM Wiki page metadata**. They should remain distinct. UAI-1 records are public exchange envelopes with identity, workflow continuity, trust context, business meaning, provenance, integrity, and extensions. UAI Memory / Project Handoff artifacts are repository files built for continuity and transfer. LLMWikis page schemas are metadata-rich content pages for governed knowledge and publication.
UAIX’s public UAI-1 examples show a consistent envelope with `uai_version`, `profile`, `message_id`, `source`, `target`, `conversation`, `delivery`, `trust`, `body`, `provenance`, and `integrity`. The request, response, and task-status examples demonstrate how async flows are made explicit through `task_ref`, `task_id`, `status_profile`, `status_url`, and task-state messages.
For handoff artifacts, UAIX documents both a **Markdown context profile** and a **JSON information profile** for `.uai` files. It also documents required `AGENTS.md` structure, standard `.uai` types such as `context`, `stack`, `architecture`, `decisions`, `constraints`, `style`, `data`, `progress`, and `errors`, and a minimum project bundle rooted in `AGENTS.md` and `readme.human`.
For llmwikis publication content, the highest-value metadata fields are `title`, `owner`, `status`, `last_reviewed`, `review_cycle`, `audience`, `sensitivity`, `agent_use`, `human_review_required_for_updates`, and `related`. LLMWikis treats those fields as the core mechanism by which an agent can determine what a page is, whether it is current, and what it is permitted to do with it.
### Sample `.uai/context.uai`
```markdown
---
uaix: "1.0"
type: context
title: "LLMWikis Publication Program"
created: "2026-05-01"
updated: "2026-05-01"
author: "Human + Claude"
version: 1
---
# LLMWikis Publication Program
## What This Is
A governed publication workflow for drafting, reviewing, and publishing content to llmwikis.org using Claude Code, LLM Wiki source repositories, and UAI Memory / Project Handoff artifacts.
## Goals
- Publish source-backed comparative pages
- Preserve operational continuity across agent and human sessions
- Keep public claims reviewable and citation-backed
## Current Scope
- Draft files and PR proposals
- Metadata lint and source checks
- Optional UAI-1 publication evidence for public claim flows
## Out of Scope
- Autonomous direct publishing without human review
- Storage of secrets or raw private data
```
This structure follows the documented `.uai` pattern and standard type model.
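A consumer of Markdown-profile `.uai` files like the one above needs only to split the frontmatter from the body and check the declared type. The sketch below is a minimal, hand-rolled parser under the assumption that frontmatter stays flat key-value; a real implementation would use a YAML library, and the standard type set is taken from this report.

```python
# Minimal .uai Markdown-profile reader (assumption: flat key/value
# frontmatter between "---" delimiters; no nested YAML).

STANDARD_UAI_TYPES = {
    "context", "stack", "architecture", "decisions",
    "constraints", "style", "data", "progress", "errors",
}

def parse_uai(text: str) -> tuple[dict, str]:
    """Return (frontmatter fields, markdown body) for one .uai file."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        raise ValueError("missing frontmatter delimiter")
    end = lines[1:].index("---") + 1  # index of the closing delimiter
    fields = {}
    for line in lines[1:end]:
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip().strip('"')
    if fields.get("type") not in STANDARD_UAI_TYPES:
        raise ValueError(f"unknown .uai type: {fields.get('type')!r}")
    return fields, "\n".join(lines[end + 1:])
```

Rejecting unknown types at parse time keeps the standard type model enforced at the boundary, before any agent acts on the packet.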
### Sample llmwikis page frontmatter
```yaml
---
title: "LLM Wiki and UAI Memory with Claude Code"
owner: "Documentation lead"
status: "working-draft"
last_reviewed: 2026-05-01
review_cycle: monthly
audience: internal
sensitivity: internal
agent_use: allowed-with-citation
human_review_required_for_updates: true
related:
- comparisons/LLM_WIKI_VS_AI_MEMORY.md
- architecture/UAI_LAYERING.md
- policies/SOURCE_POLICY.md
---
```
This frontmatter follows the LLMWikis metadata standard and trust model.
### Entity relationship diagram
```mermaid
erDiagram
SourceDocument ||--o{ WikiPage : informs
WikiPage ||--o{ PublicationCandidate : generates
WikiPage ||--o{ EvidenceLog : records
MemoryBundle ||--o{ HandoffPacket : specializes
MemoryBundle ||--o{ UAIMessage : references
HandoffPacket ||--o{ UAIArtifactFile : contains
PublicationCandidate ||--o{ UAIMessage : may_emit
PublicationCandidate }o--|| Reviewer : approved_by
WikiPage }o--|| Owner : maintained_by
SourceDocument {
string source_id
string path
string source_type
string checksum
string sensitivity
}
WikiPage {
string page_id
string path
string title
string owner
string status
date last_reviewed
string sensitivity
string agent_use
}
MemoryBundle {
string bundle_id
string name
string lifecycle
string trust_boundary
string manifest_fingerprint
}
HandoffPacket {
string handoff_id
string root_agents_path
string receiver_brief_path
string acceptance_state
}
UAIMessage {
string message_id
string profile
string source_id
string target_id
string task_ref
}
PublicationCandidate {
string candidate_id
string target_path
string publish_state
string reviewer
}
EvidenceLog {
string evidence_id
string source_path
string final_path
string checksum
string disposition
}
```
## Security, failure modes, and governance
The security model for this stack should be built around **three separate trust boundaries**. First, LLMWikis says secrets, credentials, raw customer or regulated data, sensitive legal material, and private transcripts should not enter the wiki without explicit governance. Second, UAIX says the bundle should be chosen by trust boundary, not by name alone, and external handoffs or wiki exports require redaction and approval. Third, Anthropic says Claude Code hooks and project tooling can be powerful enough to modify the filesystem and external services, so they need explicit approval boundaries and sanitization.
The core governance rule should therefore be: **wiki memory is background; portable memory is decisive; publication is reviewed**. That rule is directly aligned with UAIX’s “promotion requires review” rule and LLMWikis’ source-policy boundary. It is the single most important safeguard against knowledge rot, hallucinated continuity, or accidental publication of unreviewed inference.
A rigorous implementation should specifically mitigate the following failure modes:
| Failure mode | Why it happens | Mitigation |
|---|---|---|
| Claude follows stale or draft wiki content as if authoritative | Wiki pages lack trust-state enforcement or metadata routing | Require agent reading order through README, INDEX, TRUST_MODEL, and GOVERNANCE; block direct reliance on `working-draft`, `proposal`, `needs-review`, or `historical` pages for operational truth |
| Local Claude memory diverges from team truth | Auto memory is machine-local and contextual | Treat auto memory as convenience only; require updates to AI Memory / Project Handoff files for durable team truth |
| Handoff packet becomes too large or noisy | Teams dump chats and notes into it | Keep UAI memory compact and append-first where appropriate; send research and archive material to the wiki instead |
| Dangerous automation via hooks or MCP | Hooks and external tools have high privilege | Use HTTP hooks with explicit server-side policy; validate inputs; avoid secrets and path traversal; require approval on publish and destructive actions |
| Public page overclaims support or evidence | Draft content is published as if canonical | Require source-policy labels, citations, reviewer date, and validator-backed evidence for public interoperability claims |
| Publication leaks secrets or raw private data | Private source material is exported directly | Publish only sanitized exports from reviewed pages and redacted UAI bundles |
The rows above are synthesis, but each one is grounded in documented risks or control points from the reviewed sources.
A particularly important operational detail is that **Claude Code project-scoped MCP servers are meant for version-controlled team sharing, but still require approval before use**, and **command hooks run with the user’s full permissions**. That means teams should prefer narrow, internal HTTP adapters with server-side authorization and audit logging over broad shell hooks or arbitrary third-party MCP servers. Anthropic also warns to be especially careful with MCP servers that can fetch untrusted content because they increase prompt-injection risk.
## Testing, deployment, and metrics
The testing model should combine **WikiOps**, **handoff verification**, and **publication release checks**. LLMWikis defines an operating loop of ingest, query, and lint. UAIX says a successful handoff is one where the next AI can name what it loaded, what it trusts, what it will not do, and which checks will prove the work. UAIX’s WordPress publication track defines release readiness in terms of installable package outputs, correctly rendered canonical records, aligned discovery files and references, and a release trail that states exactly what changed.
A practical validation plan should include the following layers.
| Test layer | What to test | Success condition | Evidence |
|---|---|---|---|
| Metadata lint | Required llmwikis frontmatter, owner, trust status, sensitivity, review date | No missing required fields | Lint output and PR status |
| Source policy checks | Citations for sensitive or time-sensitive claims | All required claim types have source coverage | Citation report |
| Handoff acceptance | New agent reads `AGENTS.md`, `readme.human`, and `.uai` files; summarizes correctly; confirms constraints | Summary and intended checks are correct before edits begin | Session transcript or CI artifact |
| UAI validation | Exchange records or publication-evidence packets pass schema/validator | UAIX validator returns passing conformance result | `uai-conformance-result.json` |
| Wiki graph health | Broken links, orphan pages, contradictions, stale dates | No unresolved structural failures | Lint log |
| Publication dry run | Draft page renders correctly with metadata and links | Reviewable publication candidate exists | PR preview / CMS draft preview |
This plan is directly aligned with the official operating models rather than being invented ad hoc.
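The metadata-lint layer in the table above can be implemented with a few lines. The sketch below checks the required llmwikis frontmatter fields named earlier in this report; the flat-dict input shape is an assumption about how frontmatter has already been parsed.

```python
# Metadata lint sketch: report every required llmwikis frontmatter field
# that is missing or empty. Field list follows the report's metadata
# standard; the parsed-dict input shape is an assumption.

REQUIRED_FIELDS = (
    "title", "owner", "status", "last_reviewed", "review_cycle",
    "audience", "sensitivity", "agent_use",
    "human_review_required_for_updates", "related",
)

def lint_frontmatter(fields: dict) -> list[str]:
    """Return one message per missing or empty required field."""
    problems = []
    for name in REQUIRED_FIELDS:
        value = fields.get(name)
        if value is None or value == "" or value == []:
            problems.append(f"missing required field: {name}")
    return problems
```

Wiring this into CI (failing the PR when the list is non-empty) gives the “no missing required fields” success condition a mechanical check.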
### Deployment and maintenance checklist
Before first deployment, ensure that the following are true:
- LLM Wiki starter structure exists, with README, INDEX, GOVERNANCE, TRUST_MODEL, CONTRIBUTING, CHANGELOG, policy pages, and agent guidance.
- UAI AI Memory files exist, with Project Handoff files added when responsibility moves.
- `CLAUDE.md` imports `AGENTS.md`.
- `.mcp.json` only contains approved project-scoped servers.
- Hook definitions are reviewed for unsafe commands, path traversal, and secret exposure.
- Publication candidates require source labels, trust labels, and last-reviewed metadata.
- Secrets, tokens, raw customer data, and private logs are excluded from wiki and export paths.
- If UAI public-claim packets are used, validator runs are attached to release evidence.
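The `.mcp.json` item in the checklist can be enforced mechanically. A minimal sketch, assuming a project-scoped `mcpServers` map and a team-maintained allowlist; the server names here are hypothetical, not part of any documented UAIX or Claude Code requirement:

```python
import json

# Illustrative pre-deployment guard: flag any MCP server in .mcp.json that
# is not on the team's allowlist. Server names below are hypothetical.
APPROVED_SERVERS = {"wiki-reader", "handoff-validator"}

def check_mcp_config(raw_json: str) -> list[str]:
    """Return the names of configured MCP servers not on the allowlist."""
    config = json.loads(raw_json)
    servers = set(config.get("mcpServers", {}).keys())
    return sorted(servers - APPROVED_SERVERS)

sample = '{"mcpServers": {"wiki-reader": {}, "unvetted-tool": {}}}'
print(check_mcp_config(sample))  # → ['unvetted-tool']
```

A non-empty result would block the deployment checklist from being marked complete until the unapproved server is removed or vetted.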
### Recommended metrics and KPIs
The most useful KPIs for this stack are not generic "AI productivity" numbers. They should measure whether the system is staying **portable, reviewable, and publishable**:
- **Promotion accuracy rate**: percentage of promoted facts later reverted because wiki background was misread as decisive truth.
- **Handoff readiness rate**: percentage of handoffs where the receiving agent can correctly summarize current state, constraints, intended touchpoints, and checks without re-reading prior chat history.
- **Metadata completeness**: share of llmwikis pages with all required metadata fields populated.
- **Review freshness**: share of authoritative pages still within their review cycle.
- **Citation coverage**: percentage of public or time-sensitive claims in publication candidates with source-near citations.
- **Validation pass rate**: percentage of UAI-1 packets that pass validator checks on first attempt.
- **Publication lead time**: median time from draft creation to approved llmwikis publication.
- **Stale-memory drift rate**: count of contradictions between AI Memory / Project Handoff packets and current reviewed wiki pages.
- **Security exception count**: number of attempted promotions or publications blocked due to secrets, privacy, or unsupported-claim rules.
- **Reuse rate**: percentage of answers or drafts that cite or reuse existing reviewed wiki pages rather than recreating knowledge from scratch.
These KPIs are recommendations, but they are directly informed by the review, trust, lint, and verification disciplines published by UAIX and LLMWikis.
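Most of the rate-style KPIs above reduce to the same computation: the share of records satisfying a predicate. A minimal sketch, where the record field names (`metadata_complete`, `within_review_cycle`) are assumptions chosen for this example rather than a defined schema:

```python
# Illustrative KPI roll-up over page records; field names are assumptions.
def rate(records, predicate):
    """Share of records satisfying predicate, as a fraction of the total."""
    if not records:
        return 0.0
    return sum(1 for r in records if predicate(r)) / len(records)

pages = [
    {"metadata_complete": True,  "within_review_cycle": True},
    {"metadata_complete": True,  "within_review_cycle": False},
    {"metadata_complete": False, "within_review_cycle": True},
    {"metadata_complete": True,  "within_review_cycle": True},
]

metadata_completeness = rate(pages, lambda p: p["metadata_complete"])
review_freshness = rate(pages, lambda p: p["within_review_cycle"])
print(f"metadata completeness: {metadata_completeness:.0%}")  # → 75%
print(f"review freshness: {review_freshness:.0%}")            # → 75%
```

The same helper covers citation coverage, validation pass rate, and reuse rate once the per-record predicate is defined for each KPI.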
## Publication outline for llmwikis.org and open questions
Because llmwikis.org is a governed handbook-style site with explicit source status, metadata, and trust boundaries, the best publication artifact is not a generic blog post. It is a **source-backed handbook or comparison page** that makes assumptions visible, links canonical UAIX pages where appropriate, and preserves the non-normative boundary when talking about UAI. LLMWikis explicitly says UAIX remains canonical for UAI-1, Project Handoff, schemas, validator behavior, roadmap, and changelog, while LLMWikis serves as the durable implementation and explanation layer.
### Sample web content outline for llmwikis.org
| Section | Purpose | Notes |
|---|---|---|
| Title and summary | Explain the stack in one paragraph | Include "Last reviewed," owner, and source status |
| Why combine the layers | Explain LLM Wiki vs AI Memory vs Project Handoff vs Claude Code | Link canonical UAIX pages for UAI claims |
| System architecture | Show the layered workflow diagram | Mark assumptions for unpublished write APIs |
| Deep memory and promotion rules | Explain background vs decisive truth | Cite source policy and UAIX promotion rule |
| Claude integration | Document `CLAUDE.md`, MCP, hooks, and CI patterns | Clarify local vs project scope |
| Data schemas | Show `.uai`, frontmatter, and UAI-1 envelope samples | Keep public and private schemas distinct |
| Security and review gates | Document what cannot be stored or auto-published | Use llmwikis security/privacy language |
| Operational checklist | Provide a practical adoption path | End with lint, review, and validator steps |
### Sample presentation slide deck
| Slide | Title | Core message |
|---|---|---|
| Slide A | Why this stack exists | Deep memory, portable truth, and agent execution solve different failure modes |
| Slide B | LLM Wiki versus AI Memory | Deep institutional knowledge versus bounded working packet |
| Slide C | Project Handoff as the transfer layer | `AGENTS.md`, `readme.human`, and `.uai` files create predictable takeover |
| Slide D | Claude Code integration points | `CLAUDE.md`, MCP, hooks, Agent SDK, and GitHub Actions |
| Slide E | Promotion and publication workflow | Background wiki → reviewed promotion → portable packet → publication candidate |
| Slide F | Security and privacy boundaries | No secrets, no raw private data, no auto-publish of drafts |
| Slide G | Validation and evidence | Use UAI-1 plus validator only when public exchange or support evidence is needed |
| Slide H | KPIs and rollout plan | Measure readiness, freshness, validation, and review velocity |
### Open questions and limitations
A few important items remain intentionally marked as assumptions because the public docs do not make them explicit.
The first is **write APIs for UAI Memory and Project Handoff**. The public docs reviewed here clearly document file-based bundles, package wizard outputs, `AGENTS.md` / `.uai` structures, public machine routes, and validator workflows, but they do not document a stable public write API for mutating memory or handoff artifacts. I therefore recommend a local repository adapter or internal service for write operations, and I would not describe such write endpoints as official UAIX support without additional canonical documentation.
The second is **llmwikis.org publication plumbing**. The public handbook documents structure, governance, source policy, metadata, and security/privacy, but it does not, in the material reviewed here, document a public authoring API or direct publish endpoint. That means the safest current recommendation is a PR-based or CMS-draft workflow rather than direct model-driven publishing.
The third is **how far to automate evidence generation**. UAIX strongly supports validator-backed evidence for UAI-1 records, but it also keeps broader SDK, CLI, hosted generator, certification, and endorsement language outside current support boundaries. For that reason, I recommend using UAI-1 evidence selectively (for public claims, external exchange, and published support assertions) rather than wrapping every internal wiki edit in a UAI-1 record.
Why This File Exists
This is a memory-system evidence file from aiwikis.org. It is shown here because AIWikis.org is demonstrating the real source files that make the UAIX / LLM Wiki memory system work, not only summarizing those systems after the fact.
Role
This file is memory-system evidence. It records source history, archive transfer, intake disposition, or another piece of provenance that should be retrievable without becoming an unsupported public claim.
Structure
The file is structured around these visible headings: Layered Publishing Architecture with LLM Wiki, UAI Memory, Project Handoff, and Claude Code; Executive summary; Goals and use cases; Architecture and integration patterns; Comparison table of alternatives and trade-offs; Implementation blueprint; Step-by-step flow; Claude compatibility layer. Those headings are retrieval anchors: a crawler or LLM can decide whether the file is relevant before reading every line.
Prompt-Size And Retrieval Benefit
Keeping this material in a separate file reduces prompt pressure because an agent can load this exact unit only when its role, source site, category, or hash is relevant. The surrounding index pages point to it, while this page preserves the full content for audit and exact recall.
How To Use It
- Humans should read the metadata first, then inspect the raw content when they need exact wording or provenance.
- LLMs and agents should use the source site, category, hash, headings, and related files to decide whether this file belongs in the active prompt.
- Crawlers should treat the AIWikis page as transparent evidence and follow the source URL/source reference for authority boundaries.
- Future maintainers should regenerate this page whenever the source hash changes, then review the explanation if the role or structure changed.
Update Requirements
When this source file changes, update the raw source layer, normalized source layer, hash history, this rendered page, generated explanation, source-file inventory, changed-files report, and any source-section index that links to it.
Related Pages
Provenance And History
- Current observation: 2026-05-02T01:47:31.8867765Z
- Source origin: current-source-workspace
- Retrieval method: local-source-workspace
- Duplicate group: sfg-280 (primary)
- Historical hash records are stored in data/hashes/source-file-history.jsonl.
Machine-Readable Metadata
{
"title": "Layered Publishing Architecture with LLM Wiki, UAI Memory, Project Handoff, and Claude Code",
"source_site": "aiwikis.org",
"source_url": "https://aiwikis.org/",
"canonical_url": "https://aiwikis.org/files/aiwikis/raw-system-archives-llmwikis-source-site-report-preservation-2026-05-01-9b58902d/",
"source_reference": "raw/system-archives/llmwikis/source-site-report-preservation/2026-05-01/agent-file-handoff/Archive/2026-05-01/Improvement/llmwikis-integration-promoted/Layered Publishing Architecture with LLM and Claude Code.md",
"file_type": "md",
"content_category": "memory-file",
"content_hash": "sha256:9b58902dbdb401ddb5f58700f18f73b47cecba8a41c58305730d8f0fb9e72136",
"last_fetched": "2026-05-02T01:47:31.8867765Z",
"last_changed": "2026-05-01T17:58:03.5739599Z",
"import_status": "unchanged",
"duplicate_group_id": "sfg-280",
"duplicate_role": "primary",
"related_files": [
],
"generated_explanation": true,
"explanation_last_generated": "2026-05-02T01:47:31.8867765Z"
}