AIWikis.org
AIWikis.org is the transparent public demonstration and documentation hub for source-governed AI memory systems. It shows how UAI AI Memory and LLM Wiki files are actually structured, where they came from, why they exist, and how focused pages help humans, crawlers, LLMs, and agents retrieve the right knowledge without carrying a whole site in the prompt.
The first two source systems are kept visibly separate:
- UAIX.org for UAI-1, AI Memory package guidance, Project Handoff, Agent File Handoff, AGENTS.md, readme.human, .uaicontext packets, public discovery files, and UAIX-owned package evidence.
- LLMWikis.org for practical LLM Wiki structure, governance, starter templates, metadata, trust labels, source policy, agent rules, and LLM Wiki handbook patterns.
AIWikis.org does not replace either source. UAIX.org remains canonical for UAI-1 public claims. LLMWikis.org remains the handbook source for LLM Wiki patterns. AIWikis shows the working files, provenance, generated explanations, update history, and dogfood evidence.
For detailed long-term memory, read the reviewed UAIX.org Source Memory Guide, LLMWikis.org Source Memory Guide, Cross-Site Memory Atlas, Source Memory Operations Playbook, Memory Coverage Matrix, and Claim Boundary Register. These pages explain what each source owns, how AIWikis preserves it, what update triggers matter, how claims move across the sites, what coverage exists, and which support claims must stay out of scope.
Source Systems
- UAIX.org separates public content, real UAI system files, demonstrations, and provenance.
- LLMWikis.org separates public content, real UAI / LLM Wiki system files, demonstrations, and provenance.
- Global File Index lists the current imported source files across UAIX.org, LLMWikis.org, and AIWikis.org.
- Reports lists sync logs, source-file inventories, generated explanations, duplicate resolution, archive recovery, broken-link notes, and content maps.
What The Demonstration Shows
- Many focused pages instead of one giant memory dump.
- Stable URLs for individual files and source-system indexes.
- Machine-readable metadata beside human explanations.
- Source URL, source reference, retrieval time, content hash, and update history.
- Exact source-side raw and normalized layers, with public-path redaction where needed to avoid exposing local machine paths.
- Clear relationship mapping between public content, memory infrastructure, prompts, specifications, indexes, and archive evidence.
- Generated explanations that say what each file is, why it exists, how it supports UAI AI Memory or LLM Wiki use, and when maintainers should update it.
- Current UAIX AI Memory Wizard evidence, including generated system profiles, receiver briefs, startup packets, optional LLM Wiki memory plans, and the source-authority, evidence-ledger, conflict-resolution, risk, and rollback rules that make a memory layer operable rather than empty.
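As a minimal sketch of the per-file provenance metadata described above, a record pairing a source URL with a retrieval time, content hash, and update history might look like the following. The field names and the example URL are illustrative assumptions, not the published AIWikis.org schema:

```python
import hashlib
from datetime import datetime, timezone

def provenance_record(source_url: str, content: bytes) -> dict:
    """Build a machine-readable provenance record for one imported file.

    Field names are illustrative assumptions, not the actual AIWikis schema.
    """
    return {
        "source_url": source_url,  # where the file was retrieved from
        "retrieved_at": datetime.now(timezone.utc).isoformat(),  # retrieval time
        "content_hash": "sha256:" + hashlib.sha256(content).hexdigest(),
        "update_history": [],  # appended to on each re-sync
    }

record = provenance_record(
    "https://uaix.org/AGENTS.md",  # hypothetical source path
    b"# AGENTS.md\nExample file body\n",
)
```

Keeping the hash beside the source URL lets a reader, crawler, or agent verify that a mirrored page still matches its source without re-fetching the whole site.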
Start
- Start with UAIX.org or LLMWikis.org.
- Read UAIX.org Source Memory Guide and LLMWikis.org Source Memory Guide when you need detailed source-system memory before exploring generated file pages.
- Read Cross-Site Memory Atlas for the authority, artifact, evidence-flow, and claim-routing map across UAIX.org, LLMWikis.org, and AIWikis.org.
- Read Source Memory Operations Playbook before maintaining the cross-site memory layer.
- Read Memory Coverage Matrix to see which domains are covered, partially covered, source-only, or blocked.
- Read Claim Boundary Register before writing public claims, package notes, roadmap language, or AI handoff summaries.
- Use Global File Index when you already know the file or role you need.
- Use UAI, LLM Wiki, AI Memory, Provenance, and Prompt-Size Minimization for concept-level navigation.
- Read Start Here for the human and AI-agent path.
- Read Recovered Content Index for the compact map of reprocessed .md and .uaisource summaries.
- Read LLM Wiki Index, UAI Reference Index, AI Memory Systems Index, and Best Practices Index for topic-scoped recovery paths.
- Read Source Provenance Index and Topic Index when you need source/domain or tag-based navigation.
- Read What Is an LLM Wiki? for the concept.
- Read Autonomous Wiki Custodian for the agent-maintained knowledge model.
- Read Deep Cognitive Archive for the three-layer memory model that separates public pages, compact handoff packets, and deep source-side reasoning.
- Read System Memory Archive for the cross-site archive and final-memory flow.
- Read Intake Outcome Ledger to see what happened to recently digested dropped files and where their reviewed change documentation lives.
- Read Roadmap for current, next, later, and blocked work across AIWikis documentation, source sync, package evidence, and live deployment readiness.
- Read Source Packages for upstream package ownership and dogfood attribution.
- Read Source Map before making source claims.