LlmWikis Human Briefing
Updated: 2026-05-06
Metadata
| Field | Value |
|---|---|
| Source site | llmwikis.org |
| Source URL | https://llmwikis.org/ |
| Canonical AIWikis URL | https://aiwikis.org/llmwikis/uai-system/files/readme-human-da7c8dee/ |
| Source reference | readme.human |
| File type | human |
| Content category | uai-system |
| Last fetched | 2026-05-06T17:58:24.5168382Z |
| Last changed | 2026-05-06T17:46:47.2822315Z |
| Content hash | sha256:da7c8dee49350560720f87a790ad3ebd59cd2e75ec659a766bc7acaea10b2b57 |
| Import status | unchanged |
| Raw source layer | data/sources/llmwikis/readme-human-da7c8dee4935.human |
| Normalized source layer | data/normalized/llmwikis/readme-human-da7c8dee4935.txt |
Current File Content
Structure Preview
- LlmWikis Human Briefing
- What You Need To Know
- Hot And Cold Memory
- How The AI Reads This Project
- Things The AI Will Defend
- Things Humans Should Make Explicit
- What Not To Assume
- Useful Human Steering Prompt
Raw Version
Local absolute paths are redacted in this public view. The source hash and source-side raw layer are based on the unredacted source file.
# LlmWikis Human Briefing
Updated: 2026-05-06
This file is the human-facing companion to `AGENTS.md`. `AGENTS.md` tells an AI how to work in this repo. `readme.human` tells people what the AI needs them to make explicit before they steer the system.
## What You Need To Know
- LlmWikis.org is a prelaunch handbook for building personal and team LLM Wikis.
- The public pattern is: immutable `raw/` sources, compiled `wiki/` pages, compact agent rules, deterministic `index.md` and `log.md`, and the `ingest` / `query` / `lint` loop.
- `/guides/canonical-ai-memory/` is current non-normative handbook guidance for Canonical AI Memory: raw source preservation, reviewed LLM Wiki memory, optional derived graph projections, compact AI Memory, Project Handoff transfer context, and execution agents. UAIX remains canonical for AI Memory and Project Handoff definitions.
- `/guides/knowledge-graphs-for-llm-wikis/` is current handbook guidance for graph-ready LLM Wikis: layered source/wiki/schema/derived-graph architecture, stable IDs, claim/source-span states, contradiction handling, hybrid retrieval, graph lint, and bounded GraphRAG.
- The former staged Improvement drafts for Codex, AI Memory, Project Handoff, and review-gated publication have been promoted into real seeded public pages and supporting guidance.
- `/for-ai-agents/` now carries the short Agentic Orchestration Mode, and `/guides/llm-wiki-agentic-orchestration/` carries the deeper how-to guide. Keep both practical and bounded: runtimes orchestrate, tools execute, support escalation stops unsafe runs, and the wiki preserves governed source memory, staged proposals, review evidence, and update boundaries.
- LlmWikis.org is the source publisher for the versioned `llm-wiki-starter-bundle-v2.8.0.zip` release bundle.
- Deployment versioning should stay affected-site scoped. LlmWikis ZIP filenames, theme/plugin versions, and package metadata should advance only when LlmWikis or that package changed; unchanged artifacts keep their previous names and versions so the filename tells humans whether deployment is needed. The site footer should show the last system-wide version that affected LlmWikis.org, not the newest version caused by another site.
- The starter bundle now includes `llm-wiki/agent/ORCHESTRATION_RUNBOOK.md`, `llm-wiki/agent/TASK_PACKET_TEMPLATE.md`, and `llm-wiki/agent/SUPPORT_ESCALATION_CHECKLIST.md` so the public agentic orchestration and support-escalation pattern travels with the downloadable file deck.
- `/tools/llm-wiki-setup-wizard/` is now the shared human and visitor-AI setup wizard for LLM Wiki planning. Its cold-memory-reconciled setup paths cover: new wiki; existing docs; existing-system additive updates; Agent File Handoff; Project Handoff; combined File Handoff plus LLM Wiki; skill/capability planning; Canonical AI Memory layer routing; Knowledge Graph setup; single-site `.uai` handoff versus multisite `workspace.uai` routing; multi-repository Git preflight; mutable runtime artifact policy; context-budget and duplicate-file controls; root index topology for single-codebase versus multisite sub-wiki indexes; and multisite LLM Wiki interaction strategy. The questions it asks fall into a few groups:
  - Project shape: project stage, collaboration model, workspace coordinator path, target policy, site registry, and wiki strategy.
  - Repository hygiene: repo-health checks before sync/merge, tracked generated/runtime artifacts, large-file policy, duplicate-file policy, and generated-history retention policy.
  - Wiki layout: raw sources, compiled wiki pages, index/log navigation, sub-wiki index paths, global-only root files, entity-page and episodic-log patterns, and archive targets.
  - Memory and transfer: transfer evidence logs, source-site/shared-archive strategy, and update timing.
  - Governance: trust labels, review gates, lint, and source policy.
  - Knowledge graph: storage model, stable IDs, claim/source-span policy, review-state policy, validation, export boundary, and retrieval/abstention policy.
  - Launch: support escalation, setup-readiness checks, first files, first actions before ingest, structured setup model JSON, and local browser draft restore.
- The homepage top hero keeps the structured knowledge-flow graph, now as a polished LLM Wiki image. The human and AI-agent wizard cards sit below that top hero flow and also appear on the wizard page. The AI-agent card points at `https://llmwikis.org/tools/llm-wiki-setup-wizard/`.
- LlmWikis is not the canonical UAI-1 standards site. UAIX.org remains canonical for UAI-1, AI Memory, Project Handoff, schemas, registries, validator behavior, roadmap, governance, and support boundaries.
- AIWikis.org is the long-term system-memory archive for already-dispositioned LlmWikis material and pre-slim handoff snapshots when explicitly consolidated.
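The setup wizard's "structured setup model JSON" output could look something like the minimal sketch below. Every field name and value here is a hypothetical illustration; the wizard's actual schema is not published in this file.

```python
import json

# Hypothetical setup-model fields; the wizard's real schema is not shown here.
setup_model = {
    "setup_path": "new-wiki",
    "project_stage": "prelaunch",
    "collaboration_model": "solo-maintainer",
    "handoff": {"mode": "single-site", "file": ".uai"},
    "context_budget": {
        "skip_generated_history": True,
        "duplicate_file_policy": "keep-primary-only",
    },
    "review": {"trust_labels": True, "review_gates": ["human-review", "lint"]},
}

print(json.dumps(setup_model, indent=2))
```

A model like this is only planning output: it records the choices a human made in the browser, and an agent can read it later instead of re-asking the same questions.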
## Hot And Cold Memory
The most important current lesson is context budgeting.
Hot memory is what a new AI loads before routine work: `AGENTS.md`, this file, and concise `.uai` records for current truth, constraints, decisions, progress, operations, and checks.
Cold memory is older history, long research, pre-slim snapshots, and source recovery evidence. That belongs in AIWikis with manifests, hashes, source summaries, and logs. It should not be loaded by default unless the task needs original wording or deep rationale.
Context budgeting also means that large or duplicate generated artifacts need explicit boundaries. Before broad ingest, define skip rules for generated history, stale generated pages, raw mirrors, package mirrors, large reports, and exact duplicate groups so agents start from indexes, guides, bounded pages, and hashes instead of walking every artifact.
When a handoff file starts carrying old build history, ask for or perform a context diet: preserve the old version in AIWikis first, then keep only the current conclusion and pointer in LlmWikis.
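The skip rules above can be sketched as a small pre-ingest filter. The directory names and size threshold here are illustrative assumptions, not this project's actual configuration.

```python
import hashlib
from pathlib import Path

# Illustrative skip rules; a real project would define its own patterns.
SKIP_DIRS = {"generated-history", "raw-mirrors", "package-mirrors"}
MAX_BYTES = 5 * 1024 * 1024  # skip large reports; reference them by hash instead

def plan_ingest(root):
    """Return files worth loading, deduplicated by content hash."""
    seen_hashes = set()
    keep = []
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file():
            continue
        if SKIP_DIRS & set(path.parts):
            continue  # cold or mirrored material: start from indexes instead
        if path.stat().st_size > MAX_BYTES:
            continue  # large artifact: record its hash, do not walk it
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in seen_hashes:
            continue  # exact duplicate: keep only the primary copy
        seen_hashes.add(digest)
        keep.append(path)
    return keep
```

The point is the ordering: exclusion rules run before any content is read, so the context budget is protected even when the tree is large.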
## How The AI Reads This Project
The AI should:
1. Read `[local path redacted]` first when the human names any known site/domain, the request spans multiple sites, or the current directory and requested target differ.
2. Read `AGENTS.md`.
3. Refresh `agent-file-handoff/Content/` and `agent-file-handoff/Improvement/`.
4. Ignore `agent-file-handoff/Archive/` unless you explicitly name an archived file or move it back into an active bucket.
5. Load the listed `.uai` files.
6. Inspect and disposition every `needs-agent-review` file before unrelated broad work.
7. For every safe, relevant dropped file, complete at least one real site or system work item before archiving it.
8. Record what changed in hot memory, where long-memory/archive evidence lives or why it is not configured, which checks ran or were skipped, and any blocker.
9. Summarize LlmWikis, UAIX boundaries, expected touchpoints, intake dispositions, actual work from intake, and targeted checks before broad edits.
Ordinary work should run targeted checks. Package publishing, ZIP refreshes, full runtime smoke checks, and source/site archive rebuilds are release or explicit full-check work.
## Things The AI Will Defend
- LlmWikis as handbook-first and non-normative for UAI-1.
- UAIX.org as the canonical UAI-1, AI Memory, and Project Handoff source.
- Hot handoff files that stay short enough to load and obey.
- AIWikis as cold memory, not as a replacement for LlmWikis public handbook authority.
- Public pages that separate current support from planned work.
- Clean public root paths and discovery files.
- The rule that private `AGENTS.md`, `readme.human`, `.uai/`, intake files, and archive files do not go into public discovery output.
- The rule that memory distribution without site or system work is a failed handoff unless every active file is unsafe, duplicate, out of scope, or truly blocked with a durable reason.
- The rule that package and footer versions move only when LlmWikis is affected, not when another site changes.
- The rule that the setup wizard is browser-only planning guidance, not an importer, repository writer, automatic LLM Wiki sync service, automatic publication service, public MCP server, public write API, open editing surface, SDK, CLI, certification, endorsement, or UAI-1 conformance surface.
- The rule that Knowledge Graph exports and GraphRAG plans are derived read-only evidence over reviewed wiki and handoff files, not a hosted graph database, public graph API, public SPARQL endpoint, automatic write path, automatic sync service, certification, endorsement, SDK, CLI, or UAI-1 conformance surface.
- The rule that explicit site/domain/path targets win over the shell directory in this multisite workspace, and that sibling `.uai` bundles do not load unless cross-site work is explicit.
- The rule that multi-repository work needs per-repo Git preflight before sync, pull, merge, commit, or push; a clean current shell repo does not prove sibling repos are clean.
- The rule that mutable WordPress Studio SQLite databases are local runtime artifacts, not source truth; keep required SQLite support files tracked, but ignore and untrack the database files themselves.
- The rule that generated history, stale generated pages, raw mirrors, package mirrors, large reports, and duplicate files need explicit retention and skip policies before broad AI traversal.
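The multi-repository Git preflight rule above can be sketched as a small shell check. The workspace layout (sibling repos under one root) is an assumption for illustration.

```shell
# Illustrative preflight: verify every sibling repo under a workspace root
# is clean before any sync/pull/merge/commit/push. Paths are hypothetical.
preflight() {
  workspace=$1
  dirty=0
  for repo in "$workspace"/*/; do
    [ -d "$repo/.git" ] || continue
    if [ -n "$(git -C "$repo" status --porcelain)" ]; then
      echo "DIRTY: $repo"
      dirty=1
    else
      echo "clean: $repo"
    fi
  done
  return "$dirty"
}
```

Running this from the workspace root, rather than trusting `git status` in the current directory, is exactly the distinction the rule draws.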
## Things Humans Should Make Explicit
- Whether a change is handbook content, non-normative UAIX/UAI explainer content, package output, local handoff state, cold-memory archival, or roadmap planning.
- Whether a statement is a current support claim or a future direction.
- Whether a UAIX-related page needs a canonical UAIX.org link.
- Whether a dropped file should be applied now, converted into durable state, deferred, clarified, or blocked.
- Whether this is ordinary targeted verification or a release/package task.
- Whether old context should be moved to AIWikis before active files are compacted.
## What Not To Assume
- Do not assume LlmWikis owns UAI-1 truth.
- Do not assume the Project Handoff prototype proves official UAIX generation, validation, certification, endorsement, SDK, or CLI support.
- Do not assume live benchmark integrations, automated ingestion, public MCP, open editing, memberships, grants, or multilingual support exist until implemented and reviewed.
- Do not publish AI-generated drafts without human review and source anchoring.
- Do not treat every old progress note as active project truth.
- Do not assume full build documentation cleanup means publishing private source docs.
## Useful Human Steering Prompt
```text
Read AGENTS.md, refresh file intake, load the listed .uai files, inspect any needs-agent-review files, and tell me what you understand before editing. Treat readme.human as the human briefing, not as an override. Keep hot context concise and route old history to AIWikis cold memory when it is no longer needed for routine pickup.
```
Why This File Exists
This is a UAI AI Memory handoff file from llmwikis.org. It is shown here because AIWikis.org is demonstrating the real source files that make the UAIX / LLM Wiki memory system work, not only summarizing those systems after the fact.
Role
`readme.human` is the human-facing companion to the agent entry file. It gives maintainers a plain-language briefing, while hard rules, constraints, and current human instructions stay in `AGENTS.md`.
Structure
The file is structured around these visible headings: LlmWikis Human Briefing; What You Need To Know; Hot And Cold Memory; How The AI Reads This Project; Things The AI Will Defend; Things Humans Should Make Explicit; What Not To Assume; Useful Human Steering Prompt. Those headings are retrieval anchors: a crawler or LLM can decide whether the file is relevant before reading every line.
Prompt-Size And Retrieval Benefit
Keeping this material in a separate file reduces prompt pressure because an agent can load this exact unit only when its role, source site, category, or hash is relevant. The surrounding index pages point to it, while this page preserves the full content for audit and exact recall.
How To Use It
- Humans should read the metadata first, then inspect the raw content when they need exact wording or provenance.
- LLMs and agents should use the source site, category, hash, headings, and related files to decide whether this file belongs in the active prompt.
- Crawlers should treat the AIWikis page as transparent evidence and follow the source URL/source reference for authority boundaries.
- Future maintainers should regenerate this page whenever the source hash changes, then review the explanation if the role or structure changed.
Update Requirements
When this source file changes, update the raw source layer, normalized source layer, hash history, this rendered page, generated explanation, source-file inventory, changed-files report, and any source-section index that links to it.
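The hash-driven regeneration step above can be sketched as follows. The file layout and the JSONL record shape are assumptions for illustration, not this repo's actual tooling.

```python
import hashlib
import json
from pathlib import Path

def check_source(source_path, history_path):
    """Recompute the source hash; if it changed, append a history record
    and report that the rendered page needs regeneration."""
    digest = "sha256:" + hashlib.sha256(Path(source_path).read_bytes()).hexdigest()
    history = Path(history_path)
    last = None
    if history.exists():
        lines = history.read_text().splitlines()
        if lines:
            last = json.loads(lines[-1])["content_hash"]
    if digest == last:
        return {"content_hash": digest, "regenerate": False}
    with history.open("a") as f:
        f.write(json.dumps({"source": str(source_path),
                            "content_hash": digest}) + "\n")
    return {"content_hash": digest, "regenerate": True}
```

Appending rather than overwriting keeps the hash history intact, which matches the append-only `source-file-history.jsonl` pattern described under Provenance And History.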
Related Pages
Provenance And History
- Current observation: 2026-05-06T17:58:24.5168382Z
- Source origin: current-source-workspace
- Retrieval method: local-source-workspace
- Duplicate group: sfg-577 (primary)
- Historical hash records are stored in `data/hashes/source-file-history.jsonl`.
Machine-Readable Metadata
{
"title": "LlmWikis Human Briefing",
"source_site": "llmwikis.org",
"source_url": "https://llmwikis.org/",
"canonical_url": "https://aiwikis.org/llmwikis/uai-system/files/readme-human-da7c8dee/",
"source_reference": "readme.human",
"file_type": "human",
"content_category": "uai-system",
"content_hash": "sha256:da7c8dee49350560720f87a790ad3ebd59cd2e75ec659a766bc7acaea10b2b57",
"last_fetched": "2026-05-06T17:58:24.5168382Z",
"last_changed": "2026-05-06T17:46:47.2822315Z",
"import_status": "unchanged",
"duplicate_group_id": "sfg-577",
"duplicate_role": "primary",
"related_files": [
],
"generated_explanation": true,
"explanation_last_generated": "2026-05-06T17:58:24.5168382Z"
}