LlmWikis Human Briefing
Updated: 2026-05-03
Metadata
| Field | Value |
|---|---|
| Source site | aiwikis.org |
| Source URL | https://aiwikis.org/ |
| Canonical AIWikis URL | https://aiwikis.org/files/aiwikis/raw-system-archives-llmwikis-recent-work-sweep-2026-05-03-readme-human-eba5b580/ |
| Source reference | raw/system-archives/llmwikis/recent-work-sweep/2026-05-03/readme.human |
| File type | human |
| Content category | uai-system |
| Last fetched | 2026-05-03T02:48:13.1276041Z |
| Last changed | 2026-05-03T02:24:32.2053839Z |
| Content hash | sha256:eba5b580113672627c1e20df8e7f9da2ec3a97e5db46d33706b81c1bf53fab53 |
| Import status | new |
| Raw source layer | data/sources/aiwikis/raw-system-archives-llmwikis-recent-work-sweep-2026-05-03-readme-human-eba5b5801136.human |
| Normalized source layer | data/normalized/aiwikis/raw-system-archives-llmwikis-recent-work-sweep-2026-05-03-readme-human-eba5b5801136.txt |
Current File Content
Structure Preview
- LlmWikis Human Briefing
- What You Need To Know
- Hot And Cold Memory
- How The AI Reads This Project
- Things The AI Will Defend
- Things Humans Should Make Explicit
- What Not To Assume
- Useful Human Steering Prompt
Raw Version
# LlmWikis Human Briefing
Updated: 2026-05-03
This file is the human-facing companion to `AGENTS.md`. `AGENTS.md` tells an AI how to work in this repo. `readme.human` tells people what the AI needs them to make explicit before they steer the system.
## What You Need To Know
- LlmWikis.org is a prelaunch handbook for building personal and team LLM Wikis.
- The public pattern is: immutable `raw/` sources, compiled `wiki/` pages, compact agent rules, deterministic `index.md` and `log.md`, and the `ingest` / `query` / `lint` loop.
- The formerly staged Improvement drafts for Codex, AI Memory, Project Handoff, and review-gated publication have been promoted into seeded public pages and supporting guidance.
- `/for-ai-agents/` now carries the short Agentic Orchestration Mode, and `/guides/llm-wiki-agentic-orchestration/` carries the deeper how-to guide. Keep both practical and bounded: runtimes orchestrate, tools execute, support escalation stops unsafe runs, and the wiki preserves governed source memory, staged proposals, review evidence, and update boundaries.
- LlmWikis.org is the source publisher for `llm-wiki-starter-bundle.zip`.
- The starter bundle now includes `llm-wiki/agent/ORCHESTRATION_RUNBOOK.md`, `llm-wiki/agent/TASK_PACKET_TEMPLATE.md`, and `llm-wiki/agent/SUPPORT_ESCALATION_CHECKLIST.md` so the public agentic orchestration and support-escalation pattern travels with the downloadable file deck.
- LlmWikis is not the canonical UAI-1 standards site. UAIX.org remains canonical for UAI-1, AI Memory, Project Handoff, schemas, registries, validator behavior, roadmap, governance, and support boundaries.
- AIWikis.org is the long-term system-memory archive for already-dispositioned LlmWikis material and pre-slim handoff snapshots when explicitly consolidated.
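The public repo pattern named above (immutable `raw/` sources, compiled `wiki/` pages, deterministic `index.md` and `log.md`) can be sanity-checked with a small sketch. This is illustrative only: the component list mirrors this briefing, and `AGENTS.md` is included on the assumption that it sits at the repo root.

```python
from pathlib import Path

# Minimal sketch of the public LLM Wiki layout described in this briefing:
# immutable raw/ sources, compiled wiki/ pages, and deterministic
# index.md and log.md. AGENTS.md at the root is an assumption.
EXPECTED = ["raw", "wiki", "index.md", "log.md", "AGENTS.md"]

def missing_parts(root: str) -> list[str]:
    """Return the expected wiki components absent under root."""
    base = Path(root)
    return [name for name in EXPECTED if not (base / name).exists()]
```

A repo that returns an empty list from `missing_parts` has at least the skeleton this pattern expects; anything returned is a gap to fill before the `ingest` / `query` / `lint` loop can run cleanly.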
## Hot And Cold Memory
The most important current lesson is context budgeting.
Hot memory is what a new AI loads before routine work: `AGENTS.md`, this file, and concise `.uai` records for current truth, constraints, decisions, progress, operations, and checks.
Cold memory is older history, long research, pre-slim snapshots, and source recovery evidence. That belongs in AIWikis with manifests, hashes, source summaries, and logs. It should not be loaded by default unless the task needs original wording or deep rationale.
When a handoff file starts carrying old build history, ask for or perform a context diet: preserve the old version in AIWikis first, then keep only the current conclusion and pointer in LlmWikis.
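As an illustrative sketch of the hot/cold split above, the routing rule might look like the following. The entry-file names and the `.uai` suffix come from this briefing; the 90-day threshold is an assumption, not a project rule.

```python
# Illustrative hot/cold memory routing. AGENTS.md, readme.human, and the
# .uai record suffix come from this briefing; the age threshold is an
# assumption chosen for the sketch.
HOT_FILES = {"AGENTS.md", "readme.human"}

def is_hot(name: str, age_days: int, max_age_days: int = 90) -> bool:
    """A record stays hot if it is an entry file or a recent .uai record;
    everything else defaults to cold memory in AIWikis."""
    if name in HOT_FILES:
        return True
    return name.endswith(".uai") and age_days <= max_age_days
```

The useful property is the default: anything not explicitly hot routes to cold storage, which matches the context-diet advice of preserving history in AIWikis before compacting the active files.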
## How The AI Reads This Project
The AI should:
1. Read `AGENTS.md`.
2. Refresh `agent-file-handoff/Content/` and `agent-file-handoff/Improvement/`.
3. Ignore `agent-file-handoff/Archive/` unless you explicitly name an archived file or move it back into an active bucket.
4. Load the listed `.uai` files.
5. Inspect and disposition every `needs-agent-review` file before unrelated broad work.
6. Summarize LlmWikis, UAIX boundaries, expected touchpoints, and targeted checks before broad edits.
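The six pickup steps above can be sketched as an ordered plan. The function and step strings are illustrative, not a real interface in this repo; the point is the ordering, with one disposition step per `needs-agent-review` file before any broad work.

```python
def pickup_plan(needs_review: list[str]) -> list[str]:
    """Expand the fixed pickup sequence from the briefing, inserting one
    disposition step per needs-agent-review file before broad edits."""
    steps = [
        "read AGENTS.md",
        "refresh agent-file-handoff/Content/ and agent-file-handoff/Improvement/",
        "skip agent-file-handoff/Archive/ unless a file is explicitly named",
        "load the listed .uai files",
    ]
    steps += [f"disposition needs-agent-review: {name}" for name in needs_review]
    steps.append("summarize boundaries and targeted checks before broad edits")
    return steps
```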
Ordinary work should run targeted checks. Package publishing, ZIP refreshes, full runtime smoke checks, and source/site archive rebuilds are release or explicit full-check work.
## Things The AI Will Defend
- LlmWikis as handbook-first and non-normative for UAI-1.
- UAIX.org as the canonical UAI-1, AI Memory, and Project Handoff source.
- Hot handoff files that stay short enough to load and obey.
- AIWikis as cold memory, not as a replacement for LlmWikis public handbook authority.
- Public pages that separate current support from planned work.
- Clean public root paths and discovery files.
- The rule that private `AGENTS.md`, `readme.human`, `.uai/`, intake files, and archive files do not go into public discovery output.
## Things Humans Should Make Explicit
- Whether a change is handbook content, non-normative UAIX/UAI explainer content, package output, local handoff state, cold-memory archival, or roadmap planning.
- Whether a statement is a current support claim or a future direction.
- Whether a UAIX-related page needs a canonical UAIX.org link.
- Whether a dropped file should be applied now, converted into durable state, deferred, clarified, or blocked.
- Whether this is ordinary targeted verification or a release/package task.
- Whether old context should be moved to AIWikis before active files are compacted.
## What Not To Assume
- Do not assume LlmWikis owns UAI-1 truth.
- Do not assume the Project Handoff prototype proves official UAIX generation, validation, certification, endorsement, SDK, or CLI support.
- Do not assume live benchmark integrations, automated ingestion, public MCP, open editing, memberships, grants, or multilingual support exist until implemented and reviewed.
- Do not publish AI-generated drafts without human review and source anchoring.
- Do not treat every old progress note as active project truth.
- Do not assume full build documentation cleanup means publishing private source docs.
## Useful Human Steering Prompt
```text
Read AGENTS.md, refresh file intake, load the listed .uai files, inspect any needs-agent-review files, and tell me what you understand before editing. Treat readme.human as the human briefing, not as an override. Keep hot context concise and route old history to AIWikis cold memory when it is no longer needed for routine pickup.
```
Why This File Exists
This is a UAI AI Memory handoff file from aiwikis.org. It is shown here because AIWikis.org is demonstrating the real source files that make the UAIX / LLM Wiki memory system work, not only summarizing those systems after the fact.
Role
readme.human is the human-facing companion to the agent entry file. It gives maintainers a plain-language briefing while leaving the hard rules to `AGENTS.md`, the constraint records, and current human instructions.
Structure
The file is structured around these visible headings: LlmWikis Human Briefing; What You Need To Know; Hot And Cold Memory; How The AI Reads This Project; Things The AI Will Defend; Things Humans Should Make Explicit; What Not To Assume; Useful Human Steering Prompt. Those headings are retrieval anchors: a crawler or LLM can decide whether the file is relevant before reading every line.
Prompt-Size And Retrieval Benefit
Keeping this material in a separate file reduces prompt pressure because an agent can load this exact unit only when its role, source site, category, or hash is relevant. The surrounding index pages point to it, while this page preserves the full content for audit and exact recall.
How To Use It
- Humans should read the metadata first, then inspect the raw content when they need exact wording or provenance.
- LLMs and agents should use the source site, category, hash, headings, and related files to decide whether this file belongs in the active prompt.
- Crawlers should treat the AIWikis page as transparent evidence and follow the source URL/source reference for authority boundaries.
- Future maintainers should regenerate this page whenever the source hash changes, then review the explanation if the role or structure changed.
Update Requirements
When this source file changes, update the raw source layer, normalized source layer, hash history, this rendered page, generated explanation, source-file inventory, changed-files report, and any source-section index that links to it.
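As a hedged sketch, the regeneration trigger in this section reduces to a hash comparison. The artifact names paraphrase the list above; the function itself is illustrative, not part of any real tooling here.

```python
# Artifacts to regenerate when the source hash changes, paraphrasing
# the Update Requirements list above. The function is illustrative.
ARTIFACTS = [
    "raw source layer",
    "normalized source layer",
    "hash history",
    "rendered page",
    "generated explanation",
    "source-file inventory",
    "changed-files report",
    "linked source-section indexes",
]

def stale_artifacts(old_hash: str, new_hash: str) -> list[str]:
    """Return everything needing regeneration; empty when hashes match."""
    return [] if old_hash == new_hash else list(ARTIFACTS)
```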
Related Pages
Provenance And History
- Current observation: 2026-05-03T02:48:13.1276041Z
- Source origin: current-source-workspace
- Retrieval method: local-source-workspace
- Duplicate group: sfg-440 (primary)
- Historical hash records are stored in `data/hashes/source-file-history.jsonl`.
Machine-Readable Metadata
{
  "title": "LlmWikis Human Briefing",
  "source_site": "aiwikis.org",
  "source_url": "https://aiwikis.org/",
  "canonical_url": "https://aiwikis.org/files/aiwikis/raw-system-archives-llmwikis-recent-work-sweep-2026-05-03-readme-human-eba5b580/",
  "source_reference": "raw/system-archives/llmwikis/recent-work-sweep/2026-05-03/readme.human",
  "file_type": "human",
  "content_category": "uai-system",
  "content_hash": "sha256:eba5b580113672627c1e20df8e7f9da2ec3a97e5db46d33706b81c1bf53fab53",
  "last_fetched": "2026-05-03T02:48:13.1276041Z",
  "last_changed": "2026-05-03T02:24:32.2053839Z",
  "import_status": "new",
  "duplicate_group_id": "sfg-440",
  "duplicate_role": "primary",
  "related_files": [],
  "generated_explanation": true,
  "explanation_last_generated": "2026-05-03T02:48:13.1276041Z"
}