RAG
Definition
Retrieval-Augmented Generation. A technique where an AI system answers questions by first retrieving relevant content from a curated knowledge base — rather than relying solely on its general training. RAG is what turns the LLM-Wiki into a queryable product.
How It Works
A standard LLM answers from memory: whatever patterns it absorbed during training. RAG adds a retrieval step: when a question arrives, the system first searches the knowledge base for the most relevant pages or passages, then feeds that context to the model alongside the question. The model answers from the retrieved material, not just from training.
The result: answers grounded in your specific content — your concepts, your sources, your framing — rather than generic AI output.
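The retrieve-then-answer loop above can be sketched in a few lines. This is a minimal illustration, not a production pipeline: it uses toy bag-of-words cosine similarity in place of a real embedding model, and the page titles and snippets are invented for the example, not actual wiki content.

```python
# Minimal sketch of RAG's retrieval step: score knowledge-base pages
# against the question, take the top k, and feed them to the model
# alongside the question as grounding context.
import math
import re
from collections import Counter


def tokens(text: str) -> Counter:
    """Lowercase word counts -- a crude stand-in for an embedding vector."""
    return Counter(re.findall(r"[a-z]+", text.lower()))


def similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0


def retrieve(question: str, knowledge_base: dict, k: int = 2) -> list:
    """Return the k page snippets most similar to the question."""
    q = tokens(question)
    ranked = sorted(knowledge_base.items(),
                    key=lambda kv: similarity(q, tokens(kv[1])),
                    reverse=True)
    return [snippet for _, snippet in ranked[:k]]


def build_prompt(question: str, context: list) -> str:
    """Assemble retrieved context plus the question for the model."""
    joined = "\n\n".join(context)
    return f"Answer using only this context:\n\n{joined}\n\nQuestion: {question}"


# Hypothetical wiki snippets, for illustration only.
knowledge_base = {
    "Shadow AI to Innovation": ("Shadow AI is unsanctioned employee AI use; "
                                "convert it with a five-step framework."),
    "Adoption Gap": ("The adoption gap is the distance between AI "
                     "availability and productive use."),
    "Govern": "Governance sets policy and guardrails for organizational AI use.",
}

question = "What should we do about shadow AI?"
prompt = build_prompt(question, retrieve(question, knowledge_base))
```

The resulting prompt contains the Shadow AI page text, so the model's answer is grounded in that specific content rather than its general training. A real system would swap `tokens`/`similarity` for an embedding model and vector index, but the shape of the loop is the same.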
Why the Wiki Must Come First
RAG retrieves what exists. If the knowledge base contains raw PDFs, it retrieves fragments. If it contains well-structured concept pages with clear explanations and connections, it retrieves teaching-quality answers.
This is why the LLM-Wiki is the prerequisite: the quality of a RAG system is bounded by the quality of what it retrieves. The pages built in this wiki are designed to be good retrieval targets — not just good reading.
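The difference between a poor and a good retrieval target can be made concrete. The field names below are an illustrative assumption, not a fixed schema for this wiki: the point is that a structured concept page flattens into a self-contained chunk, while a raw-PDF fragment does not.

```python
# A fragment extracted from a raw PDF: no title, no context, cut mid-word.
raw_fragment = "...ees using AI tools without approval. See fig 3. Page 12..."

# A structured concept page (hypothetical fields): the title, summary,
# and connections travel together.
concept_page = {
    "title": "Shadow AI to Innovation",
    "summary": ("Convert unsanctioned AI use into sanctioned innovation "
                "via a five-step framework."),
    "connections": ["Govern", "Adoption Gap"],
}


def to_retrieval_text(page: dict) -> str:
    """Flatten a structured page into one chunk that explains itself:
    a retriever that surfaces it hands the model the concept, its
    framing, and its links in a single passage."""
    links = ", ".join(page["connections"])
    return f"{page['title']}: {page['summary']} Related: {links}."


chunk = to_retrieval_text(concept_page)
```

Retrieving `chunk` gives the model a complete, teachable unit; retrieving `raw_fragment` gives it debris. That is the bound the wiki pages are designed to raise.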
What Good RAG Looks Like for This Workshop
A client asks: “What should we do about shadow AI?”
A RAG system over this wiki retrieves Shadow AI to Innovation, finds the 90% vs 40% data, the five-step conversion framework, and the connections to Govern and Adoption Gap — and the model answers with that specific, structured content.
Without RAG, the same model answers generically. With RAG over a good wiki, it answers as if it has read the workshop.
Connections
LLM-Wiki Context as Differentiator
Tags: RAG, retrieval, vector search, AI query, knowledge base