Reference Excluded Topics and Clarified Terms

Field Terms We Don't Teach (and Why)

You'll encounter terms in blog posts, vendor documentation, and tutorial sites that don't appear in this learning path, or that appear here with different categorizations than you'll find elsewhere. That's deliberate.

The AI field has a terminology problem. The same word means different things to different vendors. Distinct concepts get lumped together. Marketing categories masquerade as technical ones. And things that are really just applications of a technique get listed as separate techniques alongside the technique they're built on.

This page documents the terms and categorizations we intentionally exclude, recategorize, or use differently from common external sources. If you Google something and find a categorization that contradicts what we teach, check here first. We may have made a deliberate choice, and this page explains why.

How to use this page

  • When you encounter a term elsewhere that contradicts the curriculum: Check here for whether the difference is intentional.
  • When you're confused about categorization: The distinction between "technique" and "application of a technique" is the most common source of confusion. If something sounds like a separate technique but its description is "SFT but with different data," it's an application.
  • When you want to go deeper: The "premature" section points you toward real techniques that are worth learning once you've completed the curriculum.
  • When a topic feels important but unfamiliar: The "adjacent disciplines" section covers topics that are real but belong to different engineering roles than the one this curriculum teaches.

This page will grow over time as we encounter more terminology confusion in the field.


Fine-tuning categorizations

External resources (including Google Cloud's fine-tuning guide) often list these as distinct "types of fine-tuning" alongside SFT, LoRA, and DPO. I don't categorize them that way because they're applications of those techniques, not separate techniques. It's like saying git and GitHub are two different version control systems.

Few-shot learning

What external sources say: A type of fine-tuning where the model is given a few examples.

What it actually is: A prompting technique. You put examples in the prompt at inference time. No weights change. No training happens. This is prompt engineering, covered in Module 1. Categorizing it alongside supervised fine-tuning conflates inference-time and training-time techniques, two fundamentally different operations.

Where we teach the real concept: Prompt Engineering Fundamentals covers few-shot prompting as a prompt construction technique.
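The point that no weights change can be made concrete. Below is a minimal sketch of few-shot prompt construction: the "learning" is entirely in the message list sent at inference time. The sentiment-labeling task and example reviews are illustrative, not from the curriculum.

```python
# Few-shot prompting: the "learning" lives in the prompt, not the weights.
# The sentiment task below is a hypothetical illustration.

def build_few_shot_messages(task_instruction, examples, new_input):
    """Assemble a chat message list with worked examples before the real input."""
    messages = [{"role": "system", "content": task_instruction}]
    for example_input, example_output in examples:
        messages.append({"role": "user", "content": example_input})
        messages.append({"role": "assistant", "content": example_output})
    messages.append({"role": "user", "content": new_input})
    return messages

examples = [
    ("The battery died after an hour.", "negative"),
    ("Setup took thirty seconds. Flawless.", "positive"),
]
messages = build_few_shot_messages(
    "Classify the sentiment of each review as 'positive' or 'negative'.",
    examples,
    "The screen is gorgeous but the hinge feels cheap.",
)
# No training happened: this list is sent as-is with the next API call.
```

Delete the examples and the model's behavior reverts instantly, which is exactly what distinguishes this from fine-tuning.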

Transfer learning

What external sources say: A type of fine-tuning where the model leverages knowledge from pre-training.

What it actually is: The general paradigm that makes fine-tuning possible at all. Every fine-tuning technique is transfer learning, where you're transferring knowledge from the pre-trained model to the task-specific model. Listing it as a "type" alongside the specific techniques is like listing "cooking" as a type of recipe alongside "stir-fry" and "braising."

Where we teach the real concept: The concept is implicit throughout Module 8. When we fine-tune with LoRA, we're doing transfer learning. We just don't use a separate label for the general paradigm.

Domain-specific fine-tuning

What external sources say: A type of fine-tuning where the model is adapted to a particular domain.

What it actually is: SFT applied to domain-specific data. The technique is SFT. What changes is the data curation strategy: you collect training examples from the target domain. This is an application of SFT, not a separate technique.

Where we teach the real concept: Fine-Tuning teaches SFT with data curation from your specific run logs and failure clusters, which is domain-specific fine-tuning in practice, without the misleading separate label.
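To make "the technique is SFT, only the data curation changes" concrete, here is a sketch of curating a chat-format SFT dataset from run logs. The log fields and the healthcare domain are hypothetical; the `{"messages": [...]}` row shape is the common format SFT trainers accept.

```python
# Domain-specific fine-tuning is plain SFT with domain-curated data.
# The log records and field names below are illustrative assumptions.

run_logs = [
    {"input": "Explain HIPAA retention rules.", "output": "Records must be kept...",
     "domain": "healthcare", "passed": True},
    {"input": "Write a haiku about spring.", "output": "Blossoms drift slowly...",
     "domain": "general", "passed": True},
    {"input": "Summarize this intake form.", "output": "(low-quality output)",
     "domain": "healthcare", "passed": False},
]

def curate_sft_dataset(logs, domain):
    """Keep only verified, in-domain examples; the training technique stays SFT."""
    rows = []
    for record in logs:
        if record["domain"] == domain and record["passed"]:
            rows.append({"messages": [
                {"role": "user", "content": record["input"]},
                {"role": "assistant", "content": record["output"]},
            ]})
    return rows

dataset = curate_sft_dataset(run_logs, "healthcare")
```

Everything "domain-specific" happened in the filter; the trainer you hand `dataset` to is the same one you'd use for any SFT run.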

Multi-task learning

What external sources say: A type of fine-tuning where the model is trained on multiple tasks simultaneously.

What it actually is: SFT with a training dataset that includes examples from multiple tasks. The technique is still SFT. The data curation strategy includes variety across tasks. This is a valid technique at scale, but listing it as a separate "type" alongside SFT obscures the fact that it's SFT with a different dataset composition.

Why I'm excluding it: This learning path focuses on bounded, single-task fine-tuning because that's where beginners should start. Multi-task training introduces task interference and data balancing problems that are premature for someone doing their first fine-tune.

Sequential fine-tuning

What external sources say: A type of fine-tuning where the model is adapted to a series of related tasks in stages.

What it actually is: Running SFT multiple times in sequence. Each round is standard SFT. The "sequential" part is a training strategy, not a technique. The main risk is catastrophic forgetting between stages, which we cover as a concept in the fine-tuning lesson.

Why I'm excluding it: It's an advanced training strategy, not a foundational technique. If you understand SFT, catastrophic forgetting, and evaluation you can figure out sequential training when you need it.


Terminology we use differently

These terms appear in the curriculum, but we use them differently from how some external sources do.

"Inferencing" → inference

What external sources say: "Inferencing" as a gerund for running a model.

What I use here: "Inference" as both noun and verb. "Run inference," not "do inferencing." The "-ing" form is non-standard and adds no precision. This is a minor style choice, but consistency matters when learners are building vocabulary.

"LRM" → reasoning model

What external sources say: "Large Reasoning Model" (LRM) as a category name for models like o3.

What I use here: "Reasoning model." The "LRM" acronym hasn't been formally adopted across provider documentation the way "LLM" has. "Reasoning model" is more descriptive and used more consistently across OpenAI, Anthropic, and Google docs.

"RAG database" → retrieval method

What external sources say: "RAG database" or "vector database" as synonymous with RAG.

What I use here: RAG is a pattern, not a database choice. The retrieval step can use any backing store or search method: grep, BM25, SQL, AST, vector search, graph traversal, or combinations. A vector database is one option for one part of the pattern. See the Retrieval Method Chooser.
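The "pattern, not a database" point can be shown in a few lines: the retrieval step is a pluggable function, and nothing about the pattern requires vectors. Both retrievers below are toy stand-ins (a crude keyword scorer in place of BM25, and a grep-style substring match), and the prompt assembly is a sketch.

```python
# RAG as a pattern: generation grounded in retrieved evidence, with the
# retrieval step swappable. Both retrievers are deliberately toy versions.

def keyword_retriever(query, docs, k=2):
    """Rank docs by count of shared terms (a crude BM25 stand-in)."""
    terms = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(terms & set(d.lower().split())))
    return scored[:k]

def grep_retriever(query, docs, k=2):
    """Return docs containing the query verbatim (grep-style)."""
    return [d for d in docs if query.lower() in d.lower()][:k]

def answer_with_rag(query, docs, retrieve):
    evidence = retrieve(query, docs)
    prompt = ("Answer using only this evidence:\n"
              + "\n".join(evidence) + f"\n\nQ: {query}")
    return prompt  # in a real system, this prompt goes to the model

docs = ["Ollama serves a REST API on port 11434.",
        "BM25 ranks by term frequency."]
p1 = answer_with_rag("ollama port", docs, keyword_retriever)
p2 = answer_with_rag("BM25", docs, grep_retriever)
```

Swapping `keyword_retriever` for `grep_retriever` (or a vector index, or a SQL query) changes nothing else in the pattern, which is why "RAG database" is a category error.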

"Prompt tricks" → prompt engineering

What external sources say: Prompt engineering as a collection of tricks, hacks, or magic phrases.

What I use here: Prompt engineering is writing contracts. It's decomposition, constraint specification, output schema design, and debugging. The "trick" framing implies prompting is a bag of hacks you memorize. The "contract" framing means you're designing the interface between your system and the model.
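Here is what "writing contracts" looks like in practice: the prompt states an explicit output schema, and the calling code validates what comes back. The contract text and the validator are an illustrative sketch, not a library API.

```python
# Prompting as a contract: specify the output shape up front, then enforce it.
# The schema and checker below are illustrative, not a real framework.
import json

OUTPUT_CONTRACT = """Return ONLY a JSON object with exactly these keys:
  "summary": string, one sentence
  "risk": one of "low", "medium", "high"
"""

def validate_output(raw):
    """Enforce the contract; callers decide whether to retry or fail."""
    data = json.loads(raw)
    assert set(data) == {"summary", "risk"}, "unexpected keys"
    assert data["risk"] in {"low", "medium", "high"}, "risk outside enum"
    return data

# A conforming model reply passes validation; off-contract output raises.
ok = validate_output('{"summary": "Deploy is safe.", "risk": "low"}')
```

A "magic phrase" can't be tested; a contract like this can, which is what makes prompt engineering debuggable.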

"Agentic AI" → AI engineering (with agents as one tool)

What external sources say: "Agentic AI" as a category of AI systems.

What I use here: Agents are a tool, not a category. An agent is a model + tools + control loop. Whether to use an agent (vs. a simpler pipeline) is an engineering decision, not an identity. I teach agents in Module 3 alongside non-agentic approaches because the question is always "does this task need an agent?", not "how do I make everything agentic?"
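The "model + tools + control loop" definition fits in a short sketch. The model here is a scripted stub so the loop's mechanics are visible; a real agent would call a provider API in its place, and the `add` tool is a hypothetical example.

```python
# Agent = model + tools + control loop. The "model" is a scripted stub so
# the loop is visible; a real system calls a provider API here instead.

def stub_model(messages):
    """Pretend model: requests a tool once, then produces a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "add", "args": {"a": 2, "b": 3}}}
    return {"content": "The sum is 5."}

TOOLS = {"add": lambda a, b: a + b}

def run_agent(task, model, tools, max_turns=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        reply = model(messages)
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]                    # model is done
        result = tools[call["name"]](**call["args"])   # execute the tool
        messages.append({"role": "tool", "content": str(result)})
    raise RuntimeError("max turns exceeded")

answer = run_agent("What is 2 + 3?", stub_model, TOOLS)
```

Notice the engineering decision embedded in `max_turns`: the loop, not the model, decides when to stop, which is why "agent" describes a system design rather than a category of AI.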

"AWS Bedrock" / "Vertex AI" → cloud provider surfaces, not separate providers

What external sources say: AWS Bedrock and Google Vertex AI listed alongside OpenAI and Anthropic as "AI providers."

Why I'm excluding them as separate provider tabs: Bedrock and Vertex AI are deployment surfaces for models that already have direct provider APIs. Claude on Bedrock is still Claude. Gemini on Vertex AI is still Gemini. Adding them as separate provider paths would mean maintaining duplicate code for the same model behavior with different SDK calls: complexity without pedagogical value. Instead, the curriculum teaches through direct provider SDKs and provides a Cloud Provider Surfaces reference with translation tables for learners who access models through cloud platforms.

"GitHub Copilot SDK" → agent platform / runtime, not provider

What external sources say: GitHub markets Copilot SDK as a way to build agentic systems into your own applications.

Why I'm excluding it as a provider tab: That capability is real, but the layer is different. Copilot SDK gives you an agent runtime: sessions, tool execution, control flow, and the surrounding Copilot orchestration layer. The curriculum's provider tabs mean "the direct model API surface you call when teaching message structure, structured outputs, embeddings, and tool calls at the contract level." Copilot SDK sits above that layer. It is better understood later alongside agent frameworks and orchestration systems than alongside direct provider APIs.

"GitHub Models" → hosted inference / routing platform, supported path but not direct provider API

What external sources say: GitHub Models exposes a real inference API that lets you call multiple publishers' models through GitHub authentication.

What I use here: GitHub Models is a supported hosted inference path in the curriculum now. The clarification is about layer, not exclusion: it is not a direct provider API in the same category as OpenAI, Gemini, or Anthropic. It is a platform layer in front of multiple publishers. That's why the chooser, lessons, and reference pages talk about it as a hosted inference path rather than pretending it is the native publisher API for a model family.


Tools that overlap more than they differ

Some tools get compared as if they're competing alternatives when they're actually layers of the same stack, or the same engine with different interfaces. Learners spend time researching "Ollama vs LM Studio vs llama.cpp" when the real question is simpler than it looks.

llama.cpp vs Ollama vs LM Studio

The confusion: Three tools for running local models. Blog posts compare them head-to-head as if you need to pick one. Benchmarks show marginal speed differences. Feature matrices list dozens of checkboxes.

The relationship they don't make obvious: Ollama and LM Studio are both built on top of llama.cpp. They're not alternatives to it; they're convenience layers. llama.cpp is the inference engine. Ollama wraps it in a developer-friendly daemon with a REST API and model registry. LM Studio wraps it (plus Apple's MLX engine on Mac) in a desktop GUI with visual controls. Choosing between them isn't like choosing between PostgreSQL and MySQL. It's like choosing between using PostgreSQL directly, through an ORM, or through a GUI client.

What I use in this curriculum:

  • Ollama is the default for local inference throughout the learning path. It provides a REST API at http://localhost:11434 that works like any other provider endpoint, which means your code doesn't need to know whether the model is local or hosted. One command to install, one command to run a model. That's the right abstraction for AI engineering work.
  • llama.cpp appears in the Hardware Guide as the option for maximum control: custom quantization, unusual hardware (ARM, Raspberry Pi, edge devices), or fully auditable open-source requirements. If you need to tune GPU layer offloading or KV-cache quantization, llama.cpp gives you those knobs.
  • LM Studio isn't taught in the curriculum. It's a good tool for exploring models visually, letting you adjust temperature and context length with sliders and compare model outputs side by side. But it requires a GUI, can't run headless, and is closed source. For the programmatic, API-driven work this curriculum teaches, Ollama is a better fit.

When the choice actually matters:

  • A local API endpoint for your application → Ollama
  • Visual exploration of model behavior → LM Studio
  • Maximum performance on unusual hardware → llama.cpp
  • Apple Silicon MLX acceleration → LM Studio (uses MLX engine instead of llama.cpp)
  • Headless/containerized deployment → Ollama or llama.cpp (LM Studio requires a GUI)
  • Fine-grained control over inference parameters → llama.cpp
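Because Ollama serves an OpenAI-compatible endpoint, local calls look like any other provider call. The sketch below builds such a request using only the standard library; the model name and prompt are illustrative, and the actual send is commented out because it assumes an Ollama server running locally.

```python
# Ollama exposes an OpenAI-compatible endpoint at localhost:11434, so local
# and hosted models can share a code path. Model name/prompt are examples.
import json
import urllib.request

payload = {
    "model": "qwen2.5:7b",
    "messages": [{"role": "user", "content": "Say hello in five words."}],
    "stream": False,
}
request = urllib.request.Request(
    "http://localhost:11434/v1/chat/completions",  # Ollama's local endpoint
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Uncomment when an Ollama server is running locally:
# with urllib.request.urlopen(request) as response:
#     print(json.load(response)["choices"][0]["message"]["content"])
```

Point the same code at a hosted provider's base URL (with an API key header) and nothing else changes, which is the abstraction the curriculum relies on.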

Some practical advice: Start with Ollama. If you later need something it doesn't provide, such as MLX acceleration, raw performance tuning, or headless GPU management, you'll know exactly what you need and why. Many developers use LM Studio for discovery and Ollama for development. That's not hedging; that's using each tool for what it's good at.

Concepts that are real but premature

These are legitimate techniques I've chosen not to teach in detail. Not because they're wrong, but because they require prerequisites or infrastructure that are beyond the scope of this curriculum.

RLHF (Reinforcement Learning from Human Feedback)

What it is: A training method that uses a separately trained reward model to optimize the language model's behavior based on human preferences.

Why I mention but don't teach it: RLHF requires training two models (the reward model and the language model), collecting human preference data at scale, and managing a more complex training loop. DPO achieves similar goals with a simpler setup. I teach DPO as the accessible entry point to preference optimization and reference RLHF for learners who need the full machinery.
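The "simpler setup" claim is visible in the data format alone. DPO consumes preference pairs directly (the prompt/chosen/rejected row shape used by trainers such as Hugging Face TRL's DPOTrainer), with no reward model in the loop. The row below is an invented example, and the shape check is a hypothetical ingestion helper.

```python
# DPO's supervision signal is just preference pairs: one prompt, a preferred
# ("chosen") response, and a dispreferred ("rejected") one. No reward model.
# The row below is illustrative, not real preference data.

preference_row = {
    "prompt": "Summarize: the meeting moved from Tuesday to Thursday.",
    "chosen": "The meeting was rescheduled to Thursday.",
    "rejected": "Meetings are an important part of team collaboration.",
}

def valid_dpo_row(row):
    """Minimal shape check an ingestion script might run before training."""
    return (set(row) == {"prompt", "chosen", "rejected"}
            and row["chosen"] != row["rejected"])
```

Compare that with RLHF, where the same preference data first trains a separate reward model before any policy optimization happens.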

ORPO, KTO, SimPO (newer preference methods)

What they are: ORPO (Odds Ratio Preference Optimization), KTO (Kahneman-Tversky Optimization), and SimPO (Simple Preference Optimization) are alternatives to DPO that simplify or improve preference-based training in various ways.

Why I'm excluding them: The landscape is still stabilizing. DPO teaches the core concept of preference-based optimization, and the mechanics transfer to newer methods. I'll revisit these as the field converges.

Full fine-tuning (all parameters)

What it is: Updating every parameter in the model during training, not just LoRA adapters.

Why I included PEFT instead: Full fine-tuning requires multi-GPU setups and significantly more VRAM than the curriculum assumes. PEFT/LoRA/QLoRA achieves most of the same benefits on consumer hardware. I mention full fine-tuning in the fine-tuning landscape table so learners know it exists.
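The VRAM gap is easy to see with rough parameter arithmetic. A LoRA adapter for a weight matrix adds two low-rank factors whose size scales with the rank, not with the full matrix. The layer shape below is an illustrative example (a 4096x4096 projection), not a specific model's dimensions.

```python
# Rough arithmetic for why LoRA fits on consumer hardware: trainable adapter
# parameters scale with rank r, not with the full weight matrices.
# The 4096x4096 shape is an illustrative attention-projection size.

d_in, d_out, rank = 4096, 4096, 16

full_params = d_in * d_out            # updated by full fine-tuning
lora_params = rank * (d_in + d_out)   # updated by a LoRA adapter (A and B)

ratio = lora_params / full_params     # under 1% of the full matrix
```

Multiply that across every adapted layer and the optimizer state that goes with it, and the difference between "multi-GPU cluster" and "single consumer GPU" falls out directly.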


Patterns that skip measurement

These are development patterns you'll encounter in the wild that produce working results often enough to feel productive, but bypass the measurement and attribution discipline the curriculum teaches. They aren't necessarily wrong in every context, but they're problematic as defaults, and learners who adopt them early will struggle to diagnose failures later.

Retry loops without evals ("the Ralph Wiggum loop")

What it is: A pattern where an AI agent is given a prompt and run in a loop. If the output isn't done, feed the same prompt back and try again. The simplest version is literally while :; do cat PROMPT.md | claude ; done. The loop runs until a completion signal appears or a maximum iteration count is hit.

Why it's appealing: It works for hackathons and demos. Iteration beats one-shot perfection, and the pattern requires almost no setup. There are real examples of impressive results from brute-force retry loops.

Why I don't teach it:

  1. No failure attribution. When the loop takes 30 iterations instead of 3, you have no way to know why. Was the prompt unclear? Was the task too broad? Did the model need different context? The loop doesn't distinguish between "almost right on attempt 1" and "fundamentally wrong for 29 attempts, then lucky on attempt 30." Without attribution, you can't improve the system. You can only run the loop again and hope.

  2. No cost visibility. Each iteration costs tokens. A loop running overnight with no cost tracking or token budgeting is exactly the operational blindness that Module 6 teaches you to avoid. An impressive result that cost $297 in API calls is less impressive when you ask: what did the wasted iterations cost, and could the same result have been achieved in 3 targeted iterations with better decomposition?

  3. Prompt-as-specification without decomposition. The pattern treats a large prompt as a monolithic specification and hopes repeated attempts will converge on the right output. The curriculum teaches the opposite, where we decompose complex tasks into steps, constrain each step's output, and verify before proceeding. Decomposition is more work up front but dramatically more reliable and debuggable.

  4. It trains the wrong instinct. A learner who reaches for retry loops when things fail is learning to throw compute at problems. A learner who reaches for evals, failure attribution, and the optimization ladder is learning to diagnose problems. The second instinct compounds; the first just costs more over time.

What the disciplined version looks like: If you added eval checks between iterations (run the benchmark, check whether the failure count decreased, attribute the remaining failures, and decide whether to continue, change the prompt, or change the approach), you'd have an automated version of the harness from Module 6. That's a legitimate pattern. The difference between a retry loop and an eval-driven improvement loop is measurement, and measurement is what makes the pattern safe to run in production.
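The disciplined version described above can be sketched in a few lines. Everything here is a stub: `run_system` stands in for the agent run and `run_benchmark` for the eval harness, with scripted results so the stopping logic is visible.

```python
# The difference between a blind retry loop and an eval-driven loop is the
# measurement step between iterations. All functions here are stubs.

def improvement_loop(run_system, run_benchmark, max_iters=10):
    """Retry only while the benchmark shows failures decreasing."""
    history = []
    best = float("inf")
    for attempt in range(1, max_iters + 1):
        output = run_system()
        failures = run_benchmark(output)
        history.append((attempt, failures))  # attribution data, not just retries
        if failures == 0:
            return output, history
        if failures >= best:
            break                            # not improving: stop and diagnose
        best = failures
    raise RuntimeError(f"stopped without convergence: {history}")

# Scripted stand-ins: three attempts with failure counts 3 -> 1 -> 0.
outputs = iter(["draft-1", "draft-2", "draft-3"])
failure_counts = iter([3, 1, 0])
result, history = improvement_loop(lambda: next(outputs),
                                   lambda _: next(failure_counts))
```

The `history` list is the whole point: when the loop stops, you know which attempt helped and which didn't, instead of only knowing that the thirtieth try finally worked.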

The short version: Iteration is good; blind iteration is vibe coding with a while loop.


Concepts from adjacent disciplines

These are valid, important topics, but they belong to different engineering disciplines than the one this curriculum teaches. AI engineering (building applications on top of models via inference APIs) is not the same as ML engineering (training and serving models), ML infrastructure (managing compute and storage for training), or data engineering (building pipelines that feed training). If you encounter these topics and feel like you should be learning them, check whether they're actually relevant to the work you're doing.

AI storage

What it is: Specialized storage infrastructure designed to handle the I/O demands of AI training workloads, such as moving massive datasets between storage and GPUs, managing model checkpoints, and sustaining the throughput that training pipelines require. Vendors like VAST Data, NetApp, and others market purpose-built storage systems for these workloads.

Why it sounds relevant: "AI" is in the name. If you're learning AI engineering, surely you need to understand AI storage?

Why it's a different discipline: AI storage solves infrastructure problems for teams that train models at scale, such as data center operators, ML platform engineers, and infrastructure teams managing GPU clusters. This curriculum teaches you to use models via inference APIs. The storage layer between your application and an API call is HTTP. You don't manage the provider's training infrastructure any more than a web developer manages Cloudflare's CDN nodes.

When it would become relevant to you: If you move into ML infrastructure, start training large models from scratch (not fine-tuning with QLoRA on a single GPU), or build internal ML platforms for an organization. At that point, storage I/O becomes a bottleneck you'll feel directly. For the work this curriculum teaches (building applications that call inference APIs, fine-tuning small models with PEFT, and running local models with Ollama), your laptop's SSD is fine.

Further reading: VAST Data: Why AI Storage Matters gives a good overview of the infrastructure concerns for anyone curious about what this discipline involves.



Glossary
Foundational terms

API (Application Programming Interface)
A structured way for programs to communicate. In this context, usually an HTTP endpoint you call to interact with an LLM.
AST (Abstract Syntax Tree)
A tree representation of source code structure. Used by parsers like Tree-sitter to understand code as a hierarchy of functions, classes, and statements. You'll encounter this more deeply in the Code Retrieval module, but the concept appears briefly in retrieval fundamentals.
BM25 (Best Match 25)
A classical ranking function for keyword search. Scores documents by term frequency and inverse document frequency. Often competitive with or complementary to vector search.
Chunking
Splitting a document into smaller pieces for indexing and retrieval. Chunk boundaries significantly affect retrieval quality. Split at the wrong place and your retrieval will return half a function or the end of one paragraph glued to the start of another.
Context engineering
The discipline of selecting, packaging, and budgeting the information a model sees at inference time. Prompts, retrieved evidence, tool results, memory, and state are all parts of context. Context engineering is arguably the core skill of AI engineering. Bigger context windows are not a substitute for better context selection.
Context rot
Degradation of output quality caused by stale, noisy, or accumulated context. Symptoms include stale memory facts, conflicting retrieved evidence, bloated prompt history, and accumulated instructions that contradict each other. A form of technical debt in AI systems.
Context window
The maximum number of tokens an LLM can process in a single request (input + output combined).
Embedding
A fixed-length numeric vector representing a piece of text. Used for similarity search: texts with similar meanings have nearby embeddings.
Endpoint
A specific URL path that accepts requests and returns responses (e.g., POST /v1/chat/completions).
GGUF
A file format for quantized models used by llama.cpp and Ollama. When you see a model name like qwen2.5:7b-q4_K_M, the suffix indicates the quantization scheme. GGUF supports mixed quantization (different precision for different layers) and is the most common format for local inference.
Hallucination
When a model generates content that sounds confident but isn't supported by the evidence it was given, or fabricates details that don't exist. Not the same as "any wrong answer"; a model that misinterprets ambiguous instructions gave a bad answer but didn't hallucinate. Common causes: weak prompt, missing context, context rot, model limitation, or retrieval failure.
Inference
Running a trained model to generate output from input. What happens when you call an API. Most AI engineering work is inference-time work: building systems around models, not training them. Use "inference," not "inferencing."
JSON (JavaScript Object Notation)
A lightweight text format for structured data. The lingua franca of API communication.
Lexical search
Finding items by matching keywords or terms. Includes BM25, TF-IDF (Term Frequency–Inverse Document Frequency), and simple keyword matching. Returns exact term matches, not semantic similarity.
LLM (Large Language Model)
A neural network trained on large text corpora that generates text by predicting the next token. The core technology behind AI engineering; every tool, pattern, and pipeline in this curriculum runs on top of one.
Metadata
Structured information about a document or chunk (file path, language, author, date, symbol type). Used for filtering retrieval results.
Neural network
A computing system loosely inspired by biological neurons, built from layers of mathematical functions that transform inputs into outputs. LLMs are a specific type of neural network (transformers) trained on text. You don't need to understand neural network internals to do AI engineering, but knowing the term helps when reading external resources.
Reasoning model
A model optimized for complex multi-step planning, math, and logic (e.g., o3, o4-mini). Slower and more expensive but better on hard problems. Sometimes called "LRM" (large reasoning model), but "reasoning model" is the more consistent term across provider docs.
Reranking
A second-pass scoring step that re-orders retrieved results using a more expensive model. Improves precision after an initial broad retrieval.
Schema
A formal description of the shape and types of a data structure. Used to validate inputs and outputs.
SLM (small language model)
A compact model (typically 1-7B parameters) that runs on consumer hardware with lower cost, lower latency, and better privacy (e.g., Phi, small Llama variants, Gemma). The right choice when privacy, offline operation, predictable cost, or low latency matter more than peak capability.
System prompt
A special message that sets the model's behavior, role, and constraints for a conversation.
Temperature
A parameter controlling output randomness. Lower values produce more deterministic output; higher values produce more varied output. Does not affect the model's intelligence.
Token
The basic unit an LLM processes. Not a word. Tokens are sub-word fragments. "unhappiness" might be three tokens: "un", "happi", "ness". Token count determines cost and context window usage.
Top-k
The number of results returned from a retrieval query. "Top-5" means the five highest-scoring results.
Top-p (nucleus sampling)
An alternative to temperature for controlling output diversity. Selects from the smallest set of tokens whose cumulative probability exceeds p.
Vector search
Finding items by proximity in embedding space (nearest neighbors). Returns "similar" results, not "exact match" results.
vLLM (virtual LLM)
An inference serving engine (not a model) that hosts open-weight models behind an OpenAI-compatible HTTP endpoint. Infrastructure layer, not model layer. Relevant when moving from hosted APIs to self-hosting.
Weights
The learned parameters inside a model. Changed during training, fixed during inference.
Workhorse model
A general-purpose LLM optimized for speed and broad capability (e.g., GPT-4o-mini, Claude Haiku, Gemini Flash). The default for most tasks. When someone says "LLM" without qualification, they usually mean this.

Benchmark and Harness terms

Baseline
The first measured performance of your system on a benchmark. Everything else is compared against this. Without a baseline, you can't tell whether a change helped.
Benchmark
A fixed set of questions or tasks with known-good answers, used to measure system performance over time.
Run log
A structured record (typically JSONL) of every system run: what input was given, what output was produced, what tools were called, how long it took, and what it cost. The raw data that evals, telemetry, and cost analysis are built from.

Agent and Tool Building terms

A2A (Agent-to-Agent protocol)
An open protocol for peer-to-peer agent collaboration. Agents discover each other's capabilities and delegate or negotiate tasks as equals. Different from MCP (which connects agents to tools, not to other agents) and from handoffs (which transfer control within one system).
Agent
A system where an LLM decides which tools to call, observes results, and iterates until a task is complete. Agent = model + tools + control loop.
Control loop
The code that manages the agent's cycle: send prompt, check for tool calls, execute tools, append results, repeat or finish.
Handoff
Passing control from one agent or specialist to another within an orchestrated system.
MCP (Model Context Protocol)
An open protocol for exposing tools, resources, and prompts to AI applications in a standardized way. Connects agents to capabilities (tools and data), not to other agents.
Tool calling / function calling
The model's ability to request execution of a specific function with structured arguments, rather than just generating text.

Code Retrieval terms

Context compilation / context packing
The process of selecting and assembling the smallest useful set of evidence for a specific task. Not "dump everything retrieved into the prompt."
Grounding
Tying model assertions to specific evidence. A grounded answer cites what it found; an ungrounded answer asserts without evidence.
Hybrid retrieval
Combining multiple retrieval methods (e.g., vector search + keyword search + metadata filters) and merging or reranking the results.
Knowledge graph
A data structure that stores entities and their relationships explicitly (e.g., "function A calls function B," "module X imports module Y"). Useful for traversal and dependency reasoning. One retrieval strategy among several, often overused when simpler metadata or adjacency tables would suffice.
RAG (Retrieval-Augmented Generation)
A pattern where the model's response is grounded in retrieved external evidence rather than relying solely on its training data.
Symbol table
A mapping of code identifiers (functions, classes, variables) to their locations and metadata.
Tree-sitter
An incremental parsing library that builds ASTs for source code. Used in this curriculum for code-aware chunking and symbol extraction.

RAG and Grounded Answers terms

Context pack
A structured bundle of evidence assembled for a specific task, with metadata about provenance, relevance, and token budget.
Evidence bundle
A collection of retrieved items grouped for a specific sub-task, with enough metadata to evaluate whether the evidence is relevant and sufficient.
Retrieval routing
Deciding which retrieval strategy or method to use for a given query. Different questions need different retrieval methods.

Observability and Evals terms

Eval
A structured test that measures system quality. Not the same as training. Evals measure, they don't change the model.
Harness (AI harness / eval harness)
The experiment and evaluation framework around your model or agent. It runs benchmark tasks, captures outputs, logs traces, grades results, and compares system versions. It turns ad hoc "try it and see" into repeatable, comparable experiments. Typically includes: input dataset, prompt and tool configuration, model/provider selection, execution loop, logging, grading, and artifact capture.
LLM-as-judge
Using a language model to evaluate or grade the output of another model or system. Useful for scaling evaluation beyond manual review, but requires rubric quality, judge consistency checks, and human spot-checking. Not a replacement for exact-match checks where they apply.
OpenTelemetry (OTel)
An open standard for collecting and exporting telemetry data (traces, metrics, logs). Vendor-agnostic.
RAGAS
A specific eval framework for retrieval-augmented generation. Measures metrics like faithfulness, relevance, and context precision. One tool example, not a foundational concept. Learn the metrics first, then the tool.
Span
A single operation within a trace (e.g., one tool call, one retrieval query). Traces are made of spans.
Telemetry
Structured data about system behavior: what happened, when, how long it took, what it cost. Includes traces, metrics, and events.
Trace
A structured record of one complete run through the system, including all steps, tool calls, and decisions.
**Long-term memory** (Orchestration and Memory)
Persistent facts that survive across conversations. Requires write policies to manage what gets stored, updated, or deleted.
**Orchestration** (Orchestration and Memory)
Explicit control over how tasks are routed, delegated, and synthesized across multiple agents or specialists.
**Router** (Orchestration and Memory)
A component that decides which specialist or workflow path to use for a given query.
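A minimal sketch of a router, assuming a keyword-rule approach; the specialist names and rules are invented for illustration, and real routers often use an LLM or a trained classifier instead.

```python
# Map each specialist to trigger keywords (illustrative rules only).
SPECIALISTS = {
    "code_search": ["function", "class", "implementation"],
    "doc_lookup": ["documentation", "docs", "how do i"],
}

def route(query: str) -> str:
    q = query.lower()
    for specialist, keywords in SPECIALISTS.items():
        if any(k in q for k in keywords):
            return specialist
    return "general"  # fallback path when no rule matches

print(route("Where is the documentation for retries?"))  # doc_lookup
print(route("What's the weather?"))                      # general
```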
**Specialist** (Orchestration and Memory)
An agent or workflow tuned for a narrow task (e.g., "code search," "documentation lookup," "test generation"). Specialists are composed by an orchestrator.
**Thread memory** (Orchestration and Memory)
Conversation state that persists within a single session or thread.
**Workflow memory** (Orchestration and Memory)
Intermediate state that persists within a multi-step task but doesn't survive beyond the workflow's completion.
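The three memory scopes (thread, workflow, long-term) differ mainly in lifetime. A toy sketch, with plain dictionaries standing in for real storage:

```python
thread_memory = {}     # lives for one conversation/session
workflow_memory = {}   # lives for one multi-step task
long_term_memory = {}  # survives across conversations; needs a write policy

def end_workflow():
    workflow_memory.clear()  # dies with the task

def end_thread():
    thread_memory.clear()    # dies with the session

long_term_memory["user_prefers"] = "concise answers"  # deliberate write
thread_memory["topic"] = "LoRA"
workflow_memory["step"] = 2

end_workflow()
print(long_term_memory, thread_memory, workflow_memory)
```

After the workflow ends, only thread and long-term state remain; after the thread ends, only long-term state survives, which is why its writes need an explicit policy.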
**Catastrophic forgetting** (Optimization)
When fine-tuning causes a model to lose capabilities it had before training. The model gets better at the fine-tuned task but worse at tasks it previously handled. PEFT methods like LoRA reduce this risk by freezing original weights.
**Distillation** (Optimization)
Training a smaller (student) model to reproduce the behavior of a larger (teacher) model on a specific task.
**DPO (Direct Preference Optimization)** (Optimization)
A method for preference-based model optimization that's simpler than RLHF, training the model directly on preference pairs without a separate reward model.
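The core DPO objective on a single preference pair can be written in a few lines; the log-probability values below are made up for illustration.

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    # How much more the policy prefers "chosen" over "rejected",
    # relative to the frozen reference model:
    margin = (logp_chosen - ref_chosen) - (logp_rejected - ref_rejected)
    return -math.log(1 / (1 + math.exp(-beta * margin)))  # -log(sigmoid)

loss = dpo_loss(logp_chosen=-10.0, logp_rejected=-20.0,
                ref_chosen=-12.0, ref_rejected=-15.0)
print(round(loss, 3))  # 0.403
```

Note there is no reward model anywhere: the preference pair and the reference model's log-probs are enough, which is exactly why DPO is simpler than RLHF.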
**Fine-tuning** (Optimization)
Updating a model's weights on task-specific data to change its behavior permanently. An umbrella term that includes SFT, instruction tuning, RLHF, DPO, and other techniques. See the fine-tuning landscape table in Lesson 8.3 for how these relate.
**Full fine-tuning** (Optimization)
Updating all of a model's parameters during training, as opposed to PEFT methods that update only a small subset. Requires significantly more GPU memory and compute. Produces the most thorough adaptation but carries higher risk of catastrophic forgetting.
**Inference server** (Optimization)
Software (like vLLM or Ollama) that hosts a model and serves inference requests.
**Instruction tuning** (Optimization)
A specific application of SFT where the training data consists of instruction-response pairs. This is how base models become chat models: the technique is SFT, the data format is instructions. Not a separate technique from SFT.
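A sketch of what one instruction-tuning example looks like; the field names (`instruction`, `input`, `output`) follow a common convention but vary by framework.

```python
import json

example = {
    "instruction": "Summarize the text in one sentence.",
    "input": "LoRA trains small adapter matrices instead of all weights.",
    "output": "LoRA is a parameter-efficient fine-tuning method that "
              "updates small adapters rather than the full model.",
}

# Training sets are commonly stored as JSONL: one JSON object per line.
line = json.dumps(example)
print(json.loads(line)["instruction"])
```

Nothing here is special to instruction tuning except the content: the training loop is ordinary SFT over these pairs.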
**LoRA (Low-Rank Adaptation)** (Optimization)
A parameter-efficient fine-tuning method that trains small adapter matrices instead of updating all model weights. Dramatically reduces GPU memory and compute requirements.
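The LoRA forward pass, y = Wx + (alpha/r) * BAx, sketched with toy dimensions: W stays frozen while only the small A and B matrices are trained. Pure Python, no ML library assumed.

```python
def matvec(M, v):
    # Plain matrix-vector product over nested lists.
    return [sum(m * x for m, x in zip(row, v)) for row in M]

d, r = 2, 1                     # model dim 2, LoRA rank 1 (toy sizes)
W = [[1.0, 0.0], [0.0, 1.0]]    # frozen base weights (identity here)
A = [[0.5, 0.5]]                # trained adapter, r x d
B = [[1.0], [1.0]]              # trained adapter, d x r
alpha = 2.0                     # scaling hyperparameter

def lora_forward(x):
    base = matvec(W, x)                    # frozen path
    delta = matvec(B, matvec(A, x))        # low-rank trained path
    return [b + (alpha / r) * dl for b, dl in zip(base, delta)]

print(lora_forward([1.0, 1.0]))  # [3.0, 3.0]
```

With rank r much smaller than d, A and B together hold far fewer parameters than W, which is where the memory savings come from.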
**Parameter count** (Optimization)
The number of learned weights in a model, commonly expressed in billions (e.g., "7B" = 7 billion parameters). Determines memory requirements (roughly 2 bytes per parameter at FP16) and broadly correlates with capability, though training quality and architecture matter as much as size. See Model Selection and Serving for sizing guidance.
**PEFT (Parameter-Efficient Fine-Tuning)** (Optimization)
A family of methods (including LoRA) that fine-tune a small subset of parameters instead of the full model.
**Preference optimization** (Optimization)
Training methods (RLHF, DPO) that use human or automated preference signals to improve model behavior. "This output is better than that output" rather than "this is the correct output."
**QLoRA (Quantized LoRA)** (Optimization)
LoRA applied to a quantized (compressed) base model. Further reduces memory requirements, enabling fine-tuning on consumer hardware.
**Quantization** (Optimization)
Reducing the precision of model weights (e.g., FP16 → INT4) to shrink memory usage and increase inference speed at some quality cost. A 7B model at FP16 needs ~14 GB VRAM; quantized to 4-bit, it fits in ~4 GB. Common formats include GGUF (llama.cpp/Ollama) and GPTQ/AWQ (vLLM/Hugging Face). See Model Selection and Serving for format details and tradeoffs.
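The VRAM figures in the definition above come from simple arithmetic: bytes per parameter times parameter count (weights only; runtime overhead adds a little more).

```python
def model_gb(params_billion: float, bits_per_param: int) -> float:
    # Total weight memory in GB for a given precision.
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

print(model_gb(7, 16))  # FP16: 14.0 GB
print(model_gb(7, 4))   # INT4:  3.5 GB (plus overhead -> ~4 GB in practice)
```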
**Overfitting** (Optimization)
When a model memorizes training examples instead of learning generalizable patterns. The model performs well on training data but poorly on new inputs. Detected by monitoring validation loss alongside training loss.
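Detecting it in practice means watching the two loss curves diverge: training loss keeps falling while validation loss starts rising. An illustrative sketch with made-up numbers:

```python
train_loss = [2.1, 1.6, 1.2, 0.9, 0.6]
val_loss   = [2.2, 1.7, 1.4, 1.5, 1.8]

best = val_loss.index(min(val_loss))  # step with the best validation loss
overfitting = val_loss[-1] > val_loss[best] and train_loss[-1] < train_loss[best]
print(best, overfitting)  # 2 True -> keep the checkpoint from step 2
```

This is the logic behind early stopping: the checkpoint at the validation minimum, not the final one, is usually the model you want.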
**RLHF (Reinforcement Learning from Human Feedback)** (Optimization)
A training method that uses human preference signals to improve model behavior through a reward model. More complex than DPO (requires training a separate reward model) but offers more control over the optimization objective.
**SFT (Supervised Fine-Tuning)** (Optimization)
Fine-tuning using input-output pairs where the desired output is known. The most common fine-tuning approach.
**TRL (Transformer Reinforcement Learning)** (Optimization)
A Hugging Face library for training language models with reinforcement learning, SFT, and other optimization methods.
**Consumer chat app** (Cross-cutting)
The browser or desktop product meant for human conversation (ChatGPT, Claude, HuggingChat). Useful for experimentation, but not the same as API access.
**Developer platform** (Cross-cutting)
The provider's API, billing, API-key, and developer-docs surface. This is what you need for this learning path.
**Hosted API** (Cross-cutting)
The provider runs the model for you and you call it over HTTP.
**Local inference** (Cross-cutting)
You run the model on your own machine.
**Provider** (Cross-cutting)
The company or service that hosts a model API you call from code.
**Prompt caching** (Cross-cutting)
Reusing computation from repeated prompt prefixes to reduce latency and cost on subsequent requests with the same prefix.
**Rate limiting** (Cross-cutting)
Constraints on how many API requests you can make per unit of time. An operational concern that affects system design and cost.
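On the client side, a token bucket is the classic way to stay under a provider's limit. A minimal sketch with illustrative limit values:

```python
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec       # how fast permits refill
        self.capacity = capacity       # max burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should back off and retry

bucket = TokenBucket(rate_per_sec=2, capacity=3)
results = [bucket.allow() for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

In production you would pair this with retry-after handling from the provider's 429 responses rather than relying on client-side accounting alone.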
**Token budget** (Cross-cutting)
The maximum number of tokens you allocate for a specific part of the context (e.g., "retrieval evidence gets at most 4K tokens"). A context engineering tool for preventing any single component from dominating the context window.
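Enforcing a budget is mostly a truncation loop. A sketch, using a crude whitespace token count where a real system would use the model's tokenizer:

```python
def count_tokens(text: str) -> int:
    # Whitespace proxy; substitute the real tokenizer in practice.
    return len(text.split())

def fit_to_budget(chunks, budget):
    kept, used = [], 0
    for chunk in chunks:
        cost = count_tokens(chunk)
        if used + cost > budget:
            break  # stop before the budget is exceeded
        kept.append(chunk)
        used += cost
    return kept, used

evidence = ["alpha beta gamma", "delta epsilon", "zeta eta theta iota"]
kept, used = fit_to_budget(evidence, budget=5)
print(len(kept), used)  # 2 5
```

Applying a budget per component (evidence, history, instructions) is what keeps any one of them from crowding the others out of the context window.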