
Core Concept Map

This page defines the ideas that organize the entire learning path. I'd recommend reading it before starting Module 1. Every concept here will appear throughout your journey, and understanding how they relate to each other will make each lesson click faster.

We'll define each concept in plain language first, then show how it connects to the others.


The core concepts

Model

A model is a program that has been trained on data to predict outputs from inputs. In this curriculum, "model" almost always means a large language model (LLM): a model trained on text that generates text. You won't train models in this learning path until the very end; you'll use them.

At its core, a model takes in a sequence of tokens and produces a sequence of tokens. Everything else in AI engineering, and everything in this curriculum, is about what you put in, what you do with what comes out, and how you measure whether it was good.

Model families you'll encounter:

Family | What it is | When to use it | Examples
Workhorse model | A general-purpose LLM optimized for speed and broad capability | Most tasks. When someone says "LLM" without qualification, they usually mean this | GPT-4o-mini, Claude Haiku, Gemini Flash, Llama 3.1 8B
Reasoning model | A model optimized for complex multi-step planning, math, and logic. Slower and more expensive | Hard problems that require chained reasoning, code generation with complex constraints | o3, o4-mini, Claude with extended thinking
Small language model (SLM) | A compact model (typically 1-7B parameters) that runs on consumer hardware | When privacy, offline operation, predictable cost, or low latency matter more than peak capability | Phi, small Llama variants, Gemma

Foundation model: A foundation model is a large model pre-trained on broad data and designed to be adapted to many downstream tasks. OpenAI's GPT models, Anthropic's Claude models, and Llama models are all examples of foundation models. In this learning path, the workhorse, reasoning, and SLM families in the table above are not separate from foundation models; they are practical categories for thinking about how different foundation models are used.

What a model is not: A model is not a product. ChatGPT is a product built on top of models. Claude is a product built on top of models. The model is the engine; the product is the car. In this learning path, you'll work with the engine directly through APIs.

Infrastructure vs. model: An inference server like vLLM or Ollama is not a model. It's the software that hosts a model and serves requests. Think of it as the power plant that runs the engine. You'll encounter this distinction in later modules when we discuss self-hosted inference.


Prompt

A prompt is the input you give to a model. At its simplest, it's a question or instruction. In practice, a prompt is a structured message (or sequence of messages) that includes:

  • A system message that sets the model's role, behavior, and constraints
  • One or more user messages with the actual request
  • Optionally, assistant messages (previous model responses) for conversation continuity
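In code, such a prompt is usually a list of role-tagged messages. This sketch uses the common OpenAI-style chat format; exact field names vary slightly by provider:

```python
# A chat-format prompt: a system message, conversation history, and the new request.
# Field names follow the OpenAI-style chat format; providers differ in details.
messages = [
    {"role": "system", "content": "You are a code reviewer. Answer in bullet points."},
    {"role": "user", "content": "Review this function for error handling."},
    {"role": "assistant", "content": "- The function swallows exceptions silently."},
    {"role": "user", "content": "Suggest a fix for that."},
]

# The full message list, not just the last user turn, is what the model sees.
prompt_roles = [m["role"] for m in messages]
```

Note that the assistant's own earlier replies are sent back as part of the prompt: the model has no hidden memory of the conversation beyond what you include here.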

Prompt engineering isn't about tricks. I've found it's much closer to writing clear contracts or requirements: telling the model what you want, what format you want it in, what constraints apply, and what evidence to use. If you're good at writing technical specs, you'll find prompt engineering surprisingly natural.

Relationship to other concepts: A prompt is one part of the model's context. It's the part you write directly. The other parts (retrieved evidence, tool results, memory) are assembled by your system.


Context

Context is everything the model sees when it generates a response. This includes the prompt, but it also includes:

  • Retrieved documents or code snippets
  • Results from tool calls
  • Conversation history
  • Memory entries
  • System instructions

A model's context window is the maximum number of tokens it can process in a single request (input and output combined). Context windows have gotten large (100K+ tokens is common), but bigger is not always better. A model with 200K tokens of irrelevant context will often perform worse than a model with 2K tokens of precisely relevant context.

Relationship to other concepts: Context is the stage where prompt, retrieval, memory, and tool results all meet. Context engineering is the discipline of managing this stage well.


Context engineering

Context engineering is the discipline of selecting, packaging, and budgeting the information a model sees at inference time. In my experience, it's the single most important skill in AI engineering, more impactful than choosing the right model or the right framework.

Context engineering involves:

  • Selecting the right evidence for a specific task (not just "everything we have")
  • Packaging it in a format the model can use effectively (structure, ordering, deduplication)
  • Budgeting tokens so you don't waste context window space on low-value information
  • Maintaining context quality over time (avoiding context rot)
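The budgeting step can be sketched as a greedy packer. This is a toy illustration: word count stands in for token count (a real system would use the model's tokenizer), and the scores are assumed to come from your retrieval step:

```python
def fit_to_budget(snippets, max_tokens):
    """Greedily pack the highest-scored snippets until the token budget is spent.

    Snippets are (score, text) pairs. Token counts are approximated by word
    count here; a real system would use the model's tokenizer.
    """
    chosen, used = [], 0
    for score, text in sorted(snippets, key=lambda s: s[0], reverse=True):
        cost = len(text.split())
        if used + cost <= max_tokens:
            chosen.append(text)
            used += cost
    return chosen

snippets = [
    (0.9, "def connect(): opens the database connection"),
    (0.2, "changelog entry from 2019 about logging"),
    (0.7, "config: DB_URL read from the environment"),
]
picked = fit_to_budget(snippets, max_tokens=12)  # keeps the two relevant snippets
```

The point is the judgment encoded in the scores and the budget, not the packing loop itself: the low-value changelog entry is dropped even though there was nothing "wrong" with it.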

This is different from "just make the context window bigger." A longer context window gives you more room, but it doesn't tell you what to put in it. Context engineering is the judgment layer.

Context rot is what happens when context quality degrades over time: stale memory facts, conflicting retrieved evidence, bloated prompt history, accumulated instructions that contradict each other. It's a form of technical debt in AI systems, and context engineering is how you prevent and fix it.

Relationship to other concepts: Context engineering sits on top of retrieval, memory, and prompting. It's the skill that ties them together. You'll practice it throughout the curriculum, starting with simple prompt construction and building toward compiled context packs.


Tool call

A tool call is when the model requests the execution of a specific function with structured arguments, rather than just generating text. The model doesn't run the tool itself. It outputs a structured request ("call function X with arguments Y"), your code executes it, and the result goes back into the model's context.
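That round trip can be sketched as follows. The tool registry and message shapes here are hypothetical; real providers return structured tool-call objects with their own field names:

```python
# Hypothetical tool registry; your code, not the model, executes these.
tools = {
    "get_weather": lambda city: f"18C and cloudy in {city}",
}

def handle_model_output(output, messages):
    """If the model requested a tool, run it and feed the result back into context."""
    if output.get("tool_call"):
        call = output["tool_call"]
        result = tools[call["name"]](**call["arguments"])
        # The tool result becomes part of the context for the model's next turn.
        messages.append({"role": "tool", "name": call["name"], "content": result})
        return result
    return output.get("content")

messages = []
# Pretend the model asked for a tool instead of answering directly:
model_output = {"tool_call": {"name": "get_weather", "arguments": {"city": "Oslo"}}}
result = handle_model_output(model_output, messages)
```

The key structural fact survives the simplification: the model emits a request, your code runs it, and the result is appended to the message history for the next inference call.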

Tool calling is what turns a language model from a text generator into a component of a larger system. With tools, a model can:

  • Look up information it doesn't have
  • Execute actions in the real world (send emails, create files, query databases)
  • Interact with APIs and external services

Relationship to other concepts: Tool results become part of the model's context. The quality of tool results directly affects the quality of the model's output. This is another place where context engineering matters: not just "did the tool return a result?" but "is this result the right thing to show the model?"


Retrieval

Retrieval is the process of finding relevant information from a larger corpus to include in the model's context. Instead of hoping the model "knows" the answer from training, you find the evidence first and give it to the model explicitly.

Retrieval methods include:

Method | How it works | Good for
Lexical search (BM25, keyword) | Matches exact terms | Known identifiers, function names, exact phrases
Vector search (embeddings) | Matches by semantic similarity | Natural language queries, conceptual questions
AST / symbol index | Uses code structure (syntax trees) | "Where is this function defined?", "What calls this?"
Metadata filters | Filters by structured attributes (file type, date, author) | Narrowing results to relevant scope
Hybrid | Combines multiple methods and reranks | Most production systems
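The hybrid row is worth a sketch. The two scoring functions below are illustrative stand-ins, not real BM25 or embeddings, but the shape is the same: compute multiple signals per document, merge them, and rank:

```python
def keyword_score(query, doc):
    """Toy lexical score: fraction of query terms that appear verbatim in the doc."""
    terms = query.lower().split()
    return sum(t in doc.lower() for t in terms) / len(terms)

def overlap_score(query, doc):
    """Toy stand-in for semantic similarity: shared-vocabulary Jaccard overlap."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d)

def hybrid_rank(query, docs):
    """Merge both signals with equal weight and rank -- the hybrid pattern."""
    scored = [(0.5 * keyword_score(query, d) + 0.5 * overlap_score(query, d), d)
              for d in docs]
    return [d for score, d in sorted(scored, reverse=True)]

docs = [
    "parse_config reads settings from a YAML file",
    "unrelated release notes for version 2.1",
]
ranked = hybrid_rank("parse_config settings", docs)
```

Production systems replace both scorers with real implementations and often add a reranking pass, but the merge-then-rank skeleton is the same.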

Retrieval isn't the same thing as RAG. Retrieval is the act of finding things. RAG (Retrieval-Augmented Generation) is a specific pattern where you retrieve evidence, then generate a response grounded in that evidence. Retrieval is an ingredient; RAG is a recipe.

Relationship to other concepts: Retrieved results become part of the model's context. Bad retrieval leads to bad context, which leads to bad outputs, no matter how good the model is. This is why the curriculum spends an entire module on retrieval before introducing RAG.


Memory

Memory is how an AI system persists information across conversations or tasks. Without memory, every interaction starts from zero.

There are different kinds of memory:

Type | Scope | Example
Thread memory | Within one conversation | "The user said they prefer Python"
Workflow memory | Within one multi-step task | "Step 2 produced these intermediate results"
Long-term memory | Across conversations | "This user's codebase uses FastAPI and PostgreSQL"
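One way to picture those scopes is as namespaced stores with different lifetimes. This is a hypothetical structure for illustration, not any specific library's API:

```python
class MemoryStore:
    """Hypothetical memory with the three scopes from the table above."""

    def __init__(self):
        self.thread = {}     # cleared when the conversation ends
        self.workflow = {}   # cleared when the multi-step task completes
        self.long_term = {}  # survives across conversations

    def end_thread(self):
        self.thread.clear()

    def end_workflow(self):
        self.workflow.clear()

memory = MemoryStore()
memory.thread["prefers"] = "Python"
memory.long_term["stack"] = "FastAPI and PostgreSQL"
memory.end_thread()  # the preference is gone; the stack fact persists
```

The design question each scope forces is the write policy: what earns a place in long-term memory, and who is allowed to delete or overwrite it.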

Memory isn't free. Writing to memory creates a commitment: the system will use this information in future contexts. Bad memory (stale facts, incorrect summaries, over-broad generalizations) causes context rot. This is why we'll introduce memory only after we can measure whether it helps or hurts, through evals and telemetry.

Relationship to other concepts: Memory entries become part of the model's context. Memory management is a context engineering problem: what to remember, when to forget, and how to keep memory entries from contradicting each other.


Eval

An eval (evaluation) is a structured test that measures system quality. It's not the same as training. Evals measure; they don't change the model.

Evals answer questions like:

  • Did the retrieval step find the right evidence?
  • Did the generated answer match the evidence?
  • Did the tool call use the correct arguments?
  • Did the system complete the task end-to-end?
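At its smallest, an eval is a loop over fixed question/answer pairs. This sketch uses exact-match scoring and a toy stand-in system; real harnesses add logging, grading rubrics, and run-over-run comparison:

```python
def run_eval(system, benchmark):
    """Score a system against fixed question/answer pairs. Measures; never trains."""
    passed = sum(system(item["question"]) == item["answer"] for item in benchmark)
    return passed / len(benchmark)

benchmark = [
    {"question": "capital of France?", "answer": "Paris"},
    {"question": "2 + 2?", "answer": "4"},
]

# A stand-in "system" for illustration; in practice this calls your pipeline.
def toy_system(question):
    return {"capital of France?": "Paris", "2 + 2?": "5"}[question]

score = run_eval(toy_system, benchmark)  # one of two correct: 0.5
```

Nothing about the model changes when this runs; the number it produces is what makes your next change an experiment instead of a guess.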

The curriculum distinguishes several eval families:

Family | What it measures
Retrieval evals | Did we find the right documents?
Answer evals | Is the generated response correct and grounded?
Tool-use evals | Did the model call the right tools with the right arguments?
Trace evals | Did the full system execution path make sense?

LLM-as-judge is a technique where you use a language model to evaluate or grade the output of another model. It's useful for scaling evaluation beyond manual review, but it requires careful rubric design and human spot-checking. It's not a replacement for exact-match checks where those apply.

Relationship to other concepts: Evals are what make iteration scientific instead of subjective. Without evals, we're guessing whether our changes helped. With evals, we know. This is why the curriculum starts with a benchmark before almost anything else.


Inference

Inference is the act of running a trained model to generate output from input. When you call an API and get a response, that's inference. When you run a model locally and it produces text, that's inference.

Most AI engineering work is inference-time work: building systems around models, not training them. The curriculum uses the word "inference," not "inferencing."

Inference is different from training. Training changes the model's weights. Inference uses the weights as they are. The entire learning path until the Optimization module is inference-time work.

Relationship to other concepts: Everything in this concept map (prompts, context, retrieval, tools, memory, evals) is about making inference better. The model's weights stay fixed; you improve the system around it.


Distillation

Distillation is training a smaller model (the "student") to reproduce the behavior of a larger model (the "teacher") on a specific task. The student doesn't learn from the original training data. It learns from the teacher's outputs.

Why distill? Because large models are expensive and slow. If you have a task where a large model reliably produces good results, you can distill that behavior into a smaller, cheaper, faster model that handles that specific task well enough.
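The data-collection side of distillation is often as simple as filtering logged teacher runs into training pairs. In this sketch, "good" is decided by your evals; the log fields are hypothetical:

```python
import json

def collect_distillation_pairs(logs):
    """Turn logged teacher runs that passed evals into student training examples."""
    pairs = []
    for entry in logs:
        if entry["eval_passed"]:  # only distill behavior you've verified
            pairs.append({"input": entry["prompt"], "output": entry["response"]})
    return pairs

logs = [
    {"prompt": "Summarize the ticket", "response": "Short summary.", "eval_passed": True},
    {"prompt": "Summarize the ticket", "response": "Wrong summary.", "eval_passed": False},
]
pairs = collect_distillation_pairs(logs)
jsonl = "\n".join(json.dumps(p) for p in pairs)  # SFT-style JSONL for the student
```

This is why evals and run logs are prerequisites: without them, you can't tell which teacher outputs are worth teaching.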

Relationship to other concepts: Distillation only makes sense after you have evals (to measure whether the student matches the teacher) and logs (to collect training examples from the teacher's outputs). This is why it appears near the end of the curriculum.


Fine-tuning

Fine-tuning is updating a model's weights on task-specific data to change its behavior permanently. While distillation and fine-tuning both involve training, they solve different problems: distillation compresses a larger model's behavior into a smaller one, while fine-tuning adapts a model to your specific domain, format, or task. They're separate techniques in this curriculum, not subcategories of each other.

Fine-tuning is the most permanent intervention in this curriculum. It changes the model itself. This is why it comes last: we'll only consider fine-tuning after we've exhausted prompt engineering, retrieval improvements, context engineering, and workflow changes. Those interventions are cheaper, more reversible, and often sufficient.

Common fine-tuning approaches you'll encounter:

Approach | What it does
SFT (Supervised Fine-Tuning) | Trains on input-output pairs where the desired output is known
LoRA / QLoRA | Parameter-efficient methods that train small adapter layers instead of updating all weights, dramatically reducing hardware requirements
Preference optimization (RLHF, DPO) | Uses human or automated preference signals to improve model behavior

Relationship to other concepts: Fine-tuning requires evals (to measure improvement), logs (for training data), and a stable system (so you're not fine-tuning around bugs in retrieval or prompting). It's the capstone of the optimization sequence.


How these concepts connect

Here's the big picture of how these concepts relate to each other in a running AI system:

[Diagram: how core concepts connect in a running AI system]

The flow is:

  1. You write a prompt and your system assembles context from retrieval, tools, and memory
  2. Context engineering decides what goes in and what stays out
  3. The model runs inference on the assembled context
  4. Evals measure whether the output was good
  5. You improve by fixing context (cheap, reversible) before resorting to distillation or fine-tuning (expensive, permanent)
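The five steps above can be sketched as a single function. Every name here is a hypothetical placeholder for your own components, and characters stand in crudely for tokens:

```python
def answer(task, retrieve, call_model, evaluate, max_context_chars=4000):
    """One pass through the loop: assemble context, run inference, measure.

    retrieve / call_model / evaluate are placeholders for your own components.
    """
    evidence = retrieve(task)                          # step 1: gather candidates
    context = evidence[:max_context_chars]             # step 2: budget (chars as a crude token proxy)
    output = call_model(f"{context}\n\nTask: {task}")  # step 3: inference
    score = evaluate(task, output)                     # step 4: measure
    return output, score                               # step 5: iterate on steps 1-2 first

# Toy components so the sketch runs end to end:
output, score = answer(
    "say hi",
    retrieve=lambda t: "greeting conventions: say 'hi'",
    call_model=lambda prompt: "hi",
    evaluate=lambda t, out: out == "hi",
)
```

Notice where the leverage is: `retrieve` and the budgeting line are cheap to change and measure, while swapping the model (or fine-tuning it) touches the most expensive, least reversible part of the loop.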

This loop is what we'll be building throughout the curriculum. Each module teaches you to build one part of it, measure it, and improve it.

What's next

Common Category Mistakes. The concept map gives you the nouns; that page covers the confusions that make those nouns blur together in practice.


Glossary
Foundational terms

API (Application Programming Interface)
A structured way for programs to communicate. In this context, usually an HTTP endpoint you call to interact with an LLM.
AST (Abstract Syntax Tree)
A tree representation of source code structure. Used by parsers like Tree-sitter to understand code as a hierarchy of functions, classes, and statements. You'll encounter this more deeply in the Code Retrieval module, but the concept appears briefly in retrieval fundamentals.
BM25 (Best Match 25)
A classical ranking function for keyword search. Scores documents by term frequency and inverse document frequency. Often competitive with or complementary to vector search.
Chunking
Splitting a document into smaller pieces for indexing and retrieval. Chunk boundaries significantly affect retrieval quality. Split at the wrong place and your retrieval will return half a function or the end of one paragraph glued to the start of another.
Context engineering
The discipline of selecting, packaging, and budgeting the information a model sees at inference time. Prompts, retrieved evidence, tool results, memory, and state are all parts of context. Context engineering is arguably the core skill of AI engineering. Bigger context windows are not a substitute for better context selection.
Context rot
Degradation of output quality caused by stale, noisy, or accumulated context. Symptoms include stale memory facts, conflicting retrieved evidence, bloated prompt history, and accumulated instructions that contradict each other. A form of technical debt in AI systems.
Context window
The maximum number of tokens an LLM can process in a single request (input + output combined).
Embedding
A fixed-length numeric vector representing a piece of text. Used for similarity search: texts with similar meanings have nearby embeddings.
Endpoint
A specific URL path that accepts requests and returns responses (e.g., POST /v1/chat/completions).
GGUF
A file format for quantized models used by llama.cpp and Ollama. When you see a model name like qwen2.5:7b-q4_K_M, the suffix indicates the quantization scheme. GGUF supports mixed quantization (different precision for different layers) and is the most common format for local inference.
Hallucination
When a model generates content that sounds confident but isn't supported by the evidence it was given, or fabricates details that don't exist. Not the same as "any wrong answer"; a model that misinterprets ambiguous instructions gave a bad answer but didn't hallucinate. Common causes: weak prompt, missing context, context rot, model limitation, or retrieval failure.
Inference
Running a trained model to generate output from input. What happens when you call an API. Most AI engineering work is inference-time work: building systems around models, not training them. Use "inference," not "inferencing."
JSON (JavaScript Object Notation)
A lightweight text format for structured data. The lingua franca of API communication.
Lexical search
Finding items by matching keywords or terms. Includes BM25, TF-IDF (Term Frequency–Inverse Document Frequency), and simple keyword matching. Returns exact term matches, not semantic similarity.
LLM (Large Language Model)
A neural network trained on large text corpora that generates text by predicting the next token. The core technology behind AI engineering; every tool, pattern, and pipeline in this curriculum runs on top of one.
Metadata
Structured information about a document or chunk (file path, language, author, date, symbol type). Used for filtering retrieval results.
Neural network
A computing system loosely inspired by biological neurons, built from layers of mathematical functions that transform inputs into outputs. LLMs are a specific type of neural network (transformers) trained on text. You don't need to understand neural network internals to do AI engineering, but knowing the term helps when reading external resources.
Reasoning model
A model optimized for complex multi-step planning, math, and logic (e.g., o3, o4-mini). Slower and more expensive but better on hard problems. Sometimes called "LRM" (large reasoning model), but "reasoning model" is the more consistent term across provider docs.
Reranking
A second-pass scoring step that re-orders retrieved results using a more expensive model. Improves precision after an initial broad retrieval.
Schema
A formal description of the shape and types of a data structure. Used to validate inputs and outputs.
SLM (small language model)
A compact model (typically 1-7B parameters) that runs on consumer hardware with lower cost, lower latency, and better privacy (e.g., Phi, small Llama variants, Gemma). The right choice when privacy, offline operation, predictable cost, or low latency matter more than peak capability.
System prompt
A special message that sets the model's behavior, role, and constraints for a conversation.
Temperature
A parameter controlling output randomness. Lower values produce more deterministic output; higher values produce more varied output. Does not affect the model's intelligence.
Token
The basic unit an LLM processes. Not a word. Tokens are sub-word fragments. "unhappiness" might be three tokens: "un", "happi", "ness". Token count determines cost and context window usage.
Top-k
The number of results returned from a retrieval query. "Top-5" means the five highest-scoring results.
Top-p (nucleus sampling)
An alternative to temperature for controlling output diversity. Selects from the smallest set of tokens whose cumulative probability exceeds p.
Vector search
Finding items by proximity in embedding space (nearest neighbors). Returns "similar" results, not "exact match" results.
vLLM
An inference serving engine (not a model) that hosts open-weight models behind an OpenAI-compatible HTTP endpoint. Infrastructure layer, not model layer. Relevant when moving from hosted APIs to self-hosting.
Weights
The learned parameters inside a model. Changed during training, fixed during inference.
Workhorse model
A general-purpose LLM optimized for speed and broad capability (e.g., GPT-4o-mini, Claude Haiku, Gemini Flash). The default for most tasks. When someone says "LLM" without qualification, they usually mean this.
Benchmark and Harness terms

Baseline
The first measured performance of your system on a benchmark. Everything else is compared against this. Without a baseline, you can't tell whether a change helped.
Benchmark
A fixed set of questions or tasks with known-good answers, used to measure system performance over time.
Run log
A structured record (typically JSONL) of every system run: what input was given, what output was produced, what tools were called, how long it took, and what it cost. The raw data that evals, telemetry, and cost analysis are built from.
Agent and Tool Building terms

A2A (Agent-to-Agent protocol)
An open protocol for peer-to-peer agent collaboration. Agents discover each other's capabilities and delegate or negotiate tasks as equals. Different from MCP (which connects agents to tools, not to other agents) and from handoffs (which transfer control within one system).
Agent
A system where an LLM decides which tools to call, observes results, and iterates until a task is complete. Agent = model + tools + control loop.
Control loop
The code that manages the agent's cycle: send prompt, check for tool calls, execute tools, append results, repeat or finish.
Handoff
Passing control from one agent or specialist to another within an orchestrated system.
MCP (Model Context Protocol)
An open protocol for exposing tools, resources, and prompts to AI applications in a standardized way. Connects agents to capabilities (tools and data), not to other agents.
Tool calling / function calling
The model's ability to request execution of a specific function with structured arguments, rather than just generating text.
Code Retrieval terms

Context compilation / context packing
The process of selecting and assembling the smallest useful set of evidence for a specific task. Not "dump everything retrieved into the prompt."
Grounding
Tying model assertions to specific evidence. A grounded answer cites what it found; an ungrounded answer asserts without evidence.
Hybrid retrieval
Combining multiple retrieval methods (e.g., vector search + keyword search + metadata filters) and merging or reranking the results.
Knowledge graph
A data structure that stores entities and their relationships explicitly (e.g., "function A calls function B," "module X imports module Y"). Useful for traversal and dependency reasoning. One retrieval strategy among several, often overused when simpler metadata or adjacency tables would suffice.
RAG (Retrieval-Augmented Generation)
A pattern where the model's response is grounded in retrieved external evidence rather than relying solely on its training data.
Symbol table
A mapping of code identifiers (functions, classes, variables) to their locations and metadata.
Tree-sitter
An incremental parsing library that builds ASTs for source code. Used in this curriculum for code-aware chunking and symbol extraction.
RAG and Grounded Answers terms

Context pack
A structured bundle of evidence assembled for a specific task, with metadata about provenance, relevance, and token budget.
Evidence bundle
A collection of retrieved items grouped for a specific sub-task, with enough metadata to evaluate whether the evidence is relevant and sufficient.
Retrieval routing
Deciding which retrieval strategy or method to use for a given query. Different questions need different retrieval methods.
Observability and Evals terms

Eval
A structured test that measures system quality. Not the same as training. Evals measure; they don't change the model.
Harness (AI harness / eval harness)
The experiment and evaluation framework around your model or agent. It runs benchmark tasks, captures outputs, logs traces, grades results, and compares system versions. It turns ad hoc "try it and see" into repeatable, comparable experiments. Typically includes: input dataset, prompt and tool configuration, model/provider selection, execution loop, logging, grading, and artifact capture.
LLM-as-judge
Using a language model to evaluate or grade the output of another model or system. Useful for scaling evaluation beyond manual review, but requires rubric quality, judge consistency checks, and human spot-checking. Not a replacement for exact-match checks where they apply.
OpenTelemetry (OTel)
An open standard for collecting and exporting telemetry data (traces, metrics, logs). Vendor-agnostic.
RAGAS
A specific eval framework for retrieval-augmented generation. Measures metrics like faithfulness, relevance, and context precision. One tool example, not a foundational concept. Learn the metrics first, then the tool.
Span
A single operation within a trace (e.g., one tool call, one retrieval query). Traces are made of spans.
Telemetry
Structured data about system behavior: what happened, when, how long it took, what it cost. Includes traces, metrics, and events.
Trace
A structured record of one complete run through the system, including all steps, tool calls, and decisions.
Orchestration and Memory terms

Long-term memory
Persistent facts that survive across conversations. Requires write policies to manage what gets stored, updated, or deleted.
Orchestration
Explicit control over how tasks are routed, delegated, and synthesized across multiple agents or specialists.
Router
A component that decides which specialist or workflow path to use for a given query.
Specialist
An agent or workflow tuned for a narrow task (e.g., "code search," "documentation lookup," "test generation"). Specialists are composed by an orchestrator.
Thread memory
Conversation state that persists within a single session or thread.
Workflow memory
Intermediate state that persists within a multi-step task but doesn't survive beyond the workflow's completion.
Optimization terms

Catastrophic forgetting
When fine-tuning causes a model to lose capabilities it had before training. The model gets better at the fine-tuned task but worse at tasks it previously handled. PEFT methods like LoRA reduce this risk by freezing original weights.
Distillation
Training a smaller (student) model to reproduce the behavior of a larger (teacher) model on a specific task.
DPO (Direct Preference Optimization)
A method for preference-based model optimization that's simpler than RLHF, training the model directly on preference pairs without a separate reward model.
Fine-tuning
Updating a model's weights on task-specific data to change its behavior permanently. An umbrella term that includes SFT, instruction tuning, RLHF, DPO, and other techniques. See the fine-tuning landscape table in Lesson 8.3 for how these relate.
Full fine-tuning
Updating all of a model's parameters during training, as opposed to PEFT methods that update only a small subset. Requires significantly more GPU memory and compute. Produces the most thorough adaptation but carries higher risk of catastrophic forgetting.
Inference server
Software (like vLLM or Ollama) that hosts a model and serves inference requests.
Instruction tuning
A specific application of SFT where the training data consists of instruction-response pairs. This is how base models become chat models: the technique is SFT, the data format is instructions. Not a separate technique from SFT.
LoRA (Low-Rank Adaptation)
A parameter-efficient fine-tuning method that trains small adapter matrices instead of updating all model weights. Dramatically reduces GPU memory and compute requirements.
Overfitting
When a model memorizes training examples instead of learning generalizable patterns. The model performs well on training data but poorly on new inputs. Detected by monitoring validation loss alongside training loss.
Parameter count
The number of learned weights in a model, commonly expressed in billions (e.g., "7B" = 7 billion parameters). Determines memory requirements (roughly 2 bytes per parameter at FP16) and broadly correlates with capability, though training quality and architecture matter as much as size. See Model Selection and Serving for sizing guidance.
PEFT (Parameter-Efficient Fine-Tuning)
A family of methods (including LoRA) that fine-tune a small subset of parameters instead of the full model.
Preference optimization
Training methods (RLHF, DPO) that use human or automated preference signals to improve model behavior. "This output is better than that output" rather than "this is the correct output."
QLoRA (Quantized LoRA)
LoRA applied to a quantized (compressed) base model. Further reduces memory requirements, enabling fine-tuning on consumer hardware.
Quantization
Reducing the precision of model weights (e.g., FP16 → INT4) to shrink memory usage and increase inference speed at some quality cost. A 7B model at FP16 needs ~14 GB VRAM; quantized to 4-bit, it fits in ~4 GB. Common formats include GGUF (llama.cpp/Ollama), GPTQ and AWQ (vLLM/HuggingFace). See Model Selection and Serving for format details and tradeoffs.
RLHF (Reinforcement Learning from Human Feedback)
A training method that uses human preference signals to improve model behavior through a reward model. More complex than DPO (requires training a separate reward model) but offers more control over the optimization objective.
SFT (Supervised Fine-Tuning)
Fine-tuning using input-output pairs where the desired output is known. The most common fine-tuning approach.
TRL (Transformer Reinforcement Learning)
A Hugging Face library for training language models with reinforcement learning, SFT, and other optimization methods.
Cross-cutting terms

Consumer chat app
The browser or desktop product meant for human conversation (ChatGPT, Claude, HuggingChat). Useful for experimentation, but not the same as API access.
Developer platform
The provider's API, billing, API-key, and developer-docs surface. This is what you need for this learning path.
Hosted API
The provider runs the model for you and you call it over HTTP.
Local inference
You run the model on your own machine.
Provider
The company or service that hosts a model API you call from code.
Prompt caching
Reusing computation from repeated prompt prefixes to reduce latency and cost on subsequent requests with the same prefix.
Rate limiting
Constraints on how many API requests you can make per unit of time. An operational concern that affects system design and cost.
Token budget
The maximum number of tokens you allocate for a specific part of the context (e.g., "retrieval evidence gets at most 4K tokens"). A context engineering tool for preventing any single component from dominating the context window.