Introduction: Terminology Matters

Carlisia Campos
MCP Technical Strategist

Published October 5, 2025

We’ve all seen articles about “an AI system”, but what does that actually mean? The raw LLM that generates text? The agent that makes decisions? The tools it calls? The infrastructure that makes it all work reliably?

Without clear terminology, teams build brittle integrations, set wrong expectations, and misplace trust—expecting intelligence where there’s really just plumbing.

Understanding whether you need just an LLM for text generation, an agent for complex workflows, or a full system with persistent state changes everything: API design, user experience, and, perhaps most importantly, cost.

As AI capabilities expand, precise terminology becomes our shared language for building the right solutions. The clearer we are about what each component does, the better we can match technical choices to real user needs, and avoid expecting agent-level capabilities from LLM-only implementations.

LLMs provide the reasoning foundation. Agents add goal-directed behavior and decision-making. Tools extend capabilities beyond text generation. MCP standardizes how tools are exposed (servers) and accessed (clients). Hosts provide the runtime environment. Orchestration ensures everything works reliably together. And with that, I encourage you to dig deeper into specific glossary terms.
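
To make the server/client distinction concrete, here is a minimal sketch of an MCP server exposing a single tool, written against the MCP Python SDK's FastMCP helper. The server name, the get_forecast tool, and its canned response are illustrative assumptions for this sketch, not part of the glossary itself.

```python
# Minimal illustrative MCP server: it exposes one tool so that an MCP
# client (embedded in a host application) can discover and call it.
# The server name and the get_forecast tool are hypothetical examples.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather-demo")

@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a short, canned forecast for the given city."""
    # A real server would call a weather API here; this sketch only
    # demonstrates how a capability is exposed as an MCP tool.
    return f"Forecast for {city}: sunny"

if __name__ == "__main__":
    # Runs the server over stdio (the default transport), so a host's
    # MCP client can launch it as a subprocess and connect to it.
    mcp.run()
```

In the terms above: the host runs an MCP client that connects to this process and lists its tools, the agent decides when to call get_forecast, the LLM does the reasoning in between, and the orchestration layer around them keeps the whole exchange reliable.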