If AI is going to work at scale, it needs more than a model; it needs access, structure, and purpose.
At the AWS Summit in New York City, one phrase stuck with us:
“Models are only as good as the context they’re given.”
It came during an insightful joint session from AWS and Anthropic on Model Context Protocol (MCP), a deceptively simple concept with massive implications.
Across this recap series, we’ve explored the rise of agentic AI, the infrastructure required to support it, and the ecosystem AWS is building to accelerate adoption. MCP is the connective tissue that brings it all together. It’s how you move from smart models to useful systems.
Why Context Is the New Bottleneck
Generative AI has been evolving fast, but enterprise implementation is still slow. Why?
Because no matter how advanced your model is, it can’t help you make better decisions if it’s not connected to what makes your business unique: Your data. Your tools. Your systems. Your users.
That’s where MCP comes in.
What Is MCP—and Why It Matters
Model Context Protocol (MCP) is an open specification that lets AI models dynamically discover and interact with external tools, data sources, and prompts. Think of it as a structured interface: systems publish a list of tools, what they do, the inputs they require, and how the model should use them.
For executives, that means your AI agents can tap into real business logic—not by guessing, but by calling documented resources your teams control. For engineers, it means you can expose functions, services, or datasets via an MCP server, enabling LLMs to perform meaningful actions without hardcoding every step.
The result? AI that doesn’t just respond—it executes, using tools it finds and understands in real time.
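For engineers who want to see the shape of this, here is a minimal sketch of an MCP server using the official MCP Python SDK's FastMCP helper. The get_order_status tool and its canned response are hypothetical stand-ins for your own business logic; the point is that the function signature and docstring become the published tool description a model can discover.

```python
# Minimal MCP server sketch (assumes the official `mcp` Python SDK is installed).
# The order-status tool below is a hypothetical stand-in for real business logic.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("order-service")

@mcp.tool()
def get_order_status(order_id: str) -> str:
    """Look up the current status of an order by its ID."""
    # A real server would call your internal order system here;
    # a canned value keeps the sketch self-contained.
    return f"Order {order_id}: shipped"

if __name__ == "__main__":
    # Serves tool listings and tool calls over stdio by default.
    mcp.run()
```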
With MCP, you can:
- Connect any model to any MCP-enabled system: Pull data from structured and unstructured sources, trigger tools, and interact with APIs—on demand.
- Assign agent permissions and scopes: Let different agents access different tools based on business roles or workflows.
- Enable intelligent task execution: Run lightweight services (“microtools”) directly from the MCP environment—no handoffs required.
- Handle complex use cases: With built-in function calling, system prompting, sampling, registry, elicitation, and discovery, MCP lets agents ask questions, call other models, and complete multi-step tasks without breaking context (see the client sketch after this list).
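To make discovery concrete, here is a rough client-side sketch, again assuming the official MCP Python SDK and the hypothetical server above saved as server.py. It lists the tools the server publishes and calls one directly; in a real agent, the discovered tool descriptions would be handed to the model, which decides when and how to call them.

```python
# Client sketch: discover a server's tools, then call one.
# Assumes the official `mcp` Python SDK and the hypothetical server.py above.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server_params = StdioServerParameters(command="python", args=["server.py"])

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discovery: the server says which tools exist and what inputs they take.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

            # Invocation: in a real agent, the model would choose this call itself.
            result = await session.call_tool(
                "get_order_status", arguments={"order_id": "A-1042"}
            )
            print(result.content)

if __name__ == "__main__":
    asyncio.run(main())
```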
In short: MCP allows generative AI to break free of the chat window and take real-world action.
Real Integration, Not Just Model Tuning
With MCP servers already available in AWS, your teams can start building agentic AI products that use your unique business logic, customer data, and internal systems. This isn’t hypothetical. It’s real and ready to deploy today.
At Robots & Pencils, we’re already using this pattern with our clients:
- Exposing legacy data through modern interfaces (see the sketch after this list)
- Building modular tools for agents to call dynamically
- Designing UX around co-pilots that ask users the right questions at the right moment
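As an illustration of the first pattern, here is a hedged sketch of wrapping a legacy data source as an MCP resource, using the same Python SDK as above. The inventory lookup, URI scheme, and CSV file are hypothetical examples; the point is that old data can be published behind a clean, discoverable interface without rewriting the system that owns it.

```python
# Sketch: publishing legacy data through a modern, discoverable interface.
# The inventory source and URI scheme here are hypothetical examples.
import csv

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("legacy-inventory")

@mcp.resource("inventory://{sku}")
def read_inventory(sku: str) -> str:
    """Return the stock record for a SKU from a legacy CSV export."""
    # Stand-in for a real legacy source (mainframe extract, nightly CSV dump, etc.).
    with open("legacy_inventory.csv", newline="") as f:
        for row in csv.DictReader(f):
            if row.get("sku") == sku:
                return f"{sku}: {row.get('quantity', 'unknown')} units on hand"
    return f"{sku}: not found"

if __name__ == "__main__":
    mcp.run()
```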
We call this approach Emergent Experience Design, a framework for building systems where agents adapt, interfaces evolve, and outcomes unfold through interaction. If you’re rethinking UX in the age of AI, this is where to start.
And when you combine this with what we covered in The Future Is Agentic, Modernization Reloaded, and From AI to Execution, you start to see the bigger picture: Agentic AI isn’t just a new model. It’s a new way of working. And context is the infrastructure it runs on.
Plug AI into the Business, Not Just the Cloud
The hype phase of generative AI is behind us. What matters now is how well your systems can support intelligent action. If you want AI that drives real outcomes, you don’t just need better models. You need better context. That’s the promise of MCP—and the opportunity ahead for organizations ready to take the next step.
If you’re experimenting with GenAI and want to connect it to your real-world data and systems, we should talk.