The AI-Ready Localizer #15: MCP Apps Are Coming

And What That Really Means for Localization

Julia Diez
Dec 11, 2025

The next generation of AI interfaces is a game-changer for L10N, and multilingual ontologies will become the new core infrastructure.

Two weeks ago, Anthropic quietly dropped one of the biggest announcements in the agentic AI world:

MCP Apps are coming.

And they’re not coming alone. Anthropic is co-designing the standard with OpenAI and the emerging MCP-UI community.

This is the moment when the Model Context Protocol stops being “a cool way for LLMs to call tools” and starts evolving into what looks very much like a full application framework: with a UI layer, an action layer, a memory layer, and a universal way for AI agents to interact with human users.

If that sounds abstract, let me land it in the world I know best: localization.

Because what’s coming is not a “translator’s assistant.”

This is the beginning of AI-native UIs that can orchestrate multilingual content pipelines end to end.

From Chatbots to Applications

Until now, MCP has been the backend plumbing of agentic AI:

  • Tools to call APIs

  • Connectors to retrieve data

  • Functions to execute workflows

But the interface still lived in the chat metaphor.

That changes with MCP Apps.

For the first time, LLMs will be able to render proper UI components (forms, tables, permission screens, diffs, checklists, dashboards) inside a standard that works across Claude, ChatGPT, Cursor, and the upcoming MCP-UI ecosystem.

Agents won’t just describe workflows; they will be the interface that executes them.

You won’t need React or HTML to build an interactive app.

An MCP server + a schema → the LLM autogenerates the UI.

This is the moment where agentic AI stops being a “conversation” and becomes a new application model.
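To make the “server + schema → UI” idea concrete, here is a minimal sketch in Python. The tool name, schema shape, and field-derivation logic are all illustrative assumptions on my part, not the actual MCP Apps specification; the point is only that a declarative input schema already contains enough information for a host to generate a form.

```python
# Hypothetical sketch: a tool schema an MCP server might declare, and the
# form fields a host UI could derive from it. All names are invented for
# illustration; consult the MCP spec for the real shapes.

TOOL_SCHEMA = {
    "name": "check_glossary",  # hypothetical tool name
    "description": "Check a translation against the approved glossary.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "source_text": {"type": "string", "description": "Source segment"},
            "target_text": {"type": "string", "description": "Draft translation"},
            "locale": {"type": "string", "enum": ["de-DE", "fr-FR", "ja-JP"]},
        },
        "required": ["source_text", "target_text", "locale"],
    },
}

def schema_to_form(schema: dict) -> list[dict]:
    """Derive simple form-field descriptors from a JSON-Schema-style object."""
    props = schema["inputSchema"]["properties"]
    required = set(schema["inputSchema"].get("required", []))
    fields = []
    for name, spec in props.items():
        fields.append({
            "name": name,
            "label": spec.get("description", name),
            # an enum naturally becomes a dropdown; everything else a text box
            "widget": "select" if "enum" in spec else "text",
            "options": spec.get("enum"),
            "required": name in required,
        })
    return fields

for f in schema_to_form(TOOL_SCHEMA):
    print(f["name"], f["widget"], f["required"])
```

No React, no HTML: the “app” is the schema, and the rendering is the host’s job.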

What This Means for Localization

Localization is a system-of-systems problem:

  • source content

  • terminology

  • translation memory

  • style and voice

  • repositories

  • product strings

  • UI context

  • workflows

  • QA gates

  • human review

  • versioning and deltas

  • approvals

No single TMS currently handles all of this because the world moved outside the TMS:

source lives in repos, UX strings live in Figma, docs live in three content systems, marketing lives in another galaxy.

MCP Apps suddenly make something possible we’ve never had before:

A unified interface that orchestrates ALL L10N workflows across systems without needing to rebuild every UI manually.

Imagine:

  • A UI component that checks glossary compliance on the fly.

  • A mini-app that scans product repos for string duplicates and missing context.

  • A diff interface that compares your automatic draft with the existing TM and explains why it selected a certain option.

  • A review panel that shows source+target+ontology classes directly inside ChatGPT or Claude.

  • A post-editing analyzer that forecasts risk and flags terminology violations with your ontology context attached.

All as lightweight MCP Apps. No TMS UI rebuild. No Frankenstein workflows. The agent becomes the interface.
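The first bullet above, glossary compliance on the fly, can be sketched as a pure function that an MCP tool could wrap. The glossary format and the example terms are assumptions for illustration; a real deployment would pull these rules from your termbase.

```python
# Illustrative glossary check: flag forbidden synonyms and missing preferred
# terms in a draft translation. The data format is an invented assumption.

GLOSSARY = {
    "de-DE": {
        "dashboard": {"preferred": "Dashboard", "forbidden": ["Armaturenbrett"]},
        "sign in": {"preferred": "anmelden", "forbidden": ["einloggen"]},
    }
}

def check_glossary(source: str, target: str, locale: str) -> list[str]:
    """Return human-readable violations for a source/target segment pair."""
    violations = []
    for term, rule in GLOSSARY.get(locale, {}).items():
        if term not in source.lower():
            continue  # term not present in the source; nothing to enforce
        for bad in rule["forbidden"]:
            if bad.lower() in target.lower():
                violations.append(f'"{bad}" is forbidden; use "{rule["preferred"]}"')
        if rule["preferred"].lower() not in target.lower():
            violations.append(f'expected "{rule["preferred"]}" for "{term}"')
    return violations

for msg in check_glossary("Sign in to the dashboard",
                          "Loggen Sie sich im Armaturenbrett ein", "de-DE"):
    print(msg)
```

Wrapped as an MCP tool, the same function could return structured violations that an MCP App renders as inline warnings instead of printed strings.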

But: Agents Need Structure.

LLMs are powerful, but without structured meaning they:

  • hallucinate

  • over-generalize

  • ignore terminology nuance

  • treat every string as isolated

  • struggle with voice consistency across a product line

  • can’t apply governance rules reliably

This is exactly where most “AI + L10N” experiments fail.

Because you can’t orchestrate multilingual operations without a shared representation of meaning, relationships, constraints, and domain knowledge.

This is why I’ve been building my Multilingual Ontology Manager.

Agents need structure the way translators need context.

If MCP Apps are the new UI and tool calls are the new API, then the ontology becomes the semantic engine that keeps everything coherent.

Ontologies: the Core Memory Layer of L10N

An ontology does what a traditional TM never could:

  • models concepts, not strings

  • expresses relationships (“part-of”, “causes”, “belongs-to”)

  • unifies terminology across products and markets

  • encodes constraints and usage rules

  • anchors meaning across all locales

  • powers consistency at scale
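Here is a minimal sketch of the “concepts, not strings” idea: each concept carries relationships and per-locale variant rules, and relations like “part-of” can be walked programmatically. The data model and example domain are assumptions for illustration, not any particular ontology standard.

```python
# Minimal concept model: relationships plus per-locale variant rules.
# Shapes and example data are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Concept:
    id: str
    relations: dict[str, str] = field(default_factory=dict)  # e.g. {"part-of": "brake_system"}
    variants: dict[str, dict] = field(default_factory=dict)  # per-locale allowed/forbidden terms

ONTOLOGY = {
    "brake_pedal": Concept(
        id="brake_pedal",
        relations={"part-of": "brake_system"},
        variants={"de-DE": {"allowed": ["Bremspedal"], "forbidden": ["Bremse"]}},
    ),
    "brake_system": Concept(id="brake_system", relations={"part-of": "vehicle"}),
}

def ancestors(concept_id: str, relation: str = "part-of") -> list[str]:
    """Walk one relation chain, e.g. brake_pedal -> brake_system -> vehicle."""
    chain = []
    current = ONTOLOGY.get(concept_id)
    while current and relation in current.relations:
        parent = current.relations[relation]
        chain.append(parent)
        current = ONTOLOGY.get(parent)
    return chain

print(ancestors("brake_pedal"))  # ['brake_system', 'vehicle']
```

A TM can only tell you that two strings matched before; this structure can tell you what a term *is* and where it sits in the product.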

In an MCP world, this is gold.

MCP servers can expose:

  • concept definitions

  • allowed target variants

  • forbidden synonyms

  • domain constraints

  • governance rules

  • context windows per product area

And MCP Apps can render:

  • term cards

  • context panels

  • concept graphs

  • warnings

  • controlled-language suggestions

  • guided QA workflows
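Putting the two lists together, the “term card” idea might look like this: the server exposes concept data, and the app renders it for a reviewer. Both the payload shape and the rendering are invented for illustration; a real MCP App would return a structured UI resource rather than plain text.

```python
# Hedged sketch of a term card: concept data exposed by a server, rendered
# as text a chat UI could display. All shapes are illustrative assumptions.

CONCEPT = {
    "id": "checkout",
    "definition": "The flow where a customer pays for items in the cart.",
    "allowed": {"fr-FR": ["paiement", "passage en caisse"]},
    "forbidden": {"fr-FR": ["checkout"]},  # no untranslated English in fr-FR
}

def render_term_card(concept: dict, locale: str) -> str:
    """Build a plain-text term card for one concept and locale."""
    lines = [
        f"Term card: {concept['id']} ({locale})",
        f"Definition: {concept['definition']}",
        "Allowed: " + ", ".join(concept["allowed"].get(locale, [])),
        "Forbidden: " + ", ".join(concept["forbidden"].get(locale, [])),
    ]
    return "\n".join(lines)

print(render_term_card(CONCEPT, "fr-FR"))
```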

In other words:

Your ontology becomes the brain.

MCP becomes the nervous system.

The agent becomes the face.

Localization finally gets an architecture designed for meaning, not file processing.

The End of the “Do We Still Need a TMS?” Debate
