The AI-Ready Localizer

The AI-Ready Localizer #14: Meaning doesn’t live inside words.

Why L10N, i18n, and AI are secretly the same discipline

Julia Diez
Nov 15, 2025


Here’s a small sentence with gigantic consequences:

Meaning doesn’t live in words.
It lives in the network of relationships your brain (or a model) activates when it reads them.

Your brain doesn’t “read text.”
It lights up a semantic graph.

When you see “slow router,” you don’t fetch a clean dictionary entry.
In a few milliseconds, your brain detonates a cluster of associations:

  • frustration memories

  • expectations about internet speed in your country

  • mental models of bandwidth / WiFi / ISP

  • brands you’ve seen (and blamed)

  • emotional tone (“oh no, not again”)

  • spatial memories of where you were the last time this happened

Meaning isn’t the text.
Meaning is the shape of the graph that lights up.

Hi, I’m Julia Díez, an i18n/L10N engineer who spends an unreasonable amount of time mapping how humans and machines reconstruct meaning.
If you enjoy posts that blend localization, culture, semantic graphs, and the occasional existential crisis about language, stick around. This space was made for people like us.


Large Language Models Do the Same Thing. Just Implicitly

People say LLMs “predict the next word.” Technically yes.

But internally?

They build huge latent semantic graphs that look suspiciously like the ones your brain builds.

That’s why I am obsessed with knowledge graphs, multilingual ontologies, and context engineering:

I’m trying to externalize the hidden graph that both humans and AI rely on.

When we work with AI, we’re not just “giving it text.”
We’re rebuilding our own cognition in a machine-readable way.


Each Language Activates a Different Graph.
That’s Why L10N Exists

“Home” is not universal.

When you read “home” in English, your semantic graph activates warmth, belonging, safety.

But:

  • Spanish hogar → emotional, poetic warmth

  • Spanish casa → physical structure

  • German Heimat → history, identity, belonging with a cultural charge

  • Japanese uchi → inside-group belonging

  • Japanese ie → household / lineage

The differences are not in the translations.
They’re in the graph structure each language reliably activates in its speakers.

Localization is graph alignment, not word alignment.
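One way to make "graph alignment" concrete is to treat each term as the set of semantic nodes it activates and compare overlaps. This is a minimal sketch; the node labels are illustrative inventions, not drawn from any real ontology.

```python
# Each (locale, term) pair maps to the set of nodes it reliably activates.
# Node labels here are illustrative assumptions, not real ontology data.
HOME_GRAPHS = {
    ("en", "home"):   {"warmth", "belonging", "safety", "dwelling"},
    ("es", "hogar"):  {"warmth", "belonging", "family", "poetic"},
    ("es", "casa"):   {"dwelling", "structure", "property"},
    ("de", "Heimat"): {"belonging", "identity", "history", "cultural_charge"},
    ("ja", "uchi"):   {"belonging", "in_group", "inside"},
    ("ja", "ie"):     {"dwelling", "household", "lineage"},
}

def graph_overlap(a, b):
    """Jaccard similarity between the node sets two terms activate."""
    nodes_a, nodes_b = HOME_GRAPHS[a], HOME_GRAPHS[b]
    return len(nodes_a & nodes_b) / len(nodes_a | nodes_b)

# "hogar" shares the emotional nodes of English "home"; "casa" shares
# only the physical-structure node, so its overlap score is lower.
print(graph_overlap(("en", "home"), ("es", "hogar")))  # 1/3
print(graph_overlap(("en", "home"), ("es", "casa")))   # 1/6
```

Word alignment would call both *hogar* and *casa* valid translations of "home"; graph alignment shows they activate different shapes.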


I Build Multilingual Ontologies to Map Brains Across Cultures

When I work on a multilingual ontology or knowledge graph, I’m not “collecting terms.”

I’m mapping how entire populations conceptualize things:

  • which nodes exist

  • which nodes connect

  • which nodes are taboo

  • which nodes carry risk

  • which nodes evoke trust or fear

This is why good localization feels like magic:

We’re quietly rewriting the user’s semantic graph so the product “clicks” instantly.

And here’s the real mind-blow:

Knowledge graphs are the bridge between human thought and AI reasoning.

  • Humans → think in graphs

  • Models → reason in latent graphs

  • Ontologies → make those graphs explicit

  • Localization → realigns those graphs across cultures

When you create a multilingual ontology, you’re literally building that bridge.
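A multilingual ontology entry can carry exactly the properties listed above: which nodes exist, how they connect, and where they are taboo or risky. This sketch uses invented field names and values purely for illustration, not any real standard.

```python
from dataclasses import dataclass, field

# Hypothetical shape of one multilingual ontology entry.
# Field names and values are illustrative assumptions.
@dataclass
class ConceptNode:
    concept_id: str
    labels: dict                                  # locale -> preferred term
    connects_to: set                              # related concept_ids
    taboo_in: set = field(default_factory=set)    # locales where this node must not fire
    risk_in: set = field(default_factory=set)     # locales where it carries legal/social risk

home = ConceptNode(
    concept_id="HOME",
    labels={"en": "home", "es": "hogar", "de": "Zuhause", "ja": "uchi"},
    connects_to={"BELONGING", "SAFETY", "FAMILY"},
)

def label_for(node, locale, fallback="en"):
    """Pick the locale's preferred term, falling back to English."""
    return node.labels.get(locale, node.labels[fallback])

print(label_for(home, "de"))   # Zuhause
print(label_for(home, "fr"))   # home (fallback locale)
```

The same structure is machine-readable, which is what lets it serve both human reviewers and an LLM's context window.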


The Toolmaker’s Paradigm: Why Meaning Is Always Reconstructed

I recently shared this idea as a Note on Substack, and someone suggested I explore the Toolmaker’s Paradigm. That single comment reframed everything. Thanks, Justin; the nudge came via his post “those words are not enough.”

The metaphor:

A toolmaker on one island needs a tool built on another island.

He can’t send the tool.
He can only send instructions for how to recreate it.

But the other island:

  • has different materials

  • different tools

  • different habits

  • different assumptions

So the final tool is never identical.
It reflects the receiving island’s worldview.

→ That’s how human communication works.

Meaning is not transmitted — it is reconstructed.

You never “send” meaning.
You send signals, and the other person reconstructs meaning using:

  • their culture

  • their background knowledge

  • their assumptions

  • their linguistic habits

  • their cognitive graph

This implies:

  • ❌ Words don’t contain meaning.
    ✔ They’re instructions for reconstructing meaning.

  • ✔ Every listener/reader rebuilds a slightly different meaning depending on their “island.”

  • ✔ Miscommunication is not an error — it’s the default state.

Localization, then, becomes:

The art of aligning how different islands reconstruct the same tool.


Why This Matters for L10N / I18N

Localization is not “making things sound natural.”

It’s:

Teaching each island how to reconstruct the same tool with its own materials.

This is why:

  • literal translation fails

  • cultural assumptions matter more than words

  • product UX that works in the US misfires in Germany or Japan

  • one English string creates completely different user experiences in each locale

Because each locale’s “island” has:

  • different cognitive models

  • different emotional weights

  • different semantic graphs

  • different politeness structures

  • different expectations of “control”, “trust”, “safety”, “professionalism”

Localization is the discipline of aligning reconstructions.


Why This Matters for AI & Prompt Engineering

Here’s the part most people miss:

LLMs also follow the Toolmaker’s Paradigm.

They don’t “understand” your intended meaning.
They reconstruct an internal semantic graph based on:

  • your prompt

  • their training data

  • your context window

  • your constraints

  • their latent cultural fingerprints

Prompt engineering is literally:

Giving better instructions so the model reconstructs the tool you intended — not a weird alternate version of it.

My core obsession — multilingual knowledge graphs + context engineering — boils down to this:

Build the blueprint so every island (including the model) reconstructs the same tool.


1️⃣ Brand Voice: Meaning Reconstruction, Not Adjectives

Brand voice fails when the “tool” the brand intends is reconstructed differently by each audience.

Most companies think brand voice = tone + style guide.

In reality:

Brand voice = instructions for how meaning should be reconstructed in the reader’s mind.

Why voice breaks cross-market:

Because each market reconstructs:

  • “friendly” differently

  • “professional” differently

  • “reassuring” differently

  • “premium” differently

  • “trustworthy” differently

Example:

String: “Your data is safe with us.”

  • US reconstruction: “Nice tone, I trust this.”

  • DE reconstruction: “Based on what legal basis? Where is my control?”

  • JP reconstruction: “Too direct. It feels like something might have gone wrong.”

Same words. Three completely different tools reconstructed.

Application

A strong brand voice guide should include semantic blueprints, not just adjectives:

  • what concept nodes must fire

  • what tone nodes must not fire

  • emotional mappings per locale

  • cultural risk graphs

You’re defining how each market should reconstruct the intended meaning.

This is exactly why semantic graph work unlocks global brand consistency.
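A semantic blueprint like this can be expressed as data and checked automatically. The sketch below is a hedged illustration; the node names and per-locale rules are invented, not research findings.

```python
# Hypothetical semantic blueprint for one brand string.
# Node names and locale rules are illustrative assumptions.
BLUEPRINT = {
    "string": "Your data is safe with us.",
    "must_fire": {"trust", "competence"},
    "must_not_fire": {
        "US": set(),
        "DE": {"vagueness_about_legal_basis", "loss_of_control"},
        "JP": {"bluntness", "implied_incident"},
    },
}

def check_reconstruction(locale, fired_nodes):
    """Compare the nodes a draft actually fires against the blueprint."""
    missing = BLUEPRINT["must_fire"] - fired_nodes
    forbidden = fired_nodes & BLUEPRINT["must_not_fire"][locale]
    return {"missing": missing, "forbidden": forbidden}

# A too-literal German rendering might fire the wrong nodes:
report = check_reconstruction("DE", {"trust", "loss_of_control"})
print(report)  # {'missing': {'competence'}, 'forbidden': {'loss_of_control'}}
```

The point is not the code itself but the shift: "friendly" becomes a testable reconstruction spec instead of an adjective in a style guide.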


2️⃣ International UX Failures: The Graph Behind the Interface

Most UX failures in global products don’t happen in Figma.
They happen in meaning reconstruction.

Companies assume UI is “universal.”
That’s a myth.

Example: Auto-renew toggle

Intended meaning (US product team):

→ “Easy, seamless convenience.”

Reconstructed elsewhere:

  • DE: “Hidden contract. Where is my explicit control?”

  • JP: “This feels too direct. Where is the softening context?”

Example: Bold CTA (“Start Now”)

Intended meaning:

→ “Energetic, high conversion.”

Reconstructed:

  • FR: “Pushy. Not premium.”

  • ES: “Okay, but could use more warmth.”

  • NL: “Too shouty. Trust drops.”

The UI element is the same.
The reconstructed tool is different.

International UX should be treated as:

Market-specific meaning reconstruction curves, not a single layout spec.

A semantic graph can tell UX teams exactly:

  • which nodes the UI triggers

  • which nodes are missing

  • which nodes violate local expectations

  • which relationships break (e.g., control → convenience vs control → suspicion)

This is the foundation of truly global UX.
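Those checks can be run as a simple audit: for each locale, compare the edge a UI element actually triggers against the edge the product team intended. The locale expectations below are illustrative assumptions, not survey data.

```python
# Hypothetical per-locale edges for one UI element.
# All expectations here are illustrative assumptions.
LOCALE_EDGES = {
    "US": {"auto_renew": "convenience"},
    "DE": {"auto_renew": "hidden_contract"},
    "JP": {"auto_renew": "too_direct"},
}

def audit_ui_element(element, locales, intended):
    """List the locales where the element's edge diverges from the intended one."""
    return [
        loc for loc in locales
        if LOCALE_EDGES[loc].get(element) != intended
    ]

print(audit_ui_element("auto_renew", ["US", "DE", "JP"], intended="convenience"))
# ['DE', 'JP']
```

Each flagged locale is a market where the same toggle reconstructs a different tool, and where the design (or its copy) needs local repair.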


3️⃣ Prompt Engineering: The Toolmaker’s Paradigm for Models

The Toolmaker’s Paradigm is the secret reason why prompts fail — and how to fix them.

LLMs don’t know what you mean by:

  • “friendly”

  • “reassuring”

  • “on brand”

  • “for German users”

They reconstruct a meaning graph based on their own “island” (training data + context).

If your instructions are incomplete, vague, or culturally underspecified, the model builds the wrong tool.

Example prompt:

“Rewrite this in a friendly tone.”

The model now has to reconstruct:

  • your idea of “friendly”

  • your domain

  • your audience

  • your risk tolerance

  • your brand voice

  • your cultural context

You sent a toothpick.
The model had to build a cathedral.

Application: Bulletproof Prompts

To get reliable results, you design prompts as blueprints for meaning reconstruction, not casual requests.

A good prompt specifies:

  • who you are (your island)

  • who the audience is (their island)

  • the semantic graph required

  • nodes to preserve

  • nodes to avoid

  • tone reconstruction rules

  • context boundaries (don’t add/remove meaning)

  • examples of correct vs incorrect reconstructions

This is why a multilingual ontology with tone examples + semantic nodes radically improves LLM reliability:

You’re giving the model what humans never explicitly share with each other:
an explicit blueprint for reconstructing the intended meaning.
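The checklist above can be assembled mechanically. This is one minimal sketch of turning reconstruction rules into an explicit prompt; the field names are my own assumptions, not a standard format.

```python
# Sketch: assemble a "blueprint" prompt from explicit reconstruction
# rules. Field names and example values are illustrative assumptions.
def build_blueprint_prompt(task, sender, audience, preserve, avoid, tone_rules):
    """Turn reconstruction rules into an explicit, self-contained prompt."""
    return "\n".join([
        f"Task: {task}",
        f"Sender context (my island): {sender}",
        f"Audience context (their island): {audience}",
        "Nodes to preserve: " + ", ".join(sorted(preserve)),
        "Nodes to avoid: " + ", ".join(sorted(avoid)),
        "Tone rules: " + "; ".join(tone_rules),
        "Do not add or remove meaning beyond these rules.",
    ])

prompt = build_blueprint_prompt(
    task="Rewrite this notification in a friendly tone.",
    sender="SaaS billing team, formal brand voice",
    audience="German enterprise admins, privacy-sensitive",
    preserve={"user_control", "legal_basis"},
    avoid={"hype", "forced_cheerfulness"},
    tone_rules=["friendly = calm and respectful, not casual"],
)
print(prompt)
```

Compare the output to “Rewrite this in a friendly tone”: same task, but now the model receives the blueprint instead of a toothpick.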


Toolmaker Prompt Template

(Meaning reconstruction, on purpose)

This is one of the strongest prompt patterns you can use today for L10N, AI writing, UX microcopy, and global brand ops.
