The Membrane Problem
Your AI works. Your org doesn’t. These are not the same problem.
There’s a specific kind of frustration that doesn’t have a name yet. It’s the frustration of watching something excellent, something that genuinely works, get destroyed the moment it has to touch the rest of your company.
You’ve felt it. You run a clean pipeline on your machine. Terminology feeds in, context is preserved, output is consistent, reviewers are actually reviewing instead of firefighting. It works. You know it works because you can see it working.
Then it has to go somewhere.
It hits the TMS. It hits the shared drive that three departments use with three different naming conventions and nobody’s folder structure agrees with anyone else’s. It hits the colleague who exports everything to Excel because that’s how they’ve always done it, and the Excel file breaks the next step, and now someone is rebuilding a connector at 4pm on a Thursday.
The AI didn’t fail. The org failed. But from the outside, it looks like the same thing.
The membrane
Every organisation has one. It’s the boundary between where good work happens and where work goes to become a ticket.
Legacy systems aren’t neutral infrastructure. They’re not just old, they’re opinionated. They encoded the assumptions of the people who bought them, the processes that existed at the time, the idea of what a workflow looked like before anyone was seriously thinking about language as a data problem. They’ve calcified around those assumptions. And they’ve had years to make themselves load-bearing.
The result is that your org isn’t just slow to adopt new tools. It’s structurally immune to them. Not maliciously — there’s no memo that says “reject AI inputs.” It’s more like an immune response. Something foreign enters, the existing system doesn’t know what to do with it, and the path of least resistance is rejection.
Your CRM isn’t a tool. It’s a fossil with an API.
And your AI pipeline, however well-designed, looks like a foreign object to everything downstream of it.
The connector trap
So you do what any reasonable engineer does. You build a connector.
The connector will handle the translation between your clean, structured, ontology-aware pipeline and the legacy system that expects a flat file with columns named whatever someone decided in 2017. It’ll map fields, handle encoding issues, normalise the terminology output into something the old tool can ingest without choking.
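A connector like this is usually a few dozen lines of field mapping. A minimal sketch, with every field and column name invented for illustration:

```python
import csv
import io

# Hypothetical mapping from the pipeline's structured fields to the
# legacy tool's flat-file columns (all names invented for illustration).
FIELD_MAP = {
    "source_text": "SrcTxt",
    "target_text": "TrgTxt",
    "term_id": "TermRef",
    "market": "Locale2017",  # the column named whatever someone decided in 2017
}

def to_legacy_row(record: dict) -> dict:
    """Map one structured pipeline record onto the legacy column names."""
    return {legacy: record.get(ours, "") for ours, legacy in FIELD_MAP.items()}

def export_flat_file(records: list[dict]) -> str:
    """Serialise records as the flat CSV the legacy system expects."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(FIELD_MAP.values()))
    writer.writeheader()
    for record in records:
        writer.writerow(to_legacy_row(record))
    return buf.getvalue()
```

The fragility is visible in the sketch itself: the day a patch renames `Locale2017`, every export breaks, and someone has to own the fix.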
It takes three weeks. Which is fine — three weeks of engineering time to unlock ongoing efficiency, that’s a reasonable trade.
Except the API changes. Or the legacy system gets a patch that breaks the field mapping. Or the connector works perfectly but the person on the receiving end doesn’t know it exists and keeps doing the manual export anyway because nobody told them.
And now you have a team that is, functionally, a middleware company. That wasn’t in the pitch deck.
The bitter irony is arithmetical. A manual export would have taken twenty minutes a week. The connector took three weeks to build, breaks every couple of months, and someone owns it now — which means someone’s time is allocated to a system that exists purely to compensate for the fact that two tools don’t speak to each other.
You automated the work and hired a team to maintain the automation.
This is not transformation. This is elaborate treading of water.
Contact kills it
The failure isn’t gradual. It doesn’t degrade linearly. You don’t get 70% of the benefit with 30% of the friction. You hit a threshold and the chain breaks.
One missing field. One unexpected file format. One handoff to a reviewer who pastes the output into a different tool because that’s what their process requires. The chain breaks, and now you’re not running a pipeline — you’re running a pipeline up until the point where someone intervenes, and then you’re running whatever they do instead.
This is what I mean when I say the transformation dissolves on contact. It works perfectly right up until it has to work with anyone else.
The localization industry should understand this better than most. We spend enormous energy on terminology consistency, on translation memory leverage, on the integrity of the linguistic asset chain, and then we hand the output to a review process that lives in a Word document with tracked changes. The chain that was intact for ten steps breaks on step eleven, and step eleven is a human with a different tool and no visibility into what came before.
AI doesn’t make this better automatically. If anything, it makes the break more visible.
Nobody blocked the transformation
The old-minded employee problem is the most politically uncomfortable part of this, so let’s be precise about it.
Nobody is blocking your AI transformation. There’s no obstructionist in a meeting room deciding to keep things broken. What’s actually happening is quieter and harder to address: people are continuing to work the way they’ve always worked, and the integration breaks around their habits.
This isn’t laziness. It’s identity. For many people in this industry, competence is tied to knowing the tools — knowing how to navigate the TMS, knowing where things live in the legacy system, knowing which workarounds exist for which edge cases. That knowledge took years to accumulate. It has real value. And AI doesn’t just change their workflow. It renegotiates the value of what they know.
So they don’t obstruct. They just keep using the tools they know. They export to Excel. They paste into the old template. They do the step their way, which is not your pipeline’s way, and the chain breaks.
Nobody blocked the transformation. They just kept using Outlook.
The solution isn’t to get frustrated. The solution is to recognise that transformation that depends on everyone opting in voluntarily, with no structural support, is not a transformation strategy. It’s a wish.
AI transformation isn’t a technology problem. It’s a topology problem.
The shape of your organisation — its tools, its processes, its assumed handoffs, the implicit agreements about how information moves — determines what can flow through it. Most organisations are not shaped to carry what AI produces. The data is too clean. The structure is too intentional. The handoffs require too much shared context.
You can be excellent in isolation. You can build something that genuinely works, that preserves meaning across languages, that maintains consistency at scale, that does in seconds what used to take days. And none of that compounds if the org around you isn't shaped to absorb it.
The membrane isn’t going anywhere on its own. Legacy systems don’t deprecate themselves. Processes don’t change because a better option exists. People don’t renegotiate their competence because you ran a successful pipeline on your laptop.
The work of AI transformation — the real work, the unsexy work — is reshaping the topology.
What reshaping actually looks like
It doesn’t start with tools. It starts with a question: what does every system in this chain actually need to agree on?
Not which TMS everyone uses. Not which AI model runs translation. Not which file format review happens in. Those are downstream choices. The upstream question is: what is the shared definition of the work? What is a term? What is a source string? What is an approved variant, and for which market, and as of when?
If that’s not settled — explicitly, structurally, in a form every tool in the chain can reference — then every tool will invent its own answer. And those answers will conflict. And you’ll build connectors to resolve the conflicts. And the connectors will break.
This is the semantic coordination problem, and it’s the actual reason AI transformation stalls at the org boundary. It’s not that the tools are incompatible. It’s that there’s no shared ground truth for the tools to agree on.
Establish the ontology before you build the pipeline.
In localization terms: your terminology, your concept hierarchy, your market-specific variant rules, your tone and register decisions — these need to exist as structured, versioned, machine-readable knowledge before you ask any AI to act on them. Not in a style guide PDF. Not in a TM that encodes decisions implicitly through past translations. Explicitly. Queryable. Owned.
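One way to picture "explicit, queryable, owned" is a versioned term entry. This is a minimal sketch, not a schema recommendation; all field names are invented:

```python
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)
class TermVariant:
    """An approved variant of a concept, scoped to a market, as of a date."""
    market: str          # e.g. "de-DE"
    text: str            # the approved surface form
    approved_on: date    # "as of when" is part of the record, not tribal memory
    status: str = "approved"   # or "deprecated", "proposed"

@dataclass
class TermEntry:
    concept_id: str                     # stable identifier, not a string match
    definition: str                     # what the concept is, tool-independently
    variants: list[TermVariant] = field(default_factory=list)

    def variant_for(self, market: str, as_of: date) -> TermVariant | None:
        """Query: the approved variant for a market at a point in time."""
        candidates = [
            v for v in self.variants
            if v.market == market and v.status == "approved" and v.approved_on <= as_of
        ]
        return max(candidates, key=lambda v: v.approved_on, default=None)
```

Every question the chain currently argues about — what is a term, which variant, for which market, as of when — has a typed, queryable answer here instead of a folder convention.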
When that exists, two things happen. First, your AI pipeline has something real to anchor to — not a prompt that approximates your standards, but your actual standards, in a form the model can use. Second, every other tool in the chain — the TMS (if you are still forced to use one), the review workflow, the downstream CMS — has something to integrate against that doesn’t change arbitrarily. The connector isn’t bridging two opinions anymore. It’s bridging two systems that both point at the same source of truth.
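"In a form the model can use" can be as plain as generating the model's terminology instructions from the structured entries, rather than hand-writing a prompt that approximates them. A sketch, with invented names and a simplified entry shape:

```python
# Render the approved terms for one market as a model instruction block.
# The entry shape here is simplified and hypothetical.
def terminology_block(entries: list[dict], market: str) -> str:
    lines = [f"Use these approved terms for {market} (do not deviate):"]
    for e in entries:
        variant = e["variants"].get(market)
        if variant:
            lines.append(f'- "{e["source"]}" -> "{variant}" ({e["concept_id"]})')
    return "\n".join(lines)

entries = [
    {"concept_id": "C-001", "source": "term", "variants": {"de-DE": "Fachbegriff"}},
]
# Generated from the ontology, so when a term decision changes, every
# downstream prompt changes with it -- no drift between standard and model.
prompt_context = terminology_block(entries, "de-DE")
```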
Get upstream of the legacy system, not inside it.
The mistake is trying to transform from within the existing stack — integrating AI into the TMS, adding a plugin to the old tool, building connectors that keep the legacy system at the centre. That keeps the fossil load-bearing. The topology doesn’t change; you’ve just added complexity around it.
The alternative is to move the centre of gravity. Put the semantic layer — the ontology, the shared definitions — outside and upstream of any single tool. Make the TMS a consumer of that layer, not the owner of it. Now when the TMS changes, or gets replaced, or a team uses a different tool entirely, the knowledge doesn’t live in the tool. It lives in the layer. The chain survives tool changes because the chain isn’t made of tools.
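The structural point can be shown in a few lines: two consumers with different output needs, one lookup neither of them owns. A sketch only — in practice the layer would be a service or a versioned repository, and every name here is illustrative:

```python
import json

# The semantic layer, sketched as an in-memory store for illustration.
SEMANTIC_LAYER = {
    "C-001": {"de-DE": "Fachbegriff", "fr-FR": "terme"},
}

def lookup(concept_id: str, market: str) -> str:
    """Both the TMS plugin and the review tool call this; neither owns it."""
    return SEMANTIC_LAYER[concept_id][market]

def tms_payload(concept_id: str, market: str) -> str:
    """One consumer: the TMS ingests structured JSON."""
    return json.dumps({"term": lookup(concept_id, market), "ref": concept_id})

def review_note(concept_id: str, market: str) -> str:
    """Another consumer: the review workflow wants readable text."""
    return f"Approved term for {market}: {lookup(concept_id, market)} ({concept_id})"
```

Swap the TMS for another tool and `tms_payload` is rewritten; `lookup` and the store are untouched. The knowledge survives the tool change because it never lived in the tool.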
Make the handoff legible, not just functional.
The colleague who keeps exporting to Excel isn’t doing it to be difficult. They’re doing it because Excel is legible to them — they can see what’s in it, verify it, feel confident about what they’re handing off. Your pipeline output probably isn’t legible in the same way. It’s correct, but it’s opaque.
Part of reshaping the topology is designing handoffs that carry their own context. Not a file with the output, but a file with the output and the provenance — what terminology was used, what rules applied, what changed since the last version and why. When the reviewer can see that, they don’t need to paste it into Excel to feel in control. The information they need is already there. The old habit dissolves not because you told them to change, but because the new thing is more useful than the old thing.
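What "output plus provenance" might look like as a handoff payload — a hypothetical shape, with all field names invented:

```python
import json
from datetime import datetime, timezone

def build_handoff(output: dict, terminology_used: list[str],
                  rules_applied: list[str], changes: list[dict]) -> str:
    """Wrap pipeline output with the context a reviewer needs to trust it."""
    payload = {
        "output": output,
        "provenance": {
            "terminology_used": terminology_used,   # which approved terms applied
            "rules_applied": rules_applied,         # e.g. register/variant rules
            "changes_since_last_version": changes,  # what changed, and why
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(payload, indent=2, ensure_ascii=False)
```

The reviewer opens one artefact and sees not just the translation but the decisions behind it — which is the legibility the Excel export was quietly providing.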
Treat people’s existing knowledge as an asset, not an obstacle.
The linguist who knows every edge case in the German market, the PM who knows why that client terminology decision was made in 2021, the reviewer who catches register problems nobody else flags — that knowledge is exactly what your ontology needs. The topology shift gives you a way to capture it explicitly rather than leaving it encoded in people’s heads and habits.
When you approach it that way, the old-minded employee stops being a blocker and becomes a source. You’re not asking them to abandon what they know. You’re asking them to put it somewhere the whole chain can use it. That’s a very different conversation.
Your AI works. That part is done.
The harder part isn’t building better models or writing better prompts or finding the right integration. The harder part is becoming the kind of organisation where good work can actually travel, where what you build on your machine doesn’t dissolve the moment it has to cross a boundary.
That means establishing shared ground truth before you scale the pipeline. It means getting the semantic layer upstream of the tools, not inside them. It means designing handoffs that carry context, not just output. And it means treating the people who know things as the starting point for the ontology, not the last obstacle to adoption.
The membrane doesn’t come down all at once. But it comes down from the inside, starting with the layer that everything else has to agree on.
That’s where Black Ice sits: as the layer that makes the chain possible.