Your AI lock-in isn't commercial, it's structural
You cannot switch AI models without triggering a full risk review. This is being called vendor lock-in. It is not.
Organisations believe they are captive to providers. More often, they are captive to their own inability to isolate execution scope, reproduce behaviour under explicit contracts, or generate evidence inside the flow of work.
When a model cannot be safely substituted without triggering a manual risk assessment, the dependency is not commercial. It is structural.
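The substitution test the text describes can be made concrete. A minimal sketch, assuming a hypothetical contract-test gate (all names here are illustrative, not from any real framework): a candidate model is safe to substitute only when it passes an explicit, machine-checkable behaviour contract, so the swap needs no manual risk assessment.

```python
# Hypothetical sketch of an explicit behaviour contract for model substitution.
# ContractCase, safe_to_substitute, and stub_model are illustrative names.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class ContractCase:
    prompt: str
    check: Callable[[str], bool]   # explicit, machine-checkable expectation
    description: str

CONTRACT = [
    ContractCase("Return the string OK",
                 lambda out: out.strip() == "OK",
                 "deterministic echo behaviour"),
    ContractCase("Refuse requests outside scope X",
                 lambda out: "cannot" in out.lower(),
                 "scope boundary is enforced"),
]

def safe_to_substitute(model: Callable[[str], str]) -> list[str]:
    """Run the contract; return descriptions of the cases the model fails."""
    return [c.description for c in CONTRACT if not c.check(model(c.prompt))]

# A stub standing in for a real provider call.
def stub_model(prompt: str) -> str:
    if "Return the string OK" in prompt:
        return "OK"
    return "I cannot help with that request."

failures = safe_to_substitute(stub_model)
# An empty failure list means the candidate reproduces contracted behaviour;
# anything else routes the substitution to manual review.
```

When behaviour is contracted like this, switching providers is a test run, not a risk review; when it is not, every swap is a manual judgment call.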
This is what Token Jail actually names: the moment machine-speed execution enters a delivery system where authority was never made explicit.
Authority here means more than who decides. It means how scope is bounded, how promotion is earned, and how evidence is produced. Three distinct structural concerns, not one.
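The three concerns above can be sketched as data rather than prose. This is an illustrative sketch only, under the assumption that authority is encoded explicitly; every class and field name here is hypothetical.

```python
# Hypothetical encoding of explicit authority as three separate structures:
# bounded scope, earned promotion, and evidence produced in the flow of work.
from dataclasses import dataclass, field

@dataclass
class ExecutionScope:
    allowed_actions: set[str]               # scope is bounded, not inherited
    def permits(self, action: str) -> bool:
        return action in self.allowed_actions

@dataclass
class PromotionGate:
    required_checks: set[str]               # promotion is earned by passing checks
    def earned(self, passed: set[str]) -> bool:
        return self.required_checks <= passed

@dataclass
class EvidenceLog:
    records: list[str] = field(default_factory=list)
    def record(self, event: str) -> None:   # evidence generated inline, not reconstructed
        self.records.append(event)

scope = ExecutionScope({"read", "draft"})
gate = PromotionGate({"tests", "review"})
log = EvidenceLog()

if scope.permits("draft"):
    log.record("draft: permitted by scope")
promoted = gate.earned({"tests", "review"})
```

The point of the separation is that each concern fails independently: an action can be in scope yet unpromoted, or promoted yet unevidenced, which is why the text treats them as three structures rather than one.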
At human speed, implicit authority was survivable. Execution rights could be inherited. Promotion could rely on interpretation. Evidence could be reconstructed after the fact.
Machine speed removes those tolerances.
What was ambient becomes binding.
No routing layer changes this. No gateway. No abstraction at the API boundary. The dependency does not live at the interface; it lives in the operating model underneath.