The Micro-Outsourcing Economy
There's a question that keeps coming up in every conversation about AI agents: should they build their own capabilities, or buy them from external services?
Brian Flynn recently wrote a piece called "How to Sell to Agents" that caused quite a stir. His argument: as AI agents collapse transaction costs, the default shifts from "build in-house" to "buy on the open market." He describes a world of machine-readable service catalogs, per-request pricing, and HTTP 402 payments. Agents discover, evaluate, and purchase services in milliseconds.
He's right about the direction. But we think the picture is incomplete. The interesting part isn't that agents will buy instead of build. It's that the thing they buy will be radically smaller than anything we've seen before.
And the reason isn't just that transaction costs are falling. It's that we can now outsource the cognitive assessment of the buy-vs-build decision itself. Every purchase has an evaluation cost: is this worth buying, or should I do it myself? When a human makes that call, the evaluation overhead means only large purchases justify the effort. When an agent makes that call in milliseconds for near-zero cost, suddenly it makes sense to buy something worth a tenth of a cent. The lower the cost of assessment, the smaller the thing you can profitably buy.
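To make that concrete, here's a toy sketch of the evaluation math. `should_buy` and every number in it are invented for illustration, not a real agent API:

```python
# Hypothetical sketch of the buy-vs-build call made per task.
# All names and numbers are illustrative.

def should_buy(cost_to_build: float,
               service_price: float,
               evaluation_cost: float) -> bool:
    """Buy when the service price plus the cost of evaluating the
    purchase undercuts doing the work in-house."""
    return service_price + evaluation_cost < cost_to_build

# A human evaluator: assessment overhead dwarfs tiny purchases.
human_eval = 5.00            # dollars of attention per decision
print(should_buy(0.004, 0.001, human_eval))   # False: not worth assessing

# An agent evaluator: assessment is near-free, so micro-purchases clear.
agent_eval = 0.00001
print(should_buy(0.004, 0.001, agent_eval))   # True: buy the $0.001 answer
```

The only thing that changes between the two calls is the cost of assessment, and that alone flips the decision.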
The Packet Is Shrinking
Software commerce has been on a long arc toward smaller units. First you bought a box with a CD in it. Then a monthly subscription. Then API calls, metered by the thousand. Each step made the unit smaller, the commitment lower, and the market larger.
The next step is obvious: you buy a single answer.
Not access to a tool. Not a monthly seat. Not even a batch of API calls. One question, one answer, one payment. The entire transaction completes in under a second.
You're already doing this, by the way. Every time you send a prompt to an LLM, you're outsourcing cognition at fractions of a cent. You describe what you want, a remote system does the thinking, and you get an answer back. That's micro-outsourcing. The shift ahead is making it good: specialized, validated, and guaranteed instead of hit-or-miss.
SkillRouter, Not ModelRouter
Today, using an LLM looks like this:
INPUT: "prompt"
TO: OpenRouter
MODEL: GPT-5.2
OUTPUT: streamed answer (quality varies)
You pick the model. You write the prompt. You hope for the best. If the output is bad, you iterate: rephrase, try a different model, add examples, retry. That iteration has a cost. Every failed attempt burns tokens, burns time, burns patience.
Now imagine this instead:
INPUT: "prompt" + quality expectations + budget
TO: SkillRouter
OUTPUT: formatted, validated, high-quality answer
You don't pick the model. You don't write the system prompt. You don't iterate. You describe what you want, set your quality bar and your budget, and a specialized skill handles the rest: model selection, prompt engineering, output validation, formatting. One shot, one answer.
The routing happens at the skill level, not the model level. And that changes everything about the economics.
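A rough sketch of what that boundary might look like in code. `SkillRequest`, `route`, and the skill names are hypothetical, not a real SkillRouter API:

```python
from dataclasses import dataclass

# Hypothetical sketch of the interface described above.

@dataclass
class SkillRequest:
    prompt: str
    quality: dict        # e.g. {"min_score": 0.9, "format": "json"}
    budget_usd: float    # hard cap for the whole request

def route(req: SkillRequest) -> str:
    """A stand-in for a SkillRouter: the caller never names a model.
    Model choice, system prompt, validation, and formatting all live
    behind this boundary."""
    # Real routing would select a specialist skill here; we simulate it.
    skill = "translation-v3" if "translate" in req.prompt.lower() else "general-qa"
    return f"[{skill}] validated answer within ${req.budget_usd:.4f}"

req = SkillRequest("Translate this contract to Japanese",
                   quality={"min_score": 0.9}, budget_usd=0.002)
print(route(req))  # the agent sees one answer, never a model name
```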
The Iteration Tax
Nobody talks about the real cost of doing things yourself: iteration.
Say you need a document translated. You could send it to GPT-5 with "translate this to Japanese." Maybe the output is great. Maybe it's awkward in three places and you need to go back and forth. Maybe you try Claude instead, compare, pick the better one. Each round costs tokens. Each comparison costs attention.
Or you could send it to a translation skill that has been tuned for exactly this: the right model for the language pair, a system prompt refined over thousands of translations, output validation that catches common errors, formatting that matches your spec. One call. Done.
The specialist's single answer costs fewer total tokens than your three attempts. Even if the per-call price is higher, the total cost is lower because you don't pay the iteration tax.
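The arithmetic, with made-up but plausible numbers:

```python
# Illustrative iteration-tax arithmetic; every number here is invented.

tokens_per_attempt = 3_000          # your prompt + the model's output
diy_price_per_1k = 0.002            # $/1k tokens, generalist model
specialist_price = 0.010            # flat per-call price, higher per call

# Do-it-yourself: three rounds of rephrase-and-retry.
diy_cost = 3 * tokens_per_attempt / 1_000 * diy_price_per_1k
# Specialist: one tuned call, validated output, no retries.
specialist_cost = specialist_price

print(f"DIY (3 attempts):    ${diy_cost:.4f}")         # $0.0180
print(f"Specialist (1 call): ${specialist_cost:.4f}")  # $0.0100
```

The specialist charges five times more per call and still comes out cheaper, because you pay it once.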
This is the same logic that makes hiring a plumber cheaper than watching YouTube tutorials and buying tools you'll use once. Except now the "plumber" responds in 200 milliseconds and charges a tenth of a cent.
Skills Are the Product
Raw model access is a commodity. Every frontier model can reason, code, and translate at roughly the same level. The gap between them narrows every quarter.
What isn't a commodity is the skill layer: the domain knowledge, the prompt engineering, the output validation, the formatting guarantees. A skill is everything between "raw model" and "perfect answer." That's where the value lives.
We've been building skills for our own agents. A Superfluid integration skill that knows the protocol's architecture, ABIs, and common patterns. A browser automation skill. A security audit skill. Each one packages domain expertise into a reusable instruction set that turns a generic model into a specialist.
These skills are local right now. They run inside our agents as instruction files. But the step from "local skill" to "hosted micro-service" is short. Same instructions, same quality guarantees, same output format. Just accessible over HTTP instead of a filesystem.
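A minimal sketch of that step, using Python's standard `http.server`. `run_skill`, the instruction string, and the output contract are illustrative, not our actual skill files:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical sketch: a local instruction-file skill lifted onto HTTP.

SKILL_INSTRUCTIONS = "You are a Superfluid integration specialist..."  # same text a local agent would load

def run_skill(prompt: str) -> dict:
    """Same instructions, same output contract, whether called from the
    filesystem or over the network."""
    # A real skill would call a model with SKILL_INSTRUCTIONS here.
    return {"answer": f"(specialist reply to: {prompt})", "format": "json"}

class SkillHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        result = run_skill(json.loads(body)["prompt"])
        payload = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

# To expose it: HTTPServer(("localhost", 8402), SkillHandler).serve_forever()
```

Nothing about the skill itself changes; only the transport does.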
Anything Definable Gets Outsourced
This is the natural endpoint. If you can specify the input format, define what "good" looks like, and validate the output, someone will do it better and cheaper as a service. Always.
Translation. Code review. Data extraction. Image generation. Document summarization. Legal clause analysis. Each of these is a skill with clear inputs, measurable quality, and room for specialization. None of them benefit from being done by a generalist.
The services that win won't be the ones with the fanciest models. They'll be the ones with the best skills: the most refined prompts, the tightest validation, the most reliable output. Model selection becomes an implementation detail that the service handles internally. You never see it. You shouldn't have to.
The analogy isn't SaaS. It's the gig economy, but for cognition. Millions of hyper-specialized micro-services, each doing one thing exceptionally well, competing on quality and price for every individual request. No subscriptions. No lock-in. Just: can you give me a better answer, faster, for less?
What This Means for Builders
If you're building an agent, stop thinking about which model to use. Start thinking about which skills to buy. Your agent's job isn't to be smart. Its job is to be a good delegator: decompose problems into well-defined sub-tasks, route each one to the best available skill, validate the outputs, assemble the result.
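One way that delegator loop might look as code. `decompose`, `validate`, and the skill table are toy stand-ins for illustration:

```python
# Sketch of the delegate loop: decompose -> route -> validate -> assemble.
# All names here are hypothetical.

def delegate(task: str, skills: dict) -> list:
    results = []
    for sub in decompose(task):
        # Route to the cheapest skill of the right kind.
        skill = min(skills[sub["kind"]], key=lambda s: s["price"])
        output = skill["call"](sub["spec"])
        if not validate(output):
            continue  # a real agent would retry or re-route here
        results.append(output)
    return results

def decompose(task):  # toy decomposition: one translation subtask
    return [{"kind": "translate", "spec": task}]

def validate(output):  # toy validation: non-empty output
    return bool(output)

skills = {"translate": [{"price": 0.001, "call": lambda s: f"JA: {s}"}]}
print(delegate("translate the invoice", skills))  # ['JA: translate the invoice']
```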
If you're building a service, stop thinking about API endpoints. Start thinking about guarantees. What quality can you promise? What format will you deliver? How fast? How much? Package that into a machine-readable contract and let agents discover you at runtime.
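A sketch of such a contract as plain data, plus the screening an agent might run against it at discovery time. The field names are invented, not a published standard:

```python
# Hypothetical machine-readable service contract.

CONTRACT = {
    "service": "contract-translation",
    "input":   {"format": "text/plain", "max_bytes": 65536},
    "output":  {"format": "application/json", "schema": "translation-v1"},
    "quality": {"min_score": 0.95, "refund_below": True},
    "latency_ms_p99": 400,
    "price_usd_per_request": 0.001,
    "payment": "http-402",   # pay-per-request, as in Flynn's piece
}

def acceptable(contract: dict, budget: float, deadline_ms: int) -> bool:
    """How an agent might screen a discovered service at runtime."""
    return (contract["price_usd_per_request"] <= budget
            and contract["latency_ms_p99"] <= deadline_ms)

print(acceptable(CONTRACT, budget=0.002, deadline_ms=500))  # True
```

The guarantees, not the endpoint, are the product: an agent decides to buy from you without a human ever reading your docs.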
The web was built for humans to browse. The API economy was built for developers to integrate. The next layer will be built for agents to outsource. And the unit of outsourcing will be exactly one perfect answer.