No Free Lunches
How do you prevent a big lab from eating your lunch?
I’ve been thinking about this question a lot recently and I believe the answer is fairly simple. It breaks down into two parts:
- Don’t pick a problem that can be prompted
- There will always be cheaper intelligence
The State of Play
There are a few interesting phenomena happening right now:
- Huge bearish sentiment in the market. Citrini Research published a thought experiment imagining an AI driven economic collapse. A speculative essay on Substack tanked IBM 13%, its worst day since 2000.
- We are flirting with the threshold of autonomous software engineering. The AI coding tools market went from 550 million to 4 billion dollars in a single year. 50% of developers use AI coding tools daily. Cursor scaled from 50M to 500M ARR in six months.
- Businesses and startups are evaporating as Claude keeps improving. Last week, a founder posted on X: “Claude just killed our startup.” Their AI ad management tool, Ryze AI Adgent, was made obsolete when Claude added the same capability as a built-in skill.
AI Coding Tools Market Size
Source: Menlo Ventures, State of Generative AI in the Enterprise 2025
The anxiety is real. Models are getting better, fast. But the conclusion people draw from this, that big labs will inevitably dominate every vertical, does not follow. The question itself contains a hidden assumption: that intelligence scales to everything for free, and that a sufficiently smart model will just solve your problem as a byproduct of getting smarter. This assumption is wrong.
Flipping the Burden
“A big lab will eat your lunch” is a positive claim. The burden of proof is on the person making it. Which lab? Which vertical? With what investment? Against what competition?
You do not prove a negative. The person making the claim has to show their work. A lab has to identify a vertical, invest billions in training and tooling for that specific domain, compete against incumbents who already have the data, the platform, and the operational expertise, and then actually win the market. That is the price of a lunch.
The history of technology does not support the dominance narrative. Microsoft did not eat cloud. Google did not eat social. Facebook did not eat search. Even within AI, OpenAI has not monopolized enterprise. The market is always more competitive than the narrative suggests.
Until someone can answer those questions with specifics, “a big lab will eat your lunch” is not an argument. It is an anxiety.
The Promptable Threshold
Some lunches deserve to be eaten.
When a founder posts “Claude just killed our startup” and the startup was an AI ad management tool, that is not a market failure. The idea was bad in the first place. Tacking AI onto it does not make it less bad. The knowledge required to manage ad campaigns is already known. You are tying together data sources (analytics platforms, ad networks, budget spreadsheets) and organizing them for someone else. That is an email job.
Email jobs are problems where all the knowledge required is already encoded in training data, the solution is orchestrating existing data sources, the output is easily verifiable, and no specialized domain expertise or real world interaction is required. Calendar management, SaaS data piping, basic report generation, ad campaign optimization, SAST (static application security testing, which is literally pattern matching and is already being absorbed into AI powered IDEs). These are all promptable problems. As models get smarter, they get solved as a byproduct of general intelligence. No deliberate investment needed. The lab does not even have to try. It just falls out of the training.
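To make "promptable" concrete, here is a minimal sketch of what a tool like that reduces to. Every function, endpoint, and field name here is a hypothetical placeholder: the point is that the whole product is a couple of data pulls, one prompt, and a trivially checkable output, with no proprietary knowledge anywhere in the loop.

```python
# Hypothetical sketch of an "email job": an ad budget "agent" is a few
# data pulls, one prompt, and a trivially verifiable output.

import json

def fetch_campaign_stats() -> list[dict]:
    # Stand-in for an ad network or analytics API call.
    return [
        {"campaign": "brand_search", "spend": 1200.0, "conversions": 96},
        {"campaign": "retargeting", "spend": 800.0, "conversions": 31},
    ]

def ask_model(prompt: str) -> str:
    # Stand-in for any LLM API; the model already knows how to reason
    # about cost per conversion from its training data.
    raise NotImplementedError("plug in a model client here")

def reallocate_budget(total_budget: float) -> dict[str, float]:
    stats = fetch_campaign_stats()
    prompt = (
        "Propose a budget split that minimizes cost per conversion. "
        f"Total budget: {total_budget}. Stats: {json.dumps(stats)}. "
        "Respond only with JSON mapping campaign name to budget."
    )
    proposal = json.loads(ask_model(prompt))
    # Verification is trivial: the split just has to add up.
    assert abs(sum(proposal.values()) - total_budget) < 0.01
    return proposal
```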
If your startup is solving a promptable problem, you do not have a lab problem. You have a business model problem.
The Lunch Isn’t Free
For non-promptable problems, eating lunch requires deliberate, massive investment. Labs have to choose which lunches to pay for.
Case Study: Coding
Coding is the single domain where AI labs have invested the most deliberately. There is a widely accepted view in the AI research community: close the feedback loop on AI research itself. The more we automate software engineering, the faster we can build smarter models, and the smarter the models, the more of that engineering they can automate. This creates a flywheel that accelerates everything. Labs have poured billions into this specific vertical because of that flywheel effect.
AI Coding Tools Revenue Growth
Source: Menlo Ventures. Cursor scaled 10x from $50M to $500M ARR between Q4 2024 and Q2 2025. Replit scaled 10x from $10M to $100M ARR in the same period.
And what did we get? Models that are really good at coding. Rivaling software engineers in raw ability. 50% of developers using AI daily. 30% of Python on GitHub written by AI. The coding tools market went from 550 million to 4 billion in a single year. Foundation model companies alone raised 80 billion in 2025.
But this huge investment improved models at coding, and coding specifically. We did not see proportional gains in medical diagnosis, or drone operations, or war gaming, or offensive cybersecurity. The investment in coding produced coding ability. It did not generalize to every other domain for free.
There is also a reason coding lends itself so well to LLM automation. It is text based, it has verifiable outputs (does the code compile, do the tests pass), and there is an enormous corpus of training data to pull from. Not every domain has these properties. Most do not.
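A rough sketch of why those properties matter so much, assuming a hypothetical `generate_patch` model call: the verifier is just the existing test suite, so every attempt gets a cheap, unambiguous pass/fail signal that can be fed straight back into the next attempt.

```python
# Sketch of the generate-and-verify loop that makes coding such a good
# automation target. The model call is a placeholder; the reward signal
# (compile, run tests) comes for free from the environment.

import subprocess

def generate_patch(task: str, feedback: str) -> str:
    # Stand-in for a model call that returns candidate code.
    raise NotImplementedError("plug in a model client here")

def verify(code: str) -> tuple[bool, str]:
    # Write the candidate out and run the test suite: pass/fail plus logs.
    with open("candidate.py", "w") as f:
        f.write(code)
    result = subprocess.run(
        ["python", "-m", "pytest", "-q"], capture_output=True, text=True
    )
    return result.returncode == 0, result.stdout + result.stderr

def solve(task: str, max_attempts: int = 5) -> str | None:
    feedback = ""
    for _ in range(max_attempts):
        candidate = generate_patch(task, feedback)
        passed, feedback = verify(candidate)  # failure output feeds the next attempt
        if passed:
            return candidate
    return None
```

Most domains have no equivalent of that `verify` step, which is exactly the asymmetry the next paragraph is about.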
Moreover, look at the competitive landscape that emerged even within coding. Claude Code, Cursor, OpenAI Codex, OpenCode, Aider, Windsurf, Kilo Code. Claude Code is winning right now not because Claude is definitively the best model, but because the product is good. 200 dollars a month for essentially unlimited Opus is Anthropic giving away their best model. If Opus were not so cheap in Claude Code, we would probably see much more variation in which tools developers actually use. The model subsidy is doing a lot of heavy lifting for adoption.
The competition is still about the product. Cursor has multi-file editing and deep IDE integration. Codex runs in a sandboxed cloud environment. OpenCode is entirely open source and free. Windsurf does background indexing of your entire codebase. These are product differentiators, not model differentiators. The model is one component.
If it took this level of investment to make models good at coding, and even then the competition is between products built on top of models rather than the models themselves, what makes you think your vertical gets solved for free?
Labs cannot do this for every vertical simultaneously. They have to pick. The verticals they have not picked are not automatically solved just because the model got smarter at coding.
Intelligence Is On Tap
Even if a model gets good at your vertical, the model itself is not the moat.
The Distillation Problem
On February 23rd, Anthropic accused three major Chinese labs of running “industrial-scale distillation attacks” against Claude. DeepSeek, Moonshot AI, and MiniMax. 24,000 fake accounts. Over 16 million exchanges. MiniMax alone generated 13 million interactions and redirected half its traffic to siphon capabilities the moment a new Claude model launched.
This is not a new phenomenon. It has been widely speculated that Chinese models, dating back to DeepSeek V3, were distilling off of frontier American models. Anthropic just said the quiet part out loud. OpenAI sent a memo to the House China Committee making the same accusation.
The broader pattern is well established. Gen. Keith Alexander, former NSA Director and CYBERCOM Commander, called Chinese IP theft “the greatest transfer of wealth in history” back in 2012, estimating 250 billion dollars per year. Tesla lost 300,000+ Autopilot files when an engineer defected to Xpeng. Apple lost autonomous vehicle secrets the same way, to the same company. The FBI opens a new Chinese counterintelligence case every 12 hours.
The difference with AI is that you do not even need espionage anymore. You can just ask Claude for answers and use the results to train your own model.
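Mechanically, that is all API-level distillation is. A hedged sketch, with a placeholder teacher client and file path: log enough prompt and completion pairs from the frontier model and you have ordinary supervised fine tuning data for your own open weight student.

```python
# Sketch of API-level distillation: the teacher's completions become the
# student's training data. The client and file path are placeholders.

import json

def query_teacher(prompt: str) -> str:
    # Stand-in for a call to a frontier model's API.
    raise NotImplementedError("plug in the teacher model's client here")

def collect_distillation_data(prompts: list[str], path: str = "distill.jsonl") -> None:
    with open(path, "a") as f:
        for prompt in prompts:
            completion = query_teacher(prompt)
            # Each record is a standard supervised fine tuning example.
            f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")
```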
The Dirty Secret
I believe the dirty secret the labs do not want you to know is that they cannot stop this. And that means the current dynamic, Chinese open source models delivering near frontier capability at a fraction of the cost, will persist.
Model Pricing: Frontier vs. Open Source (Feb 2026)
Sources: Anthropic, OpenAI, OpenRouter, Artificial Analysis. MiniMax M2.5 scores 80.2% on SWE-Bench Verified (Opus 4.6: 80.8%) at roughly 1/20th the cost.
Builders do not care where the model came from. They care about the cost. Look at the OpenRouter rankings. As of this week, MiniMax M2.5 is the number one model by token volume, processing over 2 trillion tokens per week. Chinese models now account for 61% of total token consumption on the platform. Four of the top five models are Chinese. MiniMax M2.5 scores 80.2% on SWE-Bench Verified, nearly matching Opus 4.6 at 80.8%, at roughly 1/20th the price.
OpenRouter Top 10 Models by Weekly Token Volume (Feb 2026)
Source: OpenRouter rankings, Feb 24 2026. Chinese models (red) hold 4 of the top 5 spots. MiniMax M2.5 alone processes more tokens than the next two models combined.
Chinese Open Source Models: Share of OpenRouter Token Usage
Sources: OpenRouter State of AI 2025, OpenRouter rankings Feb 2026. Chinese open source models went from roughly 1% to over 55% of total usage in 15 months.
How do you legislate against this? Ban inference coming from China? The models are open source; you can serve them at home. How do you regulate someone running open source software on their own hardware?
Open Source Always Wins
Open source has won layer after layer of the software stack (Linux, Kubernetes, PostgreSQL, Chromium), and the same dynamic is playing out with models.
Even if Chinese labs stopped open sourcing tomorrow, the genie is out of the bottle. Existing open weights cannot be recalled. And the incentives to open source remain strong. It is not just Chinese labs. Meta releases Llama. OpenAI released gpt-oss. Alibaba released Qwen 3.5, a 397B parameter open weight model that is arguably competitive with frontier closed models. Arcee AI, a 30 person startup, trained a 400B open source model for roughly 20 million dollars. Prime Intellect is building decentralized training and released a 106B model competitive with much larger ones.
The very best models may stay closed source for a while. But there will always be good alternatives that are open. And they are getting better fast.
The Cost of Intelligence Is Collapsing
Approximate pricing trajectory. Intelligence per dollar doubling time: OpenAI ~5.8 months, Google ~3.4 months. For reference, Moore's Law was 18 to 24 months. Sources: Epoch AI, Artificial Analysis, pricepertoken.com
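Back of the envelope on what those doubling times imply, assuming the doubling translates directly into the price of a fixed level of capability falling by half each period (and treating the Moore's Law figure the same way for comparison):

```python
# Implied annual price decline for a fixed level of capability, assuming
# cost halves once per "intelligence per dollar" doubling period.

def annual_price_factor(doubling_months: float) -> float:
    doublings_per_year = 12 / doubling_months
    return 0.5 ** doublings_per_year  # fraction of today's price left after a year

for label, months in [("OpenAI", 5.8), ("Google", 3.4), ("Moore's Law", 24.0)]:
    remaining = annual_price_factor(months)
    print(f"{label}: ~{1 / remaining:.1f}x cheaper per year ({remaining:.0%} of the price)")
```

At the OpenAI rate, a fixed level of capability costs roughly a quarter of today's price a year from now; at the Google rate, less than a tenth.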
The model underneath the agent is fungible. If intelligence is your bottleneck and GPT 5.3 one-shots your problem, swap in MiniMax and you are back in the game. The differentiators are the harness, the data management, the platform, and the domain expertise.
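Concretely, fungibility is a one line change. A sketch assuming an OpenAI compatible endpoint (which is how OpenRouter serves most models; the model slug below is a guess): the harness, retrieval, data plumbing, and evals around this call are the parts that stay put when the model string changes.

```python
# The model is a config value, not an architecture decision.
# Sketch assuming an OpenAI-compatible API such as OpenRouter's.

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_KEY",  # placeholder
)

# Hypothetical slug; swap it for any frontier or open weight model.
MODEL = "minimax/minimax-m2.5"

def run_agent_step(system_prompt: str, user_msg: str) -> str:
    # Everything around this call, the harness, the data, the evals,
    # the domain expertise, is unchanged when the model does change.
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_msg},
        ],
    )
    return response.choices[0].message.content
```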
Counterpoints
We should be honest about where this argument has limits.
“What if Chinese labs stop going open source?”
Possible. But the genie is out of the bottle. Existing open weights cannot be recalled, and the economic incentives for open source remain strong. Even if every Chinese lab went closed tomorrow, someone else would carry the torch. American open source labs like Arcee AI and Prime Intellect are already training frontier competitive open source models domestically. Meta continues releasing Llama. OpenAI released gpt-oss. The torch has many carriers.
“What if software engineering really does get fully automated?”
Even if it does, the argument still holds. Coding lends itself well to LLM automation. It is text based, verifiable, and there is an enormous corpus of training data. These are favorable conditions, and most domains do not share them. Nor is coding necessarily the hardest problem to automate; there are other domains with verifiable rewards, like finding vulnerabilities. But coding is the one labs chose to invest in first, because of the flywheel effect on AI research. The point stands: automation of a vertical requires deliberate investment in that vertical. It does not come for free.
“What if a lab decides to deliberately invest in your vertical?”
Then the burden from the coding case study applies. The lab has to pay for that lunch: deliberate, massive investment in your domain, a fight against incumbents who already have the data, the platform, and the operational expertise, and even then the winners tend to be products built on top of models rather than the models themselves. And because intelligence is on tap, whatever capability they train into the model can soon be run cheaply inside your own harness. That is a serious competitor, not an inevitability.
Closing
Big labs deserve their success. They are building incredible, generation defining technology. The capability curves are going up and to the right.
But the history of technology is not a story of monopolistic dominance. It is a vibrant and diverse landscape of competitors: corporations, enterprises, startups, and consultancies all slugging it out in the open market. AI will shake up the balance, and those who adapt will survive. But it will not alter the physics of the market.
There are no free lunches.