Gus Trigos

On Abundant, Ephemeral Software

What used to feel like pair programming with coding agents has quietly turned into something else: I define intent and constraints, and the software assembles itself. For the past couple of months, I've used agents to build a dozen full stack apps, both at work and in my spare time. These are real products with databases, production environments, and users. Each iteration revealed gaps that led me to build scaffolding that lets agents build, validate, deploy, and iterate on their own.

That pushed me back to a thought I’ve been contemplating for a while: we are entering a phase where the marginal cost of software approaches zero. When that happens, the problem is no longer how to write code, but how to control, trust, and coordinate machines that write it.

The following is an attempt to sketch what a world of near-zero cost software looks like, and what still stands in the way.

The substrate

The cost and latency of LLM tokens are rapidly collapsing, while their quality increases. [1] Last year, the U.S. invested close to 1.6% of its GDP in capex for AI infrastructure alone. [2] Frontier labs have also productized and optimized heavily for coding, given its high demand and testable feedback loops. Benchmarks suggest that the length of software tasks an agent can complete autonomously has been doubling about every seven months. [3] Given the productivity gains we've already seen from code generation, plus its price inelasticity (see the record-breaking revenues of Cursor, Claude Code, and Lovable), the economic and data flywheels being developed will only reinforce themselves.

By mid 2026, the second order effects from cheaper compute, better models, and larger training runs will be enormous.

As Christensen observed, when a technology's cost collapses, new applications emerge, spawning entirely new markets. This leads me to think that in the next six months, it will be easier and cheaper than ever to one-shot a full-stack app from a prompt. And within a year? I expect a proliferation of apps for everything. People will increasingly create their own solutions instead of searching for existing SaaS vendors.

When software becomes abundant

What does the future look like when creating software becomes this cheap and accessible?

  1. Software stops being scarce: Applications are no longer curated projects, but disposable artifacts, created, modified, and thrown away continuously by humans and agents.
  2. Production becomes automated: Agents generate far more software than humans have ever done or ever could, turning development into a parallel, always-on process rather than a sequence of commits.
  3. Economic power moves upstream: If code itself is trivial to produce, defensibility shifts to what cannot be cheaply replicated or distributed: compute, data, foundational models, chips, biology, energy, and other domains constrained by physics, along with chaotic systems like politics and financial markets.
  4. Legitimacy becomes expensive: Regulators and platforms become gatekeepers for what gets used, and when.

We won’t even notice the need to reuse or destroy software until we start hitting real physical limits, felt primarily as system latency and inference compute constraints. When spinning up a new app is easier than debugging an old one, software starts to behave more like memory than machinery. But the more pressing problems will be how to handle trust, safety, and legitimacy, which ultimately become the bottleneck.

What still stands in the way

This state will most likely be obvious within two years, but we will start noticing real implications in the next 12 months, as data and new architectures compound to make foundational models more effective and efficient.

This raises a more practical question: what current limitations keep us from getting there?

  1. Models are still expensive and unreliable at scale: generating a full-stack app with a SOTA model like Opus 4.5 costs me about $30-40, and that still entails guiding the model through rounds of prompting and debugging. This is fine for early adopters, but we'll need a 10x decrease in cost alongside higher reliability for wide enterprise adoption, and 100x for anyone to create apps at will (see the rough arithmetic after this list).
  2. Production is still mostly a human-only layer: even if an agent perfects the craft of coding an app, deploying it to production requires managing secrets, OAuth, migrations, and debugging production-only failures. Many of these steps can be papered over with duct-tape solutions, but those are fragile and security-critical, which is why we still gravitate toward human-in-the-loop intervention.
  3. Our tools are still designed for humans: tools like Claude Code and Cursor give humans and agents a blank canvas, while platforms like Replit and Lovable give them overly rigid ones. At the same time, agents run in heavily constrained sandboxes that either block useful actions or push them into hallucinating workarounds. We don't yet have environments that are simple, safe, and powerful enough for real agent-driven development.
  4. Everything is still optimized for human speed, not machine speed: today's workflows assume synchronous development, minute-long deploys, and expensive infrastructure. That works when teams ship new products weekly to monthly. It breaks when thousands of agents try to build and deploy in parallel, seeking sub-second feedback loops.
  5. We don't yet trust machines with real systems: we have no reliable way to autonomously verify that an agent did what it was supposed to do, followed its contracts, or avoided leaking secrets, let alone at scale. Until we can prove safety and correctness, we will keep humans between agents and production, which undermines the point of running them autonomously.

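To make the cost point in item 1 concrete, here is the rough arithmetic behind those thresholds (a back-of-the-envelope sketch; the $30-40 figure is from my own runs, and the 10x and 100x cuts are the assumptions stated above):

    # What the cost thresholds in item 1 imply per one-shot full-stack app.
    cost_today = (30, 40)  # USD, prompting and debugging included

    for label, factor in [("today", 1),
                          ("enterprise threshold (10x cheaper)", 10),
                          ("apps-at-will threshold (100x cheaper)", 100)]:
        lo, hi = cost_today[0] / factor, cost_today[1] / factor
        print(f"{label}: ${lo:.2f}-${hi:.2f} per app")

    # today: $30.00-$40.00 per app
    # enterprise threshold (10x cheaper): $3.00-$4.00 per app
    # apps-at-will threshold (100x cheaper): $0.30-$0.40 per app
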
This space has been advancing so fast that it is forcing behavioral change at a pace we have never seen before.

We went from typing every line of code, to hitting tab-tab to autocomplete a line, to using chat to modify a single file, and now to agents managing a complete codebase.

While the externalities of this future are a discussion for another time, I foretell that those who build the systems that make this world possible will create (and capture) exorbitant value in a very short window of time.

Back to prompting.


Notes

[1]: The lightbulb needed about 80 years to get 80% cheaper. LLM tokens got 99% cheaper in about two years (Meeker). The difference is that with software, the thing getting cheaper is also getting better at the same time: higher quality predictions, lower latency, and more of them per dollar.
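
For scale, annualizing those two decline rates (a back-of-the-envelope sketch using only the figures cited above):

    # Annualized price declines implied by the figures above.
    # Lightbulb: 80% cheaper over 80 years -> 20% of the price remains.
    # LLM tokens: 99% cheaper over ~2 years -> 1% of the price remains.
    lightbulb_annual = 1 - 0.20 ** (1 / 80)  # ~2% cheaper per year
    tokens_annual = 1 - 0.01 ** (1 / 2)      # 90% cheaper per year

    print(f"lightbulb: ~{lightbulb_annual:.1%} cheaper per year")  # ~2.0%
    print(f"LLM tokens: ~{tokens_annual:.0%} cheaper per year")    # 90%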

[2]: By 2025, hyperscalers alone spent roughly $400B on capex, and north of $500B including the rest of big tech. Against roughly $31T of U.S. GDP, that works out to about 1.6% of GDP being pulled into AI-era infrastructure (KKR).

[3]: The METR benchmark suggests a Moore's-law-like trend: the autonomy horizon for software tasks (at 50% success) has been doubling about every 7 months (METR).
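
Taken at face value, that rate compounds quickly. A minimal sketch of the implied growth, assuming the trend simply continues (the 7-month doubling is METR's figure; the time points are illustrative):

    # Growth implied by a 7-month doubling time for the autonomy horizon:
    # horizon(t) = horizon(0) * 2 ** (t / 7), with t in months.
    doubling_months = 7

    for months in (7, 12, 24):
        factor = 2 ** (months / doubling_months)
        print(f"after {months} months: ~{factor:.1f}x longer tasks")

    # after 7 months: ~2.0x longer tasks
    # after 12 months: ~3.3x longer tasks
    # after 24 months: ~10.8x longer tasks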
