The Autofac Era
If we don’t build AGI in the next 5 years, I think it’s likely that we end up in what I’m calling the Autofac Era, where much of the economy is automated but humans continue to drive all economic activity as consumers. In this post I’ll explain why I think it might happen, what I expect it to look like, how you can prepare for it, and how it will end.
NB: This is an informal model. I haven’t done extensive research. It’s primarily based on my 25+ years of experience thinking about AI, my read of where we are today, and what I think is likely to happen in the next few years. I strongly invite you to poke holes in it, or to offer hard evidence in its favor. If you’d prefer to read a better-researched model of how AI changes the world, I’d recommend AI 2027.
What is Autofac?
The name Autofac is drawn from a 1955 short story by science fiction author Philip K. Dick. In it, humans live in a world overrun by self-replicating factories that deliver an endless supply of consumer goods. The exact details of the story aren’t important here beyond inspiring the name, which I chose because my model assumes we need to keep the consumer economy going even though most economic goods and services are provided by AI.
My background assumption is that the economy will remain based on human consumption of goods and services. At first this will primarily be because it’s how the economy already works, but later it’ll be because, without AGI, humans are the only source of wanting stuff. Tool AI would be just as happy to sit turned off, consuming no power and doing nothing, so an AI economy without AGI only makes sense, best I can tell, if there are humans who want stuff to consume.
The development of AGI would obviously break this assumption, as would tool AI that autonomously tries to keep delivering the outcomes it was created for even if there were no humans around to consume them (a paperclip-maximizer scenario, which is surprisingly similar to the Autofacs in PKD’s story).
How does the Autofac Era happen?
The Autofac Era only happens if we fail to develop AGI in the next few years. I’m saying 5 years to put a hard number on it, but it could be more or less depending on how various things play out.
I personally think an Autofac scenario is likely because we won’t be able to make the conceptual breakthroughs required to build AGI within the next 5 years, specifically because we won’t be able to figure out how to build what Steve Byrnes has called the steering subsystem, even with help from LLM research assistants. This will leave us with tool-like AI that, even if it’s narrowly superintelligent, is not AGI because it lacks an internal source of motivation.
I put about 70% odds on us failing to solve steering in the next 5 years and thus being unable to build AGI. That’s why I think it’s interesting to think about an Autofac world. If you agree, great, let’s get to exploring what happens in the likely scenario that AGI takes 5+ years to arrive. If you disagree, then think of this model as exploring what you believe to be a low-probability hypothetical.
What will the Autofac Era be like?
Here’s roughly what I expect to happen:
AI becomes capable of automating almost all mental labor within 3 years. Humans continue to be in the loop, but only to steer the AI towards useful goals. Many successful companies are able to run with just 1 to 3 humans doing the steering.
This is based on extrapolation of current trends. It might happen slightly sooner or slightly later; 3 years is a median guess.
Shortly after, say in 1 to 2 years, AI becomes capable of automating almost all physical labor, though again humans are needed as supervisors to steer the AI towards useful goals.
The delay is due to the time needed to ramp up robot manufacturing and to organized resistance by human laborers. I could also be wrong that there’s a delay: physical automation could happen concurrently with the automation of mental labor, shortening the timeline.
The full automation of most economic activity allows rapid economic growth, with total economic output likely doubling every 1 to 3 years. Because this dramatically outstrips the rate at which humans can reproduce, and there’s no AGI to eat up the economic excess, there’s tremendous surplus that creates extreme wealth for humanity.
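To make those doubling times concrete, here’s a quick back-of-the-envelope calculation. This is my own illustration, using today’s roughly 3% world growth (a ~23-year doubling time) as a baseline; the exact figures aren’t part of the model:

```python
# Back-of-the-envelope: what a given doubling time implies for annual growth.
# The 23-year case roughly matches today's ~3% world economic growth.
for doubling_years in (1, 2, 3, 23):
    annual_growth = 2 ** (1 / doubling_years) - 1
    print(f"doubling every {doubling_years:>2} years -> {annual_growth:.0%} annual growth")

# doubling every  1 years -> 100% annual growth
# doubling every  2 years -> 41% annual growth
# doubling every  3 years -> 26% annual growth
# doubling every 23 years -> 3% annual growth
```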
Because we don’t have AGI, our powerful AI tools (which we may or may not consider superintelligent) remain tools, and humans retain at least nominal control: humans are the only source of motivation to do work.
Therefore, power structures have to remain mostly in place and controlled by humans, though potentially transformed by the need to respond to AI tools that eliminate certain kinds of friction that keep existing systems working today. There will still be states and laws and civilizational coordination norms and a legal monopoly on violence to keep the whole system functioning.
At this point, there are only a few jobs left for humans that fit roughly within three categories:
Source of motivation (+ ownership)
Business executive (which consists of steering AI agents towards goals, and may require some degree of technical expertise to get good results)
Investors & board members (though expect lots of AI automation of investment activities)
Journalists (with lots of AI automation of research, but only people know what we care about knowing)
Religious/spiritual leadership (though expect some AI cults)
Landlords (though expect robots to do maintenance, negotiate leases, etc.)
Human is the product/service
Caring professions (in cases where human connection is what’s valuable, like therapists, some nurses, and some doctors)
High-end chefs (luxury good)
Teachers and nannies (luxury good or people completely reject AI involvement)
Arts (rejection of AI slop, though expect many people with little taste to favor slop as they already do today)
Sex work (luxury good that competes with AI)
Monopoly on violence/coercion
Lawyers and judges and police (though expect a lot of automation of law research and basic police work)
Political leadership (humans remain in at least nominal control of state power, though they rely heavily on AI advisors to make decisions)
Military leadership (robots can do the fighting, but humans ultimately control when to pull the trigger in a metaphorical sense, since many war-fighting robots will have autonomous authority to kill, barring a new Geneva Convention to coordinate on war norms)
Protected professions (any group who manages to capture regulators to protect themselves from automation)
Despite a limited number of jobs, humans remain critical to the economy as consumers. If we don’t keep a large consumer base, the entire economic system collapses and we no longer have the money to fund AI.
To solve this problem, and to prevent revolution, states set up a self-sustaining consumer economy using UBI funded by public investment in private companies. The most common job thus becomes “investor”, but all such investment is indirect: the state manages the investments and payments, and people receive fixed UBI payments monthly or weekly.
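As a very rough sketch of how the funding flow could work (every number below is an arbitrary placeholder I made up for illustration, not an estimate from this post):

```python
# Toy sizing of a UBI funded by state stakes in private companies.
# Every parameter here is an arbitrary placeholder, not a forecast.
gdp = 100e12            # annual economic output in dollars (placeholder)
profit_share = 0.25     # fraction of output accruing as corporate profits
state_stake = 0.40      # fraction of those profits the state captures via its investments
population = 400e6      # number of people receiving UBI

ubi_pool = gdp * profit_share * state_stake
monthly_ubi = ubi_pool / population / 12
print(f"UBI pool: ${ubi_pool / 1e12:.0f}T/year -> ${monthly_ubi:,.0f} per person per month")
# UBI pool: $10T/year -> $2,083 per person per month
```

The point of the sketch is just that the loop closes: automated firms generate profits, the state’s equity stake converts some of those profits into UBI, and UBI spending flows back to the firms as revenue.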
This arrangement creates a large “underclass” of people whose only source of income is UBI. Some use their UBI wisely, invest it privately, and maintain something like a middle-class lifestyle (relatively speaking; they are in fact quite wealthy in absolute terms). Others are trapped, either by circumstance or by high time preference, and live UBI-check to UBI-check with no savings or investments, but they still live excellent lives with abundant food, medical care, and entertainment options.
The effect of all this is that every human gets richer than they are today, and income inequality goes way up. The “underclass” live lives of incredible luxury by modern and historical standards, but feel “poor” because other people are trillionaires.
How bad it is to live with this inequality depends a lot on what happens with scarce resources. If housing remains supply-constrained, that would be really bad, but AI robotics should make construction cheaper. I’m more worried about electricity, but realistically I think the Autofac Era will not consume all available power before it ends.
States continue to function much as they do today, although all states with real power heavily automate their economic, military, and political activity. Expect the US and China to remain dominant, with their client states generally benefitting, and outsider states gaining from the surplus, some degree of goodwill, and the dominant powers’ desire to police the world to minimize disruptive conflicts.
The above is what I view as the “happy” path. There are lots of ways this doesn’t play out the way I’ve described, or plays out in a similar but different way. Maybe people coordinate to push back hard and slow AI automation. Maybe AI enables biological warfare that kills most of humanity. Maybe there are nuclear exchanges. Maybe AI-enabled warfare damages communication or electrical systems in ways that destroy modern industry. In short, there are many ways the exact scenario I lay out doesn’t happen.
Lots of folks have explored the many risks of both tool-like AI and AGI, and I highly recommend reading their work. In the interest of quantification: if I screen off existential risks from AGI/ASI, I’d place something like 35% odds that some kind of non-existential AI disaster keeps the world from looking basically like the happy path.
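For what it’s worth, combining this with my earlier 70% estimate gives odds of a bit under half; note that multiplying the two as if they were independent is my simplification for illustration, not a claim made elsewhere in this post:

```python
# Combining the two probability estimates stated in this post.
# Treating them as independent is a simplification for illustration.
p_no_agi_in_5y = 0.70        # odds we fail to solve steering / build AGI in 5 years
p_no_ai_disaster = 1 - 0.35  # odds we avoid a non-existential AI disaster

p_happy_path_autofac = p_no_agi_in_5y * p_no_ai_disaster
print(f"P(happy-path Autofac Era) ~= {p_happy_path_autofac:.1%}")  # ~= 45.5%
```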
I’ve also assumed that we continue with something like a capitalist system. Maybe there’s so much surplus that we have a political revolution and try central planning again, but this time it actually works thanks to AI. Such a world would feel quite a bit different from the scenario I’ve described, but would share many of the core characteristics of my model.
How can I prepare for the Autofac Era?
The best way to prepare is by owning capital, either directly or through investment vehicles like stocks and bonds. I won’t give you any advice on picking winners and losers. I’ll just suggest at least following the default advice of holding a large and diversified portfolio.
You could also try to have the skills and connections necessary to continue to be employed. This is a high-risk strategy, as there’ll be a lot of competition for a much more limited number of roles. If you’re not in the top 10% in some way for your preferred role, you’re unlikely to stand a chance. If you pursue this path, have investments as a backup.
You’ll also be fine if you just don’t prepare. Life in the “underclass” will be coded as low status and will lock you out of access to luxury goods, but you’ll still live a life full of what, by historical standards, would be luxuries. This is perhaps comparable to what happened during the Industrial Revolution, except without the tradeoff of the “underclass” having to accept poor working conditions to get access to those luxuries.
That said, many people will find life in the “underclass” depressing. We know that humans care a lot about relative status. People who compare themselves to those with investments or jobs who can afford luxury goods may feel bad about themselves. A lot of people who aren’t used to being “poor” will suddenly find themselves in that bucket, even if being “poor” is extremely comfortable. My hope is that people continue to do what they’ve been doing for a while and develop alternative status hierarchies that allow everyone to feel high status regardless of their relative economic station.
How does the Autofac Era end?
I see it ending in one of three ways.
One is that there’s an existential catastrophe. Again, lots of other people have written on this topic, so I won’t get into it.
Another way for the Autofac Era to end is stagnation: the economy grows until it hits the carrying capacity of the Sun. If we never make the breakthrough to AGI, we will eventually transition to a Malthusian period that can only be escaped by traveling to other stars to harness their energy. If that happens, the Autofac Era won’t exactly end, but a world with no growth would look very different from the one I’ve described.
Finally, the Autofac Era ends if we build AGI. This is the way I actually expect it to end. My guess is that the Autofac Era will only last 5 to 10 years before we succeed in creating AGI, and the onramp to AGI might even be gradual if we end up making incremental progress towards building steering subsystems for AI. At that point we transition to a different and, to be frank, much more dangerous world, because AGI may not care about humans the way tool-like AI implicitly does: tools care about what humans care about, if only instrumentally. For more on such a world, you might read the recently released If Anyone Builds It, Everyone Dies.
This post garnered some good discussion on LessWrong. I recommend reading the comments there.