TL;DR
- Altman’s core predictions (models keep improving, agents start working, infrastructure is the bottleneck) were largely right — but what he left out matters more
- This time AI isn’t replacing hands, it’s replacing brains — the cognitive work we told a generation to “study hard” for is getting compressed, with no equivalent new jobs in sight
- Offices in Taiwan already have two parallel worlds: people flying with AI vs. people still watching from the sidelines, with business owners caught in between
- You don’t need to become an AI company overnight — but start this week: audit everyone’s work honestly, let your early adopters lead, and have the uncomfortable conversation
On an afternoon in September 2024, Sam Altman sat in his San Francisco office and published a two-thousand-word essay. The title was ambitious: The Intelligence Age.
He wrote that humanity stood at a turning point on par with the agricultural and industrial revolutions. Deep learning had “worked.” Superintelligence could arrive “within a few thousand days.” Soon, everyone would have an AI tutor, an AI doctor, an AI financial advisor. Intelligence would be as cheap as tap water — just turn on the faucet.
The essay circulated like scripture in Silicon Valley. VCs used it to convince LPs. Startups used it to write pitch decks. Media used it for headlines. The world it painted was beautiful: technological progress would be a rising tide that lifts all boats.
Eighteen months later, the tide did come.
But in offices across Taiwan, many business owners realized they weren’t being lifted. They were staring at a choice they never expected to face.
A Line Every Business Owner Has Said
Over the past six months, I’ve talked to a lot of business owners in Taiwan. Software, finance, services, e-commerce. Companies from a handful of people to a few hundred. Different industries, different backgrounds, different temperaments — but nearly everyone said the same thing, just in different words.
The weight behind those words is something Sam Altman can’t feel from San Francisco.
It’s hard to explain to a Silicon Valley techno-optimist what the relationship between a boss and employees looks like in a 15-person Taiwanese company. It’s not purely contractual, not purely friendship, but stickier than both. You know the names of everyone’s kids. You know whose mortgage just went through, whose parents aren’t doing well. And it’s even harder to explain that what this business owner is feeling has already gone far beyond “should we adopt AI.” It’s a moral dilemma.
They see what AI can do. They’re not stupid. They know data that takes someone two days to organize, AI finishes in twenty minutes. They know customer messages that take three hours a day to answer can mostly be handled by an AI system. They also know if they don’t move, their competitors will, and their cost structure and response time will become liabilities.
But they also see another picture: 9 AM, that small office, a few familiar faces. Someone who’s been there since the company was just getting started, shouldering everything along the way. Someone who said just last month, “Boss, this year we can go bigger.”
Altman said AI is a multiplier for everyone’s capabilities. But nobody told these business owners: once your multiplier is installed, how do you look into the eyes of the people who are no longer needed?
Giving Altman Credit Where It’s Due
To be fair, the core predictions Altman made in 2024 have largely held up by 2026.
He said Scaling Laws would keep working — models would keep getting better. That’s true. From GPT-4o to Claude Opus 4.6, from Gemini 2.0 to Qwen 3.5, every generation is smarter, faster, and cheaper than the last. Inference API pricing dropped nearly 10x in eighteen months. A task that cost a hundred dollars to run a year ago now costs around ten.
He said AI would evolve from “chatbot” to “autonomous agent.” That’s happening. “Agent” went from a buzzword on slides to something that actually runs — AI that can operate computers, write code, do research, and report back conclusions. It’s not perfect yet, roughly like a smart but careless intern, but it’s moving.
He said infrastructure was the critical bottleneck. Absolutely right. Global data center power consumption has become a geopolitical issue. The US is revisiting nuclear power, not because of a sudden love for clean energy, but because AI’s appetite for electricity is staggering. Taiwan feels this even more acutely — when your power grid has to simultaneously serve TSMC and an exploding AI industry, “infrastructure” takes on a completely different weight.
Altman got all of this right.
But He Skipped an Entire Chapter
Altman’s essay got a lot right, but what really matters is what he left out.
He painted a world where “intelligence gets cheap, everyone benefits.” He used an elegant analogy: people from centuries ago would see our lives today as magic, so people decades from now will feel the same looking back at us. It’s a powerful analogy, but it skips too fast. It conflates “long-term human progress” with “short-term individual fate.”
Yes, humanity as a whole became wealthier going from the agricultural age to the industrial age. But if you were a hand-loom weaver in 1820s Manchester, you didn’t feel like you were living in “an era of improvement.” You just knew your craft was suddenly worthless, and your children had to work fourteen-hour shifts in a factory.
Altman looks down at history from thirty thousand feet. You and I have to make decisions on the ground.

Two Speeds in One Office
I’ve noticed something. When I talk to business owners, two almost parallel worlds often exist inside the same company.
On one side are the people already using AI. Maybe they’re the person who always liked tinkering with new tools, or a young hire you brought in recently. These people have tasted the difference — they use AI to draft documents, crunch data, summarize meetings, and draft customer replies. Their work pace is visibly faster, and there’s a “no going back” feeling. One e-commerce boss told me a single employee now lists five times as many products per day as before, with better descriptions too. “Looking at the dashboard, sometimes I wonder if he’s one person or three.”
On the other side are those still working the old way. They’re not dumb, and they’re not lazy. They’ve just settled into a workflow that’s served them well for years — open Excel, organize manually, write emails word by word, take notes on paper in meetings. It worked all this time. They have no reason to think it would suddenly stop working.
In large corporations, these two worlds can coexist for a while because the organization’s mass absorbs the efficiency gap. But in a company of 15 or 30 people, the gap is visible to the naked eye. Sitting at the same table, one person finishes in two hours what takes another person two days. Nobody needs to make a comparison — the pressure is already in the air.
Business owners see it all, and they’re more anxious than anyone.
“It’s not that I don’t want everyone using AI,” a B2B services owner told me. “But I tell them to try it, some of them open ChatGPT for ten minutes, say ‘this thing isn’t accurate, it’s useless,’ and never touch it again. What am I supposed to do? Force them?”
You can’t force them. But you can’t stand still either.
Because the market won’t wait. Your competitors won’t wait. And that employee who’s thriving with AI won’t wait forever either — if they realize the whole company can’t keep up with their pace, they’ll leave. Then you lose the person most adapted to the future and are left with a team still living in the past.
That’s the real situation for Taiwan’s business owners — and every white-collar worker under them — in 2026. It’s not “should we use AI.” It’s “some people in the company are already living in the future, others are still in the past, and you’re stuck in between.”

Demand Destroyed, Not Transferred
Here’s the thing Altman’s entire essay never touched.
Every major technological revolution in history followed a pattern: old jobs disappear, new jobs appear. Steam engines killed crafts but created factory management, railway engineering, and mechanical repair. Computers killed typists but created programmers, network admins, and UX designers. Each time, the jobs destroyed were “low-cognitive” and the jobs created were “high-cognitive.” Society’s answer was simple: educate more people, push them up the cognitive ladder.
But this time, AI is eliminating cognitive work itself.
It’s not here to replace your hands. It’s here to replace your brain. It can write reports, do analysis, write code, translate, generate designs, organize financial data. These are exactly the white-collar jobs we spent the last twenty years telling young people they’d get if they “studied hard.”
And unlike previous revolutions, there’s no equivalent “new category of work” appearing yet.
Some people say: “AI will create new roles — prompt engineer, AI trainer.” Maybe. But deep down you know — the accounting and customer service that three people handle at your company today could probably be managed by one person with AI support. The other two — it’s not that you don’t want to keep them. It’s that you don’t know what to have them do.
Altman called AI a “productivity multiplier” that makes every hour more valuable. What he didn’t say is the flip side: when every hour becomes more valuable, you need to buy fewer of them.
Economics has a term for this: demand destruction. That concept is completely absent from Altman’s essay. But in 2026, every business owner in Taiwan can already feel it — most just don’t know what to call it.

The “Few Thousand Days” Countdown
Altman dropped a striking number in his essay: “We may have superintelligence in a few thousand days.”
A few thousand days. Roughly three to eight years. Counting from September 2024, by March 2026 we’ve burned through about 550 days. Has superintelligence arrived? No. But something more interesting happened — the goal of “superintelligence” itself started to matter less.
The industry has mostly stopped caring about “can the model pass a PhD-level physics exam.” What everyone actually wants to know is: can AI reliably do work inside your company, in your workflows, with your data? A model that writes perfect essays but randomly fabricates data is less useful than a mediocre model that never lies.
Altman drew a line pointing toward the sky. But what you need is a line that touches the ground.
And the bottleneck for landing isn’t technology. It’s your organization.
Those employees still on the sidelines — it’s not that they disbelieve AI is powerful. It’s that they don’t trust themselves to learn it. Or deeper still, they’re afraid that once they do, they’ll discover the experience and work habits they’ve built over years suddenly don’t matter anymore.
That kind of fear can’t be fixed with a single training session.
What Altman Didn’t Tell You: You Have Choices Now
When Altman wrote that essay in September 2024, OpenAI could still speak in a “we define the future” tone. GPT-4 had been out for about a year and a half, and there were no real challengers.
Eighteen months later, the landscape is completely different.
Anthropic’s Claude has established itself in the enterprise market. Google’s Gemini keeps eating market share through its search and cloud ecosystem. Meta open-sourced Llama, giving small and mid-size businesses around the world an option that doesn’t cost a fortune.
Then there’s DeepSeek. A Chinese company that built competitive models at a fraction of OpenAI’s training cost. The real takeaway: you don’t have to be a Silicon Valley giant to do AI well.
For SMB owners, this shift matters more than anything else. Two years ago, “adopting AI” sounded like a game only big enterprises could play — spend millions on consultants, build systems, train models. Now, a subscription costing a few thousand NTD per month gives your employees access to world-class AI tools. The barrier is too low to be an excuse anymore.
When Altman wrote his essay, the future looked like OpenAI’s to dictate. Not anymore. Competition has opened the market, and you — the buyer — are the biggest beneficiary.
But the prerequisite is figuring out what you actually need. And that part, you can’t outsource.

Not Your Fault, But Your Problem
To every business owner who’s read this far —
The situation you’re facing isn’t something you caused. You didn’t invent AI. You can’t rewind time to when all you had to manage was revenue and headcount.
But this problem has landed on your desk.
And honestly, you can do more than you think.
Business owners who started moving in 2024 have two years of hands-on experience by 2026. They know what AI can do, what it can’t, where it breaks, and where it saves money. You can’t get that from reading articles — it has to grow from practice. And this gap widens every day, not because technology makes first-movers stronger, but because hands-on experience compounds. Today’s mistakes make tomorrow’s decisions better. Better decisions make next week’s processes smoother. Smoother processes let AI go deeper the week after. Once that flywheel spins, catching up isn’t about tools — it’s about the entire team’s understanding.
But it’s never too late to start. The only mistake is not starting at all.
Three things you can do starting this week.
First, audit what everyone on your team does — honestly. You don’t need a consultant. You can do this yourself. Lay out what each person does every day and ask: if AI plus one person handled this, what could it look like in twelve months? If the answer is “probably 80%,” then the question isn’t whether to let that person go — it’s how to shift their value from “execution” to “judgment” and “quality control.” Transitioning people is always cheaper than replacing them. Especially in small companies — every person who leaves takes an irreplaceable chunk of context with them.
Second, turn the person already using AI into everyone’s coach. Every company has one or two people already doing it. Don’t suppress them — give them a role: have them lead others. Not a formal training session, but showing real examples in day-to-day work. “That quote compilation you’re working on? I did it in five minutes with AI — want me to show you?” Nothing is more persuasive than “the person at the next desk already knows how.” You don’t need everyone to start at once — you need a spark, then let the fire spread.
Third, have the uncomfortable but necessary conversation. Not the “AI will make us more efficient” platitude, but sitting down with your team and saying: “Our industry is changing. Some tasks won’t be done by people anymore. Some roles will look different. I’m not going to pretend nothing is happening, but I’m also not leaving anyone behind. I need you to figure this out with me.” People don’t fear bad news. They fear silence — the boss says nothing, then one day suddenly announces “restructuring.” Saying it early costs far less than saying it late.
After the Myth
Is Sam Altman’s The Intelligence Age a myth? In some ways, yes. It simplified reality and skipped over the pain. His role is different from ours — he looks at the big picture while we walk on the ground.
But myths have their uses. They give us a sense of direction, showing us where the tide is heading.
The real question is — once you know which way the tide flows, are you going to stand on the shore and wait, or start moving?
You don’t need to become an “AI company” overnight. You just need to start this week with something small, concrete, and allowed to fail. Let your team feel what AI is like. Then learn one thing from that attempt, and do another one next week.
Those people who’ve been with you for years — they’re not your burden. They’re the ones who know your customers, your processes, your quirks best. AI can’t learn that. What they lack isn’t ability — it’s someone willing to walk them through this unfamiliar stretch.
That someone is you.
Altman wrote an essay about the future. But the future isn’t written in essays.
It’s walked out, step by step, with your team.
Original reference: Sam Altman, “The Intelligence Age,” September 2024, ia.samaltman.com
Further reading:
- Motorcycle for the Mind: Naval on AI and the Future of Work — How individuals should position themselves in the AI era
- Something Big Is Happening — What Silicon Valley founders are really talking about
- When AI Opens Your Drawers — The other side of AI agent autonomy
- Abundant Intelligence: When Smarts Are No Longer Scarce — Product strategy in the age of abundant intelligence