AI Hurtles Ahead: Investor Memo Says the Last Three Months Changed Everything
A new investor memo titled “AI Hurtles Ahead” argues that the most important shift since late 2025 isn’t merely that artificial intelligence is improving — it’s how fast it’s improving, and how quickly that improvement is now translating into real economic disruption.

The author says he originally explored the “AI bubble” question in a December memo, Is It a Bubble? After speaking with technologists in their 30s and 40s, he returned to them for an update. One of them suggested a simple test: ask Claude, Anthropic’s AI model, to produce a tutorial explaining AI and what has changed in the last three months. The result, he writes, was astonishing — a 10,000-word, personalized curriculum that felt less like an encyclopedia entry and more like “a personal note from a friend or colleague.”
The memo is presented as an addendum to the December piece. The author emphasizes he did not let Claude write the memo itself, but he quotes extensively from Claude’s output because he found it unusually coherent, logically structured, candid about limitations, and tailored to his own frameworks and past writing.
Training vs. Inference: A Model’s Two Lives
A central theme is that many people misunderstand what an AI model is. The tutorial pushed the author away from viewing a model as a search engine that retrieves and repeats information. Instead, it describes an AI model as a system that synthesizes and “reasons” from patterns learned during training.
It outlines two phases:
Training: not “loading facts,” but shaping the model’s ability to form reasoning patterns, structure arguments, and apply those patterns to unfamiliar situations. The author compares this to how a human develops intellectual capacity through exposure to the world.
Inference: the operational phase when a trained model responds to prompts. The model does not typically assign itself goals; it waits for user instructions.
The author highlights a key point: if AI’s potential is being underestimated, it may be because users aren’t prompting well enough — a limitation of people, not the systems.
Can AI Have New Ideas — or Just Remix?
The memo then tackles a question that has become increasingly unavoidable: does AI “think”? The author frames the skeptical view as “sophisticated pattern matching” with a ceiling — impressive recombination, but not original thought.
Claude’s rebuttal, as quoted, is blunt: humans also learn from others’ writing, frameworks, and experience. The distinction that matters may not be philosophical but practical: if AI can produce the output of a highly paid knowledge worker, the economic effects are the same whether the process counts as “real thinking” or not.
The author also introduces the importance of the term generative — meaning the system can create new content that resembles patterns in its data, rather than merely classifying or retrieving.
The Big Change Since December: Speed + Autonomy
The memo argues that AI’s timeline is unlike the computer revolution. It contrasts decades-long adoption arcs (ENIAC in 1945, mainstream PCs only in the early 1980s) with a compressed AI arc: invisible AI features before 2010, voice assistants soon after, and generative AI becoming a general-purpose technology only within the last two years — already used by hundreds of millions of individuals and a large majority of companies.
But the bigger leap, the author says, is that AI has moved through three capability levels:
Level 1: Chat AI (answers questions)
Level 2: Tool-using AI (executes tasks with tools when instructed)
Level 3: Autonomous agents (user gives a goal and constraints; the system does the work, checks it, iterates, and delivers results)
That last level is the difference between a productivity boost and a labor substitute. The author calls autonomy the defining factor that separates AI from prior innovations: it can not only do what humans already do more efficiently, but potentially take on entirely new categories of work and workflows.
He quotes at length from a widely circulated post by OthersideAI CEO Matt Shumer, who describes a moment in early February 2026 when newly released models felt like a step-change: work that previously required back-and-forth guidance could now be delegated in plain English, with the AI testing and refining outputs “like a developer would.”
Limitations: Still Real, Still Dangerous
Despite the awe, the memo lists limitations Claude itself volunteered:
AI struggles more in genuinely unprecedented situations with weak historical pattern support.
AI may not reliably recognize when it doesn’t know something and may “hallucinate.”
Reliability has improved but errors remain.
Context windows still limit working memory.
Polished output can create over-trust.
The author’s “take” is pragmatic: humans also forget, make mistakes, and miss what they don’t know. The relevant comparison is performance — and AI is increasingly better than most people at many knowledge tasks.
Investing Implications: A Higher Bar for Humans
The memo argues investors were slow to price disruption risk — a familiar human failure to incorporate new information. AI has obvious investing advantages: processing more data, less emotion, fewer behavioral biases (in theory). But it also lacks skin in the game, may struggle with true novelty, and can’t fully replicate human taste and judgment about qualitative factors.
His conclusion is not that humans are finished, but that mediocre humans are: AI will raise the bar in investing the way indexing pushed out active managers who didn’t add value.
Bubble Question: Tech Real, Valuations Unclear
On whether AI is a bubble, the memo splits the question:
The technology is real and already in deployment.
The buildout may still involve malinvestment (as in past tech booms).
Some revenue may be “circular” (AI companies buying from each other).
Valuation risk is real, especially for startups that are “lottery tickets.”
His guidance remains: don’t go all-in and risk ruin, but don’t stay all-out and miss a structural shift. He recommends a moderate, selective, prudent posture.
The Darker Postscript: Work, Purpose, and Social Stability
The memo ends with the author’s strongest worry: joblessness and purposelessness. He cites examples of roles that may be rapidly reduced (advertising copy, software engineering, driving jobs as autonomy expands). Claude’s quote frames the shift starkly: Level 1 and 2 were “faster horses,” while Level 3 agents are “the automobile” — not speeding up work but doing it.
The author acknowledges optimists who point to history: new jobs usually appear. But he can’t confidently believe society will adjust at AI speed. He closes with a personal preference — he’d rather be an optimist and wrong than a pessimist and right — while admitting he cannot shake the concern.