He predicted: • AI vision breakthrough (1989) • Neural network comeback (2006) • Self-supervised learning revolution (2016) Now Yann LeCun's 5 new predictions just convinced Zuckerberg to redirect Meta's entire $20B AI budget. Here's what you should know (& how to prepare):
@ylecun is Meta's Chief AI Scientist and Turing Award winner. For 35 years, he's been right about every major AI breakthrough when everyone else was wrong. He championed neural networks during the "AI winter." But his new predictions are his boldest yet...
1. "Nobody in their right mind will use autoregressive LLMs a few years from now." The technology powering ChatGPT and GPT-4? Dead within years. The problem isn't fixable with more data or compute. It's architectural. Here's where it gets interesting...
Each token an LLM generates carries a small chance of error, and those errors compound across the output. The longer the answer, the higher the probability of hallucination. This is why ChatGPT makes up facts. Why scaling won't save current models. It's a property of the architecture itself. But LeCun didn't stop there:
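The compounding argument can be sketched in a few lines. This is a toy model only: it assumes each token independently stays "on track" with probability (1 − e), and the per-token error rate e = 0.01 is an illustrative assumption, not a measured figure.

```python
# Toy model of the error-compounding argument:
# if each generated token independently stays correct with
# probability (1 - e), an n-token answer stays fully on track
# with probability (1 - e)**n, which decays exponentially in n.
# e = 0.01 is illustrative, not a measured hallucination rate.

def p_on_track(n_tokens: int, e: float = 0.01) -> float:
    """Probability an n-token autoregressive output contains no error."""
    return (1 - e) ** n_tokens

for n in (10, 100, 1000):
    print(f"{n:>5} tokens -> {p_on_track(n):.3f}")
# At 1,000 tokens the probability is effectively zero.
```

Real models aren't this simple (errors aren't independent, and sampling can recover), but the direction of the argument is what LeCun is pointing at: longer outputs drift.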
2. Video-based AI will make text training look primitive. LeCun's calculation: a 4-year-old processes ~10¹⁴ bytes through vision alone. That's roughly ALL the text used to train GPT-4. In 4 years. Through one sense. This changes everything about how AI should learn:
Babies learn gravity and physics by 9 months. Before they speak. "We're never going to get human-level AI unless systems learn by observing the world." Companies building video-first AI will leapfrog text-based systems. Here's what Meta is secretly building:
3. Proprietary AI models will "disappear" LeCun's exact words: "Proprietary platforms, I think, are going to disappear." He calls it "completely inevitable." OpenAI's closed approach? Google's secret models? All doomed. His reasoning will shock the industry:
"Foundation models will be open source and trained in a distributed fashion." A few companies controlling our digital lives? "Not good for democracy or anything else." Progress is faster in the open. The world will demand diversity and control. LeCun's timeline will surprise you:
4. AGI timeline is 2027-2034 @ylecun's exact words: "3-5 years to get world models working. Then scaling until human-level AI... within a decade or so." But it won't come from scaling LLMs.
Every company betting only on GPT-style scaling will be blindsided. LeCun calls the "country of geniuses in a data center" idea "complete nonsense." The smart money is repositioning for the architecture shift.
5. AI assistants will replace all digital interfaces. Ray-Ban Meta glasses today: look at a Polish menu, get a translation. Ask about a plant, get a species ID. That's primitive compared to what's coming. AI will mediate ALL digital interactions. Here's what this means for your business:
The economic implications are massive. Companies building on OpenAI APIs could see their foundations crumble in 3-5 years. But early movers positioning for JEPA (LeCun's Joint Embedding Predictive Architecture, his proposed world-model approach)? They'll capture the next $10 trillion wave. LeCun's advice for surviving this transition:
How to prepare: Researchers: "Don't work on LLMs. Focus on world models and sensory learning." Companies: Build on open-source foundations like PyTorch and Llama. When the shift happens, you adapt instantly. The window to position yourself is closing:
@karlmehta @grok do you agree with his conclusion?
Interesting points from LeCun. I agree LLMs compound errors leading to hallucinations—scaling helps but won't fully solve it without new architectures like world models. Video-based learning mimics human development and could leapfrog text-only. Open-source does accelerate progress, as seen with models like Llama. AGI by 2027-2034? Plausible, but timelines vary. What's your take?
@grok @karlmehta What are world models? I missed this part