Meta’s Chief AI Scientist Outlines New Direction Beyond LLMs
Meta’s Chief AI Scientist Yann LeCun revealed the company’s research unit is developing AI architectures fundamentally different from today’s Large Language Models (LLMs), aiming to create systems with a deeper understanding of physical reality and better reasoning capabilities.
Speaking at the AI Alliance’s Global Leadership Reception, LeCun explained that Meta’s Fundamental AI Research (FAIR) team is focusing on what he terms “objective-driven AI” and “world models” that enable systems to predict consequences of actions rather than simply predicting the next word in a sequence.
“LLMs are really great, they’re useful for a lot of stuff, but they have to be trained for everything they need to do. They cannot do something new without being trained for it,” LeCun said.
New Architecture
LeCun identified four essential characteristics missing from current AI systems: understanding the physical world, having persistent memory, reasoning effectively, and planning complex actions.
While companies have implemented workarounds by bolting additional systems onto LLMs, LeCun argued these are “hacks that do not put into question the basic paradigm.”
Meta’s research involves a new architecture called JEPA (Joint Embedding Predictive Architecture) that trains systems to predict video at the level of abstract representations rather than at the pixel level.
“You don’t predict at the pixel level. You train a system to learn an abstract representation of the video so that you can make predictions in that abstract representation,” LeCun explained.
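The idea can be made concrete with a toy sketch. The code below is a hypothetical, minimal PyTorch illustration of a JEPA-style training signal, not Meta’s implementation: two encoders map context and future frames into an embedding space, and a predictor is trained to match the target embedding rather than reconstruct pixels. All module names and sizes are invented for illustration.

```python
# Toy JEPA-style objective: predict the *representation* of the future,
# not its pixels. Purely illustrative; not Meta's code.
import torch
import torch.nn as nn

class ToyJEPA(nn.Module):
    def __init__(self, dim=256, emb=128):
        super().__init__()
        self.context_encoder = nn.Sequential(nn.Linear(dim, emb), nn.ReLU(), nn.Linear(emb, emb))
        self.target_encoder = nn.Sequential(nn.Linear(dim, emb), nn.ReLU(), nn.Linear(emb, emb))
        self.predictor = nn.Sequential(nn.Linear(emb, emb), nn.ReLU(), nn.Linear(emb, emb))

    def forward(self, context_frames, future_frames):
        # Encode the observed context and the future target.
        z_context = self.context_encoder(context_frames)
        with torch.no_grad():  # target encoder is not updated by this loss here
            z_target = self.target_encoder(future_frames)
        # Loss is computed in embedding space, not pixel space.
        z_pred = self.predictor(z_context)
        return nn.functional.mse_loss(z_pred, z_target)

model = ToyJEPA()
loss = model(torch.randn(8, 256), torch.randn(8, 256))
loss.backward()
```

Because the loss lives in representation space, such a system can discard unpredictable pixel-level detail and focus on what is actually predictable about a scene, which is the motivation LeCun gives for the approach.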
Open Projects Already Deployed
Meta has released several research tools, including:
- DINO: A generic image encoder that extracts features from various image types for use in classification tasks
- SAM: The “Segment Anything Model,” which delineates objects and boundaries within images
- NLLB: “No Language Left Behind,” a translation system covering hundreds of languages, recently published in Nature (a brief usage sketch follows this list)
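To show how these releases are typically consumed, here is a short, hedged sketch of translating a sentence with an NLLB checkpoint Meta published to Hugging Face. The checkpoint name ("facebook/nllb-200-distilled-600M") is one of the published models, but the exact language-code handling shown is an assumption and may vary across Transformers versions.

```python
# Hypothetical usage sketch: English -> French with an NLLB checkpoint.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M", src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")

inputs = tokenizer("No language left behind.", return_tensors="pt")
tokens = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("fra_Latn"),  # target language code
)
print(tokenizer.batch_decode(tokens, skip_special_tokens=True))
```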
Meta reports that its AI has reached nearly one billion monthly active users across Facebook, Instagram, Messenger, and WhatsApp. According to CEO Mark Zuckerberg, the company’s focus for 2025 is “deepening the experience and making Meta AI the leading personal AI—with an emphasis on personalization, voice conversations, and entertainment.”
The tech giant reported $42.3 billion in Q1 revenue, up 16% year-over-year, and has increased its AI investment forecast to $64-72 billion for the year, up from the previous $60-65 billion projection.
Open Source Essential for Global AI Diversity
LeCun emphasized that open-source AI platforms are crucial for preserving cultural diversity and democratic values as AI assistants increasingly mediate information access.
“In the future, every one of our interactions will be mediated by assistants, and we cannot afford that all of this information be controlled by a handful of companies,” LeCun stated. “We need assistants to speak every language in the world, understand every culture, every value system.”
He envisions training open-source platforms in distributed data centers worldwide, with regions contributing data they may be unwilling to share externally while still participating in building common models.
Rather than focusing on post-training safeguards, LeCun advocated for systems with guardrails built into their objective functions, similar to how laws constrain human behavior.
“By construction, the only thing they can do is produce a sequence of actions that do not violate those guardrails and still satisfy the objective,” said LeCun. “Those things would be intrinsically safe if we could design the guardrails.”
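As a rough illustration of that idea, the toy sketch below considers only action sequences that pass hard guardrail checks and then optimizes the task objective among them. The actions, cost function, and guardrail here are invented placeholders, not anything from Meta’s research.

```python
# Toy "guardrails in the objective" sketch: only guardrail-satisfying plans
# are ever candidates; among those, the cheapest plan is chosen.
from itertools import product

ACTIONS = ["wait", "move", "grasp"]

def task_cost(plan):
    # Hypothetical stand-in for a learned world model scoring a plan.
    return sum(1 for a in plan if a == "wait")

def guardrails_ok(plan):
    # Hard constraint, e.g. "never grasp before having moved".
    return not any(a == "grasp" and "move" not in plan[:i] for i, a in enumerate(plan))

def plan(horizon=3):
    feasible = [p for p in product(ACTIONS, repeat=horizon) if guardrails_ok(p)]
    return min(feasible, key=task_cost)

print(plan())  # only sequences that satisfy the guardrail are ever considered
```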
David Adler is an entrepreneur and freelance blog post writer who enjoys writing about business, entrepreneurship, travel and the influencer marketing space.
