🧠 Meta's chief AI scientist steps away
PLUS: How to access and use Gemini 3, Google's most powerful AI model

Good morning, AI enthusiast. One of the godfathers of AI just walked out of Meta, and he's building something he claims could spark the next revolution in machine intelligence.
In today's AI newsletter:
Yann LeCun leaves Meta to build independent AI Lab
Nvidia smashes records with $57B revenue
OpenAI launches GPT-5.1-Codex-Max
How to access and use Gemini 3, Google's most powerful AI model
AI tools & more

AI NEWS
Yann LeCun, Meta's long-time Chief AI Scientist and deep learning pioneer, is leaving the company to create an "independent entity" focused on building AI systems that understand the physical world, reason, remember, and plan. Meta will stay a partner and retain access to the lab's innovations.
Memo says LeCun wants to build AI with world understanding, persistent memory, and complex planning
Reports say he clashed internally as Meta shifted attention toward generative AI and new hires
LeCun has been openly dismissive of LLM-centered AGI efforts, calling AI safety fears "BS"
His work focuses on "world models", systems trained on videos, not text (e.g., Meta's V-JEPA-2)
Meta plans to partner with his new company, though structure and funding remain unclear

LeCun's departure from Meta signals a major split in AI research philosophy: companies chasing ever-larger LLMs versus researchers betting on embodied, physics-aware intelligence. His new lab could become a counterweight to OpenAI, Google DeepMind, and Anthropic in defining what the next era of AI looks like.
AI ECONOMY
Nvidia reported a record-breaking $57B in Q3 revenue, up 62% YoY, as demand for its AI chips continues to skyrocket. Net income hit $32B, beating Wall Street expectations across the board.
Data center revenue hit an all-time high of $51.2B, up 66% YoY
Blackwell Ultra GPUs are "off the charts," with cloud GPU inventory completely sold out
Nvidia announced AI factory + infrastructure deals totaling 5 million GPUs this quarter
Compute demand for training + inference is compounding "exponentially," says CEO Jensen Huang

Nvidia's earnings show the AI race isn't slowing; it's accelerating. With AI factories, sovereign compute deals, and GPU demand stretching years ahead, Nvidia's dominance looks less like a bubble and more like the backbone of the next computing era.
AI MODELS
OpenAI released GPT-5.1-Codex-Max, its new agentic coding model built for massive context, multi-hour tasks, and end-to-end software development. It replaces the previous Codex version across all interfaces.
Outperforms rivals with 77.9% on SWE-Bench Verified, beating Anthropic and Google
Uses 30% fewer thinking tokens while running 27â42% faster on real tasks
New "compaction" system keeps tasks alive across millions of tokens for day-long sessions
First Codex model optimized for Windows environments and complex command-line workflows

Codex-Max pushes AI coding from quick helpers to true long-haul engineering partners, capable of debugging, refactoring, and executing end-to-end tasks that used to require full human oversight.

HOW TO AI
How to Access and Use Gemini 3, Google's Most Powerful AI Model
In this tutorial, you'll learn how to access and use Gemini 3, Google's most advanced AI model with PhD-level reasoning.
🧰 Who Is This For
Anyone wanting the newest Google AI model
Developers exploring the new "vibe coding" abilities
Students & researchers needing deep reasoning
Professionals using Gemini for long documents, PDFs, or videos
Gemini Advanced users wanting full Deep Think access
STEP 1: Access Gemini 3 Based on Your Tier
Go to gemini.google.com or open the Google/Gemini app.
Depending on your account:
Free users:
You'll get Gemini 3 Flash or a base Gemini 3 model automatically.
Gemini Advanced (Ultra):
You unlock Gemini 3 Pro + Deep Think mode for complex reasoning.
Google Search users:
AI Mode (toggle at the top) now runs on a version of Gemini 3.
This ensures you're using the newest model regardless of platform.

STEP 2: Start Chatting With the New Reasoning Engine
Gemini 3 understands nuance, tone, and subtext far better than 2.5.
Just open the chat and type naturally. For example: "I'm feeling cozy and slightly gloomy today. Write something that captures that mood using rain as a metaphor."
The model captures vibe, emotion, and intention with much higher accuracy.
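If you'd rather reach the model from code than from the chat UI, the same kind of prompt can be sent through the Gemini API. Below is a minimal sketch using Google's google-genai Python SDK; the model ID "gemini-3-pro-preview" and the API-key placeholder are assumptions, so check the model list in Google AI Studio for the exact name available to your account.

```python
# pip install google-genai
from google import genai

# Create a client with an API key generated in Google AI Studio.
client = genai.Client(api_key="YOUR_API_KEY")

# Send the same "mood" prompt from the example above.
response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumed model ID; confirm in AI Studio's model list
    contents=(
        "I'm feeling cozy and slightly gloomy today. "
        "Write something that captures that mood using rain as a metaphor."
    ),
)

print(response.text)
```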
STEP 3: Use Deep Think & Multimodal Features
Deep Think (Advanced users)
At the top of the interface, open the model picker and select:
Gemini 3 Pro, or
Thinking Mode (region-dependent)
Try something complex:
"Break down the weaknesses in this business strategy, then propose three counter-moves."
You'll see a "Thinking..." phase where Gemini processes before answering.
Multimodal uploads
Gemini 3 can handle long videos, huge PDFs, and full books.
Try these:
Upload a video:
"Summarize only the last 10 minutes."
Upload a 500-page PDF:
"Find every mention of 'renewable energy' and summarize sentiment changes across chapters."
The model uses its expanded context window to analyze everything at once.
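For large documents, a similar workflow is available from code: upload the file once through the Files API, then reference it in your prompt. A rough sketch with the google-genai SDK, again assuming the "gemini-3-pro-preview" model ID and a local report.pdf standing in for your own file:

```python
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

# Upload the PDF; the Files API returns a handle you can reuse across prompts.
pdf = client.files.upload(file="report.pdf")  # hypothetical local file

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumed model ID
    contents=[
        pdf,
        "Find every mention of 'renewable energy' and summarize how the "
        "sentiment changes across chapters.",
    ],
)

print(response.text)
```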
STEP 4: Explore Dynamic Views & Vibe Coding
Dynamic Views
Ask Gemini to generate an interactive UI instead of plain text.
Try: "Show me an interactive Space Race timeline with clickable events."
Gemini generates a structured, scrollable visual layout.
Vibe Coding (for developers)
Open Google AI Studio or the Gemini coding workspace.
Describe your app in natural language: "Build a retro 1980s-style Pomodoro timer with neon green text on black."
Gemini 3 generates the logic, UI, and styling in one go, fully runnable.
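If you prefer scripting this over the AI Studio UI, a rough equivalent is to ask the API for a single self-contained HTML file and save it to disk. This is only a sketch, not Google's official vibe-coding workflow; the model ID is assumed, and the model may still wrap its output in markdown fences you'd need to strip.

```python
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

prompt = (
    "Build a retro 1980s-style Pomodoro timer as a single self-contained HTML file "
    "with neon green text on a black background. Return only the raw HTML, "
    "with no markdown formatting."
)

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumed model ID
    contents=prompt,
)

# Save the generated app and open it in a browser to try it out.
with open("pomodoro.html", "w", encoding="utf-8") as f:
    f.write(response.text)
```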


Microsoft launched Agent 365, a platform for managing, securing, and governing AI agents, with capabilities like agent registry, performance analytics, and more.
Elon Musk says xAI plans to develop a 500-megawatt data center in Saudi Arabia with state-backed AI startup Humain; the data center will rely on Nvidia chips.
OpenAI announces ChatGPT for Teachers, designed for K-12 educators and school districts, and says it will be free to K-12 educators in the US through June 2027.
Meta releases SAM 3, a model for object detection, segmentation, and tracking in images and videos, and SAM 3D, which can reconstruct objects and humans in 3D.

⨠Gemini 3: Googleâs next-gen engine for multimodal reasoning.
âď¸ Antigravity: Googleâs new AI-powered dev platform
đ¤ Grok 4.1: Faster: smarter, and upgraded for deeper reasoning
đ NotebookLM: now works with Deep Research, Sheets, Images, and PDFs.

THAT'S IT FOR TODAY
Thanks for making it to the end! I put my heart into every email I send, and I hope you're enjoying it. Let me know your thoughts so I can make the next one even better!
See you tomorrow :)
- Dr. Alvaro Cintas


