🧠 OpenAI drops the Blueprint for ASI
PLUS: How to run Google's new Gemma 4 locally on your computer offline

Good Morning! OpenAI just dropped a sweeping 13-page policy blueprint outlining the transition to superintelligence, and the economic numbers behind their next moves are staggering. Plus, you’ll learn how to run Gemma 4 locally on your computer, offline and for free.
In today’s AI newsletter:
OpenAI Drops the Blueprint for a Post-AGI Society
Tufts Researchers Slash AI Energy Use by 100x in Robotics
Karpathy's "LLM Wiki" Replaces Traditional RAG
How to Run Gemma 4 Locally Offline
4 new AI tools worth trying

AI MODELS
OpenAI just published a 13-page "Industrial Policy for the Intelligence Age," explicitly stating that we are beginning the transition toward superintelligence. Rather than treating AGI as a distant sci-fi scenario, they frame it as an active economic shock requiring New Deal-level government ambition.
To offset the incoming wave of automated labor and job loss from upcoming models, OpenAI is pitching pilots for a 32-hour workweek, portable benefits, and a Public Wealth Fund to give citizens a direct dividend from AI's growth.
The blueprint explicitly warns of near-term risks involving biological weapons and nation-state cyberattacks, calling for strict containment playbooks for dangerous models and an international AI safety network.
They are pushing to treat AI access as basic infrastructure (like electricity) and fast-track energy grid expansion via public-private partnerships.
This drops alongside massive financial rumors: leaks project OpenAI's compute and training costs could hit an insane $125 billion by 2029, while whispers of a Q4 IPO suggest a mind-bending $1.2 trillion valuation.

Whether Sam is capping to hype up a trillion-dollar IPO or we are genuinely on track to hit AGI this year, the frontier labs are no longer playing around. By dropping a literal industrial policy document, OpenAI is trying to anchor the narrative and write the rules for the post-AGI economy before regulators do it for them.
AI NEWS
A research team at Tufts University has successfully combined neural networks with symbolic reasoning to create a "neuro-symbolic" AI. While hype on X claims this could immediately rewrite the entire AI industry, the researchers themselves have clarified that the 100x energy reduction currently applies to specific, structured robotic manipulation tasks, not to massive general-purpose language models.
Data centers already consume over 10% of U.S. power, putting immense pressure on the AI industry to find energy-efficient solutions before demand doubles by 2030.
Instead of relying entirely on pattern matching and brute-force trial and error, the Tufts team added a symbolic logic layer. This teaches the AI to break problems into steps and apply abstract rules (like shape and balance) before acting.
In a structured robotic planning puzzle (the Tower of Hanoi), the neuro-symbolic system hit a 95% success rate, crushing the 34% success rate of standard visual-language-action (VLA) models.
Because it didn't have to blindly guess its way to a solution, training took just 34 minutes instead of 36+ hours, dropping energy use to 1% of what standard models require.
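To make the "symbolic layer" idea concrete, here is a minimal sketch of how hand-coded abstract rules turn the Tower of Hanoi into a plan you derive rather than guess. This illustrates the general idea only, not the Tufts team's actual code:

```python
# Minimal sketch of symbolic planning for the Tower of Hanoi.
# Two abstract rules replace brute-force search:
#   1. A larger disk never sits on a smaller one.
#   2. Moving n disks = move n-1 aside, move the biggest, restack n-1.

def hanoi_plan(n: int, src: str, dst: str, aux: str) -> list[tuple[str, str]]:
    """Return the provably optimal move list (2^n - 1 moves)."""
    if n == 0:
        return []
    return (
        hanoi_plan(n - 1, src, aux, dst)    # clear the way
        + [(src, dst)]                      # move the largest disk
        + hanoi_plan(n - 1, aux, dst, src)  # restack on top
    )

plan = hanoi_plan(3, "A", "C", "B")
print(len(plan), "moves:", plan)  # 7 moves, zero wasted attempts
```

A planner like this never explores dead ends, which is plausibly where the 34-minute training time comes from: the neural side only has to learn perception and motor control, not the puzzle's logic.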

The rules for this specific robotic puzzle were hand-coded by experts in a simulation, meaning we can't just plug this architecture into OpenAI's models tomorrow to save the power grid. However, it proves a massive point: throwing infinite compute and data at a problem isn't the only way to solve it. If researchers can eventually scale this neuro-symbolic approach beyond narrow robotics into broader applications, it could fundamentally shift how we build reliable, energy-efficient AI in the future.
VIBE CODING
Andrej Karpathy published a viral GitHub Gist called "LLM Wiki" that amassed over 5,000 stars in just 48 hours. Instead of an AI retrieving information from scratch every time you ask a question (like traditional RAG), this pattern uses an AI agent to build and maintain a persistent, ever-growing knowledge base of interlinked markdown files.
Replaces standard RAG by compiling knowledge once and keeping it current, rather than re-discovering fragments and starting over on every single query.
When you drop a raw source (article, paper, transcript) into a folder, the AI reads it, writes a summary, updates entity pages, flags contradictions, and builds cross-references automatically.
One source can update 10 to 15 wiki pages simultaneously, meaning your explorations compound into a smarter knowledge base over time.
Highly versatile use cases: tracking personal goals, evolving a research thesis over months, building book/fan wikis, or maintaining business docs from Slack threads and meeting notes.
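
Karpathy's gist describes a prompt pattern rather than a library, but the ingest loop is easy to picture. Here is a minimal sketch of my reading of the pattern, where the wiki is a folder of markdown files and call_llm() is a hypothetical stand-in for whatever model API you use:

```python
# Minimal sketch of the "LLM Wiki" ingest step (an interpretation of the
# pattern, not Karpathy's actual code). The wiki is a folder of markdown files.
from pathlib import Path

WIKI = Path("wiki")
WIKI.mkdir(exist_ok=True)

def call_llm(prompt: str) -> str:
    raise NotImplementedError("hypothetical: wire this to your LLM provider")

def ingest(source_path: Path) -> None:
    source = source_path.read_text()
    # Ask the model which existing pages this source should touch.
    index = "\n".join(p.stem for p in WIKI.glob("*.md"))
    pages = call_llm(
        f"Existing wiki pages:\n{index}\n\n"
        f"List the pages (one per line) this source should update or create:\n\n{source}"
    ).splitlines()
    for name in pages:
        page = WIKI / f"{name.strip()}.md"
        current = page.read_text() if page.exists() else ""
        # Rewrite each page: merge new facts, keep links, surface conflicts.
        page.write_text(call_llm(
            f"Current page '{name}':\n{current}\n\nNew source:\n{source}\n\n"
            "Rewrite the page in markdown. Merge the new facts, preserve "
            "[[cross-links]], and explicitly flag contradictions."
        ))
```

One source fanning out to many page rewrites is exactly why the wiki compounds: every ingest leaves the whole graph a little more complete.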

Current AI retrieval systems like NotebookLM or ChatGPT file uploads have amnesia: they pull fragments for a single answer and then forget everything. Karpathy’s LLM Wiki flips this dynamic by treating knowledge as a compounding codebase. By offloading the tedious maintenance, indexing, and cross-referencing to an AI agent, it lets individuals and teams build massive, highly accurate knowledge graphs that actually get smarter with every new source added.

HOW TO AI
🗂️ How to Run Gemma 4 Locally with LM Studio
In this tutorial, you will learn how to instantly download and run Google’s groundbreaking open-source Gemma 4 model locally using LM Studio. You’ll explore its advanced reasoning, multimodal vision, and agentic tool-calling capabilities directly on your machine.
🧰 Who is This For
People who want to run AI locally (no cloud)
Developers experimenting with local LLMs
Privacy-focused users avoiding data sharing
Students learning how models actually run
STEP 1: Install LM Studio and Prep Your Environment
Head over to LM Studio's website and download the application to your system. This program is the ultimate hub for running open-source large language models. Before doing anything else, click to check for software updates and ensure you are running the absolute latest runtime engine.
If your runtimes are outdated, this brand-new model will not load properly, even if you have the hardware to support it.

STEP 2: Search for and Download Gemma 4
Inside LM Studio, use the search bar to look up "Gemma 4." You will see options ranging from the compact 2B all the way to the massive 31B parameter version. For a great balance of speed and intelligence, look for the Gemma 4 E4B (Effective 4 Billion) model with 8-bit quantization uploaded by Unsloth.
The "E" stands for effective, meaning the model actually has around 8 billion parameters but only activates 4 billion at any given time during inference. This brilliant architecture keeps the file size highly performant for local hardware while delivering incredibly sharp reasoning.

STEP 3: Load the Model and Test Multimodal Vision
Navigate to your main chat screen, select your newly downloaded Gemma 4 model from the dropdown, and wait for it to load into your system's video memory. Start with a simple prompt, like asking it to explain Newton's third law, to gauge its tokens-per-second speed.
Since Gemma 4 features native vision support, you can also click to upload an image. Try giving it a tricky photo, like a rare white wallaby instead of a kangaroo, to see whether its reasoning engine analyzes the actual visual evidence or just jumps to the obvious guess.
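
If you would rather script these tests than click through the chat UI, LM Studio can also serve the loaded model over an OpenAI-compatible local API (start the server from LM Studio's developer/server tab first). A minimal sketch; the model ID below is a placeholder you should copy from your own install:

```python
# Talk to LM Studio's local OpenAI-compatible server (default port 1234).
# "gemma-4-e4b" is a guessed identifier: use the exact ID LM Studio displays.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
MODEL = "gemma-4-e4b"

# Text: a quick prompt to eyeball tokens-per-second.
reply = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "Explain Newton's third law in two sentences."}],
)
print(reply.choices[0].message.content)

# Vision: send an image as a base64 data URL.
image_b64 = base64.b64encode(open("wallaby.jpg", "rb").read()).decode()
reply = client.chat.completions.create(
    model=MODEL,
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What animal is this? Look closely before answering."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(reply.choices[0].message.content)
```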

STEP 4: Utilize Tool Calling or the Cloud Alternative
Gemma 4 is built for agentic workflows, meaning it can trigger external tools. By enabling an MCP (Model Context Protocol) server, like the one from Hugging Face, you can prompt the AI to generate images, search the web, or execute local coding tasks.
If you ever hit an API limit with your tools, or if your local machine struggles and you simply want to test the massive 31B parameter version without downloading it, you can instantly spin up the flagship Gemma 4 models directly in your browser at Google AI Studio to keep experimenting for free.
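
Tool calling runs over that same local API: you describe your functions as JSON schemas, and the model decides when to invoke them. A minimal sketch with an invented get_weather tool (LM Studio's MCP integration wires up real servers for you; this just shows the underlying mechanic):

```python
# Tool-calling sketch against LM Studio's OpenAI-compatible server.
# get_weather is a made-up example tool, not a real MCP integration.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gemma-4-e4b",  # guessed ID: copy the real one from LM Studio
    messages=[{"role": "user", "content": "What's the weather in Boston right now?"}],
    tools=tools,
)

msg = resp.choices[0].message
if msg.tool_calls:  # the model chose to call our tool
    call = msg.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
else:
    print(msg.content)  # the model answered directly instead
```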

Anthropic signs an agreement with Google and Broadcom for multiple GWs of TPU capacity, and says run-rate revenue has crossed $30B, up from ~$9B at 2025's end.
Netflix launches Netflix Playground, a games app for kids aged eight and under, in the US, UK, Canada, Australia, the Philippines, and New Zealand.
OpenAI sends a letter to the California and Delaware AGs, urging them to investigate “anti-competitive behavior” by Elon Musk, ahead of a trial in April.
OpenAI, Anthropic, and Google are sharing information via the Frontier Model Forum to detect adversarial distillation attempts that violate their ToS.

💎 Gemma 4: Google’s powerful small AI model
🎥 Veo 3.1 Lite: Google’s cheaper video generation AI
🧠 PikaStream 1.0: turns AI agents into talking, face-to-face video bots
💻 Holo 3: Open-source AI agent that can use computers like a human


THAT’S IT FOR TODAY
Thanks for making it to the end! I put my heart into every email I send, and I hope you’re enjoying it. Let me know your thoughts so I can make the next one even better!
See you tomorrow :)
- Dr. Alvaro Cintas



