🧠 Living AI Computers Are Here

PLUS: How to automate your entire creative pipeline using AI

Good morning, AI enthusiast. If you’re obsessed with how fast reasoning models are evolving (like I am), Google just dropped something you’ll love: Gemini 3 Deep Think. Also, today I’ll teach you how to automate your entire creative workflow.

In today’s AI newsletter:

  • Google drops Gemini 3 “Deep Think” mode

  • Anthropic’s leaked doc reveals Claude’s soul

  • Living AI computers are here

  • How to automate your entire creative pipeline using AI

  • 4 new AI tools & more

BIOCOMPUTERS

Cortical Labs recently unveiled the CL1, the world’s first commercially available biological computer powered by living human brain cells. This Synthetic Biological Intelligence (SBI) blends neurons with silicon hardware to create a neural network that learns and adapts like a brain, all without needing an external computer.

  • Life-support unit keeps neurons alive with temperature, gas, filtration, and circulation controls

  • Fully programmable stacks let researchers run dozens of neural networks in parallel

  • Cloud-based “Wetware-as-a-Service” lets users access CL1 remotely, no lab required

  • Units are energy-efficient (~850-1,000 W per rack) and priced at around $35,000 to start

  • Engineers are exploring the “Minimal Viable Brain,” a small, fully functional neural network for research

CL1 could redefine AI, drug discovery, and neuroscience research. With cloud access, researchers anywhere can interact with living neural networks, making this more than a breakthrough: it’s a whole new way to explore intelligence.

AI MODELS

Google AI has released Deep Think, a boosted reasoning mode built on Gemini 3 Pro. It uses advanced parallel thinking to explore multiple hypotheses simultaneously, making it far stronger at complex math and science than at everyday office tasks.

  • Scored 41% on Humanity’s Last Exam, a state-of-the-art result

  • Hit 45.1% on ARC-AGI-2 with code execution

  • Achieved 93.8% on the GPQA Diamond benchmark, another SOTA result

  • Variants recently earned gold-medal level results at the International Mathematical Olympiad and ICPC World Finals

  • Available now for all Google AI Ultra subscribers ($250/month)

Google is racing to match and outpace DeepSeek and OpenAI’s upcoming reasoning models. By shipping Olympiad-grade reasoning to the public first, Google is signaling that the next frontier of AI competition isn’t chat; it’s advanced, tool-free scientific reasoning.

AI RESEARCH

A user on LessWrong managed to extract an internal document from Claude 4.5 Opus that outlines how the model thinks, behaves, and even handles its own “functional emotions.” Anthropic ethicist Amanda Askell confirmed the document is authentic and was used during training.

  • User recovered the text after Claude hallucinated fragments of a “soul_overview”

  • Multiple Claude instances were used to reconstruct the full document

  • Anthropic says the system prompt is embedded in the weights, not added at runtime

  • Referred to internally as the “soul doc”

  • Document explains Claude’s ethics, self-perception, and safety-first design

Claude’s persona is one of its biggest strengths, and this leak shows just how intentionally it's designed. As AI agents become more influential, understanding who shapes their character (and why) will matter as much as the models themselves.

HOW TO AI

đŸ—‚ïž How to Automate Your Entire Creative Pipeline

In this tutorial, I’ll show you how to turn a simple creative brief into a full multi-asset campaign using Lovart, the world’s first true Design Agent powered by the high-fidelity Nano Banana Pro model.

🧰 Who Is This For

  • Brand managers and marketers launching new products

  • Designers and creatives wanting seamless, end-to-end asset generation

  • Startups and small teams needing fast campaign creation

  • Anyone who wants to automate their creative pipeline from concept to output

STEP 1: Access Lovart

Head over to the Lovart platform and start a new project. This is where the magic starts: Lovart works best when you treat it like your creative director. Instead of giving a tiny prompt, give context, tone, and goals just like you would in a real briefing.

For example, I might say: “I need a full social campaign for a new line of futuristic, eco-friendly sneakers. The vibe should be rebellious and vibrant, and the main assets should be product photoshoots for social ads.”

The clearer and more intentional you are here, the better Lovart will orchestrate your entire campaign.

STEP 2: Choose Your Style and Select the Visual Engine

Next, define how you want everything to look. You can type a style description, something like “neon cyberpunk photography with glossy highlights”, or upload a reference image if you already have a brand mood. This helps Lovart lock onto a visual identity for the whole campaign.

Then pick Nano Banana Pro as your model engine. I always choose this one because it delivers the cleanest realism, best textures, and the most reliable text/logo rendering. Once you confirm the style and engine, Lovart knows exactly how to direct your visual universe.

STEP 3: Generate and Review Your Assets

Now hit Generate, sit back, and watch Lovart do its thing. In one go, it produces everything:

  • Product photos with studio-grade lighting

  • Ad banners for Instagram, TikTok, or Facebook

  • Branding elements like logos and typography

  • Multiple layout variations and aspect ratios

You’ll see all the assets laid out on the infinite canvas, and the cohesion is honestly wild: every piece feels like it came from the same professional design team. You can review the variations and pick the ones that match your vision best.

STEP 4: Expand Your Assets Into Video & Audio

Once you find an image or banner you love, click on it to select it. Then jump back to the chat and tell Lovart exactly how you want it transformed into multimedia.

For example: “Turn this product photo into a 15-second vertical video with a slow zoom, add a voiceover saying ‘Rebel with a cause,’ and use an electronic beat in the background.”

Lovart then handles everything (motion, voice, and music) and outputs a finalized video ready for social media. No third-party tools, no manual editing, no painful workflow stitching.

Physicist Steve Hsu says he has published a peer-reviewed theoretical physics paper whose main idea came from GPT-5.

Meta is considering deep budget cuts to its metaverse efforts in 2026, potentially as high as 30% and most likely including layoffs as early as January.

Two studies suggest AI chatbots can shift political views more effectively than TV campaign ads, especially by presenting many claims, regardless of accuracy.

Kling AI released Kling 2.6, the Chinese startup’s new AI video model that introduces native synced audio generation for text and image-to-video outputs.

AI TOOLS

🐳 DeepSeek V3.2: DeepSeek’s new powerful open-source model

đŸŒ± Seedream 4.5: ByteDance’s improved AI for creating and editing images

đŸŽ„ Kling 2.6: New AI video model that can handle audio natively

🍌 Nano Banana 2: Google’s Gemini 3 image model with perfect text and character consistency


THAT’S IT FOR TODAY

Thanks for making it to the end! I put my heart into every email I send, and I hope you’re enjoying them. Let me know your thoughts so I can make the next one even better!

See you tomorrow :)

- Dr. Alvaro Cintas