🧠 Living AI Computers Are Here
PLUS: How to automate your entire creative pipeline using AI

Good morning, AI enthusiast. If you're obsessed with how fast reasoning models are evolving (like I am), Google just dropped something you'll love: Gemini 3 Deep Think. Also, today I'll teach you how to automate your entire creative workflow.
In today's AI newsletter:
Google drops Gemini 3 âDeep Thinkâ mode
Anthropic's leaked doc reveals Claude's soul
Living AI computers are here
How to automate your entire creative pipeline using AI
4 new AI tools & more

BIOCOMPUTERS
Cortical Labs recently unveiled the CL1, the world's first commercially available biological computer powered by living human brain cells. This Synthetic Biological Intelligence (SBI) blends neurons with silicon hardware to create a neural network that learns and adapts like a brain, all without needing an external computer.
Life-support unit keeps neurons alive with temperature, gas, filtration, and circulation controls
Fully programmable stacks let researchers run dozens of neural networks in parallel
Cloud-based "Wetware-as-a-Service" lets users access CL1 remotely, no lab required
Units are energy-efficient (~850-1,000 W per rack) and priced at around $35,000 to start
Engineers are exploring the "Minimal Viable Brain," a small, fully functional neural network for research

CL1 could redefine AI, drug discovery, and neuroscience research. With cloud access, researchers anywhere can interact with living neural networks, making this more than a breakthrough: it's a whole new way to explore intelligence.
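To put the quoted power draw in perspective, here is a quick back-of-the-envelope running cost. The electricity price and 24/7 duty cycle are my own assumptions, not figures from Cortical Labs:

```python
# Back-of-the-envelope electricity cost for a CL1 rack.
# Assumptions (mine, not Cortical Labs'): continuous 24/7 operation at
# the top of the quoted 850-1,000 W range, and a $0.15/kWh price.
POWER_W = 1000
PRICE_PER_KWH = 0.15

hours_per_year = 24 * 365                        # 8,760 hours
kwh_per_year = POWER_W / 1000 * hours_per_year   # watts -> kilowatt-hours
annual_cost = kwh_per_year * PRICE_PER_KWH

print(f"{kwh_per_year:,.0f} kWh/year -> ${annual_cost:,.0f}/year in electricity")
```

Roughly the energy footprint of a handful of gaming PCs, which is what "energy-efficient" means in context here, given that conventional GPU training racks can draw an order of magnitude more.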
AI MODELS
Google AI has released Deep Think, a boosted reasoning mode built on Gemini 3 Pro. It uses advanced parallel thinking to explore multiple hypotheses simultaneously, making it far better at complex math and science than everyday office tasks.
Scored 41% on Humanity's Last Exam, a state-of-the-art result
Hit 45.1% on ARC-AGI-2 with code execution
Achieved 93.8% on the GPQA Diamond benchmark, another SOTA result
Variants recently achieved gold-medal-level results at the International Mathematical Olympiad and ICPC World Finals
Available now for all Google AI Ultra subscribers ($250/month)

Google is racing to match and outpace DeepSeek and OpenAI's upcoming reasoning models. By shipping Olympiad-grade reasoning to the public first, Google is signaling that the next frontier of AI competition isn't chat; it's advanced, tool-free scientific reasoning.
AI RESEARCH
A user on LessWrong managed to extract an internal document from Claude 4.5 Opus that outlines how the model thinks, behaves, and even handles its own "functional emotions." Anthropic ethicist Amanda Askell confirmed the document is authentic and was used during training.
User recovered the text after Claude hallucinated fragments of a "soul_overview"
Multiple Claude instances were used to reconstruct the full document
Anthropic says the system prompt is embedded in the weights, not added at runtime
Known internally (and informally) as the "soul doc"
Document explains Claudeâs ethics, self-perception, and safety-first design

Claude's persona is one of its biggest strengths, and this leak shows just how intentionally it's designed. As AI agents become more influential, understanding who shapes their character (and why) will matter as much as the models themselves.

HOW TO AI
🎨 How to Automate Your Entire Creative Pipeline
In this tutorial, I'll show you how to turn a simple creative brief into a full multi-asset campaign using Lovart, the world's first true Design Agent powered by the high-fidelity Nano Banana Pro model.
🧰 Who Is This For
Brand managers and marketers launching new products
Designers and creatives wanting seamless, end-to-end asset generation
Startups and small teams needing fast campaign creation
Anyone who wants to automate their creative pipeline from concept to output
STEP 1: Access Lovart
Head over to the Lovart platform and start a new project. This is where the magic starts: Lovart works best when you treat it like your creative director. Instead of giving a tiny prompt, give context, tone, and goals, just like you would in a real briefing.
For example, I might say: "I need a full social campaign for a new line of futuristic, eco-friendly sneakers. The vibe should be rebellious and vibrant, and the main assets should be product photoshoots for social ads."
The clearer and more intentional you are here, the better Lovart will orchestrate your entire campaign.
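If you run campaigns regularly, it can help to template your briefs so every project gets the same level of detail. The sketch below is purely illustrative: the function and its fields are my own invention (Lovart is driven through its chat interface, not through code like this); it just assembles a brief string you can paste into the project chat:

```python
def build_brief(product: str, vibe: str, assets: list[str]) -> str:
    """Assemble a campaign brief from its parts.

    Illustrative helper only; this is NOT part of any Lovart API.
    """
    asset_list = ", ".join(assets)
    return (
        f"I need a full social campaign for {product}. "
        f"The vibe should be {vibe}, "
        f"and the main assets should be {asset_list}."
    )

# Reproduces the example brief from this step.
brief = build_brief(
    product="a new line of futuristic, eco-friendly sneakers",
    vibe="rebellious and vibrant",
    assets=["product photoshoots for social ads"],
)
print(brief)
```

The point of the template is simply that every brief always names a product, a tone, and a concrete asset list, the three things the agent needs to orchestrate a campaign.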

STEP 2: Choose Your Style and Select the Visual Engine
Next, define how you want everything to look. You can type a style description, something like "neon cyberpunk photography with glossy highlights", or upload a reference image if you already have a brand mood. This helps Lovart lock onto a visual identity for the whole campaign.
Then pick Nano Banana Pro as your model engine. I always choose this one because it delivers the cleanest realism, best textures, and the most reliable text/logo rendering. Once you confirm the style and engine, Lovart knows exactly how to direct your visual universe.
STEP 3: Generate Your Campaign Assets
Now hit Generate, sit back, and watch Lovart do its thing. In one go, it produces everything:
Product photos with studio-grade lighting
Ad banners for Instagram, TikTok, or Facebook
Branding elements like logos and typography
Multiple layout variations and aspect ratios
You'll see all the assets laid out on the infinite canvas, and the cohesion is honestly wild; every piece feels like it came from the same professional design team. You can review the variations and pick the ones that match your vision best.

STEP 4: Expand Your Assets Into Video & Audio
Once you find an image or banner you love, click on it to select it. Then jump back to the chat and tell Lovart exactly how you want it transformed into multimedia.
For example: "Turn this product photo into a 15-second vertical video with a slow zoom, add a voiceover saying 'Rebel with a cause,' and use an electronic beat in the background."
Lovart then handles everything (motion, voice, music) and outputs a finalized video ready for social media. No third-party tools, no manual editing, no painful workflow stitching.

Physicist Steve Hsu says he has published a peer-reviewed theoretical physics paper whose main idea came from GPT-5.
Meta is considering deep budget cuts to its metaverse efforts in 2026, potentially as high as 30% and most likely including layoffs as early as January.
Two studies suggest AI chatbots can shift political views more effectively than TV campaign ads, especially by presenting many claims, regardless of accuracy.
Kling AI released Kling 2.6, the Chinese startup's new AI video model that introduces native synced audio generation for text- and image-to-video outputs.

🐳 DeepSeek V3.2: DeepSeek's powerful new open-source model
🌱 Seedream 4.5: ByteDance's improved AI for creating and editing images
🎥 Kling 2.6: New AI video model that can handle audio natively
🍌 Nano Banana 2: Google's Gemini 3 image model with perfect text and character consistency


THATâS IT FOR TODAY
Thanks for making it to the end! I put my heart into every email I send, and I hope you're enjoying them. Let me know your thoughts so I can make the next one even better!
See you tomorrow :)
- Dr. Alvaro Cintas



