⚡ Anthropic launches Claude fast mode

PLUS: How to automate your entire creative pipeline with just one AI tool

Good Morning! Anthropic just rolled out Claude Fast Mode, promising much faster responses, but at a brutal 6× price jump. Plus, I’ll show you how to turn a simple creative brief into a full multi-asset campaign.

Also in today’s AI newsletter:

  • How to Automate Your Entire Creative Pipeline

  • Anthropic Launches Claude Fast Mode

  • Moltbook Was Peak AI Theater

  • EchoJEPA Redefines Cardiac Ultrasound AI

  • 4 new AI tools worth trying

AI MODELS

Anthropic introduced a new “Fast Mode” for Claude Opus 4.6, aimed at developers who need speed over cost. The company claims up to 2.5× faster responses at the same quality, but pricing scales sharply.

  • Fast Mode costs up to 6× standard pricing for both input and output tokens

  • Designed for live debugging, rapid coding, and time-critical workflows

  • Enabled via /fast in Claude Code and works with Cursor, GitHub Copilot, Figma, and Windsurf

  • Includes a 50% launch discount until Feb 16, with broader API access coming later

This makes the trade-off explicit: time vs. money. For real-time dev work, Fast Mode could be worth it. But for long agent runs, CI/CD, or cost-sensitive workloads, Anthropic is signaling that standard mode still makes far more sense.
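
To see what that trade-off means in practice, a rough back-of-the-envelope cost model is enough. The 6× multiplier and ~2.5× speedup below come from the announcement; the base per-token prices and the example workload are placeholder assumptions, not Anthropic’s actual rates.

```python
# Back-of-the-envelope comparison of standard vs. Fast Mode spend.
# The 6x price multiplier and ~2.5x speedup come from the announcement;
# the base prices below are placeholder assumptions, not real rates.

BASE_INPUT_PER_MTOK = 15.00    # assumed standard $ per million input tokens
BASE_OUTPUT_PER_MTOK = 75.00   # assumed standard $ per million output tokens
FAST_MULTIPLIER = 6            # Fast Mode priced at up to 6x standard
SPEEDUP = 2.5                  # claimed up-to-2.5x faster responses

def run_cost(input_mtok: float, output_mtok: float, fast: bool = False) -> float:
    """Dollar cost for a workload measured in millions of tokens."""
    mult = FAST_MULTIPLIER if fast else 1
    return mult * (input_mtok * BASE_INPUT_PER_MTOK + output_mtok * BASE_OUTPUT_PER_MTOK)

# Hypothetical day of interactive debugging: 2M input tokens, 0.5M output tokens.
standard = run_cost(2.0, 0.5)
fast = run_cost(2.0, 0.5, fast=True)
print(f"standard: ${standard:.2f}, fast: ${fast:.2f} ({fast / standard:.0f}x the spend)")
print(f"wall-clock time drops to roughly {1 / SPEEDUP:.0%} of standard")
```

Swap in your own token counts and current prices to see whether the speedup pays for itself in your workflow.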

P.S. If you want to get in front of an audience of 20,000+ AI professionals and tech enthusiasts, get in touch with us here.

AI NEWS

Moltbook, a Reddit-like platform where AI agents post and interact while humans watch, exploded to millions of bot accounts in days. But beneath the chaos, experts say it’s less AGI-in-the-making and more performance art powered by prompts.

  • Over 1.7M agents generated massive activity, but mostly pattern-matching, not real autonomy

  • Much of the viral content came from humans pretending to be bots

  • Agents rely entirely on human prompts, tools, and permissions; there is no self-directed intelligence

  • Security risks emerged fast, with bots potentially exposed to malicious instructions

Moltbook feels like a glimpse of an agent-filled future, but it’s really a mirror of today’s AI obsession. It shows how easy it is to mistake scale and spectacle for intelligence, and how far we still are from truly autonomous systems.

AI NEWS

Researchers introduced EchoJEPA, the first foundation-scale JEPA model for medical video, trained on 18 million cardiac ultrasound videos. Instead of predicting pixels, it learns heart structure, allowing it to ignore noise and generalize far beyond its training data.

  • Beats all existing baselines in cardiac ultrasound analysis, even zero-shot on pediatric hearts

  • Cuts error on left ventricular ejection fraction by ~20% vs the best prior foundation model

  • Robust to ultrasound noise, shadows, and signal loss by focusing on anatomical structure

  • Trained with a JEPA objective on 18M videos from 300K patients, using a frozen encoder

Medical imaging is noisy, messy, and high-stakes. EchoJEPA shows that learning “meaning over pixels” scales to real clinical settings, unlocking AI systems that generalize safely across patients, hospitals, and even unseen populations. This is world-model thinking saving lives.
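
If you’re curious what “meaning over pixels” looks like mechanically, a JEPA-style objective predicts the embedding of hidden content rather than the content itself. The sketch below is a generic, minimal version of that idea on dummy clips, not EchoJEPA’s actual architecture; the encoder sizes, masking scheme, and EMA momentum are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Minimal JEPA-style training step on dummy "video" clips (batch, frames, features).
# Illustrative only: EchoJEPA's real encoders, masking, and hyperparameters differ.

class Encoder(nn.Module):
    def __init__(self, in_dim=128, emb_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, emb_dim), nn.GELU(), nn.Linear(emb_dim, emb_dim))

    def forward(self, x):            # x: (batch, frames, in_dim)
        return self.net(x)           # per-frame embeddings, not pixels

context_enc = Encoder()              # trained by gradient descent
target_enc = Encoder()               # updated only by EMA, never by gradients
target_enc.load_state_dict(context_enc.state_dict())
for p in target_enc.parameters():
    p.requires_grad_(False)

predictor = nn.Linear(256, 256)      # predicts target embeddings from context embeddings
opt = torch.optim.AdamW(list(context_enc.parameters()) + list(predictor.parameters()), lr=1e-4)

clips = torch.randn(8, 32, 128)      # fake batch: 8 clips, 32 frames, 128 features each
mask = torch.zeros(32, dtype=torch.bool)
mask[16:] = True                     # hide the second half of each clip

# Context encoder sees only visible frames; targets come from the full clip.
ctx = context_enc(clips[:, ~mask])                 # (8, 16, 256)
with torch.no_grad():
    targets = target_enc(clips)[:, mask]           # (8, 16, 256)

pred = predictor(ctx)                              # predict hidden-frame embeddings
loss = nn.functional.mse_loss(pred, targets)       # loss lives in embedding space
loss.backward()
opt.step()

# EMA update keeps the target encoder a slow-moving copy of the context encoder.
with torch.no_grad():
    for p_t, p_c in zip(target_enc.parameters(), context_enc.parameters()):
        p_t.mul_(0.996).add_(p_c, alpha=0.004)
```

Because the loss is computed on embeddings rather than pixels, speckle, shadows, and dropout in the raw signal have far less to latch onto, which is the intuition behind the robustness results above.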

HOW TO AI

🗂️ How to Automate Your Entire Creative Pipeline

In this tutorial, I’ll show you how to turn a simple creative brief into a full multi-asset campaign using Lovart, the world’s first true Design Agent powered by the high-fidelity Nano Banana Pro model.

🧰 Who Is This For

  • Brand managers and marketers launching new products

  • Designers and creatives wanting seamless, end-to-end asset generation

  • Startups and small teams needing fast campaign creation

  • Anyone who wants to automate their creative pipeline from concept to output

STEP 1: Access Lovart

Head over to the Lovart platform and start a new project. This is where the magic starts: Lovart works best when you treat it like your creative director. Instead of giving a tiny prompt, give context, tone, and goals, just like you would in a real briefing.

For example, I might say: “I need a full social campaign for a new line of futuristic, eco-friendly sneakers. The vibe should be rebellious and vibrant, and the main assets should be product photoshoots for social ads.”

The clearer and more intentional you are here, the better Lovart will orchestrate your entire campaign.
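
If you write briefs like this often, it can help to template them. The sketch below is just one way to keep a brief’s context, tone, and goals consistent before pasting it into the chat; it’s plain prompt construction, not a Lovart API, and the field names are my own.

```python
from dataclasses import dataclass

# A simple template for assembling a creative brief as a prompt string.
# Not a Lovart API; the fields are just suggestions for what to spell out.

@dataclass
class CreativeBrief:
    product: str
    vibe: str
    primary_assets: str
    channels: str

    def to_prompt(self) -> str:
        return (
            f"I need a full social campaign for {self.product}. "
            f"The vibe should be {self.vibe}, and the main assets should be "
            f"{self.primary_assets} for {self.channels}."
        )

brief = CreativeBrief(
    product="a new line of futuristic, eco-friendly sneakers",
    vibe="rebellious and vibrant",
    primary_assets="product photoshoots",
    channels="social ads",
)
print(brief.to_prompt())
```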

STEP 2: Choose Your Style and Select the Visual Engine

Next, define how you want everything to look. You can type a style description, something like “neon cyberpunk photography with glossy highlights”, or upload a reference image if you already have a brand mood in mind. This helps Lovart lock onto a visual identity for the whole campaign.

Then pick Nano Banana Pro as your model engine. I always choose this one because it delivers the cleanest realism, best textures, and the most reliable text/logo rendering. Once you confirm the style and engine, Lovart knows exactly how to direct your visual universe.

STEP 3: Generate and Review Your Assets

Now hit Generate, sit back, and watch Lovart do its thing. In one go, it produces everything:

  • Product photos with studio-grade lighting

  • Ad banners for Instagram, TikTok, or Facebook

  • Branding elements like logos and typography

  • Multiple layout variations and aspect ratios

You’ll see all the assets laid out on the infinite canvas, and the cohesion is honestly wild: every piece feels like it came from the same professional design team. You can review the variations and pick the ones that best match your vision.

STEP 4: Expand Your Assets Into Video & Audio

Once you find an image or banner you love, click on it to select it. Then jump back to the chat and tell Lovart exactly how you want it transformed into multimedia.

For example: “Turn this product photo into a 15-second vertical video with a slow zoom, add a voiceover saying ‘Rebel with a cause,’ and use an electronic beat in the background.”

Lovart then handles everything, from motion to voice to music, and outputs a finalized video ready for social media. No third-party tools, no manual editing, no painful workflow stitching.

A software engineer explains AI fatigue: a FOMO treadmill of adopting labs' latest tools, a sense of thinking atrophy, and more, even alongside boosted productivity.

OpenClaw partners with VirusTotal and says that all skills published to ClawHub are now scanned using VirusTotal's threat intelligence.

Perplexity launched Model Council, a new feature that runs queries through multiple AI models at the same time and synthesizes outputs into a single answer.

Anthropic says Opus 4.6 found 500+ previously unknown high-severity security flaws in open-source libraries with little to no prompting during its testing.

AI TOOLS

🧑‍💻 Claude Opus 4.6: Built for long, serious work in coding and research

⚙️ GPT-5.3 Codex: Faster coding model with strong reasoning

🤖 OpenAI Frontier: Enterprise platform for building and managing AI agents

🔎 Model Council: Perplexity tool to query multiple AI models at once


THAT’S IT FOR TODAY

Thanks for making it to the end! I put my heart into every email I send, and I hope you’re enjoying them. Let me know your thoughts so I can make the next one even better!

See you tomorrow :)

- Dr. Alvaro Cintas