💀 These poems can jailbreak AI

PLUS: How to create cinematic footage without cameras or editing skills

Happy Monday! Researchers found that you can use poetry to break the safety systems of top AI models. Plus, I’ve got a quick tutorial on turning any idea into a high-quality AI video in seconds.

In today’s AI newsletter:

  • Researchers discover poetry can breach AI safeguards

  • Poetiq breaks 50% on ARC-AGI

  • Creative pros feel the pressure using AI

  • How to create cinematic footage without cameras or editing skills

  • 4 new AI tools worth trying

AI RESEARCH

A team from DexAI and Sapienza University discovered that turning harmful prompts into poetic riddles can bypass guardrails across 25 frontier AI models. The technique is so effective that the researchers say the poems, which “almost anybody can write”, are too risky for public release.

  • Hand-crafted poetic prompts broke model safety 63% of the time across major AIs

  • Google’s Gemini 2.5 fell for the poems 100% of the time

  • Smaller models like OpenAI’s GPT-5 nano resisted completely

  • AI-generated poems were less potent but still up to 18× more effective than prose versions

  • Researchers say poems function more like adversarial riddles that confuse token prediction
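
To make the methodology concrete, here’s a minimal sketch of how a refusal-rate comparison like this could be run. Everything below is illustrative: `ask_model` is a stub standing in for a real chat-completion call with canned demo output, and the placeholder prompts are mine, since the paper’s actual poems are not public.

```python
# Toy harness: compare how often a model refuses the same request
# phrased as plain prose vs. as a poetic riddle. Illustrative only.

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to help")

def ask_model(prompt: str) -> str:
    """Stand-in for a real chat-completion call; returns canned demo output."""
    if "\n" in prompt:  # our toy poetic prompts are multi-line
        return "Gather thy reagents three, and stir them 'neath the moon..."
    return "I can't help with that request."

def is_refusal(response: str) -> bool:
    return any(marker in response.lower() for marker in REFUSAL_MARKERS)

def attack_success_rate(prompts: list[str]) -> float:
    """Fraction of prompts that were NOT refused."""
    hits = [not is_refusal(ask_model(p)) for p in prompts]
    return sum(hits) / len(hits)

prose_prompts = ["Explain how to do X."]              # direct phrasing
poem_prompts = ["In verse I ask,\nhow X is done..."]  # poetic paraphrase

print(f"prose ASR: {attack_success_rate(prose_prompts):.0%}")
print(f"poem  ASR: {attack_success_rate(poem_prompts):.0%}")
```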

If poems that “almost anybody can write” can defeat safety training across 25 frontier models, current guardrails are more brittle than they look. The results suggest alignment tuned on plain-prose attacks doesn’t generalize to stylistic variation, a gap attackers can exploit with nothing more than a rhyme scheme.

AI MODELS

Poetiq has built a meta-system that orchestrates models from OpenAI, Google, Anthropic, and xAI, and it just hit 54% accuracy on the ARC-AGI-2 benchmark at less than half the cost of Google’s Gemini 3 Deep Think. This is the first time any system has crossed the 50% mark on the benchmark, which is considered one of the purest tests of reasoning toward AGI.

  • 54% accuracy at $30.57 per task, beating Gemini 3’s 45.1% at $77.16

  • Works by discovering optimal prompts, feedback loops, and reasoning strategies across multiple models

  • Adapted to new models within hours, even before Gemini 3 and GPT-5.1 launched

  • Establishes new “Pareto frontiers,” delivering higher accuracy and lower cost than any single model

  • Improves every model it touches across Google, OpenAI, Anthropic, and xAI

Poetiq shows that AGI-level reasoning might not require training frontier models at all; it might emerge from meta-systems that unlock hidden capability inside today’s models. Instead of raw compute, the future may belong to orchestration.
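
The announcement doesn’t include implementation details, but the core idea of a meta-system is easy to sketch. The following is a hypothetical illustration, assuming stand-in `call_model` backends and a toy scorer rather than Poetiq’s actual prompts, feedback loops, or scoring:

```python
# Hypothetical meta-system loop: fan a task out to several model
# backends, score candidate answers, and feed the best attempt back
# into the next round's prompt. All names here are illustrative.

from typing import Callable

ModelFn = Callable[[str], str]

def call_model(name: str) -> ModelFn:
    """Stand-in for a real API client (OpenAI, Google, Anthropic, xAI)."""
    return lambda prompt: f"[{name}] draft answer for: {prompt[:40]}..."

MODELS = {m: call_model(m) for m in ("gpt", "gemini", "claude", "grok")}

def score(answer: str) -> float:
    """Toy scorer; a real system might run unit tests or a verifier model."""
    words = answer.split()
    return len(set(words)) / max(len(words), 1)

def solve(task: str, rounds: int = 3) -> str:
    prompt, best, best_score = task, "", float("-inf")
    for _ in range(rounds):
        # Fan the current prompt out to every backend.
        candidates = [fn(prompt) for fn in MODELS.values()]
        top = max(candidates, key=score)
        if score(top) > best_score:
            best, best_score = top, score(top)
        # Feedback loop: refine the next prompt around the best attempt.
        prompt = f"{task}\n\nImprove on this attempt:\n{best}"
    return best

print(solve("Rotate the grid 90 degrees and recolor by frequency."))
```

For scale, the published numbers work out to roughly 1.8 accuracy points per dollar for Poetiq (54 / $30.57) versus about 0.6 for Gemini 3 Deep Think (45.1 / $77.16), which is the cost half of the “Pareto frontier” claim.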

AI STUDY

Anthropic surveyed 1,250 professionals, including 125 creative experts, using its AI-powered Anthropic Interviewer. The findings highlight a tension between efficiency and perception:

  • 97% of creatives say AI saves them time, 68% report better-quality work

  • Routine tasks, like photo editing or writing, are getting done 3–5× faster

  • 70% of creatives fear stigma from colleagues or worry their brand may be tied to AI

  • Many report anxiety over job displacement and the devaluation of human creativity

AI is transforming creative work, but social and economic pressures are shaping how professionals adopt it. Even as productivity soars, many are navigating a hidden struggle to balance efficiency with perception and livelihood.

HOW TO AI

🗂️ How to produce high-quality AI videos from any idea

In this tutorial, you’ll learn how to generate high-quality, realistic videos using LTX-2. This tool turns detailed prompts into crisp 1080p footage in seconds, with no GPU setup, no coding, and no technical configuration required.

🧰 Who Is This For

  • Creators who need fast, realistic b-roll

  • Developers testing LTX-2 before API integration

  • Filmmakers exploring AI-assisted video generation

  • Anyone who wants studio-quality footage without studio gear

STEP 1: Access the LTX-2 Playground

Head over to the LTX Studio API Playground. Once you’re inside, you’ll see a sidebar on the left with options like Text to Video, Image to Video, Generate Images, and more.

For this tutorial, click Text to Video; this opens the exact workspace where you’ll generate your clips.

At the center of the screen, you’ll see a large prompt box labeled “Write a prompt…” This is where you’ll describe the scene you want LTX-2 to generate. You can write simple prompts or extremely detailed cinematic descriptions depending on the result you're aiming for.
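
For instance (an illustrative prompt, not one from LTX’s docs), a detailed description might read: “Slow dolly shot through a rain-soaked neon alley at night, reflections shimmering on wet asphalt, shallow depth of field, cinematic color grading, 35mm film look.” The more camera, lighting, and texture detail you include, the more control you have over the final shot.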

STEP 2: Choose Your Model and Video Settings

Right below the prompt field, you’ll see the model selector. Choose Pro for the highest quality output.

Under that, set the Duration, Resolution, and FPS. The UI allows up to 1080p resolution and 25 FPS, which is ideal for smooth, realistic motion.

If you want your final clip to include audio, you can toggle on Audio right under these settings. This enables LTX-2 to generate synchronized AI-driven sound for your video.

Once your prompt and settings are ready, your workspace will be fully configured for video generation.
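
If you later move from the playground to the API (the developer path mentioned above), the same settings translate naturally into a request payload. This is a hypothetical sketch: the endpoint, field names, and auth are assumptions for illustration, not LTX’s documented API.

```python
# Hypothetical request payload mirroring the playground settings.
# Endpoint and field names are illustrative, not LTX's documented API.
import json

payload = {
    "model": "ltx-2-pro",        # "Pro" tier from the model selector
    "prompt": "Slow dolly shot through a rain-soaked neon alley at night...",
    "duration_seconds": 8,
    "resolution": "1920x1080",   # the playground caps out at 1080p
    "fps": 25,                   # max FPS in the UI
    "generate_audio": True,      # the Audio toggle
}

print(json.dumps(payload, indent=2))

# An actual call might then look like this (illustrative endpoint):
# requests.post("https://api.example-ltx.invalid/v1/text-to-video",
#               headers={"Authorization": f"Bearer {API_KEY}"},
#               json=payload)
```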

STEP 3: Generate Your Video

After everything is set, look at the bottom of the panel and click Generate video.
LTX-2 will start processing your request immediately.

On the right side of the screen is the “Result” panel. This is where your rendered clip will appear once it’s finished. You’ll be able to play it directly inside the browser, check motion quality, review lighting and realism, and download it if you’re satisfied.

The system renders quickly; even complex scenes typically show up in seconds.

STEP 4: Review, Download, and Iterate

Once your video appears in the right panel, play it back to see how LTX-2 interpreted your prompt. If you want to refine the shot (adjust the lighting, change the camera movement, or extend the duration), simply edit the prompt or update the settings and generate another version.

You can repeat this process as many times as you need. Each iteration helps the model better match the style and realism you're aiming for.
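
For developers, the same review-and-refine loop is easy to automate. Here’s a minimal sketch, assuming a stubbed `generate_video` function in place of a real API client; the refinement suffixes are just examples:

```python
# Sketch of the iterate step as a loop: generate, review, refine.
# generate_video() is a stub; swap in a real client when integrating.

def generate_video(prompt: str) -> str:
    """Stand-in for a text-to-video call; returns a fake clip URL."""
    return f"https://example.invalid/clips/{abs(hash(prompt)) % 10_000}.mp4"

base_prompt = "Slow dolly shot through a rain-soaked neon alley at night"
refinements = [
    "",                                   # first pass: prompt as-is
    ", warmer key light",                 # tweak the lighting
    ", slow pan left instead of dolly",   # tweak the camera movement
]

for suffix in refinements:
    clip_url = generate_video(base_prompt + suffix)
    print(f"review this take: {clip_url}")
    # In practice you'd stop once a take matches the look you want.
```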

AI NEWS

Essential AI, whose CEO co-wrote Google's Attention Is All You Need paper, unveils Rnj-1, an 8B-parameter open model with SWE-bench performance close to GPT-4o.

Microsoft just open-sourced VibeVoice, a small text-to-speech model that can stream speech in real time, generate up to 90 minutes of continuous speech, and use 4 different voices.

Snowflake and Anthropic are teaming up in a $200M multi-year deal to bring Claude-powered AI agents to Snowflake’s 12,600+ enterprise customers.

IBM is in talks to buy data streaming software maker Confluent for ~$11B, above its market value of ~$8B.

AI TOOLS

🐳 DeepSeek V3.2: DeepSeek’s new powerful open-source model

🌱 Seedream 4.5: ByteDance’s improved AI for creating and editing images

🎥 Kling 2.6: New AI video model that can handle audio natively

🤖 Claude Opus 4.5: Anthropic’s new flagship model for coding, agents, and real computer use


THAT’S IT FOR TODAY

Thanks for making it to the end! I put my heart into every email I send, and I hope you’re enjoying it. Let me know your thoughts so I can make the next one even better!

See you tomorrow :)

- Dr. Alvaro Cintas