🎬 China launches Seedance 2.0

PLUS: How to generate infinite AI videos locally without scene breaks

Good Morning! ByteDance just dropped Seedance 2.0, and early testers claim it outperforms Sora 2 and Google’s Veo in real-world video generation. Plus, you’ll learn how to generate infinite AI videos locally without scene breaks.

Plus, in today’s AI newsletter:

  • ByteDance Launches Seedance 2.0 AI Video Model

  • Meta’s 'Avocado' Surpasses Top Open-Source Models

  • ChatGPT Starts Showing Ads on Cheaper Plans

  • How to Generate Infinite AI Videos Locally Without Scene Breaks

  • 4 new AI tools worth trying

AI MODELS

TikTok parent ByteDance unveiled Seedance 2.0, a new multimodal AI video generator designed to create “cinematic content” using text, images, video, and audio, now rolling out to select users on its Jimeng AI platform.

  • Generates 2K video with ~30% faster output than Seedance 1.5

  • Supports seamless video extension, precise references, and natural language control

  • Reportedly beats Sora 2 and Veo 3.1 in practical testing (per CTOL)

  • Outputs are completely watermark-free, unlike rivals

High-quality, watermark-free AI video raises the bar for creators, but also accelerates deepfake risks. As tools like Seedance get more accessible and realistic, regulators and platforms may struggle to keep pace with misuse and synthetic media abuse.

P.S. If you want to get in front of an audience of 20,000+ AI professionals and tech enthusiasts, get in touch with us here.

AI MODELS

Meta’s upcoming flagship model, Avocado, has reportedly surpassed today’s leading open-source LLMs using pretraining alone. Built inside Meta Superintelligence Labs (MSL), it’s being called the strongest pretrained model Meta has ever produced.

  • Matches or beats fully post-trained open models in reasoning, vision, and multilingual tasks

  • Achieves ~10× efficiency over Meta’s previous model (Maverick), and 100× vs Behemoth

  • Gains come from higher-quality data, deterministic training, and massive infra investment

  • First major output after Meta’s AI reset post-LLaMA 4, led by former Scale AI CEO Alexandr Wang

If Avocado’s claims hold up externally, it challenges a core assumption in AI: that post-training is where real intelligence happens. Meta is betting that better data + controlled training can beat brute-force scaling, potentially reshaping how future frontier models are built.

AI NEWS

OpenAI announced it’s officially testing ads in ChatGPT. These appear as clearly labeled “sponsored” links at the bottom of responses and are rolling out gradually.

  • Ads appear only on Free and $8/month Go plans

  • Plus ($20+), Pro, Business, Enterprise, and Education remain ad-free

  • Free users can opt out of ads by accepting fewer daily messages

  • Ads won’t appear in sensitive chats (health, mental health, politics) and don’t use chat content

This marks ChatGPT’s shift toward ad-supported AI, similar to search and social platforms. As AI tools scale, monetization pressure is rising, and paying is now the only way to guarantee a clean, ad-free experience.

HOW TO AI

🗂️ How to Generate Infinite AI Videos Locally Without Scene Breaks

In this tutorial, you’ll learn how to use Stable Video Infinity 2 Pro, a model that can extend videos endlessly while keeping characters, lighting, and motion consistent. It runs entirely on your own computer, is free to use, and removes the usual 3–4 second clip limitation that breaks most AI videos today.

đź§° Who is This For

  • Creators who want long, continuous AI-generated videos

  • Filmmakers experimenting with cinematic or anime-style scenes

  • Developers and tinkerers who like running AI locally with full control

  • Anyone tired of short, broken AI video clips

STEP 1: Download the Core Model Files

To get started, you first need the special model files that power infinite generation. These are not standard checkpoints but LoRA files designed specifically for long-term consistency.

Go to Hugging Face and search for Stable Video Infinity 2 Pro. On the model page, look for the FP16 LoRA files. You’ll usually see a High Rank and a Low Rank version; either one works fine for most setups.

Once downloaded, move the file into this folder on your system:
ComfyUI → models → loras

After placing the file there, restart or refresh ComfyUI so it can detect the new model. These LoRA files contain the logic that keeps characters and scenes stable across time, which is what makes infinite video possible.
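
If you’d rather script the download than click through the website, here’s a minimal sketch using the huggingface_hub Python library. The repo ID and filename below are placeholders, so copy the exact names from the actual model page:

```python
# Minimal download sketch using the huggingface_hub library.
# NOTE: repo_id and filename are placeholders; copy the exact
# identifiers from the model page on Hugging Face.
from pathlib import Path
from huggingface_hub import hf_hub_download

lora_dir = Path("ComfyUI/models/loras")  # adjust to your ComfyUI install path
lora_dir.mkdir(parents=True, exist_ok=True)

local_path = hf_hub_download(
    repo_id="some-org/Stable-Video-Infinity-2-Pro",   # placeholder repo ID
    filename="svi_2_pro_fp16_high_rank.safetensors",  # placeholder filename
    local_dir=lora_dir,
)
print(f"LoRA saved to {local_path}")
```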

STEP 2: Configure the Workflow Settings

Next, open a Stable Video Infinity 2 Pro workflow in ComfyUI. These are usually available on the same GitHub or Hugging Face pages as the model.

Inside the workflow, adjust the key settings to match the Pro configuration. Select SVI 2 Pro (KI version) in the model loader node. Set the resolution to 480 × 832 for your first tests, as this is much faster and ideal for debugging.

Set total steps to 8, CFG to 1, and choose a sampler like Euler Simple for quick testing or UniPC Simple for smoother motion. Make sure the workflow uses a 4-step Lightning/Lightex configuration for efficiency.

Starting small is important. Low resolution and fewer steps let you test ideas quickly before committing to heavier renders.
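
For orientation, here’s roughly how those settings map onto a KSampler node in ComfyUI’s API-format workflow JSON. Node names and wiring vary between workflows, so treat this as a sketch of where the numbers go, not a drop-in config:

```python
# Sketch of the Step 2 sampler settings as they appear in a ComfyUI
# API-format workflow. Node IDs and connections are workflow-specific.
ksampler_node = {
    "class_type": "KSampler",
    "inputs": {
        "steps": 8,               # total steps
        "cfg": 1.0,               # CFG = 1
        "sampler_name": "euler",  # or "uni_pc" for smoother motion
        "scheduler": "simple",    # "Euler Simple" = euler sampler + simple scheduler
        "denoise": 1.0,
        "seed": 42,
        # "model", "positive", "negative", and "latent_image" are links
        # to other nodes and are omitted in this sketch.
    },
}

# First-pass test resolution from the tutorial; scale up later (see Step 4).
width, height = 480, 832
```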

STEP 3: Control the Story With Multi-Prompting

This is where Stable Video Infinity really stands out. Instead of using one prompt for the entire video, you break the motion into segments.

Upload your starting image, which acts as the first frame of the video. Then locate the multiple prompt input boxes in the workflow. Prompt 1 describes the first few seconds, Prompt 2 describes what happens next, and Prompts 3 and 4 continue the sequence.

Make sure the Image Batch Extend with Overlap node is enabled. This overlap blends the end of one segment into the beginning of the next, preventing jump cuts and making the entire video feel like a single continuous shot.

This method lets you guide motion, expressions, and camera flow while maintaining consistency across the entire video.
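
To see why the overlap matters, here’s a toy sketch of the idea behind that node: the last few frames of one segment are crossfaded into the first few frames of the next, so the seam never lands on a hard cut. This illustrates the concept only; it’s not the node’s actual implementation:

```python
import numpy as np

def blend_segments(seg_a: np.ndarray, seg_b: np.ndarray, overlap: int) -> np.ndarray:
    """Crossfade the last `overlap` frames of seg_a into the first
    `overlap` frames of seg_b. Frames are arrays shaped
    (num_frames, height, width, channels)."""
    # Linear fade weights running 0 -> 1 across the overlap window.
    w = np.linspace(0.0, 1.0, overlap).reshape(-1, 1, 1, 1)
    seam = (1.0 - w) * seg_a[-overlap:] + w * seg_b[:overlap]
    # seg_a minus its tail, the blended seam, then the rest of seg_b.
    return np.concatenate([seg_a[:-overlap], seam, seg_b[overlap:]], axis=0)

# Example: join two 81-frame segments with an 8-frame overlap.
a = np.random.rand(81, 64, 64, 3)
b = np.random.rand(81, 64, 64, 3)
video = blend_segments(a, b, overlap=8)
print(video.shape)  # (154, 64, 64, 3): one continuous clip, no jump cut
```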

STEP 4: Generate, Review, and Scale Up

Once everything is set, click Queue Prompt to generate the video. Watch the output carefully and check whether faces, lighting, and framing remain consistent from start to finish.

If motion feels slow or stiff, try switching the sampler from Euler Simple to UniPC Simple for smoother transitions. Once you’re happy with how the video looks, increase the resolution to 1280 × 720 and run it again for a cleaner, sharper result.

At this point, you can keep extending the video endlessly, adding more prompts and scaling quality as needed, all locally, with no limits and no usage caps.
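
And if you’d rather drive renders from a script than click Queue Prompt, ComfyUI exposes a local HTTP API. A minimal sketch, assuming the default server address (127.0.0.1:8188) and a workflow you’ve exported via “Save (API Format)” in ComfyUI:

```python
# Queue a render against a locally running ComfyUI server.
# Assumes the default address (127.0.0.1:8188) and a workflow exported
# with "Save (API Format)" from ComfyUI.
import json
import urllib.request

with open("svi_2_pro_workflow_api.json") as f:  # hypothetical filename
    workflow = json.load(f)

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # includes a prompt_id you can use to track the job
```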

QUICK NEWS

An eight-month study at a US tech company finds AI tools didn’t reduce work but intensified it, as employees worked faster and took on a broader range of tasks.

Discord says it will roll out age verification globally from March for access to some content; all accounts will have a “teen-appropriate experience by default”.

Goldman Sachs revealed it has been working with Anthropic over the last six months to build AI agents that automate accounting, compliance, and client onboarding.

xAI co-founder Igor Babuschkin praised Claude Opus 4.6’s physics capabilities, saying that a “Claude Code moment for research” may be approaching.

AI TOOLS

🧑‍💻 Claude Opus 4.6: Built for long, serious work in coding and research

⚙️ GPT-5.3 Codex: Faster coding model with strong reasoning

🤖 OpenAI Frontier: Enterprise platform for building and managing AI agents

🔎 Model Council: Perplexity tool to query multiple AI models at once

Which image is real?


THAT’S IT FOR TODAY

Thanks for making it to the end! I put my heart into every email I send, and I hope you’re enjoying it. Let me know your thoughts so I can make the next one even better!

See you tomorrow :)

- Dr. Alvaro Cintas