AI Weekly Recap (Week 36)
Plus: The most important news and breakthroughs in AI this week

Good morning, AI enthusiasts. AI just had a wild week. China's Moonshot AI unleashed a trillion-parameter coding model that's open to everyone. At the same time, Grok Imagine rolled out speech for its AI video generator, and people are already calling it “AI Vine.”


Grok Imagine just unlocked speech for its AI video generator, letting you create 6-second clips where characters actually talk. Just type the dialogue, and the app syncs it into your video.
→ Update the app → upload/create an image → add speech → done
→ Voice sync works instantly with your generated clips
→ Longer videos coming soon
🧰 Who this is useful for:
Creators making short skits, reels, or story snippets
Animators testing quick voiceover concepts
Marketers adding personality to brand content
Anyone experimenting with AI storytelling
Try it now → https://x.ai/


Ideogram just rolled out Styles, giving you access to 4.3 billion style presets. You can now upload up to 3 reference images to generate consistent visuals, create reusable custom styles, and even add stylized typography with ease. A hedged code sketch follows at the end of this section.
→ Choose from curated presets like Abstract Organic, Analog Nostalgia, Children’s Book, Bright Art
→ Upload reference images to build your own reusable styles
→ Stylized typography fully supported
🧰 Who this is useful for:
Designers needing consistent aesthetics across projects
Marketers creating branded campaigns with a signature look
Content creators wanting unique visuals for social posts
Authors & publishers adding custom illustrations with text
Try it now → https://ideogram.ai/features/styles
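
If you'd rather drive Styles from code, here's a minimal sketch. The endpoint, auth header, and field names are assumptions modeled on Ideogram's v3 generation API, not confirmed details from the Styles announcement, so check the API docs before relying on them.

```python
# Hedged sketch: generating an image with style reference images via Ideogram's API.
# Endpoint, auth header, and field names are assumptions -- verify against the docs.
import os
import requests

API_URL = "https://api.ideogram.ai/v1/ideogram-v3/generate"  # assumed endpoint

# Up to 3 reference images define the reusable style (per the feature description).
refs = ["ref1.png", "ref2.png", "ref3.png"]
files = [("style_reference_images", open(path, "rb")) for path in refs]

resp = requests.post(
    API_URL,
    headers={"Api-Key": os.environ["IDEOGRAM_API_KEY"]},  # assumed auth scheme
    data={"prompt": "Children's book cover with stylized hand-lettered title"},
    files=files,
)
resp.raise_for_status()
print(resp.json())  # typically includes URLs for the generated images
```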


Krea announced real-time video generation (12+ fps), letting you sketch on a canvas, type prompts, stream your webcam, or even stream your screen — and instantly see it transform into cinematic video.
→ Generates faster than playback for instant feedback
→ Preserves motion, identity & style across frames
→ Create by painting, prompting, or live streaming inputs
→ Built on “world model” ideas for responsive, coherent scenes
🧰 Who this is useful for:
Filmmakers & creators experimenting with live generative video
Artists sketching and instantly animating ideas
Streamers & educators adding stylized visuals in real time
Game devs & UI designers testing motion and scene concepts
Try it now → https://www.krea.ai/blog/announcing-realtime-video


Warp just unveiled Warp Code, a full suite of features that takes agent-generated code from prompt → review → edit → production. Already #1 on Terminal-Bench (52%) and top 3 on SWE-bench Verified (75.8% with GPT-5), Warp Code closes the gap between “almost right” AI code and production-ready software.
→ Top-rated coding agent with GPT-5 high reasoning
→ Built-in Code Review panel: review diffs, request changes, line-edit
→ Lightweight Code Editor: file tree, syntax highlighting, tabbed viewing
→ Projects in Warp: initialize repos with WARP.md, agent profiles & global commands (see the sketch after this list)
🧰 Who this is useful for:
Engineers frustrated with AI code that’s almost right
Teams wanting a smoother agent → prod pipeline
Startups scaling dev output with fewer bottlenecks
Developers curious about next-gen prompt-driven coding
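Try it now → https://www.warp.dev/

Curious what the WARP.md setup looks like? It's a plain Markdown file at the repo root that gives Warp's agents project context and house rules, in the same spirit as other agent config files. The layout below is a minimal sketch, not Warp's official template.

```md
<!-- WARP.md: minimal sketch of a project context file for Warp's agents. -->
<!-- Section names are illustrative, not an official schema. -->
# MyApp

## Commands
- Build: `npm run build`
- Test: `npm test -- --coverage`
- Lint: `npm run lint`

## Conventions
- TypeScript strict mode; avoid `any`.
- Every new endpoint ships with an integration test.
```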


Decart just dropped Oasis 2.0, its most advanced AI model yet. It transforms game worlds and visual styles in real time at 1080p, 30fps, with support for Minecraft mods and a web demo.
→ Swap worlds live: play Minecraft in the Swiss Alps, Burning Man, or any custom style
→ Works as both a Minecraft mod and a web demo
→ Open for everyone, SDKs & APIs available for devs/streamers
→ Join Decart’s Discord for updates, challenges & community play
🧰 Who this is useful for:
Gamers who want to reimagine Minecraft in entirely new environments
Creators designing custom maps, mods, and immersive experiences
Streamers looking for fresh, dynamic gameplay to showcase
Developers experimenting with AI-powered real-time world generation
Try it now → https://oasis2.decart.ai/demo


DeepMind just launched EmbeddingGemma, a lightweight 308M-parameter model that delivers state-of-the-art embedding performance while running fully on-device, no internet needed.
→ Ranked #1 on MTEB, the gold standard benchmark for text embeddings, among open multilingual models under 500M parameters
→ Trained across 100+ languages
→ Optimized for mobile & edge devices
→ Plug-and-play with Hugging Face, LlamaIndex, LangChain & more (sketch below)
🧰 Who this is useful for:
Developers building offline-first or privacy-focused apps
AI startups needing multilingual semantic search
Enterprises running embeddings at the edge
Researchers experimenting with lightweight, high-performing models
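
For the Hugging Face route, here's a minimal semantic-search sketch with sentence-transformers, assuming the model id google/embeddinggemma-300m. The official weights also ship with task-specific prompts; plain encode() is used here for brevity.

```python
# Minimal on-device semantic search with EmbeddingGemma.
# Assumes model id "google/embeddinggemma-300m" and sentence-transformers >= 3.0
# (for model.similarity); everything runs locally after the one-time download.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("google/embeddinggemma-300m")

docs = [
    "Reset your password from the account settings page.",
    "Invoices are emailed on the first of each month.",
    "Two-factor authentication can be enabled under Security.",
]
query = "How do I change my password?"

doc_emb = model.encode(docs)     # one embedding per document
query_emb = model.encode(query)  # single query embedding

scores = model.similarity(query_emb, doc_emb)  # cosine similarity by default
print(docs[scores.argmax().item()])  # -> the password-reset doc
```

Try it now → https://huggingface.co/google/embeddinggemma-300m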


OpenAI added Branch Conversations to ChatGPT, making it easier to explore different directions in a chat without losing your original thread.
→ Create branches from any point in a conversation
→ Switch between threads seamlessly
→ Available now for all logged-in web users
🧰 Who this is useful for:
Students exploring multiple solutions to a problem
Writers testing different storylines or drafts
Professionals brainstorming varied approaches
Anyone who hates losing their “main thread” when experimenting
Try it now → https://chat.openai.com/


Moonshot AI has released Kimi K2, a 1-trillion-parameter model that's already outperforming ChatGPT and Claude on several coding benchmarks. With extended context and tool-calling improvements, it's built for serious dev workflows; a hedged API sketch follows at the end of this section.
→ 1T parameters, one of the largest open-source models to date
→ Enhanced coding, especially front-end & tool-calling
→ 256k token context length for massive projects
→ Integrates with Claude Code, Roo Code & other agent scaffolds
→ Open weights + code available on Hugging Face
🧰 Who this is useful for:
Developers needing a high-performance, open alternative to GPT & Claude
Teams building coding agents or dev tools
Researchers exploring trillion-scale open-source models
Enterprises requiring large context for complex codebases
Try it now → https://www.kimi.com/
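
Kimi K2 is also reachable over an OpenAI-compatible chat API, so most existing client code just needs a new base URL. In the minimal sketch below, the base_url and model id are assumptions, so confirm them in Moonshot's platform docs.

```python
# Hedged sketch: calling Kimi K2 through an OpenAI-compatible endpoint.
# base_url and model id are assumptions -- confirm in Moonshot's platform docs.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["MOONSHOT_API_KEY"],
    base_url="https://api.moonshot.ai/v1",  # assumed endpoint
)

resp = client.chat.completions.create(
    model="kimi-k2-0905-preview",  # assumed id for the latest K2 checkpoint
    messages=[
        {"role": "system", "content": "You are a careful senior front-end engineer."},
        {"role": "user", "content": "Write a debounced search input in React + TypeScript."},
    ],
    temperature=0.6,
)
print(resp.choices[0].message.content)
```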

That's it! See you tomorrow
- Dr. Alvaro Cintas
