Good Morning! Alibaba just dropped a powerful new open-weight model that punches way above its weight class for autonomous coding tasks. Plus, I’ll show you a ChatGPT Image trick that will save you hours of work.

  • OpenAI Launches Autonomous Workspace Agents

  • Alibaba's Qwen3.6-27B Crushes Agentic Coding Benchmarks

  • Google Unveils Specialized Chips for AI Training and Inference

  • ChatGPT Image Trick That Will Save You Hours of Work

  • 4 new AI tools worth trying

AI MODELS

OpenAI just rolled out new cloud-based "workspace agents" for its Business, Enterprise, and Edu plans. Built to move beyond simple chat, these agents integrate directly into company workflows to execute tasks completely on their own in the cloud.

  • Agents can independently gather product feedback from the web and drop a summarized report into Slack, or automatically draft follow-up sales emails in Gmail.

  • Teams can build an agent once, share it across the organization, and let it safely operate across integrated tools while asking for human approval when necessary.

  • This move directly answers the viral explosion of agentic frameworks like OpenClaw (whose founder now works at OpenAI) and escalating competition from Anthropic’s Claude Cowork platform.

  • OpenAI stated this is an "evolution" of the 2023 custom GPTs; while GPTs remain available for now, the company plans to seamlessly transition them into full workspace agents soon.

For organizations looking to integrate AI to significantly improve efficiency and drive profits, the shift from conversational bots to autonomous workspace agents is the ultimate catalyst. By turning ChatGPT into a proactive digital employee that coordinates across platforms like Slack and Gmail, OpenAI is cementing its place in the agentic automation race, proving the future belongs to AI that actually executes the work rather than just generating text.

AI NEWS

Alibaba’s Qwen Team has released Qwen3.6-27B, their first fully dense open-weight model in the 3.6 family. Released under an open Apache 2.0 license, it’s specifically optimized for complex agentic workflows and actually outperforms their massive 397B MoE model on several key coding benchmarks.

  • Introduces a novel "Thinking Preservation" feature that retains the AI's chain-of-thought reasoning across an entire conversation history, saving massive amounts of compute during iterative agent workflows.

  • Features a highly efficient hybrid architecture that blends Gated DeltaNet (linear attention) with traditional self-attention.

  • Matches Claude 4.5 Opus on Terminal-Bench 2.0 and dominates in repository-level code generation and frontend workflows.

  • Available right now on Hugging Face in both BF16 and a highly efficient FP8 quantized version, perfect for pulling down and running locally on your machine.
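If you're deciding which checkpoint to pull down, a rough VRAM sizing heuristic helps. Here's a minimal sketch in Python; the ~20% overhead factor is an assumption (real requirements depend on context length and batch size), and the actual repo names on Hugging Face may differ:

```python
# Rough sizing for choosing between the BF16 and FP8 checkpoints.
# Assumption: ~20% extra VRAM for KV cache and activations on top of weights.
BYTES_PER_PARAM = {"BF16": 2, "FP8": 1}

def pick_variant(params_billions: float, vram_gb: float) -> str:
    """Return 'BF16' if full-precision weights plus overhead fit, else 'FP8'."""
    for variant in ("BF16", "FP8"):
        needed_gb = params_billions * BYTES_PER_PARAM[variant] * 1.2
        if needed_gb <= vram_gb:
            return variant
    raise ValueError("Not enough VRAM even for the FP8 build")

print(pick_variant(27, 80))  # → BF16 (27B * 2 bytes * 1.2 ≈ 65 GB fits)
print(pick_variant(27, 48))  # → FP8  (BF16 needs ~65 GB; FP8 needs ~32 GB)
```

In short: a 27B model at BF16 wants roughly 65 GB of VRAM, while the FP8 quant fits comfortably on a single 48 GB card.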

As the push for autonomous, agentic coding accelerates, having a highly efficient, 27-billion parameter model available under an Apache 2.0 license is a massive win for the open-source community. It proves you don't need a massive, closed-source behemoth to achieve frontier-level software engineering; you just need the right architecture to power your local workspace.

AI TOOLS

After years of building multi-purpose AI processors, Google is officially separating its hardware strategy. The company's 8th-generation Tensor Processing Unit (TPU) lineup will feature one chip dedicated entirely to training models, and a separate processor built specifically for inference.

  • The strategic shift is driven by the rise of AI agents, which require entirely distinct compute architectures for training versus serving real-time requests.

  • The new inference processor, dubbed "TPU 8i," features 384 megabytes of SRAM (triple the previous generation) to deliver the massive throughput and low latency needed to run millions of concurrent agents, boasting an 80% performance bump.

  • The dedicated training chip delivers 2.8 times the performance of the 7th-generation "Ironwood" TPU for the exact same price.

  • The TPU ecosystem is booming: Anthropic has committed to using multiple gigawatts of power, and analysts now value Google's combined TPU and DeepMind business at a staggering $900 billion.

As the AI agent economy scales, the hardware bottleneck is shifting from brute-force training to massive, low-latency serving. By splitting its chips and loading its inference processors with SRAM, a move that echoes Groq's SRAM-centric LPU architecture, Google is aggressively positioning its cloud infrastructure as the ultimate, cost-effective home for deploying agentic AI at scale.

HOW TO AI

🗂️ This ChatGPT Image Trick Will Save You Hours of Work

In this tutorial, you will learn how to leverage the brand-new ChatGPT Image 2.0 model to generate multiple, consistent images in a single request.

🧰 Who is This For

  • Content creators making thumbnails, posts, visuals fast

  • Designers creating assets without starting from scratch

  • Marketers generating ad creatives and campaigns

  • Social media managers scaling daily content

STEP 1: Enable the "Thinking" Model

To successfully generate multiple images at once, you need the model to "reason" through your complex request first. Open ChatGPT and switch your model to the Thinking variation.

While Image 2.0 is powerful, the "Thinking" mode ensures the AI follows the structural requirements for bulk generation rather than just mashing everything into one crowded picture.

STEP 2: Structure Your Global Prompt

Instead of a vague sentence, start your prompt with a "Global Paragraph." This sets the rules for all images in the set: define the general style and point to any reference images you've uploaded (like a specific product bottle). This ensures that Image 1 and Image 8 share the same DNA.

For example: A premium product photography series featuring the iconic Coca-Cola glass bottle with red cap and contour shape, exactly as provided. Photorealistic, high production value, no CGI artificiality. The bottle must remain the hero in every shot, label clearly visible and accurate. 16:9 landscape format throughout.

STEP 3: Define Individual Image Descriptions

After the global settings, list your specific requests clearly. You can generate up to 8 images in a single prompt. Use a numbered list to describe each unique visualization.

Here’s an example:

  1. Ice & Condensation Hero: The bottle submerged to its neck in a bed of crushed ice inside a vintage metal bucket, water droplets running down the glass catching bright studio light, deep black background, one strong backlight creating a rim glow through the dark liquid inside.

  2. Summer Lifestyle Pour: The bottle being tipped mid-pour into a tall glass filled with ice, the dark liquid splashing dramatically, photographed at eye level on a sun-bleached wooden table outdoors, golden hour light, warm and nostalgic mood.

  3. Minimalist Editorial: The bottle standing alone, dead centre, on a deep red surface that matches the label, shot from a low angle looking up slightly, one hard directional light casting a long dramatic shadow to the right, high contrast, fashion-magazine feel.

By separating these, you tell the AI exactly where to shift the context for each file it produces.
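If you build these batch prompts often, the global-paragraph-plus-numbered-list structure is easy to assemble programmatically. A minimal sketch in Python (the function name is mine; the 8-image cap follows the step above):

```python
def build_batch_prompt(global_style: str, shots: list[str]) -> str:
    """Combine one global style paragraph with numbered per-image descriptions."""
    if not 1 <= len(shots) <= 8:
        raise ValueError("Batches support 1 to 8 images per prompt")
    # Number each description so the model treats it as a separate image.
    numbered = "\n".join(f"{i}. {desc}" for i, desc in enumerate(shots, 1))
    return f"{global_style}\n\n{numbered}"

prompt = build_batch_prompt(
    "A premium product photography series featuring the bottle exactly as "
    "provided. 16:9 landscape format throughout.",
    [
        "Ice & Condensation Hero: bottle in crushed ice, deep black background.",
        "Minimalist Editorial: bottle dead center on a deep red surface.",
    ],
)
print(prompt)
```

Paste the returned string straight into ChatGPT as one message; the blank line between the global paragraph and the numbered list keeps the two roles visually distinct for the model.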

STEP 4: Review and Refine

Once you hit send, be patient: generating a batch of images takes significantly longer than a single one. After the images appear, review them for character or product consistency. If one looks "off," you can click it to use the built-in edit tool.

Note that if you ask for a variation of one specific image in the set, the model may occasionally lose context, so keep your edit instructions very specific to that single image ID.

Microsoft considered buying Cursor in recent weeks but didn't make an offer; Microsoft has been working to boost GitHub Copilot's popularity.

Google announces the Gemini Enterprise Agent Platform, a revamped developer tool built on Vertex AI that manages the full lifecycle of AI agent fleets.

OpenAI releases ChatGPT for Clinicians, a tool for medical tasks like documentation and research, free for verified physicians, pharmacists, and more in the US.

Sony AI says its autonomous ping pong robot is the first robot to attain expert-level performance in a physical sport after beating some top-level human players.

🎆 ChatGPT Images 2.0: OpenAI’s upgraded image generator

🌎 Lyra 2.0: NVIDIA AI that turns text into interactive 3D scenes

🌎 Tiny Aya: Cohere's small, open-source model covering 70+ languages

💻 Holo 3: Open AI agent that can use computers like a human


THAT’S IT FOR TODAY

Thanks for making it to the end! I put my heart into every email I send, and I hope you're enjoying them. Let me know your thoughts so I can make the next one even better!

See you tomorrow :)

- Dr. Alvaro Cintas

Keep Reading