
Good Morning! Google is officially putting Gemini behind the wheel, replacing the clunky old Google Assistant in millions of vehicles with a much smarter, conversational AI. Plus, I’ll show you how to run local AI for private offline coding.
Google Gemini Rolls Out to Cars
Mistral Unleashes Medium 3.5 and Cloud Coding Agents
Motif's Wireless Brain Implant Cleared for Depression Trials
How to Run Local AI for Private Offline Coding
4 new AI tools worth trying

AI NEWS
Google is preparing a massive software update for vehicles with "Google built-in," completely replacing the current Google Assistant with Gemini to provide a far more natural, conversational in-car experience.
Drivers can now use natural language for tasks like getting restaurant recommendations or playlist suggestions, without having to memorize rigid voice commands.
Gemini can summarize and respond to text messages, give real-time journey updates, and answer specific questions about the car, including EV battery levels.
The software update is rolling out to eligible existing vehicles (dating back to 2020), starting with English-language users in the US.
General Motors has already confirmed the upgrade for 4 million of its vehicles (2022 models and newer), though the rollout is not exclusive to GM.
Google plans to expand to more regions soon, eventually integrating safe access to apps like Gmail, Calendar, and Google Home while driving.

For years, native car voice assistants have been frustratingly rigid and limited to basic commands. By pushing a conversational LLM directly into the infotainment systems of millions of existing cars, Google is turning the vehicle into a true, intelligent companion, and securing a massive, everyday distribution channel for Gemini that its rivals simply don't have.
AI MODELS
Mistral AI has launched Mistral Medium 3.5, a powerful new 128-billion parameter dense model with a 256k context window. Alongside the model, Mistral introduced "Work Mode" for Le Chat, taking autonomous coding agents out of local environments and dropping them straight into the cloud.
The model integrates reasoning, instruction-following, coding, and vision (handling variable image sizes) into a single powerhouse that can be deployed on as few as four GPUs.
It scored a massive 77.6% on the SWE-Bench Verified benchmark, outperforming heavyweights like Devstral 2 and Qwen3.5 in coding and agentic tasks.
Through the new Mistral Vibe CLI and Le Chat "Work Mode," users can run complex, multi-step agentic workflows in isolated cloud sandboxes that execute in parallel and ping you when finished.
Developers can teleport active coding sessions between their local machines and cloud environments without losing state or approvals.
The model's open weights are available under a modified MIT license, allowing for both self-hosted and cloud-based deployment.

Mistral is aggressively bridging the gap between local developer tools and cloud-based automation. By offering a highly capable, open-weight model paired with sandboxed cloud environments, Mistral is democratizing access to autonomous software engineering, making scalable, high-volume AI coding accessible to anyone with a browser or terminal.
AI NEWS
Motif Neurotech just received FDA clearance to begin U.S. clinical trials for a minimally invasive brain implant aimed at treating treatment-resistant depression.
The Device (DOT): The "Digitally programmable Over-brain Therapeutic" is about the size of a blueberry.
Minimally Invasive: Unlike traditional deep-brain implants, the DOT sits above the brain inside the skull (above the dura), meaning it doesn't actually penetrate delicate brain tissue.
Wireless & Battery-Free: The device relies on wireless power, completely eliminating the need for implanted batteries or internal wired connections.
The Goal: It delivers targeted electrical stimulation to specific neural circuits linked to depression, offering a new path for the nearly 3 million Americans who do not respond to standard medications or talk therapy.
Rapid Progress: Motif secured this investigational device exemption in just four years, an incredibly fast timeline for the heavily regulated brain-computer interface (BCI) space.

This technology represents a massive shift in mental health treatment, moving from chemical interventions (drugs) to targeted electrical interventions without the severe risks of invasive neurosurgery. As CEO Jacob Robinson noted, the ultimate vision is for this tech to become the "mental health equivalent of a continuous glucose monitor."

HOW TO AI
🗂️ How to Run Local AI for Private Offline Coding
In this tutorial, you will learn the simplest way to run open-source models like Gemma, Llama, and DeepSeek locally on your computer. By pairing LM Studio with VS Code, you can build websites and complete coding tasks entirely offline while keeping your projects 100% private.
🧰 Who is This For
Developers who want full privacy (no cloud, no data sharing)
Engineers working with sensitive or proprietary code
People with unreliable or no internet access
Open-source enthusiasts and self-hosting fans
STEP 1: Install and Set Up LM Studio
Navigate to LM Studio and download the version for your operating system. Once installed, use the search menu to find a model compatible with your hardware; look for the "thumbs up" icon to confirm it will run on your machine. Prioritize models with a blue hammer icon, as this indicates "tool calling" capabilities, which are essential for advanced AI tasks.

STEP 2: Launch the Local Inference Server
Go to the Developer tab in LM Studio and click "Start Server". Select your downloaded model (such as Gemma 4) to load it into your system's memory. Pro Tip: Before loading, increase the Context Window size from the 4,000-token default to a higher value that fits your system's RAM; this allows the AI to "remember" more of your code at once.
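Once the server is running, LM Studio exposes an OpenAI-compatible REST API, by default at http://localhost:1234. A minimal sketch of talking to it directly from Python (the helper names here are my own; the port and endpoint paths assume LM Studio's defaults, so verify them in the Developer tab):

```python
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # LM Studio's default local endpoint


def build_chat_request(prompt, model="local-model", max_tokens=512):
    """Build an OpenAI-style chat-completion payload for the local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.2,  # low temperature keeps code output more consistent
    }


def ask_local_model(prompt):
    """POST the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the API shape matches OpenAI's, most existing tooling can be pointed at the local server just by swapping the base URL, with no API key and no data leaving your machine.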

STEP 3: Connect VS Code via the "Continue" Extension
Open VS Code and navigate to the Extensions marketplace. Search for and install an extension called "Continue". Once installed, move the Continue icon to the Secondary Sidebar (right side) for a more natural coding experience. Click the gear icon within Continue, select LM Studio as your provider, and choose "Auto-detect" to instantly link your active local server to your IDE.
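Behind the gear-icon settings, Continue stores its setup in a config file. An entry along these lines wires it to LM Studio (field names are from my recollection of Continue's JSON config format and may differ in newer versions, so check the extension's documentation):

```json
{
  "models": [
    {
      "title": "LM Studio (local)",
      "provider": "lmstudio",
      "model": "AUTODETECT"
    }
  ]
}
```

With "AUTODETECT", Continue queries the running local server for whatever model is currently loaded, so switching models in LM Studio requires no changes in VS Code.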
STEP 4: Prompt and Iterate Locally
Now you can start a chat in the sidebar to generate or edit code. Because local models are generally smaller than cloud-based ones, use highly detailed prompts that leave no room for ambiguity. For example, ask it to "Create an index.html file for a professional portfolio website with a dark theme and a contact section". You can then instruct it to iterate on the design, making sections "prettier" or adding new features without ever sending data to an external server.
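One way to keep prompts unambiguous for smaller local models is to assemble them from an explicit requirements list rather than a single loose sentence. A small sketch of that pattern (the function and its structure are my own illustration, not part of any tool above):

```python
def build_detailed_prompt(task, requirements):
    """Compose an unambiguous prompt by listing every requirement explicitly."""
    lines = [task, "", "Requirements:"]
    lines += [f"- {item}" for item in requirements]
    return "\n".join(lines)


prompt = build_detailed_prompt(
    "Create an index.html file for a professional portfolio website.",
    ["Dark theme", "Contact section", "Responsive layout for mobile"],
)
```

For iteration, reuse the same structure and change only the requirements list, so each follow-up request stays as specific as the first.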


Researchers detail CopyFail, a now-patched Linux vulnerability that lets unprivileged users gain admin access, though many distributions have yet to ship the fix.
The persistent notion that AI disruption could create a permanent underclass signals how much collateral damage AI companies might tolerate in pursuit of AGI.
Anthropic's Claude Security, formerly Claude Code Security, is in public beta for Enterprise users; the Opus 4.7-powered tool can scan code for vulnerabilities.
Cybersecurity analysis: GPT-5.5 reaches a similar level of performance as Mythos Preview and is the second model to solve a multi-step cyberattack simulation.

🌎 DeepSeek V4: The largest open-weight model in the world
🎆 ChatGPT Images 2.0: OpenAI’s upgraded image generator
🌎 GPT-5.5: OpenAI’s most powerful AI model, built using recursive self-improvement
💻 Holo 3: Open-source AI agent that can use computers like a human


THAT’S IT FOR TODAY
Thanks for making it to the end! I put my heart into every email I send, and I hope you’re enjoying it. Let me know your thoughts so I can make the next one even better!
See you tomorrow :)
- Dr. Alvaro Cintas




