⚡️ World’s First Probabilistic AI Chip
PLUS: How to build unlimited free automations in N8N with your own server

Good morning, AI enthusiast. Extropic just revealed a new chip that could change how AI runs, using 10,000× less energy than today’s GPUs.
In today’s AI newsletter:
Extropic unveils world’s first probabilistic AI chip
Cursor launches Cursor 2.0
Nvidia becomes a $5 trillion company
How to build unlimited free automations in N8N
AI tools & more

HOW TO AI
💻 How to Build Unlimited Free Automations in N8N
In this tutorial, you’ll learn how to set up your own free N8N automation server with no limits or subscriptions. I’ll walk you through deploying N8N on Render using a Docker image so you can start building powerful automations in minutes.
🧰 Who is This For
Indie makers automating marketing or data workflows
Developers building API-based automation tools
Teams replacing expensive tools like Zapier or Make
Anyone who wants total control over their automation logic
STEP 1: Create a Free Render Account
Go to Render and sign up for a free account using your email or GitHub credentials. Once logged in, you’ll land on the Render Dashboard: your control center for deploying web services, databases, and background workers.
Render acts like a managed hosting platform that runs your apps from a Docker image. You don’t need to worry about infrastructure, ports, or complex setup. Everything is done through a clean, visual dashboard.

STEP 2: Create a New Web Service
Inside the Render Dashboard, click “New” and select “Web Service.” When asked for the source, choose “Existing Image.” In the image URL field, paste this image reference: docker.io/n8nio/n8n:latest.
Give your service a name, choose a region close to you, and select the free plan. Then add an environment variable named PORT and set its value to 5678. n8n listens on port 5678 by default, so this tells Render which port to route traffic to once the service is deployed.

STEP 3: Set Up Persistent Storage
Before deploying, scroll down to the Advanced section and find the Disk settings. Click “Add Disk.” Set the mount path to /home/node and the size to 1 GB for now. This ensures all your workflows, credentials, and settings remain safe even after redeployments. By default, n8n stores its configuration in a .n8n folder under this path.
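If you prefer configuration as code, Steps 2 and 3 can also be expressed as a render.yaml Blueprint committed to a repo. The sketch below is from memory of Render’s Blueprint schema (the runtime, image, envVars, and disk fields); verify the exact field names against Render’s current documentation before relying on it:

```yaml
services:
  - type: web
    name: my-n8n-server        # your service name (hypothetical)
    runtime: image
    image:
      url: docker.io/n8nio/n8n:latest
    plan: free
    envVars:
      - key: PORT
        value: "5678"          # n8n's default listening port
    disk:
      name: n8n-data
      mountPath: /home/node    # n8n stores its config in .n8n under this path
      sizeGB: 1
```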
STEP 4: Deploy and Launch
Once everything looks good, click Deploy Web Service. Render will now start building your environment using the Docker image you provided. This process might take a few minutes. You can watch the build logs in real time in the Render console.
When the deployment completes, you’ll be redirected to your new service page. At the top, you’ll see your unique URL, something like https://my-n8n-server.onrender.com. This is your live n8n instance, accessible from anywhere in the world.
Open that link in a browser. The first time you visit, you’ll be greeted by n8n’s setup screen. Here, you’ll create your Owner Account by entering an email, username, and password. Once done, you’ll be taken to your personal n8n dashboard, a visual interface where you can build and automate workflows using drag-and-drop nodes.
You now have a fully functional private automation server, completely free and entirely yours.
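Once your URL is live, you can also verify the deployment from a script. A minimal sketch using only Python’s standard library, assuming your instance exposes n8n’s /healthz liveness endpoint (available in recent n8n versions) and substituting the example URL from above for your own:

```python
import urllib.request


def health_url(base_url: str) -> str:
    """Build the health-check endpoint URL from the service's base URL."""
    return base_url.rstrip("/") + "/healthz"


def is_up(base_url: str, timeout: int = 10) -> bool:
    """Return True if the n8n instance answers the health check with HTTP 200."""
    try:
        with urllib.request.urlopen(health_url(base_url), timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False


# Example usage (replace with your own Render URL):
# is_up("https://my-n8n-server.onrender.com")
```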


AI BREAKTHROUGH
Extropic’s new system harnesses thermal noise, the random motion of particles, as a computational resource instead of suppressing it. The result: hardware that performs probabilistic sampling natively, rather than simulating it with massive GPU matrix math.
X0 chip proves probabilistic circuits work at room temperature
XTR-0 dev kit combines CPU, FPGA, and two X0 boards for early experiments
thrml library (Python) lets anyone simulate the chips on GPUs today
Z1 unit in development, a large-scale thermodynamic sampling system
Their new model, the Denoising Thermodynamic Model (DTM), generates data by “pulling order out of noise,” similar to diffusion models but thousands of times more efficient
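To get a feel for what “probabilistic sampling natively” means, here is a toy software analogue: Gibbs sampling a two-spin Ising model, where a pseudorandom number generator stands in for thermal noise. This is an illustration only, not Extropic’s hardware or the thrml API; the point is that the chip would draw samples like these physically rather than computing them:

```python
import math
import random


def gibbs_two_spin(J: float, n_samples: int, seed: int = 0):
    """Gibbs-sample a two-spin Ising model with energy E = -J * s1 * s2.

    Software randomness stands in for thermal noise here; thermodynamic
    hardware would produce these samples physically instead.
    """
    rng = random.Random(seed)
    s1, s2 = 1, 1
    samples = []
    for _ in range(n_samples):
        # Resample each spin from its conditional distribution given the other.
        p1 = 1.0 / (1.0 + math.exp(-2.0 * J * s2))  # P(s1 = +1 | s2)
        s1 = 1 if rng.random() < p1 else -1
        p2 = 1.0 / (1.0 + math.exp(-2.0 * J * s1))  # P(s2 = +1 | s1)
        s2 = 1 if rng.random() < p2 else -1
        samples.append((s1, s2))
    return samples


samples = gibbs_two_spin(J=1.0, n_samples=2000)
corr = sum(a * b for a, b in samples) / len(samples)
# The exact spin correlation for this model is tanh(J) ≈ 0.76;
# the sampled estimate lands nearby.
```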

AI’s biggest bottleneck isn’t compute anymore; it’s energy. Extropic’s chips aim to remove that wall entirely. If their results scale, this could be the next GPU moment, enabling powerful AI without massive data center power demands.
AI TOOLS
Cursor just launched Cursor 2.0, introducing Composer-1, its in-house Mixture-of-Experts (MoE) coding model trained with reinforcement learning, designed to be 4× faster than similar models.
Supports long-context generation and understanding
Built with custom PyTorch + Ray infrastructure for asynchronous RL at scale
Uses MXFP8 MoE kernels for efficient low-precision training
Operates across hundreds of thousands of sandboxed coding environments
Fully integrated with Cursor’s agent harness for editing, searching, and executing code
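For readers unfamiliar with Mixture-of-Experts, the core routing idea fits in a few lines of Python. This toy gate scores four hypothetical experts and keeps the top two; real MoE models like Composer-1 learn the gate and dispatch tokens to neural sub-networks, which this sketch does not attempt:

```python
import math


def softmax(xs):
    """Numerically stable softmax over a list of gate logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]


def route_top_k(gate_logits, k=2):
    """Pick the top-k experts by gate probability and renormalize their weights.

    Returns (expert_index, weight) pairs; the weights of the chosen
    experts sum to 1, so their outputs can be combined as a weighted mix.
    """
    probs = softmax(gate_logits)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    return [(i, probs[i] / norm) for i in top]


# One token's gate logits over 4 experts: experts 2 and 0 score highest.
routing = route_top_k([1.2, -0.3, 2.0, 0.1], k=2)
```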

Cursor is positioning itself at the forefront of AI-native software engineering, where coding assistants evolve from passive helpers into autonomous agents capable of writing, testing, and improving software in real time.
AI NEWS
Nvidia’s stock jumped over 5% to $211 per share, pushing its market cap past $5 trillion, just months after hitting $4 trillion in July. It now sits ahead of Apple, Microsoft, Alphabet, Amazon, and Meta.
Surge follows a $1B Nokia investment and a partnership to develop “AI-native” 5G and 6G networks
Shares also rose after President Trump said he’ll discuss Nvidia’s Blackwell chip export bans with China’s Xi Jinping
Nvidia’s GPUs remain the backbone of global AI infrastructure, powering everything from ChatGPT to autonomous systems

Nvidia isn’t just a chipmaker anymore; it’s the engine of the AI economy. With this milestone, it’s clear that whoever controls compute, controls the future.

OpenAI released gpt-oss-safeguard, two open models that let developers set custom moderation policies and see the model’s reasoning when it flags harmful content.
Character AI will soon ban users under 18 from having open-ended chats with its bots, following legal pressure from families and lawmakers after reports linking the platform to teen deaths.
IBM has launched Granite 4.0 Nano, a new family of compact language models ranging from 350 million to 1.5 billion parameters, designed specifically for efficient on-device performance.
Google updated NotebookLM with a bigger context window, better memory, customizable chat personas, and improved response quality.

💻 Build: Code directly in Google AI Studio with Gemini

THAT’S IT FOR TODAY
Thanks for making it to the end! I put my heart into every email I send, and I hope you’re enjoying them. Let me know your thoughts so I can make the next one even better!
See you tomorrow :)
- Dr. Alvaro Cintas


