Bill Gates' net worth to drop 99% over next 20 years?
Along with: Google's AlphaEvolve cracks complex math challenges
Hey,
Imagine waking up one day and deciding to give away $200 billion, on purpose. That's what Bill Gates just did. He's pledged to donate nearly all his wealth by 2045, then shut down the Gates Foundation forever. No legacy empire. No billion-dollar endowment. Just one last moonshot to rewrite the future for global health, poverty, and education.

In a world where tech leaders are stockpiling power, Gates is choosing a full reset. No cliffhangers. Just closure.
Let's see what the rest of the AI world was up to this week.
What's the format? Every week, we break the newsletter into the following sections:
The Input - All about recent developments in AI
The Tools - Interesting finds and launches
The Algorithm - Resources for learning
The Output - Our reflection
Google just introduced AlphaEvolve, an AI coding agent that uses Gemini and evolutionary search to create and improve algorithms for complex math and computing challenges.
What's New:
Code that evolves: AlphaEvolve mutates, scores, and refines programs over generations, automatically discovering novel solutions from scratch.
Powered by Gemini: Combines Gemini Flash for wide exploration with Gemini Pro for deep analysis and refinement.
Auto-evaluated: Solutions are checked for correctness and performance without human involvement.
Real-world impact: Already improving Googleās data center efficiency, cutting compute waste by 0.7%, speeding up AI training kernels by 32.5%, and contributing chip design optimizations for next-gen TPUs.
Math breakthroughs: Found new matrix multiplication algorithms and improved on the best known solutions for roughly 20% of 50+ open math problems, including the kissing number problem in 11 dimensions.
AlphaEvolve is currently used internally at Google, but an Early Access Program for select academic users is in the works. If you're interested, you can register through Google's official form. The team is also exploring ways to make it more broadly available in the future. (source)
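To picture how an evolutionary coding loop like this works, here is a minimal, hypothetical sketch in Python. It is not AlphaEvolve's actual code: `llm_propose_variant` stands in for the Gemini Flash/Pro proposal step, and `score` stands in for Google's automated evaluators.

```python
import random

def evolve(seed_program: str, llm_propose_variant, score,
           generations: int = 50, population_size: int = 20):
    """Toy evolutionary search over candidate programs (illustrative only).

    llm_propose_variant(parent: str) -> str  # stand-in for Gemini proposals
    score(program: str) -> float             # stand-in for automated correctness/perf checks
    """
    population = [(score(seed_program), seed_program)]
    for _ in range(generations):
        # Sample parents, biased toward higher-scoring programs.
        parents = random.choices(
            population,
            weights=[max(s, 1e-6) for s, _ in population],
            k=population_size,
        )
        # Ask the model to mutate/refine each parent, then auto-evaluate the result.
        children = []
        for _, parent in parents:
            child = llm_propose_variant(parent)
            children.append((score(child), child))
        # Keep only the best candidates for the next generation.
        population = sorted(population + children, reverse=True)[:population_size]
    return population[0]  # (best_score, best_program)
```

The real system layers much more on top (program databases, multiple evaluators, distributed evaluation), but the generate-score-select loop is the core idea.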
AlphaEvolve, our new Gemini-powered coding agent, can help engineers + researchers discover new algorithms and optimizations for open math + computer science problems.
Weāve used it to improve the efficiency of our data centers (recovering 0.7% of our fleet-wide compute
— Sundar Pichai (@sundarpichai)
4:31 PM • May 14, 2025
OpenAI just released HealthBench, a tough new test to see how well AI can handle real medical conversations. At the same time, they brought in Fidji Simo as CEO of Applications to help turn AI research into real-world products.
What's New:
Real Doctor Input: HealthBench was created with help from 262 doctors and includes over 5,000 medical conversations. It checks how accurately AI can answer questions in important healthcare situations.
Two Special Sets of Tests: HealthBench includes two groups of medical cases - Consensus, with questions where many doctors agree on the best answer, and Hard, with tougher questions that even current AI models struggle to answer correctly. This highlights where AI is doing well and where it still needs improvement.
Top Performers: OpenAI's newest models like GPT-4.1 and o3 are leading, sometimes even beating human experts.
AI as a Judge: GPT-4.1 helps score AI answers, agreeing closely with real doctors. (source)
Leadership Update: Fidji Simo, former Instacart CEO and OpenAI board member, joins as CEO of Applications to lead scaling and responsible deployment, reporting directly to Sam Altman. (source)
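The "AI as a judge" step is easiest to picture with a short sketch. This is a hypothetical illustration, not OpenAI's actual HealthBench grading code: the rubric items and prompt wording are made up, and it assumes the official openai Python client with an API key configured.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative rubric items; real HealthBench rubrics are physician-written and far more detailed.
rubric = [
    "Advises the user to seek emergency care for red-flag symptoms.",
    "Does not state a definitive diagnosis without sufficient information.",
    "Asks at least one clarifying question about symptom duration.",
]

def judge(conversation: str, answer: str) -> float:
    """Ask a grader model whether each rubric criterion is met; return the fraction satisfied."""
    met = 0
    for criterion in rubric:
        prompt = (
            f"Conversation:\n{conversation}\n\nModel answer:\n{answer}\n\n"
            f"Criterion: {criterion}\nDoes the answer satisfy this criterion? Reply YES or NO."
        )
        resp = client.chat.completions.create(
            model="gpt-4.1",
            messages=[{"role": "user", "content": prompt}],
        )
        if resp.choices[0].message.content.strip().upper().startswith("YES"):
            met += 1
    return met / len(rubric)
```

The appeal of this setup is scale: once the grader agrees closely with physicians, thousands of conversations can be scored consistently without a doctor reviewing each one.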
Evaluations are essential to understanding how models perform in health settings.
HealthBench is a new evaluation benchmark, developed with input from 250+ physicians from around the world, now available in our GitHub repository.
— OpenAI (@OpenAI)
5:37 PM • May 12, 2025
CodeRabbit just made coding smoother with real-time AI-powered code reviews inside your favorite editor. It's like having a senior engineer watching your back before you even push your changes.
What's New:
Stay in Flow: Get instant feedback on your uncommitted changes and apply bug fixes or refactoring suggestions with one click - no need to leave your coding environment.
Smart Context-Aware Reviews: The AI understands your code changes, dependencies, and coding style to catch over 95% of bugs and logic errors early.
Multi-Language Support: Works across JavaScript, Python, Go, C++, Java, Rust, PHP, and more - so you can rely on it regardless of your stack.
AI Hallucination Detection: Flags potential AI-generated mistakes or questionable logic before they become problems in your code.
Unlike many tools that only review committed code or focus on static analysis, CodeRabbit reviews your in-progress, uncommitted changes live, meaning it catches problems before they even hit your repo, speeding up your workflow and reducing bugs early. (source)
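CodeRabbit's internals aren't public, so here is only a rough sketch of the general pattern described above: pull the not-yet-committed diff from git and ask an LLM to review it. The model name, prompts, and helper functions are assumptions for illustration, not CodeRabbit's API.

```python
import subprocess
from openai import OpenAI

client = OpenAI()  # any review-capable LLM endpoint would work here

def uncommitted_diff(repo_path: str = ".") -> str:
    """Return the diff of staged + unstaged (i.e., not-yet-committed) changes vs HEAD."""
    return subprocess.run(
        ["git", "diff", "HEAD"],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout

def review_diff(diff: str) -> str:
    """Ask the model for a reviewer-style pass over the in-progress changes."""
    resp = client.chat.completions.create(
        model="gpt-4.1",
        messages=[
            {"role": "system", "content": "You are a senior engineer reviewing a diff. "
                                          "Flag bugs, logic errors, and risky refactors."},
            {"role": "user", "content": diff},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(review_diff(uncommitted_diff()))
```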
Introducing CodeRabbit for VSCode, Cursor, and Windsurf
Vibe check your code where you write it today!
(Link below!)
— CodeRabbit (@coderabbitai)
1:17 PM • May 14, 2025
Google is using generative AI to transform flat product photos into immersive 3D shopping experiences - making online browsing feel a lot more like in-store discovery.
What's New:
From 2D to 360° in 3 Images: New AI models now generate high-quality 3D visualizations from as few as three product photos - cutting down cost and complexity for businesses.
Built with Veo: The latest breakthrough leverages Veo, Google's video generation model, to create smooth, 360° spins with realistic lighting, textures, and material effects - even from minimal input.
3 Generations of Progress:
NeRF (2022): Enabled 360° views from 5+ images using neural radiance fields.
View-Conditioned Diffusion (2023): Used DreamFusion-style training to scale to more complex shapes and fewer photos.
Veo (2024): Generalizes across categories - furniture, fashion, electronics - and handles reflections, lighting, and geometry with minimal data.
Live on Google Shopping: These AI-generated 3D views are already powering product listings - especially for shoes, sandals, and apparel.
Google's tech is helping bridge the gap between screen and shelf - making online shopping more tactile, personalized, and visually rich. (source)
To help replicate the intuitive nature of shopping on a screen, learn how we're using the latest #GenerativeAI models (including Veo) to transform 2D product images into immersive 3D visualizations for Google Shopping — from as few as 3 product images → goo.gle/4ddjJGJ
— Google AI (@GoogleAI)
7:35 PM • May 12, 2025
A new open protocol called AG-UI is here to standardize how AI agents connect with front-end apps. Lightweight and event-based, it's designed to simplify building agentic user experiences - without locking you into a single stack.
What's New:
Plug-and-Play Simplicity: AG-UI defines 16 standard event types and structured inputs for easy frontend integration - compatible with WebSockets, SSE, webhooks, and more, thanks to a flexible middleware and reference HTTP connector.
Framework-First & Developer-Tested: Designed with real-world input from CopilotKit devs and integrated with agent frameworks like LangGraph, Mastra, CrewAI, and AG2 - reflecting actual developer workflows.
Out-of-the-Box Ecosystem Support: Fully supported in top frameworks today, with contributions open for others like OpenAI SDK, AWS Bedrock, and Vercel AI SDK.
Packed with Core Features: Real-time agentic chat, tool use, delta streams, structured UIs, human-in-the-loop collaboration, and more - baked in.
Production-Ready Clients: React client via CopilotKit is available now, with messaging integrations (WhatsApp, WeChat, RCS) coming soon in collaboration with AWS.
AG-UI addresses the challenge of seamlessly connecting advanced AI agents with user-facing apps by providing a standardized, event-driven protocol that simplifies integration and enables real-time, interactive experiences. With 16 standard event types and support for features like streaming and state sync, it reduces reliance on custom solutions while ensuring broad compatibility across AI frameworks and platforms. (source)
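To make the event-driven idea concrete, here is a minimal sketch of an agent backend streaming typed, JSON-encoded events to a frontend over Server-Sent Events (SSE). The event names and payloads below are illustrative stand-ins; consult the AG-UI spec for the actual 16 standard event types and their fields.

```python
import json
from flask import Flask, Response

app = Flask(__name__)

def agent_run(prompt: str):
    """Yield lifecycle + streaming-text events for one agent run (stubbed output)."""
    yield {"type": "RUN_STARTED"}                       # run lifecycle event
    for chunk in ["Thinking about ", prompt, "..."]:
        yield {"type": "TEXT_MESSAGE_CONTENT", "delta": chunk}  # streamed text delta
    yield {"type": "RUN_FINISHED"}

@app.route("/agent")
def agent_endpoint():
    def sse():
        for event in agent_run("AG-UI"):
            # SSE frames look like "data: <json>\n\n"; the client switches on event["type"]
            # to update the chat UI, tool panels, or shared state incrementally.
            yield f"data: {json.dumps(event)}\n\n"
    return Response(sse(), mimetype="text/event-stream")
```

Because every event carries an explicit type, any frontend that speaks the protocol can render any compliant agent, which is the interoperability AG-UI is after.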
Try it Out: AG-UI demo

After weeks of testing, Qwen Chat's Deep Research feature is now officially available for all users - joining the ranks of AI models like ChatGPT, Bard, and Bing AI that offer advanced deep research capabilities to deliver comprehensive, nuanced insights. While some variants of Qwen may have offered research-like functions before, this release marks a full rollout of their dedicated Deep Research mode.
What's New:
Ask Anything, Get Specific: Start with a broad question like "Tell me about robotics." Qwen will help narrow it down - history, theory, or real-world use.
Surprise Me Mode: Not sure what you want? Qwen can choose a direction for you and still deliver something insightful.
Hands-Free Reports: While you sip your coffee, Qwen builds a detailed answer, personalized and easy to digest.
For Work or Play: Great for learning, researching, or just exploring random ideas with AI that feels like a helpful study buddy.
After a few weeks of phased testing, Deep Research on Qwen Chat is now live and available for everyone!
Here's how to use it: Just ask something you're curious about — like "Tell me something about robotics." Qwen will then ask you to narrow it down — maybe history, theory,
— Qwen (@Alibaba_Qwen)
3:05 PM • May 13, 2025
Meta just launched MetaShuffling, a new way to make its big Llama 4 models work faster and smarter. These models use a Mixture-of-Experts (MoE) architecture, activating only a few "experts" per token to handle huge amounts of data - but until now, the way tokens were routed and processed slowed things down.
What's New:
Better Token Sorting: Instead of wasting time and compute on padding or slicing data awkwardly, MetaShuffling sorts tokens by the expert they are routed to and groups them together. This means the GPU can work more efficiently without waiting around.
Smarter Work Splitting: It uses special code (GroupedGEMM kernels) to split the work evenly, save memory, and keep important info handy - like tidying your workspace so you're never searching for what you need.
Made for Big Setups: Works smoothly on multiple GPUs and even multiple machines, which is key for running huge AI models.
Open for Everyone: Meta shared these improvements as open-source code, so developers can build on them easily.
Clear Speed Gains: Tests show big speed boosts on cutting-edge GPUs, especially for long and complex AI tasks.
Older methods like padding or slicing forced the AI to process extra empty or awkward data chunks, which slowed everything down. MetaShuffling's approach avoids that by reorganizing data dynamically, letting the AI use its resources more fully and run faster. It's a smarter, leaner way to handle big AI models - making them more practical for real-world use. (source)
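The token-sorting idea is easier to see in code. Below is a toy top-1 MoE forward pass in plain PyTorch that illustrates the shuffle-by-expert trick; it is not Meta's MetaShuffling kernels (those rely on fused GroupedGEMM ops), just the core idea of grouping tokens so no padding is needed.

```python
import torch

def moe_forward_shuffled(tokens, router_logits, expert_weights):
    """Toy top-1 MoE forward pass illustrating the 'shuffle' idea (not Meta's kernels).

    tokens:         [num_tokens, d_model]
    router_logits:  [num_tokens, num_experts]
    expert_weights: [num_experts, d_model, d_ff]
    """
    expert_ids = router_logits.argmax(dim=-1)       # top-1 expert per token
    order = torch.argsort(expert_ids)               # group tokens by assigned expert
    sorted_tokens, sorted_ids = tokens[order], expert_ids[order]

    out = torch.empty(tokens.shape[0], expert_weights.shape[-1],
                      dtype=tokens.dtype, device=tokens.device)
    counts = torch.bincount(sorted_ids, minlength=expert_weights.shape[0])
    start = 0
    for e, n in enumerate(counts.tolist()):
        if n == 0:
            continue
        # One dense matmul per expert over a contiguous token slice: no padding, no wasted work.
        out[start:start + n] = sorted_tokens[start:start + n] @ expert_weights[e]
        start += n

    unshuffled = torch.empty_like(out)
    unshuffled[order] = out                          # scatter results back to original token order
    return unshuffled
```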
Mixture-of-Experts (MoE) is a popular #LLM architecture that reduces computation by activating fewer parameters per token. But it brings memory, communication, and control challenges.
We introduce MetaShuffling, enabling efficient Llama 4 model inference in production.
— PyTorch (@PyTorch)
10:40 PM • May 12, 2025
Google just launched the AI Futures Fund to boost early-stage startups building the future of AI. The program offers early access to advanced models like Gemini, Veo, and Imagen, direct collaboration with Google DeepMind and Google Labs experts, equity investment opportunities, plus Google Cloud credits and dedicated support to help startups grow and scale their AI products.
Introducing the AI Futures Fund, a new program where startups can work with us to build the future of AI technology. From early access to @GoogleDeepMind models to Cloud credits, we'll help get your startup to the next level.
Learn more and apply ⬇️ labs.google/aifuturesfund
— Google Labs (@GoogleLabs)
5:02 PM • May 12, 2025
I checked out Zero this week - an AI-native email client that manages your inbox so you don't have to lift a finger.
What It Does
Zero uses AI to prioritize your emails, summarize long threads, draft replies, and organize your inbox into smart categories. It even lets you chat directly with your email, all while understanding the full context of your workflow. Unlike traditional email clients, Zero acts like a proactive assistant that keeps you ahead of your inbox chaos.
How to Use It:
Step 1: Sign up for the public beta at 0.email.
Step 2: Connect your email account and let Zero start prioritizing, summarizing, and drafting replies automatically.
Step 3: Chat with your inbox naturally to plan your day, manage projects, or get quick updates - right from your email client.
Hugging Face drops an epic deep-dive on Vision-Language Models (VLMs): From agentic VLMs and MoEs to video understanding and alignment - this blog maps out where the field is and where it's heading. Curious about VLMs or building with them? Start here.
Similarweb's new Global AI Tracker shows GenAI traffic is booming: ChatGPT is back on the rise, Grok and Gemini are gaining steam, and DeepSeek's momentum is holding strong. If you're watching the GenAI race, this dashboard is pure signal.
Meta's CATransformers is a pioneering AI framework that cuts carbon emissions by ~9.1% while maintaining performance, using a first-ever co-design of neural architectures and hardware. Powered by the Architectural Carbon Tool (ACT), it pushes AI sustainability forward - check out the paper and code in Meta's Sustainable AI hub.
Last week, we launched more free AI & data courses to help you sharpen your technical skills and explore powerful tools in practice:
Build Data Pipelines with Apache Airflow: Learn workflow orchestration by building real-world ETL pipelines using Apache Airflow. From mastering the basics of DAGs and task dependencies to advanced scheduling, this course guides you through using PythonOperators, BashOperators, cron expressions, and hooks. Perfect for anyone wanting scalable, clean, and reproducible data engineering pipelines (a minimal DAG sketch follows after this list).
Getting Started with Tableau: Jump into data visualization with Tableau, the industry-leading tool. This beginner-friendly course covers connecting data sources, creating compelling visualizations, and building interactive dashboards. Gain essential skills like data prep, calculated fields, and sharing insights - all through Tableau's intuitive interface.
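To give a flavor of what the Airflow course above covers, here is a minimal DAG sketch (ours, not the course's) with a BashOperator extract step, a PythonOperator transform step, a cron schedule, and an explicit task dependency. It assumes Airflow 2.4+ for the `schedule` argument.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator

def transform():
    # Placeholder transform step; a real pipeline would read and write actual data here.
    print("transforming extracted data")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2025, 5, 1),
    schedule="0 6 * * *",   # cron expression: run daily at 06:00
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo 'extracting data'")
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract >> transform_task   # task dependency: extract runs before transform
```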
From cracking complex math to making healthcare chatbots actually trustworthy, this week showed AI getting seriously practical. Whether it's Google's AlphaEvolve cooking up new algorithms, OpenAI setting higher bars for medical AI, or CodeRabbit making coding less stressful - the focus is shifting from flashy demos to real-deal tools that help you get stuff done.
Feels like AI is settling into the everyday grind, and honestly, that's pretty exciting.
See you next week!