I/O Everywhere: Big AI from Google I/O, Bigger Moves from OpenAI
Along with: OpenAI Codex and Microsoft Build
Hi there 👋
This week, the only thing louder than AI was io, and no, that's not a typo.
In what might be the most important hardware move in AI so far, OpenAI just acquired io, the secretive startup co-founded by Sam Altman and Jony Ive, for a massive $6.5 billion. That's OpenAI teaming up with the legendary designer behind the iPhone to build something totally new: devices made for the AI-first world. No screens. No apps. Just smart, beautiful tech that blends into your life and makes AI feel natural.
Meanwhile, Google I/O 2025 was an AI avalanche. Smarter Search, faster Gemini, personal agents, climate AI, video generation, AR glasses, and much more. Google showed off how it's putting AI into everything you already use.
So yes, I/O was everywhere this week. One is reinventing devices. The other is reinventing daily life.
Let's get into what else made waves this week in AI.
What's the format? Every week, we break the newsletter into the following sections:
The Input - All about recent developments in AI
The Tools - Interesting finds and launches
The Algorithm - Resources for learning
The Output - Our reflection
Google I/O 2025 just happened, and if you weren't paying attention, you might've missed some big moves. Forget flashy new phones: this year's spotlight was all about making AI work for us. Let's see how:
What's New:
Project Mariner (Agent): This browser-based agent can take on tasks like booking tickets or managing errands, running up to 10 tasks simultaneously. Built on Gemini, it's Google's most capable step yet toward an AI that acts on your behalf on the web.
Project Astra: Google's research prototype of a universal AI assistant moved closer to product. It streams live video and audio so Gemini can see what you see, answer questions in real time, and take actions on your behalf, with capabilities rolling into Gemini Live and Search.
Gemini 2.5 Pro & Flash: Google introduced its most powerful models to date. Gemini 2.5 Pro adds Deep Think, an enhanced reasoning mode for complex problem-solving, while Gemini 2.5 Flash boosts speed for real-time tasks, ideal for dynamic environments.
AI Mode in Search: Google Search now has an AI-powered mode delivering direct answers, personalized suggestions, and handling tasks like trip planning. It's rolling out in the U.S. for a faster, more conversational experience.
Google Beam (formerly Project Starline): Google Beam is the commercialized version of Project Starline's immersive 3D video calls, featuring head tracking and lightfield displays that make remote communication feel face-to-face. The first devices are being built with HP.
Veo 3 & Flow: Veo 3 takes video generation to new heights, turning text and images into dynamic videos, now with native audio. For filmmakers, Flow offers AI-driven tools for crafting cinematic scenes.
Android XR: A new operating system for smart glasses and XR headsets, Android XR extends Android into extended reality (XR) for fully immersive experiences.
Google is giving us a sneak peek into a future where AI isn't just a tool you interact with: it's an intelligent assistant that works for you. The lines between humans and machines are blurring, and what we saw at Google I/O 2025 is just the beginning of that transformation. (source)
Buckle up, we're just getting started 🚀 #GoogleIO
— Google (@Google)
1:46 AM • May 21, 2025
🧑‍💻 OpenAI Codex Brings Agentic Coding to ChatGPT
OpenAI just rolled out Codex, a new coding agent that lets you assign programming tasks to an AI agent, no code-wrangling required.
What's New:
Cloud-Based Software Agent: Codex handles tasks like writing features, answering code questions, running tests, and submitting PRs, all in parallel, securely, and in the cloud.
Works Like an Engineering Manager: You assign tasks; Codex handles the rest. It's designed to skip the IDE entirely, perfect for moving the backlog faster with less manual dev time.
Codex CLI: Prefer local? There's also a lightweight command-line version that helps you write, edit, and understand code on the fly.
Agentic Evolution: While tools like GitHub Copilot offer smart autocomplete, Codex belongs to a new class of "agentic coders" (like Devin, OpenHands, SWE-Agent) that aim to complete dev tasks autonomously, without a human in the loop for every keystroke.
Big Goals, Real Limits: Codex scores 72.1% on the SWE-Bench benchmark (vs. OpenHands at 65.8%), but hallucinations and oversight challenges remain. Experts warn full autonomy isn't here yet: manual code review is still critical.
Rolling out now to ChatGPT Pro, Enterprise, and Team users. Plus and Edu support coming soon. (source)
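The loop that separates an "agentic coder" from autocomplete fits in a few lines: propose a change, run the tests, iterate on failure. Here is a toy sketch of that pattern; the helper functions are stand-ins we invented for illustration, not OpenAI's implementation.

```python
# Toy sketch of the "agentic coder" loop that systems like Codex automate:
# propose a change, run the test suite, and iterate until tests pass.
# All names here are illustrative stand-ins.

def run_tests(code: str) -> bool:
    """Stand-in for a real test runner (e.g. pytest in a sandbox)."""
    env: dict = {}
    exec(code, env)
    return env["add"](2, 3) == 5

def propose_fix(code: str) -> str:
    """Stand-in for the model proposing a patch from the failure signal."""
    return code.replace("a - b", "a + b")

def agent_loop(code: str, max_iters: int = 3) -> str:
    for _ in range(max_iters):
        if run_tests(code):
            return code  # task done: open a PR, report success
        code = propose_fix(code)
    raise RuntimeError("needs human review")

buggy = "def add(a, b):\n    return a - b\n"
fixed = agent_loop(buggy)
```

The interesting design question is the failure signal: real agents feed test output and diffs back into the model, which is why manual code review still matters when the tests themselves are incomplete.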
Microsoft's annual developer conference wraps up today, and it's clear they went all-in on agentic AI and tools to help devs build smarter, faster, and more securely.
What's New:
AI Agents Take Center Stage: Microsoft unveiled a sweeping vision for an "open agentic web," where agents handle tasks autonomously across individuals, teams, and enterprises. Over 230,000 organizations have already built AI agents using Copilot Studio.
GitHub Copilot Evolves: Copilot is moving from autocomplete to asynchronous agent, with enterprise controls, open-source chat in VS Code, and GitHub-native model experimentation tools.
Windows AI Foundry Launches: A new dev platform for training, fine-tuning, and running open-source or proprietary LLMs locally or in the cloud, designed to support every stage of the AI dev lifecycle.
Azure AI Foundry Expands: Microsoft adds support for xAI's Grok 3 models, introduces Model Router and Model Leaderboard, and rolls out observability tools, risk controls, and enterprise-grade governance.
Copilot Gets Custom Tuning + Orchestration: With Copilot Tuning, companies can train low-code agents using internal data. New multi-agent orchestration lets teams build modular, collaborative AI workflows.
Agent Security Gets Serious: Introducing Entra Agent ID for identity management and MCP support across GitHub, Azure, Copilot Studio, and Windows to prevent agent sprawl and enhance security.
NLWeb Debuts: Think of it as "HTML for the agentic web": NLWeb helps websites build conversational AI layers with built-in agent access and content discoverability.
AI for Science: Launching Microsoft Discovery, a new platform that brings agentic AI to scientific R&D, accelerating time-to-market in fields like pharma and sustainability.
Microsoft made 50+ announcements in total, reinforcing its vision of empowering developers to invent the next wave of agent-driven software. (source)
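At its core, multi-agent orchestration is routing work between specialized agents and chaining their outputs. A toy sketch of the pattern; the agent names and registry are our own illustrative assumptions, not Copilot Studio's actual API:

```python
# Toy multi-agent orchestration: a router hands each step of a workflow
# to the specialist agent registered for it, chaining outputs. Real
# platforms (e.g. Copilot Studio) add identity, memory, and guardrails.

from typing import Callable

AGENTS: dict[str, Callable[[str], str]] = {}

def agent(name: str):
    """Register a function as a named agent."""
    def wrap(fn):
        AGENTS[name] = fn
        return fn
    return wrap

@agent("researcher")
def research(task: str) -> str:
    return f"notes on {task}"

@agent("writer")
def write(notes: str) -> str:
    return f"draft based on {notes}"

def orchestrate(task: str, plan: list[str]) -> str:
    """Run the task through each agent in the plan, in order."""
    result = task
    for name in plan:
        result = AGENTS[name](result)
    return result

report = orchestrate("agentic web", ["researcher", "writer"])
```

The modularity is the point: swapping an agent or reordering the plan changes the workflow without touching the other agents.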
In a landmark $6.5B all-stock deal, OpenAI is acquiring io, the AI hardware startup founded by legendary Apple designer Jony Ive, to pioneer a new generation of intelligent devices for the era of artificial general intelligence (AGI).
What's New:
From Chatbots to Physical AI: Despite massive breakthroughs in language and vision models, everyday interaction with AI remains stuck in apps. This acquisition aims to change that, with new hardware designed from the ground up to make AI feel seamless, ambient, and human.
A Creative Alliance Becomes a Company: What began two years ago as an informal collaboration between Ive, Sam Altman, and LoveFrom, born from curiosity, shared values, and a desire to create meaningful tools, turned into io, a dedicated venture to rethink how we live with computers. Now that vision becomes part of OpenAI.
The Biggest Bet Yet: With io's roughly 55-person team and design leads like Evans Hankey and Tang Tan joining OpenAI, the merger brings top-tier talent in industrial design, engineering, and manufacturing. LoveFrom will take on creative direction across all OpenAI products.
High Cost, Higher Ambition: The deal comes amid OpenAI's complex transition from nonprofit roots to a for-profit structure and ongoing financial pressure. Yet Altman is confident: "We'll be fine."
The Vision, Ambient Computing: The team aims to move beyond screens and smartphones, toward wearable or ambient devices that see, think, and understand the world, helping users cut through digital noise and regain clarity, agency, and joy.
OpenAI already owned 23% of io prior to this acquisition, making this its largest strategic move yet in reimagining how people interact with AI. (source)
Windsurf just dropped Wave 9, introducing SWE-1: a new frontier-class model family purpose-built for the entire software engineering process, not just code generation. Meet SWE-1, SWE-1-lite, and SWE-1-mini.
What's New:
Wave 9 marks a major leap beyond Wave 8's enterprise-focused tools: instead of relying only on third-party frontier models, Windsurf trained its own family, optimized for how real engineering work happens across editors, terminals, and browsers.
SWE-1: The flagship model, with tool-call reasoning built in. Based on internal evals, its performance nears that of frontier models from the foundation labs.
SWE-1-lite: A smaller model that replaces Cascade Base, now available to all users, free and paid.
SWE-1-mini: A lightweight model powering the passive, low-latency Windsurf Tab prediction experience.
Flow Awareness: The models are trained on incomplete, in-progress states of work, so they can pick up long-running tasks mid-flow, the way engineers actually operate. (source)
Wave 9 is here: a frontier model built for software engineering.
Introducing our new family of models: SWE-1, SWE-1-lite, and SWE-1-mini.
Based on internal evals, it has performance nearing that of frontier models from the foundation labs.
Available now, only in Windsurf!
— Windsurf (@windsurf_ai)
6:44 PM • May 15, 2025
At COMPUTEX and GTC, NVIDIA unveiled Isaac GR00T N1, the world's first open, customizable foundation model for humanoid robots, ushering in a new era of physical AI built for reasoning, general skills, and real-world deployment.
What's New:
GR00T N1 Foundation Model (Now Available): GR00T N1 generalizes across tasks like grasping, sorting, and multistep workflows, and can be post-trained with real or synthetic data for specific robots and use cases. It's built on a dual-system cognitive architecture:
System 1 handles fast reflexive actions.
System 2 handles deliberate reasoning via a VLM.
GR00T Blueprint + Synthetic Data = 40% Gains: Using the Isaac GR00T Blueprint, NVIDIA generated 780,000 synthetic motion trajectories in 11 hours, the equivalent of 9 months of human demonstration data. Combining this with real data improved GR00T N1's task performance by 40%. The GR00T N1 models, datasets, and simulation blueprints are available now on Hugging Face and GitHub.
Newton- The Open-Source Physics Engine: In collaboration with Google DeepMind and Disney Research, NVIDIA is building Newton, a physics engine optimized for robot learning - enabling faster, more precise simulation and manipulation tasks. Early users include Disney's robotic characters and the MuJoCo-Warp team.
GR00T N1 in Action: Used by 1X Technologies to power domestic robots, and being adopted by Agility Robotics, Boston Dynamics, NEURA Robotics, and more. GR00T N1 turns humanoids into generalist agents - ready for material handling, packaging, inspection, and even household chores.
Scalable Hardware for Robot Devs: Deploy models with:
RTX PRO 6000 Blackwell workstations and servers
NVIDIA Jetson Thor for on-robot inference
DGX Cloud with GB200 NVL72 for 18Ć faster training and simulation (source)
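The dual-system split described above is essentially a control loop: a slow planner (System 2) emits subgoals, and a fast policy (System 1) turns each subgoal into low-level actions. A toy illustration under our own assumptions; this is not NVIDIA's architecture, just the shape of the idea:

```python
# Toy sketch of a dual-system robot controller in the spirit of GR00T N1:
# System 2 (slow, deliberate) plans subgoals; System 1 (fast, reflexive)
# maps each subgoal to motor commands. Purely illustrative.

def system2_plan(task: str) -> list[str]:
    """Stand-in for the vision-language model's deliberate reasoning."""
    if task == "clear the table":
        return ["locate cup", "grasp cup", "place cup in bin"]
    return []

def system1_act(subgoal: str) -> list[str]:
    """Stand-in for the fast policy producing motor commands."""
    verb = subgoal.split()[0]
    return [f"{verb}:start", f"{verb}:done"]

def run(task: str) -> list[str]:
    log = []
    for subgoal in system2_plan(task):    # slow loop: deliberate plan
        log.extend(system1_act(subgoal))  # fast loop: reflexive execution
    return log

trace = run("clear the table")
```

In the real system the two loops run at very different rates, which is why the split matters: reflexes can't wait on deliberation.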
Foxconn, Apple's key manufacturing partner, is investing $1.5 billion in its India unit Yuzhan Technology, signaling a major shift away from China as Apple diversifies its supply chain amid ongoing tariff and geopolitical pressures.
What's New:
Massive Equity Injection: Foxconn's Singapore-based subsidiary will purchase 12.77 billion shares in Yuzhan at ₹10 each, expanding its Tamil Nadu-based operations that assemble iPhones and produce key components.
India Rising: The move aligns with Apple's growing focus on India, which exported $2B worth of iPhones to the U.S. earlier this year. CEO Tim Cook recently noted that most U.S.-sold iPhones could soon be "Made in India."
Semiconductor Push: Foxconn also received Indian government approval to co-build a $432M semiconductor plant with HCL, producing 36M chips per month, part of New Delhi's aggressive drive to become a global electronics hub.
As tensions with China persist, Foxconn's deepening commitment to India highlights a strategic realignment of the global tech supply chain.
Google just launched the new NotebookLM mobile app this week - a powerful research companion that helps you understand complex content anytime, anywhere.
What It Does
NotebookLM turns documents, links, and sources into interactive, AI-powered research spaces. With the new iOS and Android apps, you can now listen to Audio Overviews offline, ask follow-up questions in real time, and save content from anywhere on your phone, including PDFs, websites, even YouTube videos. It's like carrying a research assistant in your pocket.
How to Use It:
Step 1: Download the app from the App Store or Play Store.
Step 2: Add sources or tap Share from any app to send content to NotebookLM.
Step 3: Listen to summaries, ask questions, and explore insights - all from your mobile device.
Whether you're a student, analyst, or knowledge worker, NotebookLM mobile makes it easy to stay curious and productive on the go.

Anand S. (LLM Psychologist) shows how crafting a single, detailed prompt can generate fully functional podcast-generator apps across ChatGPT o4-mini-high, Gemini 2.5 Pro, and Claude 3.7 Sonnet, all in minutes. It's a 60x speed boost over manual coding, proving that writing great one-shot specs is the next dev meta-skill.
Turing Award winner Yoshua Bengio opens up about the moment he realized the risks of unchecked, agentic AI, and why it pushed him to shift focus toward making AI safer for future generations. In this powerful TED Talk, he lays out a scientific path forward rooted in alignment, responsibility, and human values.
Last week, we added two advanced courses to help you sharpen your AI deployment and MLOps skills, designed for real-world, production-grade systems:
Building Agentic AI Systems with Bedrock: Learn to design multi-agent AI workflows using AWS Bedrock. This course covers the shift from prompt-based calls to agentic architectures, setting up knowledge bases, applying Guardrails for safety, and chaining agents for reasoning and synthesis. You'll also benchmark performance and automate deployment with CI/CD.
A Complete MLOps Journey: Build scalable ML pipelines from scratch, covering data/model versioning, containerization, CI/CD with GitHub Actions, and deployment to AWS or GCP. The course also teaches real-time monitoring and drift detection, helping you keep production models stable and accurate.
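Drift detection, mentioned in the MLOps course, can be as simple as comparing a feature's live distribution against its training distribution. Here is a minimal Population Stability Index (PSI) sketch; the binning and thresholds are common conventions, not the course's actual code:

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between two samples of one feature.
    PSI < 0.1 is commonly read as 'no drift', > 0.25 as significant drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    def hist(xs: list[float]) -> list[float]:
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # small epsilon so empty bins don't blow up the log
        return [max(c / len(xs), 1e-6) for c in counts]
    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [float(i % 10) for i in range(1000)]       # training distribution
same = [float(i % 10) for i in range(500)]         # similar live traffic
shift = [float(i % 10) + 4.0 for i in range(500)]  # shifted live traffic
```

In production, a scheduled job computes this per feature and pages the team when the index crosses a threshold, prompting retraining or investigation.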
This week's news didn't just move the needle; it moved the interface.
With OpenAI's bold leap into hardware via io, and Google embedding AI into every corner of your life, we're watching the screen era give way to something new: AI not as a tool you use, but a presence that works with you.
Microsoft is giving agents purpose. Windsurf is putting models in your pocket. NVIDIA is giving robots a brain. And Codex is showing what happens when devs go from prompting to delegating.
The message is clear:
We're not just using AI; we're building a world around it.
And that world is emerging in front of us.
If you had to design this world, what would you wish for? And who would you trust to build it?
See you next week 👋