🔥OOOwsome OpenAI Updates: o3, o4-mini, 4.1, Memory Feature & More!
Plus: Grok now remembers all your chats too
Hi there 👋
OpenAI continues to raise the bar for pace of development in this AI-led universe. Two weeks after Sam Altman teased o3 and o4-mini's release, OpenAI dropped both models, GPT-4.1, and even a shiny new memory feature for ChatGPT.
That's a ton to digest in two weeks. On a side note, the model names have again sparked a new wave of memes (don't worry, Sam's on it and has promised to fix them by summer).
The key question, however, is: how are you adapting to this world? It's certainly keeping us on our toes!
What's the format? Every week, we break the newsletter into the following sections:
The Input - All about recent developments in AI
The Tools - Interesting finds and launches
The Algorithm - Resources for learning
The Output - Our reflection
The wait is over! OpenAI has officially dropped o3 and o4-mini, and these new models are a serious upgrade to the o-series. Trained to think more deeply before responding, both o3 and o4-mini come with advanced reasoning and full tool access, including browsing, Python, image interpretation, and file analysis. Faster, smarter, and more capable than ever, these models are built to tackle real-world problems and take AI to the next level.
What's New:
Visual Reasoning, Finally: These models can now understand and reason with images, whether it's a chart, a whiteboard sketch, or a visual prompt. A big step forward for multimodal use cases.
Performance Gains: o3 outperforms previous models by 20% on coding, math, and science benchmarks, with strong early results and great feedback from developers.
Smarter + Cost-Effective: o4-mini balances reasoning power with efficiency. Ideal for data work, prototyping, and projects that need smart outputs on a tighter budget.
All Tools Unlocked: Python, web browsing, file reading, image generation, even canvas drawing: these models can use the full ChatGPT toolset. Think of it as a flexible AI teammate that can research, code, analyze, and visualize in one flow.
These models are now live for ChatGPT Plus, Pro, and Team users. Older models like o1 and o3-mini will gradually be phased out, so this is the upgrade window.
OpenAI has also dropped GPT-4.1, along with its smaller siblings GPT-4.1 mini and nano. These are API-only models built specifically for developers and enterprise use. They're faster, cheaper, better at coding, and can handle up to 1 million tokens of context, making them ideal for large codebases, legal docs, or multi-document tasks.
What's New:
API-Only Access: These models won't be available inside ChatGPT - they're only accessible via the OpenAI API.
1M Token Context: All three variants can process up to 1 million tokens - a major upgrade from GPT-4o's 128K limit.
Stronger Coding Performance: GPT-4.1 outperforms GPT-4o by 21% and GPT-4.5 by 27% on coding benchmarks like SWE-bench Verified. If you're into dev-heavy tasks, this matters.
Faster and More Affordable:
GPT-4.1 mini is 50% faster and 83% cheaper than GPT-4o.
GPT-4.1 nano is the most lightweight and cost-effective of the three, great for quick tasks like text classification or autocomplete.
Real-World Improvements: Evaluators preferred GPT-4.1's output for web interface tasks 80% of the time over GPT-4o's, and the model shows enhanced performance in areas like frontend development and handling large, complex documents.
If you're building apps, working with heavy text, or automating anything at scale, GPT-4.1 is worth exploring.
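To make the API-only point concrete, here is a minimal, hypothetical sketch of how a long-context request to GPT-4.1 might be shaped. The model name and the messages format follow OpenAI's Chat Completions API; the helper function and its sample document are illustrative, not official.

```python
# Hypothetical sketch (not an official OpenAI example): shaping a
# long-context request for GPT-4.1.

def build_long_context_request(document: str, question: str,
                               model: str = "gpt-4.1") -> dict:
    """Pack a large document plus a question into one request payload.

    GPT-4.1's 1M-token window means a large codebase or legal document
    can ride along in the prompt instead of being chunked first.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Answer using only the provided document."},
            {"role": "user",
             "content": f"Document:\n{document}\n\nQuestion: {question}"},
        ],
    }

request = build_long_context_request(
    "(entire contract text here)",
    "What is the termination clause?",
)
# With the official openai SDK, this payload would be sent as:
#   client.chat.completions.create(**request)
```

The point of the sketch: with a 1M-token window, the "retrieval" step for many document tasks can simply be "paste the whole thing in."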
Announcing GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano in the API.
TL;DR: Major improvements on coding, instruction following, and long context. 🔥
00:00 Intro
02:18 Coding
04:53 Instruction following
06:58 Long context
10:22 Demos, pricing, and availability
20:00 @windsurf_ai
— OpenAI Developers (@OpenAIDevs)
7:11 PM • Apr 14, 2025
Think you've heard it all from OpenAI this week? Think again. OpenAI has also rolled out memory for ChatGPT. The model can now remember things you've said across conversations and use them to personalize your experience. From recalling your upcoming trip to suggesting your favorite meals, the idea is to make interactions more useful and human-like.
But as you'd expect, this also raises privacy concerns - something even Sam Altman has reportedly lost sleep over. The line between convenience and overreach is getting thinner.
What's New:
Always-On Memory: ChatGPT now remembers past chats, like "You mentioned you're planning a trip to Japan" or "You like Italian food." This feature is on by default for most users.
You're in Control: You can view, edit, or delete anything it remembers. So if you told it about a phone search last week and want that gone, it's just a click away.
Temporary Chat Mode: Having a sensitive conversation? Switch to Temporary Chat so nothing gets saved. Perfect for personal topics or one-off questions.
Transparent Recall: Ask why it gave a certain suggestion, and it'll tell you: "I remembered you said you like pasta, so I shared Italian recipes."
Availability: This feature is currently available to ChatGPT Plus and Pro users. It hasn't rolled out to free users yet, and regions like the U.K. and EU are still on hold pending privacy reviews.
This update pushes ChatGPT further into "personal assistant" territory. But how comfortable are we with that level of memory? That's the real conversation.
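The controls described above boil down to a small set of operations. Here is a toy sketch (emphatically not OpenAI's implementation) of that control surface: remember facts, inspect them, delete them, and a temporary mode in which nothing gets saved.

```python
# Toy sketch, not OpenAI's implementation: the control surface the
# memory feature exposes to users.

class ChatMemory:
    def __init__(self):
        self.facts = []          # e.g. "planning a trip to Japan"
        self.temporary = False   # Temporary Chat: nothing is recorded

    def remember(self, fact: str):
        if not self.temporary:   # temporary chats leave no trace
            self.facts.append(fact)

    def view(self):
        return list(self.facts)  # everything remembered is inspectable

    def forget(self, fact: str):
        self.facts.remove(fact)  # one call (one click) to delete

mem = ChatMemory()
mem.remember("likes Italian food")
mem.temporary = True
mem.remember("searched for a new phone")  # not saved
mem.temporary = False
print(mem.view())  # ['likes Italian food']
```

The privacy debate is essentially about who holds `mem.facts`, how long they hold it, and how visible `view()` and `forget()` really are.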
Starting today, memory in ChatGPT can now reference all of your past chats to provide more personalized responses, drawing on your preferences and interests to make it even more helpful for writing, getting advice, learning, and beyond.
— OpenAI (@OpenAI)
5:06 PM • Apr 10, 2025
xAI has just released two major updates: the first version of Grok Studio and a memory feature for Grok.
First up, Grok Studio is a dynamic, AI-powered workspace designed to make collaboration and content creation easier than ever. Whether you're drafting documents, executing code, or collaborating with your team, Grok Studio has got you covered.
But that's not all! Grok now comes with a memory feature, meaning it can recall previous interactions to make your workflow even smoother and more personalized. No more repeating yourself: Grok remembers where you left off and helps you pick up right where you were.
Both Grok Studio and the memory feature are available to all users on Grok.com, making these powerful tools accessible to everyone, no matter their subscription plan.
Hugging Face believes robotics is the next major frontier for AI. Over the past 18 months, they've seen significant advancements in the field and are now focused on making robotics open-source, transparent, and community-driven.
Teaming up with Pollen Robotics, Hugging Face is working to build robots that are affordable, hackable, and safe. Their collaboration has already produced Reachy 2, an open-source humanoid robot platform available on Pollen's website. This marks the beginning of their journey to share more open-source robots in the coming months, empowering developers and communities to innovate and collaborate.
Why It Matters:
Open-Source Accessibility: With Reachy 2, Hugging Face and Pollen Robotics are making humanoid robots available for everyone to use, modify, and improve.
Community Innovation: The goal is to create robots that encourage creativity, allowing anyone to hack, build, and experiment with the technology.
Democratizing Robotics: By making robotics affordable and accessible, Hugging Face aims to break down barriers to entry and open up new possibilities for developers and organizations.
If you've followed the progress of robotics in the past 18 months, you've likely noticed how robotics is increasingly becoming the next frontier that AI will unlock.
At Hugging Face—in robotics and across all AI fields—we believe in a future where AI and robots are open-source,
— Thomas Wolf (@Thom_Wolf)
2:40 PM • Apr 14, 2025
Kling has released version 2.0 of its AI video model, aiming to significantly improve fluid motion and prompt alignment. In a recent deep dive, a creator shared how they used Kling 2.0 - alongside tools like Midjourney V7 and Figma - to build a polished launch film. The process involved generating visual assets, animating nearly 500 clips over two days, and collaborating with other AI creators to fill key visual gaps.
What stood out was the emphasis on speed and ease: simple prompts, lightweight editing, and an overall streamlined creative process. While it's not without limitations, Kling 2.0 shows how fast AI tools are evolving in the video space, especially for rapid prototyping and visual storytelling. (source)
Hollywood is so cooked.
Some of these shots have better VFX than Game of Thrones, and this was made in just three days.
Prediction: A small team will do an unofficial remake of GoT S8, and it will be better than the original season.
— PJ Ace (@PJaccetturo)
2:42 PM • Apr 16, 2025
ByteDance has launched Seaweed-7B, a 7-billion-parameter video generation model that's proving smaller models can deliver top-tier results. Despite using significantly less compute, Seaweed competes head-to-head with larger models like Kling 1.6, Google Veo, and Wan 2.1, setting a new standard for cost-effective video generation.
What's New:
Versatile Generation Modes: Seaweed supports text-to-video, image-to-video, and audio-driven synthesis, generating high-quality video outputs up to 20 seconds long.
Competitive Performance: In human evaluations, Seaweed has proven highly effective, especially in image-to-video tasks, outperforming rivals like Sora and Wan 2.1.
Advanced Capabilities: Seaweed handles multi-shot storytelling, controlled camera movements, and synchronized audio-visual generation, pushing the boundaries of video production.
Human Animation Focus: The model has been fine-tuned for human animation, prioritizing realistic human movement and lip-syncing, making it ideal for applications requiring human-centric video creation.
Glad to share Seaweed-7B, a cost-effective foundation model for video generation. Our tech report highlights the key designs that significantly improve compute efficiency and performance given limited resources, achieving comparable quality against other industry-level models. To
— Ceyuan Yang (@CeyuanY)
3:12 AM • Apr 14, 2025
Transcript LOL lets you transcribe, generate, and repurpose content from your videos and podcasts in minutes. With 99% accuracy and support for 70+ languages, it's perfect for creators, marketers, and researchers.
How to Use:
Upload Media: Easily upload your audio or video files.
Generate Transcript: Get accurate transcriptions with speaker detection and real-time editing.
Create Content: Automatically generate titles, descriptions, social posts, emails, blog posts, research insights, and more.
Download: Export your work in 7 formats for easy sharing and use.
At TED 2025, Sam Altman shared insights on ChatGPT's rapidly growing user base, potentially nearing 1 billion active users. He discussed new memory features, creator compensation, and AI agents, all while emphasizing the importance of guardrails as AI continues to evolve.
Cursor's support bot made up a nonexistent policy, leading multiple users to cancel their subscriptions - read the bizarre story here.
If you're interested in leadership, AI, or behavioral science, this one's worth a look: a new study shows that people who lead AI agents effectively also tend to lead human teams well (ρ = 0.81). It's a fascinating take on how AI can serve as a proxy for human behavior in social science experiments - and what that means for measuring leadership in the age of intelligent agents.
We've added more free courses to help you master GenAI workflows, AI agents, and everything in between - whether you're just getting started or ready to go hands-on with advanced tools like RAG and vector memory:
GenAI Landscape - Learn the fundamentals of Generative AI - including prompting, fine-tuning, RAG, and AI agents - through a beginner-friendly course that demystifies how LLMs work and how to apply them in real-world scenarios.
Knowledge Bases & Memory for Agentic AI - Discover how to build smart, searchable knowledge systems using vector databases and RAG to give AI agents memory, context awareness, and better retrieval - all through hands-on projects and practical workflows.
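The "vector databases and RAG" idea behind agent memory reduces to three steps: embed text as vectors, store them, and retrieve the nearest match by cosine similarity. A minimal pure-Python sketch, using hand-made three-dimensional vectors as stand-ins for real embeddings (a production system would use an embedding model and a proper vector database):

```python
import math

# Minimal sketch of vector-store retrieval, the core of RAG-style
# agent memory. The 3-dimensional vectors below are invented stand-ins
# for real embedding-model output.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# "Knowledge base": (text, embedding) pairs.
store = [
    ("user prefers concise answers", [0.9, 0.1, 0.0]),
    ("project deadline is Friday",   [0.1, 0.9, 0.2]),
    ("user is learning Rust",        [0.0, 0.2, 0.9]),
]

def retrieve(query_vec, k=1):
    """Return the k stored texts most similar to the query vector."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

print(retrieve([0.05, 0.95, 0.1]))  # ['project deadline is Friday']
```

Everything else in a RAG stack - chunking, indexing, reranking, prompt assembly - is layered on top of this one similarity-search primitive.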
OpenAI and xAI are both doubling down on memory - and it's clear the AI race is moving into a more personalized, persistent phase.
But that raises a deeper question: as models start to remember more about us, how does that shift our relationship with them?
Are we just making AI more useful - or are we quietly changing the way we think, act, and relate to technology itself? Something worth watching closely.