
🚨 Amazon Says It Out Loud: AI Will Shrink Jobs

Along with: The First Open Model to Hit 1M Tokens Is Here!

Hey there 👋

This week, Amazon CEO Andy Jassy made headlines with a company-wide memo that didn’t mince words: AI is already reshaping the workplace, and it’s going to shrink the number of corporate roles over time.

The memo wasn’t about layoffs, but it was a clear signal. If you’re not actively learning how to use AI, build with it, or adapt around it, you might be left behind. It’s a tone shift: from “AI will help” to “AI will define how we work.”

And coming from one of the world’s largest employers, that message travels.

Let’s see what else happened this week 👇

What’s the format? Every week, we break the newsletter into the following sections:

  • The Input - All about recent developments in AI

  • The Algorithm - Resources for learning

  • The Output - Our reflection


Day 1 of #MiniMaxWeek wasn’t just another product drop. It was a serious moment. MiniMax released M1, the first open-weight model with a native 1 million token context window and 80k token output. No tricks, no stitching, no waiting list. This is the first time anyone’s made this level of long-form capability fully open. Not even GPT-4o offers this much context publicly; among the big closed models, only Gemini’s 1M window is comparable.

What’s New

  • 1M tokens, for real: No open model has gone anywhere near this. DeepSeek-R1 and Qwen3 still work with far shorter limits, and even closed models like Claude top out well below 1M.

  • Totally open: Weights, code, and even a web demo are live. You can try it out right now on Hugging Face and GitHub (a quick loading sketch follows at the end of this section).

  • Huge output: 80k tokens is enough for multi-file code refactors, entire books, or long planning workflows, and it handles them cleanly.

  • Surprisingly efficient: Trained in just three weeks on 512 H800s. That’s about $535K all-in.

  • New tech inside: Uses something called Lightning Attention to handle long context, and a fresh RL technique called CISPO to make its reasoning sharper across massive inputs.


Long context has been an elite capability locked up inside closed labs. MiniMax just threw that door open. And they didn’t just match the best; they went further. This is what leveling up looks like when open source starts leading.
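If you want to poke at the weights yourself, here is a minimal sketch using Hugging Face transformers. The repo id, the trust_remote_code flag, and the chat-template call are assumptions based on how MiniMax typically publishes models, so check the model card first; a model this size also needs a serious multi-GPU box (or the hosted demo), not a laptop.

```python
# Minimal sketch: loading MiniMax M1 with Hugging Face transformers.
# Assumptions: the repo id "MiniMaxAI/MiniMax-M1-80k" and the need for
# trust_remote_code are guesses based on MiniMax's usual releases; verify
# both against the model card before running.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniMaxAI/MiniMax-M1-80k"  # assumed repo id, check Hugging Face

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # custom Lightning Attention layers ship with the repo
    device_map="auto",       # spread the weights across whatever GPUs are available
    torch_dtype="auto",
)

# The point of a ~1M-token window: feed an entire repo dump or book in one go.
with open("whole_repo_dump.txt") as f:
    long_document = f.read()

messages = [{"role": "user", "content": f"Summarize the key modules:\n\n{long_document}"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=2000)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```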

Google just took Gemini 2.5 out of beta and made it official. Both Gemini 2.5 Pro and Flash are now generally available. And there’s a new lightweight model in the mix: Flash‑Lite, built for ultra-fast, cost-friendly tasks where you don’t need deep reasoning, just speed and scale.

What’s New

  • Flash and Pro are production-ready: These are no longer previews. They're now powering companies like Snap, Spline, and SmartBear in live settings.

  • Flash‑Lite joins the family: A leaner, faster sibling built for things like translations, quick replies, and real-time prompts. Early access is now live in Google AI Studio and Vertex AI.

  • Thinking budgets still apply: You can toggle how smart or fast your response should be. Want deep reasoning? Crank it up. Just need a one-liner? Keep it cheap and quick. (A quick API sketch follows at the end of this section.)

  • Still supports 1M-token context: That massive input window from Gemini 2.5 carries over to all variants, including multimodal and tool-calling support.

  • Simplified pricing: Flash now runs at $0.30 per million input tokens and $2.50 per million output. Flash‑Lite comes in even cheaper, though Google hasn’t shared exact numbers yet.

Gemini isn’t just chasing OpenAI’s benchmarks anymore; it’s building a full-stack lineup: Pro for reasoning, Flash for balance, and Flash‑Lite for scale. This gives developers way more control over how smart, fast, or cheap each request needs to be. (source)
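For developers, the thinking-budget toggle mentioned above is just a config field. Here is a hedged sketch using the google-genai Python SDK; the model names and the ThinkingConfig field follow Google’s docs at GA, but treat the exact values (and whether Pro allows a zero budget) as things to verify against the current reference.

```python
# Sketch of Gemini 2.5 thinking budgets via the google-genai SDK (pip install google-genai).
# Model names and config fields follow Google's GA docs; double-check the current reference.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

# Cheap and quick: a zero thinking budget on Flash for a throwaway one-liner.
quick = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Give me a one-line tagline for a weather app.",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(thinking_budget=0)  # no reasoning tokens
    ),
)

# Crank it up: a generous thinking budget on Pro for a harder planning task.
deep = client.models.generate_content(
    model="gemini-2.5-pro",
    contents="Plan a step-by-step migration of a monolith to microservices.",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(thinking_budget=8192)
    ),
)

print(quick.text)
print(deep.text)
```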

First the startups, then the enterprise, and now the government. OpenAI just created a new branch focused only on government work. It's called OpenAI for Government, and it kicks off with a huge $200 million pilot deal with the US Department of Defense.

Yes, this is OpenAI’s first formal military contract.

What’s New

  • New government-focused offering: OpenAI is now offering special versions of its tools, like ChatGPT, built for public agencies at the federal, state, and local level.

  • $200M pilot with the US military: The Department of Defense will use OpenAI models to explore things like faster healthcare logistics, better cybersecurity, smarter purchasing systems, and more over the next 2 years.

  • Not entirely new, but now formal: OpenAI has already been working with agencies like NASA and the NIH. This just pulls it all under one clear offering.

  • No weapons allowed: This move required an update to OpenAI’s usage policy. They’re still not allowing weapons or warfare, but they now make exceptions for defense-related work like planning and analysis.

  • Coming to local agencies too: State governments, public hospitals, and city agencies will all be able to use these tools soon.

This is a big shift. Until now, most AI progress has been focused on businesses. OpenAI is now officially entering the world of public services and it’s starting at the top. AI might soon be running the systems behind defense, healthcare, and public infrastructure. (source)

MiniMax started the week with long-context bragging rights, but they didn’t stop there. Day 2 of #MiniMaxWeek brought Hailuo 02, a new text-to-video model that’s quietly setting a new bar for quality and efficiency.

What’s New

  • 1080p Output, Finally: Forget 480p potato clips. Hailuo 02 delivers full HD video from text, with dynamic lighting (sparks, reflections, fire), clean motion, and no more awkward warping during complex scenes like spinning or fast movement.

  • Better prompt-following: If you write “a man walks through firelight,” the fire shows up where you expect, not in a random corner.

  • Ridiculously Efficient: According to internal tests, Hailuo 02 delivers near-cinematic clips at a fraction of the GPU cost of existing models. Yes, they’re benchmarking against Pika, Runway, and Sora.

  • Leaderboard Love: Chinese tech circles are buzzing. On Zhihu, Sina, and Bilibili, creators are ranking Hailuo second only to Sora (which... isn’t even public). Not bad for a new-gen contender.

  • Try it out: There’s a free trial and you get 500 exclusive points. You can try it out here.

Hailuo isn’t just trying to be “China’s OpenAI.” They’re carving a spot in video-native AI tooling and doing it with real product polish. If you’re building short-form content, ads, social assets, or animations, this could be the first serious alternative to Runway and Pika that doesn’t break your GPU budget.

In a rare all-hands memo, Amazon CEO Andy Jassy didn’t sugarcoat it. The company’s going deep into generative AI, and yes, that means fewer traditional corporate jobs ahead. AI won’t just help employees. It’ll start replacing some of them.

What Jassy Shared:

  • GenAI is already everywhere: Over 1,000 use cases across Amazon, from Alexa and shopping to warehouse logistics and internal tools.

  • Shifting roles, not mass cuts: Jassy said some corporate roles may shrink over time, mostly through natural attrition. No big layoff announcements, just a heads-up that the org will evolve.

  • Skills are the new focus: Employees were encouraged to get hands-on with AI, attend trainings, test tools, and start building fluency.

  • AI-native infrastructure: Amazon’s investing heavily in AI-specific chips (like Trainium2) and building massive cloud infrastructure to support both internal and customer AI tools.

Big Tech has been talking about transformation for years, but Amazon just gave us a clearer roadmap. GenAI won’t just be a tool for productivity; it’s going to reset the size and shape of entire org charts. (source)

Tencent just released Hunyuan3D 2.1, and this isn’t just another 3D-from-text toy. It’s fast, textured, runs on regular GPUs, and, most importantly, it looks good enough to actually use.

This version brings in physically-based rendering (PBR) textures, sharper shapes, and full open-source access, making it one of the most practical 3D AI tools out there right now.

What’s New

  • PBR textures are here: This is what gives assets that real-world look: think metal reflections, leather softness, and better light interaction.

  • Sharper geometry: 2.1 improves the shape generation quality, so you get more accurate, usable meshes with less cleanup.

  • Text or image input: You can start from either a prompt or a reference image; both give you mesh + texture output.

  • Fully open-source: Code, weights, and training data are all out there. And yes, it runs on consumer GPUs (a short usage sketch follows below).

Available now via Model | Demo | GitHub | Creation Engine
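To give a feel for the two-stage flow (shape first, then PBR texturing), here is a hypothetical sketch modeled on the earlier Hunyuan3D-2 README. The 2.1 release may have renamed modules and pipelines, so treat every import, class, and repo id below as an assumption and defer to the GitHub instructions.

```python
# Hypothetical sketch of the Hunyuan3D shape + texture flow (NOT verified against 2.1).
# Class and module names are taken from the earlier Hunyuan3D-2 README and may have
# changed; the repo id "tencent/Hunyuan3D-2.1" is likewise an assumption.
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline
from hy3dgen.texgen import Hunyuan3DPaintPipeline

# Stage 1: image (or text-to-image output) in, untextured mesh out.
shape_pipe = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained("tencent/Hunyuan3D-2.1")
mesh = shape_pipe(image="reference.png")[0]

# Stage 2: paint the mesh with PBR materials so it reacts to light realistically.
paint_pipe = Hunyuan3DPaintPipeline.from_pretrained("tencent/Hunyuan3D-2.1")
textured_mesh = paint_pipe(mesh, image="reference.png")

# Export to a standard format and drop it into Blender, Unity, etc.
textured_mesh.export("asset.glb")
```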

Moonshot AI just released Kimi-Dev-72B, and this one isn’t just about code completion; it’s built to fix real bugs, run actual test suites, and score high where most open models break. It’s now the top open-weight model on SWE-bench Verified, a benchmark that evaluates how well a model can patch real-world GitHub issues.

What’s New

  • Best-in-class on SWE-bench Verified: Kimi-Dev-72B hits 60.4%, outperforming every other open-source model.

  • Reinforcement learning with real code: It was trained by testing its own outputs in Docker and only getting rewarded when a fix actually passed the full test suite. No shortcut rewards, no hallucinated “it should work.” (A toy sketch of this reward loop follows below.)

  • Built for dev tools, not just demos: You can run it locally, hook it into VSCode, or deploy it into CI/CD flows. Available now on Hugging Face and GitHub.
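To make that reward signal concrete, here is a toy sketch (emphatically not Moonshot’s training code) of the loop described above: apply the model’s patch inside a disposable Docker container, run the repo’s full test suite, and only then hand back a reward. The image name and shell commands are illustrative.

```python
# Toy sketch of a pass-the-tests reward signal (illustrative, not Moonshot's code).
# Assumes a prebuilt Docker image whose working directory already contains the repo.
import subprocess
import tempfile

def patch_passes_tests(repo_image: str, patch: str) -> float:
    """Return 1.0 only if the patch makes the repo's full test suite pass inside Docker."""
    with tempfile.NamedTemporaryFile("w", suffix=".diff", delete=False) as f:
        f.write(patch)
        patch_path = f.name

    try:
        result = subprocess.run(
            [
                "docker", "run", "--rm",
                "-v", f"{patch_path}:/tmp/fix.diff",
                repo_image,
                "bash", "-lc", "git apply /tmp/fix.diff && pytest -q",
            ],
            capture_output=True,
            timeout=1800,  # don't let a hanging test suite stall training
        )
    except subprocess.TimeoutExpired:
        return 0.0
    return 1.0 if result.returncode == 0 else 0.0

# Example (the image name is made up for illustration):
# reward = patch_passes_tests("my-registry/astropy-testbed:latest", model_generated_diff)
```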


  • In this must-watch keynote, Andrej Karpathy breaks down how LLMs are reshaping software from the ground up. Drawing on his experience at OpenAI, Tesla, and Stanford, he introduces the concept of “Software 3.0,” where natural language becomes the new programming interface and LLMs act as a new kind of computer.

  • Meta’s new Llama Startup Program offers early-stage U.S. startups up to $6,000/month in cloud credits, direct technical support, and access to Llama experts to help build and scale generative AI products. It’s open to startups with <$10M in funding; apply by May 30, 2025.

  • In this candid interview, AI pioneer Geoffrey Hinton, often called the “Godfather of AI,” opens up about the real dangers he sees in today’s AI race, including a 20% chance of human extinction, his regret over creating the tech, and six major threats he believes we’re not ready for. He also explores AI’s potential upsides in healthcare, productivity, and education.

  • This week, we’re spotlighting a full Data Science Program: a free collection of 9 courses designed to build real-world skills across the entire data stack. Perfect for anyone looking to break into data science or sharpen their technical edge.

This week’s updates weren’t about one big breakthrough; they were about breadth.

We’re seeing models go beyond chat and hit every layer of the stack: policy, code, video, 3D, voice. We’re seeing orgs like Amazon acknowledge not just the power of AI but its structural impact. And we’re seeing open models that aren’t just replicating benchmarks, but reshaping how entire workflows get done.

The story now isn’t “can AI do it?” It’s “what else can it take off the table?”

And that’s a shift worth paying attention to, not because of any one change this week, but because of how much ground is being covered at once.

Catch you next week 👋
