Google’s leap towards Quantum Computing

Along with: 12 Days of OpenAI

Hey there, 

Google did the unthinkable - it announced a new quantum chip - Willow. With 105 qubits, it reminds me of my first 486 desktop (it had a 256 MB hard disk and 8 MB of RAM). 

If I assume it will still take 5 years for this to be used commercially, and then apply the same rate of growth we have seen in other silicon hardware, we could be looking at chips with a few thousand qubits commercially available in about a decade!  
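Here is the quick back-of-the-envelope math behind that guess, as a tiny Python sketch. The two-year doubling period is purely my assumption (borrowed from classical Moore's-law scaling), not anything Google has announced:

```python
# Rough extrapolation: start from Willow's 105 qubits and assume a
# Moore's-law-style doubling roughly every two years (my assumption).
start_qubits = 105
doubling_period_years = 2

for years in (5, 10, 15):
    qubits = start_qubits * 2 ** (years / doubling_period_years)
    print(f"~{years} years out: roughly {qubits:,.0f} qubits")

# ~5 years out:  roughly 594 qubits
# ~10 years out: roughly 3,360 qubits  <- "a few thousand qubits in about a decade"
# ~15 years out: roughly 19,007 qubits
```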

What would it mean? It would mean that some of the problems we consider unsolvable today become easy to solve. It would also mean that current encryption methods go for a toss!

Exciting times! Let’s go through the developments this week in more detail:

What would be the format? Every week, we will break the newsletter into the following sections:

  • The Input - All about recent developments in AI

  • The Tools - Interesting finds and launches

  • The Algorithm - Resources for learning

  • The Output - Our reflection 


Imagine solving in just 5 minutes a computational problem that would take a classical supercomputer approximately 10²⁵ years. That’s the power of a quantum computer. 

Google has announced Willow, a quantum chip whose state-of-the-art performance drives significant advancements in quantum computing. 

The chip tackles two major challenges:

  • Achieving an exponential reduction in quantum errors as the number of qubits scales up.

  • Completing a computational task in under five minutes - a task that would take supercomputers nearly 10 septillion (10²⁵) years to match.

Willow uses high-quality components to enhance the performance of its quantum bits, moving us closer to using quantum computers for real-world problem-solving.

Despite these breakthroughs, experts argue that quantum computing is still far from practical applications. With only 105 qubits, Willow remains too small to solve critical industry problems. Its reliance on superconducting qubits, which require extreme cooling, also limits scalability.

Google acknowledges that its current benchmarks lack real-world applications but remains focused on developing algorithms that outperform classical computers and offer practical, commercial value. (source)

ChatGPT meets Siri for smarter task management

On Day 5 of the 12 Days of OpenAI, Sam Altman, Miqdad Jaffer, and Dave Cummings introduced ChatGPT integration for iOS and macOS, aimed at making ChatGPT more accessible. 

This allows users to interact with ChatGPT through Siri, writing tools, and camera control. Users can now learn more about objects by pointing the camera at them, while Siri can delegate tasks to ChatGPT. The writing tools enable users to compose or refine documents with ChatGPT's assistance.

The session demonstrated how easily ChatGPT can be invoked on these devices, helping users with tasks like planning a Christmas party or analyzing images. 

Additionally, macOS now supports Siri's collaboration with ChatGPT to efficiently handle tasks such as processing long documents and answering queries. (source) 

Say Hello to SORA Turbo

Earlier this year, OpenAI introduced Sora, a video-generation AI model designed to bridge the gap between text-based inputs and video creation, marking a significant leap in AI's ability to simulate reality.

Now, OpenAI has launched Sora Turbo, its latest standalone product. This upgraded model generates realistic videos from text and serves as a foundation for AI capable of simulating and understanding reality.

Sora Turbo includes features like 1080p video generation, custom aspect ratios, and tools for detailed input specification. It is faster and more user-friendly than its predecessor.

Available to ChatGPT Plus and Pro users, Sora Turbo offers tiered usage plans with transparency safeguards, including metadata and watermarks, to ensure responsible use. (source)

OpenAI’s new step towards RFT

OpenAI recently expanded its Reinforcement Fine-Tuning (RFT) program, offering accepted applicants the opportunity to develop specialized AI models designed to address complex, domain-specific tasks.

Through RFT, developers can customize OpenAI's models by utilizing high-quality tasks and reference answers to guide the model's performance. This approach helps improve the model’s ability to reason through similar problems and enhances its accuracy in specific domains.
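To make the idea concrete, here is a purely illustrative Python sketch of what a domain-specific dataset of tasks and reference answers could look like. The field names and JSONL layout are my assumptions for illustration, not OpenAI's official alpha API schema:

```python
import json

# Purely illustrative: a few domain-specific tasks paired with expert
# reference answers - the kind of data RFT is described as learning from.
# Field names ("prompt", "reference_answer") are assumptions, not
# OpenAI's official schema.
examples = [
    {
        "prompt": "A patient presents with symptoms X, Y, and Z. "
                  "Which rare condition should be considered first?",
        "reference_answer": "Condition A",
    },
    {
        "prompt": "Classify this support ticket by root cause: "
                  "'App crashes when exporting a report to PDF.'",
        "reference_answer": "PDF export module failure",
    },
]

# Many fine-tuning pipelines expect one JSON object per line (JSONL).
with open("rft_train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```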

The program is open to research institutes, universities, and enterprises, with accepted applicants gaining access to the RFT API in its alpha version. In return, they will provide feedback to OpenAI to help refine the technology. (source)

X recently announced that its AI chatbot, Grok, is now free to access, offering 10 free prompts (including image generation) every two hours and up to 3 image analyses per day, without the need for an X Premium subscription.

This move puts Grok in direct competition with freely accessible chatbots like ChatGPT, Claude, and Gemini, which have similar freemium models; Grok was previously exclusive to Premium subscribers. (source)

Google has surpassed OpenAI to take the top spot in a key AI benchmark, following the release of its Gemini-Exp-1206 experimental model. 

This model outperforms OpenAI's ChatGPT-4o in several categories, including mathematics, creative writing, and visuals, showing a 40-point improvement.

In addition, Google’s free-to-use model can process video content, a capability not yet offered by competitors like ChatGPT and Claude, which are limited to image analysis. (source)

Meta has introduced the Llama 3.3 70B model, which offers performance comparable to the larger Llama 3.1 405B at a lower cost. 

The model outperforms others like Google’s Gemini 1.5 Pro, OpenAI’s GPT-4, and Amazon’s Nova Pro across several benchmarks, including math, general knowledge, and instruction-following. 

It is available for download via platforms like Hugging Face, though platforms with over 700 million monthly active users are required to obtain a special license. (source)

WaveForms AI, a new company founded by former OpenAI veteran Alexis Conneau, is developing advanced AI audio software that can detect emotional cues in voice conversations.

The goal is to create more natural and immersive interactions between humans and machines. WaveForms has raised $40 million in funding, bringing its valuation to $200 million.

Having previously worked on OpenAI's voice assistant project, Conneau now aims to enhance voice assistants' ability to interpret tone, hesitation, and other subtle vocal cues without the need for pre-training on specific interactions. (source)

Tool: SORA

Unlock the future of creativity as you transform your ideas into stunning visuals. Sora, OpenAI's groundbreaking AI video-generation tool, empowers you to create captivating, high-quality videos from text effortlessly.

Problem Statement: Capture a random picture of your surroundings, upload it to Sora, and transform it into a hyper-realistic video by adding your creative touch.

How to Access Sora:

  1. Log in to Sora.

  2. Upload your image.

  3. Copy the image link and paste it into the text tab below, along with your creative prompt.

  4. Click Generate and watch your vision come to life!

Check out the hyper-realistic video I generated with Sora!

To learn more about the tool, check out this Analytics Vidhya blog.

  • In the recent episode of Leading with Data, I had an engaging conversation with Mark Landry, Director of Data Science & Product at H2O.ai. He shared his journey through data science competitions, his role at H2O.ai, and his insights on GenAI, advancements in LLMs, and the transformative potential of agentic AI in addressing business challenges.

  • The "Anyone Can Build AI Agents" free course by Analytics Vidhya offers beginners to create and customize AI agents using the no-code platform Wordware. It guides users to design agents for business or personal use without programming.

As I mentioned in the last edition, this is the most exciting December I have seen by a mile. I expect an announcement about the Agentic AI platform from OpenAI in the next few days and a couple more big announcements in the industry before Santa visits us.

How about you? What do you think is coming between now and Christmas Eve?

How do you rate this issue of AI Emergence?

Would love to hear your thoughts
