• AI Emergence
  • Posts
  • Is "im-good-gpt-2-chatbot" really a peek into GPT-5's capabilities?

Is "im-good-gpt-2-chatbot" really a peek into GPT-5's capabilities?

Along with Is Elon Musk's new AI feature changing how we consume news?

Hey there, 

Last week I ended the newsletter with the following questions and options - 

What are your thoughts on what OpenAI would bring to the world this Summer?

Options:

  1. Prompt-based video generation (Public release of Sora)

  2. Agent-based interface to all OpenAI models (with evolved capabilities)

  3. A model that understands X standard mathematics along with language generation.

  4. A new evolved Search interface showing what Google should have been?

  5. Other

Within 7 days, we see 2 new developments:

  • The reappearance of "im-a-good-gpt2-chatbot" and "im-also-a-good-gpt2-chatbot" on LMSYS with Sam Altman continuing to play cryptic tweets

  • OpenAI playing with the DNS records of search .openai. com

The rumors are that something big should be released later today! I am feeling similar to how I do before big Apple events, with the added uncertainty of whether there is a release today or now.

Phew!! 

How are you feeling?

Irrespective, we announced new speakers including Joshua Starmer of Statquest, Danny Butvanik, Dr. Manish Gupta, Dr. Vikram Vij, and many more! It is surely heating up and you don’t want to miss the DataHack Summit from 7th to 10th August 2024 in Bengaluru.

Now coming to the Newsletter

What would be the format? Every week, we will break the newsletter into the following sections:

  • The Input - All about recent developments in AI

  • The Tools - Interesting finds and launches

  • The Algorithm - Resources for learning

  • The Output - Our reflection 

Please note: This is an abbreviated version of our newsletter due to email length restrictions. For the complete experience, visit our website and enjoy the full, unabridged edition.

Table of Contents

The mysterious chatbot "gpt2-chatbot" has reappeared on LMSYS Org, a site for benchmarking language models. Now, there are two versions: "im-a-good-gpt2-chatbot" and "im-also-a-good-gpt2-chatbot". As per benchmarks, these models display capabilities similar to GPT-4, or possibly even better. 

OpenAI CEO Sam Altman's tweet about "im-a-good-gpt2-chatbot" suggests the company might be testing new models. (source)

Reports indicate that Microsoft is actively training an in-house LLM known as MAI-1.

This new LLM, MAI-1, should reportedly perform on par with today’s top LLMs, including OpenAI’s GPT-4 and Google Gemini Advanced.

The design of MAI-1 means it needs significant computing power and extensive training data, making it costly. It is also reported to have about 500 billion parameters.

Mustafa Suleyman, CEO of Microsoft AI, leads the development of this model. An AI pioneer, Suleyman co-founded DeepMind and Inflection AI before joining Microsoft in March of this year. (source)

Stack Overflow and OpenAI have announced a partnership aimed at enhancing AI development and user experience by integrating Stack Overflow’s technical knowledge directly into OpenAI's ChatGPT. 

This collaboration utilizes Stack Overflow’s OverflowAPI to provide OpenAI with accurate, vetted data, which is essential for refining AI solutions (source)

OpenAI is working on a new ChatGPT feature that would allow it to search the web and include citations. This could make ChatGPT a direct competitor to Google and other AI-powered search engines like Perplexity.  (source)

Elon Musk’s X has recently introduced a new AI-powered feature called ‘Stories,’ which leverages its LLM Grok AI to offer summaries of trending posts.

It is designed to help users quickly grasp the essence of major news stories without needing to read the entire posts.

Currently, it is available only for premium subscribers on iOS and the web. It can be found in the 'For You' section. Users are cautioned about potential inaccuracies due to the AI's tendency to hallucinate and are advised to verify the summaries provided. (source)

Indie pop artist Washed Out, real name Ernest Greene, has pioneered the use of OpenAI's Sora text-to-video technology in the music industry by collaborating with filmmaker Paul Trillo to create a music video for his new song "The Hardest Part”.

The video, which captures the life journey of a couple, is rendered entirely through AI-generated imagery, marking a significant step in AI-assisted creative production. Released by the record label Sub Pop, the video navigates through pivotal moments in the couple's life, from high school romance to family life, without using real actors. (source)

Apple has launched the M4 chip, significantly enhancing the new iPad Pro with its second-generation 3-nanometer technology.

This chip features up to 10-core CPU and GPU, offering 1.5 times faster performance than its M2 predecessor, and introduces advanced capabilities like hardware-accelerated ray tracing and mesh shading.

Furthermore, it supports efficient video streaming with a new Media Engine, reinforcing Apple's commitment to energy efficiency and environmental sustainability. The M4 chip sets a new standard for mobile computing power and efficiency in the iPad Pro. (source)

Early reports also say a 25% jump in performance vs. M3. That is a lot of power for a tablet, but I feel the OS needs an equally big update! I haven’t heard any complaints about the hardware prowess of iPad Pro! May be something Apple is going to announce in WWDC.

Researchers from the University of Pennsylvania, the University of Texas, and Nvidia have successfully trained a quadruped robot using Nvidia’s Eureka platform and its DrEureka LLM agent to balance and walk on a yoga ball.

This achievement represents a significant advancement in sim-to-real transfer technologies, which traditionally required laborious manual tweaking of parameters and reward functions.

DrEureka automated the entire process, from creating initial skills in a simulated environment to implementing these skills in the real world without the need for manual adjustment. (source)

OpenAI has released the "Model Spec" to guide AI interactions on its platforms, ensuring ethical usage by adhering to a hierarchy of commands- Platform, Developer, User, Tool, and rules that mandate legal compliance, protect privacy, prevent NSFW content, and avoid information hazards.

This initiative is part of OpenAI’s commitment to responsible AI deployment, aiming to enhance transparency and engage public discourse on AI ethics. (source)

Tesla's Optimus humanoid robot is set to receive an upgrade that will greatly enhance the dexterity of its hands, enabling it to perform more complex tasks with enhanced flexibility.

Elon Musk shared that the robot's degrees of freedom in its hands will increase from 11 to 22.

This update follows the previous iteration, Optimus Gen 2, which already showed significant improvements over its predecessor by handling delicate objects more gracefully and featuring advanced sensors and actuators. (source)

J.P. Morgan Chase has launched IndexGPT, an AI-powered tool that uses OpenAI's GPT-4 to simplify thematic investing.

Thematic investing focuses on emerging trends rather than traditional sectors or company financials. IndexGPT generates keyword lists for specific themes and then uses natural language processing to scan news and identify relevant companies.

This allows it to curate thematic stock baskets beyond just obvious choices. The tool aims to provide a more accurate, efficient approach to trend-based investing. While AI has enabled trading and research capabilities, generative AI presents new opportunities like IndexGPT for banks to enhance investment products and services. (source)

Machine learning models are very advanced, but we often don't understand how they make decisions. They act like "black boxes." To solve this problem, researchers created a new dataset called GSM1k with 1,250 basic math problems.

This dataset tests if language models can truly reason or if they just memorize information. The researchers compared how models performed on GSM1k versus an existing similar dataset.

Models that did much worse on GSM1k were likely just memorizing, while models that did similarly were demonstrating real reasoning abilities. This helps identify which models can truly understand and reason, rather than just memorizing data. (source)

The Rabbit R1, a $199 AI device from Rabbit Inc., has ignited controversy due to revelations that its Rabbit OS software can be installed on regular Android smartphones, raising questions about the necessity and innovative claims of this supposed standalone AI assistant.

Despite Rabbit's assertions of unique hardware integrations, the ability to replicate the experience challenges the device's perceived value. This incident highlights broader concerns over transparency in AI product marketing and aligns with Rabbit's history of abrupt pivots from NFTs to AI, fueling consumer skepticism. (source)

OpenVoice V2, developed by researchers from MIT CSAIL, MyShell.ai, and Tsinghua University, significantly advances voice cloning technology, enabling enhanced multilingual support and precise control over voice styles.

It features improved audio quality and supports languages like English, Spanish, French, Chinese, Japanese, and Korean. Unique to OpenVoice V2 is its ability to perform zero-shot cross-lingual voice cloning, cloning voices in untrained languages, and adjusting emotional and accent nuances efficiently.

By separating tone color from style and language control, it offers robust, real-time voice cloning, freely available under the MIT License for commercial use. (source)

Google DeepMind has introduced AlphaFold 3, a groundbreaking AI model.  This advanced model significantly enhances the prediction of molecular structures and interactions within cells, boasting at least a 50% improvement over existing methods and, in some cases, doubling the accuracy.

The new model extends its capabilities beyond proteins to encompass a wider array of biomolecules, potentially revolutionizing areas like drug design, material science, and agriculture.

AlphaFold 3's tools are largely accessible for free through the AlphaFold Server, and it's already being employed by Isomorphic Labs in partnership with pharmaceutical companies to tackle real-world drug development challenges.(source)

OpenAI is dedicated to developing AI tools that benefit humanity by addressing global challenges in sectors like agriculture, healthcare, and education.

Their AI models, including ChatGPT, not only enhance accessibility but also respect and support creators through systems like Media Manager, which helps manage copyright content.

Committed to ethical AI, OpenAI ensures its models are trained on diverse data sources, safeguard personal information, and prioritize generating new content over reproducing existing materials. (source)

Tool: Covers.ai

Creating personalized voice covers of your favorite songs can be an exhilarating experience, especially when you don't have to go through years of vocal training. Covers.ai offers an innovative platform that enables you to craft unique voice covers easily, turning your musical dreams into reality.

Problem Statement: Suppose you're an aspiring musician looking to create a unique portfolio to showcase to potential clients or employers. Use Covers.ai to generate voice covers of popular tracks that highlight your unique style and voice capabilities.

  1. Sign up or log in to Covers.ai.

  2. Pick a song and upload it.

  3. Choose an AI cover voice that suits your style.

  4. Enter your email to receive the finished cover.

  5. Accept the terms and conditions and start the process.

  6. Receive your personalized voice cover via email within 5 minutes.

Here is how you can utilize this application:

Use your AI-generated voice covers in your music portfolio to demonstrate your vocal capabilities without the need for extensive training.

For more information about the tool, check out our guide on using Covers.ai on the Analytics Vidhya blog.

  • In the recent video by Open AI, The Possibilities of AI Sam Altman discusses the potential of AI to transform industries and the importance of ethical considerations in AI development.

  • In the recent episode of Leading with Data, I had a chat with Joshua Starmer, the brain behind Starquest and we discussed his unique approach to making complex data concepts accessible and engaging through storytelling.

  • If you are interested in developing intelligent web applications that interact with and query data dynamically. The course titled "JavaScript RAG Web Apps with LlamaIndex" offered by DeepLearning.AI focuses on teaching participants how to build full-stack Retrieval Augmented Generation (RAG) web applications using JavaScript.

Over the last few weeks, I have been spending time with a lot of leaders in the industry. They broadly fall into one of 2 camps:

  • The first camp believes AI is on the march to AGI - the timelines can vary from 3 years to 10 years. These people are going all in, leading the teams from the front, allowing more experiments, and building new products and workflows. A lot of these people think AI is like electricity and would change everything it touches.

  • The second camp feels we are now at the cusp of hype buildup and we will be shortly landing in the Valley of Disillusionment. These leaders are being cautious. They acknowledge that GenAI is powerful - but it will be years before it makes a meaningful impact on businesses. They are viewing GenAI as an incremental change in AI

If you have been following the newsletter, you know where I stand! I would love to know which camp you belong to.

Until next week!

How do you rate this issue of AI Emergence?

Would love to hear your thoughts

Login or Subscribe to participate in polls.

Reply

or to participate.