Musk raises $6 Bn to build 'Gigafactory' of Compute
OpenAI’s new safety team is being led by Altman!
Hey there,
In typical Elon style, he is taking the fight to OpenAI, Google, and Meta head-on! Interestingly, he held a big fundraiser the same week that OpenAI is trying to figure out how to navigate AI safety while delivering returns to shareholders and synergies with its biggest investor, Microsoft!
Let’s look at the developments this week!
What's the format? Every week, we break the newsletter into the following sections:
The Input - All about recent developments in AI
The Tools - Interesting finds and launches
The Algorithm - Resources for learning
The Output - Our reflection
This week, Elon Musk's AI startup, xAI, raised a whopping $6 billion in Series B funding, bringing its pre-money valuation to $18 billion.
Investors, including Andreessen Horowitz and Sequoia Capital, are betting on xAI as a formidable challenger to leading AI companies like OpenAI and Alphabet.
The funding will be used to bring xAI's first products to market, build advanced infrastructure, and accelerate research and development of future technologies.
In other news, xAI is planning to develop a supercomputer by fall 2025 in partnership with Oracle, potentially investing $10 billion to rent cloud servers.
As per a report by The Information, the planned supercomputer, termed the "Gigafactory of Compute," aims to connect 100,000 NVIDIA H100 GPUs, making it four times larger than any current GPU cluster. (source)
OpenAI has established a Safety and Security Committee led by CEO Sam Altman alongside directors Bret Taylor, Adam D’Angelo, and Nicole Seligman.
The previous safety team was disbanded following the departures of key figures such as co-founder Ilya Sutskever and AI safety leads Jan Leike and Daniel Kokotajlo, who left due to concerns about OpenAI's prioritization of rapid product launches over safety measures.
There’s no update on GPT-5 yet, but OpenAI claims to have begun training its next-generation model on the path toward AGI. The committee's first task is to evaluate and develop OpenAI’s processes and safeguards over the next 90 days. (source)
Elon Musk was yet again involved in a public feud on social media, with Meta's chief AI scientist Yann LeCun criticizing Musk's leadership and conspiracy theories as Musk tried to recruit talent for his AI startup, xAI.
Google's new AI-generated summaries feature, AI Overviews, has faced criticism and mockery on social media for producing misleading and sometimes dangerous misinformation, echoing the issues Google faced earlier with its Gemini AI.
Examples include summarizing right-wing conspiracy theories, plagiarizing text, and incorrectly identifying pythons as mammals.
Google CEO Sundar Pichai acknowledged that AI hallucination remains an unresolved issue, but Google claims these errors are uncommon. (source)
Meta Platforms is considering launching a paid version of its AI-powered assistant, Meta AI, to compete with other tech giants in the AI market.
Currently available for free across Meta's social media platforms, Meta AI, built using the Meta Llama 3 model, helps users with tasks such as restaurant recommendations, vacation planning, solving math problems, and generating high-quality animations and images.
The potential premium version could offer unique features, particularly in virtual and augmented reality. (source)
Character.ai, known for its persona-based chatbots, launched in 2022 and recently introduced a new personalized AI platform. It is exploring partnerships with Meta and Elon Musk’s xAI to collaborate on pre-training and developing AI models. These discussions focus on research rather than acquisitions due to regulatory concerns.
Both companies are heavily invested in AI and have recently introduced their own chatbot models. (source)
Researchers investigated whether OpenAI’s GPT-4 could analyze financial statements as effectively as human analysts.
The results were surprising: GPT-4 not only predicted changes in company earnings more accurately than human analysts but also matched the performance of advanced machine learning models, even when given only raw financial data.
Using data from the Compustat database (1968-2021), GPT-4 achieved a prediction accuracy of 60.35%, compared to 52.71% for human analysts. (source)
Tool: Watermelon AI
One of the most common uses of generative AI is building intuitive, empathetic chatbots. But how do you create a chatbot that is both safe and understanding?
Problem Statement: Develop a customer service chatbot for a product.
Solution:
Set Goals
Measure Success Using Relevant KPIs
Set Up Your Platform
Connect Your Channels (e.g., WhatsApp, Facebook Messenger)
Define the Scope
Outline Possible Questions
Configure Your Chatbot
Add Knowledge Base
Test the Chatbot
For more details, read the full article here: GPT-Powered AI Chatbot Using Watermelon AI
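The "Define the Scope," "Add Knowledge Base," and "Test the Chatbot" steps above can be sketched in a few lines of code. This is an illustrative sketch only, not Watermelon AI's actual API (Watermelon is a no-code platform); the knowledge-base entries, the word-overlap matching, and the confidence threshold are all hypothetical choices for the example.

```python
# Hypothetical sketch: scope a chatbot with a small knowledge base,
# answer in-scope questions, and fall back to a human for everything else.

# Knowledge base: in-scope questions mapped to canned answers.
KNOWLEDGE_BASE = {
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
    "what are your shipping times": "Orders ship within 2-3 business days.",
    "how do i cancel my order": "Go to Orders, select the order, and click Cancel.",
}

FALLBACK = "I'm not sure about that - let me connect you to a human agent."

def _overlap(query_words: set, kb_words: set) -> float:
    """Fraction of the query's words that appear in a knowledge-base question."""
    return len(query_words & kb_words) / len(query_words) if query_words else 0.0

def answer(query: str, threshold: float = 0.5) -> str:
    """Return the best-matching answer, or escalate when confidence is low.

    The threshold keeps the bot inside its defined scope: a query that
    overlaps too little with every known question is handed off instead
    of being answered badly.
    """
    q_words = set(query.lower().split())
    best_q = max(KNOWLEDGE_BASE, key=lambda k: _overlap(q_words, set(k.split())))
    if _overlap(q_words, set(best_q.split())) >= threshold:
        return KNOWLEDGE_BASE[best_q]
    return FALLBACK

# "Test the Chatbot" step: one in-scope and one out-of-scope question.
print(answer("how do i reset my password?"))
print(answer("do you sell gift cards?"))
```

A production bot would replace the word-overlap matcher with embedding-based retrieval and an LLM for phrasing, but the structure (scoped knowledge base, confidence threshold, human fallback) stays the same.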
In the recent episode of Leading with Data, I had a chat with Ravit Dotan about her professional experiences and insights on Responsible AI, the importance of prioritizing ethical AI practices, frameworks for new AI startups, long-term investment fundamentals, and much more in a thrilling rapid-fire round.
If you are interested in building and customizing multi-agent systems, this course by DeepLearning.AI is perfect for you. It covers how to enable agents to take on different roles and collaborate to accomplish complex tasks using the AutoGen framework.
Ethan Mollick talks about four narrow singularities that research might face. If you are wondering what a narrow singularity is, here is how Mollick defines it: “A narrow singularity is a future point in human affairs where AI has so altered a field or industry that we cannot fully imagine what the world on the other side of that singularity looks like.” A must-read article for every researcher.
Came across this video on the Reverse Turing Test! In the video, the human gave themselves away by fiddling with their phone, but regardless, it represents the world we will all be living in shortly.
I might be dreaming things - but what are the chances of Ilya Sutskever joining xAI, given the ‘alignment’ of the safety concerns he and Musk have raised independently? It would surely strain the relationship between Ilya and OpenAI, and also add fuel to the ongoing legal battle between Musk and OpenAI!
Only time will tell what is in store!
How do you rate this issue of AI Emergence? Would love to hear your thoughts!