Adobe gives us a peek into the future!

Along with: GPT-4 Turbo takes the crown back! New electric Atlas!

Hey there, 

Did you watch the Adobe Premiere Pro release video? It gives you a glimpse of what future video editing could look like. Interestingly, it integrates with Sora, Runway, and many other tools. This is a slight (though not direct) divergence from Adobe’s stance of not using copyrighted material for training - let’s see when it comes out.

Additionally, GPT-4 Turbo takes the crown back, and Boston Dynamics launches a new electric Atlas, teasing the world with an interesting maneuver.

With that thought in mind, let’s dive in! 

What would be the format? Every week, we will break the newsletter into the following sections:

  • The Input - All about recent developments in AI

  • The Tools - Interesting finds and launches

  • The Algorithm - Resources for learning

  • The Output - Our reflection 

  • Question to ponder before we meet next!

Please note: This is an abbreviated version of our newsletter due to email length restrictions. For the complete experience, visit our website and enjoy the full, unabridged edition.


Adobe is enhancing its Premiere Pro software with a new GenAI video model as part of its Firefly family.

With this update, users will be able to:

  1. Extend video clips’ length by using Generative fill

  2. Add new objects using text prompts

  3. Remove objects from a scene

  4. Use third-party AI integrations with Runway, Pika Labs, and OpenAI’s Sora, giving Premiere Pro users greater flexibility to extend shots and generate B-roll.

Adobe hasn’t announced a specific release date yet.

Additionally, Adobe plans to use Content Credentials labels to identify the AI models used in creating video content, enhancing transparency and user control over the creative process. (source)

xAI has launched Grok 1.5 Vision - its first-generation multimodal model. 

Grok can now process visual information such as documents, diagrams, charts, screenshots, and photographs.

Grok excelled on the RealWorldQA benchmark, which assesses a model's understanding of real-world spatial concepts, although it proved less effective in other tests. (source)

Microsoft has expanded its partnership with UAE-based AI company G42, investing $1.5 billion to accelerate AI innovation in the Middle East, Central Asia, and Africa using Microsoft Azure. 

This strategic move includes Microsoft gaining a minority stake in G42 and a board membership, aiming to boost regional AI skills with a new $1 billion developer fund. 

The partnership emphasizes secure and compliant AI deployment and will see G42's technology infrastructure migrated to Azure. 

Additionally, G42's Arabic Large Language Model, Jais, will now be available on Azure, enhancing AI accessibility for over 400 million Arabic speakers. (source)

Boston Dynamics has retired its hydraulic Atlas robot, introducing a new, fully electric model tailored for practical, real-world applications.

Developed in collaboration with Hyundai, this next-generation Atlas is designed to be stronger and more agile, capable of handling complex tasks across various industries.

It features enhanced motion capabilities and innovative gripper variations for diverse operational needs.

As part of its commercial rollout, Boston Dynamics will partner with select companies, starting with Hyundai, to refine and enhance Atlas’s applications, integrating advanced AI and machine learning technologies. (source)

Google DeepMind's CEO, Demis Hassabis, announced at a TED conference that Google's investment in AI is projected to exceed $100 billion, aiming to surpass the joint efforts of Microsoft and OpenAI in their $100 billion AI supercomputer project.

This investment underlines Google’s commitment to achieving artificial general intelligence (AGI), with Google's robust computing resources playing a crucial role. 

Additionally, Google is considering monetizing its core services with premium GenAI features, a first for the company. (source)

Reka AI has launched Core, a new frontier-class multimodal language model, which showcases advanced capabilities in understanding and processing images, videos, and audio alongside textual data. (source)

According to Reka’s release, Core performs on par with GPT-4V and Claude 3 Opus on benchmarks such as MMLU, GSM8K, and HumanEval.

Core offers a 128k context window, multilingual support, flexible deployment, and advanced multimodal understanding among its many features. (source)

Poe has launched a new multi-bot chat feature, enabling users to interact with multiple AI models in one conversation thread.

This functionality allows for context-aware comparisons of model responses and the ability to invoke any Poe bot with an @-mention. Users can simultaneously conduct research, generate creative writing, and produce custom images.

This feature not only enhances the user experience by drawing on the specialized capabilities of various models but is also reflected in Poe's updated logo, which symbolizes multi-model collaboration. (source)

Instagram is testing a new AI program called "Creator A.I." that helps influencers interact with fans through chatbots.

This initiative, part of Meta's broader push to integrate AI across its platforms, allows influencers to use AI-generated responses in direct messages, and potentially comments, to maintain engagement without the overwhelming workload.

The chatbots will mimic influencers' communication styles using data from their past interactions on the platform.

Initially, these messages will disclose their AI origin. This tool aims to enhance connections between creators and their audiences while reducing the manual effort required to handle large volumes of fan interactions. (source)

OpenAI, supported by Microsoft, has officially launched its first Asia office in Tokyo, marking a significant expansion into Japan. Known for its ChatGPT chatbot, the AI startup is actively exploring new revenue streams and has developed a Japanese-optimized AI model.

Tadao Nagasaki, former president of Amazon Web Services in Japan, has been appointed to lead its operations. Prominent Japanese clients include Toyota and Daikin Industries. 

Concurrently, Microsoft has announced plans to invest $2.9 billion in Japan's cloud and AI infrastructure over the next two years, highlighting a broader trend of investments by U.S. tech giants in the region. (source)

Meta has added an AI-powered chatbot to Instagram that allows users to chat with an AI, get answers to queries, and even generate images from text prompts.

The chatbot, represented by a colorful circular logo in the messages interface, can also help users find relevant content like Reels on specific topics, show accounts related to those topics, and provide information from Instagram's terms of service.

While offering potential utility, the addition of an AI chatbot encouraging users to interact with artificial intelligence rather than real people on a social media platform raises some unsettling questions.

However, it seems clear Meta has big plans for generative AI integration, with the ability for users to generate videos from the chatbot potentially coming to Instagram soon. (source)

Apple is gearing up for a major revamp of its Mac lineup, introducing the new M4 chip family designed for superior AI capabilities.

The transition will begin later this year with updates to models like the iMac, MacBook Pro, Mac mini, and more - all powered by different M4 chip variants optimized for AI tasks. (source)

In parallel, iOS 18's initial AI features may run Apple's large language model entirely on-device, without cloud processing. (source)

OpenAI has reportedly fired two researchers - Leopold Aschenbrenner and Pavel Izmailov - following an internal investigation into alleged information leaking. 

Aschenbrenner was seen as a rising star on OpenAI's safety team and a close ally of chief scientist Ilya Sutskever. The dismissals come amidst ongoing internal tensions at OpenAI, including a previous attempted ousting of CEO Sam Altman by Sutskever that led to employee backlash. 

Aschenbrenner, a young and vocal advocate for safe AI development affiliated with the effective altruism movement, which prioritizes existential AI threats, had worked at the Future Fund philanthropic organization before joining OpenAI. 

However, OpenAI has not disclosed details on what specific information was allegedly leaked. (source)

Tool - LearnGPT

Are you looking for a free, structured resource to learn about topics such as nutrition and project management?

Problem Statement: Need a structured, interactive resource to learn Java.

Solution:

  • Go to LearnGPT.

  • Select the topic: Java.

  • Interact with the tool using prompts, for example, “Explain like I am 5”.

For additional details about the tool, refer to our blog.

Our Opinion: While LearnGPT is a great tool for accessing structured, text-based, interactive courses, it does not replace courses taught by human instructors.

  • This podcast features Dario Amodei, former OpenAI researcher and now CEO of Anthropic, as he discusses the exponential growth of artificial intelligence and delves into the societal challenges and implications of these rapid advancements.

  • This week on “Leading with Data”, I talk to Vijay Gabale, Co-Founder of Infilect. In the episode, we talk about his story, staying focused (Saying NO), bootstrapping his venture, and how a simple consumer insight ultimately led him to product-market fit for his startup.

  • Stanford released the 2024 AI Index report. Some of the key takeaways include continued industry dominance in research, the rising cost of AI, AI’s impact on labour roles, and growing regulation. You can download the report here.

Over the last week, I have been thinking about how the world would look with agents.

Simply put, agents are smart models working independently to pursue a specified goal. They can interact with other agents (if needed) and get the job done. You already see simple agents in action, like Google Assistant filtering incoming calls or Google Maps working out the most efficient route for you - you get the idea!

Extend this a little, and you can see that with current LLMs and the better AI models to come, these agents will be able to handle increasingly complex tasks over time.

To stimulate the thinking, imagine an ‘evolved’ agent that actively manages my day and calendar. It could not only coordinate my meetings but also update stakeholders in real time if I am running late. It could pull context for my upcoming meetings from my emails and documents, create summaries and action items from the meetings, and even tell me if I am not sleeping enough for my schedule and activities!
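Purely to make the idea concrete, here is a minimal, hypothetical sketch of what such an agent’s decision loop might look like. Nothing here is a real product or API - call_llm is a stand-in for an actual model call, and the “tools” are just print statements - but it shows the core pattern: a goal, a set of actions the agent may take, and a model choosing the action that serves the goal.

```python
from datetime import datetime

# Hypothetical stand-in for a real LLM API call; here it simply "decides" to reschedule.
def call_llm(prompt: str) -> str:
    return "reschedule"

# Hypothetical tools the agent can invoke on my behalf.
TOOLS = {
    "notify": lambda meeting: print(f"Telling attendees of '{meeting}' that I'm running late."),
    "reschedule": lambda meeting: print(f"Pushing '{meeting}' back by 15 minutes."),
}

def calendar_agent(meeting: str, start: datetime, my_eta: datetime) -> None:
    """Goal: keep stakeholders informed whenever I'm running late."""
    if my_eta <= start:
        return  # on time - nothing to do
    delay = my_eta - start
    # Ask the model which available action best serves the goal in this situation.
    action = call_llm(f"I am {delay} late for '{meeting}'. Pick one of: {list(TOOLS)}")
    TOOLS.get(action, TOOLS["notify"])(meeting)

calendar_agent(
    "Weekly sync",
    start=datetime(2024, 4, 22, 10, 0),
    my_eta=datetime(2024, 4, 22, 10, 20),
)
```

A real version would wire call_llm to an actual model, connect the tools to calendar and messaging APIs, and run continuously - but the goal-tools-decision structure stays the same.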

This is just one agent. I think every person will be using a few agents in the next few years. It should be liberating, as a lot of mundane tasks get delegated. But it would also enable people to use agents to create junk engagement at scale. There is a barrage of robocalls coming our way (unless regulated). How about an agent that figures out the best way to keep you hooked to your phone? Or to games?

An AI vs. AI war is going to take center stage in every aspect of our lives. What do you think?

How do you see agents changing the world?


Do share your thoughts - would love to hear them.

How do you rate this issue of AI Emergence?

Would love to hear your thoughts

