I’ve been tracking humanoid robotics startup Figure for a while, but thanks to a string of significant progress updates, they are just now starting to really grab people’s attention. This interview with CEO Brett Adcock is the most revealing update I’ve seen to date.

Screenshot of Figure CEO Brett Adcock on the Brighter with Herbert podcast

AI and robotics are profound enablers and drivers of each other’s development. I think we’ll see more progress in the next 3-5 years than we have seen in the previous couple of decades, and the capability and deployment curves will only rise exponentially from there.

I’ve been working with LLMs for language learning over the past year and have found them extraordinarily useful. The moment OpenAI introduced custom GPTs I immediately rushed to create My Korean Tutor to address some specific challenges for Korean language learners.

Screenshot for My Korean Tutor custom GPT for Korean language learning

It does a lot, but my primary goal in creating it was to make the process of daily journaling in Korean a little bit easier for learners who are still struggling to acquire vocabulary and basic grammar. My Korean Tutor helps reduce the friction by letting the user propose a topic and then giving them some basic vocabulary, grammar, and examples that they can use to get started.

Screenshot for My Korean Tutor custom GPT for Korean language learning

As for the other features, some are visible via action buttons, but if you ask it “What can you do?” it will reveal even more:

Screenshot for My Korean Tutor custom GPT for Korean language learning

If you’ve seen the Rabbit r1 but found it confusing, this short Rough Guide to the Rabbit r1 might help:

Basically, it’s like a very smart 10-year-old child with perfect recall who does exactly what you tell it to. It can watch and mimic what you do on a computer, sometimes with uncanny accuracy. Whether you’re editing videos and could use an “auto mask” assistant, or you’re always booking trips online and get tired of the details, you let the Rabbit watch what you do a few times, and then it just…does it for you.

Technically, a Rabbit is an agent, a “software entity capable of performing tasks on its own.” Those of us with a love of sci-fi will remember the hotel agent in the book Altered Carbon; we ain’t there yet, but we’re getting closer.

Of course, we’ll have to see how it actually lives up to that promise once units land in users’ hands in April/May. I should be receiving one then, and I’ll share my thoughts soon after. FWIW, I don’t expect it to be perfect, and it doesn’t have to be perfect to be a success. The real question is: does this human-machine interface approach and its software layer show promise? Is there enough of a foundation there to build on, or is it destined for the Museum of Failure? Their pricing strategy, keeping it cheap and subscription-free, shows that they understand there’s both an attraction to, and a presumption of failure for, a device like this. I think we’ll know whether they can clear that hurdle soon after it drops.

This is the Rabbit R1, an AI-first device with a novel OS and form factor that aims to be something more than a phone but not quite a complete computer replacement. It’s shooting for that holy-grail slot of AI companion.

We’re destined to start filling junk drawers with several generations of these AI devices before someone gets it right, and this one will almost certainly end up there as well. Still, there are some interesting ideas and design choices at work here, and the price ($199, no subscription) is right, so of course I ordered one immediately.

Rabbit R1 AI Device

You can see a demo here.

If you didn’t watch the videos Google dropped with its Gemini announcement yesterday, I highly recommend carving out a half hour or so to do so. They’re excellent demonstrations of the power of multimodal AI models and just a hint of what is about to explode into every aspect of your personal and professional life - no matter what you do or who you are. Get ready for it.

The announcement post and associated technical report are super interesting - if you’re into that sort of thing. 2024 is going to be wild.

This is an interesting and rare peek at the progress of humanoid AI robotics company Figure.

Chair Lord Lisvane and committee member Lord Clement-Jones discuss the recent findings of the House of Lords Artificial Intelligence in Weapon Systems Committee in this video:



Highlights and the full report, ‘Aspiration vs reality: the use of AI in autonomous weapon systems and the future of warfare’, are available here.

It’s hard to believe that a film from 1970 could get so much about AI so right, but ‘Colossus: The Forbin Project’ does exactly that. I won’t reveal any spoilers, but its exploration of issues related to alignment and emergent behaviors was prescient. It’s also a really entertaining film, albeit a bit dark.

DALL·E 3 illustration of Colossus: The Forbin Project

I fed the plot to ChatGPT and asked it to create the image above. It seemed like the appropriate thing to do. You can stream Colossus: The Forbin Project for free at the Internet Archive.

A team of researchers from Rutgers and the University of Michigan has developed WarAgent:

“…the first LLM-based Multi-Agent System (MAS) that simulates historical events. This simulation seeks to capture the complex web of factors influencing diplomatic interactions throughout history”

They used it to simulate events related to both world wars and published their findings in War and Peace (WarAgent): Large Language Model-based Multi-Agent Simulation of World Wars. This is really exciting stuff.

Here’s the abstract:

Can we avoid wars at the crossroads of history? This question has been pursued by individuals, scholars, policymakers, and organizations throughout human history. In this research, we attempt to answer the question based on the recent advances of Artificial Intelligence (AI) and Large Language Models (LLMs). We propose WarAgent, an LLM-powered multi-agent AI system, to simulate the participating countries, their decisions, and the consequences, in historical international conflicts, including the World War I (WWI), the World War II (WWII), and the Warring States Period (WSP) in Ancient China. By evaluating the simulation effectiveness, we examine the advancements and limitations of cutting-edge AI systems’ abilities in studying complex collective human behaviors such as international conflicts under diverse settings. In these simulations, the emergent interactions among agents also offer a novel perspective for examining the triggers and conditions that lead to war. Our findings offer data-driven and AI-augmented insights that can redefine how we approach conflict resolution and peacekeeping strategies. The implications stretch beyond historical analysis, offering a blueprint for using AI to understand human history and possibly prevent future international conflicts. Code and data are available at github.com/agiresear…

H/T Ethan Mollick
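The core loop of an LLM-based multi-agent simulation like this can be sketched in a few lines. To be clear, this is a minimal illustrative sketch and not WarAgent’s actual code (that lives in the authors’ repository); a simple rule-based stub stands in for the LLM call that would normally drive each country agent’s decision.

```python
# Minimal sketch of a multi-agent simulation turn loop, in the spirit
# of WarAgent: each agent observes the previous round's events and
# emits an action, which becomes the next round's shared context.
from dataclasses import dataclass, field

@dataclass
class CountryAgent:
    name: str
    stance: str = "neutral"
    log: list = field(default_factory=list)

    def decide(self, world_events):
        # Stand-in for an LLM call. In a real system the prompt would
        # include the country's profile, goals, and the event history.
        if any("mobilizes" in e for e in world_events):
            action = f"{self.name} mobilizes in response"
            self.stance = "mobilized"
        else:
            action = f"{self.name} holds position"
        self.log.append(action)
        return action

def run_simulation(agents, initial_events, rounds=3):
    events = list(initial_events)
    for _ in range(rounds):
        # All agents act on the same shared view of the last round.
        events = [agent.decide(events) for agent in agents]
    return agents

agents = run_simulation(
    [CountryAgent("A"), CountryAgent("B")],
    ["A mobilizes troops"],
)
```

Even with trivial stand-in logic, this structure shows where emergent dynamics come from: each agent’s output feeds back into every other agent’s input on the next turn.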

As mentioned earlier, I have been constantly creating custom GPTs (both public and private) as technology demonstrators or to offload or minimize repetitive tasks. I just finished creating one to help me stay on top of some of the topics I post about here. This screenshot illustrates how simple the process can be.

Screenshot that shows interaction with Custom GPT

It’s important to note that I’m not using this to automate blogging or social media posts (although that’s certainly possible). It’s just a handy tool that helps me stay on top of a number of rapidly evolving topics - a research assistant.

This is an excellent deep dive into AI’s potential and the job disruption that will follow - and why you should start adapting now.

Here’s a video roundup of the humanoid robot projects from Apptronik, Sanctuary AI, Agility Robotics, Tesla, Unitree, and Figure that I’m actively tracking. AI was already driving rapid progress in the field, but recent advancements have pushed the financial incentives through the roof. We should start seeing significant deployments of this form factor (beyond current pilots at places like Amazon) within the next 2-3 years - China is leaning into this in a big way - which should only serve to accelerate progress even more.

I finished Mustafa Suleyman’s The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma over the Thanksgiving holiday. I rarely read these kinds of books. And given that I work with AI and have done significant forecasting work on its potential impact, I don’t really need a primer on the risks and opportunities. However, Suleyman presents an exceptionally well-architected and supported case that’s focused as much on how society is likely to respond to “the wave” as the technology itself. The book is dense with insights but is neither overly technical nor a dry slog. If you don’t feel up to speed on the topic, and even if you are, this book is definitely worth your time.

Here’s a perfect example of what I mean when I say that we’ve unlocked science fiction:

Today, in a paper published in Nature, we share the discovery of 2.2 million new crystals – equivalent to nearly 800 years’ worth of knowledge.

What a mind-blowing announcement from the DeepMind team. I knew AI would transform materials science - but not this abruptly. And to think that we will soon see these kinds of announcements in virtually every discipline imaginable.

Scientific American: “Human intelligence may be just a brief phase before machines take over. That may answer where the aliens are hiding.”

This is an idea that I explored with GPT-4 here.

The Air Force is apparently now denying that it ran a simulated exercise in which an AI drone killed its operator.

Whether you believe it happened or not, it’s a good reminder that there will be less and less room for error in the design phase of autonomous weapon systems - or even of more benign autonomous AI platforms. There are countless ways to mitigate unwanted outcomes (design reviews, adversarial testing, red teaming, and AI-enabled testing, to name just a few), but if we don’t do all of them, and do them well, the potential for disaster is high.

General purpose robots are the logical next giant leap forward after widespread adoption of AI and it looks like we might not have to wait long for it to happen. Check out Figure.ai.

Figure may or may not be the first to crack this, but even if they aren’t, progress in all the domains required to enable this vision will soon rapidly accelerate thanks to AI.

A big (literally) announcement from Anthropic:

“We’ve expanded Claude’s context window from 9K to 100K tokens, corresponding to around 75,000 words! This means businesses can now submit hundreds of pages of materials for Claude to digest and analyze, and conversations with Claude can go on for hours or even days.”
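The tokens-to-words conversion in that quote follows the common rule of thumb of roughly 0.75 English words per token; the exact ratio varies by tokenizer and by text, so treat this as a back-of-envelope check, not a spec.

```python
# Back-of-envelope check of the claim: 100K tokens ~ 75,000 words.
# 0.75 words per token is a widely used heuristic for English prose,
# not an exact figure from Anthropic.
context_tokens = 100_000
words_per_token = 0.75  # heuristic; actual ratio depends on the text
approx_words = int(context_tokens * words_per_token)
print(approx_words)  # 75000
```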

Google A/I, I mean I/O, was packed with updates that are going to impact billions of people.

Wendy’s Is Bringing a Google-Powered AI Chatbot to Its Drive-Thru: “Wendy’s Chief Information Officer Kevin Vasconi told the Journal that the chatbot is “probably on average better” than the company’s best customer service rep.” - GIZMODO

I stumbled across this interview of author Justin Cronin - he gets it:

“People start to think about things like universal basic income when you hear about AI taking all of these menial jobs and office tasks.

It’s not just going to be menial tasks. I’m in a college English department. Everybody is asking what we do about ChatGPT and student papers. I’m like, who cares? We need to think about where this is going to be in about five years or 10 years, after it’s spent a decade here interacting with the entire data structure of the human species. For instance, I’m glad that my career as a novelist has maybe another 10 years in it. Some point I’m going to do something else. Writers do retire! Because I think an enormous amount of cultural content, from film to novels and so on will be produced rapidly and on the cheap by artificial intelligence.

…I think all the problems we’re facing now, we’re going to face in increasing amounts until something catastrophic happens. Except for the fact that I have no idea what AI is going to do, and all bets are off. All bets are off.”

AI promises almost unimaginable benefits but there’s no denying that it’s coming for a lot of jobs - and much faster than you think. This acknowledgement from IBM’s CEO today represents a significant moment in the transformation that is underway. Countless other CEOs have realized that this pivot is now mandatory and are moving forward with much less transparency and honesty - but not for long. We’ll soon be at a point where the market demands clear and decisive AI roadmaps (complete with staff reductions) from all companies.