Company Makes “I, Robot” an AI-Powered Reality

Ociacia /

While still classified as a startup in the robotics industry, Figure is doing remarkable things with OpenAI’s advanced language technology. Its first major AI robot, Figure 01, understands and responds to human interaction in real time, with no noticeable processing delays. Combining advanced visual and auditory processing, the robot is capable of “fast, low-level, dexterous robot actions.” Performing those actions while simultaneously holding a conversation is something previously unseen at this level.

Getting investments from Jeff Bezos and Nvidia certainly didn’t hurt the company, either.

Putting those investments to work, the company has advanced its projects dramatically in a very short amount of time. Senior AI engineer Corey Lynch was featured in clips online interacting with the robot and demonstrating its capabilities in real time. The robot quickly and correctly identifies objects such as apples, dishes, and cups, and it can also organize them and put them where they belong on the table. When asked for something to eat, the robot handed over the apple, showing that it can categorize items as well.

According to a tweet from Lynch, this isn’t all the robot can do. “We are now having full conversations with Figure 01, thanks to our partnership with OpenAI. Our robot can: describe its visual experience, plan future actions, reflect on its memory, [and] explain its reasoning verbally.” To do this, the AI processes the entire conversation along with camera images and generates text responses, which the robot then reads aloud.
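The loop Lynch describes can be sketched in outline: the full conversation history plus the latest camera frames go to a language model, and the generated text is what the robot speaks. This is a minimal, hypothetical sketch, not Figure's actual code; the model is passed in as a plain callable so the example stays self-contained rather than invoking any real OpenAI API.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ConversationLoop:
    """Hypothetical sketch of a see-talk-act loop like the one described."""
    model: Callable[[List[dict]], str]          # (full history) -> text reply
    history: List[dict] = field(default_factory=list)

    def step(self, user_text: str, camera_frames: List[bytes]) -> str:
        # The entire conversation so far, plus the current images, is sent
        # to the model in one shot.
        self.history.append(
            {"role": "user", "text": user_text, "images": camera_frames}
        )
        reply = self.model(self.history)
        # The generated text is recorded and is what the robot reads aloud.
        self.history.append({"role": "assistant", "text": reply})
        return reply

# Stand-in model: just reports how much context it was given.
loop = ConversationLoop(
    model=lambda h: f"I see {len(h)} message(s) of context."
)
print(loop.step("Can I have something to eat?", [b"<frame>"]))
```

Because the full history is resent on every step, the model can "reflect on its memory" without the robot needing any separate memory mechanism.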

Of all the achievements in AI and robotics over the last three decades, this marks the closest anyone has come to bringing AI-driven robots, like those in “I, Robot,” to reality. Given OpenAI’s current programming and the technology’s susceptibility to manipulation, this kind of advancement could easily continue to grow unchecked.