Jenga is a pastime that many humans can still enjoy even after more than a few drinks, making it a popular bar game. But for robots, the game remains a challenge that tests how they both see and feel the physical world, a combination of skills that, once mastered, will have major implications far beyond winning a free bar tab.
Professor Alberto Rodriguez and graduate student Nima Fazeli from the Massachusetts Institute of Technology tell Inverse that this breakthrough is key to training robots in the real world. Their research was published Wednesday in the journal Science Robotics.
By using artificial intelligence, the two researchers enabled their robot to process visual and touch data in real time, as opposed to feeding it hundreds of spreadsheets of pre-collected examples. This sort of real-time data processing could one day lead to assembly line robots that learn on the fly from tactile information, without having to be reprogrammed. Domestic bots could likewise pick up new cleaning skills after just a brief trial run. Machines could eventually be trained like apprentices.
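The team's actual model is far more sophisticated, but as a rough back-of-the-envelope illustration of what "learning from each poke" might look like, here is a toy Python sketch. It is not the researchers' code, and every name in it is hypothetical: it simply folds one camera-plus-force observation at a time into a crude "is this block safe to push?" rule, instead of training on a big offline dataset.

```python
# Hypothetical sketch of incremental learning from vision + touch.
# Not the MIT team's code; all names here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Observation:
    image_features: list[float]  # e.g., block positions estimated from a camera
    force_reading: float         # resistance felt by the gripper during a poke

class IncrementalJengaLearner:
    """Learns a rough 'is this block safe to push?' rule one trial at a time."""

    def __init__(self, force_threshold: float = 0.5):
        self.force_threshold = force_threshold
        self.history: list[tuple[Observation, bool]] = []

    def update(self, obs: Observation, tower_survived: bool) -> None:
        # Record the outcome of each attempt -- tens of trials, not thousands.
        self.history.append((obs, tower_survived))
        # Naive adaptation: if a push that felt "safe" toppled the tower,
        # tighten the threshold so similar pushes are avoided next time.
        if not tower_survived and obs.force_reading < self.force_threshold:
            self.force_threshold = obs.force_reading

    def looks_safe(self, obs: Observation) -> bool:
        # A block that resists strongly is probably load-bearing: leave it alone.
        return obs.force_reading < self.force_threshold
```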
Read More: Video Shows Beer-Fetching Lego Robot That Could Take on Boston Dynamics
“The ability to learn how to interact with the tower with care and confidence is key to developing a robotic manipulation skill,” Rodriguez and Fazeli write in an email to Inverse. “A second key reason why we picked Jenga is data efficiency. How do we get the robot to learn from tens or hundreds of attempts rather than tens or hundreds of thousands of attempts? Both of these are important for many tasks that we do with our hands and that would be great for robots to help us with, from assembling phones to sorting through trash.”
In a video released by the researchers, a robotic arm pokes the tower of wooden blocks to explore what moves it could possibly make; it quickly identifies the stuck pieces and steers clear of them. Eventually it becomes a Jenga expert that might just have a shot at beating a (likely drunk) human. This sets it apart from many robots today that rely exclusively on visual data to go about their tasks.
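In spirit, that exploratory poking resembles a simple probe-and-filter loop: nudge each block gently, feel the resistance, and rule out the ones that fight back. The sketch below is a hypothetical, heavily simplified version of that idea, where probe_block() is a made-up stand-in for a real robot command (here it just simulates a resistance reading).

```python
# Hypothetical probe-and-filter loop in the spirit of the video.
# probe_block() stands in for a real robot command; here it merely
# simulates the resistance force felt when nudging a block.

import random

STUCK_FORCE = 1.0  # assumed threshold separating loose from load-bearing blocks

def probe_block(block_id: int) -> float:
    """Stand-in sensor: return a simulated resistance for this block."""
    return random.uniform(0.0, 2.0)

def find_movable_blocks(block_ids: list[int]) -> list[int]:
    """Gently poke each block and keep only the ones that feel loose."""
    movable = []
    for block_id in block_ids:
        if probe_block(block_id) < STUCK_FORCE:
            movable.append(block_id)  # feels loose: a candidate move
        # Blocks that resist strongly are likely load-bearing; steer clear.
    return movable

print(find_movable_blocks(list(range(12))))
```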
Now that this training method has proven it can crush it at Jenga, it's up to researchers to translate it to more practical tasks. Perhaps learning how to sort recycling from compost based on sight and feel could be its next big test.
Until then, this robotic gripper will happily make you look like a fool at your next Jenga bar session.
Related Video: This robotic hand was taught human-like reflexes.