Video Reveals the Surprising Challenges of Teaching A.I. to Dress Itself

by Danny Paez

Crawling into a t-shirt might be one of the few tasks we humans can manage even when we're barely awake and still scratching the sleep out of our eyes. But the fact that we've mastered how to dress ourselves (more or less) belies just how complex the series of motions required to go from being in the buff to being clothed enough to step outdoors really is.

One person who understands this as well as anyone is Alex Clegg, a computer science Ph.D. student at the Georgia Institute of Technology who has been focused on using machine learning to teach artificial intelligence how to dress itself. As he tells Inverse, while A.I. is smart enough to predict which patients will get sepsis or challenge world champions in complex strategy games, teaching machines how to put on a shirt has proved to be an elusive goal.

“Cloth is complex,” he explains in an email. “It can respond immediately and drastically to small changes in the position of the body and often constrains motion… Clothing also has the tendency to fold, stick and cling to the body, making haptic or touch sensation essential to the task.”

So why, exactly, is a computer whiz trying to break down how we suit up in the morning? Clegg explained that there are a few possible applications for A.I. that understands the deceptively simple art of getting dressed. In the short term, Clegg's findings could one day speed up the process of making lifelike 3D animations. More importantly, these insights could help lead to the design of assistive robots that can help take care of human beings young and old.

Way to go, buddy!

The researchers started by teaching a computer how to master getting an arm into a sleeve. In a paper to be presented at the SIGGRAPH Asia 2018 conference on computer graphics in December, Clegg and his colleagues explain the precise technique they used, a type of machine learning called “deep reinforcement learning.”

The goal of deep reinforcement learning is to teach robots how to complete certain motions and tasks by having them try them over and over again. In the case of the dressing A.I., Clegg's team had the A.I. attempt the process in a virtual environment and rewarded it when it seemed to be on the right track.
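To make that trial-and-reward loop concrete, here's a minimal sketch in Python. Everything in it is an illustrative stand-in, not the team's actual code: the toy DressingEnv environment, its one-dimensional "hand," and the crude per-action scores are all invented for the example.

```python
import random

class DressingEnv:
    """Toy environment: move a hand along a line toward a sleeve at position 10."""
    def __init__(self):
        self.hand_position = 0

    def step(self, action):
        old_dist = abs(10 - self.hand_position)
        self.hand_position += action                   # action: move by -1, 0, or +1
        new_dist = abs(10 - self.hand_position)
        reward = 1.0 if new_dist < old_dist else -1.0  # reward progress toward the sleeve
        done = new_dist == 0
        return self.hand_position, reward, done

# A crude "policy": a score per action, nudged toward whatever earned reward.
preferences = {-1: 0.0, 0: 0.0, 1: 0.0}

for episode in range(200):
    env = DressingEnv()
    for _ in range(50):                         # cap steps so every episode ends
        if random.random() < 0.1:               # occasionally explore at random
            action = random.choice([-1, 0, 1])
        else:                                   # otherwise exploit the best score so far
            action = max(preferences, key=preferences.get)
        _, reward, done = env.step(action)
        preferences[action] += 0.1 * reward     # reinforce actions that paid off
        if done:
            break

print(preferences)  # the "move toward the sleeve" action ends up scored highest
```

Over a couple hundred episodes, the score for moving toward the sleeve climbs while the others sink, which is the "right track" reward idea in miniature. The real system works in a vastly richer simulated space, with a full body and cloth physics in place of a single number.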

Clegg explained that it took hundreds of thousands of tries for the sausage-shaped animated character they developed to learn how to put on a jacket or t-shirt. After all, their bot had to learn how to perceive touch so it could yank the shirt when it needed to. The team also needed to incorporate a physics engine to make the simulation as true to life as possible.
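One way to picture how that touch sensation might feed into the reward signal is a shaped reward that pays the agent for progress but docks it for pulling the cloth too hard. Again, this is purely a hypothetical sketch: the dressing_reward function, its progress and cloth_tension inputs, and the tension limit are invented for illustration, not drawn from the paper.

```python
# Illustrative reward shaping, not the paper's actual formulation: combine
# progress on the dressing motion with a haptic penalty so the agent learns
# not to yank the cloth too hard.

def dressing_reward(progress, cloth_tension, tension_limit=5.0):
    """progress: how much farther the arm moved into the sleeve this step.
    cloth_tension: a stand-in haptic reading from the physics simulation."""
    reward = progress                        # encourage getting dressed
    if cloth_tension > tension_limit:        # discourage tearing or snagging the shirt
        reward -= cloth_tension - tension_limit
    return reward

# Good progress, but the agent is tugging the shirt too hard: net negative.
print(dressing_reward(progress=0.3, cloth_tension=7.5))  # 0.3 - 2.5 = -2.2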

In the end, Clegg’s clumsy, animated son did manage to learn how to get its shirt on, even if a bit inelegantly. Still, the results may be most useful as a proof-of-concept for how deep learning can be used for solving nuanced problems.

The struggle is real.

“It is exciting to imagine the host of problems we can solve with deep reinforcement learning,” he says. “We look forward to continuing working toward enabling robotics and finding solutions to big problems which affect the everyday lives of so many people.”

Converting the findings of this study to work with real robots will take more effort to harmonize the software and hardware sides. But Clegg's findings lay out a path for researchers who are interested in freeing our futuristic robot caretakers from their current limitations.
