Science

Humans taught a robot how to be a teaching assistant in just 3 hours

But the human teacher's workload wasn't necessarily lightened as a result.

by Sarah Wells

What if a robotic teaching assistant could help lighten a teacher’s workload? New research set out to answer just that question, reporting that a teacher was able to train a robotic teaching aid to be effective in as little as three hours.

The research, published Wednesday in the journal Science Robotics, used a style of machine learning called SPARC (Supervised Progressively Autonomous Robot Competencies) that allowed a robotic teaching aid to first learn from a teacher’s instructions, then carry out the teaching tasks itself under the teacher’s guidance, and ultimately teach the children autonomously.
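To make the idea concrete, here is a minimal, hypothetical sketch of how such a supervise-then-hand-over loop might look in code: the robot proposes an action based on what it has seen the teacher do, the teacher accepts or corrects it, and every decision feeds back into the robot’s policy. The nearest-neighbour policy, the toy state features, and names like SparcPolicy and run_session are illustrative assumptions, not the authors’ actual implementation.

```python
# Illustrative sketch of a SPARC-style supervision loop (assumptions, not the paper's code).
import random
from dataclasses import dataclass


@dataclass
class Demonstration:
    state: tuple   # simplified game state (e.g., child progress, engagement)
    action: str    # action the teacher chose or approved


class SparcPolicy:
    """Learns actions from teacher demonstrations, then proposes them itself."""

    def __init__(self):
        self.memory: list[Demonstration] = []

    def learn(self, state, action):
        self.memory.append(Demonstration(state, action))

    def propose(self, state):
        if not self.memory:
            return None  # nothing learned yet: wait for the teacher to act
        # Nearest-neighbour lookup: reuse the action taken in the most similar past state.
        closest = min(self.memory,
                      key=lambda d: sum((a - b) ** 2 for a, b in zip(d.state, state)))
        return closest.action


def run_session(policy, states, teacher_decide):
    """One tutoring session: the robot proposes, the teacher accepts or corrects."""
    corrections = 0
    for state in states:
        suggestion = policy.propose(state)
        chosen = teacher_decide(state, suggestion)  # the teacher keeps final say
        if chosen != suggestion:
            corrections += 1
        policy.learn(state, chosen)  # every executed action becomes training data
    return corrections


if __name__ == "__main__":
    # Toy teacher: prefers "encourage" when engagement (second feature) is low.
    def teacher(state, suggestion):
        preferred = "encourage" if state[1] < 0.5 else "give_hint"
        return suggestion if suggestion == preferred else preferred

    policy = SparcPolicy()
    session = [(random.random(), random.random()) for _ in range(20)]
    print("corrections this session:", run_session(policy, session, teacher))
```

In this sketch, the number of corrections per session is the same signal the researchers tracked to judge whether the robot’s growing autonomy was actually reducing the teacher’s workload.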

Teaching children is a rewarding and necessary, though often stressful and thankless, job. That is especially true as class sizes continue to grow and giving individual attention to every child becomes increasingly difficult for teachers. What makes this robot especially well suited to such classrooms is that, unlike other machine learning approaches that demand technical expertise to train a robot, it requires no technical knowledge on the part of the user. Instead, it can be taught different tasks through an intuitive, demonstration-based method.

“The standard approach to designing robotic controllers requires multiple conversations between the engineers coding the behavior and the domain experts,” write the authors. “Robot learning from end users (e.g., by using SPARC) would bypass these costly iterations, allowing end users to directly teach an efficient controller adapted to their specific needs in a minimally intrusive way.”

In the case of this study, the researchers were evaluating how well a robotic aid could support a child playing a learning game about feeding animals and maintaining farmland. They visited two primary schools in Plymouth, England, and worked with 75 children between eight and ten years old and a teacher who had no prior knowledge of the study’s hypotheses.

Child interacting with the robotic teaching aid during a laboratory test before heading to the primary schools. Science Robotics

They hypothesized that, after being directed by and then learning to mimic the actions of the teacher, the robotic aid would be able to act autonomously with fewer and fewer corrections from the teacher.

That wasn’t exactly how it played out.

By measuring the number of corrections made during the robot’s supervised and autonomous teaching phases, the researchers found that the human teacher made about the same number of corrections in each. That suggests the robot was not necessarily adapting the way they’d imagined and that the teacher’s workload was not being reduced.

Looking more closely at the teacher’s notes from the experiment, though, it appeared that the sheer influx of suggestions from the robot, not necessarily whether they were correct, was what overwhelmed the teacher and caused her to dismiss more of the robot’s actions than she wanted to.

“[I] found it difficult to know how best to respond,” the teacher wrote about the second session of the experiment. “I’m dismissing robot’s suggestion more than I actually want to.”

Even after the teacher had more practice monitoring the robot, she wrote that she still found herself struggling to relinquish control of the bot.

“Controlling the robot is really easy now, although I still tend not to let it carry out its suggested actions even when they are valid,” she wrote.

The researchers write that this reaction could be unique to the one teacher involved in their trials, but that experiments with more teachers would be needed to know for sure. They also note that slowing the rate at which the robot offers suggestions could help.

Diagram of the experimental set-up, in which the teacher supervised the robot and the robot interacted with the student. Science Robotics

As for how it helped the children, the researchers found that the robot’s autonomous actions were comparable to those it took under the teacher’s supervision, but that they didn’t necessarily improve the students’ learning or participation beyond existing levels.

Human-machine cooperation is hard, especially in sensitive and dynamic environments like classrooms. While the robot fell short of some expectations, it demonstrated safe, appropriate behavior and quick learning.

“We have shown here a path forward, and our approach makes it possible for autonomous social behaviors to be learned in an online manner, gradually taking over the social interaction from the human operator,” write the researchers.

The researchers say that such technology has a place not only in classrooms but also in assistive robotics and eHealth, and that it should be tested in more diverse environments so that SPARC can be generalized more easily in the future.

Read the abstract here:

Striking the right balance between robot autonomy and human control is a core challenge in social robotics, in both technical and ethical terms. On the one hand, extended robot autonomy offers the potential for increased human productivity and for the off-loading of physical and cognitive tasks. On the other hand, making the most of human technical and social expertise, as well as maintaining accountability, is highly desirable. This is particularly relevant in domains such as medical therapy and education, where social robots hold substantial promise, but where there is a high cost to poorly performing autonomous systems, compounded by ethical concerns. We present a field study in which we evaluate SPARC (supervised progressively autonomous robot competencies), an innovative approach addressing this challenge whereby a robot progressively learns appropriate autonomous behavior from in situ human demonstrations and guidance. Using online machine learning techniques, we demonstrate that the robot could effectively acquire legible and congruent social policies in a high-dimensional child-tutoring situation needing only a limited number of demonstrations while preserving human supervision whenever desirable. By exploiting human expertise, our technique enables rapid learning of autonomous social and domain-specific policies in complex and nondeterministic environments. Last, we underline the generic properties of SPARC and discuss how this paradigm is relevant to a broad range of difficult human-robot interaction scenarios.