Engineering news

MIT creates a more intuitive brain-controlled robot

Tanya Blake

Human-robot communication can be difficult due to the complexity and nuances of language, but MIT researchers hope to bypass this by reading brain signals to send commands to machines instead.


The team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University has developed a prototype feedback system that lets people correct a robot's mistakes instantly, using nothing more than their brains.

This technology could one day be used to intuitively control industrial and medical robots or autonomous vehicles.

The data MIT used to control the humanoid robot comes from an electroencephalography (EEG) monitor, which records brain activity. The system can detect when a person notices an error as the robot performs an object-sorting task, and machine-learning algorithms developed by the team allow it to classify brain waves in 10-30 milliseconds, according to the researchers.
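The study does not publish its classification code, but conceptually the real-time step amounts to scoring a short, fixed-length window of multichannel EEG the moment the robot signals its intent. The sketch below is a hypothetical illustration of that idea in Python; the window length, sampling rate, channel count, feature choice and linear model are all assumptions for illustration, not details taken from the MIT/BU work.

```python
import numpy as np

# Hypothetical sketch: score a short EEG window as "error perceived" or not.
# All constants and the linear model are assumptions, not published details.
WINDOW_MS = 30          # the study reports classification within 10-30 ms
SAMPLE_RATE_HZ = 1000   # assumed EEG sampling rate
N_CHANNELS = 48         # assumed electrode count

def classify_window(eeg_window: np.ndarray, weights: np.ndarray, bias: float) -> bool:
    """Return True if the window looks like an error-related potential (ErrP).

    eeg_window: array of shape (n_channels, n_samples) holding raw EEG.
    A linear classifier over simple per-channel features stands in for
    whatever model the team actually trained.
    """
    features = np.concatenate([eeg_window.mean(axis=1), eeg_window.std(axis=1)])
    score = features @ weights + bias
    return score > 0.0

# Example usage with random data standing in for a live EEG buffer:
rng = np.random.default_rng(0)
window = rng.standard_normal((N_CHANNELS, int(SAMPLE_RATE_HZ * WINDOW_MS / 1000)))
weights = rng.standard_normal(2 * N_CHANNELS)
print(classify_window(window, weights, bias=0.0))
```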

“Imagine being able to instantaneously tell a robot to do a certain action, without needing to type a command, push a button or even say a word,” said CSAIL director Daniela Rus. “A streamlined approach like that would improve our abilities to supervise factory robots, driverless cars, and other technologies we haven’t even invented yet.”

Brain-computer interfaces are not new: the University of Florida has developed software to fly a group of drones using brainwaves, and engineering firm Honeywell has developed an EEG system to fly planes.

Boeing, which part-funded the MIT study, told PE that it hopes the research will "improve human robot collaboration in manufacturing settings".

Phil Freeman, who focuses on materials and technology research at Boeing, said that as industrial automation becomes more sophisticated and adaptable, people will increasingly work alongside it in a "dynamic and collaborative environment".

"Robots need a way to recognise communication from the people they work with and this includes gestures, spoken commands, and possibly even direct recognition of brain signals," explains Freeman. "Our focus is on investigating better ways for people and robots to work collaboratively in manufacturing environments.”

More intuitive control

However, the MIT researchers say that past work in EEG-controlled robotics has required training humans to “think” in a prescribed way that computers can recognise. For example, an operator might have to look at one of two bright light displays, each of which corresponds to a different task for the robot to execute.

This often means a lengthy training process, and the effort of modulating thoughts in this way can be tiring and require intense concentration.

This project attempted to make the experience more natural. To do this, the researchers focused on brain signals called “error-related potentials” (ErrPs), which are generated whenever our brains notice a mistake. As the robot indicates which choice it plans to make, the system uses ErrPs to determine whether the human agrees with the decision.

“As you watch the robot, all you have to do is mentally agree or disagree with what it is doing,” said Rus. “You don’t have to train yourself to think in a certain way — the machine adapts to you, and not the other way around.”

ErrP signals are extremely faint, which means that the system has to be fine-tuned enough to both classify the signal and incorporate it into the feedback loop for the human operator. In addition to monitoring the initial ErrPs, the team also sought to detect “secondary errors” that occur when the system doesn’t notice the human’s original correction.

“If the robot’s not sure about its decision, it can trigger a human response to get a more accurate answer,” said CSAIL research scientist Stephanie Gil.
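To make the feedback loop concrete, below is a hypothetical sketch of the decision logic for the binary sorting task, assuming the ErrP detector returns a flag and a confidence score: flip the robot's planned choice when a confident ErrP is seen, and fall back to a second reading (the "secondary error" taken after the robot re-signals its intent) when the first detection is uncertain. The function names, confidence threshold and interface are illustrative, not taken from the study.

```python
from enum import Enum

class Choice(Enum):
    LEFT_BIN = "left"
    RIGHT_BIN = "right"

def flip(choice: Choice) -> Choice:
    """Switch to the other bin in the two-choice sorting task."""
    return Choice.RIGHT_BIN if choice is Choice.LEFT_BIN else Choice.LEFT_BIN

def resolve_choice(intent: Choice,
                   errp_detected: bool,
                   confidence: float,
                   secondary_errp: bool = False,
                   threshold: float = 0.6) -> Choice:
    """Decide the robot's final action for the binary sorting task.

    A confident ErrP means the human disagreed, so the choice is flipped.
    A low-confidence reading defers to the secondary-error signal gathered
    after the robot re-signals its intent. The 0.6 threshold is an
    assumption, not a figure from the study.
    """
    if errp_detected and confidence >= threshold:
        return flip(intent)
    if confidence < threshold and secondary_errp:
        return flip(intent)
    return intent
```

For example, `resolve_choice(Choice.LEFT_BIN, errp_detected=True, confidence=0.9)` returns `Choice.RIGHT_BIN`: the robot switches bins because the operator mentally disagreed with its plan.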

While the system cannot yet recognise secondary errors in real time, Gil expects the model to improve to upwards of 90% accuracy once it can.

The team believes that future systems could extend to more complex multiple-choice tasks.

Co-author of the study, Salazar-Gomez, added that the system could one day be used to help people who are unable to communicate verbally, as well as to control prostheses.

Wolfram Burgard, a professor of computer science at the University of Freiburg who was not involved in the research, said that given how difficult it can be to translate human language into a meaningful signal for robots, “work in this area could have a truly profound impact on the future of human-robot collaboration."

The project was funded, in part, by Boeing and the National Science Foundation.
