Behind the Headlines

MATLAB and Simulink behind today’s news and trends

This robot learns by reading your mind

Posted by Lisa Harvey

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University collaborated to design a system that combines neuroscience and machine learning to create a mind-reading robot. Seriously. This robot can read your mind to learn whether what it is doing is right or wrong.

According to Forbes, “The researchers created a system that allows a robot to correct its mistakes in real time when a human observer notices that a mistake is being made. The observer just sits there and watches; she doesn’t physically interact with the robot or anything else. If the observer recognizes that a mistake is occurring, the robot changes course and does the right thing.”

Image credit: MIT CSAIL.

For this project, the team used a humanoid robot named “Baxter” from Rethink Robotics. They designed a system that detects when a person notices an error as Baxter performs an object-sorting task. If Baxter tries to place an item in the incorrect bin, the system sends a command to Baxter to correct his choice. The research is detailed in the paper Correcting Robot Mistakes in Real Time Using EEG Signals.


The system acquires data from an electroencephalography (EEG) cap that records brain activity. As the human observer watches Baxter perform his sorting task, the signals from the cap are streamed to a computer.

Image credit: MIT CSAIL/YouTube.

A dedicated computer then analyzes the EEG signal, looking for error-related potentials (ErrPs). ErrPs are our naturally occurring responses to an unexpected error. These signals show up when we observe or make a mistake.

ErrPs are useful for robotics because they appear quickly, typically within 500 ms of a person noticing an error. When the computer detects one, it can signal Baxter that he’s made a mistake in time for him to correct his actions.
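The team’s signal-processing code isn’t published (they worked in MATLAB and Simulink), but the core epoching step, slicing out the 500 ms window after Baxter’s choice in which an ErrP would appear, can be sketched in Python. The sampling rate, channel count, and baseline-correction scheme below are illustrative assumptions, not values from the study:

```python
import numpy as np

FS = 256          # assumed EEG sampling rate in Hz
N_CHANNELS = 8    # assumed electrode count

def extract_epoch(eeg, event_sample, fs=FS, window_ms=500):
    """Slice the post-event window in which an ErrP would appear.

    eeg          : (n_channels, n_samples) continuous recording
    event_sample : sample index at which Baxter's choice became visible
    """
    n = int(fs * window_ms / 1000)
    epoch = eeg[:, event_sample:event_sample + n]
    # Baseline-correct each channel against its mean before the event
    baseline = eeg[:, max(0, event_sample - n):event_sample]
    return epoch - baseline.mean(axis=1, keepdims=True)

# Synthetic demo: 2 s of noise with the "event" at t = 1 s
rng = np.random.default_rng(0)
eeg = rng.standard_normal((N_CHANNELS, 2 * FS))
epoch = extract_epoch(eeg, event_sample=FS)
print(epoch.shape)  # (8, 128): 500 ms of 8-channel data at 256 Hz
```

Each such epoch would then be turned into a feature vector for the classifier described below.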

Machine Learning

The researchers tested the system with 12 human participants, none of whom had prior experience with human-robot interfaces or EEG. They were simply asked to watch Baxter perform his sorting task.

The computer system processed the EEG signals to identify when ErrPs occurred. This system used MATLAB and Simulink to capture, process, and classify these signals.

Since each person’s ErrPs differ slightly, machine learning was used to train the system to recognize each participant’s individual signal. Each participant completed the trial four times. The first run was used to collect training data; during it, Baxter made mistakes at random in order to elicit ErrPs.

After that first trial, an elastic-net classifier was trained for each of the 12 participants. The classifier learned that person’s unique responses by looking for the changes in electrical activity that corresponded to Baxter making a sorting mistake.
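The paper doesn’t publish its elastic-net implementation, and the team worked in MATLAB rather than Python, but the per-participant training step might look like the following sketch using scikit-learn’s elastic-net-penalized logistic regression. The feature dimensions, labels, and regularization settings are placeholders, not values from the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Stand-ins for one participant's first-trial data: each row is a
# flattened post-event EEG epoch; label 1 = the observer saw an error.
X = rng.standard_normal((200, 8 * 128))
y = rng.integers(0, 2, size=200)

# Elastic-net regularization mixes L1 (sparsity over channels and time
# points) with L2 (stability); l1_ratio controls the blend.
clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="elasticnet", solver="saga",
                       l1_ratio=0.5, C=1.0, max_iter=2000),
)
clf.fit(X, y)
preds = clf.predict(X[:5])
print(preds.shape)  # one error/no-error label per epoch
```

In the actual study, one such model was fit per participant, since ErrP waveforms vary from person to person.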

The team’s machine-learning algorithms classify brain waves in less than 30 milliseconds, which enabled classification to run online during the second through fourth trials for each participant. The participant’s EEG was monitored, and when an ErrP was detected, a signal was sent to Baxter so that he could “learn” from his mistake and change his response in real time.
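The online loop can be sketched as follows. The linear decision rule, feature size, and `send_correction` callback are hypothetical stand-ins for the team’s actual pipeline; the point is that a trained linear classifier fits comfortably inside the 30 ms budget the paper reports:

```python
import time
import numpy as np

def classify_errp(features, weights, bias=0.0):
    """Linear decision rule: positive score -> ErrP detected (assumption)."""
    return float(features @ weights + bias) > 0.0

def on_new_epoch(features, weights, send_correction):
    """Classify one epoch online; fire the correction if an ErrP is seen."""
    start = time.perf_counter()
    if classify_errp(features, weights):
        send_correction()  # e.g., tell Baxter to switch bins
    return (time.perf_counter() - start) * 1000  # latency in ms

# Demo with dummy weights and a callback that records corrections
corrections = []
rng = np.random.default_rng(2)
w = rng.standard_normal(8 * 128)
latency = on_new_epoch(np.ones(8 * 128), w,
                       lambda: corrections.append("switch"))
print(latency)  # a single dot product runs in well under 30 ms
```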

Secondary ErrPs

The classification wasn’t always perfect. The system identified ErrPs and corrected Baxter 70% of the time.

The researchers found that sometimes the ErrPs weren’t enough to get Baxter to change his response; that is, the signals were misclassified. That’s when the researchers noticed that secondary ErrPs, the neural equivalent of “oops… you’re still doing it wrong,” came into play. If Baxter didn’t correct his behavior, the EEG showed a secondary ErrP, which proved easier to classify. Incorporating these boosted performance to 90%.
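One way to picture this two-stage behavior is as a fallback rule: if the primary ErrP score is ambiguous, let Baxter continue, and let a clearer secondary ErrP trigger the correction. The thresholds and scores below are invented for illustration and are not from the paper:

```python
def decide(primary_score, secondary_score,
           primary_thresh=0.7, secondary_thresh=0.5):
    """Two-stage decision rule (illustrative thresholds only).

    primary_score   : classifier confidence from the first ErrP window
    secondary_score : confidence from the follow-up window, observed
                      only if Baxter kept doing the wrong thing
    """
    if primary_score > primary_thresh:
        return "correct on primary ErrP"
    if secondary_score > secondary_thresh:
        return "correct on secondary ErrP"
    return "no error detected"

# A weak primary response that the clearer secondary ErrP rescues:
result = decide(0.4, 0.8)
print(result)  # -> correct on secondary ErrP
```

The secondary signal is larger and more stereotyped, which is why, per the paper’s results, adding it lifted end-to-end accuracy from 70% to 90%.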

Human-robot interface

The system achieved 90% accuracy with novice test subjects, participants who received no special training in how to communicate with the robot.

As Newsweek reported, “The biggest breakthrough is the way in which the robot is able to understand what the human controller is thinking without the person having to change their natural thought patterns. Past work in EEG-controlled robotics has required training humans to think in a prescribed way that computers can recognize.”

This shows promising potential for improving human-robot interfaces, a field that is receiving intense interest due to innovations such as ADAS and home assistants. Imagine if your Alexa device could read your mind when it played the wrong song, or your self-driving car could tell that you thought it made a wrong turn.

5 Comments (Oldest to Newest)

Burcin Ozturk replied on : 1 of 5

This is great! I believe this is one of the best design in AI GAP Analysis. I am interested in to understand more around the coding to read the brain waves.

sabir replied on : 2 of 5

i hope that This new approach also includes the ability to convert corrected setpoints into motion to control actuators at arms, legs to help people hit by parkinson’s for example to lead a normal life

Trent replied on : 3 of 5

What my 19 yr old son commented after I shared this article: “goodbye polygraph, hello mind-reading neuroscience!” ,

wesam19 replied on : 4 of 5

So interesting. Actually, I like your writing style it’s clear and voluble. Please could you write something about Cybernetics?

Benjamin He replied on : 5 of 5

EEG would be interfered easily and rely on the deployment environment. fNIR should be better choice.
