Brain’s mistakes provide insight for human & machine learning

Credit: Emily Giedzinski

Brain-machine interfaces are devices that allow their subjects to control external devices using only their thoughts. Using these interfaces, a team of researchers at Carnegie Mellon has found that the brain makes mistakes because its conception of the world isn’t always an accurate depiction of how the world really works.

The project primarily focuses on the process of learning while using a brain-machine interface. Learning can be defined as an organism becoming increasingly able to adapt to its environment. This type of learning is nondeclarative, meaning that it cannot be expressed in words. Nondeclarative information can be skills or certain motor abilities, in contrast to declarative memory, which consists of facts and information. Nondeclarative memory can easily be understood as "knowing how," whereas declarative memory is "knowing that."

In this experiment, the brain-machine interface used data received from the activity of the primary motor cortex — the area of the brain that controls the execution of movement — to drive cursors on a monitor. The project involved looking at the movements made by the subject, especially the mistakes that they made.

Those movements required the subject's brain to issue motor commands, which generated a pattern of neural activity that drove the cursor. The subject's brain, in this way, was learning how to perform these tasks accurately. Learning, then, can also be defined as becoming increasingly able to perform a task without making mistakes.

Researchers found that the observed patterns of neural activity matched the subject's expectations of how the brain-computer interface worked. To arrive at these results, they used techniques developed by Matt Golub, a postdoctoral fellow in the Department of Electrical and Computer Engineering, over the course of his thesis.

“We were able to figure that those neural activity patterns are appropriate to drive what the subject thought the brain-computer interface was,” Steve Chase, assistant professor in the Department of Biomedical Engineering and the Center for the Neural Basis of Cognition, said. “It suggests that one of the limiting factors in brain-machine interfaces is the learning itself. To make the devices work better, should we record more neurons or is there a deficit of learning that we need to get around?”

The brain-machine interface works by relating each neuron's firing to a particular direction. When recording neural activity, researchers observe that neurons tend to be "directionally tuned": they fire more if the subject intends to move in a particular direction and less if the subject wants to move in a different direction.

When one looks at the action potentials (electrical pulses that travel through a neuron to facilitate communication) fired for movements in a particular direction, and plots the firing rate as a function of movement direction, one observes a smooth curve. That curve has a peak, which represents the neuron's preferred direction.
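This smooth, single-peaked tuning curve is often modeled as a cosine of the movement direction. The sketch below simulates one such neuron; the function name and the baseline and modulation-depth values are illustrative assumptions, not figures from the study.

```python
import numpy as np

# Hypothetical cosine tuning curve: the firing rate peaks at the neuron's
# preferred direction and falls off smoothly for other directions.
# baseline and depth are made-up, illustrative values (spikes/s).
def firing_rate(direction_deg, preferred_deg, baseline=10.0, depth=8.0):
    """Mean firing rate for a movement in direction_deg."""
    delta = np.deg2rad(direction_deg - preferred_deg)
    return baseline + depth * np.cos(delta)

directions = np.arange(0, 360, 45)                 # sampled movement directions
rates = firing_rate(directions, preferred_deg=135)

# The peak of the curve marks the neuron's preferred direction.
estimated_preferred = directions[np.argmax(rates)]
print(estimated_preferred)  # 135
```

Plotting `rates` against `directions` would show the smooth, single-peaked curve the article describes, with its maximum at the preferred direction.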

These devices work by finding the peak of these curves, so that when a particular neuron fires, the cursor is moved in that neuron's preferred direction. As the subject receives visual feedback from her actions, she gets better at controlling the cursor. While there are some broad similarities across subjects, the particular patterns of activity and the relationships between neurons differ significantly from subject to subject. In fact, there is no way to match neurons from one subject to another.

The first step in building a brain-computer interface is finding the relationship between neural activity and cursor movements, then creating a "mapping" of sorts and connecting that to the device. Based on the data from the experiment, the team concluded that the subjects would misinterpret the working of the interface and "expect" it to work a certain way.
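One common way to implement such a mapping is a population-vector-style decoder: each neuron votes for its preferred direction, weighted by how strongly it fires above baseline. This is a minimal sketch of that idea; the preferred directions, firing rates, and baseline here are invented for illustration and are not the study's actual decoder.

```python
import numpy as np

# Four hypothetical neurons with preferred directions spaced around the circle.
preferred_deg = np.array([0.0, 90.0, 180.0, 270.0])
preferred_rad = np.deg2rad(preferred_deg)
# Unit vector along each neuron's preferred direction, one row per neuron.
pref_vectors = np.stack([np.cos(preferred_rad), np.sin(preferred_rad)], axis=1)

def decode_cursor_velocity(rates, baseline=10.0):
    """Map a vector of firing rates to a 2-D cursor velocity.

    Each neuron contributes a push along its preferred direction,
    weighted by its firing rate above baseline.
    """
    weights = rates - baseline        # modulation above baseline drives movement
    return weights @ pref_vectors     # weighted sum of preferred-direction vectors

# Neuron 0 (preferred direction 0 deg) fires strongly; neuron 2 is suppressed.
rates = np.array([18.0, 10.0, 2.0, 10.0])
vx, vy = decode_cursor_velocity(rates)
# The cursor is pushed toward 0 degrees (positive x, no vertical movement).
```

The subject's "internal model" of the interface amounts to an expectation about this mapping; the study's observation is that the neural activity patterns matched what the subject *thought* the mapping was, which is not always what it actually was.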

Their internal models influenced their learning processes, and they continued to make fewer mistakes as time went on. This project has a myriad of applications; just as humans learn from their mistakes, we may soon be able to observe robots doing the same.

Rather than a robot intent on performing its duty without realizing whether it is performing it well, we may soon see robots that can better adapt and learn to perform their tasks. "By understanding how to improve the brain-machine interface, we could immediately apply this to telesurgery," Chase said. A trained surgeon operating in a remote location would need a very good model of the device to practice surgical maneuvers.

One could study the mismatches between the internal models and the machine’s response to train the surgeons to perform that action.

“[The research team and I] want to understand the relationship between the training that we give and how quickly you would develop accurate internal models,” Chase said.