Dimensionality reduction helps neuroscience studies

Byron Yu is an assistant professor of electrical and computer engineering and biomedical engineering and a member of the Center for the Neural Basis of Cognition. (credit: Byron Yu)

Analyzing a large system by looking only at its individual parts is often a hopeless task.

For example, in team sports, individual players might not seem to be acting rationally in the moment; over time, however, their actions make much more sense in the context of the team's overall strategy. With this approach in mind, Byron M. Yu, assistant professor of electrical and computer engineering and biomedical engineering, and John P. Cunningham, a professor at Columbia University, have developed a new strategy that applies big-data techniques to better understand the complex neuronal networks in our brains.

In a University press release, Yu said, “One of the central tenets of neuroscience is that large numbers of neurons work together to give rise to brain function. However, most standard analytical methods are appropriate for analyzing only one or two neurons at a time. To understand how large numbers of neurons interact, advanced statistical methods, such as dimensionality reduction, are needed to interpret these large-scale neural recordings.”

Current neuroscience research methods try to locate and analyze significant individual neurons and to gain insight into the behavior of the entire system through them. However, the data are often noisy, because neurons display a great deal of behavior that is hard to interpret within the scope of a single neuron. Unfortunately, solving this problem is not as simple as looking at more neurons at a time.

When the investigation scales up from a single neuron to a population of neurons, the sheer volume of data becomes a problem in itself. Recordings from systems with thousands of neurons produce so much data that sifting out the important information is difficult. This is why computational big-data techniques can drive large advances in the field.

The specific technique that Yu and his team used is known as dimensionality reduction, which simplifies the many variables of a large system into a smaller set of explanatory variables. In neuroscience, the number of variables is the number of neurons in the system.

The method characterizes many neurons together so that they can be described by a smaller set of explanatory variables. This works because, while the actions of individual neurons are hopelessly complicated, simpler organizing principles may exist in the population as a whole.
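As a rough illustration of this idea, here is a minimal sketch in Python using principal component analysis (PCA), one common dimensionality reduction method. The simulated data, the array sizes, and the choice of PCA are assumptions made for illustration; they are not the specific method or dataset from Yu and Cunningham's paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical recording: 100 neurons whose activity is driven by just
# 3 shared latent signals plus independent noise. All sizes here are
# illustrative assumptions, not details from the study.
n_neurons, n_timepoints, n_latents = 100, 500, 3
latents = rng.standard_normal((n_latents, n_timepoints))
mixing = rng.standard_normal((n_neurons, n_latents))
activity = mixing @ latents + 0.1 * rng.standard_normal((n_neurons, n_timepoints))

# PCA via the SVD: find the directions of greatest shared variance
# across the population.
centered = activity - activity.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / (s**2).sum()  # fraction of variance per component

# A handful of components accounts for nearly all the variance, so the
# 100-neuron recording reduces to a few latent trajectories over time.
latent_trajectories = U[:, :n_latents].T @ centered  # shape (3, 500)
print(f"variance captured by top 3 components: {explained[:3].sum():.3f}")
```

Here the three recovered trajectories, not the hundred individual neurons, become the explanatory variables one analyzes.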

Dimensionality reduction is already being used to help understand neuronal systems. As described in Yu's paper, published in Nature Neuroscience, a group studying a motor system successfully employed the technique. The group had gathered data from the primary motor cortex, the part of the brain associated with planning and executing movements, during a task with 108 different experimental conditions.

By applying dimensionality reduction, they were able to analyze the massive amount of data and conclude that the system first performed preparatory activity, which then led to the movement being executed.

When looking at individual neurons alone, this two-step process was completely out of reach. Simplifying the model and studying the population as a whole was integral to the group's success.

With more and more data being gathered through new technologies and advances in neuroscience, computational methods may no longer be merely assistive; they may become a necessity. Yu's work is an essential step toward bringing the two fields together, and it will likely have widespread applications throughout neuroscience research.