SciTech

Guest speaker discusses theory of neural dimensionality

Assistant professor of applied physics Surya Ganguli spoke at CMU on a theory of neural dimensionality. (credit: Surya Ganguli)

The human brain contains approximately 100 billion neurons. With this number in mind — pun intended — you can imagine how difficult it is to collect and understand datasets in the world of neuroscience. How, then, can neuroscientists analyze these huge datasets in an efficient yet biologically meaningful manner?

Last Thursday, Carnegie Mellon’s Department of Electrical and Computer Engineering hosted a seminar exploring exactly this question. Surya Ganguli, an assistant professor of applied physics and, by courtesy, of neurobiology and electrical engineering at Stanford University, spoke in the Scaife Hall Auditorium about how neuroscientists can use high-dimensional statistics and computation to obtain accurate models of neural systems from large datasets.

The topic of Ganguli’s talk occupies a niche at the interface between neuroscience and electrical and computer engineering. “I was told that there would be both neuroscientists and electrical engineers here. It’s always a bit tough to pitch a talk to an interdisciplinary crowd, so if anything doesn’t make sense, I’m sure I’ll please half of you, half the time … so I guess I’ll always piss off somebody all the time,” Ganguli joked as he began his talk. For those in the crowd who fell under neither category, the talk was an ironic struggle to use every relevant neuron in their own brains to understand the vast number of neurons in the brain.

Most people, especially people at Carnegie Mellon, are familiar with Moore’s Law: the number of transistors on an integrated circuit doubles roughly every two years. Ganguli stated that neuroscience has been undergoing a revolution and has its own version of Moore’s Law. At present, approximately 100 to 1,000 neurons can be recorded simultaneously. However, the brain circuits that control our behavior consist of a million to a billion neurons, so the sample that can currently be recorded is still only a small fraction of the total; recording 1,000 neurons out of a million captures just 0.1 percent. Ganguli referred to this situation as an “anti-Goldilocks moment” because we cannot seem to obtain a neural sample that is “just right.”

“On the one hand, we do have a lot of neurons, so the data analysis is not easy,” Ganguli said. “On the other hand, we may not have enough neurons to really understand circuit computation in any meaningful sense.”

He presented the audience with a graph that plotted the firing rates of individual neurons over time as a monkey reached for an object. The graph appeared very heterogeneous, and no qualitative information could be gathered from it at a glance. Through a method known as dimensionality reduction, however, the dataset could be interpreted far more easily.

“A widespread practice is to take a pattern of activity across neurons at any instance in time and express them as time-dependent linear combinations of a fixed set of basis patterns of activity,” Ganguli explained. In other words, neural datasets become far easier to analyze once the high-dimensional firing-rate space is reduced to a small number of dimensions and the activity is projected onto them.
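As a concrete illustration, below is a minimal sketch of this kind of decomposition in Python, using principal component analysis as the choice of basis patterns. PCA is only one common option, and the synthetic data, sizes, and variable names here are assumptions for illustration, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical recording: 100 neurons over 500 time points, secretly
# driven by 3 shared activity patterns plus noise (all assumptions).
n_neurons, n_time = 100, 500
latent = rng.standard_normal((3, n_time))       # shared time courses
mixing = rng.standard_normal((n_neurons, 3))    # per-neuron weights
rates = mixing @ latent + 0.1 * rng.standard_normal((n_neurons, n_time))

# Center each neuron's firing rate, then find basis patterns via SVD (PCA).
centered = rates - rates.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

k = 3
basis_patterns = U[:, :k]                   # fixed patterns across neurons
time_courses = basis_patterns.T @ centered  # time-dependent coefficients

# Population activity at each instant is approximated as a linear
# combination of the k basis patterns, weighted by that instant's
# coefficients in time_courses.
print("variance explained:", (s[:k] ** 2 / (s ** 2).sum()).round(3))
```

Plotting the rows of time_courses against one another traces out the kind of low-dimensional trajectory described next as a dynamical portrait.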

These simplified representations are known as dynamical portraits. “What’s interesting about these dynamical portraits is that they reveal a lot about the way that circuit computation works,” Ganguli said. He then showed the audience dynamical portraits obtained from a monkey’s prefrontal cortex while it performed a tactile discrimination task, as well as a dynamical portrait of the entire brain of a zebrafish, explaining the qualitative information that could be read from each.

That dimensionality reduction and dynamical portraits can simplify neuronal datasets is a beautiful thing, but it raises many questions. In the abstract for his talk, Ganguli poses several key ones: “What is the origin of this simplicity and its implications for the complexity of brain dynamics? Would neuronal datasets become more complex if we recorded more neurons? How and when can we trust dynamical portraits obtained from only hundreds of neurons in circuits containing millions of neurons?”

The data for Ganguli’s studies came from a monkey-reaching task: a target was shown to a monkey and then taken away, and the monkey had to reach for it. Neurons in the monkey’s dorsal premotor cortex were recorded as it reached in eight different directions. Some of this data was in fact collected by Byron Yu, an assistant professor of electrical and computer engineering and biomedical engineering at Carnegie Mellon. “He’s a man of many talents,” Ganguli said of Yu. “He even knows how to deal with monkeys!”
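For readers curious how “dimensionality” is quantified in recordings like these, one measure widely used in this literature is the participation ratio of the covariance eigenvalues. The sketch below computes it; whether this is precisely the measure in Ganguli’s analysis is an assumption here.

```python
import numpy as np

def participation_ratio(rates: np.ndarray) -> float:
    """Dimensionality of a (neurons x samples) firing-rate matrix.

    PR = (sum of covariance eigenvalues)^2 / (sum of their squares).
    It equals k when variance is spread evenly across k dimensions
    and 1 when a single activity pattern dominates.
    """
    centered = rates - rates.mean(axis=1, keepdims=True)
    cov = centered @ centered.T / rates.shape[1]
    eig = np.linalg.eigvalsh(cov)
    return float(eig.sum() ** 2 / (eig ** 2).sum())

# Synthetic check: activity built from 3 underlying patterns has a
# participation ratio of at most 3, far below the 100 neurons recorded.
rng = np.random.default_rng(0)
rates = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 500))
print(participation_ratio(rates))
```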

The key to solving the puzzle of how much we can trust dynamical portraits is determining why the dimensionality of neural recordings can be so low. “In order to do that, we’d like to derive upper bounds on how high a dimensionality could possibly be in any dataset,” Ganguli said. One idea currently in circulation is that dimensionality might be low because of the simplicity of the task. “But nobody has really quantitatively proved it and come up with a definition of, say, task complexity that is somehow related to the dimensionality of neural dynamics,” Ganguli said. This upper bound, a quantitative measure of task complexity, is exactly what Ganguli and his colleagues have derived.
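As a hedged sketch of what a bound of this kind can look like (the exact expression in Ganguli’s published work may differ, and the symbols here are illustrative assumptions):

```latex
% Schematic task-complexity bound on neural dimensionality:
% dimensionality cannot exceed a product of task-parameter ranges
% divided by the smoothness scales of neural activity.
\[
  D \;\le\; c \prod_{k} \frac{L_k}{\lambda_k}
\]
```

Here D is the measured dimensionality of the recording, each L_k is the range over which one task parameter varies (reach duration or reach angle, say), λ_k is the autocorrelation length of neural activity along that parameter, and c is a constant. The intuition is that smooth neural activity driven by a short, simple task can only explore a few dimensions, no matter how many neurons are recorded.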

According to the abstract for the paper in which Ganguli and his colleagues discuss their theory of neural dimensionality, “the dimensionality of motor cortical data is close to [the theoretical upper bound], indicating neural activity is as complex as possible, given task constraints.”

The broader consequence of their theory is that it provides a framework for determining whether neural dimensionality is limited by task complexity or by intrinsic brain dynamics. As the Moore’s Law of neuroscience continues to increase the number of neurons that can be recorded, this breakthrough will help neuroscientists design better experiments and interpret large datasets in the future.