Research reveals how brain arranges nouns

Abhay Buch Feb 8, 2010

Using functional magnetic resonance imaging (fMRI) technology, members of the Center for Cognitive Brain Imaging have gained deep insight into the way human brains categorize objects. In a breakthrough that demonstrates the interdepartmental cooperation here at Carnegie Mellon, neuroscientists Marcel Just and Vladimir Cherkassky and computer scientists Tom Mitchell and Sandesh Aryal have arrived at results that bode well for human-computer interfaces and neuropsychiatry.

Their research has concluded that humans represent all non-human objects in terms of three classes or dimensions. Just defines these dimensions as having to do with eating, shelter, and the way the object is used. He explained that when one sees an object, the brain thinks, “Can I eat it? How do I hold it? Can it give me shelter?” Indeed, all concrete objects are represented in terms of these three dimensions, much in the way that all places in space are represented by the three dimensions that we experience every day.

The technology behind the study, fMRI, is similar to the magnetic resonance imaging (MRI) machines used in hospitals. The basic idea is that when a particular part of the brain is active, it receives more blood, and the increased blood flow can be seen by MRI machines. Researchers cannot directly tell what a person is thinking, but they can tell where the thinking is happening and infer from there, since certain parts of the brain are used for certain functions. In the context of the research, it was found that objects belonging to a particular dimension all triggered activity in a particular part of the brain.
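The inference step described above — telling *where* activity occurs and working backward to *what* is being thought about — amounts to pattern classification over voxel activity. The sketch below illustrates the idea with invented data and a simple nearest-template classifier; the names, voxel counts, and noise levels are all illustrative assumptions, not the study's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for fMRI data: each category of object (e.g., food,
# tool, shelter) evokes a characteristic pattern of activity across a set
# of voxels. These patterns are invented for illustration; real voxel
# data is far noisier and higher-dimensional.
n_voxels = 50
categories = ["food", "tool", "shelter"]
prototypes = {c: rng.normal(size=n_voxels) for c in categories}

def simulate_scan(category, noise=0.5):
    """One noisy observation of the pattern evoked by a category."""
    return prototypes[category] + rng.normal(scale=noise, size=n_voxels)

# "Train": average several scans per category into a template.
templates = {
    c: np.mean([simulate_scan(c) for _ in range(20)], axis=0)
    for c in categories
}

def decode(scan):
    """Guess the category by nearest template (minimum Euclidean distance)."""
    return min(categories, key=lambda c: np.linalg.norm(scan - templates[c]))

# Decode a fresh scan the classifier has never seen before.
print(decode(simulate_scan("shelter")))
```

Because each category reliably lights up its own region, even this crude distance-to-template rule recovers the category from a new scan — the same logic, at a much simpler scale, behind inferring thought content from brain activity.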

Just feels that these dimensions point back to our evolutionary origins, noting that “there is a biological predisposition to consider objects with respect to those three dimensions.” He also believes that most other animals have similar methods for representing objects because “there are fundamental biological concerns with eating, usage, and shelter.”

According to a press release by Carnegie Mellon, the actual experiment Just and his team carried out involved placing people in an fMRI machine and studying which parts of their brains were activated when they thought about specific objects. Two additional results that the research team noted were that they could predict which parts of the brain would be activated by new words and that they could actually tell how many objects were being thought about. As Just puts it, the researchers can “identify the quantity a person is thinking about, as long as [they] instantiate it as an object.”
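Predicting the activation pattern for a word the scanner has never seen requires a model that maps a word's meaning to brain activity. The toy sketch below fits a linear map from hand-picked semantic features (here, the article's three dimensions: eat-ability, manipulability, shelter) to simulated voxel responses, then predicts a held-out word. The words, feature values, and hidden linear map are all assumptions made for illustration, not the team's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each word is described by illustrative semantic features:
# [eat-ability, manipulability, shelter].
features = {
    "apple":  [1.0, 0.6, 0.0],
    "hammer": [0.0, 1.0, 0.0],
    "house":  [0.0, 0.2, 1.0],
    "bread":  [1.0, 0.4, 0.0],
    "igloo":  [0.0, 0.0, 1.0],
}

# Simulate voxel responses through a hidden linear feature->voxel map.
n_voxels = 30
true_map = rng.normal(size=(3, n_voxels))

def observe(word, noise=0.01):
    return np.array(features[word]) @ true_map + rng.normal(scale=noise, size=n_voxels)

# Fit a linear model on four training words, holding out "igloo".
train = ["apple", "hammer", "house", "bread"]
X = np.array([features[w] for w in train])
Y = np.array([observe(w) for w in train])
weights, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Predict activity for the held-out word, then check which word's true
# pattern the prediction most resembles.
pred = np.array(features["igloo"]) @ weights

def dist(word):
    return np.linalg.norm(pred - np.array(features[word]) @ true_map)

print(min(features, key=dist))
```

The prediction for the never-seen word lands closest to that word's own activation pattern — a small-scale analogue of predicting brain responses to new words from their semantic ingredients.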

This raises an interesting point. “As advanced and abstract as we think we are ... we’re still concerned about food,” Just stated. It seems that our mental structures have not really caught up to something as old and fundamental as numbers. Thus, Just extends the point to note that many of the more abstract concepts we take for granted are actually second-order representations, not fundamental to the way we think.

Just provided two examples of the implications of his research. The first is that this research paves the way for further improvements in direct communication between the human mind and machines. He pointed out that while the current fMRI technology is both cumbersome and expensive, teams are working on more efficient ways of performing the same scans. Second, he mentioned the applications relating to determining the fundamental causes of many mental disorders, stating that people with autism may show less activity in the areas dealing with social concepts. A deeper understanding of the low-level mental processes would help researchers understand the causes of such disorders and possibly lead to better ways to deal with them.

Just’s future plans include the opening of a new brain imaging center on campus, which, he said, will have the only MRI machine operated by a computer science department in the world. Further improvements in this technology could allow people to communicate with computers by thought alone. In the next year, Just would like to have a demonstration where “someone in the MRI scanner is thinking ‘I want an apple’ and a robot is going to go and hand them an apple.”