SciTech

Researchers create screens that can differentiate users

Credit: Photo Illustration by Jennifer Coloma/Operations Manager

How does an iPhone know when a finger touches its screen? It’s because the screen is capacitive, meaning that its surface holds a small electric charge. When someone touches it, a tiny amount of current flows through that person’s body to the ground, which makes the voltage on the screen’s surface drop. The iPhone detects this drop in voltage and concludes that a human must be touching it.
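In code, that detection logic amounts to watching for a voltage sag. This is a minimal sketch, not Apple’s implementation; the baseline and threshold values are invented for illustration:

```python
# Minimal sketch of capacitive touch detection: a grounded finger draws
# current off the surface, so the measured voltage sags below baseline.
# Both constants are invented for illustration.

BASELINE_VOLTAGE = 3.3   # idle surface voltage (assumed)
TOUCH_THRESHOLD = 0.15   # minimum sag that counts as a touch (assumed)

def is_touched(measured_voltage: float) -> bool:
    """Report a touch when the surface voltage sags below baseline."""
    return (BASELINE_VOLTAGE - measured_voltage) > TOUCH_THRESHOLD

print(is_touched(3.28))  # small sag, no finger -> False
print(is_touched(2.90))  # large sag, finger present -> True
```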

Chris Harrison, a Ph.D. student in the Human-Computer Interaction Institute, has refined this technology in his research. He and his collaborators — Ivan Poupyrev of Disney Research and Munehiko Sato of the University of Tokyo — created a prototype of a capacitive sensor that can tell who is touching the screen. While normal capacitive screens might measure the draw of electricity at 1,000 Hz, this prototype measures the draw at many different frequencies, ranging from 1,000 Hz to 3.5 million Hz. The result is a real-time curve representing the electrical properties of the person touching the screen.
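The sweep can be pictured as a loop over excitation frequencies. The sketch below is illustrative only, not the researchers’ code; the `measure` function is a stand-in for the sensing hardware:

```python
# Illustrative sketch of swept-frequency sensing (not the researchers'
# code): excite the electrode at many frequencies and record the
# response at each one, producing a per-person profile curve.

def sweep_profile(measure, freqs):
    """Return [(frequency_hz, response)] for each excitation frequency.

    `measure` stands in for the sensing hardware: given a frequency in
    Hz, it returns the measured electrical response at that frequency.
    """
    return [(f, measure(f)) for f in freqs]

def log_spaced(lo, hi, n):
    """n frequencies spaced evenly on a log scale between lo and hi."""
    ratio = (hi / lo) ** (1 / (n - 1))
    return [lo * ratio**i for i in range(n)]

# Cover the prototype's range: 1,000 Hz to 3.5 million Hz.
freqs = log_spaced(1_000, 3_500_000, 64)
```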

The construction of the original capacitive sensor took two years to fully develop, followed by six months of research on person differentiation. The researchers tested their sensor on a group of 12 participants.

The researchers’ sensor samples the electrical properties of a human; these properties are influenced by “how dense your bones are, how much blood you have, how muscular you are, what kind of shoes you’re wearing,” Harrison said. After measuring these characteristics and learning from them, the sensor can guess who the person is. Their research leverages the fact that electrical properties differ not only from person to person, but also from tissue to tissue.
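One simple way to guess who is touching from such measurements is nearest-neighbour matching against stored profiles. This is a hypothetical sketch with invented curves, not the classifier the researchers used:

```python
# Hypothetical sketch: match a freshly measured response curve against
# stored per-person profiles and pick the closest one (nearest
# neighbour by Euclidean distance). The profile curves are invented.

import math

def distance(curve_a, curve_b):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(curve_a, curve_b)))

def identify(sample, profiles):
    """Return the name whose stored profile is closest to `sample`."""
    return min(profiles, key=lambda name: distance(sample, profiles[name]))

profiles = {
    "alice": [0.9, 0.7, 0.4, 0.2],
    "bob":   [0.5, 0.6, 0.7, 0.8],
}
print(identify([0.88, 0.72, 0.38, 0.25], profiles))  # -> alice
```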

“Your bones might have the lowest resistance at 12,000 Hz, but if you go to 15,000, there is higher resistance,” Harrison said.

For example, imagine a two-player game on a big touch screen in which one player draws in blue and the other in red. The system would know who drew each color without prior knowledge of the players and without them needing to identify themselves before each move. Undoing a move would also become easier: when a player clicks the undo button, that player’s last move would be erased, not the global last move, which might belong to the other player.
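Per-player undo falls out naturally once each touch carries an identity. The sketch below is one assumed design for such a canvas, not the researchers’ system:

```python
# Assumed design sketch (not from the research): once touches carry a
# player identity, each player gets an independent undo stack, so
# "undo" removes that player's last move, not the global last move.

from collections import defaultdict

class SharedCanvas:
    def __init__(self):
        self.strokes = []                 # all strokes, in draw order
        self.history = defaultdict(list)  # per-player undo stacks

    def draw(self, player, stroke):
        self.strokes.append((player, stroke))
        self.history[player].append(stroke)

    def undo(self, player):
        """Remove `player`'s most recent stroke, leaving others intact."""
        if not self.history[player]:
            return
        stroke = self.history[player].pop()
        self.strokes.remove((player, stroke))

canvas = SharedCanvas()
canvas.draw("red", "circle")
canvas.draw("blue", "square")
canvas.undo("red")        # erases the circle, not the square
print(canvas.strokes)     # -> [('blue', 'square')]
```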

Knowing who is touching the screen, and when, is a useful feature, especially for big surfaces. For games played together on the same screen, capacitive sensing can keep individual scores.

However, the system has not yet been perfected. A user could fool the system by trying to copy the curve representing the electrical properties of another person. “If you lift your foot off the ground and you touch something made of metal, you can trick the system so that your curve looks very similar to mine,” Harrison said.

A user wearing thick boots is going to look different from a user wearing light sneakers. While this variability is bad for security, it provides exactly the differentiation the system needs when two people sit down to play a game on a tablet.

“You look different from me just enough that it would work for an hour,” Harrison said.

But sensing in the real world is more probabilistic than binary. Harrison believes he can combine his capacitive sensing method with computer vision, speech processing, and motion detection in order to get better accuracy. Any single sensing method might be 90 percent accurate, but when combined, they could be as accurate as 99 percent.
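The 99 percent figure is consistent with a simple back-of-the-envelope model, assuming two sensing methods fail independently and the fused system errs only when both do:

```python
# Back-of-the-envelope check, under the strong assumption that two
# sensing methods fail independently: a fused system that errs only
# when BOTH methods err has error 0.1 * 0.1 = 0.01, i.e. 99% accuracy.

def fused_accuracy(acc_a, acc_b):
    """Accuracy if the combined system fails only when both sensors fail."""
    return 1 - (1 - acc_a) * (1 - acc_b)

print(round(fused_accuracy(0.90, 0.90), 4))  # -> 0.99
```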

This is the challenge for Harrison’s future research: making the system robust enough to move out of the lab and into real-world phones and tablets. That will require more users and test cases to fully understand how it performs. The user study the researchers conducted was not big, but it showed that the technology can work. “And that’s already a big contribution,” Harrison said.