Researchers integrate social science in cybersecurity project
Researchers at Carnegie Mellon University have collaborated with scientists from the Army Research Laboratory; Pennsylvania State University; the University of California, Davis; the University of California, Riverside; and Indiana University to develop methods for computers to make security-relevant decisions in cyberspace. The project, called Models for Enabling Continuous Reconfigurability of Secure Missions, strives to increase security in cyberspace.
The program is funded at $23.2 million over five years, with an additional $25 million available for an optional five-year extension. The Carnegie Mellon branch of the project is funded through CyLab, the world's largest university-based research and education center for computer and network security, information security, and software assurance.
The collaborative research focuses on detecting attacks in cyberspace, measuring and managing risk, and altering the environment to optimize results, all while minimizing cost. These three objectives will be met with the help of human behavior models that allow computers to predict the motivations of users, defenders, and attackers.
The lead researcher from Carnegie Mellon, Lorrie Cranor, is an associate professor of computer science and engineering and public policy. Cranor became involved with this project through her previous work, which also dealt with human factors related to security and privacy. She was recruited for the project team by Patrick McDaniel, a professor of computer science and engineering at Penn State and the principal investigator on this project.
Cranor’s main role is to lead the psychosocial team, which will contribute to the project by investigating psychological and human-factor issues. One of her teams developed techniques that allow computers to distinguish between real and false cyberattacks, which may aid the performance of overwhelmed human analysts. Results from this research will enable future computing systems to respond to attacks with actions derived from human decision making, without direct human intervention. For example, a server observing unusual network traffic from an unknown entity might decide it is under attack and filter that traffic.
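To make the server example concrete, the sketch below shows one simple way such an automated decision could work: flag traffic sources whose request volume far exceeds the typical rate and filter them without a human in the loop. The function name, threshold, and addresses are illustrative assumptions, not the project's actual design.

```python
from statistics import median

def build_filter(request_counts, multiplier=10):
    """Return the set of sources to filter.

    A source is flagged when its request count exceeds `multiplier`
    times the median count across all sources. (Hypothetical heuristic
    for illustration; real systems use far richer models.)
    """
    if not request_counts:
        return set()
    typical = median(request_counts.values())
    return {src for src, n in request_counts.items()
            if n > multiplier * typical}

# Example: one unknown host sends far more traffic than the others.
counts = {"10.0.0.1": 12, "10.0.0.2": 9, "198.51.100.7": 4000}
blocked = build_filter(counts)  # -> {"198.51.100.7"}
```

The median is used rather than the mean so that a single flood of traffic does not inflate the baseline it is compared against.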
Despite the project’s seemingly technical nature, it requires a variety of expertise, most importantly that of people who understand risk, game theory, and human-factor issues, according to Cranor. This is because an important approach to combating cyberattacks involves understanding the motivations and behaviors of attackers.
Cleotilde Gonzalez, an associate research professor of social and decision sciences and director of the Dynamic Decision Making Laboratory, is responsible for many of the decision-making aspects of the project. Other Carnegie Mellon contributors include Lujo Bauer and Nicolas Christin, both assistant research professors of electrical and computer engineering affiliated with CyLab.
Finally, Cranor stresses the role of researchers who specialize in the social sciences. “One of the salient aspects of our proposed research is in the realization that humans are integral to maintaining cybersecurity and to breaches of security,” Cranor said via email. “Their behavior and cognitive and psychological biases have to be integrated as much as any other component of the system that one is trying to secure.”