“Creation and Consequence”: CMU Libraries Frankenstein panel

Credit: Rebecca Enright

To celebrate the 200th anniversary of Mary Shelley’s groundbreaking novel Frankenstein, the Carnegie Mellon University Libraries has organized a series of events titled “Frankenstein 200: Perils and Potential,” revolving around the many themes of the novel. To kick off the series, the University Libraries hosted the panel “Creation and Consequence” on Tuesday, Oct. 17, in collaboration with the Alumni Association’s CMUThink program, to discuss today’s changing discourse on artificial intelligence and technologically augmented human society.

The panelists were Jeffrey Bigham, associate professor in the Human-Computer Interaction and Language Technologies Institutes; David Danks, the L. L. Thurstone Professor of Philosophy and Psychology, and head of the Department of Philosophy; Barry Luokkala, teaching professor and director of Undergraduate Physics Laboratories in the Department of Physics; and Molly Wright Steenson, associate professor and director of the Doctor of Design program in the School of Design. The panelists’ varied backgrounds made for a wide-ranging discussion, guided by moderator Rikk Mulligan, Digital Scholarship Strategist.

Conceived as a ghost story when Shelley was 18 years old, Frankenstein was published anonymously in 1818. The novel is considered one of the first examples of science fiction, a genre that now abounds in film, television, and literature. In the last 200 years, the novel has been analyzed and dissected for its many themes, ranging from the impact of parenthood to the ethics of creation to the consequences of knowledge.

The first topic Mulligan proposed to the panel was the progress of artificial intelligence today. Danks opened the discussion by drawing a parallel between research in artificial intelligence (AI) and psychology. Instead of tackling artificial general intelligence as a whole, engineers have chosen to focus on specific skills and characteristics, much as researchers in psychology do.

This “divide and conquer” approach has allowed engineers to optimize very specific tasks. Strides in these focused fields have led many to believe that these local capabilities could be glued together to create an artificial general intelligence, though doing so remains extremely difficult.

“I think the worry is much less about having AI that reaches human level intelligence, and I would suggest it’s much more about us delegat[ing] authority and power to them that perhaps we shouldn’t,” Danks said.

Steenson introduced the idea of modeling, and how our models of technology are constantly changing over time. Certain models are used for a while, but they eventually expire and are replaced by newly developed models designed for today’s problems and projects.

“*Frankenstein* is about someone modeling an intelligence and even needing to make it eight feet tall because that was the level of fidelity, the level of fine grain, which is to say, quite coarse, that was available at the time,” Steenson said.

Another important topic the panel discussed was the concept of bias in data analysis. Numbers, statistics, raw data: these are all widely accepted as impartial, but they can be easily manipulated, consciously or unconsciously. Danks presented the philosophical distinction between the descriptive and the normative: do we want models of how the world is, or models of how the world should be?

To illustrate this, Danks gave the example of software designed to predict the likelihood that ex-convicts will be rearrested. The model the software developed treated race as a significant factor, predicting that African Americans are more likely to be rearrested.

“Descriptively, the algorithm was probably right,” Danks explained. “If you’re African American, you’re more likely to be rearrested, but we might also plausibly think that’s due to structural racism, or to other kinds of factors, and that the world we want to live in is one in which we judge people’s likelihood of reoffending on the basis of non-racial factors.”

Bigham added that while removing race as a factor from the system would be a great first step, it is extremely difficult to truly remove such bias.

“The machine learning algorithms that we have, the reason why they are so effective is because they can pick up patterns that humans don’t see,” he said. “Maybe that pattern they learn ends up recreating a variable that you took [out]. Maybe they’re learning something that’s a proxy for race.”
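Bigham’s proxy-variable point can be sketched with a toy simulation (all names and numbers here are synthetic and hypothetical, not from the panel): even after the protected attribute is removed from the training data, a model built on a correlated feature, such as a zip code, reproduces the disparity.

```python
import random
from collections import defaultdict

random.seed(0)

# Synthetic population: 'group' is the protected attribute we will remove;
# 'zip_code' is a proxy that correlates strongly with it (hypothetical data).
people = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # 90% of group A lives in zip 11111; 90% of group B lives in zip 22222.
    zip_code = "11111" if (group == "A") == (random.random() < 0.9) else "22222"
    # Historical labels reflect a structural disparity against group B.
    rearrested = random.random() < (0.3 if group == "A" else 0.6)
    people.append({"group": group, "zip": zip_code, "rearrested": rearrested})

# "Fair" model: drop 'group' entirely and predict a rearrest rate per zip code.
totals, hits = defaultdict(int), defaultdict(int)
for p in people:
    totals[p["zip"]] += 1
    hits[p["zip"]] += p["rearrested"]
rate_by_zip = {z: hits[z] / totals[z] for z in totals}

def avg_risk(group):
    # Average predicted risk for one group under the race-blind model.
    scores = [rate_by_zip[p["zip"]] for p in people if p["group"] == group]
    return sum(scores) / len(scores)

# The gap persists: zip code has quietly recreated the removed variable.
print(f"group A: {avg_risk('A'):.2f}, group B: {avg_risk('B'):.2f}")
```

Dropping the protected column changes nothing here, because the proxy carries almost the same information; this is the pattern Bigham describes the algorithm “learning” on its own.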

At the end of the panel, the panelists acknowledged that the continual innovation and progress of technology have cultivated a sense of fear and anxiety over a potential dystopia, often the setting of science fiction works.

Danks said that we should be using technology to figure out what kind of future we want. “We need to teach the next generation how to use these technologies in order to proactively prevent this dystopia.”