CMU Center for Human Rights Science receives $1.185 million

YouTube, Facebook, and Twitter let the world watch the daily progress of massive social movements like the Arab Spring and the Black Lives Matter movement. Now, Carnegie Mellon’s own Center for Human Rights Science will help protect human rights by creating a system to collect and archive user-generated media.

The John D. and Catherine T. MacArthur Foundation, Humanity United, and Oak Foundation recently gave a combined $1,185,000 to Carnegie Mellon’s Center for Human Rights Science to develop tools for collecting photographic and video evidence of human rights abuses, post-conflict situations, and protests.

With this new funding, the Center for Human Rights Science will provide members of the human rights community with technologies and capabilities that are presently accessible only to military, corporate, and intelligence personnel. The Center aims for these new tools to promote transparency and give human rights organizations greater leverage to protect victims of human rights abuses around the world.

Jay D. Aronson, director of the Center and associate professor of science, technology, and society in the Department of History, said, “What we’re trying to do is level the playing field and provide [information] to the human rights community as well.”

Aronson created the Center for Human Rights Science in 2011 to unite Carnegie Mellon scientists and human rights defenders. He and representatives from Amnesty International, the Carter Center, Human Rights Watch, the International Criminal Court, SITU Research, and the Syria Justice & Accountability Center will guide the program’s expanding technology.

Other Carnegie Mellon faculty members help lead the Center, including Alex Hauptmann, a principal systems scientist in the School of Computer Science’s Language Technologies Institute, and Yaser Sheikh, an associate professor in the Robotics Institute.

Aronson founded the Center after multiple conversations with human rights organizations about their limited capacity to document technical human rights data. “The Center was really born out of requests for help,” said Aronson. “The idea was to create a structure where I could build partnerships between the human rights community and faculty here or researchers here who had expertise that might be relevant to problems that the human rights community was increasingly facing at that time.”

The Center is currently working on projects such as post-conflict and post-disaster DNA identification, as well as the Syrian Casualties Project.

As human rights organizations accumulated more user-documented footage of human rights abuses, the need to store the data increased.

“One of the huge questions in this domain is archiving,” said Aronson. “I think that it’s really important to gather this information for posterity so that we can use it in the future and so that we can understand how other people can use it in the future.”

Yet many online videos and photos of human rights abuses violate the terms of service of platforms like YouTube and Facebook because of their graphic content. This poses a challenge for human rights defenders, who try to gather the user-generated media for statistics and evidence. Aronson seeks to address this problem: “The question then becomes, well, where do they keep it? Do they provide it to some organization for safekeeping? Do they keep it in their servers even if it violates their terms of service? It’s very challenging.”

Aronson describes the technology used by the Center for Human Rights Science as a “filtering mechanism” that human rights defenders use to find relevant images and audio of human rights abuses.

As of now, the Center’s most prominent projects analyze content from the Euromaidan protests in Ukraine and the ongoing conflict in Syria. The Center anticipates collecting data from U.S.-based protests and human rights abuses in the future. The reports from this project will be published in a series called Voice of Witness.

“One of the big things that we’re developing, what it will be able to do is to find objects using computer vision and machine learning,” he said. “So tanks, helicopters. It works also for sounds. So particular kinds of weapons, you can key in and ask it to find a particular kind of sound of a particular type of weapon if you’re interested in that. Screams. Crying. It also works for some spoken language. So if there’s a particular phrase that a group uses or that’s of interest to you, you can search for that.”

Aronson recognizes the limited power of user-generated media in legal proceedings, but believes that it can aid a witness’s testimony. Videos and images provide context and perhaps proof of human rights abuses, even if they do not stand as direct evidence.

Another challenge posed by user-generated content is the reputability of the content creators themselves. “I think the challenge is really to understand not just what’s being put online, but why. Every video tells a story, and every story provides someone’s perspective, and so it’s really important to understand that part, in addition to trying to figure out how to extract information from these sources,” said Aronson.

The Center intends to share their developing technology with their current partners, including Human Rights Watch and Amnesty International.

The Center’s leaders have not yet decided how to make the technology available to more human rights organizations, or even to the public, although they realize that the technology will probably be commercialized in the coming years.

“We’re trying to figure out the best way to make the technology available to the human rights community, in a usable but also in a responsible way, in the sense that it’s powerful, it can extract things out of video quickly,” said Aronson.

Mossab Alsadig, a junior computer science major, was skeptical about the ability of user-generated content to inform people of the nuances of an issue.

“The media is usually a small picture, so it’s naturally going to be missing a lot of the context surrounding the initial issue,” she said. “And when that’s left out the conversations become really skewed, one sided, and uninformed.”

Angie Tang, a junior statistics and technical writing major, also voiced concerns about social media’s role, but ultimately endorsed its use: “Overall, I think social media and user-generated content can positively benefit the organization’s mission, since they show how these abuses and protests are a reality. I think with the right organization, it can be a very powerful tool to help with social good.”