SciTech

K&L Gates Foundation donates to further AI ethics research

Credit: Simin Li

Developing artificial intelligence (AI) is arguably one of this century’s most ambitious goals. It would be a radical and powerful computational tool capable of affecting every crevice of daily life as we know it. Yet ever since scientists and philosophers first envisioned the thinking computer, they have also pondered the ethical challenges that would accompany such an invention.

Like any other tool of incredible power, AI promises either great benefit or dismal chaos, depending on how it is built and applied.

In fact, premonitions of AI are everywhere in popular culture, spawning apocalyptic stories often set in dystopian worlds: the Terminator series, in which the omnipresent artificial general intelligence Skynet wages war on a nearly extinct human race, and the chilling science-fiction epic 2001: A Space Odyssey, in which HAL, the AI controlling the protagonist’s spaceship, goes rogue. Despite their differences, these two films have one thing in common: unethical AI.

Predictably, these ethical concerns found their way to Carnegie Mellon University. On Nov. 1, U.S.-based international law firm K&L Gates Foundation made a donation to further research in the ethics of AI at the university. The endowment, worth $10 million according to a Carnegie Mellon press release, will materialize in the form of the K&L Gates Endowment for Ethics and Computational Technologies research center.

This donation is a natural fit for the university. Carnegie Mellon hosts cutting-edge research in computer science, robotics, and AI, and the K&L Gates Presidential Fellowship Endowment Fund will support two research professors and three doctoral students working on the ethics of computational technology. The K&L Gates Presidential Scholarship Fund and the annual K&L Gates Prize will be awarded to exceptional undergraduates in the field.

This endowment will be used to launch an international biennial conference that will allow academics and policy-makers to come together and discuss critical issues, share research and raise awareness among the public.

Carnegie Mellon president Subra Suresh told The New York Times, “We are at a unique point in time where the technology is far ahead of society’s ability to retain it.” K&L Gates’ chairman Peter J. Kalis echoed Suresh’s sentiments, saying, “Law and technology converge at a profoundly 21st century challenge: how to define the ethical boundaries of artificial intelligence.”

Kalis appropriately refers to AI as a challenge. Many recognize the immoral practices AI could perform or abet if left unchecked; specific areas of concern include user privacy, robot rights, and transparency. That apprehension has inspired research organizations dedicated to establishing best practices for AI, such as OpenAI and the Partnership on AI, a coalition of tech-industry competitors including Google, Facebook, Microsoft, Amazon, and IBM.

AI has evolved from science fiction and theory to foreseeable reality in recent decades, thanks to crucial innovations in computing. The internet and big data, along with faster and cheaper processing hardware, give computer scientists access to the extensive resources that make AI experimentation possible at all.

Yet, despite ardent research, AI itself is still hard to define. This can be attributed to factors including our nebulous understanding of intelligence and consciousness.

According to ComputerWorld, AI is the sub-field of computer science whose “goal is to enable the development of computers that are able to do things normally done by people — in particular, things associated with people acting intelligently.”

These intelligent attributes include decision making, visual learning, pattern recognition, heuristics, and inference. In 1950, mathematician Alan Turing proposed the well-known Turing test for AI, also known as the imitation game: if a human holds two conversations, one with another human and one with a machine, and cannot tell which is which, then the machine is deemed artificially intelligent.

AI has abundant uses in society. Currently, its most ubiquitous application is machine learning, which enables a computer to learn from experience without explicit intervention from its programmer. Machine learning already shapes millions of users’ everyday interactions with technology: social networks like Facebook, Pinterest, and Tumblr employ it to sort feed content according to what each user is most likely to find interesting.

Furthermore, corporations use it to display advertisements that reflect a user’s spending or browsing habits; these programs need only receive enough data to learn from it heuristically.
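The idea of learning a user’s interests from interaction data can be sketched very simply. The following is a toy illustration, not any real platform’s algorithm: the topic names, posts, and ranking rule are all invented, and a production recommender would use far more sophisticated statistical models.

```python
from collections import Counter

def learn_preferences(click_history):
    """'Learn' from experience by tallying how often the user clicked each topic."""
    return Counter(click_history)

def rank_feed(posts, preferences):
    """Sort posts so topics the user clicks most often appear first."""
    return sorted(posts, key=lambda post: preferences[post["topic"]], reverse=True)

# Invented sample data: past clicks and a batch of new posts to rank.
clicks = ["sports", "tech", "tech", "music", "tech"]
prefs = learn_preferences(clicks)
feed = [
    {"title": "New album drops", "topic": "music"},
    {"title": "Chip breakthrough", "topic": "tech"},
    {"title": "Game recap", "topic": "sports"},
]
ranked = rank_feed(feed, prefs)
# The "tech" post rises to the top because the user clicked "tech" most often.
```

The point of the sketch is the feedback loop the article describes: no programmer hand-codes the user’s tastes, the program infers them from accumulated data.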

AI’s potential does not end there. Once strong AI is realized, it may be incorporated into autonomous vehicles, the Internet of Things, digital personal assistants, and warfare.

This century’s technological future is inevitably intertwined with AI. It is not a matter of ‘if’ but ‘when’ the thinking computer becomes ingrained in every aspect of life. Clearly, policies governing AI’s use and treatment are essential to protecting user and robot interests from mishap or exploitation for corporate and institutional gain, and K&L Gates’ donation is a prudent step.