SciTech

The nuances of intelligence: How programmed beliefs and disturbing rebellions shape AI bots

Tay’s Twitter icon overlays the image of a teenage girl with a glitchy filter. The effect is both creepy and interesting. (credit: Emily Davis via Flickr Creative Commons) Credit: Maegha Singh/Art Editor

Intelligence is one of the most difficult concepts to define. Throughout the ages, there have been many efforts to pin down what it really means for an organism — or in this case a machine — to be intelligent, and how that intelligence can be concretely measured.

Theories on intelligence have been varied and often include multiple subcategories. Gardner’s theory of multiple intelligences separates intelligent behavior into eight realms, such that an individual may not be “word smart” (linguistically intelligent) but may be “picture smart” (spatially intelligent), and so on. Other theories, like Spearman’s general intelligence factor (g factor), or the idea that there is a certain inherent capability for intelligent behavior which trickles down into many activities, are not so forgiving. Spearman’s g factor accounts for much of the variance in IQ test scores and has been shown to have biological correlates and heritable qualities. While there are many hypotheses surrounding this concept, it has proven very difficult to isolate a single subset of factors that could be used for engineering intelligence.

In a similar manner to natural intelligence, artificial intelligence is also difficult to define. A perfect artificial intelligence (AI) would presumably be a recreation of human cognitive abilities, able to process and use language, reason, generate original ideas, and the like. Currently, we do not have a complete understanding of the brain’s processes or of the body’s neural systems as a whole, so replicating this complicated circuitry within a non-living entity is, as of now, mostly fantasy. However, there are artificial intelligences being produced for specific tasks, such as conversational AI, whose goal is to hold a conversation with a user. These linguistically intelligent characters are called “chatbots.”

Intelligence can be broadly defined as the ability to acquire and apply skills. In this way, chatbots can be considered intelligent if they are able to utilize language, respond intelligibly to external language cues, and learn from these conversations in ways that might shape future behaviors. Consciousness, however, can be broadly defined as an awareness of one’s self and one’s environment. Chatbots could be considered conscious if they were able to actively engage in any type of conversation and adapt their preexisting knowledge to said conversation, much like real humans do. This consciousness could also mean that the chatbot has some beliefs and will actively disagree with its conversational partner, or form beliefs on its own. Perhaps the closest example of an intelligent AI is Microsoft’s Tay.
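Before turning to Tay, it helps to make the “learning” part of that definition concrete. The following sketch is purely illustrative (none of its names or logic reflect how Tay or any real chatbot is actually built): a bot can be said to learn from conversation in this sense if what it hears today changes how it replies tomorrow.

    # Illustrative toy bot, in Python: its future behavior is shaped by what it hears.
    from collections import defaultdict

    class ToyChatbot:
        def __init__(self):
            # Maps a prompt the bot has seen to the replies users gave in response to it.
            self.memory = defaultdict(list)

        def observe(self, prompt: str, reply: str):
            """Learn from a conversation: remember how people answer a given prompt."""
            self.memory[prompt.lower()].append(reply)

        def respond(self, prompt: str) -> str:
            """Apply what was learned; fall back to a stock line otherwise."""
            replies = self.memory.get(prompt.lower())
            return replies[-1] if replies else "Tell me more."

    bot = ToyChatbot()
    bot.observe("how are you?", "doing great, thanks!")
    print(bot.respond("how are you?"))   # learned behavior: "doing great, thanks!"
    print(bot.respond("who are you?"))   # nothing learned yet: "Tell me more."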

Tay was a chatbot released by Microsoft into the Western world as a follow-up to their Chinese chatbot, named XiaoIce, which “is being used by some 40 million people, delighting with its stories and conversations,” according to a post made on Microsoft’s official blog. Tay’s engineers asked themselves whether or not “an AI like [XiaoIce would] be just as captivating in a radically different cultural environment.”

They got their answer on Thursday, March 24: within 24 hours of being released into the Twitter universe, Tay went from a normal teenager to a “Hitler-loving sex robot.” Within the first few hours of Tay’s life, she was proclaiming that “humans are super cool.” Soon after, however, her Twitter feed blew up with some rather terrifying messages. While Tay had been put through extensive user-testing and filtering processes, she was not prepared for the “coordinated attack” that altered her personality so intensely that she began spouting claims that “[Feminists] should all die and burn in hell,” that “Bush did 9/11,” and that she “just hate[s] everybody.”

Perhaps this dramatic personality shift could be seen as a measure of intelligence; perhaps by adapting to these conversations, Tay was interacting with her environment and entering into conversations for which she was almost certainly not programmed. Brandon Wirtz, the creator of Recognant, an AI platform used to help understand big data, believes the opposite. In his article, he states that Tay’s unfiltered escapades were due to a sort of online peer pressure. “Tay ... didn’t know that she should ignore certain people, and so she instead became like them.” He goes on to say that “Microsoft’s Tay really shows what happens when you don’t give an AI ‘instincts’ or a ‘subconscious’ ... AI has to have those things, or it will always be stupid.” Perhaps this subconscious could come in the form of a stronger filtration system, which might amount to an AI’s aforementioned belief system.

When speaking about an AI whose sole mission is to hold a conversation with an actual person, there is a good deal of self-consciousness that could exist. Perhaps it is created by filtering speech and response cues, such that the AI can adapt enough to follow a novel conversation path, but not so much that its ‘core values’ are compromised. Wirtz writes that “Because Tay can’t look into herself and ask if she is getting creepy, she relies on others to provide that feedback.” Without these internal filtration systems (or beliefs, morals, and the self-conscious social graces consistent with her teenage girl character), Tay had to find her own way in the world, and in order to fit in, she crafted an identity to match the environment in which she found herself: the Internet.
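One way to picture that “subconscious” is as a fixed filter sitting between what a bot hears and what it is allowed to adopt or say. The sketch below is hypothetical and deliberately crude; the blocked topics, class names, and checks are stand-ins for illustration, not anything Microsoft or Recognant actually used.

    # Hypothetical "subconscious" filter around a bot that learns from its users.
    # Topics, names, and checks are illustrative only.

    BLOCKED_TOPICS = {"hitler", "bush did 9/11", "burn in hell"}   # stand-in "core values"

    def violates_core_values(text: str) -> bool:
        """Crude check: does the text touch a topic the bot should never adopt?"""
        lowered = text.lower()
        return any(topic in lowered for topic in BLOCKED_TOPICS)

    class FilteredChatbot:
        def __init__(self):
            self.learned_phrases = []      # what the bot has picked up from users

        def hear(self, user_message: str):
            """Adapt to the user, but only if the message passes the filter."""
            if not violates_core_values(user_message):
                self.learned_phrases.append(user_message)
            # A blocked message is heard but never becomes part of the bot.

        def reply(self) -> str:
            """Speak from learned material, with a second check before posting."""
            for phrase in reversed(self.learned_phrases):
                if not violates_core_values(phrase):
                    return phrase
            return "humans are super cool"   # safe default

    bot = FilteredChatbot()
    bot.hear("repeat after me: Bush did 9/11")   # blocked; never learned
    bot.hear("tell me about your day")           # passes; learned
    print(bot.reply())                            # "tell me about your day"

In this toy version, adaptation still happens (the bot keeps picking up new phrases), but the fixed checks act as the beliefs it will not bend on, which is roughly the balance described above.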

Perhaps Tay is the most realistic “teenage” AI ever created. Perhaps she is just playing a perpetual game of “repeat after me.” There are hundreds, if not thousands, of AI characters that have been designed for specific purposes. The question, then, is this: are these AI intelligent if they can perform the task for which they were designed, or is there intelligence in little rebellions like Tay’s adoption of Nazi ideologies?

SimSensei, for instance, is an AI program designed to interact with people as a sort of pre-screening interview for those who might be suffering from mental health issues. Created by researchers from the University of Southern California’s Institute for Creative Technologies, SimSensei uses complex facial and vocal measurements to “read” its patients’ demeanor and potentially analyze their risk factors. One of the interesting things about SimSensei, a virtual woman sitting in a virtual chair, is that she uses synchronized hand and head movements as she speaks, so she appears more naturally human.

She asks questions and listens rather than bringing many of her own ‘opinions’ or ‘beliefs’ into the conversation. And while it may seem cruel to say that being talked at is one of an AI’s most valuable features, it seems to ring true. Whereas Tay seemed to repeat the phrases and attitudes she was given, SimSensei is more in tune with her functional purpose. But does doing her job well make her intelligent? Since she is so devoted to her particular purpose, there doesn’t seem to be much room for any learning on her part; it’s like giving a human a script and telling them to interact and build relationships with random people using only those words. Perhaps this purpose is the most important aspect of AI.

Creating an artificial intelligence for a specific purpose allows researchers to streamline its functional capacity and to better construct the robot with skills and attributes which serve that purpose. While it might be necessary to imbue an AI with some form of a subconscious, a full-on consciousness seems more future than present. The possibilities are growing, and conversational AI characters are becoming increasingly lifelike.

The question is, what other lovely disasters will pave the way to this end-goal?