Improving meme accessibility for vision-impaired users
People with visual impairments rely on screen reader software to use social media. However, screen readers cannot convey some online content, such as memes, because many memes lack alternate (alt) text: the written description that tells a user what an image depicts.
Carnegie Mellon University researchers created a tool that automatically recognizes memes and applies premade alt-text templates describing them. Once screen readers can voice them, memes can be enjoyed by many more people.
“If you’re blind, you miss that part of the conversation,” stated Cole Gleason, a Ph.D. student in Carnegie Mellon's Human-Computer Interaction Institute (HCII), in a press release.
Jeff Bigham, an associate professor in HCII, added, “Memes may not seem like the most important problem, but a vital part of accessibility is not choosing for people what deserves their attention. Many people use memes, and so they should be made accessible.”
Social media platforms, where memes usually spread, make adding alt text difficult. Some platforms, such as Twitter, let users attach alt text to uploaded images, but the feature is well hidden and seldom used. The researchers examined nine million tweets and found that one million of them included images; by their estimate, only 0.1 percent of those images had alt text.
According to Gleason, his team's software uses basic computer vision techniques to identify the image in each meme, distinguishing categories such as celebrities or cartoon characters. Because the overlaid text usually changes with each iteration of a meme, the software uses optical character recognition to extract it. Only one alt-text template describing the underlying image is needed per meme type; the extracted text is then appended to that description. But while the software transcribes memes accurately, describing a meme's intended meaning is significantly harder.
"It depended on the meme if the humor translated,” explained Gleason. “Some of the visuals are more nuanced, and sometimes it's explicit and you can just describe it."
One example of such a simple meme is the “success kid” meme. The alt text for one iteration of this meme is just “Toddler clenching fist in front of smug face. Overlaid text on top: Was a bad boy all year. Overlaid text on bottom: Still got awesome presents from Santa.”
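The approach described above, pairing one premade description per meme type with text pulled out by character recognition, can be sketched in a few lines. This is only an illustrative mock-up, not the researchers' actual code: the template names are invented, and the classifier and OCR steps are replaced by directly supplied strings.

```python
# Illustrative sketch of the pipeline described in the article: a premade
# alt-text template for each known meme type, combined with the overlaid
# text that OCR would extract from a specific iteration of the meme.

# One premade image description per meme type (names are hypothetical).
ALT_TEXT_TEMPLATES = {
    "success_kid": "Toddler clenching fist in front of smug face.",
}

def build_alt_text(template_id, top_text="", bottom_text=""):
    """Combine the premade image description with the overlaid text."""
    parts = [ALT_TEXT_TEMPLATES[template_id]]
    if top_text:
        parts.append(f"Overlaid text on top: {top_text}.")
    if bottom_text:
        parts.append(f"Overlaid text on bottom: {bottom_text}.")
    return " ".join(parts)

# In the real system, template_id would come from an image classifier and
# the overlaid strings from OCR; here they are supplied directly.
print(build_alt_text(
    "success_kid",
    top_text="Was a bad boy all year",
    bottom_text="Still got awesome presents from Santa",
))
```

Run on the "success kid" iteration quoted above, this reproduces the same alt text, which is what makes the template approach cheap: describing the image once covers every variant of that meme.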
The team of researchers also developed a platform that translates memes into audio as opposed to text. The platform allows meme creators to associate sounds with meme formats using a simple drag-and-drop interface. This allows the complex feelings behind memes to be conveyed with sound effects and music.
“One of the reasons we tried the audio memes was because we thought alt text would kill the joke, but people still preferred the text because they're so used to it,” Gleason said.
Moving forward, the researchers' main challenge is deploying the meme-transcription software. It could be integrated into meme-generator websites, but the alt text would not carry over to social media. Gleason remarked that his team would have to persuade Twitter to add a new feature, and speculated that doing so might burden users, especially on smartphones.
Other Carnegie Mellon researchers are working on a browser extension that adds alt text to images on Twitter, which could include translating memes. Another project aims to embed alt text in image metadata, so the two stay together as memes jump between social media sites. In a few years, the magic of memes will be accessible not only to those with keen eyes, but to anyone who can hear.