Facebook for the Blind

Reading Time: 4 minutes

In a world where Facebook has revolutionized socializing and connecting across borders, one group of people has been left behind in the race. The visually challenged, though they know what Facebook is, have rarely been able to enjoy and utilize the network. But this is soon to become a thing of the past.

The Facebook Accessibility Team gave this some thought and started working on an artificial intelligence (AI) based object recognition tool, known internally as Facebook for the Blind, that can analyze any photo and describe it to a user. Harnessing neural network technology (described extensively in an iRunway Research Report on Speech Recognition Technology), the team is developing an interface that can paint a fairly accurate picture of an image in words. The tool is said to provide context using nothing but the photo itself and will also allow Facebook to answer a user’s questions about a photograph.

Facebook, as most of us know, already embeds text-to-speech support in the platform, and approximately 50,000 members access the site through a screen reader such as Apple’s VoiceOver. At present, visually impaired people who have access to screen readers (tools that identify what’s displayed on a screen and read it aloud) can listen to what people are writing on Facebook. But there’s no way to read out what’s going on in the millions of photos shared on the social media platform every day.

Facebook for the Blind is being perceived as the next step up the accessibility ladder: a feature aimed at helping blind people “see” images uploaded to the social network by automatically translating some photos and videos into spoken words that offer blind users context, something that has never been possible before. Take, for example, the photo update below, which shows a couple with their child outside a restaurant in Solvang, California, with its famous windmill. While the caption or description might not shed much light, the AI system can offer this context for the photo: “This image may contain 3 people, smiling, outdoor.” This obviously does not tell someone the whole story, but being able to fill in the blanks without relying on somebody else’s interpretation can be empowering for a visually challenged person.
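To make that spoken description concrete, here is a minimal sketch, in Python, of how a handful of detected concepts might be stitched into the sentence quoted above. The helper function, its name and the hard-coded tags are illustrative assumptions; only the output format mirrors the example in this article.

```python
# Hypothetical helper: assembles detected concepts into the kind of
# sentence quoted above. The tags would come from an image recognition
# model; here they are hard-coded purely for illustration.
def build_alt_text(people_count, tags):
    parts = []
    if people_count:
        parts.append(f"{people_count} people")
    parts.extend(tags)
    return "This image may contain " + ", ".join(parts) + "."

# e.g. a detector that found three smiling people in an outdoor scene
print(build_alt_text(3, ["smiling", "outdoor"]))
# -> This image may contain 3 people, smiling, outdoor.
```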

The AI tool is still in its nascent stages of development and a mere prototype at present, but Mark King, an active member of the Facebook Accessibility Team who is visually challenged himself, believes that being able to take a blind user’s perception of an image from almost nothing to approximately 50% of its potential enjoyment is a massive leap forward in terms of accessibility.

How will this work?

The AI tool for the blind is based on a principle called “deep learning”, which also powers a standard, already-implemented Facebook feature that identifies faces and objects in posted photos. Facebook uses a vast web of neural networks to teach its services to identify photos by analyzing enormous numbers of similar images. For instance, to identify your face it feeds all known pictures of you into the neural network, and over time the system develops a pretty good idea of what you look like. That’s how Facebook recognizes you and your friends when you upload a photo and start adding tags.
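As a rough, hands-on illustration of that idea, the sketch below trains a tiny convolutional network in PyTorch to tell a few “people” apart from labeled photos. The architecture, the random tensors standing in for tagged face crops and the number of identities are all assumptions made for illustration; this is not Facebook’s implementation.

```python
# A minimal "deep learning" sketch of the idea above: feed many labeled
# photos into a neural network until it learns to tell people apart.
# Everything here (network size, data, labels) is an illustrative assumption.
import torch
import torch.nn as nn

num_people = 4                       # pretend we are learning to recognize 4 friends
photos = torch.randn(64, 3, 64, 64)  # stand-ins for 64 tagged face crops
labels = torch.randint(0, num_people, (64,))

classifier = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, num_people),       # one score per known person
)

optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):              # each pass nudges the weights toward better guesses
    loss = loss_fn(classifier(photos), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final training loss:", loss.item())
```

With real tagged photos in place of the random tensors, this kind of loop is, in spirit, how a face recognizer “develops a pretty good idea of what you look like”.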

From pattern and speech recognition to decision making and language translation, all the big players, from Apple to Microsoft, are reaping the fruits of neural networks. It’s no big surprise, then, that Facebook, which connects 1.5 billion people across the globe, would like to use this technology to describe photos for the blind and visually impaired.

“So much of sharing on social networks is photos and videos. Much of your brain is dedicated to processing visual imagery. So one of the keys to building systems that work is teaching computers to understand the visual world,” says Facebook’s chief technology officer Mike Schroepfer.

Experts in the field of deep learning opine that the industry has already come close to human performance in object recognition and face recognition. Problems, however, still remain in recognizing complex images or understanding a whole scene. At present, Facebook’s system is capable of providing a basic description of a photo. It can identify certain objects or, for example, tell whether the photo was taken indoors or outdoors. It can also say whether the people in the photo are smiling. This is no doubt useful when your friends upload new photos without appropriately captioning or describing them. However, there still exists ample room to improve the system. Companies such as Google and Microsoft have suggested through their research papers how neural networks can be used to automatically generate more complete photo captions describing the full scene.
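The encoder-decoder pattern those papers describe can be sketched roughly as follows: a convolutional network condenses the photo into a feature vector, which then primes a recurrent network that emits a caption one word at a time. Layer sizes, the toy vocabulary and the class name below are assumptions for illustration, not any company’s actual model.

```python
# A minimal sketch (not any production system) of CNN-encoder / RNN-decoder
# image captioning: the image vector acts as the first "word" fed to an LSTM
# that then predicts the caption token by token.
import torch
import torch.nn as nn

class CaptionSketch(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=256, hidden_dim=512):
        super().__init__()
        # CNN "encoder": reduces an RGB image to a single feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        # RNN "decoder": maps the image vector plus previous words to next-word scores.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.to_vocab = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        img_vec = self.encoder(images).unsqueeze(1)   # (B, 1, embed_dim)
        words = self.embed(captions)                  # (B, T, embed_dim)
        inputs = torch.cat([img_vec, words], dim=1)   # image vector starts the sequence
        out, _ = self.rnn(inputs)
        return self.to_vocab(out)                     # (B, T+1, vocab_size)

# Dummy forward pass on a fake batch of 128x128 photos and 5-word caption prefixes.
model = CaptionSketch()
scores = model(torch.randn(2, 3, 128, 128), torch.randint(0, 1000, (2, 5)))
print(scores.shape)  # torch.Size([2, 6, 1000])
```

Trained on a large set of captioned photos, a model of this shape is what would let a system move beyond tags like “indoor, smiling” toward full-scene descriptions.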

Currently, Facebook owns more than two hundred patents related to facial recognition and object/pattern/gesture recognition. Some exemplary relevant patents include US8824748B2, US8666198B2, US9087273B2, US8929615B2, US9143573B2, US20150036919A1 and WO2015066628A1, among others.

Can there be more to it?

Well, why not? This AI image recognition tool can most certainly enhance Facebook’s search capabilities. With millions of photos being uploaded every day, Facebook aims to do the best job of showing users exactly what they want. “By understanding just by looking at the pixels what’s in this photo we can do a better job of showing you what you want, and not showing you what you don’t want,” says Facebook’s CTO.

Connecting the World

Facebook’s AI efforts to bring its service to the blind and other disabled people fit right with the company’s mission—connecting the entire world! “For 20% of the world, if you do not make things accessible, they will not connect,” says Mark King of the Facebook Accessibility Team. “That’s over a billion people, and that’s totally counter [to our mission]. To not include those people would be a big mistake.”

The team plans to present its newest work at the Neural Information Processing Systems (NIPS) artificial intelligence conference on December 7, 2015. Though it’s unclear when the feature will actually make its way to our screens, considering the rapid growth and sophistication of AI techniques the day isn’t far when Facebook will no longer need to ask ‘what’s on your mind’ – it will already know!

(Featured image source: https://pixabay.com/p-440788/?no_redirect)

Subhasri Das

Subhasri is a technocrat who enjoys reading between the lines of patents to understand their hidden value.

