Source: Global Accessibility News
UK: Computer scientists are developing new adaptive mobile technology which could enable people who are blind or have low vision to ‘see’ through their smartphone or tablet.
Funded by a Google Faculty Research Award, specialists in computer vision and machine learning based at the University of Lincoln, UK, are aiming to embed a smart vision system in mobile devices to help people with sight problems navigate unfamiliar indoor environments.
Based on preliminary work on assistive technologies done by the Lincoln Centre for Autonomous Systems, the team plans to use the colour and depth sensor technology inside new smartphones and tablets, like the recent Project Tango by Google, to enable 3D mapping and localisation, navigation and object recognition. The team will then develop the best interface to relay that information to users – whether that is vibrations, sounds or the spoken word.
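To make the idea concrete, here is a minimal sketch of how depth data from such a sensor could drive navigation feedback: the frame is split into left, centre and right regions, and the region holding the nearest obstacle is reported. The frame size, distance threshold and region layout are illustrative assumptions, not details of the Lincoln system.

```python
import numpy as np

def nearest_obstacle_direction(depth_m, near_threshold_m=1.0):
    """Split a depth frame (metres) into left/centre/right thirds and
    report which third holds the closest obstacle within the threshold.
    Returns None when the path ahead is clear."""
    h, w = depth_m.shape
    thirds = {
        "left": depth_m[:, : w // 3],
        "centre": depth_m[:, w // 3 : 2 * w // 3],
        "right": depth_m[:, 2 * w // 3 :],
    }
    closest = {name: float(region.min()) for name, region in thirds.items()}
    direction = min(closest, key=closest.get)
    return direction if closest[direction] < near_threshold_m else None

# Synthetic 6x9 depth frame: open space at 3 m, one obstacle 0.5 m away
# on the right-hand side of the field of view.
frame = np.full((6, 9), 3.0)
frame[:, 7] = 0.5
print(nearest_obstacle_direction(frame))  # prints right
```

A real system would then map this kind of output to whichever feedback channel the user prefers – a vibration on the corresponding side, a tone, or a spoken word.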
Project lead Dr Nicola Bellotto, an expert on machine perception and human-centred robotics from Lincoln’s School of Computer Science, said: “This project will build on our previous research to create an interface that can be used to help people with vision disabilities.
“There are many visual aids already available, from guide dogs to cameras and wearable sensors. Typical problems with the latter are usability and acceptability. If people were able to use technology embedded in devices such as smartphones, it would not require them to wear extra equipment which could make them feel self-conscious. There are also existing smartphone apps that are able to, for example, recognise an object or speak text to describe places. But the sensors embedded in the device are still not fully exploited. We aim to create a system with ‘human-in-the-loop’ that provides good localisation relevant to visually impaired users and, most importantly, that understands how people observe and recognise particular features of their environment.”
The research team, which includes Dr Oscar Martinez Mozos, a specialist in machine learning and quality of life technologies, and Dr Grzegorz Cielniak, who works in mobile robotics and machine perception, aims to develop a system that will recognise visual clues in the environment. This data would be detected through the device camera and used to identify the type of room as the user moves around the space.
A key aspect of the system will be its capacity to adapt to individual users’ experiences, modifying the guidance it provides as the machine ‘learns’ from its surroundings and from human interaction. The more accustomed the user becomes to the technology, the quicker and easier it would be to identify the environment.
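A toy sketch of that adaptive, human-in-the-loop idea: infer a room type from objects detected in the camera feed, and weight the guess more heavily for cues the user has previously confirmed. The object lists, room labels and update rule below are purely illustrative assumptions, not the project’s actual method.

```python
from collections import Counter, defaultdict

# Illustrative cue sets only; a real system would learn these.
ROOM_CUES = {
    "kitchen": {"sink", "kettle", "fridge"},
    "office": {"desk", "monitor", "keyboard"},
    "bathroom": {"sink", "bathtub", "mirror"},
}

class RoomClassifier:
    def __init__(self):
        # Per-user weights, nudged each time the user confirms a guess.
        self.user_weight = defaultdict(lambda: 1.0)

    def guess(self, detected_objects):
        """Score each room by how many of its cue objects were seen,
        scaled by this user's learned weight for that room."""
        scores = Counter()
        for room, cues in ROOM_CUES.items():
            scores[room] = len(cues & set(detected_objects)) * self.user_weight[room]
        room, score = scores.most_common(1)[0]
        return room if score > 0 else None

    def confirm(self, room):
        # Human-in-the-loop feedback: trust this room's cues a bit more.
        self.user_weight[room] *= 1.2

clf = RoomClassifier()
print(clf.guess(["kettle", "sink"]))  # prints kitchen
clf.confirm("kitchen")
```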
The research team will work with a Google sponsor and will be collaborating with specialists at Google throughout the ‘Active Vision with Human-in-the-Loop for the Visually Impaired’ project.
Artificial intelligence enables people who are blind to “see”
An app that allows people who are blind to identify the world directly in front of them using machine vision technology.
Media release
Students at Singularity University have created an app that allows blind people to identify the world directly in front of them using machine vision technology.
The app, Aipoly, is an intelligent assistant for the visually impaired that empowers them to explore and understand their surroundings through computer vision and audio-feedback.
“The power is in helping us construct the mental picture. And not everybody has the same skill at creating mental images,” says Steve Mahan, president of the Santa Clara Blind Centre and the first user of Google’s self-driving car. “Most of us are trying to do [that]. Knowing where we are is sometimes more than an address.”
The user takes a picture that is automatically uploaded to the Aipoly servers, where it is analysed and tagged; a description is then sent back and read out loud using text-to-speech. This means that blind people may be able to see what their kids are wearing each day, recognise street signs, find objects that are out of reach, and have the freedom to purchase gifts for their friends by themselves.
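The capture–analyse–speak loop just described can be sketched in a few lines. The server-side analysis and the speech engine are stubbed out here, since Aipoly’s actual API is not public; both function bodies are placeholder assumptions.

```python
def analyse_image(image_bytes):
    # Stand-in for the server-side tagging step: a real system would run
    # a vision model here and return a description of the scene.
    return "a red street sign reading STOP"

def text_to_speech(text):
    # Stand-in for the device's text-to-speech engine.
    return f"[spoken] {text}"

def describe_photo(image_bytes):
    """The full loop: photo in, spoken description out."""
    description = analyse_image(image_bytes)
    return text_to_speech(description)

print(describe_photo(b"...jpeg data..."))  # prints [spoken] a red street sign reading STOP
```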
The machine vision algorithm is optimised for visually impaired users, with training focused on street signs and on objects commonly used by blind people.
Machine vision, or computer vision, is an exponential technology whose accuracy more than doubled between 2012 and 2013. Convolutional neural networks identify the elements within a picture, and neural image caption generation feeds back a semantic description of its content.
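At the heart of a convolutional network is a simple operation: sliding a small kernel over the image and summing the element-wise products at each position. The plain-NumPy sketch below shows that single operation with a hand-picked edge-detecting kernel; real networks stack many such layers with learned kernels.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core operation of a
    convolutional layer (no padding, stride 1)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A vertical-edge kernel responds strongly where brightness changes
# left-to-right, e.g. at the border of a sign against the sky.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]])
edge_kernel = np.array([[-1, 1],
                        [-1, 1]])
response = conv2d(image, edge_kernel)  # peaks at the 0→1 boundary
```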
There are 285 million visually impaired people in the world, and within the next five years two thirds of them will become smartphone users.
As for the bigger vision, “developing this technology further could help us identify and search for objects around our homes and outdoors like we do with websites online,” says Aipoly cofounder Alberto Rizzoli.
Singularity University was co-founded in 2008 by Ray Kurzweil, a pioneer in technology for blind people with over 40 years of experience in the field, having developed omni-font optical character recognition (OCR) and the first print-to-speech reading machine.
“This complements the work that Ray Kurzweil has done,” says Aipoly cofounder and 2012 Young Australian of the Year Marita Cheng. “In every focus group, people mention a Kurzweil technology they use to get about their daily lives.”
Singularity University students learn about using exponential technologies to impact the lives of a billion people within 10 years. The Aipoly technology will be showcased at Singularity University’s Demo Day on 18 August at NASA Ames Research Park in Mountain View.
Aipoly is now looking for beta testers from around the world, of all visual abilities – fully sighted and blind alike.
Press kit, video, photos, logo, images:
Contact:
Marita Cheng
Cofounder, Aipoly
+1 (650) 695-7409
marita.cheng@singularityu.org