The application is designed to provide real-time detection of human emotions.
Scientists in Germany have developed a mobile app for Google Glass that analyzes facial expressions to detect emotions in real time.
The SHORE real-time facial detection and analysis software has now been adapted for these augmented reality headsets.
The SHORE mobile app was created by researchers at the Fraunhofer Institute for Integrated Circuits and is the first application of its kind for Google Glass. Using the wearable device's integrated camera, it detects the faces of people in the wearer's field of view and analyzes their facial expressions to determine their emotions.
This Google Glass mobile app also detects the gender and estimates the age of each person it observes.
The Glassware (the term used for applications built specifically for this device) is capable of several forms of detection, but the researchers point out that it cannot determine the identity of the people it analyzes. All calculations are performed in real time by Google Glass's integrated CPU, and the images captured for use by the application never actually leave the augmented reality glasses.
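The on-device pipeline described above can be sketched roughly as follows. The SHORE API itself is not publicly documented, so every class and function name here is a hypothetical placeholder; the sketch only illustrates the design the researchers describe: frames are analyzed locally for emotion, gender, and age, identity is never computed, and nothing is sent off the device.

```python
from dataclasses import dataclass

# Hypothetical result type: per-face attributes only.
# Deliberately contains no identity information.
@dataclass
class FaceReading:
    emotion: str       # e.g. "happy", "sad", "angry", "surprised"
    gender: str        # "male" or "female"
    age_estimate: int  # rough age in years

class OnDeviceAnalyzer:
    """Stand-in for a SHORE-style analyzer.

    All processing runs on the device's own CPU; a real implementation
    would perform face detection and expression classification here.
    """

    def analyze(self, frame):
        detected_faces = self._detect_faces(frame)
        return [self._classify(face) for face in detected_faces]

    def _detect_faces(self, frame):
        # Placeholder: pretend exactly one face was found in the frame.
        return [frame]

    def _classify(self, face):
        # Placeholder: a canned reading so the pipeline shape is clear.
        return FaceReading(emotion="happy", gender="female", age_estimate=30)

def process_frame(frame, analyzer):
    # Frames are analyzed locally and then discarded; nothing is uploaded.
    return analyzer.analyze(frame)

# Simulate one camera frame (in practice, raw pixels from the Glass camera).
readings = process_frame(object(), OnDeviceAnalyzer())
```

The key design choice mirrored here is that the analyzer returns only coarse attributes and the raw frame never leaves the function, matching the privacy constraint the researchers emphasize.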
The research team's true achievement is not only the development of this Google Glass app; it also broadens the spectrum of applications available for AR glasses. It opens the door to uses such as assisting individuals who have difficulty interpreting emotions conveyed through facial expressions, as in the case of people with an autism spectrum disorder.
The information obtained through this type of mobile app could be superimposed on the field of vision of the wearer of augmented reality glasses such as Google Glass. It has even been suggested that people with visual impairments could benefit from applications based on this type of software, as they could theoretically receive supplementary audio information about the individuals with whom they are interacting, including age estimates, gender, and emotions identified through facial expressions.
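Turning such detections into audio cues would be straightforward once the attributes are available. The following is purely a hypothetical sketch (no such interface is documented for the app): each reading is formatted as a short phrase that could be handed to a text-to-speech engine or rendered as an on-screen overlay.

```python
def describe(emotion, gender, age_estimate):
    # Turn one face reading into a short phrase suitable for a
    # heads-up overlay or spoken output via text-to-speech.
    return f"{gender}, about {age_estimate}, looks {emotion}"

# Example: a single detected person's attributes.
phrase = describe("happy", "female", 30)
print(phrase)  # → female, about 30, looks happy
```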