|When it comes to emotional intelligence and your computer, what constitutes too much information?|
(Image courtesy singularityhub.com.)
According to the New Yorker, this may be happening more quickly than we expect. Computers can already attempt to determine mood from vocal pitch and intensity, while simultaneously analyzing any attendant video for micro-expressions or gestures that could reveal even more about an interaction. Even the choice and placement of words in a sentence can carry signals, hinting at how angry, passionate, or spectacularly talented an author is. Now, computers can not only register these elements, but use them to temper their own responses or advice.
Rana el Kaliouby, an Egyptian scientist who runs the Boston-based company Affectiva, is at the forefront of this mecha-emotional leap. Affectiva's most prominent software, Affdex, is trained to recognize four major emotions: happy, confused, surprised, and disgusted. Breaking the user's face image down into deformable and non-deformable points, the software analyzes how far certain parts of the face move (such as a smile or frown raising or lowering the corners of the mouth) relative to other fixed points on the face (such as the tip of the nose). Things like skin texture (where wrinkles appear, or don't) also factor in. These metrics are then combined to compute what you feel.
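The geometric idea is simple to sketch. The toy snippet below (Affdex's actual pipeline is proprietary; the landmark names, coordinates, and the single "mouth corner lift" feature here are invented for illustration) measures how far the mouth corners have moved relative to a non-deformable anchor point, the nose tip, so that simply moving one's head doesn't register as an expression:

```python
# Illustrative sketch only -- not Affdex's implementation.
# Hypothetical (x, y) landmark positions for a neutral reference face.
NEUTRAL = {
    "nose_tip": (0.0, 0.0),
    "mouth_left": (-3.0, -4.0),
    "mouth_right": (3.0, -4.0),
}

def mouth_corner_lift(frame_landmarks, neutral=NEUTRAL):
    """Average vertical displacement of the mouth corners, measured
    relative to a non-deformable anchor (the nose tip)."""
    def rel_y(landmarks, name):
        # Vertical position relative to the nose tip, so that
        # head translation cancels out of the measurement.
        return landmarks[name][1] - landmarks["nose_tip"][1]

    lifts = [
        rel_y(frame_landmarks, name) - rel_y(neutral, name)
        for name in ("mouth_left", "mouth_right")
    ]
    return sum(lifts) / len(lifts)

# A frame where both mouth corners have risen (a smile-like deformation),
# even though the whole head has also shifted:
smiling = {
    "nose_tip": (0.5, 1.0),
    "mouth_left": (-2.5, -2.0),
    "mouth_right": (3.5, -2.0),
}
print(mouth_corner_lift(smiling))  # positive -> corners raised relative to nose
```

A real system would track dozens of such points across video frames and feed the resulting features, along with texture cues like wrinkles, into a trained classifier rather than a hand-written rule.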
Based on research that psychologist Paul Ekman began in the 1960s, the idea behind this technology stems from a simple, universal concept: all humans, regardless of race, gender, age, or language barriers, have at least six specific facial expressions that register particular emotions. Ekman broke these expressions down into their constituent movements and wrote a 500-page epic called FACS (Facial Action Coding System) on the subject. The work has been considered the preeminent treatise on this topic for decades now.
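FACS decomposes expressions into numbered Action Units (AUs), and combinations of AUs are associated with emotions. A minimal sketch of that lookup is below; the AU-to-emotion pairings follow commonly cited examples from the FACS literature (e.g., cheek raiser plus lip corner puller for happiness), but Ekman's full system is far more detailed than three entries:

```python
# Sketch of the FACS idea: emotions as signatures of Action Units (AUs).
# Pairings are commonly cited examples, greatly simplified for illustration.
EMOTION_SIGNATURES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "surprise": {1, 2, 5, 26},   # brow raisers + upper lid raiser + jaw drop
    "disgust": {9, 15},          # nose wrinkler + lip corner depressor
}

def match_emotions(observed_aus):
    """Return emotions whose full AU signature appears in the observed set."""
    return [
        emotion
        for emotion, signature in EMOTION_SIGNATURES.items()
        if signature <= observed_aus  # signature is a subset of what we saw
    ]

print(match_emotions({6, 12, 25}))  # ['happiness']
```

Software like Affdex effectively automates the "observed AUs" step, detecting the constituent muscle movements from video instead of relying on a human coder.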
Other companies are on the e-emotional bandwagon too, with names like Emotient, Realeyes, and Sension. Companies that rely on videoconferencing could now have a useful extra read on what their clients and associates are thinking. Emotions, which neuroscience has found to be closely tied to decision-making and common sense, can now be deduced from faces and choices with a degree of accuracy that seems like mind-reading.
|We're less unique than anyone thinks.|
(Image courtesy thewaylifeis.com.)
While useful (and now predominantly operational) in business, Kaliouby also spent time researching whether this kind of recognition could act as an "emotional hearing aid" for those with autism. The National Science Foundation offered Kaliouby and her mentor nearly a million dollars to develop the idea. This proved successful, but the idea was almost immediately seized upon by businesses from Pepsi to Toyota in the interest of learning more about their consumers' preferences. These requests overwhelmed the scientists, leading to the creation of Affectiva. The company, which claims to have refused requests to use the software for espionage (corporate and personal), wanted to generate revenue from investors to augment its autism-related research.
Thus Affdex began testing users' response to advertisements, giving the promotions industry a leg up on what consumers would be feeling when exposed to their sales pleas. More than two million videos from eighty countries lent the program an unprecedented amount of information, all adding up to more accuracy in prediction from the program. Affectiva now deals in these negotiations and improvements full-time. In coming years, with more "smart" devices and internet-enabled items out there for our interaction, emotional electronics could use their ever-increasing knowledge to hopefully make our lives better.
These programs have our attention, which is a valuable resource. Now, can that be used to hold our interest, connect us more completely, and/or improve our circumstances (even just by knowing we need the room temperature raised a little)? Or will it simply serve as another metric to keep tabs on a passive populace? Will we have the right to know when and where we are being emotionally analyzed, and will we be able to thwart such advances if desired? Kaliouby maintains that there must be an overall altruistic tilt to the usage of the program, explaining to various advertisers that, “In our space, you could very easily be perceived as Big Brother, as opposed to the gatekeeper of your own emotional data—and it is two very different positions. If we are not careful, we can very easily end up on the Big Brother side.”
Whether we'll end up selling our attention to gain happiness points to sell for more happiness remains uncertain. But the fact remains that the market for your emotions is vast and lucrative. Companies will want to know you're happy if it makes them feel they're doing something right. Other, more insidious organizations may be tickled to learn that you're feeling deeply unsettled and on edge (right where some of them want you). Will the future be made of humans wearing constant poker faces, lest we be called out by computers? Will there be surcharges for extra super-sized doses of happiness from certain places or products? Or should we maybe turn the lens in on ourselves, and understand the nature of our own feelings, before we release them into the wild to be tagged and tracked...or hunted?
|And remember, all of this information is taken from imagery alone. We're not even really "plugged in" yet...|
(Image courtesy rdn-consulting.com.)