MindReader API, built-in webcams and Minority Report ??

 
gordonrussell



Joined: 22 Oct 2011
Location: Glasgow UK

PostPosted: Wed May 30, 2012 9:03 am    Post subject: MindReader API, built-in webcams and Minority Report ??

Call me paranoid, call me a "glass half full" man, but I do wonder.

Quote:
"I feel like this technology can enable us to give everybody a non-verbal voice, leverage the power of the crowd," Dr Rana el Kaliouby a member of the Media Lab's Affective Computing group.


High ideals, which I applaud... but somehow the word "leverage" makes me cringe a little.

Here is an extract from New Scientist:

Face-reading software to judge the mood of the masses

28 May 2012 by Lisa Grossman
Magazine issue 2866

Systems that can identify emotions in images of faces might soon collate millions of people's reactions to events and could even replace opinion polls

If the computers we stare at all day could read our faces, they would probably know us better than anyone.

That vision may not be so far off. Researchers at the Massachusetts Institute of Technology's Media Lab are developing software that can read the feelings behind facial expressions. In some cases, the computers outperform people. The software could lead to empathetic devices and is being used to evaluate and develop better adverts.

But the commercial uses are just "the low-hanging fruit", says Rana el Kaliouby, a member of the Media Lab's Affective Computing group. The software is getting so good and so easy to use that it could collate millions of people's reactions to an event as they sit watching it at home, potentially replacing opinion polls, influencing elections and perhaps fuelling revolutions.

"I feel like this technology can enable us to give everybody a non-verbal voice, leverage the power of the crowd," el Kaliouby says. She and her colleagues have developed a program called MindReader that can interpret expressions on the basis of a few seconds of video. The software tracks 22 points around the mouth, eyes and nose, and notes the texture, colour, shape and movement of facial features. The researchers used machine-learning techniques to train the software to tell the difference between happiness and sadness, boredom and interest, disgust and contempt. In tests to appear in the IEEE Transactions on Affective Computing, the software proved to be better than humans at telling joyful smiles from frustrated smiles. A commercial version of the system, called Affdex, is now being used to test adverts (see "Like what you see?").

Collecting emotional reactions in real time from millions of people could profoundly affect public polling. El Kaliouby, who is originally from Egypt, was in Cairo during the uprising against then-president Hosni Mubarak in 2011. She was startled that Mubarak seemed to think people liked his presidency, despite clear evidence to the contrary.

"She thought maybe Mubarak didn't think a million people was a big enough response to believe that people are upset," lab director Rosalind Picard said at the lab's spring meeting on 25 April. "There are 80 million people in Egypt, and most of them were not there. If we could allow them the opportunity to safely and anonymously opt in and give their non-verbal feedback and join that conversation, that would be very powerful."

Pollsters could even collect facial reactions on the streets, or analyse the reaction of an audience listening to a politician's speech. Picard's group recently ran an MIT-wide experiment called Mood Meter, placing cameras all over campus to gauge the general mood. To preserve privacy, the cameras didn't store any video or record faces - they just counted the number of people in the frame, and how many were smiling.
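The Mood Meter idea, counting faces and smiles per frame and keeping only the counts, can be approximated with the stock Haar-cascade detectors that ship with the opencv-python package. A rough sketch of the concept, not MIT's implementation:

Code:
# Rough approximation of the Mood Meter concept with stock OpenCV cascades:
# count faces and smiling faces per frame, keep only aggregate counts,
# store no video.  Thresholds are plausible defaults, not tuned values.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def mood_counts(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    smiling = 0
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        if len(smile_cascade.detectMultiScale(roi, scaleFactor=1.7,
                                              minNeighbors=20)) > 0:
            smiling += 1
    return len(faces), smiling     # only the counts leave this function

cap = cv2.VideoCapture(0)          # built-in webcam
ok, frame = cap.read()
if ok:
    total, smiling = mood_counts(frame)
    print(f"{smiling} of {total} people smiling")
cap.release()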

Frank Newport, editor in chief of political polling firm Gallup, headquartered in Washington DC, says such software could be useful. "There's no question that emotions and instincts have an impact in politics," he says. "We're certainly open to looking at anything along those lines." But he'd want to know how well facial responses predict actual votes.

Picard worries that the technology might have a dark side. "My fear is that some of these dictators would want to blow away the village that doesn't like them," she says. It would be important to protect the identities and IP addresses of viewers, she says.


Like what you see?

In 2009, MIT researchers Rosalind Picard and Rana el Kaliouby co-founded Affectiva in Waltham, Massachusetts, to commercialise their facial-recognition research.

Since launching a project to record viewers' facial reactions to Super Bowl adverts in February, they have collected more than 40 million frames of people responding to what they see. Facial expressions and head position are picked up by the user's webcam and then processed to gauge emotion.

The adverts that were tested can be viewed on the company website, as can graphs of the audience response, grouped by age. The idea is to give advertisers a fast, accurate response to campaigns.


Here is an extract on the MindReader API:

People express and communicate their mental states, including emotions, thoughts, and desires, through facial expressions, vocal nuances, gestures and other nonverbal channels. This is true even when they are interacting with machines. Our mental states shape the decisions that we make, govern how we communicate with others, and influence attention, memory and behavior. Thus, our ability to read nonverbal cues is essential to understanding, analyzing, and predicting the actions and intentions of others, and is known, in the psychology and cognitive science literature, as "theory of mind" or "mind-reading".

MindReader API enables the real-time analysis, tagging and inference of cognitive-affective mental states from facial video. The API builds on Rana el Kaliouby's doctoral research, which presents a computational model of mind-reading as a framework for machine perception and mental state recognition. This framework combines bottom-up vision-based processing of the face (e.g. a head nod or smile) with top-down predictions of mental state models (e.g. interest and confusion) to interpret the meaning underlying head and facial signals over time. A multilevel, probabilistic architecture (using Dynamic Bayesian Networks) models the hierarchical way in which people perceive facial and other human behavior, and handles the uncertainty inherent in the process of attributing mental states to others. The output probabilities represent a rich modality that technology can use to represent a person's state and respond accordingly.

Using Google's face tracker (formerly NevenVision), 24 feature points are located and tracked on the face. Next, motion, shape and color deformations of these features are used to identify 20 facial and head movements (e.g. head pitch, lip corner pull) and communicative gestures (e.g. head nod, smile, eyebrow flash). Dynamic Bayesian Networks model these head and facial movements over time, and infer the person's affective-cognitive state.
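The extract gives the structure of the model but not its parameters. The temporal part it describes, inferring a hidden mental state from a stream of observed head and facial gestures, amounts to forward filtering in a dynamic Bayesian network; the simplest case is an HMM-style update. A toy sketch in Python, with invented states, gestures and probabilities purely for illustration:

Code:
# Toy forward filter in the spirit of the described architecture:
# hidden mental states, observed head/facial gestures, belief updated each step.
# The states, gestures and every probability below are invented for illustration.
import numpy as np

states = ["interest", "confusion", "boredom"]
gestures = ["head_nod", "eyebrow_flash", "smile", "none"]

# P(state_t | state_t-1): mental states tend to persist from frame to frame.
transition = np.array([
    [0.90, 0.05, 0.05],
    [0.05, 0.90, 0.05],
    [0.05, 0.05, 0.90],
])

# P(gesture | state): how likely each state is to produce each gesture.
emission = np.array([
    # nod   brow  smile  none
    [0.30, 0.20, 0.20, 0.30],   # interest
    [0.05, 0.30, 0.05, 0.60],   # confusion
    [0.05, 0.05, 0.05, 0.85],   # boredom
])

def infer(observed_gestures):
    """Return P(state) after each observation (standard HMM forward pass)."""
    belief = np.full(len(states), 1.0 / len(states))
    history = []
    for g in observed_gestures:
        belief = transition.T @ belief                     # predict next state
        belief = belief * emission[:, gestures.index(g)]   # weigh by observation
        belief = belief / belief.sum()                     # normalise
        history.append(dict(zip(states, belief.round(3))))
    return history

for step in infer(["head_nod", "smile", "none", "eyebrow_flash"]):
    print(step)

The real system is multilevel (feature points, then movements and gestures, then mental states) and handles far more states and observations, but the frame-by-frame belief update shown here is the core mechanism being described.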

Links


http://www.newscientist.com/article/mg21428665.400-facereading-software-to-judge-the-mood-of-the-masses.html

http://web.media.mit.edu/~kaliouby/API.html
Brown Sauce



Joined: 07 Jan 2007

PostPosted: Wed May 30, 2012 9:41 am

google is getting more and more sinister every day ..
major.tom
Macho Business Donkey Wrestler


Joined: 21 Jan 2007
Location: BC, Canada

PostPosted: Thu May 31, 2012 12:30 am

That article reminded me of this:
[embedded video: Gary Kovacs clip]
Install the Collusion add-on for Firefox and see how many other sites are piggy-backing on the sites you visit. I also use a privacy add-on called NoScript (it lets you allow script access only to the sites you select) and turn off third-party cookies.
gordonrussell



Joined: 22 Oct 2011
Location: Glasgow UK

PostPosted: Fri Jun 08, 2012 5:33 pm

Thanks for that very informative Gary Kovacs clip.

Here's more potential advertisers' voyeurism:


Quote:
However, it's said that Intel believes its hardware's ability to gather data on consumers would represent a major boon for cable providers, who currently have to rely on outmoded Nielsen ratings information from a limited sample of the US population.

http://www.theverge.com/2012/6/8/3072229/intel-planning-tv-platform-with-targeted-ads-via-facial-recognition
gordonrussell



Joined: 22 Oct 2011
Location: Glasgow UK

PostPosted: Fri Jul 20, 2012 11:04 am

Brown Sauce wrote:
google is getting more and more sinister every day ..




and with Google possibly giving Microsoft a kick in the gahoulies
http://www.bbc.co.uk/news/business-18917906

we might do well to keep an eye on Google... and Microsoft of course.