Tuesday, December 4, 2012

Smartphones might soon develop emotional intelligence

Public release date: 4-Dec-2012

Contact: Leonor Sierra
lsierra@ur.rochester.edu
585-276-6264
University of Rochester

An algorithm for speech-based emotion classification

If you think having your phone identify the nearest bus stop is cool, wait until it identifies your mood.

New research by a team of engineers at the University of Rochester may soon make that possible. At the IEEE Workshop on Spoken Language Technology on Dec. 5, the researchers will describe a new computer program that gauges human feelings through speech, with substantially greater accuracy than existing approaches.

Surprisingly, the program doesn't look at the meaning of the words. "We actually used recordings of actors reading out the date of the month; it really doesn't matter what they say, it's how they're saying it that we're interested in," said Wendi Heinzelman, professor of electrical and computer engineering.

Heinzelman explained that the program analyzes 12 features of speech, such as pitch and volume, to identify one of six emotions from a sound recording. It achieves 81 percent accuracy, a significant improvement over earlier studies that achieved only about 55 percent.
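To make "measuring pitch and volume at short intervals" concrete, here is a minimal Python sketch of frame-level feature extraction. It is purely illustrative: the release does not list the team's actual 12 features, frame sizes, or pitch estimator, so everything below is an assumption.

```python
# Illustrative sketch only: simple frame-level prosodic features of the kind
# the article describes (pitch, volume). NOT the Rochester team's code.
import numpy as np

def frame_features(signal, sr=16000, frame_ms=25, hop_ms=10):
    """Slice a mono signal into short frames and compute a few simple features."""
    frame_len = int(sr * frame_ms / 1000)   # 25 ms frames (assumed)
    hop = int(sr * hop_ms / 1000)           # measured every 10 ms (assumed)
    feats = []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2))                    # "volume" (energy)
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0  # zero-crossing rate
        # crude pitch estimate from the autocorrelation peak in the 60-400 Hz band
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        lo, hi = sr // 400, sr // 60
        pitch = sr / (lo + np.argmax(ac[lo:hi])) if rms > 1e-3 else 0.0
        feats.append([rms, zcr, pitch])
    return np.array(feats)   # one row of measurements per short interval

# Toy usage: a one-second 200 Hz tone stands in for a recorded voice.
sr = 16000
t = np.arange(sr) / sr
print(frame_features(np.sin(2 * np.pi * 200 * t), sr).shape)  # (frames, 3)
```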

The research has already been used to develop a prototype of an app. The app displays either a happy or sad face after it records and analyzes the user's voice. It was built by one of Heinzelman's graduate students, Na Yang, during a summer internship at Microsoft Research. "The research is still in its early days," Heinzelman added, "but it is easy to envision a more complex app that could use this technology for everything from adjusting the colors displayed on your mobile to playing music fitting to how you're feeling after recording your voice."

Heinzelman and her team are collaborating with Rochester psychologists Melissa Sturge-Apple and Patrick Davies, who are currently studying the interactions between teenagers and their parents. "A reliable way of categorizing emotions could be very useful in our research," Sturge-Apple said. "It would mean that a researcher doesn't have to listen to the conversations and manually input the emotion of different people at different stages."

Teaching a computer to understand emotions begins with recognizing how humans do so.

"You might hear someone speak and think 'oh, he sounds angry!' But what is it that makes you think that?" asks Sturge-Apple. She explained that emotion affects the way people speak by altering the volume, pitch and even the harmonics of their speech. "We don't pay attention to these features individually, we have just come to learn what angry sounds like particularly for people we know," she adds.

But for a computer to categorize emotion it needs to work with measurable quantities. So the researchers established 12 specific features of speech that were measured at short intervals throughout each recording. They then categorized each of the recordings and used them to teach the computer program what "sad," "happy," "fearful," "disgusted," "angry," or "neutral" sounds like.
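The release does not say which learning algorithm the team used, so the hypothetical sketch below simply stands in an off-the-shelf support-vector classifier from scikit-learn: one 12-number feature vector per recording is paired with the emotion the actor was asked to portray, and a model is fit on those labeled examples. The random "features" are placeholders for illustration only.

```python
# Hypothetical training sketch: NOT the Rochester team's algorithm, just an
# off-the-shelf SVM standing in for "teaching the program" what emotions sound like.
import numpy as np
from sklearn.svm import SVC

EMOTIONS = ["sad", "happy", "fearful", "disgusted", "angry", "neutral"]

rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 12))             # stand-in: 12 features per recording
y_train = rng.integers(len(EMOTIONS), size=300)  # stand-in: which emotion was portrayed

# probability=True lets the classifier report how confident it is, which the
# rejection rule sketched further below relies on
clf = SVC(probability=True).fit(X_train, y_train)
```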

The system then analyzed new recordings and tried to determine whether the voice in the recording portrayed any of the known emotions. If the computer program was unable to decide between two or more emotions, it just left that recording unclassified.

"We want to be confident that when the computer thinks the recorded speech reflects a particular emotion, it is very likely it is indeed portraying this emotion," Heinzelman explained.

Previous research has shown that emotion classification systems are highly speaker dependent; they work much better if the system is trained by the same voice it will analyze. "This is not ideal for a situation where you want to be able to just run an experiment on a group of people talking and interacting, like the parents and teenagers we work with," Sturge-Apple explained.

Their new results also confirm this finding. When the speech-based emotion classification was used on a voice different from the one that trained the system, accuracy dropped from 81 percent to about 30 percent. The researchers are now looking at ways of minimizing this effect, for example, by training the system with a voice in the same age group and of the same gender. As Heinzelman said, "There are still challenges to be resolved if we want to use this system in an environment resembling a real-life situation, but we do know that the algorithm we developed is more effective than previous attempts."
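One common way to quantify this kind of speaker dependence (again a hypothetical sketch, not the team's evaluation protocol) is leave-one-speaker-out cross-validation: each model is tested only on recordings from a speaker it never saw during training.

```python
# Illustrative only: random stand-in data, not the study's recordings.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 12))         # 12-feature vectors, one per recording
y = rng.integers(6, size=300)          # emotion labels (6 classes)
speakers = rng.integers(10, size=300)  # which of 10 speakers produced each recording

# Each fold trains on 9 speakers and tests on the held-out one, mimicking the
# "new voice" condition in which the reported accuracy fell to about 30 percent.
scores = cross_val_score(SVC(), X, y, groups=speakers, cv=LeaveOneGroupOut())
print("speaker-independent accuracy: %.2f" % scores.mean())
```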

###

Na Yang was awarded a grant by the International Speech Communication Association to attend the SLT Workshop.

For more information on the project visit http://www.ece.rochester.edu/projects/wcng/project_bridge.html.

About the University of Rochester

The University of Rochester is one of the nation's leading private universities. Located in Rochester, N.Y., the University gives students exceptional opportunities for interdisciplinary study and close collaboration with faculty through its unique cluster-based curriculum. Its College, School of Arts and Sciences, and Hajim School of Engineering and Applied Sciences are complemented by its Eastman School of Music, Simon School of Business, Warner School of Education, Laboratory for Laser Energetics, School of Medicine and Dentistry, School of Nursing, Eastman Institute for Oral Health, and the Memorial Art Gallery.


AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert! system.


Source: http://www.eurekalert.org/pub_releases/2012-12/uor-sms120312.php
