There are a lot of reasons to smile. Some people do it because they're happy. Others smile to express very different emotions, such as frustration. Most of us are innately equipped to detect these often subtle differences, but exactly how does the human brain interpret a smile? A team of MIT researchers seems to have solved that mystery with the creation of a "smile algorithm" capable of telling genuine smiles from forced ones.
The technology was developed after a series of experiments in MIT's famed Media Lab. Subjects were asked to fill out a data form and click submit when they were done. But the submit button was designed to cause frustration by intentionally erasing all the data entered, bringing the subjects back to the beginning of the newly blank form. Researchers recorded the forced smiles that subjects naturally displayed, and measured the differences between those smiles and genuine ones. The MIT team found, for example, that frustrated smiles tend to develop quickly, while genuine expressions of joy build more gradually.
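To illustrate the idea, here is a minimal sketch of how onset speed alone could separate the two kinds of smiles. This is purely hypothetical: the function name, the 90%-of-peak rise measure, the threshold, and the sample data are all illustrative assumptions, not the actual MIT algorithm.

```python
def classify_smile(intensities, fps=30, rise_threshold=0.5):
    """Label a smile 'frustrated' or 'genuine' from a per-frame
    smile-intensity time series (values 0.0-1.0), using only onset speed.

    intensities: smile intensity per video frame (hypothetical tracker output).
    fps: frames per second of the recording.
    rise_threshold: onset times (seconds) below this count as 'fast'.
    """
    peak = max(intensities)
    target = 0.9 * peak  # time to reach 90% of peak = "rise time"
    rise_frames = next(i for i, v in enumerate(intensities) if v >= target)
    rise_time = rise_frames / fps
    # Fast onset -> frustrated; gradual build -> genuine (per the MIT finding).
    return "frustrated" if rise_time < rise_threshold else "genuine"

# Fast-onset smile: jumps to near peak within a couple of frames.
fast = [0.0, 0.8, 0.9, 0.9, 0.9, 0.9]
# Gradual smile: builds over roughly two thirds of a second.
slow = [0.05 * i for i in range(20)] + [0.95] * 10

print(classify_smile(fast))  # frustrated
print(classify_smile(slow))  # genuine
```

A real system would of course use many more facial-dynamics features than a single rise-time threshold; this only captures the one contrast the researchers described.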
According to the researchers, the MIT smile algorithm has a 90% success rate. But what's the point of identifying fake smiles? For people with Asperger's Syndrome, who can have difficulty reading facial expressions and the emotions behind them, such an algorithm could help them better communicate with the world at large. Certainly, this type of algorithm would be incredibly beneficial as an add-on app for Google Glass.