AI Can Tell Your Sexuality From A Photograph
By Francis West on 13th September 2017
An AI algorithm developed by researchers at Stanford University has been able to identify men and women as gay or straight just by studying photographs, with up to 91% accuracy.
Why?
The study by Stanford’s Michal Kosinski and Yilun Wang used AI to test the theory that sexual orientation stems from exposure to certain hormones before birth (i.e. that people are born gay), and that female sexual orientation is more fluid. The study also tested the hypothesis that faces contain more information about sexual orientation than the human brain can perceive and interpret, and that an AI program could discover that information using an algorithm. The study was, therefore, intended to advance understanding of both the origins of sexual orientation and the limits of human perception.
What Happened?
The study used 35,326 facial images of men and women, together with their stated sexual orientation, all of which had been publicly posted on a US dating website. The researchers used a sophisticated mathematical system known as a ‘deep neural network’ to extract various facial features from the images. These features were then fed into a logistic regression model to classify the sexual orientation of the people in the photos, and the predictions were then compared with the stated orientations to measure accuracy.
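To make that pipeline concrete, here is a minimal sketch in Python of the classification stage: deep-network face features feeding a logistic regression classifier. This is not the researchers’ code; the embed_face function, the 512-dimensional feature size, and the data are all hypothetical stand-ins.

```python
# Hypothetical sketch: deep-network face features -> logistic regression.
# embed_face() is a placeholder for a pretrained face-embedding network;
# the "images" and labels below are random stand-in data, not the study's.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def embed_face(image):
    # Placeholder: a real pipeline would run the photo through a deep
    # neural network and return its feature vector (here, 512 floats).
    return rng.normal(size=512)

images = [None] * 1000                  # would be actual photographs
labels = rng.integers(0, 2, size=1000)  # stated orientation labels (0/1)

# Extract a feature vector per image, then fit the linear classifier.
features = np.stack([embed_face(img) for img in images])
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))  # ~0.5 on random data
```

The design point is that the deep network only supplies features; the final decision is a simple, interpretable linear model over those features.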
How Accurate?
The results showed that, from a single image, the algorithm could tell gay men from straight men with 81% accuracy, and lesbians from straight women with 74% accuracy. When five images of the same person were used, accuracy increased dramatically, to up to 91%.
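One plausible reading of the multi-image gain is that any single photo gives a noisy signal, and averaging the classifier’s probability scores over several photos of the same person reduces that noise. A minimal sketch of that aggregation, continuing the hypothetical code above (clf, images and embed_face are the stand-ins from the previous block):

```python
# Hypothetical aggregation: average the classifier's probability across
# five photos of one person, then threshold the mean score.
person_images = images[:5]  # five photos of the same (stand-in) person
probs = clf.predict_proba(
    np.stack([embed_face(img) for img in person_images]))[:, 1]
prediction = int(probs.mean() > 0.5)
print("aggregated prediction:", prediction)
```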
According to the researchers, this not only supports the idea that gay men and women tend to have gender-atypical features, expressions and grooming styles, but also suggests that a computer program is better at perceiving these cues than humans are. For example, human judges shown the same images could only tell gay from straight men with 61% accuracy, and lesbians from straight women with only 54% accuracy.
Criticism and Concerns
Although some have criticised the study as being more about stereotyping, the results have prompted many to express concern that tools like this algorithm could pose a threat to the privacy and safety of gay men and women, and could potentially be abused by anti-gay groups to target hate crimes, or by other organisations as a means of discrimination. Some commentators worry that the billions of facial images stored on social media sites and in government databases could be used without consent, and that some governments could even use the technology to ‘out’ and target populations and to prosecute and punish LGBT people.
It is also easy to imagine this kind of profiling being used in the future to identify other traits and behaviours, such as whether someone is telling the truth, being unfaithful, or hiding something.
What Does This Mean For Your Business?
From a business perspective, the identification and profiling of people (e.g. employees and customers) should always be done in a way that is ethical and protects the privacy and security of individuals.
Businesses already use psychometric profiling, and this kind of AI algorithmic tool (not just one based on photos) could, therefore, be used in a positive way to help organisations select staff.
Some business people may also be using Facebook and other social media data to reach conclusions about personality and compatibility with a company, and judgements made this way raise their own ethical questions.
The study’s authors have made the point that this technology already exists, and that it is important to expose its capabilities so that governments and companies can proactively consider privacy risks and the need for safeguards and regulations. Some would say, therefore, that the study and its results have made a clear statement about how powerful this kind of technology can be, and have exposed the need for protections now, before the technology is put to further use, e.g. for the kind of negative profiling identified above as a real worry.
It is also interesting to note that the study’s authors have suggested that AI could now conceivably be used to explore links between facial features and other phenomena, such as political views, psychological conditions or personality. Findings along these lines could clearly have value to companies, organisations, and governments.