Can AI tell your sexuality, politics and intelligence just from your face?

Great news for future totalitarian dictatorships! A researcher at Stanford University claims to have developed an AI tool that can unmask the most private parts of your identity using just a picture of your face. According to psychologist Michal Kosinski, his machine-learning models can accurately predict traits such as sexual preferences, political affiliations, and intelligence based on a person’s appearance.

Sound scary? It should, says Kosinski, who warned of “serious privacy threats” in a recent interview with Business Insider, especially when the technology is paired with the facial recognition systems already in operation across the globe. Meanwhile, groups like GLAAD and the Human Rights Campaign have previously denounced his work, arguing that it threatens the liberty of marginalised groups by exposing their secrets.

The researcher himself has a complicated relationship with the tech. His work to expose supposed hidden links between people’s facial features and their mental traits and emotions should serve as a warning for policymakers, he suggests, and inform future governance of AI technologies. He’s still doing it, though, feeding machine learning models with tens of thousands of images scraped from social media and dating sites.

We’ve gathered everything you need to know about the ominous technology below.

Kosinski has previously demonstrated how someone’s Facebook likes can be used to predict their religion, politics, and sexuality with unnerving accuracy, even when there is no obvious link between the topic of the likes and the trait in question. In that 2013 study, likes for “thunderstorms” and “curly fries” were among the best predictors of high intelligence, while liking “being confused after waking up from naps” was a strong signal of male heterosexuality.
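
For the technically curious, the shape of that 2013 approach can be sketched in a few lines of Python. Everything below – the matrix sizes, the labels, the parameters – is illustrative stand-in data, not Kosinski’s actual dataset or code:

```python
# Illustrative sketch only: random stand-in data, not Kosinski's dataset
# or code. The 2013 paper's recipe was roughly this shape: compress a
# sparse user-by-page matrix of likes, then fit a simple linear model.
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_users, n_pages = 1_000, 5_000                    # hypothetical sizes
likes = csr_matrix(rng.random((n_users, n_pages)) < 0.005, dtype=float)
trait = rng.integers(0, 2, size=n_users)           # placeholder binary labels

# Reduce thousands of "like" columns to a handful of dense components.
X = TruncatedSVD(n_components=50, random_state=0).fit_transform(likes)
X_tr, X_te, y_tr, y_te = train_test_split(X, trait, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# On real labels this is where the "unnerving accuracy" shows up; on
# random labels like these, AUC hovers around chance (0.5).
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```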

His newer research goes a step further, though, determining people’s preferences from nothing more than an image of their face. In one study, he reported that an AI model could distinguish between gay and straight men with 91 per cent accuracy. Humans, by contrast, only managed to tell the difference 61 per cent of the time.

More recently, Kosinski published a study on the links between facial features and political allegiances. Shown hundreds of thousands of images, a facial recognition model was able to correctly decide whether a face belonged to a liberal or conservative voter 72 per cent of the time. Human accuracy was significantly lower, at 55 per cent. Even a 100-question personality questionnaire could only achieve a measly 66 per cent.

The research essentially involves scraping hundreds of thousands – or sometimes millions – of facial images from the internet and feeding them into machine learning models. Once trained on enough data points, these models can make remarkably accurate predictions about images they’ve never seen before.
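
In code, that general recipe might look something like the sketch below. It’s a hypothetical stand-in rather than the studies’ real pipeline (which reportedly fed features from a pretrained face-recognition network into a simple classifier); here an ordinary ImageNet-trained ResNet gets a new two-class head, and random tensors play the part of scraped face photos:

```python
# A hypothetical stand-in, not the studies' actual pipeline: an
# off-the-shelf pretrained ResNet is given a fresh two-class head,
# which is then trained on labelled images. The "face photos" here
# are random tensors.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():                    # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)   # new 2-class head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on a batch of (N, 3, 224, 224) face crops."""
    logits = model(images)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors standing in for scraped photos.
print(train_step(torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))))
```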

How the models translate all of these seemingly arbitrary data points into meaningful guesses isn’t entirely clear, even to the authors of the studies. In practice, though, that hardly matters: the systems seem to work, and that’s what makes them important… and potentially dangerous.

Last year, scientists unlocked the ability to read people’s minds using AI, which sounded pretty scary. Luckily, it didn’t mean that just anyone could infiltrate your thoughts as you walked down the street – reading a test subject’s mind required getting them into an fMRI scanner for hours on end, and even then the output wasn’t perfect.

AI being able to read people’s inner thoughts and feelings via their facial features and expressions is a whole different ball game. If true, it’s easy to imagine an oppressive regime using “ubiquitous” surveillance technology to take a peek into the minds of its subjects and locate any possible dissenters. And if an intolerant government gets its hands on an “AI-powered gaydar”, the risks for LGBTQ+ people go without saying.

Surveillance technology is already everywhere, as Kosinski noted in his 2021 study: “Given the widespread use of facial recognition,” he wrote, “our findings have critical implications for the protection of privacy and civil liberties.”

Plenty of doubts have been cast on Kosinski’s research, quite apart from the people who believe it shouldn’t exist in the first place. Fellow academics, for example, have suggested that the algorithms might be basing their predictions less on the structure of a person’s face than on their self-presentation – things like haircuts, make-up, and facial expressions. Kosinski tried to address this criticism in a paper published in 2024, for which he captured 591 “carefully standardised facial images” under lab conditions. In that study, the accuracy of the machine’s guesses was much closer to that of humans.

There was still a significant correlation between people’s face shape and their politics, however (again, it’s difficult to say exactly how this correlation occurs). To Kosinski, this means that “widespread biometric surveillance technologies are more threatening than previously thought”, because it suggests that even those who are trying to hide their identity or blend in will get clocked by AI.

The one thing that everyone can agree on is that more research needs to be done to determine the actual risks of AI’s powers of facial analysis – and whether we’re heading for a physiognomy-empowered dystopia. For that reason, we probably shouldn’t shy away from research like Kosinski’s, even if we don’t like the results.
