Stanford University professor Michal Kosinski made waves recently with research suggesting that AI can determine a person's sexual orientation based on pictures of their face. Now Kosinski is going further, saying that AI could also pinpoint someone's political leanings, level of intelligence and other personal traits based on photographs, as the Guardian reports:
Faces contain a significant amount of information, and using large datasets of photos, sophisticated computer programs can uncover trends and learn to distinguish key traits with a high rate of accuracy. With Kosinski’s “gaydar” AI, researchers used online dating photos to train a program that could correctly identify sexual orientation 91% of the time for men and 83% for women, just by reviewing a handful of photos.
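The general technique the Guardian describes — learning to separate two groups from numeric features derived from photographs — can be sketched in miniature. The toy below is not Kosinski's model: it trains a simple logistic-regression classifier on synthetic Gaussian "features" standing in for the embeddings a real system would extract from face images, purely to illustrate how such a classifier learns a decision boundary from labeled examples.

```python
import math
import random

# Illustrative sketch only -- NOT Kosinski's actual model. The two
# "features" per sample are synthetic stand-ins for values a real
# system would derive from face photos.

random.seed(0)

def make_samples(n, offset):
    """n samples of [bias, feature1, feature2] centred on `offset`."""
    return [[1.0, random.gauss(offset, 1.0), random.gauss(offset, 1.0)]
            for _ in range(n)]

# Two synthetic groups whose (hypothetical) feature means differ.
data = ([(x, 0) for x in make_samples(200, -0.7)] +
        [(x, 1) for x in make_samples(200, 0.7)])
random.shuffle(data)

def predict(w, x):
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))  # clamped sigmoid

# Train logistic regression by stochastic gradient descent.
w = [0.0, 0.0, 0.0]
for _ in range(100):
    for x, y in data:
        p = predict(w, x)
        w = [wi + 0.1 * (y - p) * xi for wi, xi in zip(w, x)]

accuracy = sum((predict(w, x) >= 0.5) == (y == 1) for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.0%}")
```

With real systems the inputs are far richer (face embeddings from deep networks rather than two numbers), but the core idea — fitting a boundary between labeled groups and then scoring new photos against it — is the same.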
Kosinski’s research is highly controversial and has faced a huge backlash from LGBT rights groups, which argued that the AI was flawed and that anti-LGBT governments could use this type of software to out gay people and persecute them. Kosinski and other researchers, however, have argued that powerful governments and corporations already possess these technological capabilities, and that it is vital to expose the possible dangers in order to push for privacy protections and regulatory safeguards, which have not kept pace with AI.
Kosinski is also known for his controversial work on psychometric profiling, including using Facebook data to draw inferences about personality. The data firm Cambridge Analytica has used similar tools to target voters in support of Donald Trump’s campaign, sparking debate about the use of personal voter information in campaigns.
There is much more to the Guardian's full report, which is well worth a read.
Analysis: AI advances test privacy laws, but don't outpace them
Privacy regulators increasingly recognize that the creation of personal information through computer algorithms is a form of collection, says Constellation VP and principal analyst Steve Wilson, who leads the firm's coverage of digital security and privacy issues: "If a computer program sets a flag in a database saying 'this person is right wing,' or 'this person is LGBT,' then that represents an act of collection of personal information."
This practice can be termed algorithmic collection, or synthetic PII, and it is treated under privacy laws exactly the same as collecting the information by having subjects fill out a questionnaire. In some jurisdictions, such as Australia, PII related to sexual preference, health, political beliefs and biometrics is classified as sensitive and given extra protections, Wilson says.
"Sensitive PII cannot be collected without consent, so automated algorithmic collection of a person's sexuality or politics is a huge problem, even if it's done for research purposes," he says. "This is another nice example of how technology does not outpace the law. If most people are intuitively uneasy about computers working out their sexuality or other deep, even unconscious traits, then they can get some comfort from the fact that existing laws put restraints on this type of action."
The fundamental point is that there are limits to what personal information should be collected about people, and conditions on how it is collected, Wilson adds. "While new technology can create new ways to break the law, the fact is that privacy laws themselves remain as relevant as ever."