With Generative AI being used to imitate celebrities and authors, the question arises: is your likeness a form of intellectual property (IP)? Can you copyright your face or your voice?

These questions are on the bleeding edge of IP law and may take years to resolve. But there may be a simpler way to legally protect appearances. On my reading of technology-neutral data protection law — widespread internationally and now rolling out across the USA — generating likenesses of people without their permission could be a privacy breach.

Let’s start with the generally accepted definition of personal data as any data that may reasonably be related to an identified or identifiable natural person.

Personal data (sometimes called personal information) is treated in much the same way by the California Privacy Rights Act (CPRA), Europe’s General Data Protection Regulation (GDPR), Australia’s Privacy Act, and the new draft American Privacy Rights Act (APRA).

These sorts of privacy laws place limits on how personal data is collected, used and disclosed. If personal data is collected without a good reason, or in excess of what’s reasonable for the purpose, or without the knowledge of the individual concerned, then privacy law may be breached.

Technology neutrality in privacy law means it does not matter how personal data is collected. In plain language, if personal data comes to be in a storage system, then it has been collected. Collection may be done directly via forms, questionnaires and measurements, or indirectly by way of acquisitions, analytics and algorithms.

To help stakeholders deal with the rise of analytics and Big Data, the Australian privacy regulator has developed additional guidance about indirect collection of personal data:

“The concept of ‘collects’ applies broadly, and includes gathering, acquiring or obtaining personal information from any source and by any means. This includes collection by ‘creation’ which may occur when information is created with reference to, or generated from, other information” (emphasis added; ref: Guide to Data Analytics and the Australian Privacy Principles, Office of the Australian Information Commissioner, 2019).

How should privacy law treat facial images and voice recordings?

What are images and voice recordings? Simply, they are data (‘ones and zeros’) in a file representing optical or acoustic samples, which can be converted back to analog form to be viewed or heard by people.
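
To make that concrete, here is a minimal sketch using only Python’s standard library. The one-second 440 Hz tone and the filename tone.wav are illustrative assumptions, not anything from the discussion above; the point is simply that a sound file is nothing but numeric samples, which any audio player converts back to an analog waveform.

    import math
    import struct
    import wave

    RATE = 8000                          # samples per second
    samples = [
        int(32767 * math.sin(2 * math.pi * 440 * t / RATE))   # a 440 Hz tone
        for t in range(RATE)             # one second of audio
    ]

    with wave.open("tone.wav", "wb") as f:
        f.setnchannels(1)                # mono
        f.setsampwidth(2)                # 16-bit samples
        f.setframerate(RATE)
        f.writeframes(struct.pack(f"<{len(samples)}h", *samples))
    # tone.wav now holds nothing but 'ones and zeros'; a player renders them as sound.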

Now consider a piece of digital text. That too is a file of ones and zeros, this time representing coded characters, which a printer or screen can render in human-readable form. If the words thus formed are identifiable as relating to a natural person, then the file constitutes personal data.
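
The parallel takes only a couple of lines of Python. This is a toy illustration; the byte values spelling ‘Alice’ are a hypothetical example, and identifiability attaches to what the rendered characters say, not to the raw bytes.

    raw = bytes([65, 108, 105, 99, 101])   # five bytes: 01000001 01101100 ...
    text = raw.decode("ascii")             # the same bytes rendered as characters
    print(text)                            # -> Alice (identifiable if it relates to a real person)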

So if any data file can be rendered as an image or sound that is identifiable as relating to a natural person (that is, the output looks or sounds like someone), then the file is personal data about that person.

Under technology-neutral privacy law, it doesn’t matter how the image or sound is created. If data generated by an algorithm is identifiable as relating to a natural person (for example, by resembling that person), then that data is personal data, which the Australian privacy commissioner would say has been collected by creation. The same interpretation would be available under any similar technology-neutral data protection statute.

If a Generative AI model makes a likeness of a real-life individual, Alice, then we can say the model has collected personal information about Alice.

I am not a lawyer, but this seems to me easy enough to test in a ‘digital line-up’. If a face or voice is presented to a sample of people, and an agreed percentage of them say it reminds them of Alice, then the evidence would suggest that personal data of Alice has been collected.
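
For illustration only, here is a sketch of how such a line-up might be scored in Python. The survey responses, the twenty respondents and the 60 per cent threshold are all hypothetical assumptions; nothing in privacy law prescribes them.

    # A toy scoring function for the 'digital line-up' idea.
    def likeness_collected(responses, threshold=0.6):
        """True if the agreed share of respondents recognised Alice."""
        recognised = sum(1 for r in responses if r == "Alice")
        return recognised / len(responses) >= threshold

    # Twenty hypothetical answers to: "Who does this generated face remind you of?"
    survey = ["Alice"] * 14 + ["nobody"] * 4 + ["someone else"] * 2
    print(likeness_collected(survey))      # True: 14/20 = 70%, meeting the 60% threshold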

In any jurisdiction with technology-neutral privacy law, that might be a breach of Alice's rights.