Human digital thought clones: the Holy Grail of artificial intelligence for big data

This article explores the legal and ethical implications of big data’s pursuit of human ‘digital thought clones’. It identifies various types of digital clones that have been developed and demonstrates how the pursuit of more accurate personalized consumer data for micro-targeting leads to the evolution of digital thought clones. The article explains the business case for digital thought clones and how this is the commercial Holy Grail for profit-seeking big data and advertisers, who have commoditized predictions of digital behavior data. Given big data’s industrial-scale data mining and relentless commercialization of all types of human data, this article identifies some types of protections but argues that more jurisdictions urgently need to enact legislation similar to the General Data Protection Regulation in Europe to protect people against unscrupulous and harmful uses of their data and the unauthorized development and use of digital thought clones.

1. Introduction

How would you feel if a company developed a ‘digital thought clone’ of you, representing everything known about you, in order to predict and manipulate your choices in real time by using your own data against you for its profit? This would be your digital twin, made by constantly collecting your intimate personal data in real time even when you are asleep.

Given their commercial value, it is possible that every human has, or will have, a digital thought clone replicating all their known digital data at an industrial scale from data shared through free apps, social media accounts, gadgets, mobile phones, GPS tracking, monitored online and offline behaviour and activities, and public records. A digital thought clone, evolved from previous types of digital clones, goes beyond predictive analysis. It is a personalised digital twin comprising a replica of all known data and behaviour on a specific living person, recording their choices, preferences, behavioural trends, and decision-making processes. Artificial intelligence (AI) algorithms test strategies in real time, and predict, influence, or manipulate a person’s consumer or online decisions. This is the ultimate advertising tool, as it is the closest representation a company would have of a living person’s thoughts. It also enables companies to try to sell products and services at the most effective time and at a premium price, to influence a user’s voting intentions, or to use intimate details of their personal digital life to decide whether their bank should grant them a loan. Digital thought clones tracking each user’s every move can record who a person is meeting, who their friends are, what they talk about, what they are spending, and what they are reading.

Our willingness to trade personal data for the free and convenient use of technology has enabled data miners to commercialise these data for use in predictive technologies, with increasing sophistication and relentless exploitation. Zuboff termed this exchange of free services for data, which enables the detailed monitoring of behaviour, ‘surveillance capitalism’. 1 It is unimaginable that people would knowingly agree to any company collecting such levels of data on them, and would then allow those data to be used in the intrusive and personal ways already being deployed by big data and AI companies.

A digital thought clone is not only extremely dangerous for a person’s privacy but is also potentially detrimental to their interests and ability to choose. US National Security Advisor Robert O’Brien warned that ‘If you get all the information on a person and then you get their genome, and you marry those two things up … that is an incredible amount of power,’ that could be used to ‘micro-target’ people and even to ‘exploit their hopes and their fears’. 2

The use of deepfakes 3 has made headlines in both entertainment and politics, and their potential dangers for creating misinformation and confusion have been noted. 4 While these more entertaining visual and audio digital clones are well known, digital clones come in several types, all of which pose ethical, philosophical, and legal questions that need to be addressed.

This article discusses the legal and ethical issues raised by digital cloning and digital thought clones, and the need to re-conceptualise current theoretical notions on data privacy. Section 2 of the article provides a necessary definitional context for the different types of digital clones. This section categorises digital clones into audio-visual, memory, personality, and consumer behaviour cloning. It explains how such advancements have led to a risky path of normalising the purposeful creation of a digital thought clone for each natural person, or an individualised digital twin.

Section 3 argues that misaligned legal protections may be driven by theoretical concepts of data privacy that do not meet the realities of big data and AI. The section examines prevailing theoretical concepts of data privacy, such as the public/private dichotomy, and Nissenbaum’s contextual integrity theory of data privacy. 5 It argues that the above theoretical views of privacy fall short when applied to AI and digital thought clones that require the pervasive and continuous use of a person’s data in both public and private spheres of overlapping contexts. The article argues that the legal protection of personal data can only work under a human-centred theory of data privacy.

Section 4 examines the various legal issues that digital cloning poses, including data privacy, informed consent, anti-discrimination laws that may inadvertently encourage behaviour cloning, copyright, and the right of publicity. It explains how the law seeks to protect people against the misuse of their data and identifies cases where the law absurdly encourages businesses to develop more personalised human digital twins.

Section 5 discusses the ethical questions raised by the creation of digital clones. It begins with a general discussion of the ethical and moral objections to digital cloning in comparison with biological cloning.

Section 6 examines specific potential ethical issues raised by digital cloning, including consent and privacy, digital immortality, and the potential status of digital clones as people. It explains that the creation of a digital thought clone poses the immediate and most challenging questions to our existing notions of law and ethics. The article identifies the protections available in some jurisdictions against some of the risks of digital thought clones, such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act of 2018 (CCPA). It then discusses the various legal and ethical issues raised by digital cloning and highlights the urgent need for stringent domestic regulations to protect citizens against the unauthorised development of digital thought clones and the associated risks of misuse.
