Digital cloning


Digital cloning is an emerging technology that uses deep-learning algorithms to manipulate existing audio, photos, and videos, producing hyper-realistic results. One impact of such technology is that hyper-realistic videos and photos make it difficult for the human eye to distinguish what is real from what is fake. Furthermore, with various companies making such technologies available to the public, they can bring various benefits as well as potential legal and ethical concerns.
Digital cloning can be categorized into audio-visual, memory, personality, and consumer-behavior cloning. In audio-visual cloning, a cloned digital version of a digital or non-digital original can be used, for example, to create a fake image, an avatar, or a fake video or audio of a person that cannot easily be distinguished from the real person it purports to represent. A memory and personality clone, such as a mindclone, is essentially a digital copy of a person's mind. A consumer behavior clone is a profile or cluster of customers based on demographics.
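The grouping behind a consumer behavior clone can be illustrated with a simple clustering step. The sketch below is purely hypothetical (the customer data, feature choice, and k-means routine are illustrative stand-ins; real profiling systems use far richer behavioral data):

```python
# Toy sketch: grouping customers into demographic clusters, as a
# "consumer behavior clone" profile might. Pure-Python k-means over
# (age, annual_spend) pairs; all data here is made up for illustration.

def kmeans(points, k, iters=20):
    """Minimal k-means over 2-D points."""
    centroids = points[:k]  # naive initialization: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to the nearest centroid (squared distance)
            i = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        # recompute each centroid as the mean of its cluster
        centroids = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical (age, annual_spend) records: two demographic groups.
customers = [(22, 300), (25, 350), (24, 320), (58, 900), (61, 950), (60, 880)]
centroids, clusters = kmeans(customers, k=2)
```

Each resulting cluster acts as a coarse "profile" of similar customers; a digital thought clone, by contrast, would track one specific person in real time.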
Truby and Brown coined the term "digital thought clone" to refer to the evolution of digital cloning into a more advanced personalized digital clone that consists of "a replica of all known data and behavior on a specific living person, recording in real-time their choices, preferences, behavioral trends, and decision making processes."
Digital cloning first became popular in the entertainment industry. The idea of digital clones originated with movie companies creating virtual versions of actors who have died. When an actor dies during a movie production, a digital clone of the actor can be synthesized from past footage, photos, and voice recordings to mimic the real person so that production can continue.
Modern artificial intelligence has allowed for the creation of deepfakes: videos manipulated to the point where the person depicted appears to say or do things to which he or she may not have consented. In April 2018, BuzzFeed released a deepfake video made with Jordan Peele, manipulated to depict former President Barack Obama making statements he had not previously made in public, in order to warn the public about the potential dangers of deepfakes.
In addition to deepfakes, companies such as Intellitar now allow one to easily create a digital clone of oneself by feeding the system a series of images and voice recordings. This essentially creates digital immortality, allowing loved ones to interact with representations of those who have died. Digital cloning not only allows one to digitally memorialize loved ones; it can also be used to create representations of historical figures for use in educational settings.
The development of these technologies raises numerous concerns, including identity theft, data breaches, and other ethical issues. One problem with digital cloning is that there is little to no legislation to protect potential victims against these possible harms.

Technology

Intelligent Avatar Platforms (IAP)

An intelligent avatar platform (IAP) is an online platform, supported by artificial intelligence, that allows one to create a clone of oneself. The individual trains the clone to act and speak like them by feeding the algorithm numerous voice recordings and videos of themselves. Essentially, these platforms are marketed as a place where one 'lives eternally', as the avatar is able to interact with other avatars on the same platform. IAPs are becoming a way to attain digital immortality, along with maintaining a family tree and legacy for following generations to see.
Some examples of IAPs include Intellitar and Eterni.me. Although most of these companies are still in their early stages, they all pursue the same goal: allowing the user to create an exact duplicate of themselves that stores their memories in cyberspace. Some offer a free version, which only allows the user to choose an avatar from a given set of images and audio. In the premium version, these companies ask the user to upload their own photos, videos, and audio recordings to form a realistic version of themselves. Additionally, to ensure that the clone is as close to the original person as possible, companies encourage users to interact with their own clone by chatting with it and answering its questions. This allows the algorithm to learn the cognition of the original person and apply it to the clone. Intellitar closed down in 2012 because of intellectual property battles over the technology it used.
Potential concerns with IAPs include data breaches and lack of consent from the deceased. IAPs must have strong safeguards and accountability against data breaches and hacking in order to protect the personal information of the dead, which can include voice recordings, photos, and messages. In addition to the risk of personal privacy being compromised, there is also the risk of violating the privacy of the deceased. Although one can consent to the creation of a digital clone before physical death, one cannot consent to the actions that digital clone may later take.

Deepfakes

As described earlier, a deepfake is a form of video manipulation in which the people depicted can be changed by feeding the system various images of a chosen person. Furthermore, the voice and words of the person in the video can be changed by submitting a series of voice recordings of the new person lasting only about one to two minutes. In 2018, a new app called FakeApp was released, allowing the public to easily access this technology to create videos; it was also used to create the BuzzFeed video of former President Barack Obama. With deepfakes, industries can cut the cost of hiring actors or models for films and advertisements by creating video efficiently at low cost from just a series of photos and audio recordings, with the consent of the individual.
A potential concern with deepfakes is that access is given to virtually anyone who downloads one of the various apps offering the service. With anyone able to access this tool, some may maliciously use it to create revenge porn or manipulative videos of public officials making statements they never said in real life. This not only invades the privacy of the individual in the video but also raises various ethical concerns.

Voice cloning

Voice cloning is an audio deepfake method that uses artificial intelligence to generate a clone of a person's voice. It involves a deep-learning algorithm that takes in voice recordings of an individual and synthesizes a voice that faithfully replicates the original with great accuracy of tone and likeness.
Cloning a voice requires high-performance computers. Usually, the computations are performed on graphics processing units (GPUs), and very often rely on cloud computing because of the enormous amount of calculation required.
Audio data for training has to be fed into an artificial intelligence model. This data often consists of original recordings that provide an example of the voice of the person concerned. The artificial intelligence can use this data to create an authentic-sounding voice, which can reproduce whatever is typed (text-to-speech) or spoken (speech-to-speech).
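The enrollment-then-synthesis flow described above can be caricatured in a few lines. In the toy sketch below, a single voice characteristic (fundamental pitch, crudely estimated from zero crossings) stands in for the rich neural voice model a real system would learn, and "speech" is rendered as sine bursts; every detail is a simplified illustration, not an actual cloning method:

```python
import math

# Toy illustration of text-to-speech voice cloning: enrollment audio
# -> estimated voice characteristic -> synthesis of arbitrary typed
# text in that "voice". Real systems train deep neural networks on
# spectrogram features; this is only a data-flow sketch.

RATE = 8000  # samples per second

def estimate_pitch(samples, rate=RATE):
    """Crude pitch estimate: zero crossings per second, halved."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    duration = len(samples) / rate
    return crossings / (2 * duration)

def synthesize(text, pitch, rate=RATE, char_dur=0.05):
    """Render each character as a short sine burst near the cloned pitch."""
    out = []
    n = int(rate * char_dur)
    for ch in text:
        f = pitch * (1 + (ord(ch) % 8) / 16)  # vary tone per character
        out.extend(math.sin(2 * math.pi * f * t / rate) for t in range(n))
    return out

# "Enrollment": one second of a 220 Hz sine stands in for a recording.
enrollment = [math.sin(2 * math.pi * 220 * t / RATE) for t in range(RATE)]
pitch = estimate_pitch(enrollment)
speech = synthesize("Hello", pitch)
```

The point of the sketch is the separation of stages: characteristics extracted once from enrollment audio can then voice any typed text, which is exactly what makes the technique both useful and easy to abuse.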
This technology worries many because of its impact on various issues, from political discourse to the rule of law. Some of the early warning signs have already appeared in the form of phone scams and fake videos on social media of people doing things they never did.
Protections against these threats can be implemented in two main ways. The first is to create a way to analyze or detect the authenticity of a video. This approach will inevitably be an uphill battle, as ever-evolving generators defeat detectors. The second is to embed creation and modification information in software or hardware. This would work only if the data were not editable; the idea would be to create an inaudible watermark that acts as a source of truth. In other words, one could know whether a video is authentic by seeing where it was shot, produced, edited, and so on.
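The watermarking idea can be illustrated with the simplest possible scheme: hiding provenance bits in the least-significant bit of audio samples. This sketch is only a round-trip demonstration under assumed toy data (the sample values and the "studio:take-3" tag are hypothetical); real provenance systems use cryptographically signed, tamper-evident metadata rather than fragile LSB embedding:

```python
# Toy sketch of an "inaudible watermark": provenance bytes hidden in
# the least-significant bit of 16-bit signed audio samples. Changing
# only the LSB alters each sample by at most 1, far below audibility.

def embed(samples, message):
    """Hide message bytes in the LSBs of the samples."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(samples):
        raise ValueError("audio too short to hold watermark")
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite LSB with a message bit
    return out

def extract(samples, n_bytes):
    """Recover n_bytes of watermark from the samples' LSBs."""
    data = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (samples[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

audio = list(range(-100, 100))           # stand-in for PCM sample data
marked = embed(audio, b"studio:take-3")  # hypothetical provenance tag
recovered = extract(marked, 13)
```

A verifier reading the watermark back out would learn where the audio was produced; the weakness, as noted above, is that such data must be made non-editable to count as a source of truth.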
15.ai—a non-commercial freeware web application that began as a proof of concept of the democratization of voice acting and dubbing using technology—gives the public access to such technology. Its gratis and non-commercial nature, ease of use, and substantial improvements to current text-to-speech implementations have been lauded by users; however, some critics and voice actors have questioned the legality and ethicality of leaving such technology publicly available and readily accessible.
Although voice cloning applications are still in the developmental stage, they are advancing rapidly as big technology corporations such as Google and Amazon invest vast amounts of money in their development.
Some of the positive uses of voice cloning include the ability to synthesize millions of audiobooks without human labor. Voice cloning has also been used to translate podcast content into different languages using the podcaster's own voice. In addition, those who have lost their voice can regain a sense of individuality by creating a voice clone from recordings of themselves speaking made before the loss.
On the other hand, voice cloning is also susceptible to misuse. For example, the voices of celebrities and public officials can be cloned and made to say something designed to provoke conflict, even though the actual person has no association with what their voice said.
In recognition of the threat that voice cloning poses to privacy, civility, and democratic processes, institutions including the Federal Trade Commission, the U.S. Department of Justice, the Defense Advanced Research Projects Agency (DARPA), and the Italian Ministry of Education, University and Research have weighed in on various audio deepfake use cases and the methods that might be used to combat them.