Video manipulation
Video manipulation is a type of media manipulation that targets digital video using video processing and video editing techniques. The applications of these methods range from educational videos to videos aimed at deception and propaganda, a straightforward extension of the long-standing possibilities of photo manipulation. This form of computer-generated misinformation has contributed to fake news, and there have been instances when this technology was used during political campaigns. Other uses are less sinister; entertainment purposes and harmless pranks provide users with movie-quality artistic possibilities.
History
The concept of manipulating video can be traced back to the 1950s, when the 2-inch Quadruplex tape used in videotape recorders was manually cut and spliced. The two ends of tape to be joined were painted with a ferrofluid-like mixture of iron filings and carbon tetrachloride (a toxic and carcinogenic compound) to make the recorded tracks visible under a microscope, so that they could be aligned in a splicer designed for this task.
As the video cassette recorder developed through the 1960s, 1970s, 1980s, and 1990s, it became possible to record over an existing magnetic tape. This led to the practice of overlaying specific parts of film to give the illusion of one consistently recorded video, which is the first identifiable instance of video manipulation.
In 1985, Quantel released The Harry, the first all-digital video editing and effects compositing system. It recorded and applied effects to a maximum of 80 seconds of 8-bit uncompressed digital video. A few years later, in 1991, Adobe released its first version of Premiere for the Mac, a program that has since become an industry standard for editing and is now commonly used for video manipulation. In 1999, Apple released Final Cut Pro, which competed with Adobe Premiere and was used in the production of major films such as The Rules of Attraction and No Country for Old Men.
Face detection became a major research subject in the early 2000s and remains an active area of study. In 2017, an amateur coder known as "DeepFakes" began altering pornographic videos by digitally substituting the faces of celebrities for those of the original performers. The word deepfake has since become a generic term for the use of algorithms and facial-mapping technology to manipulate videos.
On the consumer side, popular video manipulation programs FaceApp and Faceswap, developed from similar technology, have become increasingly sophisticated.
The proof-of-principle software Face2Face, which transfers facial expressions from a source actor onto a target video in real time, was developed at the University of Erlangen-Nuremberg, the Max Planck Institute for Informatics, and Stanford University. Such real-time video manipulation represents a step beyond earlier examples of deepfakes.
Types of video manipulation
Computer applications are becoming increasingly capable of generating fake audio and video content that looks real. A video published by researchers demonstrates how video and audio manipulation works using facial recognition. Although video manipulation might be thought of as simply piecing together different video clips, the techniques extend much further. For example, an actor can sit in front of a camera and move his face; the computer then generates the same facial movements in real time on an existing video of Barack Obama. When the actor shakes his head, Obama also shakes his head, and the same happens when the actor speaks. This not only creates fake content but also masks it as more authentic than other types of fake news, since video and audio were once among the most trusted forms of media for many people.
One of the most dangerous applications of video manipulation is in politics, where manipulated campaign videos can pose a threat to other nations. Dartmouth College computer science professor Hany Farid has commented on video manipulation and its dangers, noting that bad actors could generate videos of Trump claiming to launch nuclear weapons. Such fabricated videos could spread on social media before the mistake can be corrected, possibly resulting in war. Despite the spread of manipulated video and audio, research teams are working to combat the issue. Prof. Christian Theobalt, a member of a team working on the technology at the Max Planck Institute for Informatics in Germany, states that researchers have created forensic methods to detect fakes.
The Washington Post's fact-checking team has identified six forms of video manipulation, grouped into three categories:
- Missing context
  - Misrepresentation: Placing original video footage into an incorrect context to misinform the audience
  - Isolation: Publishing a short segment from a video that presents a different narrative than the full video
- Deceptive editing
  - Omission: Removing major segments from a video to present a different story
  - Splicing: Combining segments from different videos to form a narrative not supported by any of the individual videos
- Malicious transformation
  - Doctoring: Directly modifying video frames
  - Fabrication: Using technology to construct bogus videos, such as deepfakes
Video manipulation and fake news
Digital fakes
A digital fake refers to a digital video, photo, or audio file that has been altered or manipulated by digital application software. Deepfake videos fall within the category of digital fake media, but a video may be digitally altered without being considered a deepfake. Alterations can be made for entertainment or for more nefarious purposes, such as spreading disinformation. Such altered media can be used for malicious attacks, political gain, financial crime, or fraud.
Regulation
Due to the social and political impact of deepfakes, many nations have implemented regulations to combat these effects of video manipulation. Technical regulations range from real-name verification requirements and the labeling of information to censorship and outright bans on synthetic images, audio, and video.
China
China issued the "Provision on the Administration of Deep Synthesis Internet Information Service" on January 28, 2022. China's State Internet Information Office enforces this regulation as a way to control manipulated content on the Internet and to increase technological stability under the Chinese Communist Party. The regulation comprises 25 articles, each of which sets out the terms and conditions of the regulation.
"Article 5: Encourage relevant industry organizations to strengthen industry self-discipline, establish and improve industry standards, industry guidelines, and self-regulatory management systems, supervise and guide deep synthesis service providers to formulate and improve service specifications, strengthen information content security management, provide services in accordance with the law, and accept social supervision."
As Emmie Hine and Luciano Floridi note, Article 5 establishes that while the government oversees information posted publicly, industry corporations are also responsible for keeping track of content published on their social platforms. This policy pushes companies in China to pay closer attention to what appears online, because companies that fail to do so can be fined.