Heads of state, celebrities targeted by Deep Fake videos


LAHORE: As the embattled former Pakistani premier, Imran Khan, fears his political foes might malign his public image through Deep Fake videos in the run-up to his much-trumpeted Islamabad march, which is due anytime soon, research shows that should his apprehensions come true, he would not be the first high-profile figure subjected to the negative use of this technology.

Former American Presidents Donald Trump and Barack Obama, sitting Ukrainian President Volodymyr Zelensky, eminent Hollywood actor Tom Cruise and Facebook founder Mark Zuckerberg have all been targeted by this technology.

Fortunately, however, none of these rulers or globally renowned celebrities was shown in sexually explicit content, though Deep Fakes are today also used as a tool for revenge porn.

On February 20, 2018, a “Fox News” reporter had observed on air: “Deep Fakes are spreading like wildfire. Programmers use existing video and images of celebrities, public figures, or anyone they know to superimpose the source images into a pornographic movie.”

The news channel had then quoted an Emmy Award-nominated Hollywood studio executive as saying: “This is very dangerous for a celebrity because the technology is so good that nearly all viewers will assume that the video they are looking at is real and attribute whatever emotion they feel to that celebrity.”

It is not all just about videos. This technology can create convincing but entirely fictional photos from scratch, and audio of public figures can be Deep Faked as well. For example, the British newspaper “The Guardian” reported about two years ago that the chief of a UK subsidiary of a German energy firm had paid nearly £200,000 into a Hungarian bank account after being phoned by a fraudster who mimicked the German CEO’s voice. The company’s insurers believe the voice was a Deep Fake, but the evidence was unclear.

Similar scams have reportedly used recorded WhatsApp voice messages.

Deep Fakes take their name from the fact that they use deep learning technology to create fake videos. Deep learning is a kind of machine learning that applies neural network simulation to massive data sets. The artificial intelligence effectively learns what a particular face looks like from different angles in order to transpose that face onto a target as if it were a mask.

In one of his fake videos, ex-US President Trump was seen and heard taunting Belgium for remaining in the Paris climate agreement. With Trump’s hair looking even stranger than usual and the crude movement of the mouth, the clip looked very fake, as did the voiceover, which was subtitled in Flemish. One of Belgium’s key political parties had posted the video on Facebook back in May 2018.

In American actor Tom Cruise’s case, an entire TikTok account was dedicated to him. The actor’s voice and mannerisms were copied frame by frame in the most convincing ways. Videos showed Tom Cruise doing everything from golfing to demonstrating a magic trick, and even everyday acts such as washing his hands. These videos were created with the help of impersonators who mimicked the actor’s voice and gestures.

Not long ago, University of Washington computer scientists used Artificial Intelligence to model the shape of President Obama’s mouth and synchronize his lip movements to audio input.

Then came the Facebook founder Mark Zuckerberg’s doctored video where he was boasting of how the social media platform “owned” its users.

Instagram didn’t take the Zuckerberg video down, but said: “We would treat this content the same way we treat all misinformation on Instagram. If third-party fact checkers mark it as false, we will filter it.”

Similarly, barely a month into the Russian invasion of Ukraine, a hacked Ukrainian TV station had shown the country’s President Zelensky ordering its troops to surrender to Russia.

Research further reveals that the volume of Deep Fake videos, a type of media created with an Artificial Intelligence-powered Generative Adversarial Network (GAN), has registered staggering growth, with reputation attacks topping the list.
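For readers unfamiliar with the term, a GAN pairs two neural networks: a generator that produces synthetic samples and a discriminator that tries to tell them apart from real ones, each improving against the other. The sketch below is a minimal, hypothetical illustration of that adversarial loop in PyTorch; it learns to imitate a simple one-dimensional distribution rather than faces, and every name and parameter in it is illustrative rather than taken from any real Deep Fake tool.

```python
# Minimal, illustrative GAN sketch (hypothetical; not the tooling used by
# Deep Fake creators). The generator turns random noise into samples that
# the discriminator can no longer distinguish from "real" data. The real
# data here is just a 1-D Gaussian so the script runs anywhere.
import torch
import torch.nn as nn

NOISE_DIM, BATCH = 8, 64

generator = nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Train the discriminator: label real samples 1, generated samples 0.
    real = torch.randn(BATCH, 1) * 1.5 + 4.0           # stand-in for real images
    fake = generator(torch.randn(BATCH, NOISE_DIM)).detach()
    loss_d = bce(discriminator(real), torch.ones(BATCH, 1)) + \
             bce(discriminator(fake), torch.zeros(BATCH, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Train the generator: try to fool the discriminator into predicting 1.
    fake = generator(torch.randn(BATCH, NOISE_DIM))
    loss_g = bce(discriminator(fake), torch.ones(BATCH, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# After training, generated samples should cluster near the real mean (~4.0).
print(generator(torch.randn(1000, NOISE_DIM)).mean().item())
```

Real Deep Fake systems apply the same adversarial idea to high-resolution face imagery with convolutional networks, which is what makes the output so difficult to distinguish from genuine footage.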

According to a report by “Sensity,” an Amsterdam-based visual threat intelligence company, over 85,000 harmful Deep Fake videos, crafted by expert creators, had been detected up to December 2020. The company’s September 2021 report claimed the number of expert-crafted videos had been doubling every six months since observations started in December 2018. It stated: “Our research reveals Deep Fake videos are growing rapidly online, doubling over the last seven months to 14,678, with 96% concerning reputation attacks on public figures in the form of fake adult material. The top four websites considered in the analysis received more than 134 million views, whilst the most common victims’ countries of origin are the US, the UK, South Korea, Canada and India.”

It is imperative to note that in a research report published on May 3, 2022, computer scientists at the University of California, Riverside claimed they could detect manipulated facial expressions in Deep Fake videos with higher accuracy than current state-of-the-art methods.

The report maintained: “Developments in video editing software have made it easy to exchange the face of one person for another and alter the expressions on original faces. As unscrupulous leaders and individuals deploy manipulated videos to sway political or social opinions, the ability to identify these videos is considered by many essential to protecting free democracies. Methods exist that can detect with reasonable accuracy when faces have been swapped. But identifying faces where only the expressions have been changed is more difficult and to date, no reliable technique exists.”

In its January 13, 2020 edition, “The Guardian” had written: “It takes a few steps to make a face-swap video. First, you run thousands of face shots of the two people through an Artificial Intelligence (AI) algorithm called an encoder. The encoder finds and learns similarities between the two faces, and reduces them to their shared common features, compressing the images in the process. A second AI algorithm called a decoder is then taught to recover the faces from the compressed images. Because the faces are different, you train one decoder to recover the first person’s face, and another decoder to recover the second person’s face. To perform the face swap, you simply feed encoded images into the ‘wrong’ decoder. For example, a compressed image of person A’s face is fed into the decoder trained on person B. The decoder then reconstructs the face of person B with the expressions and orientation of face A. For a convincing video, this has to be done on every frame.”
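The shared-encoder, two-decoder arrangement described in that passage can be sketched in a few lines. The following PyTorch snippet is an illustrative, simplified assumption of such a pipeline, not any actual Deep Fake software: random tensors stand in for aligned face crops, a single encoder compresses both identities, each decoder reconstructs only its own person, and the swap comes from feeding person A’s encoding into person B’s decoder.

```python
# Structural sketch of the shared-encoder / two-decoder face-swap autoencoder
# described above (illustrative only; real tools use convolutional networks
# and thousands of aligned face crops). Flattened 64x64 "faces" are simulated
# with random tensors so the script is self-contained.
import torch
import torch.nn as nn

FACE_DIM, LATENT = 64 * 64, 128

encoder   = nn.Sequential(nn.Linear(FACE_DIM, 512), nn.ReLU(), nn.Linear(512, LATENT))
decoder_a = nn.Sequential(nn.Linear(LATENT, 512), nn.ReLU(), nn.Linear(512, FACE_DIM))
decoder_b = nn.Sequential(nn.Linear(LATENT, 512), nn.ReLU(), nn.Linear(512, FACE_DIM))

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-3,
)
mse = nn.MSELoss()

faces_a = torch.rand(256, FACE_DIM)   # stand-in for person A's face crops
faces_b = torch.rand(256, FACE_DIM)   # stand-in for person B's face crops

for epoch in range(50):
    # One shared encoder learns features common to both faces;
    # each decoder learns to rebuild only its own person.
    loss = mse(decoder_a(encoder(faces_a)), faces_a) + \
           mse(decoder_b(encoder(faces_b)), faces_b)
    opt.zero_grad(); loss.backward(); opt.step()

# The swap: encode a frame of person A, then decode it with person B's decoder,
# which yields B's identity carrying A's pose and expression. In a real video
# this would be repeated for every frame.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a[:1]))
```

The key design choice is the shared encoder: because both identities pass through the same compressed representation, the decoders learn to interpret pose and expression the same way, which is what lets one person’s face be driven by another’s movements.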

The newspaper had observed: “As the technology becomes more accessible, Deep Fakes could mean trouble for the courts, particularly in child custody battles and employment tribunals, where faked events could be entered as evidence. But they also pose a personal security risk: Deep Fakes can mimic biometric data, and potentially trick systems that rely on face, voice, vein or gait recognition. The potential for scams is clear. Phone someone out of the blue and they are unlikely to transfer money to an unknown bank account. But what if your ‘mother’ or ‘sister’ sets up a video call on WhatsApp and makes the same request? These videos can enliven galleries and museums. In Florida, the Dalí museum has a Deep Fake of the surrealist painter who introduces his art and takes selfies with visitors. For the entertainment industry, the technology can be used to improve the dubbing on foreign-language films, and, more controversially, resurrect dead actors.”
