Imagine hiring a new employee who, when they showed up to start work, wasn’t the same person you interviewed. This may happen more often than you think.
On June 28, the FBI issued a public announcement stating that it is receiving a growing number of complaints about exactly this problem.
Fraudsters use deepfake technology and stolen personal information to impersonate other people and apply for jobs. Why? Once on the job, these individuals can gain access to data and systems, deploy ransomware, or obtain the credit card or Social Security numbers of customers or employees.
Could this happen to you? Perhaps it already has.
Deepfake risks
Deepfakes, which can occur in both video and audio, involve digitally altering a person’s image and voice to make it appear that they said or did things they never actually said or did. They are most often seen as a means of spreading misinformation, especially malicious misinformation. As Common Sense Media reports, “Filmmaker and comedian Jordan Peele teamed up with Buzzfeed and Barack Obama to create this deepfake video to serve as a warning of what manipulated video can do.”
Deepfake technology is also known as “synthetic media,” said Dave Hatter, a software engineer and cybersecurity consultant with 30 years of IT experience. It’s progressing, he says, “much faster than most people realize.”
Deepfakes pose social and political risks, given their reported use on social media channels to spread misinformation. Companies are also at risk during the hiring process, as the FBI recently pointed out. Remote jobs, for which the entire interview process takes place virtually, are particularly vulnerable. But there are steps organizations can take to protect themselves.
Minimizing the risk of fraud
John Hill is Chairman and CEO of The Energists, a recruitment firm that works with companies in the energy industry. “Our reputation depends on the candidates we send to clients, so we take the prevention of candidate fraud very seriously and have systems in place to detect and avoid these fraudsters,” he said.
Remote jobs can be especially at risk for this type of fraud, he said, particularly if the interview and training process is entirely remote. Hill said companies should “be vigilant and thorough in vetting applicants before extending their offers.”
At least one round of interviews should be conducted via video call, he said. Applicants should be informed that they must:
- Turn on their camera.
- Show their photo ID alongside their face at the start of the interview.
- Agree to the video being recorded.
- Remove any earbuds or headphones.
- Turn off any virtual backgrounds or filters in the video software.
“These steps do not eliminate the possibility of a particularly skilled fraudster using deepfake technology, but they do make it more difficult for fraudulent applicants to succeed,” Hill said.
Recording the interview is important, Hill said — chances are interviewers are busy talking and listening to the candidate and may not notice any quirks. If you plan to move forward with the candidate, view the video on a larger screen, he advised.
“Pay special attention to their eye and mouth movements, which are the hardest parts of the face to make look natural,” suggested Hill. “Also look out for any irregularities in skin tone or strange shadows, which could be a sign that the video is fake.
“If something strange catches your eye during the interview and you suspect the video may be a deepfake, ask the candidate to stand up or turn their chair away from the camera. Often the edges of an AI-generated video will become visible as the person moves around the frame, or will warp and distort in profile, even in very sophisticated fakes that are otherwise almost undetectable.”
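These frame-to-frame inconsistencies are exactly what automated checks look for. As a purely illustrative sketch, not a real detector, the following Python snippet uses OpenCV’s bundled face detector to flag frames where the average color of the face region jumps abruptly, one crude proxy for the skin-tone irregularities Hill mentions. The function name, file name and threshold are hypothetical:

```python
# Illustrative heuristic only: flags abrupt frame-to-frame shifts in the
# average color of the detected face region. Real deepfake detection uses
# trained models; this merely shows the kind of consistency check involved.
import cv2
import numpy as np

# OpenCV ships this pretrained frontal-face detector with the library.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def flag_suspect_frames(video_path, threshold=12.0):
    """Return indices of frames where the face region's mean BGR color
    jumps sharply relative to the previous frame with a detected face."""
    cap = cv2.VideoCapture(video_path)
    prev_mean, suspects, idx = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, 1.1, 5)
        if len(faces) > 0:
            x, y, w, h = faces[0]
            mean = frame[y:y + h, x:x + w].mean(axis=(0, 1))
            if prev_mean is not None and np.linalg.norm(mean - prev_mean) > threshold:
                suspects.append(idx)  # abrupt color shift in the face region
            prev_mean = mean
        idx += 1
    cap.release()
    return suspects

print(flag_suspect_frames("recorded_interview.mp4"))
```

A real detector would use a trained model rather than a color heuristic, but the workflow is the same: score the recorded interview frame by frame and manually review anything flagged.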
It can be helpful to gain some practice in distinguishing the real from the fake. Hatter recommended a website developed by the Massachusetts Institute of Technology that can be used to practice identifying deepfakes. Reviewing the 32 examples can help you be more alert to some of the “red flags” that indicate the clips are not genuine.
A constant challenge
Peter Strahan is the founder and CEO of Lantech, a professional IT support, cybersecurity and cloud services company. “Since deepfakes are generated by AI, and AI is constantly learning, creating a conventional tool to detect deepfakes is pointless,” Strahan said. “You’ll never outrun machine learning.”
Fortunately, he added, “companies like Microsoft are creating their own video authentication tools using AI to combat AI.” Microsoft used a public dataset of real people to develop its technology, Strahan said, which provides a “confidence score” indicating how likely it is that an image has been artificially manipulated. For videos, he said, that score can be given for each frame. “The added bonus is that because the technology is AI, it’s constantly learning and improving, although deepfake technology is also improving.”
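Strahan’s description maps to a simple pattern: run a detector over every frame, then summarize the scores. Microsoft’s tool is not a public API, so the sketch below uses a hypothetical `score_frame` classifier as a stand-in to show how per-frame confidence scores might be aggregated:

```python
# Sketch of per-frame confidence scoring as Strahan describes it.
# `score_frame` is a hypothetical stand-in for a trained detector and
# returns a probability that a frame was manipulated (0.0 real, 1.0 fake).
from typing import Callable, List

def score_video(frames: List, score_frame: Callable[[object], float]) -> dict:
    """Score every frame and summarize. The worst single frame is often
    more telling than the average over a long clip."""
    scores = [score_frame(f) for f in frames]
    return {
        "per_frame": scores,
        "mean": sum(scores) / len(scores),
        "max": max(scores),  # one badly faked frame can betray the clip
    }

# Example: a clip that looks clean except for two suspicious frames.
report = score_video([0, 1, 2, 3], lambda f: 0.9 if f in (1, 2) else 0.1)
print(report["mean"], report["max"])  # 0.5 0.9
```

Reporting both the mean and the maximum mirrors the confidence-score idea: a clip can average low while a handful of frames give the fake away.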
It is, Strahan said, “a digital arms race.” But he added: “I have no doubt that the good guys will win. Microsoft’s tool is currently available and I would recommend anyone who suspects they are dealing with deepfake job applications to give it a try. There’s still a fair way to go, but I’m sure it will identify all but the best deepfakes.”
Note that, especially in an increasingly remote/hybrid world, deepfake fraud is not limited to the recruitment process. The technology could, for example, be used by employees to fake their participation in a Zoom call, or to make it appear that another employee, a customer, a supplier or anyone else said or did something they did not.
Lin Grensing-Pophal is a freelance writer in Chippewa Falls, Wisconsin.