Portions from pcbb.com
We’ve all seen photos or videos where a person has been “placed” in the picture to make it look like he or she was somewhere, or did something, that they didn’t. Many times these are done in fun, to create an entertaining meme online. Sometimes they’re used for darker reasons. According to SentinelOne, a cybersecurity startup, “A Deepfake is the use of machine learning to produce a kind of fake media content – typically a video with or without audio – that has been ‘doctored’ or fabricated to make it appear that some person or persons did or said something that in fact they did not.”
The pandemic created an urgent need for many companies to move to digital identification. As businesses rely more heavily on this technology, the risk posed by deepfakes increases. Cybercriminals use a plethora of technologies, including artificial intelligence, to mimic specific people in video, pictures, or audio and convince customers to fall for a scam. According to Pacific Coast Bankers Bank, “these types of misinformation have been estimated to cost businesses $78 billion annually.” The quality and believability of these deepfakes make them increasingly hard to spot and enable financial fraud against those who fall prey. Companies should be cautious about relying solely on selfie IDs or voice authentication to verify customers. A combination of good cyber hygiene, rigorous staff training, and careful monitoring for anomalies should also be incorporated.