With the explosion of AI in the enterprise, deepfake tools are more affordable and accessible than ever, making it easy for criminals with little or no technical background to pull off sophisticated scams.
Deepfakes have become a global problem, with the number of reported cases increasing rapidly. In the first quarter of this year, there was a 245 percent year-on-year increase in deepfakes detected by identity verification provider Sumsub.
Hong Kong police have registered three cases related to the technology since last year and discovered 21 deepfake clips online imitating government officials or celebrities, Hong Kong security chief Chris Tang said in June in response to a lawmaker’s query.
Deepfakes go beyond generating another person’s likeness in a video. They can also be used to create convincing but fake documents and biometric data.
Hong Kong police cracked down on a fraud syndicate that submitted more than 20 loan applications using deepfake technology to bypass online identity checks. One of the applications, for a HK$70,000 loan, was approved.
Not only do these tools make scams harder to detect, but the technology can also be used as a defense. For example, the Deepfake Inspector from the American-Japanese cybersecurity company Trend Micro analyzes images for noise or color deviations to identify deepfakes in live video calls.
Theft of digital identities
Everyone is familiar with classic examples of identity theft. These usually involve government ID numbers, credit card numbers or biometric information, which are often used for fraudulent purposes. Digital identity theft is similar in that it allows fraudsters to impersonate other people on computer networks. However, in some cases it can be even more insidious than traditional identity theft.
Digital identities are software and algorithms that serve as proof of a person’s or machine’s online identity. Think of persistent cookies that keep a user logged into platforms like Google and Facebook, or an application programming interface (API) key. By stealing this information, a malicious actor can appear as someone with authorized access.
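Why a stolen cookie is so powerful can be seen in a minimal sketch (all names and values here are hypothetical): a server that maps session-cookie values to accounts treats whoever presents a valid cookie as that user, with no further check.

```python
# Hypothetical server-side session store: the cookie value alone
# is the proof of identity.
SESSIONS = {"a1b2c3": "alice@example.com"}

def whoami(cookie_value: str) -> str:
    # Whoever presents the cookie -- its owner or a thief who
    # exfiltrated it -- is treated as the account holder.
    return SESSIONS.get(cookie_value, "anonymous")

legit = whoami("a1b2c3")   # the real user's browser
stolen = whoami("a1b2c3")  # the same value, replayed from another machine
print(legit == stolen)     # True: the server cannot tell them apart
```

This is why single-factor session tokens are a prime target: stealing the token is equivalent to stealing the login.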
The growth of cloud services has increased both the incentives and risks of these types of cyber threats. When a system uses a single form of digital identity to verify that users are who they say they are, it is even more vulnerable.
“There is a possibility that cookies may be stolen or made available to third parties and they may use the cookies to access other applications or internal resources,” said Sandy Lau, district manager for Hong Kong and Macau at CyberArk, an Israeli information security provider.
Hybrid work arrangements, such as employees using personal devices for work, could increase the risk of cyber theft, Lau added.
To meet customer demands and address growing concerns about machine identities, CyberArk launched an identity-centric secure browser in March to help employees separate work and personal applications and domains.
Large language models
Users now have a seemingly endless list of LLM-powered options for everything from cleaning up their prose to drafting documents, while criminals use the same tools to scam victims out of their life savings. Malicious actors are increasingly turning to LLMs to assist with tasks like crafting text messages and tracking down system vulnerabilities.
Phishing attacks – which involve sending malicious links via email, SMS or voice message – remain the most common method of gaining access to a target’s system. LLMs have put a new face on an old scam, allowing more convincing messages to be sent at scale.
Cryptocurrency attacks
A common attack in the crypto sector targets users’ wallets, which in many cases are made accessible through browser extensions. Scammers can create fake websites or phishing emails that look like they come from legitimate crypto services, tricking victims into revealing their private keys.
These keys are precisely the kind of single form of digital identity that cybersecurity experts have warned about. Anyone with the private key can access everything in that wallet and send crypto tokens to a new address in an irreversible transaction.
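As a rough illustration of that security model, consider the toy sketch below. It uses a symmetric MAC as a stand-in for the elliptic-curve signatures real wallets use, and all names are hypothetical, but the core point is the same: the private key is the only secret, so any holder of it can authorize transfers.

```python
import hashlib
import hmac

# Toy stand-in for wallet signing (NOT real wallet cryptography):
# real chains use elliptic-curve signatures, but the model is the
# same -- possession of the private key equals control of the funds.
private_key = b"whoever-holds-this-controls-the-wallet"

def sign(key: bytes, tx: str) -> str:
    return hmac.new(key, tx.encode(), hashlib.sha256).hexdigest()

def network_accepts(key: bytes, tx: str, sig: str) -> bool:
    return hmac.compare_digest(sign(key, tx), sig)

tx = "transfer all tokens to new-address"
sig = sign(private_key, tx)  # a thief with the key produces this too
print(network_accepts(private_key, tx, sig))  # True, and irreversible
```

A phished key is therefore game over: there is no "forgot password" flow and no chargeback once the transaction confirms.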
The rise of decentralized finance, which does not rely on intermediaries such as centralized crypto exchanges, has also created new risks. Self-executing smart contracts have increased the speed and efficiency of transactions, which some see as an advantage, but they pose major challenges when it comes to fraud. Fraudsters can exploit vulnerabilities in these contracts. Sometimes that involves technical errors in the code; it can also be as simple as taking advantage of delays in transaction times to trick a victim into making a duplicate transaction.
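One classic technical error in contract code is updating a balance only after funds have been sent, which a reentrant call can exploit. The toy Python sketch below (all names hypothetical; real contracts run on a blockchain virtual machine, not Python) shows how the stale balance lets an attacker drain more than they hold.

```python
class ToyContract:
    """Toy model of a vulnerable withdraw function."""

    def __init__(self) -> None:
        self.balances = {"attacker": 10}
        self.vault = 100  # total funds held by the contract

    def withdraw(self, user: str, amount: int, reenter: int = 0) -> None:
        # BUG: the balance check passes, funds leave the vault, and only
        # afterwards is the balance reduced. A reentrant call in between
        # (simulated here by `reenter`) still sees the stale balance.
        if self.balances[user] >= amount:
            self.vault -= amount
            if reenter:  # stands in for a malicious contract's callback
                self.withdraw(user, amount, reenter - 1)
            self.balances[user] -= amount

c = ToyContract()
c.withdraw("attacker", 10, reenter=3)
print(c.vault)  # 60: 40 drained, though the attacker held only 10
```

The standard fix is the checks-effects-interactions pattern: reduce the balance before any funds leave, so a reentrant call fails the check.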