Here are the technologies that are changing the face of fraud in Hong Kong

Frauds based on new technologies are on the rise and, as a global financial centre, Hong Kong is particularly vulnerable.

The city’s police recorded 16,182 technology-related crime cases in the first half of the year, an increase of 3.5 percent from the same period in 2023. According to police commissioner Raymond Lam Cheuk-ho, losses in these cases amounted to HK$2.66 billion (US$341.1 million).

But how have fraudsters been able to carry out increasingly convincing scams? Here are the key technologies fueling the fraud boom:

Deepfakes – which use generative artificial intelligence (AI) to create videos, images or audio files that resemble a person – are becoming increasingly difficult to distinguish from real people.

With the explosion of AI in the enterprise, deepfake tools are more affordable and accessible than ever, making it easy for criminals with little or no technical background to pull off sophisticated scams.

Deepfakes have become a global problem, with the number of reported cases increasing rapidly. In the first quarter of this year, there was a 245 percent year-on-year increase in deepfakes detected by identity verification provider Sumsub.

Hong Kong police have registered three cases related to the technology and discovered 21 clips using deepfakes online since last year to imitate government officials or celebrities, Hong Kong security chief Chris Tang said in June in response to a lawmaker’s query.

Deepfakes of celebrities are increasingly being used to deceive people online. The Hong Kong Securities and Futures Commission (SFC) warned earlier this year about a scam involving deepfakes of Elon Musk promoting a cryptocurrency trading platform called “Quantum AI”. Photo: Screenshot

One of the three deepfake cases involved a HK$200 million loss when a Hong Kong employee of multinational design and engineering firm Arup was deceived in a video conference. All other participants in the call, including a person who appeared to be the chief financial officer, were imposters. Publicly available video and audio data was all the scammers needed to stage the deception.

Deepfakes involve more than generating the likeness of another person in a video. They can also be used to create convincing but fake documents and biometric data.

Hong Kong police cracked down on a fraud syndicate that sent more than 20 online loan applications using deepfake technology to bypass the online application process. One of the applications, for a HK$70,000 loan, was approved.

These tools do not just make scams harder to detect; the same technology can also be used as a defense. For example, the Deepfake Inspector from the American-Japanese cybersecurity company Trend Micro analyzes images for noise or color deviations to identify deepfakes in live video calls.
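The article does not detail how Trend Micro's tool works internally. As a purely illustrative sketch of the general idea of noise-deviation analysis, one could compare high-frequency noise levels across color channels; real camera sensors add roughly similar noise to all channels, while generated imagery often does not. Everything below (the frames, the threshold, the heuristic itself) is a toy assumption, not the product's method:

```python
import statistics

def channel_noise(frame, channel):
    """Estimate high-frequency noise in one color channel as the
    mean absolute difference between horizontally adjacent pixels."""
    diffs = []
    for row in frame:
        for left, right in zip(row, row[1:]):
            diffs.append(abs(left[channel] - right[channel]))
    return statistics.mean(diffs)

def looks_synthetic(frame, ratio_threshold=3.0):
    """Flag a frame whose per-channel noise levels diverge sharply.
    Illustrative heuristic only; production detectors use learned models."""
    levels = [channel_noise(frame, c) for c in range(3)]
    return max(levels) > ratio_threshold * (min(levels) + 1e-9)

# Tiny synthetic "frames": lists of rows of (R, G, B) pixels.
natural = [[(10, 12, 11), (14, 13, 15), (11, 10, 12)],
           [(13, 14, 12), (10, 11, 14), (15, 13, 11)]]
odd     = [[(10, 12, 11), (90, 13, 12), (10, 11, 11)],
           [(80, 12, 12), (10, 13, 13), (95, 12, 12)]]

print(looks_synthetic(natural))  # False — noise similar across channels
print(looks_synthetic(odd))      # True — red-channel noise far exceeds others
```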

Everyone is familiar with classic examples of identity theft. These usually involve government ID numbers, credit card numbers or biometric information, which are often used for fraudulent purposes. Digital identity theft is similar in that it allows fraudsters to impersonate other people on computer networks. However, in some cases it can be even more insidious than traditional identity theft.

Digital identities are software and algorithms that serve as proof of a person or machine’s online identity. Think of persistent cookies that keep a user logged into platforms like Google and Facebook, or an application programming interface (API) key. By stealing this information, a malicious actor can appear as someone with authorized access.
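The danger described above comes down to the fact that a session token or API key is, to the server, the identity. A minimal sketch makes this concrete; the token, account name and `whoami` function are all hypothetical, not any real platform's API:

```python
import hmac

# Server-side session store: token -> account. In this model the token
# IS the identity; the server never re-checks who is presenting it.
SESSIONS = {"tok_9f3ab2": "alice@example.com"}  # hypothetical token

def whoami(presented_token):
    """Return the account a presented token grants access to, if any."""
    for token, account in SESSIONS.items():
        # Constant-time comparison avoids timing side channels.
        if hmac.compare_digest(token, presented_token):
            return account
    return None

# The legitimate user and a thief who exfiltrated the cookie are
# indistinguishable to the server:
print(whoami("tok_9f3ab2"))  # alice@example.com — whoever holds it
print(whoami("tok_wrong"))   # None
```

This is why security teams recommend short-lived tokens and additional signals (device, location) rather than a single persistent credential.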

CyberArk’s Billy Chuang (left), Solution Engineering Director for North Asia, and Sandy Lau, District Manager for Hong Kong and Macau. Photo: CyberArk

The growth of cloud services has increased both the incentives and risks of these types of cyber threats. When a system uses a single form of digital identity to verify that users are who they say they are, it is even more vulnerable.

“There is a possibility that cookies may be stolen or made available to third parties and they may use the cookies to access other applications or internal resources,” said Sandy Lau, district manager for Hong Kong and Macau at CyberArk, an Israeli information security provider.

Hybrid work environments, such as using personal devices at work, could increase the risk of cyber theft, Lau added.

To meet customer demands and address growing concerns about machine identities, CyberArk launched an identity-centric secure browser in March to help employees separate work and personal applications and domains.

When Microsoft-backed startup OpenAI launched ChatGPT in late 2022, it sparked an arms race among companies trying to outdo each other with their own large language models (LLMs) – the underlying technology – with ever larger datasets and sophisticated training methods.

Now there’s a seemingly endless list of options for users seeking everything from a little help cleaning up their prose to being scammed out of their life savings. Malicious actors are increasingly turning to LLMs to assist with tasks like crafting text messages and tracking down system vulnerabilities.

Hackers can use LLMs to generate queries that automate the process of finding vulnerabilities on a target network. Once they gain access, they can use LLMs again to further exploit vulnerabilities internally. The average time between the initial compromise of a system and the exfiltration of data fell to two days last year, down from nine days in 2021, cybersecurity firm Palo Alto Networks concluded in a report published in March.

Phishing attacks, which involve sending malicious links via email, SMS or voice message, remain the most common method of gaining access to a target’s system. LLMs have put a new face on an old scam, allowing more convincing messages to be sent at scale.

Fortunately, AI is also good at spotting fraudulent links when users may not be paying attention. The Hong Kong Computer Emergency Response Team Coordination Centre (HKCERT), the city’s information security agency, has been testing AI language models since May to detect phishing websites and improve its risk warning system.
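HKCERT's actual models are not described here. For contrast, the kind of simple rule-based check that AI-driven detectors improve upon might look like the sketch below; the allow-listed domains, marker words and scoring weights are all invented for illustration:

```python
from urllib.parse import urlparse

# Hypothetical allow-list of domains a user actually does business with.
KNOWN_DOMAINS = {"hsbc.com.hk", "bochk.com"}

SUSPICIOUS_MARKERS = ("login", "verify", "secure", "update")

def phishing_score(url):
    """Crude heuristic score: higher means more suspicious.
    Real detectors use far richer, learned features."""
    host = urlparse(url).hostname or ""
    score = 0
    if host not in KNOWN_DOMAINS:
        # Lookalike: a known brand name embedded in an unknown host.
        if any(brand.split(".")[0] in host for brand in KNOWN_DOMAINS):
            score += 2
    if any(marker in url.lower() for marker in SUSPICIOUS_MARKERS):
        score += 1
    if host.count(".") >= 3:  # deeply nested subdomains
        score += 1
    return score

print(phishing_score("https://hsbc.com.hk/account"))                  # 0
print(phishing_score("https://hsbc.secure-login.example.cc/verify"))  # 4
```

Rules like these are brittle — attackers adapt to them quickly — which is precisely why agencies are testing language models that generalize beyond fixed patterns.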

Cyber-utopians hailed the invention of Bitcoin as a revolution that would change life on the internet as we know it. Cryptocurrencies may not have revolutionized money for most people, but they have opened up a whole new way to siphon money from unsuspecting users.

A common attack in the crypto sector targets users’ wallets, which in many cases are made accessible through browser extensions. Scammers can create fake websites or phishing emails that look like they come from legitimate crypto services, tricking victims into revealing their private keys.

The logo of cryptocurrency platform JPEX on display in Hong Kong on September 19, 2023. Photo: Bloomberg

These keys are exactly the kind of single form of digital identity that cybersecurity experts warn against relying on. Anyone with the private key can access everything in that wallet and send crypto tokens to a new location in an irreversible transaction.
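Real wallets sign transactions with ECDSA over a public/private key pair; the toy below substitutes a symmetric HMAC purely to show the core point — possession of the key is the entire authorization check, and the network verifies signatures, not intentions. Key material, transaction text and function names are all illustrative:

```python
import hmac, hashlib

def sign(private_key: bytes, transaction: str) -> str:
    """Toy 'signature' (HMAC stands in for ECDSA in this sketch)."""
    return hmac.new(private_key, transaction.encode(), hashlib.sha256).hexdigest()

def network_accepts(private_key: bytes, transaction: str, signature: str) -> bool:
    # The network checks that the signature matches; it cannot tell
    # the rightful owner apart from a thief holding the same key.
    return hmac.compare_digest(sign(private_key, transaction), signature)

key = b"owner-private-key"  # illustrative, not a real key format
tx = "send 5 BTC to attacker-address"

# A phishing victim leaks the key; the thief's transaction validates
# exactly like the owner's, and once confirmed it cannot be reversed.
stolen_sig = sign(key, tx)
print(network_accepts(key, tx, stolen_sig))  # True
```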

The rise of decentralized finance, which does not rely on intermediaries such as centralized crypto exchanges, has also created new risks. Self-executing smart contracts have increased the speed and efficiency of transactions, which some see as an advantage, but they pose major challenges when it comes to fraud. Fraudsters can exploit vulnerabilities in these contracts; sometimes that means technical errors in the code, but it can be as simple as taking advantage of delays in transaction times to trick a victim into making a new transaction.
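One classic class of smart-contract coding error is paying out before updating state, which lets a malicious counterparty re-enter the contract mid-transaction. The sketch below models this in plain Python rather than a real contract language; the vault, balances and attacker callback are all invented for illustration:

```python
class NaiveVault:
    """Toy stand-in for a smart contract with a classic ordering bug:
    it pays out BEFORE zeroing the balance, so a malicious receiver
    that calls withdraw() again mid-payment drains extra funds."""
    def __init__(self, balances):
        self.balances = dict(balances)
        self.pool = sum(balances.values())

    def withdraw(self, user, receive_callback):
        amount = self.balances.get(user, 0)
        if amount > 0 and self.pool >= amount:
            self.pool -= amount
            receive_callback(amount)   # external call happens first...
            self.balances[user] = 0    # ...state is updated too late

vault = NaiveVault({"attacker": 10, "victim": 90})
stolen = []

def reenter(amount):
    stolen.append(amount)
    if len(stolen) < 3:                # attacker re-enters twice more
        vault.withdraw("attacker", reenter)

vault.withdraw("attacker", reenter)
print(sum(stolen))  # 30 — three payouts from a 10-unit balance
print(vault.pool)   # 70 — the pool lost funds belonging to others
```

The standard fix is to update state before making any external call, which is why audited contracts follow a checks-effects-interactions ordering.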

Hong Kong’s efforts to position itself as a Web3 business hub since late 2022 have drawn both praise and criticism. Concerns about the type of business crypto attracts were heightened last year when an apparently fraudulent exchange called JPEX was linked to a HK$1.5 billion loss.

Some used the JPEX scandal, one of the biggest financial frauds in the city’s history, to criticize regulators. Others said it proved that Hong Kong was on the right track with regulations that came into force last year requiring cryptocurrency exchanges to be licensed.

This article originally appeared in the South China Morning Post (SCMP), the most authoritative voice for reporting on China and Asia for more than a century. For more SCMP stories, visit the SCMP app or the SCMP Facebook page. Copyright © 2024 South China Morning Post Publishers Ltd. All rights reserved.
