High-tech fraud: These technologies are changing the face of fraud in Hong Kong

With the explosion of AI, deepfake tools are more affordable and accessible than ever, making it easy for criminals with little or no technical background to pull off sophisticated scams.

Deepfakes have become a global problem, with the number of reported cases increasing rapidly. In the first quarter of this year, there was a 245 percent year-on-year increase in deepfakes detected by identity verification provider Sumsub.

Hong Kong police have recorded three cases related to the technology since last year and found 21 online clips that used deepfakes to imitate government officials or celebrities, Hong Kong security chief Chris Tang said in June in response to a lawmaker’s query.

Deepfakes of celebrities are increasingly being used to deceive people online. The Hong Kong Securities and Futures Commission (SFC) warned earlier this year about a scam involving deepfakes of Elon Musk promoting a cryptocurrency trading platform called “Quantum AI”. Photo: Screenshot
One of the three deepfake cases involved the loss of HK$200 million when an employee of Hong Kong-based multinational design and engineering firm Arup was deceived in a video conference. All other participants in the call, including a person who appeared to be the chief financial officer, were imposters. Publicly available video and audio data was all the imposters needed to stage the deception.

Deepfakes involve more than just generating the likeness of another person in a video. They can also be used to create convincing but fake documents and biometric data.

Hong Kong police cracked down on a fraud syndicate that submitted more than 20 online loan applications, using deepfake technology to pass identity checks in the application process. One of the applications, for a HK$70,000 loan, was approved.

Not only do these tools make scams harder for victims to detect, but the technology can also be used as a defense. For example, the Deepfake Inspector from the American-Japanese cybersecurity company Trend Micro analyzes images for noise or color deviations to identify deepfakes in live video calls.
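
Trend Micro has not published how Deepfake Inspector works internally, but the general idea of screening frames for statistical inconsistencies can be illustrated. The Python sketch below is a hypothetical, heavily simplified example: it flags video frames whose color statistics or high-frequency noise suddenly deviate from the rest of the clip, the kind of artifact synthetic faces can introduce. The threshold and functions are illustrative assumptions, not a real detector.

```python
# Minimal sketch of frame-level colour/noise screening (illustrative only).
import cv2
import numpy as np

def frame_statistics(frame):
    """Per-channel colour means plus a Laplacian-variance noise estimate."""
    means = frame.reshape(-1, 3).mean(axis=0)          # B, G, R averages
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    noise = cv2.Laplacian(gray, cv2.CV_64F).var()      # high-frequency energy
    return np.append(means, noise)

def flag_anomalous_frames(video_path, z_threshold=3.0):
    """Flag frames whose colour/noise statistics jump away from the clip's
    average -- the kind of inconsistency synthetic video can exhibit."""
    cap = cv2.VideoCapture(video_path)
    stats = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        stats.append(frame_statistics(frame))
    cap.release()
    stats = np.array(stats)
    z = np.abs((stats - stats.mean(axis=0)) / (stats.std(axis=0) + 1e-9))
    return [i for i, frame_z in enumerate(z) if (frame_z > z_threshold).any()]
```

A real system would analyze faces specifically, track temporal consistency and run learned models rather than simple statistics, but the principle of looking for deviations a camera would not produce is the same.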

Theft of digital identities

Everyone is familiar with classic examples of identity theft. These usually involve government ID numbers, credit card numbers or biometric information, which are often used for fraudulent purposes. Digital identity theft is similar in that it allows fraudsters to impersonate other people on computer networks. However, in some cases it can be even more insidious than traditional identity theft.

Digital identities are software and algorithms that serve as proof of a person or machine’s online identity. Think of persistent cookies that keep a user logged into platforms like Google and Facebook, or an application programming interface (API) key. By stealing this information, a malicious actor can appear as someone with authorized access.
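
The danger is that such credentials are bearer tokens: the system checks only that the token is valid, not who is presenting it. The hypothetical Python sketch below illustrates this under assumed names; the header, key value and user mapping are all made up for the example.

```python
# Minimal sketch of bearer-token authentication: possession of the token
# is the identity, so a stolen API key or cookie grants full access.
import hmac

VALID_API_KEYS = {"k3y-from-a-config-file": "alice"}  # hypothetical store

def authenticate(request_headers: dict) -> str | None:
    """Return the user the token maps to, or None. Nothing about the
    caller -- device, location, human or machine -- is ever checked."""
    presented = request_headers.get("X-Api-Key", "")
    for key, user in VALID_API_KEYS.items():
        if hmac.compare_digest(presented, key):   # constant-time compare
            return user
    return None

# A legitimate client and an attacker replaying a stolen key look identical:
print(authenticate({"X-Api-Key": "k3y-from-a-config-file"}))  # -> 'alice'
```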

CyberArk’s Billy Chuang (left), Solution Engineering Director for North Asia, and Sandy Lau, District Manager for Hong Kong and Macau. Photo: CyberArk

The growth of cloud services has increased both the incentives and risks of these types of cyber threats. When a system uses a single form of digital identity to verify that users are who they say they are, it is even more vulnerable.

“There is a possibility that cookies may be stolen or made available to third parties and they may use the cookies to access other applications or internal resources,” said Sandy Lau, district manager for Hong Kong and Macau at CyberArk, an Israeli information security provider.

Hybrid work environments, such as using personal devices at work, could increase the risk of cyber theft, Lau added.

To meet customer demands and address growing concerns about machine identities, CyberArk launched an identity-centric secure browser in March to help employees separate work and personal applications and domains.

Large language models

When Microsoft-backed start-up OpenAI launched ChatGPT in late 2022, it sparked an arms race among companies trying to outdo each other with their own large language models (LLMs) – the underlying technology – with ever larger data sets and sophisticated training methods.

Now there is a seemingly endless list of options, covering everything from giving users a little help cleaning up their prose to scamming them out of their life savings. Malicious actors are increasingly turning to LLMs to assist with tasks like crafting text messages and tracking down system vulnerabilities.

Hackers can use LLMs to generate queries that automate the process of finding vulnerabilities in a target network. Once they gain access, they can reuse LLMs to further exploit vulnerabilities internally. The median time between the first compromise of a system and the exfiltration of data fell to two days last year, down from nine days in 2021, according to a report published in March by the cybersecurity firm Palo Alto Networks.

Phishing attacks – which involve sending malicious links via email, SMS or voice message – remain the most common method of gaining access to a target’s system. LLMs have put a new face on an old scam, allowing more convincing messages to be sent at scale.

Fortunately, AI is also good at detecting fraudulent links when users may not be paying attention. The Hong Kong Computer Emergency Response Team Coordination Centre (HKCERT), the city’s information security watchdog, has been testing AI language models since May to better detect phishing websites and improve the risk warning system.
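
HKCERT has not published the details of its models, but even simple heuristics catch many phishing links. The hypothetical Python sketch below shows the flavor of such screening: it flags lookalike domains that sit close to, but not exactly on, a known brand name, along with a couple of other cheap signals. The brand list and thresholds are illustrative assumptions.

```python
# Minimal sketch of heuristic phishing-URL screening (illustrative only).
from difflib import SequenceMatcher
from urllib.parse import urlparse

KNOWN_BRANDS = ["google", "facebook", "hsbc", "paypal"]  # illustrative list

def looks_like_phishing(url: str) -> bool:
    host = urlparse(url).hostname or ""
    domain = host.split(".")[0] if host else ""
    for brand in KNOWN_BRANDS:
        score = SequenceMatcher(None, domain, brand).ratio()
        # Close to a known brand but not an exact match: classic lookalike.
        if 0.7 <= score < 1.0:
            return True
    # Other cheap signals: raw IP hosts and '@' tricks in the URL.
    if host.replace(".", "").isdigit() or "@" in url:
        return True
    return False

print(looks_like_phishing("https://faceb00k.example-login.com/verify"))  # True
```

Production systems layer learned models, reputation feeds and page-content analysis on top of rules like these.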

Cryptocurrency attacks

Cyber-utopians hailed the invention of Bitcoin as a revolution that would change life on the internet as we know it. Cryptocurrencies may not have revolutionized money for most people, but they have opened up a whole new way to siphon money from unsuspecting users.

A common attack in the crypto sector targets users’ wallets, which in many cases are made accessible through browser extensions. Scammers can create fake websites or phishing emails that look like they come from legitimate crypto services, tricking victims into revealing their private keys.

The logo of the cryptocurrency platform JPEX seen in Hong Kong on September 19, 2023. Photo: Bloomberg

These keys are precisely the kind of single form of digital identity that cybersecurity experts have warned about. Anyone with the private key can access everything in that wallet and send crypto tokens to a new location in an irreversible transaction.
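
The Python sketch below, using the third-party ecdsa package, illustrates why possession of the key is the whole identity: a blockchain verifies only that a transaction was signed with the matching private key, nothing more. Real wallets add address encoding, hashing and transaction formats omitted here.

```python
# Minimal sketch: whoever holds the signing key controls the funds.
import ecdsa

sk = ecdsa.SigningKey.generate(curve=ecdsa.SECP256k1)  # the private key
vk = sk.get_verifying_key()                            # public counterpart

tx = b"send 5 tokens from this wallet to a new address"
signature = sk.sign(tx)

# Verification asks only: was this signed with the matching private key?
print(vk.verify(signature, tx))  # True -- no other identity check exists
```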

The rise of decentralized finance, which does not rely on intermediaries such as centralized crypto exchanges, has also created new risks. Self-executing smart contracts have increased the speed and efficiency of transactions, which some see as an advantage, but they pose major challenges when it comes to fraud. Fraudsters can exploit vulnerabilities in these contracts; this sometimes involves technical errors in the code, but can also be as simple as taking advantage of delays in transaction times to trick a victim into making a new transaction.
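
One of the best-known classes of such coding errors is the reentrancy bug. The hypothetical sketch below models it in Python rather than a contract language such as Solidity: because funds are paid out before the balance is updated, a malicious receiver that calls back into the withdrawal during the payout can drain more than it owns. All names and amounts are invented for illustration.

```python
# Minimal Python model of a reentrancy-style smart-contract flaw.
class VulnerableVault:
    def __init__(self):
        self.balances = {"attacker": 10}

    def withdraw(self, user, receive_callback):
        amount = self.balances[user]
        if amount > 0:
            receive_callback(amount)   # external call happens FIRST...
            self.balances[user] = 0    # ...state is updated too late

vault = VulnerableVault()
stolen = []

def malicious_receiver(amount):
    stolen.append(amount)
    if len(stolen) < 5:                # re-enter before the balance resets
        vault.withdraw("attacker", malicious_receiver)

vault.withdraw("attacker", malicious_receiver)
print(sum(stolen))  # 50 drained from a 10-token balance
```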

Hong Kong’s efforts since late 2022 to establish itself as a Web3 business hub have drawn both praise and criticism. Concerns about the type of business crypto attracts were heightened last year when an apparently fraudulent exchange called JPEX was linked to HK$1.5 billion in losses.
Some used the JPEX scandal, one of the biggest financial scams in the city’s history, to criticize regulators. Others said it showed Hong Kong was on the right track with rules, which came into force last year, requiring cryptocurrency exchanges to be licensed.
