
05.2024 Life Guide

Seeing is not believing? AI fraud techniques exposed

Far Eastern New Century Corporation / Jian Junru
        With the rapid development of AI, related applications have steadily permeated everyday life; at the same time, fraud cases keep climbing. A familiar phone call, a business email, or a video conference may look perfectly normal yet conceal an AI trap. Trust it blindly, and personal wealth, reputation, and even assets can vanish overnight. This issue of "Information Network" surveys the latest AI fraud techniques and reminds you how to protect yourself.

                Applications of Generative AI

        Have you ever used ChatGPT to draft emails or create images? Tasks that once took a great deal of time become effortless with generative AI. But the general public is not the only group using it: hackers also exploit generative AI to accelerate their attacks. As phishing emails grow more frequent, they threaten not only individuals but also pose a serious challenge to corporate security.

        Traditional Phishing Attacks vs. AI Phishing Attacks

        In the past, readers could often spot a phishing email from its awkward sentences, odd variant characters, spelling errors, or other flaws. With the help of generative AI, however, phishing emails have become far more convincing: the language and wording are more natural and precise, and the content can even be tailored to a victim's personal information and online behavior, making real and fake messages hard to tell apart. Take language as an example: a hacker can hand ChatGPT an English draft to polish, or write the request in a familiar language, have ChatGPT translate it into fluent English, and send the output as phishing content.

        Can BEC email fraud be generated with AI too?

        BEC (Business Email Compromise) fraud, also known as the "face-changing" scam and covered in this column in 2023, refers to hackers impersonating internal employees, senior executives, suppliers, or external partners to trick staff into wiring money or leaking confidential information. As AI applications spread, BEC scams are also on the rise. Once a hacker hijacks an email thread, simply feeding the conversation to ChatGPT produces natural-sounding replies that can even imitate the tone and style of the original messages, making the fraud harder for ordinary people to detect.

        A dark version of ChatGPT

        Have you ever imagined a "dark" version of ChatGPT? Hackers harness generative AI's coding ability to modify malware code and have built attack-oriented bots such as FraudGPT and WormGPT, which churn out social-engineering emails in bulk and let people with no programming background easily create viruses, widening the damage malware can do. These offensive generative AI tools are sold as subscription services: for as little as USD 200 per month, a user can launch sophisticated phishing attacks, dramatically lowering the barrier to entry for attackers.

        Have you heard of AI Stefanie Sun?

        When you hear a familiar voice but cannot confirm it is genuine, beware of AI voice fraud. In recent years, AI technology has made it ever easier to clone someone else's voice. Take singer Stefanie Sun as an example: netizens used her past recordings as training data and generated strikingly realistic vocals with AI models, down to the breathing, enunciation, and trailing notes, all very close to the singer's own interpretation. More shocking still, cloning a person's voice can require as little as three seconds of audio. Hackers can easily collect personal information and voice samples through surveys, theft, purchase, and other channels, then launch precisely targeted attacks, so everyone should guard their personal information and voice carefully to prevent fraud.

        Seeing is believing? No pictures, no truth?

        Deepfake technology has drawn widespread attention since 2017. The term combines "deep learning" with "fake" and refers to using AI deep-learning techniques to synthesize images, video, and even audio. A while ago it was popular on social media to upload app-generated photos showing yourself aged or gender-swapped; those apps rely on Deepfake technology. Beyond self-entertainment, however, it can also become a tool for fraud. In a recent case of this new type of AI fraud in Hong Kong, criminals used Deepfakes to join a video conference posing as several senior executives from headquarters and ordered finance staff at the Hong Kong branch to remit HKD 200 million for "secret transactions." In the age of AI, "seeing is believing" is a thing of the past; even what you see on screen may not be real.

        When the devil is always a step ahead, how can you protect yourself?

        Faced with rampant AI fraud, enterprises must not take it lightly. The annual report of the Federal Bureau of Investigation (FBI) shows that online fraud losses in the United States reached USD 12.5 billion in 2023, up 22% from the previous year and a six-year high. Meanwhile, hackers using AI to mount online attacks are multiplying at an astonishing rate. If you receive a suspicious email, remember three principles: do not impulsively click links or download unknown files, do not casually hand over personal or financial information, and pay close attention to the sender's domain and the URL. Whether a fraud arrives as text, voice, or video, verify the content through a second channel; the authenticity of a phone or video call can also be checked with a passphrase known only to the two parties.
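
        To make "pay close attention to the domain" concrete, here is a minimal Python sketch of the idea; the trusted-domain list and the sample URLs are made-up illustrations, not real company settings:

from urllib.parse import urlparse

# Hypothetical allow-list of domains your organization actually uses.
TRUSTED_DOMAINS = {"example.com.tw", "example.com"}

def is_trusted(url: str) -> bool:
    """True only if the link's hostname is a trusted domain
    or a subdomain of one (e.g. mail.example.com.tw)."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

# A look-alike domain fails the check even though it "reads" right.
print(is_trusted("https://mail.example.com.tw/login"))   # True
print(is_trusted("https://example-com-tw.evil.test"))    # False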

        In addition, hackers can exploit publicly available personal and voice data to commit fraud, so the public should safeguard sensitive personal information, tighten privacy settings on social media, and avoid posting photos of their faces whenever possible. For enterprises, raising information-security awareness, strengthening identity checks with multi-factor authentication (one-time codes, as sketched below), and establishing data-protection mechanisms are all important safeguards. Only by staying vigilant and protecting accounts, passwords, and personal information can we keep clear of the risks AI can bring.
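
        As an aside on what the one-time codes behind multi-factor authentication involve, the sketch below implements the standard TOTP algorithm (RFC 6238) using only the Python standard library; the Base32 secret is a made-up demo value:

import base64, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Standard TOTP (RFC 6238): HMAC-SHA1 over the current 30-second
    time step, dynamically truncated to a short numeric code."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

# Demo secret only; real secrets come from your authenticator enrollment.
print(totp("JBSWY3DPEHPK3PXP"))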

        *Image source: Freepik
