AI scams
AI in the hands of cybercriminals is never a good mix.
Written by Alexander Pan
Updated over a week ago

It's no secret that AI has taken massive leaps in recent years (ChatGPT anyone?). So when that technology is combined with scammers who have less-than-honourable intentions, the result is a massive online safety concern in the form of AI scams.

There are four main types of scams involving AI:

  • AI phishing scams

  • AI chatbot scams

  • AI voice cloning scams

  • AI deepfake scams

As these scams pose a real risk to unsuspecting people, we're going to take a look at the types of AI scams making the rounds, what to keep an eye out for, and how you can stay safe from them.

If you need support or just someone to talk to, our Sonder support team is available 24/7 to chat whenever you need it.

AI phishing scams

AI phishing scams are a variation of the common phishing scam: scammers use AI to create very convincing emails or websites that mimic legitimate companies or organisations in an attempt to trick unsuspecting people into parting with their personal information.

While this is no different to a regular phishing scam on paper (both try to trick people into entering personal info on fake websites or in reply to fake emails), AI phishing scams are much harder to spot because the AI-generated fakes are so convincing.

The AI can generate highly personalised fake emails that are difficult to distinguish from the real thing. For example, a fake email may include the recipient's name and other personal details, making it seem more legitimate.

How to stay safe:

  • Be wary of emails that ask you to click on a link or provide personal information. Always verify the legitimacy of the email before taking any action

  • Check the sender's email address for suspicious signs (e.g. misspellings or strange characters)

  • Hover over any links in emails to see where they lead before clicking on them. If the URL looks suspicious or unfamiliar, do not click on it

  • Use strong, unique passwords for all your accounts, and never share your password with anyone

  • Keep your antivirus software up to date to protect against malware and other security threats

If you've shared personal or financial details with the scammers, you need to:

  • Contact your bank immediately to let them know what happened and ask what they can do to help.

  • Change the passwords for any online accounts that might be at risk. Make sure to enable two-factor authentication for an extra layer of security.

  • If you've shared personally sensitive information, such as your driver's licence, passport details, or contact details, visit IDCare for assistance on how to address potential identity theft.

AI chatbot scams

AI chatbots are commonly used as customer service tools by many big companies. However, cybercriminals also use AI chatbots to impersonate customer service reps or individuals from a legitimate organisation in order to trick unsuspecting people into handing over money and/or personal information. There are several types of AI chatbot scams:

  • Investment chatbot scams - Scammers use AI to create fake investment opportunities that target vulnerable people who are looking to make money quickly.

  • Tech support chatbot scams - Scammers create an AI chatbot that impersonates a tech support rep from a legitimate company, like Microsoft or Apple. The chatbot will ask for remote access to the user's computer to fix an 'issue', but instead, the scammer is given access to the user's personal and sensitive information.

  • Romance chatbot scams - Scammers use an AI chatbot to impersonate an attractive person to lure unsuspecting people into starting a 'relationship' with them. Once trust is established, the chatbot will ask the victim for money.

These AI chatbots simulate human conversation and mimic customer service reps or individuals in a convincing way to gain people's trust. Once trust has been established the chatbot will then press the victim into sharing their personal and/or financial information.

How to stay safe:

  • Verify the identity of the chatbot before providing any personal information. If in doubt, stop contact immediately.

  • Be suspicious of any unsolicited messages that ask for personal information or money. This is almost certainly a scam.

  • Do not click on any links or download any attachments from unknown chatbots.

AI voice cloning scams

AI-powered voice cloning technology is sophisticated enough to generate convincing recreations of a real person's voice. Scammers have been using this technology to trick unsuspecting people into believing they're speaking with someone real, only to later scam them out of money and/or personal information. There are a number of ways scammers can use voice cloning technology:

  • Impersonating a celebrity or public figure - Scammers create a fake audio recording of a famous celebrity or public figure and then use it to trick unsuspecting people into believing they're chatting with them.

  • Impersonating a loved one - Exactly what it sounds like. What makes this scenario particularly dangerous is when the cloned 'loved one' claims to be in a vulnerable situation, such as needing money or requiring urgent help.

  • Impersonating a company rep - Scammers recreate a company rep's voice and use it to deceive people into giving them personal information.

How to stay safe:

  • Always double-check with friends or family directly to verify their identity, or come up with a safe word to say over the phone to confirm a real emergency.

  • Always be wary of unexpected phone calls, even from people you know. Caller ID numbers can be faked and scammers will use every trick in the book to scam you.

  • Always be vigilant when asked to share personal information.

AI deepfake scams

'Deepfakes' are videos or images that use AI technology to convincingly manipulate, and sometimes outright change, someone's appearance and/or voice. Scammers can use deepfake technology to create fake footage of someone doing or saying something they never actually did in order to blackmail and extort money from them. There are a number of ways deepfakes can be used to scam people:

  • Blackmail and extortion - Using deepfake videos and/or images of a victim saying or doing something they never did to extort money.

  • Identity theft - Using deepfake videos and/or images to impersonate someone online to gain access to their personal information.

  • Financial fraud - Using deepfake videos and/or images to impersonate CEOs or other high-level executives to convince employees to transfer money to fraudulent accounts.

How to stay safe:

  • Always verify the authenticity of any image or video by checking the original source to make sure it is legitimate.

    • Use reputable fact-checking services to verify the authenticity of images or videos.

  • Be wary of any images or videos that seem too good to be true or that are out of character for the person depicted. If it looks or sounds fake, it almost always is.

  • Report any suspicious images or videos to the appropriate authorities.

If you have any questions or need extra support, we're here to help you anytime in any language. Simply start a chat with us via the home screen of the Sonder app to connect to our team of qualified, caring health professionals.


All content is created and published for informational purposes only. It is not intended to be a substitute for professional advice. Always seek the guidance of a qualified health professional.
