Hackers are increasingly leveraging generative AI to target individuals more effectively and at a lower cost than ever before. While you might feel assured in your ability to recognize malicious attacks, now is an ideal time to refresh your knowledge of the latest strategies they employ to take advantage of people.
How Do Hackers Use AI to Choose Their Targets?
Hackers often exploit stolen social media accounts to deceive individuals. They create fake profiles that mimic legitimate users or take control of real accounts to exploit the inherent trust people have in their networks.
AI-driven bots can scour social media for users’ photos, bios, and posts to construct convincing fake accounts. Once a scammer creates a fraudulent profile, they send friend requests to the victim’s contacts, misleading them into believing they’re communicating with someone they know.
Having control over a real account allows scammers to use AI for more sophisticated and targeted scams. AI bots can analyze close relationships, uncover hidden information, and examine past conversations. They can then initiate conversations that mimic the account owner’s speech patterns and push scams such as phishing links, fake emergencies, or requests for sensitive information.
For instance, many have encountered a friend’s Facebook account being compromised and used to disseminate phishing links. This represents just one facet of an account takeover. Once a scammer gains access, they can use AI tools to message the contacts of the hacked account, attempting to reel in additional victims.
In response to these threats, numerous web services have implemented complex CAPTCHAs, mandatory two-factor authentication, and more precise tracking of user behavior. However, despite these added precautions, people remain the most significant vulnerability.
Types of AI-Powered Scams You Should Be Aware Of
Cybercriminals are utilizing multi-modal AI to create bots that impersonate high-profile individuals or organizations. While many AI-driven scams still rely on traditional social engineering tactics, the incorporation of AI improves their effectiveness and makes them harder to detect.
AI Phishing and Smishing Schemes
Phishing and smishing schemes are longstanding tactics used by scammers. These scams involve impersonating reputable companies, government entities, and online services to steal your login credentials. Although these attacks are widespread, they can often be spotted fairly easily. Scammers frequently rely on the sheer volume of their attempts to find a few successes.
On the other hand, spear-phishing attacks are significantly more effective. These attacks demand extensive research on the part of the perpetrators, allowing them to craft highly personalized emails and messages aimed at particular individuals. However, spear-phishing attempts are less common because they require a substantial investment of time and effort.
Here’s where AI poses a serious threat. Cybercriminals can harness AI chatbots and tools to automate mass spear-phishing campaigns without needing to allocate significant resources or time. In some cases, deepfake videos of key figures have been used to enhance these scams. For example, YouTube recently alerted creators about phishing scams that featured a deepfake video of its CEO, aimed at tricking individuals into divulging their login credentials.
Romance Scams
Romance scams exploit emotional vulnerability to build trust and affection before ultimately deceiving victims. Unlike traditional phishing scams, where social engineering ends with the loss of credentials, romance scams require perpetrators to invest weeks, months, or even years in developing relationships—a strategy referred to as pig butchering. Due to this extensive time commitment, scammers can only target a limited number of victims, making these scams less prevalent than more straightforward spear-phishing attempts.
Nevertheless, today’s scammers can use AI chatbots to automate some of the more time-intensive elements of romance scams—conversing via text, sending images and videos, and even making live phone calls. Since targets are often in emotional distress, they may even rationalize oddities in AI-generated dialogue as charm or quirkiness.
The Scottish Sun reported on an incident in which a neuroscientist lost a considerable sum of money due to an AI-based romance scam. The fraudster utilized AI-generated videos and messages to convincingly pose as a romantic interest, fabricating a detailed narrative about working on an offshore oil rig and employing deepfake technology to make their claims appear credible. This highlights the evolving tactics scammers are adopting to exploit victims further.
AI-Driven Customer Support Scams
Customer support scams take advantage of people’s trust in prominent brands by impersonating their help desks. Scammers often send fake alerts, pop-ups, or emails claiming that an account has been locked, needs verification, or contains urgent security issues. Traditionally, these scams required direct interaction with victims, but the rise of AI chatbots has transformed the landscape.
AI customer support scams now utilize chatbots to automate interactions, making them seem more credible. Through automation tools like n8n, chatbots can engage in real-time conversations, replicate official support agents, and reference knowledge bases to appear more legitimate. Often, they use phishing tactics to create cloned websites that trick victims into entering their credentials.
On the flip side, scammers may employ AI agents to reach out to legitimate services such as banks or government programs, attempting to obtain a target’s data or even reset login credentials.
Automated Misinformation and Smear Campaigns
Hackers now deploy AI chatbots to propagate misinformation on an unprecedented scale. These bots fabricate and disseminate false narratives across social media, targeting news feeds, forums, and comment sections with misleading information. Unlike traditional misinformation tactics that required manual effort, AI agents can now automate this entire process, allowing falsehoods to spread more quickly and convincingly.
By automating the creation of authentic-looking social media accounts, these bots can generate and engage with posts to propel misinformation forward. With a sufficient number of these bots circulating online, they can sway uninformed or undecided individuals to align with their narrative.
Beyond mere deception, hackers employ AI-driven misinformation campaigns to direct traffic towards scam websites. They often blend fake news with fraudulent offers, enticing victims to click on malicious links. Because these posts often go viral before any fact-checking can occur, many individuals unknowingly assist in spreading misinformation further.
How Can I Protect Myself?
While hackers are employing AI across a variety of tasks, its most significant role is enhancing social engineering attacks. To defend against AI-powered scams, we must be more vigilant about securing our privacy and validating the authenticity of messages, posts, and profiles.
- Limit Sharing Personal Information: Reduce your chances of being targeted by thinking carefully before sharing personal details on social media. Scammers often use this information to create customized attacks.
- Be Cautious with Unexpected Communications: If you receive an unsolicited call, message, or email from someone unfamiliar, verify their identity with friends, family, or colleagues before responding.
- Watch Out for Deepfakes: AI-generated deepfakes can convincingly imitate voices and appearances. Be wary of surprise video calls and messages from high-profile individuals. Always check for verification badges, follower or subscriber counts, and the interactions on their posts.
- Think Before You Click: Phishing links often masquerade as regular posts on social media. If the play button looks flat or fake, or the post is ambiguous (resembling both a video and an image link), it’s wise to avoid engaging.
- Verify News Posts: Whether you want to avoid scams or simply stay informed, cross-check information across multiple sources. Also, be vigilant in the comments section—bot accounts frequently have usernames ending in strings of numbers, a byproduct of automated account creation grabbing whatever name is available.
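The link-checking advice above can be partially automated. As a minimal sketch (not a complete anti-phishing solution), the hypothetical helper below compares a URL’s actual hostname against a list of domains you trust, catching the common trick of embedding a brand name inside an unrelated domain:

```python
from urllib.parse import urlparse

def looks_suspicious(url: str, trusted_domains: set[str]) -> bool:
    """Flag a link whose host is neither a trusted domain nor a subdomain of one."""
    host = urlparse(url).hostname or ""
    return not any(
        host == domain or host.endswith("." + domain)
        for domain in trusted_domains
    )

trusted = {"paypal.com", "youtube.com"}

# Legitimate subdomain of a trusted domain: not flagged.
print(looks_suspicious("https://www.paypal.com/signin", trusted))         # False
# Brand name buried in an unrelated domain: flagged.
print(looks_suspicious("https://paypal.com.secure-login.xyz/", trusted))  # True
```

This only checks where a link actually points; it won’t catch look-alike characters or compromised legitimate sites, so treat it as one signal alongside the manual checks listed above.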
While AI chatbots provide convenience, they also empower hackers with sophisticated scamming tools. It’s essential to remember that many AI-assisted scams still follow familiar patterns inherent in traditional scams. They are just harder to detect and more widespread. By staying informed and verifying all online interactions, you can protect yourself from these evolving threats.