AI is gradually infiltrating more aspects of the internet, providing answers to searches, suggesting downloads, and even filtering your emails. However, just because it seems helpful doesn’t necessarily mean it’s always accurate or safe.
I was searching for screen recording apps and decided to see how Gemini would handle the task. I asked it to suggest a few tools along with direct download links. The initial list looked decent, but one link pointed to Softonic.
If you’re not familiar, Softonic appears harmless but isn’t reliable. The site has operated for years, wrapping popular apps in its own installers, which often include adware, browser hijackers, or other unwanted software. It uses aggressive SEO tactics to rank high on Google, making its links look trustworthy despite a long-standing reputation for bundling junk. Now, it seems, those links are also creeping into AI-generated search responses.
I recognized the red flag quickly because I’ve been online long enough to know Softonic’s reputation. But someone less tech-savvy, like my parents, might trust Gemini to provide safe links, click the first result, and unknowingly download malware.
This is especially concerning because these tools sound confident and official. When you’re in a rush or not familiar with the tech, it’s easy to be misled.
If you’ve recently searched on Google, you’ve probably seen those large blocks of text at the top claiming to answer your question. These are called AI Overviews, generated automatically by large language models (LLMs). They summarize information pulled from across the web—similar to what Gemini would do, but integrated within Google Search.
While convenient, blindly trusting these summaries isn’t advisable. There have been instances where AI Overviews directed users to shady or fake websites. These sites may look like legitimate stores or services but are just scams designed to steal money or install malicious software.
The bigger issue is that these AI-generated snippets appear prominently in Google, which many people trust implicitly. Unlike with standalone LLMs, where users are somewhat cautious, many don’t realize these top results are AI-powered, leading to unintentional clicks on potentially harmful sites.
Thankfully, there are workarounds to disable AI Overviews, though they’re somewhat complicated. Considering the risks, it’s probably worth turning them off to rely on actual links and verified sources instead.
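One commonly shared workaround relies on Google's `udm=14` URL parameter, which requests the plain "Web" results view without AI Overviews. The parameter is undocumented, so it could change without notice, but as a sketch, a search URL that skips the AI summary can be built like this:

```python
from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    # udm=14 asks Google for its plain "Web" results tab, which
    # currently omits AI Overviews (undocumented, may change)
    params = {"q": query, "udm": "14"}
    return "https://www.google.com/search?" + urlencode(params)

print(web_only_search_url("screen recording app"))
```

Setting a custom search engine in your browser with `udm=14` baked into the URL template achieves the same effect for every search, without running any code.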
It’s not just Gemini facing these issues; other AI assistants make errors too. For example, Apple Intelligence features are often underwhelming. One problematic feature is Priority Messages in Mail, which is meant to highlight important alerts. However, it has flagged phishing emails from fake banks as important, apparently without checking the sender or content for red flags. That failure forced me to disable Apple Intelligence entirely on my parents’ iPhones.
The real concern is how much users trust these features blindly. If your device marks a message as important, you’re likely to believe it. When AI tools can’t even flag obvious scams, the risk of falling victim to fraud or malware increases, with serious real-world consequences.
The key is to avoid blindly trusting links or responses from AI. Whether it’s Gemini, ChatGPT, or Perplexity, treat every suggestion as a starting point—not the final say. Perplexity has generally been more reliable in citing credible sources, but it’s not foolproof.
If you’re searching for apps, always prefer official stores like the App Store, Google Play, or the developer’s own site instead of relying on AI to find download links. When browsing for sensitive information or making purchases, take extra time to check where a link actually redirects you, or search directly for the official website.
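When a developer publishes a checksum for their installer, comparing it against the file you actually downloaded is a quick way to confirm you got the genuine package rather than a repackaged one. A minimal sketch in Python using only the standard library (the file name and expected hash in the usage comment are placeholders, not real values):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 64 KiB chunks so large installers don't load into memory at once
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage sketch (placeholder file name and hash):
#   expected = "<hash from the developer's official download page>"
#   if sha256_of("installer.dmg") != expected:
#       print("Checksum mismatch: do not run this installer")
```

Many official download pages list a SHA-256 hash next to the file; if the one you compute doesn’t match, the file was altered somewhere along the way.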
Remember: don’t click the first link just because it appears in an AI response. It may seem trustworthy, but appearances can be deceiving.
While AI assistants are useful for quick overviews, organizing thoughts, or answering everyday questions, exercise caution when money, downloads, or personal data are involved. Always double-check and proceed carefully.