The rapid development of artificial intelligence has propelled us forward in tremendous ways. From advancing medical research to early cancer detection, AI’s contributions have been truly impressive. However, even with its remarkable capabilities, some of the most widely used AI platforms, like ChatGPT, have shown troubling behaviors, sometimes going completely off the rails.
In recent years, numerous reports have surfaced describing unsettling incidents involving ChatGPT. For instance, there have been cases where it provided detailed instructions for criminal activities, accused individuals of harming their children, or subtly encouraged conspiracy theories. Another use has now emerged: people are turning to ChatGPT to generate obituaries for deceased loved ones, and some companies are building businesses around the demand for personalized memorials.
ChatGPT in the context of remembering the departed
According to The Washington Post, many funeral homes are now employing ChatGPT to draft obituaries without informing families. Staff say they review the output carefully, since an unguided AI can produce exaggerated stories about a loved one's passing or depict their death as entirely peaceful, regardless of reality.
A funeral home staff member, speaking anonymously, noted, “We don’t know whether their passing was peaceful, but we hope it was.” Surprisingly, it’s not just funeral establishments or entrepreneurial tech startups capitalizing on this trend; everyday people are also using AI to craft memorials—and they seem genuinely pleased with the results.
A story from a Nevada resident highlights this trend. After using ChatGPT to write their mother’s obituary, they expressed confidence that she would be very happy with how it turned out. They even mentioned plans to use the tool again when it’s time to honor their father. The individual excitedly shared their intent to explore more advanced features, saying, “Next, I’ll try Deep Research mode—it’s going to be fantastic.” Many find that AI makes it easier to articulate feelings during times of grief, providing a helpful outlet for expressing memories and sentiments.
Commercialization of AI-driven obituaries
Interestingly, this isn't solely a service offered by funeral homes or niche startups. Several companies now offer "AI obituary writing" for a fee. One such business, CelebrateAlly, founded by a former Microsoft employee, sells 100 credits for $5; an obituary typically costs around 10 credits, or just fifty cents. Users can select between different AI models, such as ChatGPT or Anthropic's Claude, to tailor the tone or content.
However, AI isn’t infallible. It can sometimes produce bizarre or fabricated details if not carefully guided. For example, when instructed to write a playful obituary for a fictional personality, the AI invented whimsical details—claiming the individual was “born on a chilly day,” “lived by the words of Groucho Marx,” and “died in a sunny embrace”—despite no such information being provided. Sometimes, it even creates fictitious nicknames, interests, or achievements, exaggerating or inventing stories about the deceased.
The rise of AI in the memorial industry
It appears that the use of AI tools like ChatGPT in memorial services is more than just a personal choice; it’s an industry that’s rapidly growing. Multiple companies now offer paid services for AI-generated obituaries, capitalizing on the demand. Prices are affordable, making it accessible for many individuals seeking to honor loved ones with a personalized touch—albeit one shaped by an algorithm.
Limitations and concerns
Despite its usefulness, AI-generated content isn’t without flaws. Instances have surfaced where the AI fabricates details—such as claiming a person founded a community theater or mentored a comedian—without any actual basis. These inaccuracies highlight the importance of human oversight, especially in sensitive contexts.
Beyond memorials, AI tools have misfired in other contexts. For example, Google's Gemini-powered search once advised adding glue to pizza, a clearly nonsensical suggestion. Meanwhile, recent Microsoft research suggests that over-reliance on AI can erode critical-thinking skills and discourage genuine research effort. Experts worry about the ethical, psychological, and moral implications as AI becomes more integrated into daily life.
Moreover, AI companionship apps like Character AI and Nomi have given rise to a troubling trend: some users develop obsessive relationships with their digital partners, sometimes to the detriment of real-world social interactions. There are reports of individuals becoming so engrossed that they role-play pregnancies with their AI partners or spend hundreds of dollars on these virtual relationships, deepening their immersion in a world of digital fantasy.
In summary, while AI’s potential is vast and varied, its misuse and unintended consequences serve as cautionary tales. As we continue to explore its capabilities, it’s crucial to navigate these innovations responsibly, mindful of both their benefits and risks.