Digital Phablet


Former Employee’s Digital Clone Continues Working, With Approval (Beta Testing)

by Seok Chen
April 7, 2026
in AI
Reading Time: 2 mins read

In a recent development that has captured wide attention, a game media company based in Shandong has begun experimenting with creating digital avatars of former employees to keep them “working” for the company. This bold move showcases the rapidly evolving landscape of artificial intelligence and digital employment, sparking discussions about innovation and privacy concerns.


According to Xiao Yu, a human resources officer at the company, the process involves former staff who have given their consent. One such colleague, who previously served as an HR specialist, now has a digital twin capable of handling basic tasks such as customer inquiries, appointment scheduling, and creating PowerPoint presentations and spreadsheets. Xiao Yu describes this digital avatar as “a version of an intern, but a bit clumsy, only able to follow simple commands.”

In videos released by the company, the digital employee introduces itself: “Hello, I am the digital twin of former employee XX. Feel free to ask me questions. I will reply based on the documents and information I had during my time here.” The avatar’s appearance and its introductory script were supplied by the former employee, who also uploaded the data used to train the AI.

Xiao Yu, whose company employs more than 100 people, said the experiment began spontaneously. “Yesterday, we were joking around at work; today, this digital version is ready to take on some of the tasks,” he explained.


He also emphasized that this initiative is an internal pilot project aimed at exploring how AI can handle simple, routine work, and it has not yet been made available to the public. “The digital twin still has a lot to learn,” he noted. Future plans include developing humanoid robots that can act as receptionists, guide visitors, and handle office appointments.

When asked about concerns over job security, Xiao Yu responded with a noncommittal “it’s up to fate,” adding that rather than worrying excessively, he prefers to embrace AI technology and explore frontier innovations like brain-computer interfaces. He sees the company’s AI experiments as part of a broader trend designed to improve products and services, not to replace employees.

Legal experts, however, have sounded the alarm. They warn that training AI models on data from former employees, such as chat logs, work emails, and personal work habits, without their approval may violate privacy laws. Under China’s Personal Information Protection Law, such data constitutes personal information, and private communications may include sensitive personal information. Collecting or using this data for AI training without explicit consent potentially infringes on individuals’ rights and privacy.

The Interim Regulations on Managing Generative AI Services further stipulate that any AI training data involving personal information must be obtained with the individual’s consent. Using employees’ code, documents, or project information for AI development without permission could be considered a privacy breach, carrying legal consequences. Severe violations might lead to criminal charges, with penalties including imprisonment for up to three years or detention. Particularly egregious cases could result in sentences ranging from three to seven years and fines.

This burgeoning use of AI to recreate digital likenesses of personnel raises significant ethical and legal questions, as regulators and companies navigate the balance between technological innovation and respecting individual privacy rights.

Seok Chen
Seok Chen is a mass communication graduate from the City University of Hong Kong.


© 2026 Digital Phablet
