In a recent development that has captured wide attention, a game media company based in Shandong has begun experimenting with creating digital avatars of former employees to keep them “working” for the company. This bold move showcases the rapidly evolving landscape of artificial intelligence and digital employment, sparking discussions about innovation and privacy concerns.
According to Xiao Yu, a human resources officer at the company, the process involves former staff who have given their consent. One such colleague, who previously served as an HR specialist, now has a digital twin capable of handling basic tasks such as answering customer inquiries, scheduling appointments, and creating PowerPoint presentations and spreadsheets. Xiao Yu describes the digital avatar as "a version of an intern, but a bit clumsy, only able to follow simple commands."
In the videos released, the digital employee introduces itself with the message: "Hello, I am the digital twin of former employee XX. Feel free to ask me questions. I will reply based on the documents and information I had during my time here." The avatar's appearance and introductory script were supplied directly by the former employee, who also personally uploaded the data used to train the AI.
Xiao Yu, whose company employs more than 100 people, said the motivation behind the experiment was spontaneous. "Yesterday, we were joking around at work; today, this digital version is ready to take on some of the tasks," he explained.
He also emphasized that this initiative is an internal pilot project aimed at exploring how AI can handle simple, routine work, and it has not yet been made available to the public. “The digital twin still has a lot to learn,” he noted. Future plans include developing humanoid robots that can act as receptionists, guide visitors, and handle office appointments.
When asked about concerns over job security, Xiao Yu responded with a noncommittal “it’s up to fate,” adding that rather than worrying excessively, he prefers to embrace AI technology and explore frontier innovations like brain-computer interfaces. He sees the company’s AI experiments as part of a broader trend designed to improve products and services, not to replace employees.
Legal experts, however, are sounding the alarm. They warn that training AI models on data from former employees—such as chat logs, work emails, and personal work habits—without their approval may violate privacy laws. Under the Personal Information Protection Law, such data constitutes personal information, and private communications may include sensitive personal information. Collecting or using this data for AI training without explicit consent potentially infringes on individuals' rights and privacy.
The Interim Measures for the Management of Generative AI Services further stipulate that any AI training data involving personal information must be obtained with the individual's consent. Using employees' code, documents, or project information for AI development without permission could constitute a privacy breach carrying legal consequences. Severe violations may lead to criminal charges, with penalties including fixed-term imprisonment of up to three years or criminal detention; particularly egregious cases could result in sentences of three to seven years and fines.
This burgeoning use of AI to recreate digital likenesses of personnel raises significant ethical and legal questions, as regulators and companies navigate the balance between technological innovation and respecting individual privacy rights.