For more information please call  800.727.2766


Rise of Deepfakes in the Workplace

A recent article in the Anchorage Daily News highlighted the dangers that deepfakes pose to employees and employers. In the article's example, HR told an employee that she was suspended pending an investigation, barred from the workplace, and locked out of her computer. She was accused of leaving sexually explicit voicemails for a company executive. Although it took three weeks, the employee proved that she did not leave those voicemails and that they were AI-generated fakes.

“Deepfakes” are highly realistic audio, video, or image fabrications, powered by AI, that are used to harass, blackmail, and run smear campaigns. They often target women and spread anonymously across websites without the victim’s knowledge. As of 2023, 96% of deepfakes were sexually explicit. By 2024, nearly 100,000 explicit deepfake images and videos were circulating daily across more than 9,500 websites.

Employers should think about how to respond. They may be liable under Title VII if deepfakes affect workplace dynamics, and failure to act on known or reasonably foreseeable deepfake harassment may expose them to liability. The federal “Take It Down Act” provides a streamlined process for minors and victims of non-consensual intimate imagery to request removal from online platforms.

Employers may want to add handbook language addressing deepfakes, including a commitment to verifying any suspicious media before subjecting an employee to consequences. They should train managers and HR teams to spot possible fakes and respond carefully, and they should support employees who are victims. Even when cleared, employees may suffer social and emotional harm; employers can consider offering counseling, restoring access, and issuing a public statement where appropriate. Employers should also monitor legal developments carefully as states move to regulate this area.