Experts warn of ‘human extinction’ if risks of AI ignored


OpenAI and Google DeepMind employees warn employers in an open letter about retaliation against staff who raise concerns

Insurance News

By Dexter Tilo

Some current and former employees of artificial intelligence companies are calling on their employers to let staff raise concerns about AI without facing retaliation.

In an open letter, employees at OpenAI, Google DeepMind and Anthropic said AI company workforces are among the few people who can hold their employers accountable to the public.

“However, comprehensive confidentiality agreements prevent us from raising our concerns except to the very companies that may not be addressing these issues,” the letter said.

Even then, employees said, they fear they could face retaliation for voicing their concerns about the technology.

“Ordinary whistleblower protections are inadequate because they focus on illegal activities, while many of the risks we are concerned about are not yet regulated,” they said.

“Given the history of such cases across the industry, some of us are justifiably fearful of various forms of retaliation. We are not the first to encounter or talk about these issues.”

Commitment to employers

To address these concerns, employees asked AI companies to commit to four principles that protect their workforce from retaliation.

This includes a commitment that employers “will not enter into or enforce any agreement that prohibits ‘disparagement’ or criticism of the company over risk-related concerns, nor will they retaliate for risk-related criticism by impeding personal economic benefits.”

Organizations should also commit to establishing an anonymous process for current and former employees to raise risk-related concerns with the organization.

Employers should also commit to a culture of open criticism and allow current and former employees to publicly raise risk-related concerns about their technologies, as long as trade secrets and other intellectual property are protected.

Finally, employers should also ensure that they do not retaliate against current and former employees who publicly disclose risk-sensitive confidential information after other processes have failed.

The signatories said they believe risk-related concerns should always be raised through an appropriate, anonymous process.

“However, in the absence of such a process, current and former employees should retain the freedom to communicate their concerns to the public,” they said.

“These risks range from further exacerbating existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems, potentially leading to human extinction,” they said.

However, AI companies have “strong financial incentives to evade effective supervision.”

“AI companies have extensive non-public information about the capabilities and limitations of their systems, the adequacy of their protections, and the risk levels of various types of harm. However, they currently have only weak obligations to share some of this information. We do not believe that they can be relied upon to share all of it voluntarily,” the signatories added.




Source: www.insurancebusinessmag.com

Published: 2024-06-05 16:31:57