
Current, former OpenAI, Google DeepMind employees warn of AI risks


Several current and former employees of OpenAI and other artificial intelligence companies published an open letter on Tuesday voicing their concerns about the fast-paced development of the AI industry and the absence of legislation protecting whistleblowers.

“AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this,” the employees wrote.

Signatories of the letter included former OpenAI employees Daniel Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright and Daniel Ziegler; former Google DeepMind employee Ramana Kumar; current DeepMind employee Neel Nanda, previously at Anthropic; and several other anonymous former employees.

In the letter, the employees said they were worried about “the serious risks posed by these technologies,” most of which are unknown to outsiders because companies “currently have only weak obligations to share some of this information with governments, and none with civil society.”

“We do not think they can all be relied upon to share it voluntarily,” they added.

“It’s really hard to tell from the outside how seriously they’re taking their commitments for safety evaluations and figuring out societal harms, especially as there is such strong commercial pressure to move very quickly,” one of the employees argued.

Speaking about whistleblower laws, the employees said that “ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.”

The open letter also emphasized that employees are blocked from sharing substantial information about AI capabilities by confidentiality agreements.

“It’s really important to have the right culture and processes so that employees can speak out in targeted ways when they have concerns,” one of the employees said.

In response to the letter, the Microsoft-backed company said it is proud of its “track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk.”

OpenAI also highlighted that it has an anonymous integrity hotline and a Safety and Security Committee to protect whistleblowers.


Source: www.dailysabah.com
