OpenAI responds to employee concerns: No new AI technology will be released until necessary safeguards are in place
2024-06-05 05:50:31
Odaily News — Employees of OpenAI and Google DeepMind jointly voiced concerns that advanced AI poses serious risks and urgently needs regulation. In response, OpenAI issued a statement today emphasizing its commitment to providing powerful and safe artificial intelligence systems. OpenAI said that, given the importance of AI technology, it agrees with the substance of the open letter, and that serious discussion is crucial to advancing AI development responsibly. The company said it will continue to engage with governments, civil society, and other communities around the world to help create a healthy AI environment, and pointed to mechanisms such as its anonymous integrity hotline and its Safety and Security Committee, in which board members and company safety leaders participate, as effective means of overseeing AI.

OpenAI stated that it will not release new AI technology until the necessary safeguards are in place. The company reiterated its support for government regulation and its participation in voluntary commitments on AI safety. Regarding concerns about retaliation, a spokesperson confirmed that the company has terminated non-disparagement agreements for all former employees and removed such clauses from its standard resignation documents. (IT Home)

Earlier today, 13 former and current employees of OpenAI (ChatGPT), Anthropic (Claude), and DeepMind (Google), together with experts in the field of artificial intelligence, launched a petition called "The Right to Warn About AI". The petition advocates stronger whistleblower protections to raise public awareness of the risks of advanced artificial intelligence systems. Former OpenAI employee William Saunders commented that when dealing with potentially dangerous new technologies, there should be ways to share information about risks with independent experts, governments, and the public.