OpenAI is plagued by safety concerns: what to know
Image Credits: Curto News/Bing AI Creator


OpenAI is a leader in the race toward artificial general intelligence (AGI), but it faces growing criticism over safety. Former employees and anonymous sources accuse the company of prioritizing results over rigorous safety testing.


An anonymous employee told the Washington Post: “They planned the launch party before they even knew if it was safe to launch [the product].”

These concerns are not isolated. Employees recently called publicly for more safety and transparency, and the company's safety team was disbanded after a co-founder departed.

OpenAI says it takes safety seriously and cites its commitment to helping other organizations, even competitors, manage risks. The company also keeps its models closed for security reasons, a practice that generates debate.


“We are proud to provide the safest and most capable AI systems,” said a spokeswoman for OpenAI.

Experts warn of the potential risks of superintelligent AI. A U.S. report compares the impact of AI to that of the atomic bomb.

Amid the criticism, OpenAI announced partnerships for secure bioscience research and created an internal scale to track AI progress.


But experts caution that these announcements may be superficial moves to manage the crisis. The real problem, they say, is the lack of guarantees for society outside Silicon Valley.

“AI can be revolutionary,” said FTC Chair Lina Khan, “but the control of these tools by a few companies is troubling.”

If the accusations about safety are correct, OpenAI may not be prepared to lead the development of superintelligent AI. It is urgent that the company prioritize transparency and safety for the good of society.

