OpenAI CEO Warns of the Dangers of Artificial Intelligence

Sam Altman urged the US Senate to introduce new AI regulations and rules.

On Tuesday, Sam Altman, CEO of ChatGPT developer OpenAI, testified before the US Senate Judiciary Subcommittee on Privacy, Technology, and the Law, warning lawmakers about the potential dangers of artificial intelligence and urging the Senate to introduce new laws to regulate the AI market.

Calling the ongoing AI boom the technology's "printing press moment", Altman noted that without government-imposed safeguards, constantly improving artificial intelligence could be harmful in the long run, enabling massive disinformation campaigns and other risks.

"We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models," Altman told the committee.

Additionally, Altman acknowledged AI's impact on the economy, noting that although the technology will create new jobs, many people might lose their current jobs as they are replaced by AI-powered programs.

"There will be an impact on jobs. We try to be very clear about that," he said. "We believe that the benefits of the tools we have deployed so far vastly outweigh the risks, but ensuring their safety is vital to our work."

Furthermore, Altman proposed several actions a new agency could take to mitigate the potential risks of AI, including licensing requirements that AI development companies would need to meet in order to operate in the field, as well as independent audits of companies like OpenAI.

You can watch Altman's full testimony here.
