Sam Altman gave his testimony as part of a wide-ranging conversation about the future of AI.
Sam Altman, CEO of OpenAI and creator of ChatGPT, recently testified before the Senate Judiciary Committee on the big picture of artificial intelligence’s future.
The technology has advanced at breakneck speed, prompting Congress to try to understand what it entails.
Only months after OpenAI and ChatGPT began making serious headlines, Congress has made a rare move to try to ensure that Washington can keep up with the speed of the innovations coming out of Silicon Valley.
Lawmakers have long stepped out of the way of technological advancements being developed in Mountain View and Palo Alto. However, as evidence mounts that reliance on digital technology carries serious economic, cultural and social downsides, both parties are keen to regulate artificial intelligence, if for entirely different reasons.
The rapid integration of OpenAI’s products and other artificial intelligence tools has raised concerns.
Policymakers see the fast-moving integration of artificial intelligence into American society as a serious test. They are seeking to demonstrate that the high tech sector is not immune to the types of scrutiny faced by other industries.
Testifying alongside Altman were representatives from other organizations invested in artificial intelligence, including Christina Montgomery, chair of IBM’s AI ethics board, and Gary Marcus, a New York University professor who has been outspoken in his criticism of AI.
Congress appeared determined not to repeat its past failure to meet the moment, as it did with social media. Senator Richard Blumenthal (D-Conn.) gave opening remarks that were, in part, generated by AI, acknowledging that Congress has struggled to meaningfully regulate social media.
“Congress failed to meet the moment on social media,” said Blumenthal in his remarks before the Senate Judiciary Committee and the OpenAI CEO, among others. “Now we have an obligation to do it on AI before the threats and risks become real.”
Montgomery went on to acknowledge the obvious risks AI poses to workers across a spectrum of fields, particularly in areas that until now have been safe from automation.