AI Oversight Emerges as a Rare Area of Potential Regulatory Agreement

On Tuesday, the Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law convened a highly anticipated hearing focused on oversight of artificial intelligence and the development of a governance framework to regulate the emerging technology. The hearing – which members emphasized was the first of many AI-focused sessions to be held this summer – brought OpenAI CEO Sam Altman to testify before Congress for the first time. He was joined by IBM Chief Privacy Officer Christina Montgomery and AI expert Dr. Gary Marcus.

While Congress is no stranger to oversight hearings focused on regulating the tech sector, this hearing felt different from the countless others held in recent years – policymakers and industry unanimously agreed that AI technology needs to be regulated in some capacity, though views still differed on what exactly that regulation should look like. The hearing was remarkably bipartisan and demonstrated a strong desire for collaboration between the public and private sectors. There was consensus that the federal government should develop strong rules to govern AI’s growing impact on society and deter its negative uses and consequences, while also ensuring that the U.S. remains at the forefront of innovation as countries around the world accelerate their own AI efforts.

At a time when Congress is deeply divided on a range of topics (including other areas of technology regulation), AI regulation represents an opportunity for bipartisan action, with significant industry involvement in developing value-driven standards that can endure over the long term as the technology continues to develop. The hearing showcased several key areas that policymakers and industry alike will have to tackle as policy solutions are considered:

Creation of an Independent Agency: The hearing highlighted proposals to create an independent regulatory agency to oversee AI – including R&D, standards development, mitigation of negative impacts, and more. One prominent proposal would authorize such an agency to issue licenses for AI systems and to revoke a license for noncompliance with established safety standards. While there was agreement that such an agency would be pivotal, policymakers will need to consider how it would operate without overlapping the jurisdiction of existing agencies.

Data Privacy: AI and data privacy are two of Congress’s most important areas of technological focus right now, and this hearing highlighted the considerable overlap between them. Members of Congress strongly asserted their desire to establish a national data privacy standard, and the witnesses affirmed that their companies are working to ensure users can opt out of having their data used to train AI systems. In developing AI regulation, Congress will have to consider how to integrate privacy protections across the board and tailor them not only to the AI of today, but also to the AI of the future.

Section 230 and Liability Standards: Throughout the hearing, members from both sides of the aisle voiced regret over past efforts to regulate social media companies, stating that they hope to meet the moment on AI proactively rather than retroactively. These concerns centered on Section 230 liability protections, which shield social media companies from liability for content posted by independent users on their platforms. Congress will have to consider a different liability model for AI systems, since there is a clear difference between user-posted content hosted on social media and the content that generative AI systems themselves produce.

Market Concentration and Antitrust: Policymakers voiced concerns that as Big Tech accelerates AI research and development, another round of corporate concentration within the technology sector is likely. At the same time, real AI innovation is taking place within the open source community and among smaller research organizations. Policymakers must ensure that future regulation does not overly burden smaller entities that lack the resources to comply with strict requirements, or it risks hampering innovation.

Transparency and Disclosure: In discussing the enforcement of AI protocols, witnesses and lawmakers alike highlighted the need for greater transparency about the data AI systems are trained on, noting that this data provides crucial insight into how a model works and what biases it may develop. It is clear that policymakers will need to incorporate transparency and data-disclosure standards into future regulation.

Addressing AI’s Impacts: Lastly, but perhaps most importantly, there are increasingly pervasive fears about the harmful consequences AI can have on society – especially in areas such as education, copyright, elections, workforce opportunities, misinformation and disinformation, cybersecurity, and human rights. While this is the most daunting area of consideration, it is vital that policymakers equip forthcoming regulation with the enforcement mechanisms and resources necessary to protect against these harms.
