As foreign jurisdictions pursue legislation to address recent developments in artificial intelligence (AI), the U.S. Senate has taken at least two notable steps of its own this month.
On Sept. 12, the Subcommittee on Privacy, Technology, and the Law of the Senate Committee on the Judiciary held a bipartisan public hearing titled “Oversight of A.I.: Legislating on Artificial Intelligence,” led by Sens. Richard Blumenthal and Josh Hawley. The next day, on Sept. 13, Senate Majority Leader Chuck Schumer led the first AI Insight Forum, an unprecedented closed-door meeting with business leaders in AI and Big Tech.
The Senate Committee Hearings and Bipartisan AI Framework
The public hearings focused on both how to best govern AI and the Bipartisan Framework for U.S. AI Act (Bipartisan AI Framework) that Sens. Blumenthal and Hawley formally introduced. According to Sen. Blumenthal, the Bipartisan AI Framework represents the first blueprint for enforceable AI protections and holds the promise of putting Congress on the right path to addressing the growing concerns associated with the development and deployment of AI.
Sen. Blumenthal delivered the opening remarks. In his statement, he indicated that he and Sen. Hawley created the Bipartisan AI Framework to allow the United States to benefit from the good that AI has to offer while providing appropriate safeguards to mitigate the potential harms it can cause. Sen. Hawley followed and agreed that AI has the potential to do much good for the country; however, he spoke more harshly about the risks associated with its use.
The invited witnesses were Brad Smith, vice chair and president of Microsoft Corporation; Bill Dally, chief scientist and head of research at NVIDIA; and Woodrow Hartzog, professor of law at Boston University School of Law and fellow at the Cordell Institute for Policy in Medicine & Law at Washington University in St. Louis. Generally, all three relayed the same message: they acknowledged the good that AI has to offer and agreed, to varying degrees, that it must be regulated. They agreed that the Bipartisan AI Framework is a good start but commented that still more should be done.
The Bipartisan AI Framework consists of five key components:
- Establishing a Licensing Regime Administered by an Independent Oversight Body.
First, the Bipartisan AI Framework suggests mandating that companies that develop sophisticated general-purpose AI models (e.g., ChatGPT) or models used in high-risk situations (e.g., facial recognition) register with an independent oversight body. Licensing requirements would include registration of information about AI models, and the approval of such licenses would be contingent on developers maintaining risk management, pre-deployment testing, data governance and adverse incident reporting programs.
This independent oversight body would have the authority to conduct audits of companies applying for licenses and to cooperate with other entities enforcing such standards. It would also monitor and report on developments in AI and its impacts, such as those on employment and the economy.
- Ensuring Legal Accountability for Harms.
Second, the Bipartisan AI Framework emphasizes that companies that utilize AI should be held liable — both to enforcers and to victims — when their models and systems breach privacy, violate civil rights or cause other forms of harm. Importantly, the framework would provide that Section 230 of the Communications Decency Act should not apply to AI, and thus would not shield companies from liability.
- Defending National Security and International Competition.
Third, the Bipartisan AI Framework underscores that Congress should use existing means, such as export controls, sanctions and other legal restrictions, to limit the transfer of AI models and other related technologies to adverse nations or nations engaged in gross human rights violations.
- Promoting Transparency.
Fourth, the Bipartisan AI Framework stresses the promotion of transparency. To achieve this aim, the framework recommends a number of disclosures. First, developers should, in simple language, disclose essential information about the training data, limitations, accuracy and safety of AI models to users and companies deploying such systems. Developers would also have to provide independent researchers with access to the information necessary to evaluate their AI models’ performance. Second, notice should be provided so users know whether they are interacting with an AI model or system. Third, AI system providers should be required to disclose digitally altered pictures or videos, otherwise known as deepfakes. Finally, the oversight body should establish a public database so consumers have easy access to AI model and system information. The database should detail whether a given system has experienced any questionable incidents or failures that resulted in harm.
- Protecting Consumers and Kids.
Lastly, companies deploying AI in high-risk or consequential situations, such as facial recognition, should be required to implement safety brakes. Such protections would include giving notice when AI is used to make adverse decisions and providing a right to human review in such situations. The Bipartisan AI Framework stresses that consumers should be in control of how their personal data is used in AI systems. In addition, strict limits have been recommended for generative AI involving children. Generative AI is a type of AI that can produce various forms of high-quality content, such as text, imagery, audio and synthetic data. ChatGPT and Craiyon are examples of such systems, and both are currently available online.
AI Insight Forum
The closed-door AI Insight Forum, led by Sen. Schumer, was meant as a listening session for lawmakers as they consider future regulations. Sen. Schumer announced that more AI Insight Forums will be held, with any such future meetings likely to be public sessions, and the relevant Senate committees will then be tasked with drafting appropriate legislation.
Some senators criticized the closed-door nature of the meeting and the fact that it was limited to tech companies. Sen. Hawley criticized such sessions as an opportunity for monopolists to shape regulatory frameworks in their favor. Despite the controversy, the meeting saw bipartisan interest, with over 60 senators participating in the closed-door AI Insight Forum.
The Bipartisan AI Framework and the AI Insight Forum serve as important steps toward the regulation of AI. We will continue to monitor how such proposed regulations impact AI innovation and growth.