

Federal AI Legislation: Should We Hold Our Breath?

Artificial intelligence (AI) is transforming, if not upending, everything we do, and there is general agreement that Congress must provide a national legal framework that addresses the host of public policy challenges created by AI. And yet one would be forgiven for being skeptical that it will manage to send a bill to the President’s desk in the foreseeable future.

Let’s start with policy: what might AI legislation look like? On the table is a smorgasbord of tech policy issues, including but not limited to:

Privacy–what is the legal basis for the ingestion of personal data to train AI algorithms, and what happens to personal data processed by AI systems once they are deployed? These issues are aggravated by the lack of a comprehensive national privacy law. Congress will thus have to bootstrap foundational privacy provisions into an AI bill without kneecapping the development of AI.

Copyright–what is the legal basis for the use of copyright-protected material to train generative AI systems? The problem here is that, in the absence of a license, “fair use” is a case-by-case legal determination–was this specific use of this specific work “fair”?–whereas the current practice is to ingest massive amounts of internet data. This policy battle will be intense.

Bias–AI algorithms that are either poorly developed or trained on inappropriate data sets generate biased outcomes that perpetuate discrimination. It is not easy to draw bright lines and verify that a system is not biased, and this issue has a huge potential to erupt into a partisan battlefield (see below).

Risk–while most agree that ‘AI legislation should be risk-based’, most also disagree about acceptable levels of risk and how risk should be managed. And what to think of the “existential risks” on whose existence the AI safety community cannot even agree?

Explainability–Sen. Schumer has described explainability as “one of the thorniest and most technically complicated issues we face”, because at this stage it remains difficult for the developers of most AI systems to explain why a system produces a particular result.

Accountability–should Congress mandate independent audits to verify compliance, or allow AI developers and implementers to self-assess and self-certify their compliance?

Should section 230 protect generative AI?–Reasonable people will agree to disagree about whether sec. 230 does or does not already apply. Sens. Hawley (R-MO) and Blumenthal (D-CT) have a bill to clarify that, hell no, it should not. Given how many on the Hill have soured on sec. 230, it’s hard to believe Hawley and Blumenthal’s view will not carry the day.

Malicious uses of AI–it is very unclear how to address porn deep fakes, disinformation or AI-invented chemical weapons without delving deep into some of the content moderation dilemmas that have hobbled attempts to reform sec. 230.

That’s the policy, or at least most of it. What about the process? What is happening on the Hill? So far, Congress has been mostly in fact-finding mode, receiving briefings and holding almost a dozen hearings in the last few months alone. Few bills have been introduced.

As Sen. Heinrich (D-NM) has said, “One of the interesting things about this space right now is it doesn’t feel particularly partisan. So we have a moment we should take advantage of.” The policy debate around AI has not (yet) taken shape along partisan lines because it is still nascent, legislators do not feel confident enough in their understanding of the technology and its impact to take aggressive positions, and the potential partisan implications–if any–of the issues have not revealed themselves yet.

So far, the most important process-related development was the launch by Sen. Schumer on June 21 of his “SAFE Innovation in the AI Age” framework. He intends to convene in fall 2023 a series of “insight forums” where “the top minds in artificial intelligence (…) the top AI developers, executives, scientists, advocates, community leaders, workers, national security experts” will educate the legislators so that the latter “can translate these ideas into legislative action.” The challenge will be to ensure that this unity of purpose does not break down when the ‘Schumer process’ turns back to the “committee chairs [and their ranking members], once they hear from our forums, to develop the right proposals.”

That’s the process. What is the likely outcome? To be sure, congressional appetite for action is strong. There is a widely shared view that, as Sen. Blumenthal has said, “we had the same choice when we faced social media. We failed to seize that moment (…) Now we have the obligation to do it on AI before the threats and the risks become real.” There is also a realization that the U.S. may be the global technology leader but it is not seen as the technology policy leader, and that it must offer an alternative to the EU, which is why Sen. Schumer commented that “none [of the foreign AI laws] have really captured the imagination of the world (…) so our goal is to come up with an American proposal (…) and we believe, if it’s good enough, the rest of the world will follow.”

There’s no denying that the issues raised by AI are numerous, pressing and momentous, which puts Congress under major pressure to act. And yet these issues are also extremely thorny, which raises the likelihood that substantive disagreements will undermine efforts to build a bipartisan and bicameral majority. Congress’s recent record when tackling tough tech issues is meager.

In today’s Washington, the surest way for an idea to die is to be partisan, and AI is at great risk of being seen as “woke”, to use the dirtiest political word of our era. Considering recent conservative fury against Silicon Valley, it is not hard to imagine the GOP attacking attempts to root out AI bias as an extension of the diversity, equity and inclusion (DE&I) agenda it rejects.

Sen. Schumer’s ad hoc process is intended to bridge the partisan and committee divides–and if anyone on the Hill knows how to navigate the process to produce results, it’s Schumer! That said, he can only work miracles in the Senate: he would need from the House what one rarely sees in Washington these days–bipartisan, bicameral and cross-committee cooperation.

And then there are the roadblocks that have mightily contributed to derailing bills on other tech-related topics: will federal legislation preempt the states (the longer we wait, the likelier it is that states will have legislated and put pressure on their congressional delegation not to displace their efforts–remember how Speaker Pelosi put a broadly supported federal privacy bill to the sword two years ago?), how will the law be enforced (by federal entities only or by State attorneys general too? And will individuals be given a private right of action?), and will a federal agency like the Federal Trade Commission (FTC) be given authority to supplement the statute with regulations? These seemingly mundane considerations often seal the fate of otherwise promising bills.

Finally, let’s not forget that it’s just not easy to understand AI, even when you’re a techie. In characteristically caustic fashion, Sen. Cruz (R-TX) quipped that “Congress doesn’t know what the hell it’s doing in this area. This is an institution where I think the median age in the Senate is about 142. This is not a tech-savvy group.” This is going to slow down and complicate the process.

So if we don’t enact federal AI legislation, then what? An old Washington favorite would be to establish a blue-ribbon commission to investigate and propose a way forward. Given how keen everyone on the Hill seems to be to legislate the issue now, a blue-ribbon commission isn’t likely to happen unless Congress fails to come to an agreement. At that point, creating a commission would be a cop-out, and the likelihood that it would eventually lead to a law would be minimal–even more so as the states will have raced ahead. A souped-up variation on the blue-ribbon commission idea is to punt the whole thing to a new agency granted the authority to regulate AI. Republicans, however, generally don’t like to grant broad regulatory power to federal agencies, and they like even less to create new agencies.

Meanwhile, federal and state regulators will continue to use their existing authorities; witness the statement jointly issued by four federal regulators, which the FTC swiftly put into practice by opening a wide-ranging investigation of OpenAI, in stark contrast to the warmth with which its CEO Sam Altman was received on the Hill when he testified in May. In parallel, the courts will have the opportunity to weigh in on some of the issues raised by AI, most notably privacy and copyright.

Finally, to keep regulators at bay as well as reassure the public and influence the legislative process, industry should continue to develop–and hopefully live up to–best practices, egged on by NIST and its world-class AI Risk Management Framework.

Request a consultation with Franck Journoud here to discuss your organization’s interests in AI regulation or overall tech regulation and its impacts.


