State of Play on Artificial Intelligence Rulemaking
By Andy Keiser
February 29, 2024
Artificial Intelligence (AI) continues to be a top issue in Washington, with vigorous debates on its promises and potential pitfalls. Recent legislative and regulatory action has been occurring on several fronts. I categorize the debate into four buckets. On one end is a laissez-faire, hands-off approach. On the other end is a fear of AI advancing beyond human control, leading to a desire to stop AI development now. The two categories in the middle support either a light-touch regulatory framework or a more heavy-handed approach, such as creating a new federal agency to regulate AI.
Below is a summary of the ongoing lines of effort in Washington, state capitals and around the world.
Bipartisan Working Group on AI
In May 2023, the Senate created a working group to pursue the opportunities and tackle the threats presented by AI. Members of the group include Senate Majority Leader Chuck Schumer (D-NY) and Senators Mike Rounds (R-SD), Todd Young (R-IN), and Martin Heinrich (D-NM). The working group has hosted a series of briefings to provide additional information to members of the Senate. These briefings have included a first-of-its-kind forum featuring top tech leaders, including Elon Musk, Mark Zuckerberg, and Bill Gates, and a panel discussion of academic and industry experts focused on how AI is transforming health care.
The current plan is for this group to make recommendations to the full Senate via a white paper scheduled for release in the coming weeks. A subset of those recommendations could make their way into the Fiscal Year 2025 National Defense Authorization Act (NDAA), into standalone legislation such as that teased by Senate Commerce Chair Maria Cantwell, or both.
Legislation in Congress
The 118th Congress has at least 22 pieces of AI-related legislation pending. Some bills address oversight of the federal government’s approach to AI governance and regulation. Others would require federal agencies and the private sector to disclose the use of generative AI broadly, or the use of AI in certain contexts such as political advertisements. Still others assess export controls on national-interest technologies, including AI, to the People’s Republic of China, or support the use of AI in areas including cybersecurity, classification and declassification systems, advanced weather modeling, wildfire detection, airport efficiency and safety, precision agriculture, and prescribing certain pharmaceuticals.
One such bill, introduced by a bipartisan group of House lawmakers, would require federal agencies and their vendors to employ the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework. This framework sets guidelines for the responsible development and use of AI systems within government offices and in the private sector.
Each year the NDAA ends up being a catch-all for desired policy improvements at the Department of Defense (DoD) and beyond, including provisions on artificial intelligence. The final version of last year’s NDAA contained the following AI provisions of note: it gives the Pentagon and State Department several new AI-related responsibilities and establishes a Chief Digital and Artificial Intelligence Officer Governing Council for the military to provide policy oversight and ensure the responsible, ethical employment of data and artificial intelligence. At the State Department, the bill establishes the office of a Chief Artificial Intelligence Officer, who will, among other things, act as the principal advisor to the Secretary of State on the ethical use of AI and advanced analytics in conducting data-informed diplomacy.
White House Actions
On October 30, 2023, President Biden issued an Executive Order intended to ensure that America leads the way in seizing the promise and managing the risks of AI. The order creates new standards for AI safety and security; aids in protecting Americans’ privacy; advances equity and civil rights; stands up for consumers, patients, and students; supports workers; promotes innovation and competition; advances American leadership abroad; and ensures responsible and effective government use of AI.
In more recent developments, Deputy Chief of Staff Bruce Reed convened the White House AI Council on January 29. The White House released a fact sheet stating, “Agencies reported that they have completed all of the 90-day actions tasked by the E.O.” A major point of interest, albeit a controversial one, is the new industry reporting requirements on AI systems announced by the Commerce Department, which are authorized under the Defense Production Act.
Cybersecurity and Infrastructure Security Agency
On January 23, 2024, the Cybersecurity and Infrastructure Security Agency, along with the FBI, NSA, and allied cybersecurity centers from Canada, New Zealand, the UK, and Australia, issued joint guidance on engaging with AI. The focus of this guidance is on using AI systems securely rather than on developing secure AI systems; the authoring agencies encourage developers of AI systems to refer to the joint Guidelines for Secure AI System Development.
Specifically, organizations are provided with a summary of threats to AI systems, best practices for managing risk, and recommended mitigations for self-hosted and third-party-hosted AI systems. The guide also offers mitigation considerations, with guiding questions to help organizations assess an AI system’s cybersecurity implications.
Federal Trade Commission (FTC)
The FTC has been investigating whether Microsoft, Google, and Amazon have engaged in anti-competitive practices through their investments in and partnerships with the AI startups OpenAI and Anthropic. Per FTC Chair Lina Khan, “We’re scrutinizing whether these ties enable dominant firms to exert undue influence or gain privileged access in ways that undermine fair competition across layers of the AI stack.” The FTC has issued subpoenas to all five companies. The federal government’s primary concern revolves around the advantages these AI startups would gain by partnering with big-tech companies and their immense computing resources.
The Federal Trade Commission held its Tech Summit on January 25, 2024, which aimed to bring together representatives from across academia, industry, civil society organizations, and government agencies to examine AI’s potential influence across the tech economy. The summit featured panel discussions on AI and cloud infrastructure, data, consumer applications, and more, with speakers such as Atur Desai, deputy chief technologist for law and strategy at the Consumer Financial Protection Bureau, and Tania Van den Brande, director of economics at the U.K. communications regulator Ofcom. Chair Lina Khan and Commissioners Alvaro Bedoya and Rebecca Kelly Slaughter delivered remarks in between sessions.
Department of Defense (DoD)
DoD has taken steps to accelerate the adoption of data, analytics, and AI at the enterprise level, although key implementation guidelines remain in progress. Established in February 2022 through the consolidation of several entities, including the Joint Artificial Intelligence Center, the Defense Digital Service, and the office of the Chief Data Officer, the Chief Digital and Artificial Intelligence Office (CDAO) is now the single entity responsible for these issues and reports directly to the Deputy Secretary of Defense.
A June 2023 Government Accountability Office report identified several deficiencies and recommended CDAO prioritize establishing DoD-wide AI acquisition guidance, including leveraging key private sector perspectives. In response, CDAO highlighted the November 2023 Data, Analytics, and Artificial Intelligence Adoption Strategy, which includes guidance for DoD components on adopting and scaling AI capabilities, with additional guidance planned for release in March 2024. CDAO is also planning to create a federated AI construct implementation plan and publish a DoD instruction to serve as Department-wide AI acquisition guidance by September 2024. DoD’s FY24 budget request includes $1.8 billion for AI purposes.
Other Biden Administration Actions
In January 2023, NIST launched the AI Risk Management Framework, intended to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. This framework has since informed both legislation and Executive Orders.
The National Science Foundation (NSF) launched a two-year pilot of the National Artificial Intelligence Research Resource (NAIRR), a shared AI research infrastructure mandated by the White House Executive Order on AI. The NAIRR will provide free access to advanced computing, datasets, powerful AI models, and user support for American researchers and educators. NSF is working with 10 other agencies on the pilot and leveraging donations and in-kind services from 25 companies and foundations, including OpenAI, Hugging Face, the Omidyar Network, the Allen Institute, and NVIDIA.
The Department of Commerce has issued guidelines, building on past Executive Orders, that aim to prevent cloud services from being used by hostile foreign nations to create malign AI models. To track the creation of such models, the regulations require US cloud providers to report AI model training to Commerce whenever they have knowledge of a covered transaction.
AI on the Campaign Trail
AI has already influenced the 2024 election cycle. The nonprofits Common Cause and Public Citizen have petitioned state officials in Alabama, Louisiana, and Wisconsin to regulate the use of AI in campaign communications. These petitions argue that deceptive AI communications should fall under the fraud or deceptive-misrepresentation provisions of campaign laws. The New Hampshire attorney general’s office stated that it was investigating a robocall impersonating President Joe Biden that urged Democrats not to vote in the New Hampshire primary.
On the campaign trail, former President Trump has called for more regulation of AI-enabled ‘deepfakes.’ Predicting artificial intelligence “will be a big and very dangerous problem in the future,” Trump has suggested that “strong laws ought to be developed against AI.”
State Legislation on AI
States are also beginning to write new laws, some of which ensure transparency in reporting or monitor government use of AI. Oklahoma is looking to form regulations for AI that would require developers to conduct safety and impact assessments and make the results publicly available. Another bill directs the state’s Office of Management and Enterprise Services to compile an inventory of the government’s use of AI and to take steps to ensure it is not discriminatory or harmful to the public.
California is also looking to establish standards for AI models and promote responsible use by state government entities. The package of bills directs the California Department of Technology to establish safety standards for AI services and prohibits the state from using AI services that do not meet those standards. It also aims to bolster AI innovation and foster partnerships by establishing the California AI Research Hub.
Think Tank Positioning
Along the ideological spectrum, there are differing views on how AI should be regulated. The American Enterprise Institute, for example, has posited that regulation should come from within the industry. It has recommended an AI compact in which leading AI enterprises take the lead in fostering collaboration among top firms in the sector, promoting meaningful dialogue between the private and public sectors, advocating for an evidence-based, agile regulatory framework, and accelerating the widespread adoption of responsible AI practices.
The Heritage Foundation, on the other hand, argues for more regulation to restrict AI development.
More tech-friendly and center-left think tanks are promoting a lighter-touch regulatory framework that boosts funding or authorities for existing agencies such as the FTC.
International Efforts On AI
United Kingdom Prime Minister Rishi Sunak sat down with Elon Musk in early November 2023 for a conversation on artificial intelligence, AI safety, and the role governments can play in mitigating risks associated with AI.
At this year’s World Economic Forum annual meeting, known for hosting hundreds of discussions and speeches on wide-ranging topics from diplomacy to the environment to the latest technologies, AI applications and how to regulate them came up in more circles than ever before.
The United Nations has also started the process of reviewing the future of AI. The UN created an AI Advisory Body, which convened on October 26, 2023. The body issued an interim report, with the final report to be published ahead of the Summit of the Future in the summer of 2024.
In April 2021, the European Commission proposed the first EU regulatory framework for AI. It calls for AI systems that can be used in different applications to be categorized based on the risk they pose to users. Further, the European Commission created an EU-wide Artificial Intelligence Office, whose purpose is to supervise the implementation of the Artificial Intelligence Act and check companies’ compliance.
Additional regulations are being considered by the Council of Europe’s Committee on AI. The idea is to negotiate a global AI treaty that sets standards respecting democratic principles. Currently, the Biden administration is insisting on carveouts for the private sector that would exempt systems developed for national security purposes.
Have questions about AI regulation and legislation, or want to privately discuss how any of these efforts may impact your organization? Book a private consultation with Andy Keiser here.