Overview
On October 30, 2023, President Joe Biden signed an executive order that establishes new federal standards and regulations for the safe and ethical use of artificial intelligence (AI). Covering a wide range of sectors potentially affected by the growth of AI, the order is the most robust federal action yet taken to mitigate the risks of AI. It arrives amid a swirl of developments in the private sector, with Microsoft and OpenAI CEO Sam Altman taking on larger roles in shaping the future of AI. While the executive order provides important guidance for the US to move forward on AI regulation, further action by Congress and federal agencies will determine whether the US can build a regulatory framework that allows for continued growth.
Background
As the next step following the administration’s 2022 Blueprint for an AI Bill of Rights, October’s executive order aims to balance technological progress with protecting Americans’ privacy and safety. The new standards continue the administration’s stated priority of managing the development of AI responsibly. To implement them, the order directs numerous federal agencies to develop guidelines for the safe and ethical development of AI, particularly around risks to consumer privacy, intellectual property, national security, and public safety. The order also covers labor protections, the use of AI in health care, and expanded immigration pathways for those with AI skills, among other subjects.
The order pledges federal support for privacy-enhancing technologies, directs agencies to re-evaluate how they handle personally identifiable information, and urges Congress to adopt comprehensive data privacy legislation. It also recognizes the need for provenance technologies, which can verify the origin and history of content, and directs the Department of Commerce to develop guidance on watermarks that could function like a “nutrition label” for authenticating AI-generated content.
One of the more prescriptive public safety elements of the order requires AI system developers to share certain information with the government, such as notification when they are training an AI model and the results of safety tests performed on those models. The consequences of a failed safety test are unclear, but this provision invokes the Defense Production Act (DPA) of 1950, which gives the president authority to direct private companies to allocate resources and prioritize projects in support of national defense. Both the Trump and Biden administrations also used the DPA in response to the COVID-19 pandemic.
Since the executive order was released in late October, OpenAI has undergone leadership upheaval with the ousting and swift reinstatement of co-founder and CEO Sam Altman, a reinstatement that came with a stronger partnership between OpenAI and Microsoft. The episode drew attention to OpenAI’s nonprofit status and, more importantly, its relationship with Microsoft. As noted in our June Tech Digest, Altman has been outspoken about the need for AI regulation and has testified on it before Congress, while Microsoft has gone as far as publishing its own Blueprint for Governing AI, which called for “safety brakes” and a licensing regime to govern AI. Microsoft now has an extensive partnership with OpenAI, including a nonvoting seat on OpenAI’s board and a $10 billion investment. As the Financial Times reports, the US Federal Trade Commission (FTC) and the United Kingdom’s Competition and Markets Authority are examining this partnership over antitrust concerns.
Why Is This Important?
The Biden administration seeks to confront several issues head-on with the executive order. Among them are concerns about consumer privacy, intellectual property, and misinformation, all of which have been extensively detailed in this digest over the last year.
Earlier this year, the FTC issued a “Civil Investigative Demand” to OpenAI concerning potential violations of consumer protection laws related to its ChatGPT language model. The FTC is also concerned about early signs of market concentration in AI, as well as AI-enabled scams.
In November, during a speech at the Stanford Institute for Economic Policy Research, FTC Chair Lina Khan noted that the FTC is closely examining how Big Tech companies (like Microsoft) provide tools to AI startups and developers. She argues that startups’ dependence on Big Tech for such inputs, including cloud infrastructure and processing power, could undermine competition if clear rules are not in place. AI “hallucinations” (the generation of false information), deepfakes, and fraudsters’ mass deception efforts also highlight the need for action. Just this year, an AI-generated image purporting to show an explosion near the Pentagon caused the stock market to dip before it could be flagged as fake, according to reporting by NPR. In addition, artists and creators have already filed multiple lawsuits against big tech companies like Meta and Microsoft for using their work to train AI models, according to Reuters.
The executive order addresses intellectual property violations and the spread of misinformation in part through labeling or “watermarking” AI-generated content. Fifteen major tech companies, including Amazon, Google, Meta, Microsoft, and OpenAI, have already voluntarily pledged to label AI-generated content, according to the US Chamber of Commerce, but the order represents a first step toward creating federal norms for labeling AI content. Additional guidance and clarity on watermarking standards are important, as some watermarks live only in a file’s metadata and are not immediately visible to consumers. Reporting by WIRED has also found that some existing labels can be easily tampered with or removed, as the sketch below illustrates.
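The fragility of metadata-only labels is easy to demonstrate. Below is a minimal sketch in Python using the Pillow imaging library; the ai_provenance field name and label contents are hypothetical, as neither the executive order nor the voluntary pledges prescribe a particular labeling scheme.

```python
# Illustration: a metadata-only "AI-generated" label on a PNG image.
# Requires Pillow (pip install Pillow). The field name and value below
# are hypothetical; no official watermarking standard is implied.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# A stand-in image (in practice, this would be model output).
image = Image.new("RGB", (64, 64), color="gray")

# Embed a provenance label in a PNG text chunk -- invisible when viewing the image.
label = PngInfo()
label.add_text("ai_provenance", "generated-by: example-model-v1")
image.save("labeled.png", pnginfo=label)

# Reading the label back requires inspecting metadata, not pixels.
print(Image.open("labeled.png").text)  # {'ai_provenance': 'generated-by: example-model-v1'}

# A trivial re-save discards the text chunk -- the label is gone.
Image.open("labeled.png").save("stripped.png")
print(Image.open("stripped.png").text)  # {}
```

Because the label lives in a text chunk rather than in the pixels themselves, any tool that re-encodes the image silently discards it, which is one reason clearer standards for more durable, content-embedded watermarks are seen as important.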
The executive order also urges Congress to enact comprehensive privacy legislation. Despite high public approval and a near-unanimous 53-2 vote in the House Committee on Energy & Commerce, a congressional stalemate in July of last year thwarted the American Data Privacy and Protection Act, a comprehensive data privacy bill reviewed in August’s Tech Digest. Several bills have since been introduced, but Congress has yet to pass comprehensive data privacy legislation, which some lawmakers told Roll Call is a necessary foundation for any regulatory framework for AI.
What Happens Next
The order gave federal agencies between one and nine months to begin work on its various provisions, though most agencies have more than six months to develop guidance on the most relevant items. By the summer of 2024, federal agencies should begin receiving public input and publishing findings on many of the directives. Throughout that process, next year’s Congress will look to tackle both comprehensive data privacy and AI legislation, moving the US closer to a robust framework for technology regulation.
Check our Tech Regulation Tracker for updates on AI policy and regulation.