House task force releases sweeping end-of-year report on AI
The House Task Force on Artificial Intelligence (AI) released its sweeping end-of-year report Tuesday, laying out a roadmap for Congress as it crafts policy surrounding the advancing technology.
The 253-page report takes a deep dive into how the U.S. can harness AI in social, economic and health settings, while acknowledging how the technology can be harmful or misused in some cases.
“This report highlights America’s leadership in its approach to responsible AI innovation while considering guardrails that may be appropriate to safeguard the nation against current and emerging threats,” task force co-chairs Jay Obernolte (R-Calif.) and Ted Lieu (D-Calif.) wrote in a letter to Speaker Mike Johnson (R-La.) and Minority Leader Hakeem Jeffries (D-N.Y.).
The report follows a months-long probe by Obernolte, Lieu and 22 other congressional members, who spoke with more than 100 technical experts, government officials, academics, legal scholars and business leaders to produce dozens of recommendations for different industry sectors.
Amid both excitement and concerns over the emerging technology, lawmakers introduced more than 100 bills regarding AI use this session, though most did not make it across the finish line, leaving Congress with an uncertain path forward on the issue.
The report seeks to serve as a blueprint for future legislation and other actions, breaking recommendations into 14 areas of society ranging from healthcare to national security to small businesses and more.
Intellectual property issues have been a key point of contention in the AI space, prompting numerous lawsuits against major AI companies over the use of copyrighted content to train their models.
While the lawmakers noted that it’s still unclear whether legislation is needed, they recommended that Congress clarify intellectual property laws and regulations.
The task force also emphasized the need to counter the growing problem of AI-generated deepfakes. While lawmakers have advanced several anti-deepfake bills, none have managed to clear Congress.
With the rise of synthetic content, the report noted that there is no single, perfect solution for authenticating content and suggested that Congress focus on supporting the development of multiple solutions.
Task force members also recommended that lawmakers consider legislation that would clarify the legal responsibilities of the various individuals involved in the creation of synthetic content, including AI developers, content producers and content distributors.
Another key debate amid the rapid development of publicly accessible AI models has been open versus closed systems. Open systems give the public access to the inner workings of AI models and allow others to customize and build on top of them.
The principal concern with open systems is that they can be manipulated by nefarious actors. However, the task force found that there is limited evidence to suggest that open AI models should be restricted.
The lawmakers urged Congress to focus on real, demonstrable harms from AI, while also evaluating the risk that the technology could be used to aid chemical, biological, radiological and nuclear threats.
When it comes to federal agencies, the lawmakers said the benefits of the government’s use of AI are “potentially transformative,” while noting improper use can risk individual privacy, security and fair treatment of all citizens.
Lawmakers found that knowledge about AI varies widely across the federal workforce and recommended that agencies pay close attention to the “foundations of AI systems” to harness the technology’s uses, including the reduction of administrative bureaucracy.
Still, the report noted the federal government should be mindful of algorithm-informed decisions and recommended that agencies be transparent about AI’s role in governmental tasks.
The report comes nearly two months after the Biden administration issued its first-ever national security memorandum on AI. The memo similarly urged U.S. agencies to take advantage of AI systems for national security and maintain an edge over foreign adversaries.
Lawmakers in Tuesday’s report acknowledged U.S. rivals are adopting and militarizing AI and recommended Congress oversee AI activity related to national security along with the policies for autonomous weapons use.
Johnson said Tuesday the task force report gives leadership a heightened understanding of the technology. It comes after the Speaker signaled hesitation earlier this year about overregulating the AI development space.
“Developing a bipartisan vision for AI adoption, innovation, and governance is no easy task, but a necessary one as we look to the future of AI and ensure Americans see real benefits from this technology,” Johnson wrote in a release Tuesday.
Describing the report as “serious, sober and substantive in nature,” Jeffries added, “I’m encouraged by the completion of the report and hopeful it will be instructive for enlightened legislative action moving forward.”
Jeffries told reporters last week he is hoping AI-related legislation is included in Congress’s continuing resolution, which has yet to be released as lawmakers race against a shutdown deadline.
While lawmakers appear hopeful about AI’s uses, the report also recognized the potential shortcomings of AI, specifically when it comes to civil rights.
“Adverse effects from flawed or misused technologies are not new developments but are consequential considerations in designing and using AI systems,” the report stated. “AI models, and software systems more generally, can produce misleading or inaccurate outputs. Acting or making decisions based on flawed outputs can deprive Americans of constitutional rights.”
To counter this, the lawmakers recommended humans maintain an active role to help identify flaws when AI is used for high-stakes decisions, and said regulators must have the tools and expertise to address these risks.
One way this could be done is by having AI expert agencies work with regulators to develop specific research programs focused on identifying the different risks, the lawmakers said.