
From Silicon Valley to Capitol Hill: AI's Journey Towards Regulation

April 29, 2024
A cohesive nationwide framework for regulating artificial intelligence is crucial. Given the current state of Congress, that goal may seem distant, but the need for such a framework cannot be overstated.

AI has immense potential for positive outcomes but also carries significant risks. Americans have been using AI for years, often without realizing that tools like Google Maps and Spotify's recommendations are built on AI technology.

Last week, The Hill hosted a panel entitled “Advancing America’s Leadership in AI.” The panel brought together industry leaders and policymakers to discuss forward-thinking policies for implementing guardrails to protect Americans from potential AI-generated harm.

As AI continues to grow in various aspects of our lives, ensuring its safe and effective deployment becomes increasingly vital. Michael Kratsios, the Managing Director at Scale AI, raised a crucial point during last week’s panel: “How do we make the average person feel secure in embracing AI technologies?”

The most straightforward approach is demonstration and leading by example, as suggested by Kent Walker, President of Global Affairs at Google. Walker believes that a proactive approach, in which the government not only facilitates the deployment of the technology but also leads by example in using these tools for the public good (such as disaster relief efforts and workforce training programs), is a great first step toward making Americans more comfortable embracing this new technology.

Walker also stressed the urgent need for the United States to lead in the global race for AI innovation, saying, “With Japan leading in smart robot density per capita and China’s ambitious AI+ initiative, the United States cannot afford to lag behind.” The U.S. must maintain its global competitiveness in this rapidly evolving field.

With the rapid advancement of AI, concerns regarding privacy, bias, and job displacement loom large. To address these concerns, regulations must evolve to be use-case specific rather than industry-specific, as recently advocated by the Office of Management and Budget in a guidance memo. By tailoring rules to specific use cases and domains, we can ensure that AI is wielded responsibly and ethically, fostering trust among the public.

The federal government can instill confidence in AI's power by demonstrating the tangible benefits of AI adoption and by ensuring that its development is guided by the principle of complementing, rather than replacing, human labor.

States Are Stepping Up as the Laboratory for Innovation

This year alone, state legislators across the United States will review more than two dozen pieces of AI-related legislation. Utah, Oregon, and New Mexico led the charge by enacting early legislation regulating AI in political campaign communications.

California legislators believe that stepping into the AI void and creating an early regulatory environment is crucial. With no cohesive nationwide framework for AI regulation in sight, California (with its progressive inclinations) is looking to fill that void, recognizing that AI is perhaps the most significant technological development since the internet.

California’s Senate Bill 1047 proposes mandatory risk assessment testing for large-scale AI models before deployment, ultimately holding companies responsible for harm caused by their technology. Senator Scott Wiener’s bill could inadvertently establish a national benchmark for AI model safety if the United States Congress fails to act.

However, Senate Bill 1047 does face opposition from major tech trade groups and business organizations, such as the Chamber of Progress and the California Chamber of Commerce. Critics argue that the proposed legislation is excessively broad and burdensome, stifling innovation and disproportionately harming small startup companies.

Despite notable strides in many states, progress in Congress remains sluggish and unfocused. While bills like Senator Amy Klobuchar’s (D-MN) “Protect Elections From Deceptive AI Act” await action, comprehensive federal measures to address AI risks remain elusive.

Coordinated efforts across all levels of government are imperative to ensure the integrity of AI while combating the spread of disinformation. It is abundantly clear, especially as states step up to address America's AI challenges, that a unified federal approach to AI regulation is becoming a necessity.

Kyle Wiley, a former Senior Advisor at the U.S. Department of Energy, is Chief Accounts Officer at Connector, a boutique government relations firm with offices in Washington, D.C., and Dallas, Texas.