US draft AI Foundation Model Transparency Act proposed

U.S. Representatives Anna Eshoo (D-CA) and Don Beyer (D-VA), who serve as Co-Chair and Vice Chair, respectively, of the Congressional Artificial Intelligence (AI) Caucus, have introduced the AI Foundation Model Transparency Act, described as ‘ambitious legislation to promote transparency in artificial intelligence foundation models.’
Here we pick out the key points:
According to the bill's accompanying press statement (and as reflected in the draft bill):
Foundation models are ‘artificial intelligence models that are trained on broad data, generally use self-supervision, contain billions of parameters, and are applicable across a wide range of contexts or applications.’
The concern is that ‘widespread public use of foundation models has also led to countless instances where the public is being presented with inaccurate, imprecise, or biased information’. The causes are several, including biases and limitations in the data on which a model was trained. In specific use cases, such as healthtech and fintech, there is a significant risk that the use of AI systems may create, perpetuate or worsen discrimination.
The AI Foundation Model Transparency Act intends to:
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, David Varney, Lucy Pegler, Martin Cook or any other member in our Technology team.
“Artificial intelligence foundation models commonly described as a ‘black box’ make it hard to explain why a model gives a particular response. Giving users more information about the model—how it was built and what background information it bases its results on—would greatly increase transparency,” said Beyer. “This bill would help users determine if they should trust the model they are using for certain applications, and help identify limitations on data, potential biases, or misleading results. When a model’s bias could lead to harmful results like rejections for housing or loan applications, or faulty medical decisions, the importance of this reform becomes clear and very significant.”
https://beyer.house.gov/uploadedfiles/one-pager_ai_foundation_model_transparency_act_.pdf