U.S. policing AI at companies to make sure it doesn’t violate civil rights
2023.04.25 13:34
By Chris Prentice
(Reuters) – U.S. officials on Tuesday warned financial firms and others that use of artificial intelligence (AI) can heighten the risk of bias and civil rights violations, and signaled they are policing marketplaces for such discrimination.
Increased reliance on automated systems in sectors including lending, employment and housing threatens to exacerbate discrimination based on race, disabilities and other factors, the heads of the Consumer Financial Protection Bureau, Justice Department’s civil rights unit, Federal Trade Commission and others said.
The growing popularity of AI tools, including Microsoft Corp-backed OpenAI’s ChatGPT, has spurred U.S. and European regulators to heighten scrutiny of their use and prompted calls for new laws to rein in the technology.
“Claims of innovation must not be cover for lawbreaking,” Lina Khan, chair of the Federal Trade Commission, told reporters.
The Consumer Financial Protection Bureau is trying to reach tech-sector whistleblowers to determine where new technologies run afoul of civil rights laws, said the bureau’s director, Rohit Chopra.
In finance, firms are legally required to explain adverse credit decisions. If companies do not even understand the reasons for the decisions their AI is making, they cannot legally use it, Chopra said.
“What we’re talking about here is often the use of expansive amounts of data and developing correlations and other analyses to generate content and make decisions,” Chopra said. “What we’re saying here is there is a responsibility you have for those decisions.”