Fed's Bowman: Regulators 'must have an openness' to AI
WASHINGTON — Federal Reserve Gov. Michelle Bowman said banking regulators should foster a supervisory atmosphere around artificial intelligence that gives banks space to try new use cases while evaluating those uses based on the risk they pose to the bank and the broader financial system.
Speaking at the 27th Annual Symposium on Building the Financial System of the 21st Century, Bowman said it is critical that regulators not reflexively close the door on expanded use of AI out of concern that it could pose greater risks than more tried-and-true methods and technologies.
“We must have an openness to the adoption of AI,” Bowman said. “We should avoid fixating on the technology and instead focus on the risks presented by different use cases. These risks may be influenced by a number of factors, including the scope and consequences of the use case, the underlying data relied on, and the capability of a firm to appropriately manage these risks.”
Regulators have been grappling with how to carefully integrate AI into the financial system. Acting Comptroller of the Currency Michael Hsu on Thursday warned against regulators giving banks or other financial firms too much leeway in exploring applications for artificial intelligence, saying he prefers an approach in which regulators and banks “co-learn” about the emerging technology.
Consumer Financial Protection Bureau Director Rohit Chopra recently encouraged regulators and lenders to develop a new, fairer credit-scoring model based on artificial intelligence.
Bowman said Friday that most of the public dialogue around AI tends to emphasize either the transformative or the catastrophic potential of the technology, but that framing misses the point for regulators, whose primary concern is preserving the financial system while allowing it to grow, change and innovate. To that end, she said, regulators should first consider a more foundational question: how to define AI.
“I have no strong feelings about the ideal or optimal definition of AI, and some version of the many definitions floating around are probably adequate for our purposes,” she said.
“A broad definition of AI arguably captures a wider range of activity and has a longer ‘lifespan’ before it becomes outmoded, and potentially never becomes outdated. But a broad definition also carries the risk of a broad — and undifferentiated — policy response. This vast variability in AI’s uses defies a simple, granular definition, but also suggests that we cannot adopt a one-size-fits-all approach as we consider the future role of AI in the financial system.”
Bowman concluded that regulators would be wise to approach AI through the lens of the reliability, efficiency and risks associated with each specific use case rather than the technology in general. Banks already use AI in a variety of ways, she said, but as they apply it to more risk-sensitive tasks, regulators should focus on how and whether the technology works rather than on whether it is AI. Adopting a more prohibitive stance could push AI outside the regulatory perimeter, concentrating risks in ways that could come back to haunt the banking system.
“An overly conservative regulatory approach can skew the competitive landscape by pushing activities outside of the regulated banking system or preventing the use of AI altogether,” Bowman said. “Inertia often causes regulators to reflexively prefer known practices and existing technology over process change and innovation. The banking sector often suffers from this regulatory skepticism, which can ultimately harm the competitiveness of the U.S. banking sector.”
Bowman said AI also has the potential to benefit government agencies themselves. If AI could use alternative datasets to corroborate, contradict or refine official economic data, for example, the Federal Open Market Committee — the Fed’s interest rate-setting body — would have better information on which to base its monetary policy decisions.
“As I have often noted, the data relied on to inform the Federal Open Market Committee decision-making process often is subject to revisions after the fact, requiring caution when relying on the data to inform monetary policy,” Bowman said.
“Perhaps the broader use of AI could act as a check on data reliability, particularly for uncertain or frequently revised economic data, improving the quality of the data that monetary policymakers rely on for decision-making. Additional data as a reliability check or expanded data resources informed by AI could improve the FOMC’s monetary policymaking by validating and improving the data on which policymakers rely.”
When asked after her prepared remarks whether AI could eventually replace regulators’ own money-laundering oversight, in light of recent enforcement actions against TD Bank and Bank of America, Bowman demurred, saying the technology could certainly be a tool for human supervisors but shouldn’t be viewed as an alternative to them.
“The possibilities at this point are endless,” she said. “I think it’s a bit premature to replace human teams with AI, but it does seem like a technology that can be very beneficial to assist those teams with the work that they’re doing, probably more efficiently.”