Much like safety brakes in elevators make people comfortable riding them in tall buildings, high-risk artificial intelligence systems need specialized precautions to ensure they don’t bring society crashing down.
Brad Smith, the Microsoft president and vice chair, drew that comparison Tuesday in his testimony before the U.S. Senate Judiciary Committee Subcommittee on Privacy, Technology, and the Law, agreeing in principle with a new bipartisan framework for AI legislation from U.S. Sens. Richard Blumenthal (D-CT) and Josh Hawley (R-MO), leaders of the subcommittee.
Among other things, the Blumenthal-Hawley framework would establish an independent oversight body and require companies developing AI systems for “high risk” applications to go through a registration and licensing process.
A key principle is that highly capable AI systems controlling critical infrastructure should “remain under human control at all times,” Smith agreed in his written testimony, submitted in advance of the hearing.
“Although the type of ‘safety brake’ would likely vary depending on the system and how it was used, all of them should have the ability to detect and avoid unintended consequences, and to disengage or deactivate the AI system in the event of unintended behavior,” he wrote, describing a potential regulatory approach.
Speaking via phone with GeekWire after the subcommittee hearing, Smith addressed the need to avoid unintended consequences in such an AI licensing system; offered his outlook on where the legislative process could go from here; and discussed Hawley’s criticism during the hearing of Microsoft’s policy allowing kids as young as 13 years old to use the company’s Bing AI chatbot.
Continue reading for edited excerpts from the interview.
Can you summarize where you stand on the Blumenthal-Hawley framework?
Smith: As a company, Microsoft is very supportive of the framework that Senators Blumenthal and Hawley have put together. We think it takes things in the right direction. We think it identifies the right problems. We think it has good solutions in terms of a licensing system for high-risk systems. Not everything, but those things that involve more risk.
Then having an independent oversight body makes sense to us. Focusing on the rights and needs of consumers and citizens, and doing it in a way that thinks about both companies that develop AI and deploy it, it all makes good sense. And we’re very enthusiastic they’re pursuing it in a bipartisan way. I think that’s critical.
One thing that struck me is that only a certain number of companies will have the legal horsepower and financial resources to navigate these licensing regimes.
Smith: I think you’re putting your finger on something that’s critical. A lot of thought needs to be given to who should have to get a license, and for what use. We would not make it for everything. But it should be for those kinds of applications that are most powerful and most impactful on people’s rights, that could create higher risk.
I think there’s also a lot that has to be considered in terms of how onerous this should be. I don’t think one should want a licensing regime that is open, in effect, to only a small number of companies. That would be a huge step backwards. Nor should one want a licensing regime that would slow the pace of innovation.
There was a long sidebar on the issue of age verification and the use of AI by teenagers. Is there anything you’d want to clarify from the exchange with Senator Hawley?
Smith: Look, the purpose of the hearing was not to talk about age verification or even the age limits for some AI apps. But I appreciate the question that he had. … I think that Bing Chat is a useful tool to help people do research to solve their math problems, learn how to speak another language, learn how to code. It’s extraordinarily useful. And I think we have, and can continue to create, a safety architecture that puts guardrails so that it prevents people from using it for things that would harm themselves or others. I think that is a fundamental goal.
Frankly, the more I think about it, the more I’m struck by the fact that, for a 9th grader or a 10th grader, a sophomore in high school in the United States, there’s way more benefit than the risk of abuse in letting somebody use Bing Chat, because I think they’re likely to learn and be more successful as students in school.
What’s the logical path forward in terms of timeframe and general outlook for some kind of legislation?
Smith: I think we’ll start to see bills introduced. I think we’ll see more hearings. I think Senator [Chuck] Schumer may have more forums, and then my guess is, in 2024, there’ll be some legislation that may start to move forward.
I think at the same time, we’ll see more action from the executive branch. And that will come faster than the legislative branch. That’s typically the way things work. We may well see an executive order this fall that builds on the voluntary commitments. So I think we’re seeing, as one typically does, two speeds: faster in the executive side of the government, a little bit slower in the Congress.
Are there specific issues that are most significant in your mind that still need to be resolved among these different tracks?
Smith: I just think there are a lot of details that need to be worked through. I thought that came through well at the hearing. I’d really applaud the effort to create a framework, that’s a good place to start. Because then you can go from deciding whether your framework is right to really building up the blueprint in more detail, and then you can construct the house on top of the blueprint.
So it’s a good formula. And I think that’s fundamentally what they’re doing. And they’re doing it in a bipartisan way. They’re listening, people are learning. We should applaud when the government works that way.
See this page to watch the full Senate subcommittee hearing and read the written testimony from Smith, Boston University law professor Woodrow Hartzog, and Nvidia Chief Scientist William Dally.