Brad Smith, the President of Microsoft, said in a recently aired interview that artificial intelligence chatbots will soon need to be regulated by legislators, before irresponsible companies have time to release products capable of wreaking havoc on society, such as teaching people the best way to make bombs.
Smith made the comments in an interview on the CBS News program “60 Minutes.” He also spotlighted the benefits he believes will come from the public maintaining open access to AI technology.
Lesley Stahl, the show’s co-host, asked Smith what could be done to stop a company in China, such as Baidu, from putting out a product that could be misused for nefarious ends.
“I think we’re going to need governments, we’re gonna need rules, we’re gonna need laws,” he explained. “Because that’s the only way to avoid a race to the bottom. I think it’s inevitable.”
Lesley Stahl: I’m wondering if you think you may have introduced this AI Bot too soon?
Brad Smith: I don’t think we’ve introduced it too soon. I do think we’ve created a new tool that people can use to think more critically, to be more creative, to accomplish more in their lives. And like all tools it will be used in ways that we don’t intend.
Lesley Stahl: Why do you think the benefits outweigh the risks which, at this moment, a lot of people would look at and say, “Wait a minute. Those risks are too big”?
Brad Smith: Because I think– first of all, I think the benefits are so great. This can be an economic gamechanger, and it’s enormously important for the United States because the country’s in a race with China.
Smith also mentioned possible improvements in productivity.
Brad Smith: It can automate routine. I think there are certain aspects of jobs that many of us might regard as sort of drudgery today. Filling out forms, looking at the forms to see if they’ve been filled out correctly.
Lesley Stahl: So what jobs will it displace, do you know?
Brad Smith: I think, at this stage, it’s hard to know.
In the past, inaccuracies and biases have led tech companies to take down AI systems; Microsoft itself did so in 2016. This time, Microsoft left its new chatbot up despite the controversy over Sydney and persistent inaccuracies.
Remember that fun fact about penguins? Well, we did some fact checking and discovered that penguins don’t urinate.
Lesley Stahl: The inaccuracies are just constant. I just keep finding that it’s wrong a lot.
Brad Smith: It has been the case that with each passing day and week we’re able to improve the accuracy of the results, you know, reduce– you know, whether it’s hateful comments or inaccurate statements, or other things that we just don’t want this to be used to do.
Lesley Stahl: What happens when other companies, other than Microsoft, smaller outfits, a Chinese company, Baidu. Maybe they won’t be responsible. What prevents that?
Brad Smith: I think we’re going to need governments, we’re gonna need rules, we’re gonna need laws. Because that’s the only way to avoid a race to the bottom.
Lesley Stahl: Are you proposing regulations?
Brad Smith: I think it’s inevitable-
Lesley Stahl: Wow.
Lesley Stahl: Other industries have regulatory bodies, you know, like the FAA for airlines and FDA for the pharmaceutical companies. Would you accept an FAA for technology? Would you support it?
Brad Smith: I think I probably would. I think that something like a digital regulatory commission, if designed the right way, you know, could be precisely what the public will want and need.