Politicians Need to Learn How AI Works—Fast

This week, US senators heard alarming testimony suggesting that unchecked AI could steal jobs, spread misinformation, and generally “go quite wrong,” in the words of OpenAI CEO Sam Altman (whatever that means). He and several lawmakers agreed that the US may now need a new federal agency to oversee the development of the technology. But the hearing also saw agreement that no one wants to kneecap a technology that could potentially increase productivity and give the US a lead in a new technological revolution.

Worried senators might consider talking to Missy Cummings, a onetime fighter pilot and professor of engineering and robotics at George Mason University. She studies the use of AI and automation in safety-critical systems, including cars and aircraft, and earlier this year returned to academia after a stint at the National Highway Traffic Safety Administration, which oversees automotive technology, including Tesla’s Autopilot and self-driving cars. Cummings’ perspective might help politicians and policymakers trying to weigh the promise of much-hyped new algorithms against the risks that lie ahead.

Cummings told me this week that she left the NHTSA with a sense of profound concern about the autonomous systems being deployed by many car manufacturers. “We’re in serious trouble in terms of the capabilities of these cars,” Cummings says. “They’re not even close to being as capable as people think they are.”

I was struck by the parallels with ChatGPT and similar chatbots stoking excitement and concern about the power of AI. Automated driving features have been around for longer, but like large language models they rely on machine learning algorithms that are inherently unpredictable, hard to inspect, and require a different kind of engineering thinking than that of the past.

Also like ChatGPT, Tesla’s Autopilot and other autonomous driving projects have been elevated by absurd amounts of hype. Heady dreams of a transportation revolution led automakers, startups, and investors to pour huge sums into developing and deploying a technology that still has many unsolved problems. There was a permissive regulatory environment around autonomous cars in the mid-2010s, with government officials loath to apply brakes on a technology that promised to be worth billions for US businesses.

After billions spent on the technology, self-driving cars are still beset by problems, and some auto companies have pulled the plug on big autonomy projects. Meanwhile, as Cummings says, the public is often unclear about how capable semiautonomous technology really is.

In one sense, it’s good to see governments and lawmakers being quick to suggest regulation of generative AI tools and large language models. The current panic is centered on large language models and tools like ChatGPT that are remarkably good at answering questions and solving problems, even if they still have significant shortcomings, including confidently fabricating facts.

At this week’s Senate hearing, Altman of OpenAI, which gave us ChatGPT, went so far as to call for a licensing system to control whether companies like his are allowed to work on advanced AI. “My worst fear is that we—the field, the technology, the industry—cause significant harm to the world,” Altman said during the hearing.
