Most broadly, we are likely to see the strategies that emerged last year continue, expand, and begin to be implemented. For example, following President Biden’s executive order, various US government agencies may outline new best practices while largely leaving AI companies to police themselves. And across the pond, companies and regulators will begin to grapple with Europe’s AI Act and its risk-based approach. The rollout certainly won’t be seamless, and there’s bound to be plenty of debate about how these new laws and policies actually work in practice.
While writing this piece, I took some time to reflect on how we got here. Stories about a technology’s rise are worth examining; they can help us better understand what might happen next. And as a reporter, I’ve seen the same patterns emerge over time, whether with blockchain, social media, self-driving cars, or any other fast-developing, world-changing innovation. The technology usually moves much faster than regulation, leaving lawmakers struggling both to stay up to speed with it and to craft sustainable, future-proof laws.
Thinking about the US specifically, I’m not sure that what we’re experiencing so far is unprecedented, though the speed with which generative AI has launched into our lives has certainly been surprising. Last year, AI policy was marked by Big Tech power moves, congressional upskilling and bipartisanship (at least in this space!), geopolitical competition, and the rapid, on-the-fly deployment of nascent technologies.
So what did we learn? And what’s around the corner? There’s a lot of policy to keep track of, but I’ve distilled what you need to know into four takeaways.
1. The US isn’t planning on putting the screws to Big Tech. But lawmakers do plan to engage the AI industry.
OpenAI’s CEO, Sam Altman, began his tour de Congress last May, six months after the bombshell launch of ChatGPT. He met with lawmakers at private dinners and testified about the existential threats his own technology could pose to humanity. In many ways, this set the tone for how we’ve been talking about AI in the US, and it was followed by Biden’s speech on AI, congressional AI insight forums intended to get lawmakers up to speed, and the release of more large language models. (Notably, the guest lists for those forums skewed heavily toward industry.)
As US lawmakers began to really take on AI, it became a rare (if small) area of bipartisanship on the Hill, with legislators from both parties calling for more guardrails around the tech. At the same time, activity at the state level and in the courts increased, primarily around user protections like age verification and content moderation.
As I wrote in the story, “Through this activity, a US flavor of AI policy began to emerge: one that’s friendly to the AI industry, with an emphasis on best practices, a reliance on different agencies to craft their own rules, and a nuanced approach of regulating each sector of the economy differently.” The culmination of all this was Biden’s executive order at the end of October, which formalized that distributed approach, with each agency crafting rules for its own domain. Perhaps unsurprisingly, it will rely quite heavily on buy-in from AI companies.