Three years ago, the company forced out Timnit Gebru, the co-lead of its ethical AI team, essentially over a paper that raised concerns about the dangers of large language models. Gebru’s concerns have since become mainstream. Her departure, and the fallout from it, marked a turning point in the conversation about the dangers of unchecked AI. One would hope Google learned from it.
And then, just last week, Geoffrey Hinton announced he was stepping down from Google, in large part so he’d be free to sound the alarm about the dire consequences of rapid advancements in AI, which he fears could soon enable it to surpass human intelligence. (Or, as Hinton put it, it is “quite conceivable that humanity is just a passing phase in the evolution of intelligence.”)
And so, I/O yesterday was a far cry from the event in 2018, when the company gleefully demonstrated Duplex, showcasing how Google Assistant could make automated calls to small businesses without ever letting the people on those calls know they were interacting with an AI. It was an incredible demo. And one that made many people deeply uneasy.
Again and again at this year’s I/O, we heard about responsibility. James Manyika, who leads the company’s technology and society program, opened by talking about the wonders AI has wrought, particularly around protein folding, but was quick to transition to the ways the company is thinking about misinformation, noting how it would watermark generated images and alluding to guardrails to prevent their misuse.
There was a demo of how Google can deploy image provenance to counter misinformation, effectively debunking an image by showing the first time it (in the example on stage, a fake photo purporting to show that the moon landing was a hoax) was indexed. It was a little bit of grounding, operating at scale, amidst all the awe and wonder.
And then … on to the phones. The new Google Pixel Fold scored the biggest applause line of the day.
The phone may fold, but for me it was among the least mind-bending things I saw all day. And in my head, I kept returning to one of the earliest examples we saw: a photo of a woman standing in front of some hills and a waterfall.
Magic Editor erased her backpack strap. Cool! It also made the cloudy sky look a lot bluer. Reinforcing this, in another example—this time with a child sitting on a bench holding balloons—Magic Editor once again made the day brighter, then adjusted all the lighting in the photo so the sunshine would look more natural. More real than real.
How far do we want to go here? What’s the end goal we are aiming for? Ultimately, do we just skip the vacation altogether and generate some pretty, pretty pictures? Can we supplant our memories with sunnier, more idealized versions of the past? Are we making reality better? Is everything more beautiful? Is everything better? Is this all very, very cool? Or something else? Something we haven’t realized yet?