How Google is making up for lost time

The company is finally bringing AI to the places that matter

Google CEO Sundar Pichai speaks at I/O on Wednesday in Mountain View. (Google)

The arrival of ChatGPT last year sent a rare shiver through Google’s spine. For years the company had positioned itself as a leader in the development of artificial intelligence. Suddenly, though, a product from the upstart OpenAI rocketed to tens of millions of monthly users — and observers began asking whether Google had squandered its lead.

Within weeks, leaders at the company declared a “code red” — a signal that the time to begin shipping AI features was now. (It was widely reported that CEO Sundar Pichai declared the code red, but he later told me that it wasn’t the case.)

A handful of products have shipped since — most notably Bard, the company’s ChatGPT analog. But on Wednesday, at the company’s annual developer conference, the floodgates opened. At Google I/O, a torrent of new AI features was announced, touching nearly every part of the company’s product lineup.

For the most part, these products will ship “in the coming weeks,” or “later this year.” Until then, all we really have to go on are the demonstrations we saw in the keynote and in pre-conference press briefings.

But while I imagine the features will vary in quality and usefulness, one thing is becoming clear about the near-term AI future: technology alone is not enough to totally reset the competitive landscape. Incumbents can gain significant ground simply by bringing new features into the products that people are already using — and getting users to switch platforms is proving more difficult than some imagined it would be.

Let’s take platform switching first. In February, Microsoft re-launched Bing with generative AI search results powered by ChatGPT. The company hoped the moment would prompt consumers to give Bing a second look — and would perhaps give Microsoft a chance to peel off meaningful market share from its much bigger rival.

Three months later — and on the eve of Google adding generative AI results to its own search engine — that project appears to have stalled. Citing a report from the research firm YipitData, The Information reported Wednesday that Bing’s share of searches on desktops had grown just 0.25 percent in the past three months. Microsoft told the outlet that the growth rate was higher on mobile devices, and perhaps it will grow on desktops as well in the coming months.

But the same story noted that ChatGPT receives more than 65 million visits per day, compared to 40 million for 14-year-old Bing. People who want to use OpenAI’s chatbot are largely going straight to the source — and Microsoft, which is just one of dozens of companies integrating OpenAI’s technology in the hopes that it will open up new revenue streams, is finding that API access is a commodity rather than a growth engine. (I’m sure Microsoft will eventually find plenty of ways to make money from AI, starting with all the infrastructure services it provides OpenAI through its Azure platform. But still.)

The lesson here is that, with the possible lone exception of ChatGPT, users are mostly not seeking out AI as a destination unto itself. Rather, they’re waiting for it to transform into useful products and services — ideally, products and services that they’re already using.

Last week I wrote about AI’s missing interface, and the challenges presented by a technology whose interface design begins and ends with a text box. One way of thinking about I/O this year is that Google began to fill in the missing pieces of that interface with actual product design — a commitment to nudging users, in all sorts of ways, into using AI productively.

Let’s look at a few of those ways. Until now, Bard has been an island unto itself — a sandbox for testing the limits of Google’s large language model, PaLM. Pretty soon, though, you’ll be able to export Bard’s output into Gmail, Docs, and Sheets — the places you were probably going to copy and paste it to anyway. ChatGPT probably records more copy and paste actions than any other website in the world; Google is abstracting that whole process away into a button.

Ideally, though, you’d never have to visit a dedicated website to use generative AI in the first place. For example, at the moment lots of people are having ChatGPT write their emails and then porting them over into their email client of choice. Google is taking the obvious next step: promising that later this year, you’ll be able to just ask Gmail to write the email for you in the message composer window.

I predict ChatGPT sees fewer copy and paste actions after that.

You could also just stick generative AI boxes into existing productivity tools — the way Google showed yesterday with its “sidekick” feature. In one of the day’s best demos, Google executive Aparna Pappu showed off the sidekick in Docs. As she imagined writing a short story about a missing seashell with her niece, the sidekick chimed in with contextual suggestions. What happened to the seashell, it wanted to know.

Then the sidekick offered some suggestions: maybe it was stolen by a jealous mermaid. Maybe it was taken by a time traveler. Maybe it was eaten by a squid.

If you’re a 10-year-old writing a short story, this is going to be a lot of fun. And it probably doesn’t even come across to the average user as AI per se — instead it just feels like a new creative tool that takes a popular existing product and makes it more useful.

There were a lot more demos like that yesterday. I was struck by one that generated speaker notes from a set of slides — sure to be a godsend for procrastinating workers everywhere — and another that created a list of dishes that people were bringing to some potluck based on an attached Google Sheet.

Viewed one way, some of this stuff can feel pretty mundane. But in the near term, this is how AI is going to start working its way into our lives. Soon enough, we probably won’t think of it as AI anymore. (A recurring and somewhat defensive theme of yesterday’s keynote was that Google has already shipped lots of stuff that uses machine learning but for whatever reason doesn’t meet our ever-shifting definition of what counts as AI. Searching for “dogs” in Google Photos, for example.)

There’s surely another column to be written here about Google’s planned changes to search, which will put a module of generative AI results on top of the standard 10 blue links. But I want to wait until I can actually try it for myself to get a better sense of how disruptive it feels.

For now, with search and everything else, Google has positioned AI not as an all-knowing oracle but as a useful starting point for many tasks. Google’s AI will write the first draft; offer alternate paths to consider; or do a cursory scan of a new subject you’re interested in. This has the benefit of being how people actually use AI in practice today, and it’s smart of Google to lean into that message rather than something more grandiose.

Ultimately, I still believe the AI opportunity will be much bigger than one company. But in a moment when all these large language models are converging toward rough functional equivalence, no one is going to win the game on technology alone.

AI is moving from a science problem to a product design and marketing problem, and those are problems Google has a lot of experience with.

A better metaverse

The best thing I saw at Google I/O was Project Starline, an experimental piece of hardware that asks: what if the person on your next Zoom call was a hologram?

The year-long discussion we had about the metaverse from 2021 to 2022 often touched on the idea of “telepresence” — technologies that allow people to feel as if they are physically present with someone even when they are only being represented digitally. Other than Zoom, the best we have been able to do on this front is to strap on ungainly headsets, navigate ourselves into pixelated conference rooms, and talk to legless cartoon versions of our colleagues and loved ones.

Project Starline, which remains early in its development and would need to get radically cheaper to go mainstream, requires only that you sit down in front of the TV-like device and turn it on. There are no headsets, glasses, or headphones to fiddle with — just a person talking to you, in three dimensions and at admirably high resolution. 

Andrew Nartker, Starline’s general manager, demonstrated it for me while sitting in a separate booth. When he went to give me a fist bump, his hand appeared to come through the TV screen. Later, he offered me an apple, and the effect was just as realistic. And all the while, Nartker’s voice tracked his movements as he changed positions, enhancing the illusion that he was right there in front of me.

In reality, he was in a booth a few feet away from the one I was sitting in. I’m sure that behind the scenes there were hidden technological enhancements that you might not find in the real world: a rock-solid data pipe linking the devices, for example. And in my conversation with Googlers yesterday, it was clear that the primary obstacle to Starline’s development will be making it much less expensive than it is today. (No one would tell me how expensive it is, but if you told me the whole setup cost a million dollars or more it would not seem excessive, relative to the quality of the experience.) 

The good news is that there are signs Starline is coming down the cost curve. Google said this week that it has begun testing the device with partners including Salesforce, T-Mobile, and WeWork, as well as at Google itself.

Given the challenges, and all the cost-cutting going on at Google and elsewhere, few would be surprised if Starline ultimately proves to be vaporware. But there’s something profound here that Meta’s metaverse hasn’t come close to achieving: a convenient, comfortable, ergonomic form of video chat that I could easily imagine myself doing for hours. 

I’m sure I’ll take my share of meetings in virtual reality over the next few years, if only because of how much cheaper they are than installing Project Starline at my house. 

The minute that changes, though, my webcam and headset are going into a drawer.


On the podcast this week: Kevin and I take a ride in one of Cruise’s robot-taxis, which are becoming more widely available in San Francisco. Then, Cruise CEO Kyle Vogt stops by to talk about our self-driving future. PLUS: I report live from I/O.

Apple | Spotify | Stitcher | Amazon | Google


Google I/O

Just look at all this stuff!


Governing


Industry


Those good tweets

For more good tweets every day, follow Casey’s Instagram stories.



Talk to us

Send us tips, comments, questions, and a Project Starline unit for our houses: casey@platformer.news and zoe@platformer.news.