Why I'm having trouble covering AI

If you believe that the most serious risks from AI are real, should you write about anything else?

Why I'm having trouble covering AI
“a pen in the shape of a question mark, digital art” / DALL-E

It’s going to be a big week for announcements related to artificial intelligence. With that in mind, today I want to talk a bit about the challenges I’ve found in covering the rise of generative AI as it works its way into the product roadmaps of every company on my beat.

Unlike other technological shifts I’ve covered in the past, this one has some scary (and so far mostly theoretical) risks associated with it. But covering those risks is tricky, and doesn’t always fit into the standard containers for business reporting or analysis. For that reason, I think it’s worth naming some of those challenges — and asking for your thoughts on what you think makes for good journalism in a world where AI is ascending.

To start with, let’s consider two recent perspectives on the subject from leading thinkers in the field. One is from Geoffrey Hinton, an AI pioneer who made significant strides with neural networks, a key ingredient in the field’s recent improvements. Last week Hinton left his job at Google in part so he could speak out about AI risk, and told the New York Times’ Cade Metz that “a part of him … now regrets his life’s work.”

“It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton said. Among his concerns: a flood of misinformation that makes it impossible to discern what is true; massive job losses through automation; and killer robots.

So that’s one set of possible outcomes. Here’s another, from Jürgen Schmidhuber, who is sometimes called “the father of artificial intelligence.” He argues AI fears are misplaced, and that whatever bad actors do with AI can likely be countered by good actors using AI.

Here’s Josh Taylor in the Guardian:

Schmidhuber believes AI will advance to the point where it surpasses human intelligence and has no interest in humans — while humans will continue to benefit and use the tools developed by AI. This is a theme Schmidhuber has discussed for years, and was once accused at a conference of “destroying the scientific method” with his assertions.

As the Guardian has reported previously, Schmidhuber’s position as AI’s father is not undisputed, and he can be a controversial figure within the AI community. Some have said his optimism about the rate of technological progress was unfounded and possibly dangerous.

Whether you find yourself more inclined here to believe Hinton or Schmidhuber seems likely to color how you might cover AI as a journalist. If you believe Hinton’s warnings, and we are starting down a path that leads to killer robots or worse, it could make sense to center that risk in all coverage of AI, no matter how seemingly benign the individual announcement.

If, on the other hand, you’re more sympathetic to Schmidhuber, and think that all of the problems created with AI will resolve themselves without causing much damage to society at all, you’d probably spend more time covering AI at the level of products and features and how people are using them in their lives.

The reason I’m having trouble covering AI lately is that there is such high variance in the way the people who have considered the question most deeply think about risk. When the list of possible futures ranges from fully automated luxury communism to a smoking ruin where our civilization used to be, where is the journalist supposed to begin? (The usual answer is to talk to a lot of people. But the relevant people here are saying very different things!)

All of this is on my mind lately for a couple reasons. One is that I recently spent some time talking with AI safety researchers who I thought made a convincing case that, no matter how much time executives and regulators spend warning us about the risks here, the average person still probably hasn’t grappled with them enough. These folks believe we essentially need to shut down AI development for a long while, invest way more money into safety research, and prevent further commercial development until we’ve developed a strategy to avoid the worst outcomes.

The other reason it’s on my mind is that Google I/O is this week. On Wednesday the company is expected to showcase a wide range of new features drawing on its latest advancements in generative AI, and I’ll be there to cover it for you. (The Wall Street Journal and CNBC appear to have scooped some of the announcements already.)

The Google announcements represent the fun side of AI: the moment when, after years of hype, average people can finally get their hands on new tools to help them with their work and daily lives. Even the most diehard believer in existential risk from AI can’t deny that, at least for the moment, tens of millions of people are finding the tools extremely useful for a broad range of tasks.

One of my biases is that I started writing about tech because I love stuff like this: incremental advances that help me research faster, write better, and even illustrate my newsletter. Even as I’ve increasingly focused my writing on business coverage and tech policy, the instinct to say “hey, look at this cool thing” remains strong within me.

And if — please! — Schmidhuber’s benign vision of our AI world comes to pass, I imagine I’ll feel fine about any incremental product coverage I did along the way to point people to useful new tools.

But what if Hinton’s vision is closer to the mark? (And it seems noteworthy that there are more AI researchers in his camp than Schmidhuber’s.) Will I feel OK about having written a piece in 2022 titled “How DALL-E could power a creative revolution” if that revolution turns out to have been a step on the road to, uh, a worse one?

Thinking through all this, I have in mind the criticism folks like me received in the wake of the 2016 US presidential election. We spent too much time hyping up tech companies and not enough time considering the second-order consequences of their hyper-growth, the argument went. (It’s truer to say we criticized the wrong things than nothing at all, I think, but perhaps that’s splitting hairs.) And while opinions vary on just how big a role platforms played in the election’s outcome, it seems undeniable now that if we could do it all over again we would probably cover tech differently from 2010 to 2016 than a lot of us, myself included, actually did.

The introspection we did after 2016 was easier in one key respect than the question we face now, though. The tech backlash of 2017 was retrospective, rooted in the question of what social networks had done to our society.

The AI question, on the other hand, is speculative. What is this thing about to do to us?

I don’t want to set up a false dilemma here. The question is not whether AI coverage should be generally positive or generally negative. There is clearly room for a wide range of opinions.

My discomfort, I think, comes from the heavy shadow looming in the background of all AI coverage — and the way that shadow often goes unacknowledged, including by me. So many of the leading researchers and even AI executives spend a great deal of time warning of potential doom.

If you believe that doom is a serious possibility, shouldn’t you mention it all the time?

Or, as Max Read has written, does that sort of warning only end up hyping up the companies building this technology?

I haven’t come to any solid conclusions here. But today I offer a couple of minor evolutions as my thinking changes.

One, I updated Platformer’s About page, a link to which gets emailed to all new subscribers, to add AI as a core coverage interest. On that same page, I also added this paragraph to the section on what I’ve come to believe:

Artificial intelligence promises to bring powerful advances in productivity and creativity. But it also poses serious and potentially existential risks to life as we know it. My coverage of AI is rooted in the belief that fears of massive disruption may be justified, and require urgent attention.

Adding a few lines to an About page isn’t of great use to readers who happen upon the odd story from me here or there. But the nice thing about writing a newsletter is that many of you are dedicated readers! And now hopefully you have a more complete understanding of how I’m thinking about a subject I expect to return to often in the coming years.

At the same time, I am going to be writing about the AI products that platforms release along the way. Understanding how AI will shape the future requires having a good sense of how people are using the technology, and I think that means staying up to date with what platforms are building and releasing into the world.

When I write about these tools, though — even the most fantastically useful of them — I’ll strive to maintain the baseline skepticism that I tried to bring to this piece.

I’ll end what has been a long and uncharacteristically meta reflection by saying the situation I’m describing here isn’t unique. Plenty of journalism is rooted in uncertainty about how events will play out, from US politics to climate change. Take your pick of potential catastrophes, and there’s probably a group of journalists figuring out how to capture the full range of perspectives in 1,200 words.

And personally, I started writing a daily newsletter because of the way it freed me from having to write a definitive take in every story. Instead I could just show up a few times a week, tell you what I learned today, and give you some ways to think about what might happen next.

It’s not perfect, but it’s the best that I’ve come up with so far. If you have other ideas, though, I’m all ears.


Those good tweets

For more good tweets every day, follow Casey’s Instagram stories.



Talk to us

Send us tips, comments, questions, and your thoughts on AI coverage: casey@platformer.news and zoe@platformer.news.