On Monday, Stack Overflow, a question-and-answer platform where developers get help writing code, said it would temporarily ban users from posting answers generated by the buzzy new bot ChatGPT. The bot, a free product of the artificial intelligence startup OpenAI, has captivated tech enthusiasts since its surprise release on Wednesday. But while it can often be shockingly accurate in its answers, it can also be loudly and confidently wrong.
“The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce,” wrote the mods (emphasis theirs). “As such, we need the volume of these posts to reduce [...] So, for now, the use of ChatGPT to create posts here on Stack Overflow is not permitted. If a user is believed to have used ChatGPT after this temporary policy is posted, sanctions will be imposed to prevent users from continuing to post such content, even if the posts would otherwise be acceptable.” […]
This is one of several well-known failings of AI text generation models, also known as large language models or LLMs. These systems are trained by analyzing patterns in huge reams of text scraped from the web. They look for statistical regularities in this data and use them to predict which words should come next in any given sentence. This means, though, that they lack hard-coded rules for how certain systems in the world operate, leading to their propensity to generate “fluent bullshit.”
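To make the “statistical regularities” idea concrete, here is a deliberately crude toy: a bigram model that simply counts which word most often follows which in its training text, then predicts accordingly. Real LLMs use deep neural networks trained on vastly more data, but the underlying objective — predict the next word from patterns, with no model of whether the result is true — is the same, and the toy makes the failure mode easy to see. The corpus and function names below are illustrative, not from any real system.

```python
from collections import Counter, defaultdict

# Toy "training corpus" — real models ingest terabytes of web text.
corpus = "the cat sat on the mat and the cat ate and the cat slept".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — it follows "the" most often in the corpus
print(predict_next("sat"))  # "on"
```

The model will fluently continue any sentence its statistics cover, but it has no notion of cats, mats, or facts — which is exactly why its confident output can be confidently wrong.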
Stack Overflow’s move to ban ChatGPT capped off an unusually eventful three-day period in tech, in which early adopters alternately thrilled at the potential of a powerful new set of capabilities, and recoiled at the tool’s high potential for harm and disruption.
For years, tech giants and startups alike have been laying the groundwork for a world in which AI augments our productivity even as it threatens to overwhelm us with its output. Seemingly every big keynote I’ve attended over the past several years has devoted one or more segments to the coming AI era, as companies like Google, Microsoft and Meta strain to convince us that their innovations will advance the state of the art without plunging the world into chaos.
In the end, though, it has been the unconventional startup OpenAI that has arguably done the most to bring that coming AI era into focus: first with DALL-E, the powerful text-to-image generator that now often illustrates this newsletter; and now with ChatGPT, a chatbot that can handle an impressively wide variety of tasks: answering questions at a far greater depth than Google, Siri, or Alexa could typically handle; writing code and spotting mistakes in the code that others have written; and dashing off poems, song lyrics, and screenplays with surprising skill.
Screenshots of ChatGPT’s work flooded the Twitter timeline over the weekend, and by Sunday night OpenAI CEO Sam Altman said the tool had passed 1 million users.
And it’s easy to see the appeal. ChatGPT excels both at serious tasks — fixing broken code, writing syllabi, crafting sensitive emails — and dumb fun. I’ve used it to write a tribute to my favorite gay bar, a Real Housewives-style character tagline for Sonic the Hedgehog, and a theme song for Hard Fork. Not only does the technology do nearly all of this well, it does so instantly.
And for the moment at least, all of it is free to use, and remains uncluttered by advertising. (The costs to OpenAI are likely significant; Altman says the computing power to answer each query costs an average of a few cents.)
It’s instructive, I think, to compare this experience to Google, which for two decades now has been the default destination for many of the (non-creative) queries that early adopters are now running through ChatGPT. Google can answer plenty of questions perfectly fine, but it won’t spot errors in your code, it won’t write a recipe for you, and aside from suggesting a word or two, it won’t write emails or documents for you, either.