Everybody knows about ChatGPT. And everybody knows about ChatGPT’s propensity to “make up” facts and details when it needs to, a phenomenon that’s come to be called “hallucination.” And everyone has seen arguments that this will bring about the end of civilization as we know it.

I’m not going to argue with any of that. None of us want to drown in masses of “fake news,” generated at scale by AI bots that are funded by organizations whose intentions are most likely malign. ChatGPT could easily outproduce all the world’s legitimate (and, for that matter, illegitimate) news agencies. But that’s not the issue I want to address.

I want to look at “hallucination” from another direction. I’ve written several times about AI and art of various kinds. My criticism of AI-generated art is that it’s all, well, derivative. It can create pictures that look like they were painted by Da Vinci, but we don’t really need more paintings by Da Vinci. It can create music that sounds like Bach, but we don’t need more Bach. What it really can’t do is make something completely new and different, and that’s ultimately what drives the arts forward. We don’t need more Beethoven. We need someone (or something) who can do what Beethoven did: horrify the music industry by breaking music as we know it and putting it back together differently. I haven’t seen that happening with AI. I haven’t yet seen anything that would make me think it might be possible. Not with Stable Diffusion, DALL-E, Midjourney, or any of their kindred.

Until ChatGPT. I haven’t seen this kind of creativity yet, but I can get a sense of the possibilities. I recently heard about someone who was having trouble understanding some software that someone else had written. They asked ChatGPT for an explanation. ChatGPT gave an excellent explanation (it is very good at explaining source code), but there was something funny: it referred to a language feature that the user had never heard of. It turned out that the feature didn’t exist. It made sense, though; it was something that certainly could have been implemented. Maybe it was discussed as a possibility on some mailing list that found its way into ChatGPT’s training data, but was never implemented? No, not that either. The feature was “hallucinated,” or imagined. This is creativity: maybe not human creativity, but creativity nonetheless.

What if we viewed an AI’s “hallucinations” as the precursor of creativity? After all, when ChatGPT hallucinates, it is making up something that doesn’t exist. (And if you ask it, it is very likely to admit, politely, that it doesn’t exist.) But things that don’t exist are the substance of art. Did David Copperfield exist before Charles Dickens imagined him? It’s almost silly to ask that question (though there are certain religious traditions that view fiction as “lies”). Bach’s works didn’t exist before he imagined them, nor did Thelonious Monk’s, nor did Da Vinci’s.

We have to be careful here. These human creators didn’t do great work by vomiting out a lot of randomly generated “new” stuff. They were all closely tied to the histories of their various arts. They took one or two knobs on the control panel and turned them all the way up, but they didn’t disrupt everything. If they had, the result would have been incomprehensible, to themselves as well as their contemporaries, and would have led to a dead end. That sense of history, that sense of extending art in one or two dimensions while leaving others untouched, is something that humans have and that generative AI models don’t. But could they?

What would happen if we trained an AI like ChatGPT and, rather than viewing hallucination as an error to be stamped out, we optimized for better hallucinations? You can ask ChatGPT to write stories, and it will comply. The stories aren’t all that good, but they are stories, and nobody claims that ChatGPT has been optimized as a story generator. What would it be like if a model were trained to have imagination plus a sense of literary history and style? And if it optimized its stories to be great stories rather than lame ones? The bottom line with ChatGPT is that it’s a language model. It’s just a language model: it generates text in English. (I don’t really know about other languages, but I tried to get it to write Italian once, and it wouldn’t.) It’s not a truth teller; it’s not an essayist; it’s not a fiction writer; it’s not a programmer. Everything else that we perceive in ChatGPT is something we as humans bring to it. I’m not saying that to caution users about ChatGPT’s limitations; I’m saying it because, even with those limitations, there are hints of so much more that might be possible. It hasn’t been trained to be creative. It has been trained to mimic human language, most of which is rather dull to begin with.
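
As a rough illustration of what “optimizing for hallucination” could mean with today’s tools: sampling temperature is about the only knob current APIs expose that trades predictability for invention. Here’s a minimal sketch, assuming the OpenAI Python client; the model name and prompt are placeholders, and raising the temperature is only a crude stand-in for actually training a model to hallucinate well.

```python
# A minimal sketch, assuming the OpenAI Python client (pip install openai).
# Raising the sampling temperature makes the model favor less likely tokens,
# trading predictability for invention; it is a crude proxy, not a method.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model works
    messages=[
        {"role": "system", "content": "You are an experimental fiction writer."},
        {"role": "user", "content": "Write a short story that breaks one "
                                    "convention of the ghost story and keeps "
                                    "the rest intact."},
    ],
    temperature=1.3,  # above 1.0 pushes toward the improbable; too high is noise
)

print(response.choices[0].message.content)
```

Temperature only randomizes the choice of the next token; it has no sense of style or history, which is exactly the gap the question above points at.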

Is it possible to build a language model that, without human intervention, can experiment along the lines of “that isn’t great, but it’s imaginative; let’s explore it more”? Is it possible to build a model that understands literary style, knows when it’s pushing the boundaries of that style, and can break through into something new? And can the same thing be done for music or art?
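
Read operationally, that first question describes a generate-and-critique loop: produce many drafts, score them separately for polish and for novelty, and revise around the novel ones rather than the safe ones. Below is a hypothetical sketch of such a loop; generate, score_quality, and score_novelty are placeholders for models that nobody yet knows how to build well.

```python
# A hypothetical generate-and-explore loop. generate(), score_quality(), and
# score_novelty() stand in for models that do not exist in this form today;
# the loop keeps revising around the most imaginative draft, not the safest.
from typing import Callable, List


def explore(
    seed: str,
    generate: Callable[[str], str],
    score_quality: Callable[[str], float],
    score_novelty: Callable[[str], float],
    rounds: int = 5,
    candidates_per_round: int = 8,
) -> List[str]:
    keepers: List[str] = []
    prompt = seed
    for _ in range(rounds):
        drafts = [generate(prompt) for _ in range(candidates_per_round)]
        # "That isn't great, but it's imaginative; let's explore it more."
        most_novel = max(drafts, key=score_novelty)
        if score_quality(most_novel) > 0.5:  # arbitrary "good enough" bar
            keepers.append(most_novel)
        prompt = most_novel  # iterate on the novel draft, polished or not
    return keepers
```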

A few months ago, I would have said “no.” A human might be able to prompt an AI to create something new, but an AI would never be able to do this on its own. Now, I’m not so sure. Making stuff up might be a bug in an application that writes news stories, but it is central to human creativity. Are ChatGPT’s hallucinations a down payment on “artificial creativity”? Maybe so.
