In a fascinating op-ed, David Bell, a professor of history at Princeton, argues that “AI is shedding enlightenment values.” As someone who has taught writing at a similarly prestigious university, and who has written about technology for the past 35 or so years, I had a deep response.
Bell’s is not the argument of an AI skeptic. For his argument to work, AI has to be pretty good at reasoning and writing. It’s an argument about the nature of thought itself. Reading is thinking. Writing is thinking. Those are almost clichés—they even turn up in students’ assessments of using AI in a college writing class. It’s not a surprise to see these ideas in the 18th century, and only a bit more surprising to see how far Enlightenment thinkers took them. Bell writes:
The great political philosopher Baron de Montesquieu wrote: “One should never so exhaust a subject that nothing is left for readers to do. The point is not to make them read, but to make them think.” Voltaire, the most famous of the French “philosophes,” claimed, “The most useful books are those that the readers write half of themselves.”
And in the late 20th century, the great Dante scholar John Freccero would tell his classes, “The text reads you”: How you read The Divine Comedy tells you who you are. You inevitably find your reflection in the act of reading.
Is the use of AI an aid to thinking, a crutch, or a replacement? If it’s either a crutch or a replacement, then we have to go back to Descartes’s “I think, therefore I am” and read it backward: What am I if I don’t think? What am I if I have offloaded my thinking to some other device? Bell points out that books guide the reader through the thinking process, while AI expects us to guide the process and all too often resorts to flattery. Sycophancy isn’t limited to a few recent versions of GPT; “That’s a great idea” has been a staple of chatbot responses since their earliest days. A dull sameness goes along with the flattery. The paradox of AI is that, for all the talk of general intelligence, it really doesn’t think better than we do. It can access a wealth of information, but it ultimately gives us (at best) an unexceptional average of what has been thought in the past. Books lead you through radically different kinds of thought. Plato is not Aquinas is not Machiavelli is not Voltaire. (And for great insights on the transition from the fractured world of medieval thought to the fractured world of Renaissance thought, see Ada Palmer’s Inventing the Renaissance.)
We’ve been tricked into thinking that education is about preparing to enter the workforce, whether as a laborer who can plan how to spend his paycheck (readin’, writin’, ’rithmetic) or as a potential lawyer or engineer (bachelor’s, master’s, doctorate). We’ve been tricked into thinking of schools as factories: just look at any school built in the 1950s or earlier and compare it to an early 20th-century manufacturing facility. Take the children in, process them, push them out. Evaluate them with exams that don’t measure much more than the ability to take exams, not unlike the benchmarks that the AI companies are constantly quoting. The result is that students who can read Voltaire or Montesquieu as a dialogue with their own thoughts, who could potentially make a breakthrough in science or technology, are rarities. They’re not the students our institutions were designed to produce; they have to struggle against the system, and they frequently fail. As one elementary school administrator told me, “They’re handicapped, as handicapped as the students who come here with learning disabilities. But we can do little to help them.”
So the difficult question behind Bell’s article is: How do we teach students to think in a world that will inevitably be full of AI, whether or not that AI looks like our current LLMs? In the end, education isn’t about collecting facts, duplicating the answers in the back of the book, or getting passing grades. It’s about learning to think. The educational system gets in the way of education by encouraging short-term thinking: if I’m measured by a grade, I should do everything I can to optimize that metric. All metrics will be gamed. And even when they aren’t, metrics are shortcuts around the real issues.
In a world full of AI, retreating to stereotypes like “AI is damaging” and “AI hallucinates” misses the point and is a sure route to failure. What’s damaging isn’t the AI but the set of attitudes that make AI just another tool for gaming the system. We need a way of thinking with AI, of arguing with it, of completing AI’s “book” in a way that goes beyond maximizing a score. In this light, so much of the discourse around AI has been misguided. I still hear people say that AI will save you from needing to know the facts, that you won’t have to learn the dark and difficult corners of programming languages. But as much as I personally would like to take the easy route, facts are the skeleton on which thinking is based. Patterns arise out of facts, whether those patterns are historical movements, scientific theories, or software designs. And AI’s errors are easily uncovered when you engage actively with its output.
AI can help to assemble facts, but at some point those facts need to be internalized. I can name a dozen (or two or three dozen) important writers and composers whose best work came around 1800. What does it take to go from those facts to a conception of the Romantic movement? An AI could certainly assemble and group those facts, but would you then be able to think about what that movement meant (and continues to mean) for European culture? What are the bigger patterns revealed by the facts? And what would it mean for those facts and patterns to reside only within an AI model, without human comprehension? You need to know the shape of history, particularly if you want to think productively about it. You need to know the dark corners of your programming languages if you’re going to debug a mess of AI-generated code. Returning to Bell’s argument, the ability to find patterns is what allows you to complete your half of Voltaire’s book. AI can be a tremendous aid in finding those patterns, but as human thinkers, we have to make those patterns our own.
That’s really what learning is about. It isn’t just collecting facts, though facts are important. Learning is about finding relationships and understanding how those relationships change and evolve. It’s about weaving the narrative that connects our intellectual worlds together. That’s enlightenment. AI can be a valuable tool in that process, as long as you don’t mistake the means for the end. It can help you come up with new ideas and new ways of thinking. Nothing says that you can’t have the kind of mental dialogue that Bell writes about with an AI-generated essay. ChatGPT may not be Voltaire, but not much is. But if you don’t have the kind of dialogue that lets you internalize the relationships hidden behind the facts, AI is a hindrance. We’re all prone to laziness, intellectual and otherwise. What’s the point at which thinking stops? What’s the point at which knowledge ceases to be your own? Or, to go back to the Enlightenment thinkers, when do you stop writing your share of the book?
That’s not a choice AI makes for you. It’s your choice.