Blog

The Appearance of Wisdom: What Can Plato Teach Us About AI?

by Jamie Flinchbaugh on 03-17-26

Plato may be an unusual person to turn to for a perspective on AI. In trying to predict where AI goes and what it means for work, for learning, and even for humanity, we look to the history of technological adoption. The internet is the most common reference point, mostly because many of us lived it.

The fundamental thesis many present is that humanity survives technological adoption and often thrives through it, but this isn’t a foregone conclusion. Each wave of technological advancement comes at a cost, or a risk, that is up to us to overcome. Therefore, passively letting technology unfold is a path likely to minimize the gains and maximize the risk. Adoption of AI cannot be unthinking.

For past technological waves that threatened human cognition, we can look to the internet, the calculator, the printing press, or the invention of writing itself.

Plato’s Phaedrus, a dialogue between Socrates and Phaedrus, includes an Egyptian myth. In it, the god Theuth presents the concept of writing to King Thamus, stating:

“This invention, O king, will make the Egyptians wiser and will improve their memories; for it is an elixir of memory and wisdom that I have discovered.”

The response from Thamus, in part, includes this warning:

“For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.”

While this seems like it could have been written about AI, it is about the seemingly innocent technology of writing. Professor Neil Postman, in Technopoly, states, “Every technology is both a burden and a blessing; not either-or, but this-and-that.”

Will AI make people appear wise, or smart, or informed, or productive, when they are in fact not? Does it erode critical thinking while encouraging deference to whatever AI tells us? Does it create misplaced trust? All of these risks are already materializing, at least in part.

But the gains from writing far outweighed any harm. We, of course, wouldn’t have the views of Socrates and Plato without it. Maybe writing did hurt people’s memories, as my dependence on a grocery list would demonstrate, but the gains were transformational.

AI will surely have a cost to humanity, but if we seek to preserve critical thinking, discernment, creativity, and so on (both beyond and within AI), we can maximize the gains and minimize the risks.

This isn’t a verdict on whether AI is good or bad; it is here to stay. This is about what we do as a society in a world in which AI exists.

We must change education, at all levels, to elevate critical thinking and collaboration. The ability to own your thinking while engaging with others (whether that means another human or AI) will be increasingly essential. As the writer Wendell Berry put it, “It is not from ourselves that we will learn to be better than we are.” If we learn to be better from what lies outside ourselves, why can’t AI be part of bettering ourselves?

We must stop valuing the sheer production of materials as an indicator of work. Already in organizations, people are working harder because they can produce more material, which in turn forces everyone else to consume more of it. But beyond the fallacy that more is better, much of what is being produced has been called workslop, a term coined by researchers from the Stanford Social Media Lab and BetterUp Labs to describe AI-generated work content that lacks the substance to meaningfully advance a given task. Its creation adds to the work without advancing it. We have historically treated sitting in more meetings, writing more emails, and producing more presentations as signs of productivity. Peter Drucker challenges us to remember that “there is nothing so useless as doing efficiently that which should not be done at all.”

We know that AI is here. We know that its path forward is not yet fully determined. We know that it carries risks to humanity, both individually and collectively. And what I propose is that we must put more focus on how humans evolve in an AI world than we do on shaping AI itself.