
“You have not discovered a potion for remembering, but for reminding…”
Socrates
- What does a 2,000-year-old warning from Socrates have to do with AI in schools today?
- Are we mistaking access to information for real understanding?
- How can AI become a partner in deep learning—not just a shortcut to shallow thinking?
- Practical ideas for educators and leaders navigating the AI age with clarity and care.
I’ve had quite a bit of feedback in response to my last FutureMakers newsletter, which featured a number of articles on the topic of artificial intelligence. One response came from my daughter and her husband, who had come across an interesting article titled Socrates on the forgetfulness that comes with writing, which they thought might interest me.
The article records a conversation with Socrates over 2,000 years ago, in which he warns that the invention of writing would lead to the decline of memory and true understanding. Socrates refers to a conversation between Theuth (an ancient Egyptian divinity credited with first discovering number and calculation, geometry and astronomy, as well as the games of checkers and dice, and, above all else, writing) and Thamus (the Egyptian king at the time). Here’s a quote from the article that starts things off… (emphasis mine)
“… when they came to writing, Theuth said: “O King, here is something that, once learned, will make the Egyptians wiser and will improve their memory; I have discovered a potion for memory and for wisdom.” Thamus, however, replied: “O most expert Theuth, one man can give birth to the elements of an art, but only another can judge how they can benefit or harm those who will use them. And now, since you are the father of writing, your affection for it has made you describe its effects as the opposite of what they really are. In fact, it will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others, instead of trying to remember from the inside, completely on their own. You have not discovered a potion for remembering, but for reminding; you provide your students with the appearance of wisdom, not with its reality. Your invention will enable them to hear many things without being properly taught, and they will imagine that they have come to know much while for the most part they will know nothing. And they will be difficult to get along with, since they will merely appear to be wise instead of really being so.”
Reading these words today, it’s hard not to hear the echoes in our current conversations about artificial intelligence – especially generative tools like ChatGPT. The similarities are striking. Socrates feared that writing would erode our ability to think deeply. Now, many educators and policymakers express similar concerns about AI. But history offers us a helpful perspective – and a challenge. Just as writing ultimately became an essential tool for learning and knowledge-building, so too might AI, if we approach it with the same mix of caution and creativity.
New Technologies Often Trigger Fear Before Transformation
Socrates’ critique of writing was rooted in his concern for deep, embodied wisdom – knowledge that is lived, practiced, and internalised through dialogue and reflection. He feared that the externalisation of knowledge (i.e. writing it down) would encourage a kind of cognitive laziness. Why commit something to memory or wrestle with a concept internally, when you can simply refer back to a written record?
This fear of shallowness – of losing depth, nuance, and meaningful engagement – has echoed through history with each new technological leap. When the printing press made texts widely accessible, there were concerns it would flood society with low-quality writing and reduce the value of oral teaching. Television was feared for turning complex ideas into entertainment soundbites. The internet brought fears of distraction, misinformation, and shortened attention spans. Now, it’s AI’s turn.
The common thread in all these responses is a deep anxiety about how we come to know, not just what we know. New technologies shift the terrain of learning. They make knowledge more abundant, more accessible – but potentially more superficial if we don’t adapt our practices alongside them.
But history also shows us something else: each time, society and education have evolved. The printing press gave rise to public education and literacy movements. The internet enabled global collaboration, open learning, and new forms of creativity. These transformations didn’t happen automatically – they required intentional shifts in pedagogy, policy, and culture.
So rather than ask, “Will AI harm education?” the more constructive question is: “How will we shape education (and society) to adapt and evolve in ways that deepen, rather than dilute, learning?” Avoiding shallowness isn’t about rejecting the tool. It’s about reimagining the context in which it’s used – ensuring that AI becomes a partner in deeper inquiry, not a substitute for it.
From Memory to Cognition: Rethinking What It Means to Know
The world in which Socrates lived valued memory as the cornerstone of learning. To know something was to carry it within you – to recall it, recite it, and reproduce it in discourse. This made sense in a time when information was scarce and oral tradition was paramount.
But in today’s education systems, we’ve already shifted – at least in theory – toward valuing critical thinking, creativity, and the ability to synthesise information from multiple sources. Memorisation still has its place, particularly in building foundational knowledge and fluency. Yet it is no longer the pinnacle of educational achievement (though some still view exams testing memory and recall as the ultimate measure of success).
The introduction of AI is forcing us to take this shift more seriously. When machines can generate summaries, write essays, or solve equations in seconds, it becomes clear that mere recall or reproduction is not enough. The role of the learner must evolve. To support that evolution, we must intentionally design learning experiences that foster the kind of knowledge that is relational, contextual, and transferable – that is, knowledge that enables students to make connections, apply ideas in new situations, and think critically about meaning and impact.
So what might this look like in practice? The following list provides just a few examples of the sorts of things great teachers are doing already, and which we should be encouraging more of as part of our learning design:
- Problem-based learning where students investigate real-world challenges – such as designing a sustainable local transport plan, analysing media bias in coverage of an event, or proposing ethical guidelines for the use of AI in healthcare. These tasks require research, synthesis, perspective-taking, and solution-focused thinking.
- Cross-disciplinary projects that mirror how knowledge is applied in the real world – for example, combining science, ethics, and policy in a unit on climate change or public health. In these kinds of learning experiences students must integrate information from different domains and understand context, trade-offs, and systems thinking.
- Metacognitive practices that ask students to reflect on how they learn, not just what they’ve learned. For example, using learning journals, peer feedback, or structured self-assessment helps students develop awareness of their own thinking processes.
- Socratic dialogue and debate, where students are encouraged to question assumptions, explore alternative viewpoints, and develop arguments that go beyond surface-level opinions.
- Authentic assessments such as portfolios, exhibitions, simulations, or oral defences – where learners demonstrate understanding in ways that are personal, purposeful, and often unpredictable. These forms of assessment prioritise depth over speed or rote performance.
Each of these approaches challenges students to engage with knowledge not as static content to be recalled, but as a dynamic resource to be interpreted, applied, and interrogated. They reflect the kind of human thinking that AI cannot replicate (yet?): judgement, empathy, value-driven reasoning, and the capacity to navigate ambiguity.
In this context, AI is not a threat to learning, but a catalyst for reimagining what we truly want education to achieve. It challenges us to clarify the distinction between knowing about something and understanding it. That distinction is not new – it’s the same one Socrates was wrestling with over 2,000 years ago. Back then he made a sharp observation about writing: that it offers the appearance of wisdom without its substance. Writing, he argued, does not help us remember, but only reminds us of what others have said. In other words, it can store and transmit information, but it cannot teach us to know in the deepest sense.
That insight feels especially relevant in the age of AI. AI can generate plausible text, summarise arguments, even imitate thought. But it doesn’t understand in the way we think of understanding. This presents a danger: the illusion of wisdom. Students may feel they “know” something simply because they can prompt AI to produce a coherent answer. The same could be said of some teachers when using an AI to generate lesson plans. But just as we’ve learned how important it is to critically engage with written sources, we must now apply that same discipline to interrogating and refining what AI produces for us.
Implications for Educators and Policymakers
Rather than resist AI, we need to be thinking more about how we might re-frame our approach. Students, too, must be part of this conversation – not just as users of AI, but as ethical agents, creative collaborators, and informed decision-makers in their own learning. The key is to move from fear and restriction to thoughtful integration – recognising that while the technology is powerful, it’s how we use it that will ultimately shape its impact on learning. In the end, our response to AI is not just a question of tools or techniques – it’s a reflection of what we believe education is for: to cultivate wisdom, agency, and the capacity to contribute meaningfully to the world.
Here are some key areas you could be focusing on in your own context:
AI literacy is now essential
Understanding how AI works (e.g. its strengths, limitations, and biases) is as foundational today as learning to read and write was in Socrates’ time.
For example, a teacher might guide students in using an AI tool like ChatGPT to generate a draft essay introduction, then have them evaluate it together: What does it get right? Where is it vague or generic? What perspectives are missing? This opens up conversations about reliability, voice, and authorship.
Just as we teach students to critically read a source, we now need to teach them to critically engage with AI-generated content. This is not just digital literacy – it’s a new form of epistemic literacy.
Pedagogy must evolve
Rather than asking students to “find an answer,” we should be designing tasks that ask what should we do with this answer? or how might others see this differently? We should be designing tasks that emphasise judgement, critical analysis, and ethical reasoning – not just task completion.
For instance, in a social studies unit, students could prompt an AI tool to summarise different historical interpretations of a conflict – then compare these to their textbook, discuss potential biases, and create a timeline that includes both factual events and differing viewpoints. The AI provides a starting point – but the thinking happens in the critique and reconstruction.
This type of task not only makes the most of what AI can offer but strengthens the very human capacities of discernment, empathy, and reasoned judgement.
Assessment systems need revisiting
Traditional assessments that reward formulaic answers are ripe for disruption. If an AI can ace your test or write your assignment, the problem may not be the AI – it’s more likely a problem with the task design. Learning tasks should focus on process, originality, and application in real-world contexts – not just outputs that can be easily replicated by machines.
For example, instead of a written report on renewable energy, imagine students presenting a proposal to their local council for improving sustainability in their community. They might use AI tools to research policy models or generate a data summary – but they must explain their rationale, cite sources, anticipate counter-arguments, and field live questions.
Assessments like this are harder to “fake” and far richer in evidence of genuine understanding.
Teacher support is vital
None of this can happen unless we support teachers to engage with AI confidently and ethically. This means professional learning that goes beyond technical training to include reflective dialogue about practice, values, and pedagogy.
For example, you could set aside time in staff meetings for teachers to experiment with AI tools together – testing lesson ideas, comparing outputs, and discussing how to scaffold student use. As principal, you could invite teachers to co-create a set of classroom guidelines for ethical AI use, shaped by their shared values and curriculum goals.
Just as students need models of critical engagement with AI, so do teachers. Banning AI use may feel like the safest short-term solution, but thoughtful integration is what leads to long-term empowerment.
Shaping the Future, Not Resisting It
Socrates was not wrong to be concerned about the erosion of wisdom. His caution reminds us that the medium through which we learn affects how we learn – and who we become. But his fears about writing did not stop humanity from embracing its possibilities. Instead, we learned to teach differently. We developed literacy. We created libraries, books, universities, and digital archives. We embedded writing into the fabric of civilisation.
We are at a similar juncture now. The question isn’t whether AI will change education – it already is. The question is: will we lead that change with vision, integrity, and creativity? Or will we simply react with fear? If we want students not just to appear wise but to be wise, our job is to ensure they learn how to think, reflect, and discern – even when AI can do the easy parts for them.
As educators, leaders, and policy shapers, we now face a pivotal choice: will we respond to AI with the same imagination and resolve that previous generations brought to writing, printing, and digital networks? The challenge is not just to manage AI, but to design education in ways that deepen understanding, affirm human values, and prepare learners for a world where thinking with machines is part of everyday life.
Let’s resist the temptation to retreat. Let’s step forward – with wisdom, with curiosity, and with a shared commitment to shaping a future where education doesn’t just adapt to change, but helps lead it.

