
AI Tested on Plato’s 2,400-Year-Old Problem: ChatGPT’s Response Sparks Debate

Plato vs AI

Ancient Greek Philosophy Meets Artificial Intelligence

A new study has revealed how ChatGPT was challenged with one of Plato’s most famous mathematical problems, a puzzle roughly 2,400 years old, and how its response surprised researchers. Conducted by experts from the University of Cambridge and the Hebrew University of Jerusalem, the experiment tested whether artificial intelligence can approach knowledge and reasoning in a way similar to human learning.

At the center of this study lies Plato’s classic question posed through Socrates: “How do you double the area of a square?” While the puzzle has intrigued philosophers and mathematicians for centuries, ChatGPT’s attempt provided both creative insights and flawed reasoning, opening a new conversation about the limits of machine learning.


The Challenge of Plato’s Square Problem

Plato introduced the problem in his dialogue “Meno”, in which Socrates guides an enslaved boy, untrained in mathematics, to discover the solution without formal instruction. The episode has long been cited in debates about whether knowledge is innate or acquired through experience.

When researchers posed the same challenge to ChatGPT, the AI system initially attempted a non-traditional approach, relying on algebraic reasoning rather than geometry. According to Dr. Nadav Marco, one of the lead authors, “The system tried to generate a solution based on patterns from its previous knowledge, much like a human learner would, even though it made logical mistakes.”
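For context on the mathematics involved: in the Meno, Socrates leads the boy to build a new square on the diagonal of the original one, and that square has exactly twice the area. An algebraic route arrives at the same place, since a square of side s has area s², so a doubled square needs a side of length √(2s²) = s√2, which is precisely the length of the original diagonal. The short Python check below is our own illustration of that equivalence, not a reconstruction of ChatGPT’s actual output in the study.

```python
import math

s = 3.0                       # side of the original square (any positive value works)
original_area = s ** 2        # area of the original square

# Algebraic route: a square with double the area needs side sqrt(2 * s^2) = s * sqrt(2)
algebraic_side = math.sqrt(2 * original_area)

# Geometric route from the Meno: build the new square on the original square's diagonal
diagonal = math.sqrt(s ** 2 + s ** 2)

print(math.isclose(algebraic_side, diagonal))           # True: both routes give the same side
print(math.isclose(diagonal ** 2, 2 * original_area))   # True: the new square has twice the area
```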

This outcome demonstrated that AI’s problem-solving path can mirror human trial-and-error learning, instead of simply repeating pre-learned information.


Surprising Mistakes and Insights

To push the experiment further, the team asked ChatGPT to double the area of a rectangle. This time, the AI argued that using the diagonal could not achieve the desired result, ultimately denying the existence of a geometric solution. The answer left the researchers both puzzled and fascinated.
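As an aside on the geometry: erecting a square on the diagonal of an a-by-b rectangle yields an area of a² + b², which equals the doubled area 2ab only when a = b, because a² + b² minus 2ab is (a minus b)², which is never negative. A geometric doubling nevertheless exists, since simply extending one side to twice its length doubles the area. The sketch below works through an illustrative example of our own; the values are not taken from the study.

```python
import math

a, b = 4.0, 2.0                            # sides of a non-square rectangle (illustrative values)
area = a * b                               # original area: 8.0

# Carrying the square trick over naively: erect a square on the rectangle's diagonal
diagonal_square_area = a ** 2 + b ** 2     # 20.0, which overshoots 2 * area = 16.0

# A construction that always works geometrically: stretch one side to twice its length
doubled_area = (2 * a) * b                 # 16.0

print(math.isclose(diagonal_square_area, 2 * area))  # False whenever a != b
print(math.isclose(doubled_area, 2 * area))          # True: doubling one side doubles the area
```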

What stood out was that the incorrect response was not copied from training data but rather generated through reasoning. This indicated that ChatGPT can synthesize new answers by combining existing concepts, even when the outcome is mathematically inaccurate.


What Does This Reveal About AI Learning?

The research highlights a striking similarity between ChatGPT’s learning process and Lev Vygotsky’s educational theory of the “zone of proximal development.” This concept describes the gap between what a learner already knows and what they can achieve with proper guidance.

In essence, AI is not simply a database of answers—it attempts to navigate knowledge gaps, experimenting with reasoning in ways that resemble human learners. This suggests that future AI systems could potentially be trained not just to deliver answers, but to demonstrate learning progression.


Expert Reactions: Should Students Trust AI Proofs?

Professor Andreas Stylianides, a mathematics education specialist, cautioned against over-reliance on AI:

“Unlike textbook proofs, students should not assume that AI-generated proofs are valid. However, evaluating these AI responses could itself become a valuable educational exercise, encouraging learners to analyze and critique reasoning.”

This perspective underscores the role of AI not as a replacement for human understanding but as a tool for enhancing critical thinking.


Why This Matters for Philosophy and Education

The study, published in the International Journal of Mathematical Education in Science and Technology, reveals more than just AI’s limitations. It reflects how artificial intelligence is forcing humanity to revisit age-old philosophical debates:

  • Is knowledge innate or learned?
  • Can machines truly “understand,” or do they simply mimic patterns?
  • Should AI mistakes be seen as failures—or as evidence of creative reasoning?

For educators, the implications are profound. AI could soon play a central role in teaching problem-solving skills, provided students are also trained to question and evaluate machine outputs.


The Bigger Picture: AI as a Modern Socratic Dialogue

Just as Socrates used questioning to guide his students toward deeper insights, AI may serve as a digital counterpart to the Socratic method, pushing learners to confront misconceptions and refine their reasoning.

While ChatGPT’s responses were not mathematically flawless, they mirrored the process of human discovery—mistakes, insights, and reflection. This opens the possibility of a new educational paradigm where AI acts as a partner in learning, rather than just a provider of solutions.
