Part 2: When AI Supports Thinking and When It Starts to Replace It

By: David - Principal, Researcher, and Co-Founder

After our first student conversation about AI, I left with a deeper understanding of what is actually happening in classrooms from the student perspective. Students spoke candidly about when they use AI, why they use it, and how often they turn to it when they are confused, unsupported, or unsure what a teacher is asking of them.

That first conversation helped surface student behavior.

This second meeting helped us go deeper into student thinking.

As a principal, that distinction matters to me. It is one thing to know that students are using AI. It is another thing entirely to understand how they believe it is affecting their learning, their confidence, their independence, and their ability to think for themselves.

That was the focus of our second meeting.

I wanted to better understand when AI actually helps students think more deeply and when it begins to do too much of the thinking for them. I also wanted to hear what students believe teachers may not yet understand about AI’s role in learning.

What students shared was thoughtful, honest, and nuanced.

They did not describe AI in simplistic terms. They did not frame it as entirely good or entirely harmful. Instead, they described something more complex and more useful: AI can either support thinking or replace it, depending on how it is used.

One student shared that using a structured AI tool, like a teacher-created Gem, helped her learn how to prompt less structured AI in better ways later on. That comment stood out to me because it suggests something important: when students are given structure, they can begin to use AI more intentionally. In that case, the tool was not replacing thought. It was helping shape it.

That is an important distinction.

Students also talked about ways to use AI without crossing into cheating. One idea they raised was asking AI to show the steps rather than just giving the answer. In their view, this matters because the process leads students toward their own reasoning. It slows things down. It requires engagement. It invites thinking rather than bypassing it.

That tells me students do understand the difference between support and substitution.

At the same time, they were very honest about the risks.

One student said plainly that using AI can make him feel too dependent on it. That statement captured something I suspect many students feel but may not always say out loud. Even when they know AI can be useful, they are aware that it can quickly become something they lean on too much. They understand that a tool designed to help can also become a crutch.

That level of self-awareness is important. It tells me students are not simply consuming AI without reflection. Many of them are already wrestling with questions of dependence, balance, and personal responsibility.

Students also pointed out something else we cannot ignore: AI is not always right.

They noted that it can provide incorrect answers, and when it does, students sometimes have to challenge it and push back. That may sound simple, but it highlights a much larger issue. Effective AI use requires more than access. It requires judgment. It requires students to know enough to question what they are seeing, to recognize when something feels off, and to keep pressing instead of accepting the first response.

That is a form of literacy we have not fully taught yet.

Another powerful takeaway from this meeting was students’ view of who is benefiting most from AI right now. Some felt that teachers are benefiting more than students are, while students often benefit the least because they are not allowed to use AI in meaningful ways. That observation stayed with me.

If adults are exploring the advantages of AI for planning, efficiency, and productivity while students mostly encounter restriction, then we need to pause and consider what message that sends. Students notice that difference. They notice when the adults around them are encouraged to experiment with new tools while they are mostly warned about misuse.

That does not mean there should be no guardrails. It does mean we should think carefully about whether our approach is helping students learn how to use AI responsibly or simply keeping them at a distance from a tool that is already shaping their world.

Students also made an important connection between AI use and instructional design.

They talked about the value of checkpoints that do not count toward their grade. In their view, those moments matter because they reduce pressure and create opportunities to learn along the way. Without low-stakes checkpoints, students said they are more likely to use AI simply to find answers and complete tasks. But when learning includes structured, lower-pressure opportunities to practice and get feedback, students are more likely to stay engaged in the process.

That insight deserves attention.

It suggests that student AI use is not just about temptation or character. It is also about the conditions we create in our classrooms. When everything feels high stakes, efficiency becomes more attractive than learning. When students have room to make mistakes, ask questions, and build understanding over time, they are less likely to use AI as a shortcut.

That is not just an AI issue. That is a teaching and learning issue.

One of the clearest lessons from this second meeting is that many students still do not fully understand how AI can be used to support learning in productive ways. Because of that, they are often unable to clearly describe what strong, responsible use actually looks like. That matters.

It tells me that students do not just need rules about AI. They need models. They need examples. They need opportunities to learn how to use these tools in ways that strengthen thinking rather than weaken it.

In other words, they need guidance.

As I reflect on this second conversation, I keep coming back to the same idea: students are more thoughtful about this than we sometimes assume. They are aware of the tradeoffs. They can name the difference between help and dependence. They understand that AI can support learning in one moment and undercut it in another.

That kind of honesty is exactly why student voice matters.

If schools want to respond wisely to AI, then we cannot build our response around fear alone. We cannot base it only on adult assumptions or worst-case scenarios. We have to listen carefully to what students are telling us about their actual experience.

This second meeting reminded me that the real challenge is not deciding whether AI should exist in education. It already does. The real challenge is whether we are helping students use it in ways that preserve thinking, deepen learning, and build independence.

That is the work in front of us.

And our students have more to teach us about that than we may realize.
