Stop Just Treating AI Like an Issue of Morality. Instead, Let’s Talk.
Jack Zhou | 1/14/26
The only thing on my wishlist for Christmas this year was AI. I mean, come on: ChatGPT has practically been my private tutor, whether I'm trying to understand physics before a test or fix my grammar in English essays. Yet there is one field where I have never used AI: speech and debate.
But the reason is not quite that simple. It is not that I think AI is a terrible resource for prepping extemp speeches or cutting cards (although it's atrociously bad at cutting cards). Rather, I'm scared of how my usage will be perceived by my fellow debaters. My team has a wide variety of debaters, each with their own opinions. One teammate uses AI to find sources for every single extemp speech, while others view AI as the epitome of mental degradation and immorality. If I am being honest, I have always leaned toward the latter, viewing AI as wrong because of the harm to critical thinking and ethics, along with all of the other points everyone has heard repeated by now. Yet the issue of AI is not as black and white as it may seem.
In fact, AI can have many positive uses that do not necessarily compromise morals. For example, 2025 NSDA US Extemp Champion Robert Zhang stated in an interview with EIF that he has used Perplexity (an AI-powered search engine) to search for court cases. While some people may use AI to prep entire speeches, diminishing any form of critical thinking, using AI as a search engine can maintain critical thinking while improving efficiency. This suggests that it is not the usage of AI that is inherently bad, but rather the type of usage. There is no doubt that AI eliminates the need for certain skills, but it also frees competitors to focus on others, a trend that is evident in extemp's history. Extemp has evolved significantly over the decades. What was once an event of filing physical copies of research before competition has become one where the internet lets competitors access practically anything they want during the draw. Thus, an argument could be made that AI is simply the next step in improving research efficiency.
AI can also be used as a practice tool. More and more, I see AI-based programs being created that could support debaters without sacrificing critical thinking. While most of these tools are in their infancy and still rough around the edges, the idea of a personal practice-round companion on your laptop could be a game changer for accessibility. Even an AI that helps people research more efficiently could close the prep gap between large and small schools. While arguments can certainly be made that AI diminishes critical thinking, its value as an accessibility resource deserves acknowledgment as well.
Now, I need to be clear that I am not writing this blog to take a definitive stance on AI. (If you want to read a blog that takes a strong stance on AI, read Jana Schodzinski's blog here.) I recognize that alongside all the benefits I mentioned, AI brings just as many harms. All this blog argues is that we need to stop treating AI as a black-and-white moral issue and instead recognize it as a part of this activity's future, one with many dimensions. No matter what, there will be people who use AI. We have to recognize that these people are not morally flawed individuals, but rather people who hold different opinions on the role of AI. Instead of ostracizing them, we need to talk to them, because the truth is that they could have really good reasons to use AI. And if they do not, we can try to convince them to avoid it. The key here is dialogue, which only comes from recognizing the validity of each side.
The importance of discourse becomes even more critical in the context of rules set by the National Speech and Debate Association and other forensics organizations. For example, the Texas Forensics Association (TFA) has banned AI in events like Congress. Despite this, it is incredibly unlikely that such bans will lead to widespread change because of how easy the rules are to violate. At one tournament in Texas, the person running extemp draw banned the usage of AI in the room and even walked around to check people's screens, yet as soon as she looked away, competitors would switch tabs to ChatGPT. Even NSDA recognizes the difficulty of enforcement, stating that "an outright ban was difficult to enforce," and as a result it has no such rule. Clearly, rules are not enough to create widespread change. Community norms are what truly drive behavior.
One of the biggest catalysts of change is social norms. In the speech and debate community, governing bodies like NSDA create rules, but it is up to social norms not only to enforce them, but also to create unwritten rules that change with the times and meet the needs of competitors. For example, nowhere in the NSDA High School Unified Manual does the word "kritik" even appear in the rules, yet kritiks have become an integral part of debate events. In extemp, nowhere does it say that judges are obligated to provide time signals, yet in many rounds judges provide them anyway. What both of these examples have in common is the underlying force of social norms. It has become customary for kritiks to be allowed and for time signals to be given largely because the community accepts these norms. The impact of these norms runs deep. A mutual understanding between competitors of what is acceptable creates consistent conditions that allow for fair, equitable competition. Unfortunately, artificial intelligence threatens all of that, not because of any inherent harm in AI, but because of our lack of discourse. That lack of discourse has prevented community norms from forming, leaving thousands of people unsure where the future of this activity lies and leaving the space open for inequity to take root.
The Solution
We cannot keep treating AI like it's beneath us, because in reality it's right in front of us. AI is going to be a part of the future of speech and debate no matter what. The question is not whether AI will be used, but how we, as a community, decide to approach or even integrate it responsibly. Here's what we can do:
For Individual Competitors:
Engage in honest conversations with your teammates, coaches, and competitors about AI usage. Instead of assuming the worst about someone who uses AI differently than you do, ask them why. Understanding different perspectives is the first step toward building community-wide norms. If you use AI, be transparent about how you're using it and be willing to defend your choices in good faith.
For Coaches and Teams:
Facilitate open discussions about AI with your team. Rather than simply banning it or allowing it without guidance, create team norms that reflect your values while acknowledging the reality of AI's presence in the activity. Help students develop discernment about when and how AI can be used ethically.
For the Broader Community:
People cannot let fear of shame from the community keep them from voicing their opinions on AI. We need to hear from students, coaches, and judges about their experiences and concerns, and we need to keep an open mind. Only through widespread dialogue can we develop the social norms that will actually govern AI usage in practice.
Organizations like NSDA and TFA should continue to monitor AI developments and adjust policies as needed, but with few real enforcement mechanisms, responsibility ultimately falls to us as competitors, coaches, and members of the speech and debate community.
The perception of AI as a moral good or evil has hindered our ability to think critically about its role in the future of this activity. Rules are not enough; we need norms, and norms only come from discussion. Without these conversations, we leave the door open to inequity and inconsistency that will harm the activity we all love.
While NSDA can make rules, only fellow debaters and the cultural norms we set can truly enforce them. The future of forensics depends on our willingness to engage with AI thoughtfully, transparently, and collectively. Let's start talking.