Artificial Success: How the Utilization of Generative AI Destroys Forensics
Picture this: you are a novice competitor attending your first-ever tournament. You’re entered in international extemporaneous speaking and admittedly have not “locked in” enough to understand even half the concepts covered in-round. Your first topic area is Oceania, and the question you draw concerns the impact of Australia’s failed 2023 Voice to Parliament referendum. You sigh. Then you remember that, under NSDA guidelines, you can legally use generative AI to format your entire speech. Within the first minute of your prep time, you type your question into Snapchat’s “My AI,” ask it to answer the question in the format of an extemporaneous speech, and, within seconds, have a complete outline. All you have to do is insert some statistics, and you’re set. Using this method, you win the tournament.
To be frank, this is absurd. From novice extempers at local tournaments generating outlines to national interp champions generating intros and transitions, AI destroys creativity and intellectualism at every event and level of competition. With a new generation of competitors having complete, unbridled access to generative AI, districts must establish new guidelines that restrict its use and abuse, because it defeats every standard the forensics community stands for.
Generative AI, meaning any AI system that generates original content, has been accessible to the greater speech and debate community for years. Though widespread use is only now gaining traction in the media over questions of integrity and morality, mainstream usage has been rising rapidly for three years. OpenAI, arguably the driving force of domestic AI adoption, released ChatGPT to the public on November 30, 2022. The chatbot immediately gained mass attention internationally, reaching 100 million users in just two months. Since then, countless other generative AI applications have been made available to the public with nearly no federal or state restrictions. Unfortunately, that permissiveness is apparent not only in the law but also in our own community.
While both the NSDA and the NCFL have issued statements and regulations to some extent, neither has outwardly opposed AI usage. Since 2023, NSDA Nationals has followed the same general guidelines concerning AI use: AI cannot be cited as an individual source, any material quoted or referenced via AI must be available for fact-checking at any time, and competitors cannot use AI to generate speeches or cases in their entirety. Aside from these restrictions, competitors may use AI to frame arguments, structure speeches, or find inspiration in essentially any capacity. In November 2024, the NSDA affirmed that it would stand by its existing guidelines on AI use in extemp specifically, reasoning that an outright ban would be too hard to enforce and “ran contrary to the educational goals of speech and debate.” The NCFL takes a similar stance, with perhaps even looser regulations: its official statement asks students to continue using AI responsibly as a tool and simply refers them to existing NCFL bylaws when questions arise about the extent of permissible use.
The forensics community can keep calling AI a “useful tool” as much as we want, but the fact remains that, in the status quo, AI is not half the tool our society claims it to be. Its credibility simply does not match its accessibility. The most prominent example is the AI hallucination: an instance where generative AI produces claims, statistics, or narratives that are highly believable but entirely nonsensical. This includes fabricating sources, generating historically inaccurate information, or otherwise inventing data.
Though some AI hallucinations are extremely apparent, most of the time it is nearly impossible to differentiate fact from fiction without independent research. Fact-checking can mitigate the problem, but that leaves two possible outcomes: either users skip the fact-checking and spread misinformation, or they do fact-check, which means using AI was futile from the start, since independent research was required regardless. Beyond hallucinations, AI is also riddled with data bias, which surfaces as prejudiced and discriminatory content, whether outwardly bigoted or simply perpetuating systemic bias. In generative AI, this appears primarily as historical and measurement bias: historical bias arises when AI mirrors past social inequalities, while measurement bias arises when AI records data through a clouded lens, overlooking underrepresented groups. Factual inaccuracy, however, is only the beginning; beyond these practical failures, AI use is also morally wrong.
AI is theft, in nearly every sense of the word. Generative AI models are trained by copying existing works, ranging from published research studies to independent digital artwork and everything in between. Because AI is trained on massive online databases, it is inherently incapable of properly citing its sources, so even when the output looks appealing, the material used to compose it will never be properly credited. Perhaps most notably, AI use is also morally corrupt because of its environmental impact, as it requires massive amounts of energy to operate. In 2024, AI data centers consumed around 415 terawatt-hours of electricity, a figure projected to more than double by 2030, approaching 1,000 terawatt-hours. To put this into perspective, by 2027, global AI energy consumption will roughly equal the consumption of the entirety of Sweden. These incomprehensible levels will lead to severe environmental impacts. Greenhouse gas emissions will continue to rise rapidly as data centers keep up this mass energy consumption, accelerating climate change and extreme weather events. These data centers also overheat quickly and rely on water to cool their hardware, producing significant water waste, especially in areas already struggling with water scarcity. By some estimates, AI will soon consume about six times as much water as the entirety of Denmark, and that demand will only grow as AI is adopted more widely by the general public.
My point isn’t just that AI usage is generally bad; it’s that it goes against every fundamental aspect of speech and debate. The NSDA claims that forensics can boost classroom performance, communication abilities, and critical thinking skills. AI promotes the exact opposite of each of these benefits: it fosters educational illegitimacy through plagiarism, creates feedback loops that discourage human communication, and, above all, destroys critical thinking in young minds by normalizing academic dishonesty. In short, it diminishes the point of speech and debate. As a six-year speech and debate competitor, I see the primary objective of forensics as giving youth an outlet to form opinions backed by factual evidence and to advocate for them, personally and competitively, while growing intellectually. If our opinions are not our own, our facts are unethically sourced, and our arguments are artificially generated, each of these efforts is futile, and speech and debate becomes nothing more than yet another extracurricular to add to a college application.
Integrity is crucial to speech and debate. In an activity that foremost strives to uplift young voices, there is no space for a tool that does nothing but steal and destroy. For the sake of preserving forensics as we know it, we must educate ourselves on the harms of AI usage, abstain from it, and advocate for change to preserve the soul of speech and debate for generations to come.