Editorial: The Dangers of ChatGPT and How To Combat Them

The New York City Department of Education recently banned the use of ChatGPT, an AI-powered chatbot, in schools. ChatGPT, which stands for "Chat Generative Pre-trained Transformer," was launched late last year by OpenAI, an artificial intelligence company co-founded by business magnate Elon Musk, and has since garnered millions of users at an unprecedented rate.

The technology behind ChatGPT is nothing short of revolutionary. Its capabilities range from writing essays and poetry to composing song lyrics and computer code. The chatbot, accessible and easy to use, draws on a vast store of human knowledge and can answer challenging questions and explain complex topics in a matter of seconds. Used responsibly and in moderation, ChatGPT is a highly convenient tool and a healthy alternative to search engines.

So where’s the threat? Why would such a seemingly useful tool be banned from use?

Increasingly, people are using ChatGPT in professional and academic settings, whether to prepare work materials or to study for tests. Indeed, we're all guilty of using ChatGPT from time to time to explain a challenging topic, the same way we're all guilty of having used SparkNotes or Google to help us understand confusing texts.

And that's okay, provided that we don't become entirely reliant on such tools. Artificial intelligence becomes dangerous when it is relied upon so heavily that it replaces human activity. Recently, for example, a growing number of users have been caught having the chatbot write their essays or do their homework. Having an essay written for you may seem convenient in the short term, but over time, as artificial intelligence grows more powerful, important skills such as writing will be ceded to it and rendered obsolete. The ultimate threat of ChatGPT is that, as it is used more and more, its knowledge and capabilities will expand until it can produce writing, music and code at the same level as humans. This may not seem like a threat now, but in the long run it poses an existential threat to the very nature of humanity.

Something clearly has to be done to combat ChatGPT's rapid growth. While preventing its use in the classroom is a good first step, it will not do much to solve the problem, as students can easily log in with personal accounts and use the bot from home. Moreover, even if ChatGPT were somehow banned entirely, it would not be long before similar, perhaps even more powerful chatbots are built.

The best way to counter the danger of artificial intelligence, especially in schools, is through strict regulation at both the local and federal levels. On a local level, schools such as Fieldston must ensure that the use of AI is heavily monitored. This can be achieved through tools such as DetectGPT, a detector built by a Stanford University graduate student, which reportedly can identify the usage of ChatGPT up to 95% of the time. Additionally, schools must establish a precedent under which the use of ChatGPT on assignments is punished similarly to — or perhaps, given its wide-ranging capabilities, even more heavily than — plagiarism. This does not interfere with people's individual right to use ChatGPT; it simply prevents people from using it in a dishonest and harmful manner.

On a broader scale, state and federal education departments can also do a lot to fight the problem. One worthwhile regulation would be to require OpenAI and similar companies to keep a database of all users and interactions. This would allow people to use the chatbot safely while preventing cheating and long-term dependence. An even more effective regulation would be to demand that OpenAI build its own detector bot, similar to DetectGPT but more accurate, that uses the company's own technology to determine precisely when ChatGPT has been used. Such a tool would function much like a plagiarism detector and would likewise not interfere with people's right to use ChatGPT in moderation. The combination of these two measures, along with careful monitoring by teachers and administrative bodies — such as Fieldston's own academic integrity board (AIB) — would go a long way toward protecting academic honesty.

It may not seem like a pressing issue today, but at this rate, AI is an imminent threat not only to the integrity of the classroom but to humanity as we know it. If we care about the preservation of virtue and the cultivation of merit, we must do something to combat the problem.
