According to a 2023 survey by the Pew Research Center, about one in five teenagers in the United States who are familiar with ChatGPT has used the software to complete schoolwork. The study also found that ChatGPT use is most common among 11th and 12th graders, with 9th and 10th graders and middle schoolers lagging slightly behind. This number has only grown since the survey took place last year, and it points to a broader societal trend in the use of artificial intelligence.
This epidemic of AI has also made its way to Fieldston. According to Grace Yun, Assistant Principal for Student Life and Chair of the Academic Integrity Board (AIB), there are currently more cases of students using chatbots to cheat on schoolwork than ever before. While other technologies existed in the past, such as Google Translate or paraphrasing tools, Yun notes that ChatGPT presents unprecedented challenges given its singular abilities, making AI-related AIB infractions all the more difficult to tackle and prevent. Moreover, many prominent AI bots exist apart from ChatGPT, like Claude, each with its own capabilities, such as scanning documents or producing notes and summaries. Yun worries that increased AI use might diminish the importance of writing skills, saying, “One concern that educators have is that there is now a tool that could potentially replace a process which is important for students to learn: writing.” In response to the recent influx of AI, she anticipates a future with more in-person assignments, such as in-class essays and tests, where students cannot rely on AI-generated material. “No school has the answer,” adds Yun. “But we do have to embrace [AI] in the sense that it is a reality, and ask, how can we use it in the classroom to enhance learning, and where do we draw boundaries?” This raises the question of when, if ever, it is appropriate to use artificial intelligence in an academic setting.
Educators’ opinions differ on how to respond to and accommodate the growing use of generative AI, with many asserting that AI ought to be removed from classrooms entirely. For example, following its 2023 ban of ChatGPT, the New York City Department of Education released a statement that said, “while the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills which are essential for academic and lifelong success.” In this view, access to AI in schools will only impair students’ education. Yun agrees, arguing that “Ultimately, [AI] is really handicapping students. They’re not building the skills that they need.”
On the other hand, some see AI as a technological development that will usher education and the wider world into a new age. One LA Times opinion writer compares the argument that AI impairs students’ critical thinking to Socrates’s archaic claim that writing ought to be avoided because it weakens people’s memories, asserting that both are equally false and narrow-minded. Along these lines, some education ministries around the world are adopting an accepting stance toward ChatGPT. The state of South Australia approved the use of ChatGPT in its schools, with education minister Blair Boyer saying, “I don’t think we can bury our head in the sand here and just think that ChatGPT or artificial intelligence are an overnight sensation that is going to disappear.” Boyer, like many others, insists that the increasing use of ChatGPT is inevitable and ought to be embraced as soon as possible.
Kurt Vega, Fieldston’s computer science teacher, largely agrees with this perspective. He often integrates AI into his teaching, using ChatGPT to prepare sample code and conduct research for his lessons. He also allows his students to use AI; his Data Structures course embeds an AI assistant into the very platform students use to write code and complete homework. Vega maintains a bullish take on AI, seeing it as a tool that will only rise in importance. He believes that as AI tools grow more capable and take over more of the tedious tasks in programming, humans will be freed to tackle more complex problems. Vega adds that AI is not limited to computer science, saying, “any discipline that involves the manipulation of symbols will feel the impact and benefit from the use of AI.” Like Boyer, he views the use of AI in schools as “inevitable,” and thinks it will open new avenues of exploration for students in a way that is “incredibly rewarding and fun.”
Vega’s views on AI stand out among Fieldston’s departments, though this largely reflects the distinct nature of computer science and the projects his students complete. Fieldston’s history department, for example, has taken extensive action to keep AI out of its assignments. According to history teacher Dr. Nancy Banks, many teachers in the department require students to draw all of their information from a few pre-selected sources. Banks also mentions the growing use of in-class essays as a way of reducing the influence of AI. AI poses a particular challenge for fields of study centered on writing, given how easily students can use it to generate a full, well-written essay in a short amount of time. Banks also expects that new AI tools will emerge over time, forcing the department to adjust further. Yun, a science teacher herself, agrees that it will be hard for educators to keep up with AI’s growth. She recommends that teachers assign in-class freewrites at the start of each semester to get a sense of students’ writing capabilities and styles; that way, any use of an external source in future assignments will be quickly noticed.
Although AI policy currently varies between departments, Fieldston has also begun tackling the issue at an administrative level. Head of School Joe Algrant takes a practical view of AI in schools, describing it as an inevitability that educators and students must learn to live with. He emphasizes the importance of understanding the benefits of AI while exercising caution and good judgment in its use, saying, “I think we should embrace, albeit carefully, the possibilities of AI. We just need to understand it as a tool for learning and teach it that way, not as a shortcut or replacement.” According to Algrant, AI raises several frightening scenarios, especially given the speed of its development; the healthier alternative, however, is a world that learns to live with it. “It can bring many positive changes and make certain jobs easier,” he adds. “It will replace certain jobs, but that should allow for the creation of new jobs and options.”
Algrant also acknowledges the threat AI can pose to education if used improperly, asserting that teachers will have to redesign their assignments so that ChatGPT and other AI bots cannot do the work for students. “The different divisions here have started to formulate rules around AI,” he notes, “and we will need to develop policies and practices that guide us, in developmentally appropriate ways from elementary to middle to upper.”
Among these policies is a series of faculty workshops on the subject, which will continue each year as AI changes. Although keeping up with AI’s advancement will be a challenge, Algrant insists that doing so is necessary as the technology continues to reshape education and society as a whole. He compares AI to the calculator, noting that “the calculator was thought to signal major problems in math education at one point, but it has only empowered us to know math more deeply.
AI is, of course, way more advanced and will require perhaps some form of regulation, but education will be at the core.” While the comparison has merit, AI is unquestionably a more advanced — and more dangerous — tool than a calculator. As Yun says, “With a calculator, you still have to know how to use it in some way to get the right answer. Whereas with AI, you just type in ‘answer this question’ and it does it for you.”
The rise of artificial intelligence extends beyond the classroom. Apart from threatening students’ critical-thinking and writing skills, it can also foster unhealthy habits, such as laziness. Even when used in moderation, it offers students an easy way to avoid challenging work, a habit that may come back to bite them later in life and, on a larger scale, lead to a less productive society. Another danger in the rise of AI is its impact on creativity and the arts. Some AI tools can produce music and visual art, threatening to devalue human artistry. Furthermore, ChatGPT can write entire stories from very limited prompts, in any style requested. This threat to human creativity is perhaps AI’s greatest danger.
Just as AI threatens the arts, it may also quickly replace people’s jobs. “We are not the only industry that is grappling with AI,” remarks Yun. “Every other day there is some headline about the impact of AI on a different field. There are some fields where people’s jobs are at stake.” Beyond its effects on individuals and society, the technology can also be wielded to cause real harm to others. As Algrant said, “We must always be properly skeptical, but that can’t stop us from learning and growing.”
At the end of the day, AI is already deeply ingrained in our society, and it will only grow stronger as time goes on. The question remains: is AI an unfortunate inevitability, or a technological advancement that we must learn to incorporate into our everyday lives? Either way, institutions must strike a balance, allowing people to use AI as a supplement rather than becoming completely dependent on it.