–Using ChatGPT in Education | Image by The Alphabet
Key Points
- Educational institutions have evolved in their approach to AI-generated content, embracing it cautiously while adapting assignments and addressing integrity concerns.
- Detection efforts, including Turnitin’s AI tool, raise concerns about bias and reliability in both AI detection and generative AI, leading some institutions to disable or reject them.
- Rather than relying on detection, educators can counter AI-assisted cheating with proctoring and redesigned assignments, adapt their teaching with chatbots, and guide students ethically through AI’s transformation.
- As educators navigate this evolving landscape, embracing AI tools like ChatGPT becomes crucial: recognizing their transformative potential and aligning teaching strategies with the shift.
- Educators should assume students use AI tools for assignments, redirect resources away from unreliable AI detection, focus on understanding AI’s strengths, and treat the post-ChatGPT era as a learning journey for reshaping classrooms while guiding students through AI’s influence.
The release of OpenAI’s advanced chatbot sparked immediate concern among educators. Its generative capabilities raised worries about a surge in academic cheating and plagiarism, since the chatbot could easily produce coherent, sophisticated text on a wide range of topics. It also prompted speculation that the technology could make parts of the curriculum, such as high school English classes, less relevant.
How Educators Are Responding to AI in Education
In response, universities and educational institutions began discussing how to revise and strengthen their plagiarism policies to address the challenges posed by AI-generated content. Some school districts went as far as blocking or prohibiting tools such as ChatGPT on their networks to protect the integrity of their educational environments.
Almost a year on, however, educators’ early alarm over generative AI has gradually given way to a more practical stance. Many students have become aware of the technology’s tendency to produce false information. In light of this, David Banks, the chancellor of New York City Public Schools, has said the district intends to embrace generative AI, despite having previously blocked it from school networks.
A growing number of teachers are now concentrating on assignments that require critical thinking, using AI to spark fresh classroom discussions, while remaining cautious about tools that claim to identify AI-driven cheating.
At the same time, educational institutions and educators find themselves in an uncomfortable position. They are not only grappling with an uninvited technological development but also confronting its profound impact on their roles and on the world in which their students are growing up.
–Is AI bad news for education | Video by Sky News
In rural Arlington, South Dakota, Lisa Parry, a K–12 school principal and AP English Language and Composition teacher, has chosen a “cautiously embracing” approach toward generative AI in the current school year.
Although concerned about the potential for cheating facilitated by ChatGPT, which is accessible on school networks, she highlights that plagiarism has long been a concern for educators. To gauge students’ abilities, Parry traditionally has her students complete initial assignments in class. This year, she plans to have her English students utilize ChatGPT as an enhanced brainstorming tool, describing it as “a search engine on steroids” for generating essay topics. Parry acknowledges the dual nature of ChatGPT’s impact, recognizing its potential to enhance learning while also posing a risk to academic integrity.
Parry’s perspective aligns with the notion that ChatGPT could mimic the role of calculators in mathematics, providing substantial assistance in laborious aspects of writing and research, thereby enabling students to achieve more. However, educators are confronted with the challenge of comprehending the full scope of the technology’s potential before reaching a consensus on its optimal utilization. Lalitha Vasudevan, a professor of technology and education at Teachers College at Columbia University, emphasizes the uncertainty surrounding how emerging technologies will ultimately unfold, even as they are introduced into educational settings.
Tools to Weed Out Plagiarism Generated by ChatGPT
The endeavor to identify cheaters, with or without generative AI involved, persists. Turnitin, a widely used plagiarism checker, has built an AI detection tool that flags sections of written content potentially generated by AI. These systems are not error-proof, however: Turnitin acknowledges a false positive rate of around 4 percent when its detector judges whether a sentence was authored by AI.
–Can ChatGPT trick Turnitin software | Video by ecologicaltime
Due to the potential for false positives, Turnitin also suggests that educators engage in discussions with students instead of resorting to accusations of cheating. Annie Chechitelli, Turnitin’s chief product officer, clarifies that the tool is meant to provide educators with information to make informed decisions, emphasizing its imperfection.
The limitations of Turnitin’s tool in detecting AI-generated work mirror the limitations embedded in generative AI itself: the danger of bias persists within these detection systems. There are concerns that AI detectors might erroneously flag certain writing styles or vocabularies as AI-generated, especially if they were trained largely on essays from a particular demographic, such as white, native-English-speaking, or high-income students.
English language learners, for instance, may face a heightened risk of being falsely flagged. A recent study found a staggering 61.3 percent false positive rate when seven different AI detectors were used to assess essays written for the Test of English as a Foreign Language (TOEFL). The errors arise from traits that English learners’ writing shares with AI output: simpler sentence structures and less varied vocabulary.
Just as ChatGPT was trained using content from the internet, Turnitin’s system was trained on submissions from students and AI-generated writing. These submissions encompassed papers from various groups, including English language learners and students from historically underrepresented backgrounds, in an attempt to alleviate biases.
As a consequence, some educational institutions are taking a stand against AI-detection tools. The Teaching Center at the University of Pittsburgh recently made clear that it does not endorse AI detection tools due to their unreliability, and it subsequently disabled the AI detection component within Turnitin. Similarly, Vanderbilt University announced in August its decision to deactivate the AI detector.
Surprisingly, even OpenAI, the maker of ChatGPT, acknowledges that it cannot reliably determine whether text was produced by its chatbot. In July, the company retired a tool called AI Classifier, introduced just months earlier in January, because of its poor accuracy in identifying the origin of text. OpenAI said it continues to pursue more effective methods of detecting AI-generated language, but it declined to elaborate on the tool’s inaccuracies or its plans in this area.
Think Outside the Box
As AI detection systems fall short of effectively combating cheating, educators are exploring other ways to prevent plagiarism. Live proctoring, in which an invigilator monitors students via webcam during tests or assignments, gained significant momentum during the pandemic and continues to be used. Similarly, monitoring software that tracks students’ activities on their devices remains in use, although both approaches raise substantial privacy concerns.
Generative AI’s remarkable capability to reproduce internet content contrasts with its limited critical thinking abilities. To bridge this gap, some educators are adapting their teaching strategies. Emily Isaacs, the executive director of the Office for Faculty Excellence at Montclair State University, suggests an innovative approach. Educators might consider submitting assignments to a chatbot and analyzing the output it generates. If the chatbot effortlessly produces acceptable work, it might indicate the need to refine the assignment itself.
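For educators comfortable with a little scripting, Isaacs’s test can even be automated. The snippet below is a minimal sketch, assuming the OpenAI Python SDK (v1.x) and an API key in the environment; the model name and the sample assignment are illustrative only, not part of any official guidance. It simply submits an assignment prompt to a chatbot and prints the resulting draft for review.

```python
# Minimal "assignment stress-test": send an assignment prompt to a chatbot
# and review what it produces. Assumes the openai Python SDK (v1.x) and an
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

assignment = (
    "Write a 500-word essay analyzing the use of symbolism in "
    "'To Kill a Mockingbird', with at least two quoted examples."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": assignment}],
)

draft = response.choices[0].message.content
print(draft)
# If the draft reads like acceptable student work without revision, the
# assignment may need more in-class components, personal reflection, or
# ties to class discussion that a chatbot cannot supply on its own.
```

The same check works just as well by pasting the assignment into the ChatGPT web interface; the point is simply to see the assignment through the chatbot’s capabilities before students do.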
–How can teachers and students use ChatGPT and AI | Video by Bloomberg Technology
This ongoing dynamic of adapting to new challenges isn’t unfamiliar. Isaacs draws parallels between the difficulties presented by generative AI and past issues such as copying from books or the internet. The task for educators remains the same: to convey to students the inherent value of learning.
David Joyner, a professor at the Georgia Institute of Technology, encourages his students to view AI as a tool for learning rather than a replacement for it. He recently incorporated an AI chatbot policy into his syllabus. Describing his draft policy language on a platform called X (formerly known as Twitter), Joyner likens using an AI chatbot to collaborating with a peer. While students are permitted to discuss ideas and work with both classmates and AI-based assistants, the work they submit must ultimately be their own. Joyner underscores the importance of guiding students to effectively use AI while maintaining the integrity of their academic pursuits.
Even educators at the middle school level recognize the urgency of preparing students for a world increasingly influenced by AI. Theresa Robertson, a STEM teacher in a suburb of Kansas City, Missouri, intends to lead discussions with her sixth-grade students about the nature of AI and its potential to reshape their lives and work. Robertson emphasizes the necessity of working with AI rather than sidestepping it, aiming to educate students about its ethical dimensions and cultivate a comprehensive understanding of its implications.
The Future of Teaching
A standardized approach to teaching in a post-ChatGPT era has yet to emerge, and there’s no universally accepted “best practice” in this arena. In the US, teacher guidance remains scattered. While the US Department of Education issued a report with recommendations on integrating AI into teaching and learning, the decision on whether students should have access to ChatGPT in classrooms is ultimately left to individual school districts.
Educators are also grappling with the aftermath of the previous upheaval in education—the Covid-19 pandemic. Jeromie Whalen, a high school communications and media production teacher, and a PhD student at the University of Massachusetts Amherst, observes that many educators remain cautious about ChatGPT. Whalen notes that educators are still processing the learning gaps stemming from emergency remote learning. Incorporating ChatGPT into lesson planning is less of an exciting prospect and more akin to adding another task to an already endless to-do list for weary teachers.
However, an outright ban on ChatGPT carries its own risks. Noemi Waight, an associate professor of science education at the University at Buffalo, investigates how K–12 science teachers leverage technology. She points out that while this tool increases teachers’ responsibilities, prohibiting ChatGPT in public schools deprives students of the opportunity to learn from this technology. This is particularly detrimental to low-income students and students of color, who rely more heavily on school-based devices and internet access. Banning such tools could further deepen the digital divide.
For some educators, generative AI is paving the way for new conversations. Bill Selak, the technology director at the Hillbrook School in Los Gatos, California, began using ChatGPT to generate prompts for Midjourney, an AI image generator, following the tragic mass shooting at the Covenant School in Nashville in March 2023. Recognizing that he wasn’t a natural illustrator, Selak sought a means to process his grief over the incident. Midjourney provided an image that helped him channel his emotions, prompting him to bring the idea to two fifth-grade classes at his school.
These two classes tackled significant topics: racism in America and climate change. Selak worked with each class to develop prompts on these subjects using ChatGPT, fed them to Midjourney, and refined the results. For the racism prompt, Midjourney produced three faces in different colors; for climate change, it generated three distinct outdoor scenes featuring homes and smokestacks connected by a road. Students then discussed the symbolism embedded in each image.
Generative AI enabled students to engage with complex and emotional concepts in ways that a traditional essay assignment might not have allowed. According to Selak, it gave them a way to take part in significant conversations that deviated from the norm, unexpectedly amplifying human creativity and opening a new avenue for engagement.
–How AI Could Save (Not Destroy) Education | Video by TED
As educators move forward in this evolving landscape, embracing AI tools like ChatGPT becomes a pivotal step. Doing so means recognizing the transformative potential of AI in education and adopting strategies that align with the shift. The following suggestions can guide educators in bringing AI technology into their teaching practices.
Useful Guidelines for Using AI Tools in Education
The first recommendation, though it may provoke uncomfortable reactions from educators, is that teachers should spend less effort cautioning students about the limitations of generative AI and more time understanding the technology itself, especially its strengths.
In the past year, numerous educational institutions attempted to discourage students from using AI by highlighting the unreliability of tools like ChatGPT, stressing their tendency to produce nonsensical responses and generic text. While this critique accurately applied to early AI chatbots, it holds less weight for the current upgraded models. Resourceful students are discovering how to achieve improved outcomes by presenting more advanced prompts to the models.
Consequently, students in various schools are surpassing their instructors in understanding what generative AI can do when used effectively. Last year’s warnings about flawed AI systems may now appear less pertinent, given that GPT-4 can achieve passing grades at prestigious institutions like Harvard.
For educators seeking a quick education on AI, resources are readily available. Organizations like aiEDU offer AI-focused lesson plans, and the International Society for Technology in Education provides similar materials. Some teachers have even created platforms to share recommendations with their peers; for instance, faculty at Gettysburg College have established a website that offers practical guidance on incorporating generative AI into teaching.
The second recommendation is that educational institutions should stop relying on AI-based detection programs to identify instances of cheating. Despite the many tools on the market claiming to identify AI-generated writing, none of them is consistently reliable. These programs yield numerous false positives and can be easily deceived by tactics such as paraphrasing. This sentiment is further supported by OpenAI, the creator of ChatGPT, which recently discontinued its AI writing detector due to its notably “low rate of accuracy.”
While it’s conceivable that AI companies might eventually introduce labeling methods, such as “watermarking,” to make their models’ outputs more easily identifiable, or that improved AI detection tools may surface in the future, the current reality suggests that the majority of AI-generated text remains virtually indistinguishable. Consequently, educational institutions should direct their resources and technology budgets toward other avenues rather than placing reliance on such detection methods at present.
The third recommendation to educators, especially those in high schools and colleges, is to operate under the assumption that every student is employing ChatGPT and similar generative AI tools for every assignment across all subjects, unless they are under direct supervision within the school premises.
While this presumption might not hold entirely true in most educational institutions, it serves as a useful rule. Certain students might abstain from using AI due to moral reservations, its lack of suitability for their particular tasks, restricted access to such tools, or fears of being caught.
Nevertheless, the notion that nearly all students are using AI tools outside the classroom might be closer to reality than educators think. This perspective offers a pragmatic basis for teachers seeking to adapt their instructional strategies. Why assign a take-home exam or an essay on a literary work like “To Kill a Mockingbird” when it’s highly probable that most students, barring the strictest rule followers, will turn to AI to complete it? Conversely, why not transition to proctored exams, in-class essays, or collaborative activities if it’s understood that ChatGPT is as pervasive among students as popular social media platforms like Instagram and Snapchat?
The final piece of guidance for schools wrestling with the challenges of generative AI is as follows: Consider this upcoming year—the inaugural year of the post-ChatGPT era—as a valuable learning journey and don’t anticipate getting everything perfect from the outset.
The potential ways in which AI could reshape the classroom are numerous. Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, believes that AI will prompt more educators to embrace the “flipped classroom” approach—where students learn material outside of class and apply it during class time. This approach is less susceptible to AI-based cheating.
It’s worth recognizing that some of these experiments may not yield the desired results, while others could prove successful. This is entirely acceptable. We are all in the process of acclimatizing to the presence of this novel and unconventional technology. Occasional missteps should be anticipated. However, students require proper guidance in navigating generative AI, and institutions that dismiss it as a passing trend or an adversary to be defeated will overlook a valuable opportunity to assist their students.
More ChatGPT and Education Resources
- Official Website of ChatGPT: https://openai.com/chatgpt
- Turnitin: https://www.turnitin.com/
- Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations (US Department of Education report)
- Midjourney: https://www.midjourney.com/home
- aiEDU: https://www.aiedu.org/
- International Society for Technology in Education: https://www.iste.org/
- ChatGPT Detectors: https://chatgptdetectors.com/