With the rise of generative AI tools such as the chatbot ChatGPT, many American universities have begun to adjust course content, and some schools are reportedly changing teaching methods and adopting other preventive measures. Last month, Antony Aumann, a philosophy professor at Northern Michigan University, read what he called “the best paper in the class” while grading essays for a course he taught. The essay was well-structured, well-exemplified, and rigorously argued.
Aumann asked the student whether he had written the essay himself, and the student admitted to using ChatGPT. Such chatbots can convey information, explain concepts, and generate opinions in simple, automatically generated sentences. In other words, the student had not written the paper; ChatGPT had.
The discovery so alarmed Aumann that he decided to change how essays are assigned in his courses this semester. He plans to have students write first drafts in class on computers with restricted browser access, and to explain each revision they make in subsequent drafts. Aumann may also stop assigning papers altogether for the next few semesters. Instead, he plans to integrate ChatGPT into the curriculum by having students evaluate the chatbot’s answers.
“What’s going on in the classroom will no longer be, ‘Here are some questions, let’s talk about that,’” Aumann said, but “something like, ‘What is this robot thinking?’”
U.S. university professors, department chairs, and administrators like Aumann are starting to overhaul classroom teaching in response to ChatGPT, a shift that could trigger a sea change in teaching methods. Some professors are completely redesigning the courses they teach, bringing in more oral exams, group discussion assignments, and handwritten work in place of take-home essays.
The ChatGPT effect is huge – many things will change
OpenAI, an AI research laboratory, released ChatGPT last November, and the tool quickly moved to the forefront of the current wave of AI technology. ChatGPT automatically generates logical, clear text from short prompts. Many people use it to write love letters, poems, and fan fiction, and even to complete homework. This has affected teaching in many American middle and high schools, where teachers now need to determine whether students are using chatbots to do their homework. To prevent cheating, some public schools in New York City and Seattle have banned ChatGPT on campus networks and school devices. However, students can easily find other ways to access it.
In U.S. higher education, however, colleges and universities have been reluctant to ban AI tools, both because they doubt a ban would be effective and because they do not want to infringe on academic freedom. As a result, the way teaching is done on American college campuses is changing. Joe Glover, provost of the University of Florida, said, “We should have an overall policy that explicitly supports the authority of faculty to manage the curriculum,” rather than addressing specific ways of cheating. “It won’t be the last innovation we have to deal with either,” he added.
The starting point for Glover and others is that generative AI is still in its infancy. OpenAI is expected to soon release another AI tool, GPT-4, which generates text even better than ChatGPT does. Google has developed its own chatbot, LaMDA, and Microsoft plans to invest $10 billion in OpenAI. Silicon Valley startups such as Stability AI and Character.AI are also working on generative AI tools.
A spokesperson for OpenAI said the lab is aware that the programs it develops could be used to mislead the public, and that it is working on technology to help people identify text automatically generated by ChatGPT. ChatGPT has now jumped to the top of the teaching agenda at many universities. Administrators are setting up working groups and leading university-wide discussions on how to deal with it, with much of the work focused on guidance for adapting to generative AI technology.
University professors abandon take-home open-book assignments
At George Washington University in Washington, D.C., Rutgers University in New Brunswick, N.J., and Appalachian State University in Boone, N.C., professors are phasing out take-home open-book assignments. These used to be a major assessment method for academic courses, but they appear vulnerable to chatbots. Instead, professors are now opting for in-class work, handwritten essays, group assignments, and oral exams.
Simple prompts like “write five pages about this or that” are gone. Instead, some professors have crafted questions they hope are too difficult for chatbots, asking students to write about a topic in terms of their own lives or current events.
Sid Dobrin, chair of the English department at the University of Florida, said students “plagiarize because assignments can be plagiarized.”
Frederick Luis Aldama, director of the humanities at the University of Texas at Austin, said he plans to teach newer and more niche texts that ChatGPT may have less information about, such as Shakespeare’s early sonnets rather than A Midsummer Night’s Dream.
Chatbots could push “people who gravitate toward raw, authoritative texts out of their comfort zones and into things that aren’t online,” he said.
Academics to adopt tougher grading standards
To discourage plagiarism, Aldama and other professors say they plan to set stricter grading standards that spell out exactly what they expect. It will no longer be enough for an essay to simply have a topic, an introduction, supporting paragraphs, and a conclusion.
“We need to up our game,” Aldama said. “The imagination, creativity, and innovative analysis that usually mark an A-level essay need to become the standard for a B-level essay.”
Universities are also working to give students an in-depth understanding of the new AI tools. The University at Buffalo in New York and Furman University in Greenville, South Carolina, both said they plan to embed discussions of AI tools in required courses.
“We had to add scenarios to this so students could see concrete examples,” said Kelly Ahuna, director of the Office of Academic Integrity at the University at Buffalo. “We want to be able to prevent things from happening, not react to them when they happen.”
Other universities are also trying to draw lines around the spread of AI. Washington University in St. Louis and the University of Vermont in Burlington are revising their academic integrity policies to include generative AI in their definitions of plagiarism.
AI tools have many positives, but misuse is widespread
The misuse of AI tools will likely never end, so some professors and universities say they plan to use detection tools to root out the practice. The plagiarism-detection service Turnitin said it will add features this year to identify AI-generated text, including text from ChatGPT. More than 6,000 faculty members from Harvard, Yale, the University of Rhode Island, and elsewhere have also signed up to use GPTZero, a program that its developer, Princeton University senior Edward Tian, says can quickly detect AI-generated text.
Of course, some college students also see value in using AI tools to enhance their learning. Lizzie Shackney, 27, a student at the University of Pennsylvania’s law and design schools, has been using ChatGPT, but she also has concerns. ChatGPT sometimes misses the point, gives wrong ideas, and miscites sources, Shackney said. Penn currently has no regulations covering such tools, and Shackney does not want to rely on ChatGPT in case the school later bans it or deems its use cheating.
Other students have no such concerns. They have shared on forums that they submitted papers or answers generated by ChatGPT, and sometimes helped other students do the same. TikTok content on ChatGPT topics has been viewed more than 578 million times, with many people sharing videos of themselves writing papers and solving coding problems with ChatGPT.