ChatGPT — OpenAI’s language model that took the world by storm last December — has been called many things. Depending on who you ask, ChatGPT means the end of the college essay or “the best thing to happen to teaching since the Socratic method.” It even ran for AMS president.
Computational linguistics professor Dr. Garrett Nicolai warned against thinking of ChatGPT as “artificial intelligence,” instead describing it as a chatbot or an automated language model.
“I hesitate to say ‘intelligence’ in this case, because it’s not actually doing any thinking,” said Nicolai. “It’s just producing words based on previous stuff that it’s seen.”
Still, it can provide a passable approximation of many basic writing forms, including essays and articles. So, how are UBC professors and experts dealing with this new technology?
UBC does not currently have a campus-wide policy on the use of technologies like ChatGPT. A committee did convene in early 2023 to discuss its implications and released an FAQ page that says ChatGPT policies are left to instructors’ discretion.
The FAQ also says passing off ChatGPT-generated text as student work is plagiarism.
Detection software can identify whether text was produced by ChatGPT, so Nicolai warned students to think twice before using it. However, the technology is evolving fast, and detection software that works today might be obsolete tomorrow.
“Computational linguistics and natural language processing are the fields most closely related to something like [Chat]GPT, and they’re moving along really, really quickly,” said Nicolai. “I see more advanced versions of ChatGPT coming along every week.”
In response, some UBC professors have shifted how they approach assessment.
Nicolai said professors are “changing the types of questions that are asked, so less … simple factoids that you can get out of looking at the internet … maybe going towards more applied-type questions that are more difficult for ChatGPT to be able to mimic.”
Some institutions like Sciences Po, which partners with UBC to offer a dual degree program, have outright banned use of the language model in coursework.
According to Nicolai, some of those concerns might be overblown. While ChatGPT can complete simple tasks that follow common linguistic patterns, it has so far been much less successful at producing longer pieces of writing.
“It might be able to write something that looks a little bit like an essay, but using things like rhetorical devices to build upon an argument, it’s probably not going to be able to do that.”
Citations are another area where ChatGPT struggles. It may be able to apply patterns from academic papers to identify which types of sentences should end with a citation, but rather than pulling a relevant source from the internet, it will more often simply invent one. Those fabricated references do not hold up under scrutiny. It has similar issues with math, and often fails to perform calculations accurately.
Dr. J. Logan Smilges, an English professor who specializes in the rhetoric of technology and Queer and disability studies, is not concerned about ChatGPT. Rather, they describe the panic about AI plagiarism as “reactionary.”
“To me, the concerns that other faculty have, that administrators have, about students’ use of the technology has absolutely nothing to do with their learning and all to do with whether or not they trust their students,” they said.
“When I give an assignment, I trust that students are going to do it, if they have the time to do it, if they have the capacity to do it, if they feel competent and supported and encouraged to do it.”
Nicolai and UBC spokespeople also emphasized that ChatGPT is a source of opportunities: to generate ideas, to spark dialogue and to promote critical engagement with the ethics of machine learning.
“There are other ways of approaching this technology a little bit more generously, that communicates trust in our students,” said Smilges.