
Artificial Intelligence Is On Campus. Calm Down. For Now.

Generative AI is kind of like a brain — a massive neural network, billions of connections trained with a complex algorithm on a ginormous amount of data. (Illustration: Carolina Alumni/Hailey Hodges ’19)

Professors are worried less about students using AI to cheat and more about how it should be used.

By Mark Derewicz
(a real human)

A flawed, sarcastic human spent time talking to smarter humans and contextualizing facts and opinions to write this rather decent article about ChatGPT, the new god of generative artificial intelligence — a large language model currently causing humans to cower in fear of an apocalyptic future, or at least one where colleges are irrelevant.

They won’t be. Not yet. And probably not ever, according to Carolina professors who’ve been engaged in a decades-long dance with advancing technology while teaching students to think critically.

ChatGPT (GPT stands for generative pre-trained transformer) works like an extremely souped-up, multimillion-dollar search engine, except that instead of retrieving pages it interprets nuanced commands and immediately generates answers to prompts in the form of original essays, lists, outlines or whatever — even jokes or bad poetry. Generative AI is kind of like a brain — a massive neural network, billions of connections trained with a complex algorithm on a ginormous amount of data.

Every time you engage ChatGPT, it responds like a human, all the way down to making excuses for its errors and making up stuff, which AI creators lovingly dub “hallucinations.”

“It’s amazing and also a bit creepy,” said Carolina sociologist and library sciences professor Tressie McMillan Cottom. As she wrote in The New York Times in December, a month after the latest version of ChatGPT busted onto the scene, “AI writes prose the way horror movies play with dolls. Chucky, Megan, the original Frankenstein’s monster. The monster dolls appear human and can even tell stories. But they cannot make stories. Isn’t that why they are monsters? They can only reflect humanity’s vanities back at humans.”

Yet, humans, too, can be artificial. That is, some humans don’t care about originality, which is why some cheat and steal. It’s also why pop culture is riddled with retreads and caricatures.

The AI wizards foisting the tech upon humanity whether we want it or not — like nuclear weapons and the Kardashians — know their creation is human-ish, which is why the AI geniuses keep tinkering to make the tech even more human or to surpass humanity, if such a thing is possible.

The company OpenAI released GPT-3, the model behind the original ChatGPT, in 2020, when the world was preoccupied with a pandemic; ChatGPT itself debuted in November 2022. GPT-4, which was designed to produce more truthful answers to prompts, appeared March 14. Chances are, by the time you’re reading this article, a more advanced program will be available. Microsoft and Google have their own versions, of course, and all of them — as well as even creepier generative AI platforms not limited by language — have massive implications for parts of society, including academia.

“ChatGPT is incredibly impressive,” said Kenan-Flagler Business School professor Mark McNeilly, who created an extensive presentation on generative AI and ChatGPT he’s shared with faculty, students and administrators at Carolina. “Across academia and industries, there’s a mixture of excitement, awe and fear about ChatGPT. People think they’ll lose their jobs.” While it’s unclear at this point how many human jobs AI could eventually eliminate, the potential list is long and includes computer coders and programmers, software engineers, financial analysts, accountants, writers, reporters, ad copywriters, paralegals, tutors, teachers — even professors.

McNeilly and other professors and students I spoke with shared many concerns about AI, but cheating in college does not top their lists.

“Issues surrounding ChatGPT and AI are way bigger than worrying about students cheating,” McNeilly said. “That’s actually easily covered by current academic principles.”

Cheaters gonna cheat

Humans have been cheating, cutting corners, cajoling and lying ever since we became conscious beings. Ethics, morality and religion have served as bulwarks against the human proclivity to game the system.

For decades, students scribbled notes and formulas on their sneakers’ rubber soles to cheat on exams. They spent nights devising tiny, meticulous, almost artistic cheat sheets so complex that one would wonder why they didn’t spend that time simply learning the material. Rumors swirled of college students stealing professors’ tests from unlocked offices. Former students sold old papers to gullible undergrads.

The digital age and COVID-inspired at-home learning pushed academic ethics to a boiling point, making cheating so ubiquitous that “it’s practically unavoidable,” according to journalist Suzy Weiss of the online media company The Free Press. Her reporting this year revealed so much cheating at Ivy League schools and their competitors that honest students were effectively punished for not cheating, because courses are often graded on a curve.

Modern students can upload crib sheets to the notes app on their Apple Watches. They can hack into professors’ computer files to find exam questions or old course materials. They can use digital tools and tutorial software to produce work and get a good grade without engaging in time-consuming requirements such as studying and learning.

At the risk of sounding glib: Cheaters are gonna cheat, and no one has curtailed their desire to try. Academia was in crisis long before now, according to Weiss and others.

Enter ChatGPT, which has passed law and medical-board exams with ease. It can certainly produce all kinds of stuff students can attempt to pass off as original work, causing professors everywhere to gnash their teeth in indignation.

Yet not one of the nearly a dozen Carolina professors I talked to in various departments said cheating was at the top of their list of concerns. “I almost hate to put it this way, but ChatGPT is democratizing cheating,” said Mohammad Jarrahi, an associate professor in the UNC School of Information and Library Science. “For years, rich kids have been paying others to write their papers.” Now, any student can use ChatGPT on the cheap.

Jarrahi, McMillan Cottom and others said they’re not very concerned about cheating because understanding the technology allows them to design coursework so that it’s impossible to merely ask ChatGPT a question or four to complete an assignment. “We talk openly in class about how ChatGPT can help us learn,” McMillan Cottom said, “not do the work for us.” McNeilly and Jarrahi said nearly the same thing.

History professor Chad Bryant, who participated in a ChatGPT faculty panel in February, said, “In some ways, I think ChatGPT will be more of a problem in high school or middle school, where assignments can be more like, ‘read something, repeat information in five paragraphs.’ Professors should already be creating more creative assignments than that.”

For instance, Bryant asked students to connect his lectures on 20th-century history to the Soviet-era memoir they were reading. He also asked students to consider the memoir as a primary source and to list which information from the memoir they could trust, which they couldn’t and why. These assignments require critical thinking, and they depend on Bryant’s expertise as a historian and a teacher. It’s possible to use ChatGPT to help with such an assignment, and Bryant is game for that. Using it to cheat wholesale is not possible, he said.

Carolina students said their peers are trying to use ChatGPT to cheat, but the success rate is not clear.

“I know some students have gotten caught using ChatGPT for essays or even for writing code,” said Ritvika Teyuvuri, who will begin her sophomore year as a computer science and physics major this fall. “It seemed pretty obvious their work came from ChatGPT.” According to UNC’s honor code, disciplinary action for using ChatGPT to cheat is up to the instructor.

Cheaters are gonna cheat. Academia was in crisis long before now. Enter ChatGPT, which has passed law and medical-board exams with ease. (Illustration: Carolina Alumni/Hailey Hodges ’19)

Neil Sharma ’23, who was co-director of academic affairs and professional development in student government, said, “I’m not seeing the cheating, and I haven’t heard about people using it in a dishonest way. I see students looking for a way it can help them learn.”

If ChatGPT is so smart that it can help students learn and create new content, then wouldn’t the program be smart enough to detect something it produced? Carolina provost and astrophysicist Chris Clemens thought so. He asked ChatGPT to write several essays. Then he emailed the essays to McNeilly, who asked ChatGPT if it had produced these essays and others it had not produced.

“It was infallible at detecting its own work,” Clemens said, “as long as the essays were over 500 words.” Under 500 words, ChatGPT struggled to catch itself. Clemens didn’t know why.
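For readers curious what that kind of test looks like in code, here is a minimal, hypothetical sketch using OpenAI’s Python SDK. Clemens and McNeilly ran their experiment through the ChatGPT interface itself; the model name, prompt wording and helper function below are illustrative assumptions, not their actual setup.

```python
# A hypothetical sketch of an "ask the model about its own output" test.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def ask_if_model_wrote(essay: str) -> str:
    """Ask the model whether it produced a given essay and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative choice, not necessarily the version Clemens used
        messages=[{
            "role": "user",
            "content": "Did you write the following essay? "
                       "Answer yes or no, then briefly explain.\n\n" + essay,
        }],
    )
    return response.choices[0].message.content

# Example: check an essay of unknown origin.
# print(ask_if_model_wrote(open("essay.txt").read()))
```

As Clemens’s own results with short essays suggest, this sort of self-identification is an informal check, not a reliable detector.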

Other smart humans, such as Princeton undergrad Edward Tian, have created apps to detect ChatGPT content. Tian’s app, GPTZero, is successful almost all of the time, according to reports, and professors will use it.

But Clemens has another message. “I think some professors already cut corners to make it too easy for students to cheat,” he said. Multiple-choice exams, for instance, are easy to grade but inferior at determining what’s been learned. Clemens refuses to use them.

“I want a chicken to score a zero on a test, and a chicken would score a 20 on a multiple-choice exam, just by luck,” Clemens said. Chickens would score zero on written essay exams. “Essays are a bear to grade, but really hard to cheat on,” he said. “That’s the tradeoff. Making sure students can’t cut corners will take more work.”

To use or not to use

If professors have to work harder to battle ChatGPT cheaters, could they also use ChatGPT to save time in other areas of their jobs, and could students use it to their benefit? And shouldn’t students know how they might need to use ChatGPT and other generative AI in the workforce?

“Yes,” Clemens said. “I gave it an assignment to write a letter promoting Stan Ahalt to dean of [the] new School of Data Science and Society, which I had already done, and ChatGPT gave me the same kind of language I’d get from a standard HR letter, which I don’t typically use.”

In April, after several discussions with professors and administrators, Clemens asked McNeilly and Ahalt to form the UNC Generative AI Committee, which began meeting May 18 and is charged with creating guidelines — not policies — to ensure the ethical use of AI by students and professors, to help students develop AI skills they might need in their careers and to assist faculty instruction. McNeilly said syllabus guidelines for student use of generative AI will be in place across all schools at Carolina by the start of the fall semester.

The approach seems in line with the thinking of the professors and students I spoke with. They said banning ChatGPT would be akin to prohibiting calculators, Google searches or Wikipedia. Early on, each raised the hackles of professors, who then managed to incorporate it into academic life without eroding academic standards. But ChatGPT feels different, professors said, because it doesn’t just find information; it pieces it together in prose and other formats immediately.

“My opinion, as a professor, is we are not going to stop using a tool that adds value to organizations,” Clemens said. “We have to learn how to use it, work with it and teach students to do the same. I’ve played with ChatGPT a little, and my first impression is that it would be a good scaffolding tool.”

In education, scaffolding theory is essentially the opposite of telling students a bunch of information, handing them a blank sheet and asking them to fill it. Instead, scaffolding involves teachers breaking down concepts into parts and using models to show students what’s expected. Students then build upon the model, learning in the process of doing.

Clemens said professors could let students ask ChatGPT to write a five-paragraph essay as a model and then ask students to write their own essays based on the model and other criteria. “If we don’t use ChatGPT like this,” he said, “then we’re missing a great opportunity to teach people how to write.”

McNeilly said students and professors could prompt ChatGPT to write something in different tones: optimistic, pessimistic, factual, professional or comic. Students could learn how and why these approaches are different. “Professors will need to complexify their assignments,” he said. “Some are considering oral exams, like the old days.”

Boring but biased

GPT-4 is impressive in its ability to produce comprehensive long-form answers to complicated prompts on nearly every topic. Yet chatbot essays, for example, are not exactly riveting prose, even if you tell the bot to write in the style of Carolina’s distinguished English professor Daniel Wallace ’08, author of several novels including Big Fish, which was adapted into a film and Broadway musical. “It writes uniform, flat prose,” Jarrahi said. “Boring patterns and structures.”

Still, ChatGPT can be biased, Jarrahi said, largely because so much online content is biased. The bot has promoted torture of religious minorities, screwed up gender and race responses in obvious and ugly ways and refused to provide accurate responses to sticky political inquiries rife with truthiness and spin.

Paradoxically, Jarrahi said ChatGPT has been a critical help in improving his English. Jarrahi, who is from Iran and didn’t learn to speak or write English until he was 23, said the tech has generated first drafts of articles, which taught him proper prose structures and improved his writing, especially of the typical stuffy academic variety.

Teyuvuri said her computer science professors started discussing ChatGPT in class when it was released in November. “It learns from the responses you give it,” she said. “In this sense, you can almost tailor it to yourself.”


For Teyuvuri, that means entering computer code she wrote and letting ChatGPT debug it, showing her where she went wrong. Conventional software tools can’t catch every flaw, she said, and professors know she and other students use ChatGPT that way. Some students, however, use ChatGPT to write the code from the start. But McMillan Cottom pointed out that students have been able to skip writing code from scratch for 15 years simply by cutting and pasting code found through a Google search.
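To make that concrete, here is a small, hypothetical Python function (not from Teyuvuri’s coursework) with the kind of logic bug that runs without error but that a chatbot, asked to review the code, will usually point out.

```python
# Hypothetical example: a logic bug that no compiler or linter flags.
def average_score(scores):
    """Return the average of a list of exam scores.

    Bug: the divisor should be len(scores), not len(scores) - 1,
    so every average comes out too high.
    """
    total = 0
    for s in scores:
        total += s
    return total / (len(scores) - 1)  # should be len(scores)


print(average_score([80, 90, 100]))  # prints 135.0 instead of the correct 90.0
```

Pasting the function and its puzzling output into ChatGPT and asking why the average is too high is essentially the workflow Teyuvuri describes: the student writes the code, and the bot explains where the logic went wrong.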

Teyuvuri also said ChatGPT helps her find the most reliable information from thousands of possibilities. It can still be wrong, she said, but it’s a helpful tool, like Wikipedia, which is sourced but not infallible.

Sharma said some of his professors at first didn’t want to talk about ChatGPT because “they thought an essay would no longer be an essay, a prompt no longer a prompt. But now my professors and their [teaching assistants] are warming up to ChatGPT.”

Each professor I talked to said students and professors need to know how specific jobs will change so they know what skills to emphasize or deemphasize. Sharma, who wants to be a lawyer, said he needs to know what tasks won’t be completed by a human lawyer in the near future. He and other students in fields that will be affected by AI will have to be prepared to use any emerging technology.

Jarrahi described ChatGPT as less of a revolution than an evolution. But McMillan Cottom said evolution connotes a good progression. Maybe it is, she said, but like all double-edged swords, generative AI may cut both ways. For every efficiency it offers, it may also create unintended consequences, such as vast amounts of plagiarized content. After all, ChatGPT essentially pilfers from content humans created, faulty though it might have been. (Read the Review’s feature on McMillan Cottom in the November/December 2022 issue, available at: alumni.unc.edu/archive)

A promotional pitch?

Since tech wizards set loose ChatGPT, articles have proclaimed it to be a game changer. It can do so much. Our lives will become magnificently more efficient. All industries will be wholly transformed.


“Or maybe, just maybe, this is a promotional pitch looking for a buyer,” McMillan Cottom said. “And we should be more skeptical of its radical potential and much more judicious of its risks: misinformation, disinformation and scale.”

She said some people had predicted online courses, which are affordable and offer users scheduling flexibility, would ruin universities. They haven’t. Employers wouldn’t need college graduates because microcredentials and one-off certificates from new web-based companies would suffice. They haven’t. One Laptop Per Child initiatives would end traditional instruction. They haven’t.

For a decade, McMillan Cottom served on subcommittees and talked to media outlets each time a new tech platform stormed onto the scene and was said to spell the end of traditional processes, a result typically portrayed as bad. “I’m kind of moral panicked out,” she said.

New platforms and ideas were brought to us by venture capitalists hawking products they said would transform society, McMillan Cottom said, similar to how social media companies pitched their wares as transformative technologies, allowing humans to connect across time and space. People would become interconnected like never before, we were told. We’d democratize closed societies. “We need to adopt a stance of cautious pragmatism toward generative AI,” she said. “We tend to buy too much of the marketing as truth.”

McMillan Cottom tells her students whenever a new technology appears we should ask two questions: for whom and at what cost? “If you answer those two questions, you move closer to the reality of a thing and further away from the hype,” she said. For instance, ChatGPT could change the lives of students who don’t have access to quality education or face-to-face interaction, she said.

Sasha Gold, who will be a sophomore this fall semester, said some students in classes with hundreds of peers believe professors don’t have the time to provide the help they need. “In Econ 101, I saw my professor only twice, but I understand this completely,” Gold said. “She seemed so tired and stressed. I don’t want to impose on her when she has hundreds of students, many just like me, who might not understand the material as well as we think we should.”

Gold said if students believe ChatGPT can help, they’ll use it. Professors will have to incorporate it whether they want to or not. “For me, personally, I have to go to the lecture, and listen and take notes with paper and pencil,” she said. “This engages our minds, and that’s how we learn information the best.”

Where does it go?

Ahalt, the dean of the School of Data Science and Society, said, “I’m not comfortable, from an ethical standpoint, about the origin of data used to create generative AI platforms. I understand we’ve probably given all our provisions away in the fine print, but it still bothers me.”

He’s referring to the fact that humans, for the most part, have created the data and information ChatGPT utilizes to form so-called “original” content. Unless asked, ChatGPT won’t cite its sources. And when it does, the sourcing is often incorrect, McNeilly said. Remember, like humans, ChatGPT “lies” and makes up stuff, based on the online content it was trained on, which is not always accurate — or was accurate once but isn’t anymore.

ChatGPT isn’t bound by plagiarism standards. But humans are. McMillan Cottom said this is an inherent contradiction.

Banning ChatGPT would be akin to prohibiting calculators, Google searches or Wikipedia. Early on, each raised the hackles of professors, who then managed to incorporate each without eroding academic standards. (Illustration: Carolina Alumni/Hailey Hodges ’19)

ChatGPT is not an innocent creation. It was born a sinner, frankly — or at least born ignorant. And other generative AI platforms seem to be the devil’s playthings, creating deepfake images and voice renderings of famous humans.

In March, a photo of Pope Francis decked out in a cool ankle-length poofy white winter coat made the rounds on social media. Many people who saw it thought it was real and refreshing to see a stylish pontiff. McMillan Cottom saw it, too. She nearly shared it with her followers and friends. But a faint radar beeped in her brain, and she decided not to share it. Two days later the photo was revealed as a fake. For many, this was an innocent ruse. But it doesn’t take a prophet to realize the powerful End Times symbol of the poofy papal picture: There is great potential for generative AI to wreak havoc in the disinformation business.

“We have better radar for discerning if a story is real or not,” McMillan Cottom said. “It’s much harder to check sources for images to ensure accuracy. It’s terrifying for a society that has such low trust in each other and political institutions.”

The scariest AI issue to McMillan Cottom is what she calls scale. ChatGPT is scouring so many resources, mostly of human creation, and generating so much content as to render it nearly impossible to track where all this “new” content came from. “And where does all this information go?” McMillan Cottom asked. “That’s far more troubling than the fact that people can use ChatGPT to ‘write a novel’ in a few days and sell it on Amazon.”

What is AI exactly?

Also troubling is that no one can say how intelligent artificial intelligence will get.

Autonomous weapons systems — dubbed “killer robots” — have reportedly killed people in wars. They operate independently, without human decision making, raising the risk of preemptive strikes.

Will AI go to war with us? This seems like a ridiculous question, the stuff of science fiction. But technologists and philosophers aren’t laughing.

During an interview, Carolina philosopher Thomas Hofweber asked me, “If AI is not human, then what is it, precisely?”

I thought this an absurd question, so I laughed. Now, at the end of my work, I’m closer to crying, convinced Hofweber’s question is profound. He and some of his Carolina colleagues in computer science and linguistics started the AI Project this year to discuss the issues surrounding AI’s place in the world.

“Are generative AI machines just another piece of disposable tech, like a toaster or a car or a smart phone?” Hofweber wondered. Or, if ChatGPT already behaves like a human, will future generative AI become a threat in unforeseen ways, like humans have always been but sneakier and more efficient?

The Terminator movie series, with its violent AI Skynet, gives the wrong impression, Hofweber said. The way existential risk works is not that robots with machine guns will come for you, he said. AI control would be more like manipulation through social media and allocation of resources to large computer servers on which AI runs rather than to human beings. Or, he added, something deadly such as microdrone mosquitos that can inject lethal poisons.

“The speed at which AI is advancing makes it very easy to lose track of the dangers that come with it,” Hofweber said.

Mark Derewicz is director of research and national news for UNC Health Communications and the UNC School of Medicine and a freelance writer living in Chatham County.
