Academic Librarians, Information Literacy, and ChatGPT: Sounding the Alarm on a New Type of Misinformation
Abstract
On a regular basis, I get emails from vendors promising to train me “how to use ChatGPT”—as if there’s a secret prompt that reduces ChatGPT’s propensity for providing inaccurate information. There isn’t, and academic librarians should not be complicit in higher education’s efforts to downplay the negative impact of ChatGPT on student learning. No amount of prompt engineering can prevent ChatGPT from generating responses containing erroneous information and logical fallacies. ChatGPT and other generative AI tools hold great potential for improving teaching and learning, but they also hold great potential for undermining it. And, if you’ve chatted with an English Composition instructor lately, then you know that ChatGPT is already undermining the development of student writing. The educational crisis triggered by generative AI has an especially profound impact on first-year college students, who are sometimes using ChatGPT to bypass the cognitive effort that is essential to their attainment of course learning outcomes and general education outcomes.
Copyright Matthew Pierce