The proliferation of generative AI products has led to a wide variety of opinions about the efficacy, ethicality, and potential impact of AI on different dimensions of graduate students’ lives. Here’s one graduate student response that demonstrates the complexly intertwined issues that the topic of AI brings up:
“As a graduate student, I know that many AI advances have powerful applications as tools to improve research and learning. In general, my first analysis of any new technology is based on a phrase from an editorial in Nature: ‘Don’t ask whether AI is good or bad. Ask how it shifts power.’ Who benefits from AI and how they use it will be more impactful than the raw capabilities or intended use of any tool.”
In this blog post, we’ll highlight some of the different opinions that Illinois graduate students have about the efficacy, ethicality, and potential impact of AI.
Literacy
Several graduate students discussed the need for greater literacy around AI and how it functions, while others voiced concerns that their use of AI might affect their own developing skills.
“At first I was wary and nervous about using AI, as I felt confused about how it was perceived regarding academic integrity, plagiarism, and providing ideas that were not original. However, in recent classes, professors have encouraged using AI tools and reframed my thinking about AI as a tool, not an end-all, be-all. One professor stated that AI helps refine thinking, not replace thinking, and that helped me feel more positive about the potential of AI as a resource rather than a replacement for work.
“Over the past year, as AI has entered the public consciousness, my classes have begun to integrate discussions about the value, purpose, and use of AI in librarianship as we think about how we could use AI to better serve our communities or if we should use AI at all. As someone who plans to go into children’s librarianship, I’m thinking about how libraries can teach the safe and ethical use of AI to children, who are the most vulnerable members of our society, as part of our media literacy instruction.”
“I’m really excited about AI. I use AI in my research and am using AI to facilitate several public-facing projects. I think it behooves us as researchers to use whatever tools we have to make our work more efficient and more accessible.”
“I personally think that thoughtful work will not be replaced by AI, no matter how advanced the technology is. Because technology is based on the data of the masses, it does not prefer alternative voices, which academic debates sometimes value.”
“AI can be a valuable and powerful tool, but I currently have strong reservations surrounding the development of such technology. As it currently stands, AI has no true ‘intelligence’ and can be predicted; however, if it comes to a point where software can have truly new ideas, I become very concerned. AI tools also have the potential to further erode student understanding, as it will become a race to learn the tool rather than the subject matter.”
“I feel like I am depending too much on them, ChatGPT especially. It’s a good tutor, guide, and map, but I am not very sure that I am absorbing what they taught me. It’s pretty exciting, of course, living with this technology and seeing this development, but, at the same time, I’m scared a bit to lose my ability to think by myself, not depending on them.”
Trustworthiness
Some graduate students pointed out issues with the trustworthiness (or lack thereof) of AI. While there are some researchers at Illinois actively working on this issue, other graduate students voiced concerns about AI's impact on academic integrity.
“I’m a PhD student in the CS department working on trustworthy AI research. I’m incredibly hopeful that AI will offer fantastic tools to help people accomplish things faster/better/safer in the long run...”
“A lot of the mainstream AI programs that people use just gather information on a topic – almost any topic that can be asked – and spit back a plausible sounding but often incorrect answer. It is also getting harder to detect AI writing in schoolwork. The AI programs are getting better faster than the detector programs ever could. Moreover, the detector programs are AIs themselves and have abysmally high false positive rates, especially for dense technical writing styles. This is a problem and while I'm not sure what the answer is, given the rapidly evolving technology, we'll have to address these concerns sooner or later.”
“Researchers are being called on to add our voices to the rampant speculation and hasty regulatory attempts; in a situation as complex and rapidly-evolving as this, it often feels like irresponsible speculation to make firm claims or recommendations. However, given the blatantly unethical uses of AI that have recently leveraged the growing power of LLM and other machine learning, we need to be firm on certain clear boundaries.”
“I’ve heard a lot about how ChatGPT is actively getting worse... and over the last few months I’ve also watched the quality of Google search results decline... I’ve mostly stopped using AI because I don’t feel I can trust chatbots or the companies that run them to provide accurate information.”
“I’m concerned about the possibility that an AI database will scrape students’ work on the Web and use it to generate ‘new’ content that plagiarizes us. I’m also concerned about AI systems replicating and amplifying human biases, whether because of deliberate manipulation or because their algorithms and databases include designers’ and users’ implicit biases.”
“One huge way for me that AI is impacting me as a grad student is that I am terrified of being accused of plagiarism. It’s haunting me and really stressing me out.”
Socioeconomic Impact
Several graduate students described their concerns about AI’s potential impacts on society: some noted the potential impact on their own future careers, while others pointed to a rapidly changing sociopolitical climate.
“It motivates me to study more seriously and focus on developing substantive skills other than technical skills that hopefully can’t be replaced by AI.”
“On a personal level, I’m skeptical of AI use in general. It’s a powerful tool, and I think it has great potential to be used for good, but in a capitalist society, I think it’s more likely to be used to scrape intellectual property and replace workers rather than improving people’s lives.”
“I’m quite nervous about the possible economic impact – will too many people lose their jobs too fast? Will this cause public perception on AI to turn sour? I think a lot of qualms people have about AI are actually rooted in financial reasons, so I feel that AI will ultimately pressure politicians to enact new economic policies in response (although given our current political climate I also worry about whether this will go well too).”
An earlier post in this series began this conversation. If you’re interested in contributing to this ongoing conversation, please take a moment to respond.
Bri Lafond is a PhD candidate affiliated with the Center for Writing Studies, the Department of English, and the Department of Gender & Women's Studies. She is currently the Career Exploration Fellow for Graduate College Communications.
John Moist is the Communications Specialist for the Graduate College at Illinois. He holds degrees from Mount Aloysius College, Baylor University, and the University of Illinois. In his spare time he enjoys making music, playing board games, drinking espresso, and watching movies. He lives in Champaign, Illinois with his wife Kaitlyn, and their cat Mildred.