AUSG voted to pass its first artificial intelligence-related resolution on Saturday, April 11. Resolution 21-012 calls for Eagle Service to show how much AI use is permitted in each course, by both students and professors.
This resolution comes shortly after the University issued new guidelines on ethical AI usage under the Office of Information Technology. The guidelines outline that “course syllabi should clearly state policies and expectations about the use of AI, whether prohibited or permitted.”
Senator Grantham Smith, a freshman in the School of Public Affairs and sponsor of the resolution, said he hopes the bill will provide more transparency for students.
“I thought to myself, there’s no way for students to be able to see what classes make use of AI and the extent [to] which they do,” Smith said.
The resolution also outlines four proposed categories to classify the AI resources, or large language models (LLMs), permitted in each course. The categories range from “LLM technology required in writing processes” to “LLM technology not allowed in writing processes.” Smith said he hopes students will be able to search by those categories in Eagle Service.
“We hear the excuse a lot that AI is such a novel and new thing that you have to work fast and with decisiveness on,” Smith said. “But that doesn’t mean that the policies shouldn’t be transparent and the usage of it in the classroom shouldn’t be transparent.”
Regina Curran, the director of privacy and cyberpolicy in the Office of Information Technology, also co-sponsored the resolution.
Curran is a University professor and co-chair of the AI working group.
“AI is not a zero-sum game, it’s not a sort of all or nothing,” Curran said. “That’s really the other thing that I’m hoping this [resolution] will do, is generate more thoughtful, robust conversations about [AI’s] use, especially its use in higher ed.”
Following its passage in the undergraduate senate, the resolution will enter an advocacy stage and later involve administrative input. However, future iterations of the bill will need more distinctly defined categories, according to Smith.
“The four categories would have to be defined in a more expanded way, which is something that, since Saturday, I’ve been thinking about how I would be able to do [that],” Smith said.
Both Smith and Curran said they hope this resolution encourages more discourse on AI use in higher education.
“I strongly believe that anything like this, this resolution and related [policy], pushes people to have more conversations about AI itself … I think that’s a good thing,” Curran said.
This article was edited by Natalie Hausmann, Payton Anderson and Gabrielle McNamee. Copy editing done by Avery Grossman, Mattie Lupo and Ava Stuzin.



