TLC Lunch and Learn: AI and Academic Dishonesty: Reframing the Narrative
The availability of large language models (LLMs) and other forms of (generative) artificial intelligence has challenged instructors to re-think and re-tool their course exercises and assessments. While many instructors acknowledge that students' uncritical reliance on such tools may discourage fundamental knowledge acquisition and skill development, entrench social biases, and contribute to societal, ethical, and environmental injustices, many also understand (all too painfully) the difficulties of persuading students to invest time, effort, and energy into their own learning.
In this "lunch and learn", Scott Cassidy (McDougall Faculty of Business) will discuss the disconnect between instructor and student perspectives on student motivation, and the role that the traditional classroom evaluative structure plays in tacitly encouraging unauthorized AI use. Perspectives will be shared on what instructors can do to compellingly frame "traditional" academic learning and its place in an AI-enabled learning environment. Emphases will include applying justice principles in student communications, shifting narratives around AI use and reliance, and the tricky question of evaluating "process" versus "product" in student assessments.
Please note that this "lunch and learn" is aimed at instructors who do not actively incorporate LLMs or other forms of (generative) artificial intelligence into their course learning activities, and who find themselves struggling with students who use such tools in unauthorized ways that subvert their learning in traditional pedagogical activities. Instructors who encourage AI use, or who build course learning activities around such tools, are certainly welcome to attend (and are likely to offer valuable perspectives); however, please be aware that the session is geared more toward helping instructors who do not.