Navigating Generative AI in Higher Education: Understanding Student Perceptions of Faculty Usage
Explore key student perceptions of faculty Generative AI use in higher education, highlighting concerns about reliability, critical thinking, and pedagogical integrity. Learn how institutions can foster trust and effective AI integration.
The rapid integration of Generative Artificial Intelligence (GenAI) into higher education has sparked widespread discussion, primarily focusing on regulating student use. However, a recent study by Jie Gao, Jiayi Zhang, and Dan Chen, published as a preprint on arXiv, sheds crucial light on a less explored, yet equally vital, aspect: student perceptions of faculty GenAI usage in teaching and grading (Gao et al., 2026). This research uncovers significant concerns among students regarding AI's impact on educational quality, faculty engagement, and the fundamental principles of learning. Understanding these perceptions is paramount for institutions aiming to integrate AI responsibly and effectively, ensuring technology enhances, rather than detracts from, the educational experience.
The Evolving Landscape of AI in Education
Generative AI tools, such as large language models (LLMs) like ChatGPT, have swiftly transformed how students engage with learning material. They are increasingly utilized for brainstorming, clarifying complex concepts, and obtaining formative feedback on assignments. Concurrently, instructors are exploring GenAI's potential for tasks like designing course materials, generating practice problems, streamlining feedback processes, and assisting with grading. This dual-sided adoption underscores the need for a comprehensive understanding of how all stakeholders perceive AI's role.
The benefits of AI for students, often cited as increased efficiency, accessibility, and personalized learning support, are well-documented. However, the application of GenAI in faculty teaching and assessment roles introduces a unique set of challenges related to pedagogy, ethics, and trust. While AI promises scalability and instructional assistance, it also calls into question long-held assumptions about an instructor's responsibilities, the fairness of assessment, and the irreplaceable human element in teaching. This context highlights a potential imbalance: students may readily accept AI as a personal learning aid yet resist its use by their educators.
Investigating Student Attitudes Towards GenAI Usage
To address this critical gap, the study investigated students' comparative attitudes toward GenAI use for both their own learning and for faculty in teaching and grading roles. The researchers analyzed survey responses from 156 undergraduate and graduate students, primarily from universities in the United States, Canada, the United Kingdom, Australia, and France. These students represented diverse academic backgrounds, from Arts and Psychology to Computer Science and Engineering. The survey aimed to understand if an "asymmetry" existed in students' acceptance of GenAI across these different contexts and to identify the specific concerns driving student resistance to faculty use.
The study categorized students into four distinct groups based on their attitudes:
- GenAI Optimists: These students support GenAI integration for both themselves and their faculty.
- Student Support Group: Students in this group endorse their own use of GenAI but oppose faculty use.
- Faculty Support Group: This group supports faculty use of GenAI but not their own.
- Non-supporters: These students oppose GenAI use by both students and faculty.
The findings revealed a nuanced landscape: a significant 37% of participants did not support GenAI use by either students or faculty, indicating broad skepticism. Conversely, 31% were GenAI Optimists, supporting its use in both contexts. This suggests that while a substantial portion of students are open to AI, a large segment harbors reservations about its applicability in education more generally.
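The two-by-two grouping above reduces to a simple classification over two attitudes per respondent. A minimal sketch, assuming each survey response boils down to two hypothetical boolean fields (the study's actual instrument is more detailed):

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class Response:
    """One respondent's stance (hypothetical field names for illustration)."""
    supports_student_use: bool
    supports_faculty_use: bool


def classify(r: Response) -> str:
    """Map a response onto the study's four attitude groups."""
    if r.supports_student_use and r.supports_faculty_use:
        return "GenAI Optimist"
    if r.supports_student_use:
        return "Student Support"
    if r.supports_faculty_use:
        return "Faculty Support"
    return "Non-supporter"


# Toy sample, not the study's data: tally group sizes across responses.
sample = [
    Response(True, True),
    Response(True, False),
    Response(False, False),
    Response(False, False),
]
print(Counter(classify(r) for r in sample))
```

The same tally over the study's 156 responses is what yields the reported 37% Non-supporter and 31% Optimist shares.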
Primary Concerns: Reliability, Critical Thinking, and Pedagogical Integrity
Beyond classifying attitudes, the research delved into the qualitative concerns students expressed regarding faculty GenAI usage. A thematic analysis of open-ended responses highlighted several key areas of apprehension, predominantly centered on pedagogical issues.
- GenAI Quality Concern: A dominant concern, voiced by a striking 79% of students, revolved around the validity and reliability of GenAI-generated responses. Students feared that faculty reliance on AI could lead to "errors in information and grading" or result in "false information" being disseminated. This concern directly impacts the trust students place in the educational content and assessment feedback they receive. For any institution considering AI deployment, ensuring the accuracy and robustness of the underlying AI models is paramount, often requiring custom AI solutions tailored to specific educational standards and rigorous validation processes.
- Pedagogical Concern: Another significant issue, raised by 37% of students, was the fear that faculty overreliance on GenAI could create a "futile cycle." Students worried that such reliance might erode faculty critical thinking skills and diminish teaching quality. Concerns included the potential for instructors to become "less involved in teaching" or that "teaching methods will lack individuality," leading to a more detached and less personalized learning environment. This speaks to the broader implications for the quality of education and the unique value proposition of human educators.
- Professional Concern: Students expressed anxiety that faculty's increased dependence on GenAI could foster a sense of "laziness" and a "loss of essential engagement." This concern touches upon the human element in teaching, suggesting that technology should augment, not replace, the dedicated effort and personal connection faculty provide.
- Ethical Concern: While less prominent in this specific study, students also raised concerns about ethical issues such as plagiarism and bias, reflecting broader societal debates about AI's responsible use.
Implications for Higher Education and Responsible AI Deployment
The findings of this study (Gao et al., 2026) offer crucial insights for educational institutions globally. The strong student resistance to faculty GenAI use, driven in particular by concerns about AI's reliability and its potential to undermine pedagogical quality, cannot be overlooked. Institutions must develop clear policies, robust training programs, and transparent communication strategies to address these anxieties.
- Prioritize Accuracy and Validation: Given that nearly 80% of students questioned GenAI's reliability, institutions must implement rigorous validation processes for any AI tools used in teaching or assessment. This may involve human oversight, clear guidelines for faculty on verifying AI output, and selecting AI solutions that demonstrate high accuracy and transparency. Providers like ARSA Technology, with expertise since 2018 in developing production-ready AI systems, emphasize accuracy and reliability in their deployments across various industries.
- Preserve Pedagogical Integrity: The fear of a "futile cycle" and reduced faculty critical thinking is a serious warning against unchecked AI adoption. AI tools should be positioned as assistants that enhance efficiency and augment human capabilities, not as substitutes for intellectual engagement or personalized instruction. Faculty training should focus on how to use GenAI critically, ethically, and in ways that enrich the student experience and deepen learning outcomes. This might include using AI for administrative tasks, drafting content, or generating initial feedback, always followed by human review and finalization.
- Foster Transparency and Trust: Open communication about how and why GenAI is being used is essential. Students need to understand the pedagogical rationale behind AI integration and be reassured about the measures in place to ensure fairness, privacy, and data security. Institutions could explore developing custom web applications that serve as transparent portals for AI-assisted processes, showing how AI is applied and allowing for student feedback.
- Develop Clear Guidelines and Policies: Comprehensive guidelines for faculty GenAI use are necessary. These policies should address ethical considerations, data privacy, the role of human oversight, and the types of tasks for which AI is deemed appropriate or inappropriate. The focus should be on how AI can enhance the teaching-learning process without compromising academic integrity or the quality of instruction.
By acknowledging and proactively addressing student concerns, higher education institutions can pave the way for a more thoughtful, ethical, and ultimately more effective integration of Generative AI. This means moving beyond a focus on technological capability to a deeper understanding of human-centered design and pedagogical impact.
To explore how robust AI and IoT solutions can be integrated into your operations with a focus on reliability, privacy, and tangible impact, you can contact ARSA.
**Source:** Gao, J., Zhang, J., & Chen, D. (2026). To Use or Not to Use: Investigating Student Perceptions of Faculty Generative AI Usage in Higher Education. arXiv preprint arXiv:2603.25932. Retrieved from https://arxiv.org/abs/2603.25932