[Teaching Tips] Practical Strategies for AI and Assessment & Teaching and Thinking with AI (2025.01.10.)
- Academic Affairs Team
- 2025-03-04
Sang-eun Lee, Min-young Ku, Ye-jin Kim
The role of AI in higher education is an important topic, not only in Korea but also in the United States and other countries. In this edition of Teaching Tips, we would like to introduce the keynote speech from The Teaching Professor Conference on AI in Education, an online conference hosted by Magna Publications in Wisconsin, USA.
This paid conference runs from December 5, 2024, to February 17, 2025, and allows participants to join asynchronously. It features two keynote speeches along with about 20 presentations by professors from various U.S. universities. The conference includes concurrent sessions on topics such as:
Ethical and Inclusive AI Practices in Education
Innovative Approaches to AI in Faculty Development and Professional Growth
Integrating AI into Course Design
AI-Powered Tools for Enhancing Student Engagement and Learning
AI Fundamentals and Emerging Technologies in Education
The rise of AI in higher education presents new opportunities for learning while also introducing challenges that must be addressed. Educators need to find ways to enhance student engagement without compromising the quality of education while also tackling ethical concerns related to academic integrity and AI usage. The keynote speeches from the AI in Education Conference, Leon Furze’s Practical Strategies for AI and Assessment and José Antonio Bowen’s Teaching and Thinking with AI, aim to provide insights into these pressing issues faced by universities and faculty members. This edition of Teaching Tips summarizes the key takeaways from both presentations, offering valuable ideas for integrating AI into education and supporting university instructors in navigating the evolving landscape of AI-enhanced learning.

1. Practical Strategies for AI and Assessment
Leon Furze is an international consultant, author, and speaker with over 15 years of experience in secondary and higher education and in leadership roles. He has published numerous books and articles and helps educators understand the implications of generative AI in education through consulting and advisory work.
In his keynote speech, he introduces The AI Assessment Scale (AIAS), a framework designed to help educators and students use AI appropriately and transparently. Leon Furze discusses the vulnerabilities of assessment in the AI era, the role of educators, and how to integrate AIAS into educational programs.
A. All Types of Assessment Are Vulnerable to AI
First, Leon Furze emphasized that all types of assessments are, in some way, vulnerable to AI. Whether it is essay writing, written assessments, multiple-choice questions, or online exams, no method can fully guarantee academic integrity or be completely protected against technological interference. In reality, it is impossible to fully control students' access to AI technology or accurately determine whether they have used AI.
Even with online exam lockdown browsers, students may find ways to bypass restrictions—for example, by discreetly closing windows and taking screenshots every few seconds to use ChatGPT for answers. Moreover, as technology advances, even offline written exams will become more vulnerable. AI could be embedded in smart glasses or contact lenses, allowing students to bring AI-assisted devices into the exam room. There is growing concern that AI will undermine traditional university assessments, as AI can handle many aspects of learning on behalf of students. Since AI has gained the ability to visually recognize images, graphs, maps, and symbols, its use in subjects like mathematics and science has also become unavoidable. Additionally, AI detection software is not entirely effective at identifying AI-generated content, making it impossible to completely prevent AI-driven vulnerabilities in university assessments. This means that the challenge is not limited to writing assessments—every type of assessment is now susceptible to AI.
B. Understanding the Essence of Education and Focusing on Assessment Validity
The emergence of AI may have revealed the inefficacy of long-standing educational practices. In Australia, for example, issues have surfaced regarding the commercialization of education. The goal of increasing university enrollment by 40% is driven not by educational aims but by economic interests. People essentially pay for degrees and perceive education as a series of checklist tasks. In secondary education, the focus has shifted primarily to major exams that determine university admission. Leon Furze argues that, in this situation, we need to revisit fundamental discussions about what education truly is and what purpose assessment serves.
He referenced a paper by Philip Dawson from Deakin University, titled Validity Matters More Than Cheating. The paper asserts that ensuring assessment validity is far more useful and important than merely addressing academic misconduct or assessment-related issues. Validity should focus on guaranteeing learning rather than making moral judgments about students. When we fixate on cheating—when we label a student’s use of ChatGPT for essay writing as misconduct—we may overlook the fact that students are simply using available tools and technologies to complete their assigned tasks.
Does a student’s submitted work truly reflect their actual skills? Does the content they produce accurately represent their abilities and knowledge? If we return to these fundamental questions, whether students use AI becomes much less relevant. In some cases, AI may even help students demonstrate their abilities more effectively. Consider an example related to validity. If an instructor requires students to submit an assignment within 24 hours, many students will inevitably resort to using ChatGPT because they have multiple other responsibilities. In this case, the way the assignment is structured essentially encourages AI usage. Therefore, we need to carefully examine how assessment design impacts assessment validity. By designing diverse and valid assessments, we can strengthen overall validity. University instructors may never be able to completely eliminate academic misconduct. However, by taking the time to systematically design assessments and diversify assessment methods and formats, they can ultimately collect more valid and meaningful data.

C. The Role of Educators
Since August 2023, Leon Furze has conducted a study based on test results involving all students and faculty at the British University of Vietnam. The research found that what students value most is transparency—having educators clearly explain appropriate ways to use AI. After the university implemented the AI Assessment Scale (AIAS), incidents of academic integrity violations related to AI significantly decreased. This shift occurred because the perception of academic integrity in relation to AI changed. While misuse of AI still occurs, the approach no longer assumes that all AI usage is inherently negative. As a result, instructors experienced a noticeable reduction in the burden of monitoring whether students were violating academic integrity through AI use in class.
Leon Furze conducted further research after discovering that at some universities, students had a much better understanding of how generative AI works, what it can do, and how it can contribute to learning. He was particularly struck by the creative ways students applied AI. Based on these findings, he argues that educators should not only tell students that they can use AI for planning, ideation, development, and research but also provide clear instructions on how to do so effectively. If students are allowed to use AI in the planning stage of an assessment, educators must first be prepared to explain how to integrate AI appropriately. This highlights the increasing importance of faculty expertise. The AI Assessment Scale serves as a tool to help educators design assessments. With their professional expertise, educators determine the appropriate level of AI usage, which is first reflected in assessment design. They must then engage students in discussions about whether, when, and how AI should be used.
Educators need to focus on their subject-matter expertise while understanding how technology interacts with different academic disciplines—and, at times, when it should be challenged. Both educators and students must decide which forms of AI usage are appropriate, which are not, and why. Furthermore, educators must be able to clearly define and communicate how they will ensure fair, transparent, and valid assessment of students' abilities. AI technology companies and scholars from other fields cannot dictate the best ways to use AI in education. Educators themselves must reach these conclusions because they are the subject-matter experts and have the authority to shape the narrative.

2. Teaching and Thinking with AI
The second keynote speaker, José Antonio Bowen, received the 2018 Ernest L. Boyer Award in recognition of his contributions to higher education in the United States. He is currently a Senior Fellow at the American Association of Colleges and Universities (AAC&U). Bowen has won teaching awards at Stanford University and Georgetown University and is the co-author, with C. Edward Watson, of Teaching with AI: A Practical Guide to a New Era of Human Learning (2024), which explores practical AI-based teaching methods.
AI is expected to transform how people work and think. It will redefine the standard for average performance and become increasingly creative. This means that AI can replace tasks that are typically performed by average students, pushing companies to seek employees who can perform above average. In response to this shift, the education sector is being called upon to develop students into experts.
In his presentation, José Antonio Bowen explores the essential skills that education must now teach, the curriculum changes required to prepare students for this new reality, and strategies for reducing academic misconduct while enhancing student motivation in an era of rapid technological change.
A. How to Ask Better Questions and Get Better Answers from AI
There are two key strategies for using AI effectively. First, using a variety of AI tools is essential. Some of the most well-known generative AI models include ChatGPT, Claude 3.5 Sonnet, Gemini, and Copilot. In addition to these, José Antonio Bowen recommends exploring additional tools like Latimer, which is based on ChatGPT but has been further trained on verified and licensed data, resulting in a different way of processing information.
The second strategy for obtaining better answers is experimenting with different prompts. Since each AI model generates different responses to the same question, refining prompts through multiple attempts can lead to better results. Additionally, research shows that the verbs used in a prompt influence AI responses. Using active verbs tends to make generative AI more responsive and dynamic. One method to improve AI-generated answers is by actively grading and scoring its responses. For example, you can ask AI to write a 150-word explanation of the TCA cycle or a 200-word summary of the 2020 election, then review its output and assign a score. The key is to request answers that can be evaluated objectively. Another effective prompting technique is assigning AI a specific expert role. Since generative AI operates as a language model, it responds more effectively when given a clear role or context. Instead of treating AI as a human, it can be more useful to think of it as an intern or a partner that requires human guidance.
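The prompting tips above (assign an expert role, lead with an active verb, request an objectively gradable, word-limited answer) can be sketched as a small prompt-builder helper. This is an illustrative sketch, not code from the presentation; the function name and parameters are assumptions.

```python
def build_prompt(role, verb, task, word_limit=None, ask_for_score=False):
    """Assemble a prompt following the techniques described above:
    give the model an expert role, lead with an active verb, and
    make the request checkable (word limit, gradable output)."""
    parts = [f"You are {role}."]
    request = f"{verb} {task}"
    if word_limit:
        request += f" in about {word_limit} words"
    parts.append(request + ".")
    if ask_for_score:
        parts.append("I will grade your answer; state any assumptions you make.")
    return " ".join(parts)

# Example: a gradable, role-framed request like the TCA-cycle one above.
prompt = build_prompt(
    role="a biochemistry instructor",
    verb="Explain",
    task="the TCA cycle for first-year students",
    word_limit=150,
    ask_for_score=True,
)
print(prompt)
```

The resulting string would then be sent to whichever generative AI tool the instructor is experimenting with; varying the role and verb across attempts is one concrete way to compare how different prompts change the responses.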

Generative AI Ecosystem. Image source: José Antonio Bowen (2024), Teaching and Thinking with AI
B. AI-Based Student Assessment and Grading Methods
Generative AI can be used for student assessment and the creation of new exam questions as part of course design. José Antonio Bowen suggests first asking AI to generate a course syllabus tailored to different types of students. Next, instructors can provide AI with lecture content and request step-by-step learning strategies to establish learning objectives and structure the course. Additionally, AI can assist in developing course materials by summarizing relevant videos, books, or other resources. Instructors can also improve lesson plans by describing students' proficiency levels and asking AI for adjustments. AI can further contribute to creating a question bank by generating exam questions. For instance, an instructor might request 25 simple essay questions about a specific concept and then select and refine five of them for the final exam.
AI can also be used for grading student submissions. Instructors can provide AI with grading criteria and ask it to evaluate assignments while explaining its reasoning. This can help maintain fairness in grading by reducing unconscious bias—where instructors may unknowingly award higher scores to students whose reasoning aligns with their own.
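Bowen's grading suggestion—supply the rubric and ask the AI to justify each score so the instructor can audit its reasoning—could be set up along these lines. The rubric wording and function name are illustrative assumptions, not material from the presentation.

```python
def grading_prompt(criteria, submission):
    """Build a grading request that supplies explicit rubric criteria
    and asks the model to explain each score before totaling, so the
    instructor can review and override its reasoning."""
    rubric = "\n".join(f"{i}. {c}" for i, c in enumerate(criteria, start=1))
    return (
        "You are a teaching assistant. Grade the student submission below "
        "against each rubric criterion on a 1-5 scale, and explain your "
        "reasoning for every score before giving a total.\n\n"
        f"Rubric:\n{rubric}\n\nSubmission:\n{submission}"
    )

# Hypothetical rubric and submission for illustration.
print(grading_prompt(
    ["Accurate use of course concepts", "Clear argument structure"],
    "The TCA cycle oxidizes acetyl-CoA...",
))
```

Keeping the rubric explicit in the prompt, rather than letting the model infer criteria, is what makes the AI's scores comparable across submissions and open to instructor review.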
Since students may use AI for academic misconduct, AI detection tools have also been introduced. José Antonio Bowen proposes an AI usage policy template that instructors can adapt:
Decide whether AI use is allowed or prohibited, and consider how AI can enhance student learning.
If AI is allowed, determine whether students must share the prompts they used in their assignments.
Explore creative ways to incorporate AI into learning, and decide whether AI detection tools will be used and how information about them will be shared with students.
Clearly define students' final accountability for their assignments, ensuring they understand their responsibilities in AI-assisted work.
We have briefly summarized the keynote speeches from the Teaching Professor Conference on AI in Education, an online event hosted by Magna Publications. Leon Furze focused on practical strategies for AI and assessment, while José Antonio Bowen discussed teaching and thinking with AI. Leon Furze emphasized that the emergence of generative AI should prompt educators to reconsider the essence of education. Instead of focusing solely on preventing academic misconduct, the priority should be on how to assess student performance effectively. As seen in the case of the British University of Vietnam, educators must clearly define academic integrity and teach students how to use generative AI appropriately.
Meanwhile, José Antonio Bowen explored how educators can integrate generative AI into teaching and the policies needed when incorporating AI into education. Although their topics differed, both speakers shared a common perspective: rather than prohibiting AI use, educators should focus on teaching students how to use it effectively.
Generative AI is becoming a game-changer across various fields. However, some students outside the classroom lack access to AI due to factors such as gender, economic status, and other environmental barriers. This situation only widens the socioeconomic divide. Education should ensure that no student is left behind, guiding them to think independently and use available tools responsibly.
We hope this edition of Teaching Tips helps educators support students in using generative AI effectively in their learning journey.
References:
https://www.magnapubs.com/teaching-professor-conference-on-ai-in-education/
https://www.magnapubs.com/teaching-professor-conference-on-ai-in-education/plenaries/
Sang-eun Lee, Min-young Ku, Ye-jin Kim, Practical Strategies for AI and Assessment & Teaching and Thinking with AI (CTL Teaching Tips #54). Seoul: Sungkyunkwan University, Center for Teaching and Learning Innovation.



