Project members
Juliana Dresvina, Daniel Gerrard, Leif Dixon, Gavin Thomas (Humanities Division).
Project summary
Developing an AI-powered research companion that helps undergraduate History students navigate pre-modern period papers by guiding them to relevant resources from Oxford's vast archives, with a focus on enhancing reading lists and supporting critical engagement without enabling essay writing.
View final project report (PDF)
AI in Teaching and Learning at Oxford Knowledge Exchange Forum, 9 July 2025
Findings from projects supported by the AI Teaching and Learning Exploratory Fund in 2024–25 were presented at the AI in Teaching and Learning at Oxford Knowledge Exchange Forum at Saïd Business School on Wednesday, 9 July 2025.
Project team members each presented a lightning talk to all event participants, and hosted a series of small group discussions.
Follow the link below to view the presentation slides for this project.
View presentation slides (PDF)
Project case study
Project overview and implementation
Brainard the Fox is a specialized AI chatbot designed to support undergraduate students who struggle to choose readings for their essays from vast bibliographies. The pilot was trained on secondary scholarship from reading lists in early medieval British history, creating a localized model that could provide targeted academic guidance without relying on commercial AI platforms. Students could query Brainard about specific topics (such as "Gildas," "Vikings in Britain," or "Why did William I win the Battle of Hastings?") and receive structured responses comprising a contextual summary, relevant reading suggestions, a recommended reading order, and key quotes.
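The report does not describe Brainard's internals, but the combination of a small curated corpus and structured responses suggests a retrieval-based design. The sketch below illustrates one plausible approach under that assumption: retrieval over local reading-list entries, with results ordered introductory-first to produce a suggested reading sequence. All data, field names, and the ordering heuristic are illustrative assumptions, not the project's actual implementation.

```python
# Illustrative sketch only: retrieval over a local, curated corpus of
# reading-list entries. Entries, fields, and heuristics are assumptions.
import math
import re
from collections import Counter

# Each corpus entry is one curated reading-list item (hypothetical examples).
CORPUS = [
    {
        "citation": "Halsall, G., Worlds of Arthur (2013)",
        "level": 1,  # 1 = introductory survey; higher = more specialised
        "summary": "Survey of post-Roman Britain and the Arthur debate.",
        "text": "post-roman britain gildas arthur saxons sources criticism",
    },
    {
        "citation": "Lavelle, R., Alfred's Wars (2010)",
        "level": 2,
        "summary": "Warfare and defence in Alfredian Wessex.",
        "text": "vikings wessex alfred warfare burhs defence danes",
    },
]

def tokens(text: str) -> Counter:
    """Bag-of-words representation of a query or corpus entry."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bags of words."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

def recommend(query: str, top_n: int = 5) -> list[dict]:
    """Retrieve relevant entries, then order them introductory-first so the
    student receives a suggested reading sequence, not just a ranked list."""
    q = tokens(query)
    scored = [(cosine(q, tokens(e["text"])), e) for e in CORPUS]
    hits = [e for s, e in sorted(scored, key=lambda p: -p[0]) if s > 0]
    return sorted(hits[:top_n], key=lambda e: e["level"])

for entry in recommend("Vikings in Britain"):
    print(f'{entry["citation"]} -- {entry["summary"]}')
```

In a full pipeline, the retrieved entries would typically be passed as context to a language model that drafts the contextual summary and selects key quotes; restricting retrieval to a vetted local corpus is what allows the tool to answer only from the scholarship tutors have approved.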
Rationale and benefits
Image: roundtable discussion at the AI in Teaching and Learning at Oxford event, 9 July 2025
The primary rationale was to address the overwhelming nature of academic research for undergraduate students, who often struggle with where to begin when approaching complex historical topics. Brainard provided several key benefits: it offered immediate, accessible guidance that made daunting topics feel more manageable; it suggested logical reading sequences that helped students build understanding progressively; and it provided concise summaries that contextualized topics within broader historical frameworks. For students, this meant reduced anxiety about starting essays and greater confidence in their research approach. For tutors, it offered a way to extend personalized guidance beyond tutorial hours.
Challenges and limitations
Testing revealed some limitations arising from the narrow scope of the initial training data. The tool occasionally recommended outdated sources or omitted obvious primary texts because the corpus (approximately 100 sources) did not include them; for the same reason, recommendations could be repetitive or too broad for specific essay questions. Some testers noted that students could potentially use Brainard to generate essay frameworks rather than develop critical thinking skills independently. The tool also sometimes struggled with methodological questions and with concepts outside its training data.
Learning outcomes and future development
The testing process revealed that Brainard's greatest strengths lay in its ability to provide structured guidance and reduce student anxiety about approaching new topics. The "suggested order of reading" feature was particularly well received. However, the experience highlighted the importance of emphasizing that AI tools should complement, not replace, close engagement with primary sources and independent critical thinking. Moving forward, we would expand the training data significantly and add clear disclaimers about source dates and limitations. The tool would also benefit from integration with library systems to provide direct links to resources, and from clearer guidelines about appropriate use to prevent academic dependency.
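Two of these proposed improvements lend themselves to simple mechanisms. A minimal sketch, assuming each corpus entry gains a publication-year field and a catalogue permalink stored per entry (rather than guessed at query time): the year supports automatic "dated source" disclaimers, and the permalink gives students a direct route to the library record. The field names, the age threshold, and the placeholder URL are all hypothetical.

```python
# Illustrative sketch only: auto-generated source-date disclaimers and
# per-entry catalogue links. Fields and the threshold are assumptions.
from datetime import date

DATED_AFTER_YEARS = 25  # arbitrary threshold for flagging older scholarship

def render(entry: dict) -> str:
    """Format one recommendation, appending a library link and a disclaimer
    when the source is older than the chosen threshold."""
    line = entry["citation"]
    if "permalink" in entry:
        line += f'\n  Library record: {entry["permalink"]}'
    if date.today().year - entry["year"] > DATED_AFTER_YEARS:
        line += (f"\n  Note: published {entry['year']}; check more recent "
                 "scholarship for revisions to this view.")
    return line

print(render({
    "citation": "Stenton, F., Anglo-Saxon England (1943)",
    "year": 1943,
    "permalink": "https://example.org/catalogue/record/12345",  # placeholder
}))
```

Storing permalinks alongside each curated entry, rather than constructing search URLs on the fly, keeps the library integration reliable and avoids depending on any particular catalogue's query syntax.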
Reception and impact
Feedback from colleagues and students was generally positive. Graduate students and librarians appreciated the tool's ability to provide coherent starting points for research, while one faculty member expressed concerns about potential academic shortcuts. A valuable insight came from a colleague who demonstrated how easily the tool could generate essay content, highlighting the need for clear guidelines about appropriate use. This feedback reinforced that AI teaching tools must be designed to enhance rather than replace fundamental academic skills, with careful consideration of how they integrate into pedagogical practice.