AI-assisted Annotations for Histology

Project members

Sharmila Saran Rajendran, Helen Christian, Mary McMenamin, Damion Young, Rumyana Smilevska (MSD).

Project summary

The project aims to enhance learning and student engagement in medical histology lab practicals for first-year preclinical students by using AI to assist the annotation of digitised slides, automatically segmenting and outlining cells and cell clusters so that educators spend less time annotating by hand and students receive clearly annotated study resources.

View final project report (PDF)

AI in Teaching and Learning at Oxford Knowledge Exchange Forum, 9 July 2025

Findings from projects supported by the AI Teaching and Learning Exploratory Fund in 2024–25 were presented at the AI in Teaching and Learning at Oxford Knowledge Exchange Forum at Saïd Business School on Wednesday, 9 July 2025.

Project team members each presented a lightning talk to all event participants, and hosted a series of small group discussions.

Follow the links below to view the lightning talk recording and presentation slides for this project.


View presentation slides (PDF)

Project case study

Studying cells and tissues is a key component of Year 1 in the pre-clinical medicine “Organisation of the Body” course and the biomedical sciences curriculum. As part of this, students participate in practical sessions led by subject experts, including Dr. Mary McMenamin and Dr. Helen Christian. These sessions involve studying tissues under microscopes and using CSlide, a digital platform developed by Medical Sciences Learning Technologists Damion Young and Jon Mason. CSlide features a comprehensive histology slide collection, which the subject leads use as a teaching aid to help students identify regions of interest.

To further enhance the learning experience, Sharmila Rajendran recently introduced a 65-inch Digital View board (smartboard), which allows the CSlide platform to be used on a large screen and makes practical sessions more interactive. CSlide enables students to access a wide range of scanned histological slides, supporting flexible learning through remote access, self-study, and review of practical content. Its interactive tools, such as zooming, panning, and detailed slide exploration, are widely used by both staff and students during the practicals.

The MSD Learning Technologists had previously implemented a manual annotation prototype within the platform. However, navigating to specific regions of interest within large, scanned slides remained a challenge. To address this, an AI-assisted histology project, led by Sharmila Rajendran, was launched with support from the AI Teaching and Learning Exploratory Fund. The project aimed to explore how AI models could enhance the annotation process by automatically segmenting and outlining specific cells or cell clusters, using the existing prototype as a template. This innovation sought to reduce the manual workload for educators while enriching the learning experience for students by providing clearly annotated resources that encourage autonomous, interactive, and engaging study.

The AI workflow began by tiling high-resolution slide images into 512×512 pixel sections. Low-quality tiles were filtered out, and the remaining tiles were passed through segmentation and clustering steps. The results were consolidated into a JSON output, which was then integrated into CSlide V2 for viewing. Throughout the process, academic staff contributed critical histological expertise by manually annotating slides and evaluating AI-generated outputs. This collaborative, feedback-driven loop allowed academics to review AI-generated results, engineers to refine model parameters, and technologists to ensure platform usability and integration with educational goals.
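The sketch below illustrates this tiling, filtering, and consolidation flow in Python. It is a minimal outline only: the report does not specify which segmentation or clustering models were used, so segment_tile(), cluster_outlines(), and the background-filtering heuristic are illustrative placeholders rather than the project's actual implementation.

# Minimal sketch of the tiling -> filtering -> segmentation/clustering -> JSON flow.
# segment_tile() and cluster_outlines() are hypothetical stand-ins for the real models.
import json
import numpy as np
from PIL import Image

TILE = 512  # tile edge length in pixels

def tile_slide(slide):
    """Yield (x, y, tile) for non-overlapping 512x512 tiles of a slide image."""
    w, h = slide.size
    for y in range(0, h - TILE + 1, TILE):
        for x in range(0, w - TILE + 1, TILE):
            yield x, y, np.asarray(slide.crop((x, y, x + TILE, y + TILE)))

def is_informative(tile, max_background=0.9):
    """Drop low-quality tiles, here approximated as mostly-white background."""
    background = (tile.mean(axis=-1) > 230).mean()
    return background < max_background

def segment_tile(tile):
    """Placeholder for the segmentation model: returns one outline polygon per cell."""
    return []

def cluster_outlines(outlines):
    """Placeholder for the clustering step that groups similar cells."""
    return [{"cluster_id": 0, "outlines": outlines}] if outlines else []

def process_slide(path, out_path):
    slide = Image.open(path).convert("RGB")
    annotations = []
    for x, y, tile in tile_slide(slide):
        if not is_informative(tile):
            continue
        outlines = segment_tile(tile)
        for group in cluster_outlines(outlines):
            # shift tile-local polygon coordinates back into whole-slide coordinates
            group["outlines"] = [
                [(px + x, py + y) for px, py in poly] for poly in group["outlines"]
            ]
            annotations.append(group)
    # consolidated JSON output, ready to be loaded by a viewer such as CSlide V2
    with open(out_path, "w") as f:
        json.dump({"slide": path, "annotations": annotations}, f)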

The AI model was tested across a variety of tissues with different cellular architectures, including skeletal muscle, pancreas, adrenal gland, thyroid gland, and posterior pituitary, among others. Segmentation of cell clusters in the exocrine regions of the pancreas achieved high accuracy, the adrenal gland performed well in the zona glomerulosa and zona fasciculata, and skeletal muscle demonstrated excellent segmentation of individual muscle cells. The thyroid gland showed partial success with follicular cells, though the model struggled with flattened epithelium and artefacts. In contrast, the posterior pituitary exhibited poor performance, with the model incorrectly outlining empty fibrous regions instead of actual cells.

Examples of AI segmentation on five tissue types, ranging from excellent (skeletal muscle) to poor (pituitary gland), showing varying accuracy in outlining cellular structures.

These findings indicate that the segmentation model performs optimally in tissues with moderate cell density, distinct nuclear morphology, and sufficient spatial separation. However, it struggles with densely packed, fibrous, or architecturally complex tissues. This highlights the need for tissue-specific model tuning and the inclusion of diverse training samples to improve cross-tissue performance. 
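The report does not state how agreement between AI outputs and expert annotations was scored. One common way to quantify such comparisons is intersection-over-union (IoU) between binary masks, sketched below as an illustration only; the metric, threshold, and function names are assumptions rather than the project's evaluation protocol.

# Illustrative sketch: IoU between an AI-generated mask and an expert-drawn mask.
import numpy as np

def iou(ai_mask, expert_mask):
    """IoU between two binary masks of the same shape (1 = cell, 0 = background)."""
    ai, expert = ai_mask.astype(bool), expert_mask.astype(bool)
    union = np.logical_or(ai, expert).sum()
    return float(np.logical_and(ai, expert).sum() / union) if union else 1.0

def acceptable(ai_mask, expert_mask, threshold=0.7):
    """Flag a tile for expert review when agreement falls below an illustrative threshold."""
    return iou(ai_mask, expert_mask) >= threshold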

Although AI integration with CSlide V2 has not yet been introduced to students, the groundwork has been laid for a significant future impact on education. For academic staff, AI-assisted tools will enable more efficient annotation by allowing entire clusters of cells or structures to be labelled collectively. Staff can also edit, delete, or refine annotation polygons and manually add new labels or regions. This approach reduces annotation time while maintaining expert oversight and ensuring the quality of teaching resources. 
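As an illustration only, a single annotation record in the consolidated JSON might take a form like the one below; CSlide V2's actual schema is not described in the report, so the field names and the editing helper are hypothetical. Staff edits (relabelling, refining, or deleting a polygon, or adding a new region) then reduce to simple operations on such records.

# Hypothetical shape of one annotation record and an expert relabelling step.
annotation = {
    "id": "slide42-cluster-7",
    "label": "",                     # left blank: naming structures was out of scope
    "polygon": [[1024, 512], [1060, 520], [1055, 560]],  # whole-slide coordinates
    "source": "ai",                  # or "manual" for staff-added regions
    "reviewed": False,
}

def relabel(record, label):
    """Expert assigns or corrects a label and marks the record as reviewed."""
    return {**record, "label": label, "reviewed": True}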

For students, AI-assisted annotation promises enriched visual guidance and, pending further development, the potential integration of self-assessment tools.

Several technical and practical challenges were encountered during the project. The model performed inconsistently across different tissue types, and the large slide sizes demanded significant processing power, making the pipeline heavily reliant on GPU acceleration. The initial setup of the Azure environment for deploying and managing pipeline components also posed technical hurdles. While segmentation and clustering were generally effective, classification accuracy remained limited, so the project focused on outlining and grouping structures rather than naming them.

Interdisciplinary collaboration proved crucial, with regular input from academic staff, engineers, and technologists supporting continuous iteration and improvement. As the AI models performed inconsistently across tissues, the need for adaptive or specialised approaches became clear. Automation significantly improved efficiency, but the inclusion of human expertise in reviewing AI outputs was essential to ensure educational accuracy and relevance.  

Looking ahead, several areas require further development. These include detecting and grouping similar cells or structures, automatically generating polygons around detected elements, and identifying, whether manually or automatically, the models that perform best on the different tissue types used in teaching. The team will also investigate how to integrate the AI pipeline with CSlide V2 to process hundreds of slides effectively, while addressing concerns around cost and resource allocation.
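One illustrative way to express the "best model per tissue type" idea is a simple registry with a manual override, sketched below; the tissue names, model identifiers, and fallback choice are hypothetical and not drawn from the project.

# Sketch of per-tissue model selection with an optional manual override.
DEFAULT_MODEL = "general-segmenter"

BEST_MODEL_BY_TISSUE = {
    "skeletal muscle": "muscle-segmenter",
    "pancreas": "gland-segmenter",
    "adrenal gland": "gland-segmenter",
}

def choose_model(tissue, override=None):
    """Return the manually chosen model if given, else the best known model for the tissue."""
    if override:
        return override
    return BEST_MODEL_BY_TISSUE.get(tissue.lower(), DEFAULT_MODEL)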

While student-facing features have yet to be deployed, academic staff played an active role in testing and refining the pipeline. Their feedback guided improvements and ensured alignment with teaching objectives. As interactive features are introduced, student engagement and feedback will be central to the next phase of development.