Title:
Multimodal Agentic Lab Assistant for Scientific Research and Cardiac Image Analysis
Poster
Abstract
Laboratory research still relies largely on manual tasks, notably extracting scientific evidence from the literature and measurements from biomedical images. The Multimodal Agentic Lab Assistant unifies both in a single interactive workflow, connecting research-based evidence to results from experimental lab work. The agent, built on Gemini 2.5 Flash and orchestrated via LangChain and DeepAgents, decides which tools to invoke for each query. It chains literature retrieval with image analysis and consults a feedback-memory tool that stores preferred response patterns and avoids rejected ones. Built on a retrieval-augmented pipeline, the agent indexes chunked papers from PeS2oX, PubMed, and COREX-18 to answer scientific questions with cited sources. Image analysis is handled by a vision module that detects cardiac ablation lesions, auto-locates a ruler when present, and reports lesion width, depth, and extruded area. In practice, researchers and students can reduce manual measurement error and move from image to measurement to cited evidence in a single session, improving efficiency and accelerating decision-making in cardiac ablation research and clinical education.
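The ruler-calibrated measurement step described in the abstract can be sketched as follows. This is a minimal illustration of the idea only: the function names, the calibration interface, and the assumption that the vision module yields lesion width, depth, and area in pixels are all hypothetical, not the poster's actual implementation.

```python
def mm_per_pixel(ruler_px_length: float, ruler_mm_length: float) -> float:
    """Scale factor derived from an auto-located ruler segment.

    ruler_px_length: length of the detected ruler span in pixels.
    ruler_mm_length: the known physical length of that span in mm.
    """
    return ruler_mm_length / ruler_px_length


def lesion_metrics(width_px: float, depth_px: float,
                   area_px: float, scale: float) -> dict:
    """Convert pixel-space lesion measurements to physical units.

    Linear dimensions scale by `scale`; area scales by `scale` squared.
    """
    return {
        "width_mm": width_px * scale,
        "depth_mm": depth_px * scale,
        "area_mm2": area_px * scale ** 2,
    }


# Example: a 10 mm ruler span detected as 200 px wide.
scale = mm_per_pixel(ruler_px_length=200, ruler_mm_length=10)
print(lesion_metrics(width_px=80, depth_px=40, area_px=1600, scale=scale))
# → {'width_mm': 4.0, 'depth_mm': 2.0, 'area_mm2': 4.0}
```

The key design point is that only the ruler's physical length needs to be known; all lesion measurements then follow from a single per-image scale factor.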
Authors
| First Name | Last Name |
| --- | --- |
| Andrew | Lester |
| Gordon | Chau |
Advisors:
| Full Name |
| --- |
| Matthew Magnusson |
Submission Details
- Conference: URC
- Event: Interdisciplinary Science and Engineering (ISE)
- Department: Computer Science (ISE)
- Group: Computer Science - Independent Projects
- Added: April 20, 2026, 9:40 a.m.
- Updated: April 20, 2026, 9:41 a.m.