This document presents a prototype for a medical chatbot that integrates several specialized diagnostic models behind a single conversational interface, aiming to simplify user interaction with complex diagnostic tools. The system uses OpenAI's large language models (LLMs) to handle both image-based inputs (such as chest X-rays for pneumonia and OCT scans for ocular conditions) and text-based physiological data for diabetes prediction. A key innovation is the chatbot's ability to extract the relevant parameters from natural language, trigger the appropriate diagnostic model, interpret its output, and return a user-friendly response. The author emphasizes that this modular approach makes it easy to add new diagnostic capabilities, positioning the chatbot as a central "hub of medical models" that improves accessibility and user experience in bioinformatics, while acknowledging the importance of human oversight.
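
The extract-parameters / trigger-model / interpret-output loop described above maps naturally onto LLM tool calling. The sketch below is a minimal illustration of that routing idea under stated assumptions: the model name (`gpt-4o-mini`), the tool schemas, and the `predict_diabetes` / `classify_xray` helpers are hypothetical stand-ins for the prototype's actual diagnostic models, not the author's implementation.

```python
# Minimal sketch of an LLM "hub" that extracts parameters from natural language,
# dispatches to a diagnostic model, and phrases the result for the user.
# Assumptions: OpenAI Python SDK v1.x, OPENAI_API_KEY set in the environment,
# and placeholder diagnostic functions standing in for the real models.
import json
from openai import OpenAI

client = OpenAI()

# Placeholder diagnostic models (stubs); the prototype would wrap its trained
# diabetes predictor and pneumonia X-ray classifier here.
def predict_diabetes(glucose: float, bmi: float, age: int) -> dict:
    return {"model": "diabetes", "risk": "unknown (stub)"}

def classify_xray(image_path: str) -> dict:
    return {"model": "pneumonia-xray", "finding": "unknown (stub)"}

TOOLS = [
    {"type": "function", "function": {
        "name": "predict_diabetes",
        "description": "Predict diabetes risk from physiological values.",
        "parameters": {"type": "object", "properties": {
            "glucose": {"type": "number"},
            "bmi": {"type": "number"},
            "age": {"type": "integer"}},
            "required": ["glucose", "bmi", "age"]}}},
    {"type": "function", "function": {
        "name": "classify_xray",
        "description": "Run the pneumonia classifier on a chest X-ray image.",
        "parameters": {"type": "object", "properties": {
            "image_path": {"type": "string"}},
            "required": ["image_path"]}}},
]

DISPATCH = {"predict_diabetes": predict_diabetes, "classify_xray": classify_xray}

def answer(user_message: str) -> str:
    messages = [
        {"role": "system", "content": "You are a medical assistant that routes "
         "requests to specialized diagnostic tools and explains their output."},
        {"role": "user", "content": user_message},
    ]
    # Step 1: let the LLM extract parameters and choose a diagnostic tool.
    first = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, tools=TOOLS)
    msg = first.choices[0].message
    if not msg.tool_calls:
        return msg.content  # no diagnostic model needed; answer directly

    # Step 2: run the selected model with the arguments the LLM extracted.
    call = msg.tool_calls[0]
    result = DISPATCH[call.function.name](**json.loads(call.function.arguments))

    # Step 3: feed the raw output back so the LLM can produce a
    # user-friendly interpretation.
    messages += [msg, {"role": "tool", "tool_call_id": call.id,
                       "content": json.dumps(result)}]
    final = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return final.choices[0].message.content
```

In a structure like this, the modularity the author highlights shows up directly: adding a new diagnostic capability would amount to registering one more tool schema and one more entry in the dispatch table, with no change to the conversational front end.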
Learn more: Robodoc: a conversational-AI based app for medical conversations