Universal Radiology
AI-assisted X-ray interpretation for low-resource clinical environments
This research aims to bridge the gap between well-studied, high-performing radiology models and a critical shortage of diagnostic expertise in low-resource clinical environments.
The focus is end-to-end usability: capture, correction, inference, and outputs that clinicians can evaluate and trust.
Addressing Radiologist Scarcity
Focused on regions with as few as 1 radiologist per 1M+ people
Overview
Universal Radiology began with a clear mismatch:
- In research settings, machine learning models were achieving radiologist-level—and in some cases superior—performance on specific diagnostic tasks
- In many parts of the world, there are virtually no radiologists available
Clinicians are often responsible for interpreting X-rays themselves, working with:
- Low-quality machines
- Film-based scans
- Photos captured on mobile devices
The gap was not theoretical. It was operational.
This project started as an attempt to understand whether these two realities could be brought together:
Can high-performing radiology models be adapted to work in the environments where they are most needed?
Approach
This required more than applying existing models.
It involved simultaneously:
Synthesizing current research: studying state-of-the-art radiology models, their performance, and their limitations
Running practical experiments: testing how these models behave under degraded, real-world input conditions
Working directly with clinicians: understanding how imaging is actually captured, interpreted, and used in practice
The goal has been to build toward a system that is not just technically capable, but usable and trustworthy in clinical settings.
What Was Built
A working prototype system designed around real-world constraints:
Mobile capture pipeline: smartphone-based capture of X-rays (film or screen)
Image correction + preprocessing: handling of
- Perspective distortion
- Moiré patterns
- Contrast and histogram normalization
Model inference pipeline: GPU-backed inference using containerized infrastructure
Interpretability layer: gradient-based heatmaps to expose model reasoning
The system treats imperfect input as a given, not an edge case.
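To make the moiré problem concrete: a screen pattern photographed by a phone shows up as sharp, isolated peaks in the 2-D Fourier spectrum, and can be suppressed by notching those peaks out. The sketch below is a minimal NumPy illustration of that idea, not the project's actual pipeline; the peak locations are assumed to be known here, whereas a real system would have to detect them.

```python
import numpy as np

def notch_filter(img, peaks, radius=2):
    """Suppress periodic interference by zeroing small neighbourhoods of
    the centred 2-D spectrum around known peak locations (and their
    conjugate mirrors), leaving the rest of the image content intact."""
    f = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    h, w = img.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    for dy, dx in peaks:
        for sy, sx in ((dy, dx), (-dy, -dx)):  # conjugate-symmetric pair
            f[(yy - (cy + sy)) ** 2 + (xx - (cx + sx)) ** 2 <= radius ** 2] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

# Synthetic example: a smooth intensity ramp standing in for anatomy,
# plus a strong 4-pixel-period "screen" pattern.
y, x = np.mgrid[:128, :128]
base = (y + x) / 4.0
moire = 20 * np.sin(2 * np.pi * x / 4)     # spectral peaks at (0, +/-32)
cleaned = notch_filter(base + moire, peaks=[(0, 32)])

err_before = np.abs(moire).mean()          # artifact energy before filtering
err_after = np.abs(cleaned - base).mean()  # residual after filtering
```

Because the notch removes only a few spectral bins, the underlying image survives almost untouched while the periodic artifact is largely eliminated.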
Key Technical Focus
Adapting models to a new domain
Most radiology systems assume clean, standardized inputs.
This work focused on:
- Photos of X-rays with inconsistent quality
- Device-specific artifacts
- Environmental variability
Approach:
- Preprocessing pipelines tailored to degradation patterns
- Experiments with domain adaptation techniques to improve robustness
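Domain adaptation proper happens at training time, but a cheap input-side step in the same spirit is histogram matching: remap a field photo's intensities onto the distribution the model saw during training. The following is a hypothetical NumPy sketch of that classical technique, not the project's implementation:

```python
import numpy as np

def match_histogram(src, ref):
    """Remap src intensities so their empirical CDF matches ref's,
    nudging a washed-out field photo toward the reference distribution."""
    s_vals, s_counts = np.unique(src.ravel(), return_counts=True)
    r_vals, r_counts = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / src.size
    r_cdf = np.cumsum(r_counts) / ref.size
    # For each source quantile, look up the reference value at that quantile
    mapped = np.interp(s_cdf, r_cdf, r_vals.astype(float))
    lut = dict(zip(s_vals, mapped))
    return np.vectorize(lut.get)(src)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64))    # well-exposed "training-like" scan
src = rng.integers(100, 141, size=(64, 64))  # low-contrast field capture
matched = match_histogram(src, ref)
```

The matched image spreads the narrow 100-140 band back across the full intensity range, which is exactly the kind of stabilization degraded inputs need before inference.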
Image correction as a prerequisite to accuracy
Model performance was heavily dependent on input quality.
Key work included:
- Perspective reconstruction
- Artifact reduction (e.g., screen interference)
- Normalization pipelines to stabilize inputs
These steps were often as impactful as model selection itself.
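Perspective reconstruction reduces to estimating a homography from the four corners of the photographed film and inverse-warping the quadrilateral into a rectangle. A bare-bones NumPy sketch of that standard technique (nearest-neighbour sampling, corners assumed already detected) might look like:

```python
import numpy as np

def homography_from_points(src_pts, dst_pts):
    """Solve the 8x8 linear system (direct linear transform) for the 3x3
    projective matrix H mapping each src (x, y) to its dst (u, v)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_to_rect(img, corners, out_h, out_w):
    """Inverse-warp: for every output pixel, sample the input pixel the
    homography maps onto it (nearest neighbour keeps the sketch short)."""
    dst = [(0, 0), (out_w - 1, 0), (out_w - 1, out_h - 1), (0, out_h - 1)]
    H_inv = np.linalg.inv(homography_from_points(corners, dst))
    u, v = np.meshgrid(np.arange(out_w), np.arange(out_h))
    x, y, w = H_inv @ np.stack([u.ravel(), v.ravel(), np.ones(u.size)])
    xs = np.clip(np.round(x / w).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(y / w).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].reshape(out_h, out_w)

# Identity check: rectangular corners => the image passes through unchanged
film = (np.arange(64 * 64) % 251).reshape(64, 64)
straight = warp_to_rect(film, [(0, 0), (63, 0), (63, 63), (0, 63)], 64, 64)
```

Production code would use a library routine (e.g. OpenCV's perspective transform) with interpolation, but the geometry is the same four-point solve shown here.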
Interpretability and usability
Clinical adoption depends on more than output accuracy.
The system incorporates:
- Gradient-based heatmaps for visual explanation
- Ongoing exploration of transformer-based models for text-based interpretations
The focus is on producing outputs that clinicians can evaluate, not just receive.
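Gradient-based heatmaps of this kind typically follow the Grad-CAM recipe: weight each final-convolution activation map by the spatially pooled gradient of the class score, sum, and keep the positive part. A framework-agnostic NumPy sketch of just that combination step (the activations and gradients would come from the actual model, which is not shown):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM combination step: weight each activation map by the spatial
    mean of the class-score gradient flowing into it, sum across channels,
    and keep only positive evidence.

    activations, gradients: (channels, H, W) arrays from the last conv layer.
    Returns an (H, W) heatmap normalised to [0, 1]."""
    weights = gradients.mean(axis=(1, 2))  # alpha_k, one weight per channel
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example: channel 0 fires top-left with positive gradient,
# channel 1 fires bottom-right with negative gradient.
acts = np.zeros((2, 4, 4))
acts[0, :2, :2] = 1.0
acts[1, 2:, 2:] = 1.0
grads = np.stack([np.full((4, 4), 0.5), np.full((4, 4), -0.5)])
heat = grad_cam(acts, grads)
```

In the toy example the heatmap lights up only the top-left region, mirroring how a clinician-facing overlay highlights the area driving a prediction.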
Engineering Approach
- Dockerized GPU infrastructure for reproducible, scalable inference
- End-to-end system design from capture -> preprocessing -> inference -> output
- Research-driven development grounded in current literature
- Continuous clinician feedback loops to validate real-world usability
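One way to keep a capture -> preprocessing -> inference -> output chain swappable and testable is to express it as an explicit list of named stages. The sketch below is a hypothetical illustration of that design, with placeholder stage functions rather than the real components:

```python
import numpy as np

def run_pipeline(image, stages):
    """Apply each (name, function) stage in order; the name is kept so a
    real system could log or time each step individually."""
    for name, fn in stages:
        image = fn(image)
    return image

# Placeholder stages standing in for the real correction and model steps
stages = [
    ("normalize", lambda im: (im - im.min()) / max(im.max() - im.min(), 1e-8)),
    ("center_crop", lambda im: im[16:240, 16:240]),
]
photo = np.random.default_rng(1).random((256, 256)) * 255
out = run_pipeline(photo, stages)
```

Keeping stages as plain functions means each one can be unit-tested against degraded inputs in isolation, which matters when clinician feedback forces a single step to change.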
Research & Field Work
Delivered an oral presentation at AFCEM (African Conference on Emergency Medicine)
Traveled to Botswana to present and gather feedback from clinicians
Ongoing collaboration with:
- A medical student in Tanzania
- A hospital in Uganda (planned)
Significant effort has gone into understanding the environments this system is intended for, not just building in isolation.
Result
I built a working prototype and presented it in an oral presentation at AFCEM. Research and development are ongoing, with continued collaboration and feedback from clinicians in the field.
Presented at AFCEM in Botswana
Presented a working prototype in an oral presentation at the African Conference on Emergency Medicine
Current Status
Active research and prototyping.
Current focus:
- Dataset collection from real-world conditions
- Further domain adaptation experiments
- Exploration of multimodal / transformer-based systems
Development is intentionally cautious, with emphasis on safety and clinical validity.