David Dasa

Medical Doctor | PhD Researcher in Clinical AI and Digital Health | HFC Medical Fellow at Scale AI

I study how healthcare AI systems can be evaluated, deployed, and improved in ways that are clinically useful, equitable, and trustworthy. My work spans clinical AI evaluation, contactless physiological sensing, and simulation-based healthcare training.

Research Themes

Clinical AI

Evaluation and model behavior

Studying how healthcare AI systems should be assessed for realism, clinical usefulness, and operational fit before they are trusted in practice.

Equity

Signal quality across populations

Investigating how contactless sensing and AI systems behave across skin tones and underrepresented settings, with attention to subgroup performance.

Digital Health

Deployment and evidence

Focusing on clinically grounded implementation, measurement validity, and the boundary between promising prototypes and tools that are ready for use.

Simulation

Applied training environments

Using simulation and interactive systems as one domain for studying AI behavior, team dynamics, and evaluation methods in healthcare contexts.

Selected Work

Artificial Intelligence in Medicine · Sep 2025 · DOI: 10.1016/j.artmed.2025.103270

DASEX Framework

The DASEX framework provides the first structured methodology for evaluating AI-driven non-player characters in XR healthcare simulations, defining criteria for realism, clinical accuracy, and educational effectiveness.

medRxiv · Oct 2025 · 306 participants · Nigeria

rPPG Equity Study

A field study investigating remote photoplethysmography (rPPG) for contactless blood pressure screening, conducted with 306 participants in Nigeria. The study exposes skin-tone bias in existing models and informs equity-focused design of remote monitoring tools.

PhD Research · Bournemouth University · CfACTs+ Studentship

XR-NPC Clinical Simulator

A multi-agent XR simulator embedding AI-driven clinical colleagues into virtual ward environments for team-based training. Designed to enable safe, repeatable practice of high-stakes scenarios, with dynamic dialogue and adaptive guideline adherence.

Approach

Clinical AI work should be evidence-led, bias-aware, and usable in real settings.

Human oversight · Subgroup performance · Deployment boundaries · Clinically grounded evaluation

Get in Touch

Open to collaboration on clinical AI evaluation, digital health equity, and applied healthcare AI research. Connect on LinkedIn.