Evaluation of performance on real-world retinal imaging data showed sensitivities varied widely
THURSDAY, Jan. 21, 2021 (HealthDay News) — Artificial intelligence (AI)-based screening algorithms for diabetic retinopathy (DR) show significant performance differences, according to a study published online Jan. 5 in Diabetes Care.
To assess the accuracy of seven AI algorithms from five companies, Aaron Y. Lee, M.D., from the University of Washington in Seattle, and colleagues analyzed 311,604 retinal images from 23,724 veterans who presented for teleretinal DR screening between 2006 and 2018.
The researchers found that although negative predictive values were high (range, 82.72 to 93.69 percent), sensitivities varied widely (range, 50.98 to 85.90 percent). On the arbitrated data set, most algorithms performed no better than human graders, but two achieved higher sensitivities and one yielded comparable sensitivity (80.47 percent) and specificity (81.28 percent). Compared with the Veterans Affairs teleretinal graders, one algorithm had lower sensitivity (74.42 percent) for proliferative DR. Value per encounter ranged from $15.14 to $18.06 for ophthalmologists and from $7.74 to $9.24 for optometrists.
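For readers unfamiliar with the metrics quoted above, the following minimal Python sketch shows how sensitivity, specificity, and negative predictive value are computed from confusion-matrix counts; the counts used here are hypothetical and are not drawn from the study.

```python
# Illustrative sketch only: how sensitivity, specificity, and negative
# predictive value (NPV) are derived from a binary confusion matrix.
# All counts below are hypothetical, not data from the study.

def screening_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute standard diagnostic-accuracy metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate: referable DR correctly flagged
        "specificity": tn / (tn + fp),  # true-negative rate: non-referable eyes correctly passed
        "npv": tn / (tn + fn),          # probability that a negative screen is truly negative
    }

if __name__ == "__main__":
    # Hypothetical counts for one algorithm on a screening set
    metrics = screening_metrics(tp=820, fp=190, tn=810, fn=180)
    for name, value in metrics.items():
        print(f"{name}: {value:.2%}")
```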
“These results argue for rigorous testing of all such algorithms on real-world data before clinical implementation,” the authors write.
Several authors reported financial ties to pharmaceutical and health care companies.