Prescreening Performance of AI Versus Nephrologists

By Charlotte Robinson - Last Updated: February 6, 2025

With its significant ability to analyze patient data, artificial intelligence (AI) has the potential to improve diagnostics and support more precise treatment decisions in nephrology and other areas of medicine.


Clinical prescreening can be time-consuming and prone to human error, especially when it involves a large patient cohort. Nephrologists usually review patient data against study inclusion and exclusion criteria before initiating formal screening, but this prescreening process may not be particularly efficient.

Niloufar Ebrahimi and colleagues conducted a study to determine how accurately and efficiently AI performs prescreening compared with nephrologists. The researchers used Google Forms to distribute a survey regarding four simulated clinical cases. The survey was shared through the investigators' professional connections and on social media platforms, including X and LinkedIn.

Using inclusion and exclusion criteria from the published NefIgArd clinical trial, participating nephrologists were tasked with determining the prescreening eligibility of each case with "yes" or "no" responses. Survey respondents were also asked to record how long it took to complete their assessment of each case. ChatGPT version 3.5 was used to evaluate the same cases, and the accuracy and speed of the AI were compared with those of the nephrologists.

Thirty-three nephrologists, primarily from the academic setting (69.7%), took part in the study. Of them, 9.1% were professors, 18.2% were associate professors, and 39.4% were assistant professors. The median years of experience was eight (interquartile range [IQR], 3.5-15).

AI achieved 100% accuracy in each case and significantly outperformed the nephrologists, whose per-case accuracy ranged from 21.9% to 90.6%. The accuracy of AI was significantly higher than that of the nephrologists for each case and overall (P<.001); overall accuracy was 55.9% for the nephrologists versus 99.9% for AI.

AI also produced results faster, taking an average of 11 seconds (SD, 1) with a median of 11 seconds (IQR, 11-12). Nephrologists, meanwhile, took an average of 117 seconds (SD, 146) with a median of 60 seconds (IQR, 29-120). In the rank-based comparison, the nephrologists' mean rank was 67.93 versus 4.75 for AI, and AI's evaluations were significantly faster (P=.001).

In summary, the authors said, “Integrating AI in nephrology in certain tasks with clear instructions, such as clinical trial prescreenings, might provide more accuracy and efficiency.” They recommend additional studies.

Source: Ebrahimi N, Glassock RJ, Ghozloujeh ZG, et al. Comparing clinical trial pre-screening “AI vs nephrologist”. #WCN25-606. Presented at the World Congress of Nephrology; February 6-9, 2025; New Delhi, India.
