José Ramón Paño
@joserrapa.bsky.social
MD, ID, AMS, SALUD, HCUZ, IISA, AI, ES, EU, ESCMID, EIS, CMI-CMIComms, SEIMC
Reposted by José Ramón Paño
Carl T. Bergstrom (@carlbergstrom.com)

Having humans "review" the decisions doesn't help much when humans are known to be heavily swayed by wrong answers from an AI. For example, even highly trained radiologists reading mammograms are heavily biased by wrong answers from an AI assistant.

At least one human referee will then review each recommendation, said Christopher Sewell, director of the Nevada Department of Employment, Training, and Rehabilitation (DETR). If the referee agrees with the recommendation, they will sign and issue the decision. If they don’t agree, the referee will revise the document and DETR will investigate the discrepancy.

“There’s no AI [written decisions] that are going out without having human interaction and that human review,” Sewell said. “We can get decisions out quicker so that it actually helps the claimant.”
Summary
Incorrect advice by a purported artificial intelligence–based decision support system impaired the performance of radiologists with varying levels of expertise, ranging from inexperienced to very experienced, when reading mammograms.
Key Results
■ In this prospective experiment, 27 radiologists who interpreted 50 mammograms with the assistance of a purported artificial intelligence (AI)–based system were significantly affected by incorrect suggestions from the system.

■ Inexperienced radiologists were more likely to follow the suggestions of the AI system when it incorrectly suggested a higher Breast Imaging Reporting and Data System category compared with moderately (mean degree of bias, 4.0 ± 1.8 vs 2.4 ± 1.5; P = .044; r = 0.46) and very (mean degree of bias, 4.0 ± 1.8 vs 1.2 ± 0.8; P = .009; r = 0.65) experienced readers.