Has NYT Connections gotten harder or have I gotten dumber?
That makes a ton of sense to me. The second idea you mention sounds like a pure task-construal (level 2), and it's cool to see how a similar phenomenon can result from either level. One question here is why low self-esteem would lead to more concrete construals.
TFW you try to define a snippet for a repetitive pattern of code but can't because you already defined that exact snippet with the same name/trigger
This is exactly the kind of content my academic not-twitter experience needs more of
Maybe this could be addressed by giving doctors contextual information that can help them figure out when the AI doesn't work well. I bet they would have done much better if they were told that the test trials (70% AI accuracy) were from a different population. Anyway, thanks for the chat!
That's a good point! I guess one useful takeaway from the study is that if doctors learn to trust an AI that works really well in one context (the first 10 trials), they might continue to rely on it in a different context where it performs poorly.
Well, this study used a fake AI that was 100% accurate, then dropped to 70%. But real AI systems are often more accurate than humans (doi.org/10.1038/s415..., www.sciencedirect.com/science/arti...). But I'm no expert—maybe you know the research better than me?
But here’s a better example: What if they found that in cases where an HIV test incorrectly came up positive (happens about 1% of the time), doctors were much more likely to make an incorrect diagnosis? Would you say HIV tests bias diagnoses?
Good point! But if through some bizarre circumstance my guess about the weather would determine the fate of all humanity, I would still check the weather rather than relying (solely) on my own instincts.
How is this not like saying “when the weather report incorrectly predicts sun, people are more likely to get caught in the rain without an umbrella”? Should we stop checking the weather because it occasionally leads us astray?