Look at me. Listen to me: If someone is selling you a "search" & "knowledge" "AI" product they claim is smarter & better & faster than you, but it literally cannot be guaranteed to give you simple true facts about consensus reality when asked, then that product DOES NOT WORK and SHOULD NOT BE USED.
First Sundar Pichai, now Tim Cook, both admitting that generative "AI," by dint of its very foundations (cf. "Bias Optimizers"), will not and cannot stop bullshitting (cf. "On Bullshit Engines"). futurism.com/the-byte/tim...
In a new Washington Post interview, Apple CEO Tim Cook admitted that he's not "100 percent" sure the company's AI will stop lying.
It’s like how the FDA has guidelines allowing a certain percentage of your food to be rat hair and such, except that we’re not talking trace amounts here - could be 50% bad stuff, who knows?
Use generative "AI" to spitball, hypothesize, overcome the tyranny of the blank page? I mean, if the environmental costs weren't astronomical & the training corpora weren't largely stolen, then yeah, sure. Use it for facts? Knowledge? To fully *Replace* thought, feeling, & creativity? Absolutely not.
I couldn't like this any harder. Let me try it with a running start.
I'm going to gently push back, recognizing your expertise and eagerly interested in your response: perhaps it's better to say "not used nearly as broadly as the marketing and sales people are advocating"? As an updated "Clippy," an Office assistant, it seems to work very well. It seems to...
I fucken lie to myself as a matter of course to get through life, why would I believe some algorithm!
Seems like the most obvious and cost-effective solution is to pull the plug and write the whole thing off, yet I get the sense that none of them are going to do that. They're just going to keep trying different shades of lipstick on this pig.
AI, no matter how good it gets, is only as good as its inputs, like any machine. The broadest-consensus response, if it can find one, is fine in most circumstances, but where there is no broad consensus, or where the answer has literal life/death/health impacts, that isn't going to be good enough.
As I remind people repeatedly, Generative AI is NOT a reliable source of information. It is designed to Align with User Instructions - i.e., to do what you ask it to, regardless of facts. You might as well use Facebook for your “news” if you trust GenAI for facts.