Predictive text models do not ever "answer your question." They predict what an answer to your question would probably look like. Which is very, very, very different.
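To make that concrete, here's a toy sketch: a bigram model that only knows which word tends to follow which in its training text. It's nothing like a real LLM (no neural net, no attention), but the core point is the same: it samples a likely-looking continuation, with no notion of truth. The corpus and function names are made up for illustration.

```python
import random
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in the training text.
training_text = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of france is lovely ."
)

follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def continue_text(prompt, n_tokens=6):
    """Extend the prompt by sampling each next word in proportion to
    how often it followed the previous word in training."""
    out = prompt.split()
    for _ in range(n_tokens):
        counts = follows.get(out[-1])
        if not counts:
            break
        choices, weights = zip(*counts.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(continue_text("the capital of france is"))
# Sometimes "paris", sometimes "lovely": a plausible-looking continuation,
# not an answer. Scale the same idea up and you get fact-shaped objects.
```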
In other words, "bullshit." Good bullshit, sometimes, which can be useful! Depending on the context, it could also be called "brainstorming" or "ideating."
The only use I've had for them wrt "answering questions" is to have them act as sounding boards, to help me come up with better ways of asking the question in the first place.
There was an Isaac Asimov story, "Liar!", published in May 1941, in which a robot with the ability to read minds is accidentally created. The wrinkle was that whenever the robot was asked what someone else was thinking, it would just give the answer the asker wanted to hear.
I'm stealing this. Okay fine, I'm citing this.
I've tried to explain to my (middle school) students that the model knows what a fact looks like, but not what the actual facts are, and will happily give them a fact-shaped object in response to a question.
Hallucinations of a stochastic parrot.
Yeah, a novel outgrowth of the process was that it got really good at convincingly sounding like it was "solving" rather than just continuing from a prompt.
Yet they can perform surprisingly well.
Not even what an answer to your question would probably "look like": what an answer to a question in the training data that looks like your question would probably look like.
BS machines