C
Colin
@colin-fraser.net
Driven by industry progress, inspired by provocative leadership, plus don't mind a good pair of shoes or a great @PennStateFball scoreboard either.
482 followers120 following726 posts
For the most part, when people talk about AI nowadays they're talking about some kind of application of machine learning. Since I write these essays for a general audience, I explain what that means at a high level in the first part of the essay.
One thing about ML is that an ML system is fully expected to output errors. So one possible explanation for the Hallucination Problem goes: a hallucination is an error, ChatGPT is ML, and ML produces errors; ergo, ChatGPT will hallucinate. However, I think this explanation is wrong.