enny
@enny.whitefore.st
830 followers · 301 following · 445 posts

enny.whitefore.st

my biggest issue with chat bots (specifically ChatGPT) is that it will, with overwhelming confidence, assert it’s capable of doing something it isn’t, and it’ll often gaslight you in the process. I don’t think the simulated human approach works very well. I expect this will be an interesting read


enny.whitefore.st

the titles and dates it gave in this example just seem to be generic titles it invented rather than saying it couldn’t complete the task, which I find odd. bots have constraints; it shouldn’t have to pretend it can do something it understandably can’t

liang.bsky.social

I read the following paper yesterday, "Evaluating Verifiability in Generative Search Engines". Their finding is that these systems need to be both fluent + informative AND have accurate + adequate citations. Many are fluent + informative, but lack the latter. https://arxiv.org/abs/2304.09848
