Strongly maintain that the way these systems are discussed in popular media is doing massive damage to how people understand them and their capabilities. These narratives undermine any meaningful regulation, and even our ability to navigate living alongside these technologies.
Hume AI, a startup founded by a psychologist who specializes in measuring emotion, gives some top large language models a realistic human voice.
Media constantly failing at its job
They’re getting paid to encourage the Eliza effect.
Absolutely this. We already fail spectacularly at recognising that we, as users, are part of this LLM "intelligence miracle": we unwittingly ascribe intelligence to something that merely *looks* intelligent—and LLMs are optimised to do exactly that. More anthropomorphism is the last thing we need.