First episode of our new podcast is out! Episode 1: "What is Intelligence?" It's hard to fully answer this question, but we had some great discussions about it with superstars Alison Gopnik and John Krakauer. Give it a listen! complexity.simplecast.com/episodes/nat...
Never thought I would be a podcast host, but... I'm co-hosting this season's Complexity Podcast from @sfiscience.bsky.social complexity.simplecast.com/episodes/tra...
It's not just you who's confused about successor representations (SR); advanced chatbots are too, and I definitely don't blame them. They just need to read my blog: blog.dileeplearning.com/p/a-critique...
Next topic… where is transformer attention in the brain? 😛😇
Today's Q for discussion by the #NeuroAI crowd: In-context learning (i.e., learning from a few examples sans synaptic updates) is a major advantage of large AI models: arxiv.org/abs/2301.00234
Do we have clear evidence of rapid learning with no synaptic changes in the brain? If so, when/where?
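For concreteness, here's a toy sketch (my own illustration, not from the paper) of the contrast I mean: the same mapping "learned" once by updating a parameter, and once by a frozen computation that only conditions on examples placed in its input.

```python
# Toy contrast between "synaptic" learning and in-context learning.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn y = 2*x from a few (x, y) examples.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
query_x = 5.0

# (a) "Synaptic" learning: a weight w is changed by gradient steps.
w = rng.normal()
for _ in range(200):
    for x, y in examples:
        grad = 2 * (w * x - y) * x   # d/dw of squared error
        w -= 0.01 * grad             # the parameter itself changes
print("weight-update prediction:", w * query_x)

# (b) "In-context" learning: parameters stay frozen; the examples sit in the
# input, and a fixed computation over that context does the adapting.
# Here the fixed computation is least squares over whatever context it gets.
def frozen_predictor(context, x_new):
    xs = np.array([x for x, _ in context])
    ys = np.array([y for _, y in context])
    w_context = xs @ ys / (xs @ xs)  # computed from context, never stored
    return w_context * x_new

print("in-context prediction:", frozen_predictor(examples, query_x))
```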
Ok, so ICL in a transformer also includes synaptic changes in “working memory” …the darned prompt has to be stored somewhere. Just think of the context buffer…someone has to write the prompt into it.
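A toy sketch of that point (my illustration, not anyone's actual code): even with every weight frozen, processing a prompt writes per-token key/value vectors into a cache, and attention then reads from that stored state.

```python
# Toy single-head attention with a KV cache: the "context buffer" gets written
# to as the prompt is processed, even though the weights never change.
import numpy as np

rng = np.random.default_rng(0)
d = 8                                                         # toy dimension
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))   # frozen weights

kv_cache = {"keys": [], "values": []}   # the "working memory" being written

def step(token_vec):
    """Process one token: write its (k, v) into the cache, then attend over it."""
    q = token_vec @ W_q
    kv_cache["keys"].append(token_vec @ W_k)    # state change happens here,
    kv_cache["values"].append(token_vec @ W_v)  # even though W_* never change
    K = np.stack(kv_cache["keys"])
    V = np.stack(kv_cache["values"])
    scores = K @ q / np.sqrt(d)
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()
    return attn @ V                              # read-out from the cache

prompt = rng.normal(size=(5, d))        # 5 toy "prompt tokens"
for tok in prompt:
    out = step(tok)

print("cache entries written by the prompt:", len(kv_cache["keys"]))
```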
Ah ok, I thought it was transformers, hence the disconnect
In your post, did "it" refer to animals or transformers?
Not really...it just indexes another part of the circuit...