Dan Davis
@bindlestiff.bsky.social
Formerly lots of things including teacher, graphic artist, linguist, and software architect. Currently a luthier. Inveterate AI skeptic.
600 followers · 918 following · 4.3k posts
Dan Davis @bindlestiff.bsky.social

BTW, how could “operating … within a context window” be relevant, since context is meaningless to an LLM? I’ll just stick with the Grady Booch analysis. LLMs often appear to be useful, because their allure is in appearing to be accurate.


convolver.bsky.social

Context windows, intuitively speaking, bound the space of random outputs (because LLMs are just fancy autocompletes), so larger context windows let you feed more data in and get better results /based on the tokens in the context window/. The extreme version of this is the (misnamed) “chain of /
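A minimal Python sketch of the mechanism described in that reply; the tokenization, window size, and toy scoring function are hypothetical stand-ins, not any real model's API. The point is only that the next-token distribution can condition on nothing outside the window, so what you put in the window is what bounds the output:

# Sketch under stated assumptions: a "model" whose next-token scores
# depend only on the tokens that fit inside its context window.

def fit_to_context(tokens: list[str], window: int) -> list[str]:
    """Keep only the most recent `window` tokens; anything earlier is
    invisible to the model at generation time."""
    return tokens[-window:]

def next_token_distribution(context: list[str]) -> dict[str, float]:
    """Toy stand-in for a real model: the returned distribution is a
    function of the context tokens alone."""
    last = context[-1] if context else ""
    # Hypothetical scoring: favour a continuation of the last visible token.
    return {"plausible-continuation-of-" + last: 1.0}

prompt = "some very long document that will not all fit".split()
context = fit_to_context(prompt, window=4)   # a 4-token window, for illustration
print(next_token_distribution(context))

Running this, only the final four tokens shape the (toy) output; feeding in a larger window would let more of the prompt influence it, which is the intuition behind "better results based on the tokens in the context window."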
