David Chalmers
@davidchalmers.bsky.social

on X, i asked: who endorses the AGI scaling hypothesis: roughly, that scaling current systems and methods will lead to human-level AGI? since bluesky is philosopher-heavy, let me also ask here: which philosophers endorse or have expressed sympathy with the hypothesis, or with something nearby?


scpritch.bsky.social

Maybe not "endorse", but this Douglas Hofstadter interview had him sounding a bit that way, which surprised me: www.lesswrong.com/posts/kAmgdE...

timhenke.bsky.social

More computer science than philosophy, but @irisvanrooij.bsky.social and @olivia.science have a preprint (osf.io/preprints/ps...) with a theorem (Thm 2) showing that LLMs and any similar approach can never learn something of such high complexity

malcolml.bsky.social

... so LLMs etc. can do all the same things that an AGI can... after they have been trained to do so. What's missing is the ability to come up with truly novel approaches on the fly. Quantifying that is hard, because LLMs clearly exhibit a certain amount of novelty.

malcolml.bsky.social

My own opinion: I equate LLMs to habitual action in humans. Asking what kind of processing they can or cannot do is the wrong question, because it comes down to how they are trained. What's missing is the rational layer that humans have, which they use to enrich their habitual skills and to train them.

orlandoridge.bsky.social

There is a complication in this area. Current AIs implement something that could be described as a meta-method process: they are able to develop new methods in order to respond to new challenges. It’s difficult to anticipate where this might lead.

remisramos.bsky.social

It has been 15 years since this and it *sorely* needs an update: spectrum.ieee.org/who-is-who-i...
