Nick Shea's terrific book on concepts is now out, and it is open access. Download it and read it! philpapers.org/rec/SHECAT-11
Thanks for your kind words, I'm glad you like the approach! My hope in writing that section was that it could be useful for teaching too.
I wrote a first version of this chapter about a year ago. This is an updated version that includes new research & benefited from comments on the first draft. I can't wait to read other contributions to the Handbook! 12/12
Contrary to what some have claimed, however, this work certainly doesn't undermine all motivations for traditional linguistic theory! The chapter advocates a pluralistic approach to linguistics, with a place for computational models – including (some) LMs. 11/
There's a lot of great work in computational linguistics doing just that – by people like @tallinzen, @weGotlieb, @a_stadt & many others. The @babyLMchallenge is also a great example of ongoing efforts to tackle concerns about the developmental plausibility of model learners. 10/
One of the most promising approaches involves designing experiments with small LMs in carefully controlled learning scenarios to investigate the learnability of specific syntactic features from sparse or indirect evidence. 9/
LMs definitely put pressure on the first claim, but that's not so interesting. More importantly, experiments with model learners trained in plausible learning scenarios can put pressure on the second claim, although evidence is still tentative and subject to many caveats. 8/
The most interesting debate probably has to do with language acquisition. There are two versions of the so-called "poverty of the stimulus" argument: an in-principle learnability claim that has largely been abandoned, and a developmental claim about induction from sparse data. 7/
Whether we can learn anything about linguistic competence from experiments with LMs is much more controversial. Gabe has a great paper arguing for a negative answer; here I push back against that argument (see also chapter 7 of Ryan's great new book). 6/ link.springer.com/article/10.1...
When considering what LMs could possibly be models of (if anything), there are at least three options: linguistic performance, linguistic competence, and language acquisition. The first option is the least controversial, since LMs are trained to mimic linguistic utterances (but see the chapter for caveats). 5/