Raphaël Millière
@raphaelmilliere.com
Philosopher of Artificial Intelligence & Cognitive Science raphaelmilliere.com/
684 followers · 371 following · 64 posts
Thanks for your kind words, I'm glad you like the approach! My hope in writing that section was that it could be useful for teaching too.

I wrote a first version of this chapter about a year ago. This is an updated version that includes new research & benefited from comments on the first draft. I can't wait to read other contributions to the Handbook! 12/12

Contrary to what some have claimed, however, this work certainly doesn't undermine all motivations for traditional linguistic theory! The chapter advocates a pluralistic approach to linguistics, with a place for computational models – including (some) LMs. 11/

There's a lot of great work in computational linguistics doing just that – by people like @tallinzen, @weGotlieb, @a_stadt & many others. The @babyLMchallenge is also a great example of ongoing efforts to tackle concerns about the developmental plausibility of model learners. 10/

One of the most promising approaches involves designing experiments with small LMs in carefully controlled learning scenarios to investigate the learnability of specific syntactic features from sparse or indirect evidence. 9/

LMs definitely put pressure on the first claim, but that's not so interesting. More importantly, experiments with model learners trained in plausible learning scenarios can put pressure on the second claim, although evidence is still tentative and subject to many caveats. 8/

The most interesting debate probably has to do with language acquisition. There are 2 versions of the so-called "poverty of the stimulus" argument: an in-principle learnability claim that's largely abandoned, and a developmental claim about induction from sparse data. 7/

Whether we can learn anything about linguistic competence from experiments with LMs is much more controversial. Gabe has a great paper arguing for a negative answer; here I push back against that argument (see also chap. 7 of Ryan's great new book). 6/ link.springer.com/article/10.1...

(What) Can Deep Learning Contribute to Theoretical Linguistics? - Minds and Machines

Deep learning (DL) techniques have revolutionised artificial systems’ performance on myriad tasks, from playing Go to medical diagnosis. Recent developments have extended such successes to natural lan...

When considering what LMs could possibly be models of (if anything), there are at least 3 options: linguistic performance, linguistic competence, and language acquisition. The first option is the least controversial, since LMs are trained to mimic linguistic utterances (but see chapter for caveats). 5/
