Over the past half-century, most theories of language have been framed, sometimes explicitly but more often implicitly, in terms of computation by discrete automata. Often referred to loosely as "the symbolic paradigm", this framework has provided a rich and influential vocabulary not only for many linguistic theories but also for many psycholinguistic accounts of language processing.
In recent years, connectionist models of language have been developed which appear to have very different characteristics from those of traditional symbolic theories. Much of this work, however, has been narrowly focused on specific phenomena, and a number of broader questions remain unanswered: Are connectionist models really different from symbolic models, or are the differences only superficial? If the models are significantly different, what is the nature of these differences? Do the differences provide greater insight into language? Are there rules in networks? Learning plays a large role in connectionist models, but are the networks truly blank slates? Is there any sense in which connectionism might provide a different way of understanding what it means for knowledge to be innate? In this talk, I will address these and related questions.