Principles and Parameters and Manifolds, oh my!

I've recently been reading a lot of papers from the more traditional Generativist literature on computational models for language acquisition, so I'm going to write a post discussing how these traditional approaches relate to the kind of natural language processing (NLP)-style, data-driven, structured probabilistic models I work with (and hinted at in the previous post). Specifically, I'm going to outline the Universal Grammar and Principles & Parameters-based approach to language acquisition. We will see that it looks pretty different from the approaches adopted for grammar induction in natural language processing, which typically involve estimating probability distributions over structural analyses given sentences (and possibly sentence meanings). I'll then argue that they are actually related in a deep way: both propose that children simplify their exploration of the space of possible grammars by learning in a smaller space that is related to the space of possible grammars, and the space proposed by P&P-based approaches is potentially a special case of the spaces used by data-driven techniques.

Generativism

First, notice that, in the opening, I wrote "Generative" with a capital 'G'. Strictly speaking, a "generative grammar" is simply a grammar that is itself finite, but (potentially) produces an infinite set of strings. For example, a grammar with the three rules:

  • N → Adj N
  • Adj → blue
  • N → cat

generates the infinite set containing the strings "blue cat", "blue blue cat", "blue blue blue cat", and so on. Generativism got its start by proposing that human languages are infinite but can, broadly speaking, be generated by this kind of limited grammar. Such a generative (little 'g') grammar is useful for encoding broad generalizations about linguistic regularities, such as the fact that English nouns can be preceded by multiple adjectives. Most work that tries to address syntax relies on, or is at least compatible with, generative grammar, including work outside of the Generativist (big 'G') tradition.
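To make this concrete, here is a minimal Python sketch of that toy grammar; the encoding and function names are my own, chosen just to show how a finite set of rules yields an unbounded set of strings:

    import random

    # The toy grammar above: N -> Adj N | cat, Adj -> blue.
    # The grammar itself is finite (three rules), but the set of strings it
    # can generate is infinite.
    RULES = {
        "N": [["Adj", "N"], ["cat"]],
        "Adj": [["blue"]],
    }

    def generate(symbol="N"):
        """Expand a symbol by choosing one of its rules at random (terminates with probability 1)."""
        if symbol not in RULES:               # a terminal word
            return [symbol]
        words = []
        for sym in random.choice(RULES[symbol]):
            words.extend(generate(sym))
        return words

    for _ in range(5):
        print(" ".join(generate("N")))        # e.g. "cat", "blue cat", "blue blue cat", ...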

The Generativist (big 'G') tradition takes this machinery literally: human languages are infinite sets of strings. People don't actually produce most of those strings. For example, people don't say sentences that are 100,000,000,000 words long, but, under the Generativist view, those long strings are part of the language. Formally, they argue that such strings are part of a speaker's "competence," but the kinds of strings a speaker says constitute their "performance." A speaker's performance reflects an effort to match competence given limited processing resources. Performance may differ from competence on shorter sentences as well. For example, English allows center-embedding:

  • The rat the cat hunted ate the cheese.

Where "the cat hunted" is embedded within "the rat ate the cheese" to identify which rat ate the cheese. However, English speakers have a very difficult time understanding:

  • The rat the cat the dog chased hunted ate the cheese.

Where "the dog chased" is embedded within "the cat hunted" to identify which cat was doing the hunting. So, English allows center-embedding: but how much? One potential solution is that unlimited center-embedding is part of the competence of English, but that speakers are only able to process one level in practice.

The explanatory power of a particular proposed grammar comes from its ability to describe the content of a language (subject to our ability to discern which strings are actually part of the competence). For Generativists, this has meant that a proposed grammar should successfully discriminate strings that are part of the infinite set of strings of a particular human language from strings that are not part of that infinite set (although they may be part of a different human language). (Other kinds of generative grammar take a different notion of what constitutes the "content of a language." For example, Combinatory Categorial Grammar has a very tight relationship between syntax and semantics, and focuses not on describing sentences as strings only but as strings and meanings.)

UG and P&P

Perhaps most famously, Generativists have gone beyond making explanatory generalizations about individual languages, and sought to make explanatory generalizations about all human languages. Just as they propose that the set of possible strings of a particular human language is characterized by a generative grammar, they propose that the set of possible human grammars is characterized by Universal Grammar (UG). These constraints on possible grammars are supposed to rule out languages that are conceivable but impossible as a human language. I won't get into the details of the constraints; they are often highly technical and fairly specific to grammar formalisms (and so when a grammar formalism is revised or abandoned, the details of UG constraints must change). Proposed UG constraints are also rarely evaluated at scale, so it's usually unclear (at least to me) if they actually rule out the kinds of strings they're supposed to.

One of the central motivations for UG has been the fact that children can learn a particular language, which Generativists take to involve identifying the right infinite set of strings. If a language is an infinite set of strings, then the task is difficult: a child only observes a finite number of strings, and must determine, on the basis of this finite set, which infinite set is correct. Since every finite set of strings is a subset of infinitely many infinite sets of strings, the finite set of strings a child observes will never bring down the set of potential infinite languages to one. This has been called "the logical problem of language acquisition." However, UG (supposedly) rules out huge numbers of infinite sets of strings, and the idea is that children manage to identify a single infinite set of strings on the basis of a finite set of strings (the utterances they hear) by using their innately-specified knowledge of UG to eliminate the other infinite sets that are consistent with the finite evidence they have.

This innate knowledge of UG is traditionally taken to be implemented in terms of innately-specified parameters that express broad typological divisions among the world's languages. For example, a child may try to decide whether her language typically puts main verbs in the middle of the sentence, like English, or somewhere else, like Japanese (which typically puts them at the end). This framework is called "Principles and Parameters," or P&P, and, as far as I've seen, the parameters that people actually propose are binary-valued. See Sakas & Fodor (2012) for a fairly comprehensive overview of this basic approach, along with computational simulations with artificial languages.

Dimensionality Reduction

So, what is the deep relation to NLP-style approaches that I promised to bring up? Let's first assess the UG and P&P account in terms of Marr's levels of analysis. The computational-level goal is to identify the single infinite set of strings that is consistent with both the finite set of strings a child can see and the innately-specified constraints of UG. The first of these constraints is relatively easy: an infinite set of strings is consistent with a finite set of strings if and only if the finite set is a subset of the infinite set (presumably with some mechanism to identify performance errors in the input). The second constraint is hard: we need to consider every generative grammar that generates an infinite set of strings that both includes the evidence and does not violate a UG constraint. Because two generative grammars may generate the same set of strings (such grammars are said to be "weakly equivalent"), finding that an infinite set is generated by a UG-forbidden grammar is not enough to exclude it: an infinite set of strings is ruled out only if every grammar that generates it violates UG. The search space is therefore enormous.

P&P essentially provides a representation that is suitable for exploring this space efficiently. It does this by constructing two auxiliary spaces. First, rather than searching in the space of languages directly, P&P borrows the space of Generative grammars originally devised for descriptive adequacy. This will be an extremely high-dimensional space with, e.g., one dimension for each possible generative rule. The value along a dimension is 1 if the corresponding rule is in the target grammar, and 0 if it is not. Each conceivable grammar is then a point in this space. If the number of possible Generative rules is finite, like the number of rules in a specific grammar (I'm not actually sure what Generativists think on this point), then this space will clearly be smaller than the infinite space of infinite languages, thus providing the infant with a finite space to explore.

The second auxiliary space constructed by P&P is the Parameter space, which informs the exploration of the Grammar space. This is supposed to be a space with much lower dimensionality, on the order of 10 to 100 dimensions, with each dimension indicating the setting of one of the Parameters. The Parameter space informs the exploration of the Grammar space because each Parameter setting rules out most of the Grammar space. Note that the Parameter space is useful for exploring the Grammar space because the infant is equipped with innate hard correlations between the Parameter space and the Grammar space: given a Parameter setting, some grammars (and their generated languages) become impossible, and the infant moreover knows which grammars become impossible.
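As a toy illustration (Python; the four candidate rules and the two Parameters are entirely made up, not anyone's actual proposal), here is how hard, innately-known constraints let a handful of binary Parameter settings collapse an exponentially large Grammar space:

    from itertools import product

    # A tiny hypothetical "Grammar space": every subset of four candidate rules.
    RULES = ["S -> NP VP", "S -> VP NP", "NP -> Det N", "NP -> N Det"]
    GRAMMAR_SPACE = [dict(zip(RULES, bits)) for bits in product([0, 1], repeat=len(RULES))]

    # Two hypothetical binary Parameters, expressed as hard constraints on grammars.
    def head_initial(grammar):   # subjects precede verb phrases
        return grammar["S -> NP VP"] == 1 and grammar["S -> VP NP"] == 0

    def det_first(grammar):      # determiners precede nouns
        return grammar["NP -> Det N"] == 1 and grammar["NP -> N Det"] == 0

    def consistent_grammars(settings):
        """Return the grammars still allowed by the current (hard) Parameter settings."""
        constraints = {"head_initial": head_initial, "det_first": det_first}
        survivors = GRAMMAR_SPACE
        for name, value in settings.items():
            survivors = [g for g in survivors if constraints[name](g) == value]
        return survivors

    print(len(GRAMMAR_SPACE))                                                    # 16 grammars
    print(len(consistent_grammars({"head_initial": True})))                      # 4 remain
    print(len(consistent_grammars({"head_initial": True, "det_first": True})))   # 1 remains

Each Parameter here is a deterministic, pre-specified function of the grammar, which is what I mean by "hard correlations": setting a Parameter immediately rules out a large region of the Grammar space, with no statistics involved.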

Algorithmically, learning in the Parameter space is supposed to proceed by way of a random walk prompted by "triggers." Specifically, if a child can analyze an incoming utterance under the current Parameter settings, the Parameters remain unchanged, but if the analysis fails, the child changes a Parameter. The details of this change vary from proposal to proposal. (It's interesting that this essentially looks like the Win-Stay Lose-Shift algorithm, except most accounts forbid changing a Parameter once it has been set. Win-Stay Lose-Shift is a kind of particle filtering, which is in turn a resource-limited approximation to Bayesian inference.) Thus, UG and P&P fundamentally propose that children succeed at language acquisition because they can explore innately-specified and built-for-purpose spaces that are low-dimensional compared to the space of infinite sets of strings.
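As a rough sketch (not any particular published learner), trigger-based Parameter setting might look something like the following in Python, where can_parse stands in for the child's parser under the current settings:

    import random

    def trigger_learn(utterances, n_params, can_parse, allow_resetting=False):
        """A minimal Win-Stay Lose-Shift style sketch of trigger-based learning."""
        params = [random.choice([0, 1]) for _ in range(n_params)]   # initial guesses
        already_set = set()                                         # Parameters fixed by a trigger
        for utterance in utterances:
            if can_parse(utterance, params):
                continue                                 # "win": stay with the current settings
            candidates = [i for i in range(n_params)
                          if allow_resetting or i not in already_set]
            if not candidates:
                continue                                 # every Parameter has already been set
            i = random.choice(candidates)                # "lose": shift one Parameter
            params[i] = 1 - params[i]
            already_set.add(i)
        return params

    # e.g. a single toy "verb-final" Parameter: parsing succeeds only when the
    # Parameter matches the position of the verb in the utterance.
    toy_data = [("final",)] * 20
    print(trigger_learn(toy_data, n_params=1,
                        can_parse=lambda u, p: (u[0] == "final") == bool(p[0])))   # [1]

Real proposals differ in exactly which Parameter gets flipped on a failed trigger and in whether a Parameter may be changed again once it has been set; the allow_resetting flag above is just a stand-in for that design choice.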

NLP-style grammar induction does fundamentally the same thing, but picks different kinds of spaces and correlations. Specifically, remember that the DMV that I mentioned in the previous post generates each sentence together with an unobserved parse tree. It computes the probability of the parse tree as a product of the probability of each arc in the tree (along with some other stuff). It computes the probability of each arc by paying attention to the head word, the dependent word, the arc direction, and the arc valence (whether or not the arc is the first arc of that head word in that direction). This defines a high-dimensional grammar space, analogous to the grammar space of UG and P&P. However, rather than explicitly positing a built-for-purpose and innate Parameter space to reduce the dimensionality of the space to be explored, the DMV relies on statistical correlations.
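To make that factorization concrete, here is a schematic Python sketch of the arc-probability computation; the data structures and the placeholder probabilities are mine, and I'm ignoring the stop/continue decisions and root attachment that the full DMV also includes:

    from collections import namedtuple

    Arc = namedtuple("Arc", ["head", "dependent", "direction", "is_first"])

    def tree_probability(arcs, choose_prob):
        """Multiply per-arc probabilities, each conditioned on head word, direction, and valence."""
        prob = 1.0
        for arc in arcs:
            context = (arc.head, arc.direction, arc.is_first)
            prob *= choose_prob[context][arc.dependent]
        return prob

    # e.g. "the cat slept": slept -> cat (left, first arc) and cat -> the (left, first arc).
    # These numbers are placeholders, not trained values.
    choose_prob = {
        ("slept", "left", True): {"cat": 0.4},
        ("cat", "left", True): {"the": 0.7},
    }
    arcs = [Arc("slept", "cat", "left", True), Arc("cat", "the", "left", True)]
    print(tree_probability(arcs, choose_prob))   # ≈ 0.28

The conditioning context (head word, direction, valence) is what encodes the assumed correlational form: arcs that share a context share a distribution, and that sharing is the kind of statistical correlation I'm describing here.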

The figure below illustrates the basic idea. We see a three-dimensional plot with several points that have a strong linear correlation. Even though there are three dimensions, the location of each point can mostly be specified by indicating its location along the best-fit line (y=x for z = 0.5), and completely specified by also indicating its distance from the best-fit line along the best-fit plane (z = 0.5). The intrinsic dimensionality of this surface is thus between 1 and 2, even though it is situated in a 3-dimensional space. It is not more than two because the location of each point can be exactly specified with two numbers (location along best-fit line, distance from the best-fit line along the best-fit plane), and it is close to one because the location of each point can be almost exactly specified with one number (location along the best-fit line). A surface of arbitrary dimensionality is in general called a manifold.

[Figure: samples from a 1-2 dimensional manifold in a 3-dimensional space]

In this example, assuming correlational forms in terms of best-fit lines and best-fit planes allowed us to reduce the dimensionality from three to between one and two. The DMV essentially builds a space of projective dependency trees, situates utterances in that space, and assumes the utterances are correlated in terms of arc heads, directions, and valences. These correlations allow us to deal with a lower-dimensional grammatical manifold in the same way. This kind of space is hard to picture because the correlated objects are discrete, their locations in the space are probabilistic rather than hard, and the functional form is defined in terms of arcs of projective trees, but the same basic principle applies.
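For the geometric picture, here is a small Python/NumPy sketch that generates points like those in the figure (noisy samples along y = x in the plane z = 0.5) and uses PCA, via an SVD, to confirm that one direction accounts for nearly all of the variance:

    import numpy as np

    rng = np.random.default_rng(0)
    t = rng.uniform(-1, 1, size=200)                 # position along the best-fit line
    noise = 0.05 * rng.standard_normal(size=200)     # small spread within the plane z = 0.5
    points = np.column_stack([t + noise, t - noise, np.full(200, 0.5)])

    # PCA: proportion of variance explained by each principal direction.
    centered = points - points.mean(axis=0)
    _, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
    explained = singular_values ** 2 / np.sum(singular_values ** 2)
    print(np.round(explained, 3))    # roughly [0.99, 0.01, 0.]: about one effective dimension

PCA only recovers linear manifolds like the one in the figure, of course; the DMV's "manifold" is defined by its probabilistic, tree-structured likelihood rather than by best-fit lines and planes, which is exactly the caveat above about this kind of space being hard to picture.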

Thus, just like UG and P&P-based approaches, NLP-style approaches are fundamentally concerned with reducing the dimensionality of a search space. They differ in that UG and P&P-based approaches propose an explicit and innate low-dimensional Parameter space with innately-specified hard correlations between the Parameter space and the Grammar space, while the DMV proposes a latent and learned low-dimensional space with only the correlational form innately specified. The assumed correlational form can be adjusted by changing the likelihood function, and, indeed, people have explored adjusting the likelihood function to enable correlations in terms of multi-arc subtrees and different amounts of structural and lexical information. The final set of models in my dissertation (submitted, but not yet defended) came down to enabling correlations that involve the spoken duration of the head and dependent word.

Implications

So far I've argued that UG/P&P-based approaches differ from more data-driven approaches only in that UG/P&P approaches posit that the reduced search space is 1) innate and 2) related to the grammar space by way of innate, hard correlations. This in and of itself doesn't mean that UG/P&P is "right" or "wrong" as an account of what happens in children's heads during language acquisition. However, it does have certain methodological implications.

The crucial point is to notice that the UG/P&P-based approach is a special case of NLP-style data-driven approaches. If the likelihood function of a data-driven approach allows the right correlations, then that data-driven approach is capable of representing the Parameter space of P&P as a low-dimensional manifold within the grammar space. If a proposed Parameter space actually does explain the observed distributional regularities well, then the data-driven approach will recover that Parameter space; and if the proposed Parameter space explains the observed regularities much better than other potential latent variables, then the data-driven approach will probably recover only the proposed Parameter space.

We can see an example of this in Kwiatkowski et al (2012) (described in more detail in Kwiatkowski's PhD dissertation), who built a (data-driven) online unsupervised learner for Combinatory Categorial Grammar (CCG). Two of the most often proposed Parameters are "head-direction" and "V2." These combine to describe the canonical ordering of Subjects, Verbs, and Objects in a language. English, for example, is typically Subject-Verb-Object: SVO. One of the interesting results from Kwiatkowski et al is that their system rapidly determined that English is SVO, despite having no clear prior bias towards SVO, no explicit switch related to canonical ordering, and no advance knowledge about which words are verbs and which are nouns. Rather, they analyzed what the learner believed about likely sentences, and found that SVO sentences dominated fairly early in learning.

Hard-coded latent spaces make different empirical predictions from learned latent spaces in terms of the rate of acquisition. If the reduced space is not hard-coded, then children can exploit correlations in the latent space only to the extent that they are supported by the data. First, this means that a hard-coded latent space potentially enables faster learning (although there is a tradeoff, because an elaborate latent space, even if it is hard-coded and correct, will still require substantial data before being useful). Second, if the latent space is learned, the developmental trajectory of children should be largely determined by when different parts of the latent space become evident. Both of these considerations motivate the use of data-driven, NLP-style structured probabilistic models: we cannot know if children exploit regularities before they are evident without measuring when they are evident, and structured probabilistic models measure to what extent latent variables have been identified by evidence.

Working with learned latent spaces also makes it easier to explore potentially useful correlations not only within syntactic systems but between syntax and other systems. Specifically, a lot of people have explored the possibility that children use correlations between syntax and semantics, or syntax and prosody, to learn one or both systems (that Kwiatkowski et al paper actually learns syntax and semantics simultaneously). These accounts, called "bootstrapping" accounts, fundamentally exploit correlations between systems in exactly the same way, and structured probabilistic models provide a unified framework for exploring how correlations within syntax interact with correlations between syntax and other linguistic systems.

Conclusion

Thus, UG/P&P approaches are about dimensionality reduction, just as data-driven approaches are, except UG/P&P approaches assume that the latent space must be hard-coded and related to grammar space by way of hard innate correlations. I've argued that it makes more sense methodologically to explore data-driven models, because they provide a more solid foundation for determining whether or not grammatical forms are easy to identify.

I'll close with some reflection on the goals of each style of grammar induction. UG/P&P, despite its explicit emphasis on "grammar," takes as its goal not the induction of grammatical knowledge from evidence but the induction of an infinite set of strings from a finite set of strings. The functional role of the grammar formalism and associated machinery is just to limit the infinite sets of strings a learner considers. Data-driven approaches take the grammar itself more seriously. For example, Kwiatkowski et al (2012) use the grammar as a mapping from the linear order of words to the semantic composition of those words, and assess the grammar not only in terms of the word strings it produces but also in terms of the accuracy of the meanings it produces. In my own work (e.g. Pate & Goldwater (2013), and a more qualitative and theoretical article in progress), I evaluate the accuracy of the parses against hand-annotated parses, and examine the posterior distribution over parses to identify what kinds of evidence different aspects of the input provide. Other parts of the field of NLP take the syntax seriously in other ways, such as for translating between languages and identifying out-of-vocabulary words.

By emphasizing the evidence for grammatical structures themselves, rather than the evidence for infinite sets of strings, data-driven approaches avoid the "logical problem of language acquisition." We can never see an infinite set of strings, but we can encounter recurring grammatical structures many times, and, on the basis of structures we are confident in, make inferences about new grammatical structures we haven't seen.

Why is there this neglect of grammatical structures themselves in the Generative tradition? I think at least part of it has to do with the Generativists', or at least Chomsky's, view of language as a kind of Platonic ideal rather than an adaptation for communication. Evaluating grammatical structures in terms of their adequacy for specifying meanings tacitly assumes that the functional role of language is communication. This may seem obvious to the layperson, but it's apparently controversial among more traditional Generativists. I was at the 2011 Cog Sci conference when Chomsky spoke, and was astonished to hear him deny that language was an evolutionary adaptation for communication. I don't want to put words in his mouth, but he talked about language as some kind of fundamental ability to manipulate and combine symbols, and as having more to do with the human capacity for thought and logic. Its suitability for communication is, according to him, merely a quite convenient side-effect.

I think this view of language is too limiting. I like, in principle, a view of grammar as "a talker's knowledge about language," and see no reason to restrict ourselves to considering only generated strings. It can be useful to focus on only part of language, such as the linear order of words, or linear order plus meaning, or morphology, or whatever, in a particular investigation, but I don't like ruling out certain interactions a priori.

I do think language is about communication. Maybe I'll talk about that more later (we'll get to discuss information theory! yay!), but for now I've gone on enough. Thanks!
