Neural legos

I just spent a week with neuroscience and vision people, and was pointed to an interesting pair of relatively recent neuroscience papers (by Rodrigo Perin, Henry Markram, and Thomas K. Berger) with potentially important computational-level implications that I want to think through. First, a disclaimer: I'm not a neuroscientist, and I can't personally evaluate the biological methodology. But, assuming their basic measurements and interpretations are correct, I think the work is really cool.

Continue reading

Principles and Parameters and Manifolds, oh my!

I've recently been reading a lot of papers from the more traditional Generativist literature on computational models of language acquisition, so I'm going to write a post discussing how these traditional approaches relate to the kind of natural language processing (NLP)-style, data-driven, structured probabilistic models I work with (and hinted at in the previous post). Specifically, I'm going to outline the Universal Grammar and Principles & Parameters-based approach to language acquisition. We will see that it looks pretty different from the approaches adopted for grammar induction in natural language processing, which typically involve estimating probability distributions over structural analyses given sentences (and possibly sentence meanings). I'll then argue that they are actually related in a deep way: both propose that children simplify their exploration of the space of possible grammars by learning in a smaller space that is related to the space of possible grammars, and the space proposed by P&P-based approaches is potentially a special case of the spaces used by data-driven techniques.
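To make the shared idea concrete, here is a toy sketch of my own (not from any of the papers discussed): a learner that maintains a posterior over a small set of parameter settings, where each setting fixes a distribution over surface word orders, is doing a discrete, stripped-down version of the probability estimation that data-driven grammar induction performs. The parameter names, word-order patterns, and probabilities below are invented purely for illustration.

```python
# Toy illustration: Bayesian inference over a small "parameter" space,
# where each parameter setting picks out a (hypothetical) grammar, i.e. a
# distribution over simple word-order patterns. All numbers are made up.

GRAMMARS = {
    "head-initial": {"V O": 0.8, "O V": 0.2},
    "head-final":   {"V O": 0.1, "O V": 0.9},
}

PRIOR = {"head-initial": 0.5, "head-final": 0.5}  # uniform prior over settings


def posterior(sentences):
    """Posterior over parameter settings given observed sentences (Bayes' rule)."""
    unnorm = {}
    for setting, dist in GRAMMARS.items():
        likelihood = 1.0
        for s in sentences:
            likelihood *= dist[s]
        unnorm[setting] = PRIOR[setting] * likelihood
    z = sum(unnorm.values())
    return {setting: p / z for setting, p in unnorm.items()}


if __name__ == "__main__":
    data = ["O V", "O V", "V O", "O V"]  # a small, mostly verb-final sample
    for setting, p in posterior(data).items():
        print(f"P({setting} | data) = {p:.3f}")
```

The point of the sketch is only that the learner searches a two-point parameter space rather than the full space of grammars; richer data-driven models replace the two hand-built settings with continuous parameters estimated from the sentences themselves.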

Continue reading

Why are so many (recent) cognitive models Bayesian?

Computational cognitive science and artificial intelligence go through periods when different kinds of models are fashionable. Early on, it was all symbolic processing with rules and predicate logic, then people got excited about connectionism, then everyone started putting weights or probabilities on their rules. The current big fad in Cognitive Science is Bayesian modeling. I like this fad, so I'm going to talk about it.

In an earlier post about computational modeling, I promised to talk about how a computational model could improve our understanding of cognition without asserting that people actually do what the computational model does. The basic point is that we can use probabilistic modeling to explore the shape of our data, which in turn constrains what kinds of strategies a human learner could successfully use. In this post, I'll discuss how Bayesian modeling allows us to explore the statistical structure of data and characterize what any (decent) probabilistic algorithm would try to do with that data.
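As a cartoonish illustration of what I mean (my own example, not one from the post): suppose the data are coin flips and the hypothesis space is the coin's bias. With a conjugate Beta prior the posterior has a closed form, and its spread is fixed by the data, regardless of which algorithm a learner uses to approximate it. The sketch below assumes a Beta(1, 1) prior and made-up counts.

```python
# Minimal "ideal observer" sketch: with a Beta(a, b) prior on a coin's bias
# theta and binomial data, the posterior is Beta(a + heads, b + tails).
# Bayes' rule fixes this posterior; any decent probabilistic algorithm is,
# in effect, trying to approximate it.

def beta_posterior_stats(heads, tails, a=1.0, b=1.0):
    """Posterior mean and variance of theta under a Beta(a, b) prior."""
    a_post, b_post = a + heads, b + tails
    mean = a_post / (a_post + b_post)
    var = (a_post * b_post) / ((a_post + b_post) ** 2 * (a_post + b_post + 1))
    return mean, var


if __name__ == "__main__":
    # Same proportion of heads, more and more data: the data increasingly
    # constrain what any learner can reasonably conclude about theta.
    for n in (10, 100, 1000):
        mean, var = beta_posterior_stats(heads=0.7 * n, tails=0.3 * n)
        print(f"n={n:4d}  posterior mean={mean:.3f}  posterior sd={var ** 0.5:.4f}")
```

With an uninformative prior the posterior mean is just a smoothed relative frequency, which is why this kind of analysis characterizes the target of inference rather than any particular learning procedure a person might use.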

Continue reading