I just saw this article by Mike Konczal about the politically influential 2010 result that economies with debt-to-GDP ratios over 90% perform very poorly. The article covers one dubious methodological choice and two outright methodological errors, and shows how a re-analysis of the same data with sounder methods finds that debt-to-GDP ratio doesn't really seem to affect economic performance. I'm not going to talk about the substance of the papers; as I said in the first post, I don't want this blog to be political, and I'm not an economist anyway. I do want to talk about a potential source of their errors, however.
I've recently been reading a lot of papers from the more traditional Generativist literature on computational models for language acquisition, so I'm going to write a post discussing how these traditional approaches relate to the kind of natural language processing (NLP)-style, data-driven, structured probabilistic models I work with (and hinted at in the previous post). Specifically, I'm going to outline the Universal Grammar and Principles & Parameters (P&P) approach to language acquisition. We will see that it looks pretty different from the approaches adopted for grammar induction in natural language processing, which typically involve estimating probability distributions over structural analyses given sentences (and possibly sentence meanings). I'll then argue that the two are actually related in a deep way: both propose that children simplify their search through the space of possible grammars by learning in a smaller space that is related to it, and the space proposed by P&P approaches is potentially a special case of the spaces used by data-driven techniques.
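To preview the connection with a toy example of my own (the grammars and probabilities here are made up for illustration): a single binary "head-direction" parameter defines two candidate grammars, and a probabilistic learner scores them against observed word orders. That is the data-driven move applied to a P&P-sized hypothesis space.

```python
# Toy sketch: Bayesian update over one binary parameter. Each hypothetical
# grammar assigns a probability to verb-object (VO) vs. object-verb (OV)
# orders; the specific numbers are invented for illustration.
grammars = {
    "head-initial": {"VO": 0.9, "OV": 0.1},
    "head-final":   {"VO": 0.1, "OV": 0.9},
}
prior = {"head-initial": 0.5, "head-final": 0.5}

def posterior(observations):
    """P(grammar | observations) for a sequence of 'VO'/'OV' tokens."""
    scores = {}
    for g, order_probs in grammars.items():
        score = prior[g]
        for order in observations:
            score *= order_probs[order]  # likelihood of each observation
        scores[g] = score
    total = sum(scores.values())
    return {g: s / total for g, s in scores.items()}

# A mostly-VO sample quickly favors the head-initial setting.
post = posterior(["VO", "VO", "OV", "VO"])
```

The point of the sketch is only that "set a parameter" and "update a distribution over hypotheses" can describe the same learning problem at different grain sizes.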
I'm just back to Scotland after two weeks in the US, and was getting on a bus into the city center. I stopped and asked the bus driver the ticket price, and heard "
I've got marking to do along with a dissertation chapter to write, so naturally it's about time for another blog post! A+ discipline. I've been wanting to write a response to an interview with Noam Chomsky provocatively titled "Where Artificial Intelligence Went Wrong". Go ahead and look through the interview; I'll put my reaction below the fold.
Hey all, I realized I promised pictures but the last post had zero (0) pictures. So in this post, I'll sort of recap the previous post but put in some pictures.
Computational cognitive science and artificial intelligence go through periods when different kinds of models are fashionable. Early on, it was all symbolic processing with rules and predicate logic; then people got excited about connectionism; then everyone started putting weights or probabilities on their rules. The current big fad in cognitive science is Bayesian modeling. I like this fad, so I'm going to talk about it.
In my earlier post about computational modeling, I promised to talk about how a computational model could improve our understanding of cognition without asserting that people actually do what the model does. The basic point is that we can use probabilistic modeling to explore the shape of our data, which in turn constrains what kinds of strategies a human learner could successfully use. In this post, I'll discuss how Bayesian modeling allows us to explore the statistical structure of data and characterize what any (decent) probabilistic algorithm would try to do with that data.
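Here's a toy illustration of that idea (my own, not from the work I'll discuss): given some coin-flip data, Bayes' rule fixes a posterior over the coin's bias, and that posterior characterizes what any well-calibrated probabilistic learner should converge toward, whatever algorithm it uses to get there.

```python
def posterior_over_bias(heads, tails, grid_size=101):
    """Posterior P(bias | data) on a uniform grid, with a flat prior."""
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    # Likelihood of the observed flips under each candidate bias.
    likelihood = [p**heads * (1 - p)**tails for p in grid]
    total = sum(likelihood)
    return grid, [l / total for l in likelihood]

grid, post = posterior_over_bias(heads=7, tails=3)
# The posterior peaks at the empirical proportion of heads.
best = grid[max(range(len(post)), key=lambda i: post[i])]
print(best)  # 0.7
```

Nothing here says people enumerate a grid of hypotheses; the posterior just describes the structure in the data that any successful strategy has to respect.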
I haven't forgotten about this blog! In the last post, I promised to describe basic computational modelling approaches and what they can tell us about cognition. I've drafted that post a few times, but haven't been satisfied with it. So I'm going to adopt a different strategy and spend more than one post on each computational modelling approach. I want to present background on each approach that is technical enough to understand what's going on, but not so technical as to dominate readers' time. This involves drawing lots of pictures, which is nice but slow. Then I'll talk about a specific application of each approach. So the posts are coming! And currently half-baked (or one-fifth-baked?) on my hard drive :p
One of the major themes of this blog is going to be computation and the mind, so I thought I'd write up a couple of posts talking about what computation has to say about cognition. There are really two claims here. The first claim is that cognition is related to computation; perhaps some kinds of cognitive behavior are approximations to particular computer programs, or solve problems that are best posed in computational terms. The second claim is that cognition is computation, or at least a good approximation to it. This is clearly a much stronger claim.
I might talk about the second claim at some point, but this post is more about the first (if you want to explore the second claim, have a look at John Searle's famous Chinese Room argument; I disagree with the argument, but it's a nice point of entry to the discussion). What could computation have to say about cognition? How could computational models say anything about cognition without asserting the second claim?
My friend posted about his work set-up a few months ago, which sounds like fun.
My main computer is a 15" laptop made by a local British shop. It has an aspect ratio of 16:10 (1680x1050), and runs 64-bit Arch Linux. Most of my work happens on this machine. I run Fluxbox, a lightweight window manager with rudimentary tiling support.
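By "rudimentary tiling support" I mostly mean Fluxbox's built-in ArrangeWindows command, which tiles the windows on the current workspace. Something like this in ~/.fluxbox/keys does the trick (the key choice is just an example):

```
# ~/.fluxbox/keys -- tile all windows on the current workspace
Mod4 a :ArrangeWindows
```

It's no xmonad, but it covers the common case of splitting a couple of terminals across the screen.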
PZ Myers posted yesterday about problems with this idea of "brain uploading". Basically, "brain uploading" is a hypothetical technology wherein people would obtain immortality and/or superhuman intelligence by replicating the structure of their brain on a computer. Proponents say it should work because they believe the "computational theory of mind," which states that minds just are computational activity. "Brain uploading" is supposed to be implemented by cutting a brain into very thin slices, scanning those slices with some kind of high-resolution microscope, and then reconstructing the structure of the brain in a software program that emulates how neurons work. If we take the computational theory of mind for granted, and note that the brain is what runs the computer program of the mind, this should be sufficient to resurrect an exact copy of the mind that the brain had previously been running.
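To give a flavor of what "a software program that emulates how neurons work" could mean at the very crudest level, here is a leaky integrate-and-fire neuron, about the simplest standard neuron model. This is just my illustration; actual emulation proposals would need vastly more biophysical detail, which is part of what Myers is complaining about.

```python
def simulate_lif(input_current, steps=200, dt=1.0,
                 tau=20.0, threshold=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane voltage leaks toward
    rest while integrating input; crossing threshold emits a spike and
    resets the voltage."""
    v = 0.0
    spikes = []
    for t in range(steps):
        # Voltage decays with time constant tau and integrates the input.
        v += dt * (-v / tau + input_current)
        if v >= threshold:
            spikes.append(t)
            v = v_reset  # reset after a spike
    return spikes

# A constant supra-threshold input makes the model neuron fire periodically.
spikes = simulate_lif(input_current=0.1)
```

A point model like this throws away dendritic geometry, neuromodulators, glia, and everything else a slice-and-scan procedure would have to capture, which is exactly where the "sufficient to resurrect a mind" step gets dubious.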