I just spent a week with neuroscience and vision people, and was pointed to a pair of relatively recent neuroscience papers (by Rodrigo Perin, Henry Markram, and Thomas K Berger) with potentially interesting computational-level implications that I want to think through. First, a disclaimer: I'm not a neuroscientist, and I can't personally evaluate the biological methodology. But, assuming their basic measurements and interpretations are correct, I think the work is really cool.
I've got marking to do along with a dissertation chapter to write, so naturally it's about time for another blogpost! A+ discipline. I've been wanting to write a response to an interview with Noam Chomsky provocatively titled "Where Artificial Intelligence Went Wrong." Go ahead and look through the interview; I'll put my reaction below the fold.
Hey all, I realized I promised pictures but the last post had zero (0) pictures. So in this post, I'll sort of recap the previous post but put in some pictures.
Computational cognitive science and artificial intelligence go through periods when different kinds of models are fashionable. Early on, it was all symbolic processing with rules and predicate logic; then people got excited about connectionism; then everyone started putting weights or probabilities on their rules. The current big fad in cognitive science is Bayesian modeling. I like this fad, so I'm going to talk about it.
In this post about computational modeling, I promised to talk about how a computational model could improve our understanding of cognition without asserting that people actually did what the computational model did. The basic point is that we can use probabilistic modeling to explore the shape of our data, which in turn constrains what kinds of strategies a human learner could successfully use. In this post, I'll discuss how Bayesian modeling allows us to explore the statistical structure of data and characterize what any (decent) probabilistic algorithm would try to do with that data.
One of the major themes of this blog is going to be computation and the mind, so I thought I'd write up a couple posts talking about what computation has to say about cognition. There are really two claims here. The first claim is that cognition is related to computation; perhaps some kinds of cognitive behavior are approximations to particular computer programs, or solve problems that are best posed in computational terms. The second claim is that cognition is computation, or at least a good approximation to it. This is clearly a much stronger claim.
I might talk about the second claim at some point, but this post is more about the first (if you want to explore the second claim, have a look at John Searle's famous Chinese Room argument; I disagree with the argument, but it's a nice point of entry to the discussion). What could computation have to say about cognition? How could computational models say anything about cognition without asserting the second claim?
PZ Myers posted yesterday about problems with this idea of "brain uploading". Basically, "brain uploading" is a theoretical technology wherein people would be able to obtain immortality and/or superhuman intelligence by replicating the structure of their brain on a computer. Proponents say it should work because they believe the "computational theory of mind," which states that minds just are computational activity. "Brain uploading" is supposed to be implemented by cutting a brain into very thin slices, scanning those slices with some kind of high-resolution microscope, and then reconstructing the structure of the brain in a software program that emulates how neurons work. If we take the computational theory of mind for granted, and grant that the brain is what runs the computer program of the mind, this should be sufficient to resurrect an exact copy of the mind that previously ran on the brain.