One of the major themes of this blog is going to be computation and the mind, so I thought I'd write up a couple of posts talking about what computation has to say about cognition. There are really two claims here. The first claim is that cognition is related to computation; perhaps some kinds of cognitive behavior are approximations to particular computer programs, or solve problems that are best posed in computational terms. The second claim is that cognition **is** computation, or at least a good approximation to it. This is clearly a much stronger claim.

I might talk about the second claim at some point, but this post is more about the first (if you want to explore the second claim, have a look at John Searle's famous Chinese Room argument; I disagree with the argument, but it's a nice point of entry to the discussion). What could computation have to say about cognition? How could computational models say anything about cognition without asserting the second claim?

To make this clear, this post will introduce David Marr's levels of analysis for information processing systems. A second post, to be posted (and, um, written) later this week will provide a brief survey of computational modelling approaches to identify the kinds of things each kind of model can teach us. I was going to include both in this post, but then I saw how long just the levels of analysis bit was.

**David Marr's Levels of Analysis**:

David Marr provided a useful way of thinking about cognition when he introduced his levels of analysis. He pointed out that complex systems are usually examined at different levels: if you're studying the physics of a gas, for example, you can look at how individual molecules or very small groups interact, or you can look at the behavior of large groups of molecules in terms of temperature and pressure.

Marr pointed out that most, if not all, cognitive behavior involves processing information, and proposed that information processing systems can usually be decomposed into three levels of analysis:

- The computational level: What does the system do? What is the problem it's trying to solve?
- The algorithmic/representational level: What series of steps does the system take to solve that problem? What do the objects it manipulates throughout those steps look like?
- The implementational level: How is the system actually implemented in hardware?

Here's a possible point of confusion: Marr calls one of his levels "computational," but many kinds of computational models actually address one of the other two levels. This will be made more clear in the survey of modelling approaches in part 2.

Marr uses a cash register as an example of an information processing system. Suppose we do not know how cash registers work: we just see that they are presented with products, and produce a final bill. Our goal is to improve our understanding of cash registers. At the computational level, we know that the problem being solved is computing the price of the final bill. There are several constraints: if an item is put into the order, the price should increase; if the item is taken out, the price should decrease as if the item were never put into the order; and so on. Considering these constraints, we conclude that a reasonable theory at the computational level is addition.

However, there are many algorithms that can be used to compute addition. One possibility is to just always add the price of an item to the total bill. We could also partition the order into items of similar price, and then increment the bill by the price of a partition multiplied by the size of the partition: if a customer has three potatoes that cost 30p each, we could multiply the price of the partition (30p) by the size of the partition (3), and then increment the bill by that quantity. Another possibility would be to conclude that our computational theory is too hard to compute exactly, and take some reasonable approximation. We could avoid having to deal with fractions of pounds, for example, by rounding the price of each item to the nearest integer pound, and summing those integers.
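The three algorithmic-level theories above can be sketched in code. This is a toy sketch, not anything Marr proposed: the pence-based prices and the example `order` are invented for illustration.

```python
# Three algorithms (and one approximation) for the same computational-level
# theory: addition. Prices are in pence to avoid floating-point issues.

from collections import Counter

order = [30, 30, 30, 125, 99]  # three 30p potatoes, plus two other items

def sum_items(prices):
    """Add each item's price to the running total, one at a time."""
    total = 0
    for p in prices:
        total += p
    return total

def sum_partitions(prices):
    """Partition the order by price, then add price * count per partition."""
    return sum(price * count for price, count in Counter(prices).items())

def sum_rounded(prices):
    """Approximation: round each price to the nearest whole pound (100p)."""
    return sum(round(p / 100) * 100 for p in prices)

print(sum_items(order))       # 314
print(sum_partitions(order))  # 314
print(sum_rounded(order))     # 200: 30p rounds to 0p, 125p and 99p to 100p
```

The first two functions compute exactly the same answer by different series of steps; the third computes something deliberately cheaper and only approximately right. All three are distinct algorithmic-level theories of the same computational-level theory.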

Finally, there are many ways to physically implement these algorithms. Do we manipulate an abacus according to a set of rules? Do we perform the addition in transistors using a binary representation?

It's important to see that we can seek to make a claim about one level of analysis without making claims about the others. Perhaps we want to see how good addition is as a theory for cash registers, but, with our existing technology, we can't figure out how to add fractional pounds. We might adopt the rounding approximation discussed above **without** claiming that cash registers also perform this approximation. In this case, we could compare the behavior of our approximation when processing supermarket orders with real cash registers, and attribute some of the mismatch to the fact that our approximation might be a less accurate implementation of addition than whatever it is cash registers do.
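The comparison described above could be sketched like this, with exact addition standing in for the real register's (unknown) behavior. This is a hypothetical simulation, not an actual experiment; the order sizes and price ranges are made up.

```python
# Simulate many supermarket orders and measure how far our rounding
# approximation drifts from exact addition (the stand-in for the "real"
# cash register). Prices are in pence.

import random

random.seed(0)

def exact_total(prices):
    """The 'real' register: exact addition."""
    return sum(prices)

def rounded_total(prices):
    """Our approximation: round each price to the nearest pound first."""
    return sum(round(p / 100) * 100 for p in prices)

errors = []
for _ in range(1000):
    order = [random.randint(1, 500) for _ in range(random.randint(1, 20))]
    errors.append(abs(exact_total(order) - rounded_total(order)))

print("max error:", max(errors), "pence")
print("mean error:", sum(errors) / len(errors), "pence")
```

If real registers disagreed with our predictions by about this much, we could attribute that part of the mismatch to our approximation rather than to the computational-level theory itself.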

Alternatively, we could examine the internals of a cash register with a microscope, and notice that it has an interesting board with lots of lines and gadgets hanging off. With lots of experimentation, we might discover that there are regularities in how electricity flows through, and establish that the lines and gadgets are logic gates. This is an implementation level analysis: it describes how physical components of the cash register do something that is useful in algorithms.
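As a toy illustration of an implementation-level story, here is binary addition wired up from logic gates. The "gates" are just Python functions here; on the register's board they would be transistor circuits.

```python
# Addition built from logic gates: a ripple-carry adder made of full adders.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    """Add two bits plus a carry bit, producing a sum bit and a carry-out."""
    partial = XOR(a, b)
    total = XOR(partial, carry_in)
    carry_out = OR(AND(a, b), AND(partial, carry_in))
    return total, carry_out

def add(x, y, width=16):
    """Chain full adders from the lowest bit upward (ripple-carry)."""
    result, carry = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(add(90, 224))  # 314
```

The same algorithm (ripple-carry addition) could be implemented in relays, vacuum tubes, or an abacus with rules for carrying beads; the implementation level asks which physical story is actually true of the machine in front of us.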

The point here, of course, is that humans are cash registers. We can measure how they respond to different stimuli (grocery store orders), and examine their biology, and propose models that try to improve our understanding.

Finally, it is important to point out that some (or all? probably not) cognitive behavior might not decompose into these levels. Perhaps, for example, humans sometimes follow a regular series of steps (an algorithm), but that algorithm doesn't do anything particularly useful. In this case, there is no clear computational level analysis. However, it is hard to show that some behavior does not decompose; doing so amounts to establishing a negative, which is much harder than establishing a positive.

Ok, in part 2, equipped with this approach to viewing cognition, we will examine a few different kinds of computational models and see how they bear on the various levels of analysis.
