On "Brain Uploading"

PZ Myers posted yesterday about problems with this idea of "brain uploading". Basically, "brain uploading" is a theoretical technology by which people could obtain immortality and/or superhuman intelligence by replicating the structure of their brain on a computer. Proponents say it should work because they believe the "computational theory of mind," which states that minds just are computational activity. "Brain uploading" is supposed to be implemented by cutting a brain into very thin slices, scanning those slices with some kind of high-resolution microscope, and then reconstructing the structure of the brain in a software program that emulates how neurons work. If the computational theory of mind is granted, and the brain is what runs the computer program of the mind, then this should be sufficient to resurrect an exact copy of the mind that the brain had previously been running.

Since Myers is a biologist, he criticized the putative technology from a biological perspective. He argues that it's much harder to actually preserve brains in a life-like state than advocates of "brain uploading" usually appreciate, and so on. I'm going to talk about problems with this from the perspective of computational cognitive science, because that's my area and I mostly just want an excuse to get started talking about it.

Myers is responding to this manuscript called Whole Brain Emulation: A Roadmap. This document motivates "brain uploading" as a way to get a computer program that runs a mind without having to understand how the brain manages to run minds: all we need to do is figure out how to emulate neurons and synapses (which may be easier than solving strong AI directly), then hook them up in the same way as a real brain. It appeals to the "computational theory of mind" as evidence that this is sufficient to make a digital copy of a mind, and proposes that a mind could be easily enhanced by adding extra processors to the hardware, and so on.

However, while it appeals to the computational theory of mind, the manuscript misses the most interesting insight of that view. The idea is that the brain works by computing various quantities (such as probability distributions over sentence meanings), and that the physical implementation is just a set of procedures for obtaining those quantities. The brain is a biological organ, and accordingly is the result of eons of trial and error in the service of the organism's fitness. Almost all of the details we would be trying to preserve would either be related to things largely irrelevant to the mind (such as the regulatory hormones of the pituitary gland) or biological hacks of varying accuracy for estimating the relevant quantities. If the computational theory of mind is true, we can implement a mind by implementing accurate estimators of those quantities, regardless of how neural tissue arrives at them.
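To make this "same quantity, different procedure" point concrete, here is a toy sketch (my own illustration, not from the manuscript): two estimators of the same probability, one a clean enumeration and one a noisy sampling procedure standing in for a biological hack. Under the computational theory of mind, either would do, because what matters is the quantity computed.

```python
import random

def exact_estimator():
    """Direct enumeration: P(sum of two fair dice > 8).
    Analogous to a clean, direct algorithmic implementation."""
    outcomes = [(a, b) for a in range(1, 7) for b in range(1, 7)]
    return sum(1 for a, b in outcomes if a + b > 8) / len(outcomes)

def sampling_estimator(n=200_000, seed=0):
    """Monte Carlo sampling: a very different, noisy procedure that
    approximates the same quantity -- the stand-in for a biological hack."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.randint(1, 6) + rng.randint(1, 6) > 8)
    return hits / n

exact = exact_estimator()      # 10/36, about 0.278
approx = sampling_estimator()
# Different implementations, (approximately) the same quantity.
assert abs(exact - approx) < 0.01
```

The two procedures share nothing at the implementation level, yet any downstream computation that consumes the estimate can't tell them apart beyond sampling noise.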

One could argue that you don't "really" capture the mind unless you manage to appreciate the extent to which its estimates are imprecise. However, the authors of the manuscript are perfectly happy with increasing the capacity of a mind by increasing its hardware. Why not also improve its algorithms?

One reason you might still want to upload a brain has to do with preserving the particular experiences and knowledge of a particular mind. A proponent of brain uploading might say "well, yes, it's not the most efficient way to capture the actual computer program being approximated, but we want to capture that computer program along with the world knowledge that it has." And this is indeed a good point: one of the hardest problems encountered by AI researchers was encoding the sheer amount of contingent world-knowledge a typical person has. If brain uploading can give us a shortcut for encoding all that knowledge of a particular person, then maybe it's still worth investigating.

However, this consideration brings out another potential problem with the idea of "brain uploading": its narrow focus on the brain ignores the extent to which the mind interacts heavily with the rest of the body through the peripheral nervous system. Embodied cognition is a field of research that examines the extent to which the mind is shaped by its interactions with the body, and how much world-knowledge is tied up in those interactions.

For an example from language (my field), the motor theory of speech perception posits that we perceive speech sounds by simulating which articulations are likely to have produced the sound that we heard. For a simplified example, the English sound associated with the letter "t" is composed of a period of silence followed by a burst of air. Under the motor theory of speech perception, a listener reasons that they can produce that kind of sound by blocking air with their tongue while building air pressure in their mouth behind the tongue, which produces the period of silence, and then releasing the air pressure by moving the tongue down. The listener knows that they produce that series of gestures when they want to produce the abstract speech sound (called a "phoneme") associated with the letter "t," and concludes that the talker must have intended the sound associated with "t." In this sense, the motor theory of speech perception proposes that we perceive the articulatory gestures rather than the speech sounds themselves.
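The listener's reasoning above can be sketched as Bayesian inference over gestures. Here is a toy version (the gesture labels and probability values are all made up for illustration; this is not a real model of speech perception): given an acoustic cue, the listener weighs how likely each candidate gesture was to have produced it.

```python
# Hypothetical candidate gestures and the prior probability that a talker
# produces each one (illustrative numbers only).
priors = {"tongue-closure-release (t)": 0.5,
          "lip-closure-release (p)": 0.5}

# Hypothetical likelihoods P(observed cue | gesture), where the observed cue
# is "silence followed by a high-frequency burst of air".
likelihoods = {"tongue-closure-release (t)": 0.8,
               "lip-closure-release (p)": 0.2}

def posterior(priors, likelihoods):
    """Posterior over gestures given the observed acoustic cue (Bayes' rule)."""
    unnorm = {g: priors[g] * likelihoods[g] for g in priors}
    z = sum(unnorm.values())
    return {g: p / z for g, p in unnorm.items()}

post = posterior(priors, likelihoods)
# With these numbers, the listener concludes the talker most likely
# intended the tongue gesture, i.e. the phoneme /t/.
best = max(post, key=post.get)
```

The crucial ingredient is the likelihood table: the listener can only fill it in by knowing, from their own body, which gestures produce which sounds. That is exactly the knowledge at issue in the next paragraph.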

However, to learn which articulatory gestures produce which sounds, the brain must interact with the articulators; the exact shape of the vocal tract, including its volume at each point, is probably not directly represented in the brain. So, if the motor theory of speech perception is really on to something, then a mind "uploaded" by scanning only the brain might have enough information stored to process languages it already knows well, but it couldn't learn new varieties. An uploaded brain that knows only English would have an even harder time learning Arabic than an embodied learner would, because Arabic is full of "pharyngealized" sounds that involve constricting the back of the throat. This "pharyngealization" articulation does not exist in most (any?) varieties of English, and so the uploaded brain would have no way to determine what sort of gestures would produce that acoustic signature.

So this theoretical "brain uploading" technology is probably not realistic as an actual future technique, although it could be useful as a foil for thinking about what an implemented mind would be like. How would an artificially implemented mind compare to a human mind? In what ways would a digital implementation be "embodied"? With the internet, would a digital mind's embodiment extend across the globe? Or even further?

One thought on “On "Brain Uploading"”

  1. One related issue that bugs me about this "brain uploading" stuff is that even if it works in preserving someone's mind as it is (which I'm not convinced is even possible), there is more to a person's mind than its current state - there's the way it changes over time. The brain changes and matures in a lot of different ways throughout a person's life, and a lot of those changes are caused by other parts of someone's body.

    If you really wanted brain uploading to work, you'd have to either figure out how the brain changes and find some way to accurately duplicate those changes, or accept that, for instance, a 7 year old whose brain has been replaced with an uploaded one would always think like a 7 year old. ...or I guess you could change the brain in deliberate ways, using hardware upgrades to enhance certain abilities while keeping everything else basically the same, and end up with a supergenius with the emotional maturity of a 7 year old. Of course, that's how we get sci-fi/horror movies, so I'm not sure if that's a good idea either. :)
