Friday, September 30, 2011

A Test for Consciousness

An article in the June issue of Scientific American called "A Test for Consciousness", by Christof Koch and Giulio Tononi (the full version is available online only to subscribers, so if you're not one you'll have to take my word for the gist of it), proposed something interesting about consciousness, namely that an important part of it is the ability to integrate the many components of an experience, much like the human brain does when it looks at a room, and...


Well, to take the room I'm currently in as an example, I see - without necessarily lifting my eyes above my laptop screen - a guitar, a TV, a bookshelf, a couch, a lamp and a fireplace. These items in the corner of my eye activate parts of my brain that activate other parts of my brain, and so on. For instance, the guitar triggers thoughts about music, the books make me think about "knowledge" (and my love for books), and the whole environment taken together, especially the TV and the couch, makes me think this is a living room. Of course, I know this is a living room, but any one of you looking at this picture of it would have drawn the same conclusion. Right?

Now, seeing "the whole picture" like this allows me to answer questions that a computer could not - for instance, whether something that is here shouldn't be here. Had I photoshopped the picture to include a Boeing 747 next to the fruit bowl on the table, you might have raised an eyebrow. And this is what the proposed consciousness test consists of. In the article, they give the example of two images, both showing a computer monitor: in one, a keyboard sits in front of it; in the other, the keyboard has been replaced by a plant that partially obscures the screen. A computer, they point out, typically couldn't tell which picture "makes sense", but a human could, because we see that the keyboard "belongs" with the monitor whereas the plant doesn't. In short, the authors suggest this as an updated version of the Turing test.
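To make the structure of the test concrete: it boils down to a two-alternative forced choice over image pairs. Here is a minimal sketch of such a harness in Python - my own framing, not anything from the article - where `plausibility_score` is a placeholder for whatever scene-understanding machinery the candidate system brings to the table:

```python
# A hypothetical harness for the congruence test (names and structure
# are my own invention, purely for illustration).
import random
from typing import Callable, List, Tuple

def run_congruence_test(
    pairs: List[Tuple[str, str]],  # (congruent, incongruent) image paths
    plausibility_score: Callable[[str], float],
) -> float:
    """Return the fraction of pairs where the candidate prefers the congruent scene."""
    correct = 0
    for congruent, incongruent in pairs:
        options = [congruent, incongruent]
        random.shuffle(options)  # don't let presentation order give it away
        chosen = max(options, key=plausibility_score)
        if chosen == congruent:
            correct += 1
    return correct / len(pairs)

if __name__ == "__main__":
    # A scorer with no scene knowledge rates every image the same, so it
    # picks whichever image happens to come first: chance performance.
    def clueless_score(path: str) -> float:
        return 0.0

    pairs = [("monitor_with_keyboard.jpg", "monitor_with_plant.jpg")] * 1000
    print(run_congruence_test(pairs, clueless_score))  # hovers around 0.5
```

The harness itself is trivial; all of the difficulty lives inside `plausibility_score`, which is rather the authors' point: ranking these scenes requires integrating knowledge about what belongs with what.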

While I think they have an interesting idea about consciousness, I disagree that the kind of test they propose will necessarily detect or reject it. To put it in the simplest terms: I believe that Abraham Lincoln was conscious, but I don't necessarily think he would have passed. And I don't think it's just that particular test (computer screen + plant) that is the problem; I think it would be hard to come up with any test of this kind that wouldn't classify a large part of the human population (living or dead) as not conscious. The image online (a picture of a man photoshopped to look like he's resting on the horizon) is a good test because computers would have a hard time figuring out that in order for that to "work" he'd need either to be several miles tall or to violate the law of gravity or some other very basic assumption about how the world works.

But again, is it clear that a tribesman from Papua New Guinea would pass this test? Would he understand what he's looking at when he sees a photo taken from a helicopter or airplane (which I assume this is)? And it's not okay to say "well, you could explain that to him", because that would be cheating: you're presumably not going to "explain" the picture to a computer that couldn't parse it (whatever an explanation would even mean in that case). On a side note, what's to say the guy isn't just falling? I suppose it's a bit of a giveaway that he's checking his watch (an unusual thing for a man plummeting to his death to do), but that is again a hint-of-something-wrong that our medieval ancestors would have missed entirely, since a wristwatch would have meant nothing to them.

So I think they're on to something good - we draw on many different intuitions in order to make sense of the world, and that's an important, maybe even defining, part of consciousness - and while the tests they propose in the article don't appear to me to be doing the job they claim, I'll happily acknowledge that getting a computer to pass a test like this (quite regardless of whether or not Honest Abe would have) would be a major milestone on the way to true AI. But that said, if we already know a test could easily yield false negatives in living humans, then we should ask ourselves what kind of AI we're actually trying to test for. A machine that thinks - no, experiences! - the world exactly the way a modern Westerner does? While that would be a fascinating machine indeed, I don't think it's the best goal for artificial intelligence, nor one likely ever to be reached even if someone set out to build it.

If ("when" would be my bet) we manage to create a conscious machine, the idea of it being anything like us in how it perceives the world, or in the conclusions it draws, or in its experiences through its sensory instruments seems to me to be very unimaginative. What's wrong with just getting it to truthfully answer the question "how do you feel?"
