Friday, September 30, 2011

A Test for Consciousness

An article in the June issue of Scientific American called "A Test for Consciousness" (the full version is available online only to subscribers, so if you're not one you'll have to take my word for the gist of it) proposed something interesting about consciousness: that an important part of it is the ability to integrate the many components of an experience, much like the human brain does when it looks at a room, and...

Well, to take the room I'm currently in as an example, I see - without necessarily lifting my eyes above my laptop screen - a guitar, a TV, a bookshelf, a couch, a lamp and a fireplace. These items in the corner of my eye activate parts of my brain that activate other parts of my brain, and so on. For instance, the guitar triggers thoughts about music, the books make me think about "knowledge" (and my love for books), and the whole environment taken together, especially the TV and couch, makes me think this is a living room. Of course, I know this is a living room, but any one of you looking at this picture of it would have drawn the same conclusion. Right?

Now, seeing "the whole picture" like this allows me to answer questions that a computer could not - for instance, whether something that is here shouldn't be here. Had I photoshopped the picture to include a Boeing 747 next to the fruit bowl on the table, you might have raised an eyebrow. And this is what the proposed consciousness test consists of. In the article, they give the example of two images, both with a computer screen: one with a keyboard in front of it, and one with the keyboard replaced by a plant that partially obscures the screen. A computer, they point out, typically couldn't tell which picture "makes sense," but a human could, because we would see that the keyboard "belongs" with the monitor, whereas the plant doesn't. In short, the authors suggest this as an updated version of the Turing test.

While I think they have an interesting idea about consciousness, I disagree that the kind of test they propose will reliably detect or rule out consciousness. To put it in the simplest terms: I believe Abraham Lincoln was conscious, but I don't think he would necessarily have passed. And I don't think it's just that particular test (computer screen + plant) that is the problem; I think it would be hard to come up with any test of this kind that wouldn't classify a large part of the human population (living or dead) as non-conscious. The image online (a picture of a man photoshopped to look like he's resting on the horizon) is a good test because computers would have a hard time figuring out that for that to "work," he'd need either to be several miles tall or to violate the law of gravity or some other very basic assumption about how the world works.

But again, is it clear that a tribesman from Papua New Guinea would pass this test? Would he understand what he's looking at when he sees a photo taken from a helicopter or airplane (which I assume this one is)? And it's not okay to say "well, you could explain that to him," because that would be cheating. You're presumably not going to "explain" the picture to a computer that couldn't parse it (whatever an explanation in that case would even mean). On a side note, what's to say the guy isn't just falling? I suppose it's a bit of a giveaway that he's checking his watch (an unusual thing for a man plummeting to his death to do), but that is again a hint of something wrong that our medieval ancestors might well have overlooked.

So I think they're on to something good - we draw on many different intuitions in order to make sense of the world, and that's an important, maybe even defining, part of consciousness. And while the tests they propose in the article don't appear to me to be doing the job they claim, I'll happily acknowledge that getting a computer to pass a test like this (quite regardless of whether Honest Abe would have) would be a major milestone on the way to true AI. That said, if you have a test that we already know could easily yield false negatives even in living humans, then we should ask ourselves what kind of AI we're actually trying to test for. A machine that thinks - no, experiences! - the world exactly the way a modern Westerner does? While that would be a fascinating machine indeed, I don't think it's the best goal for artificial intelligence, or likely ever to succeed even if it were someone's intention.

If ("when" would be my bet) we manage to create a conscious machine, the idea that it would be anything like us in how it perceives the world, in the conclusions it draws, or in what it experiences through its sensory instruments seems to me very unimaginative. What's wrong with just getting it to truthfully answer the question "how do you feel?"

Wednesday, September 7, 2011


The other night I had a near-Storm experience, when the conversation - over wine - turned for a short while to evolution, and the words "but science is also just a belief, really" were uttered. I did object, but I think my objection was lost/ignored/misheard/drowned out in the generally loud conversation, which quickly spread like wildfire to some other topic.

But I'm bothered by the statement - not because it's untrue, but because semantics force it to be true. Yes, science is also something one "believes" in, much like some deity, the fairy godmother or Santa Claus. So the statement, taken literally, is true. But there are serious differences between the two kinds of belief, sadly jumbled up into one concept, and the English vocabulary does not allow them to be easily expressed. Neither, by the way, does Swedish. Which leads me to wonder whether there is a language that actually has separate words for the following two states of the brain:

  • Accepting a statement about reality because it seems likely, or feels right, or makes sense, compared to:
  • Accepting a statement about reality because it has been proven true.

I have more to say about these two sentences in a little bit. First, I want to address another beef with the word "belief": it doesn't distinguish between "likely" and "virtually certain." Anything anyone deems just past the line of 50% probability gets the tag "belief" - all the way up to 99.99%. And, for reasons having to do with social conventions and politeness in discussion, quite a chunk of the one-hundred-percenters, too. For instance:

  • "I believe it will rain tomorrow."
  • "I believe my wife is at work."
  • "I believe my team will win the championship."
  • "I believe this shirt is too small for me."
  • "I believe evolution is true."
  • "I believe there is a fly in my soup."

These are not six equally likely propositions, yet there is no good way of stating the difference, because we're stuck with the word "believe." Sure, you could try throwing phrases like "I hold it to be true that..." or "It seems likely to me that..." or "Surely, it's the case that..." into the mix, but those are usually awkward. You could also go right ahead and state matter-of-factly that "Evolution is a fact." Yes, it is. But that actually says something different from sentence number five above. It takes the speaker out of the picture, and I'm a firm believer (see?!) in speaking in the first person as often as I can, because I want to talk about how I perceive the world, what I believe, my values, and so on. Maybe this is too fine a point; maybe I'm splitting hairs. But I can't help looking at those two sentences and seeing a big difference in what they say and in what kind of conversation will take place after they're stated.

Maybe it's because of inflation. Maybe the word "belief" was originally used in the sense of "holding something to be true" but was gradually devalued as people used it for less and less certain statements about the world. Lots of words have had this happen to them, my favorite being the Swedish word "ganska," which originally meant "certainly" and now means "fairly." We slowly went from being "ganska" sure of what the word meant to only "ganska" sure about it.

So, about the missing word - the one that denotes something I hold to be a fact about the world because evidence dictates it, as opposed to things I've been told or think I've noticed - what about that? How do we distinguish between "I believe a full moon gives me a headache" and "I believe Jupiter is the largest planet in our solar system"? At least one of those is, for most people, held on authority. And only one of them is true. But both can correctly be summed up as things people believe, implying a hugely unfair (to Jupiter) similarity in degree of truth. It pisses me off. Can I start saying that I "objectively believe" something?

  • I objectively believe in evolution.
  • It seems likely that it'll rain tomorrow.
  • I'm probably over-obsessing about this topic.
  • I'm almost certain that very few people will read this.

Oh, whatever. Watch this instead:

Saturday, September 3, 2011

Officially retired poker player, am I

So, after quite a hiatus, I finally hammered the last nail into my poker career's coffin: I withdrew my entire bankroll, effectively putting an end to an era of my life that lasted five or six years, that could in many respects be said to have shaped my personality more than school or college ever did, and from which I carry (almost) no regrets.

And, with that, this blog will either die for real, or be reborn. I'm still not sure. I'm nowadays active on Twitter - @FredrikPaulsson - and I've found myself at times itching to say more than what fits into 140 characters. Perhaps this is the place to do that.

We'll see. Tonight, I'll drink a toast to my poker career, reflecting on its highs and lows, on all the friends I've made, the good times I've had and what I've learned.

Here's to you, poker, old friend. Maybe we'll cross paths again some day.