It's been a while since I've pored over my information theory notes but, since the topic has arisen lately, I thought I'd give those of you who are interested a quick primer (at least part one) in information theory and complexity, at least as it pertains to biological evolution and the hideously misunderstood idea of "design." This is just the first post -- we'll take this as far as readers want to follow it before their brains explode.
In terms of a really fundamental introduction to randomness and design, go here and read that post carefully. Now, forget it entirely as it is complete crap, written by someone who doesn't have the first clue about mathematics or information theory. Let me explain to you how things really work.
What is "complexity"? More to the point, what does it mean to say something is "complex" and, furthermore, how could you measure that? In the mathematical sense, it's not enough to say that something is either complex or not complex -- such a property should be precisely measurable in the mathematical sense. And, amazingly, it is.
At the risk of over-simplifying (and if you're an information theory (IT) wizard, I'm sure you'll be wincing a bit), let's define "complexity" more specifically as something called "Kolmogorov complexity," whose definition is actually fairly straightforward:
The minimum number of bits into which a string can be compressed without losing information. This is defined with respect to a fixed but universal decompression scheme, given by a universal Turing machine.
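For those who like their definitions in symbols, here is the standard textbook statement (nothing beyond the prose above, just made formal):

$$K_U(x) = \min \{\, \lvert p \rvert : U(p) = x \,\}$$

In words: the Kolmogorov complexity of a string $x$ is the length of the shortest program $p$ that makes a fixed universal Turing machine $U$ print $x$ and halt. The program plays the role of the compressed string; the machine plays the role of the decompressor.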
Um ... what does this mean? Let me give you an example.
Say I had a sequence of one million bits and I wanted to transmit that string to you in its entirety. How efficiently could I do that? Obviously, I could just send the entire one million bits -- that would be the brute-force way to do it. But what if the string had a pattern to it?
What if the string consisted entirely of ones? In that case, I could transmit that string, without any loss of information, just by sending the text "A string of one million one bits." And wasn't that simple? Notice how I managed to get the message transmitted using way less than one million bits of data. What that means is that the Kolmogorov complexity of that string is really, really low, simply because it could be encoded using far fewer bits than the original one million.
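To make the "compressed description" idea concrete, here's a toy sketch in Python (my own illustration, not anything formal). The program below prints the entire million-bit string, so the program's own length in bytes -- a few dozen -- is itself an upper bound on the string's Kolmogorov complexity.

```python
# This tiny program is a complete description of the million-bit string:
# whoever runs it recovers the string exactly. Its own length, not the
# string's, is what bounds the string's Kolmogorov complexity.
print("1" * 1_000_000)
```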
What if, instead, the string consisted of alternating ones and zeroes? Again, I could be dumb enough to send the entire string but, again, I could save piles of bandwidth and just send "One million alternating ones and zeroes." Note how that second message needed just a little more information than the first. Depending on the details of the encoding, the Kolmogorov complexity of that second string may come out slightly higher than the first's, but it is still remarkably low.
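And the program-as-description version of that second string is barely any longer:

```python
# Same idea: another few-dozen-byte program that regenerates the
# alternating million-bit string exactly.
print("10" * 500_000)
```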
And what if we had a string of one million bits that were legitimately randomly generated? Well, if there is no pattern to the bits, there may be no way to send the string any more efficiently than literally transmitting all one million bits -- and that incompressibility is exactly what it means for the string to have a very high Kolmogorov complexity. (In fact, a simple counting argument shows that almost all strings are like this: there are far fewer short descriptions than there are long strings to describe, so most strings have no short description at all.) No pattern, no short cuts. See how that works? That's what we mean by "complexity."
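If you want to poke at this yourself, here's a rough experiment using Python's zlib as a stand-in compressor. A real compressor only gives an upper bound on Kolmogorov complexity (the true quantity is famously uncomputable), but the pattern it reveals matches the three cases above.

```python
# Compare how well an off-the-shelf compressor does on the three strings
# discussed above. Compressed size is only an upper bound on Kolmogorov
# complexity, never an exact measure of it.
import os
import zlib

N_BYTES = 1_000_000 // 8  # one million bits, packed eight per byte

strings = {
    "all ones":    b"\xff" * N_BYTES,    # 11111111 repeated
    "alternating": b"\xaa" * N_BYTES,    # 10101010 repeated
    "random":      os.urandom(N_BYTES),  # OS-supplied random bytes
}

for label, data in strings.items():
    compressed = zlib.compress(data, 9)
    print(f"{label:>11}: {len(data):,} bytes -> {len(compressed):,} bytes")
```

On a typical run the two patterned strings shrink to a few hundred bytes, while the "compressed" random string comes out essentially the same size as the original (zlib even adds a little overhead). No pattern, no shortcut, exactly as advertised.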
Any questions? More to come shortly, as we relate complexity to design, order and really, really bad creationist mathematics.