Thoughts on thoughts (II)
I said in my last entry that I'd been inspired to reflection as a result of adding two new games to my puzzle collection, and then I proceeded to comment on something that had been brought to my attention by the first of those games. Here's something that was brought to my attention by the second.
The second of my new games is ‘Twiddle’: a tile rearrangement game in much the style of Rubik's cube, and well suited to much the same kind of solution strategy, which is to put as much of the puzzle in place as you can do simply and then resort for the rest to rote-learned move sequences that (for example) cyclically rearrange three pieces or flip two upside down.
For any puzzle of this type, there's a theoretical ‘God's Algorithm’ for solving it, which can take any position and always find the shortest sequence of moves that returns that position to the start state. Normally the set of possible positions is too large to analyse exhaustively by computer, but for several of the simpler forms of Twiddle it actually turned out to be feasible on a normal desktop PC, so I hacked up some software and gave it a go.
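The standard way to compute God's Algorithm for a small puzzle is a breadth-first search outward from the solved state: since every move is invertible, the BFS distance of a position from ‘solved’ is exactly the length of its optimal solution. Here's a minimal sketch of that technique on a toy Twiddle-like puzzle of my own choosing (a 2×3 grid with two overlapping 2×2 rotation blocks, ignoring tile orientation) – not the actual variant or software discussed above:

```python
from collections import deque

# Tile positions in a 2x3 grid:
#   0 1 2
#   3 4 5
# A move rotates one of the two overlapping 2x2 blocks by 90 degrees.

def rotate(state, block, clockwise):
    """Return the state after rotating a 2x2 block (0=left, 1=right)."""
    # Corner positions of each block, listed clockwise from top-left.
    corners = [(0, 1, 4, 3), (1, 2, 5, 4)][block]
    s = list(state)
    step = 1 if clockwise else -1
    for i in range(4):
        # The tile at corners[i] moves one corner round the block.
        s[corners[(i + step) % 4]] = state[corners[i]]
    return tuple(s)

def gods_algorithm(solved):
    """BFS from the solved state; returns a dict mapping every reachable
    position to the length of its shortest solution."""
    dist = {solved: 0}
    queue = deque([solved])
    while queue:
        pos = queue.popleft()
        for block in (0, 1):
            for cw in (True, False):
                nxt = rotate(pos, block, cw)
                if nxt not in dist:
                    dist[nxt] = dist[pos] + 1
                    queue.append(nxt)
    return dist

solved = tuple(range(6))
dist = gods_algorithm(solved)
print(len(dist), "reachable positions; God's number is", max(dist.values()))
```

The same table answers both questions in the entry: its maximum value is the worst-case solution length, and walking from any position to a neighbour of strictly smaller distance, repeatedly, reads off one optimal solution.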
The results are really scary. It turns out, for example, that Twiddle in its default configuration can always be solved in at most eleven moves (and there are very few positions actually requiring that many; a randomly chosen position is much more likely to take seven or eight). But when I started solving Twiddle puzzles, my move sequence for permuting three pieces took longer than that on its own!
So one obvious question to ask of God's Algorithm, once you have an implementation of it for a particular puzzle, is ‘so if it takes me fourteen moves to permute three pieces and you say anything can be done in at most eleven, you tell me how I can permute three pieces more efficiently’. So I asked that, and it told me – and I kicked myself. The shortest move sequences for simple transformations like this are not magical weirdness; they're obvious things that could have been constructed using the normal mathematical tools one uses to construct Rubik-puzzle operators. They just happen to apply the same techniques more efficiently than I did.
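Chief among those ‘normal mathematical tools’ is the commutator trick: if two move sequences a and b disturb only one piece in common, then the composite a·b·a⁻¹·b⁻¹ cyclically permutes exactly three pieces and leaves everything else alone. A minimal sketch using abstract permutations rather than real Twiddle moves (the two 4-cycles below are hypothetical, chosen only so that their supports overlap in a single point):

```python
def compose(p, q):
    """Permutation 'do p, then q'; permutations are tuples mapping
    index -> image."""
    return tuple(q[p[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(inv)

def cycle(n, *pts):
    """Permutation of range(n) sending pts[0]->pts[1]->...->pts[0]."""
    p = list(range(n))
    for i, x in enumerate(pts):
        p[x] = pts[(i + 1) % len(pts)]
    return tuple(p)

n = 7
a = cycle(n, 0, 1, 2, 3)   # 4-cycle with support {0,1,2,3}
b = cycle(n, 3, 4, 5, 6)   # 4-cycle with support {3,4,5,6}: shares only 3
comm = compose(compose(a, b), compose(inverse(a), inverse(b)))
moved = [i for i in range(n) if comm[i] != i]
print(moved)  # exactly three points move: the commutator is a 3-cycle
```

The efficiency question the entry raises is then just a matter of finding short sequences a and b with that single-overlap property – which is exactly where a computed God's Algorithm can outdo a human's first attempt.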
(Asking God's Algorithm to solve a completely scrambled grid, however, is without question magical weirdness. It's really crazy to watch. You can't mentally keep track of where every piece is going, unless your brain is somewhat roomier than mine. You can just about watch the pieces on the top row get put into the right places, and when you do that it doesn't look too different from a human approach to putting the same pieces in place – except that at the moment the top row is finished, mysteriously the rest of the grid suddenly seems to be in order as well. It's creepy.)
What this exercise brought home to me is just how bad the human brain (or at least my brain :-) is at doing things really well. We as a race are staggeringly good at finding ways to do something at all: we've managed to survive and achieve total dominance on this planet in spite of (a) enormous razor-toothed ravening killing machines, (b) insidious deadly organisms much too small to see, (c) countless types of beast which can move faster than us, fly, or otherwise get to food before we do, (d) a rather haphazard trial-and-error body design, (e) natural disasters, ice ages etc, (f) each other, (g) miscellaneous. And we've done this in spite of having intrinsic strength, speed, stamina, stealth and/or armament disadvantages compared to most of the wildlife we've been competing with for survival; our ability to find solutions to problems is without question a trump card that beats all of that lot into a cocked hat.
But it seems to me that we don't solve any of these problems particularly well. Efficiency, optimality and the like are things we strive for desperately hard over centuries or millennia. Finding the best solution to a problem is something the human brain is badly suited to, and something other approaches can do better: most of the things we have so far trained computers to do for us (add up, play chess, control complex aircraft etc), they turn out to be able to do better than we can.
To some extent this is just me. I know I personally don't have much talent for, or find much satisfaction in, finding better solutions to an existing problem when I could be finding a new problem to solve instead. I can just about cope if I'm given a clear efficiency criterion to meet (‘solve this problem using at most the amount of memory in this computer’), because that way I can use that criterion to begin the process of ruling out possible approaches. But saying ‘just make this better, by any means necessary’ is almost the definition of tedium to me unless I happen to catch sight of an utterly obvious improvement. (Which I will do sometimes, but once the obvious ones are dealt with I have little patience with searching for the less obvious.) In addition, I think I have the stereotypical mathematician's attitude to some extent: to exclaim ‘Ah, a solution exists!’ and go back to bed. For some kinds of problem, once you know it can be done in principle, actually doing it is just boring grunt-work. (But for other kinds of problem, the grunt work is the fun bit: I'm temperamentally a hands-on actually-build-things programmer rather than a blue-sky computer science research type.)
But also, I wonder if this reflects another aspect of the evolutionary pressure under which we evolved. Just as strength, speed, stamina, stealth and nasty pointy teeth all turned out to be much smaller advantages in the long run than a problem-solving brain, I wonder if it might also be the case that we don't need to be able to find the best solutions to problems, because just like physical prowess it's another specialism which can be defeated by a sufficiently determined generalist. I wonder if a species which devoted more brain-space to doing things efficiently at the expense of losing a bit of the flexibility and adaptability of the human brain might have lost out to us just as the sabre-toothed tigers did. The rival tribe over the hill knows how to make flint axes sharper than we can? No matter, because while they were sitting there sharpening things, we were inventing leather body armour. Their hunters can track a food animal and have a 40% better chance of actually catching it than ours? But we've invented farming, so we don't even need to bother any more. And so on.
Then again, I do know that other people are better at optimising things than I am. There are people in my group at work who excel at finding a new and different 0.5% or 1% improvement to the performance of our tools, every week or every two weeks, and over the course of a year those tiny percentages add up impressively. And where I tend to make major decisions based on simple qualitative grounds (if I do it this way it will enable this in future which I like the sound of), I know that ideally I would be making decisions based on overall cost (yes, but it'll also take three times as long and I'll never actually turn out to get any benefit from it); and I know that some people can do that well, or at least better, and that they tend to be the people who succeed in areas like business. (In fact, sometimes I think this is one of the reasons I shy away from business and prefer to work in free software, where counting the cost of things tends to be less of an issue.) So perhaps this is all unnecessarily me-centric, and the paragraph of evolutionary theory above is just a specious moral justification for my personal intellectual limitations.
Anyone have any thoughts? Am I right to think humans as a whole tend to be good at qualitative problem solving and less good at quantitative optimisation? Or am I right to suspect that that's just me trying to excuse my own limitations? Or do some (or most) people function equally well in either mode and the interesting distinction is somewhere else? Or is the question meaningless, and what I see as two substantially different modes of thought are in fact more similar than they feel?