In programming, I often find myself faced with a task that has easy bits and hard bits. My usual policy is to tackle the hard bits first, for two reasons:
- The hard bits are the ones most likely to turn out to be actually impossible, or infeasible because of some unforeseen wrinkle. If that's going to happen and the entire project is going to turn out to be doomed, it makes sense to find out as early as possible, so that I waste no more time on it than necessary.
- If the easy and hard bits are basically similar in structure, so that their methods of solution are also likely to be similar, then doing the easy bits first runs the risk that as I go along I'll develop a standard method which works only for them, and then get a nasty shock when I come to the hard bit and have to work it all out again. By contrast, it's generally much easier to simplify a method that worked for the hardest bit so that it also handles the easier cases; that way I only have to work out my method once.
So this generally seems like a sensible strategy to me. I've used it for nearly all my programming life: I have a clear memory of advocating it to a couple of schoolmates who were giving programming a try when I was twelve.
Just occasionally, though, it backfires on me. In the past week I've had a task with hard bits and easy bits, and of course I did the hard bits first. I beat my head against them for days, and after great effort I'd managed to cobble together something which would probably just about work, when along came an insight that made the whole job much simpler and most of that hard-won effort apparently unnecessary.
(It is of course possible that I wouldn't have reached the insight in question without the experience gained from struggling with the hard bits, so that the apparent waste of time was unavoidable; but in this particular case I don't think so.)