simont


Thu 2017-08-03 11:43
Deoptimisation can be a virtue

There's a well-known saying in computing: premature optimisation is the root of all evil. The rationale is more or less that tweaking code to make it run faster tends to make it less desirable in various other ways – less easy to read and understand, less flexible in the face of changing future requirements, more complicated and bug-prone – and therefore one should get out of the habit of optimising everything proactively, and instead wait until it becomes clear that some particular piece of your code really is too slow and is causing a problem. And then optimise just that part.

I have no problem in principle with this adage. I broadly agree with what it's trying to say. (Although I must admit to an underlying uneasiness at the idea of most code being written with more or less no thought for performance. I feel as if that general state of affairs probably contributes to a Parkinson's Law phenomenon, in which software slows down to fill the CPU time available, so that the main effect of computers getting faster is not that software actually delivers its results more quickly but merely that programmers can afford to be lazier without falling below the ‘just about fast enough’ threshold.)

But I have one big problem with applying it in practice, which is that often when I think of the solution to a problem, the first version of it that I am even conscious of is already somewhat optimised. And sometimes it's optimised to the point of already being incomprehensible.

For example, ages ago I put up a web page about mergesorting linked lists; [personal profile] fanf critiqued my presentation of the algorithm as resembling ‘pre-optimised assembler translated into prose’, and presented a logical derivation of the same idea from simple first principles. But that derivation had not gone through my head at any point – the first version of the algorithm that I was even conscious of at all was more or less the version I published.
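
For concreteness, the shape of the idea when you do derive it from simple first principles is: split the list into two halves, sort each half recursively, then merge the two sorted halves. The sketch below is purely illustrative – it's written in C for definiteness, it isn't the version from that web page or fanf's derivation, and the names are invented for this post – but it shows how directly the derivation can be read back out of the unoptimised code.

    /* Illustrative only: a clarity-first, top-down mergesort for a
     * singly linked list. Not the algorithm from the page mentioned
     * above, and not fanf's derivation; names and structure are
     * invented purely for this sketch. */

    #include <stdio.h>

    struct node {
        int value;
        struct node *next;
    };

    /* Split a list into two halves: advance 'fast' two steps for every
     * one step of 'slow', then cut the list after 'slow'. */
    static struct node *split(struct node *list)
    {
        struct node *slow = list, *fast = list->next;
        while (fast && fast->next) {
            slow = slow->next;
            fast = fast->next->next;
        }
        struct node *second = slow->next;
        slow->next = NULL;
        return second;
    }

    /* Merge two sorted lists into one sorted list, using a dummy head
     * node so there is no special case for the first element. */
    static struct node *merge(struct node *a, struct node *b)
    {
        struct node head, *tail = &head;
        while (a && b) {
            if (a->value <= b->value) {
                tail->next = a;
                a = a->next;
            } else {
                tail->next = b;
                b = b->next;
            }
            tail = tail->next;
        }
        tail->next = a ? a : b;
        return head.next;
    }

    /* An empty or single-element list is already sorted; otherwise
     * split it, sort each half, and merge the results. */
    static struct node *mergesort_list(struct node *list)
    {
        if (!list || !list->next)
            return list;
        struct node *second = split(list);
        return merge(mergesort_list(list), mergesort_list(second));
    }

    int main(void)
    {
        int values[] = {5, 2, 8, 1, 9, 3};
        struct node nodes[6];
        for (int i = 0; i < 6; i++) {
            nodes[i].value = values[i];
            nodes[i].next = (i + 1 < 6) ? &nodes[i + 1] : NULL;
        }
        for (struct node *p = mergesort_list(nodes); p; p = p->next)
            printf("%d ", p->value);   /* prints "1 2 3 5 8 9" */
        printf("\n");
        return 0;
    }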

Another example of this came up this week, in an algorithmic sort of maths proof. I had proved something to be possible at all by presenting an example procedure that actually did it, and it turned out that the procedure I'd presented was too optimised to be easily understood: in the process of thinking it up in the first place, I'd spotted that one of the steps in the procedure would do two of the necessary jobs at once, and I then devoted more complicated verbiage to explaining that fact than it would have taken to present a much simpler procedure that did the two jobs separately. The simpler procedure would have taken more steps, but when all you're trying to prove is that some procedure will work, that doesn't matter at all.
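
The same thing has a direct analogue in code. As a toy example (invented for this post, and nothing to do with the actual proof), consider finding the second-largest distinct value in an array: a single pass that tracks the largest and second-largest values at once is doing two jobs in one step, and its invariant takes a moment to check, whereas two obvious passes – find the maximum, then find the largest value below it – take more steps but each step does exactly one job.

    /* A toy analogue of 'one step doing two jobs at once', invented for
     * illustration: find the second-largest distinct value in an array.
     * Both functions return INT_MIN if no such value exists. */

    #include <limits.h>
    #include <stdio.h>

    /* Optimised: one pass, carrying two pieces of state and a slightly
     * fiddly invariant relating them. */
    static int second_largest_onepass(const int *a, int n)
    {
        int best = INT_MIN, second = INT_MIN;
        for (int i = 0; i < n; i++) {
            if (a[i] > best) {
                second = best;
                best = a[i];
            } else if (a[i] > second && a[i] < best) {
                second = a[i];
            }
        }
        return second;
    }

    /* Deoptimised: two passes, each doing one obviously correct job.
     * Find the maximum, then find the largest value strictly below it. */
    static int second_largest_twopass(const int *a, int n)
    {
        int best = INT_MIN, second = INT_MIN;
        for (int i = 0; i < n; i++)
            if (a[i] > best)
                best = a[i];
        for (int i = 0; i < n; i++)
            if (a[i] < best && a[i] > second)
                second = a[i];
        return second;
    }

    int main(void)
    {
        int a[] = {3, 7, 1, 7, 5};
        printf("%d %d\n", second_largest_onepass(a, 5),
               second_largest_twopass(a, 5));   /* prints "5 5" */
        return 0;
    }

The one-pass version is the sort of thing I'd write instinctively; the point of the exercise is that the two-pass version is the one it's easier to convince someone else is correct.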

I think the problem I have is that although I recognise in principle that optimisation and legibility often pull in opposite directions, and I can (usually) resist the urge to optimise when it's clear that legibility would suffer, one thing I'm very resistant to is deliberate deoptimisation: once I've got something that has been optimised (whether on purpose or by accident), it basically doesn't even occur to me in the first place to make it slower on purpose. And if it did occur to me, I'd still feel very reluctant to actually do it.

This is probably a bias I should try to get under control. The real meaning of ‘premature optimisation is bad’ is that the right tradeoff between performance and clarity is often further towards clarity than you think it is – and a corollary is that sometimes it's further towards clarity than wherever you started, in which case, deoptimisation can be a virtue.
