Adages
It's increasingly occurred to me, as I get older and crustier, that I find myself saying things to people that I've said a number of times before.
Some of them are well-known adages or observations, of course, but others are things I haven't knowingly got from anywhere else, or have put together from several other pieces, and they've gradually become a sort of collection of my personal adages and laws and maxims.
A few months ago I idly wondered how many of those I actually had, so I started collecting them in a text file. I've come up with more than I expected, so I thought I'd post them in one go for your entertainment.
Simon's Law of Google has frustrated me since at least 2004: if you google repeatedly for something, fail to find it, and then complain about it loudly and publicly, it will only occur to you after you mouth off that the form of words you used to describe the thing in your rant is the one phrase you hadn't already tried typing into Google, and when you do it will work the first time.
The Law of the Easy Bit is another one I actually bothered giving a name to: if there is an easy bit and a hard bit, you'll devote all your concentration to getting the hard bit right, and embarrassingly mess up the easy bit.
The adage ‘Make simple things simple, make difficult things possible’ is of course well known and not mine; Wikiquote attributes it to Alan Kay. My contribution to it is to find numerous occasions to remind people forcefully that the word in the middle should be AND, not OR. (I'm sure everyone has their own favourite example of software, or something else, which signally fails one side of it due to thinking too hard about the other.)
The wording of the next one tends to waver a bit, but a general theme I keep coming back to is: make things either identical, or totally different. All sorts of annoyances arise when things are similar enough to confuse with each other but different enough to make it important not to confuse them, both in the linguistic case of similar words and in more directly functional questions. (For example, this same principle is the reason why big-endian and little-endian byte orders are such a rich source of bugs: they're easy to mix up, and it matters a great deal when you do.)
Treating your programming language as a puzzle game is a sign that the language isn't powerful enough. This tends to raise eyebrows when I say it, given my history of treating C as a puzzle game, so I often append ‘even if the puzzles are quite fun!’ I've so far resisted the temptation to try my hand at language design in an attempt to do properly what I've so far only done by trickery, but there's always a chance that sooner or later I'll be hubristic enough to try…
I'm not sure if the next one should be considered one adage, or two that happen to be worded the same: Write the hard part first. The reason I'm unsure is that there are two reasons I say it, and I suspect they might actually be distinct rather than just ends of a spectrum. One reason is that if a task in many parts has one part that's so hard it might actually turn out to be impossible, then attempting that one first means that if it does turn out impossible you haven't wasted lots of time on the easy parts which now have no use. (In particular, this is advice I give to people contemplating writing puzzle games for my collection: write the random game generator first, because if you can't get that to work, there's no use for the UI code at all.) The second reason is that in cases where the easy job is a special case of the hard one rather than complementary to it, it's probably easier and less error-prone to write the general solution and then derive the special case from it than to write the special case first and try to generalise it afterwards.
I'm tempted to call the next one the ‘Law of Conservation of Astonishment’, alluding to the Principle of Least Astonishment. This applies to systems (software or whatever) which try to cleverly guess from context what you wanted them to do: the less often they guess wrong, the more painful it is when they do – so the total astonishment is, roughly speaking, conserved.
On the subject of conservation laws, another thing I often seem to find myself saying is that there isn't a Law of Conservation of Blame. That is, if I can argue that something isn't my fault, that doesn't automatically make it your fault (supposing, that is, that you were the only other person whose fault it might plausibly be), and conversely, if it is my fault, that doesn't necessarily mean it's not also yours!
One that came up just the other day was a thing I use to stop myself agonising forever over decisions: the harder it is to choose, the less it matters. The idea is, if it's hard to decide which of the available options is best, that's probably because they're pretty close to equally good, which means that if you do choose the wrong one you won't lose much.
And finally, here's one that I have yet to work out the best concise wording for, but I'd like one, because I find I have to say it a lot. ‘It's not “OK because we're going to do X” unless you actually do X’ is perhaps the best I can think of right at the moment. For instance, someone at work will institute a laborious manual procedure (e.g. every time you do some reasonably common thing you must make sure to update three separate wiki pages, a whiteboard and a Word document) and promise that it's only a temporary inconvenience because ‘soon’ the proper automation will come along and then it won't be necessary to do it by hand any more. Of course it never quite does, and the next thing you know the hopelessly time-consuming manual procedure has become a permanent fixture.
Of course, like most adages, these are figurative and/or approximate and/or unproven and/or not universally applicable. The Law of Conservation of Astonishment claims (without evidence beyond anecdote, of course) a negative correlation between frequency and intensity of astonishment, but in order for it to actually keep the average astonishment per unit time constant the relation would have to be exactly reciprocal, and I've no reason to think it is, or even any idea of how you could quantify the question. Sometimes it really is necessary to solve an easy special case of a problem before you have the least idea how to tackle the general case; some hard choices do turn out to include one really bad option that wasn't obvious but could have been spotted with more thought. And it's not impossible that my instinct to write the hard part first may actually be a contributing factor to the Law of the Easy Bit. As with applying any other principle of this level of vagueness, you generally also need the proverbial ‘wisdom to know the difference’.
no subject
The next version of .NET should have something like this - http://en.wikipedia.org/wiki/Microsoft_Roslyn
no subject
array-of-double coefficients;      // coefficients of a polynomial
array-of-double input, output;     // desired evaluations of the polynomial

// Construct a function to evaluate this polynomial without loop overhead
jit_function newfunc = new jit_function [ double x -> double ];
newfunc.mainblock.declare { double ret = 0; }
for (i = coefficients.length; i-- > 0 ;)
    newfunc.mainblock.append { ret = ret * x + @{coefficients[i]}; }
newfunc.mainblock.append { return ret; }
function [ double -> double ] realfunc = newfunc.compile();

// Now run that function over all our inputs
for (i = 0; i < input.length; i++)
    output[i] = realfunc(input[i]);

(Disclaimer: syntax is 100% made up on the spot for illustrative purposes and almost certainly needs major reworking to not have ambiguities, infelicities, and no end of other cockups probably including some that don't have names yet. I'm only trying to illustrate the sort of thing that I'd like to be possible, and about how easy it should be for the programmer.)

So an important aspect of this is that parsing and semantic analysis are still done at compile time – the code snippets we're adding to newfunc are not quoted strings, they're their own special kind of entity which the compile-time parser breaks down at the same time as the rest of the code. We want to keep runtime support as small as we can, so we want to embed a code generator at most, not a front end. The idea is that we could figure out statically at compile time the right sequence of calls to an API such as libjit, and indeed that might be a perfectly sensible way for a particular compiler to implement this feature. The smallest possible runtime for highly constrained systems would do no code generation at all – you'd just accumulate a simple bytecode and then execute it – but any performance-oriented implementation would want to do better than that.

Importantly, the function we're constructing gets its own variable scope (ret in the above snippet is scoped to only exist inside newfunc and wouldn't clash with another ret in the outer function), but it's easy to import values from the namespace in which a piece of code is constructed (as I did above with the @ syntax to import coefficients[i]). It should be just as easy to import by reference, so that you end up with a runnable function which changes the outer program's mutable state.

Example uses for this sort of facility include the above (JIT-optimising a computation that we know we're about to do a zillion times), and also evaluation of user-provided code. My vision is that any program which embeds an expression grammar for users to specify what they want done (e.g. gnuplot, or convert -fx) should find that actually the easiest way to implement that grammar is to parse and semantically analyse it, then code-gen by means of calls to the above language feature, and end up with a runnable function that does exactly and only what the user asked for, fast, without the overhead of bytecode evaluation or traversing an AST.
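As a rough illustration of the ‘smallest possible runtime’ option, here is a sketch in present-day JavaScript (not the hypothetical syntax above) that just accumulates a trivial bytecode for the unrolled Horner evaluation and then interprets it; the opcode name and helper functions are invented for the example.

var OP_MUL_ADD = 0;                    // meaning: ret = ret * x + operand

// Build a 'program' that evaluates the polynomial with the given coefficients
function buildPolyProgram(coeffs) {
    var program = [];
    for (var i = coeffs.length; i-- > 0 ;)
        program.push({ op: OP_MUL_ADD, operand: coeffs[i] });
    return program;
}

// Interpret a program: this tiny runtime does no code generation at all
function runProgram(program, x) {
    var ret = 0;
    for (var j = 0; j < program.length; j++) {
        var insn = program[j];
        if (insn.op === OP_MUL_ADD)
            ret = ret * x + insn.operand;
    }
    return ret;
}

// e.g. 1 + 2x + 3x^2 evaluated at x = 2 gives 17
var prog = buildPolyProgram([1, 2, 3]);
var value = runProgram(prog, 2);

This keeps the runtime tiny, but every evaluation still pays per-instruction dispatch overhead, which is exactly why a performance-oriented implementation would want to generate real code instead.

no subject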
Func<int, int> doubler = x => x * 2;
In JavaScript:
var doubler = function(x) { return x * 2 };
I know, there's no "compile time" in JS. But it's equivalent syntax anyway.
If it's deferred until runtime, then the C# syntax is far more complex and unwieldy but probably more flexible: http://blogs.msdn.com/b/csharpfaq/archive/2009/09/14/generating-dynamic-methods-with-expression-trees-in-visual-studio-2010.aspx
no subject
Indeed, it's simpler and hence less flexible. Both those examples are more or less fixed in form at compile time; you get to plug in some implicit parameters (e.g. capturing variables from the defining scope) but you can't change the number of statements in the function, as I demonstrated in my half-baked polynomial example above. I don't know C# well at all, but I know that in JS you'd only be able to do my polynomial example by building the function source up in a string and doing an eval.
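For concreteness, the string-building route might look something like this sketch (compilePoly is a made-up name, and new Function stands in for a bare eval):

// Build the unrolled Horner evaluation as source text, then compile it once
function compilePoly(coeffs) {
    var body = "var ret = 0;\n";
    for (var i = coeffs.length; i-- > 0 ;)
        body += "ret = ret * x + " + coeffs[i] + ";\n";
    body += "return ret;";
    return new Function("x", body);
}

var poly = compilePoly([1, 2, 3]);   // 1 + 2x + 3x^2
var value = poly(2);                 // 17

It gets you a single straight-line function per polynomial, at the price of pasting values into source text, which is its own source of subtle bugs.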
(OK, I suppose you can build one up in JS by composing smaller functions, along the lines of
var poly = function(x) { return 0; }
for (i = ncoeffs; i-- > 0 ;) {
    builder = function(p, coeff) {
        return function(x) { return x*p(x)+coeff; };
    };
    poly = builder(poly, coeffs[i]);
}

but I've no confidence that running it wouldn't still end up with n function call overheads every time a degree-n polynomial was evaluated. Also, I had to try several times to get the recursion to do the right thing in terms of capturing everything by value rather than reference, so even if that does turn out to work efficiently it still fails the puzzle-game test.)