simont: (Default)
2017-08-31 10:28 am

High-layer errors

‘To err is human,’ the quotation says. But it's not only human. To err, in general, is the common lot of humans, all other life forms, all mechanical devices, and surely anything else I haven't thought of for which you can specify any idea of a purpose or intention to have erred from.

Something that's closer to being human-specific (though surely still not completely) is to err at a high cognitive layer.

For example, a few years ago I got home from work, carrying my usual rucksack of random stuff. I opened the front door, found a letter on the mat, picked it up, shut the door behind me, carried letter and rucksack to the study, put them down, unzipped the rucksack … and then stood there confusedly wondering why I'd done that, and what I might have wanted out of the rucksack.

Of course, I might have just forgotten what I wanted by the time I got the bag open. But not this time. In this case, what had actually gone wrong was that I had meant to open the letter, not to open the rucksack.

But I'd somehow mistaken that intention, somewhere in my brain, and issued the wrong ‘open this thing’ instruction. And having made that error, the lower layers of my brain's planning apparatus and motor cortex had all cooperated to faithfully implement the wrong instruction they'd been given.

This is a particularly beautiful example of a high-layer cognitive error, because you can easily imagine what a lower-layer error in the same situation would have looked like. I didn't, for example, pick up the paper knife and try to slit the rucksack open with it (or, more likely, stop half way through that motion and find myself confusedly wondering how to proceed). Instead, the erroneous intention was carried through perfectly competently, by choosing all the right subgoals and pursuing them in the right way, and it wasn't until the rucksack was successfully open in front of me that I was confronted with the realisation that I'd made a mistake.

A slightly different example of this phenomenon happened to me the other day, in a computing context. I was sitting at a bash prompt in a git checkout, and I ran git stash, then git pull -r, and then pressed Ctrl-Y … and then sat there confusedly wondering why I'd just pasted some old half-written piece of a past command on to my shell command line.

The right way to complete the sequence would have been the command git stash pop, to restore the uncommitted changes that I'd had in the checkout before realising I needed to do the disruptive pull operation. So why did I press Ctrl-Y instead?

Because in other situations, I'll sometimes be half way through typing a command, and then realise I need to run another command first (e.g. a cd command, so as to be in the right directory for the original command to work). In that situation, my habit is to press Ctrl-A then Ctrl-K, to transfer the half-typed command into bash's paste buffer; then run the preparatory command; and then press Ctrl-Y to paste the original command back on to my command line, after which I can finish typing it and run it.

In other words, the bash-driving part of my brain has two separate procedures for the high-level idea of ‘set this half-done thing temporarily aside, do a disruptive thing, get the half-done thing back from wherever you stashed it’. One is for half-done things in the form of uncommitted work in a git checkout, and is spelled git stash / git stash pop. The other is for half-done things in the form of unfinished shell commands, and is spelled ^A^K / ^Y.

And I'd simply forgotten, half way through the action, which of those two procedures I was in the middle of performing, so I did the second half of the wrong one.

These kinds of high-layer error are fascinating to me. They're often comical – much more so than very low-layer errors, such as a typo, or a fumble with the paper knife. And it's always seemed to me that they shed light on the functioning of the brain in a way that low-layer errors don't.

Also, I find that a common feature of high-layer errors is that they cause a great deal more confusion afterwards. If I make a low-layer motor error like a typo, it will typically be instantly obvious to me that what I meant to do is not the same as what I did do; it might be inconvenient to recover from the mistake, but it's not confusing. But with high-layer errors, in which I've just quite competently performed completely the wrong task, I seem to also have half-convinced myself that that wrong task was what I meant to do, and it will take me several seconds of complete confusion to recover from that belief and figure out what just happened.

For example, in the above cases, after I'd opened the rucksack I spent a while trying to remember what I might have wanted from it; and having pasted some old nonsense on to my shell command line, I started from the assumption that I'd meant to do that and that the nonsense was about to be useful in some way, if only I could just remember what I might have had in mind. In both cases, my prior for ‘you meant to do that and it will make sense in a moment’ was much higher than my prior for ‘this was a weird high-layer error and you didn't really mean to do it at all’.

I suppose that does make Bayesian sense, since high-layer errors seem quite rare – which is another reason I find them notable and interesting when they occur!

simont: (Default)
2017-08-03 11:43 am

Deoptimisation can be a virtue

There's a well-known saying in computing: premature optimisation is the root of all evil. The rationale is more or less that tweaking code to make it run faster tends to make it less desirable in various other ways – less easy to read and understand, less flexible in the face of changing future requirements, more complicated and bug-prone – and therefore one should not habitually optimise everything proactively, but instead wait until it becomes clear that some particular piece of your code really is too slow and is causing a problem. And then optimise just that part.

I have no problem in principle with this adage. I broadly agree with what it's trying to say. (Although I must admit to an underlying uneasiness at the idea of most code being written with more or less no thought for performance. I feel as if that general state of affairs probably contributes to a Parkinson's Law phenomenon, in which software slows down to fill the CPU time available, so that the main effect of computers getting faster is not that software actually delivers its results more quickly but merely that programmers can afford to be lazier without falling below the ‘just about fast enough’ threshold.)

But I have one big problem with applying it in practice, which is that often when I think of the solution to a problem, the first version of it that I am even conscious of is already somewhat optimised. And sometimes it's optimised to the point of already being incomprehensible.

For example, ages ago I put up a web page about mergesorting linked lists; [personal profile] fanf critiqued my presentation of the algorithm as resembling ‘pre-optimised assembler translated into prose’, and presented a logical derivation of the same idea from simple first principles. But that derivation had not gone through my head at any point – the first version of the algorithm that even worked for me at all was more or less the version I published.
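For comparison, here's roughly what the ‘derived from simple first principles’ shape of the algorithm looks like – a sketch in Python with a hypothetical minimal Node class, not anything taken from the page itself: split the list in half, sort each half recursively, merge the sorted halves.

```python
# First-principles linked-list mergesort: split, recurse, merge.
# (Node, from_list and to_list are hypothetical helpers for the sketch.)

class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def split(head):
    # Slow/fast pointers: when 'fast' reaches the end, 'slow' is at
    # the midpoint, and we cut the list there.
    slow, fast = head, head.next
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
    second = slow.next
    slow.next = None
    return head, second

def merge(a, b):
    # Standard merge of two sorted lists, via a dummy head node.
    dummy = tail = Node(None)
    while a and b:
        if a.value <= b.value:
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a or b
    return dummy.next

def mergesort(head):
    if head is None or head.next is None:
        return head
    a, b = split(head)
    return merge(mergesort(a), mergesort(b))

def from_list(xs):
    head = None
    for x in reversed(xs):
        head = Node(x, head)
    return head

def to_list(head):
    out = []
    while head:
        out.append(head.value)
        head = head.next
    return out

print(to_list(mergesort(from_list([5, 3, 8, 1, 9, 2]))))  # [1, 2, 3, 5, 8, 9]
```

The point being that this version more or less writes itself from the definition of the problem, whereas the version I actually published was already several optimisation steps beyond it.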

Another example of this came up this week, in an algorithmic sort of maths proof – I had proved something to be possible at all by presenting an example procedure that actually did it. But the procedure I presented turned out to be too optimised to be easily understood: in the process of thinking it up in the first place, I'd spotted that one of its steps would do two of the necessary jobs at once, and I then devoted more complicated verbiage to explaining that fact than it would have taken to present a much simpler procedure that did the two jobs separately. The simpler procedure would have taken more steps, but when all you're trying to prove is that some procedure will work, that doesn't matter at all.

I think the problem I have is that although I recognise in principle that optimisation and legibility often pull in opposite directions, and I can (usually) resist the urge to optimise when it's clear that legibility would suffer, one thing I'm very resistant to is deliberate de-optimisation: once I've got something that has been optimised (whether on purpose or by accident), it basically doesn't even occur to me in the first place to make it slower on purpose. And if it did occur to me, I'd still feel very reluctant to actually do it.

This is probably a bias I should try to get under control. The real meaning of ‘premature optimisation is bad’ is that the right tradeoff between performance and clarity is often further towards clarity than you think it is – and a corollary is that sometimes it's further towards clarity than wherever you started, in which case, deoptimisation can be a virtue.

simont: (Default)
2017-03-29 08:59 am

More parser theory

I had a conversation recently about low-priority prefix operators in infix expression grammars, which left me mildly uncertain about what they ought to mean. So here's a quick straw poll.

Suppose I have an expression grammar in which the multiplication operator * and the addition operator + have their usual relative priority (namely, * binds more tightly, i.e. is evaluated first). Then suppose I – perhaps unwisely – introduce a prefix operator, which I'll call PFX for want of a better name, which has intermediate priority between the two, so that

  • PFX a + b behaves like (PFX a) + b, i.e. the PFX is evaluated first
  • PFX a * b behaves like PFX (a * b), i.e. the PFX is evaluated second.

That's simple enough (if unusual). But things get weirder when PFX appears on the right of another operator. Specifically, what would you imagine happens to this expression:
a * PFX b + c
in which you can't process the operators in priority order (* then PFX then +) because the PFX necessarily has to happen before the *.

Open to: Registered Users, detailed results viewable to: All, participants: 10

In what order should the parser process those operators?


PFX then * then + to give (a * (PFX b)) + c
5 (50.0%)

+ then PFX then * to give a * (PFX (b + c))
0 (0.0%)

PFX then + then * to give a * ((PFX b) + c)
0 (0.0%)

None! Report a parse error and demand some disambiguating parentheses.
4 (40.0%)

Low-priority prefix operators misparsed my grandparents, you insensitive clod
1 (10.0%)

simont: (Default)
2017-02-20 08:09 pm

My language design is bad and I should feel bad

Over the weekend, I realised, extremely belatedly, that the expression language I designed for my free-software project spigot has a grammar bug. Specifically, it's a context-dependency analogous to the C typedef bug: the same expression can parse differently depending on whether a given identifier is currently defined to be a function or a variable, which means that you need to do your scope analysis in sync with the parse, so that you can know what set of definitions is currently in scope for the subexpression you're currently looking at.
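To illustrate the shape of the problem (this is a made-up toy, not spigot's actual grammar): suppose that in some expression language, an identifier followed by a parenthesised expression denotes a function call when the identifier is known to be a function, but implicit multiplication when it's a variable. Then the parser cannot build the right tree without consulting the scope as it goes.

```python
# Hypothetical illustration of the context-dependency: the same token
# sequence 'f (x)' yields a different parse tree depending on what the
# current scope says 'f' is.

def parse_suffix(name, arg_ast, scope):
    """Attach a parenthesised expression to a preceding identifier."""
    if scope.get(name) == 'function':
        return ('call', name, arg_ast)  # f applied to the argument
    else:
        return ('*', name, arg_ast)     # f multiplied by the value

print(parse_suffix('f', 'x', {'f': 'function'}))  # ('call', 'f', 'x')
print(parse_suffix('f', 'x', {'f': 'variable'}))  # ('*', 'f', 'x')
```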

confessions, lamentations, and parser theory )
simont: (Default)
2016-10-10 04:46 pm

Is there a name for this bad argument?

There's a particular annoying pattern I notice in debate, in which one person criticises another's choice of argument on the basis of a sort of misapplication of pragmatics.

Here's a concrete (if slightly melodramatic) example. Imagine we're drinking together, and you demand, suspiciously, ‘Wait, how do I know you haven't poisoned this bottle of wine?’. To which I respond, ‘I'm drinking from it too, so that would be a really bad idea!’ Now if you were to think, or say, ‘Oh, so you would have poisoned it if we hadn't been drinking from the same bottle?’, you'd be committing this fallacy.

Because in fact, of course, the main reason I haven't tried to poison you is some combination of because I don't want to and because I'm too moral to do such a wrong thing, and both of those reasons would still apply regardless of any detail of who was drinking what.

But you can't check those statements, because they're about stuff entirely inside my own head. So if I'd tried to use either one as a defence, then you'd be no more convinced of my non-murderousness than you are now, because if you can believe I might try to poison you in the first place, then you'd have no trouble also believing that I'd lie about my motives in the course of the attempt. Whereas you can easily verify for yourself that I am indeed drinking from the same bottle, and perhaps you might find it harder to believe I was self-sacrificingly murderous than merely self-interestedly murderous.

(Since this is a silly hypothetical example, let's assume we can disregard all the even more improbable edge cases beloved of fiction, like the poison being in the ice cubes, or smeared on your particular glass in advance, or that I took the antidote beforehand, or have spent ten years building up resistance to iocane powder, etc…)

Anyway. That's why I chose that particular reason as the one to mention: not because it was my core reason or my only reason, but because it was one you'd be more likely to believe, because you could check it yourself.

I think the general pattern of which this is an example is: it's a fallacy to assume that someone who has mentioned one good or bad property of a thing (or reason to do it or not do it, or whatever) must have chosen that particular property because it's the most important or the only one, rather than choosing the property most appropriate to what particular goal the utterance as a whole is trying to achieve.

In this silly example, my goal is to try to convince you that you're safe; so I have to pick a reason that will actually manage to do that job, rather than one that is more important to me but likely to be less effective. In other kinds of debate, one might similarly choose the argument that appeals most to the particular audience one is trying to convince, not the one that is most fundamental in the arguer's own mind. Or you might avoid particular arguments because you know they'll cause some enormous derailing sidetrack. Any number of reasons, really.

So. Does this fallacy have a well-known name?

simont: (Default)
2016-06-21 08:47 am

Regular language

I noticed yesterday after writing a comment in some code that one of my writing habits had changed, without me really noticing. So I thought I'd see what other people's opinions were.

Poll #17528 A regular holy war
Open to: Registered Users, detailed results viewable to: All, participants: 36

How do you write 'regular expression' in abbreviated form?


9 (26.5%)

24 (70.6%)

Something else
0 (0.0%)

I only ever write it unabbreviated
0 (0.0%)

I don't ever write it at all
1 (2.9%)

How do you pronounce the g in regexp / regex ?


Hard, like in 'regular' (IPA /ɡ/)
22 (61.1%)

Soft, like in 'Reginald' (IPA /dʒ/)
12 (33.3%)

Something else
1 (2.8%)

I never pronounce these abbreviations
1 (2.8%)

simont: (Default)
2016-03-27 01:41 pm

Found in the files

I visited my mum yesterday, and we ended up going through a box file of snippets she'd saved from my childhood. Among them, printed on silvery ZX thermal printer paper that brought a nostalgic smile to my face in its own right, was the following Sinclair BASIC program, printed together with its output.

  5 BORDER 0: PAPER 0: INK 7: CLS
 10 DIM x(40)
 20 FOR z=1 TO 40: LET a=z*0.1
 30 LET y=SQR (1+(a*a))+SQR ((4-a)*(4-a)+4)
 40 LET x(z)=y
 50 NEXT z
 60 LET smsf=1
 70 FOR z=1 TO 40: LET a=z*0.1
 80 IF x(smsf)>x(z) THEN LET smsf=z
 90 NEXT z
 95 PRINT "GG end to J", "Road length"
100 PRINT smsf,x(smsf)
GG end to J     Road length
13 5.0001815

I don't know why Mum chose to save that program in particular, out of all the ones I must have printed out in the few years I had the Spectrum and that printer. She guessed that it must have been the first one I wrote, but it seems a bit sophisticated for that – surely there would have been a pile of ‘hello, world’ type things before this quite mathematical one?

Regardless, I had completely forgotten it, and it makes me smile at the fact that what it seems to be doing is numerically testing the proposition that a straight line is the shortest distance between two points. (Line 30 calculates the distance from (0,0) to (4,3) via the point (a,1); the rest of the program tries this for lots of different values of a, and finds the one that gives the shortest total distance, which unsurprisingly is the point where the straight-line route intersects the line y=1, to the limits of rounding error.)
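For anyone who'd rather not dust off a Spectrum emulator, the same computation ports directly to Python (keeping the original's cryptic smsf name for the index of the smallest value found so far, which is my best guess at what it stood for):

```python
import math

# Total distance from (0,0) to (4,3) via the point (a,1),
# tried for a = 0.1, 0.2, ..., 4.0 -- a direct port of the BASIC above.
def road_length(a):
    return math.hypot(a, 1) + math.hypot(4 - a, 2)

lengths = {z: road_length(z * 0.1) for z in range(1, 41)}
smsf = min(lengths, key=lengths.get)
print(smsf, lengths[smsf])  # 13 5.00018..., i.e. a = 1.3
```

Reassuringly, it agrees with the thermal-printer output: the minimum is at z=13, giving a path of length just a shade over the straight-line distance of 5.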

But I do wonder whether this was something I typed in from elsewhere in full, or followed hints in a manual, or what. I'm particularly curious about the variable name smsf and the cryptic legend ‘GG end to J’…

simont: (Default)
2016-03-22 09:53 am

Terminology gap

Yesterday in a technical conversation I used the phrase ‘HP-complete’.

I had intended it, by analogy with ‘NP-complete’, to mean that if the problem under discussion could be solved, the solution would necessarily include a solution to the Halting Problem, i.e. the problem was as hard as the Halting Problem, i.e. uncomputable.

There are several other well known ‘-complete’ phrases which analogise in the same way – ‘Turing-complete’ and ‘AI-complete’ – and it seems to me that ‘HP-complete’ fits right into this framework and has a technically precise and useful meaning. But for some reason it isn't in common usage in the way that those other ‘-complete’ phrases are, so the person I was talking to didn't get what I meant and I had to explain. Bah. I don't see why it isn't part of the standard lexicon, for all the same reasons!

I suppose we already have the word ‘uncomputable’, so you could argue that we don't need ‘HP-complete’ as a synonym. But I think it's not quite a synonym, in that it also conveys a hint about why it's uncomputable, or at least about the train of thought that made me conclude that it was.

(I suppose it's conceivable, in my original context, that I should have chosen ‘HP-hard’ rather than ‘HP-complete’ – I don't think I'd intended to rule out the possibility that the problem under discussion was harder than the Halting Problem :-)
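In the standard notation of Turing reductions, the distinction between the two terms comes out as follows (a sketch using the usual computability-theory definitions, with HP standing for the Halting Problem):

```latex
% A problem P is HP-hard if solving it would let you solve the Halting
% Problem; HP-complete if, in addition, it is no harder than HP itself.
\begin{align*}
  P \text{ is HP-hard}     &\iff \mathrm{HP} \le_T P \\
  P \text{ is HP-complete} &\iff \mathrm{HP} \le_T P \;\text{and}\; P \le_T \mathrm{HP}
\end{align*}
```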

simont: (Default)
2016-03-10 12:06 pm

I find myself noticing scansion

Somewhere in my head is a very specialised pattern-recognition neuron, which seems to fire whenever I encounter – or, even more so, inadvertently write – any phrase or sentence that would scan perfectly as the first line of a limerick.

It's a very persistent and attention-grabbing neuron, for some reason. Every time it fires, I have to spend the next half hour vigorously resisting the urge to try to come up with the rest of the limerick. Even if the triggering phrase is something utterly, tediously mundane, as for example the one that emerged from my fingers just now as a hasty commit message: ‘A cleanup I noticed in passing’.

simont: (Default)
2016-03-07 01:30 pm

An unsatisfying resolution

I've not been posting here in a while, and it seems to me that one reason why not is that I increasingly don't feel as if I have the brain-space to put together a long and well thought-out post about anything.

Perhaps, therefore, I should begin to fix this by posting short and/or inconsequential things. To kick off with, here's one that is both.

I lost a sock the other week. I did it in the way you normally lose socks: at one time I had N socks, and at a time considerably later I had N-1, and there were a lot of things that happened in between, so I don't know which was responsible.

(That's what makes the sock lost, of course – if I could narrow down to one event, I could have just gone and got the sock back from wherever that one happened.)

I looked for it everywhere, and it didn't turn up. I resigned myself to having only N-1 of those socks – until I did the laundry yesterday, and found that when I came to hang everything up, I inexplicably had N of them again.

So I found the sock in the same way as I lost it: I don't know what thing I did caused it to reappear.

That's the worst way to find a sock. The problem is solved, but in a way that sheds no light on the mystery! Arrrgh!

simont: (Default)
2015-08-10 12:55 pm

Magic and language in fantasy fiction

Since I've been musing about fiction recently, here's another thought that crossed my mind.

Fantasy fiction often has a magic system involving spells cast in spoken language. But what language? Why does that language work and not another? Or would another language work? Would it depend on the spell? On the caster? On the location? It seems to me that there are quite a few plausible ‘cosmologies of magic’ which would cause different answers to those questions, many of which have specific examples in existing fiction, and I wonder if there are any more I've missed out.

an attempted taxonomy; vague hints at content of many fantasy books )

So, what have I missed?

simont: (Default)
2015-08-03 10:37 am

Random fiction question: non-magical archaeology

A question occurred to me last night. Perhaps the two best known fictional archaeologists (taking the term somewhat loosely), across fiction in all media, are Indiana Jones and Lara Croft. Both of them have in common that they investigate things about which there were rumours of ancient magical powers, or gods, or other such supernatural and powerful stuff. And they're right – the Ark of the Covenant, the Holy Grail, the Dagger of Xian, etc, all really do perform as advertised.

What are the best known examples of fictional archaeologists who do not unearth ancient magical artefacts, and the only thing they ever find out is information about what happened in the past?

For these purposes, I think I'm going to rule that the actual archaeological discoveries have to be part of the plot: having a character who happens to be an archaeologist isn't sufficient, if the story only focuses on some other aspect of their life. (Even if it's a somewhat work-related aspect, such as worries about career progression, or conflicts with co-workers.)

I only managed to come up with one example of this at all, namely Asimov's Nightfall. I'm sure there must be others, though.

simont: (Default)
2015-01-09 09:32 am

Gold, gold, gold, gold, gold

It randomly occurred to me this morning to wonder if everyone else has had the same thought about this, or if it's just me.

For those who have read Discworld novels...

Poll #16335 Gold, gold, gold, gold, gold
Open to: Registered Users, detailed results viewable to: All, participants: 14

In your head, to what tune (if any specific one) do the Discworld dwarfs sing the gold song?

simont: (Default)
2014-11-11 11:00 am

Sleep is a substitute for caffeine, it turns out

After a couple of weekends of hard work on free software recently, I was feeling pretty tired, and decided to dedicate the weekend just past to intensive rest and catching up on my sleep.

It turns out that if I really try hard to catch up on sleep, and succeed, then I become noticeably more caffeine-sensitive. I drank the same amount of coffee yesterday as I would normally drink on a Monday morning, and it made me anxious and unhappy for most of the rest of the day, and then it took me hours and hours to go to sleep last night. So now I've completely undone all of the weekend's good work and am short of sleep again. Arrgh!

And now I think about it, I remember this same thing happening on a few other occasions too, particularly when I get back from holiday. Apparently it's a thing I ought reasonably to be able to predict and allow for. But I didn't – and I can't even use ‘being half asleep’ as an excuse…

simont: (Default)
2014-10-26 07:58 pm

Back once again for the origin/master

This weekend, after preparation and faff lasting several months, I migrated all of my free software projects out of Subversion into a collection of git repositories.

Good grief, it was a faff! It's surprisingly like moving house – there's no end of just-one-more-things that all have to be sorted out, and every so often you turn a corner and find another huge list of things to add to the list, and it's exhausting sorting it all out. I did as much as I could ahead of time (e.g. I did the work of stopping various projects from depending on a monotonic revision number a few weeks in advance of the moment of migration itself), and there's a big list of sortings-out left to be done which I'll get round to once I've rested, but even so, the actual ‘moving day’ still had a lot of bits and pieces I couldn't move to another day. I'll find work restful tomorrow, I think!

simont: (Default)
2014-10-16 01:52 pm

People don't know their classics

At work the other day a fellow developer and I were discussing incomplete array types in C, and at one point in the conversation he referred to them using a phrase which gave me a perfect opportunity to reply ‘Arrays of Unspecified Size? I don't think they exist.’ Sadly, I got a blank look.

And today at lunchtime there was a silly conversation about a car park that had collapsed due to subsidence, on a site with a large rabbit population. It was suggested that the rabbits might have undermined the car park, and someone else said ‘no, don't blame the rabbits’. Of course this is a feed line which physically cannot have any other reply than ‘Bunnies, bunnies, it must be bunnies’ – which elicited no evidence of recognition from any of about six people.

Bah. People don't know their classics!

simont: (Default)
2014-10-13 10:42 am

Childhood predictions

A random childhood memory of mine is of being about 10, and wondering what my life would look like when I was grown up and didn't live with my parents¹ any more.

At that age, a thing I was enjoying doing in my spare time was making cardboard models of polyhedra. And I remember having the thought that once I was a grown-up, perhaps I might still feel moved from time to time to sit at home cutting and folding and gluing cardboard, but since I wouldn't be living with family any more, there'd be nobody else to show the things to once I'd made them. I found that thought rather upsetting, and started to think that perhaps being an adult wouldn't be very nice; I had an image of myself spending my life sitting in a house on my own, with nobody to talk to, and even the ways I was accustomed to amusing myself on my own not being as much fun as they used to be.

It was only a passing thought; there were clearly lots of things wrong with it, several of which I spotted fairly soon, so it didn't continue to worry me for very long. But it stuck in my head well enough that even now I can clearly remember having been worried about it once.

During the weekend just past, I put a fair amount of effort into helping [ profile] drswirly design and make a cardboard cube with group theory notation all over its faces, for maths-teaching purposes. There were enjoyable coding challenges involved in writing the program that put all the right notation in the right places and printed out the net; when it was finished, we looked at it and went ‘ooh!’ (because it's quite pretty), and it will also come in actually useful to him and possibly to other maths-type people. And then he rewarded me with cake.

At times like this, I feel an urge to travel back in time and reassure my 10-year-old self that he was wrong in the one part that mattered. I would tell him, it turns out you're right that you'll live on your own when you grow up, and it turns out you're also right that you'll still make polyhedra every so often, but notwithstanding your pessimistic analysis it will still be great fun.

1. (Hmmm, I nearly wrote ‘didn't live at home any more’ there, which made sense in my head but once written down it's clearly a contradiction in terms!)

simont: (Default)
2014-09-23 12:42 pm

Couldn't have done that better on purpose

I was in the kitchen just now, humming to myself as I peeled an avocado.

I don't really know how to peel avocados without getting tasty green goo all over my fingers. So when I'd finished, I licked it off. In the process, in a feat of staggering clumsiness, I managed to (not very painfully) bite my own thumb.

I felt a bit silly about that. But then I suddenly realised something that made it much more amusing than shameful: at the moment I accidentally bit my thumb, the song I had been humming to myself was Dire Straits' ‘Romeo and Juliet’.

Which is perfect, of course!

simont: (Default)
2014-08-01 09:58 am

Approximate Anatidae alignment

It occurred to me last week to wonder: in organisations that predominantly do development in Python, do the managers talk endlessly about having to get all their vaguely duck-shaped objects in a row?

simont: (Default)
2014-07-30 09:30 am

Adages II: The Adagening

Since I made my adages post last year, I've had a few afterthoughts, so here's a second instalment of things I seem to find myself saying a lot.

seven more short pontifications )

As before, of course, none of these should be taken as a universal truth or applicable in all circumstances, and all of them have their implicit ifs and buts and unlesses and exceptions. Caveat lector.