simont


 Wed 2019-03-27 13:36
Writing a solver for Net

I had mail yesterday from a computer science student asking how I went about writing the solver for the game Net in my puzzle collection. I took the question literally, i.e. not just ‘what is the algorithm?’ but ‘what was your thought process that allowed you to come up with the algorithm?’. I thought that was a good question, and so I wrote a fairly complete answer.

Having done that, of course, it seems a shame to only send it to a single recipient! So I'll post it here as well, in case anyone else is interested.

( long description )
 Sat 2019-03-09 10:34
Lvalue min and max

I was just reading a paper on gcd algorithms, and was struck by the following statement appearing in a snippet of pseudocode:

``max{u, v} := |u-v| / 2;``

in which the obviously intended meaning was ‘Replace the larger of u and v with the value computed on the RHS.’ In other words, the author is implicitly considering the `max` function to return an lvalue.

It had never occurred to me before that `max` could return an lvalue! If I'd wanted to express the same concept myself, I'd surely have either pre-sorted my variables, or written some tedious `if` statements out longhand, or else resorted to the notation of ‘arg max’, in which you replace your set of values to compare with some kind of mapping from indices to values (say, an array or hash): where `max(set)` simply returns the largest element of the set, `argmax(mapping)` returns whichever index maps to the largest value. I wouldn't even have thought of writing it like this.

I think for pseudocode purposes, any of my other options would have been worse, because each one introduces extra bookkeeping or formalism that the reader has to mentally clear out of the way to get at the underlying concept. So I quite liked this notation for its pedagogical value – but at the same time, I recognised that you'd have to translate it back into one of those other less clear idioms as soon as you wanted to implement the algorithm in a real programming language.

Or do you? Now that I think about it, there's no reason at all why a programming language couldn't define a function like `min` or `max` to return an lvalue. In fact, in C++, a simple implementation is as easy as adding three `&` characters to the most obvious version:

``// before
template<typename T>
T max(T u, T v) { return u>v ? u : v; }``

``// after
template<typename T>
T &max(T &u, T &v) { return u>v ? u : v; }``

But for maximum flexibility, you'd surely want `max` to return an lvalue if possible, and fall back to an rvalue if not. You'd like to be able to write `max(u,v) = w` as the above pseudocode did, but you also still want to be able to write `x = max(y,1000)` without a compile error.

And that doesn't seem so easy in C++. The obvious approach is to have both of the above template definitions in scope at the same time. Overload resolution will use the by-value version in cases where only that one matches, and you hope it will manage to pick the reference version in cases where both match. But sadly, when I tried it, that hope turned out to be misplaced – the language defines no preference order between those two templates.

I wonder if any other languages allow you to get the lvalue/rvalue flexible version to work!

 Mon 2019-02-18 13:39
In praise of xlogo

For those who haven't encountered it before, `xlogo` is a trivial X11 application that pops up a window showing the X Window System logo.

It's close to being the X equivalent of a ‘hello, world’ program, which makes it a good lightweight initial test case. Whenever I need to do a quick check of my X11 connectivity (which in my case usually means I'm checking that SSH X forwarding is basically alive), `xlogo` is a good choice of program to run: it won't spend ages setting itself up, and unlike text-only alternatives like `xdpyinfo`, it'll pop up a window on the target display, which makes it easy to check that my connection has gone to the right display.

But that's not all `xlogo` is good for. There are several other things I use it for:

As a specification. The source code of `xlogo` is the official location of the definition of the X logo. On one occasion I wanted to put the X logo into some kind of document (though I now can't remember what), and it turned out that the right way to do that was to read the `xlogo` source and hand-translate the drawing code into SVG.

As a ruler. Want to know how big something on your screen is in pixels? Fire up an `xlogo`, line it up with one edge of the thing, resize until the opposite edge lines up too, and if your window manager puts up a tooltip during window resizing (which I think all the ones I've ever used do), then you know the size.

As an alarm clock. One of my favourite ways to get a computer to notify me when a job has finished is to have it start up a giant `xlogo` – large enough to obscure most of my screen – when it's done. This is better than an audible alert because it's less antisocial in an open-plan office; it's better than a momentary window flash because if you happen to be looking away at the time, the `xlogo` is still there when you look back; and it's better than something like `echo Done` because there's no risk of the window with the message in it being hidden behind something at the time.

As a colour chart. Want to quickly check what `#012345` looks like, or one of those X-specific colour names like `SteelBlue` or `OliveDrab`? I don't know of a faster way than `xlogo -bg 'whatever'`.

I've also heard of people using it as an X session lifetime process – that is, the thing which, when it goes away, causes the rest of your X session to shut down. I don't do that one myself, and I think it's not common these days because desktop environments generally come with something specifically designed to play that role; but it still has some advantages, because `xlogo` is so simple that it's very unlikely to crash by accident, whereas if your session lifetime process is doing almost any kind of real work, it's more likely to run into fatal trouble.

In short, I find it a much more useful program than you might think! The only thing I've always found a bit annoying about it is that pressing ‘q’ doesn't close it – I've always had to go hunting for the close button. But I've just found out that that's configurable, so now I can do

``echo "XLogo*translations: :<Key>q: quit()" | xrdb -merge``

and now my `xlogo`s are easily dismissible, as I've always wanted. Hooray!

 Sat 2019-02-16 16:41

Somehow, in spite of two posts already full of personal adages of mine, I haven't exhausted my supply of them yet. In the last couple of years I've collected another postful of things that I noticed myself saying a lot and realised I'd been saying all along.

 Tue 2018-08-28 09:18
ScriptPost

I've been handwriting PostScript on occasion for more than half my life – it's one of those odd side skills that comes in handy more often than you might think – and somehow I think yesterday was the first occasion I can remember when I had to swap the values of two variables.

Usually, in a language where the assignment operator can only change one variable at a time, the idiom for swapping two variables requires three statements, using a temporary variable to save one of the values so that it isn't lost at the point where both variables briefly hold the same value.

PostScript is such a language. The syntax for doing a swap that way (supposing the variables you're swapping are called `foo` and `bar`) would be

``/tmp foo def
/foo bar def
/bar tmp def``

in which, for example, `/tmp` pushes a variable name on the operand stack; `foo` pushes a variable's value; and the `def` operator pops both of those operands back off and writes the given value into the variable identified by the given name.

But this is excessively roundabout. PostScript may only let you assign one variable at a time, but its stack structure lets you avoid defining a temporary name: in order to save the old value of the first variable, you only have to push it temporarily on the operand stack, and recover it after the redefinition of the other. For example:

``bar            % push the original value of bar
/bar foo def   % set the final value of bar
/foo exch def  % set foo to the value pulled back off the stack``

But a neater way avoids even the need for that `exch`:

``/foo bar       % parameters for the 'def' that will set foo
/bar foo       % parameters for the 'def' that will set bar
def            % set bar, using the second pair of params
def            % set foo, using the first pair``

or, once you're using it in anger and not explaining it every time,

``/foo bar /bar foo def def``

I thought that was unexpectedly cute! It's got the same set of words as the two separate assignment operations ‘`/foo bar def`’ and ‘`/bar foo def`’, but it uses the fact that PostScript assignment ‘statements’ are in some sense not atomic – specifically, that the evaluation of their RHS and the enaction of the assignment itself can be separated – to interleave those two statements in a way that avoids the usual problem.

And that's why I think I can't ever have had to do that before – if I had, then surely I'd have run into this mild cuteness already!

 Thu 2018-06-14 23:51
I fought the law

I had an idea for a silly word game on the way to the pub.

I remembered the famous song lyric ‘I fought the law and the law won’, and I wondered: what law? If you fought some particular law, might it win in a thematically appropriate way?

For example:

• I fought the law of gravity, and it won by two falls.
• I fought the Law of Large Numbers, and it won … on balance … in the long run.
• I fought Newton's Third Law, and it fought back just as hard.
• I fought the Law of Unintended Consequences, and – BADGER!

I feel there are probably more good ones of these that the combined wisdom of the pub didn't think of…

 Fri 2018-02-16 12:41
Accidental i18n

I told a silly story in the pub last night which I suddenly realise would make a fun post here as well. It's from a few years ago originally, but I don't think it matters.

You may have heard of that old chestnut in which an alleged Cambridge University researcher allegedly claims that people can still read written text with no problems even if the internal letters of each word are arbitrarily reordered, as long as the first and last letters of each word are still the right ones.

This is nonsense, of course, and it's been debunked before. But a few years ago, Gareth and I were discussing it, and I dashed off a Perl one-liner to do that scrambling transformation. (Perhaps it seemed like a good Perl-golf challenge to waste half an hour on, or something like that.)

I got a draft implementation working quickly enough, although it didn't quite fit on one line:

``$ perl -pe 's!(?<=\b[a-z])[a-z]*(?=[a-z]\b)!join"",map{$_->[1]}
            sort{$a->[0]<=>$b->[0]}map{[rand,$_]}split//,$&!egi'
But soft, what light through yonder window breaks?
But soft, what lghit tughroh yedonr woindw bkears?``

But shortly before the working version, I made a small error of a kind that Perl makes uniquely easy: I briefly got my scalar and list contexts confused, tried omitting the `join` step, and this happened:

``$ perl -pe 's!(?<=\b[a-z])[a-z]*(?=[a-z]\b)!map{$_->[1]}
            sort{$a->[0]<=>$b->[0]}map{[rand,$_]}split//,$&!egi'
But soft, what light through yonder window breaks?
B1t s2t, w2t l3t t5h y4r w4w b4s?``

Of course – if you don't explicitly use `join` to turn a list of characters back into a single string, then Perl's default conversion when you use a list in scalar context is to replace it with the length of the list. Slap forehead, mutter ‘oh yes, this is Perl’, fix bug.

But I'm glad I made the mistake, because look at what the wrong program is actually doing: it's exactly a tool for abbreviating long words in the style of ‘i18n’ and ‘l10n’. Of course that's not a hard thing to do, but I was very amused to have managed to do it completely by accident!

 Sun 2018-01-21 22:02
Poetry quasi-competition

I woke up this morning with an unholy combination of two famous lines of poetry running around my head. The mashed-up line scans nicely, but I couldn't work out what would be a good follow-up.

So, readers of this diary in all its locations, please send suggestions to follow after this opening line:

If I should die before I wake, think only this of me:

 Thu 2017-08-31 10:28
High-layer errors

‘To err is human,’ the quotation says. But it's not only human. To err, in general, is the common lot of humans, all other life forms, all mechanical devices, and surely anything else I haven't thought of for which you can specify any idea of a purpose or intention to have erred from.

Something that's closer to being human-specific (though surely still not completely) is to err at a high cognitive layer.

For example, a few years ago I got home from work, carrying my usual rucksack of random stuff. I opened the front door, found a letter on the mat, picked it up, shut the door behind me, carried letter and rucksack to the study, put them down, unzipped the rucksack … and then stood there confusedly wondering why I'd done that, and what I might have wanted out of the rucksack.

Of course, I might have just forgotten what I wanted by the time I got the bag open. But not this time. In this case, what had actually gone wrong was that I had meant to open the letter, not to open the rucksack.

But I'd somehow mistaken that intention, somewhere in my brain, and issued the wrong ‘open this thing’ instruction. And having made that error, the lower layers of my brain's planning apparatus and motor cortex had all cooperated to faithfully implement the wrong instruction they'd been given.

This is a particularly beautiful example of a high-layer cognitive error, because you can easily imagine what a lower-layer error in the same situation would have looked like. I didn't, for example, pick up the paper knife and try to slit the rucksack open with it (or, more likely, stop half way through that motion and find myself confusedly wondering how to proceed). Instead, the erroneous intention was carried through perfectly competently, by choosing all the right subgoals and pursuing them in the right way, and it wasn't until the rucksack was successfully open in front of me that I was confronted with the realisation that I'd made a mistake.

A slightly different example of this phenomenon happened to me the other day, in a computing context. I was sitting at a `bash` prompt in a `git` checkout, and I ran `git stash`, then `git pull -r`, and then pressed Ctrl-Y … and then sat there confusedly wondering why I'd just pasted some old half-written piece of a past command on to my shell command line.

The right way to complete the sequence would have been the command `git stash pop`, to restore the uncommitted changes that I'd had in the checkout before realising I needed to do the disruptive pull operation. So why did I press Ctrl-Y instead?

Because in other situations, I'll sometimes be half way through typing a command, and then realise I need to run another command first (e.g. a `cd` command, so as to be in the right directory for the original command to work). In that situation, my habit is to press Ctrl-A then Ctrl-K, to transfer the half-typed command into `bash`'s paste buffer; then run the preparatory command; and then press Ctrl-Y to paste the original command back on to my command line, after which I can finish typing it and run it.

In other words, the `bash`-driving part of my brain has two separate procedures for the high-level idea of ‘set this half-done thing temporarily aside, do a disruptive thing, get the half-done thing back from wherever you stashed it’. One is for half-done things in the form of uncommitted work in a `git` checkout, and is spelled `git stash` / `git stash pop`. The other is for half-done things in the form of unfinished shell commands, and is spelled `^A^K` / `^Y`.

And I'd simply forgotten, half way through the action, which of those two procedures I was in the middle of performing, so I did the second half of the wrong one.

These kinds of high-layer error are fascinating to me. They're often comical – much more so than very low-layer errors, such as a typo, or a fumble with the paper knife. And it's always seemed to me that they shed light on the functioning of the brain in a way that low-layer errors don't.

Also, I find that a common feature of high-layer errors is that they cause a great deal more confusion afterwards. If I make a low-layer motor error like a typo, it will typically be instantly obvious to me that what I meant to do is not the same as what I did do; it might be inconvenient to recover from the mistake, but it's not confusing. But with high-layer errors, in which I've just quite competently performed completely the wrong task, I seem to also have half-convinced myself that that wrong task was what I meant to do, and it will take me several seconds of complete confusion to recover from that belief and figure out what just happened.

For example, in the above cases, after I'd opened the rucksack I spent a while trying to remember what I might have wanted from it; and having pasted some old nonsense on to my shell command line, I started from the assumption that I'd meant to do that and that the nonsense was about to be useful in some way, if only I could just remember what I might have had in mind. In both cases, my prior for ‘you meant to do that and it will make sense in a moment’ was much higher than my prior for ‘this was a weird high-layer error and you didn't really mean to do it at all’.

I suppose that does make Bayesian sense, since high-layer errors seem quite rare – which is another reason I find them notable and interesting when they occur!

 Thu 2017-08-03 11:43
Deoptimisation can be a virtue

There's a well-known saying in computing: premature optimisation is the root of all evil. The rationale is more or less that tweaking code to make it run faster tends to make it less desirable in various other ways – less easy to read and understand, less flexible in the face of changing future requirements, more complicated and bug-prone – and therefore one should get into the habit of not habitually optimising everything proactively, but instead, wait until it becomes clear that some particular piece of your code really is too slow and is causing a problem. And then optimise just that part.

I have no problem in principle with this adage. I broadly agree with what it's trying to say. (Although I must admit to an underlying uneasiness at the idea of most code being written with more or less no thought for performance. I feel as if that general state of affairs probably contributes to a Parkinson's Law phenomenon, in which software slows down to fill the CPU time available, so that the main effect of computers getting faster is not that software actually delivers its results more quickly but merely that programmers can afford to be lazier without falling below the ‘just about fast enough’ threshold.)

But I have one big problem with applying it in practice, which is that often when I think of the solution to a problem, the first version of it that I am even conscious of is already somewhat optimised. And sometimes it's optimised to the point of already being incomprehensible.

For example, ages ago I put up a web page about mergesorting linked lists; a reader critiqued my presentation of the algorithm as resembling ‘pre-optimised assembler translated into prose’, and presented a logical derivation of the same idea from simple first principles. But that derivation had not gone through my head at any point – the first version of the algorithm that even worked for me at all was more or less the version I published.

Another example of this came up this week, in an algorithmic sort of maths proof – I had proved something to be possible at all by presenting an example procedure that actually did it, and it turned out that the procedure I'd presented was too optimised to be easily understood. In the process of thinking it up in the first place, I'd spotted that one of the steps in the procedure would do two of the necessary jobs at once, and I then devoted more complicated verbiage to explaining that fact than it would have taken to present a much simpler procedure that did the two jobs separately. The simpler procedure would have taken more steps, but when all you're trying to prove is that some procedure will work, that doesn't matter at all.

I think the problem I have is that although I recognise in principle that optimisation and legibility often pull in opposite directions, and I can (usually) resist the urge to optimise when it's clear that legibility would suffer, one thing I'm very resistant to is deliberate de-optimisation: once I've got something that has been optimised (whether on purpose or by accident), it basically doesn't even occur to me in the first place to make it slower on purpose. And if it did occur to me, I'd still feel very reluctant to actually do it.

This is probably a bias I should try to get under control. The real meaning of ‘premature optimisation is bad’ is that the right tradeoff between performance and clarity is often further towards clarity than you think it is – and a corollary is that sometimes it's further towards clarity than wherever you started, in which case, deoptimisation can be a virtue.

 Wed 2017-03-29 08:59
More parser theory

I had a conversation recently about low-priority prefix operators in infix expression grammars, which left me mildly uncertain about what they ought to mean. So here's a quick straw poll.

Suppose I have an expression grammar in which the multiplication operator `*` and the addition operator `+` have their usual relative priority (namely, `*` binds more tightly, i.e. is evaluated first). Then suppose I – perhaps unwisely – introduce a prefix operator, which I'll call `PFX` for want of a better name, which has intermediate priority between the two, so that

• `PFX a + b` behaves like `(PFX a) + b`, i.e. the `PFX` is evaluated first
• `PFX a * b` behaves like `PFX (a * b)`, i.e. the `PFX` is evaluated second.
That's simple enough (if unusual). But things get weirder when `PFX` appears on the right of another operator. Specifically, what would you imagine happens to this expression:

``a * PFX b + c``

in which you can't process the operators in priority order (`*` then `PFX` then `+`), because the `PFX` necessarily has to happen before the `*`?

Poll #18119 BODMASWTFBBQ
Open to: Registered Users, detailed results viewable to: All, participants: 10

In what order should the parser process those operators?

PFX then * then + to give (a * (PFX b)) + c
5 (50.0%)

+ then PFX then * to give a * (PFX (b + c))
0 (0.0%)

PFX then + then * to give a * ((PFX b) + c)
0 (0.0%)

None! Report a parse error and demand some disambiguating parentheses.
4 (40.0%)

Low-priority prefix operators misparsed my grandparents, you insensitive clod
1 (10.0%)

 Mon 2017-02-20 20:09

Over the weekend, I realised, extremely belatedly, that the expression language I designed for my free-software project `spigot` has a grammar bug. Specifically, it's a context-dependency analogous to the C `typedef` bug: the same expression can parse differently depending on whether a given identifier is currently defined to be a function or a variable, which means that you need to do your scope analysis in sync with the parse, so that you can know what set of definitions is currently in scope for the subexpression you're currently looking at.

 [ xpost | http://simont.livejournal.com/244254.html ]

 Mon 2016-10-10 16:46
Is there a name for this bad argument?

There's a particular annoying pattern I notice in debate, in which one person criticises another's choice of argument on the basis of a sort of misapplication of pragmatics.

Here's a concrete (if slightly melodramatic) example. Imagine we're drinking together, and you demand, suspiciously, ‘Wait, how do I know you haven't poisoned this bottle of wine?’. To which I respond, ‘I'm drinking from it too, so that would be a really bad idea!’ Now if you were to think, or say, ‘Oh, so you would have poisoned it if we hadn't been drinking from the same bottle?’, you'd be committing this fallacy.

Because in fact, of course, the main reason I haven't tried to poison you is some combination of because I don't want to and because I'm too moral to do such a wrong thing, and both of those reasons would still apply regardless of any detail of who was drinking what.

But you can't check those statements, because they're about stuff entirely inside my own head. So if I'd tried to use either one as a defence, then you'd be no more convinced of my non-murderousness than you are now, because if you can believe I might try to poison you in the first place, then you'd have no trouble also believing that I'd lie about my motives in the course of the attempt. Whereas you can easily verify for yourself that I am indeed drinking from the same bottle, and perhaps you might find it harder to believe I was self-sacrificingly murderous than merely self-interestedly murderous.

(Since this is a silly hypothetical example, let's assume we can disregard all the even more improbable edge cases beloved of fiction, like the poison being in the ice cubes, or smeared on your particular glass in advance, or that I took the antidote beforehand, or have spent ten years building up resistance to iocane powder, etc…)

Anyway. That's why I chose that particular reason as the one to mention: not because it was my core reason or my only reason, but because it was one you'd be more likely to believe, because you could check it yourself.

I think the general pattern of which this is an example is: it's a fallacy to assume that someone who has mentioned one good or bad property of a thing (or reason to do it or not do it, or whatever) must have chosen that particular property because it's the most important or the only one, rather than choosing the property most appropriate to what particular goal the utterance as a whole is trying to achieve.

In this silly example, my goal is to try to convince you that you're safe; so I have to pick a reason that will actually manage to do that job, rather than one that is more important to me but likely to be less effective. In other kinds of debate, one might similarly choose the argument that appeals most to the particular audience one is trying to convince, not the one that is most fundamental in the arguer's own mind. Or you might avoid particular arguments because you know they'll cause some enormous derailing sidetrack. Any number of reasons, really.

So. Does this fallacy have a well-known name?

 [ xpost | http://simont.livejournal.com/244076.html ]

 Tue 2016-06-21 08:47
Regular language

I noticed yesterday after writing a comment in some code that one of my writing habits had changed, without me really noticing. So I thought I'd see what other people's opinions were.

Poll #17528 A regular holy war
Open to: Registered Users, detailed results viewable to: All, participants: 37

How do you write 'regular expression' in abbreviated form?

regexp
10 (28.6%)

regex
24 (68.6%)

Something else
0 (0.0%)

I only ever write it unabbreviated
0 (0.0%)

I don't ever write it at all
1 (2.9%)

How do you pronounce the g in regexp / regex ?

Hard, like in 'regular' (IPA /ɡ/)
22 (59.5%)

Soft, like in 'Reginald' (IPA /dʒ/)
13 (35.1%)

Something else
1 (2.7%)

I never pronounce these abbreviations
1 (2.7%)

 [ xpost | http://simont.livejournal.com/243797.html ]

 Sun 2016-03-27 13:41
Found in the files

I visited my mum yesterday, and we ended up going through a box file of snippets she'd saved from my childhood. Among them, printed on silvery ZX thermal printer paper that brought a nostalgic smile to my face in its own right, was the following Sinclair BASIC program, printed together with its output.

``   5 BORDER 0: PAPER 0: INK 7: CLS
  10 DIM x(40)
  20 FOR z=1 TO 40: LET a=z*0.1
  30 LET y=SQR (1+(a*a))+SQR ((4-a)*(4-a)+4)
  40 LET x(z)=y
  50 NEXT z
  60 LET smsf=1
  70 FOR z=1 TO 40: LET a=z*0.1
  80 IF x(smsf)>x(z) THEN LET smsf=z
  90 NEXT z
  95 PRINT "GG end to J", "Road length"
 100 PRINT smsf,x(smsf)``

``GG end to J     Road length
13              5.0001815``

I don't know why Mum chose to save that program in particular, out of all the ones I must have printed out in the few years I had the Spectrum and that printer. She guessed that it must have been the first one I wrote, but it seems a bit sophisticated for that – surely there would have been a pile of ‘hello, world’ type things before this quite mathematical one?

Regardless, I had completely forgotten it, and it makes me smile at the fact that what it seems to be doing is numerically testing the proposition that a straight line is the shortest distance between two points. (Line 30 calculates the distance from (0,0) to (4,3) via the point (a,1); the rest of the program tries this for lots of different values of a, and finds the one that gives the shortest total distance, which unsurprisingly is the point where the straight-line route intersects the line y=1, to the limits of rounding error.)

But I do wonder whether this was something I typed in from elsewhere in full, or followed hints in a manual, or what. I'm particularly curious about the variable name `smsf` and the cryptic legend ‘GG end to J’…

 [ xpost | http://simont.livejournal.com/243508.html ]

 Tue 2016-03-22 09:53
Terminology gap

Yesterday in a technical conversation I used the phrase ‘HP-complete’.

I had intended it, by analogy with ‘NP-complete’, to mean that if the problem under discussion could be solved, the solution would necessarily include a solution to the Halting Problem, i.e. the problem was as hard as the Halting Problem, i.e. uncomputable.

There are several other well known ‘-complete’ phrases which analogise in the same way – ‘Turing-complete’ and ‘AI-complete’ – and it seems to me that ‘HP-complete’ fits right into this framework and has a technically precise and useful meaning. But for some reason it isn't in common usage in the way that those other ‘-complete’ phrases are, so the person I was talking to didn't get what I meant and I had to explain. Bah. I don't see why it isn't part of the standard lexicon, for all the same reasons!

I suppose we already have the word ‘uncomputable’, so you could argue that we don't need ‘HP-complete’ as a synonym. But I think it's not quite a synonym, in that it also conveys a hint about why it's uncomputable, or at least about the train of thought that made me conclude that it was.

(I suppose it's conceivable, in my original context, that I should have chosen ‘HP-hard’ rather than ‘HP-complete’ – I don't think I'd intended to rule out the possibility that the problem under discussion was harder than the Halting Problem :-)

 [ xpost | http://simont.livejournal.com/243239.html ]

 Thu 2016-03-10 12:06
I find myself noticing scansion

Somewhere in my head is a very specialised pattern-recognition neuron, which seems to fire whenever I encounter – or, even more so, inadvertently write – any phrase or sentence that would scan perfectly as the first line of a limerick.

It's a very persistent and attention-grabbing neuron, for some reason. Every time it fires, I have to spend the next half hour vigorously resisting the urge to try to come up with the rest of the limerick. Even if the triggering phrase is something utterly, tediously mundane, as for example the one that emerged from my fingers just now as a hasty commit message: ‘A cleanup I noticed in passing’.

 [ xpost | http://simont.livejournal.com/243057.html ]

 Mon 2016-03-07 13:30
An unsatisfying resolution

I've not been posting here in a while, and it seems to me that one reason why not is that I increasingly don't feel as if I have the brain-space to put together a long and well thought-out post about anything.

Perhaps, therefore, I should begin to fix this by posting short and/or inconsequential things. To kick off with, here's one that is both.

I lost a sock the other week. I did it in the way you normally lose socks: at one time I had N socks, and at a time considerably later I had N-1, and there were a lot of things that happened in between, so I don't know which was responsible.

(That's what makes the sock lost, of course – if I could narrow down to one event, I could have just gone and got the sock back from wherever that one happened.)

I looked for it everywhere, and it didn't turn up. I resigned myself to having only N-1 of those socks – until I did the laundry yesterday, and found that when I came to hang everything up, I inexplicably had N of them again.

So I found the sock in the same way as I lost it: I don't know which of the things I did caused it to reappear.

That's the worst way to find a sock. The problem is solved, but in a way that sheds no light on the mystery! Arrrgh!

 [ xpost | http://simont.livejournal.com/242912.html ]

 Mon 2015-08-10 12:55
Magic and language in fantasy fiction

Since I've been musing about fiction recently, here's another thought that crossed my mind.

Fantasy fiction often has a magic system involving spells cast in spoken language. But what language? Why does that language work and not another? Or would another language work? Would it depend on the spell? On the caster? On the location? It seems to me that there are quite a few plausible ‘cosmologies of magic’ which would cause different answers to those questions, many of which have specific examples in existing fiction, and I wonder if there are any more I've missed out.

So, what have I missed?

 [ xpost | http://simont.livejournal.com/242590.html ]

 Mon 2015-08-03 10:37
Random fiction question: non-magical archaeology

A question occurred to me last night. Perhaps the two best known fictional archaeologists (taking the term somewhat loosely), across fiction in all media, are Indiana Jones and Lara Croft. Both of them have in common that they investigate things about which there were rumours of ancient magical powers, or gods, or other such supernatural and powerful stuff. And they're right – the Ark of the Covenant, the Holy Grail, the Dagger of Xian, etc, all really do perform as advertised.

What are the best known examples of fictional archaeologists who do not unearth ancient magical artefacts, and the only thing they ever find out is information about what happened in the past?

For these purposes, I think I'm going to rule that the actual archaeological discoveries have to be part of the plot: having a character who happens to be an archaeologist isn't sufficient, if the story only focuses on some other aspect of their life. (Even if it's a somewhat work-related aspect, such as worries about career progression, or conflicts with co-workers.)

I only managed to come up with one example of this at all, namely Asimov's Nightfall. I'm sure there must be others, though.

 [ xpost | http://simont.livejournal.com/242421.html ]