I wrote another article, and made an RSS feed

For those who were interested in my previous article about aperiodic tiling generation, I've written a sequel, extending the same random generation algorithm to the even newer 'Spectre' tiling discovered by the same people who found the hat tiling in March. Here it is:

https://www.chiark.greenend.org.uk/~sgtatham/quasiblog/aperiodic-spectre/

Also, it seems silly for me to make posts here announcing every article of this length I write – especially since I have another in progress already. So I've finally got round to setting up an RSS feed for the area of my website where I publish them. RSS-oriented people should be able to add this to their aggregator of choice:

https://www.chiark.greenend.org.uk/~sgtatham/quasiblog/feed.xml

And for those reading on Dreamwidth, simont_quasiblog_feed.

I wrote an article about aperiodic tiling generation

A few weeks ago, when I heard about the shiny new aperiodic monotile that had been discovered, I immediately had to add it to the collection of grids that you can play Loopy on, in my puzzle collection. This went fine, except for the fact that I got most of the way through implementing one algorithm for generating pieces of the tiling, and then threw it away and started again because I'd had a much better idea.

In fact, there are so many things I like about the algorithm I ended up with that I've decided it deserves a proper public writeup. So I've spent the Easter weekend hard at work on producing one. I present:

https://www.chiark.greenend.org.uk/~sgtatham/quasiblog/aperiodic-tilings/

Imaginary bing!

I was jolted awake by the first ‘bing!’ of my alarm clock. But wait – I'm on holiday this week, so my alarm clock ought not to be turned on!

And it's not. I'd managed to dream the sound. And once the first dream-bing had woken me up, the second bing never came, because by that time I was awake enough to realise the alarm clock wasn't ringing after all.

All right, that's just one of those hilarious japes dreams play on you. But the impressive part was that the dream bing occurred within 2 minutes of my actual alarm time!

Silliest housework accident ever

I was cleaning my bathroom just now with a bottle of spray cleaner. I put the cleaner down for a moment and bent down to scrub a surface. I heard a momentary clatter, looked up again – and the cleaner bottle was nowhere to be seen. At all.

After some searching, and some headscratching, I recovered it from the neighbour's back garden, by knocking on their door and asking nicely if they could check whether it was there.

I had put the bottle down on the bathroom window sill, which is wide enough to use as a shelf for storing all kinds of stuff. The window was open. The bottle was nearly empty, hence light enough that a mild gust of wind could just conceivably have knocked it out of the window. In nearly 14 years of living here, that's never happened before, so it seemed pretty unlikely, especially since no such gust of wind had been perceptible to me; but by the Holmesian principle of elimination (it most certainly was not anywhere in the bathroom any more), I concluded that, no matter how improbable, that surely had to be the truth.

But just below the bathroom window is a sloping conservatory roof. After ruling out its having come to rest on that roof, and also checking whether it had slid down the roof slope and fallen just off the edge into an awkward hard-to-reach part of my garden, the Holmes principle again (with an even stronger ‘no matter how improbable’ clause) said that it must have bounced on my conservatory roof, and somehow achieved enough boing to get all the way over the fence into the neighbour's garden.

And it had! Good job too – if I'd gone to the neighbour with that question and it hadn't been there, I'd have sounded even sillier than I did.

Earworm of the day

I don't even know Hamilton that well, but even I am capable of getting a sudden earworm when the colleague presenting part of an all-afternoon meeting finishes up with ‘I have massively overrun my slot.’

Nonsense and nonsensibility

I saw a software licence agreement recently in which a subheading read ‘Term and Termination’, and my first thought was that I'd had no idea Jane Austen had once worked as a corporate lawyer.

Dewy-eyed

It was chilly this morning. Probably the coldest it's been since First Lockdown kicked off in March. So I went out for my morning walk with full cold weather gear: big coat, fur hat, gloves.

Normally, in those clothes, I just have to put up with my actual face still feeling chilly, and rely on blood circulation from the warmer parts to keep it manageable. But not this year! A Covid face mask will be just the thing to keep even my face warm, I thought. And it did: the air around my nose and mouth was at a lovely temperature. A little humid, but not uncomfortably so.

But what I wasn't expecting was that my moisture-laden breath, mostly expelled upward and downward because of the mask in the way, would graze lightly across my eyelashes on its way out and – since they are of course not under the face mask and hence much colder than my breath – promptly condense all its moisture again in the form of dew.

That was a very strange sensation! I'm used to my eyes themselves watering from time to time, for all the usual reasons human eyes do that. But water getting in my eyes whose source is the tips of my eyelashes feels very different, and I haven't quite got the hang of how to blink it away effectively yet.

(Also, being slow and snoozy and early-Monday-morning, it took me ten minutes to even work out what was going on. Ahem.)

Being less good at something impresses people more

A long time ago, I was in a nightclub with a friend, and we ran into a woman who we'd both met a few times before. My friend struggled visibly for a moment, and then correctly remembered her name. She was pleased and flattered. I had known her name immediately without any struggle, but she didn't look flattered at that!

A while back, a group of my friends used to play a general-knowledge quiz game. One question I particularly remember from it was ‘Which Bond film was Ringo Starr's wife in?’, which we only managed to answer as a team effort: one of us knew Ringo Starr's wife was Barbara Bach, and another knew Barbara Bach was in ‘The Spy Who Loved Me’, but neither of us knew both facts beforehand. That one sticks in my mind as more memorably satisfying than any of the questions that I got right on my own, even though those demonstrated more knowledge on my part.

If you release a piece of software with a security hole in it, and then fix it promptly and competently when someone finds it, users will be vocally grateful. You'll get compliments on your dedication and your integrity, and it will increase general trust in you to maintain a security product – far more so, in my experience, than if you'd never introduced the hole in the first place.

Psychologically, it's easy to come up with reasons why this general pattern of human behaviour makes sense. But it seems like a cognitive weakness nonetheless: surely there must be a multitude of cases where it creates a perverse incentive to pretend to be less competent than you are, or to make deliberate mistakes so you can earn kudos for fixing them…

Writing a soluble-grid generator for Mines

Recently, two people emailed me asking the same question about my puzzle collection: they wanted to know how it's possible for Mines, my Minesweeper implementation, to guarantee that every grid can be solved without guesswork.

So I wrote up a longish answer to that question, and sent it to both people. And it seems like a waste not to post it here too. Perhaps it could be article number 2 in the informal series that also includes the article I wrote in March about how the solver for Net works. ( algorithmics )

The X-Y macro

Here's a fun piece of C preprocessor trickery I thought up earlier this year. ( preprocessor noodling )
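
For anyone who hasn't met the classic ‘X macro’ idiom that I'd guess gives this trick its name, here's a minimal sketch of the standard, well-known technique – just background, not the variant behind the cut, and the names here are mine purely for illustration:

    /* Classic X macro: write the list of items exactly once... */
    #define COLOUR_LIST(X) \
        X(RED) \
        X(GREEN) \
        X(BLUE)

    /* ...then expand it several times, with a different definition of X
     * each time, to generate parallel declarations that can't get out
     * of sync with each other. */
    #define AS_ENUM(name) name,
    enum colour { COLOUR_LIST(AS_ENUM) NCOLOURS };

    #define AS_STRING(name) #name,
    static const char *const colour_names[] = { COLOUR_LIST(AS_STRING) };

Adding a new colour to COLOUR_LIST then automatically updates both the enum and the table of names.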

Writing a solver for Net

I had mail yesterday from a computer science student asking how I went about writing the solver for the game Net in my puzzle collection. I took the question literally, i.e. not just ‘what is the algorithm?’ but ‘what was your thought process that allowed you to come up with the algorithm?’.

I thought that was a good question, and so I wrote a fairly complete answer. Having done that, of course, it seems a shame to only send it to a single recipient! So I'll post it here as well, in case anyone else is interested. ( long description )

Lvalue min and max

I was just reading a paper on gcd algorithms, and was struck by the following statement appearing in a snippet of pseudocode:

    max{u, v} := |u-v| / 2;

in which the obviously intended meaning was ‘Replace the larger of u and v with the value computed on the RHS.’ In other words, the author is implicitly considering the max function to return an lvalue.

It had never occurred to me before that max could return an lvalue! If I'd wanted to express the same concept myself, I'd surely have either pre-sorted my variables, or written some tedious if statements out longhand, or else resorted to the notation of ‘arg max’, in which you replace your set of values to compare with some kind of mapping of indices to values (say, an array or hash), and where max(set) simply returns the largest element of the set, and argmax(mapping) returns whatever index maps to the largest value. I wouldn't even have thought of writing it like this.

I think for pseudocode purposes, any of my other options would have been worse, because each one introduces extra bookkeeping or formalism that the reader has to mentally clear out of the way to get at the underlying concept. So I quite liked this notation for its pedagogical value – but at the same time, I recognised that you'd have to translate it back into one of those other less clear idioms as soon as you wanted to implement the algorithm in a real programming language.

Or do you? Now that I think about it, there's no reason at all why a programming language couldn't define a function like min or max to return an lvalue. In fact, in C++, a simple implementation is as easy as adding three & characters to the most obvious version:

    // before
    template<typename T> T max(T u, T v) { return u>v ? u : v; }

    // after
    template<typename T> T &max(T &u, T &v) { return u>v ? u : v; }

But for maximum flexibility, you'd surely want max to return an lvalue if possible, and fall back to an rvalue if not. You'd like to be able to write max(u,v) = w as the above pseudocode did, but you also still want to be able to write x = max(y,1000) without a compile error. And that doesn't seem so easy in C++. The obvious approach is to have both of the above template definitions in scope at the same time. Overload resolution will use the by-value version in cases where only that one matches, and you hope it will manage to pick the reference version in cases where both match. But sadly, when I tried it, that hope turned out to be misplaced – the language defines no preference order between those two templates.

I wonder if any other languages allow you to get the lvalue/rvalue flexible version to work!
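
(One variation might fare better, though: take the fallback overload's parameters by const reference instead of by value. Then, for two modifiable lvalues, binding to the non-const references should be a strictly better conversion sequence – at least by my reading of the overload-ranking rules – so the reference version wins, while anything involving an rvalue can only match the fallback. A sketch, assuming that reading is right:)

    #include <iostream>

    // Preferred overload: both arguments are modifiable lvalues, so we
    // can hand back a reference that the caller can assign through.
    template<typename T> T &max(T &u, T &v) { return u>v ? u : v; }

    // Fallback overload: takes const references rather than plain T, so
    // that for two non-const lvalues the overload above should be a
    // strictly better match instead of an ambiguous rival. Returns by value.
    template<typename T> T max(const T &u, const T &v) { return u>v ? u : v; }

    int main() {
        int u = 3, v = 5;
        max(u, v) = 7;          // reference version: v (the larger) becomes 7
        int x = max(u, 1000);   // value version: 1000 can't bind to int&
        std::cout << u << " " << v << " " << x << "\n";  // prints "3 7 1000"
    }

I'd want to check that a few compilers agreed with me before relying on it, of course.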

In praise of xlogo

For those who haven't encountered it before, xlogo is a trivial X11 application that pops up a window showing the X Window System logo. It's close to being the X equivalent of a ‘hello, world’ program, which makes it a good lightweight initial test case. Whenever I need to do a quick check of my X11 connectivity (which in my case usually means I'm checking that SSH X forwarding is basically alive), xlogo is a good choice of program to run: it won't spend ages setting itself up, and unlike text-only alternatives like xdpyinfo, it'll pop up a window on the target display, which makes it easy to check that my connection has gone to the right display.

But that's not all xlogo is good for. There are several other things I use it for:

- As a specification. The source code of xlogo is the official location of the definition of the X logo. On one occasion I wanted to put the X logo into some kind of document (though I now can't remember what), and it turned out that the right way to do that was to read the xlogo source and hand-translate the drawing code into SVG.

- As a ruler. Want to know how big something on your screen is in pixels? Fire up an xlogo, line it up with one edge of the thing, resize until the opposite edge lines up too, and if your window manager puts up a tooltip during window resizing (which I think all the ones I've ever used do), then you know the size.

- As an alarm clock. One of my favourite ways to get a computer to notify me when a job has finished is to get it to start up a giant xlogo – large enough to obscure most of my screen – when it's done. This is better than an audible alert because it's less antisocial in an open-plan office; it's better than a momentary window flash because if you happen to be looking away at the time, the xlogo is still there when you look back; and it's better than something like echo Done because there's no risk of the window with the message in it being behind something at the time.

- As a colour chart. Want to quickly check what #012345 looks like, or one of those X-specific colour names like SteelBlue or OliveDrab? I don't know of a faster way than xlogo -bg 'whatever'.

I've also heard of people using it as an X session lifetime process – that is, the thing which, when it goes away, causes the rest of your X session to shut down. I don't do that one myself, and I think it's not common these days because desktop environments generally come with something specifically designed to play that role; but it still has some advantages, because xlogo is so simple that it's very unlikely to crash by accident, whereas if your session lifetime process is doing almost any kind of real work, it's more likely to run into fatal trouble.

In short, I find it a much more useful program than you might think! The only thing I've always found a bit annoying about it is that pressing ‘q’ doesn't close it – I've always had to go hunting for the close button. But I've just found out that that's configurable, so now I can do

    echo "XLogo*translations: :<Key>q: quit()" | xrdb -merge

and now my xlogos are easily dismissible, as I've always wanted. Hooray!

Adages III

In spite of two posts already full of personal adages of mine, I somehow haven't exhausted my supply of them yet. In the last couple of years I've collected another postful of things that I noticed myself saying a lot and realised I'd been saying all along. ( much adage about nothing )

ScriptPost

I've been handwriting PostScript on occasion for more than half my life – it's one of those odd side skills that comes in handy more often than you might think – and somehow I think yesterday was the first occasion I can remember when I had to swap the values of two variables.

Usually, in a language where the assignment operator can only change one variable at a time, the idiom for swapping two variables requires three statements, using a temporary variable to save one of the values across the moment when you've just copied the other value from one variable to the other (so that they both briefly hold the same value). PostScript is such a language. The syntax for doing a swap that way (supposing the variables you're swapping are called foo and bar) would be

    /tmp foo def
    /foo bar def
    /bar tmp def

in which, for example, /tmp pushes a variable name on the operand stack; foo pushes a variable's value; and the def operator pops both of those operands back off and writes the given value into the variable identified by the given name.

But this is excessively roundabout. PostScript may only let you assign one variable at a time, but its stack structure lets you avoid defining a temporary name: in order to save the old value of the first variable, you only have to push it temporarily on the operand stack, and recover it after the redefinition of the other. For example:

    bar              % push the original value of bar
    /bar foo def     % set the final value of bar
    /foo exch def    % set foo to the value pulled back off the stack

But a neater way avoids even the need for that exch:

    /foo bar         % parameters for the 'def' that will set foo
    /bar foo         % parameters for the 'def' that will set bar
    def              % set bar, using the second pair of params
    def              % set foo, using the first pair

or, once you're using it in anger and not explaining it every time,

    /foo bar /bar foo def def

I thought that was unexpectedly cute! It's got the same set of words as the two separate assignment operations ‘/foo bar def’ and ‘/bar foo def’, but it uses the fact that PostScript assignment ‘statements’ are in some sense not atomic – specifically, that the evaluation of their RHS and the enaction of the assignment itself can be separated – to interleave those two statements in a way that avoids the usual problem.

And that's why I think I can't ever have had to do that before – if I had, then surely I'd have run into this mild cuteness already!

I fought the law

I had an idea for a silly word game on the way to the pub. I remembered the famous song lyric ‘I fought the law and the law won’, and I wondered: what law? If you fought some particular law, might it win in a thematically appropriate way? For example:

- I fought the law of gravity, and it won by two falls.
- I fought the Law of Large Numbers, and it won … on balance … in the long run.
- I fought Newton's Third Law, and it fought back just as hard.
- I fought the Law of Unintended Consequences, and – BADGER!

I feel there are probably more good ones of these that the combined wisdom of the pub didn't think of…

Accidental i18n

I told a silly story in the pub last night which I suddenly realise would make a fun post here as well. It's from a few years ago originally, but I don't think it matters.

You may have heard of that old chestnut in which an alleged Cambridge University researcher allegedly claims that people can still read written text with no problems even if the internal letters of each word are arbitrarily reordered, as long as the first and last letters of each word are still the right ones. This is nonsense, of course, and it's been debunked before. But a few years ago, Gareth and I were discussing it, and I dashed off a Perl one-liner to do that scrambling transformation. (Perhaps it seemed like a good Perl-golf challenge to waste half an hour on, or something like that.)

I got a draft implementation working quickly enough, although it didn't quite fit on one line:

    $ perl -pe 's!(?<=\b[a-z])[a-z]*(?=[a-z]\b)!join"",map{$_->[1]} sort{$a->[0]<=>$b->[0]}map{[rand,$_]}split//,$&!egi'
    But soft, what light through yonder window breaks?
    But soft, what lghit tughroh yedonr woindw bkears?

But shortly before the working version, I made a small error of a kind that Perl makes uniquely easy: I briefly got my scalar and list contexts confused, tried omitting the join step, and this happened:

    $ perl -pe 's!(?<=\b[a-z])[a-z]*(?=[a-z]\b)!map{$_->[1]} sort{$a->[0]<=>$b->[0]}map{[rand,$_]}split//,$&!egi'
    But soft, what light through yonder window breaks?
    B1t s2t, w2t l3t t5h y4r w4w b4s?

Of course – if you don't explicitly use join to turn a list of characters back into a single string, then Perl's default conversion when you use a list in scalar context is to replace it with the length of the list. Slap forehead, mutter ‘oh yes, this is Perl’, fix bug.

But I'm glad I made the mistake, because look at what the wrong program is actually doing: it's exactly a tool for abbreviating long words in the style of ‘i18n’ and ‘l10n’. Of course that's not a hard thing to do, but I was very amused to have managed to do it completely by accident!
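
In case anyone wants to unpick the golfed version: since the /e modifier makes the replacement section ordinary Perl code, it can be spread over several lines with comments. Here's the same working one-liner again, just laid out readably – read the pipeline from bottom to top (and note that a ‘!’ anywhere in a comment would end the replacement early, since ! is the delimiter):

    $ perl -pe 's!(?<=\b[a-z])[a-z]*(?=[a-z]\b)!  # match: a word's interior letters
                join "",                          # glue the letters back into a string
                map { $_->[1] }                   # strip off the sort keys
                sort { $a->[0] <=> $b->[0] }      # order by the random keys
                map { [rand, $_] }                # pair each letter with a random key
                split //, $&                      # split the matched text into letters
                !egi'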

Poetry quasi-competition

I woke up this morning with an unholy combination of two famous lines of poetry running around my head. The mashed-up line scans nicely, but I couldn't work out what would be a good follow-up. So, readers of this diary in all its locations, please send suggestions to follow after this opening line:

    If I should die before I wake, think only this of me:

High-layer errors

‘To err is human,’ the quotation says. But it's not only human. To err, in general, is the common lot of humans, all other life forms, all mechanical devices, and surely anything else I haven't thought of for which you can specify any idea of a purpose or intention to have erred from.

Something that's closer to being human-specific (though surely still not completely) is to err at a high cognitive layer.

For example, a few years ago I got home from work, carrying my usual rucksack of random stuff. I opened the front door, found a letter on the mat, picked it up, shut the door behind me, carried letter and rucksack to the study, put them down, unzipped the rucksack … and then stood there confusedly wondering why I'd done that, and what I might have wanted out of the rucksack.

Of course, I might have just forgotten what I wanted by the time I got the bag open. But not this time. In this case, what had actually gone wrong was that I had meant to open the letter, not to open the rucksack. But I'd somehow mistaken that intention, somewhere in my brain, and issued the wrong ‘open this thing’ instruction. And having made that error, the lower layers of my brain's planning apparatus and motor cortex had all cooperated to faithfully implement the wrong instruction they'd been given.

This is a particularly beautiful example of a high-layer cognitive error, because you can easily imagine what a lower-layer error in the same situation would have looked like. I didn't, for example, pick up the paper knife and try to slit the rucksack open with it (or, more likely, stop half way through that motion and find myself confusedly wondering how to proceed). Instead, the erroneous intention was carried through perfectly competently, by choosing all the right subgoals and pursuing them in the right way, and it wasn't until the rucksack was successfully open in front of me that I was confronted with the realisation that I'd made a mistake.

A slightly different example of this phenomenon happened to me the other day, in a computing context. I was sitting at a bash prompt in a git checkout, and I ran git stash, then git pull -r, and then pressed Ctrl-Y … and then sat there confusedly wondering why I'd just pasted some old half-written piece of a past command on to my shell command line.

The right way to complete the sequence would have been the command git stash pop, to restore the uncommitted changes that I'd had in the checkout before realising I needed to do the disruptive pull operation. So why did I press Ctrl-Y instead? Because in other situations, I'll sometimes be half way through typing a command, and then realise I need to run another command first (e.g. a cd command, so as to be in the right directory for the original command to work). In that situation, my habit is to press Ctrl-A then Ctrl-K, to transfer the half-typed command into bash's paste buffer; then run the preparatory command; and then press Ctrl-Y to paste the original command back on to my command line, after which I can finish typing it and run it.

In other words, the bash-driving part of my brain has two separate procedures for the high-level idea of ‘set this half-done thing temporarily aside, do a disruptive thing, get the half-done thing back from wherever you stashed it’. One is for half-done things in the form of uncommitted work in a git checkout, and is spelled git stash / git stash pop. The other is for half-done things in the form of unfinished shell commands, and is spelled ^A^K / ^Y. And I'd simply forgotten, half way through the action, which of those two procedures I was in the middle of performing, so I did the second half of the wrong one.

These kinds of high-layer error are fascinating to me. They're often comical – much more so than very low-layer errors, such as a typo, or a fumble with the paper knife. And it's always seemed to me that they shed light on the functioning of the brain in a way that low-layer errors don't.

Also, I find that a common feature of high-layer errors is that they cause a great deal more confusion afterwards. If I make a low-layer motor error like a typo, it will typically be instantly obvious to me that what I meant to do is not the same as what I did do; it might be inconvenient to recover from the mistake, but it's not confusing. But with high-layer errors, in which I've just quite competently performed completely the wrong task, I seem to also have half-convinced myself that that wrong task was what I meant to do, and it will take me several seconds of complete confusion to recover from that belief and figure out what just happened.

For example, in the above cases, after I'd opened the rucksack I spent a while trying to remember what I might have wanted from it; and having pasted some old nonsense on to my shell command line, I started from the assumption that I'd meant to do that and that the nonsense was about to be useful in some way, if only I could just remember what I might have had in mind. In both cases, my prior for ‘you meant to do that and it will make sense in a moment’ was much higher than my prior for ‘this was a weird high-layer error and you didn't really mean to do it at all’.

I suppose that does make Bayesian sense, since high-layer errors seem quite rare – which is another reason I find them notable and interesting when they occur!

Deoptimisation can be a virtue

There's a well-known saying in computing: premature optimisation is the root of all evil. The rationale is more or less that tweaking code to make it run faster tends to make it less desirable in various other ways – less easy to read and understand, less flexible in the face of changing future requirements, more complicated and bug-prone – and therefore one should get out of the habit of optimising everything proactively, and instead wait until it becomes clear that some particular piece of your code really is too slow and is causing a problem. And then optimise just that part.

I have no problem in principle with this adage. I broadly agree with what it's trying to say. (Although I must admit to an underlying uneasiness at the idea of most code being written with more or less no thought for performance. I feel as if that general state of affairs probably contributes to a Parkinson's Law phenomenon, in which software slows down to fill the CPU time available, so that the main effect of computers getting faster is not that software actually delivers its results more quickly but merely that programmers can afford to be lazier without falling below the ‘just about fast enough’ threshold.)

But I have one big problem with applying it in practice, which is that often when I think of the solution to a problem, the first version of it that I am even conscious of is already somewhat optimised. And sometimes it's optimised to the point of already being incomprehensible.

For example, ages ago I put up a web page about mergesorting linked lists; fanf critiqued my presentation of the algorithm as resembling ‘pre-optimised assembler translated into prose’, and presented a logical derivation of the same idea from simple first principles. But that derivation had not gone through my head at any point – the first version of the algorithm that even worked for me at all was more or less the version I published.

Another example of this came up this week, in an algorithmic sort of maths proof – I had proved something to be possible at all by presenting an example procedure that actually did it, and it turned out that the procedure I'd presented was too optimised to be easily understood, because in the process of thinking it up in the first place, I'd spotted that one of the steps in the procedure would do two of the necessary jobs at once, and then I devoted more complicated verbiage to explaining that fact than it would have taken to present a much simpler procedure that did the two jobs separately. The simpler procedure would have taken more steps, but when all you're trying to prove is that some procedure will work, that doesn't matter at all.

I think the problem I have is that although I recognise in principle that optimisation and legibility often pull in opposite directions, and I can (usually) resist the urge to optimise when it's clear that legibility would suffer, one thing I'm very resistant to is deliberate de-optimisation: once I've got something that has been optimised (whether on purpose or by accident), it basically doesn't even occur to me to make it slower on purpose. And if it did occur to me, I'd still feel very reluctant to actually do it. This is probably a bias I should try to get under control.

The real meaning of ‘premature optimisation is bad’ is that the right tradeoff between performance and clarity is often further towards clarity than you think it is – and a corollary is that sometimes it's further towards clarity than wherever you started, in which case, deoptimisation can be a virtue.