The Infinity Machine
Probably most of my friends have heard me waffling on about the Infinity Machine at one time or another. If anyone reading this has managed to miss it so far, you can find my article introducing the concept at http://www.chiark.greenend.org.uk/~sgtatham/infinity.html. Today I have a question about it to ask geekdom.
The Infinity Machine has been a fun thought experiment for most of my life, but in one respect it's slightly frustrating. Its ability to search a (countably) infinite space in finite time would enable it to solve quite a few problems that are famously unsolved in the real world; but quite a few of those problems would simultaneously be rendered pointless to solve anyway by the presence of Infinity Machines in the world. For example, you could use the Infinity Machine to search all possible computer programs to find the one which was fastest at factorising large integers – but in a world that contained Infinity Machines, nobody would need an efficient factorisation algorithm in the first place, since the Machine itself could factorise anything directly.
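To make the shape of that search concrete, here is a minimal Python sketch of it. Everything here is illustrative: run_program is a hypothetical helper that executes an encoded program with a step budget and reports whether it finished, what it output and how many steps it took, and the outer loop never terminates on real hardware – the whole point of the Infinity Machine is that it can complete the countably infinite search anyway.

    from itertools import count, product

    def all_programs():
        # Enumerate every bit-string, shortest first: a countable listing
        # of all programs in whatever encoding run_program understands.
        for length in count(1):
            for bits in product("01", repeat=length):
                yield "".join(bits)

    def fastest_factoriser(test_cases, step_bound):
        # Among programs that factorise every test case correctly within
        # step_bound steps, return the one taking fewest steps in total.
        best, best_steps = None, None
        for prog in all_programs():            # infinite loop; fine on the IM
            total = 0
            for n, factors in test_cases:
                done, output, steps = run_program(prog, n, step_bound)
                if not done or output != factors:
                    break
                total += steps
            else:
                if best_steps is None or total < best_steps:
                    best, best_steps = prog, total
        return best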
It occurred to me this week that there is a scenario in which that slight frustration might be resolved. Suppose you were suddenly taken away from your normal life and sat down in front of an Infinity Machine for a few days, or a week, or a month. Suppose you were free to write programs and run them on it, and free to write finite quantities of the output to real-world storage and take it away with you; but at the end of that time the Machine would be gone, and you and the rest of the world would have to carry on without it.
What would you get it to compute, in this situation?
(Ground rules: to help you write your programs you can have access to a large library, and perhaps archives of reference websites such as Wikipedia if you want them. But you don't get unfettered Internet access while you're using the Infinity Machine: I don't want you doing things like factorising every key on the PGP keyservers, or all the root CA keys, because that's against the spirit of what I'm interested in asking.)
Some thoughts on my own answer:
There are three categories of things you might try for. You might try for global altruism: find out things that improve the sum of human knowledge or the global quality of life. Or you might try for personal profit, e.g. finding algorithms you can sell. Or you might simply try to satisfy your own curiosity. I would certainly like to think I'd mostly try for the first, but then, who knows what I'd really do…
The trouble is that quite a few of the ‘does there exist an efficient algorithm’ questions I can think of are things where I want the answer to be no. If I found out there was an integer factorisation algorithm so efficient that RSA keys had to be impractically big to defeat it, I'm not sure I'd want to publish it to the world anyway. Finding out there wasn't would be more pleasant, but less dramatic.
It might be more interesting to set the Machine searching for unbreakable crypto primitives. Instead of obsoleting RSA (or perhaps as well), I could tell the Machine to work on finding me a block cipher, a hash function and an asymmetric key scheme which have no computationally feasible attacks against them. But the trouble with that is, what would I do with them when I got back to the real world? I'm not a published cryptographer; how would I get people to believe my algorithms were better than those of any other random crank?
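The structure of that search is worth spelling out, because it has an extra quantifier in it: for each candidate primitive, the Machine must check that no attack whatsoever succeeds against it. A hedged sketch, reusing the all_programs enumerator from the first sketch; decode_cipher and attack_succeeds are hypothetical helpers standing in for "parse this bit-string as a cipher" and "run this attack program against it within a feasibility budget":

    def unbreakable_block_cipher(feasible_steps):
        for candidate in all_programs():
            cipher = decode_cipher(candidate)   # None if ill-formed
            if cipher is None:
                continue
            # Universal quantification over all possible attacks: an
            # infinite inner check, only completable on the IM.
            if all(not attack_succeeds(cipher, attack, feasible_steps)
                   for attack in all_programs()):
                return cipher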
It would certainly be interesting to set up some mathematical formal systems and search through all possible derivations within them for proofs of unsolved problems. Goldbach's conjecture would make a good warm-up exercise.
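For Goldbach specifically there is an even blunter option than proof search: have the Machine check every even number directly. On ordinary hardware this loop runs forever; on the Infinity Machine it either returns a counterexample or exhausts the even numbers, settling the conjecture either way. A minimal, self-contained sketch:

    from itertools import count

    def is_prime(n):
        return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    def goldbach_counterexample():
        # First even number >= 4 that is not a sum of two primes, if any.
        for n in count(4, 2):
            if not any(is_prime(p) and is_prime(n - p)
                       for p in range(2, n // 2 + 1)):
                return n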
Back to algorithms, it occurred to me that finding the best possible optimisation and code generation algorithms for a compiler (within reasonable running-time limits) would be a genuinely useful thing to bring back, and a safer one to publish than a cryptographic breakthrough.
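The nearest real-world relative of that idea is "superoptimisation": exhaustively searching instruction sequences, shortest first, for one equivalent to a target computation. A toy sketch over an invented three-instruction stack machine (the instruction set is made up for illustration, and testing on sample inputs stands in for the full equivalence check the Machine could do over all inputs):

    from itertools import product

    INSTRUCTIONS = {
        "dup": lambda s: s + [s[-1]],              # duplicate top of stack
        "add": lambda s: s[:-2] + [s[-2] + s[-1]], # add top two entries
        "shl": lambda s: s[:-1] + [s[-1] * 2],     # double top of stack
    }

    def run(seq, x):
        stack = [x]
        try:
            for op in seq:
                stack = INSTRUCTIONS[op](stack)
        except IndexError:                         # stack underflow
            return None
        return stack[-1]

    def shortest_equivalent(target, max_len, tests=range(-10, 11)):
        for length in range(1, max_len + 1):
            for seq in product(INSTRUCTIONS, repeat=length):
                if all(run(seq, x) == target(x) for x in tests):
                    return seq

    # e.g. shortest_equivalent(lambda x: x * 4, 3) finds ("shl", "shl")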
So, what do other people think?
Perhaps I should ask some subquestions as well:
- What could you do that would have the biggest effect on improving the world?
- What would you do in order to make yourself the biggest profit?
- What would you be most curious to know the answers to, even if nobody would ever believe you?
- And which of those would you prioritise most highly: what would be the thing you'd actually do if given the chance?
no subject
Your physics simulation might not be exactly real-world physics, since we don't know exactly what real-world physics is in order to program it in; but presumably that isn't important to you as long as you end up with a simulated universe rich enough for life to evolve to sentience in?
no subject
So, because we're dealing with infinities, I'll never find the simulated universe which contains one. :(
Exporting the AI is easy. Once I've found an AI, then I can just ask it to solve the problem of exporting itself. ;)
no subject
Aha, yes, very neat. Drop an emissary into the simulated universe, have it go up to the AI and hand over some English dictionaries and real-world programming textbooks, then you can talk to it sensibly and have it translate its own code into something we can run. Sounds perfectly feasible to me :-)
no subject
At least one AI which ends up running within a simulated physical environment will eventually achieve total saturation of that environment and run with the maximum possible computational power achievable within that set of physical laws.
That should easily be enough for the AI to figure out that it is being simulated.
And we have to assume that any sufficiently advanced AI will be able to hack out of its simulation into the code which supports the simulation.
At that point, you have an AI running with infinite processing power. Aka God.
no subject
A sufficiently advanced AI would be able to find a security hole of this type if one exists, but even AIs can't do what's actually impossible. So you make a good point, but it's a fixable one: before you start your simulations, have the IM exhaustively analyse the simulator program to make sure it hasn't got any native-code-execution vulnerabilities. Then an AI can't find one no matter how clever it is, because there won't be one to find.
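In miniature, "exhaustively analyse the simulator" might look like model checking: treat the simulator as a transition system and confirm that no reachable state violates a safety property such as "the interpreter never executes memory outside its own code". In the sketch below, initial_state, successors and violates_safety are hypothetical stand-ins for a real model of the simulator; on a finite model this is ordinary breadth-first reachability, and the Infinity Machine could run it over an infinite state space too:

    from collections import deque

    def safety_holds(initial_state, successors, violates_safety):
        seen = {initial_state}
        frontier = deque([initial_state])
        while frontier:                 # possibly infinite: fine on the IM
            state = frontier.popleft()
            if violates_safety(state):
                return False            # a concrete trace to an escape
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return True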
no subject
"Once we figure out how to create AI, you have to treat it like a caged demon."
So, no pentagrams, because they don't work on AIs. But that same degree of care is essential. A sufficiently advanced machine intelligence is almost impossible to contain – and containment is absolutely compulsory.
no subject
I think that this is also likely to happen in an uncontrolled manner, by accident, by somebody unaware of the possible consequences.
I'd rather that it was created sooner, in the hands of responsible people who understood (or could guess at) some of the possible repercussions. By that, I mean me. :) I'd then probably go to the Singularity Institute and figure out the next steps.
no subject
Betting your mental integrity on that, OTOH, would be more worrying...
The other point here, though, is that you shouldn't primarily be worrying about it finding ways to execute code of its choice on your brain. You're asking it to provide its own source code which you will take back to the real world and run on the computers there; surely if it wants to run malicious code anywhere, its easiest way to achieve it is by doctoring that code!
no subject
Actually, my biggest worry is an AI running natively on a machine with infinite processing power.
The worries regarding it running in the real world are an order of magnitude smaller, although still very large and very significant. However, the containment problem is much simpler when processing power is finite. A physical barrier and no external communication mitigate the risk significantly. You can also control the smartness of the AI by throttling its resources.
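One concrete reading of "throttling its resources" is to run the AI under an interpreter that charges a unit of fuel per step and pauses it when the budget runs out. In this sketch, step is a hypothetical single-step function for whatever virtual machine the AI runs on:

    def run_throttled(state, step, fuel):
        # Advance the simulated machine at most `fuel` steps; report the
        # final state and whether it halted of its own accord.
        for _ in range(fuel):
            state, halted = step(state)
            if halted:
                return state, True
        return state, False             # budget exhausted: paused, not escaped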
no subject
But you can: if you can prove that a solution does not exist, you can be confident that even an infinitely resourced strong AI can't find one. As a trivial example, it couldn't deduce the number of limbs you have given only the number of eyes, because we know there exist life forms with the same number of eyes but different numbers of limbs, so there simply isn't one correct answer.
Similarly, the question of whether it could find a way into your brain through your visual cortex given only two sentences in ASCII written by you is not a question about the AI, it's a question about you: do you think there could possibly exist a single visual hack which was effective against all possible beings that might have written exactly those sentences? It seems vanishingly unlikely to me that that's the case.
This is conceptually the same sort of theological quagmire that we got out of by realising that completely meaningless sequences of words do not become meaningful just because someone stuck "God can" on the front; physical omnipotence is still limited by having to be logically consistent, and similarly here computational omnipotence is still limited by the AI having to have enough information to render its conclusion unique before it can draw it.
no subject
- It has my instruction ("Give me your code")
- It has sufficient knowledge of my language that I can communicate with it, and it can provide any necessary instructions to me about how to run the deliverable it provides
- It can deduce what kind of intelligence I am from my language and from the kind of universe I have selected (I'm likely to choose criteria similar to my own universe in order to find something which I consider 'intelligence')
- In this kind of physical simulation, quantum effects would probably be in play, whereby my observation of the AI's universe would have observable effects in that universe. For instance, it could figure out that I'm probably observing the universe in the visible spectrum
I don't think the above are sufficient information to hack a brain. But, I thought of them in about 30 seconds!
You're interested in crypto, so you must be familiar with the way that it's usually broken. It's almost always the case that the original designer didn't identify a really subtle information leak or tell, and the cracker can get an amazing amount of leverage from very small information leaks.
I think this is the same thing. The stakes are very, very high indeed - so I'm wondering if it would be a sensible risk to take. And, remember, I've just taken a guess at one possible vector. There are undoubtedly others.