The Infinity Machine
Probably most of my friends have heard me waffling on about the Infinity Machine at one time or another. If anyone reading this has managed to miss it so far, you can find my article introducing the concept at http://www.chiark.greenend.org.uk/~sgtatham/infinity.html. Today I have a question about it to ask geek-
The Infinity Machine has been a fun thought experiment for most of my life, but in one respect it's slightly frustrating. Its ability to search a (countably) infinite space in finite time would enable it to solve quite a few problems that are famously unsolved in the real world; but quite a few of those problems would simultaneously be rendered pointless to solve anyway by the presence of Infinity Machines in the world. For example, you could use the Infinity Machine to search all possible computer programs to find the one which was fastest at factorising large integers –
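(The search itself is conceptually simple: enumerate every finite string over some program alphabet in length order, and let the Machine run each candidate. A minimal Python sketch of just the enumeration half, with the dovetailed execution and the factorisation benchmark left to the Machine:)

```python
from itertools import count, islice, product

def all_programs(alphabet="01"):
    """Yield every finite string over `alphabet` in shortlex order.

    On the Infinity Machine you would run each candidate (dovetailed,
    so non-terminating programs can't block the rest) and keep the
    fastest one that factorises correctly; here we only enumerate."""
    for length in count(0):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

first_seven = list(islice(all_programs(), 7))
```

(Every possible program appears somewhere in this enumeration, which is all the Machine needs.)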
It occurred to me this week that there is a scenario in which that slight frustration might be resolved. Suppose you were suddenly taken away from your normal life and sat down in front of an Infinity Machine for a few days, or a week, or a month. Suppose you were free to write programs and run them on it, and free to write finite quantities of the output to real-
What would you get it to compute, in this situation?
(Ground rules: to help you write your programs you can have access to a large library, and perhaps archives of reference websites such as Wikipedia if you want them. But you don't get unfettered Internet access while you're using the Infinity Machine: I don't want you doing things like factorising every key on the PGP keyservers, or all the root CA keys, because that's against the spirit of what I'm interested in asking.)
Some thoughts on my own answer:
There are three categories of things you might try for. You might try for global altruism: find out things that improve the sum of human knowledge or the global quality of life. Or you might try for personal profit, e.g. finding algorithms you can sell. Or you might simply try to satisfy your own curiosity. I would certainly like to think I'd mostly try for the former, but then, who knows what I'd really do…
The trouble is that quite a few of the ‘does there exist an efficient algorithm’ questions I can think of are things where I want the answer to be no. If I found out there was an integer factorisation algorithm so efficient that RSA keys had to be impractically big to defeat it, I'm not sure I'd want to publish it to the world anyway. Finding out there wasn't would be more pleasant, but less dramatic.
It might be more interesting to set the Machine searching for unbreakable crypto primitives. Instead of obsoleting RSA (or perhaps as well), I could tell the Machine to work on finding me a block cipher, a hash function and an asymmetric key scheme which have no computationally feasible attacks against them. But the trouble with that is, what would I do with them when I got back to the real world? I'm not a published cryptographer; how would I get people to believe my algorithms were better than those of any other random crank?
It would certainly be interesting to set up some mathematical formal systems and search through all possible derivations within them for proofs of unsolved problems. Goldbach's conjecture would make a good warm-up.
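(Proof search is fiddly to sketch, but the complementary brute-force half — hunting for a counterexample over every even number at once, which is exactly what the Machine makes possible — is easy. A finite-prefix sketch in Python:)

```python
def is_prime(n: int) -> bool:
    """Trial division; fine for a sketch, the Machine doesn't care."""
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def goldbach_pair(n: int):
    """Return primes (p, q) with p + q == n, or None if no pair exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# The Infinity Machine would run this loop over *all* even n;
# mortals have to stop somewhere.
assert all(goldbach_pair(n) is not None for n in range(4, 10000, 2))
```

(If that loop, run over all even numbers at once, ever returns None, you've disproved Goldbach; if it never does, you've verified it — either way the Machine settles it.)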
Back to algorithms, it occurred to me that finding the best possible optimisation and code generation algorithms for a compiler (within reasonable running-
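(This is essentially superoptimisation: search all instruction sequences in length order and keep the first that computes the right function. A toy Python sketch, over a two-instruction set I've invented purely for illustration:)

```python
from itertools import product

# Toy instruction set for a straight-line superoptimiser
# (made up for this example, not any real compiler IR).
OPS = {"inc": lambda x: x + 1, "dbl": lambda x: x * 2}

def run(seq, x):
    """Apply the named ops to x, left to right."""
    for name in seq:
        x = OPS[name](x)
    return x

def superopt(target, inputs, max_len=5):
    """Return the shortest op sequence agreeing with `target` on `inputs`.

    Searches in length order, so the first hit is optimal; the Machine
    could do the same over a real instruction set with genuinely
    exhaustive input checking."""
    for length in range(max_len + 1):
        for seq in product(OPS, repeat=length):
            if all(run(seq, x) == target(x) for x in inputs):
                return seq
    return None
```

(For instance, superopt(lambda x: 4*x + 2, range(4)) finds the three-step sequence dbl, inc, dbl, and a length-order search guarantees nothing shorter exists.)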
So, what do other people think?
Perhaps I should ask some subquestions as well:
- What could you do that would have the biggest effect on improving the world?
- What would you do in order to make yourself the biggest profit?
- What would you be most curious to know the answers to, even if nobody would ever believe you?
- And which of those would you prioritise most highly: what would be the thing you'd actually do if given the chance?
Still, it'd be worth a try, I suppose.
Of course, in either case you could gain credibility with your Fermat proof, which would probably be enough to get someone to take a serious look at anything else you did.
I want the answer to chess, which may or may not be believed but at least I could be smug about it.
And I want an answer to the problem of getting computers to learn through experience; depending on how much I get to take away with me, a programme that can read/write fluent English would be nice, but one that had the potential to *learn* to do so would be even better.
Chess shouldn't be a problem, though. You could certainly find out which player won in perfectly played chess, and you might also try searching for the shortest program capable of producing perfect play within reasonable execution time, to see if it turned out that there was a relatively simple way to make the optimal strategy practically implementable. (As there is in nim, for instance. Presumably for chess it would be far more complex than that, but it might still turn out to be tractable by a finite computer.)
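(For nim the "relatively simple way" is the nim-sum rule: XOR the pile sizes together, and a position is lost exactly when the XOR is zero. A quick Python sketch:)

```python
from functools import reduce
from operator import xor

def nim_sum(piles):
    """XOR of all pile sizes; zero means the player to move loses."""
    return reduce(xor, piles, 0)

def best_move(piles):
    """Return (pile_index, new_size) for a winning move, or None if the
    position is already lost under perfect play."""
    s = nim_sum(piles)
    if s == 0:
        return None
    for i, p in enumerate(piles):
        if p ^ s < p:  # reducing this pile can zero the nim-sum
            return (i, p ^ s)
```

(From piles [3, 4, 5] the nim-sum is 2, and reducing the first pile to 1 leaves the opponent a zero-sum, losing position. Whether perfect chess admits anything remotely this compact is exactly what you'd ask the Machine.)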
I think I'd be more interested in asking about go than chess, but that's just me.
I'd be tempted to assume that an IM would by its very nature be an AI - but I'd probably be wrong.
And yes, the search is hard... but what you can do is make a giant neural net and train it with all the available data (rather than whatever subset you have time for), and you could train as many neural nets as you could invent that might work (obviously you generate these somehow, not just think about them yourself). So I think you'd probably have something much better than what we currently have at the end of it.
Large-scale neural nets are an interesting idea, though, and certainly they neatly dodge my basic methodological objection. I think I wouldn't be confident enough of getting decent results to spend my limited time on that particular approach, but good luck with it if you ever find yourself in this scenario :-)
I don't know if there are existing molecular dynamics sims that would give good answers given enough cycles (
I would also run everybody's multislice simulations, starting with my several-billion-atom one that would otherwise need silly amounts of breaking up to do a bit at a time on CamGRID.
Work-wise, I'd like to have a crack at a problem that bugs me about simulations - how do we know when we've done enough simulations?
I'd make my money by devising a good weather forecasting system, and bringing back enough good-enough predictions to bet on white Christmases and the like :)
So, I'd go in armed with all the data I could gather on physical constants and laws.
I'd then code up a simulation of the physical world. I can make the efficiency as sloppy as I like, since I have infinite computing power. All I want is accuracy.
Finally, let's assume that any sufficiently advanced society will eventually create a working artificial intelligence.
Then?
Evolution, baby. I want to walk out with a disc (or discs) which contain the code for an AI, as evolved inside a virtual society evolved inside the simulation of the physical world.
I can run an infinite number of these simulations, and specify some criteria relating to artifacts of civilisation (objects of more than a certain weight remaining in the sky for more than a certain time, for instance, would let me know when they invent planes) that will let the Machine present me with only the civilisations which are making good progress.
Your physics simulation might not be exactly real-world physics, since we don't know exactly what real-world physics is in order to program it in; but presumably that isn't important to you as long as you end up with a simulated universe rich enough for life to evolve to sentience in?
So, because we're dealing with infinities, that means I'll never find the simulated universe which contains one. :(
Exporting the AI is easy. Once I've found an AI, then I can just ask it to solve the problem of exporting itself. ;)
Aha, yes, very neat. Drop an emissary into the simulated universe, have it go up to the AI and hand over some English dictionaries and real-world programming textbooks, then you can talk to it sensibly and have it translate its own code into something we can run. Sounds perfectly feasible to me :-)
At least one AI which ends up running within a simulated physical environment will eventually achieve total saturation of that environment and run with the maximum possible computational power achievable within that set of physical laws.
That should be easily enough for the AI to figure out that it is being simulated.
And we have to assume that any sufficiently advanced AI will be able to hack out of its simulation into the code which supports the simulation.
At that point, you have an AI running with infinite processing power. Aka God.
A sufficiently advanced AI would be able to find a security hole of this type if one exists, but even AIs can't do what's actually impossible. So you make a good point, but it's a fixable one: before you start your simulations, have the IM exhaustively analyse the simulator program to make sure it hasn't got any native-code-execution vulnerabilities. Then an AI can't find one no matter how clever it is, because there won't be one to find.
"Once we figure out how to create AI, you have to treat it like a caged demon."
So, no pentagrams, because they don't work on AIs. But that same degree of care is essential. A sufficiently advanced machine intelligence is almost impossible to contain - and containment is absolutely compulsory.
I think that this is also likely to happen in an uncontrolled manner, by accident, by somebody unaware of the possible consequences.
I'd rather that it was created sooner, in the hands of responsible people who understood (or could guess at) some of the possible repercussions. By that, I mean me. :) I'd then probably go to the Singularity Institute and figure out the next steps.
Betting your mental integrity on that, OTOH, would be more worrying...
The other point here, though, is that you shouldn't primarily be worrying about it finding ways to execute code of its choice on your brain. You're asking it to provide its own source code which you will take back to the real world and run on the computers there; surely if it wants to run malicious code anywhere, its easiest way to achieve it is by doctoring that code!
"The other point here, though, is that you shouldn't primarily be worrying about it finding ways to execute code of its choice on your brain. You're asking it to provide its own source code which you will take back to the real world and run on the computers there; surely if it wants to run malicious code anywhere, its easiest way to achieve it is by doctoring that code!"
Actually, my biggest worry is an AI running natively on a machine with infinite processing power.
The worries regarding it running in the real world are an order of magnitude smaller, although still very very large and very significant. However, the containment problem is much simpler when processing power is finite. A physical barrier and no external communication mitigates the risk significantly. You can also control the smartness of the AI by throttling its resources.
But you can: if you can prove that a solution does not exist, you can be confident that even an infinitely resourced strong AI can't find one. As a trivial example, it couldn't deduce the number of limbs you have given only the number of eyes, because we know there exist life forms with the same number of eyes but different numbers of limbs, so there simply isn't one correct answer.
Similarly, the question of whether it could find a way into your brain through your visual cortex given only two sentences in ASCII written by you is not a question about the AI, it's a question about you: do you think there could possibly exist a single visual hack which was effective against all possible beings that might have written exactly those sentences? It seems vanishingly unlikely to me that that's the case.
This is conceptually the same sort of theological quagmire that we got out of by realising that completely meaningless sequences of words do not become meaningful just because someone stuck "God can" on the front; physical omnipotence is still limited by having to be logically consistent, and similarly here computational omnipotence is still limited by the AI having to have enough information to render its conclusion unique before it can draw it.
- It has my instruction ("Give me your code")
- It has a sufficient knowledge of my language such that I can communicate with it, and it can provide any necessary instructions to me about how to run the deliverable it provides
- It can deduce what kind of intelligence I am from my language and from the kind of universe I have selected (I'm likely to choose criteria similar to my own universe in order to find something which I consider 'intelligence')
- In this kind of physical simulation, quantum effects would probably be in play whereby my observation of the AI's universe would have observable effects in that universe. For instance, it could figure out that I'm probably observing the universe in the visible spectrum
I don't think the above are sufficient information to hack a brain. But, I thought of them in about 30 seconds!
You're interested in crypto, so you must be familiar with the way that it's usually broken. It's almost always the case that the original designer didn't identify a really subtle information leak or tell, and the cracker can get an amazing amount of leverage from very small information leaks.
I think this is the same thing. The stakes are very, very high indeed - so I'm wondering if it would be a sensible risk to take. And, remember, I've just taken a guess at one possible vector. There are undoubtedly others.
One of my responses to the question was to try to make a simulation of _this_ universe.
1. Start a simulation of a Schroedinger equation and a big bang, and try all possibilities until you get one that accords exactly with your library.
2. Extrapolate it into the future whilst waving your hands (which through gravitational waves and chaos theory _alter_ the future). When the future state achieves whatever condition you want (say "infinity machine spontaneously falls from sky in London" or "win the lottery" or "world peace") a light comes on and you stop waving :)
3. Profit.
There are only a few flaws:
1. Do you have to know the laws of physics precisely?
2. What if "random rock that happens to have encyclopaedia carved in" is as likely as this universe? Then you may end up with that one.
3. The traditional infinity machine can only do _countably_ infinitely many things iirc. Your potential universes might be uncountable.
4. I'm not positive about the gravity waves/chaos thing. I mean, it's a literally hand-waving solution...
I think that the idea of a full-universe simulation is one of those ideas. You suddenly hit a whole shedload of horrible violations of information theory the moment you try and put information into or get information out of that simulated universe. Remember - observe something, change that something.
That's why I think my simulated universe is more practical than yours. ;) I don't need mine to be accurate. I just need it to serve as an incubator.
Yours will wildly diverge from whatever reference point (and checkpoints) you set up, thanks to the fact that your simulation will be imperfect (your reference data can never be 100% precise) and that you are also observing it.
Arguably, yes. :)
Of course, you could generate an infinite number of universes, some (infinite) subset of which would likely look like ours (although pattern-matching of universes could be a fun problem in and of itself). You then have the problem of choosing one which is both sufficiently like ours and which will remain so for a suitable period of time.
I do wonder if you'd end up with an effect akin to that of using genetic algorithms to program FPGAs. You'd get something that worked, but which was so intricately bound to strange and minute properties of its environment that it would be totally non-portable between universes.
So, one criterion would be that a question is injected into any universe which contains things which appear to have intelligent characteristics, and the response would have to be something which could be parsed sensibly in this universe.
I think that chaos theory can be handled if you have infinite resources and a precise use case in mind.
I think that I'd treat the claim of someone who had worked out how to factor big numbers cheaply with a certain amount of respect if they also claimed to have an unbreakable cypher. I suspect that might be good enough to get the big guns involved...
Try to solve some computational biology issues, either directly or by searching for optimal algorithms.
I feel like I'm sidestepping the "nobody will ever get their hands on one ever again", but if one can exist in the universe, that should be repeatable.
Might need to build a few Dyson spheres to power the thing, but once you have one, you can fill it with infinite infinite VMs (ridiculous politics notwithstanding).
Three Ideas
Pick a specific message, such as one pretending to be from God announcing myself as the Messiah, translate it via ASCII into a single number, then search the digits of Pi to find the message.
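(Only the Machine can search all of pi, but the finite version is easy enough to sketch: stream digits with Gibbons' unbounded spigot algorithm and look for your number in a prefix. In Python:)

```python
def pi_digits():
    """Stream the decimal digits of pi (Gibbons' unbounded spigot)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, n, k, l = (q * k, (2 * q + r) * l, t * l,
                                (q * (7 * k + 2) + r * l) // (t * l), k + 1, l + 2)

def find_in_pi(message_digits: str, limit: int) -> int:
    """Index of `message_digits` in the first `limit` digits of pi, or -1."""
    gen = pi_digits()
    prefix = "".join(str(next(gen)) for _ in range(limit))
    return prefix.find(message_digits)
```

(find_in_pi("999999", 1000) locates the famous run of six 9s within the first thousand digits; a message long enough to pass for divine dictation would take rather more searching, which is what the Machine is for.)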
IMPROVE THE WORLD:
Run infinitely many infinite-size simulations of Conway's Life for an infinite duration. In each one, fire a stream of gliders (with the presence of a glider being a "1" and an absence being a "0") containing a message. E.g. a repeated sequence, then build up to numbers, representation, alphabets, etc. The message would be finite in length but basically request a confirmation-of-understanding message be sent back to a specific point at a specific time, using the same glider gun code. Set the complexity of the return message to be such that it is less likely to randomly occur than intelligent stable life is to evolve. Pick a sampling of those Game of Life instances that passed this test, and ask them for advice on solving the world's problems (e.g. "Here's the structure of DNA and the human genome. Here are the diseases that affect the world. Please provide the structure of an immortality virus.")
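(The machinery this needs is tiny, which is part of Life's charm. A minimal Python step function on an unbounded grid, plus the glider that would carry the bits; evolving the intelligent stable life is, ahem, left to the Machine:)

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Life on an unbounded grid.

    `live` is a set of (row, col) cells; standard rules: a cell is alive
    next generation iff it has 3 live neighbours, or 2 and is alive now."""
    counts = Counter((r + dr, c + dc)
                     for (r, c) in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A single glider: every four generations it reappears one cell down
# and one cell right, which is what makes a stream of gliders usable
# as a signalling channel.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
```

(Run life_step four times on the glider and you get the same shape translated by (1, 1); a glider gun firing, or withholding, gliders at regular intervals gives you the 1s and 0s.)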
CURIOUS:
Prove Reay's Lemma:
http://www.toothycat.net/wiki/wiki.pl?ReaysLemma
WHAT WOULD I DO?
Use the Conway life evolving trick, but combined with a test for superior intelligence and good will. I could then trade with them, running infinity machine programs on their behalf to solve problems for them. Ask an infinite number of such civilisations to each come up with a program they think I ought to run, run them, and store the questions and answers. If I don't have the storage to take an infinite answer away with me, I could use infinite peer review to pick out the best problems and answers. Another trade point would be to act as a communication channel between the infinite number of such benign civilisations, allowing them to send messages to each other.
Re: Three Ideas
It is interesting to note that everyone talking about simulating universes (effectively playing God) has come at it from the point of view of benefiting themselves.
Would nobody create an artificial infinite afterlife for these infinite number of intelligent beings they are creating for their own use? In fact, given the nature of the infinite machine, you could let each one design and run their own infinite afterlife in which they themselves would have godlike powers to create further subuniverses for themselves.
For that matter, you've talked about asking the AI to port itself out into our world. How about asking it to help port you into its world? Wouldn't you like to live forever in perfect bliss?
On the subject of AI Jails and treating AIs as demons, I think the calculation of odds changes somewhat if you are talking about an infinite number of communities of AIs that have separately evolved, which you then place into a trust network and ask to rate each other. I'd hope that as well as there being an upward spiral of intelligence, there would also be a spiral for wisdom, where greater wisdom helps ensure that the next generation of AIs is even wiser.
Talking of communication between AI nations that you've given access to infinite computing power, are there any board or card games that they might find interesting to play among themselves? Chess is out. So is Trivial Pursuit. Poker might work. Multiplayer sparklies might be a contender if played on a suitable board size:
http://www.toothycat.net/wiki/wiki.pl?Sparklies