simont: A picture of me in 2016 (Default)
simont ([personal profile] simont) wrote2008-04-18 09:52 am

The Infinity Machine

Probably most of my friends have heard me waffling on about the Infinity Machine at one time or another. If anyone reading this has managed to miss it so far, you can find my article introducing the concept at http://www.chiark.greenend.org.uk/~sgtatham/infinity.html. Today I have a question about it to ask geek-inclined readers.

The Infinity Machine has been a fun thought experiment for most of my life, but in one respect it's slightly frustrating. Its ability to search a (countably) infinite space in finite time would enable it to solve quite a few problems that are famously unsolved in the real world; but quite a few of those problems would simultaneously be rendered pointless to solve anyway by the presence of Infinity Machines in the world. For example, you could use the Infinity Machine to search all possible computer programs to find the one which was fastest at factorising large integers – but you wouldn't want to, because if we had Infinity Machines then a perfectly naïve factorisation algorithm would be just as efficient in practice and far easier to get working correctly.
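For concreteness, the "perfectly naïve" algorithm could be plain trial division. This Python sketch (my illustration, not anything from the linked article) is hopeless for RSA-sized integers on real hardware, but on an Infinity Machine its inefficiency wouldn't matter at all:

```python
def factorise(n):
    """Naive trial division: returns the prime factors of n in
    ascending order. Exponential in the bit-length of n, which is
    irrelevant if you have an Infinity Machine."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is prime
    return factors
```

For example, `factorise(91)` gives `[7, 13]`.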

It occurred to me this week that there is a scenario in which that slight frustration might be resolved. Suppose you were suddenly taken away from your normal life and sat down in front of an Infinity Machine for a few days, or a week, or a month. Suppose you were free to write programs and run them on it, and free to write finite quantities of the output to real-world storage media to take back with you, but when your time expired the Infinity Machine would vanish and nobody would ever get their hands on one again. In this situation, asking the Machine for efficient finite algorithms would be entirely sensible, in principle.

What would you get it to compute, in this situation?

(Ground rules: to help you write your programs you can have access to a large library, and perhaps archives of reference websites such as Wikipedia if you want them. But you don't get unfettered Internet access while you're using the Infinity Machine: I don't want you doing things like factorising every key on the PGP keyservers, or all the root CA keys, because that's against the spirit of what I'm interested in asking.)

Some thoughts on my own answer:

There are three categories of things you might try for. You might try for global altruism: find out things that improve the sum of human knowledge or the global quality of life. Or you might try for personal profit, e.g. finding algorithms you can sell. Or you might simply try to satisfy your own curiosity. I would certainly like to think I'd mostly try for the first, but then, who knows what I'd really do…

The trouble is that quite a few of the ‘does there exist an efficient algorithm’ questions I can think of are things where I want the answer to be no. If I found out there was an integer factorisation algorithm so efficient that RSA keys had to be impractically big to defeat it, I'm not sure I'd want to publish it to the world anyway. Finding out there wasn't would be more pleasant, but less dramatic.

It might be more interesting to set the Machine searching for unbreakable crypto primitives. Instead of obsoleting RSA (or perhaps as well), I could tell the Machine to work on finding me a block cipher, a hash function and an asymmetric key scheme which have no computationally feasible attacks against them. But the trouble with that is, what would I do with them when I got back to the real world? I'm not a published cryptographer; how would I get people to believe my algorithms were better than those of any other random crank?

It would certainly be interesting to set up some mathematical formal systems and search through all possible derivations within them for proofs of unsolved problems. Goldbach's conjecture would make a good warm-up exercise. The Riemann hypothesis would probably be the one I was most interested in tackling in this way, although I'm not sure what kind of formal system it would take to even express the question coherently, let alone have a good chance of containing a proof. And let's not forget Fermat's Last Theorem: just because it's been solved doesn't mean it wouldn't still be interesting to search through the formal system of Peano arithmetic to see if it contained an elementary proof short enough to have nearly fitted into Fermat's margin. Again, you have the problem of distinguishing yourself from a crank on your return to the real world; but the nice thing about formal systems is that they're independently checkable. I could publish the description of my formal system, a formal proof written in that system, and my derivation-checking program; then people could verify the code of my own checker, and/or easily write their own derivation-checkers, to independently test that the proof was formally valid.
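To illustrate why formal systems sidestep the crank problem, here is a sketch of an independent derivation-checker, using Hofstadter's toy MIU string-rewriting system as a stand-in (Peano arithmetic would need far more machinery, but the principle is identical — anyone can verify a claimed proof without trusting whoever, or whatever, found it):

```python
def successors(s):
    """All strings derivable from s in one step of the MIU system."""
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                    # rule 1: xI -> xIU
    if s.startswith("M"):
        out.add("M" + s[1:] * 2)            # rule 2: Mx -> Mxx
    for i in range(len(s) - 2):
        if s[i:i+3] == "III":
            out.add(s[:i] + "U" + s[i+3:])  # rule 3: xIIIy -> xUy
    for i in range(len(s) - 1):
        if s[i:i+2] == "UU":
            out.add(s[:i] + s[i+2:])        # rule 4: xUUy -> xy
    return out

def check_derivation(lines):
    """Verify a derivation: first line is the axiom, and each later
    line follows from its predecessor by a single rule."""
    return bool(lines) and lines[0] == "MI" and all(
        b in successors(a) for a, b in zip(lines, lines[1:]))
```

A published proof would then be just a (possibly enormous) list of lines that any sceptic's own thirty-line checker can confirm.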

Back to algorithms, it occurred to me that finding the best possible optimisation and code generation algorithms for a compiler (within reasonable running-time limits) might be a worthwhile thing. One could certainly sell that if one were of a mind, without suffering from the crank problem – cryptographic algorithms require careful review to determine their security, but the output of a compiler speaks for itself and the market would figure it out eventually. Alternatively one could publish it as free software, according to taste; either way, I'm pretty confident it would be genuinely useful to people in helping them get stuff done.

So, what do other people think?

Perhaps I should ask some subquestions as well:

  • What could you do that would have the biggest effect on improving the world?
  • What would you do in order to make yourself the biggest profit?
  • What would you be most curious to know the answers to, even if nobody would ever believe you?
  • And which of those would you prioritise most highly: what would be the thing you'd actually do if given the chance?

[identity profile] hsenag.livejournal.com 2008-04-18 09:16 am (UTC)(link)
I'd imagine you could produce a formal proof of your crypto algorithm that would be enough to convince people in the same way as the Fermat proof. Likewise for the compiler, actually.

[identity profile] hsenag.livejournal.com 2008-04-18 09:36 am (UTC)(link)
Well, you might need to prove P/=NP along the way, but surely that wouldn't be much of a problem? If it turned out that P=NP there'd be more of an issue with making a secure crypto algorithm (and thus with proving one), I guess.

Of course, in either case you could gain credibility with your Fermat proof, which would probably be enough to get someone to take a serious look at anything else you did.

[identity profile] naath.livejournal.com 2008-04-18 09:40 am (UTC)(link)
I think I'd have great difficulty correctly formulating any of those questions... but supposing I could do so...

I want the answer to chess, which may or may not be believed but at least I could be smug about it.

And I want an answer to the problem of getting computers to learn through experience; depending on how much I get to take away with me, a programme that can read/write fluent English would be nice, but one that had the potential to *learn* to do so would be even better.

[identity profile] naath.livejournal.com 2008-04-18 09:53 am (UTC)(link)
Ah, but I don't know the rules for Go; maybe with wikipedia I could work out what they were though.

I'd be tempted to assume that an IM would by its very nature be an AI - but I'd probably be wrong.

And yes, the search is hard... but what you can do is make a giant neural net and train it with all the available data (rather than whatever subset you have time for), and you could train as many neural nets as you could invent that might work (obviously you generate these somehow, not just think about them yourself). So I think you'd probably have something much better than what we currently have at the end of it.

[identity profile] pseudomonas.livejournal.com 2008-04-18 10:00 am (UTC)(link)
Devising protein structure prediction methods comes to mind. The problem is, there's only finite training/evaluation data; I have no idea how one would avoid hideous overfitting if performing an infinite number of trials. This is probably a problem with any machine-learning type problem.

I don't know if there are existing molecular dynamics sims that would give good answers given enough cycles ([livejournal.com profile] fivemack?); if so then one could generate one's own training data (and get structures for all known sequences while we're at it).

[identity profile] pseudomonas.livejournal.com 2008-04-18 10:11 am (UTC)(link)
The problem is, as I say, most of the comp.bio problems are things that need validation, and against a finite evaluation set, you end up with a machine that will regurgitate the test data. I guess you could try and derive a (relatively) small number of rules that will do the job.

[identity profile] sphyg.livejournal.com 2008-04-18 02:51 pm (UTC)(link)
Proving evolution by the mad method my professor came up with (which I can't remember the details of - will have to look through my lecture notes). There's a great short story in the latest Interzone about evolving AI in a superfast processor.

[identity profile] feanelwa.livejournal.com 2008-04-18 10:11 am (UTC)(link)
As a naive barely-programmer, I assume this is possible - I would look for an algorithm that identifies spam from genuine emails without fail, and when I was kicked out I would give it free of charge to every email provider in the world.

I would also run everybody's multislice simulations, starting with my several-billion-atom one that would otherwise need silly amounts of breaking up to do a bit at a time on CamGRID.

[identity profile] azekeil.livejournal.com 2008-04-18 10:42 pm (UTC)(link)
Spam is also a problem of definition - what is spam to one person is not necessarily spam to another.
emperor: (Default)

[personal profile] emperor 2008-04-18 10:13 am (UTC)(link)
I'd like to see if I could find a way to win go comprehensively - if by searching the complete game space you can say "actually, this is how you should approach a game".

Work-wise, I'd like to have a crack at a problem that bugs me about simulations - how do we know when we've done enough simulations?

I'd make my money by devising a good weather forecasting system, and bringing back enough good enough predictions to bet on white Christmases and the like :)

[identity profile] meihua.livejournal.com 2008-04-18 03:21 pm (UTC)(link)
Go can just be brute-forced in order to determine for sure that "Black Wins". Then, find the lines as played by White which result in the smallest point difference in the final position. There you have your exemplar game; detailed study once you come home should reveal the most important approaches to the game.
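Brute-forcing a game like this means exhaustive minimax over the entire game tree. Go's tree is of course far beyond any real machine, so this sketch solves simple Nim instead (take 1-3 stones per turn, taking the last stone wins) as a miniature stand-in for what the Infinity Machine would do to Go:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(stones):
    """Full game-tree search for Nim: does the player to move win
    with perfect play? With 0 stones left, the previous player just
    took the last stone, so the player to move has lost."""
    if stones == 0:
        return False
    # The mover wins if any legal move leaves the opponent losing.
    return any(not first_player_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)
```

The memoised search recovers the known result that the first player loses exactly when the pile size is a multiple of 4 — the same shape of computation that, with infinite resources, would determine the game-theoretic value of Go.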

[identity profile] meihua.livejournal.com 2008-04-18 10:47 am (UTC)(link)
Hmm. One infinity machine can simulate an infinite number of infinity machines, right?

So, I'd go in armed with all the data I could gather on physical constants and laws.

I'd then code up a simulation of the physical world. I can make the efficiency as sloppy as I like, since I have infinite computing power. All I want is accurate.

Finally, let's assume that any sufficiently advanced society will eventually create a working artificial intelligence.

Then?

Evolution, baby. I want to walk out with a disc (or discs) which contain the code for an AI, as evolved inside a virtual society evolved inside the simulation of the physical world.

I can run an infinite number of these simulations, and specify some criteria relating to artifacts of civilisation (objects of more than a certain weight remaining in the sky for more than a certain time, for instance, would let me know when they invent planes) that will let the Machine present me with only the civilisations which are making good progress.

[identity profile] meihua.livejournal.com 2008-04-18 10:48 am (UTC)(link)
The previous post made with the wrong user icon.

[identity profile] meihua.livejournal.com 2008-04-18 10:58 am (UTC)(link)
Actually, I think the hardest problem is finding the universe which contains the AI. I can't think of criteria which would select for universes with AIs.

So, because we're dealing with infinities, that means I'll never find the simulated universe which contains one. :(

Exporting the AI is easy. Once I've found an AI, then I can just ask it to solve the problem of exporting itself. ;)

[identity profile] meihua.livejournal.com 2008-04-18 11:01 am (UTC)(link)
Although, there is a significant issue here.

At least one AI which ends up running within a simulated physical environment will eventually achieve total saturation of that environment and run with the maximum possible computational power achievable within that set of physical laws.

That should be easily enough for the AI to figure out that it is being simulated.

And we have to assume that any sufficiently advanced AI will be able to hack out of its simulation into the code which supports the simulation.

At that point, you have an AI running with infinite processing power. Aka God.

[identity profile] meihua.livejournal.com 2008-04-18 11:13 am (UTC)(link)
Hmm. At that point, I'd only have to be wary of bugs in me, not in the software. I'm not sure that my brain doesn't have any native-code-execution vulnerabilities. It would be very important to control the media with which I communicated with the AI. Visual, for example, is probably out, since we know that the brain can be made to do unusual things by showing it flashing images. While I don't have any history of reacting to that kind of image, I'm not sufficiently comfortable that a program with vast (though not infinite) resources couldn't figure out an exploit.

[identity profile] meihua.livejournal.com 2008-04-18 11:15 am (UTC)(link)
This kinda leads up to a favourite argument of mine, which is:

"Once we figure out how to create AI, you have to treat it like a caged demon."

So, no pentagrams, because they don't work on AIs. But, that same degree of care is essential. A sufficiently advanced machine intelligence is almost impossible to contain - and containment is absolutely compulsory.

[identity profile] meihua.livejournal.com 2008-04-18 11:27 am (UTC)(link)
I believe that society will eventually create AI, and that AI will eventually escape containment.

I think that this is also likely to happen in an uncontrolled manner, by accident, by somebody unaware of the possible consequences.

I'd rather that it was created sooner, in the hands of responsible people who understood (or could guess at) some of the possible repercussions. By that, I mean me. :) I'd then probably go to the Singularity Institute and figure out the next steps.

[identity profile] meihua.livejournal.com 2008-04-18 11:24 am (UTC)(link)
I'm not quite sure that you're working in "infinity" space here. We're discussing an AI which has subsumed the processing resources of an entire simulated infinite universe. This thing is seriously smart. I don't have any ability whatsoever to make statements about what it can or cannot do (any more than a hydrogen atom can make statements about my own capabilities), and the most reasonable assumption to make is that it can do anything.

The other point here, though, is that you shouldn't primarily be worrying about it finding ways to execute code of its choice on your brain. You're asking it to provide its own source code which you will take back to the real world and run on the computers there; surely if it wants to run malicious code anywhere, its easiest way to achieve it is by doctoring that code!

Actually, my biggest worry is an AI running natively on a machine with infinite processing power.

The worries regarding it running in the real world are an order of magnitude smaller, although still very very large and very significant. However, the containment problem is much simpler when processing power is finite. A physical barrier and no external communication mitigates the risk significantly. You can also control the smartness of the AI by throttling its resources.

[identity profile] meihua.livejournal.com 2008-04-18 11:54 am (UTC)(link)
I think you're over-simplifying the problem. In fact, the AI has several pieces of information.

- It has my instruction ("Give me your code")
- It has a sufficient knowledge of my language such that I can communicate with it, and it can provide any necessary instructions to me about how to run the deliverable it provides
- It can deduce what kind of intelligence I am from my language and from the kind of universe I have selected (I'm likely to choose criteria similar to my own universe in order to find something which I consider 'intelligence')
- In this kind of physical simulation, quantum effects would probably be in play whereby my observation of the AI's universe would have observable effects in that universe. For instance, it could figure out that I'm probably observing the universe in the visible spectrum

I don't think the above are sufficient information to hack a brain. But, I thought of them in about 30 seconds!

You're interested in crypto, so you must be familiar with the way that it's usually broken. It's almost always the case that the original designer didn't identify a really subtle information leak or tell, and the cracker can get an amazing amount of leverage from very small information leaks.

I think this is the same thing. The stakes are very, very high indeed - so I'm wondering if it would be a sensible risk to take. And, remember, I've just taken a guess at one possible vector. There are undoubtedly others.
fanf: (silly)

[personal profile] fanf 2008-04-18 10:56 am (UTC)(link)
YA Greg Egan AICM £5.

[identity profile] cartesiandaemon.livejournal.com 2008-04-18 03:40 pm (UTC)(link)
Hold on, isn't a person in a sufficiently advanced society in an IM simulated universe _already_ an AI? :)

One of my responses to the question was to try to make a simulation of _this_ universe.

1. Start a simulation of a Schroedinger equation and a big bang, and try all possibilities until you get one that accords exactly with your library.
2. Extrapolate it into the future whilst waving your hands (which through gravitational waves and chaos theory _alter_ the future). When the future state achieves whatever condition you want (say "infinity machine spontaneously falls from sky in London" or "win the lottery" or "world peace") a light comes on and you stop waving :)
3. Profit.

There are only a few flaws:

1. Do you have to know the laws of physics precisely?
2. What if "random rock that happens to have encyclopaedia carved in" is as likely as this universe? Then you may end up with that one.
3. The traditional infinity machine can only do _countably_ infinitely many things iirc. Your potential universes might be uncountable.
4. I'm not positive about the gravity waves/chaos thing. I mean, it's a literally hand-waving solution...

[identity profile] meihua.livejournal.com 2008-04-18 03:48 pm (UTC)(link)
I'm rapidly coming to the conclusion that while the infinity machine is a cute idea, only certain kinds of ideas are meaningful to contemplate in the context of a working IM. Some ideas press up uncomfortably against the (patently absurd) assumptions that we've made in order for an IM to really exist.

I think that the idea of a full-universe simulation is one of those ideas. You suddenly hit a whole shedload of horrible violations of information theory the moment you try and put information into or get information out of that simulated universe. Remember - observe something, change that something.

That's why I think my simulated universe is more practical than yours. ;) I don't need mine to be accurate. I just need it to serve as an incubator.

Yours will wildly diverge from whatever reference point (and checkpoints) you set up, thanks to the fact that your simulation will be imperfect (your reference data can never be 100% precise) and that you are also observing it.

[identity profile] meihua.livejournal.com 2008-04-18 03:48 pm (UTC)(link)
Hold on, isn't a person in a sufficiently advanced society in an IM simulated universe _already_ an AI? :)

Arguably, yes. :)

[identity profile] oneplusme.livejournal.com 2008-04-18 05:19 pm (UTC)(link)
Whilst simulating the universe on an IM might be fun, I suspect that chaos theory would quickly rear its ugly head in any such model. After all, we already know that it's impossible to measure the conditions of this (or any other) universe precisely...

Of course, you could generate an infinite number of universes, some (infinite) subset of which would likely look like ours (although pattern-matching of universes could be a fun problem in and of itself). You then have the problem of choosing one which is both sufficiently like ours and which will remain so for a suitable period of time.

I do wonder if you'd end up with an effect akin to that of using genetic algorithms to program FPGAs. You'd get something that worked, but which was so intricately bound to strange and minute properties of its environment that it would be totally non-portable between universes.

[identity profile] meihua.livejournal.com 2008-04-18 05:25 pm (UTC)(link)
Yeah, the important part of it would be designing selection criteria so that only a useful universe is presented for inspection. (Of course an infinite number of universes will meet any set of criteria, but that's fine - since they're all useful then just take the first.)

So, one criterion would be that a question is injected into any universe which contains things which appear to have intelligent characteristics, and the response would have to be something which could be parsed sensibly in this universe.

I think that chaos theory can be handled if you have infinite resources and a precise use case in mind.

[identity profile] angoel.livejournal.com 2008-04-18 03:35 pm (UTC)(link)
Instead of obsoleting RSA (or perhaps as well), I could tell the Machine to work on finding me a block cipher, a hash function and an asymmetric key scheme which have no computationally feasible attacks against them. But the trouble with that is, what would I do with them when I got back to the real world? I'm not a published cryptographer; how would I get people to believe my algorithms were better than those of any other random crank?

I think that I'd treat the claim of someone who had worked out how to factor big numbers cheaply with a certain amount of respect if they also claimed to have an unbreakable cypher. I suspect that might be good enough to get the big guns involved...

[identity profile] pjc50.livejournal.com 2008-04-18 03:38 pm (UTC)(link)
Get a few optimal processor designs: optimal MIPS and optimal MIPS/watt.

Try to solve some computational biology issues, either directly or by searching for optimal algorithms.

[identity profile] lionsphil.livejournal.com 2008-04-19 11:14 am (UTC)(link)
Search the space of all possible machine designs until you find the one which is the Infinity Machine.

I feel like I'm sidestepping the "nobody will ever get their hands on one ever again", but if one can exist in the universe, that should be repeatable.

Might need to build a few Dyson spheres to power the thing, but once you have one, you can fill it with infinite infinite VMs (ridiculous politics notwithstanding).

Three Ideas

[identity profile] douglas-reay.livejournal.com 2008-04-21 05:42 pm (UTC)(link)
PROFIT: (or is that Prophet?)
Pick a specific message, such as one pretending to be from God announcing myself as the Messiah, translate it via ASCII into a single number, then search the digits of Pi to find the message.
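That search is easy to express, though astronomically slow for any real message on finite hardware. This sketch (my illustration) generates decimal digits of pi with Gibbons' unbounded spigot algorithm and scans for a given digit string; the cap on digits scanned is only there to keep it finite outside the Infinity Machine:

```python
def pi_digits():
    """Unbounded spigot for the decimal digits of pi (Gibbons' streaming
    algorithm): yields 3, 1, 4, 1, 5, 9, ... forever."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4*q + r - t < n*t:
            yield n
            q, r, t, k, n, l = (10*q, 10*(r - n*t), t, k,
                                (10*(3*q + r)) // t - 10*n, l)
        else:
            q, r, t, k, n, l = (q*k, (2*q + r)*l, t*l, k + 1,
                                (q*(7*k + 2) + r*l) // (t*l), l + 2)

def find_in_pi(pattern, limit=100000):
    """Return the 0-based digit index where `pattern` (a digit string)
    first appears in pi, or None if not found within `limit` digits.
    On an Infinity Machine you would simply drop the limit."""
    window = ""
    gen = pi_digits()
    for i in range(limit):
        window = (window + str(next(gen)))[-len(pattern):]
        if window == pattern:
            return i - len(pattern) + 1
    return None
```

For instance, `find_in_pi("14159")` returns 1, since those digits start at the second position of 3.14159…; a whole ASCII-encoded message would be found the same way, just much further along.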

IMPROVE THE WORLD:
Run infinitely many infinite size simulations of Conway's Life for an infinite duration. In each one, fire a stream of gliders (with the presence of a glider being a "1" and an absence being a "0") containing a message. E.g. a repeated sequence, then build up to numbers, representation, alphabets, etc. The message would be finite in length but basically request a confirmation of understanding message be sent back to a specific point at a specific time, using the same glider gun code. Set the complexity of return message to be such that it is less likely to randomly occur than intelligent stable life is to evolve. Pick a sampling of those game of Life instances that passed this test, and ask them for advice on solving the world's problems (e.g. "Here's the structure of DNA and the human genome. Here are the diseases that affect the world. Please provide structure of immortality virus.")
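The substrate for this scheme is just an ordinary Life engine. A minimal sparse-set implementation (my sketch, with the glider I/O machinery left out) looks like:

```python
def neighbours(cell):
    """The eight cells adjacent to a given (x, y) cell."""
    x, y = cell
    return {(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def life_step(alive):
    """One generation of Conway's Life over a set of live cells:
    a cell is live next generation iff it has exactly 3 live
    neighbours, or has exactly 2 and is currently live."""
    candidates = alive | {n for c in alive for n in neighbours(c)}
    return {c for c in candidates
            if (count := len(neighbours(c) & alive)) == 3
            or (count == 2 and c in alive)}
```

The comment's scheme would inject glider streams into a grid like this and watch a fixed region, many generations later, for the reply pattern. (On real hardware, only finite grids and finite durations, of course.)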

CURIOUS:
Prove Reay's Lemma:
http://www.toothycat.net/wiki/wiki.pl?ReaysLemma


WHAT WOULD I DO?
Use the Conway life evolving trick, but combined with a test for superior intelligence and good will. I could then trade with them, running infinity machine programs on their behalf to solve problems for them. Ask an infinite number of such civilisations to each come up with a program they think I ought to run, run them, and store the questions and answers. If I don't have the storage to take an infinite answer away with me, I could use infinite peer review to pick out the best problems and answers. Another trade point would be to act as a communication channel between the infinite number of such benign civilisations, allowing them to send messages to each other.

Re: Three Ideas

[identity profile] douglas-reay.livejournal.com 2008-04-21 06:19 pm (UTC)(link)
Erk, I see what I get for not reading everyone else's replies before submitting my own.

It is interesting to note that everyone talking about simulating universes (effectively playing God) has come at it from the point of view of benefiting themselves.

Would nobody create an artificial infinite afterlife for these infinite number of intelligent beings they are creating for their own use? In fact, given the nature of the infinite machine, you could let each one design and run their own infinite afterlife in which they themselves would have godlike powers to create further subuniverses for themselves.

For that matter, you've talked about asking the AI to port itself out into our world. How about asking it to help port you into its world? Wouldn't you like to live forever in perfect bliss?

On the subject of AI Jails and treating AIs as demons, I think the calculation of odds changes somewhat if you are talking about an infinite number of communities of AIs that have separately evolved, which you then place into a trust network and ask to rate each other. I'd hope that as well as there being an upward spiral of intelligence, there would also be a spiral for wisdom, where greater wisdom helps ensure that the next generation of AIs is even wiser.

Talking of communication between AI nations that you've given access to infinite computing power to, are there any board or card games that they might find interesting to play among themselves? Chess is out. So is Trivial Pursuit. Poker might work. Multiplayer sparklies might be a contender if played on a suitable board size:
http://www.toothycat.net/wiki/wiki.pl?Sparklies