simont

Tue 2006-03-21 11:15
Thoughts on thoughts (III)

I've always been a little suspicious of attempts to design a human-like artificial intelligence. Of course it often leads to good SF, which I'm strongly in favour of :-) but many things which make good SF are not things you'd want to go round doing in reality.

In particular, I feel strongly that making a computer behave like a human mind misses the vital point about computers, which is that they're good at things humans are not, such as repeatability, reliability, extremely fast linear processing, not getting bored and so on. The most sensible way to use a computer, it therefore seems to me, is to apply it to jobs where those features are virtues, not to try to convert it into a second-rate human. Also, if you want a human-level intelligence, it's surely easier just to hire one: there are a silly number of billions of us in the world already, and quite a few are looking for work!

Every so often I notice a particularly unhelpful feature of the human brain which reinforces this opinion, by making me feel even more strongly that what we need is a partnership with devices which don't have the same weakness, not an attempt to construct yet more things which do.

Today's annoying mental weakness is a failure to track the origin of data. I don't know about anyone else's, but my brain is very very bad at remembering why it thinks a particular thing; I often find myself needing to know this, and it usually takes me significant and irritating effort to track it down, if indeed I manage to do it at all.

This leads to all sorts of practical inconveniences, probably the biggest of which is that if a significant fact which I've known for a while and from which I've made a load of deductions then changes under my feet, it's very difficult to reorganise my brain to cope: for a long time afterwards, out-of-date ‘facts’ will constantly pop up in my stream of thought and present themselves as if they're valid in their own right, and if I'm lucky I recognise each one as having been derived from something which is no longer true. If I'm unlucky, I won't even notice, and will continue to act on the assumption that the derived fact is still accurate, and make a fool of myself. What I'd like to be able to do is to go through my brain pro-actively and find all the things derived from whatever has just stopped being true, and clean them all out in one go so that they don't keep annoying me for days, weeks or months afterwards.
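
In programming terms, what I want is dependency tracking: each belief remembering its premises, so that retracting one fact invalidates everything downstream in a single sweep. A minimal Python sketch of the sort of bookkeeping I mean (all the names and example facts are invented for illustration):

    # Beliefs that remember what they were derived from, so retracting a
    # premise automatically flags everything deduced from it as obsolete.
    class Belief:
        def __init__(self, statement, derived_from=()):
            self.statement = statement
            self.derived_from = list(derived_from)  # parent beliefs
            self.dependents = []                    # beliefs derived from this one
            self.valid = True
            for parent in self.derived_from:
                parent.dependents.append(self)

        def retract(self):
            """Mark this belief obsolete, and everything deduced from it."""
            self.valid = False
            for child in self.dependents:
                if child.valid:
                    child.retract()

    # Invalidating the root fact cleans out the stale deductions in one go,
    # instead of letting them pop up for weeks pretending to be valid.
    fact = Belief("the shop shuts at six")
    plan = Belief("leave work by half five", derived_from=[fact])
    fact.retract()
    assert not plan.valid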

The other particularly annoying effect is when someone challenges me on something I'd taken as fact, and requires me to defend the belief. Of course in this situation it's terribly useful to know why I think it, (a) so I can put together a coherent defence, and (b) in case it does turn out to be based on out-of-date information and I hadn't noticed.

I suppose this is exactly what is meant by the phrase ‘challenged to re-examine one's belief’. I think what I'm trying to say is that an ideal intelligence would never need to do this in response to a challenge, because it would always have a clear idea of its basis for believing any given thing; and whenever such a basis shifted, any beliefs derived from it would immediately be at least marked as obsolete, or better still pro-actively re-deduced on the spot.

But then, I spend a lot of time wishing I was an ideal intelligence. It's probably not all that productive a hobby.

[identity profile] cartesiandaemon.livejournal.com | Tue 2006-03-21 11:26
many things which make good SF are not things you'd want to go round doing in reality
How to destroy the earth (http://qntm.org/destroy)
[identity profile] cartesiandaemon.livejournal.com | Tue 2006-03-21 11:31
But then, I spend a lot of time wishing I was an ideal intelligence.
LOL. I know exactly what you mean.

In fact, I was thinking about the "built on superseded beliefs" thing. It often causes someone to look really silly when they find they learnt one thing, learnt something else that contradicted it, but never combined them, and then have to admit the problem. But maybe it is necessary; keeping a big table of all combinations of beliefs is a lot of work...

And in fact, beliefs probably need percentages. Anything I learnt as a child is probably normalised to under 70% certainty, because it was filtered through my then understanding and one time in three I misunderstood it. Something I got from a book needs to be re-examined to see if the book is still current.
[identity profile] cartesiandaemon.livejournal.com | Tue 2006-03-21 11:36
In particular, I feel strongly that making a computer behave like a human mind misses the vital point
Indeed. I assume the hope is that if you can make a computer like a human, it would in fact be much better, because it would still be good at what it's good at. I don't know if that'd be true.
[personal profile] simont | Tue 2006-03-21 11:40
Yes, I think you're probably right. I'm inclined to believe this would in fact fail completely to be true, but that's because I found myself generally convinced by Douglas Hofstadter's arguments on the subject, particularly (but not limited to) the one that said boredom was a vital part of human creativity, since it's what drives people to stop and think and find ways to make a job easier rather than mechanically continuing to do it over and over the obvious way.
[identity profile] cartesiandaemon.livejournal.com | Tue 2006-03-21 11:47
You're probably right.

Though another possibility is that you could have the different parts working together. If whenever I thought of an equation I instantly saw the solution as calculated by standard techniques by the computer part of the brain, would I be better?

I'd be a *bit* better because I'd be quicker. And I'd be better than the computer because I could experiment with *new* techniques. The question is, could I ever have the benefits of both, and be better than a human with a Matlab PC? Could the quick-solver do its thing, yet trigger the correlation-noticing part of my brain into saying "Hold up, that line three is just like the *last* line three. I can change the variable and get a standard result."
[identity profile] ex-lark-asc.livejournal.com | Tue 2006-03-21 11:43
Origin of data
Very true; but then if we were all perfect at remembering where we first heard about such-and-such, nobody would ever write fiction or music, because fundamentally it's all plagiarised to some degree. So I don't think that's necessarily a bad thing.
[identity profile] megamole.livejournal.com | Tue 2006-03-21 12:31
Re: Origin of data
Yur. 'S your miffic archetypes, innit? It's also one reason why I've never had any inclination to be a writer, because I do feel that one can never write anything truly original without it being incomprehensible.
[identity profile] feanelwa.livejournal.com | Tue 2006-03-21 12:15
I think maybe the origin of data thing can be useful; if at some level you believed the first fact, e.g. a bear lives in that cave, and then you saw bear shit and bits of dead fish lying around near it and thought, that must be from the bear, but then found out from somebody else that there wasn't a bear in the cave - there would still have to be something behind the bear shit and bits of dead fish, and you would still do well to be cautious near the cave.

A more modern example - phlogiston. On believing that there was this thing called phlogiston that was taken away from things that burned, many explanations were invented for other similar reactions. Phlogiston turned out to be bull, but the truth, oxygen, was the opposite of phlogiston, and fitted into the other explanations very smartly indeed, just backwards. And indeed, the exchange of electrons in redox reactions that don't involve oxygen is exactly like phlogiston, and if we hadn't still had phlogiston in the backs of our minds, maybe it would have taken us longer to figure it out.
[identity profile] philipstorry.livejournal.com | Tue 2006-03-21 20:58
The phlogiston doesn't exist?

Next you'll be trying to tell us that you don't create steam by combining Hottite with Wettite!

;-)
[identity profile] ewx.livejournal.com | Tue 2006-03-21 13:46
I want everything to be tagged with a probability and have a rule for propagating this data across deductions. (What would the rule look like? Someone must have done this.)
[personal profile] simont | Tue 2006-03-21 14:24
The rule in question is Bayes' Theorem, surely? Given that you know A with some probability, you compute P(B) as P(B|A)P(A) + P(B|~A)P(~A). The practical problem is that many deductions are in the form of implications (A=>B, itself true with some non-100% probability since you might be mistaken even in that), so while you might have a clear idea of P(B|A), you're entirely in the dark about P(B|~A).

Also, if you try to use this for inductive rather than deductive reasoning then it runs into the usual problems with picking your prior. Then there's the usual set of pathological cases (a red swivel chair is supporting evidence for the statement "all ravens are black" because it's a clear example of the logically equivalent "all non-black things are non-ravens"; before the year 2000 all supporting evidence for "all emeralds are green" was also supporting evidence for "all emeralds are grue"); I'm not sure whether those can be rephrased as problems with prior choice or whether they're a further layer of annoyance even once you've sorted out your prior.
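
To make the propagation step concrete, here's a tiny Python sketch (the numbers are purely illustrative); the "entirely in the dark about P(B|~A)" problem shows up as an annoyingly wide interval for P(B):

    # Propagate a probability across one deduction:
    # P(B) = P(B|A)P(A) + P(B|~A)P(~A).
    def propagate(p_a, p_b_given_a, p_b_given_not_a):
        return p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

    # 90% sure of A; the implication A=>B is itself only 95% reliable;
    # no idea at all about B when A fails, so bracket both extremes.
    p_low  = propagate(0.9, 0.95, 0.0)  # B never holds without A: ~0.86
    p_high = propagate(0.9, 0.95, 1.0)  # B always holds without A: ~0.96
    print(p_low, p_high)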
[identity profile] cartesiandaemon.livejournal.com | Tue 2006-03-21 18:15
Then there's the usual set of pathological cases
Are they pathological when almost everything[1] is a non-black non-raven? :)

I thought Bayesian thinking was supposed to not suffer from that problem, but haven't worked out the details yet (e.g. http://plato.stanford.edu/entries/epistemology-bayesian/ doesn't satisfy me). I think the flaws in applying the reasoning might involve the alternatives (we naturally assume *most* ravens are *certainly* black) and whether we know the number of objects and the number of ravens.

[1] Perhaps even measure-theoretically almost everything.
[identity profile] kehoea.livejournal.com | Thu 2006-03-23 10:51

My approach is vaguely similar; I tag everything I learn with how mutable it is, and periodically I re-examine those that are mutable to see if they’re still true. The more mutable the thing, the more often it’s re-examined. So, that „wegen“ takes the genitive in German, that Robert Clive established British India (and thus can be used as an example of the top of the social pyramid in England at the end of his life, despite starting as a clerk), that Stalin was Georgian, these are all essentially immutable and do not need to be re-examined once learned. Relatedly, the need to remember where I learned it from mostly disappears; you can say “look at the standard references.”

On the other hand, that CPAN’s support for random parts of the MIME specification is patchy, that Solaris is available for free for i386, and especially more domain-specific things--say, the vm sysctl calls to make NetBSD behave like a desktop machine (http://mail-index.netbsd.org/current-users/2003/12/16/0001.html)--tend to be much more mutable; and it’s in _these_ cases that where the thing was learned matters significantly and must be retained.

And something deduced from two or more of something else has the mutability of the most mutable “something else,” with the corresponding necessity to retain where it came from. This whole approach does have the limitation that your judgement of what’s immutable has to be good; for example, I learned that certain programming techniques in C were immutably good, from first principles, and then the processor people went and energetically invalidated the given that all of RAM was ~equally expensive to address, so lots of those programming techniques needed to be re-addressed.
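
As a rough Python sketch of the scheme (the mutability levels and review intervals are of course made up):

    from datetime import date, timedelta

    # Made-up schedule: the more mutable the fact, the sooner it's re-examined.
    REVIEW_INTERVAL = {
        0: None,                 # immutable: never re-examine
        1: timedelta(days=365),  # slow-moving: yearly
        2: timedelta(days=30),   # volatile: monthly
    }

    class Fact:
        def __init__(self, claim, mutability, source=None, premises=()):
            self.claim = claim
            # A deduction is only as stable as its shakiest premise.
            self.mutability = max([mutability] + [p.mutability for p in premises])
            # The source only needs retaining when the fact can go stale.
            self.source = source if self.mutability > 0 else "standard references"
            self.last_checked = date.today()

        def due_for_review(self):
            interval = REVIEW_INTERVAL[self.mutability]
            return interval is not None and date.today() - self.last_checked > interval

    wegen = Fact("'wegen' takes the genitive", mutability=0)
    solaris = Fact("Solaris is free for i386", mutability=2,
                   source="vendor download page, as of 2006")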

[identity profile] philipstorry.livejournal.com | Tue 2006-03-21 20:54
Some initial thinking about artificial intelligence led me to a storyline for a novel, which I've been working on for some time.

I'd agree that making an AI human-like is nuts. I think that they'd be incredibly rational (due to the lack of chemical modifiers for their thought processes), leading to an odd, long-term view. Perhaps static or some quantum effect could impart fluctuations in operation that are similar to emotions, but I have my doubts.

I have no doubt that an AI would have perfect recall, though. The interesting problem is how to give an AI the "spark" - the impulse to continue living, to solve problems and to create new concepts/devices, and to occupy its time of its own accord. I'm pretty sure (although I can't say why) that it needs to be sentient to have that "spark", which implies a very advanced AI.

Interestingly, a sleep cycle for the AI would be needed to clean up its data storage - it must be able to link items to other items, and some kind of maintenance cycle is probably needed to ensure that this happens. During that cycle, the AI could still be running - but it would have limited access to specific memory segments, and it would probably be safest if it used a temporary store for all writes until the process completed - providing for an odd sleep analogue.
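
A minimal Python sketch of that temporary-store idea (everything here is invented for illustration): while maintenance works over the long-term store, new experiences land in a side buffer that is merged back once the cycle completes.

    class AIMemory:
        def __init__(self):
            self.store = {}          # long-term linked memory
            self.write_buffer = {}   # side store used during 'sleep'
            self.sleeping = False

        def remember(self, key, value):
            # During maintenance, writes are diverted rather than blocked.
            target = self.write_buffer if self.sleeping else self.store
            target[key] = value

        def sleep_cycle(self, maintain):
            self.sleeping = True
            try:
                maintain(self.store)  # relink, reweight, prune
            finally:
                # Merge anything experienced during the cycle, then wake.
                self.store.update(self.write_buffer)
                self.write_buffer.clear()
                self.sleeping = False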

Otherwise, the AI would experience what you're describing with its memories.

This leads to the interesting idea that the AI would eventually hit an unusual problem... Although its life could be practically limited by the lifetime of the universe itself (given the right conditions), the AI would find that the longer it lives, the longer it needs to sleep for.

Poor AIs. I wonder if we should make them dream as they sleep, or just torture them with existence without memories for a period in each cycle?
[personal profile] pm215 | Tue 2006-03-21 22:23
I think you ought to be able to come up with something analogous to incremental garbage collection to allow the AI to ignore the tidyup happening in the background, really.
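
Something like this, perhaps: a toy Python sketch of bounded-slice maintenance (the linking test is just a stand-in), where each step does a fixed amount of tidying and then hands control back to the foreground "thinking":

    import itertools

    def related(a, b):
        # Stand-in linking test: shared words between two textual memories.
        return bool(set(a.split()) & set(b.split()))

    def incremental_maintenance(memories, slice_size=100):
        # Generator: check a bounded chunk of memory pairs per step,
        # yielding any newly found links before resuming.
        pairs = itertools.combinations(memories, 2)
        while True:
            chunk = list(itertools.islice(pairs, slice_size))
            if not chunk:
                return
            yield [(a, b) for a, b in chunk if related(a, b)]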
[identity profile] philipstorry.livejournal.com | Wed 2006-03-22 14:22
The problem with incremental garbage collection is one of scale. When the AI is fresh and shiny and new, there's no issue. But when it has a couple of billion years' worth of accumulated information, on-the-fly garbage collection would be horrific.

There has to be a method of doing this, but I tend to favour a sleep cycle which allows for thorough pruning. Not to say that an AI actually sleeps - it might just slow down whilst garbage collection occurs - but a rough cycle is probably in order. That doesn't mean that the garbage collection can't be done on the fly, though.

My main concern is that I'm not sure it's "just" garbage collection. If you want to build good solid links between data, then you may find yourself doing far more than just tidying up the leftovers from deletions - you may want to be testing for missed links, checking the weightings applied to links, and so forth. That's going to be a lengthy process, and doing that in the background seems like a good way to use all your time linking rather than thinking.

Of course, maybe linking is thinking... ;-)
[identity profile] cartesiandaemon.livejournal.com | Wed 2006-03-22 21:35
And indeed, you may have too much data to consider all combinations and may need to seek pieces that can be productively combined by a non-exhaustive search, for instance putting aside some time to consider random pairs. We could call it "dreaming" :)
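
A toy Python sketch of that budgeted random-pair search (the "productive" test is a stand-in, and the budget is arbitrary):

    import random

    def looks_productive(a, b):
        # Stand-in test for a promising combination: shared words again.
        return bool(set(a.split()) & set(b.split()))

    def dream(memories, budget=1000, seed=None):
        # Spend a fixed budget sampling random pairs, rather than
        # trying all combinations exhaustively.
        rng = random.Random(seed)
        samples = (rng.sample(memories, 2) for _ in range(budget))
        return [pair for pair in samples if looks_productive(*pair)]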

Though if you can design it, you'd think you could make sleep a parallel thread that happened except when some specific intensive processing was needed.
[identity profile] philipstorry.livejournal.com | Wed 2006-03-22 21:49
Indeed, I think dreaming is possible.

It all depends on how well you partition the maintenance threads from the other "active consciousness" threads. If they're completely separate, you'll probably get no dreams. If they have a real knock-on effect on the other threads - memories and experiences appearing or disappearing, combining and separating - then you could call that dreaming. It's certainly some kind of hallucination, at the very least. :-)

Hallucinating computers. Cool!

I'd like to see the maintenance stuff as a background task that runs whilst the AI is still "up" - in fact, that's one of the differences I envisage between AIs and us. AIs get to almost choose when they dream, and how to balance dream time against runtime. They can opt for complete temporary shutdown and get it over with, or run it so slowly as a background thread that by the time it's finished, they need to start it again - providing an almost permanent dream state.

Danged AIs. They have all the fun!
[identity profile] cartesiandaemon.livejournal.com | Wed 2006-03-22 22:40
And you're writing a story suggested by this?
[identity profile] philipstorry.livejournal.com | Wed 2006-03-22 23:01
Somewhat suggested by AIs, yes.

I have an entire universe in my mind, in which AIs play a major part. One of the stories set in that universe is about the first ever sentient artificial intelligence. (Created, naturally, by the first ever biological sentient intelligence.) It focuses on how the society that creates it reacts to it, and how it reacts to them.

It's one story of many in that universe. But with the longevity of an AI, the danged things keep cropping up in other stories as supporting characters.

I wouldn't hold your breath to read the story, though. It's been in my mind for years now, and I'm writing down only notes and snippets...

This feels like my magnum opus, and as such I'm determined to write the main story only after I've written surrounding stories which introduce the universe gently - and give me plenty of practice writing, so that I don't foul up the important stories! :-)
[identity profile] cartesiandaemon.livejournal.com | Thu 2006-03-23 13:57
Cool! I like your plan; I've often felt similarly. Though it seems most people end up with their best work being one of the subsidiary ones :) Still, it's a good thing to achieve in life.
[identity profile] philipstorry.livejournal.com | Sun 2006-03-26 20:26
It's not a case of wanting these stories to be my absolute best, but wanting to at least do them justice.

As a very unartistic person, there's usually a world of difference between what's in my head and what I create in the physical world. A good analogy would be working clay - the first time you do it, you end up with something that looks like a lump of clay hammered into shape by an idiot with no talent.

I want to avoid that by practicing as much as possible before I take some clay to these stories, so that they look at least half decent when I write them.

If they do turn out to be my best, then so be it. If I go on to write something better, that's OK too. But the important thing is that I'm happy with the results... That I can look back on it and be satisfied that at least it wasn't a complete shambles. ;-)
[personal profile] joshdavis | Wed 2006-03-22 19:03
Poor semantic memory
The Intarweb is your friend for researching things.
It's a constant battle to disprove cruft and outdated 'knowledge', to reprocess patterns, and to reassess beliefs.

Neural networks are like great compression algorithms.

You're not just replacing a single, unrelated entity each time.
You're having to retrain all of the other bits of knowledge which might have used the one bit as an underlying part of the compression "tables".

Unfortunately, often you'll replace something, and some things will come up and get reprocessed, but some will just get corrupted and forgotten until they come along later to bug you.