simont ([personal profile] simont) wrote 2008-04-13 10:55 am

Thoughts on thoughts (IV)

Gosh; it's been a couple of years since I last made a post in this irregular series, which makes it quite irregular indeed.

I had coffee with [livejournal.com profile] feanelwa a couple of weeks ago, and we had a conversation in which it occurred to me that some kinds of programming, perhaps particularly at the level where you're only just getting the hang of it, are a fundamentally introspective process. If you want to program a computer to be able to do some task your own mind already knows how to do, one way to start working out how is to do it or imagine yourself doing it; then you watch your mind's process of thinking about it, closely enough to break it down into smaller steps. Then you write each of those steps in your program, perhaps by applying the same technique again. In other words, you're reverse-engineering algorithms and procedures out of your own subconscious: converting procedural knowledge into declarative. It had never occurred to me to think of it in those terms before, but I'm glad I did, because I've been strongly introspective from a pretty early age and now I feel as if I have a better explanation for why it comes naturally to me.

One obvious thing my brain does, of course, is to think, in the sense of self-aware world-aware general-purpose sentience. It is a source of occasional mild frustration to me that this process doesn't seem to be easily susceptible to the above technique of reverse engineering. It feels (in principle, before you find out how hard a problem AI really is) as if being a competent programmer with the ability to look hard at a sentient mind from the inside really ought to be sufficient to allow one to acquire an understanding of how to replicate the same processes in a computer.

(Yes, I know I've said before – in the previous post in this series and elsewhere – that I think constructing true machine sentiences would generally be a bad idea for practical and/or moral reasons. I haven't recanted that, really. But I'm not immune to the lure of curiosity; I want to know how to do it even if I think it probably shouldn't be done, and if I did know how then I would probably feel a strong temptation to try it just to find out whether it worked. So, probably best all round that I remain ignorant.)

Anyway. So recently I was imagining myself sitting down and writing a program-that-thinks, just to see what would be the first point where my imagination realised it wasn't sure what came next. It occurred to me that one of the very first big problems you run up against is the imprecise definition of almost all concepts used in normal human thought (or at least those outside pure maths). For almost any commonly used concept – ‘person’, ‘home’, ‘game’, ‘thought’, ‘happiness’, and so on – it's fairly easy to find borderline cases where there is disagreement or uncertainty about whether something qualifies as an instance of it.
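
To make that fuzziness concrete, here's a toy sketch – the concept, the thresholds and the numbers are all invented for illustration, not a serious model – of the difference between a crisp predicate and a concept with a borderline region:

```python
# A toy illustration (invented thresholds, not a serious model) of a
# crisp predicate versus a fuzzy concept with a borderline region.

def is_tall_crisp(height_cm):
    """Classical logic: every height is either tall or not."""
    return height_cm >= 180

def tall_membership(height_cm):
    """Fuzzy version: degree of membership in 'tall', from 0 to 1,
    rising linearly across the borderline region 170-190 cm."""
    if height_cm <= 170:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 170) / 20

for h in (165, 175, 180, 185, 195):
    print(h, is_tall_crisp(h), round(tall_membership(h), 2))
```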

Naturally, the fuzziness of such concepts makes life difficult for software written in the traditional precise, algorithmic style. You can't typically apply standard propositional logic to a world of fuzzy concepts and expect to get reliable answers back out: every step of your reasoning has some wiggle room, and with a long enough chain of steps you eventually find that too many of them have wiggled in the same direction, so that the final link of the chain points in precisely the opposite direction from the first. People have worked on various approaches to computer inference which either try to avoid getting into this situation (e.g. ‘fuzzy logic’, or tagging every statement with a probability and using Bayesian methods to determine the certainty level of every deduction you make), or which try to avoid getting too confused once they are in it (e.g. by fiddling with the rules of propositional logic so that a single contradiction no longer lets the system deduce that anything at all, true or false, is true). To some extent these have helped, but it's not very surprising that the most successful attempts to make computer programs compete with intelligent people have come from fields fairly close to pure mathematics, such as chess, where this sort of thing isn't much of a problem in the first place.
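
As a toy illustration of why long chains of slightly-uncertain steps go wrong – the 0.9 figure and the assumption that the steps are independent are mine, purely for illustration – consider the probability-tagging idea and multiply the confidences along the chain:

```python
# A toy sketch of the 'tag every statement with a probability' idea.
# The 0.9 figure and the independence assumption are illustrative
# simplifications, not how a real inference system would do it.

def chain_confidence(step_confidences):
    """Confidence in the conclusion of a chain of inferences, crudely
    assuming the steps are independent and the conclusion holds only
    if every step does."""
    conf = 1.0
    for c in step_confidences:
        conf *= c
    return conf

# Ten steps, each of which we're individually 90% sure of:
print(chain_confidence([0.9] * 10))   # ~0.35
# Thirty such steps and the conclusion is close to worthless:
print(chain_confidence([0.9] * 30))   # ~0.04
```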

But what occurred to me, when trying to imagine sitting down and writing a thinking program from scratch, is that human minds aren't merely capable of dealing with fuzzily specified concepts. Before we deal with them, we must first construct them.

In fact, in some sense each of us has constructed, in our own brain, every fuzzy concept we ever think about. Even if in a given case that construction didn't occur ab initio but as a result of someone explaining the concept to us in words, those words are themselves imprecise; so in order to acquire anything one might call an understanding of the new concept, we have had to think about what the explanation actually means and what sorts of things it might or might not cover, and then construct the concept in our head in basically the same way as we might have done without the explanation. All an explanation really does is show us where to look for a new concept; we have to build our own actual understanding of it once we can see it.

This was important to me while trying to imagine how a program would think like a human, because I suddenly realised that jumping straight to an attempt to devise modes of reasoning which can operate effectively given some fuzzy concepts is putting the cart before the horse. The very first thing a program would have to do would be to actually construct its fuzzy concepts in the first place; reasoning with them comes later.

So how, and why, do we construct fuzzy concepts in the first place? I think we do it by looking at the world and seeing regularities in it. Initially, those regularities are in the slew of raw data coming in from our perception organs; so we quickly form a lot of fuzzy concepts for kinds of thing we commonly see – initially concepts like ‘round thing’ and ‘straight thing’, later on ‘person’ or ‘house’ or ‘car’. Later still we apply the same process more introspectively, which I'll come back to in a moment.

So I saw the process of concept-formation in my imagination as basically a statistical exercise. Imagine a sheet of paper with a lot of dots drawn on it. Suppose the dots aren't evenly distributed, so there are irregularities of various kinds. A human would have no difficulty in pointing and saying ‘look, there's a particularly intense cluster of them’ or ‘there's a mostly empty region’ or ‘hey, there's a set of dots all in a line’. This, I suddenly felt, is basically analogous to the process of concept-forming: a human brain is given a large amount of perceptual or introspective input, and its primary job is to find statistical, approximate regularities and patterns in it. Then it reapplies those regularities to form expectations and opinions about things beyond its immediate perception.
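
Here's a toy version of that dots-on-paper exercise – the grid size, the threshold and the cluster's position are all arbitrary choices of mine, just to make the statistical pattern-spotting concrete:

```python
# A toy version of the dots-on-paper exercise: scatter some points, add
# one deliberately dense patch, and flag the grid cells whose counts are
# improbably high. All the parameters here are arbitrary.

import random
from collections import Counter

random.seed(0)

# Background: dots scattered uniformly over a 10x10 sheet.
dots = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(200)]
# A 'particularly intense cluster' centred inside the cell (7, 3).
dots += [(random.gauss(7.5, 0.3), random.gauss(3.5, 0.3)) for _ in range(60)]

# Count dots per 1x1 grid cell.
counts = Counter((int(x), int(y)) for x, y in dots)

# Flag cells holding far more dots than a uniform scatter would suggest.
average = len(dots) / 100
clusters = {cell: n for cell, n in counts.items() if n > 4 * average}
print(clusters)   # expect the cell (7, 3) to stand out
```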

The interesting part is that this process isn't only applied to the data coming in from perception. By watching the inside of our own brain as concepts move past each other, one might observe a general regularity in the patterns in which they recur, leading one to derive things such as rules of reasoning deduction, inference, analogy – as fuzzy concepts in their own right. In other words, we don't have to be shown how to reason or have to have it hard-wired into the brain at source; we intuitively develop our own ideas of how to think, simply by noticing that certain things seem to work in many cases and forming a pattern, then later on arranging other concepts deliberately into the same pattern and seeing where the conclusion of the pattern leads us.
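
A toy sketch of that 'noticing what seems to work in many cases' – the episodes and the 0.6 cut-off are made up purely for illustration – might amount to counting how often one observation follows another and keeping the regularities that hold most of the time:

```python
# A toy sketch of forming rough rules of inference by noticing what
# 'seems to work in many cases'. Episodes and threshold are invented.

from collections import defaultdict

# Each episode: (things observed beforehand, things observed afterwards).
episodes = [
    ({"dark clouds"}, {"rain"}),
    ({"dark clouds", "wind"}, {"rain"}),
    ({"dark clouds"}, {"rain", "cold"}),
    ({"sunshine"}, {"warm"}),
    ({"dark clouds"}, {"dry"}),
]

pair_counts = defaultdict(int)
antecedent_counts = defaultdict(int)
for before, after in episodes:
    for a in before:
        antecedent_counts[a] += 1
        for b in after:
            pair_counts[(a, b)] += 1

# Keep only the regularities that held most of the time: these become
# home-grown, fuzzy 'if A then usually B' rules.
rules = {
    (a, b): count / antecedent_counts[a]
    for (a, b), count in pair_counts.items()
    if count / antecedent_counts[a] >= 0.6
}
print(rules)   # includes ('dark clouds', 'rain') with strength 0.75
```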

It's only later on that we go to school and learn things like the abstraction of those ideas into formal propositional logic. (Which, incidentally, was invented by people who had the introspective ability to look at those intuitive fuzzy reasoning rules in their own heads and reverse-engineer out the most important points, in the manner I described above referring to programming.) Also at school we tend to have to have it beaten into our heads that not all of these modes of reasoning are as reliable as each other, and that (for example) reasoning by analogy is only a rough guide to what might be true and usually needs to be supported by a more rigorous argument if you want to conclude that something you've thought up by analogy actually is true. But in the absence of school, we'd learn this stuff anyway by hard knocks if we had to; indeed, before schools were ever thought of, we did have to.

So this view of sentience is one in which the fundamental act of intelligence is this statistical and approximate pattern-recognition process which takes vague and imprecise irregularities in data and forms concepts describing them. If you can do that, and do it well, nearly everything else a human brain does follows naturally from it – and you don't need to invent fuzzy logic to deal with the concepts, because the concept-forming framework itself will invent its own forms of logic and self-correct when they aren't quite right.

Of course, I'm well aware that all of this is no more likely to be true than any other crackpot theory about AI dreamed up by someone out of nothing but introspection. And I'm sure that if I did sit down and try to write an intelligent program on this basis (which I couldn't even if I wanted to, because statistics isn't really my thing – I identify a need for it above, but doing it is not my forte), it would turn out to be a lot more complicated than I've just made it sound; there would be no end of weird emergent problems owing to the concept-former having some undesirable property, and as soon as you started tweaking the rules to eliminate those problems you'd find you had wallpaper-bubble syndrome with a vengeance. So I'm not under the impression that I've just revolutionised AI; but I do think that I've at the very least given myself an interesting new way to think about thinking and to perceive what happens in my own brain.

It's also a little depressing, in a way. Suppose it's all true: suppose that the fundamental function of sentience on which all else is based is this concept-forming process, and that the concepts thereby formed are always fuzzy and imprecise, and that it's vital to the whole thing working that they should be so. Then we deduce … that the only thing enabling human beings to function at all is that they are inaccurate. The only thing which enables us to ever get anything even vaguely right is our ability to get things wrong, or at best only vaguely right.

A lot of hard SF which talks about artificial intelligences tends to see them as either inherently, or at least capable of being made, better, faster and above all more precise than human minds. Suddenly I feel that this is a contradiction in terms: a more precise mind would probably fail to be a mind at all. It is our imprecision which enables us to function in the first place, and despite its perceptible downsides, one cannot construct an AI which doesn't suffer from them because its upsides are all-pervasive and indispensable. Intelligence is a huge 90% solution, a great con trick, a hideous hack, and it's a miracle it works at all let alone as well as it does.

[personal profile] aldabra 2008-04-13 11:09 am (UTC)(link)
Introspecting myself, I think there's great scope for an AI made on this model to be able to think arbitrarily more precisely than I do, because I am limited by headspace constraints, which caps the precision of the concepts I can maintain. Precision happens by nesting concepts, and it doesn't follow from the outer concepts being fuzzy that the inner concepts are as fuzzy. I think if you had a mind built on hardware that expanded over time, rather than starting to contract again after twenty years, you might be able to start with this and end up somewhere more precise.

Possibly by modularising? It seems a great constraint that all one's specialist knowledge has to fit inside the same head. If your AI could over time get access to new sub-systems to populate with specialist knowledge, it could maintain more attention on an overview. Internalise organisational structure?

[personal profile] pm215 2008-04-13 11:39 am (UTC)(link)
Also, I think there's scope for:

* standard human brain, but make it run much faster
* standard human brain, but with connection to more traditional "computer" -- instant checking of facts, which ought to make it easier for the AI to spot where its fuzziness is causing problems
* make the AI less impatient; the human brain has evolved to take lots of shortcuts and not necessarily bother thinking things through, because for hard real-time tasks like "avoid getting eaten by lion" you can't afford to take the time to do that. You could make the AI more contemplative.

[personal profile] aldabra 2008-04-13 11:43 am (UTC)(link)
Yes. You could dispense with having to direct the majority of your thoughts towards keeping fed, housed and warm. Although then you might end up with motivation problems and it devoting all its energies to developing innovative AI porn.

[identity profile] feanelwa.livejournal.com 2008-04-13 11:53 am (UTC)(link)
If you make it less impatient it won't check the facts instantly :)

[personal profile] fanf 2008-04-13 08:10 pm (UTC)(link)
There's a lot of experience from practical AI (such as machine translation) that the more data you can bring to bear, the more accurate you can be. So I think Simon's conclusion in the last paragraph is wrong: AI can be better if it can be bigger than the brain, even if the basic implementation technique are the same.