Thoughts on thoughts (IV)
Gosh; it's been a couple of years since I last made a post in this irregular series, which makes it quite irregular indeed.
I had coffee with feanelwa a couple of weeks ago, and we had a conversation in which it occurred to me that some kinds of programming, perhaps particularly at the level where you're only just getting the hang of it, are a fundamentally introspective process. If you want to program a computer to do some task your own mind already knows how to do, one way to start working out how is to do it, or to imagine yourself doing it; then you watch your mind's process of thinking about it, closely enough to break it down into smaller steps. Then you write each of those steps in your program, perhaps by applying the same technique again. In other words, you're reverse-engineering your own mind.
One obvious thing my brain does, of course, is to think, in the sense of self-aware, general-purpose reasoning; so the obvious question is whether the same introspective trick could be applied to writing a program that thinks.
(Yes, I know I've said before that this kind of introspection is an unreliable guide to what the brain is really doing; everything below should be read with that pinch of salt in mind.)
Anyway. So recently I was imagining myself sitting down and writing a program to think in the way a human mind does, and trying to watch what my own mind did while I thought. One of the first things that struck me was how much of human thought is conducted in terms of concepts which are fuzzy and imprecise rather than sharply defined.
Naturally, the fuzziness of such concepts makes life difficult for software written in the traditional precise and algorithmic sense. You can't typically apply standard propositional logic to a world of fuzzy concepts and expect to get reliable answers back out, because every step of your reasoning has some wiggle room, and with a long enough chain of steps you eventually find that too many of them have wiggled in the same direction and the final link of the chain is pointing in precisely the opposite direction from the first. People have worked on various approaches to computer inference which either try to avoid getting into this situation at all (e.g. ‘fuzzy logic’, or tagging every statement with a probability and using Bayesian methods to determine the certainty of every deduction you make), or try to avoid getting too confused once they're in it (e.g. by fiddling with the rules of propositional logic so that a single contradiction no longer lets the system deduce that anything at all, however false, is true). To some extent these have helped, but it's not very surprising that the most successful attempts to make computer programs compete with intelligent people have been in fields fairly close to pure mathematics, such as chess, where this sort of thing isn't much of a problem in the first place.
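To make the wiggle-room point concrete, here's a minimal sketch (not any real inference system, and assuming for simplicity that the errors at each step are independent) of how confidence drains away over a long chain of individually quite reliable deductions:

```python
# Toy illustration: even when each individual inference step is quite
# reliable, confidence in the end of a long chain of such steps decays
# geometrically, assuming the steps' errors are independent.

def chain_confidence(step_reliability: float, steps: int) -> float:
    """Probability that every step in the chain held up."""
    return step_reliability ** steps

for steps in (1, 5, 10, 20, 50):
    print(f"{steps:3d} steps at 95% reliability each -> "
          f"{chain_confidence(0.95, steps):.0%} confidence in the conclusion")
```

Fifty steps at 95% apiece leave you under 10% sure of where you've ended up, which is roughly the situation the fuzzy and probabilistic approaches above are trying to cope with.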
But what occurred to me, when trying to imagine sitting down and writing a thinking program from scratch, is that human minds aren't merely capable of dealing with fuzzily specified concepts. Before we deal with them, we must first construct them.
In fact, in some sense each of us has constructed, in our own brain, every fuzzy concept we ever think about. Even if in a given case that construction didn't occur ab initio but as a result of someone explaining the concept to us in words, those words are themselves imprecise; so in order to acquire anything one might call an understanding of the new concept, we have to think about what the explanation actually means and what sorts of things it might or might not cover, and then construct the concept in our head in basically the same way as we might have done without the explanation. All an explanation really does is to show us where to look for a new concept; we have to build our own actual understanding of it once we can see it.
This was important to me while trying to imagine how a program would think like a human, because I suddenly realised that jumping straight to an attempt to devise modes of reasoning which can operate effectively given some fuzzy concepts is putting the cart before the horse. The very first thing a program would have to do would be to actually construct its fuzzy concepts in the first place; reasoning with them comes later.
So how, and why, do we construct fuzzy concepts in the first place? I think we do it by looking at the world and seeing regularities in it. Initially, those regularities are in the slew of raw data coming in from our perception organs; so we quickly form a lot of fuzzy concepts for the kinds of thing we commonly see – familiar people, kinds of object, recurring sights and sounds – and attach a rough mental label to each one.
So I saw the process of concept-formation as essentially a statistical one: noticing that certain combinations of features keep turning up together in the stream of incoming data, gathering each such cluster of regularity under a single fuzzy label, and then recognising the label again whenever something sufficiently similar comes along.
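As a crude sketch of what I mean (the features, thresholds and data here are all invented purely for illustration, and I'm certainly not claiming real brains literally work this way), you could imagine concept-formation as a very simple kind of online clustering: each new observation is either absorbed into the nearest existing fuzzy concept, or becomes the seed of a new one.

```python
# Crude sketch of "concept formation as clustering": each observation
# (here just a 2-D feature vector) either joins the nearest existing
# fuzzy concept or founds a new one. All numbers are made up.

import math

class Concept:
    def __init__(self, first_example):
        self.centre = list(first_example)
        self.count = 1

    def absorb(self, example):
        # Move the centre towards the new example (incremental mean).
        self.count += 1
        self.centre = [c + (x - c) / self.count
                       for c, x in zip(self.centre, example)]

def form_concepts(observations, similarity_threshold=1.0):
    concepts = []
    for obs in observations:
        nearest = min(concepts, default=None,
                      key=lambda c: math.dist(c.centre, obs))
        if nearest is not None and math.dist(nearest.centre, obs) < similarity_threshold:
            nearest.absorb(obs)
        else:
            concepts.append(Concept(obs))
    return concepts

# Two loose clumps of "perceptual data" and one outlier.
data = [(0.1, 0.2), (0.3, 0.1), (0.2, 0.4),   # clump A
        (5.0, 5.1), (5.2, 4.9), (4.8, 5.3),   # clump B
        (10.0, 0.0)]                           # something new
for i, concept in enumerate(form_concepts(data)):
    print(f"concept {i}: centre={concept.centre}, examples={concept.count}")
```

The interesting property is that nobody tells the program how many concepts there are or where their boundaries lie; the concepts, such as they are, fall out of the regularities in the data.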
The interesting part is that this process isn't only applied to the data coming in from perception. By watching the inside of our own brain as concepts move past each other, one might observe a general regularity in the patterns in which they recur, leading one to derive things such as rules of reasoning – the intuitive sense that when one kind of thought turns up, another kind reliably tends to follow.
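In the same hand-waving spirit (the ‘episodes' and concept names below are entirely made up), one can sketch how a fuzzy rule of reasoning might fall out of nothing more than counting how often concepts show up together:

```python
# Toy sketch of deriving an intuitive "rule of reasoning" from
# regularities in how concepts co-occur: if concept B is present in
# nearly every episode containing concept A, propose the fuzzy rule
# "A suggests B". The episodes are invented for illustration only.

from collections import Counter
from itertools import permutations

episodes = [
    {"rain", "wet_ground", "umbrellas"},
    {"rain", "wet_ground"},
    {"sprinkler", "wet_ground"},
    {"rain", "wet_ground", "puddles"},
    {"sunshine", "dry_ground"},
]

def suggested_rules(episodes, min_confidence=0.8):
    seen = Counter()        # how often each concept appears at all
    together = Counter()    # how often each ordered pair co-occurs
    for episode in episodes:
        for concept in episode:
            seen[concept] += 1
        for a, b in permutations(episode, 2):
            together[(a, b)] += 1
    for (a, b), n in together.items():
        confidence = n / seen[a]
        if confidence >= min_confidence:
            yield a, b, confidence

for a, b, conf in suggested_rules(episodes):
    print(f"'{a}' suggests '{b}' (held in {conf:.0%} of episodes)")
```

Again, no rule is programmed in: ‘rain suggests wet ground' simply emerges, with a confidence attached, from the statistics of what has been observed, while the reverse rule ‘wet ground suggests rain' doesn't quite make the cut, because sprinklers exist.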
It's only later on that we go to school and learn things like the abstraction of those ideas into formal propositional logic. (Which, incidentally, was invented by people who had the introspective ability to look at those intuitive fuzzy reasoning rules in their own heads and reverse-engineer them into something formal and precise.)
So this view of sentience is one in which the fundamental act of intelligence is this statistical and approximate pattern-recognition: applied first to the raw data of perception, and then again, recursively, to the contents of the mind itself.
Of course, I'm well aware that all of this is no more likely to be true than any other crackpot theory about AI dreamed up by someone out of nothing but introspection. And I'm sure that if I did sit down and try to write an intelligent program on this basis (which I couldn't even if I wanted to, because statistics isn't really my thing – and this whole idea leans heavily on statistics), I'd rapidly discover any number of reasons why it doesn't work in practice.
It's also a little depressing, in a way. Suppose it's all true: suppose that the fundamental function of sentience on which all else is based is this concept-forming pattern-recognition, statistical and approximate to its core. Then the imprecision of our minds isn't a surface flaw that could in principle be polished away; it's built into the very foundation on which everything else rests.
A lot of hard SF which talks about artificial intelligences tends to see them as either inherently, or at least capable of being made, better, faster and above all more precise than human minds. Suddenly I feel that this is a contradiction in terms: a more precise mind would probably fail to be a mind at all. It is our imprecision which enables us to function in the first place, and despite its perceptible downsides, one cannot construct an AI which doesn't suffer from them, because its upsides are all-pervasive: the very same fuzziness that produces our mistakes is what makes concept-formation, and hence thought, possible at all.
no subject
I don't think you can (necessarily) observe the implemented behaviour (in sufficient detail) as a program running *inside the VM*, for your average (real, computer) VM. Just to pick something at random, the VM might not have any facilities for letting you load new code into it. And even if you can determine the behaviour exactly, knowing that "executing instruction X causes effect Y" doesn't mean you know how to actually implement effect Y in your own VM.
(I'm reminded of Greg Egan's _Permutation City_, which includes some entities who think they're determining the underlying rules of physics but are actually dealing with the behaviour of the VM they live in.)