I've always been a little suspicious of attempts to design a human-like artificial intelligence. Of course it often leads to good SF, which I'm strongly in favour of :-) but many things which make good SF are not things you'd want to go round doing in reality.
In particular, I feel strongly that making a computer behave like a human mind misses the vital point about computers, which is that they're good at things humans are not, such as repeatability, reliability, extremely fast linear processing, not getting bored and so on. The most sensible way to use a computer, it therefore seems to me, is to apply it to jobs where those features are virtues, not to try to convert it into a second-rate human. Also, if you want a human-level intelligence, it's surely easier just to hire one: there are a silly number of billions of us in the world already, and quite a few are looking for work!
Every so often I notice a particularly unhelpful feature of the human brain which reinforces this opinion, by making me feel even more strongly that what we need is a partnership with devices which don't have the same weakness, not an attempt to construct yet more things which do.
Today's annoying mental weakness is a failure to track the origin of data. I don't know about anyone else's, but my brain is very, very bad at remembering why it thinks a particular thing; I often find myself needing to know this, and it usually takes me significant and irritating effort to track it down, if indeed I manage to do it at all.
This leads to all sorts of practical inconveniences. Probably the biggest is that if a significant fact which I've known for a while, and from which I've made a load of deductions, then changes under my feet, it's very difficult to reorganise my brain to cope: for a long time afterwards, out-of-date ‘facts’ will constantly pop up in my stream of thought and present themselves as if they're valid in their own right, and if I'm lucky I recognise each one as having been derived from something which is no longer true. If I'm unlucky, I won't even notice, and will continue to act on the assumption that the derived fact is still accurate, and make a fool of myself. What I'd like to be able to do is to go through my brain pro-actively, find all the things derived from whatever has just stopped being true, and clean them all out in one go so that they don't keep annoying me for days, weeks or months afterwards.
The other particularly annoying effect is when someone challenges me on something I'd taken as fact, and requires me to defend the belief. Of course in this situation it's terribly useful to know why I think it: (a) so I can put together a coherent defence, and (b) in case it does turn out to be based on out-of-date information and I hadn't noticed.
I suppose this is exactly what is meant by the phrase ‘challenged to re-examine one's belief’. I think what I'm trying to say is that an ideal intelligence would never need to do this in response to a challenge, because it would always have a clear idea of its basis for believing any given thing; and whenever such a basis shifted, any beliefs derived from it would immediately be at least marked as obsolete, or better still pro-actively re-deduced on the spot.
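That marking-as-obsolete scheme is, in effect, dependency tracking of the sort build systems and spreadsheets do. A minimal sketch in Python of what I'm wishing for (all the names here are my own invention, purely for illustration): each belief records the premises it was deduced from, so retracting one premise flags every downstream conclusion in a single sweep.

```python
# Illustrative sketch only: a belief graph where each derived belief
# remembers its premises, and retracting a premise invalidates every
# conclusion built on it, transitively.

class Belief:
    def __init__(self, statement, premises=()):
        self.statement = statement
        self.premises = list(premises)   # beliefs this one was derived from
        self.dependents = []             # beliefs derived from this one
        self.valid = True
        for p in self.premises:
            p.dependents.append(self)

    def retract(self):
        """Mark this belief obsolete and propagate to everything built on it."""
        if self.valid:
            self.valid = False
            for d in self.dependents:
                d.retract()

# A base fact and a chain of deductions made from it.
office = Belief("the office is on Elm Street")
route = Belief("take the number 7 bus", premises=[office])
alarm = Belief("set the alarm for 7am", premises=[route])

office.retract()          # the fact changes under my feet...
assert not route.valid    # ...and every derived belief is flagged at once
assert not alarm.valid
```

The ‘better still’ variant would re-run each deduction on retraction instead of merely clearing the flag, but even this crude version beats my brain's approach of letting stale conclusions pop up unannounced for weeks.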
But then, I spend a lot of time wishing I was an ideal intelligence. It's probably not all that productive a hobby.