I've mentioned before that I often find myself noticing fundamental bugs in the way the human brain works and wishing my brain was better designed.
Here's another one: my brain is often very bad at predicting how it would behave in dangerous or scary situations. It's annoyingly common for me to evaluate several courses of action in advance of an event, decide which one I like best, and then, when the time comes to actually commit myself, discover that the one I'd chosen is terribly scary now that it's physically staring me in the face rather than being considered as an abstract strategic puzzle.
If I were designing an ideal intelligence, I would give it a properly working imagination. It would be able to set up a hypothetical situation, put itself into that situation, and then reason exactly as if it were real. It would either be able to suppress, temporarily but completely, the knowledge that the situation wasn't real, or it would be able to reliably prevent that knowledge from impinging on its reasoning processes. In fact, now that I've written that either/or, I'm not entirely sure I can robustly define the difference between the two possibilities; but either way, the fundamental architecture of my intelligence would be designed so that if it decided it would react a certain way in a scary situation, you could depend on it being right.
Wouldn't it be better to work on the root cause of the discrepancy: i.e. close the gap between what you would do and what you should do?
Consider the case where Simon is arachnophobic and being chased by an escaped circus lion. There are two paths to safety. One is difficult and carries a small risk of him being eaten; the second is quick, but has a spider sitting in it.
What he *should* do is ignore the irrational fear of spiders and take the path with least likelihood of being eaten.
What he *would* do is weigh up the two fears (of spiders, and of being eaten), and depending on how intense the phobia is, he may choose the path that risks being eaten.
What he's asking for is an ideal intelligence that, when considering the hypothetical situation, correctly predicts the outcome of that weighing-up in the second case. It strikes me that that's not necessarily the optimal solution.
An intelligence that correctly predicts you'll do something really dumb when afraid is not as ideal as an intelligence that chooses the optimal course of action *despite* the fear.
Obviously if the fear is a rational one (say one of the escape paths from the lion leads through a wasp nest, and you're allergic to wasp stings), the outcome for both types of intelligence would be the same.
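To put numbers on that weighing-up, here's a toy sketch in Python. Every figure in it (the cost of being eaten, the risk on each path, the intensity of the phobia) is made up purely for illustration; the only point is that once the phobia term gets large enough, the "would" choice parts company with the "should" choice.

```python
# Toy model of the "should" vs "would" decision above. All numbers
# (cost of being eaten, per-path risk, phobia intensity) are invented
# for illustration only.

def expected_harm(p_eaten, cost_of_being_eaten=1000):
    """Objective expected harm: probability of being eaten times its cost."""
    return p_eaten * cost_of_being_eaten

def felt_harm(p_eaten, has_spider, phobia_intensity):
    """Harm as the phobic brain weighs it in the moment: the objective
    risk plus a flat penalty for having to walk past a spider."""
    return expected_harm(p_eaten) + (phobia_intensity if has_spider else 0)

paths = {
    "difficult": {"p_eaten": 0.05, "spider": False},
    "quick":     {"p_eaten": 0.01, "spider": True},
}

# What Simon *should* do: minimise objective expected harm.
should = min(paths, key=lambda p: expected_harm(paths[p]["p_eaten"]))

# What Simon *would* do: minimise felt harm, phobia included.
phobia = 200  # hypothetical intensity; large enough to flip the choice
would = min(paths, key=lambda p: felt_harm(paths[p]["p_eaten"],
                                           paths[p]["spider"],
                                           phobia))

print(should, would)  # with these made-up numbers: quick, difficult
```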
In your example with the lion and the spider, I might find it useful to evaluate such a scenario purely in my imagination as a part of the process by which I was attempting to deal with arachnophobia; a willingness to endure a spider in the course of escaping from a lion might be milestone #1 on my therapy progress chart. #2 might be willingness to endure a spider in the course of escaping from something painful but non-lethal (enraged housecat?), #3 willingness to endure a spider instead of facing a verbal bollocking, etc :-)
I don't disagree that a truly ideal intelligence would not have any fears which weren't rational and proportionate to the real danger, and thus might not need this ability as badly as I do. However, I'm a cautious and conservative engineer type, so I plan to design my truly ideal intelligence with multiply redundant safety mechanisms.