Humans find it really hard to deal with fuzzy concepts. Just because we can make a firm decision based on partial information doesn't mean the decision is sound. See, for example, "framing" in political discussion, priming, confidence tricks, the psychology of gambling, etc.
The interesting thing to build in an AI is not reasoning but motivation.
This reminds me: I have an LJ entry in my head about "statistical morality" and why we can't cope with it; that is, with actions that have a very slight negative effect on a very large number of people, or that very slightly increase risks.