Hmm. At that point I'd only have to be wary of bugs in me, not in the software. And I'm not sure my brain doesn't have native-code-execution vulnerabilities of its own. It would be very important to control the media through which I communicated with the AI. A visual channel, for example, is probably out, since we know the brain can be made to do unusual things by flashing images at it (photosensitive epilepsy being the obvious case). While I have no history of reacting to that kind of image, I'm not confident that a program with vast (though not infinite) resources couldn't find an exploit.