aldabra ([personal profile] aldabra) wrote in [personal profile] simont 2008-04-13 11:09 am (UTC)

Introspecting, I think there's great scope for an AI built on this model to think arbitrarily more precisely than I do, because I'm limited by headspace constraints, and that limits the precision of the concepts I can maintain. Precision comes from nesting concepts, and it doesn't follow from the outer concepts being fuzzy that the inner concepts are equally fuzzy. I think if you had a mind built on hardware that kept expanding over time, rather than starting to contract again after twenty years, you might be able to start with this and end up somewhere more precise.

Possibly by modularising? It seems a great constraint that all one's specialist knowledge has to fit inside the same head. If your AI could, over time, get access to new sub-systems to populate with specialist knowledge, it could keep more of its attention on an overview. Internalise organisational structure?
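To make that concrete, here is a toy sketch of what I mean (purely my own illustration, with made-up names, not a claim about how a real AI would be built): an "overview" mind that holds only coarse pointers, while arbitrarily fine detail lives in specialist sub-systems that can be bolted on after the fact.

```python
class Specialist:
    """A detachable store of domain knowledge with its own internal precision."""
    def __init__(self, domain):
        self.domain = domain
        self.facts = {}          # nested, arbitrarily fine-grained detail lives here

    def learn(self, concept, detail):
        self.facts[concept] = detail

    def answer(self, concept):
        return self.facts.get(concept, "unknown")


class OverviewMind:
    """Holds only coarse pointers to specialists, so its 'headspace' stays small."""
    def __init__(self):
        self.specialists = {}    # new modules can be attached at any time

    def attach(self, specialist):
        self.specialists[specialist.domain] = specialist

    def ask(self, domain, concept):
        # The overview never stores the fine detail itself; it delegates.
        specialist = self.specialists.get(domain)
        return specialist.answer(concept) if specialist else "no specialist yet"


if __name__ == "__main__":
    mind = OverviewMind()
    chem = Specialist("chemistry")
    chem.learn("benzene", "six-carbon aromatic ring")
    mind.attach(chem)                     # capacity added after 'birth'
    print(mind.ask("chemistry", "benzene"))
    print(mind.ask("law", "tort"))        # no module yet, so it degrades gracefully
```

The point of the sketch is just that the overview's workspace stays the same size however many specialists get added, which is the sense in which the whole thing could keep getting more precise without the top level getting any bigger.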
