Introspecting, I think there's great scope for an AI built on this model to think arbitrarily more precisely than I do, because I am limited by headspace constraints, which limit the precision of the concepts I can maintain. Precision happens by nesting concepts, and it doesn't follow from the outer concepts being fuzzy that the inner concepts are equally fuzzy. If you had a mind built on hardware that expanded over time, rather than starting to contract again after twenty years, you might be able to start from this and end up somewhere more precise.
Possibly by modularising? It seems a great constraint that all one's specialist knowledge has to fit inside the same head. If your AI could, over time, get access to new sub-systems to populate with specialist knowledge, it could keep more of its attention on an overview. Internalise organisational structure?
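The modularisation idea could be sketched roughly like this (a toy illustration, not anything from the original discussion; all class and method names here are invented): specialist knowledge lives in separate sub-systems that can be added over time, while the top level keeps only an index of which specialist covers which domain.

```python
# Toy sketch of a mind that grows by adding specialist sub-systems,
# while the top level keeps only a coarse overview of who knows what.

class Specialist:
    """Holds detailed knowledge about one narrow domain."""
    def __init__(self, domain, facts):
        self.domain = domain
        self.facts = dict(facts)  # fine-grained detail lives down here

    def answer(self, question):
        return self.facts.get(question, "unknown")


class Overview:
    """Top level: knows only which specialist covers which domain."""
    def __init__(self):
        self.index = {}  # domain -> Specialist

    def add_specialist(self, specialist):
        # A new sub-system comes online: register it, don't absorb it.
        self.index[specialist.domain] = specialist

    def ask(self, domain, question):
        # Attention stays on routing; detail is delegated downward.
        specialist = self.index.get(domain)
        if specialist is None:
            return "no specialist for this domain"
        return specialist.answer(question)


mind = Overview()
mind.add_specialist(Specialist("chemistry", {"water": "H2O"}))
mind.add_specialist(Specialist("astronomy", {"nearest star": "Proxima Centauri"}))

print(mind.ask("chemistry", "water"))         # → H2O
print(mind.ask("astronomy", "nearest star"))  # → Proxima Centauri
```

The point of the sketch is that adding a new specialist never enlarges what the overview has to hold, only its index, so the head-sized bottleneck moves out of the way.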