I want everything to be tagged with a probability and have a rule for propagating this data across deductions. (What would the rule look like? Someone must have done this.)
The rule in question is Bayes' Theorem, surely? (Strictly, the combination step below is the law of total probability; Bayes' Theorem is what you'd use to invert the conditionals.) Given that you know A with some probability, you compute P(B) as P(B|A)P(A) + P(B|~A)P(~A). The practical problem is that many deductions are in the form of implications (A=>B, itself true with some non-100% probability, since you might be mistaken even in that), so while you might have a clear idea of P(B|A), you're entirely in the dark about P(B|~A).
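As a sketch of that combination step (the function name and the numbers are mine, purely for illustration):

```python
# Propagating a probability across A => B by total probability.
# The function name and the example numbers are illustrative only.

def propagate(p_a: float, p_b_given_a: float, p_b_given_not_a: float) -> float:
    """P(B) = P(B|A)P(A) + P(B|~A)P(~A)."""
    return p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# 90% sure of A, and A makes B near-certain; but with no idea about
# B when A fails, we are stuck guessing P(B|~A) -- say 0.5:
print(propagate(0.9, 0.99, 0.5))  # ~0.941
```

Note how much of the answer hinges on the guessed P(B|~A): that last argument is exactly the quantity the comment says you're in the dark about.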
Also, if you try to use this for inductive rather than deductive reasoning then it runs into the usual problems with picking your prior. Then there's the usual set of pathological cases (a red swivel chair is supporting evidence for the statement "all ravens are black" because it's a clear example of the logically equivalent "all non-black things are non-ravens"; before the year 2000 all supporting evidence for "all emeralds are green" was also supporting evidence for "all emeralds are grue"); I'm not sure whether those can be rephrased as problems with prior choice or whether they're a further layer of annoyance even once you've sorted out your prior.
Are they pathological when almost everything[1] is a non-black non-raven? :)
I thought Bayesian thinking was supposed not to suffer from that problem, but I haven't worked out the details yet (e.g. http://plato.stanford.edu/entries/epistemology-bayesian/ doesn't satisfy me). I think the flaws in applying the reasoning might involve the alternatives (we naturally assume *most* ravens are *certainly* black) and whether we know the number of objects and the number of ravens.
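The parent's point can be made quantitative with a toy urn model (my own construction, in the spirit of I. J. Good's analyses of the paradox): compare H = "all ravens are black" against an alternative H1 with exactly one white raven, and sample a random non-black object. Finding it to be a non-raven (a red swivel chair, say) really is evidence for H, but vanishingly weak precisely because almost everything is a non-black non-raven:

```python
# Toy urn model: m non-black non-ravens; under H1 there is also one
# white raven among the non-black objects, under H there is none.
# All names and numbers here are invented for illustration.

def likelihood_ratio(m: int) -> float:
    """P(drew a non-raven | sampled a non-black object, H) /
       P(drew a non-raven | sampled a non-black object, H1)."""
    p_h = 1.0            # under H, every non-black object is a non-raven
    p_h1 = m / (m + 1)   # under H1, the white raven competes for the draw
    return p_h / p_h1

# With a million non-black non-ravens, the chair confirms H by a
# factor of only ~1.000001:
print(likelihood_ratio(10**6))
```

So under this sampling model the chair is confirming evidence, just negligibly so; whether that dissolves the paradox or merely relocates it into the choice of model is the open question above.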
[1] Perhaps even measure-theoretically almost everything.
My approach is vaguely similar; I tag everything I learn with how mutable it is, and periodically I re-examine those that are mutable to see if they’re still true. The more mutable the thing, the more often it’s re-examined. So, that „wegen“ takes the genitive in German, that Robert Clive established British India (and thus can be used as an example of the top of the social pyramid in England at the end of his life, despite starting as a clerk), that Stalin was Georgian, these are all essentially immutable and do not need to be re-examined once learned. Relatedly, the need to remember where I learned it from mostly disappears; you can say “look at the standard references.”
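A minimal sketch of such a tagging scheme, assuming nothing beyond the description above (the class name, the integer mutability scale, and the review schedule are all my own invention):

```python
# Minimal sketch of the mutability-tagging scheme; the Fact class,
# the 0..n mutability scale, and the review schedule are invented
# for illustration.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Fact:
    claim: str
    mutability: int            # 0 = essentially immutable; higher = more mutable
    source: str = ""           # mostly only worth keeping for mutable facts
    learned: date = field(default_factory=date.today)

    def next_review(self) -> "date | None":
        if self.mutability == 0:
            return None        # immutable: never needs re-examining
        # More mutable => shorter interval (an arbitrary schedule).
        return self.learned + timedelta(days=365 // self.mutability)

stalin = Fact("Stalin was Georgian", mutability=0)
solaris = Fact("Solaris is available for free for i386", mutability=4,
               source="vendor announcement")
print(stalin.next_review())    # None: never re-checked
print(solaris.next_review())   # roughly three months after learning
```

The `source` field stays empty for the immutable fact, matching the observation that provenance mostly stops mattering once a fact is tagged immutable.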
On the other hand, that CPAN’s support for random parts of the MIME specification is patchy, that Solaris is available for free for i386, and especially more domain-specific things--say, the vm sysctl calls to make NetBSD behave like a desktop machine (http://mail-index.netbsd.org/current-users/2003/12/16/0001.html)--tend to be much more mutable; and it’s in _these_ cases that where the thing was learned matters significantly and must be retained.
And something deduced from two or more of something else has the mutability of the most mutable “something else,” with the corresponding necessity to retain where it came from. This whole approach does have the limitation that your judgement of what’s immutable has to be good; for example, I learned from first principles that certain programming techniques in C were immutably good, and then the processor people went and energetically invalidated the given that all of RAM was ~equally expensive to address, so lots of those programming techniques needed to be re-addressed.
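That propagation rule for deduced facts (a conclusion is only as stable as its shakiest premise, and must keep pointers to all of them) could be sketched as follows; the function and field names are hypothetical:

```python
# Hypothetical propagation rule for deduced facts: the conclusion
# inherits the mutability of the shakiest premise and retains full
# provenance so it can be re-derived when a premise changes.

def derive(claim, premises):
    return {
        "claim": claim,
        "mutability": max(p["mutability"] for p in premises),
        "derived_from": premises,  # where it came from must be retained
    }

p1 = {"claim": "Stalin was Georgian", "mutability": 0}
p2 = {"claim": "CPAN's MIME support is patchy", "mutability": 5}
c = derive("some conclusion resting on both", [p1, p2])
print(c["mutability"])  # 5: the mutable premise dominates
```

Using `max` is the weakest-link rule: a single volatile premise forces the whole deduction onto the volatile premise's re-examination schedule.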