After skimming a lot more web pages about Bayesian inference (most not very helpful to someone like me), I think I can state the basics. Because I'm realizing that our ability to understand stuff like this depends strongly on the kinds of examples used, and on how they're described, I'll try using an example relevant to our research.

Bayesian inference is a way of reasoning that tells you how new evidence should change your expectations. Think about a yes/no question about reality, such as "Is my culture contaminated?" or "Do USS sites kink during DNA uptake?" In the real world where Bayesian reasoning is helpful, we usually don't approach such questions with a blank slate, but with some prior information about the chance that the answer is yes. (Prior is a technical term in Bayesian reasoning, but don't assume I'm using it correctly here.)

For example, our prior information about the chances of contamination might be that about one in 50 of our past cultures has been contaminated. We don't yet have any real evidence about this culture - we just want to check. Before we do the very simple test of smelling the culture, we know two things about this test. We know that the probability that a contaminated culture smells different is, say, 75%. But we also know that our sense of smell can sometimes mislead us, so that 5% of the time we think an uncontaminated culture smells different. So we have three pieces of information: one about the prior probability of contamination (2%) and two about the reliability of the smell test. Bayesian reasoning tells us how to use the information about the test's reliability to change our estimate of contamination probability.

So we sniff the culture and decide it does smell different. What if it didn't smell different? In each case, what would be the revised probability of it being contaminated? I emphasize revised because Yudkowsky's explanation emphasizes that what we're doing is using our test results to revise our previous estimate.

To proceed we need to combine the two pieces of information we had about the test's reliability with our before-test probability of contamination. One way to think about this is to spell out the different possibilities. This kind of example is easier to understand with integers than with probabilities and percentages, so let's consider 1000 suspect cultures, 20 of which will really be contaminated.

--If our culture is one of the 20 in 1000 that is contaminated:

-----75% of the time (15 cultures) we'll score its smell as different.

-----25% of the time (5 cultures) we'll score its smell as normal (a false-negative result).

--If our culture is one of the 980 in 1000 that is not contaminated:

-----5% of the time (49 cultures) we'll score its smell as different (a false-positive result).

-----95% of the time (931 cultures) we'll score its smell as normal.
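The breakdown above can be sketched as a quick calculation (a minimal sketch using the example's numbers; the variable names are just mine):

```python
# Frequency breakdown for 1000 suspect cultures (numbers from the example)
total = 1000
p_contaminated = 0.02       # prior: about 1 in 50 past cultures contaminated
p_smell_given_cont = 0.75   # contaminated cultures smell different 75% of the time
p_smell_given_clean = 0.05  # false-positive rate of the smell test

contaminated = total * p_contaminated         # 20 cultures really contaminated
clean = total - contaminated                  # 980 cultures not contaminated

true_pos = contaminated * p_smell_given_cont  # contaminated AND smell different
false_neg = contaminated - true_pos           # contaminated but smell normal
false_pos = clean * p_smell_given_clean       # clean but smell different
true_neg = clean - false_pos                  # clean AND smell normal

print(true_pos, false_neg, false_pos, true_neg)  # 15.0 5.0 49.0 931.0
```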

So if our suspect culture does smell different, the probability that it really is contaminated rises from 2% to 15/(15+49) = 23%. We used information about the reliability of our test to decide how much we should revise the original estimate. If the culture doesn't smell different, the probability that it is not contaminated would be revised up from 98% to 931/(931+5) = 99.5%.
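The same revision can be done directly with probabilities instead of counts, using Bayes' rule (a sketch, with the example's numbers; the variable names are my own):

```python
# Revising the contamination estimate with Bayes' rule
prior = 0.02               # P(contaminated), before smelling
sensitivity = 0.75         # P(smells different | contaminated)
false_positive = 0.05      # P(smells different | not contaminated)

# Overall chance a culture smells different (law of total probability)
p_different = sensitivity * prior + false_positive * (1 - prior)

# Revised estimate if it DOES smell different
posterior_contaminated = sensitivity * prior / p_different

# Revised estimate that it's clean if it does NOT smell different
p_normal = (1 - sensitivity) * prior + (1 - false_positive) * (1 - prior)
posterior_clean = (1 - false_positive) * (1 - prior) / p_normal

print(round(posterior_contaminated, 3))  # 0.234, i.e. about 23%
print(round(posterior_clean, 3))         # 0.995, i.e. 99.5%
```

These match the counts-based answers above, which is reassuring: the 1000-culture table is just Bayes' rule written out with whole numbers.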

I'll try to come back to this in another post, after the ideas have had time to settle into my brain.

Posted: September 22, 2017 at 04:04PM


Good post - you've definitely got the gist of it.

I just wrote a post discussing the background of Bayesian analysis in a bit more depth. I'd be very interested in hearing your thoughts on my explanation, if you have a moment to review it.