Welcome

Welcome to Bayesians Without Borders! In this blog we will demystify Bayesian technology and non-statistical Bayesian analysis. Topics will include: Bayesian networks of all varieties and all varieties of their application; Bayesian risk assessment; learning Bayesian networks; naive Bayes models; decision analysis and intelligent decision support; causal reasoning; Bayesian inference and argument analysis; Bayesian confirmation theory and philosophy of science. Applications we expect to deal with include: environmental management; biosecurity; bioinformatics; Bayes and the law. However, the applicability of Bayesian technology is limited only by the applicability of probability theory, so we will go well beyond these examples. (For the occasional technical post, we've added a notation page for ready reference.)

We will be inviting Bayesian researchers and analysts to post here on a semi-regular basis. In addition to your comments on posts, which are highly welcome (but will be moderated), we are open to suggestions for topics to post on, proposals from prospective authors, and unsolicited posts, which we will consider for publication. From time to time we may also post a precis or review of a relevant book.

Bayesians Without Borders is meant to be a call to Bayesians everywhere to discuss, critique and inform the public of Bayesian ideas and methods.

2 thoughts on "Welcome"

  1. I think a lot of the mysticism revolves around the issue of picking a prior, and how that essentially arbitrary choice can totally change the conclusions you arrive at from your analysis. For me, the best way to clear up the mysticism is to make the switch from probabilities to codelengths. Here's how it works. Instead of trying to find the probability of a data set (what does that even mean?), you're trying to encode it and send it to your friend. There is a one-to-one mapping between codelengths and probabilities, so nothing is lost by doing this. However, when you start thinking in the encoding-data mode, certain things become very clear:

     1) There is no question but that you have to pick a prior. The prior is just the data format that you and your friend agree on to transmit data. There can be no funny business. You cannot change the prior after seeing the data (overfit), because if you do your friend won't be able to decode it.

     2) If you get lucky by choosing a data format/prior that happens to match the data well, you will get good (short) codes.

     3) If your format is really a meta-format (that is, it allows the sender to look at the data, analyze it, develop a new specific format, and then send the data using that specific format), then obviously you need to send information about the new specific format in advance of the actual data. Furthermore, your choice of meta-format will affect the choice of specific format, and there is just nothing that can be done about this.

     4) The age-old philosophical question of zero probabilities (do we ever assign anything zero probability?) becomes in this context a rather more practical question: can our data format be used, in principle, to send any observed data set (perhaps with a very long code)?

     5) In the limit as we assume less and less about the type of data that we observe, and correspondingly make our data format more and more general, we eventually arrive at the Solomonoff distribution. Here the data format is simply the specification of a Turing machine.
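     The codelength-probability correspondence described above can be sketched numerically. The following is a minimal illustration, assuming the simplest possible "data format": a fixed Bernoulli model for a binary sequence, with codelength given by Shannon's -log2 P(data). The function name and the example data are purely illustrative.

     ```python
     import math

     def codelength_bits(data, p_one):
         """Shannon codelength -log2 P(data) for a binary sequence
         under a fixed Bernoulli 'data format' with P(1) = p_one."""
         return sum(-math.log2(p_one if bit == 1 else 1.0 - p_one)
                    for bit in data)

     # A data set with mostly ones (8 ones, 2 zeros).
     data = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]

     # Two agreed-upon formats (priors): one matches the data, one does not.
     well_matched = codelength_bits(data, p_one=0.8)    # expects mostly ones
     poorly_matched = codelength_bits(data, p_one=0.5)  # expects a fair coin

     # As point 2) notes, the format that happens to match the data
     # yields the shorter code (about 7.2 bits versus exactly 10 bits).
     print(f"p=0.8 format: {well_matched:.2f} bits")
     print(f"p=0.5 format: {poorly_matched:.2f} bits")
     ```

     Note that both formats assign every binary sequence a nonzero probability, so either can encode any observed data set, which is the practical reading of point 4).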

    • We will at some point post on these topics, including where priors come from, rational versus irrational priors and minimum message length inference (MML).
