May 2014 Bayesian Network Training with Bayesian Intelligence

We will be holding our next Introduction to BNs workshop at the Mount Albert Research Centre in Auckland, New Zealand, on May 13-14. We still have some places available, so if you would like to attend, please register at:

http://bayesian-intelligence.com/training/training-registration.php

There you will also find a draft workshop schedule with an overview of the topics covered. If you are unable to make these dates, please email me indicating your interest in attending at another time.

For more general information about our workshops, you can visit http://bayesian-intelligence.com/training/ or contact Owen Woodberry at owen.woodberry@bayesian-intelligence.com or on +61 (0)406 924 446.

Finally, please also feel free to pass this info on to anyone you know who may be interested in attending our BN training courses.

Bayesian Watch

— Kevin Korb

I have started a new blog, BayesianWatch, on which I will post (occasionally) on Bayesian argumentation theory and practice. The differences from this blog: this is the official blog of Bayesian Intelligence, while BayesianWatch is my personal blog; this blog is about Bayesian technology and methods, while the other is about argument analysis, with a Bayesian orientation. Inevitably, there will be some cross talk; for example, on that blog you will find a response to the Sally Clark post here.

Notes on Forecasting

— Yung En Chee, University of Melbourne

 

[Editor's note: These ideas were prepared by Yung in support of her participation in a forecasting project; they describe techniques she found helpful for predicting specific world events within specific time frames. The context was one in which specific possible outcomes of political, economic, and other events were described, and probability ranges for those outcomes had to be supplied within a few days. The requirement was to provide predictions as accurate and precise as possible without succumbing to over- or under-confidence. I think these ideas are of general interest.]

 

My list of tools and strategies is a mash-up of ideas from other people, including Daniel Kahneman (Thinking, Fast and Slow), a couple of bloggers who were on the Good Judgment team - ‘Dart-Throwing Chimp’ (Jay Ulfelder) and ‘Morendil’ (Laurent Bossavit) - and my own reflection. See Morendil’s posts on LessWrong for more detail.

Bossavit's tools for prediction:

  1. Favour the status quo (this seems to me more like an empirically derived tip – if you do this you’ll come out ahead in the long run)
  2. Flip the question around
  3. Use reference classes
  4. Prepare lines of retreat (what would make me change my mind about this?) [this is equivalent to thinking of reasons/evidence that make the event unlikely – a sort of correction for overconfidence]
  5. Abandon sunk costs
  6. Consider your loss function (this is more about strategic hedging in response to how the Brier score is computed; see the sketch just below this list)
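
Since this last point turns on how the Brier score is computed, here is a minimal sketch of that score; the forecasts and outcomes are invented for illustration. Under a quadratic loss, a confident miss is punished heavily, which is what makes strategic hedging toward 0.5 tempting:

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and 0/1 outcomes.
    Lower is better; a constant 0.5 forecast scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes  = [1, 0, 1, 1, 0]              # how the events actually resolved
confident = [0.9, 0.1, 0.8, 0.9, 0.2]    # sharp, well-calibrated forecasts
hedged    = [0.6, 0.4, 0.6, 0.6, 0.4]    # same direction, pulled toward 0.5

print(brier_score(confident, outcomes))  # ~0.022
print(brier_score(hedged, outcomes))     # ~0.16
```

Hedging protects against large penalties when you are wrong, at the price of a worse score when you are right; that trade-off is what the loss-function point is about.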

These ideas are, in part, about the psychological aspects of making the prediction, but he doesn’t discuss strategies for finding information, reconciling or evaluating information, forming mental models, etc.

Tools I use:

  1. Understand the conditions required for resolution of the question.
  2. Iteration (rapid prototyping). Search and scan articles quickly to build up a mental model of what the situation is, who the actors are, and any processes required for resolution of the question (point 3 below) – go quickly, build a crude mental model, then refine it as more material is encountered/assimilated.
  3. Understand any processes required for resolution of the question. E.g. the standard operating protocol for declaring the disease status of an OIE (World Organisation for Animal Health) member; the process for granting EU membership candidacy; voting and veto powers in the UN Security Council; the process for obtaining full negotiating member status in the TPP (Trans-Pacific Partnership), etc.
  4. If a question involves non-English-speaking players, make sure to search beyond the elite news sources aimed primarily at English speakers (e.g. BBC News, Guardian, Newsdaily, New York Times, Economist, WSJ) to regional outlets (e.g. Al-Jazeera, Al-Monitor, Kurdistan news, Ahram Online, etc.).
  5. Consider whether the uncertainty associated with the question is reducible or irreducible. Uncertainty may be reducible if it’s due to a lack of knowledge or understanding. If I suspect this is the case, I try to find a knowledgeable interpreter/analyst, or try to construct Reference Classes and identify plausible Base Rates. Indicators of knowledge: command of the historical context, logical argumentation, the ability to articulate reasons and support them with evidence, and the ability to recognise and acknowledge where and when things aren’t known (e.g. Aurelia George Mulgan on Japan and the TPP; Reidar Visser on Iraq). If the uncertainty is irreducible (and I consider things like oil, stock and currency prices to fall in this category), then don’t bother looking for more info.
  6. Use Reference Classes: try to estimate the Base Rate from any available data and use that (see Kahneman, Thinking, Fast and Slow; a worked sketch follows this list).
  7. Use Bueno de Mesquita’s ‘factors’ (Position, Influence, Salience, Flexibility/Resolve; see below) to think through the interests of the actors involved.
  8. Understand the status quo and consider favouring the status quo (empirically, we would expect the status quo to persist).
  9. Another empirically robust finding from interest group politics: well-organised private interests nearly always win out over diffuse, generally disorganised public interests.
  10. Try to articulate Reasons that would make an event LIKELY and Reasons that would make it UNLIKELY (this requires adequate research, analysis or reasoning to furnish or construct the reasons; I will often rely on the tally to inform the prediction).
  11. Keep looking out for new info till just before the deadline and update estimates accordingly.
  12. Consciously consider our tendency towards What You See Is All There Is (WYSIATI) – this tendency is misleading! Remember that the capacity for unpredictable, unforeseeable events or chains of events is ever present, particularly over long time lines, and adjust forecasts accordingly (e.g. the UN Security Council resolution on Mali).
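
Points 6 and 11 together amount to anchoring on a base rate and then updating as new information arrives. Here is a minimal sketch of that Bayesian update; the reference-class counts and the analyst's assumed reliability are invented for illustration:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) from the prior P(H) and the two evidence likelihoods."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

# Reference class: say 12 of 80 comparable past situations resolved "yes".
base_rate = 12 / 80  # 0.15

# New evidence E: a knowledgeable analyst predicts "yes". Assumed reliability:
# P(E | yes) = 0.7, P(E | no) = 0.2.
posterior = bayes_update(base_rate, 0.7, 0.2)
print(round(posterior, 3))  # 0.382 -- the evidence shifts us; the base rate anchors us
```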

Note: Bruce Bueno de Mesquita is a political scientist who claims to have built a system using Game Theory for making predictions (http://www.predictioneersgame.com/). I think he overreaches, and he has been criticised for not being transparent about his methods; I agree with that criticism. Nevertheless, I find his key factors for assessing actor interests, stakes, power and influence useful.

  1. Relative potential influence (resources) - the relative potential ability of each player to persuade other stakeholders to adjust their approach to the issue to be more in line with the influencer’s perspective. The ability to persuade may be derived from holding a position of authority, being an expert, commanding a large budget, or any other factor that makes others listen to someone.
  2. Policy position - the position preferred by each stakeholder on the issue, taking constraints into account. This position is not likely to be the outcome the stakeholder expects or is prepared to accept, nor is it likely to be what the player wants in his or her heart of hearts. It is the position the stakeholder favors or advocates within the context of the situation. When a player’s position has not been articulated, it is best thought of as the answer to the following thought experiment: if the stakeholder were asked to write down his or her position, without knowing the values being written down by other stakeholders, what would he or she write down as the preferred position on the issue continuum? To place a numeric value on the position, the investigator must first have defined the issue continuum. The continuum will either have a natural numeric interpretation, such as the percentage of the uninsured to be covered under a new health care policy, or the analyst will need to develop numeric values that reflect the relative degree of difference across policy stances that are not inherently quantitative. It is important that the numerical values assigned to different positions (and they can range between any values) reflect the relative distance or proximity of the different solutions to one another. An easy way to turn player preferences on an issue into numeric values is to place each player on the issue continuum you defined, locating them at the point that reflects the policy they support. Then use a ruler to measure how far each player is from one end of the line that reflects the range of choices. Let the left-hand end of the line equal 0; each other point on the line is then simply its distance from 0 on the ruler. (A small sketch of this procedure follows this list.)
  3. Salience - assesses how focused a stakeholder is on the issue. Its value is best thought of in terms of how prepared the stakeholder is to work on the issue when it comes up rather than some other issue on his or her plate. Would the stakeholder drop everything else to deal with the issue? Would the player work on it on a weekend day, come back from vacation, etc.? The more confidently it can be said that this issue takes priority over other matters in the stakeholder’s professional life (or personal life if the issue is about a personal or family matter), the higher the salience value.
  4. Flexibility/Resolve - evaluates the stakeholder’s preference for reaching an agreement as compared to sticking to his or her preferred position even if it means failing to reach an agreement.
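
The "ruler" procedure in point 2 is easy to mechanise. A minimal sketch, with hypothetical players and stances (nothing here is drawn from Bueno de Mesquita's own software):

```python
# Hypothetical issue continuum: percentage of the uninsured to be covered (0-100).
positions = {
    "Player A": 10,   # favours minimal coverage
    "Player B": 45,   # favours a middle-ground compromise
    "Player C": 90,   # favours near-universal coverage
}

# Only relative distances matter, so any rescaling that preserves them is fine.
lo, hi = min(positions.values()), max(positions.values())
for player, value in positions.items():
    print(f"{player}: raw={value}, rescaled={(value - lo) / (hi - lo):.2f}")
```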

Weather and Climate Change: Faulty Logic

—Kevin B Korb

I have a lot of respect for Crikey, the online Australian newsletter. They report on a lot of things other media outlets won't touch, especially bias and disinformation in those other media outlets. But the other day, while reading a piece by Bernard Keane on the inconsistency of Tony Abbott's rejection of a carbon tax, after his having previously advocated one, I read:

Insistence that the planet is not getting warmer — or, as Abbott until recently insisted, is getting slightly cooler — has become more difficult to maintain publicly, despite the faulty logic of linking weather to climate. (B Keane, Crikey, 5 Feb.)

It is certainly the case that weather is not the same as climate. This is pretty clearly revealed by the fact that many global warming deniers are weather forecasters, whereas hardly any climatologists are deniers. (NB: there are a lot more forecasters than climatologists!) But denying a link between weather and climate is simply absurd. The relation between climate, the prevailing weather in a region and season, and the weather itself, on any given occasion, is stochastic: the climate system, plus specific, highly variable, conditions together determine the specific weather. That establishes a kind of probabilistic dependency, i.e., a link, of a sort widely recognized in society.

For example, only ignorant people or fools now deny that smoking causes lung cancer. This is so despite the fact that many smokers never get lung cancer. Some of them die too soon from other causes, such as emphysema. But many smoke contentedly for decades with no sign of the cancer showing up. Lawyers for tobacco companies used to point this out in trying to make the case that specific complainants had no basis for complaint, because their lung cancers might have been amongst those caused by pesticide exposure or smog or a stray cosmic ray striking a susceptible cell in the lung. But these defences have been abandoned, with even tobacco companies accepting some culpability for the disease in specific cases. The situation is analogous with weather and climate change. Was Katrina specifically due to global warming? Well, obviously not in its totality; but global warming heating the Gulf of Mexico likely contributed to its intensity. Specific events will never be entirely attributable to a broad-scale change, because the broad-scale change will never be entirely responsible for a specific event in all its specificity. Denying a linkage on that basis, however, is a nonsense. An accumulation of extreme weather events, and a statistical assessment showing that the probability of their extremity without global warming is rapidly vanishing, will eventually silence those who claim weather tells us nothing about climate.
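
The statistical point can be made concrete with a toy simulation: shift the mean of a weather variable slightly (the "climate") and the probability of extreme events multiplies. The distribution, threshold and one-degree shift below are invented for illustration, not fitted to any data:

```python
import random

random.seed(0)
N = 200_000
THRESHOLD = 43.0  # an "extreme" daily maximum temperature, in degrees C

def tail_prob(mean, sd=3.0):
    """Monte Carlo estimate of P(temperature > THRESHOLD) for a given climate."""
    return sum(random.gauss(mean, sd) > THRESHOLD for _ in range(N)) / N

p_before = tail_prob(mean=34.0)  # climate before warming
p_after  = tail_prob(mean=35.0)  # same variability, mean shifted up one degree

print(p_before, p_after, p_after / p_before)  # extremes become several times likelier
```

In this toy model a one-degree shift in the mean roughly triples the probability of exceeding the threshold; no single hot day is "caused" by the shift, yet the accumulated excess of extremes is strong evidence for it.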

Probabilistic dependencies are real. The link they establish, in fact, is just that between a stochastic hypothesis and the evidence which confirms it. Denying such a link is tantamount to denying the statistical foundations of empirical science.

If you want to "sound reasonable" by making some concession or other to global warming deniers, then you should do so by reporting something that is factual, rather than counterfactual. You can point out that many deniers have good dress sense, or sometimes use grammatical sentences, for example. Buying into their dogma about weather versus climate change can all too easily turn into buying into their rejection of science.

Bad Science

− Kevin B Korb

Bad science comes in a number of varieties, at least including the following:

  1. Sloppy science. This might include poor experimental design, poor measurements, slovenly reasoning, insufficient power in one's tests, failure to blind experimenters or subjects, etc. Presumably, the intentions are right, but the execution is wrong.
  2. Pseudo-science. This is fake science. The fakery may be intentional or unintentional. For example, cultists may intentionally generate some large-scale fantasy, while their followers unsuspectingly take it seriously. If the pseudo-scientific methods employed have the look and feel of science, then this is due to simulation or accident, and not due to the proper employment of scientific methods. For Karl Popper, demarcating real from pseudo-science was a kind of mission. He proposed a "falsificationist" criterion: that theories which were (or could be) protected from any possible contrary evidence were non-scientific. Unfortunately, this could never quite be made to work; there are no logical limits to what can be defended, or not, since, as Quine put it, all of our ideas are tied together in a "Web of Belief" (Quine and Ullian, 1978). Still, Popper was certainly on to something: those, such as climate change deniers, who spin excuses and rationalizations no matter what the evidence, may be good propagandists, but they are not good scientists.
  3. Cheats. This is also fake science, but most likely not with a view to promoting a false story about the world; rather, it promotes a false story about the researcher.

Ben Goldacre's book Bad Science (Fourth Estate, 2009) treats miscreants and violators of scientific method primarily in the first two categories. Being a journalist (and MD), he, perhaps naturally, focuses largely on the aberrations and violations perpetrated by journalists. On his account, they've done quite a lot of damage. For example, around 2005 there were repeated scandals concerning rampant MRSA in UK hospitals, but the findings were all traceable to a single lab, "the lab that always gives positive results". Apparently, journalists responded to that description with anticipatory salivation rather than anxious palpitation. It's a ludicrous, and sad, story.

For newcomers to scientific or medical research, Goldacre's book is an entertaining, accessible introduction to a host of issues you will need to know about: experimental design, bias in statistics, cheating by pharmaceutical companies in research and in advertising, the silliness of homeopathy, how we fool ourselves into believing what we want to believe and what measures can be taken to minimize our own foolishness.

For those well versed in these kinds of issues, the book, while a good source of anecdotes, is just a little disappointing. It's important to provide accessible accounts of science and method, but Goldacre goes just a bit far in dumbing things down, in my opinion. Popular science writers should not be assuming that their readers are idiots. He proposes as his motto: "Things are a little more complicated than that". Indeed, they are. Still, on the whole, this is a good and positive contribution to the public understanding of science.


(17 Nov 2012) I think perhaps I was a bit too negative at the end of the note above. Goldacre's book can be seen as an extended plea for a more evidence-oriented treatment of science journalism and, in particular, as a protest against the view that science is just too complicated for ordinary folk to understand — a view which he rightly condemns for promoting appeals to authority for arbitrating scientific disputes, rather than appeals to evidence. The result is a serious dumbing down of public policy debates, including a tendency to portray all sides of a scientific dispute as having equal support, because all sides can call upon any number of "experts". This message certainly needs to be spread. The quality of public debate about topics that concern science is very poor indeed.

Objecting to God

— Colin Howson

A Bayesian evaluation of the evidence, old and new, for the existence of a God of the sort the Abrahamic religions postulate reveals that there really isn't any: on the contrary, such evidence as can be found is very strongly against such a being. In my new book Objecting to God (Cambridge, 2011) I employ Bayesian probability to counter many 'pro-God' arguments in the recent literature, particularly those discussing the alleged extreme improbability of fine-tuning and of the development of complex life-forms. In particular, the "Anthropic Argument" for the existence of a God is no more compelling than its underlying "Anthropic Principle", which I show to be fallacious.
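
The style of argument can be illustrated with a simple Bayes factor calculation; the likelihoods below are invented placeholders, not Howson's numbers. Evidence supports a hypothesis only insofar as the hypothesis makes that evidence more probable than its rivals do:

```python
def posterior_odds(prior_odds, p_e_given_h, p_e_given_not_h):
    """Posterior odds = prior odds * Bayes factor (the likelihood ratio)."""
    return prior_odds * (p_e_given_h / p_e_given_not_h)

# Suppose evidence E is claimed to support hypothesis H. A Bayesian asks how
# probable E is under H versus under not-H, not merely whether E "fits" H.
prior_odds = 1.0  # start indifferent: odds of 1:1
print(posterior_odds(prior_odds, 0.05, 0.5))  # 0.1 -- E actually tells against H
```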

Not only do the Abrahamic religions lack any credible evidential foundation, but their influence is largely malign, embodying codes of ethics both primitive and repressive. In my book I argue on the contrary for a humanitarian ethics based on a more modern version of Aristotle's notion of eudaimonia. Another novel feature of my book is its drawing a parallel between the logico-mathematical paradoxes of the late nineteenth and early twentieth centuries and the ancient theological paradoxes arising from the notion of an omniscient, omnipotent, perfectly good deity. I show how Tarski's celebrated theorem(s) on the indefinability of truth refutes the postulate of omniscience. I also present a critical discussion of Richard Dawkins's well-known attempt to prove that the hypothesis of God is itself extremely improbable.

 

Colin Howson is a Professor of Philosophy at the University of Toronto and Emeritus Professor in the Philosophy Department at the London School of Economics. For a more detailed and careful presentation of these ideas read his book Objecting to God (Cambridge University Press, 2011).