Wednesday, 30 April 2014

New reviews of the book

Two journals have published excellent reviews of the book:

Journal of Statistical Theory and Practice, Vol. 8, March 2014 (full review):
"By offering many attractive examples of Bayesian networks and by making use of software that allows one to play with the networks, readers will definitely get a feel for what can be done with Bayesian networks. … the power and also uniqueness of the book stem from the fact that it is essentially practice oriented, but with a clear aim of equipping the developer of Bayesian networks with a clear understanding of the underlying theory. Anyone involved in everyday decision making looking for a better foundation of what is now mainly based on intuition will learn something from the book." - Peter Lucas
International Journal of Performability Engineering, Vol. 9, No. 3, July 2013, pp 551-553 (full review):
“… this book will be found very useful to practitioners, professors, students and anyone interested in understanding the application of Bayesian networks to risk assessment and decision analysis. Having many years experience in the area, I highly recommend the book.” - William E. Vesely (NASA)

Tuesday, 1 April 2014

Douglas Hubbard posts excellent review of our book on Amazon

Douglas Hubbard - author of the brilliant (and top-selling) books How to Measure Anything, The Failure of Risk Management, and Pulse - has written a terrific review of our book on Amazon. This is the review verbatim:

The single most important book on Bayesian methods for decision analysts, March 19, 2014
By Douglas W. Hubbard (Glen Ellyn, IL) Amazon Verified Purchase
This review is from: Risk Assessment and Decision Analysis with Bayesian Networks (Hardcover)

Fenton and Neil have successfully made a "crossover" book that reaches broad audiences on a topic which is too often presented in a dry and esoteric manner. It is rich with illustrations, interesting examples, debunking of common fallacies, and a passionate philosophical position on Bayesian methods vs. the "frequentist" methods common in statistics.

This book is a comprehensive treatment of Bayesian methods but focuses on the particularly powerful models that can be made when conditional probabilities are presented in networks. The authors present a complete algebra of Bayesian networks using both formal expressions and simple diagrams so that almost any reader can be comfortable with the topic. This book does not assume that the reader has even basic training in probabilistic methods (it has a chapter on the basics of probability) but it also does not compromise on substantive content. The reader seeking basic explanations will not feel excluded and the reader seeking more advanced treatments will be satisfied as well.

This is exactly the sort of rigorous thinking that needs to displace the "softer" methods more common in risk assessment and decision analysis. It is presented as an entirely practical solution for managers, not an abstract, academic exercise. The "best practices" committees for PMBOK, ISO, Cobit and managers everywhere would be well advised to read this book before inventing yet another risk assessment or decision analysis method based on fluffy scores.

Doug Hubbard, Author of How to Measure Anything (2007, 2010, 2014), The Failure of Risk Management (2009) and Pulse (2011)
The book now has 19 customer reviews on Amazon.com (15 of which are 5-star and 4 of which are 4-star) and 10 customer reviews on Amazon.co.uk (7 of which are 5-star and 3 of which are 4-star).

Last week the book was the number 1 top seller on Amazon in the 'risk management' category.

Friday, 26 April 2013

Errors in the book (including interesting Binomial distribution error)

Eagle-eyed readers continue to point out errors in the book, and we regularly update the Errata page with the corrections.

An especially interesting error (with a fair few ramifications) has been discovered in Example 5.6 on pages 122-123. The error arose in our calculation of the 99th percentile of the particular Binomial distribution there. Basically we wanted to know the following:
what is the number k for which the probability of tossing at least k heads in 100 tosses is 0.01?
Now, because the Binomial is a discrete distribution, there is no reason why there should be an integer k for which the probability of tossing at least k heads in 100 tosses is exactly 0.01. In fact it turns out that:
  • for k=61 the probability is 0.017
  • for k=62 the probability is 0.0105
  • for k=63 the probability is 0.006
So, in the original manuscript we just used k=62 because that was easily the closest to 0.01.

However, to make the point in the example work, it was important that the probability was LESS than 0.01. So, whereas the example used k=62, what we strictly needed was k=63. This meant that all the subsequent calculations were wrong.
There were also some other errors in the example, including a couple that had been introduced by the publishers, since those passages were correct in our final manuscript.
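The three tail probabilities above are easy to verify exactly. The following short Python sketch (our own illustration, not code from the book) computes P(X ≥ k) for a Binomial(100, 0.5) variable directly from the probability mass function:

```python
from math import comb

def tail_prob(k, n=100, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p), summed exactly from the pmf."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

for k in (61, 62, 63):
    print(f"P(at least {k} heads in 100 tosses) = {tail_prob(k):.4f}")
# k=62 gives the value closest to 0.01, but only k=63 gives a value BELOW 0.01
```

Running this confirms the figures in the list: roughly 0.0176, 0.0105 and 0.0060, which is precisely why the example needed k=63 rather than k=62.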

Thursday, 11 April 2013

Bayesian networks plagiarism

If, as they say, imitation is the sincerest form of flattery, then we are privileged to have discovered (thanks to a tip-off by Philip Leicester) that our work on Bayesian network idioms - first published in Neil M, Fenton NE, Nielsen L, "Building large-scale Bayesian Networks", The Knowledge Engineering Review, 15(3), 257-284, 2000, and covered extensively in Chapter 7 of our book - has been re-published, almost verbatim, in the following publication:
Milan Tuba and Dusan Bulatovic, "Design of an Intruder Detection System Based on Bayesian Networks", WSEAS Transactions on Computers, 5(9), pp 799-809, May 2009. ISSN: 1109-2750

The whole of Section 3 ("Some design aspects of large Bayesian networks") - which constitutes 6 of the paper's 10 pages - is lifted from our 2000 paper. Our work was partly inspired by the work of Laskey and Mahoney; the authors reference that work but, of course, not ours, which confirms that the plagiarism was deliberate.

Milan Tuba and Dusan Bulatovic are at the Megatrend University of Belgrade (which we understand is a small private university), and we had not come across them before now. The journal WSEAS Transactions on Computers seems to be an example of one of the dubious journals exposed in this week's New York Times article. Curiously enough, after a colleague distributed that article yesterday, I had been going to write back to him saying that I disagreed with its rather elitist tone, which suggests that the peer-review process of the 'reputable scientific journals' is somehow unimpeachable. In reality there is no consensus on which journals are 'reputable', and even the refereeing of those widely considered the best is increasingly erratic and at times borders on corrupt (which is inevitable when it relies exclusively on volunteer academics). But I would at least hope that any 'reputable' journal would still be alert to the kind of plagiarism we now see here.

This is not the first time our work has been very blatantly plagiarised. Interestingly, on a previous occasion it was in a book published by Wiley Finance (who I am sure are widely considered one of the most reputable publishers). The book was 'written' by a man who had been our PhD student for a short time at City University before he vanished without notice or explanation. It contained large chunks of our work, without any attribution, none of which the 'author' had contributed to (as it predated his time as a PhD student with us). We informed Wiley of this, and proved to them that a) the author's qualifications as stated in the book were bogus, and b) the endorsements on the back cover were fraudulent, yet they did nothing about it.