
Posts

Showing posts with the label Bayesian models of belief

Response to Ben Tappin and Stephen Gadsby

In this post, Daniel Williams, Postdoctoral Researcher in the Centre for Philosophical Psychology at the University of Antwerp, responds to last week's post from Ben Tappin and Stephen Gadsby about their recent paper "Biased belief in the Bayesian brain: A deeper look at the evidence". Ben Tappin and Stephen Gadsby have written an annoyingly good response to my paper, 'Hierarchical Bayesian Models of Delusion'. Among other things, my paper claimed that there is little reason to think that belief formation in the neurotypical population is Bayesian. Tappin and Gadsby—along with Phil Corlett and, in fact, just about everyone else I've spoken to about this—point out that my arguments for this claim were no good. Specifically, I argued that phenomena such as confirmation bias, motivated reasoning and the so-called "backfire effect" are difficult to reconcile with Bayesian models of belief formation. Tappin and Gadsb...

Biased Belief in the Bayesian Brain

Today's post comes from Ben Tappin, PhD candidate in the Morality and Beliefs Lab at Royal Holloway, University of London, and Stephen Gadsby, PhD candidate in the Philosophy and Cognition Lab, Monash University, who discuss their paper recently published in Consciousness and Cognition, "Biased belief in the Bayesian brain: A deeper look at the evidence". Last year Dan Williams published a critique of recently popular hierarchical Bayesian models of delusion, which generated much debate on the pages of Imperfect Cognitions. In a recent article, we examined a particular aspect of Williams' critique: specifically, his argument that one cannot explain delusional beliefs as departures from approximate Bayesian inference, because belief formation in the neurotypical (healthy) mind is not Bayesian. We are sympathetic to this critique. However, in our article we argue that canonical evidence of the phenomena discussed by Williams—in particu...
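For readers unfamiliar with the formalism, the "Bayesian" benchmark at issue in this exchange can be sketched in a few lines. The following is a generic textbook illustration, not taken from either paper: an ideally Bayesian believer revises their credence in a hypothesis H, given evidence E, by Bayes' rule, P(H|E) = P(E|H)P(H) / P(E).

```python
def bayes_update(prior: float, likelihood_h: float, likelihood_not_h: float) -> float:
    """Posterior probability of H after observing evidence E, by Bayes' rule.

    prior           -- P(H) before seeing the evidence
    likelihood_h    -- P(E|H), probability of the evidence if H is true
    likelihood_not_h -- P(E|not-H), probability of the evidence if H is false
    """
    # Total probability of the evidence: P(E) = P(E|H)P(H) + P(E|not-H)P(not-H)
    evidence = likelihood_h * prior + likelihood_not_h * (1.0 - prior)
    return likelihood_h * prior / evidence

# Two agents with different priors observe the same evidence
# (illustrative numbers only).
posterior_a = bayes_update(prior=0.2, likelihood_h=0.8, likelihood_not_h=0.3)
posterior_b = bayes_update(prior=0.8, likelihood_h=0.8, likelihood_not_h=0.3)
```

On this benchmark, agents who share the likelihoods both shift their credence in the same direction on the same evidence, even when they start from different priors; how much of the observed divergence in human belief formation genuinely violates this standard is part of what is contested in the posts above.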

A Reply to Dan Williams on Hierarchical Bayesian Models of Delusions

This post is a reply by Phil Corlett (Yale) (pictured below) to Dan Williams's recent post on Hierarchical Bayesian Models of Delusions. Dan Williams has put forward a lucid and compelling critique of hierarchical Bayesian models of cognition and perception and, in particular, their application to delusions. I want to take the opportunity to respond to Dan's two criticisms outlined so concisely on the blog (and in his excellent paper) and then comment on the paper more broadly. Dan is "sceptical that beliefs—delusional or otherwise—exist at the higher levels of a unified inferential hierarchy in the neocortex." He says, "every way of characterising this proposed hierarchy... is inadequate," stating that "it can't be true both that beliefs exist at the higher levels of the inferential hierarchy and that higher levels of the hierarchy represent phenomena at large spatiotemporal scales. There are no such content restrictions on beliefs, whether delusional ...

Hierarchical Bayesian Models of Delusion

Today's post is by Dan Williams, a PhD candidate in the Faculty of Philosophy at the University of Cambridge. If you had to bet on it, what's the probability that your loved ones have been replaced by visually indistinguishable imposters? That your body is overrun with tiny parasites? That you're dead? As strange as these possibilities are, each of them captures the content of a well-known delusional belief: Capgras delusion, delusional parasitosis, and Cotard delusion, respectively. Delusional beliefs come in a wide variety of forms and arise from a comparably diverse range of underlying causes. One of the deepest challenges in the contemporary mind sciences is to explain them. Why do people form such delusions? And why on earth do they retain them in the face of seemingly overwhelming evidence against them? My new paper "Hierarchical Bayesian Models of Delusion" presents a review and critique of a fascinating body of research in computational psychiatry that at...

Epistemic Benefits of Delusions (1)

This is the first in a series of two posts by Phil Corlett (pictured above) and Sarah Fineberg (pictured below). Phil and Sarah are both based in the Department of Psychiatry at Yale University. In this post and the next they discuss the adaptive value of delusional beliefs via their predictive coding model of the mind, and the potential delusions have for epistemic benefits (see their recent paper 'The Doxastic Shear Pin: Delusions as Errors of Learning and Memory', in Cognitive Neuropsychiatry). Phil presented a version of the arguments below at the Royal College of Psychiatrists' Annual Meeting in Birmingham in 2015, as part of a session on delusions sponsored by project PERFECT. The predictive coding model of mind and brain function and dysfunction seems to be committed to veracity; at its heart is an error-correcting, plastic learning mechanism that ought to maximize future rewards and minimize future punishments, like the agents of traditional microec...
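For a concrete handle on the kind of "error correcting, plastic, learning mechanism" mentioned here, a standard delta-rule update is the usual textbook sketch (a generic illustration, not Corlett and Fineberg's own model): each prediction is nudged toward the observed outcome by a fraction of the prediction error, so error shrinks with repeated experience.

```python
def delta_rule(prediction: float, outcome: float, learning_rate: float = 0.3) -> float:
    """Move a prediction toward an observed outcome by a fraction of the error.

    The update is: prediction + learning_rate * (outcome - prediction),
    i.e. prediction-error-driven learning of the kind predictive coding
    accounts appeal to.
    """
    return prediction + learning_rate * (outcome - prediction)

prediction = 0.0
for _ in range(20):  # repeated exposure to a constant outcome of 1.0
    prediction = delta_rule(prediction, outcome=1.0)
# prediction has converged close to 1.0; the remaining prediction error
# shrinks geometrically with each trial
```

The "shear pin" idea can then be read against this background: when the error signal itself is corrupted, a mechanism built to minimize prediction error may settle on predictions that are stable but wrong.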