
We need to know more about how groups make decisions

Uta: As chair of the Royal Society’s Diversity Committee I have struggled with communicating the biases that enter into decisions made by selection committees. There is a strong commitment to increase diversity at the Royal Society at all levels, but nothing convinces scientists more than evidence. So it seemed a good idea to collect evidence on how group decisions are made. Not being a specialist in this area myself, I pleaded with Dan Bang and Chris Frith to write a review, and here I am asking them about what they found.


The review has taken over a year and refers to 203 publications. It just appeared in the Royal Society journal Open Science.

Dan, was this the first ever comprehensive review of the literature on decision-making in groups?

There are other excellent reviews which we refer to in our paper, but I think ours is the first to draw on computational modelling of decision-making, and to bring together findings from economics, psychology and cognitive neuroscience.

Our article is not a review in the classic sense: we do not calculate the effect size of different group interventions or treatments. Instead, we use a general model of decision-making as our starting point, and then ask how individuals and groups solve each of the relevant computations.

Uta: For some reason the question of how we make decisions has always been of great interest to experimental psychologists. Why?

For experimental psychologists decision-making is a very useful framework for studying all kinds of cognitive processes, particularly those we are not aware of. For example, my perception of the letter A is a decision because I choose this interpretation of what I see instead of any other, such as the number 4. So, decision-making is something we do all the time but are usually unaware of.

 Uta: Why has there been a recent upsurge in studies on decision-making?

I think this is due to developments in computational modelling. Computational models allow us to break decision-making down into its component processes, generate predictions for what happens if we tweak one of the processes, and understand exactly where decision-making goes wrong or right. I like to think that experimental psychologists have gotten over their fear of maths and have begun to realise what computational approaches can offer.

Your favoured model is Bayesian Decision Theory (BDT). Can you explain what this is, briefly?

BDT describes how to make optimal decisions in an uncertain world. A key aspect is the integration of past experience (prior beliefs) and new evidence (likelihood). It turns out that many decisions, whether made by individuals or in groups, go wrong because of excessive reliance either on past experience or on new evidence.
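To make this concrete, here is a toy sketch of my own (not taken from the review): a Bayesian update over two hypotheses, with purely illustrative weighting parameters showing how over-relying on either past experience or new evidence distorts the conclusion.

```python
# Toy illustration of Bayesian Decision Theory: combine a prior belief
# with new evidence, and see how over-weighting either one distorts
# the answer. The weighting exponents are illustrative only.

def posterior(prior, likelihood, prior_weight=1.0, evidence_weight=1.0):
    """Posterior over two hypotheses. Weights of 1.0 give standard Bayes;
    larger weights exaggerate the influence of that ingredient."""
    unnorm = [
        (p ** prior_weight) * (l ** evidence_weight)
        for p, l in zip(prior, likelihood)
    ]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# The prior favours hypothesis 1; the new evidence favours hypothesis 2.
prior = (0.8, 0.2)
likelihood = (0.3, 0.7)

balanced = posterior(prior, likelihood)                        # standard Bayes
stubborn = posterior(prior, likelihood, prior_weight=3.0)      # over-relies on past experience
credulous = posterior(prior, likelihood, evidence_weight=3.0)  # over-relies on new evidence
```

The balanced update still leans towards hypothesis 1, but the "stubborn" decision-maker barely moves from the prior, while the "credulous" one is swept away by the latest evidence.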

 Is there a mathematical formula telling you how to make a good decision?

BDT provides the formulas for finding the best action. But the calculations required are often intractable, or too complex and time consuming, to be useful in practice. We can think of the task of finding the best action as one of finding the highest peak in a hilly landscape. BDT requires that you know the contours of the entire landscape. An alternative ‘quick and dirty’ strategy is to visit just a few different points in the landscape; this is often sufficient to get a good idea of where the peak might be. This strategy is called sampling. We can sample from our memories of past actions; we can sample from the internet; we can read about the goodness of some of the most obvious actions; we can ask our friends for advice.
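The landscape metaphor can be sketched in a few lines of code (my own toy example, not from the paper): compare knowing the whole landscape with visiting just a handful of random points.

```python
import random

def landscape(x):
    """A hypothetical 1-D 'hilly landscape': the goodness of action x."""
    return -(x - 0.7) ** 2 + 1.0  # single peak at x = 0.7

random.seed(0)  # for reproducibility

# Full BDT-style search: evaluate the entire landscape on a fine grid.
grid = [i / 1000 for i in range(1001)]
true_best = max(grid, key=landscape)

# 'Quick and dirty' sampling: visit just ten points and keep the best.
samples = [random.random() for _ in range(10)]
sampled_best = max(samples, key=landscape)
```

With only ten samples the best point found is usually very close in goodness to the true peak, which is the whole appeal of the strategy: far less computation for nearly the same result.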

 When is the right time to make a decision?

Somehow you have to recognise whether you have enough knowledge to make a good decision. Have I taken enough samples or should I take more? If I stop sampling, I may miss a higher peak. If I go on sampling too long, my decision may come too late.

Uta: In the figure, we see a schematic landscape with different peaks and we see people who are ‘explorers’ or ‘exploiters’. I believe this distinction comes from the study of foraging behaviour in animals. If I know where apples are to be found, I exploit this knowledge by going to that tree. But eventually all the apples will have been eaten, and I need to explore to find a new tree. How does this play out in group decision-making?

Explorers and exploiters point us towards the advantages of group decision-making over individual decision-making. In many animals, such as honeybees, there are individual differences in the drive to explore or to exploit. Most bees in the hive are exploiters who go where they know that nectar has been found. But about 4% are explorers (scouts) who look for new sources of nectar. The scouts can guide others to new sources of food. Among humans, it is perhaps the risk takers and sensation seekers who are the equivalent of these scouts.
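The benefit of keeping a few scouts can be sketched in a simple simulation (my own toy model; the payoff values and the 4% scouting rate are illustrative assumptions, the latter borrowed from the honeybee example above).

```python
import random

random.seed(1)  # for reproducibility

# Two hypothetical nectar sources: the tree the hive already knows,
# and a better one that has not yet been discovered.
payoffs = {"known_tree": 0.5, "new_tree": 0.9}

def forage(explore_rate, trials=1000):
    """Mean nectar per trip. Foragers exploit the best source found so
    far; with probability `explore_rate` a scout tries a random source."""
    best_known = "known_tree"  # the hive starts out knowing one tree
    total = 0.0
    for _ in range(trials):
        if random.random() < explore_rate:
            choice = random.choice(list(payoffs))  # scouting trip
        else:
            choice = best_known                    # exploiting trip
        total += payoffs[choice]
        if payoffs[choice] > payoffs[best_known]:
            best_known = choice                    # a better source spreads
    return total / trials

pure_exploiters = forage(explore_rate=0.0)   # nobody ever scouts
with_scouts = forage(explore_rate=0.04)      # ~4% scouting, as in the hive
```

A hive with no scouts is stuck with the known tree forever, while even a small scouting rate lets the discovery of the better source spread through the whole group.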

Chris, since this is our “socialminds” blog, can you explain in what way group decision-making is part of the social mind? Are there processes that you identified that are specifically social, rather than general cognitive processes?

The special social feature of group decision-making is that members of the group interact with one another. For example, we are typically unaware of our own biases, but very sensitive to those of others. By interacting with others we can discover our biases and try to overcome them. We prefer to justify our own immediate solution to a problem rather than considering other options. When we interact with others we may be forced to consider these other options.

 Is there a downside to group discussions?

One example is that the biases of one individual might spread through the whole group. Another problem can occur with perceived confidence. When people make decisions together, they will often discuss how confident they are in their choice. In many cases, better decisions will be made when the solution of the more confident person is chosen. This is because confidence correlates with competence. But there are exceptions. Group decisions might be dominated by a vociferous and confident individual who is also incompetent.
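A toy sketch (my own, not from the paper) shows how confidence-weighted voting usually helps but can be hijacked by one loud voice; all votes and confidence values here are made up for illustration.

```python
# Confidence-weighted group choice: each member casts a vote (+1 or -1)
# weighted by a stated confidence between 0 and 1.

def group_choice(votes):
    """Return the group's decision given (vote, confidence) pairs."""
    score = sum(conf * vote for vote, conf in votes)
    return 1 if score > 0 else -1

# Three competent, moderately confident members agree on +1; one
# vociferous, very confident but incompetent member votes -1.
members = [(+1, 0.6), (+1, 0.6), (+1, 0.6), (-1, 0.95)]
# Here the three moderate voices together still outweigh the loud one.

# But if the competent members express themselves too timidly,
# the single confident voice dominates the group decision.
dominated = [(+1, 0.3), (+1, 0.3), (+1, 0.3), (-1, 0.95)]
```

This is why the weighting only helps so long as expressed confidence actually tracks competence.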

 Some examples immediately spring to mind…

Dan, what do we know now from the research that you reviewed that we didn’t know before? Was there a finding that surprised you? Is it all common sense, really?

Well, common sense is always easier to spot in hindsight! What we have tried to do is to provide an evidence base for common sense.

 I think the most surprising challenge for decision-makers is living with and accepting uncertainty. We hate uncertainty. So, even if you make the optimal decision, you still may not get a good result. Then, it’s easy to say: oh, I shouldn’t have taken this particular action, but the other action. But this is hindsight bias! Actually, given what you knew at the time, you did take the best action.

 On the other hand, you might sometimes be lucky: you made a poor choice, but through random luck, there was a good outcome. In this case, you might fool yourself into thinking that you actually took the best action and receive praise from your colleagues.

 So it was a new insight that choice and outcome have to be kept separate? Are there any practical implications?

Indeed – I think that innovation and exploration are often discouraged because people focus too much on outcomes. Especially in groups, people are too afraid of getting it wrong. This is because they know they’ll be evaluated on outcome, and not on a careful consideration of the grounds for their choice. This is one important reason why groups tend to stick with business as usual. We feel much more regret after making an unusual choice that turns out to be wrong. When doing business as usual, others can’t point the finger at us.

Chris, what ideas are ripe for exploring further? What is the next question the review urges us to study?

We need to learn more about the role of group leaders. We can think of a group of people with different areas of expertise as a super-brain. The group members are like populations of neurons which perform different functions, but whose output is brought together to make sense of the world. The brain has solved the problem of competition for influence and relies on a central executive system to coordinate information processing. Group leaders need to emulate such an executive system to overcome some of the dangers associated with group decision-making. Here are some practical examples: one is applying the rule that each member can only speak for a fixed amount of time; another is pointing out bias where it occurs.

Uta: Here is a puzzle for me. Equality bias is one of the causes you identified as potentially leading to poor decision-making in groups. Do we really assume that all members of our committee are equally good at decision-making? If we have such a bias, how does this square with our irrational tendency to feel that we are better drivers/teachers/parents than other people? Why has it not helped us towards more diversity?

 Equality bias means that equal weight is given to all opinions. It is part of the more general belief that others are more similar to us than they really are. Since not all group members are of equal competence, the bias results in too much weight being given to the opinions of less competent members of the group. If anything, this bias should work in favour of diversity. The difficulty for achieving diversity arises because we feel much more comfortable with people who are really like ourselves. Our shared expectations make them easier to communicate with. But this does not necessarily lead us to make better decisions than we would have made by ourselves.

Uta: I am just speculating, but this equality bias makes me think that if we believe, rather egocentrically, that others are more similar to us than they really are, then we get an unpleasant jolt when we notice the discrepancy between assumed and actual similarities. However, the discomfort is worth it when it leads to better decision-making by pooling diverse opinions.

Dan, are you convinced by your review that groups really make better decisions than individuals?

Yes, absolutely. We discuss many benefits in the paper: more accurate knowledge from pooling information, better inferences from pooling cognitive resources, and better coverage of the hypothesis space from a mix of exploiters and explorers. But checks and balances must be in place for groups to work.

Uta: What are the hottest recommendations for this committee’s work?

One new idea that is gaining strength is the use of lotteries in decision-making. When choosing which projects to support it is usually easy to pick out the good ones and the bad ones. But there will always be a middle range of equally good projects, only some of which can be supported. Selection at this stage could be done by a lottery. This would eliminate the effects of unconscious biases and would eliminate a lot of fruitless discussion.

Uta: The whole point of the review was to give evidence-based recommendations so that selection committees and panels can make better decisions. Therefore, I’m listing some of them here.

7 RECOMMENDATIONS

  • Strive for diversity among group members, both in terms of expertise and background. This will allow you to explore the full hypothesis space and avoid narrowing of ideas.
  • Don’t be afraid to weight opinions. Give more weight to the opinion that is based on better knowledge.
  • Be sensitive to differences in expressed confidence. Talking time rarely correlates with expertise. Consider allocating a fixed speaking time to each member and enforcing a no-interruption rule, the latter benefitting people who have minority views, or who are in a minority.
  • Ensure independence of opinions. For example, do not express a preference for candidates before everyone has discussed them. Opinions are subject to contagion effects.
  • Avoid shared information bias. Do not focus discussion on information that everybody is familiar with, such as track record. Sometimes only one member has pertinent information. They may not reveal it as they assume others know too.
  • Balance exploration and exploitation. Business as usual is not always the best decision. Changes can be beneficial even though there is risk attached.
  • Appoint a meta-champion who keeps a check on process, is aware of the pitfalls and can point out common biases. Promoting independence among opinions is probably the most helpful service that the champion can provide.



Cognition and Consciousness in Peter Pan

A Conversation with Rosalind Ridley

My friend and colleague, Rosalind Ridley, who has had a distinguished career with the MRC studying brain and behaviour, has just published an intriguing book about J M Barrie and Peter Pan. It turns out that Peter Pan is not just a childish story about pirates and children who can fly. Barrie was very aware of the scientific developments of his day and the original Peter Pan stories are infused with ideas about man’s place in the natural world and the mental lives of children and animals. In many places Barrie seems to have anticipated ideas in cognitive psychology that only emerged after his death.

CDF: I wonder why a respected neuroscientist came to write a book about Peter Pan?

RMR: I came across an early edition of Barrie’s first Peter Pan book ‘Peter Pan in Kensington Gardens’, written in 1906. In the text I found descriptions of many aspects of cognitive psychology that have only been studied scientifically since the middle of the twentieth century. The more I read, the more I found. I was hooked.

CDF: Most people are unaware that Barrie wrote two novels about Peter Pan in addition to the pantomime. Do these give us a different view of the nature of Peter Pan and the intentions of Barrie?


RMR: In ‘Peter Pan in Kensington Gardens’, Peter is about a week old while in ‘Peter and Wendy’ (1911), which is based on the pantomime, he is about six or seven years old (although he supposedly ‘still had all his baby teeth’ which indicates his immaturity). Although Peter is ‘the boy who wouldn’t grow up’ he undergoes several changes of age, out of synchrony with other people in the stories. One explanation for this is that Peter is Barrie’s memories of himself as a child, achieved through ‘mental time travel’, and that Barrie is both exploring the nature of childhood and re-living his own childhood.

CDF: What was Barrie like?

RMR: Barrie was a lonely man who had had a difficult childhood and a childless marriage that ended in divorce. He found adults difficult and sought refuge in a fantasy world outside the normal stream of consciousness of our mundane existence.

CDF: And yet, he was also one of the most successful authors of his time and knew everyone from Thomas Hardy to A. A. Milne. But he certainly had problems. I believe that Barrie suffered from insomnia, as did Lewis Carroll, but that Barrie attempted to control this by taking heroin. He must often have experienced the strange states of consciousness that can occur at the borders of sleeping and waking. Did these experiences inspire some aspects of the Peter Pan story?

RMR: Yes, Barrie complained of terrible sleep and gave accurate descriptions of almost all the clinical parasomnias in his stories. It is more than likely that he experienced these sleep disturbances and that this taught him that what he experienced and what was happening ‘out there’ are not the same thing. When Barrie was six years old his older brother drowned. Their mother became very depressed and Barrie felt that his dead brother was more real in his mother’s mind than he was. This may have encouraged Barrie to think in terms of internal mental states rather than the outside world.

CDF: Barrie seems to have been seeking a special state of heightened consciousness, which he believed people experienced in some historical or childish Golden Age.

You call this state ‘sublime consciousness’. What is this?

RMR: Although he didn’t use these terms, Barrie clearly understood the modern distinction between primary mental representation (mainly perception) and secondary representation (mainly episodic memory, anticipation of the future, and the imagination of alternatives). His stories were based on the notion that these were different, mutually exclusive, types of consciousness and that only adult humans had what we would now call ‘secondary representation’. He longed for a pure type of primary consciousness (which is what I called sublime consciousness) which he believed was available to animals, children and only occasionally to adults. Barrie argued that animals and very young children were not burdened with the ‘sense of time’ or ‘sense of agency’ that comes with the development of secondary representation and so were free to enjoy a heightened experience of the present.

CDF: This reminds me of work showing that, if you think about being happy, you will feel less happy.

But isn’t there one animal in the stories who does have secondary representation?

RMR: Yes, Solomon the crow. In the picture by Arthur Rackham we see him with the sock he is using to save for his pension. Crows have always had a reputation for being clever and Nicky Clayton has published work suggesting that they can plan for the future.

CDF: And, crows’ brains contain more neurons than the brains of some monkeys of comparable size.

I remember the rather sentimental episode in the pantomime where children are told that every time they say, ‘I don’t believe in fairies’, then a fairy will die. But, in your book, you suggest that Barrie is making a comparison between the type of thing that fairies are and the type of thing that money is.

RMR: Well, yes, Barrie liked to play tricks with words and ideas. He made ethereal objects behave like solid objects; a shadow, for example, is folded up and put in a drawer. Like Lewis Carroll, Barrie saw that words and the objects they represented were separable. But whereas Carroll adopted a semantic view that ‘a word… means just what I choose it to mean’, Barrie took a more pragmatic approach in making Wendy describe a ‘kiss’ as a ‘thimble’ when she could see that Peter was using the two words the wrong way round. Barrie then goes on to distinguish between solid objects and socially constructed objects. In a rather complex scene, Peter has forgotten how to fly and is marooned on the island in the Long Water in Kensington Gardens. A boat made out of a five pound note washes up on the island, but, rather than using the boat to make his escape, Peter cuts the bank note up into smaller pieces and uses these to pay the thrushes (who have been told that these ‘coins’ are valuable) to build him a bird’s nest boat. Here Barrie recognised that money is not only a piece of paper, but is also a socially constructed object that only exists as currency so long as everyone believes in it. Similarly, fairies are socially constructed objects, who only exist if you and your friends believe in them.

CDF: We once did an imaging study where people watched bank notes being torn up. The higher the value, the more brain activity we saw.

You suggest that a major theme of the Peter Pan stories concerns the cognitive differences between animals, children and adults. After Darwin published his theory of evolution, people had to reconsider these differences, since he had shown that we are all animals.

RMR: Peter Pan is described as a ‘betwixt-and-between’,
part child, part bird (he can fly) and part instinctive, slightly dangerous creature, like the god Pan. This allowed Barrie to compare the mental world of adults, children and animals and to consider the extent to which human behaviour is instinctive rather than rational and enculturated. These are very post-Darwinian themes and Barrie clearly believed that children start life with animal instincts and develop additional, specifically human cognitive skills as they mature. This reflects the view put forward by the nineteenth-century embryologist, Ernst Haeckel, that ‘ontogeny recapitulates phylogeny’. It would not have occurred to anyone before Darwin to compare the behaviour, especially the moral behaviour, of humans and animals because humans were made in the image of God and animals were just dumb beasts. Barrie also refers to paths in Kensington Gardens that have been made by men and adjacent ‘vagrant paths that have made themselves’, suggesting that he understood that evolution could apply to anything that was based on bottom-up processes, not just plants and animals.

CDF: One of the more exciting research programmes to emerge toward the end of the 20th century was about theory of mind or mentalising. This is the ability that enables us to realise that other people may have different beliefs from us and that it is those beliefs, rather than reality, that will determine their behaviour. Children don’t seem to acquire a full version of this ability until they are about 6 or 7 years old.

RMR: Although Barrie does not specifically name the nature of Peter’s cognitive limitations, his various descriptions of Peter’s behaviour certainly indicate failures of mentalising. Peter cannot remember events of the past and cannot understand what ‘afraid’ means because it is about the future. Peter also appears not to have a fully developed theory of mind and the social cognition that develops from it. He has great difficulty dealing with the beliefs and desires of others.

“What are your exact feelings for me?”
“Those of a devoted son, Wendy.”
“I thought so,” she said, and went and sat by herself at the extreme end of the room.
“You are so queer,” he said, frankly puzzled, “and Tiger Lily is just the same. There is something she wants to be to me, but she says it is not my mother.”
“No, indeed it is not,” Wendy replied with frightful emphasis.

Here Peter is clearly described as not knowing what it is that Tiger Lily wants to be to him, rather than not knowing how he should respond to her amorous advances. Later Peter gives a puzzled, nervous laugh and skips off merrily when he thinks that Wendy has been shot dead.

CDF: Well, it’s certainly amazing that Barrie was so much ahead of his time in presenting these various ideas, which we associate with contemporary cognitive psychology, but is this enough? What does your foray into the humanities contribute to contemporary neuropsychology?

RMR: Barrie was a close observer of human and animal behaviour as well as being extremely well read. I suspect that many of his astute observations were entirely his own, but the implications of scientific discovery were a very pressing issue amongst the intelligentsia of the time and Barrie knew a great deal about science. For example, his story of the fairy duke who does not know that he is in love charmingly demonstrates the James/Lange theory of emotion, which was proposed at the end of the nineteenth century. At first I was surprised by the cognitive approach he adopted but I now realise that much early psychology, especially that proposed by William James (whom Barrie had met), was very cognitive in approach. But it was then overshadowed by the subsequent schools of Psychoanalysis and Behaviourism. We should pay much more attention to the psychological insights of the nineteenth and early twentieth century.


Barrie’s literature makes science accessible, but Barrie also argued that a good grounding in science and the scientific approach could contribute to literature when he said ‘science is the surest means of teaching you how to know what you mean’.

Photograph of the paths in Kensington Gardens courtesy of Harry Baker
A version of this conversation previously appeared in The Psychologist, January 2017

The Encounter

Over the last couple of years, Uta & I have been meeting with Simon McBurney, director of Complicite, as he prepared for his one-man show, The Encounter. Simon hoped that we might be able to tell him what neuroscience can reveal about the nature of consciousness.

The Encounter dramatizes the experiences of Loren McIntyre, as described in Amazon Beaming by Petru Popescu. When Simon told us about this book it was long out of print, but we managed to find a second hand copy. As a result of Simon’s work it was republished in 2015.

Loren McIntyre was a National Geographic photographer, and this is the story of his experiences when he was lost in the remote Amazon rain forest. His survival depends on the leader of a small group of Mayoruna people, whom he followed into the jungle before becoming hopelessly lost. But there is no common language through which they can communicate. He feels utterly isolated, with ‘a psychological distance of 20,000 years’ between him and the people who are his only hope of finding a way back. Eventually he starts to experience ‘communication’ from the leader of the group when he sits near him. He begins to understand some puzzling behaviour, for example, why the group keep destroying their villages and moving on. Remarkably, this communication doesn’t depend on language.

In The Encounter everyone in the audience wears earphones, which helps Simon to recreate and share all the strangeness and terror of McIntyre’s experiences through the wonder of acoustic technology.

When we first talked to Simon about the work he was developing around Amazon Beaming, he asked us whether we thought it was possible for two people to communicate without words. We said, absolutely.

And here is why.


Communication is not simply about the transfer of information. You can do that with a cash machine. When we communicate we know that we are communicating, and we know that our partner knows that she is communicating. We have a subjective, conscious experience of communicating. This experience, we hypothesise, predates language.

This is what I would have said in a discussion planned after a performance of The Encounter at the Barbican. Unfortunately I couldn’t be there because I had to have an operation for a detached retina.

What is conscious experience?

When I look out into the audience, I am aware of innumerable faces. I have the subjective experience of seeing many faces. But this is an illusion. I don’t mean that you are all figments of my imagination. I am confident you are all out there, but, even so, some of you at least are figments of my imagination.

The problem is that my contact with you all seems so direct, when it is really very slight. The only clues I have about you come from the sparse signals that my eyes and ears are sending to my brain. From these crude signals, and from years of experience, my brain can make quite a good model of what’s out there.

You will remember the story of the blind men who come across an elephant. One feels its trunk and thinks it is a snake, another feels its leg and thinks it is a tree.

A single sighted man who comes across an elephant is doing the same thing. The elephant is too big to see with a single fixation of the eye. We have to look all over it. If our eye lands on the trunk, then it’s a good bet that it’s a snake. But, then, as the eye moves along it a head or a tail should appear. When this doesn’t happen, the model has to be changed. It isn’t a snake. Perhaps it’s an elephant. The more evidence our eyes take in, the more plausible it becomes that the thing is an elephant. Our eyes move very fast (4 to 8 fixations per second). Within a few hundred milliseconds we see the elephant. We are entirely unaware of all the work our brain has done and, of course, what we are seeing is not the elephant, but the model that our brain has constructed. This model is often incomplete, with several missing bits that are filled in with guesses. This is why some of you are figments of my imagination. There is a well-known YouTube video showing that a gorilla can walk past some basketball players without being noticed, if you are too busy counting the basketball passes.

But what is the point of all this vivid subjective experience?

T. H. Huxley believed that our conscious experience has no function: ‘Consciousness [is] completely without any power of modifying the working [of the body] as the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery.’ I believe that Huxley was wrong, and we can see this from the metaphor he chose. This is because the steam-whistle does influence the behaviour of other engines.

Our conscious experience is very vivid, but also very private. There is no way I can have your experiences. It is even possible that the colour experience that I call red is actually the one you would call green if you were to experience it. How could we ever know? But there is a paradox here. Our conscious experience may be private, but it is also the only aspect of our mental life that we can share with others. I can’t tell you anything about what my brain is doing. And I certainly can’t tell you about all those mental processes that never reach my consciousness.



What I can tell you about is my model of the world. And, at the same time, you can be telling me about your model of the world. So if we are like steam locomotives, we are certainly hearing each other’s whistles.




Conscious experience is for interacting

And, because we are sharing the same world and because we also have very similar brains, our models are also likely to be very similar. But they will not be entirely similar. Our models will also depend on all our past experiences including our interactions with others. Our models of the world will be strongly influenced by our cultural background.

But what happens when two people interact? Interacting with another person is different from interacting with a rock. Unlike a rock, the person I am interacting with is creating a model of me at the same time as I am making a model of her. The model I create of you helps me to predict what you are going to do, which also helps me to communicate with you. My model of you will have many different aspects. I will try to discover what sort of person you are. But in my view the most important aspect of you that I am trying to model, is your model of the world. That is the model of the world we are currently sharing.

brainsBecause we are sharing the same world, any differences in our models will reflect our different experiences and cultural backgrounds. So, when I know something about your model, I know something about you. But, if I need to communicate with you, then I should try to make my model similar to yours. And, at the same time, you will be trying to make your model similar to mine. Some believe that, if two devices interact while making inferences about each other, then they will eventually converge on the same model.

Language is extremely useful for discovering something about other people’s models of the world, but it is not the only way. Simply by watching how someone moves you can learn about how they see and understand the world about them. The more time you spend with someone, the better you will get at predicting how they are going to move. You won’t know how you do it. It just happens.

To make this prediction you have learned about their model of the world and, inevitably, this has changed your own model. At some point the two models will be in almost perfect synchrony. At this point you will have the conscious experience of what seems like, and, indeed is, wordless communication.


Our Danish friend, Dan Bang, is just finishing his DPhil on confidence.

If you type confidence into Google you will get millions of hits, mostly about self-confidence. You are told that, for a small fee, self-confidence can be learned and will enable you to influence people and earn more money.

This is not the kind of confidence that Dan is interested in.


I associate confidence with psychophysics experiments. You make people look at an endless series of pictures in which there may or may not be moving dots. You ask them, ‘Were the dots moving?’ and then ‘How confident are you that they were moving?’ These experiments are so boring that the only people prepared to take part are the authors of the paper.

CDF: So why is confidence so interesting?

DB: On the one hand, confidence is an objective quantity. We can link confidence to behaviour or real-world events. We can ask, when people are more confident, are they also more likely to give the correct answer? We call this resolution (or metacognitive sensitivity). The more people’s low and high confidence discriminates between their incorrect and correct answers, the higher the resolution.

On the other hand, confidence is also a subjective quantity. You and I might have different ideas about what it means to be “not so sure” – does it mean that the probability that we are correct is 25% or 50%? We call this calibration (or metacognitive bias). So even if our confidence has the same resolution, I might express myself more cautiously than you do. Our low and high confidence need not fall within the same range. I might say “not so sure” when thinking that there is a 75% probability that I am correct. But you might have no problem saying “absolutely certain”.
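The distinction between resolution and calibration can be made concrete with a toy simulation. This is only an illustrative sketch with invented numbers (it is not the authors' analysis code): two simulated observers are equally good at telling their right answers from their wrong ones, but one systematically reports lower confidence than the other.

```python
# Toy sketch of resolution vs calibration (illustrative, not the authors' analysis).
import random

random.seed(1)

def simulate(bias, n=10000):
    """Simulate one observer: correct with probability 0.75, with
    confidence that tracks correctness plus noise and a calibration bias."""
    trials = []
    for _ in range(n):
        correct = random.random() < 0.75
        conf = (0.8 if correct else 0.55) + random.gauss(0, 0.05) + bias
        trials.append((correct, min(max(conf, 0.0), 1.0)))
    return trials

def resolution(trials):
    # Metacognitive sensitivity: mean confidence on correct minus incorrect trials.
    right = [c for ok, c in trials if ok]
    wrong = [c for ok, c in trials if not ok]
    return sum(right) / len(right) - sum(wrong) / len(wrong)

def calibration(trials):
    # Metacognitive bias: mean confidence minus actual accuracy.
    accuracy = sum(ok for ok, _ in trials) / len(trials)
    mean_conf = sum(c for _, c in trials) / len(trials)
    return mean_conf - accuracy

cautious = simulate(bias=-0.15)   # an under-confident observer
boastful = simulate(bias=+0.15)   # an over-confident observer

# Both discriminate their right from their wrong answers equally well...
print(resolution(cautious), resolution(boastful))
# ...but one reports systematically lower confidence than the other.
print(calibration(cautious), calibration(boastful))
```

The two observers end up with nearly identical resolution but calibration biases of opposite sign, which is exactly the dissociation described above.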

For me, confidence is interesting because, with carefully controlled experiments, we can quantify how people communicate their inner states, and we can ask whether the way in which they communicate this information changes with the social context.

CDF: So it may be interesting for you, but aren’t the experiments still boring?

DB: People don’t get bored doing my experiments. They work together in pairs and discuss what they have seen. We often think that confidence is a private experience, but in my experiments people talk to each other about how confident they feel.

CDF: Why would they talk about their confidence?

DB: If they disagree about what they have seen, they have to decide who is right. A good rule of thumb is that the more confident person is also more likely to be right. Two people working together can do better than the best person working alone, and the more they talk about confidence the greater the advantage for the pair. Simply by going with the more confident person after each presentation you can get an advantage.
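The advantage of simply following the more confident partner can be seen in a toy simulation. This is a sketch with made-up numbers, not the experiments themselves: each partner gets noisy evidence about a stimulus, decides from its sign, and reports its magnitude as confidence.

```python
# Toy simulation of the "go with the more confident partner" rule
# (a sketch with invented numbers, not the experiments described here).
import random

random.seed(2)

n = 20000
sigma = 1.5   # both partners have the same (noisy) sensitivity
a_right = b_right = pair_right = 0

for _ in range(n):
    # Each partner receives noisy evidence about a stimulus whose true
    # value is +1; the decision is the sign of the evidence, and the
    # confidence is its magnitude.
    a = random.gauss(1.0, sigma)
    b = random.gauss(1.0, sigma)
    a_right += a > 0
    b_right += b > 0
    # The pair simply adopts the decision of whoever is more confident.
    chosen = a if abs(a) > abs(b) else b
    pair_right += chosen > 0

print(a_right / n, b_right / n, pair_right / n)
```

With equal sensitivities the pair reliably beats either individual working alone, which is the advantage described above.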

CDF: How can people predict whether they are going to be right or not? This is very mysterious to me. Where does the information come from?

DB: There are a lot of different theories. Some think that our confidence directly reflects the reliability or strength of the information upon which our decisions are based. In my tasks, this information could be sampled from memory or through the senses. In general, the more reliable this information is, the more likely we are to be correct. Others don’t think we have such direct, privileged access to our inner workings. Instead, we infer our confidence.

One way to do this is to monitor the speed with which we reach our decisions. In most situations, decisions that we make quickly are more likely to be right, and fast responses tend to be associated with greater confidence. Observers are quite good at judging other people’s confidence by watching their movements. However, in one of our studies, we showed that simply going with the faster person is not as good as going with the more confident one. So, confidence seems to carry a lot of useful information.

CDF: I guess you mean that confidence is a marker of competence and speed is another marker. We would certainly want to take advice from competent people. But can’t this easily go wrong? Over-confident people think they are giving good advice when they are not. Working with an over-confident person could be disastrous.

DB: Even an over-confident person will be more confident when s/he is right and less confident when s/he is wrong. S/he can be accurate about their confidence (resolution), but have a bias to exaggerate it (calibration). If we want to work successfully with each other we need to calibrate the way we report our confidence to one another. When I say that I am very confident it has to mean the same as when you say you’re very confident.

CDF: You mean that I have to make sure that my subjective experience of confidence corresponds to your subjective experience of confidence. How is this possible? It’s like asking whether my experience of red is the same as your experience of red.

DB: Actually there's a quick and dirty way of doing it, so to say, which works most of the time. People usually use words, but you can ask them to use numbers from 1 to 6 to indicate their confidence. An under-confident person might mostly use the numbers 1 to 4, while an over-confident person mostly uses the numbers 4 to 6. It's fairly obvious that they are using the scale in a different way. I have found that people align their use of such confidence scales so that they have the same average confidence rating across the experiment. This might not necessarily be the middle of the scale. So, some pairs might both use the scale in an "under-confident" way, while other pairs might both use it in an "over-confident" way. There are very few mismatches.

Confidence is a subjective experience, but there are still common features that people can agree on. The two ends of the scale might be fixed at guessing and certain. It is obviously more difficult to have agreement about the middle of the scale, but people can still agree on the order of their levels of confidence.

CDF: That’s very interesting. If you used a 3-point scale of confidence, it would be difficult to be sure if we both meant the same thing with a rating of 2, but the more items in the scale the less the problem will be. In an earlier study, your colleagues showed how each pair developed their own verbal descriptions of confidence – sure, almost sure, a little sure, not quite sure, &c. I was very surprised that the mean number of levels for these spontaneously developed scales was about 18. I was surprised because we all learned, as students, that the optimum number for a scale was 7±2. But, of course, the more levels we have, the less the problem of equating subjective experience.

DB: Yes, we actually find that, if you give people a continuous scale (e.g., 1 to 6 in steps of .000001) instead of a discrete one (e.g., 1 to 6 in steps of 1), then they perform better. The problem of agreeing on what exactly each level means disappears.

CDF: I am very interested in alignment. It seems to be a critical feature of joint action. The Mirror Neuron story is all about alignment. We automatically align our motor movements and our perception of the world. What you are telling me about confidence seems to be an example of automatic, subjective alignment.

DB: That’s much too speculative for me.

CDF: You called this strategy a quick and dirty method. Does this mean it sometimes goes wrong?

DB: Yes, the strategy only works when the people in the group have equal competence. If they have different levels of competence, they should not try to match their confidence. The more competent person should be consistently more confident than the less competent. Otherwise the pair will take the advice of the less competent person too often.

CDF: But presumably we can notice when someone is more or less competent? Could we do this first and then adjust the way we talk about our confidence?

DB: Actually this seems to be more difficult than you might think. We just published a study showing that people take too much advice from an incompetent partner (and take too little advice from a competent partner). This is not a problem of not being able to work out that the partner is less competent (or more competent). It happened even when they had explicit feedback about their relative competence. People seem to forget this information in the situation.

CDF: But you were using Danish students and every one knows how modest and trusting they are.

DB: That can’t be the explanation. We observed just the same behaviour in Iran where people are supposed to be less trusting of each other.

CDF: I wonder why there should be this universal equality bias, when it makes group decisions less successful.

DB: Perhaps people are more interested in smooth social interactions than in accurate decisions?

CDF: That’s too speculative for me.

What’s so good about being rational?

We are still planning THE BOOK, but we always turn to ideas for the graphic novel first and are constantly distracted by the wonderful artists that we are inspired by. That is, if we are not distracted by cooking and eating.

CDF (neatly cutting celery, chilli and chives):

The trolley problem has to feature.

It is not only visually striking but it will be useful to illustrate some facts about the notorious clash between emotion and reason in our social minds.

UF: Isn’t it strangely related to that other clash we are always struggling with? Between our egotistical and prosocial motives. Are we more rational when we are being prosocial?


So to recap: An out-of-control trolley is speeding down the line towards 5 railway workers who will all be killed. You can save them by diverting the trolley down a branch line, but this will result in one person being killed. Should you divert the trolley?

Most people answer, Yes. It’s the rational, utilitarian answer, and also pro-social, since it avoids killing 5 people.

CDF sharpens his knife with such excruciating noise that UF has to leave the kitchen temporarily. When she returns, Chris is flattening a tiny chicken that has been almost split in half and rubbing it with herbs.

You can make a slight change of wording of the trolley problem: You can save the 5 workers by pushing the large man, standing next to you, onto the track, thus stopping the trolley, but also killing the large man. Should you push the large man?

Now, most people answer, No.

UF: So, what is going on?

CDF (carefully placing a layer of cut Brussels sprouts into butter foaming in a small heavy saucepan): Fortunately, there’s a brain imaging study to help us out. Volunteers in the scanner were asked to reflect on the suggestion that they should push the large man onto the track. They showed much higher activity in ‘emotional areas’ of the brain. It seems, if you don’t reflect you can more readily make the utilitarian choice – ‘utilitarian’ meaning ‘for the greater good’. Just do the arithmetic: the lives of 5 people add up to more than the life of 1. However, the emotional response to the thought of pushing a person onto the track is hard to ignore. It interferes with processes by which we might reach a utilitarian decision.

UF (turning up the gas flame while stirring vanilla custard): The emotions are brought to a boil by the extreme nature of the decision you have to make. They tell you that you can’t kill the large person next to you. But they also make you forget the five others. What happens if the outcome of the decision is less fraught?

CDF: There is the ultimatum game: Bob is given a pot of money to share with Liz. Bob offers a proportion to Liz. If Liz accepts, then both can keep their share. If Liz rejects the offer, then neither gets any money. The rational decision for Liz is to accept anything, since some money is better than none.

UF: In practice, Liz will get angry and reject offers when she feels they are insultingly low.

CDF: Rejection happens if Bob offers less than about a third of the pot. And now if you could get out of my way…

UF (taking her custard to the side and getting out sherry to dribble on some sponge fingers in dessert glasses): Just a moment…

CDF (drying his hands): Once again brain imaging comes to our rescue. As you suspected, rejection of offers is associated with activity in emotional regions of the brain.

UF: Even with these more trivial decisions, emotion is the enemy of reason. But wait, it’s not necessarily an irrational action. If we ignored emotion then we wouldn’t know what is good or bad for us. We make decisions by choosing the good and avoiding the bad. What is so good about being rational?

CDF (putting the chicken, now covered in herbs, into the oven): Now to the frontal lobes – the origin of reason in the brain. When the frontal lobes are damaged, decisions should become less rational.

UF (pouring the vanilla custard over morello cherries in the dessert glasses): Don't they?

CDF: When people with damage to prefrontal cortex play the ultimatum game they do become more irrational in their responses. They are strongly inclined to reject poor offers. But, here’s the rub: when they are presented with moral dilemmas, they select the more utilitarian scenarios, and they act more rationally than people with intact frontal lobes.

UF (sprinkling almond flakes on top of the custard): Well that’s a bit difficult to explain. How can frontal lobe damage cause people to be less rational in one situation and more rational in another?

CDF (opening a bottle of St Aubin, 2009): First, there’s a problem with the trolley problem: What people say they would do doesn’t necessarily relate to what they would actually do! In the ultimatum game people have to make real choices. But, as typically presented, the trolley problem is hypothetical.

UF: Let’s sit down and see what this wine tastes like.

CDF: And I can tell you about one problem with the trolley problem. It’s hypothetical.

The trolley problem in real life

 Attempts to explore the trolley problem in real life have proved controversial.

The latest activity from lawmakers comes just two weeks after a Senate bill introducing new trolley safety regulations died in committee. The bill encountered stiff opposition from industry lobby groups such as the National Railroad Association. "Trolleys don't kill people," said NRA spokesman Lane Stone, "moral philosophers kill people."

(taken from here and here)

UF: (laying cutlery and large white napkins on the table): Didn’t our friend, Dean Mobbs compare hypothetical dilemmas with the same problem in real life?

CDF (opening the oven and springing away as his glasses get steamed up): Yes. This is the Pain vs Gain paradigm, which you can study in the lab. Participants get a pot of money and can either use this to prevent a companion from receiving painful electric shocks or keep the money for themselves.

UF: Surely, it’s clear what to do: You use all the money to prevent the shock to the companion.

CDF: Well, yes. In the hypothetical scenario 93% of the people said that’s what they’d do. But in real life this didn’t happen. All the participants kept some of the money for themselves, and all their companions suffered some shocks.

UF: So what trick are the emotions playing here? Where is our deeply prosocial nature; our predisposition to help others?

CDF serving up the chicken by cutting it neatly in half: People felt just that little bit more emotionally attached to their own benefit.

UF: Ah, this chicken is delicious. And it goes amazingly well with the blackened sprouts.

CDF: This version of cooking sprouts makes them almost edible.

UF: Let's face it. We are all moral hypocrites. We do things even though we say we wouldn't. It's tough following one's moral principles.

CDF pouring more wine: Actually it’s also tough being a moral hypocrite. We have to justify our behaviour when we don’t follow our moral principles. One of the people in the Pain vs Gain experiment said, “I struggled with what to do. I wanted the money but I didn’t want to hurt him. I decided that he could take a little pain and I could make a little money.” We can always come up with hypocritical justifications.

UF (feeling benevolent after having been indulged in her inexplicable liking for sprouts): Sadly, looking after “Number One” often gets in the way of looking after your nearest and dearest others, let alone the greatest number of people.

Utilitarian judgements and the greater good

CDF: This brings us to the study by Guy Kahane at the Oxford Centre for Neuroethics.

UF (clearing the dishes away): I remember you saying what an excellent paper it was.

CDF: Yes indeed. Kahane and colleagues have explored what we have been talking about. They asked what kind of person endorses the utilitarian decision to kill the large man next to him to save five lives. Was this a fine person thinking of the greater good? Not a bit of it. They found that this person is also likely to endorse behaviours such as tax evasion, doesn't give money to charity and feels less of an identity with the group. This is a rational egotist.

UF: This brings me back to Liz rejecting low offers in the ultimatum game. She may actually have done a noble act serving the greater good. Maybe Bob will be taught a lesson and behave more fairly in the future.

CDF: Yes, people who reject low offers, are typically prosocial in other situations. Here being prosocial is linked to behaving irrationally, just as in Kahane’s study being egotistical is linked to behaving rationally.

UF (fetching the dessert glasses): I am interested in how the emotions feature in both types of people. Presumably emotions can be self-oriented or other-oriented.

CDF: I am interested in how making a rational choice doesn’t mean concern for the greater good. Rational means I can justify my behaviour to myself and to others, by showing that I have made the best choice.

CDF: This trifle is not bad. To continue: Being rational is about winning arguments, not about being good. The non-egotistical choice can also be considered rational, but it is a bit harder to justify to yourself: you have to believe that you or your friends will benefit later on. This is probably best in the long run, while the egotistical choice seems best in the short run.

What’s so good about being utilitarian?

UF: So, utilitarian judgments are just what we need when it comes to justifying our behaviour. Obviously it is better to save 5 at the expense of 1.

CDF: But emotional involvement is difficult to keep away. Consider the original dilemma proposed by William Godwin. If only one person can be saved from the fire, should we save Archbishop Fenelon or the chambermaid? Godwin – clearly ignoring the emotional component – concluded that we should save the Archbishop, since he would contribute more to the greater good.

This is a utilitarian judgment, but is it a good judgment? Unfortunately all sorts of terrible things have been justified on the basis that the life of one kind of person is more valuable than the life of another kind of person. Here our strong emotional inhibitions may prevent us from entering into a nightmare scenario. I would not like to live in a society where less valuable people were routinely sacrificed for the greater good.

UF: Unfortunately people can get trapped in nightmare scenarios. Hurricane Katrina created Godwin's dilemma in real life. Sheri Fink wrote about the terrible story of Memorial Hospital in New Orleans, where hospital staff were confronted with the need to evacuate their patients under the most difficult circumstances. Imagine being surrounded by five feet of water, with no electricity, little in the way of food and medical supplies, and indoor temperatures of 40°C. Seven patients had already died while being moved. Which patients should be given priority in the evacuation? The sickest and most vulnerable? Or should they be left behind, since they had 'the least to lose'? The consequence of making the latter choice was arrest for second-degree murder. Interestingly, amidst great public controversy, the case was rejected by a grand jury. They recognised the impossible dilemma that the staff faced.

CDF: I don’t know what decision I would make in such terrible circumstances, but I know I would want my rational attempts at self-justification to be tempered by emotion.

The last ferry from Esbjerg to Harwich: Why do we behave irrationally – or do we?

The Dana Sirena, the huge ferry which has crossed the North Sea every day for uncountable years, will run no more. There is only one more journey, and that will be to return from Harwich to Esbjerg – and that's it. We don't know who made the decision and we wonder what the arguments might have been. We are a bit sad, and wonder whether this is a sign that our annual trips to Aarhus for the last ten years must come to an end sometime.

Waiting in the car to get on the ferry, we looked back at a lecture by Antonio Rangel a few days before, which we much enjoyed. Rangel, a leading practitioner of neuro-economics at Caltech, talked about some serious methodological issues in the field. It's not a lack of replication, but remoteness from real life. We have to face it: what people do in the lab just doesn't transfer to the real world. Something crucial is being left out and not understood. People aren't behaving as if they were optimal Bayesians.

UF: To be optimal our behaviour should be rational – no?

CDF: What economists and others mean by rational behaviour is that you choose the option that gives the highest benefit.

UF: This sounds okay, but people often seem not to choose what’s best for them.

CDF: Ah, this depends. Think of the famous marshmallow experiment. You have to resist taking the one marshmallow so that after a certain time you will receive two. But is it always better to delay? Of course not. If the situation is unpredictable, then it is better to take the one marshmallow than risk never getting any.
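The arithmetic behind this can be made explicit. Here is a toy calculation with invented numbers: waiting only pays when the promise of the delayed reward is kept often enough.

```python
# Waiting for two marshmallows beats taking one now only if the promise
# of the delayed reward is kept often enough (illustrative numbers only).
def better_to_wait(p_promise_kept, now=1, later=2):
    # Wait when the expected value of the delayed reward exceeds the sure one.
    return p_promise_kept * later > now

print(better_to_wait(0.8))   # True: in a reliable world, waiting pays
print(better_to_wait(0.3))   # False: in an unpredictable world, take the one now
```

With a one-versus-two choice the tipping point is a 50% chance of the promise being kept; below that, "impulsive" is the rational option.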

UF: So being impulsive is not always a bad idea.

CDF: You don’t choose a big reward option, if it is very unlikely to be achieved. To answer your question, people and other animals for that matter, don’t necessarily behave irrationally if they don’t do what is predicted by a formula to get them the highest value. The formula works in the lab where stakes are low and choices to be made occur with equal likelihood. Rangel argued that these situations are quite irrelevant to real life situations. What looks like weird behaviour from the theoretician’s point of view, turns out to be quite sensible when looked at in the right context. Maybe supposedly irrational people are maximising different variables compared to what the theoreticians think they ought to be maximising.

UF: So ‘crazy’ people aren’t irrational either?

CDF: Well, a very common idea is that everyone would behave like them if they had their bizarre experiences. Irrational behaviour means the model doesn’t fit.

UF: I see. The bizarre experiences are the proper context to explain the behaviour, which might be optimal. I like it, because once again we see how important it is to consider context. Do you have an example?

CDF: It always matters how something is framed. If someone says, “my glass is half-empty” this most likely means “please fill it up”. If someone says, “my glass is half-full” this means, “I’ve got enough for the moment”. So glass half-full and half-empty are not one and the same ‘value’. We find it incredibly easy to understand the meaning of utterances when we interact with others. We can calculate the value in a particular context quite fast.

UF: Isn’t it odd that when the questions are framed in a complex real life context, they become easy? It’s like a magic trick that shows us what the mind is really good at. It’s at home with complex computations that take into account what another person might know or not know. Strip the problems down to their logical essentials, and the computations become hard and result in errors.

CDF: The question is, how does the mind do it? Models proposed by behavioural scientists and economists are extremely good at modelling very basic decision processes, but in social interactions other models are needed. Only when you have such models – and this will be after lots of behavioural experiments – should you even begin to think of brain scanning. As Rangel said in his talk, brain scanning very rarely gives you any answers. You need a model first; it will not emerge from the data. If the data fit the model, then that means something.

UF: There is something else that I wish I understood better: What our ‘priors’ tell us, and what we pick up from current information are often at odds with each other. How do we deal with this?

CDF: There is a good example of how these two computations can be experimentally made to conflict, and in this case the priors win: In a trust game you learn over many rounds how people behave and this should give you a good idea of whether or not to trust that person. But you pay less attention to this learning process when the experimenter has planted in you some prior knowledge about the other person. For example, you read that Peter, the partner in your game, has recently been given a medal for rescuing a child from a fire, and has raised large amounts of money for charity. During the game, however, Peter behaves abominably and cheats. Yet, you remain trusting when all your unconscious processes want to tell you that you should distrust. Bad mistake.
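One way to picture the priors winning is a simple Bayesian sketch. All the numbers here are invented for illustration: belief in the partner's trustworthiness is modelled as a Beta distribution, and a glowing reputation acts like many imagined cooperative rounds.

```python
# An illustrative Bayesian sketch of the priors winning (numbers invented).
# Belief that a partner will cooperate, modelled as a Beta distribution.
def posterior_mean(prior_coop, prior_cheat, seen_coop, seen_cheat):
    # Posterior mean of a Beta-Bernoulli model: pseudo-counts plus observations.
    a = prior_coop + seen_coop
    b = prior_cheat + seen_cheat
    return a / (a + b)

# Ten rounds of direct experience in which the partner cheats eight times:
strong_prior = posterior_mean(30, 1, seen_coop=2, seen_cheat=8)  # hero story planted
weak_prior = posterior_mean(1, 1, seen_coop=2, seen_cheat=8)     # no story

print(round(strong_prior, 2))   # 0.78: still largely trusting despite the cheating
print(round(weak_prior, 2))     # 0.25: direct experience dominates
```

When the planted reputation is worth thirty imagined good deeds, eight observed betrayals barely dent the belief, which is one way of describing the "bad mistake" above.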

UF: I can see how this relates to irrational behaviour: It is the personal and the subpersonal fighting it out with each other. But it is not always clear which type of knowledge you should use for the best: the prior knowledge that you have about the other person and their past deeds, or the information you currently extract from your interaction with them.

CDF: The prior knowledge you get from others will always come from a much larger database than your own direct experience. Perhaps that’s why we pay more attention to knowledge from others?

UF: Sometimes the priors can be too strong, and sometimes the bottom-up learning can exert too much influence. If there is a conflict that can’t be resolved, the decision is likely to be considered irrational.

CDF: Of course the priors are not fixed. They are constantly being altered by what happens in our real time interaction with the world and other people. Data from psychophysics tasks tell us that the decision you just made affects your next decision. How can I know what I like until I see what I have chosen? My behaviour tells me something – now I know what I should do next time.

UF: Is this similar to what happens when we follow the crowd and do what other people do? They may know something that we don’t know. We can benefit from their knowledge, as long as they have it. Like the traders on the stock exchange, who buy stocks that others buy. Perhaps they believe that the others have inside knowledge. This might sometimes even be true, but if it isn’t, stock market bubbles can be created. This certainly looks like irrational behaviour.

CDF: I think we have been talking about our favourite topic: Two systems and how they influence each other, System 1 and System 2, in Kahneman’s sense. Sub-personal and personal in Dennett’s sense. The influence of other people on us, and our influence on them occur both at the personal and the subpersonal level.

UF: But how does the influence of other people, say on the stockmarket, come about?

CDF: That’s what our book has to be about.

Meanwhile, after a long wait, we can drive onto the ferry. We spot a TV cameraman and a presenter in a long black coat, watching and commenting on the last journey of the old Dana Sirena from Denmark to England.

Our colleague from the Interacting Minds Centre at Aarhus University, Andreas Højlund Nielsen, told us about a 15 minute documentary film made by his sister-in-law, Mie Lorenzen. It is called ‘18 hours aboard the England ferry’. It will provide you with the tranquillity of a very calm transit.


Introduction to THE BOOK

For some years, Uta & I have been saying that we will write a book together about social cognition. Now, thanks to the Institute Jean Nicod, this has become a certified commitment. We have written many papers together, but never a book. You might ask, will this be the end of a lovely relationship?

This is what we have agreed on so far: I am trying to create a structure for the book; Uta said she would like to do the colouring in. In the previous post she has provided her overall view of what the book will be about, in what she calls the blurb.

Now for the structure: I need to choose some constraints that will determine the contents of the book and the order in which these contents will be presented. This structure will highlight the message that we wish to communicate and also indicate how our book on social cognition differs from others. We need severe constraints because so much is now being published on social cognition. Almost nothing was published prior to 1990, but in 2013 over 6000 papers appeared.

We have chosen a biological framework, so that our constraints come from considerations of evolution and brain function.

The most obvious evolutionary constraint is to consider human social cognition against the backdrop of social cognition in other animals from bees to apes. We will highlight a common thread of mechanisms for social cognition in animals, but also identify something special about human cognition, which enabled the emergence of language and cultural institutions.

We will also take account of theories, pioneered by John Maynard Smith, about the evolutionary mechanisms enabling the emergence of social interaction. This approach involves the application of game theory to the evolution of cooperation and to the emergence of the transfer of information between creatures, via cues, signals & communication.

All these processes of cooperation and communication are mediated by the brain, which is itself shaped by evolution and experience. I realise that any conclusion as to how the brain works is 'radically premature', but believe that our cognitive models should be consistent with what we know about the brain. Brains are essentially prediction machines. In other words, we use our brains to learn about the world in order to predict, and thus modulate, what will happen to us in both the short term and the long term. This is essentially a Bayesian account of brain function, characterised as a continuously operating hierarchy of loops linking the evidence of the senses with beliefs about the nature of the world while, at the same time, acting upon the world to justify these beliefs. The beauty of this model of brain function is that the same basic principle can account for low-level perception, for example explaining various visual illusions, while also explaining high levels, such as how we might read the intentions of others from their movements.

The structure of the book

Given these background constraints based on considerations of the brain and of evolution, I am planning to structure our book in terms of learning and information transfer. Here are the sections I have in mind, with some of their contents.

What are we learning about? We need to learn about the nature of the world and how to deal with it. There are four worlds that we can learn about.

  1. The physical world of objects
  2. The biological world of agents (other creatures, other people)
  3. The social world of groups
  4. The mental world of ideas

With the exception of the physical world of objects, there is a social aspect to all these worlds. There is also the special problem that arises when we try to learn about other agents: while I am trying to learn about you, you may well be trying to learn about me. We are not just observing, we are interacting.

How do we learn?

  1. By direct experience (trial and error) – we explore the world by ourselves
  2. By observing what others do – we observe others exploring the world
  3. By communication with others – we explore the world with others

Learning from others and with others requires effective information transmission. This transmission can take the form of cues, signals or communication. In the case of cues, the information transmitted is useful to an observer: the receiver, but not the sender, has evolved to take advantage of a cue. This is sometimes called public information. In the case of signals, the information is useful to both sender and receiver: both have evolved to take advantage of the signal. In the case of communication, the signal is sent (and received) intentionally, i.e. it is recognised by both sender and receiver as a signal. This is a form of explicit metacognition.

Learning from observing others depends on cues and signals. Learning with others requires communication.

The emergence of groups and other complex entities: Information exchange can create complex entities. From very simple rules of individual behaviour, large, cohesive groups emerge, such as swarms, shoals and flocks. Simple rules at the individual level can also create complex interactions, such as pack hunting in wolves. The emergence of these complex entities can be explained on the basis of simple responses to cues.
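How simple the individual rules can be is easy to demonstrate. The sketch below is a deliberately stripped-down flocking model of my own (real "boids"-style models use local neighbourhoods and alignment rules): each agent simply drifts a little towards the average position of the group, and a cohesive cluster emerges with no leader and no global plan.

```python
# Minimal flocking sketch (invented for illustration): each agent moves
# a small fraction of the way towards the group centroid each step.
# Cohesion emerges from this one local-looking rule alone.

import random

def step(positions, weight=0.1):
    """Move every agent slightly towards the group's centre of mass."""
    cx = sum(x for x, _ in positions) / len(positions)
    cy = sum(y for _, y in positions) / len(positions)
    return [(x + weight * (cx - x), y + weight * (cy - y))
            for x, y in positions]

random.seed(1)
flock = [(random.uniform(-10, 10), random.uniform(-10, 10))
         for _ in range(20)]
for _ in range(50):
    flock = step(flock)
# After 50 steps the initially scattered agents form a tight cluster.
```

A real model would restrict each agent to responding to its nearest neighbours, which is what makes the cue-based explanation biologically plausible; the global centroid is used here only to keep the sketch short.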

In the same manner more abstract groupings and interactions can emerge from responses to signals and communication at the individual level. For example, groups such as institutions, and concepts such as meaning emerge from individual communication. It is this intentional signalling that is the special feature of human social cognition and enables the development of culture.

Are bees better than humans at making decisions? Honeybees communicate with one another via their waggle dance. This enables bees to make group decisions about where to go to find the best nesting site. This group decision-making ability is far beyond the capability of an individual bee. The mechanisms by which individual bees interact to make a group decision turn out to be very similar to those involved when individual neurons interact within the mammalian brain to enable decision-making. So, just as the swarm is much more capable than the individual bee, groups of humans should be more capable than the lone individual.

Perhaps this is sometimes the case, but more often I wonder what has gone wrong.
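The swarm mechanism can be caricatured as a race to threshold, loosely analogous to evidence accumulation by competing neural populations. Everything below is invented for illustration (the site qualities, the threshold, the recruitment rule); Seeley's own account of the real mechanisms is far richer.

```python
# Hedged sketch of swarm choice as a race between options: scouts
# recruit support for each candidate site in proportion to its quality,
# and the first site to cross a threshold of support is chosen.
# All parameters are illustrative, not drawn from real bee data.

import random

def swarm_decide(qualities, threshold=50, seed=0):
    """Return the index of the winning site."""
    rng = random.Random(seed)
    support = [0.0] * len(qualities)
    while max(support) < threshold:
        for i, quality in enumerate(qualities):
            support[i] += quality * rng.random()  # noisy recruitment
    return max(range(len(qualities)), key=lambda i: support[i])

# The best site reliably wins, even though each individual "scout"
# contribution is noisy.
print(swarm_decide([0.3, 0.9, 0.5]))  # → 1
```

The point of the analogy is that no individual scout compares all the sites; the comparison is performed by the group as a whole, just as no single neuron performs a perceptual decision.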

Where do we go from here?  From now on, as both Uta and I write these various sections, we will post summaries like this one. Through your comments we hope to write a better book. We want to explore the world of social cognition with others.

Under the Markov Blanket

I mentioned Markov blankets to Uta and she was immediately intrigued, just as I had intended. We talked about it again when we were having lunch on a sunny Saturday in the Angelica Café next to St. Anna church.

Over a nourishing beef broth and a chicken salad, we were sitting under a tree that was just bursting into leaf. We could look across the Danube. Opposite us, in filigree splendour, was the Parliament building. In the distance, to the left, we could see the island in the river that is connected to both sides of the town via Margit bridge. Yellow trams were constantly moving along it in both directions. Toylike.



Uta felt despondent despite the glittering river view, despite the magnificent scenic backdrop, despite the delicious beef broth. She was complaining about people not liking boundaries in diagnostic categories, like autism and dyslexia. They were forever talking about grey areas and one thing shading into another.

“Actually, it’s just the point of Markov blankets that there must be boundaries” – I said to cheer her up.

“Please say more.”

From this point on she couldn’t get a word in edgeways.

How I discovered Markov blankets

I went to a lecture by Pierre Jacob the other day, where I learned that people who believe in embodied and extended cognition hate boundaries. So there is no boundary between the brain and the body – hence embodied cognition – and there is no boundary between the brain and sophisticated tools such as iPhones – hence extended cognition.

Boundaries play such a critical role in biological systems that it was strange for me to find that some people hate them. Take, for example, the cell membrane.

The cell membrane surrounds the cytoplasm of living cells. Very complicated transmembrane proteins span the membrane from one side to the other. These function as gateways to control what enters and exits the cell. Without the membrane, the cell ceases to exist and its components are absorbed back into the environment.

In mammals, the skin acts as a protective barrier. Its outermost layer, the epidermis, covers the body's surface and is responsible for keeping water in the body and preventing pathogens from entering. Here again complicated mechanical devices are found, such as the ears, mouth and anus, which function as gateways to control what enters and exits the body.

So I started wondering: is there a cognitive boundary defending and defining that bundle of psychological abilities that we call the mind?

Fortunately, I had just been reading Karl Friston’s paper on ‘Life as we know it’. This paper introduced me to the concept of the Markov Blanket.

“So, at last – what is a Markov blanket?” Uta asked, looking up from her plate expectantly.

A Markov blanket separates the states in a Bayesian network into internal states and external states that are hidden (insulated) from the internal states. In other words, the external states can only be seen by the internal states indirectly, through the Markov blanket.

In response to Uta’s frown, I said,

“The blanket is like a cognitive version of a cell membrane, shielding states inside the blanket from states outside.”

I just had an e-mail exchange with Karl Friston to find out more about these cognitive membranes, I told her, opening my laptop.

CDF: Boundaries play such a critical role in biological systems, that it was strange to find that some philosophers hate them.

KJF: This is interesting – I got an e-mail from Jakob Hohwy a few days ago – he just got a paper accepted in “Noûs”. He was also addressing these strange philosophers by talking about “evidential boundaries”. He framed the issue in terms of radical embodiment but clearly wanted to use Markov blankets to bring the boundaries centre stage.

CDF: In cognitive terms, the brain/mind is shielded by a Markov blanket with sensory inputs and motor outputs as the only way of interacting with external states. Does this provide us with a cognitive definition of the mind?

KJF: To my mind (sic) yes. This is because (being completely ignorant of philosophy) I can equate consciousness with inference. Inference is only defined in relation to (sensory) evidence – that necessarily induces a Markov blanket (that separates the stuff that is being inferred from the stuff that is doing the inferencing).

CDF: Are iPhones, laptops, &c. protected by their own Markov blankets? If so, this is an argument against the extended mind.

KJF: Yes it is – this would be Jakob’s position. As I understand it, we still have an internal representation of an iPhone and make active inferences about how we expect ourselves to use it. (But the iPhone itself is outside the blanket and may be making inferences about us.)

CDF: Can Markov blankets form and dissolve over a short time (e.g. during selective attention or joint attention)?

KJF: Yes – I have not thought about this, but the Markov blanket is itself a dynamic process and, over time, will visit many different states. I can imagine the sleep-wake cycle being an example of the formation and dissolution of a Markov blanket through sensory gating. I will have to think about attention!

Uta cheered up. “Now we have defined the mind. Next time we can use a Markov blanket to define dyslexia and autism.”

Some more technical stuff

The Markov blanket for a node A in a Bayesian network is the set of nodes composed of A’s parents, its children, and its children’s other parents. The Markov blanket of a node contains all the variables that shield the node from the rest of the network. This means that the Markov blanket of a node is the only knowledge needed to predict the behaviour of that node. The term was coined by Pearl in 1988. (Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Representation and Reasoning Series. San Mateo, CA: Morgan Kaufmann.)
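Pearl's definition is mechanical enough to be written down directly. The sketch below (my own, with a hypothetical rain/sprinkler network purely for illustration) computes the blanket of a node given a map from each node to its parents.

```python
# Direct transcription of Pearl's definition: the Markov blanket of a
# node is the union of its parents, its children, and the other parents
# of its children ("spouses"). The example network is hypothetical.

def markov_blanket(node, parents):
    """`parents` maps each node name to the set of its parent nodes."""
    children = {c for c, ps in parents.items() if node in ps}
    spouses = {p for c in children for p in parents[c]} - {node}
    return parents.get(node, set()) | children | spouses

# Tiny network: Rain -> Wet, Sprinkler -> Wet, Wet -> Slippery
net = {"Wet": {"Rain", "Sprinkler"}, "Slippery": {"Wet"}}

# The blanket of Rain is its child Wet plus Wet's other parent,
# Sprinkler; Slippery is shielded from Rain by the blanket.
print(markov_blanket("Rain", net))
```

Notice that Slippery is outside Rain's blanket: once you know the state of Wet and Sprinkler, learning about Slippery tells you nothing further about Rain, which is exactly the "shielding" described above.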


There can be hierarchies of Markov blankets. For example, the Markov blanket of an animal encloses the Markov blankets of its organs, which enclose Markov blankets of cells, which enclose Markov blankets of nuclei and so on.