
Slow Science

I love this painting by Carl Larsson. Here is a domestic scene of a mother and two children shelling fresh peas into an earthenware pot, pods heaping up on the floor. They are immersed in their work in companionable silence. They can anticipate a tasty seasonal meal. This is not opening a bag of frozen peas and boiling it for a few minutes. This is slow food, savoured for its own sake. The slow food movement, according to Wikipedia, started in the 1980s as a protest against the spread of fast food. It spread rapidly, with the aim of promoting local foods and traditional gastronomy. In August 2012 in Aarhus I first hit on the idea of slow science. Just as with food production, slowness can be a virtue: it can be a way to improve quality and resist quantity.

I had to give a short speech to celebrate the end of the Interacting Minds Project and the launch of the Interacting Minds Centre. I was looking back on the preceding five years and wondered whether the Project would have been predicted to succeed or to fail. I found that there had been far more reason to predict failure than success. One reason was that procedures for getting started were very slow, so slow that they made us alternately laugh and throw up our hands in disbelief.

But, what if the success of the Project was not despite the slowness, but because of it? Chris and I had been plunged from the fast-moving, competitive UCL environment in London into a completely different intellectual environment. This was an environment where curiosity-driven research was encouraged and competition did not count for much. After coming to Denmark for some extended stays, and for at least one month every year since 2007, we have been converted. We are almost in awe of slowness now. We celebrate Slow Science.

Can Slow Science be an alternative to the prevalent culture of publish or perish? Modern life has put time pressure on scientists, particularly in the way they communicate: e-mail, phones and cheap travel have made communication almost instant. I still sometimes marvel at the ease of typing and editing papers with search, delete, replace, copy and paste. Even more astonishing is the speed of searching libraries and performing data analysis. What effect have these changes in work habits had on our thinking habits?

Slow Food and Slow Science are not about slowing down for its own sake, but about increasing quality. Slow science means getting into the nitty-gritty, just as podding fresh peas with your fingers is part of the production of a high-quality meal. Science is a slow, steady, methodical process, and scientists should not be expected to provide quick fixes for society’s problems.

I tweeted these questions and soon got a response from Jonas Obleser who sent me the manifesto of slowscience.org from 2010. He had already put into words what I had been vaguely thinking about.

Science needs time to think. Science needs time to read, and time to fail. Science does not always know what it might be at right now. Science develops unsteadily, with jerky moves and unpredictable leaps forward—at the same time, however, it creeps about on a very slow time scale, for which there must be room and to which justice must be done.

Slow science was pretty much the only science conceivable for hundreds of years; today, we argue, it deserves revival and needs protection. … We do need time to think. We do need time to digest. We do need time to misunderstand each other, especially when fostering lost dialogue between humanities and natural sciences. We cannot continuously tell you what our science means; what it will be good for; because we simply don’t know yet. 

These ideas have resurfaced again and again. Science journalist John Horgan posted “The Slow Science Movement Must Be Crushed” on his blog in 2011, with the punch line that if Slow Science caught on, and scientists started publishing only high-quality data that have been double- and triple-checked, then he would have nothing to write about anymore.

Does science sometimes move too fast for own good? Or anyone’s good? Do scientists, in their eagerness for fame, fortune, promotions and tenure, rush results into print? Tout them too aggressively? Do they make mistakes? Exaggerate? Cut corners? Even commit outright fraud? Do journals publish articles that should have been buried? Do journalists like me too often trumpet flimsy findings? Yes, yes, yes. Absolutely.

I liked this, but not much more was discussed on blogs until the more recent so-called replication crisis. I wonder whether it has converted some more scientists to Slow Science. Earlier this year, the Dynamic Ecology blog had a post “In praise of slow science” that attracted many comments:

It’s a rush rush world out there. We expect to be able to talk (or text) anybody anytime anywhere. When we order something from half a continent away we expect it on our doorstep in a day or two. We’re even walking faster than we used to.

Science is no exception. The number of papers being published is still growing exponentially at a rate of over 5% per year (i.e. doubling every 10 years or so). Statistics on growth in number of scientists are harder to come by … but it appears … the individual rate of publication (papers/year) is going up.
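As an aside, the doubling-time arithmetic behind growth figures like these is easy to check with the standard formula t = ln 2 / ln(1 + r). A minimal sketch, with the growth rates purely illustrative:

```python
import math

def doubling_time(rate):
    """Years for a quantity growing at a fixed annual rate to double."""
    return math.log(2) / math.log(1 + rate)

# At exactly 5% annual growth, doubling takes about 14 years;
# "doubling every 10 years or so" corresponds to roughly 7% growth.
print(round(doubling_time(0.05), 1))  # → 14.2
print(round(doubling_time(0.07), 1))  # → 10.2
```

So “every 10 years or so” fits a rate of roughly 7%, which is still consistent with the quote’s “over 5% per year”.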

There has been much unease about salami slicing to create as many papers as possible; about publishing ephemeral results in journals with scanty peer review. Clearly if we want to improve quality, there are some hard questions to be answered:

How do we judge quality science? Everyone believes their science is of high quality. It’s like judging works of art. But deep down we know that some pieces of our research are just better than others. What are the hallmarks? More pain? A more critical mass of data? Perhaps you yourself are the best judge of what are your best papers. In some competitive schemes you are required to submit or defend only your best three/four/five papers. This is a good way of making comparisons fairer between candidates who may have had different career paths and shown different productivity. More is not always better.

How to improve quality in science? That’s an impossible question, especially if we can’t measure quality, and if quality may become apparent only years later. Even if there were an answer, it would have to be different for different people and different subjects. Science is an ongoing process of reinvention. Some have suggested that it is necessary to create the right social environment to incubate new ideas and approaches, the right mix of people talking to each other. When new tender plants are to be grown, a whole greenhouse has to be there for them to thrive in. Patience is required when there is an unrelenting focus on methodological excellence.

Who would benefit? Three types of scientists. First, scientists who are tending the new shoots and have to exercise patience. These are people with new ideas in new subjects. These ideas often fall between disciplines but might eventually crystallise into a discipline of their own. In this case getting grants and getting papers published in traditional journals is difficult and takes longer. Second, scientists who have to take time out for good reasons, often temporarily. If they are put under pressure, they are tempted to write up preliminary studies, and to salami-slice bigger studies. Third, fast science is a barrier for scientists who have succeeded against the odds while coping with neurodevelopmental conditions such as dyslexia, autism spectrum disorder, or ADHD. It is well known that they need extra time for the reviewing and writing-up part of research. The extra time can reveal a hidden brilliance and originality that might otherwise be lost.

We also should consider when Slow Science is not beneficial. If there is a race on and priority is all, then speed is essential. Sometimes you cannot wait for the usual safety procedures of double checking and replication. This may be the case if you have to find a cure for a new highly contagious illness. In this case be prepared for many sleepless nights. Sometimes a short effortful spurt can produce results and the pain is worth it. But it is not possible to maintain such a pace. Extra effort means extra hours, and hence exhaustion and eventually poorer productivity.

An excuse for being lazy? Idling, procrastinating, and plain old worrying can sometimes bring forth bright flashes of brilliance. Just going over data again and again can produce the satisfying artisanal feelings one might expect to find in a ceramic potter or furniture maker. Of course, the thoughts inspired by quiet down time will be lost if they are not put into effect. Slow science is all about quality, and quality is never achieved by idling, taking short cuts, or over-promising. Slow science is not a way of avoiding competition, and not a refuge for the ultra-critical who can’t leave well enough alone. Papers don’t need to be perfect to be published.

What about the competitive nature of science? Competition cannot be avoided in a time of restricted funding and more people chasing after fewer jobs. In competition there is a high premium on coming first. I was impressed by a clever experiment by Phillips, Hertwig, Kareev and Avrahami (2014): Rivals in the dark: How competition influences search in decisions under uncertainty [Cognition, 133(1), 104–119]. These authors used a visual search task, where players had to spot a target as quickly as possible and indicate their decision with a button press. The twist was that players were in a competitive situation and did not know when their competitors would make their decision. If they searched carefully they might lose out because another player might get there first. If they searched only very cursorily, they might be lucky. It turned out that for optimal performance it was adaptive to search only minimally. To me this is a metaphor for the current problem of fast science.

A solution to publish or perish? There may be a way out. Game theory comes to our aid. The publish or perish culture is like the prisoner’s dilemma. You need to be slow to have more complete results and you need to be fast to make a priority claim, all at the same time. Erren, Shaw & Morfeld (2015) draw out this scenario between two scientists who can either ‘defect’ (publish early and flimsy data) or ‘cooperate’ (publish late and more complete data), and suggest a possible escape. Rational prisoners would defect. And this seems to be confirmed by the command to publish or perish. The authors suggest that it should be possible to allow researchers to establish priority using the equivalent of the sealed envelope, a practice used by the Paris Académie des Sciences in the 18th century. Meanwhile, prestigious institutions would need to foster rules that favour the publication of high-quality rather than merely novel work. If both these conditions were met, the rules of the game would change. Perhaps there is a way to improve quality through slow science.
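The prisoner’s dilemma structure of publish or perish can be made concrete with a toy payoff matrix. The numbers below are invented for illustration, not taken from Erren, Shaw & Morfeld; all that matters is their ordering, in which scooping a patient rival pays best and being scooped pays worst:

```python
# Illustrative payoffs (made up): each scientist chooses to 'defect'
# (publish early, flimsy data) or 'cooperate' (publish late, complete data).
# payoff[(my_move, their_move)] = my payoff. Priority beats completeness here,
# which is what makes early publication the dominant strategy.
payoff = {
    ("cooperate", "cooperate"): 3,  # both wait: shared credit for solid work
    ("cooperate", "defect"):    0,  # I wait, my rival scoops me
    ("defect",    "cooperate"): 5,  # I scoop my rival
    ("defect",    "defect"):    1,  # both rush: two flimsy papers
}

def best_reply(their_move):
    """My payoff-maximising move, given the rival's move."""
    return max(("cooperate", "defect"), key=lambda m: payoff[(m, their_move)])

# Defecting is the best reply whatever the rival does: a dominant strategy,
# even though mutual cooperation would pay both players more.
print(best_reply("cooperate"), best_reply("defect"))  # → defect defect
```

The sealed envelope and quality-favouring rules would work precisely by reordering these payoffs, so that waiting no longer risks the worst outcome.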

 

Confidence

Our Danish friend, Dan Bang, is just finishing his DPhil on Confidence.

If you type confidence into Google you will get millions of hits, mostly about self-confidence. You are told that, for a small fee, self-confidence can be learned and will enable you to influence people and earn more money.

This is not the kind of confidence that Dan is interested in.


I associate confidence with psychophysics experiments. You make people look at an endless series of pictures in which there may or may not be moving dots. You ask them, ‘Were the dots moving?’ and then ‘How confident are you that they were moving?’ These experiments are so boring that the only people prepared to take part are the authors of the paper.

CDF: So why is confidence so interesting?

DB: On the one hand, confidence is an objective quantity. We can link confidence to behaviour or real-world events. We can ask, when people are more confident, are they also more likely to give the correct answer? We call this resolution (or metacognitive sensitivity). The more people’s low and high confidence discriminates between their incorrect and correct answers, the higher the resolution.

On the other hand, confidence is also a subjective quantity. You and I might have different ideas about what it means to be “not so sure” – does it mean that the probability that we are correct is 25% or 50%? We call this calibration (or metacognitive bias). So even if our confidence has the same resolution, I might express myself more cautiously than you do. Our low and high confidence need not fall within the same range. I might say “not so sure” when thinking that there is a 75% probability that I am correct. But you might have no problem saying “absolutely certain”.
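The distinction between resolution and calibration can be shown with a toy calculation. This is only a sketch with invented ratings, not the measures used in Dan’s actual studies:

```python
def resolution(trials):
    """Mean confidence on correct trials minus mean on incorrect trials."""
    correct = [c for c, ok in trials if ok]
    wrong = [c for c, ok in trials if not ok]
    return sum(correct) / len(correct) - sum(wrong) / len(wrong)

def mean_confidence(trials):
    """Overall confidence level: where on the scale the ratings sit."""
    return sum(c for c, _ in trials) / len(trials)

# (rating on a 1-6 scale, was the answer correct?) — hypothetical data.
cautious = [(3, True), (5, True), (2, False), (4, True), (3, False)]
confident = [(4, True), (6, True), (3, False), (5, True), (4, False)]

# Same resolution: both discriminate right from wrong equally well...
print(resolution(cautious), resolution(confident))  # → 1.5 1.5
# ...but different calibration: one sits a whole point higher on the scale.
print(mean_confidence(cautious), mean_confidence(confident))  # → 3.4 4.4
```

The second observer is simply the first shifted one point up the scale: identical resolution, more exuberant calibration.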

DB: For me, confidence is interesting because, with carefully controlled experiments, we can quantify how people communicate their inner states, and we can ask whether the way in which they communicate this information changes with the social context.

CDF: So it may be interesting for you, but aren’t the experiments still boring?

DB: People don’t get bored doing my experiments. They work together in pairs and discuss what they have seen. We often think that confidence is a private experience, but in my experiments people talk to each other about how confident they feel.

CDF: Why would they talk about their confidence?

DB: If they disagree about what they have seen, they have to decide who is right. A good rule of thumb is that the more confident person is also more likely to be right. Two people working together can do better than the best person working alone, and the more they talk about confidence the greater the advantage for the pair. Simply by going with the more confident person after each presentation you can get an advantage.
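This maximum-confidence heuristic can be sketched in a small simulation. The signal-detection setup below is an assumption for illustration (each partner’s confidence is simply the strength of their noisy evidence), not the design of the actual experiments:

```python
import random

random.seed(1)

def trial(noise_a=1.0, noise_b=1.0, n=20000):
    """Two observers see noisy evidence about a signal s = ±1.
    Each answers with the sign of their evidence; confidence = |evidence|.
    The pair goes with whichever partner is more confident."""
    a_ok = b_ok = pair_ok = 0
    for _ in range(n):
        s = random.choice([-1, 1])
        ev_a = s + random.gauss(0, noise_a)
        ev_b = s + random.gauss(0, noise_b)
        a_ok += (ev_a > 0) == (s > 0)
        b_ok += (ev_b > 0) == (s > 0)
        joint = ev_a if abs(ev_a) > abs(ev_b) else ev_b
        pair_ok += (joint > 0) == (s > 0)
    return a_ok / n, b_ok / n, pair_ok / n

a, b, pair = trial()
# With equal-sensitivity partners, the pair beats either individual.
print(f"A: {a:.2f}  B: {b:.2f}  pair: {pair:.2f}")
```

With two equally sensitive partners, going with whoever is more confident on each trial reliably beats either individual; as the dialogue notes later, this breaks down when competence differs.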

CDF: How can people predict whether they are going to be right or not? This is very mysterious to me. Where does the information come from?

DB: There are a lot of different theories. Some think that our confidence directly reflects the reliability or strength of the information upon which our decisions are based. In my tasks, this information could be sampled from memory or through the senses. In general, the more reliable this information is, the more likely we are to be correct. Others don’t think we have such direct, privileged access to our inner workings. Instead, we infer our confidence.

One way to do this is to monitor the speed with which we reach our decisions. In most situations, decisions that we make quickly are more likely to be right, and fast responses tend to be associated with greater confidence. Observers are quite good at judging other people’s confidence by watching their movements. However, in one of our studies, we showed that simply going with the faster person is not as good as going with the more confident one. So, confidence seems to carry a lot of useful information.

CDF: I guess you mean that confidence is a marker of competence and speed is another marker. We would certainly want to take advice from competent people. But can’t this easily go wrong? Over-confident people think they are giving good advice when they are not. Working with an over-confident person could be disastrous.

DB: Even an over-confident person will be more confident when they are right and less confident when they are wrong. They can be accurate about their confidence (resolution), but have a bias to exaggerate it (calibration). If we want to work successfully with each other we need to calibrate the way we report our confidence to one another. When I say that I am very confident it has to mean the same as when you say you’re very confident.

CDF: You mean that I have to make sure that my subjective experience of confidence corresponds to your subjective experience of confidence. How is this possible? It’s like asking whether my experience of red is the same as your experience of red.

DB: Actually there’s a quick and dirty way of doing it, so to say, which works most of the time. People usually use words, but you can ask them to use numbers from 1 to 6 to indicate their confidence. An under-confident person might mostly use the numbers 1 to 4, while an over-confident person mostly uses the numbers 4 to 6. It’s fairly obvious that they are using the scale in a different way. I have found that people align their use of such confidence scales so that they have the same average confidence rating across the experiment. This might not necessarily be the middle of the scale. So both members of a pair might use the scale in an “under-confident” way, while another pair might both use it in an “over-confident” way. There are very few mismatches.

Confidence is a subjective experience, but there are still common features that people can agree on. The two ends of the scale might be fixed at guessing and certain. It is obviously more difficult to have agreement about the middle of the scale, but people can still agree on the order of their levels of confidence.

CDF: That’s very interesting. If you used a 3-point scale of confidence, it would be difficult to be sure if we both meant the same thing with a rating of 2, but the more levels in the scale, the smaller the problem will be. In an earlier study, your colleagues showed how each pair developed their own verbal descriptions of confidence – sure, almost sure, a little sure, not quite sure, &c. I was very surprised that the mean number of levels for these spontaneously developed scales was about 18. I was surprised because we all learned, as students, that the optimum number for a scale was 7±2. But, of course, the more levels we have, the smaller the problem of equating subjective experience.

DB: Yes, we actually find that, if you give people a continuous scale (e.g., 1 to 6 in steps of .000001) instead of a discrete one (e.g., 1 to 6 in steps of 1), then they perform better. The problem of agreeing on what exactly each level means disappears.

CDF: I am very interested in alignment. It seems to be a critical feature of joint action. The Mirror Neuron story is all about alignment. We automatically align our motor movements and our perception of the world. What you are telling me about confidence seems to be an example of automatic, subjective alignment.

DB: That’s much too speculative for me.

CDF: You called this strategy a quick and dirty method. Does this mean it sometimes goes wrong?

DB: Yes, the strategy only works when the people in the group have equal competence. If they have different levels of competence, they should not try to match their confidence. The more competent person should be consistently more confident than the less competent. Otherwise the pair will take the advice of the less competent person too often.

CDF: But presumably we can notice when someone is more or less competent? Could we do this first and then adjust the way we talk about our confidence?

DB: Actually this seems to be more difficult than you might think. We just published a study showing that people take too much advice from an incompetent partner (and take too little advice from a competent partner). This is not a problem of not being able to work out that the partner is less competent (or more competent). It happened even when they had explicit feedback about their relative competence. People seem to forget this information in the situation.

CDF: But you were using Danish students and every one knows how modest and trusting they are.

DB: That can’t be the explanation. We observed just the same behaviour in Iran where people are supposed to be less trusting of each other.

CDF: I wonder why there should be this universal equality bias, when it reduces successful group decisions?

DB: Perhaps people are more interested in smooth social interactions than in accurate decisions?

CDF: That’s too speculative for me.

The last ferry from Esbjerg to Harwich: Why do we behave irrationally – or do we?

The Dana Sirena, the huge ferry which has crossed the North Sea every day for countless years, will run no more. There is only one more journey, and that will be to return from Harwich to Esbjerg – and that’s it. We don’t know who made the decision and we wonder what the arguments might have been. We are a bit sad and wonder whether this is a sign that our annual trips to Aarhus for the last ten years must come to an end sometime.

Waiting in the car to get on the ferry, we looked back at a lecture by Antonio Rangel, a few days before, which we much enjoyed. Rangel is a leading practitioner of neuro-economics, from Caltech, and he talked about some serious methodological issues in this field. It’s not about lack of replication, but about remoteness from real life. We have to face it: what people do in the lab just doesn’t transfer to the real world. Something crucial is being left out and not understood. People aren’t behaving as if they were optimal Bayesians.

UF: To be optimal our behaviour should be rational – no?

CDF: What economists and others mean by rational behaviour is that you choose the option that gives the highest benefit.

UF: This sounds okay, but people often seem not to choose what’s best for them.

CDF: Ah, this depends. Think of the famous Marshmallow experiment. You have to resist taking the one Marshmallow so that after a certain time you will receive two. But, is it always better to delay? Of course not. If the situation is unpredictable, then it is better to take the one Marshmallow than risk never getting any.
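The marshmallow trade-off can be put as a one-line expected-value comparison. A minimal sketch, with hypothetical probabilities:

```python
def should_wait(p_keep_promise, reward_now=1, reward_later=2):
    """Waiting is rational only if the expected delayed reward
    beats the sure immediate one."""
    return p_keep_promise * reward_later > reward_now

# In a reliable environment, waiting pays; in an unpredictable one
# (the promise is kept less than half the time), grabbing the single
# marshmallow is the rational choice.
print(should_wait(0.9))   # → True
print(should_wait(0.4))   # → False
```

With one marshmallow now against two later, the break-even point is a 50% chance that the promise is kept.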

UF: So being impulsive is not always a bad idea.

CDF: You don’t choose a big reward option, if it is very unlikely to be achieved. To answer your question, people and other animals for that matter, don’t necessarily behave irrationally if they don’t do what is predicted by a formula to get them the highest value. The formula works in the lab where stakes are low and choices to be made occur with equal likelihood. Rangel argued that these situations are quite irrelevant to real life situations. What looks like weird behaviour from the theoretician’s point of view, turns out to be quite sensible when looked at in the right context. Maybe supposedly irrational people are maximising different variables compared to what the theoreticians think they ought to be maximising.

UF: So ‘crazy’ people aren’t irrational either?

CDF: Well, a very common idea is that everyone would behave like them if they had their bizarre experiences. Irrational behaviour means the model doesn’t fit.

UF: I see. The bizarre experiences are the proper context to explain the behaviour, which might be optimal. I like it, because once again we see how important it is to consider context. Do you have an example?

CDF: It always matters how something is framed. If someone says, “my glass is half-empty” this most likely means “please fill it up”. If someone says, “my glass is half-full” this means, “I’ve got enough for the moment”. So glass half-full and half-empty are not one and the same ‘value’. We find it incredibly easy to understand the meaning of utterances when we interact with others. We can calculate the value in a particular context quite fast.

UF: Isn’t it odd that when the questions are framed in a complex real life context, they become easy? It’s like a magic trick that shows us what the mind is really good at. It’s at home with complex computations that take into account what another person might know or not know. Strip the problems down to their logical essentials, and the computations become hard and result in errors.

CDF: The question is how does the mind do it? Models proposed by behavioural scientists and economists are extremely good at modelling very basic decision processes, but in social interactions other models are needed. Only if you have such models – and this will be after lots of behavioural experiments – should you even begin to think of brain scanning. As Rangel said in his talk, brain scanning very rarely gives you any answers. You need a model first. It will not emerge from the data. If the data fit the model, then that means something.

UF: There is something else that I wish I understood better: What our ‘priors’ tell us, and what we pick up from current information are often at odds with each other. How do we deal with this?

CDF: There is a good example of how these two computations can be experimentally made to conflict, and in this case the priors win: In a trust game you learn over many rounds how people behave and this should give you a good idea of whether or not to trust that person. But you pay less attention to this learning process when the experimenter has planted in you some prior knowledge about the other person. For example, you read that Peter, the partner in your game, has recently been given a medal for rescuing a child from a fire, and has raised large amounts of money for charity. During the game, however, Peter behaves abominably and cheats. Yet, you remain trusting when all your unconscious processes want to tell you that you should distrust. Bad mistake.
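A toy Bayesian sketch shows why a strong planted prior can survive contrary evidence. The numbers are invented: the glowing story about Peter is treated as the equivalent of twenty prior observations of good behaviour:

```python
# Toy Bayesian sketch (invented numbers): trustworthiness as a Beta belief.
def posterior_trust(prior_good, prior_bad, seen_good, seen_bad):
    """Posterior mean of a Beta(prior_good, prior_bad) belief after
    observing seen_good cooperative and seen_bad cheating rounds."""
    return (prior_good + seen_good) / (prior_good + prior_bad + seen_good + seen_bad)

# The medal-winner story acts like 20 prior 'observations' of good behaviour;
# a short run of mostly cheating barely moves the posterior.
strong_prior = posterior_trust(20, 1, seen_good=2, seen_bad=8)
flat_prior = posterior_trust(1, 1, seen_good=2, seen_bad=8)

print(round(strong_prior, 2))  # → 0.71: still trusting despite the cheating
print(round(flat_prior, 2))    # → 0.25: the game experience dominates
```

Whether remaining trusting here is a “bad mistake” depends on whether the planted story really deserved the weight of twenty observations.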

UF: I can see how this relates to irrational behaviour: It is the personal and the subpersonal fighting it out with each other. But it is not always clear which type of knowledge you should use for the best: the prior knowledge that you have about the other person and their past deeds, or the information you currently extract from your interaction with them.

CDF: The prior knowledge you get from others will always come from a much larger database than your own direct experience. Perhaps that’s why we pay more attention to knowledge from others?

UF: Sometimes the priors can be too strong, and sometimes the bottom-up learning can exert too much influence. If there is a conflict that can’t be resolved, the decision is likely to be considered irrational.

CDF: Of course the priors are not fixed. They are constantly being altered by what happens in our real time interaction with the world and other people. Data from psychophysics tasks tell us that the decision you just made affects your next decision. How can I know what I like until I see what I have chosen? My behaviour tells me something – now I know what I should do next time.

UF: Is this similar to what happens when we follow the crowd and do what other people do? They may know something that we don’t know. We can benefit from their knowledge, as long as they have it. Like the traders on the stock exchange, who buy stocks that others buy. Perhaps they believe that the others have inside knowledge. This might sometimes even be true, but if it isn’t, stock market bubbles can be created. This certainly looks like irrational behaviour.

CDF: I think we have been talking about our favourite topic: Two systems and how they influence each other, System 1 and System 2, in Kahneman’s sense. Sub-personal and personal in Dennett’s sense. The influence of other people on us, and our influence on them occur both at the personal and the subpersonal level.

UF: But how does the influence of other people, say on the stock market, come about?

CDF: That’s what our book has to be about.

Meanwhile, after a long wait, we can drive onto the ferry. We spot a TV cameraman and a presenter in a long black coat, watching and commenting on the last journey of the old Dana Sirena from Denmark to England.

Our colleague from the Interacting Minds Centre at Aarhus University, Andreas Højlund Nielsen, told us about a 15 minute documentary film made by his sister-in-law, Mie Lorenzen. It is called ‘18 hours aboard the England ferry’. It will provide you with the tranquillity of a very calm transit.


A Danish Breakfast with Andreas Roepstorff

 


We are very fond of the light, bright, functional Scandinavian style of our apartment in Nobelparken, Aarhus University. It has been our home during our many stays, short and long, over the last ten years. The reason that we are coming here to the Interacting Minds Centre, and keep coming back, is Andreas Roepstorff. One of our special treats is when he comes to breakfast and brings with him freshly baked bread. Today he has also brought a chokoladestang, a classic Danish pastry, extra special. Actually, Danish pastry is called ‘wienerbrød’. All the ingredients of a ‘hyggelig’ breakfast are here. Also candles, thin chocolate slices, honey, jam, cheese, rullepølser, and plenty of coffee. Perhaps there are some other essential Scandinavian-style ingredients: an open-plan flat and some open-plan minds.

In this context anything can be discussed. There is no need to be afraid of the big and awkward questions. We started talking about overall goals, long-held beliefs, what we would really like to accomplish. Andreas suggested straight away that we need to think of hierarchies of goals, referring to Etienne Koechlin, who has mapped out hierarchies of goals in the prefrontal cortex. Long-held beliefs will be kept in the background and other more short-term beliefs will be nested within.

It’s all about upholding alternative views of the future, Andreas says. Then Chris throws in “Mental time travel”. I say “Episodic foresight” – as in a pleasant game of ping-pong.

AR: There is the open future – there are several possibilities in front of you.

UF: Ah – so like you to say this. The Viking spirit and your trademark – the Blue Ocean.

CDF: This is where other people give these alternative views.

UF: Where culture = other people.

AR: Okay. Think of the Blue Ocean – we can navigate in the future. We explore.

CDF: Not only finding out what the world is like, but creating a world as we’d like it to be.

AR: Creating the world in your image. God-like.

UF: ??

AR: Religion is an extension of the social image. It’s a hierarchical story. You create the top of the hierarchy.

UF: Go on.

AR: The world is unpredictable. But we believe there is a real world – this constrains us.

UF: ??

AR: The other problem is: other people have different perspectives. We have the Ukraine example.

UF: If everyone had the same idea, what then?

AR: Maybe it’d be like China in the old days.

— We pause to help ourselves to more bread, butter and stuff to put on top.

 

AR: Here is a long-held belief: the openness of the human cognitive system. But there are constraints. Low-level behaviour is constrained totally by the immediate environment.

CDF: The higher up in hierarchy, the less constrained.

AR: The ability to share possible futures, to co-create, undoes the straitjacket that is there otherwise.

CDF: Humans are good at creating new niches.

AR: Yeah, and at changing the world to fit them. Imagining the future.

UF: I have always been struck by the deep human interest in forecasts. You can see it in ancient archaeological sites, like Stonehenge, or the oracle of Delphi. There are the lovely medieval pictures of the Sybils in Aarhus cathedral. People have been obsessed by predicting the future. They have lots of devices for doing so. Why?

AR: You need external help, devices to forecast. Astronomy is way beyond the human time scale. How long do you need to know that something is cyclical?

CDF: The Babylonians knew about the 18-year moon cycle. They had to have instruments to monitor their observations over such a time scale. And once you start monitoring … there comes control.

UF: Are there some practical suggestions here? I would like to predict our own future.

AR: I have devised this exercise for postdocs. This is what I ask them when they first come: Imagine yourself in five years’ time. Everything has gone as well as possible. It is 2019. You are invited to give a keynote speech at a big conference. In Hawaii, no less. How did you get there? What would you want to tell people? Who is in the audience? Whom would you like to impress, living or dead?

UF: Nice.

AR: Some students know on the spot. They can then write a grant application.

UF: There is the curse of the here and now. An eternally extending tree of possibilities. The more you think about it the more choices you have.

AR: Exactly. You cannot see the path from here that will lead to a future endpoint. There are just too many branches. But you can, if it’s the other way round: Start from the endpoint and go back to the present.


CDF: This is analogous to Daniel Wolpert’s solution to the motor problem by minimising endpoint variability.

UF: We have a theory now.

CDF: When you make an action there are an infinite number of ways to proceed. You first choose the endpoint and then you can minimise end-point error.

AR: Cultural experience allows a blueprint for the future. People do have expectations of their future. Ideal scenarios have an internal logic. You get all this from your culture and can frontload it in a ‘cognitive app’.

UF: So the blue ocean has some pre-defined spots to aim for?

AR: The critical thing is to place a buoy out there. Then you have to do something to get there.

CDF: Your exercise for postdocs is all very well. But what it doesn’t allow for is taking advantage of unforeseen opportunities that happen to occur. You should be able to take these up and then go off in a completely different direction.

AR: Reconfiguring is always fine. It is easier to follow a tangential line when you have a larger perspective. If your goals are too short term then this doesn’t work, and you can go off course.

UF: The proper Long Term perspective is from death.

AR: From beyond death. You have to see yourself from other people’s point of view in the far future. Think of your legacy.

CDF: What my mother used to call “the backdrop of eternity”.

— More helpings of coffee and chokoladestang are now necessary. Chris offers to make more coffee.

 

AR: Going back to the Blue Ocean and the exercise for postdocs. It’s necessary to anchor the imagination in a specific place. Hawaii. And there need to be specific people whom the postdoc would like to have in the audience and to impress.

UF: I like this idea that specific constraints like these concrete anchor points actually get the imagination going.

AR: If you don’t have these far-flung anchor points, then you can get into cognitive apathy; the more you think about it, the more choices open up, and it becomes even more complicated. I have been there….

CDF: In the Tower of London game where you have to plan future moves, there is this interesting observation: if there are two equally good moves, then you slow down. You would have thought you could speed up, since it doesn’t matter which one you choose! Now here’s an experiment: Would performance be improved if you asked people to think about planning their moves backwards rather than forwards?

UF: Good – we always need to finish on a suggestion for a new experiment!

image credit: groenling at flickr (1989): Sybils in Aarhus cathedral