On 27th April, Ernö Teglas, Chris, Agnes Melinda Kovacs and I met up at the most OTT coffeehouse in Budapest, the New York. The décor puts you in a Rococo mood, but the pianist on the upper level suggests a 1920s jazz feeling.
The conversation quickly turned to how to do experiments with babies. Why do such experiments take forever to complete? What to do with non-replications?
– “But this is the same with any experiment we have ever done!” Chris quips. “But why do you think your experiments take so long to complete? Why are you skeptical about non-replications?”
ET: Here is my claim: The success of a baby experiment is decided by how you instruct the parents, how and where they hold their baby.
Instruction is particularly relevant because it is the parents who coordinate the data acquisition. Without them these experiments simply could not happen, and they are very willing to help. The problem is that they have no experience with such situations, so in a short session we have to turn them into “experimenters”. Every detail matters. It may seem surprising, but we really do have long debates about how to hold the baby during tests and what the optimal position is. The way a baby is held makes all the difference to their freedom to move. For example, when held just under the arms, the mother may exert more influence on the baby than necessary. Also, the baby can easily slip down just slightly, and then the eye gaze slides too: it may no longer be on the target you display on the screen. The hold has to be on the baby’s hips.
CF: Experiments stand and fall by how the experimenter instructs the subject.
UF: But isn’t this against the idea that scientific experimentation has to be independent of the experimenter? The whole point is that experiments can be replicated by somebody else.
CF: Ah, there are critical aspects of instructions, which often don’t get spelled out in methods sections.
ET: The pity is that if a student doing a first experiment fails to replicate a previous result, this throws doubt on the previous finding, when in fact it throws doubt on the student, who is still learning and doesn’t yet know how to do the experiment. Only once the student has succeeded in replicating a well-known, robust finding can he or she be trusted to do a good job.
CF: When I first worked at the long-disbanded MRC Clinical Research Centre, the biochemists (who, mysteriously, later turned into molecular biologists) often said things like: today the reaction just didn’t work; we have to try again.
UF: So it’s not necessarily the sign of an immature and still soft science that you have to be pernickety about exactly how an experiment is carried out. If an experiment doesn’t replicate, there are many possible reasons, and it does not necessarily mean the previous results are untrue.
ET: With infants we basically rely on measuring what they are looking at, and for how long they look at it: they look longer at something that surprises them; they get bored and look away when something is highly predictable to them. However, they also look away when something else attracts their attention, when there is noise, or when they feel uncomfortable for some reason. All this makes us extremely careful to have completely soundproof labs, very relaxed mothers, and babies who have only recently woken up and have been fed before we even start the experiment.
– This was the moment when we found out that the coffee Melange with Chili is actually rather spicy, but the honey that served as a bottom layer softened the effect. “Overall, each ingredient plays a role if their interaction is orchestrated by a hand sensitive to details,” says Ernö.
UF: Is it true that you often don’t get results that you know you should get?
ET: True! Then we go over every step of the procedure and find possible reasons.
UF: Is it okay to eliminate data when you believe there was a slip in the procedure for one particular baby?
ET: Actually, you have to do it. You have to eliminate data all the time; if you don’t, you include complete nonsense, for example when an infant no longer looks at the target. You need him or her to look in order to take notice of the scenario you have devised. We usually have to eliminate 20% of the data, if not more. Of course, it depends on the experimental protocol: in habituation studies the rejection rate can be as high as 50%. The procedure only lasts a few minutes because babies soon tire of watching a simple scenario on video, and sometimes we can only use a fraction of those few minutes.
UF: It makes me marvel not only at the fact that you are so incredibly scrupulous in your procedure but also that you have so many successes with your ingenious experiments.