…Then I wouldn’t believe you. It doesn’t matter whether you use a top hat and a wand, or a multi-million pound fMRI scanner: reading thoughts is still far beyond the reach of modern neuroscience, let alone anybody else. Recent years have seen huge advances in brain scanning technology, and it is true that scientists can now, in effect, look inside the working brain and measure its activity. But the technology has important limits.
This doesn’t stop ‘brain reading’ from hitting the popular press. The inexorable cycle of newspaper headlines has some recurring themes: politicians do bad stuff; photogenic students get good grades; animals, especially during the summer, get born, get lost and found, and learn to talk or dance or knit… And, with surprising regularity, “Scientists can read your mind” (or words to that effect). This is not true. At best, it is a gross exaggeration; and of course, many of these articles will qualify their assertions and eventually even admit that the scientists in question can’t actually read your thoughts, which is what most of us understand ‘mind reading’ to be.
The technique behind the majority of mind reading stories is fMRI (functional magnetic resonance imaging), which uses a huge magnetic field to measure blood flow changes in the human brain and, from those, to infer brain activity in small pockets of space called ‘voxels’. The imaging technique itself has come under a lot of fire in recent years; there are doubts about whether those shifts in blood flow necessarily reflect bristling brain activity. Furthermore, two 2009 meta-studies of fMRI papers flagged major concerns about selection bias and ‘voodoo correlations’ arising from the way the active voxels are selected and analysed. Finally, a rather Pythonesque study even used poorly analysed fMRI data to ‘demonstrate’ brain activity in a dead salmon. Irreverent detractors aside, however, it is clear that fMRI can be used to useful effect by scientists who are aware of its limitations; indeed, the (since retitled) ‘voodoo correlations’ paper came from within the lab of Nancy Kanwisher, a world leader in functional imaging who takes a notably ‘bottom-up’, assumption-free approach.
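To see why those ‘voodoo correlations’ worried statisticians, here is a toy simulation of the circular-analysis pitfall the critics described (my own illustration, with made-up numbers, not an analysis from any of the papers in question): generate pure noise for tens of thousands of ‘voxels’, keep only those that happen to correlate with a behavioural score, then report the correlation of just those voxels on the very same data used to pick them.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 50_000

# Pure noise: simulated voxel activity and a behavioural score
# with no genuine relationship to one another.
voxels = rng.standard_normal((n_subjects, n_voxels))
behaviour = rng.standard_normal(n_subjects)

# Correlate every voxel with the behavioural score.
vc = voxels - voxels.mean(axis=0)
bc = behaviour - behaviour.mean()
r = (vc * bc[:, None]).sum(axis=0) / (
    np.sqrt((vc**2).sum(axis=0)) * np.sqrt((bc**2).sum()))

# Circular step: select only the voxels that look 'active', then report
# their average correlation on the same data used to select them.
selected = np.abs(r) > 0.6
print(f"{selected.sum()} 'active' voxels out of {n_voxels}; "
      f"mean |r| among them = {np.abs(r[selected]).mean():.2f}")
```

Hundreds of voxels clear the threshold by chance alone, and their average correlation looks impressively strong despite meaning nothing. The fix the critics proposed is straightforward: select voxels on one set of data and measure the correlation on an independent set.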
fMRI is good at comparing very specific things. If you happily put yourself into a scanner and were told either to imagine running a marathon or to picture the boy or girl whom you first kissed, the scanner could help scientists guess which one you actually did, provided they already had scans from other people thinking the same things. What the scan could not do is discover that you were really thinking about lunch.
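A minimal sketch of how this kind of forced-choice ‘decoding’ works, using invented data and a standard scikit-learn classifier (the numbers and labels here are purely illustrative, not from any real study):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_scans, n_voxels = 200, 500

# Hypothetical training data: one voxel pattern per scan, each labelled
# with the task performed (0 = imagine a marathon, 1 = recall a first kiss).
labels = rng.integers(0, 2, n_scans)
patterns = rng.standard_normal((n_scans, n_voxels))
patterns[labels == 1, :20] += 0.5  # a faint, consistent difference in 20 voxels

# Train on labelled scans, then estimate how well new scans can be
# assigned to one of the TWO known conditions.
clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, patterns, labels, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f}")  # well above the 0.50 chance level

# Crucially, the classifier can only ever answer 'marathon' or 'kiss':
# a scan of someone thinking about lunch gets forced into one of those bins.
```

This is why a decoder needs prior examples of the very thoughts it is asked to distinguish; it cannot stumble upon an unanticipated one.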
What about a simpler question, such as ‘Is this person lying?’ This is perhaps more plausible, because it could be argued that lying and telling the truth do indeed engage different emotional or decision-making processes that might be physically distinguishable in the brain. However, very few scientific papers actually examine deceptive behaviour using fMRI, and most of them have been inconclusive (such as this PNAS paper).
It’s remarkable, then, that at least two companies currently peddle fMRI-based lie detection services. In 2009, a Californian father accused of child abuse hired ‘No Lie MRI’ to demonstrate his innocence. The story was broken on March 14th by Emily Murphy in a Stanford blog post, and Wired Science wrote it up two days later. Within a fortnight, the application to admit the MRI scan as defence evidence was withdrawn, after the child’s lawyers received advice from Stanford’s Center for Law & the Biosciences, where Murphy works. In May this year, evidence from another company made it as far as a New York courtroom but was thankfully rejected.
We must be wary of these developments, but at the same time we should not allow them to detract from the other brilliant things that brain scanning can accomplish. The technology for brain-computer interfaces is progressing rapidly, from tweeting with your brain and silently bossing a robot about, to monkeys learning to eat with robotic limbs. In each case, however, the fancy gadgets take quite some mastering, and they are unable to ‘read out’ their instructions directly from a naïve user. Similarly, the amazing experiments that have allowed near-vegetative patients to communicate (see the NY Times report here) rely on a brain-scanning strategy that is calibrated beforehand on healthy individuals.
Used carefully, both in its technicalities and in its ethical implications, brain imaging is powerful science, but it can’t read you like a book. And as for magicians and TV tricksters, there is only one that you can trust. Chris Cox, “the mindreader who can’t read minds”, uses body language and other predictable behavioural cues to predict how his volunteers will act in simple games, while openly admitting that any patter about actual mind-reading is “bullshit”. His, then, is the only mindreading show that even a neuroscientist can enjoy. Next time you see a mind-reader who is rather less up-front, or read another lazy headline about ‘mindreading’ scientists, remember Chris and think: “He can’t do it, and neither can they!”
Jonathan Webb is a DPhil student in Neuroscience