From the “Mind-Blowing Animal Behavior” files

I’m not even going to try to summarize this; just go over to Ed Yong’s blog and read about how groupers and moray eels collaborate to hunt down prey. Using gestures to communicate. There are videos.

The giant moray eel can grow to three metres in length and bites its prey with two sets of jaws—the obvious ones and a second set in its throat that can be launched forward like Hollywood’s Alien. It’s not a creature to be trifled with. But the coral grouper not only seeks out giant morays, but actively rouses them by vigorously shaking its body. The move is a call to arms that tells the moray to join the grouper in a hunt.

The two fish cooperate to flush out their prey. The grouper’s bursts of speed make it deadly in open water, while the moray’s sinuous body can flush out prey in cracks and crevices. When they hunt at the same time, prey fish have nowhere to flee.

We Aren’t the World

Economists and psychologists, for their part, did an end run around the issue with the convenient assumption that their job was to study the human mind stripped of culture. The human brain is genetically comparable around the globe, it was agreed, so human hardwiring for much behavior, perception, and cognition should be similarly universal. No need, in that case, to look beyond the convenient population of undergraduates for test subjects. A 2008 survey of the top six psychology journals dramatically shows how common that assumption was: more than 96 percent of the subjects tested in psychological studies from 2003 to 2007 were Westerners—with nearly 70 percent from the United States alone. Put another way: 96 percent of human subjects in these studies came from countries that represent only 12 percent of the world’s population.

Henrich’s work with the ultimatum game was an example of a small but growing countertrend in the social sciences, one in which researchers look straight at the question of how deeply culture shapes human cognition. His new colleagues in the psychology department, Heine and Norenzayan, were also part of this trend. Heine focused on the different ways people in Western and Eastern cultures perceived the world, reasoned, and understood themselves in relationship to others. Norenzayan’s research focused on the ways religious belief influenced bonding and behavior. The three began to compile examples of cross-cultural research that, like Henrich’s work with the Machiguenga, challenged long-held assumptions of human psychological universality.

from the Pacific Standard, via my cousin Peter on the Facebook.

Read the whole thing; it’s an excellent introduction to the research that led to the hypothesis that most psych studies are carried out on a WEIRD (Western, Educated, Industrialized, Rich, and Democratic) population, and that people growing up in WEIRD societies may have drastically different understandings of the world. The discovery that many things we thought were hardwired into humans are actually strongly affected by culture has shaped conversations across psychology in the last few years, and the article has many examples of both small and large cultural differences that shape human behavior.

Then I looked a little closer and realized it was Tim Minchin. That changed things a lot. It’s nine minutes of sheer unadulterated brilliance. Watch it. If you only watch one nine-minute animated beat poem in your life (and one may be all that I have in me), this is the one to watch. It tells the story of a dinner party where he confronts a credulous hippie. Good stuff.

via allbleedingstops.blogspot.com

Simulated grunts throw off non-tennis-players in a “where’s that ball” task

Do y'all remember, from earlier this summer, the outrage in the pro tennis world over grunting? Pro-grunting players argue that making such a sound when hitting the ball is a perfectly reasonable technique for getting as much power as possible, while anti-grunting players argue that "excessive" noises are cheating, because they prevent the opponent from hearing the sound of the ball hitting the racket, and are distracting to boot.

A PLoS ONE paper that just came out looked at precisely this question. Well, sorta this question. A laboratory approximation of this question, in which undergraduate students (none with "more than recreational tennis experience," they reassure us) watched clips of a tennis ball being hit and had to say where it landed.

They had "a professional tennis player" (no discussion of the gender or skill level of this player) hit forehand or backhand shots down a court towards a video camera centered on the far baseline, and collected clips of shots that landed in a 2 x 2 meter square at the corner of the sideline and the baseline. They counterbalanced their collection so that they had equivalent numbers of forehand and backhand shots going to each side, and then they showed each clip four times: twice with a simulated grunt (500 ms of white noise) and twice without; of these, each was shown once with the clip ending immediately upon the ball's landing, and once with the clip playing for 100 ms longer. (The instant-cutoff made the judgment task harder; the longer time to process the visual was easier. Maybe grunts only matter for hard-to-perceive things.) They asked the students who watched these clips to determine if the ball landed to the left or the right of the camera.

And what did they find?

Tennis fig 1
Here we see reaction time (so, lower is faster/better); dark bars are when the video was shown with a grunt, white bars are clips shown in silence. The hard judgements (when the clip ended exactly as the ball landed) are clearly made more slowly than the easy judgements – that's not surprising. More impressive is that the simulated grunt clearly slows people down (their ANOVA found no interaction between decision difficulty and sound; that is, the grunt had the same effect on easy-decision trials and hard-decision trials). People are 20 – 30 ms slower when they hear the noise at the same time as the player hits the ball.

Tennis fig 2
Same pattern for accuracy – people make fewer errors on easy trials than on hard ones, and more errors when the grunt is played than without. It's a difference of 3–4%, but that's non-trivial in professional tennis!
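
For the curious, here's what that difficulty × sound repeated-measures ANOVA looks like in code, run on made-up reaction times that I generated to mimic the pattern above; it's a sketch of the analysis structure only, not their data or their code.

```python
# Sketch of a 2 (difficulty) x 2 (sound) repeated-measures ANOVA on reaction time,
# using fabricated data (one mean RT per subject per condition cell).
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subject in range(1, 21):                         # hypothetical 20 subjects
    for difficulty in ("easy", "hard"):
        for sound in ("silent", "grunt"):
            rt = 450.0                               # arbitrary baseline RT in ms
            rt += 40 if difficulty == "hard" else 0  # hard judgements are slower
            rt += 25 if sound == "grunt" else 0      # grunt adds roughly 20-30 ms
            rt += rng.normal(0, 15)                  # noise
            rows.append({"subject": subject, "difficulty": difficulty,
                         "sound": sound, "rt": rt})

df = pd.DataFrame(rows)
res = AnovaRM(df, depvar="rt", subject="subject",
              within=["difficulty", "sound"]).fit()
print(res.anova_table)  # main effects of difficulty and sound, plus their interaction
```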

This is a very pretty set of data – they've got a clean-cut effect that's significant both in error rate and in reaction time – I'm jealous. My data should be this pretty.

My questions for these guys are about the validity of their task. In particular, they're showing people monocular stimuli – a video on a computer screen isn't going to give you the three-dimensionality that you get when viewing stimuli with two eyes in the real world.

They also used a 500 ms burst of white noise for the simulated grunt – how does the length of that compare to the actual tennis grunts that are the source of such controversy?

Finally, how precise was the timing of that burst of noise? They only say "during the shot" – we know from work on the bouncing/streaming illusion that the timing of sounds played during visual perception can have huge effects on how we perceive objects to be moving.


Sinnett S., & Kingstone A. (2010). A preliminary investigation regarding the effect of tennis grunting: Does white noise during a tennis shot have a negative impact on shot perception? PLoS ONE 5(10): e13148. doi:10.1371/journal.pone.0013148

Using EEG to predict medication responsiveness

For a while now, one of the big ideas in clinical psych has been that we may be able to use functional neuroimaging (techniques like EEG, fMRI, PET, etc.) to improve our treatment strategies for various mental illnesses. Diseases like depression seem to be characterized by a wide range of related symptoms, and it's well-established that people respond to different medications in a pattern that's hard to characterize. The logic goes: if we can see what's happening in the brain, we may be able to build up a database that links patients' neural activation patterns to the treatments they respond to, and then use that information to help future patients.

A soon-to-be-published paper from a group at McMaster University does just this, with schizophrenic patients, EEG, and clozapine. Clozapine is an anti-psychotic, but like most psych meds, it doesn't work for everyone, and the stakes are higher here because the side effects are pretty nasty. Hence the motivation to assess in advance whether a patient is likely to respond to the medication.

The group took resting EEG from 23 schizophrenic patients whose clozapine responses were known, and extracted a bunch of potentially predictive features:

In our study, these features are statistical quantities including coherence between all electrode pairs at various frequencies, correlation and cross-correlation coefficients, mutual information between all sensor pairs, absolute and relative power levels at various frequencies, the left-to-right hemisphere power ratio, the anterior/posterior power gradient across many frequencies and between electrodes (calculated using logarithm difference of power spectral density values). These quantities can all be readily calculated from the measured EEG signal.
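
To give a flavor of what a couple of those quantities involve, here's a minimal sketch that computes coherence between one electrode pair and a relative band-power value with SciPy; the channel names, sampling rate, and frequency band are placeholders I picked, and this is not the authors' pipeline.

```python
# Illustrative computation of two of the feature types listed above:
# coherence between an electrode pair, and relative power in a frequency band.
# The signals here are random-noise stand-ins for real resting EEG channels.
import numpy as np
from scipy import signal

fs = 256                                   # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
eeg_f3 = rng.standard_normal(fs * 60)      # stand-in for 60 s of channel F3
eeg_f4 = rng.standard_normal(fs * 60)      # stand-in for 60 s of channel F4

# Coherence between the electrode pair, averaged over the alpha band (8-12 Hz).
f_coh, cxy = signal.coherence(eeg_f3, eeg_f4, fs=fs, nperseg=fs * 2)
alpha_coherence = cxy[(f_coh >= 8) & (f_coh <= 12)].mean()

# Relative alpha power for one channel: alpha-band power over total power.
f_psd, pxx = signal.welch(eeg_f3, fs=fs, nperseg=fs * 2)
alpha_band = (f_psd >= 8) & (f_psd <= 12)
relative_alpha_power = pxx[alpha_band].sum() / pxx.sum()

print(alpha_coherence, relative_alpha_power)
```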

They then used a machine learning algorithm (which I'm not going into – I know a little bit about this, enough to understand their explanation of their algorithm, but not enough to place it in a larger context) to identify 8 features that were significantly predictive of whether a patient would respond to clozapine. That is, they created an 8-dimensional space, with each patient's data represented as a point in that space defined by its score on each of these features/metrics, in which the clozapine responders and the clozapine non-responders formed two distinct clusters.

Clozapine_rnr
This is a 2-dimensional collapse of the 8-D space, and you can see the clustering of the responders (blue) and non-responders (white). Note that even on this training data set, there's some prediction error, which shows up as overlap between the clusters. The accuracy of the model at predicting response of patients in the training set was about 87%.

Which is nifty and all, but what's really nice is that they then used the model to predict clozapine responsiveness in a new group of 14 patients. (This is standard operating procedure in developing this sort of algorithm – find a set of parameters that works well on your training data, and then test it on unfamiliar data to see if it still performs well. The concern is that you might overfit the training data and develop an algorithm that sorts those 23 subjects perfectly but performs close to chance on unfamiliar data.) The algorithm's accuracy on this test group was about 86%, very similar to the accuracy on the training set, and definitely high enough to be useful as a clinical tool.
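
Here's a generic sketch of that train-then-test workflow using off-the-shelf scikit-learn pieces; the authors used their own feature-selection and classification algorithm, and the arrays below are random stand-ins for real EEG feature matrices, so the printed scores are meaningless – the point is the structure.

```python
# Generic sketch of the workflow described above: select a small number of
# predictive features on the training patients, fit a classifier, then score
# it on a separate, previously unseen test group. Data are random placeholders.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X_train = rng.standard_normal((23, 200))   # 23 training patients x many candidate EEG features
y_train = rng.integers(0, 2, 23)           # 1 = clozapine responder, 0 = non-responder
X_test = rng.standard_normal((14, 200))    # 14 new patients held out for testing
y_test = rng.integers(0, 2, 14)

model = make_pipeline(
    SelectKBest(f_classif, k=8),           # keep the 8 most predictive features
    LogisticRegression(max_iter=1000),     # simple stand-in for the authors' classifier
)
model.fit(X_train, y_train)

print("training accuracy:", model.score(X_train, y_train))
print("held-out accuracy:", model.score(X_test, y_test))
```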

Of course this is clearly still pilot work – 37 participants is not enough to change the standard of treatment – but it's a very cool study, both for the EEG analysis/feature selection process, and the accuracy of response prediction.


Khodayari-Rostamabad, A., Hasey, G. M., MacCrimmon, D. J., Reilly, J. P., and de Bruin, H. (in press). A pilot study to determine whether machine learning methodologies using pre-treatment electroencephalography can predict the symptomatic response to clozapine therapy. Clinical Neurophysiology. doi:10.1016/j.clinph.2010.05.009 

(via Science Daily via mocost)

Yet another smackdown of fMRI lie-detection

I haven't been following this all that closely, but Mind Hacks today linked to this excellent Wired Science article summarizing the reasons that a judge in Tennessee decided to disallow "brain scan lie detection" evidence. The 1993 case Daubert v. Merrell Dow Pharmaceuticals set the current standard for scientific testimony in courtrooms:

Judge Pham, who presided over this evidentiary hearing, summarized his reading of Daubert: Reasonable tests to apply and ideas to consider include “(1) whether the theory or technique can be tested and has been tested; (2) whether the theory or technique has been subjected to peer review and publication; (3) the known or potential rate of error of the method used and the existence and maintenance of standards controlling the technique’s operation; and (4) whether the theory or method has been generally accepted by the scientific community.”

The verdict in the Tennessee hearing was that this use of fMRI does not meet the standard in multiple ways. Two of the problems in this particular implementation of imaging-as-evidence are particularly startling, in the sense of "What were you thinking?!"

First up is

. . . the scientific methodology employed by Cephos, the company who conducted the lie-detection test. After Semrau failed one of the two tests he’d agreed to take, Cephos CEO Steven Laken retested him a third time, claiming his client had been tired.

. . .

“It seems almost laughable that Cephos could parade this as a great method when, in this very case, they tried it three times and got one result twice and the other one once,” Greely wrote in an e-mail to Wired.com. “In the only ‘real world’ test we’ve got evidence about, their accuracy rate was either 66.7 percent or 33.3 percent.”

That's right, they ran two tests, and when one of them looked bad for the client, they ran it again to get a "better" result. Gah! This is the sort of thing that every research methods and research ethics and stats class IN THE WORLD tells you not to do, because it COMPLETELY INVALIDATES your data and analysis.

Second,

Furthermore, and the judge quoted extensively from the prosecution’s cross-examination on this point, Cephos only claims to be able to offer a general impression of whether someone is being deceptive. While they ask dozens of individual questions, Laken admitted that his company’s method could not be used to tell whether someone was lying or telling the truth on any of the specific facts.

That is to say, Laken refused to say that Semrau was telling the truth to a question like, “Did you enter into a scheme to defraud the government by billing for AIMS tests conducted by psychiatrists under CPT Code 99301?” but was willing to say that Semrau was “more overall” telling the truth.

"More overall" telling the truth? So he could be a generally truthful guy.. and still lying through his teeth about the issue of interest to the court, and this test would support only the general truthfulness? Like I said, I hadn't realized just how bad the presentation of this was, and I'm kind of horrified. I hope it stays far far away from the judicial system unless/until we figure out a way to do it right.

Yet another round of fMRI lie detection attempts

An attorney in NY is pushing to use fMRI evidence to demonstrate that a witness is telling the truth. While there are some studies showing differences in activation between deception and truth-telling, there's broad doubt that these findings are replicable in real-life settings, where the stakes are higher and the task is less constrained. And then there's the problem of asking juries to assess this evidence.

Juries tend to be overly credulous about anything offered as forensic or scientific evidence. And other studies show that brain images add an extra layer of overcredulousness. (On those, see Dave Munger and Jonah Lehrer.) So when an 'expert' shows a jury a bunch of brain images and says he's certain the images show a person is lying (or not), the jury will give this evidence far more weight than it deserves.

Neuron Culture: Who you gonna believe, me — or my lyin' fMRI?

There exists a snail, living in hydrothermal vents in the Indian Ocean, that incorporates iron particles into its shell for extra protection.

From collision detection:

Scientists discovered Crysomallon squamiferum in 1999, but they didn’t know a whole lot about the properties of its shell until this month, when a team led by MIT scientists decided to study it carefully. The team did a pile of spectroscopic and microscopic measurements of the shell, poked at it with a nanoindentor, and built a computer model of its properties to simulate how well it would hold up under various predator attacks.

The upshot, as they write in their paper (PDF here), is that the shell is “unlike any other known natural or synthetic engineered armor.” Part of its ability to resist damage seems to be the way the shell deforms when it’s struck: It produces cracks that dissipate the force of the blow, and nanoparticles that injure whatever is attacking.

Also, the post uses the phrase “Darwinian evolution crossed with Burning Man”. Win.

via kottke.org

Scholars Turn Their Attention to Attention

In the Chronicle:

Foerde and her colleagues argue that when the subjects were distracted, they learned the weather rules through a half-conscious system of "habit memory," and that when they were undistracted, they encoded the weather rules through what is known as the declarative-memory system. (Indeed, brain imaging suggested that different areas of the subjects' brains were activated during the two conditions.)

That distinction is an important one for educators, Foerde says, because information that is encoded in declarative memory is more flexible—that is, people are more likely to be able to draw analogies and extrapolate from it.

"If you just look at performance on the main task, you might not see these differences," Foerde says. "But when you're teaching, you would like to see more than simple retention of the information that you're providing people. You'd like to see some evidence that they can use their information in new ways."

via marbury

This is one of the sanest reviews I've seen of what we know about multitasking, attention, and learning. No flailing about how the Internet is destroying civilization, but some real concerns about the compatibility of multitasking and specific tasks.