Sanity-checking

Last week, I ran some EEG data through an analysis that should find clusters of electrodes that recorded high activity at the same time. In other words, it looks for spatial and temporal adjacency. The cluster that it found looks like this:

[Image: the wonky cluster]

That’s a map of all of the electrodes on our EEG nets, and the big blue circles overlay the electrodes that were part of my cluster. Hopefully it is apparent to you that this cluster has a topographically odd feature – the hole in the middle! Clusters don’t (generally) have holes like that – the correlation between adjacent electrodes is pretty high – and I knew from looking at the maps of overall activity that there wasn’t an activity dip in the center of that cluster. The particular electrode making up my hole was even more disconcerting because it is electrode Cz, the one that sits at the center of the top of the head, and when we record our EEG data that electrode is the reference. As data comes off the system, the trace at Cz is always zero, because measuring voltage requires a reference point. The data should have been re-referenced before it ever reached the clustering algorithm, so the worst-case “what went wrong here” answers involved the re-referencing having been lost somewhere along the way – and that would be a major bug in an analysis that has already been used in several studies in the lab. Scary.

It turned out not to have been that bug, but another issue entirely. The first step in the clustering analysis is to take the spatial layout of electrodes, and define each electrode’s “neighborhood” – which other electrodes should be counted as “adjacent” to the electrode in question. The piece of code that does this takes in a list of names of electrodes you want to define neighborhoods for (in my case, all of them except the two eye-movement channels) and matches that list to a master list of electrode names and their positions. And in that master list, the electrode Cz was listed with its name as Cz’. Turns out, Cz and Cz’ don’t match, if you’re a computer, so Cz is never included in any neighborhoods, and so doesn’t turn up in the final cluster. Fixing the master layout list got me clusters including Cz, and all is well.
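
For the curious, here's a toy sketch of the failure mode – in Python, with made-up electrode names and positions, not the lab's actual analysis code. The point is simply that "Cz" and "Cz'" are different strings, so an exact-match lookup quietly drops Cz from every neighborhood:

    # Toy illustration only: electrode positions are invented, and the real
    # pipeline's matching code is assumed, not reproduced.
    master_layout = {
        "Fz":  (0.0,  0.5),
        "Cz'": (0.0,  0.0),   # stray apostrophe in the master list
        "Pz":  (0.0, -0.5),
    }

    requested = ["Fz", "Cz", "Pz"]   # channels we want neighborhoods for

    neighborhoods = {}
    for name in requested:
        if name not in master_layout:
            # If the matching step skips mismatches silently (as seems to have
            # happened here), even a simple warning like this would have
            # flagged the typo immediately.
            print(f"warning: {name} not in layout, skipping")
            continue
        neighborhoods[name] = []     # distances to other electrodes would go here

    print(sorted(neighborhoods))     # ['Fz', 'Pz'] -- Cz never gets a neighborhood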

This kind of thing really illustrates the importance of having a human being look at your output, and think about whether the answer the computer is giving is plausible. I’m still working on how to best teach students to bring their critical eyes to this kind of output, and not just assume that the computer is right. Part of it is just developing enough experience to know what the results “should” look like, but in a lot of cases students already have a great deal of knowledge to bring to bear. It’s the same reason that I plot really raw data as often as I can – I’m not necessarily going to publish these graphs, but understanding what’s happening in my data set is essential, both for sanity-checking and to develop a theoretical understanding of what participants, or their brains, are doing.

A (bit of a) mirror neuron smackdown

I’ve been thinking about mirror neurons, lately, since a talk by a woman whose work presumes their existence and importance. Mirror neurons are the Higgs Boson of neuroscience, a phenomenon that is given greater significance by people outside the field than by people inside.

Via @scicurious, this article at Psych Today reminds us to rein in some of the hype:

A non-player tennis fan who’s never held a racket doesn’t sit baffled as Roger Federer swings his way to another victory. They understand fully what his aims are, even though they can’t simulate his actions with their own racket-swinging motor cells. Similarly, we understand flying, slithering, coiling and any number of other creaturely movements, even if we don’t have the necessary motor cells to simulate them.

Article at Psychology Today

Neurosciencyness is the new truthyness

Got off the train this morning and noticed a new (ish? I’m not super attentive to these) ad:

[Image: the ad]

L-theanine is an amino acid that crosses the blood-brain barrier and has been shown to increase alpha-band EEG activity. The logic here seems to be the same as that in things like the games that come with the Mindwave biofeedback system: since alpha-band EEG oscillations occur when people are relaxed, manipulations that increase alpha-band activity must be increasing relaxation. (Nobody, to my knowledge, uses this logic while also accounting for the many other cognitive activities that increase alpha-band activity – actively remembering things, say, or attending to one object in a crowded field – which do not seem to have much at all to do with relaxation. Brains are, unfortunately, complex, interconnected, emergent systems, and cause and effect are rarely straightforward.)
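
For concreteness, "alpha-band activity" here just means power in roughly the 8–12 Hz range of the EEG spectrum. A minimal sketch of how you'd quantify it, using synthetic data and standard numpy/scipy calls (the sampling rate and band edges are just common defaults, nothing specific to the L-theanine studies):

    # Quantifying alpha-band (8-12 Hz) power for one EEG channel.
    # Synthetic data; parameters are reasonable defaults, not anyone's protocol.
    import numpy as np
    from scipy.signal import welch

    fs = 250.0                                   # sampling rate in Hz (assumed)
    t = np.arange(0, 10, 1 / fs)
    eeg = np.random.randn(t.size) + 2 * np.sin(2 * np.pi * 10 * t)  # noise plus a 10 Hz rhythm

    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    alpha = (freqs >= 8) & (freqs <= 12)
    alpha_power = np.trapz(psd[alpha], freqs[alpha])        # absolute alpha power
    relative_alpha = alpha_power / np.trapz(psd, freqs)     # fraction of total power
    print(f"relative alpha power: {relative_alpha:.2f}")

The catch, as above, is that a bump in this number tells you alpha went up, not why.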

Neuro seem to have a whole range of these drinks, each with a different additive, a neurosciency gibberish explanation of why that additive will change your brain function, and a different targeted effect. A few years ago, a paper demonstrated that people are much more willing to accept explanations (even bad explanations) if a neurosciency clause is included. Shortly afterwards, another group showed that people are more likely to accept neurosciency explanations if the brain activity being discussed was shown on realistic brain pictures, as opposed to abstract images or bar graphs.

[Image]

Besides calling their brand “neuro”, these guys have a cute little “EEG activity in the brain” logo, and their website has the slogan “light it up”. Scarily impressive use of the metaphors and descriptors that are used to describe localized neural activity, even by those of us who should know better.

When I posted these to Facebook this morning, one friend said, “I’m not sure whether to be skeptical about their ability to actually modify neurotransmitter availability, or seriously concerned about people actually changing their brain chemistry in untested and unregulated ways.” I think this pretty well sums up my feelings too.

Using EEG to predict medication responsiveness

For a while now, one of the big ideas in clinical psych has been that we may be able to use functional neuroimaging (techniques like EEG, fMRI, PET, etc.) to improve our treatment strategies for various mental illnesses. Diseases like depression seem to be characterized by a wide range of related symptoms, and it's well-established that people respond to different medications in a pattern that's hard to characterize. The logic goes: if we can see what's happening in the brain, we may be able to build up a database that links patients' neural activation patterns to the treatments they respond to, and then use that information to help future patients.

A soon-to-be-published paper from a group at McMaster University does just this, with schizophrenic patients, EEG, and clozapine. Clozapine is an anti-psychotic, but like most psych meds, it doesn't work for everyone, and the stakes are higher than usual because the side effects are pretty nasty. Hence the motivation to assess in advance whether a patient is likely to respond to the medication.

The group took resting EEG from 23 schizophrenic patients whose clozapine responses were known, and extracted a bunch of potentially predictive features:

In our study, these features are statistical quantities including coherence between all electrode pairs at various frequencies, correlation and cross-correlation coefficients, mutual information between all sensor pairs, absolute and relative power levels at various frequencies, the left-to-right hemisphere power ratio, the anterior/posterior power gradient across many frequencies and between electrodes (calculated using logarithm difference of power spectral density values). These quantities can all be readily calculated from the measured EEG signal.
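
To give a flavor of what a couple of those features look like in practice, here's a rough sketch using synthetic two-channel data and scipy. The channel roles, frequency band, and parameters are placeholders of my own choosing, not the authors' actual settings:

    # Two of the feature types named above: coherence between an electrode
    # pair, and a left-to-right power ratio (computed here as a log difference,
    # in the spirit of the gradient features described in the quote).
    import numpy as np
    from scipy.signal import welch, coherence

    fs = 250.0
    t = np.arange(0, 60, 1 / fs)
    shared = np.sin(2 * np.pi * 10 * t)                  # rhythm common to both channels
    left  = shared + 0.5 * np.random.randn(t.size)       # stand-in for a left-hemisphere electrode
    right = shared + 0.5 * np.random.randn(t.size)       # stand-in for its right-hemisphere partner

    # Coherence between the pair, averaged over the alpha band (8-12 Hz)
    f, coh = coherence(left, right, fs=fs, nperseg=int(4 * fs))
    band = (f >= 8) & (f <= 12)
    alpha_coherence = coh[band].mean()

    # Left-to-right power ratio in the same band, as a difference of log PSDs
    # (welch uses the same fs and nperseg, so the frequency grid matches)
    _, psd_l = welch(left, fs=fs, nperseg=int(4 * fs))
    _, psd_r = welch(right, fs=fs, nperseg=int(4 * fs))
    lr_log_ratio = np.log(psd_l[band].mean()) - np.log(psd_r[band].mean())

    print(alpha_coherence, lr_log_ratio)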

They then used a machine learning algorithm (which I'm not going into – I know a little bit about this, enough to understand their explanation of their algorithm, but not enough to place it in a larger context) to identify 8 features that were significantly predictive of whether a patient would respond to clozapine. That is, they created an 8-dimensional space, with each patient's data represented as a point in that space defined by its score on each of these features/metrics, in which the clozapine responders and the clozapine non-responders formed two distinct clusters.

[Figure: clozapine responders vs. non-responders]
This is a 2-dimensional collapse of the 8-D space, and you can see the clustering of the responders (blue) and non-responders (white). Note that even on this training data set, there's some prediction error, which shows up as overlap between the clusters. The accuracy of the model at predicting response of patients in the training set was about 87%.

Which is nifty and all, but what's really nice is that they then used the model to predict clozapine responsiveness in a new group of 14 patients. (This is standard operating procedure in developing this sort of algorithm – find a set of parameters that works well on your training data, then test it on unfamiliar data and see if it still performs well. The concern is that you might overfit the training data and develop an algorithm that sorts those 23 subjects perfectly but performs close to chance on data it hasn't seen.) The algorithm's accuracy on this test group was about 86%, very similar to the accuracy on the training set, and definitely high enough to be useful as a clinical tool.
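
If you want the general shape of that workflow in code, here's a toy version: pick a small number of discriminative features, fit a classifier on the training patients, then score it on patients it never saw. This uses scikit-learn and random stand-in data; it is emphatically not the authors' algorithm, just the train-then-test logic described above:

    # Toy train/test workflow: feature selection + classifier, fit on 23
    # "training" patients, then evaluated on 14 held-out patients.
    # Features and labels are random stand-ins, not the paper's data or method.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(23, 200))     # 23 patients x many candidate EEG features
    y_train = rng.integers(0, 2, size=23)    # 1 = responder, 0 = non-responder (fake labels)
    X_test  = rng.normal(size=(14, 200))     # 14 new patients, same feature set
    y_test  = rng.integers(0, 2, size=14)

    model = make_pipeline(
        SelectKBest(f_classif, k=8),         # keep the 8 most discriminative features
        SVC(kernel="linear"),                # any reasonable classifier stands in here
    )
    model.fit(X_train, y_train)

    print("training accuracy:", model.score(X_train, y_train))
    print("held-out accuracy:", model.score(X_test, y_test))   # the number that actually matters

With random stand-in data the held-out score will hover around chance, which is exactly the failure mode the held-out test is there to catch.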

Of course, this is still pilot work – 37 participants is not enough to change the standard of treatment – but it's a very cool study, both for the EEG analysis/feature-selection process and for the accuracy of response prediction.


Khodayari-Rostamabad, A., Hasey, G. M., MacCrimmon, D. J., Reilly, J. P., and de Bruin, H. (in press). A pilot study to determine whether machine learning methodologies using pre-treatment electroencephalography can predict the symptomatic response to clozapine therapy. Clinical Neurophysiology. doi:10.1016/j.clinph.2010.05.009 

(via Science Daily via mocost)

Yet another smackdown of fMRI lie-detection

I haven't been following this all that closely, but Mind Hacks today linked to this excellent Wired Science article summarizing the reasons that a judge in Tennessee decided to disallow "brain scan lie detection" evidence. The 1993 case Daubert v. Merrell Dow Pharmaceuticals set the current standard for scientific testimony in courtrooms:

Judge Pham, who presided over this evidentiary hearing, summarized his reading of Daubert: Reasonable tests to apply and ideas to consider include “(1) whether the theory or technique can be tested and has been tested; (2) whether the theory or technique has been subjected to peer review and publication; (3) the known or potential rate of error of the method used and the existence and maintenance of standards controlling the technique’s operation; and (4) whether the theory or method has been generally accepted by the scientific community.”

The verdict in the Tennessee hearing was that this use of fMRI does not meet the standard in multiple ways. Two of the problems in this particular implementation of imaging-as-evidence are particularly startling, in the sense of "What were you thinking?!"

First up is

. . . the scientific methodology employed by Cephos, the company who conducted the lie-detection test. After Semrau failed one of the two tests he’d agreed to take, Cephos CEO Steven Laken retested him a third time, claiming his client had been tired.

. . .

“It seems almost laughable that Cephos could parade this as a great method when, in this very case, they tried it three times and got one result twice and the other one once,” Greely wrote in an e-mail to Wired.com. “In the only ‘real world’ test we’ve got evidence about, their accuracy rate was either 66.7 percent or 33.3 percent.”

That's right, they ran two tests, and when one of them looked bad for the client, they ran it again to get a "better" result. Gah! This is the sort of thing that every research methods and research ethics and stats class IN THE WORLD tells you not to do, because it COMPLETELY INVALIDATES your data and analysis.
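
The arithmetic of why this matters fits on a napkin. Suppose, purely hypothetically, a test flags a deceptive person 80% of the time on any single administration. If you're allowed to throw out a bad result and run the test again, the chance a liar eventually walks away with a "truthful" reading goes from 20 percent to 36 percent:

    # Hypothetical numbers, just to show how "retest until it looks better"
    # inflates the miss rate.
    p_flag = 0.80                                # assumed chance one test catches a liar

    p_pass_single = 1 - p_flag                   # liar passes the one-and-only test
    p_pass_with_retest = 1 - p_flag ** 2         # liar passes if either of two tries clears them

    print(f"liar passes a single test:        {p_pass_single:.0%}")      # 20%
    print(f"liar passes when retests allowed: {p_pass_with_retest:.0%}")  # 36%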

Second,

Furthermore, and the judge quoted extensively from the prosecution’s cross-examination on this point, Cephos only claims to be able to offer a general impression of whether someone is being deceptive. While they ask dozens of individual questions, Laken admitted that his company’s method could not be used to tell whether someone was lying or telling the truth on any of the specific facts.

That is to say, Laken refused to say that Semrau was telling the truth to a question like, “Did you enter into a scheme to defraud the government by billing for AIMS tests conducted by psychiatrists under CPT Code 99301?” but was willing to say that Semrau was “more overall” telling the truth.

"More overall" telling the truth? So he could be a generally truthful guy.. and still lying through his teeth about the issue of interest to the court, and this test would support only the general truthfulness? Like I said, I hadn't realized just how bad the presentation of this was, and I'm kind of horrified. I hope it stays far far away from the judicial system unless/until we figure out a way to do it right.

Yet another round of fMRI lie detection attempts

An attorney in NY is pushing to use fMRI evidence to demonstrate that a witness is telling the truth. While there are some studies showing differences in activation between deception and truth-telling, there's broad doubt that these findings are replicable in real-life settings, where the stakes are higher and the task is less constrained. And then there's the problem of asking juries to assess this evidence.

Juries tend to be overly credulous about any evidence offered as forensic or scientific evidence. And other studies show that imaging studies generate an extra layer of overcredulousness. (On those, see Dave Munger and Jonah Lehrer.) So when an 'expert' shows a jury a bunch of brain images and says he's certain the images say a person is lying (or not), the jury will lend this evidence far more weight than it deserves.

Neuron Culture: Who you gonna believe, me — or my lyin' fMRI?