Your Brain, by the Numbers

Jorge Cham, of the wonderful PhD Comics, has made a beautiful and fascinating chart of brain-related numbers. Click through for the full-res version.

via MindHacks

 


Neurosciencyness is the new truthiness

Got off the train this morning and noticed a new (ish? I’m not super attentive to these) ad:

[Image: the Neuro drink ad]

L-theanine is an amino acid that crosses the blood-brain barrier and has been shown to increase alpha-band EEG activity. The logic here seems to be the same as in things like the games that come with the Mindwave biofeedback system: since alpha-band EEG oscillations occur when people are relaxed, manipulations that increase alpha-band activity must be increasing relaxation. (Nobody, to my knowledge, uses this logic while also accounting for the many other cognitive activities that increase alpha-band activity, such as actively remembering things or attending to one object in a crowded field, which do not seem to have much at all to do with relaxation. Brains are, unfortunately, complex, interconnected, emergent systems, and cause and effect are rarely straightforward.)
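(For the curious: "alpha-band activity" just means power in roughly the 8–12 Hz range of the EEG. Here's a minimal sketch of how you might quantify it, using Welch's method from SciPy; the sampling rate and the simulated signal are made up purely for illustration.)

```python
import numpy as np
from scipy.signal import welch

# Hypothetical single-channel EEG: 60 s at 250 Hz. The "data" here are
# just simulated noise plus a 10 Hz oscillation, purely for illustration.
fs = 250
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
eeg = rng.normal(size=t.size) + 0.5 * np.sin(2 * np.pi * 10 * t)

# Power spectral density via Welch's method.
freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)

# "Alpha-band activity" is just the power between roughly 8 and 12 Hz.
alpha = (freqs >= 8) & (freqs <= 12)
print(f"relative alpha power: {psd[alpha].sum() / psd.sum():.1%}")
```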

Neuro seem to have a whole range of these drinks, each with a different additive, a neurosciency gibberish explanation of why that additive will change your brain function, and a different targeted effect. A few years ago, a paper demonstrated that people are much more willing to accept explanations (even bad ones) if a neurosciency clause is included. Shortly afterwards, another group showed that people are more likely to accept neurosciency explanations if the brain activity being discussed is shown on realistic brain pictures, as opposed to abstract images or bar graphs.

[Image: Neuro product branding]

Besides calling their brand "neuro", these guys have a cute little "EEG activity in the brain" logo, and their website has the slogan "light it up". It's a scarily impressive use of the metaphors and descriptors that get used to describe localized neural activity, even by those of us who should know better.

When I posted these to Facebook this morning, one friend said, “I’m not sure whether to be skeptical about their ability to actually modify neurotransmitter availability, or seriously concerned about people actually changing their brain chemistry in untested and unregulated ways.” I think this pretty well sums up my feelings too.

Understanding (and quantifying) uncertainty

Dave Kleinschmidt has some commentary on the Nate Silver fangirl/boy-ing that many of us quantitative types have been engaging in for the last week. 

My tribe—the data nerds—is feeling pretty smug right now, after Nate Silver’s smart poll aggregation totally nailed the election results. But we’re also a little puzzled by the cavalier way in which what Nate Silver does is described as just “math”, or “simple statistics”. There is a huge amount of judgement, and hence subjectivity, required in designing the kind of statistical models that 538 uses. I hesitate to bring this up because it’s one of the clubs idiots use to beat up on Nate Silver, but 538 does not weight all polls equally, and (correct me if I’m wrong) the weights are actually set by hand using a complex series of formulae.

The point is that the kind of model-building that Nate Silver et al. do is not just "math", but science. This is why I don't really like that XKCD comic that everyone has seen by now. Well, I like the smug tone, because that is how I, a data scientist, feel about 538's success. That is right on. But we've known that numbers work for a long time. Nate Silver and 538 is not just about numbers, about quantifying things. Pollsters have been doing that for a long time. It is about understanding the structured uncertainty in those numbers, the underlying statistical structure, the interesting relationships between the obvious data (polling numbers) and the less obvious data (economic activity, barometric pressure, etc.) and using that understanding to combine lots of little pieces of data into one, honkin', solid piece of data.
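To make that concrete (and emphatically not as a description of 538's actual model), here is a toy sketch of the basic move: weight polls unequally, here by their sampling variance and their age, and combine them into a single estimate. Every number and every weighting choice below is invented for illustration.

```python
import numpy as np

# Toy polls: (candidate's share, sample size, days before the election).
# All numbers are invented; this is not 538's model, just the basic idea
# of combining noisy estimates with unequal weights.
polls = [
    (0.51, 800, 12),
    (0.49, 1200, 5),
    (0.52, 600, 20),
    (0.50, 1500, 2),
]

share = np.array([p[0] for p in polls])
n = np.array([p[1] for p in polls])
age = np.array([p[2] for p in polls])

# Each poll's sampling variance (binomial approximation), then an
# arbitrary extra down-weighting of older polls.
var = share * (1 - share) / n
weight = (1 / var) * np.exp(-age / 14)

estimate = np.sum(weight * share) / np.sum(weight)
print(f"aggregate estimate: {estimate:.3f}")
```

The judgement calls are all in the weights: how fast old polls decay, whether a house effect gets subtracted, which covariates get folded in. None of that is "just math".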

When I teach stats, or talk about stats in my other classes, I try to hammer on this point about uncertainty. As scientists, we're dealing with noise in our data from all kinds of places. Is the sample under study "weird" in some way? Is our measure noisy? How noisy? How variable are people? Why? Does the time of day/day of week/week of year when people are tested matter? We can estimate how much uncertainty (what statisticians call "error") comes from each of these sources, and try to figure out if there's a structure/pattern underneath the noise, but in order to do that successfully you have to really think about the sources of the error. I think every time I've been really screwed over by an experiment, it's been because there was a source of variability or a kind of variability that I just didn't expect.
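Here's a toy simulation of what I mean, with completely made-up noise sources and magnitudes: unless you have thought about where the variability comes from, all of these sources get lumped into one undifferentiated "error" term.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up experiment: 30 people, 20 trials each. Noise enters at
# several levels, and all of the magnitudes below are invented.
n_subj, n_trials = 30, 20
true_effect = 0.3

person_noise = rng.normal(0, 0.5, n_subj)                # how variable are people?
session_noise = rng.normal(0, 0.2, n_subj)               # time of day, day of week, ...
trial_noise = rng.normal(0, 1.0, (n_subj, n_trials))     # plain old measurement noise

data = true_effect + person_noise[:, None] + session_noise[:, None] + trial_noise

# The naive view treats everything as one error term; separating the
# between-person and within-person spread starts to pull it apart.
subj_means = data.mean(axis=1)
print("between-person SD:", round(subj_means.std(ddof=1), 3))
print("within-person SD: ", round(data.std(axis=1, ddof=1).mean(), 3))
```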

Then I looked a little closer and realized it was Tim Minchin. That changed things a lot. It's nine minutes of sheer, unadulterated brilliance. Watch it. If you only watch one nine-minute animated beat poem in your life (and one may be all that I have in me), this is the one to watch. It tells the story of a dinner party where he confronts a credulous hippie. Good stuff.

via allbleedingstops.blogspot.com

Effect Sizes in Psychology

My Research Methods class has been wrestling lately with the relationships between statistical significance, power, and effect sizes, and with the balance between what's best, or most true, and what's actually practiced by researchers.
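One demonstration I like for making that relationship concrete (a quick simulation of my own, not anything from the post linked below): with a big enough sample, even a tiny effect comes out "statistically significant", while the effect size itself stays tiny.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# A tiny true difference between two groups: Cohen's d = 0.05.
d_true = 0.05

for n in (50, 500, 50_000):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(d_true, 1.0, n)
    t, p = stats.ttest_ind(a, b)
    # Cohen's d estimated from the samples (pooled SD).
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    d_hat = (b.mean() - a.mean()) / pooled_sd
    print(f"n = {n:>6}   p = {p:.3g}   d = {d_hat:+.3f}")
```

The p-value shrinks as n grows, but the estimated d hovers around 0.05 the whole way: significance tells you the nudge is detectable, not that it matters.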

This post from a few months ago has some nice discussion of the difference between small-but-significant effects, and big whopping useful/meaningful effects: 

The problem with small effect sizes is that they mean all you've done is nudge the system. The embodied nervous system is exquisitely sensitive to variations in the flow of information it is interacting with, and it's not clear to me that merely nudging such a system is all that great an achievement. What's really impressive is when you properly break it – If you can alter the information in a task and simply make it so that the task becomes impossible for an organism, then you have found something that the system considers really important. The reverse is also true, of course – if you find the right way to present the information the system needs, then performance should become trivially easy. 

Their example of breaking the right thing is a bit hard to understand without reading the linked materials, but their example of fixing the right thing is beautiful:

A real problem in visually guided action is the accurate, metric perception of size (to pick an object up, you need to scale your hand to the right size ahead of time). Study after study after study has showed that vision simply can't provide this without haptic feedback from touching the object; but we do scale our hands correctly! The question is how do we do it? Geoff has been plugging away at this for years, trying to provide people with what he thought were sensible opportunities to explore objects visually, with no luck, until he rotated the objects through 45° (a huge amount in vision). BAM! Suddenly people could visually perceive metric shape, and this persisted over time without being constantly topped up (Lee & Bingham, 2010). Suddenly we knew how we did this task; metric visual perception of shape is enabled by all the large scale locomotion we get up to – moving into a room, for example. Without this calibration, the task was impossible, but as soon as the right manipulation was made, the impossible became straight-forward, and the effect size is huge.

If you're still trying to make sense of why psychologists care about effect sizes, take a look.

The Small Effect Size Effect at PsychScienceNotes (not the journal), via @scicurious.