Science in Action: Omega-3 (conference submission)

A few days ago I submitted a title and abstract for a talk to be given at the November 2007 meeting of the Psychonomic Society, a group of experimental psychologists:

Rapid Effects of Omega-3 Fats on Brain Function

I measured the effect of omega-3 fats on my brain by comparing flaxseed oil (high in omega-3) with other plant fats (low in omega-3) and with nothing. Flaxseed oil improved my balance, increased my speed in a memory-scanning task and in simple arithmetic problems, and increased my digit span. The first three effects were very clear, t > 6. The effects of flaxseed oil wore off in a few days and appeared at full strength within a day of resumption. The best dose was at least 3 tablespoons/day, much more than most flaxseed-oil recommendations. Supporting results come from three other subjects. Because the brain is more than half fat, it is plausible that type of dietary fat affects how well it works. The most interesting feature of these results is the speed and clarity of the improvement. The tools of experimental psychology may be used to determine the optimal mix of fats for the brain with unusual clarity.
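For readers unfamiliar with the statistic, the "t > 6" claim compares scores on flaxseed-oil days with scores on control days. Here is a minimal sketch of such a comparison using Welch's two-sample t statistic; the reaction-time numbers below are invented for illustration, not data from the experiment:

```python
import math
from statistics import mean, variance

# Hypothetical daily mean reaction times (ms) on a memory-scanning task.
# These numbers are made up to illustrate the computation.
flaxseed = [512, 498, 505, 490, 500, 495, 503]  # flaxseed-oil days
control  = [548, 560, 542, 555, 538, 551, 547]  # control days

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    va, vb = variance(a), variance(b)          # sample variances
    se = math.sqrt(va / len(a) + vb / len(b))  # standard error of the difference
    return (mean(b) - mean(a)) / se

t = welch_t(flaxseed, control)
print(f"t = {t:.1f}")  # well above 6 for these invented numbers
```

With real data one would also report degrees of freedom and a p-value, but a t this large already signals a very clear separation between conditions.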

If I ever made a time line for my life, this submission would be one of the events.

Directory of my omega-3 posts.

19 Replies to “Science in Action: Omega-3 (conference submission)”

  1. Congratulations! It is an amazing discovery, and will have far-reaching effects as it becomes better known.

    It is also interesting to look at this result and try to see why it wasn’t found earlier. We’ve known for some time that omega-3 and omega-6 are out of balance in the modern diet, and that people are healthier when they reduce omega-6 and increase omega-3. We’ve also seen some possible longer-term brain-health benefits from higher omega-3 intake, in terms of the effect on depression and affect. But no one found the right threshold for seeing the overnight effect on brain functioning; with the right intake levels, omega-3s (presumably DHA) increase overall brain performance (speed, balance, and it seems likely that overall IQ would also increase).

    So we finally have a relatively low-cost diet intervention that can make a significant, measurable difference to how well the world works. Not too shabby 🙂

  2. Why wasn’t it found earlier? Reminds me of something Don Hewitt, creator of 60 Minutes, once said when he was asked why television was so awful. He said, “No one ever asked me a question about television to which the answer was not ‘money.'”

  3. I think the finding is potentially interesting. However, without blinding your experiments, you open yourself up to really easy criticism: namely, that the *expectation* of better performance (from the consumption of omega-3 fatty acids) leads to significantly better performance. It's pretty well documented that expectations can significantly influence performance. It would be a shame if your experiments were dismissed out of hand because you didn't blind your tests.

    I don’t buy your argument that you didn’t expect any effect when you first tried things, as there’s plenty of suggestive material out there that omega 3 fatty acids influence brain function, e.g. that they may help relieve depression, help ADD sufferers focus, etc. And even if there weren’t, your measurements are noisy, so an increase in performance that’s purely random on day 1 could lead to an expectation of further performance gains on days 2 and forward – all of a sudden, a random effect ends up persisting because of a problematic experimental design.

  4. Tom, you may have something there. TV is bad because money plays too big a role. And scientists underperform because status-seeking plays too large a role. Self-experimentation made it easy to discover the effects of omega-3, but self-experimentation, because anyone can do it, is very low status.

    Geoff, I try to do the most informative experiments. There is already plenty of evidence against the expectations explanation: the dose effects that Tim Lundeen and I have found. The surprising effect I discovered was on my balance, not on depression — they are very different. (No one has ever speculated that Prozac improves balance, for example.) I don’t know of the results you mention about “helping ADD sufferers focus”, but I will look for them.

  5. I’ve been doing 2-4 tablespoons a day of flax seed oil for a couple of weeks; I think it improves my sleep and seems to be kind of soothing. But I’m not sure exactly. I’m also engaged in other experiments that are having effects… One question for Seth: if exhaustive controlled double-blind studies were performed and determined that flax seed oil did not have the effects you were experiencing in balance, mental acuity, etc., would you stop taking it? Would you conclude that other factors like expectation were causing the effects?

    I ask because I take Juvenon, which may promote brain cell and metabolic efficiency — or may not, the science has not been done. But I respect Bruce Ames enough to give it a shot; if the science eventually says it does not work, I’ll stop spending my money… I can’t subjectively judge, though I am generally doing well intellectually…

  6. I’ve measured the effects — they don’t disappear just because their explanation might change. I drive a scooter. It works. I won’t stop driving just because someone else concludes either (a) scooters don’t work or (b) they don’t work the way I thought they did.

  7. Do a google search for “omega 3 cognition” and you’ll see tons of material – over a million pages in my search – the meme is quite pervasive. Even if you somehow missed all of that, your “surprise” argument has a simple alternative explanation in the persistent random effect thing I described. I can imagine that expectation levels might have a dose-response effect, too: if thinking that you’ll do well improves your performance, why shouldn’t thinking that you’ll do *really* well improve your performance even more?

    Maybe I’m misinterpreting you, but your suggestion that a blinded experiment would not be “informative” comes across to me as sounding like you have accepted your hypothesis as fact and are not interested in potential falsification. Given that you are already doing non-standard things with self-experimentation, why hand skeptics further reasons to doubt you?

    I think there’s good reason to believe that some portion of your observed effect may be due to expectations. It might be 100%, it might be 0%, or it might be somewhere in between. A measure of the size of the expectation effect sounds like information to me, and interesting and valuable information to boot.

  8. “there’s good reason to believe that some portion of your observed effect may be due to expectations.” What is that good reason? Most studies that look for an effect of expectations come up empty.

    A blinded experiment would provide less information than other experiments (because the idea it would test is unlikely). I’m not saying it would provide zero information. My idea that varying the dose is a good way to control for all sorts of things (including expectations) is well accepted within experimental psychology. I learned about it in my first year of graduate school.

  9. RE expectations and performance, check out the literature on stereotype threat. There has been some fascinating recent work on expectations and test performance by Robert Rosenthal @ UCLA and Shelley Correll at Cornell – both were referenced in the Slate article I pointed you to above. (It’s conceivable that the effect is a function of negative expectations)

  10. First, let me say that, given your track record, Seth, I would bet you are on to something very important. I think you have some rare capacity that I cannot describe to make objective judgments of subjective effects.

    On the other hand, wouldn’t virtually all scientists agree that a double blind study or series of studies with a thousand subjects, none of whom know what they are getting, or even what the study is about, being tested by experimenters who do not know who has gotten what, would tell us far more than one person engaged in self experimentation, no matter how skilled at objective self observation?

    Another point: the stereotype threat literature is interesting. Also the work of John Bargh and others, who show that just by activating a schema in the mind, consciously or unconsciously, and without generating “expectations”, behavior is altered. People exposed to words related to the elderly in word-unscrambling tests walk more slowly when leaving the experiment. People exposed to anger-associated images, flashed on a computer screen too quickly to consciously perceive, act angrier in a situation where anger might be appropriate. The general, strongly founded observation: when schemas are activated in our minds, consciously or unconsciously, we act in ways that are “consistent” with those schemas in situations where the schemas apply. The last 10 years of research are very strong on this…

    So could you have olive oil versus flax seed oil schemas in your mind influencing your behavior? It’s only a hypothesis. I know that it’s something some cognitive psych researchers would immediately ask upon encountering your findings…

  11. An additional reason for blinding you might want to consider: You are trying to establish self-experimentation as a useful process for coming up with good ideas. I think there are some special considerations for experiments on one’s self that are less of an issue when the experimenter and the subject are separate:

    Consider the rationale for double-blinding experiments: (1) if the subject knows the nature of the treatment and intended outcome, there may be placebo effects or the like, and (2) if the experimenter has a strong stake in a particular outcome, s/he may unconsciously (or consciously) bias measurements in a particular direction.

    With self-experimentation and no blinding, you get the worst of both worlds: Measurements are performed by and reports come from an interested party who knows the desired outcome.

    Now, you may be a reliable observer in spite of all these potentials for bias, or you may not be – I have no way of knowing. Conceivably, you may not even know if there is an unconscious bias. Without blinding, you’re asking people to take on faith your reliability in the face of an interest in a particular outcome.

    Any other special considerations in evaluating self-experiments?

  12. Geoff, no experiment is perfect. To point out imperfections — possible imperfections — is no help at all in most cases, because the alternatives are also imperfect, so long as one deals in hypotheticals. If there were evidence that effects such as the ones I describe (e.g., balance improvement) have been produced by expectations, I would be more interested in doing the experiments you describe. There isn’t any evidence that dose-size effects can be produced by expectations in any domain, as far as I know. The Slate article doesn’t reference any such evidence. It does mention work by Robert Rosenthal. A much better description of that work is here; it describes how that work has been over-used.

    Tim, if you look at the experimental psychology literature, you will find few if any placebo-controlled studies. I can’t think of a single one. Is this because experimental psychologists are naive? All of them? No, it’s because they understand there are better ways to control for expectations.

  13. No, I am not gaining weight. You will find many people who drink flavorless oil to lose weight. In my book The Shangri-La Diet you will find an explanation of why this works (and it certainly does, at least some of the time).

  14. Interesting finding, but what is the mechanism that produces this change? From my limited knowledge of biology, I am surprised that resumption would have an effect within one day. Sure, the brain is full of fats, but I imagine most of them didn’t migrate there overnight.

    I share some of Geoff’s concerns. Self-experimentation may be special with respect to placebo effects. I also share your infatuation with self-experimentation as a source of ideas and a way to convince yourself to pursue a finding.

    What is the major issue with recruiting even a small sample of naive participants and administering these supplements and tests to them? Is it IRB? In visual psychophysics we frequently have small sets of subjects, and usually a few of them are simply labmates who don’t know precisely what we’re up to, but are not entirely naive. Can you get a few of your friends/colleagues to do this sort of thing? I wouldn’t bother going to elaborate lengths to “blind” the participants, but confirmation from people who don’t share your motivations and expectations would go a long way towards convincing a lot of people like myself.

  15. There’s been confirmatory data — as my abstract says — from three other people. As you suggest, I will try to find others who will also try it.

    About mechanism: I can’t even guess, except that it involves replacement of omega-6 fats with omega-3 fats. Fats are loosely bound in cell membranes; the speed is plausible because of that fact.

  16. Wow, somehow I completely missed the line “Supporting results come from three other subjects.” even after reading through it twice. Sorry!

    For effects this dramatic (assuming it’s reasonably strong in the other subjects you tested), the only problem you’ll run into is people who don’t understand how hard you’d have to work to sort of accidentally fake this data in performance tasks (which is what “placebo effect” implies to me). Unfortunately that includes almost every social scientist on the planet. You’ve got a friend in the visual psychophysicist, believe me. You only have to check out some of the articles there to realize that. There was recently an article they published with N=1 for perhaps half of the reported experiments. The journal is peer-reviewed, respected, and has a higher impact factor than JEP: Human Perception and Performance, which is reasonably high-profile.

    Nice work! I look forward to your talk.
