Tim Lundeen, a Bay Area software developer, previously posted here about what happened when he increased his daily dose of DHA (an omega-3 fat in fish oil) from 400 mg/day to 800 mg/day: the next day, the speed with which he did simple arithmetic (e.g., 7 + 3) increased. At that point he had only four days of data from the high-DHA condition. Now he has two months. Here it is:

The y axis is the total time taken to do a set of 100 simple arithmetic problems.

Bottom line: The improvement continued, at roughly the same level. Very good evidence for an effect.

Tim had earlier found that doses of 200 and 400 mg/day of DHA had no apparent effect.

How is he differentiating the effect of omega-3 from the effect of practice?

The sudden improvement he saw when he started taking omega-3 couldn’t be due to practice. The data show that by then further practice was having little or no effect.

Sorry, didn’t look at the previous post. My bad.

He might also try lowering the omega-3 and seeing if the scores go back up.

Why not do a hypothesis test rather than just eyeballing the data in a graph?

Looks like you’d need to do a regression that included some time-based term to account for practice effects (surely there are some standard functional forms for this?) and one for dose.

Doug’s suggestion of modifying the dose over time is a good one – the current all-low-dose followed by all-high-dose structure would probably make any test fairly sensitive to the particular choice of practice term.

I didn’t do a hypothesis test because the effect strikes me as obvious. Very small p value. Are you saying the effect is not obvious to you?

Seth,

Like Geoff, I see a time-based trend in the graph. I’m curious: how would controlling for time influence the DHA effect?

Can you make the data set available? I’d like to drop it into R for a quick analysis.

Cheers,

Tom

Seth,

As a follow-up to my previous post, I was able to extract Tim’s timing data from the plot (using Engauge Digitizer). Using R, I fit the data to three models. I’ll summarize my results below.

Model 1: relative improvement is explained by taking more DHA. Result: taking more DHA corresponded to a 10 percent improvement in math-problem-set timings.

Model 2: relative improvement is explained by practice. Result: each day of practice corresponded to a 0.15 percent improvement in timings.

Model 3: relative improvement is explained by both practice and taking more DHA. Result: after controlling for practice, taking more DHA was no longer significant, and each day of practice corresponded to a 0.14 percent improvement in timings.

So, at least by my cursory analysis, the data do not argue for an improvement effect when going from 400 to 800 mg/day of DHA.
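Part of what is going on in Model 3 may be collinearity: with an all-low-dose block followed by an all-high-dose block, the dose indicator is strongly correlated with elapsed day, so the two predictors compete to explain the same downward trend. A quick sketch of that correlation (the 60/60 day split here is illustrative, not Tim's actual schedule):

```python
import numpy as np

# Illustrative schedule: 60 low-dose days followed by 60 high-dose days.
day = np.arange(120, dtype=float)
more_dha = (day >= 60).astype(float)  # 1 when on the higher dose

# Pearson correlation between elapsed day and the dose indicator.
r = np.corrcoef(day, more_dha)[0, 1]
print(r)  # high correlation: practice time and dose are confounded
```

With predictors this correlated, a regression can attribute most of the shared variance to whichever term fits the curvature of the data slightly better, which is why the choice of practice term matters so much.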

If you’re interested in my models, results, or R code, please let me know, and I’ll post them.

Cheers,

Tom

How did you control for practice?

Seth,

I used the day as a proxy for practice. That is, for day N, the model accounts for N days of practice, with the effect of each day’s practice being multiplicative on the output. (In the model, all effects are multiplicative to model diminishing returns. The results were similar, however, when I used a linear model.)

The model, in R syntax, is given as follows:

lm(log.timing ~ day + more.dha)

where log.timing is the natural logarithm of the timing data from Tim (to model multiplicative effects), day is the day upon which the timing was taken, and more.dha is a binary factor indicating whether 800 mg of DHA was taken on that day instead of 400 mg.

Fitting the data to the model, the coefficient of day was -0.0014 (95% CI = [-0.0019, -0.00084]; since the interval excludes zero, p < 0.05).
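Tim's raw timings aren't reproduced here, but the log-linear model Tom describes can be sketched in Python, with ordinary least squares standing in for R's lm. The data below are synthetic, generated with a practice trend and no dose effect, purely to show the fit mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for Tim's data: 120 days of timings with a
# multiplicative practice effect and no real DHA effect. The 300-second
# baseline and the day-60 dose switch are assumptions for illustration.
day = np.arange(120)
more_dha = (day >= 60).astype(float)
log_timing = np.log(300) - 0.0014 * day + rng.normal(0, 0.02, day.size)

# Design matrix for log.timing ~ day + more.dha (intercept, day, dose).
X = np.column_stack([np.ones(day.size), day, more_dha])
beta, *_ = np.linalg.lstsq(X, log_timing, rcond=None)
intercept, b_day, b_dha = beta

print(b_day)  # recovers a slope near the true -0.0014 practice effect
print(b_dha)  # near zero: no dose effect was built into these data
```

Because the model is fit on log timings, each coefficient is (approximately) a per-unit percentage change, which is how a day coefficient of -0.0014 becomes the "0.14 percent improvement per day" quoted above.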

I would fit the same model you did but not use all the days — the effects of practice are obviously nonlinear if you use all the data. I’d

1. start at Day 40 (to get rid of the obvious nonlinearity at the beginning)

2. omit the 4 blue outliers after Day 40, which correspond to the first days back from trips.

I think if you do that analysis you will see a big effect of DHA.

Seth,

If I fit the model I mentioned earlier to the subset of the data you suggest, I do indeed get the results you expected. Practice (as modeled by elapsed days) is no longer a significant predictor of the problem-set timings. Increased DHA, however, is significant: increasing the daily DHA dosage from 400 to 800 mg corresponds to a 4.7 percent decrease in timings (95% CI for the effect is [1.4, 7.9] percent).

But is it reasonable to exclude the first 40 or so days? I think so, if we assume that the task is subject to an initial learning effect that plateaus at some point. Judging from the plot, day 40 is into the plateau stage and makes for a reasonable inclusion cutoff.

Cheers,

Tom

Thanks for doing the suggested analysis. A vast amount of data shows that practice has much larger effects early in learning than later in learning, even when performance is measured on a log scale. It’s not just this data set that makes me want to exclude a bunch of early days, it’s all that other data, too.

Seth,

No, it’s not obvious to me. If I ignore the first 20 days or so, my eye sees a set of points that looks like it is clustering around a linear, downward-sloping line. Maybe there is a discontinuity around the color change, maybe not. With the first 20 days thrown in, I see some kind of exponential-like decay that approaches a linear asymptote. Some kind of A-B testing would make this a lot more convincing, since it does look like there is a fairly pronounced time component.

Interesting stuff, though.

Geoff