Back Pain Cured by Sarno’s Ideas

Two years ago, a professor of decision science wrote me to say that Vitamin D3 in the morning greatly improved his sleep. Recently he wrote again:

Once again you have dramatically improved my life through your blog.

In this Assorted Links post you offered:

The back pain of a friend of mine, which had lasted 20 years and was getting worse, went away when he followed this doctor’s advice

I read the link about Dr. Sarno and went to Amazon to check out his book, “Healing Back Pain”. 700 reviews with a 4.5 star rating. I spent two hours reading the reviews. Person after person saying, “my back is better” and nobody really described what the book had them do. I bought it two weeks ago.

In a nutshell, Sarno says that this type of back pain is caused by oxygen deprivation of some back muscles/tendons, and that the mind does this as a defense mechanism so I don’t have to confront my subconscious anger.

I don’t have to pinpoint the source of my anger. I don’t have to come to grips with it and stop being angry. I just have to acknowledge the anger. That’s it. I read half the book in one sitting. I thought, this is crazy, but it has 700 4+ stars at Amazon. Maybe it does work.

My wife and I have two cars. One of them is a small Saturn. I hate it. It hurts my back to get in or out of it, and if I drive for more than five minutes I have to squirm to keep the back pain under control. Last week I took the Saturn for two half-hour drives with only one wince of pain. Today I took it to the gym (a five-minute drive) but it didn’t hurt to get in or out.

In the morning, to get out of bed, I have to roll over and swing my legs out toward the floor and then prop myself up into a sitting position. At least, that’s how I’ve done it for the past year. This week, I just sat up in bed with no pain. Every morning.

I am still a bit weak in the lower back, after more than a year of restricted physical activity. But this is amazing.

Interview with Sarno on Larry King Live (1999).

Saturated Fat and Heart Attacks

After I discovered that butter made me faster at arithmetic, I started eating half a stick (66 g) of butter per day. After a talk I gave about it, a cardiologist in the audience said I was killing myself. I said that the evidence that butter improved my brain function was much clearer than the evidence that butter causes heart disease. The cardiologist couldn’t debate this; he seemed to have no idea of the evidence.

Shortly before I discovered the butter/arithmetic connection, I had a heart scan (a tomographic x-ray) from which an Agatston score is computed, a measure of calcification of your blood vessels. The Agatston score is a good predictor of whether you will have a heart attack. The higher your score, the greater the probability. My score put me close to the median for my age. A year later — after eating lots of butter every day during that year — I got a second scan. Most people get about 25% worse each year. My second scan showed regression (= improvement): it was 40% better (lower) than expected under the usual 25% yearly increase. A big increase in butter consumption was the only aspect of my diet that I consciously changed between Scan 1 and Scan 2.
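To make the arithmetic concrete, here is a small Python sketch with made-up numbers (my actual scores aren't given here); the point is only that a result 40% below the expected 25% yearly increase ends up below the starting score, i.e., regression:

    # Hypothetical numbers; only the percentages come from the text.
    scan1 = 100.0                          # first Agatston score (made up)
    expected_scan2 = scan1 * 1.25          # typical course: about 25% worse per year
    actual_scan2 = expected_scan2 * 0.60   # 40% better (lower) than expected

    print(expected_scan2)  # 125.0
    print(actual_scan2)    # 75.0, below the first score, i.e., regression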

The improvement I observed, however surprising, was consistent with a 2004 study that measured narrowing of the arteries as a function of diet. About 200 women were studied for three years. There were three main findings. 1. The more saturated fat, the less narrowing. Women in the highest quartile of saturated fat intake didn’t have, on average, any narrowing. 2. The more polyunsaturated fat, the more narrowing. 3. The more carbohydrate, the more narrowing. Of all the nutrients examined, only saturated fat clearly reduced narrowing. Exactly the opposite of what we’ve been told.

As this article explains, the original idea that fat causes heart disease came from Ancel Keys, who omitted most of the available data from his data set. When all the data were considered, there was no connection between fat intake and heart disease. There has never been convincing evidence that saturated fat causes heart disease, but somehow this hasn’t stopped the vast majority of doctors and nutrition experts from repeating what they’ve been told.

The Emperor’s New Clothes and the New York Times Paywall

A few years ago I blogged about three books I called The Emperor’s New Clothes trilogy. Each book described a situation in which, from a certain point of view, powerful people — our supposed leaders — “walked around naked”, that is, did things absurd to the naked eye, like the Emperor in the story. As in the story, many people, including experts, said nothing.

After reading about the fate of the Washington Post, I thought of the New York Times paywall, which can be avoided (i.e., defeated) by using what Chrome calls “incognito mode”. (Firefox has a similar mode.) I didn’t know this until recently; some of my friends didn’t know it. One of them carefully rationed the Times articles she read. I wonder how long the ignorance will last. The Times is an extremely important institution. In the many long discussions at the Times about the paywall, no one mentioned this?

“A Debt-Ceiling Breach Would be Very, Very, Very Bad”

At the end of an article by Kevin Roose in New York about the effects of a debt-ceiling breach:

The bottom line: A debt-ceiling breach would be very, very, very bad.

Keep in mind that these are all hypothetical scenarios. Reality could be better, or much worse. The truth is that while we sort of know what a government shutdown would look like (since it’s happened in the past), we have no idea what chaos a debt-ceiling breach could bring. If, in a month, we reach the X Date, run out of money, and are stuck in political stalemate, we’ll be entering truly uncharted waters. And we’ll be dealing our already-fragile economy what could amount to a knockout blow.

This is an example of something common: Someone who has never correctly predicted anything (in this case, Roose) telling the rest of us what will happen with certainty. If Roose is repeating what experts told him, he should have said who, and their track record. Roose is far from the only person making scary predictions without any evidence he can do better than chance. Here is another example by Derek Thompson in The Atlantic.

The same thing happens with climate change, except that it is models, not people, making predictions. Models that have never predicted climate correctly — for example, none predicted the current pause in warming — are assumed to predict climate correctly. We are supposed to be really alarmed by their predictions. This makes no sense, but there it is. Hal Pashler and I wrote about this problem in psychology.

A third example is the 2008 financial crisis. People who failed to predict the crisis were put in charge of fixing it. By failing to predict the crisis, they showed they didn’t understand what caused it. It is transparently unwise to have your car fixed by someone who doesn’t understand how cars work, but that’s what happened. Only Nassim Taleb seems to have emphasized this. We expect scary predictions based on nothing from religious leaders — that’s where the word apocalypse comes from. From journalists and the experts they rely on, such predictions are less forgivable.

I don’t know what will happen if there is a debt-ceiling breach. But at least I don’t claim to (“very very very bad”). And at least I am aware of a possibility that Roose (and presumably the experts he consulted) don’t seem to have thought of. A system is badly designed if a relatively likely event (debt-ceiling breach) can cause disaster — as Roose claims. The apocalyptic possibilities give those in control of whether that event happens (e.g., Republican leaders in Congress) too much power — the power to scare credulous people. If there is a breach, we will find out what happens. If a poorly-built system falls down, it will be much easier to build a better one. Roose and other doom-sayers fail to see there are plausible arguments on both sides.

“Science is the Belief in the Ignorance of Experts” — Richard Feynman

“Science is the belief in the ignorance of experts,” said the physicist Richard Feynman in a 1966 talk to high-school science teachers. I think he meant science is the belief in the fallibility of experts. In the talk, he says science education should be about data —  how to gather data to test ideas and get new ideas — not about conclusions (“the earth revolves around the sun”). And it should be about pointing out that experts are often wrong. I agree with all this.

However, I think the underlying idea — what Feynman seems to be saying — is simply wrong. Did Darwin come up with his ideas because he believed experts (the Pope?) were wrong? Of course not. Did Mendel do his pea experiments because he didn’t trust experts? Again, of course not. Darwin and Mendel’s work showed that the experts were wrong but that’s not why they did it. Nor do scientists today do their work for that reason. Scientists are themselves experts. Do they do science to reveal their own ignorance? No, that’s blatantly wrong. If science is the belief in the ignorance of experts, and X is the belief in the ignorance of scientists, what is X? Our entire economy is based on expertise. I buy my car from experts in making cars, buy my bread from bread-making experts, and so on. The success of our economy teaches us we can rely on experts. Why should high-school science teachers say otherwise? If we can rely on experts, and science rests on the assumption that we can’t, why do we need scientists? Is Feynman saying experts are wrong 1% of the time, and that’s why we need science?

I think what Feynman actually meant (but didn’t say clearly) is science protects us against self-serving experts. If you want to talk about the protection-against-experts function of science, the heart of the matter isn’t that experts are ignorant or fallible. It is that experts, including scientists, are self-serving. The less certainty in an area, the more experts in that area slant or distort the truth to benefit themselves. They exaggerate their understanding, for instance. A drug company understates bad side effects. (Calling this “ignorance” is too kind.) This is common, non-obvious, and worth teaching high-school students. Science journalists, who are grown-ups and should know better, often completely ignore this. So do other journalists. Science (data collection) is unexpectedly powerful because experts are wrong more often than a naive person would guess. The simplest data collection is to ask for an example.

When Genius by James Gleick (a biography of Feynman) was published, I said it should have been titled Genius Manqué. This puzzled my friends. Feynman was a genius, I said, but lots of geniuses have had a bigger effect on the world. I heard Feynman himself describe how he came to invent Feynman diagrams. One day, when he was a graduate student, his advisor, John Wheeler, phoned him. “Dick,” he said, “do you know why all electrons have the same charge? Because they’re the same electron.” One electron moves forward and backward in time creating all the electrons we observe. Feynman diagrams came from this idea. The Feynman Lectures on Physics were a big improvement over standard physics books — more emotional, more vivid, more thought-provoking — but contain far too little about data, in my opinion. Feynman failed to do what he told high school teachers to do.

Progress in Psychiatry and Psychotherapy: The Half-Full Glass

Here is an excellent introduction to cognitive-behavioral therapy (CBT) for depression, centering on a Stanford psychiatrist named David Burns. I was especially interested in this:

[Burns] currently draws from at least 15 schools of therapy, calling his methodology TEAM—for testing, empathy, agenda setting and methods. . . . Testing means requiring that patients complete a short mood survey before and after each therapy session. In Chicago, Burns asks how many of the therapists [in the audience] do this. Only three [out of 100] raise their hands. Then how can they know if their patients are making progress? Burns asks. How would they feel if their own doctors didn’t take their blood pressure during each check-up?

Burns says that in the 1970s at Penn [where he learned about CBT], “They didn’t measure because there was no expectation that there would be a significant change in a single session or even over a course of months.” Forty years later, it’s shocking that so little attention is paid to measuring whether therapy makes a difference. . . “Therapists falsely believe that their impression or gut instinct about what the patient is feeling is accurate,” says May [a Stanford-educated Bay Area psychiatrist], when in fact their accuracy is very low.

When I was a graduate student, I started measuring my acne. One day I told my dermatologist what I’d found. “Why did you do that?” he asked. He really didn’t know. Many years later, an influential psychiatrist — Burns, whose Feeling Good book, a popularization of CBT, has sold millions of copies — tells therapists to give patients a mood survey. That’s progress.

But it is also a testament to the backward thinking of doctors and therapists that Burns didn’t tell his audience:

–have patients fill out a mood survey every day
–graph the results

Even more advanced:

–use the mood scores to measure the effects of different treatments

Three cheap safe things. It is obvious they would help patients. Apparently Burns doesn’t do these things with his own patients, even though his own therapy (TEAM) stresses “testing” and “methods”. It’s 2013. Not only do psychiatrists and therapists not do these things, they don’t even think of doing them. I seem to be the first to suggest them.
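To make the three suggestions concrete, here is a minimal Python sketch; the dates, scores, and treatment labels are invented, and the comparison is deliberately crude (a mean per treatment, not a proper effect estimate):

    # 1. Record a mood score (say 0-100) every day, with whatever treatment was in effect.
    # All values below are invented for illustration.
    import matplotlib.pyplot as plt

    records = [
        ("2013-09-01", 42, "none"),
        ("2013-09-02", 45, "none"),
        ("2013-09-03", 55, "daily walk"),
        ("2013-09-04", 60, "daily walk"),
        ("2013-09-05", 58, "daily walk"),
    ]

    # 2. Graph the results.
    dates = [r[0] for r in records]
    scores = [r[1] for r in records]
    plt.plot(dates, scores, marker="o")
    plt.ylabel("daily mood score")
    plt.xticks(rotation=45)
    plt.tight_layout()
    plt.show()

    # 3. Use the scores to compare treatments (here, just a mean per treatment).
    by_treatment = {}
    for _, score, treatment in records:
        by_treatment.setdefault(treatment, []).append(score)
    for treatment, values in by_treatment.items():
        print(treatment, sum(values) / len(values))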

Thanks to Alex Chernavsky.

Assorted Links

Thanks to Jeff Winkler and Tom George.

What Goes Unsaid: Self-Serving Health Research

“The realization that the world is often quite different from what is presented in our leading newspapers and magazines is not an easy conclusion for most educated Americans to accept,” writes Ron Unz. He’s right. He provides several examples of the difference between reality and what we are told. In finance, there are Bernie Madoff and Enron. Huge frauds are supposed to be detected. In geopolitics, there is the Iraq War. Saddam Hussein’s Baathists and al-Qaeda were enemies. Invading Iraq because of 9/11 made as much sense as attacking “China in retaliation for Pearl Harbor” — a point rarely made before the war. In these cases, the national media wasn’t factually wrong. No one said Madoff wasn’t running a Ponzi scheme. The problem is that something important wasn’t said. No one said Madoff was running a Ponzi scheme.

This is how the best journalists (e.g., at The New Yorker and the New York Times) get it wrong — so wrong that “best” may be the wrong word. In the case of health, what is omitted from the usual coverage has great consequences. Health journalists fail to point out the self-serving nature of health research, the way it helps researchers at the expense of the rest of us.

The recent Health issue of the New York Times Magazine has an example. An article by Peggy Orenstein about breast cancer, meant to be critical of current practice, goes on and on about how screening has not had the promised payoff. As has been widely noted. What Orenstein fails to understand is that the total emphasis on screening was a terrible mistake to begin with. Before screening was tried, it was hard to know whether it would fail or succeed; it was worth trying, absolutely. But it was always entirely possible that it would fail — as it has. A better research program would have split the funds 50/50 between screening and lifestyle-focused prevention research.

The United States has the highest breast cancer incidence (age-adjusted) rates in the world — about 120 per 100,000 women, in contrast to 20-30 per 100,000 women in poor countries. This implies that lifestyle changes can produce big improvements. Orenstein doesn’t say this. She fails to ask why the Komen Foundation has totally emphasized cure (“race for the cure”) over prevention through lifestyle change. In a long piece, here is all she says about lifestyle-focused prevention:

Many [scientists and advocates] brought up the meager funding for work on prevention. In February, for instance, a Congressional panel made up of advocates, scientists and government officials called for increasing the share of resources spent studying environmental links to breast cancer. They defined the term liberally to include behaviors like alcohol consumption, exposure to chemicals, radiation and socioeconomic disparities.

Nothing about how the “meager funding” was and is a huge mistake. Xeni Jardin of Boing Boing called Orenstein’s article “a hell of a piece”. Fran Visco, the president of the National Breast Cancer Coalition, praised Orenstein’s piece and wrote about preventing breast cancer via a vaccine. Jardin and Visco, like Orenstein, failed to see the elephant in the room.

Almost all breast-cancer research money has gone to medical school professors (most of whom are men). They don’t do lifestyle research, which is low-tech. They do high-tech cure research. Breast cancer screening, which is high-tech, fits their overall focus. High-tech research wins Nobel Prizes, low-tech research does not. For example, those who discovered that smoking causes lung cancer never got a Nobel Prize. Health journalists, most of whom are women, apparently fail to see, and definitely fail to write about, how they (and all women) are harmed by this allocation of research effort. The allocation helps the careers of the researchers (medical school professors); it hurts anyone who might get breast cancer.

The Blindness of Scientists: The Problem isn’t False Positives, It’s Undetected Positives

Suppose you have a car that can only turn right. Someone says, Your car turns right too much. You might wonder why they don’t see the bigger problem (can’t turn left).

This happens in science today. People complain about how well the car turns right, failing to notice (or at least say) it can’t turn left. Just as a car should turn both right and left, scientists should be able to (a) test ideas and (b) generate ideas worth testing. Tests are expensive. To be worth the cost of testing, an idea needs a certain plausibility. In my experience, few scientists have clear ideas about how to generate ideas plausible enough to test. The topic is not covered in any statistics text I have seen — the same books that spend many pages on how to test ideas.

Unhelpful Answers (Ancestral Health Symposium 2013)

At the Ancestral Health Symposium, I went to a talk about food and the brain, a great interest of mine. The speaker said that flaxseed oil was ineffective because only a small fraction (5%) gets converted into DHA — a common claim.

During the question period, I objected.

Seth I found that after I ate some flaxseed oil capsules, my balance improved. Apparently flaxseed oil improved my brain function. This disagrees with what you said.

Speaker Everyone’s different.

A man in the audience said what I observed might have been a placebo effect. I said that couldn’t be true because the effect was a surprise. He disagreed. (The next day, in the lunch line, he spoke to a friend about getting in a kerfuffle with “an emeritus professor who wasn’t used to being disagreed with.”) I spoke to the speaker again:

Seth Is it possible that flaxseed oil is converted to DHA at a higher rate than you said?

Speaker Anything’s possible.

This reminded me of a public lecture by Danny Kahneman at UC Berkeley. During the question period, a man, who appeared to have some kind of impairment, asked a question that was hard to understand. Kahneman gave a very brief answer, something like “No.” 

Afterwards, a woman came over to me. Maybe flaxseed oil reduced inflammation, she said. Given that the brain is very high in omega-3, and so is flaxseed oil, this struck me as unlikely. I said I didn’t like how my question had been answered. I’ve been there, she said. Other members of her family were doctors, she said. She would object to what they said and they would respond in a dismissive way.

The speaker is/was a doctor. Her talk consisted of repeating what she had read, apparently. The possibility that something she read was wrong . . . well, anything’s possible.

The Truth in Small Doses: Interview with Clifton Leaf (Part 2 of 2)

Part 1 of this interview about Leaf’s book The Truth in Small Doses: Why We’re Losing the War on Cancer — and How to Win It was posted yesterday.

SR You say we should “let scientists learn as they go”. For example, reduce the requirement that grant proposals be framed as tests of hypotheses. I agree. I think most scientists know very little about how to generate plausible ideas. If they were allowed to try to do this, as you propose, they would learn how to do it. However, I failed to find evidence in your book that a “let scientists learn as they go” strategy works better (leaving aside Burkitt). Did I miss something?

CL Honestly, I don’t think we know yet that such a strategy would work. What we have in the way of evidence is a historical control (to some extent, we did try this approach in pediatric cancers in the 1940s through the 1960s) and a comparator arm (the current system) that so far has been shown to be ineffective.

As I tried to show in the book, the process now isn’t working. And much of what doesn’t work is what we’ve added in the way of bad management. Start with a lengthy, arduous grant-application process that squelches innovative ideas, that funds barely 10 percent of a highly trained corps of academic scientists and demoralizes the rest, and that rewards the same applicants (and types of proposals) over and over despite little success or accountability. This isn’t the natural state of science. We BUILT that. We created it through bad management and lousy systems.

Same for where we are in drug development. We’ve set up clinical trials rules that force developers to spend years ramping up expensive human studies to test for statistical significance, even when the vast majority of the time, the question being asked is of little clinical significance. The human cost of this is enormous, as so many have acknowledged.

With regard to basic research, one has only to talk to young researchers (and examine the funding data) to see how badly skewed the grants process has become. As difficult (and sometimes inhospitable) as science has always been, it has never been THIS hard for a young scientist to follow up on questions that he or she thinks are important. In 1980, more than 40 percent of major research grants went to investigators under 40; today it’s less than 10 percent. For anyone asking provocative, novel questions (those that the study section doesn’t “already know the answer to,” as the saying goes), the odds of funding are even worse.

So, while I can’t say for sure that an alternative system would be better, I believe that given the current state of affairs, taking a leap into the unknown might be worth it.

SR I came across nothing about how it was discovered that smoking causes lung cancer. Why not? I would have thought we can learn a lot from how this discovery was made.

CL I wish I had spent more time on smoking. I mention it a few times in the book. In discussing Hoffman (pg. 34, and footnote, pg. 317), I say:

He also found more evidence to support the connection of “chronic irritation” from smoking with the rise in cancers of the mouth and throat. “The relation of smoking to cancer of the buccal [oral] cavity,” he wrote, “is apparently so well established as not to admit of even a question of doubt.” (By 1931, he would draw an unequivocal link between smoking and lung cancer—a connection it would take the surgeon general an additional three decades to accept.)

And I make a few other brief allusions to smoking throughout the book. But you’re right, I gave this preventable scourge short shrift. Part of why I didn’t spend more time on smoking was that I felt its role in cancer was well known, and by now, well accepted. Another reason (though I won’t claim it’s an excusable one) is that Robert Weinberg did such a masterful job of talking about this discovery in “Racing to the Beginning of the Road,” which I consider to be the single best book on cancer.

I do talk about Weinberg’s book in my own, but I should have singled out his chapter on the discovery of this link (titled “Smoke and Mirrors”), which is as much a story of science as it is a story of scientific culture.

SR Overall you say little about epidemiology. You write about Burkitt but the value of his epidemiology is unclear. Epidemiology has found many times that there are big differences in cancer rates between different places (with different lifestyles). This suggests that something about lifestyle has a big effect on cancer rates. This seems to me a very useful clue about how to prevent cancer. Why do you say nothing about this line of research (lifestyle epidemiology)?

CL Seth, again, I agree. I don’t spend enough time discussing the role that good epidemiology can play in cancer prevention. In truth, I had an additional chapter on the subject, which began by discussing decades of epidemiological work linking the herbicide 2,4-D with various cancers, particularly with prostate cancer in the wheat-growing states of the American west (Montana, the Dakotas and Minnesota). I ended up cutting the chapter in an effort to make the book a bit shorter (and perhaps faster). But maybe that was a mistake.

For what it’s worth, I do believe that epidemiology is an extremely valuable tool for cancer prevention.

[End of Part 2 of 2]

The Truth in Small Doses: Interview with Clifton Leaf (Part 1 of 2)

I found a lot to like and agree with in The Truth in Small Doses: Why We’re Losing the War on Cancer — and How to Win It by Clifton Leaf, published recently. It grew out of a 2004 article in Fortune in which Leaf described poor results from cancer research and said that cancer researchers work under a system that “rewards academic achievement and publication over all else” — in particular, over “genuine breakthroughs.” I did not agree, however, with his recommendations for improvement, which seemed to reflect the same thinking that got us here. It reminded me of President Obama putting the people who messed up the economy in charge of fixing it. However, Leaf had spent a lot of time on the book, and obviously cared deeply, and had freedom of speech (he doesn’t have to worry about offending anyone, as far as I can tell), so I wondered how he would defend his point of view.

Here is Part 1 of an interview in which Leaf answered written questions.

Researchers Fool Themselves: Water and Cognition

A recent paper about the effect of water on cognition illustrates a common way that researchers overstate the strength of the evidence, apparently fooling themselves. Psychology researchers at the University of East London and the University of Westminster did an experiment in which subjects didn’t drink or eat anything starting at 9 pm and the next morning came to the testing room. All of them were given something to eat, but only half of them were given something to drink. They came in twice. On one week, subjects were given water to drink; on the other week, they weren’t given water. Half of the subjects were given water on the first week, half on the second. Then they gave subjects a battery of cognitive tests.

One result makes sense: subjects were faster on a simple reaction time test (press button when you see a light) after being given water, but only if they were thirsty. Apparently thirst slows people down. Maybe it’s distracting.

The other result emphasized by the authors doesn’t make sense: Water made subjects worse at a task called Intra-Extra Dimensional Set Shift. The task provided two measures (total trials and total errors) but the paper gives results only for total trials. The omission is not explained. (I asked the first author about this by email; she did not explain the omission.) On total trials, subjects given water did worse, p = 0.03. A surprising result: after persons go without water for quite a while, giving them water makes them worse.

This p value is not corrected for number of tests done. A table of results shows that 14 different measures were used. There was a main effect of water on two of them. One was the simple reaction time result; the other was the IED Stages Completed (IED = intra/extra dimensional) result. It is likely that the effect of water on simple reaction time was a “true positive” because the effect was influenced by thirst. In contrast, the IED Stages Completed effect wasn’t reliably influenced by thirst. Putting the simple reaction time result aside, there are 13 p values for the main effect of water; one is weakly reliable (p = 0.03). If you do 20 independent tests, purely by chance at least one is likely to have p < 0.05 even when there are no true effects. Taken together, there is no good reason to believe that water had main effects aside from the simple reaction time test. The paper would be a good question for an elementary statistics class (“Question: If 13 tests are independent, and there are no true effects present, how likely is it that at least one will reach p = 0.03 or better by chance? Answer: 1 – (0.97^13) = 0.33”).
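The same arithmetic in a few lines of Python (assuming independent tests, which these measures almost certainly are not):

    # Chance of at least one test reaching a given p threshold when no true effects exist.
    def prob_at_least_one(p_threshold, n_tests):
        return 1 - (1 - p_threshold) ** n_tests

    print(round(prob_at_least_one(0.03, 13), 2))  # 0.33, the number above
    print(round(prob_at_least_one(0.05, 20), 2))  # 0.64 for 20 tests at the usual 0.05 cutoff

    # A Bonferroni-style correction runs the other way: to keep the overall
    # false-positive rate near 0.05 across 13 tests, each test would need
    # p < 0.05 / 13, about 0.004 -- far stricter than the reported 0.03.
    print(0.05 / 13)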

I wrote to the first author (Caroline Edmonds) about this several days ago. My email asked two questions. She replied but failed to answer the question about number of tests. Her answer was written in haste; maybe she will address this question later.

A better analysis would have started by assuming that the 14 measures are unlikely to be independent. It would have done (or used) a factor analysis that condensed the 14 measures into (say) three factors. Then the researchers could ask if water affected each of the three factors. Far fewer tests, far more independent tests, far harder to fool yourself or cherry-pick.
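A rough sketch of that analysis, with random placeholder data (the study’s per-subject scores aren’t available here); the factor model and the per-factor tests are the point, not the numbers:

    import numpy as np
    from scipy import stats
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    n_subjects = 40
    water = rng.normal(size=(n_subjects, 14))     # placeholder: 14 measures, water session
    no_water = rng.normal(size=(n_subjects, 14))  # placeholder: 14 measures, no-water session

    # Condense the 14 correlated measures into 3 factors.
    fa = FactorAnalysis(n_components=3)
    fa.fit(np.vstack([water, no_water]))
    water_factors = fa.transform(water)
    no_water_factors = fa.transform(no_water)

    # Three paired tests (one per factor) instead of 14 correlated ones.
    for k in range(3):
        t, p = stats.ttest_rel(water_factors[:, k], no_water_factors[:, k])
        print(f"factor {k}: t = {t:.2f}, p = {p:.3f}")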

The problem here — many tests, failure to correct for this or do an analysis with far fewer tests — is common but the analysis I suggest is, in experimental psychology papers, very rare. (I’ve never seen it.) Factor analysis is taught as part of survey psychology (psychology research that uses surveys, such as personality research), not as part of experimental psychology.  In the statistics textbooks I’ve seen, the problem of too many tests and correction for/reduction of number of tests isn’t emphasized. Perhaps it is a research methodology example of Gresham’s Law: methods that make it easier to find what you want (differences with p < 0.05) drive out better methods.

Thanks to Allan Jackson.

Heart Disease Epidemic and Latitude Effect: Reconciliation

For the last half century, heart disease has been the most common cause of death in rich countries — more common than cancer, for example. I recently discussed the observation of David Grimes, a British gastroenterologist, that heart disease has followed an infectious-disease epidemic-like pattern: sharp rise, sharp fall. From 1920 to 1970, heart disease in England increased by a factor of maybe 100, from a very low level to 500 deaths per 100,000 people per year. From 1970 to 2010, it decreased by a factor of 10. This pattern cannot be explained by any popular idea about heart disease. For example, dietary or exercise or activity changes cannot explain it. They haven’t changed the right way (way up, way down) at the right time (peaking in 1970). In spite of this ignorance, I have never heard a health expert express doubt about what causes heart disease. This fits with what I learned when I studied myself. What I learned had little correlation with what experts said.

Before the epidemic paper, Grimes wrote a book about heart disease. It stressed the importance of latitude: heart disease is more common at more extreme latitudes. For example, it is more common in Scotland than the south of England. The same correlation can be seen in many data sets and with other diseases, including influenza, variant Creutzfeldt-Jakob disease, multiple sclerosis, Crohn’s disease and other digestive diseases. More extreme latitudes get less sun. Grimes took the importance of latitude to suggest the importance of Vitamin D. Better sleep with more sun is another possible explanation.

The amount of sunlight has changed very little over the last hundred years so it cannot explain the epidemic-like rise and fall of heart disease. I asked Grimes how he reconciled the two sets of findings. He replied:

It took twenty years for me to realize the importance of the sun. I always felt that diet was grossly exaggerated and that victim-blaming was politically and medically convenient – disease was due to the sufferers and it was really up to them to correct their delinquent life-styles. I was brought up and work in the north-west of England, close to Manchester. The population has the shortest life-expectancy in England, with Scotland and Northern Ireland even worse. It must be a climate effect. And so on to sunlight. So many parallels from a variety of diseases.

When I wrote my book I was aware of the unexplained decline of CHD deaths and I suggested that the UK Clean Air Act of 1953 might have been the turning point, the effect being after 1970. Cleaning of the air did increase sun exposure but the decline of CHD deaths since 1970 has been so great that there must be more to it than clean air and more sun. At that time I was unaware of the rise of CHD deaths after 1924 and so I was unaware of the obvious epidemic. I now realize that CHD must have been due to an environmental factor, probably biological, an unidentified micro-organism. This is the cause, but the sun, through immune-enhancement, controls the distribution, geographical, social and ethnic. The same applies to many cancers, multiple sclerosis, Crohn’s disease (my main area of clinical activity), and several others. I think this reconciles the sun and a biological epidemic.

He has written three related ebooks: Vitamin D: Evolution and Action, Vitamin D: What It Can Do For Your Baby, and You Will Not Die of a Heart Attack.

Assorted Links

  • Kombucha beer (which may not taste like beer)
  • A growing taste for sour. “I saw bottles of [kombucha] in rural Virginia gas stations . . .  kimchi, fermented cabbage, has spread from Korean kitchens to Los Angeles taco trucks.”
  • Exercise and weight loss. Only the extremes of exercise — very intense exercise (very brief) and very long lasting exercise (walking) — reduce weight or keep weight low. The middling exercise Americans actually choose (aerobics) has little effect. This post, by my friend Phil Price, gets the high-intensity part right but the low-intensity part wrong.
  • Weight loss fails to prevent heart attacks. “The study followed 5,200 patients and lasted 11 years.” Surely cost tens of millions of dollars. More evidence of mainstream ignorance about heart disease.
  • A kickback by any other name . . . “At least 17 of the top 20 Bystolic prescribers in Medicare’s prescription drug program in 2010 have been paid by Forest [which makes Bystolic] to deliver promotional talks. In 2012, they together received $284,700 for speeches and more than $20,000 in meals.”

Thanks to Bryan Castañeda and Hal Pashler.

Assorted Links

  • natural acne remedies
  • A mainstream climate scientist has doubts. “We’re facing a puzzle. Recent CO2 emissions have actually risen even more steeply than we feared. As a result, according to most climate models, we should have seen temperatures rise by around 0.25 degrees Celsius (0.45 degrees Fahrenheit) over the past 10 years. That hasn’t happened. In fact, the increase over the last 15 years was just 0.06 degrees Celsius (0.11 degrees Fahrenheit) — a value very close to zero. This is a serious scientific problem.” What would Bill McKibben say?
  • Personal Experiments, a research site where you can sign up for experiments.
  • Trouble at GSK Shanghai. The defenses of the accused strike me as plausible.
  • Sleep disturbance in a hospital. “Between 10 p.m. and 6 a.m., I did not go more than an hour without some kind of interruption.” As ridiculous as cutting off part of the immune system because of too many infections (tonsillectomies) and the view that acne has nothing to do with diet.

Thanks to Dave Lull.

The Rise and Fall of Heart Disease

Heart disease was once the number one killer in rich countries. Maybe it still is. Huge amounts of time and money have gone into trying to reduce it — statins, risk factor measurement (e.g., cholesterol measurement), telling people to “eat healthy” and exercise more, and so on. Unfortunately for the poor souls who follow the advice (e.g., take statins), the advice givers, such as doctors, never make clear how little they know about what causes heart disease. Maybe they don’t realize how little they know.

Hospitals and Their Employees: Stuck in the 1800s

An article in the New York Times describes how difficult it has been for hospital administrators to get their employees to wash their hands. Hospital-acquired infections are an enormous problem and cause many deaths, yet “studies [in the last 10 years] have shown that without encouragement, hospital workers wash their hands as little as 30 percent of the time that they interact with patients.” Hospitals are now — just now — trying all sorts of things to increase the hand-washing rate. The germ theory of disease dates from the 1800s. Ignaz Semmelweis did his pioneering work, showing that hand-washing dramatically reduced the death rate (from 18% to 2%), in 1847.

So hospitals are only now (in the last few years) grasping the implications of facts and a well-established theory from the 1800s. What goes unsaid in the usual discussion of how awful this is — how dare doctors refuse to wash their hands!, a sentiment with which I agree — is how backward both sides of the discussion are. A discussion in which many lives are at stake.

The Times article now has 209 comments, many by doctors and nurses. The doctors, of course, went to medical school and passed a rigorous test about medicine (“board-certified”). Yet they don’t know basic things about infection. (One doctor, in the comments, calls hand-washing “this current fad”.) They appear to have no idea that it is possible to improve the body’s ability to resist infection. I read all the comments. Not one mentioned either of two easy, cheap, low-tech ways to reduce hospital infections:

1. Allow patients to sleep well. The body fights off infection during sleep, but hospitals are notoriously bad places to sleep. Patients are woken up by nurses, for example. You might think that everyone knows sleep helps fight infection . . . but apparently not hospital administrators nor the doctors and nurses who commented on the Times article. It was in the interest of these doctors and nurses to suggest alternative solutions because they dislike washing their hands.

2. Feed patients fermented foods (or probiotics). Fermented foods help you fight off infections. I believe this is because the bacteria on fermented food are perfectly safe yet successfully compete with dangerous bacteria. In any case, plenty of studies show that probiotics and fermented foods reduce hospital infections. In one study, “use of probiotics reduced the new cases of C. difficile-associated diarrhea by two thirds (66 per cent), with no serious adverse events attributable to probiotics.” Maybe this just-published article (“Probiotics: a new frontier for infection control”) will bring a few people who work in hospitals into the 21st century.

That hospital administrators and their doctors and nurses — and, in this discussion, their critics — are stuck in the 1800s is clear enough. What is slightly less clear is that our understanding is better now than it was in the 1800s and some of the new knowledge is useful.

Thanks to Bryan Castañeda.

Celiac Experts Make Less Than Zero Sense

In the 1960s, Edmund Wilson reviewed Vladimir Nabokov’s translation of Eugene Onegin. Wilson barely knew Russian and his review was a travesty. Everything was wrong. Nabokov wondered if it had been written that way to make sense when reflected in a mirror.

I thought of this when I read recent remarks by “celiac experts” in the New York Times. The article, about gluten sensitivity, includes an example of a woman who tried a gluten-free diet:

Kristen Golden Testa could be one of the gluten-sensitive. Although she does not have celiac, she adopted a gluten-free diet last year. She says she has lost weight and her allergies have gone away. “It’s just so marked,” said Ms. Golden Testa, who is health program director in California for the Children’s Partnership, a national nonprofit advocacy group. She did not consult a doctor before making the change, and she also does not know [= is unsure] whether avoiding gluten has helped at all. “This is my speculation,” she said. She also gave up sugar at the same time and made an effort to eat more vegetables and nuts.

Fine. The article goes on to quote several “celiac experts” (all medical doctors) who say deeply bizarre things.

“[A gluten-free diet] is not a healthier diet for those who don’t need it,” Dr. Guandalini [medical director of the University of Chicago’s Celiac Disease Center] said. These people “are following a fad, essentially.” He added, “And that’s my biased opinion.”

Where Testa provides a concrete example of health improvement and refrains from making too much of it, Dr. Guandalini does the opposite (provides no examples, makes extreme claims).

Later, the article says this:

Celiac experts urge people to not do what Ms. Golden Testa did — self-diagnose. Should they actually have celiac, tests to diagnose it become unreliable if one is not eating gluten. They also recommend visiting a doctor before starting on a gluten-free diet.

As someone put it in an email to me, “Don’t follow the example of the person who improved her health without expensive, invasive, inconclusive testing. If you think gluten may be a problem in your diet, you should keep eating it and pay someone to test your blood for unreliable markers and scope your gut for evidence of damage. It’s a much better idea than tracking your symptoms and trying a month without gluten, a month back on, then another month without to see if your health improves.”

Are the celiac experts trying to send a message to Edmund Wilson, who died many years ago?

Are Low-Carb Diets Dangerous?

A link from dearieme led me to a recent study that found low-carb high-protein diets — presumably used to lose weight — associated with heart disease. The heart disease increase was substantial — as much as 60% in those with the most extreme diets. (A critic of the study, Dr. Yoni Freedhoff, called the increase in risk “incredibly small”.) Four other studies of the same question have produced results consistent with this association. No study — at least, no study mentioned in the report — has produced results in the opposite direction (low-carb high-protein diets associated with a decrease in heart disease).

I find this interesting for several reasons.

1. I learned about the study from a Guardian article titled “What doctors won’t do”. A doctor named Tom Smith said, “I would never go on a low-carbohydrate, high-protein diet like Atkins, Dukan or Cambridge.” Fine. He didn’t say what he would do to lose weight. The psychological costs of obesity are huge. The popularity of low-carb diets probably has a lot — or everything — to do with the failure of researchers to find something better. I have never seen people who criticize low-carb diets appear aware of this. I disagree with a lot of Good Calories Bad Calories but I completely agree with its criticism of researchers.