The Lessons of Tisano Tea

I was curious how Tisano Tea began (yesterday’s post) because it was an unusual product (chocolate tea). There wasn’t any point I was trying to make. At a party last night, however, I found myself talking to the daughter of a diplomat (Tisano Tea was started by the son of a diplomat). I told her the story of Tisano Tea. And I couldn’t help pointing out two generalizations it supports:

1. I’ve blogged many times about the value of insider/outsiders — people who have the knowledge of insiders but the freedom of outsiders. Patrick Pineda, the founder of Tisano Tea, was not an insider/outsider but he connected two worlds — the United States and Venezuela (in particular poor Venezuelan farmers) — that are rarely connected.

2. When people from rich countries try to help people in poor countries, the usual approach is to bring something from the rich country to the poor country. Nutritional knowledge, medicine, dams, and so on. One Laptop Per Child is an extreme example. Microcredit is a deceptively attractive example. In recent years, the flaws in this approach have become more apparent and there has been a shift toward local solutions to problems (e.g., the best ideas to help Uganda will come from Ugandans and those who have lived there a long time). Tisano Tea illustrates something that people in rich countries have had an even harder time imagining: people in a poor country (Venezuela) knew something that improved life in a rich country (the United States) — namely, that you can make tea from cacao husks. A small thing, but not trivial (maybe chocolate tea supplies important nutrients). An American desire for Venezuelan cacao husks improves life in Venezuela. Ethnic food trucks are a more subtle example.  When immigrants from poor countries manage to make a living in a rich country — using knowledge of their own cuisine is a good way to do this — they often send money home. As far as I know, this possibility has been ignored in development studies.

My research, which shows how a non-expert can teach experts something, is related to the second generalization. For example, my research on faces and mood has something to teach experts on depression and bipolar disorder. Although the term “home remedy” is standard, and lots of non-experts have improved their health in ways not approved by doctors, I have never heard a health expert acknowledge that this could happen.

Cuban Data Refute Mainstream Health Beliefs

A new BMJ paper looks at Cuban health before and after the economic crisis of 1991-1995, when the Cuban economy nose-dived. There wasn’t enough gasoline for cars, so bike riding greatly increased. In addition, people ate less. What effect did these changes (more exercise, less eating) have on health?

You know what is supposed to happen: Better health. Walter Willett, the Harvard epidemiologist, wrote a commentary about the study that concluded “The current findings add powerful evidence that a reduction in overweight and obesity would have major population-wide [health] benefits.” In other words, Willett said that what happened supports conventional beliefs.

But it didn’t. In several ways, what happened contradicts conventional beliefs.

1. A popular belief is that exercise causes weight loss. However, the percentage of “physically active individuals” doubled from 1985 to 2010 (from about 30% to 60%). In spite of this, the prevalence of obesity increased considerably (from about 13% to 18%) over the same period. Apparently exercise is considerably less important than something else. I have never heard a public health advocate say this.

2. A graph showing rates of heart disease, cancer, and stroke (the three main killers) over the period showed no change in rates of cancer and stroke, in spite of big changes in both exercise and obesity. The rate of heart disease stayed constant during the period when obesity went down. It steadily dropped during the period when obesity went up. Apparently the factors that control obesity and the factors that control heart disease are quite different (contradicting the usual view that exercise reduces both).

3. There is no simple connection between diabetes and obesity. During the economic crisis, when the prevalence of obesity went down by half (from 15% to 7%) and exercise greatly increased, the prevalence of diabetes slightly increased. Only after the crisis did the usual correlation (more obesity, more diabetes) emerge.

4. The only lifestyle factor to have its conventional effect was smoking. The usual belief (which I share) is that when you stop smoking, you gain weight. The data definitely support this connection. A huge reduction in the fraction of people who smoke (from 30% to 10%) did not reduce cancer but did coincide with a great increase in obesity.

5. Cubans are doing something right, as shown by the considerable decrease in heart disease and diabetes deaths. Apparently they are also more health-conscious, as shown by much higher rates of exercise and much lower rates of smoking. (Assuming that cigarettes did not become too expensive.) They are getting fatter, too, but apparently that is less damaging than we are told.

Willett and the authors of the study look at subsets of the data and use theories about “time-lag” to draw reassuring conclusions. In fact, large portions of the data are not easily explained by conventional ideas, as I’ve shown. You can look at the data many ways, but to me the study makes two main points. 1. During a period when everyone was forced to do what doctors recommend (exercise more, eat less), health did not improve. 2. During a period (post-crisis) when obesity got steadily worse, health improved (heart disease rates went down, cancer stayed the same, diabetes mortality went down). Cuba is too poor for the improvement to be due to better high-tech modern medicine. Taken together, these findings suggest we should be more skeptical of what we are told by doctors and health experts such as Willett.

Is Red Meat Dangerous?

A recent paper from the Cleveland Clinic reports more than a dozen studies that add up, say the authors, to the conclusion that red meat and other meats cause heart disease at least partly by increasing trimethylamine-N-oxide (TMAO), which is made from carnitine by intestinal bacteria. Meat, especially red meat, is high in carnitine.

The results were reported all over the world, including the New York Times. There are several reasons to question the conclusion:

1. The association between meat and heart disease is weak. An epidemiological paper from the Harvard Nurses Study found estimated reductions in heart disease on the order of 10-20% when a “healthy” food was substituted for meat. Conclusions about causality (eating Food X causes Disease Y) based on the Harvard Nurses Study have predicted wrongly over and over when tested in experiments, so even this weak association is questionable. A 2010 meta-analysis found no association between red meat consumption and heart disease. The absence of any correlation is surprising because red meat is widely believed to be unhealthy. People who eat more red meat would presumably do more other “unhealthy” things. (Perhaps the error rate of the underlying epidemiology is high. Errors push associations toward zero.)

2. Within the Cleveland paper, the associations between carnitine and TMAO and heart disease are weak. For example, people with the greatest sign of heart disease (“triple” angiographic evidence of heart disease) had only slightly more carnitine in their blood (about 15% more) than people with the least sign of heart disease. (Maybe it is peak levels of carnitine rather than average levels that matter.)

3. A 1996 epidemiological study (via Chris Kresser) that looked at the correlates of various “healthy” habits among people especially interested in health (e.g., they shop at health food stores) found no detectable effect of being a vegetarian. For example, vegetarians had the same all-cause mortality as non-vegetarians. Other factors were associated with reduced mortality, including eating wholemeal bread daily and eating fruit daily. This study looked at a large number of people (about 11,000) for a long time (17 years), so I consider the lack of difference (vegetarians versus non-vegetarians) strong evidence against the idea that modest amounts of meat are harmful.  (And I am going to start eating wholemeal bread in small amounts.)
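The parenthetical point in item 1, that measurement error pushes associations toward zero (statisticians call this regression dilution), can be illustrated with a small simulation. All the numbers here are invented for illustration:

```python
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

n = 10_000
true_intake = [random.gauss(0, 1) for _ in range(n)]
# An outcome genuinely related to true intake.
outcome = [x + random.gauss(0, 1) for x in true_intake]
# What the questionnaire records: true intake plus reporting error.
measured = [x + random.gauss(0, 2) for x in true_intake]

r_true = corr(true_intake, outcome)    # about 0.7
r_measured = corr(measured, outcome)   # about 0.3: same relationship, much weaker association
```

The underlying relationship is unchanged; only the measurement got noisier, yet the observed correlation falls by more than half. If food-frequency questionnaires are error-prone, real associations will look weak and weak ones will look like nothing.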

I don’t dismiss the paper. Among people who eat more than modest amounts of meat, there may be something to it. Now and then epidemiology turns up a powerful risk factor — something associated with a risk increase by a factor of 4 or more (people at a high level of the risk factor get the disease at least four times more often than people at a low level of the factor). History shows that such correlations are likely to tell us something about causality. With weaker correlations (such as the correlation between red meat and heart disease), it is much more a guessing game.

To me, the important clue about heart disease is that it is very low in both Japan and France, much lower than in countries with high rates of heart disease. The two countries have little in common besides the fact that in both, people eat a lot more fermented food than in most places. In France, they drink wine and eat stinky cheese and yogurt. In Japan, they eat miso, pickles, and natto. Maybe fermented food protects against heart disease.

Maybe We SHOULD Eat More Fat?

In a review of Salt Sugar Fat by Michael Moss, a new book about the food industry, David Kamp writes:

The term “bliss point” . . . is used in the soft-drink business to denote the optimal level of sugar at which the beverage is most pleasing to the consumer. . . .

The “Fat” section of “Salt Sugar Fat” is the most disquieting, for, as Moss learns from Adam Drewnowski, an epidemiologist who runs the Center for Obesity Research at the University of Washington, there is no known bliss point for fat — his test subjects, plied with a drinkable concoction of milk, cream and sugar, kept on chugging ever fattier samples without crying uncle. This realization has had huge implications in the food industry. For example, Moss reports, the big companies have come to understand that “cheese could be added to other food products without any worries that people would walk away.”

By “fat” Moss means animal fat (the fat in cheese, for example). I haven’t seen the book but I’m sure Moss doesn’t consider the possibility that “there is no known bliss point for fat” because people should be eating much more animal fat. In other words, it is hard to detect the bliss point when people are suffering from severe fat deprivation.

My view of how much animal fat I should eat changed abruptly when I found that large amounts of pork fat made me sleep better. One day I ate a lot of pork belly (very high fat) to avoid throwing it away. That night I slept much better than usual. I confirmed the effect experimentally. Later, I found that butter (instead of pork fat) made me faster at a mental test. This strengthened my belief that I should eat much more animal fat than countless nutrition experts have said. (Supporting data.)

My sleep and mental test evidence was clear and strong (in the sense of large t value). The evidence that animal fat is bad (based on epidemiology) is neither. That is one reason I trust what I found rather than what I have been told.

Another reason I trust what I found is that people like the taste of fat. That evolution has shaped us to like the taste of something we shouldn’t eat makes no sense. (Surely I don’t have to explain why this doesn’t mean that sugar — not available to prehistoric man — is good for us.) In contrast, it is entirely possible that nutrition experts have gotten things backwards. Epidemiology is a fledgling science, and epidemiologists often make mistakes; their conclusions can point in the wrong direction. Here is an example, about the effect of beta-carotene on heart disease:

Epidemiology repeatedly found that people who consumed more beta-carotene had less heart disease. When the idea that beta-carotene reduces heart disease was tested in experiments, the results suggested the opposite: beta-carotene increases heart disease.

“Fat will become the new diet food” (via Hyperlipid).

Omega-6 is Bad For You

For a long time, nutrition experts have told us to replace saturated fats (solid at room temperature) with polyunsaturated fats (liquid at room temperature). One polyunsaturated fat is omega-6. Omega-6 is found in large amounts in corn oil, soybean oil, and most other vegetable oils (flaxseed oil is the big exception). According to Eat Drink and Be Healthy (2001) by Walter Willett (and “co-developed with the Harvard School of Public Health”), “replacing saturated fats with unsaturated fats is a safe, proven, and delicious way to cut the rates of heart disease” (p. 71). “Plenty of proof for the benefits of unsaturated fats” says a paragraph heading (p. 71). Willett failed to distinguish between omega-3 and omega-6.

A recent study in the BMJ shows how wrong Willett (and thousands like him) were. This study began with the assumption that omega-3 and omega-6 might have different effects, so it was a good idea to try to measure the effect of omega-6 separately.

They reanalyzed data from a study done in Sydney, Australia from 1966 to 1973. The study had two groups: (a) a group of men not told to change their diet and (b) a group of men told to eat more omega-6 by eating more safflower oil (and reducing saturated fat intake, keeping overall fat intake roughly constant). The hope was that the change would reduce heart disease, as everyone said it would.

As these studies go, it was relatively small, only about 500 subjects. The main results:

Compared with the control group, the intervention group had an increased risk of all cause mortality (17.6% v 11.8% [emphasis added]; hazard ratio 1.62 (95% confidence interval 1.00 to 2.64); P=0.051), cardiovascular mortality (17.2% v 11.0%; 1.70 (1.03 to 2.80); P=0.037), and mortality from coronary heart disease (16.3% v 10.1%; 1.74 (1.04 to 2.92); P=0.036).

A 50% increase in death rate! The safflower oil was so damaging that even this small study yielded significant differences.
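The arithmetic behind that headline number can be checked directly from the quoted percentages (the hazard ratio of 1.62 quoted above adjusts for follow-up time, so it differs a little from this crude ratio):

```python
# All-cause mortality from the quoted BMJ result.
control = 0.118        # control group
intervention = 0.176   # safflower-oil (high omega-6) group

relative_risk = intervention / control
print(round(relative_risk, 2))  # 1.49, i.e. roughly a 50% higher death rate
```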

The authors go on to show that this result (omega-6 is bad for you) is supported by other studies. Walter Willett and countless other experts were quite wrong on the biggest health issue of our time (how to reduce heart disease, the #1 cause of death).

Posit Science: Does It Work? (Continued)

In an earlier post I asked 15 questions about Zelinski et al. (2011) (“Improvement in memory with plasticity-based adaptive cognitive training: results of the 3-month follow-up”), a study done to measure the efficacy of the brain training sold by Posit Science. The study asked if the effects of training were detectable three months after it stopped. Henry Mahncke, the head of Posit Science, recently sent me answers to a few of my questions.

Most of my questions he declined to answer. He didn’t answer them, he said, because they contained “innuendo”. My questions were ordinary tough (or “critical”) questions. Their negative slant was not at all hidden (in contrast to innuendo). For the questions he didn’t answer, he substituted less critical questions. I give a few examples below. Unwillingness to answer tough questions about a study raises doubts about it.

His answers raised more doubts.

A Revolution in Growing Rice

Surely you have heard of Norman Borlaug, “Father of the Green Revolution”. He won a Nobel Peace Prize in 1970 for

the introduction of these high-yielding [wheat] varieties combined with modern agricultural production techniques to Mexico, Pakistan, and India. As a result, Mexico became a net exporter of wheat by 1963. Between 1965 and 1970, wheat yields nearly doubled in Pakistan and India.

He had a Ph.D. in plant pathology and genetics. He learned how to develop better strains in graduate school. He worked as an agricultural researcher in Mexico.

You have probably not heard of Henri de Laulanié, a French Jesuit priest who worked in Madagascar starting in the 1960s. He tried to help local farmers grow more rice. He had only an undergraduate degree in agriculture. In contrast to Borlaug, he tested simple variations that any farmer could afford. He found that four changes in traditional practices had a big effect:

• Instead of planting seedlings 30-60 days old, tiny seedlings less than 15 days old were planted.
• Instead of planting 3-5 or more seedlings in clumps, single seedlings were planted.
• Instead of close, dense planting, with seed [densities] of 50-100 kg/ha, plants were set out carefully and gently in a square pattern, 25 x 25 cm or wider if the soil was very good; the seed [density] was reduced by 80-90% . . .
• Instead of keeping rice paddies continuously flooded, only a minimum of water was applied daily to keep the soil moist, not always saturated; fields were allowed to dry out several times to the cracking point during the growing period, with much less total use of water.

The effect of these changes was considerably more than Borlaug’s doubling of yield:

The farmers around Ranomafana who used [these methods] in 1994-95 averaged over 8 t/ha, more than four times their previous yield, and some farmers reached 12 t/ha and one even got 14 t/ha. The next year and the following year, the average remained over 8 t/ha, and a few farmers even reached 16 t/ha.

The possibility of such enormous improvements had been overlooked by both farmers and researchers. They were achieved without damaging the environment with heavy fertilizer use, unlike Borlaug’s methods.

Henri de Laulanié was not a personal scientist but he resembled one. Like a personal scientist, he cared about only one thing (improving yield). Professional scientists have many goals (publication, promotion, respect of colleagues, grants, prizes, and so on) in addition to making the world a better place. Like a personal scientist, de Laulanié did small cheap experiments. Professional scientists rarely do small cheap experiments. (Many of them worship at the altar of large randomized trials.) Like a personal scientist, de Laulanié tested treatments available to everyone (just as I tested butter). Professional scientists rarely do this. Like a personal scientist, he tried to find the optimal environment. In the area of health, professional scientists almost never do this, unless they are in a nutrition department or school of public health. Almost all research funding goes to the study of other things, such as molecular mechanisms and drugs.

Personal science matters because personal scientists can do things professional scientists can’t or won’t do. de Laulanié’s work shows what a big difference this can make.

A recent newspaper article. The results are so good they have been questioned by mainstream researchers.

Thanks to Steve Hansen.

How to Encourage Personal Science?

I wonder how to encourage personal science (= science done to help yourself or a loved one, usually for health reasons). Please respond in the comments or by emailing me.

An obvious example of personal science is self-measurement (blood tests, acne, sleep, mood, whatever) done to improve what you’re measuring. But science is more than data collection, and the data need not come from you. You might study blogs and forums or the scientific literature to get ideas. Self-measurement and data analysis by non-professionals are much easier than ever before. Other people’s experience and the scientific literature are much more available than ever before. This makes personal science far more promising than ever before.

Personal science has great promise for reasons that aren’t obvious. It seems to be a balancing act: personal science has strengths and weaknesses, professional science has strengths and weaknesses. I can say that personal scientists can do research much faster than professionals and are less burdened with conflicts of interest (personal scientists care only about finding a solution; professionals care about other things, including publication, grants, prizes, respect, and so on). A professional scientist might reply that professional scientists have more training and support. History overwhelmingly favors professional science — at least until you realize that Galileo, Darwin, Mendel, and Wegener (continental drift) were not professional scientists. (Galileo was a math professor.) There is very little personal science of any importance.

These arguments (balancing act, examination of history) miss something important. In a way, it isn’t a balancing act. Professional science and personal science do different things. In some ways history supports personal science. Let me give an example. I believe my most important discovery will turn out to be the effect of morning faces on mood. The basic idea that my findings support is that we have a mood control system that requires seeing faces in the morning to work properly. When the system is working properly, we have a circadian rhythm in mood (happy, eager, serene during the day; unhappy, reluctant, irritable at night). The strangest thing is that if you see faces in the morning (e.g., 7 am) they have no noticeable effect until 6 pm the same day. There is a kind of uncanny valley at work here. If you know little about mood research, this will seem unlikely but possible. If you are an average professional mood researcher, it will seem much worse: can’t possibly be true, total nonsense. If you know a lot about depression research, however, you will know that there is considerable supporting research (e.g., in many cases, depression gets better in the evening). It will still seem very unlikely, but not impossible. However, if you’re a professional scientist, it doesn’t matter what you think. You cannot study it. It is too strange to too many people, including your colleagues. You risk ridicule by studying it. If you’re a personal scientist, of course you can study it. You can study anything.

This illustrates a structural problem:

[Graph: personal and professional science in plausibility space]

This graph shows what personal and professional scientists can do. Ideas vary in plausibility from low to high; data gathering (e.g., experiments) varies in cost from low to high. Personal scientists can study ideas of any plausibility, but they have a relatively small budget. Professional scientists can spend much more — in fact, must spend much more. I suppose publishing a cheap experiment would be like wearing cheap clothes. Another limitation of professional scientists is that they can only study ideas of medium plausibility. Ideas of low plausibility (such as my morning faces idea) are “crazy”. To take them seriously risks ridicule. Even if you don’t care what your colleagues think, there is the additional problem that testing them is unlikely to pay off. You cannot publish results showing that a low-plausibility idea is wrong. Too obvious. In addition, professional scientists cannot study ideas of high plausibility. Again, the only publishable result would be that your test shows the idea is wrong. That is unlikely to happen. You cannot publish results that show that something everybody already believes is true.

It is a bad idea for anyone — personal or professional scientist — to spend a lot of resources testing an idea of low or high plausibility. If the idea has low plausibility, the outcome is too likely to be “it’s wrong”. There are a vast number of low-plausibility ideas. No one can afford to spend a lot of money on one of them. Likewise, it’s a bad idea to spend a lot of resources testing an idea of high plausibility because the information value (information/dollar) of the test is likely to be low. If you’re going to spend a lot of money, you should do it only when both possible outcomes (true and false) are plausible.
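One way to make "information value" concrete (my framing, not anything from the post itself): treat the test as a yes/no question about an idea whose prior probability of being true is p. The expected information from the answer is the entropy of p, which peaks at p = 0.5 and vanishes at both extremes:

```python
import math

def entropy_bits(p):
    """Expected information (in bits) from a definitive test of an idea
    whose prior probability of being true is p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

for p in (0.01, 0.5, 0.99):
    print(p, round(entropy_bits(p), 2))
# Ideas of very low or very high plausibility yield about 0.08 bits each;
# an idea of middling plausibility yields a full bit.
```

On this toy accounting, a test of a fifty-fifty idea is worth roughly twelve times as much per dollar as a test of an idea that is 99% likely to be true (or false), which is the point of the paragraph above.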

This graph explains why health science has so badly stagnated — every year, the Nobel Prize in Medicine is given for something relatively trivial — and why personal science can make a big difference. Health science has stagnated because it is impossible for professionals to study ideas of low plausibility. Yet every new idea begins with low plausibility. The Shangri-La Diet is an example (Drink sugar water to lose weight? Are you crazy?). We need personal science to find plausible new ideas. We also need personal science at the other extreme (high plausibility) to customize what we know. Everyone has their quirks and differences. No matter how well-established a solution, it needs to be tailored to you in particular — to what you eat, when you work, where you live, and so on. Professional scientists won’t do that. My personal science started off with customization. I tested various acne drugs that my dermatologist prescribed. It turned out that one of them didn’t work. It worked in general, just not for me. As I did more and more personal science, I started to discover that certain low-plausibility ideas were true. I’d guess that 99.99% of professional scientists never discover that a low-plausibility idea is true; I’ve made several such discoveries.

Professional scientists need personal scientists to come up with new ideas plausible enough to be worth testing. The rest of us need personal scientists for the sake of our health. We need them to find new solutions and  customize existing ones.

More Trouble in Mouse Animal-Model Land

Mice — inbred to reduce genetic variation — are used as laboratory models of humans in hundreds of situations. Researchers assume there are big similarities between humans and one particular genetically-narrow species of mouse. A new study, however, found that the correlation between human genomic changes after various sorts of damage (“trauma”, burn, endotoxins in the blood, and so on) and mouse genomic changes was close to zero.

According to a New York Times article about the study, the lack of correlation “helps explain why every one of nearly 150 drugs tested at huge expense in patients with sepsis [severe blood-borne infection] has failed. The drug tests all were based on studies in mice.”

This supports what I’ve said about the conflict between job and science. If your only goal is to find a better treatment for sepsis, after ten straight failures you’d start to question what you are doing. Is there a better way? you’d wonder. After twenty straight failures, you’d give up on mouse research and start looking for a better way. However, if your goal is to do fundable research with mice — to keep your job — failures to generalize to humans are not a problem, at least in the short run. Failure to generalize actually helps you: It means more mouse research is needed.

If I’m right about this, it explains why researchers in this area have racked up an astonishing record of about 150 failures in a row. (The worst college football team of all time only lost 80 consecutive games.) Terrible for anyone with sepsis, but good for the careers of researchers who study sepsis in mice. “Back to the drawing board,” they tell funding agencies, who are likewise poorly motivated to react to a long string of failures. They know how to fund mouse experiments. Funding other sorts of research would be harder.

In the comments on the Times article, some readers had trouble understanding that 10 failures in a row should have suggested something was wrong. One reader said, “If one had definitive, repeatable, proof that the [mouse model] approach wouldn’t work…..well, that’s one thing.” Not grasping that 150 failures in a row is repeatable in spades.

When this ground-breaking paper was submitted to Science and Nature, the two most prestigious journals, it was rejected. According to one of the authors, the reviewers usually said, “It has to be wrong. I don’t know why it is wrong, but it has to be wrong.” 150 consecutive failed drug studies suggest it is right.

As I said four years ago about similar problems,

When an animal model fails, self-experimentation looks better. With self-experimentation you hope to generalize from one human to other humans, rather than from one genetically-narrow group of mice to humans.

Thanks to Rajiv Mehta.

Web Browsers, Black Swans and Scientific Progress

A month ago, I changed web browsers from Firefox to Chrome (which recently became the most popular browser). Firefox crashed too often (about once per day). Chrome crashes much less often (once per week?) presumably because it confines trouble caused by a bad tab to that tab. “Separate processes for each tab is EXACTLY what makes Chrome superior” to Firefox, says a user. This localization was part of Chrome’s original design (2008).

After a few weeks, I saw that crash rate was the only difference between the two browsers that mattered. After a crash, it takes a few minutes to recover. With both browsers, the “waiting time” distribution — the distribution of the time between when I try to reach a page (e.g., click on a link) and when I see it — is very long-tailed (very high kurtosis). Almost all pages load quickly (< 2 seconds). A few load slowly (2-10 seconds). A tiny fraction (0.1%?) cause a crash (minutes). The Firefox and Chrome waiting-time distributions are essentially the same except that the Chrome distribution has a thinner tail. As Nassim Taleb says about situations that produce Black Swans, very rare events (in this case, the very long waiting times caused by crashes) matter more (in this case, contribute more to total annoyance) than all other events combined.
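The point about rare events dominating can be put in rough numbers. The figures below are my guesses in the spirit of the estimates above (a typical load around 1.5 seconds, a crash costing a few minutes), not measurements:

```python
# Rough model of a browsing session with a long-tailed waiting-time distribution.
n_loads = 100_000
crash_rate = 0.001     # "a tiny fraction (0.1%?) cause a crash"
crash_cost = 180.0     # seconds to recover from a crash
normal_cost = 1.5      # seconds for an ordinary page load

crash_time = n_loads * crash_rate * crash_cost
normal_time = n_loads * (1 - crash_rate) * normal_cost
share = crash_time / (crash_time + normal_time)
print(round(share, 2))  # about 0.11: 0.1% of events cause ~11% of waiting time
```

Even counted in raw seconds, crashes contribute a hundred times their share of events; counted in annoyance (a crash interrupts work, a slow page does not), their share is presumably far larger. Halve the crash rate, as the Firefox-to-Chrome switch roughly did, and that slice shrinks in proportion, which is why crash rate was the only difference that mattered.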


Who Is Listened To? Science and Science Journalism

This book review of Spillover by David Quammen is quite unfavorable about Laurie Garrett, the Pulitzer-Prize-winning science journalist. Several years ago, at the UC Berkeley journalism school, I heard her talk. During the question period, I made a comment something like this: “It seems to me there is kind of a conspiracy between the science journalist and the scientist. Both of them want the science to be more important than it really is. The scientist wants publicity. The science journalist wants their story on the front page. The effect is that things get exaggerated, this or that finding is claimed to be more important than it really is.” Garrett didn’t agree. She did not give a reason. This was interesting, since I thought my point was obviously true.

The book review, by Edward Hooper, author of The River, a book about the origin of AIDS, makes a more subtle point. It is about how he has been ignored.

When I wrote The River, I did my level best to interview each of the major living protagonists involved in the origins-of-AIDS debate. This amounted to well over 600 interviews, mostly of two hours or more, and about 500 of which were done face-to-face rather than down the phone. Although the authors of the three aforementioned books (Pepin, Timberg and Halperin, Nattrass) all devote time and several pages to The River, and to claims that I definitely got it wrong, not one of them bothered to contact me at any point – either to challenge my findings, or to ask me questions. However, I have been contacted by someone through my website (a lawyer and social scientist) who asked me several questions, to all of which I responded. Later, this man read the first two of these three pro-bushmeat books and contacted the authors of each by email, to ask them one or two simple questions about their dismissal of the OPV hypothesis [= the AIDS virus came from an oral polio vaccine]. His letters to Pepin, Timberg and Halperin (which he later forwarded to me) were courteous and non-confrontational, and in two instances he sent three separate letters, but apparently not one of the authors could be bothered to reply to any of these approaches.

In other words, there is a kind of moat. Inside the moat are the respected people — the “real” scientists. Outside the moat are the crazy people, whom it is a good idea to ignore, even if they have written a book on the topic. Hooper and those who agreed with him were outside the moat.

Hooper quotes Quammen:

“Hooper’s book was massive”, Quammen writes, “overwhelmingly detailed, seemingly reasonable, exhausting to plod through, but mesmerizing in its claims…”

I look forward to the day that the Shangri-La Diet is called “seemingly reasonable”. Quammen and Garrett (whose Coming Plague has yet to come) write about science for a living. I have a theory about their behavior. To acknowledge misaligned incentives (scientists, like journalists, care about things other than truth) and power relationships (some scientists are in a position to censor other scientists and points of view they dislike) would make their jobs considerably harder. They are afraid of what would happen to them — would they be kicked out, placed on the other side of the moat? — if they took “crazy” views seriously. It is also time-consuming to take “crazy” views seriously (“massive . . . exhausting”). So they ignore them.

Elements of Personal Science

To do personal science well, what should you learn?

Professional scientists learn how to do science mostly in graduate school, mostly by imitation, although they might take a statistics class. Personal scientists rarely have anyone to imitate, so they have more need to understand basic principles. There are five skills/dimensions that matter. Here are a few comments about each one: Continue reading “Elements of Personal Science”

Why Quantified Self Matters

Why Quantified Self Matters is the title of a talk I gave yesterday at a Quantified Self conference in Beijing. I gave six examples of things I’d discovered via self-tracking and self-experiment (self-centered moi?), such as how to lose weight (the Shangri-La Diet) and be in a better mood. I said that the Quantified Self movement matters because it supports that sort of thing, i.e., personal science, which has several advantages over professional science. The Quantified Self movement supports learning from data, in contrast to trusting experts.

If I’d had more time, I would have said that personal science and professional science have different strengths. Personal science is good at both the beginning of research (when a new idea has not yet been discovered) and the end of research (when a new idea, after having been confirmed, is applied in everyday life). It is a good way to come up with plausible new ideas and a good way to develop them (assess their plausibility when they are still not very plausible, figure out the best dose, the best treatment details). That’s the beginning of research. Personal science is also a good way to take accepted ideas and apply them in everyday life (e.g., a medical treatment, an idea about deficiency disease) because it fully allows for human diversity (e.g., a medicine that works for most people doesn’t work for you, you have an allergy, whatever). That’s the end of research.

Professional science works well, better than personal science, when an idea is in a middle range of plausibility — quite plausible but not yet fully accepted. At that point it fits a professional scientist’s budget. Their research must be expensive (Veblen might have coined the term conspicuous research, in addition to “conspicuous consumption” and “conspicuous leisure”) and only quite plausible ideas are worth expensive tests. It also fits their other needs, such as avoidance of “crazy” ideas and a steady stream of publishable results (because ideas that are quite plausible are likely to produce usable results when tested). Professional science is also better than personal science for studying all sorts of “useless” topics. They aren’t actually useless but the value is too obscure and perhaps the research too expensive for people to study them on their own (e.g., I did research on how rats measure time).

In other words, the Quantified Self movement matters because it gives all of us a new scientific tool: a way to see easily what the scientific tools we already have cannot easily see.

More Sitting, More Diabetes: New Meta-Analysis

The first evidence linking exercise and health was a study of London bus workers in the 1950s. The drivers, who sat all day, had more heart attacks than the ticket takers on the same buses, who were on their feet all day. It was a huge advance — evidence, as opposed to speculation. The results were taken countless times to imply that exercise reduces heart attacks, but epidemiologists understood there were dozens of differences between the two jobs. For example, driving is more stressful than ticket taking. Maybe stress causes heart attacks.

The first time I learned about this study, I focussed on two differences. The ticket takers were more exposed to morning sunlight (on the top deck of double-decker buses) and they were on their feet much more. Maybe both of those things — morning sunlight exposure and standing a lot — improve sleep. Maybe better sleep reduces heart attacks. The London data were not consistent with the claims of aerobic exercise advocates because the ticket takers did nothing resembling aerobic exercise.

Later I discovered that walking an hour/day normalized my fasting blood sugar levels — another effect of “exercise” (but not aerobic exercise). I had data from only one person (myself), but it was experimental data, and the treatment difference between the two conditions being compared (no walking versus walking) was much sharper than in most epidemiology. I am sure the correlation reflects cause and effect: walking roughly an hour/day normalized my blood sugar. This wasn’t obvious. The first thing I tried to lower my fasting blood sugar was a low-carb diet, which didn’t work. I discovered the effect of long walks by accident.
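To make this kind of two-condition comparison concrete, here is a minimal sketch in Python. The fasting-glucose readings (mg/dl) are invented for illustration — they are not the actual data from the experiment described above.

```python
# Hypothetical fasting-glucose readings (mg/dl); invented for illustration.
from statistics import mean

no_walking = [96, 99, 94, 101, 97, 95]  # mornings after days without a long walk
walking = [84, 87, 82, 86, 85, 83]      # mornings after days with a nonstop hour-long walk

diff = mean(no_walking) - mean(walking)
print(f"no walking: {mean(no_walking):.1f}  walking: {mean(walking):.1f}  difference: {diff:.1f} mg/dl")
# prints: no walking: 97.0  walking: 84.5  difference: 12.5 mg/dl
```

The point of the sketch is how little machinery a sharp experimental contrast needs: when the treatment difference is this clean, a comparison of means is already informative.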

A recent meta-analysis combined several surveys that measured the correlation of how much you sit with other health measures. The clearest correlation was with diabetes: people who sit more are more likely to get diabetes. Comparing the two extremes (most sitting versus most standing), the risk doubled. Because people who stand more walk more, this supports my self-experimental findings.

I found pure standing (no walking), or leisurely (on-off) walking, did not lower fasting blood sugar (which I measured in the morning). After I noticed that walking an hour lowered blood sugar, I tried slacking off: wandering through a store or a mall for an hour. This did not lower fasting blood sugar. I concluded it had to be close-to-nonstop walking. Someday epidemiologists will measure activity more precisely — with Fitbits, for example. I predict the potent part of standing will turn out to be continuous walking. Long before that, you can see for yourself.

How Helpful Are New Drugs? Not So Clear

Tyler Cowen links to a paper by Frank Lichtenberg, an economist at Columbia University, that tries to estimate the benefits of drug company innovation by estimating how much new drugs prolong life compared to older drugs. The paper compares people equated in a variety of ways except the “vintage” (date of approval) of the drugs they take. Does taking newer drugs increase life-span? is the question Lichtenberg wants to answer. He concludes they do. He says his findings “suggest that two-thirds of the 0.6-year increase in the life expectancy of elderly Americans during 1996-2003 was due to the increase in drug vintage” — that is, to newer drugs.

An obvious problem is that Lichtenberg has not controlled for health-consciousness. This is a standard epidemiological point. People who adopt Conventional Healthy Behavior X (e.g., eat less fat) are more likely to adopt Conventional Healthy Behavior Y (e.g., find a better doctor) than those who don’t. For example, a study found that people who drink a proper amount of wine eat more vegetables. Another reason for a correlation between conventionally-healthy practices is mild depression. People who are mildly depressed are less likely to do twenty different helpful things (including “eat healthy” and “find a better doctor”) than people who are not mildly depressed. (And mild depression seems to be common.) Perhaps doctors differ. (Lichtenberg concludes there are big differences.) Perhaps better doctors (a) prescribe more recent drugs and (b) do other things that benefit their patients. Lichtenberg does not discuss these possibilities.
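The health-consciousness worry can be made concrete with a toy simulation (all numbers invented; this illustrates confounding in general, not a reanalysis of Lichtenberg's data). If a hidden trait raises both drug vintage and lifespan, newer drugs appear to extend life even when they have no causal effect at all:

```python
# Toy confounding simulation: health-consciousness (hidden) drives both
# drug vintage and lifespan; drug vintage itself has zero causal effect.
import random

random.seed(0)
n = 10_000
rows = []
for _ in range(n):
    hc = random.random()                          # hidden health-consciousness, 0..1
    vintage = hc + random.gauss(0, 0.3)           # health-conscious people get newer drugs
    lifespan = 75 + 5 * hc + random.gauss(0, 2)   # lifespan depends only on hc, not vintage
    rows.append((vintage, lifespan))

# Naive comparison that ignores the confounder: newer-drug half vs. older-drug half
rows.sort()
older = [life for _, life in rows[: n // 2]]
newer = [life for _, life in rows[n // 2:]]
naive_diff = sum(newer) / len(newer) - sum(older) / len(older)
print(naive_diff)  # clearly positive, even though vintage has no causal effect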

A subtle problem with Lichtenberg’s conclusion that we benefit from drug company innovation is that drug-company-like thinking — the notion that health problems should be “solved” with drugs — interferes with a better way of thinking: the notion that to solve a health problem, we should find out what aspects of the environment cause it. I suppose this is why we have Schools of Public Health — because this way of thinking, advocated at schools of public health, is so incompatible with what is said and done at medical schools. Public health thinking has a clear and impressive track record — for example, the disappearance of infectious disease as a major source of death. There are plenty of other examples: the drop in lung cancer after it was discovered that smoking causes lung cancer, the drop in birth defects after it was discovered that folate deficiency causes birth defects. Thinking centered on drugs has done nothing so helpful. Spending enormous amounts of money to develop new drugs shifts resources away from more cost-effective research: about environmental causes and prevention. Someone should ask the directors of the Susan G. Komen Foundation: Why “race for the cure”? Wouldn’t spending the money on prevention research save more lives?


Bayesian Shangri-La Diet

In July, a Cambridge UK programmer named John Aspden wanted to lose weight. He had already lost weight via a low-carb (no potatoes, rice, bread, pasta, fruit juice) diet; that was no longer an option. He came across the Shangri-La Diet. It seemed crazy, but people he respected took it seriously, so he tried it. It worked. His waist shrank by four belt notches in four months, with no deprivation at all. Continue reading “Bayesian Shangri-La Diet”

Assorted Links

Thanks to Anne Weiss and Dave Lull.

Measuring Yourself to Improve Your Health? Want to Guest-Blog?

What surprised me most about my self-experimental discoveries was that they were outside my area of expertise (animal learning). I discovered how to sleep better but I’m not a sleep researcher. I discovered how to improve my mood but I’m not a mood researcher. I discovered that flaxseed oil improved brain function but I’m not a nutrition researcher. And so on. This is not supposed to happen. Chemistry professors are not supposed to advance physics. Long ago, this rule was broken: Mendel was not a biologist, Wegener (continental drift) was not a geologist. It hasn’t been broken in the last 100 years. As knowledge increases, the “gains due to specialization” — the advantage of specialists over everyone else within their area of expertise — are supposed to increase. The advantage, and its growth, seem inevitable. It occurs, say economists, because specialized knowledge (e.g., what physicists know that the rest of us, including chemists, don’t know) increases. My theory of human evolution centers on the idea that humans have evolved to specialize and trade. In my life I use thousands of things made by specialists that I couldn’t begin to make myself.

Here we have two things. 1. A general rule (specialists have a big advantage, within their specialty, over the rest of us) that is overwhelmingly true. 2. An exception (my work). How can this be explained? What can we learn from it? I’ve tried to answer these questions but I can add to what I said in that paper. The power of specialization is clearly enormous. Adam Smith, who called specialization “division of labor”, was right. The existence of an exception to the general rule suggests there are forces pushing in the opposite direction (toward specialists being worse than the rest of us in their area of expertise) that can be more powerful than the power of specialization. Given the power of specialization, the countervailing forces must be remarkably strong. Can we learn more about them? Can we harness them? Can we increase them? The power of specialization has been increasing for thousands of years. How strong the countervailing forces may become is unclear.

The more you’ve read this blog, the more you know what I think the countervailing forces are. Some of them weaken specialists: 1. Professors prefer to be useless rather than useful (Veblen). 2. A large fraction (99%?) of health care workers have no interest in remedies that do not allow them to make money. 3. Medical school professors are terrible scientists. 4. Restrictions on research. Some of them strengthen the rest of us: 1. Data storage and analysis have become very cheap. 2. It is easier for non-scientists to read the scientific literature. 3. No one cares more about your health than you. These are examples. The list could be much longer. What’s interesting is not the critique of health care, which is pretty obvious, but the apparent power of these forces, which isn’t obvious at all.

I want to learn more about this. I want to learn how to use these opposing forces and, if possible, increase them. One way to do this is to find more exceptions to the general rule, that is, to find more people who have improved their health beyond expert advice. I have found some examples. To find more, to learn more about them, and to encourage this sort of thing (DIY Health), I offer the opportunity to guest-blog here.

I think the fundamental reason you can improve on what health experts tell you is that you can gather data. Health experts have weakened their position by ignoring vast amounts of data. Three kinds of data are helpful: (a) other people’s experiences, (b) scientific papers and (c) self-measurement (combined with self-experimentation). No doubt (c) is the hardest to collect and the most powerful. I would like to offer one or more people the opportunity to guest-blog here about what happens when they try to do (c). In plain English, I am looking for people who are measuring a health problem and trying to improve on expert advice. For example, trying to lower blood pressure without taking blood pressure medicine. Or counting pimples to figure out what’s causing your acne. Or measuring your mood to test alternatives to anti-depressants. I don’t care what’s measured, so long as it is health-related (exception: no weight-loss stories) and you approach the measurements with an open mind (e.g., not trying to promote some product or theory). I am not trying to collect success stories. I am trying to find out what happens when people take this approach.
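For anyone who wants to start on (c), here is a minimal sketch of a self-measurement log in Python. The file name and the measure (systolic blood pressure) are just example choices, not a prescription:

```python
# A dated CSV log of one health measure, plus a running summary.
# "bp_log.csv" and systolic blood pressure are example choices.
import csv
import datetime
import pathlib

LOG = pathlib.Path("bp_log.csv")
LOG.unlink(missing_ok=True)  # start fresh for this demo

def record(value, note=""):
    """Append today's reading to the log, writing a header on first use."""
    new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        w = csv.writer(f)
        if new:
            w.writerow(["date", "systolic_bp", "note"])
        w.writerow([datetime.date.today().isoformat(), value, note])

def summary():
    """Mean of all readings so far."""
    with LOG.open() as f:
        values = [float(row["systolic_bp"]) for row in csv.DictReader(f)]
    return sum(values) / len(values)

record(128, "after morning walk")
record(134)
print(summary())  # 131.0
```

A plain dated file like this is enough to compare conditions later (e.g., with medicine versus without), which is the whole point of the exercise.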

Guest-blogging may increase your motivation, push you to think more (“I blog, therefore I think”) and give you access to the collective wisdom of readers of this blog (in the comments). If guest-blogging about your experiences and progress (or lack of it) might interest you, contact me with details of what you are doing or plan to do.

Posit Science: Does It Help?

Tim Lundeen pointed me to the website of Posit Science, which sells ($10/month) access to a bunch of exercises that supposedly improve various brain functions, such as memory, attention, and navigation. I first encountered Posit Science at a booth at a convention for psychologists about five years ago. They had reprints available. I looked at a study published in the Proceedings of the National Academy of Sciences. I was surprised at how weak the evidence was that their exercises helped.

Maybe the evidence has improved. Under the heading “world class science”, the Posit Science website emphasizes a few of the 20-odd published studies. First on their list of “peer-reviewed research” is “the IMPACT study”, which has its own web page.

With 524 participants, the IMPACT study is the largest clinical trial ever to examine whether a specially designed, widely available cognitive training program significantly improves cognitive abilities in adults. Led by distinguished scientists from Mayo Clinic and the University of Southern California, the IMPACT study proves that people can make statistically significant gains in memory and processing speed if they do the right kind of scientifically designed cognitive exercises.

The study compared a few hundred people who got the Posit Science exercises with a few hundred people who got an “active control” treatment that is poorly described. It is called “computer-based learning”. I couldn’t care less that people who spend an enormous amount of time doing laboratory brain tests (1 hour/day, 5 days/week, 8-10 weeks) thereby do better on other laboratory brain tests. I wanted to know if the laboratory training produced improvement in everyday life. This is what most people want to know, I’m sure. The study designers seem to agree. The procedure description says “to be of real value to users, improvement on a training program must generalize to improvement on real-world activities”. Continue reading “Posit Science: Does It Help?”

Quantified Self Utopia: What Would It Look Like?

On the QS forums, Christian Kleineidam asked:

While doing Quantified Self public relations I lately meet the challenge of explaining how our lives are going to change if everything in QS goes the way we want. A lot of what I do in quantified self is about boring details. . . .  Let’s imagine a day 20 years in the future and QS is successful. How will that day be different than [now]?

Self-measurement has helped me in two ways. Continue reading “Quantified Self Utopia: What Would It Look Like?”