Wednesday, July 6, 2011

Why I'm standing by our study of dietary health claims in newspapers

On Monday, James Randerson, science and environment editor of the Guardian, published a critique of our study ("Ben Goldacre's study on dietary messages should be taken with a pinch of salt").

Most of his 2,400-word piece criticises us for a position that we do not hold (and we told him, before he wrote his piece, that we do not hold it).

His is a very long piece, so I've broken his objections down into sections. In summary, this is what he said:

• We shouldn't apply evidence-grading systems that were designed only to assess health advice to all science and environment stories (but we didn't, and we think that would be a stupid idea!).

• Claims with weak evidence were presented with lots of caveats (that's an interesting idea for a further study, although his examples of caveats don't look like caveats to me).

All research is done with limited resources, and therefore has methodological limitations, which are freely and openly discussed. I believe, as ever, that discussing the strengths and weaknesses of a specific study design is the absolute best way to understand science: so I'm very happy to go through each of James's arguments in more detail, and I also hope, for that same reason, that this post is interesting on its own merits.

What we did was very simple: we assessed the quality of evidence for every one of 111 dietary health claims in one week of newspapers. We found that overall, the quality of evidence for dietary advice was poor, and that this might lead to the public being misled, overall, routinely, by what they read in papers. We think it would be better if health advice in newspapers was generally based on stronger forms of evidence, but of course there will be times, even for the very specific issue of dietary advice, where there will be good reasons to write stories on weaker forms of evidence. However, since about 70% of the advice given fell into the lowest two categories of evidence, there may be a matter of scale here.

But bear in mind that you would have to propose a very strong effect. For the effect he proposes to work, high-quality articles must have been displaced from that week's papers while the low-quality ones remained. If we were to decide, for the sake of argument, that it is OK for 30% of nutritional health claims to be poorly supported by evidence, then to shrink our ~70% figure to ~30%, the Obama story would have had to displace three quarters of the high-quality articles and none of the low-quality ones. That may be true, but it is a very big selective effect, and a very large sample would be required to find out.
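The arithmetic behind that displacement claim can be sketched in a few lines. This is only a back-of-envelope check, assuming the rounded ~70% observed and ~30% "acceptable" figures from the paragraph above; with exact rounding the displaced fraction comes out a little above three quarters:

```python
# Back-of-envelope: if the "true" rate of poorly evidenced claims were 30%,
# what fraction of the high-quality articles would a big news story have to
# displace (while displacing no low-quality ones) to produce an observed 70%?

def displaced_fraction(true_low=0.30, observed_low=0.70):
    # Start with 1 unit of articles: true_low are low-quality, the rest high.
    # Remove a fraction f of the high-quality ones, so the observed share is
    #   true_low / (true_low + (1 - true_low) * (1 - f)) = observed_low
    # and solve for f.
    remaining_high = true_low * (1 - observed_low) / observed_low
    return 1 - remaining_high / (1 - true_low)

f = displaced_fraction()
print(f"Fraction of high-quality articles displaced: {f:.0%}")
```

Under these rounded inputs the required displacement is roughly four fifths of the high-quality articles, which is the scale of selective effect being discussed.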

It is perhaps worth noting at this stage that I'm not aware of many comparable, methodical quantitative assessments of media content that have received the kind of 2,400-word reception James has given ours. I would absolutely welcome this happening more, and more frequently, in the media.

Reading patterns into the smaller numbers on individual newspapers

First, it's worth noting that James tries to read patterns into the number of items, or claims, in certain individual newspapers in our sample. This kind of small subgroup analysis is generally regarded as extremely unwise, for the following reason: 111 claims in 37 articles is big enough to give a summary picture, but if those 111 claims and 37 articles are split ten ways across ten newspapers, the numbers become so small that chance is the best explanation for any differences between the newspapers. We explain this in our paper: we don't think the numbers are large enough to draw conclusions about the number of stories in each newspaper, or about the quality of evidence for the claims in any individual newspaper.

We could, however, compare broadsheets against tabloids, since the numbers in each of those two groups were still reasonably large, and here we found a modest difference: using the WCRF criteria, 67% of broadsheet health claims came from the lowest two categories of evidence, against 74% in the tabloids (p = 0.02 for those who are interested), so the difference wasn't very dramatic.
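For readers who want to see how a comparison of two proportions like this works, here is a minimal sketch of a two-sample z-test for proportions. The per-group claim counts below are invented purely for illustration (the paper's actual broadsheet/tabloid counts would be needed to reproduce the quoted p-value):

```python
import math

def two_proportion_test(x1, n1, x2, n2):
    """Two-sided z-test for equality of two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                 # pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF, via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# HYPOTHETICAL split of the 111 claims (not the paper's real counts):
z, p = two_proportion_test(x1=40, n1=60, x2=38, n2=51)
print(f"z = {z:.2f}, p = {p:.2f}")
```

Note that with small groups a difference of a few percentage points is hard to distinguish from chance, which is exactly why we only compared the two large groups rather than individual newspapers.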

James insists on drawing conclusions from the number of claims in individual newspapers in that one week, and tries to explain the patterns he believes he has seen. I don't think that's valid, and we have explained why it is unwise. If James has an explanation of why random chance is not the best explanation for those patterns, at such tiny numbers for individual subgroups and newspapers, then he should say so.

You might be tempted to join him, and try to see patterns in the noise (the dangers of doing this really are something I've covered several times in the column). Maybe you want to say, for example, after you've seen the results, that health claims in The Times are less well evidenced than in some other papers. That might feel plausible. But even if it feels like true knowledge, it might still be a coincidence (and remember, this was the first time anyone had sampled a week of newspapers and counted all the claims). Does it feel plausible to you that the Mail did pretty well on quality of evidence? Probably not, I suspect. Does your view on cherry-picking individual newspapers change now that the result goes against your expectations? It shouldn't: it's probably all noise, and you just shouldn't do it!
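The "patterns in the noise" point is easy to demonstrate with a quick simulation. This sketch assumes, for illustration only, that every newspaper has the same true 70% rate of poorly evidenced claims, and that the 111 claims are spread unevenly across ten papers:

```python
import random

random.seed(1)

# Suppose ALL newspapers have the same true rate of low-evidence claims,
# and the 111 claims are spread thinly across ten papers (invented split).
TRUE_RATE = 0.70
claims_per_paper = [20, 18, 15, 13, 11, 10, 9, 6, 5, 4]   # sums to 111

observed = []
for n in claims_per_paper:
    # Each claim is independently low-evidence with probability TRUE_RATE
    low = sum(random.random() < TRUE_RATE for _ in range(n))
    observed.append(low / n)

print("Observed per-paper rates:", [f"{r:.0%}" for r in observed])
print(f"Spread: {min(observed):.0%} to {max(observed):.0%}")
```

Even though every simulated paper behaves identically, the per-paper rates scatter widely around 70% purely by chance, so apparently "good" and "bad" newspapers emerge even when no real differences exist.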

The "Goldacre criteria", and our paper

Summary, and thoughts on improving research

Although it may be uncomfortable for people working in the media, this is a legitimate phenomenon to investigate, and to try and document. People make real world decisions based on the information that they receive through the media, and this has very real consequences for their health. If they are being routinely misled, then this is an important and serious public health issue.


