Of Mice (Studies) and Men

  • 26 Nov 2013
  • By Derek Lowe
  • 4 min read
Here's an article from Science on the problems with mouse models of disease.
For years, researchers, pharmaceutical companies, drug regulators, and even the general public have lamented how rarely therapies that cure animals do much of anything for humans. Much attention has focused on whether mice with different diseases accurately reflect what happens in sick people. But Dirnagl and some others suggest there's another equally acute problem. Many animal studies are poorly done, they say, and if conducted with greater rigor they'd be a much more reliable predictor of human biology.

The problem is that the rigor of animal studies varies widely. There are, of course, plenty of well-thought-out, well-controlled ones. But there are also a lot of studies with sample sizes that are far too small, with poor (or no) randomization, no blinding, and so on. As the article mentions (just to give one example), sticking your gloved hand into the cage and pulling out the first mouse you can grab is not an appropriate randomization technique. They aren't lottery balls - although some of the badly run studies might as well have used those instead.
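For anyone wondering what the alternative to cage-grabbing looks like, here's a minimal sketch (my own illustration, not anything from the Science article) of pre-assigned randomization: every animal gets its group from a seeded shuffle before anyone reaches into a cage, and the allocation can be audited afterwards. The animal IDs and group names are hypothetical.

```python
# A minimal sketch of pre-assigned randomization (illustration only):
# every animal is allocated to a group before handling begins.
import random

def randomize_animals(animal_ids, groups=("control", "treatment"), seed=42):
    """Shuffle the full animal list once, then deal animals out to the
    groups in round-robin order so group sizes stay balanced."""
    rng = random.Random(seed)   # fixed seed makes the allocation reproducible and auditable
    ids = list(animal_ids)
    rng.shuffle(ids)
    return {animal: groups[i % len(groups)] for i, animal in enumerate(ids)}

# Example: twelve hypothetical mice split evenly between two arms.
assignments = randomize_animals([f"mouse_{n:02d}" for n in range(1, 13)])
for animal, group in sorted(assignments.items()):
    print(animal, "->", group)
```

The point isn't the code itself, of course; it's that the assignment is decided, recorded, and balanced in advance, which is exactly what grabbing the slowest mouse in the cage is not.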
After lots of agitating and conversation within the National Institutes of Health (NIH), in the summer of 2012 [Shai] Silberberg and some allies went outside it, convening a workshop in downtown Washington, D.C. Among the attendees were journal editors, whom he considers critical to raising standards of animal research. "Initially there was a lot of finger-pointing," he says. "The editors are responsible, the reviewers are responsible, funding agencies are responsible. At the end of the day we said, 'Look, it's everyone's responsibility, can we agree on some core set of issues that need to be reported' " in animal research?
In the months since then, there's been measurable progress. The scrutiny of animal studies is one piece of an NIH effort to improve openness and reproducibility in all the science it funds. Several institutes are beginning to pilot new approaches to grant review. For an application based on animal results, this might mean requiring that the previous work describe whether blinding, randomization, and calculations about sample size were considered to minimize the risk of bias. . .
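To make the "calculations about sample size" part concrete, here's a rough sketch (my own, not anything from the NIH pilots) of the standard normal-approximation estimate of how many animals per group you'd need to detect a given effect in a two-sample comparison. The numbers plugged in at the end are made up purely for illustration.

```python
# Rough normal-approximation sample-size estimate for a two-sample,
# two-sided comparison (illustration only).
from math import ceil
from statistics import NormalDist

def animals_per_group(effect_size, sd, alpha=0.05, power=0.80):
    """Approximate n per group to detect a mean difference of `effect_size`
    when both groups share standard deviation `sd`."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z.inv_cdf(power)            # desired statistical power
    n = 2 * ((z_alpha + z_beta) * sd / effect_size) ** 2
    return ceil(n)

# Hypothetical example: a 1.0-unit difference against sd = 1.2
print(animals_per_group(effect_size=1.0, sd=1.2))   # about 23 animals per group
```

Numbers like that are exactly why underpowered studies are so common: twenty-three animals per group is a lot more work (and money) than the five or six that many published experiments actually use.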

Not everyone thinks that these new rules are going to work, though, or are even the right way to approach the problem:
Some in the field consider such requirements uncalled for. "I am not pessimistic enough to believe that the entire scientific community is obfuscating results, or that there's a systematic bias," says Joseph Bass, who studies mouse models of obesity and diabetes at Northwestern University in Chicago, Illinois. Although Bass agrees that mouse studies often aren't reproducible—a problem he takes seriously—he believes that's not primarily because of statistics. Rather, he suggests the reasons vary by field, even by experiment. For example, results in Bass's area, metabolism, can be affected by temperature, to which animals are acutely sensitive. They can also be skewed if a genetic manipulation causes a side effect late in life, and researchers try to use older mice to replicate an effect observed in young animals. Applying blanket requirements across all of animal research, he argues, isn't realistic.

I think, though, that there must be some minimum requirements that could be usefully set, even with every field having its own peculiarities. After all, the same variables that Bass mentions above - which are most certainly real ones - could affect studies in completely different fields. This, of course, is one of the biggest reasons that drug companies restrict access to their animal facilities. There's always a separate system to open those doors, and if you don't have the card to do it, you're not supposed to be in there. Pace the animal rights activists, that's not because it's so terrible in there that the rest of us wouldn't be able to take it. It's because they don't want anyone coming in there and turning on lights, slamming doors, sneezing, or doing any of four dozen less obvious things that could screw up the data. This stuff is expensive, and it can be ruined quite easily. It's like waiting for a four-week-long soufflé to rise.
That brings up another question - how do the animal studies done in industry compare to those done in academia? The Science article mentions some work done recently by Lisa Bero of UCSF. She was looking at animal studies on the effects of statins, and found, actually, that industry-sponsored research was less likely to find that the drug under investigation was beneficial. The explanation she advanced is a perfectly good one: if your animal study is going to lead you to spend the big money in the clinic, you want to be quite sure that you can believe the data. That's not to say that there aren't animal studies in the drug industry that could be (or could have been) run better. It's just that there are, perhaps, more incentives to make sure that the answer is right, rather than just being interesting and publishable.
Doesn't the same reasoning apply to human studies? It certainly should. The main complicating factor I can think of is that once a company, particularly a smaller one, has made the big leap into human clinical trials, it also has an incentive to find something that's good enough to keep going with, and/or good enough to attract more investment. So perverse incentives are, I'd guess, more of a problem once you get to human trials, because it's such a make-or-break situation. People are probably more willing to get the bad news from an animal study and just groan and say "Oh well, let's try something else". Saying that after an unsuccessful Phase II trial is something else again, and takes a bit more sang-froid than most of us have available. (And, in fact, Bero's previous work on human trials of statins seems to show various forms of bias at work, although publication bias is surely not the least of them).