1 in 3 Drugs Has Side Effects Discovered After Release to Market

2 in 3 researchers cannot replicate the findings of other researchers, which calls into question the majority of scientific research

Bad Science Is A Danger To Public Health

A new report in the Journal of the American Medical Association found that, of the drugs approved by the US Food and Drug Administration (FDA) from 2001 to 2010, almost one third had additional safety issues surface after they were brought to market.

And of the 222 prescription drugs approved over the decade, 71 (32%) had a “postmarket safety event.”

The lead author of the study, Dr. Joseph Ross, had this to say:

The large percentage of problems was a surprise… We know that safety concerns, new ones, are going to be identified once a drug is used in a wider population. That’s just how it is. The fact that that’s such a high number means the FDA is working hard to evaluate drugs and once concerns are identified, they’re communicating them.

It’s reasonable to expect some surprises when working with systems as complex as human biology.

Still, this doesn’t excuse the public humiliation scientists have been experiencing in the media recently, nor does it preclude us from asking why.

Why are there so many issues with pharmacology, given that even minor problems could pose serious health risks?

2 in 3 Researchers Can’t Replicate Other Scientists’ Studies

Much of it boils down to bad science, caused by the “publish or perish” mentality on university campuses.

Back in February, the BBC published an article reporting that “science is facing a ‘reproducibility crisis’ where more than two-thirds of researchers have tried and failed to reproduce another scientist’s experiments.”

This was in response to an immunologist having trouble replicating 5 separate cancer studies.

The whole idea behind peer-reviewed research is that it should be scrutinized in a way that ensures a high level of confidence in the results.

One way to do this is by testing to see if the study can be replicated.  If it can’t, then we should be less confident in the findings.
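To see why a failed replication matters, and why so many happen, consider a toy simulation. This is purely an illustration, not anything from the JAMA study or the Nature survey, and every number in it (a real effect of 0.3 standard deviations, 30 subjects per arm, 10,000 trials) is a hypothetical assumption:

```python
# Illustrative sketch only: shows how an underpowered study that finds a
# real effect still fails to replicate most of the time. All parameters
# below are hypothetical assumptions, not figures from the article.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_study(effect=0.3, n=30):
    """Simulate one two-arm experiment and return its t-test p-value."""
    control = rng.normal(0.0, 1.0, n)     # placebo arm
    treated = rng.normal(effect, 1.0, n)  # drug arm with a real, modest effect
    return stats.ttest_ind(treated, control).pvalue

trials = 10_000
originals = np.array([one_study() for _ in range(trials)])
published = originals < 0.05  # only "significant" results tend to get published

# Run an exact, independent replication of each "published" study.
replications = np.array([one_study() for _ in range(int(published.sum()))])

print(f"original studies reaching p < 0.05:  {published.mean():.0%}")
print(f"replications reaching p < 0.05:      {(replications < 0.05).mean():.0%}")
```

Under these assumptions, only about one exact replication in five reaches significance even though the effect is genuinely real. That’s exactly why replication needs to be systematic and well funded: a single small study tells us very little on its own.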

Even more frightening: according to Nature, 70% of researchers could not replicate another scientist’s findings when independently running the same experiments.

This means that plenty of studies are falling through the cracks of the peer-review system; frankly, junk science is being used to craft public policy.

Unfortunately, this is partly why our reverence for credentials is often unfounded.

There are 2 major problems that need to be addressed if we are to fix this:

1.  We need to put more emphasis on, and allocate more funding to, replication.  Part of the problem is that private researchers and institutions don’t want to invest time and money in replicating other people’s experiments because it isn’t sexy: there’s no chance to find “the next big thing” or to discover something new.

The system, as it is, prioritizes novelty over quality, which is becoming a major issue.

2.  Another problem is that science has become highly politicized: special interests fund science that tends to confirm their suspicions rather than impartial studies or basic research.

To make this point, just look at how the fake scientist Bill Nye relies on the “scientific consensus” to back up his political claims regarding climate change.

Science needs to focus more on the scientific method, and less on politics.

At its heart, we need to make science reproducible again.
