Transparency in research

It has been reported that Tamiflu does not have the effect it was thought to have. The report tells us something about how pharmaceutical companies act. Many countries purchased Tamiflu in large quantities in connection with the bird flu scare of 2005–2006 and with the swine flu pandemic that the WHO declared in 2009. The drug was purchased from Roche for hundreds of millions; even Sweden bought it for hundreds of millions of Swedish crowns.

There were many critical voices against the mass purchase, especially from the Cochrane network. The network compiled many reports on the effects of the drug, many of them sponsored by the pharmaceutical companies. These reports could not show any significant effect of Tamiflu, but what was also interesting was that many trials had been kept secret and their results never published. This points to another problem, besides tax money being spent on a drug with no significant effect: pharmaceutical companies withhold research results and research data. They rarely publish negative results, which makes it difficult to make well-informed decisions on, for example, which medication to choose. Say there are 80 studies: 40 of them are positive and all of those are published, while 40 show no effect or even negative effects and only a few of those are published. What we then see is that the medication has a positive effect in most cases, but the truth is more complicated. Would you like to take a medicine which in reality had no effect, as with Tamiflu?
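To make the arithmetic of the 80-study example concrete, here is a minimal sketch in Python. The publication probabilities are assumptions for illustration only, not figures from the Tamiflu case:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# The hypothetical 80 studies from above: 40 positive, 40 negative.
# Assumed for illustration: every positive study is published,
# but only about one in ten negative studies is.
outcomes = [True] * 40 + [False] * 40        # True = positive result
publish_prob = {True: 1.0, False: 0.1}

published = [o for o in outcomes if random.random() < publish_prob[o]]

print(f"Positive share of all studies:       {sum(outcomes) / len(outcomes):.0%}")
print(f"Positive share of published studies: {sum(published) / len(published):.0%}")
```

The first figure is 50%, but the second lands around 90%, because the unpublished negative studies are invisible to anyone reading the literature.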

Science and research should be built on transparency and access to both research results and research data/datasets, so that we can make decisions based on reliable data. Read more on the Tamiflu case in What the Tamiflu saga tells us about drug trials and big pharma, published in The Guardian.

We wrote about the value of negative and inconclusive results on the blog in connection with Open Access Week last fall.

Pieta Eklund

Negative results = no results, or a contribution to knowledge?

Negative results, that is, results which do not support the stated hypothesis or which fall too far from the expected results, are rarely analyzed, and they are not published as often either. There is a bias, so-called publication bias, towards publishing the positive: publishing the results which support the hypothesis. This is not seen as a research ethics problem. The problem is partly due to the publish-or-perish culture: there is competition to publish and to attract citations in order to compete for research funding. Publication bias gives a distorted image of a research area and its literature. It can even lead to researchers manipulating research data.

Scientific journals are not interested in publishing replication studies because they lack news value. Nor are they interested in publishing negative results, even though a lot of important research which we regard as the truth has later proved impossible to replicate. Some postdocs went as far as creating the Journal of Negative Results: Ecology & Evolutionary Biology, but this is not a sustainable solution. Instead, more research data should be made available and the norms of scientific communication should change: researchers should describe exactly what they have done. Maybe this was not possible earlier, with print journals and their word limits; with electronic journals and open access it now is. Many open access journals are happy to give more space to extended methodological descriptions and discussions.

In his TED talk What doctors don’t know about the drugs they prescribe, Ben Goldacre tells of a study which appeared to show that some university students could see into the future. We hear only about the times when someone has managed such a feat, which may lead us to believe in it. In the same way, we believe that a scientific article is correct, that a certain medicine works well against, say, depression. What we do not know is that only the positive results have been reported: we hear only of the cases when the oracle was right, never of the times it was wrong. Within medicine and pharmacy, reporting only the positive can be lethal. Goldacre gives an example of a medicine which had been examined in a number of trials, 38 with positive results and 36 with negative results; 37 of the positive trials were published, but only 3 of the negative ones. In The Power of Negative Thinking, Jennifer Couzin-Frankel writes about the same phenomenon: only a fraction of the studies could be replicated.
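The arithmetic behind Goldacre’s example is worth spelling out. The following small Python calculation, a sketch using only the trial counts quoted above, shows how different the published literature looks from the full evidence base:

```python
# Trial counts from Goldacre's example above: 38 positive and 36 negative
# trials were run, but 37 of the positive and only 3 of the negative
# trials were published.
positive_total, negative_total = 38, 36
positive_published, negative_published = 37, 3

actual = positive_total / (positive_total + negative_total)
apparent = positive_published / (positive_published + negative_published)

print(f"Positive trials in reality:        {actual:.1%}")    # 51.4%
print(f"Positive trials in the literature: {apparent:.1%}")  # 92.5%
```

A reader of the journals would conclude that the evidence is overwhelmingly positive, when in reality the trials were split almost evenly.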

Researchers should be encouraged to publish negative results, and it should also become easier to do so. PeerJ, an open access journal in medicine and biology, writes on its web page that it publishes methodologically and theoretically sound articles; the results do not have to be newsworthy. PeerJ writes that ”negative/inconclusive results are acceptable”. It also writes that all research data should be available to the reviewers and, if possible, made available to others as well. It even publishes the reviewers’ comments. Maybe one of the reasons PeerJ is able to work this way is that it is a newly started open access journal. Scientific publishing is quite a traditional and protectionist area: publishers are not quick to adopt new ways of working.

To advance knowledge, ambiguous results should be published and research data should be made easily available, which is exactly what open access works for.

Watch Ben Goldacre’s TED talk on publication bias and its possible effects.

The film is about 14 minutes long.

Read also: Fanelli, D. (2012). Negative results are disappearing from most disciplines and countries. Scientometrics, 90, 891–904. doi:10.1007/s11192-011-0494-7

Also read the short essays on negative results in Marine Ecology Progress Series from 1999. The essays discuss negative results and remind us that even positive results should be viewed with skepticism.

Pieta Eklund

“Truthiness” & “sciencey” in academia

On the 4th of October, Science published an article which was described as a “sting operation” against open access journals. The aim was to create a fake article with obvious methodological and scientific problems and then send it to a number of open access journals to see which of them would publish it. The problem with the study is that the researcher (John Bohannon, biologist and science journalist) knew that the OA journals the article was sent to were predatory open access journals, and that he did not have a control group of traditional toll access journals.

The article has generated a lot of discussion and even some well-written satirical blog posts, e.g. Dr. Mike Taylor’s Anti-tutorial: how to design and execute a really bad study. Mike Taylor is a paleontologist in the Department of Earth Sciences at the University of Bristol, UK.

Taylor writes: “…It has that sciencey quality. It discusses methods. It has supplementary information. It talks a lot about peer-review, that staple of science. But none of that makes it science. It’s a maze of preordained outcomes, multiple levels of biased selection, cherry-picked data and spin-ridden conclusions. What it shows is: predatory journals are predatory. That’s not news.”

Because of its flaws, Bohannon’s article does not help us form an objective opinion on OA journals; instead of describing something true, it gives a face to “truthiness”, as Gary F. Daught describes it. “Truthiness” is defined in Wikipedia as “a quality characterizing a ‘truth’ that a person making an argument or assertion claims to know intuitively ‘from the gut’ or because it ‘feels’ right without regard to evidence, logic, intellectual examination, or facts”.

This is not the first time the peer review process has come under scrutiny. In 1996 Alan Sokal tested Social Text, a highly regarded cultural studies journal, and just a few weeks ago it was reported that some Serbian researchers had tested the peer review process. Not only was their article below par, it had also cited, among others, B. Sagdiyev (Borat), A. S. Hole and Michael Jackson.

The next time someone conducts a study like this, it would be great if they made a serious attempt: tested both highly regarded OA journals and TA journals, and then drew the conclusions the results allow instead of spinning them into something unrecognizable.

Pieta Eklund

COPE – Committee on Publication Ethics

COPE offers advice to editors on different aspects of publication ethics, especially how research misconduct should be handled.

COPE has a database of all the cases that have been discussed by the COPE members and forum. It is a good source when you are interested in what kinds of cases are discussed, and when you need guidance on how to act in a case of suspected misconduct. You might, for example, have questions about how the data was analyzed, since that can affect the reliability of a study. You can search the database or browse it by category and year. There are a number of categories, such as authorship, falsification of research data, plagiarism, etc.

COPE was formed in 1997 by a small group of editors of medical journals, and today it has over 7000 members across all academic areas. Membership is open to editors of scientific journals who are interested in publication ethics. All COPE members should follow the Code of Conduct for Journal Editors. The document is 11 pages long and describes recommendations and best practice for an editor, such as continually improving the journal and always being willing to publish corrections and retractions. Among other things, the code demands that a journal keeps its author guidelines up to date, offers authors the possibility to appeal the editor’s decisions, and makes sure that the peer review process is fair, unbiased and timely. Editors should also work to ensure that research is ethical, especially research involving humans or animals. There are also guidelines on how an editor should act when research misconduct is suspected, such as plagiarism, falsification of research data, or an author being added to or removed from the author list prior to publication. COPE has a number of flowcharts for editors to follow when misconduct is suspected.

If you are a journal editor you can become a member of COPE. The membership fee varies somewhat depending on how many issues your journal publishes a year; if you publish articles without having an organised journal, the fee is based on the number of articles you publish. There is also an organizational pricing model.

Pieta Eklund