Who Watches the Watchmen? Blind Trust Isn’t Enough in Today’s Research Environment
by Shane Caldwell
In the spring of 2012, an article appeared in the niche crystallography journal Acta Crystallographica Section F (“Acta F” to those in the know). The article, “Detection and analysis of unusual features in the structural model and structure-factor data of a birch pollen allergen,” examines a protein structure published in a 2010 Journal of Immunology paper. Following a thorough analysis, author Bernhard Rupp (of textbook fame) concludes that there is:
“… no doubt that model and data of [structure] are incompatible and that the deposited [data] are not based on actual experiments, and their standard uncertainties are not based on experimental errors.”
Translated into everyday language, this reads: “The data isn’t real.” He is accusing the authors of data fabrication.
Technical language and a deferential tone mask the severity of this accusation. The author response, published in the same issue, cuts to the chase.
“The University of Salzburg immediately informed and commissioned the Austrian Agency for Research Integrity (OeAWI) to carry out an investigation into possible data fraud on the part of author Robert Schwarzenbacher, the co-author solely responsible for the Bet v 1d structure and the crystallographic section of the J. Immunol. paper. The OeAWI is presently preparing a report of this investigation.”
This is a great example of researchers and institutions taking the appropriate steps to address concerns of research misconduct. Then: “Author Schwarzenbacher admits to the allegations of data fabrication and deeply apologizes to the co-authors and the scientific community for all the problems this has caused.”
Oh. Well then. No need for an investigation after all, right? Well….
“Note added in proof: subsequent to the acceptance of this article for publication, author Schwarzenbacher withdrew his admission of the allegations.”
Presumably he talked to a lawyer.
So, what went wrong? We can infer that researchers working in one field (immunology) brought in a collaborator from an outside field (structural biology) to add complementary experiments and drum up the impact of the project. The steep learning curve and specialized techniques of structural biology meant that the other authors had to trust Schwarzenbacher’s work was conducted rigorously and honestly. Obviously, this trust was misplaced.
Almost two years after the Acta F paper was published, Schwarzenbacher has lost his job, though he has sued for wrongful termination. The structure in question has been mothballed in the Protein Data Bank, and his contribution to the Journal of Immunology paper has been removed. The paper still stands on its other experiments, as the authors argue that its conclusions did not depend on the fraudulent structure work. Regardless, it’s a black mark on the record of the co-authors, the journal, the funding agencies, the university, and the Austrian structural biology community. The reputations of many parties have suffered from the actions of one misguided researcher.
Most have been happy to hang the blame on Schwarzenbacher’s shoulders, and rightly so. But does he carry all of the responsibility? I’ll argue he doesn’t. Seven co-authors approved the work for publication. The Journal of Immunology, its reviewers, and its editor all gave it the stamp of approval. The department and university provided the environment where this misconduct could go unnoticed. The allergy and immunology research community also missed the fraud. Everyone appears to have been content to accept the credit and conclusions of presumably legitimate work, but once the fraud was brought to light, they immediately distanced themselves from the situation. This is not a sustainable practice.
This isolated case could be one of many to come. And fraud isn’t the only concern: even the most scrupulous researchers are subject to the insidious influence of wishful thinking. Critical mistakes can slip through when a “ringer expert” operates alone, without any scrutiny. As technology drives ever more complicated experiments, and granting agencies continue to reward multidisciplinary work, lone specialists will increasingly be required in collaborations, and the chances of fraud or major errors slipping through will increase.
Schwarzenbacher’s case is a particularly good example of this problem because there is no ambiguity about his fraud – even a novice crystallographer would find obvious problems with the data. Passing the data by a single critical eye could have caught the fabrication before publication. This drives home how the current environment allows ethical or methodological problems to slip through, leading to flawed or fraudulent conclusions. A mechanism is needed to improve oversight – the assumption of good faith is not sufficient. Peer review is supposed to provide this function, but here, as elsewhere, it failed.
Besides fixing peer review, how can future incidents like this be prevented? Movements like open data will play a role. Deposition of data is already a condition for publication of structural work, and Schwarzenbacher’s fraud was discovered through curation of the database. Post-publication peer review can also help identify problematic data and correct the scientific record, but neither of these mechanisms can prevent the initial publication of bad or fraudulent data.
What is really needed is a culture change. The institutions, journals, and researchers involved in multidisciplinary collaborations can’t escape responsibility for scientific oversight. This could mean some sort of institutional review, or a requirement to pass the data by a friendly but impartial third party. Most importantly, the community needs to accept that scientific integrity is the responsibility of everyone involved, not just the person who processed the data. Researchers need to take steps to ensure that they can stand by the integrity of all of the work to which they attach their names. Ignorance is a poor excuse.
It’s worth considering what drove Schwarzenbacher to cross the line. Rather than some master manipulation to trick his collaborators and journal reviewers, his actions look more like simple indifference and laziness. As highlighted by the partial (not full) retraction, the structural work wasn’t central to the study, and the faked data is obvious enough that he clearly didn’t expend much effort to cover his tracks. Perhaps he didn’t think anyone would notice or care, and if so, he was somewhat correct. His collaborators and research environment allowed him to cut a corner that he shouldn’t have cut. He chose expedience over honesty.
The editors of Acta F make the observation that
“It seems clear that the pressures on scientists early in their careers are so severe that a few are compelled to risk their careers in order to further them.”
I think it’s time everyone else assumed a little responsibility for letting it happen.

Find out more about the Schwarzenbacher debacle on RetractionWatch:
And in Nature:
As well as a relevant Nature editorial from around the same time:
Find me online @sj_caldwell, or send me an email at shane.caldwell17 [at] gmail.com