Gaining perspective

I guess it’s safe to consider my data criticism rather extensive, but there are other layers to consider. And though they are much less obvious, they seem far more important.

Generated with deepai.org.
This post is part of an on-going series. If you feel lost, you can get an overview here or jump to the start here.

Because the situation (as a whole) does not just document the technical investigations, but also the scientific process underpinning them. That part is obscure, but it is in no way less real, and in my opinion, it is much more relevant. In fact, that’s the dimension that keeps me going. Were it all about a few failed experiments, a few unreliable results, I wouldn’t care. That’s science. But so are honesty and integrity. And if researchers abandon those, the scientific process is eroded.

One of the more difficult things for me to grasp is how the previously discussed data ever made it to publication. Typically, (natural) science departments meet regularly for the specific purpose of discussing the current state of projects and newly attained results. It’s standard procedure, intended to ensure that any faulty data is caught before it leaves the institute. When I first started my employment, the department met twice a week. To the best of my knowledge, that was also the case for my predecessors. Which means the problematic data didn’t just make it past one or two pairs of eyes, but many: fellow students, postgraduates, PhD candidates and postdocs, not to mention the primary investigator.

Considering the two relevant theses were produced over a six-year period, it really must be questioned how often the data were discussed, and in what form. Did the researchers show what they ultimately published? Or did they provide deeper insight? Did anyone ever question the many indications of unreliable data? Were any of the seemingly endless issues ever actually addressed? And if so, why did none of it—or any of the necessary information to establish data reliability in the first place—make it into the final documents? Why did the authors choose such problematic graphs and results for their exemplary analysis? And why is everything else absent? Did anyone ever really analyse these results? Or did nobody even care?

But the circle of implicated people quickly grows larger, far beyond the department. The theses weren’t just supervised by one professor, but two, and the secondary supervisors both have their own research groups. As supervisors, it is their duty to critically review the documents, so, did they? Were any of the issues raised here identified? Were they discussed? Resolved? And if so, why are they all unmentioned, even in the later published articles? Which brings into question another aspect: Scientific publications go through a (supposedly tight) review process, during which anonymous experts critically analyse any findings before they can be published. Did the reviewers receive further information that dispelled the mentioned doubts?

And what about the research partners? One thesis details closer cooperation with another department, of another institute. The other stems from the same doctoral college I would later join—which had regular assemblies where students presented progress reports. Considering the doctoral college spanned multiple institutes and investigated twelve projects, a couple dozen scientists attended these meetings. Did honestly nobody notice?

And the circle grows larger still. After all, once I had discovered these issues, I collated as much detail as possible before contacting the appropriate bodies, all of which failed too. While the university’s ombudsman for student matters was unable to help, the ombudsman for scientific integrity was incorrectly listed, and the supposedly correct contact simply ignored my messages (and yes, I do have proof of it; I only avoid posting it because I’m respectful of people’s privacy). Meanwhile, one of the doctoral college’s faculty members was a (substitute) member of the scientific integrity ombudsman office. Of course, they didn’t bat an eye.

Then, lastly, the country’s highest authority refused to investigate the issue properly, too. It didn’t review my full statement, but forced me to rewrite the allegations to fit ever narrower conditions. Even so, they then only reviewed one single point of criticism, verified it as relevant, had the accused acknowledge the criticism’s validity, and then just… stopped looking. “Oopsie”, is all my opposition had to say. All of this, of course, despite my repeated protests against this obviously ridiculous approach. I called out that exact scenario before it had even happened. I also pointed them towards raw data that would prove it all, but they never even asked for it.

It’s thus fair to say the entire system failed. At each and every turn. From students, through colleagues, postdocs, supervisors, and assemblies, to journals, reviewers, safeguard departments and entire institutions. Every step along the way: negligence. Clear as day, and it can be made overtly obvious, too, by means of the academy of science’s own guidelines (I refer to the 2019 version; as of writing, the website somehow lists the 2015 version):

re: §1 Research Integrity

  1. Multiple researchers did not display “a willingness to subject oneself to professional criticism” and did not “respond to such criticism with reasoned argumentation”. My opposition only ever gave hypotheticals, saying the issues I described ‘would have been caught’. Their response to the commission’s investigation was a validation of the criticism, coupled with the claim it was all just an accident. The only other position they ever took was that the ‘exemplary graphs are just representative and not real data’. That’s neither a defence nor a reasoned argument. It’s an admission.
    The paragraph also speaks of “the responsible and fair treatment of junior scientists/researchers in particular”, but clearly, I was put at a decisive disadvantage, with the entire project resting on faulty results (not to mention all the other stuff).
  2. “[T]he Standards of Good Scientific Practice applicable to their respective fields” were obviously not upheld for extended periods of time and the responsible persons failed “to investigate and settle any doubts as to the applicable standards, to avoid research misconduct and to immediately remedy any misconduct detected”. There was no reaction, and never any attempt to remedy even just one of the issues.
  3. The department never “ensure[d] that the Standards of Good Scientific Practice [were] communicated […], defined in writing” nor “actually observed”. They also failed their responsibility of “drawing special attention to the risks of research misconduct” and the “retention and storage of data”. Neither did the university “ensure that the contact information of those persons and groups in charge of enforcing the Standards of Good Scientific Practice and investigating allegations of research misconduct at the research organisation is known and easily retrievable at all times”.
  4. The document actually explicitly details that supervisors of “especially projects related to diploma/master’s theses or doctoral studies” need to “ensure a research environment that enables junior researchers in particular to adhere to the Standards of Good Scientific Practice”. As mentioned above (§1.2), these rules include a responsibility to investigate and remedy scientific misconduct—and thus its reporting, too.

re: §2 Standards of Good Scientific Practice

  1. “Precise record keeping and documentation of the research process as well as the results in such a way that ensures that the studies/investigations are reproducible; this includes the collection of primary and original data (or raw material), which is transparent, seamlessly recorded and documented; where they serve as the basis for publications, these data and documents (e.g. laboratory notes) are to be stored on durable, backed-up data media at the research institution where they were generated […]”. I think my data criticism speaks for itself here. Vast swathes of the concerned results are incomprehensible, analysis procedures (and even experimental approaches) undocumented, and large amounts of data were irretrievably lost, due to non-existent data retention standards.

re: §3 Research misconduct

  1. Interestingly, the form of misconduct depends on how you interpret the situation: “Violations are deemed ‘wilful’ when a researcher considers a violation of the Standards of Good Scientific Practice possible and accepts that possibility when conducting research. Violations are deemed ‘conscious’ when a researcher considers a violation of the Standards of Good Scientific Practice not merely possible, but certain. Violations are deemed ‘grossly negligent’ in cases where a researcher shows blatant disregard for due diligence in a given research context and therefore fails to recognise that s/he is violating the Standards of Good Scientific Practice to a great extent; for example, this is the case where even the simplest, most obvious considerations are not taken into account and the researcher disregards considerations which should have occurred to any person”.
    Would you consider it obvious to contemplate how a change in instrument accessory affects the measurement? Is it an obvious problem when findings imply the investigated sample’s destruction? Or obvious to contemplate what shape of signal is expected under particular measurement conditions?
  2. The guidelines further detail which “actions in particular are to be considered research misconduct”, mentioning “the falsification of data”, for example by “selectively omitting data” or the “misleading interpretation of data”. The list extends to the “disposal of […] data before the applicable retention periods have passed”, and the “creating disadvantages to the career advancement of junior scientists and researchers who have reported potential research misconduct (whistle-blowers)”. All of them relevant in the context discussed here.

re: §4 Involvement in research misconduct

  1. Research misconduct, according to the statutes, can “also include involvement in other persons’ violations of the Standards of Good Scientific Practice”, for example through the “co-authorship of publications based on falsified data or otherwise generated through violations of the Standards of Good Scientific Practice; or neglect of supervisory obligations”.
  2. Furthermore, all authors of publications have a “joint responsibility for the publication’s adherence to the Standards of Good Scientific Practice”. Once again, I think the data criticism details enough.

Ironically, it is thus the academy of science’s own brochure that implicates a far larger circle of scientists than the original researchers themselves. Even the academy itself clearly failed their own standards, as did an entire population of scientists. And not on just one or two points, but on every single paragraph the academy lists. Of course, none of them suffered any consequences. I shouldered those.

It is difficult to explain just how difficult blowing the whistle is, or what challenges it entails. It is utterly isolating, hopelessly overwhelming, and stunningly imbalanced. Thanks to science’s extreme specialisation, there are but a few people who understand the field’s intricacies to the degree necessary, and chances are they can’t join the fight without serious professional risks. Others might want to help, but won’t be able to relate, nor provide the critical analysis the whistleblower needs. In fact, most people will discourage any pursuit of the matter because they’ll see just how self-destructive it is, and they know the chances of success are zero. And they are right. I’ve been fighting for years, and I have only pain to show for it. I burnt out, went through hell, and lost trust in my mind. I don’t know if I’ll ever be able to handle professional stress as well as I once could, or whether I’ll find joy in work ever again. Not to mention the financial strain. How many people are willing to waste years of their prime fighting for ideals, at the risk of health and mind?

The respective academy of science actually helped write the European Network of Research Integrity Office’s (ENRIO’s) Handbook on Whistleblower Protection in Research (of which I cite the 2023 version). That document goes on and on about the importance of properly supporting whistleblowers—legally, financially, psychologically, medically. They know full well that whistleblowers need “protection before, during and after an investigation”, and that universities have a duty to properly review reports, at the very least to avoid scandals and reputational damage. It also cites a “small Dutch study using two validated surveys [which] found that ’85% of whistleblowers suffered from severe to very severe anxiety, depression, interpersonal sensitivity and distrust, agoraphobia symptoms, and/or sleeping problems’” (van der Helden et al., 2019). Having gone through my experience, I remain entirely unsurprised. Maybe handbooks just aren’t enough.

Then there’s the disparity. Everyone trusts the expert and questions the whistleblower, who thus ends up with the burden of proof. Even though my opposition never showed the data their claims demand, it fell to me to provide the reasoning and analysis; yet when I pointed to the data that actually would prove it, nobody was willing to take even just a glance at it. Meanwhile, the opposition’s “oops” carried more weight than any of the criticism I ever forwarded. Not to mention that only a fraction of the investigating commission could even relate to the field (unless the Humanities, Law, or Economics also employ atomic force microscopy for their research). Out of curiosity: How much would you trust a commission that misspells the university’s name in their final statement?

Figure 1: The university’s namesake (Johannes Kepler) is misspelled in the academy of science’s commission’s final statement on the matter.

The imbalance is further exacerbated by another crucial factor: I can’t even show all the evidence. What data I collected are the department’s legal property, as are the relevant raw data I saw during my employment. Meanwhile, written correspondence and other pertinent information that isn’t published falls under confidentiality. All the whistleblower can do is forward information through appropriate channels, which, of course, are also confidential. But what happens if those channels fail (e.g. because their “secure system” crashes, as indeed happened), or the responsible body refuses to properly investigate (e.g. because the accusations are “too extensive to review”)? What then?

Figure 2: Statement posted after my pointing out that messages didn’t make it through the academy of science’s secure messaging system. It explains the used (BKMS-)system did not function properly between December and January 18th, that several messages did not reach the academy as a result, that the system provider has since restored functionality, and lastly asks whistleblowers to re-send messages originally transmitted during that timeframe.

For me, it is all the more reason to put it all out there, because this particular situation is well-documented by the authors themselves. The clearest, cleanest, least emotional way to approach this is through the data, and that is out there. What it shows is unequivocal, and the authors’ own publications confirm it.

It’s also difficult to detail the damages caused by it all. Financially speaking, we are talking about at least six figures, and that’s just the salaries paid while the project was stuck. Add in the salaries of the predecessors, the material, equipment, occupancy, and so forth, and the sum could easily reach seven figures. Not to mention the financial impact outside the department. My burnout still leaves me unable to work full-time, which is why I’m reorienting. Then consider the health repercussions, and last but not least, the reputational damage that may still follow. If anyone ever takes a closer look at all this, and realises just how problematic these publications are, it won’t just be the authors who have to answer for it, but everyone involved: all the institutes and universities.

I wanted to trust the system. Step by step, I followed procedure, reaching out further and further, only to be let down again and again. The harsh truth is that the system doesn’t want to reach clarity. It wants to protect itself, and thus the people already established. The individual is disadvantaged at every turn, and it is palpable. Frankly, it is far easier to simply ignore problems until they fade away, which they will. Over time, problematic publications will dissipate among the millions of articles published every year. What is unreliable is not reproduced, and will vanish from memory. By the time they are caught, if ever they are, it won’t matter any more. The authors will have received their titles and accolades, their doctorates and careers.

The final conclusion, then, is rather perverse: The more you respect the scientific process, the smaller your chances of academic success. Science is a slow virtue. It requires patience, self-reflection, granular analysis, and all of that needs time. If you take it, less diligent peers will surpass you. You don’t publish aplenty, you perish. If you’re honest about your findings and don’t oversell them, your funding will run out. If you admit to mistakes that inevitably happen in the pursuit of the unknown, you will be ousted. Trying to remedy mistakes will have you opposing institutes. And so, whoever tries to protect the scientific process ends up punished for it.

It is far more conducive to success to deny responsibility as much as possible. Don’t look at your co-worker’s data, and don’t discuss their results. Don’t scrutinise your findings more than absolutely necessary. Present as little as you can, as dressed up as you can. Be selective in what you show; highly selective. Overlook problem indicators. In fact, hide them, so nobody runs the risk of overthinking these obviously irrelevant issues. In like fashion, if someone doesn’t show the data that supports their claims, just assume they are right—but don’t build on their findings; pursue other avenues. Never ever admit mistakes, and no matter what you do, always maintain plausible deniability. At all cost.

Chances are no one will ever bother to check, anyway. Sleep sound in the knowledge it takes years to uncover any such issues, and even then, nothing will come of it. It’s why I feel responsible to see this through. My situation is unique in many facets, and one of them stands out: My opposition’s data is out there, freely available. Anyone can download it. Anyone can read it and anyone can check it. I encourage you to do so, because by the end of it, you won’t have to take my word. You can take theirs.

Continue the story...