Rose Gatfield-Jeffries is an honours student in history and philosophy of science at the University of Sydney. During the summer, she worked with Jason Chin and me on assessing the credibility and transparency of empirical legal research.
As background for our project, we sent Rose several contemporary articles and videos on meta-science, mostly from a list we stole from Simine Vazire. Rose sent us her thoughts on the articles and videos, below, which Jason and I were impressed by. We think many interested in meta-science will find her thoughts interesting.
Misinterpretations of evidence, and worse interpretations of evidence (video) — Fiona Fidler
It is interesting to hear a first-hand account of witnessing research fraud, and the subsequent path into a metascience approach. One of the key problems presented here is that not only are misconceptions rampant in psychology, but researchers are also (potentially willingly) oblivious to error and resistant to change. Daniel Kahneman once said that we are blind to the obvious, but we are also blind to our blindness (which is much worse). This comes through in the “why bother changing/better the devil you know” argument: some may prefer to remain blind rather than upend the status quo.
Towards a more self-correcting science (video) — Simine Vazire
I think this presentation was particularly effective at describing the landscape of meta-science, and the credibility side of the crisis. The notion that science has a culture, and that it is trust in this culture that creates trust in science, seems very hypocritical, as science (stereotypically) takes immense pride in being an objective paradigm free of cultural or social influence — presenting a very appealing argument for the social constructivists amongst us. Vazire seems to look the reality of science in the eye, which I highly appreciate, although I believe this is a rare attitude towards science outside the realm of HPS academia itself.
Journalists discuss the replication crisis (video) — Christie Aschwanden, Stephanie Lee, Ivan Oransky, Richard Harris
I very much liked this discussion. Hearing from science communication veterans and their respective angles was a valuable use of time — particularly the FiveThirtyEight perspective (a website I often use), noting that, in essence, correlation is by no means causation. Just because the data may show that people who cut the fat off their meat are more likely to be atheists does not mean the two are related in any sense other than coincidence. This stresses the importance of preregistration, because it prevents researchers from looking at data and deciding post-collection that a certain surprising relationship looks interesting.
The psychology of scientists: the role of cognitive biases in sustaining bad science (video) — Dorothy Bishop
One of the very first cognitive constraints in science that Bishop presents is “seeing patterns in noise”. I do wonder whether this can be extended to a paradigm-level analysis, where in some cases there is simply far too much research occurring, which will later be deemed the noise preventing the correct transmission and reception of the signal. Should science therefore aim for quality over quantity? Bishop’s later discussion of intentionality in research malpractice was, in my opinion, a bit difficult to agree with. I do not think that malpractice can hide behind a façade of unintentional omission or misinterpretation, because if a researcher expects to be at the top of their field, it is their job to ensure these ‘mistakes’ do not make it into the final publication. Perhaps this is an overly harsh line to take.
Barriers to conducting replications: reproducibility project in cancer biology (video) — Tim Errington
Errington stated that 0 of 51 papers and 0 of 197 experiments laid out instructions complete enough for a full protocol to be designed and the study replicated. I struggle to see how this slips past journal editors and the like, given that students as young as Year 9 are instructed to write experimental methods so that they can be followed by someone who has never set foot in the lab before. It is quite uninspiring to learn that professional scientists seem to have forgotten this (at least in essence). Proof is not solely in results data; it is inextricably entwined with methodology, and scientific publications ought to reflect this.
Embracing variation and accepting uncertainty (video) — Andrew Gelman
Gelman seems to hint at the idea that findings are unreplicable if the terms used cannot be defined properly: in “beautiful parents are more likely to have girls”, for example, ‘beautiful’ could be defined in any way imaginable (facial proportionality, success, personality, Western beauty, historical beauty). These norms do not stand still, and therefore neither would the results of experiments built on them, because the definitions of these norms are not democratic or formed through consensus.
How universities cover up scientific fraud — Justin Pickett
This was a very alarming read, if I am honest. Reading case studies of blatant scientific misconduct, and of the complete lack of ramifications from the responsible institutions, does not shed a positive light on the status quo of research practices. I completely agree with Pickett’s conclusion that scientific fraud should be independently investigated — it seems illogical that it is not already. My main takeaway from this article is how genuinely vital research on scientific malpractice is: it exposes misconduct and consequently forces research practices back onto a path of credibility and accountability.
Science in an age of skepticism — Rachel Ankeny
I remember being asked in a Political Economics of Climate Change seminar: why do you believe that global warming is a problem? The class, composed of Political Economy students, provided a united answer: “because the science says so.”
Trust in science is so important, as Ankeny argues in this article, because it ensures comprehensive action in response to scientific claims. Her introduction of Oreskes’ ‘instability of science’, or ‘pessimistic induction’, strikes a similar note to the ‘why bother changing’ argument Fidler demonstrates in her presentation. If science is inevitably going to be wrong, why are we even bothering to do it, let alone change its internal processes? This, in my belief, is where people enter the picture. Interview a climate-change refugee or a cancer patient in remission and one will quickly understand that science is pertinent to global wellbeing (and thus why its processes should be as credible as possible).
I remember reading once that a researcher gave a sociopathy diagnostic test to Silicon Valley CEOs and they scored incredibly high, because the traits of a sociopath (grandiose sense of self, fixation on gaining power, exploitation of other people to achieve personal ends, lack of sympathy) are also the traits of a risk-taking, capitalist-loving CEO. Correlation, again, does not necessarily mean causation. This is why I particularly agree with Franklin’s discussion of removing the psychopathy checklist from death-penalty sentencing. If we were to take Oreskes’ words above as gospel, no permanent action (taking a life) should be decided upon through purely scientific means (the PCL-R).
Yes, this is one small step for Nature, one giant leap for the advancement of credibility and transparency in science. Hopefully the next steps will include independent investigation into scientific fraud, mandated protocol inclusion in articles, and reproducible methodologies.
No raw data, no science — Tsuyoshi Miyakawa

This article is an excellent first-hand account of how endemic data fabrication actually is in scientific literature. To learn that 97% of the manuscripts Miyakawa flagged did not provide sufficient raw data to back their claims is alarming, and yet not entirely unexpected. I completely agree that without raw data (and methodological details about how that data was collected and analysed), a scientific claim is essentially baseless.