ReproducibiliTea Blog

The replication crisis in psychology: Pre-registration and Registered Reports as crusaders for a brighter future | 20/01/23 | Dr Roman Briker

In our first Edinburgh ReproducibiliTea session of 2023, Dr Roman Briker gave a talk on pre-registration and Registered Reports. Dr Briker is an Assistant Professor in Organisational Behaviour at Maastricht University School of Business and Economics, and an Open Science Ambassador at the School of Business and Economics.

Reproducibility crisis and questionable research practices

Many academic journals are primarily interested in significant results (those with a p-value below 0.05). Dr Briker shared a personal experience of writing his first paper and spotting an error in his draft that would affect the results of his statistical analyses. He was worried that correcting the error would render his findings non-significant, and that journals would no longer be interested in his work.

This led Dr Briker to realise that this is not how research should work. In his talk, he suggested that the current model of academic publishing – the culture of “publish or perish” – contributes to the reproducibility crisis, as significant results are published while non-significant results are filed away. He also gave examples of scientific fraud, including the cases of Dan Ariely and Daryl Bem, and cited a survey suggesting that 8% of Dutch scientists have at some point falsified data. Overall, the emphasis on significant outcomes draws attention away from rigorous methodology.

Dr Briker noted that the issue of irreproducibility affects all fields of research, and that only an estimated 25% to 60% of scientific findings are replicable. He spoke about questionable research practices that have been allowed to thrive in our current research culture, including HARKing (Hypothesising After Results are Known), selective reporting, optional stopping of experiments, changing control variables, playing around with outliers, changing inclusion or exclusion criteria, using different analytical methods, and rounding off p-values (e.g. reporting a p-value of 0.053 as p < 0.05).
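The cost of one of these practices, optional stopping, can be demonstrated with a small simulation. The sketch below (in Python, with illustrative parameters of our own choosing, not values from the talk) repeatedly "peeks" at a t-test between two groups drawn from the same distribution and stops as soon as p < 0.05:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def optional_stopping_trial(max_n=100, start_n=10, step=10, alpha=0.05):
    """Simulate one experiment under the null hypothesis (no true effect),
    peeking at the p-value after every `step` participants per group and
    stopping as soon as it drops below `alpha`."""
    a = rng.normal(size=max_n)  # group A, drawn from the same distribution...
    b = rng.normal(size=max_n)  # ...as group B, so any "effect" is pure noise
    for n in range(start_n, max_n + 1, step):
        p = stats.ttest_ind(a[:n], b[:n]).pvalue
        if p < alpha:
            return True  # stopped early with a "significant" result
    return False

trials = 2000
false_positives = sum(optional_stopping_trial() for _ in range(trials))
rate = false_positives / trials
print(f"False-positive rate with optional stopping: {rate:.2f}")
```

Even though there is no true effect, peeking after every batch of participants pushes the false-positive rate well above the nominal 5% – which is exactly why a pre-registered stopping rule matters.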

Pre-registration and registered reports

Dr Briker suggested pre-registration and Registered Reports as potential solutions to these problems.

A pre-registration is a publicly time-stamped pre-specification of a research study design, including hypotheses, required sample sizes, exclusion criteria, and planned analyses. It is completed prior to data collection and is not peer-reviewed (Logg & Dorison, 2021).
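The "required sample size" element of a pre-registration is typically justified with an a priori power analysis. Below is a minimal sketch in Python, assuming an independent-samples t-test and illustrative values for effect size, alpha and power; none of these numbers come from the talk:

```python
import math
from statsmodels.stats.power import TTestIndPower

# Illustrative planning assumptions (not values from the talk):
effect_size = 0.5  # expected effect, Cohen's d
alpha = 0.05       # significance threshold
power = 0.80       # desired probability of detecting the effect if it exists

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=alpha, power=power, alternative="two-sided"
)
print(f"Pre-register a plan to recruit at least {math.ceil(n_per_group)} "
      "participants per group")
```

Writing this calculation into the pre-registration before data collection removes the temptation to keep recruiting until a result turns significant.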

A Registered Report goes further than a pre-registration, including the introduction, theory and hypotheses, and proposed methods and analyses (Chambers & Tzavella, 2022). It is submitted to a journal, or to a platform such as Peer Community In Registered Reports, for peer review prior to data collection. Once the Registered Report is approved by reviewers, it gains in-principle acceptance for publication, and the results will be published whether or not they are significant, as long as the plan outlined in the Registered Report is followed.

In his talk, Dr Briker explained what parts of a study design should be pre-registered, and gave an example of his own pre-registration. He also highlighted a number of templates available, and busted some myths surrounding concerns researchers may have about pre-registering a study.

Slides, references and pre-registration templates mentioned in Dr Briker’s talk are available on OSF: https://osf.io/cgkua/

The session recording is available on our YouTube channel.

This blog is written by Emma Wilson

SOCIALS:

For any questions/suggestions, please send us an email at edinburgh.reproducibilitea@ed.ac.uk

ReproducibiliTea Blog

Open Research Across Disciplines | 16/12/22 | Emma Wilson

In our December session of Edinburgh ReproducibiliTea, Emma Wilson presented a session on open research practices across disciplines. Emma is a PhD student at the Centre for Clinical Brain Sciences.

The session focused on the UK Reproducibility Network’s list of open research case studies, examples, and resources for various research disciplines: https://www.ukrn.org/disciplines/

The list of resources can be cited as follows:

Farran EK, Silverstein P, Ameen AA, Misheva I, & Gilmore C. 2020. Open Research: Examples of good practice, and resources across disciplines. https://doi.org/10.31219/osf.io/3r8hb

What is open research?

Open research is all about making research practices and findings more transparent and accessible. The University of Edinburgh defines open research as “research conducted and published via a combination of two or more of the following attributes:

  • Open Access publication
  • Open research data
  • Open source software and code
  • Open notebooks
  • Open infrastructure
  • Pre-registration of studies”

We use the term open research instead of open science as it is more inclusive of the broad spectrum of work that takes place at the University.

Open research across disciplines resource

The UK Reproducibility Network (UKRN) have produced a document and webpage with examples of open research practices across different research disciplines. The document is updated each autumn and was last updated in October 2022.

The resource covers 28 disciplines from Archaeology & Classics to Veterinary Science. New resources can be added to the collection via this Google Form.

Examples of open research across different disciplines

Emma chose a few example resources to talk about in her presentation.

Art & Design: Open Access at the National Gallery of Art

The National Gallery of Art have an open access policy for public domain artworks. You can search and download over 50,000 artworks on their website, and they have made a dataset of information on over 130,000 artists and artworks available on GitHub.

Artificial Intelligence: recommendations on creating reproducible AI

In 2018, Gundersen, Gil and Aha published an article describing recommendations on creating reproducible artificial intelligence.

Economics: case study from a PhD student

Dr Marcello De Maria, a graduate from the University of Reading, describes the benefits of open research within economics.

Engineering: open source 3D printing toolkit

Slic3r is open source software that converts 3D models into printing instructions for a 3D printer. It has a large GitHub community involved in creating and maintaining the code, and the project takes pride in providing the resource to the community for free.

Music, Drama and Performing Arts, Film and Screen Studies: podcast on making music research open

Alexander Jensenius, Associate Professor at the Department of Musicology – Centre for Interdisciplinary Studies in Rhythm, Time and Motion (IMV) at the University of Oslo, discusses open research within the context of music research in a podcast hosted by the University Library at UiT, the Arctic University of Norway. He also discusses MusicLab, an event-based project which aims to collect data during musical performances and analyse it on the fly.

Physics: citizen science project case study

In this case study, Professor Chris Scott, Dr Luke Barnard, and Shannon Jones discuss a citizen science project they ran on the online platform Zooniverse. Their project focused on analysing images of solar storms, and four thousand members of the public took part.

Barriers to open research

In the final section of her presentation, Emma discussed some of the barriers that may prevent researchers from working openly. These included:

  • Funding and finances (e.g. to pay open access publishing fees)
  • Time and priorities (e.g. time required to learn new skills, and supervisor / lab cultures around open research practices)

Finally, the session closed with a discussion around the implementation of open research in different disciplines, and whether all researchers and disciplines should be judged the same when it comes to this implementation.

The slides for Emma’s talk are available on our OSF page and the session recording is available on YouTube.

This blog is written by Emma Wilson


ReproducibiliTea Blog

Introducing FAIRPoints and FAIR + Open Research for Beginners | 18/11/22 | Dr Sara El-Gebali

In our November session of Edinburgh ReproducibiliTea, we were joined by Dr Sara El-Gebali. Sara is a Research Data Manager, Co-Founder of FAIRPoints and Project Leader of LifeSciLab. In her talk, Sara introduced FAIRPoints, an event series highlighting pragmatic community-developed measures towards the implementation of the FAIR data principles, and some of the projects currently ongoing at FAIRPoints.

What is FAIR?

FAIR stands for Findable, Accessible, Interoperable, and Reusable. FAIR is a set of best practices rather than a set of rules.

FAIR + Open Research for Beginners

FAIR + Open Research for Beginners is a new community-led effort towards the inclusion of education on open and FAIR principles at earlier time points, such as in high school and undergraduate curriculums.

Through this initiative, Sara and the FAIRPoints community are launching a set of Google Flash Cards related to FAIR and open data. The flash cards help students better find answers to educational questions they have searched for on Google. The group are also working on developing slide decks and accompanying scripts that can be delivered in schools, undergraduate teaching, and public lectures.

Anyone with an interest in FAIR and open data can join the community and get involved in the initiative by subscribing to events and joining the FAIRPoints Slack channel.

You can find out more about FAIRPoints on their website. The slides for Sara’s talk are available on our OSF page and the session recording is available on YouTube.

This blog is written by Emma Wilson


EORI Bulletin

13/05/2022 – 5-minute update

EORI keeps an eye on changes in the fields of Open Science, FAIR data principles, and others, and directs any interested parties to important updates:   

  • Last month, Vergoulis et al. (2022) launched the beta of BIP! Scholar. This is an online service that allows researchers to set up their own academic profile in accordance with Open Science guidelines for fair research assessment. One of the goals of BIP! Scholar is to curb the overreliance on performance indicators such as the h-index, which may be an inaccurate reflection of a researcher’s academic experience. You can read more about the platform here and sign up for the beta here.
  • Here’s an interesting article by van der Wal et al. (2022) on the merits and drawbacks of publishing academic talks online. The COVID-19 pandemic has caused a massive shift towards online conferences, which has increased the accessibility and reach of academic talks. However, as van der Wal et al. argue, this comes with certain ethical considerations, including data privacy, and it may also leave the speaker open to persistent criticism, which could be challenging for early career researchers. Van der Wal et al. argue that the speaker should decide whether their talk is made available online. Furthermore, talks may need to be edited before they are put online to avoid ethical problems.
  • On the topic of academic feedback, here is another interesting article by Iborra et al. (2022) on how to give constructive criticism on preprints using the FAST (Focused, Appropriate, Specific and Transparent) principle.  
  • And once again, don’t forget to sign up for Edinburgh University’s first Open Research conference on the 27th of May. It’s free to all students and staff at the University of Edinburgh, and events will be held both online and in person. The conference will feature talks and workshops on how to get started with Open Science, practical considerations in Open Science, what resources are available at the University of Edinburgh, as well as many more.  

The best way to get more updates is to follow EORI on Twitter.   

EORI Bulletin

29/04/2022 – 5-minute update

EORI keeps an eye on changes in the fields of Open Science, FAIR data principles, and others, and directs any interested parties to important updates:  

  • Smith and Sandbrink published this interesting paper on some of the potential ethical issues with Open Science research practices in biological research. They posit that preregistration could help encourage risk assessment in the earlier stages of the research lifecycle, and call for responsible and considered data sharing and access. Here’s also a WIRED article on their paper.
  • Here’s a great introductory resource for data visualisation with R. It’s aimed at researchers who have not used R before and features many different types of graphs and example code.  
  • And finally, don’t forget to sign up for Edinburgh University’s first Open Research conference on the 27th of May. It’s free to all students and staff at the University of Edinburgh, and events will be held both online and in person. The conference will feature talks and workshops on how to get started with Open Science, practical considerations in Open Science, what resources are available at the University of Edinburgh, as well as many more.  

The best way to get more updates is to follow EORI on Twitter.  

EORI Bulletin

18/03/2022 – 5-minute update

EORI keeps an eye on changes in the fields of Open Science, FAIR data principles, and others, and directs any interested parties to important updates: 

  • The Arqus European University Alliance has joined a steadily increasing number of research institutions that are committing to Open Science principles. This is a great step towards removing accessibility barriers and making scientific research easily accessible within and beyond Europe. You can read their Openness position paper here.  
  • A lawsuit in which ResearchGate was sued for hosting 50 copyrighted papers has ended inconclusively for both sides. Though ResearchGate was ruled responsible for hosting the papers, the status of any other paper hosted on the platform that may infringe copyright law remains unclear. Still, this is a worrying precedent for the future of openly accessible research, and ResearchGate wants to appeal the decision. You can read more about the case here.
  • And finally, the University of Surrey has launched its Open Research page, which hosts many valuable resources and guidelines. You can find them here

The best way to get more updates is to follow EORI on Twitter

EORI Bulletin

04/03/2022 – 5-minute update   

EORI keeps an eye on changes in the fields of Open Science, FAIR data principles, and others, and directs any interested parties to important updates:   

  • Open Science scholarship has revolutionised the scientific community, yet the sheer number of novel terms and concepts associated with it may be daunting for new researchers. To reduce entry barriers to open scholarship, the FORRT project has developed a community-sourced glossary of 250 relevant open scholarship terms. This is a great resource if you have ever wondered what PARKing is or what Type M errors are, and you can learn more about how the project came to be here
  • Gopalakrishna et al. (2022) published an investigation into the prevalence of questionable research practices and academic misconduct in research circles in the Netherlands. Their findings are worrying, with at least one in two researchers reporting that they frequently participate in questionable research practices, whilst one in twelve reported falsifying or fabricating their data at least once. Gopalakrishna and colleagues suggest that reducing the “publish or perish” mentality and amplifying the role of the peer reviewer in “gatekeeping” research quality and integrity may help reduce the widespread use of questionable research practices. 
  • This preprint by Steve Haroz offers a comprehensive breakdown of differences between five preregistration platforms (GitHub, AsPredicted, Zenodo, OSF (template) & OSF (open-ended)). This can help researchers make informed decisions when deciding where to preregister their study, but also highlights what information is especially vital to include in a preregistered report.  

The best way to get more updates is to follow EORI on Twitter

EORI Bulletin

18/02/2022 – 5-minute update  

EORI keeps an eye on changes in the fields of Open Science, FAIR data principles, and others, and directs any interested parties to important updates:  

  • A recent study by Skiles and colleagues reported that online conferences are not only good for reducing researchers’ carbon footprint, but also promote diversity and inclusion. The study found that the recent move towards online, rather than in-person, conferences has removed some of the monetary barriers for attendees, especially boosting attendance by women and early career researchers. 
  • In the future, all publicly-funded research conducted in South Africa will be published in open access journals, a draft national open science policy has revealed. The draft aims to promote positive change within the scientific culture and to increase the public benefit of funded research.
  • The Friedrich-Alexander University in Germany has become the first German university to adopt an Open Science policy. With this, the university pledges itself to promoting high-quality, transparent and open access research.  

The best way to get more updates is to follow EORI on Twitter

EORI Bulletin

08/02/2022 – 5-minute update 

After a break, EORI is back for the new year with our 5-minute bulletin! 

EORI keeps an eye on changes in the fields of Open Science, FAIR data principles, and others, and directs any interested parties to important updates: 

  • Recently, NASA launched their Transform to Open Science mission. The program has designated 2023 as the Year of Open Science and aims to use Open Science principles to further accelerate scientific research and to promote the inclusion of historically excluded communities in its science program.  
  • The European University Association published its Open Science Agenda 2025. Its key priority areas are promoting open access, implementing FAIR data practices, and encouraging more responsible research assessment. The purpose of this agenda is to aid its members in the transition towards Open Science.  
  • This article covers the rise of preprints during the COVID-19 pandemic, highlighting both the benefits and pitfalls of rapid data sharing. It suggests that open access data and code sharing are paramount in ensuring the quality of scientific preprints.  

The best way to get more updates is to follow EORI on Twitter

ReproducibiliTea Blog

Errors in Research

“Fallibility in Science- Responding to Errors in the Work of Oneself and Others”

This was the first session of 2022 and revolved around a paper discussion on errors in research. It was led by Laura Klinkhamer, a PhD student at the University of Edinburgh whose research interests lie at the intersection of neuroscience and psychology. The discussion covered Professor Dorothy Bishop’s 2018 commentary paper ‘Fallibility in Science: Responding to Errors in the Work of Oneself and Others’. Alongside the paper discussion, the session included anonymous polls on Mentimeter.com and some interesting discussions in the breakout rooms.

The session began with a hypothetical scenario: a PhD student runs a series of studies looking for a positive effect. After null findings in three studies, the student changes the design and finds a statistically significant effect in the fourth. The paper is published in a prestigious journal with the student as first author, and the study is featured on National Public Radio. Two weeks later, while preparing a conference talk, the student realises that the groups in the study were miscoded and the result is invalid. Participants were asked to imagine themselves in this scenario and to report their answers anonymously on Mentimeter.com.

According to Azoulay, Bonatti and Krieger (2017), there was an average decline of 10% in subsequent citations of the earlier work of authors who publicly admitted a mistake. However, the effect was small when the mistake was an honest one, and there was no reputational damage for junior researchers. According to Hosseini, Hilhorst, de Beaufort and Fanelli (2018), 14 authors who self-retracted their papers believed their reputations would be badly damaged. In reality, self-retraction did not damage their reputations but improved them.

Incentives for Errors in Research or Research Misconduct:

  1. Pressure from colleagues, institutions and journal editors to publish more and more papers
  2. Progression in an academic career is determined largely by metrics that incentivize publications rather than retractions

Unfortunately, according to Bishop (2018), there are very few incentives for honesty in academic careers. Participants were encouraged to share on Mentimeter.com what they would do to incentivize scientific integrity.

Open Research:

  1. Publicly accessible research is not necessarily free from errors. However, open data and open code increase the chances of errors being detected by other researchers
  2. Open research encourages scientists to double-check their data and code before publication
  3. Open research helps normalize error detection and reduces stigma, which eventually leads to greater scientific accuracy

How to Respond to Errors in the Work of Other Researchers:

There are different platforms for doing this, including:

  • Contacting researchers directly
  • Contacting researchers via journal (if possible)
  • Preprint servers
  • PubMed Commons (discontinued)
  • PubPeer (commentators can be anonymous)
  • Twitter
  • Personal blogs
  • OSF and Octopus (emerging platforms)

One drawback of anonymous platforms is that they can produce criticism that is harsh and discouraging. When responding to errors in the work of other scientists, it is important to make no assumptions, because a failure to replicate an original study can be due to reasons beyond incompetence or fraudulent intent. The scale of the error can be a useful guide when approaching the situation.

Scale of errors:

  • Honest errors – e.g. coding mistakes
  • Paltering – using a truthful statement to mislead by failing to provide relevant contextual information
  • P-hacking
  • Citing only the part of the literature that matches one’s position, commonly referred to as confirmation bias
  • Inaccurate presentation of results from cited studies
  • Inventing fake data
  • Paper mills – businesses producing fake studies for profit

There was a brief discussion of the case of Diederik Stapel, who was dismissed after it was discovered that he had fabricated data on a large scale during his academic career, and of the paper mills that pollute the scientific literature for profit. An important question remains: who is, or should be, responsible for detecting and responding to large errors?

  1. At an internal level: heads of department or lab, whistleblowing policies, and research misconduct policies
  2. Journals 
  3. Separate institutes such as UKRIO (the UK Research Integrity Office)
  4. Technology
  5. External researchers

There was much more to discuss, and hopefully the conversation can continue in later sessions and at the conference. The ‘Edinburgh Open Research Conference’ takes place on Friday 27 May 2022, organised by the Library Research Support Team and EORI/Edinburgh ReproducibiliTea. SAVE THE DATE!

Anonymous responses from the participants were collected on Mentimeter.com.

This blog is written by Sumbul Syed

SOCIALS:

Edinburgh RT Twitter

Edinburgh RT OSF page

Edinburgh RT mailing list

For any questions/suggestions, please send us an email at edinburgh.reproducibilitea@ed.ac.uk