ReproducibiliTea Blog

Octopus | 19/05/23 | Dr Alexandra Freeman and Tim Fellows

Our final Edinburgh ReproducibiliTea session of the 2022-2023 term was all about Octopus. No, not the animal! Octopus is a new platform for registering and publishing research in a way that’s free, fast and fair for everyone.

We were joined by Dr Alexandra Freeman from Cambridge University and Tim Fellows from Jisc who both work on creating and developing the Octopus platform.

Alexandra began the talk with an introduction of how Octopus came about and the problems it aims to fix. She drew on her experience working with the media to discuss how journals’ goal of publishing interesting and “impactful” stories conflicts with their purpose as a primary research record.

Octopus aims to overcome this problem by becoming a new platform for publishing primary research records. Alexandra noted that different parts of the research process involve different skillsets, resource requirements and expertise. In contrast to journal articles, Octopus breaks research down into eight different types of publication (like the eight limbs of an octopus):

  1. Problem
  2. Hypothesis
  3. Methods / protocol
  4. Results / data
  5. Analysis
  6. Interpretation
  7. Real world application
  8. Peer review

Each component can have different authors and receives its own DOI. Different components can also branch off from each other, for instance two methods for testing the same hypothesis or two analyses of the same data.

In addition to registering and publishing new work, researchers can also add their open access publications to Octopus.

Tim finished the talk with a live demo of Octopus, and the session ended with a Q&A. Please note the Q&A was not recorded.

For more information on Octopus and to publish or register your own work, visit their website and follow them on Twitter @octopus_ac.

You can also watch the recording of the session on YouTube.

This blog is written by Emma Wilson


For any questions/suggestions, please send us an email at


Replication in autism research | 21/04/23 | Michelle Dodd

In our April 2023 session of Edinburgh ReproducibiliTea, Michelle Dodd spoke about replication studies within autism research, and shared her personal experiences of why robust and reproducible autism research is important.

Michelle began with a short introduction explaining what common terms used by the autism community mean. From her slides:

  • Autism: a type of neurology that is characterised by social-communication differences and restricted, repetitive behaviours and interests.
  • Neurodiversity: naturally occurring different neurology which can strengthen society with their diversity. No way of being is better than another and minority neurotypes, such as autistic people, are subject to similar challenges and stigma as other minority groups.
  • Neurodivergent: a single person can’t be neurodiverse so they are neurodivergent (or neurotypical).

Previously, researchers thought that autistic people had communication deficits, as autistic people often struggle to communicate with non-autistic people. However, we know that is not true, because autistic people can communicate well with each other. The communication differences between autistic and non-autistic people are known as the Double Empathy Problem.

The Double Empathy Problem was investigated in 2018 by Catherine Crompton, Sue Fletcher-Watson and others at the University of Edinburgh; their study found that autistic people have a different social interaction style compared to non-autistic people, rather than a deficit (1).

Michelle is one of the researchers conducting a replication of this study. This time the team are taking an open research approach by publishing protocols on the Open Science Framework, writing a Registered Report, and increasing the sample size of participants (2).

Watch the recording of Michelle’s presentation on YouTube.


  1. Autistic peer-to-peer information transfer is highly effective
  2. Open science in experimental autism research: a replication study of information transfer within and between autistic and non-autistic people

This blog is written by Emma Wilson



Citizen Science and Participatory Research | 31/03/23 | Neil Coleman

In this month’s Edinburgh ReproducibiliTea session, we were joined by Neil Coleman (they/them) who has recently joined the University of Edinburgh as the Library Citizen Science Engagement Officer.

Neil introduced us to the concepts of citizen science and participatory research and explained the different levels of citizen science – from using “citizens as sensors” in a crowdsource project to collaborating with citizens to define problems and collect and analyse data in “extreme” citizen science.

Neil also provided some examples of participatory research projects. Check them out below!

Finally, Neil highlighted some of the ways the Library can help University of Edinburgh staff and students get involved with participatory research, including using pre-existing resources (Collections, Digital Research Services, Scholarly Communications, and Physical Spaces and Event Management) and developing new services, both internally and externally, such as networking across the university and connecting (ethically) with participants and communities.

Following Neil’s presentation, we had a discussion about our own experiences with participatory research, and potential challenges or barriers to conducting participatory research. This part of the session is not included in the recording.

The session recording is available on our YouTube channel.

This blog is written by Emma Wilson



10 simple rules for failing successfully in academia | 24/02/23 | Emma Wilson

In this month’s Edinburgh ReproducibiliTea session, we discussed the paper titled “Ten simple rules for failing successfully in academia” by Gaillard et al.

The paper provides advice on how to navigate academic failure. The authors argue that failure is an inevitable part of the academic journey and that it can be a valuable learning experience if approached with the right mindset. The ten rules they present are:

  1. Define failure: You can think of failure as setting a goal and then never achieving that goal. However, it is also important to think about the impact of the failure and how much control you had over the outcome, and to learn to see failure as an event, not a state of being.
  2. Dare to fail: Remember that there is never a straight road to success. Start taking opportunities even when there is a high chance of failure.
  3. Don’t compare yourself to others: Comparing yourself to others can often lead to feelings of inadequacy and imposter syndrome. Rather than comparing yourself to others, it is better to reflect on your own journey and realise how far you’ve come.
  4. Do compare yourself to others: Although it may sound contradictory to the previous point, you can compare yourself to others in a healthy way by listening to your peers, who are often open about their own failures.
  5. Keep track of failures and successes: Thinking about failures too much can sometimes stop us from celebrating the small wins. Keeping track of the everyday problems you face and the solutions you found can help you appreciate the effort you’ve put into your work.
  6. Study the system: It can be helpful to think about the academic system that you work in and how privileges, discrimination, or distorted incentives may impact success and failure.
  7. Make failure a part of the process: Failure is normal, but there are things you can do to try to mitigate failure as much as possible. Try seeking out feedback before mistakes can happen.
  8. Create a support network: It is important to seek out people you feel comfortable sharing your failures with and connect with others in similar situations.
  9. Find what works for you: Everyone has a different strategy for dealing with failure. Sometimes you might be able to move on quickly, but other times you might need to step back and process the failure.
  10. Pay it forward: Share your failures, mentor juniors or peers, challenge the system, and do your part to normalise failure as a thing that happens to everyone.

Following a short presentation about the paper, we had a longer discussion session about our own experiences with failure and our strategies for moving on from failures. We did not record this part of the session so that attendees felt more comfortable sharing their experiences.

The session recording is available on our YouTube channel.

This blog is written by Emma Wilson



The replication crisis in psychology: Pre-registration and Registered Reports as crusaders for a brighter future | 20/01/23 | Dr Roman Briker

In our first Edinburgh ReproducibiliTea session of 2023, Dr Roman Briker gave a talk on pre-registration and Registered Reports. Dr Briker is an Assistant Professor in Organisational Behaviour at Maastricht University School of Business and Economics, and an Open Science Ambassador at the School of Business and Economics.

Reproducibility crisis and questionable research practices

Many academic journals are interested in significant results, those with a P-value of less than 0.05. Dr Briker shared a personal experience of writing his first paper and spotting an error in his draft that would impact the results of his statistical analyses. He was worried that correcting the error would lead to his findings being non-significant, and that journals would no longer be interested in his work.

This led Dr Briker to realise that this is not the way research should work. In his talk, Dr Briker suggested that this current model of academic publishing – the culture of “publish or perish” – contributes to the reproducibility crisis, as significant results are published and non-significant results are filed away. Dr Briker also gave examples of scientific fraud, including Dan Ariely and Daryl Bem, and reports from a survey which suggested that 8% of Dutch scientists have at some point falsified data. Overall, the focus on significant outcomes reduces our focus on rigorous methodology.

Dr Briker mentioned that the issue of irreproducibility impacts all fields of research, and that only 25% to 60% of scientific findings are replicable. He spoke about questionable research practices which have been allowed to thrive in our current research culture, including HARKing (Hypothesising After Results are Known), selective reporting, optional/selective stopping of experiments, changing control variables, playing around with outliers, changing inclusion or exclusion criteria, using different analytical methods, and rounding off P-values (e.g. reporting a P value of 0.053 as P < 0.05).

Pre-registration and registered reports

Dr Briker suggested pre-registration and Registered Reports as potential solutions to these problems.

A pre-registration is a publicly time-stamped pre-specification of a research study design, including hypotheses, required sample sizes, exclusion criteria, and planned analyses. It is completed prior to data collection and is not peer-reviewed (Logg & Dorison, 2021).

A Registered Report goes further than a pre-registration, including the introduction, theory and hypotheses, and proposed methods and analyses (Chambers & Tzavella, 2022). This is submitted to a journal, or a platform such as Peer Community In Registered Reports, for peer review prior to data collection. Once the Registered Report is approved by reviewers, it gains in-principle acceptance for publication in a journal, and the results will be published whether they are significant or not, as long as the plan outlined in the Registered Report is followed.

In his talk, Dr Briker explained what parts of a study design should be pre-registered, and gave an example of his own pre-registration. He also highlighted a number of templates available, and busted some myths surrounding concerns researchers may have about pre-registering a study.

Slides, references and pre-registration templates mentioned in Dr Briker’s talk are available on OSF.

The session recording is available on our YouTube channel.

This blog is written by Emma Wilson



Open Research Across Disciplines | 16/12/22 | Emma Wilson

In our December session of Edinburgh ReproducibiliTea, Emma Wilson presented a session on open research practices across disciplines. Emma is a PhD student at the Centre for Clinical Brain Sciences.

The session focused on the UK Reproducibility Network’s list of open research case studies, examples, and resources for various research disciplines.

The list of resources can be cited as follows:

Farran EK, Silverstein P, Ameen AA, Misheva I, & Gilmore C. 2020. Open Research: Examples of good practice, and resources across disciplines.

What is open research?

Open research is all about making research practices and findings more transparent and accessible. The University of Edinburgh defines open research as “research conducted and published via a combination of two or more of the following attributes:

  • Open Access publication
  • Open research data
  • Open source software and code
  • Open notebooks
  • Open infrastructure
  • Pre-registration of studies”

We use the term open research instead of open science as it is more inclusive of the broad spectrum of work that takes place at the University.

Open research across disciplines resource

The UK Reproducibility Network (UKRN) have produced a document and webpage with examples of open research practices across different research disciplines. The document is updated each autumn and was last updated in October 2022.

The resource covers 28 disciplines from Archaeology & Classics to Veterinary Science. New resources can be added to the collection via this Google Form.

Examples of open research across different disciplines

Emma chose a few example resources to talk about in her presentation.

Art & Design: Open Access at the National Gallery of Art

The National Gallery of Art have an open access policy for public domain artworks. You can search and download over 50,000 artworks on their website, and they have made a dataset of information on over 130,000 artists and artworks available on GitHub.

Artificial Intelligence: recommendations on creating reproducible AI

In 2018, Gundersen, Gil and Aha published an article describing recommendations on creating reproducible artificial intelligence.

Economics: case study from a PhD student

Dr Marcello De Maria, a graduate from the University of Reading, describes the benefits of open research within economics.

Engineering: open source 3D printing toolkit

Slic3r is open source software that allows anyone to convert 3D models into printing instructions for a 3D printer. It has a large GitHub community involved in creating and maintaining the code, and the project takes pride in providing the resource to the community for free.

Music, Drama and Performing Arts, Film and Screen Studies: podcast on making music research open

Alexander Jensenius, Associate Professor at the Department of Musicology – Centre for Interdisciplinary Studies in Rhythm, Time and Motion (IMV) at the University of Oslo, discusses open research within the context of music research in a podcast hosted by the University Library at UiT, the Arctic University of Norway. He also discusses MusicLab, an event-based project which aims to collect data during musical performances and analyse it on the fly.

Physics: citizen science project case study

In this case study, Professor Chris Scott, Dr Luke Barnard, and Shannon Jones discuss a citizen science project they ran on the online platform Zooniverse. Their project focused on analysing images of solar storms, and four thousand members of the public took part.

Barriers to open research

In the final section of her presentation, Emma discussed some of the barriers that may prevent researchers from working openly. These included:

  • Funding and finances (e.g. to pay open access publishing fees)
  • Time and priorities (e.g. time required to learn new skills, and supervisor / lab cultures around open research practices)

Finally, the session closed with a discussion around the implementation of open research in different disciplines, and whether all researchers and disciplines should be judged the same when it comes to this implementation.

The slides for Emma’s talk are available on our OSF page and the session recording is available on YouTube.

This blog is written by Emma Wilson



Introducing FAIRPoints and FAIR + Open Research for Beginners | 18/11/22 | Dr Sara El-Gebali

In our November session of Edinburgh ReproducibiliTea, we were joined by Dr Sara El-Gebali. Sara is a Research Data Manager, Co-Founder of FAIRPoints and Project Leader of LifeSciLab. In her talk, Sara introduced FAIRPoints, an event series highlighting pragmatic community-developed measures towards the implementation of the FAIR data principles, and some of the projects currently ongoing at FAIRPoints.

What is FAIR?

FAIR stands for Findable, Accessible, Interoperable, and Reusable. FAIR is a set of best practices rather than a set of rules.

FAIR + Open Research for Beginners

FAIR + Open Research for Beginners is a new community-led effort towards the inclusion of education on open and FAIR principles at earlier time points, such as in high school and undergraduate curriculums.

Through this initiative, Sara and the FAIRPoints community are launching a set of Google flash cards related to FAIR and open data, which help students find better answers to the educational questions they search for on Google. The group are also working on developing slide decks and accompanying scripts that can be delivered in schools, undergraduate teaching, and public lectures.

Anyone with an interest in FAIR and open data can join the community and get involved in the initiative by subscribing to events and joining the FAIRPoints Slack channel.

You can find out more about FAIRPoints on their website. The slides for Sara’s talk are available on our OSF page and the session recording is available on YouTube.

This blog is written by Emma Wilson



Errors in Research

“Fallibility in Science: Responding to Errors in the Work of Oneself and Others”

This was the first session of 2022 and revolved around a paper discussion on errors in research. It was led by Laura Klinkhamer, a PhD student at The University of Edinburgh whose research interests lie at the intersection of neuroscience and psychology. The discussion was on Professor Dorothy Bishop’s 2018 commentary paper ‘Fallibility in Science: Responding to Errors in the Work of Oneself and Others’. Apart from the paper discussion, the session involved interactive anonymous polls and some interesting discussions in the breakout rooms.

The session began by imagining a scenario in which a PhD student runs a series of studies looking for a positive effect. After getting null findings in three studies, the student changes the design and finds a statistically significant effect in the fourth study. This results in a paper published in a prestigious journal with the student as first author, and the study is also featured on National Public Radio. However, two weeks later, while preparing for a conference talk, the student realises that the groups in the study were miscoded and the finding was spurious. Participants in the session were asked to imagine themselves in the same scenario and to report their answers anonymously.

According to Azoulay, Bonatti and Krieger (2017), there was an average decline of 10% in subsequent citations of the earlier work of authors who publicly admitted a mistake. However, the effect was small when the mistake was an honest one, and there was no reputational damage in the case of junior researchers. According to Hosseini, Hilhorst, de Beaufort and Fanelli (2018), 14 authors who self-retracted their papers believed their reputation would be badly damaged; in reality, self-retraction did not damage their reputations but improved them.

Incentives for Errors in Research or Research Misconduct:

  1. Pressure from colleagues, institutions and journal editors to publish more and more papers
  2. Progression in academic career is determined greatly by metrics that incentivize publications and not retractions

Unfortunately, according to Bishop (2018), there are very few incentives for honesty in academic careers. Participants were encouraged to share their opinions on what they would do to incentivise scientific integrity.

Open Research:

  1. Research being publicly accessible does not mean that it is free from errors; however, open data and open code enhance the chances of error detection by other researchers
  2. Open research encourages scientists to double-check their data and code before publication
  3. Open research helps normalise error detection and reduces stigma, which eventually leads to greater scientific accuracy

How to Respond to Errors in the Work of Other Researchers:

There are different platforms for doing this, including:

  • Contacting researchers directly
  • Contacting researchers via journal (if possible)
  • Preprint servers
  • PubMed Commons (discontinued)
  • PubPeer (commentators can be anonymous)
  • Twitter
  • Personal blogs
  • OSF and Octopus (emerging platforms)

One of the drawbacks of anonymous platforms is that they often result in criticism of someone’s work that can be harsh and discouraging. When responding to errors in the work of other scientists, it is important to make no assumptions, because a failure to replicate an original study can be due to reasons beyond incompetence or fraudulent intentions. The scale of the error can be useful to consider when approaching the situation.

Scale of errors:

  • Honest errors, e.g. coding mistakes
  • Paltering: using a truthful statement to mislead by failing to provide the relevant contextual information
  • P-hacking
  • Citing only the part of the literature that matches one’s position, commonly referred to as confirmation bias
  • Inaccurate presentation of results from cited studies
  • Inventing fake data
  • Paper mills: businesses producing fake studies for profit

There was a short discussion of the case of Diederik Stapel, who was fired after it was discovered that he had fabricated data on a large scale during his academic career, and of the paper mills that are polluting the scientific literature for profit. An important question remains: who is, or should be, responsible for detecting and responding to large errors?

  1. At an internal level, head of the department/lab, whistleblowing policy and research misconduct policy
  2. Journals 
  3. Separate institutes like UKRIO (UK Research Integrity Office)
  4. Technology
  5. External researchers

There was a lot more to be discussed, and hopefully the discussion can continue in later sessions and/or at the conference: the ‘Edinburgh Open Research Conference’ on Friday 27 May 2022, organised by the Library Research Support Team and EORI/Edinburgh ReproducibiliTea. SAVE THE DATE!


This blog is written by Sumbul Syed


Edinburgh RT Twitter

Edinburgh RT OSF page

Edinburgh RT mailing list


Bayesian data analysis and preregistration 17/12/2021 with Dr Zachary Horne

This was the final session of 2021. The speaker was Dr Zachary Horne, a lecturer at the School of Philosophy, Psychology & Language Sciences, The University of Edinburgh. Dr Horne talked about Bayesian statistics and preregistration in the context of open research practices. He started his presentation by explaining what Bayesian data analysis is: very broadly, it is data analysis that takes into consideration prior information about a particular domain in addition to the data collected. This prior information is expressed as a probability distribution, known as the prior distribution.

There are different aspects to keep in mind when it comes to preregistration in Bayesian data analysis:

  • How the data is going to be collected
  • Why is the data being collected in a particular way?
  • Sample size
  • Operationalization of constructs
  • Specifying key analyses
  • Aspects of analysis that will be exploratory

Bayesian workflow (Gelman et al., 2020)

  1. Choosing an initial model
  2. Prior predictive checking
  3. Fitting the model
  4. Computational problems and algorithm diagnostics
  5. Posterior predictive checking
  6. Prior robustness

Dr Horne talked about prior predictive checking in a bit more detail; it covers the following:

  • Prior to data collection, is the model consistent with what is already known about the world?
  • What distribution is implied for an outcome variable given prior and likelihood?
  • Assessing the credibility of the model before collecting the data

A question central to the session, used to discuss models in Bayesian data analysis, was: ‘Do tweets from activist groups (e.g., PETA, Greenpeace, etc.) with photos get liked more than tweets without photos?’ The analysis showed that, as far as likes on Twitter are concerned, tweets with photos do better. With respect to which model is the ‘right’ model, the regularizing model provided better estimates of the central tendency of the distribution. However, none of the priors (optimistic, regularizing and improper) captured the fact that the larger central tendency comes not just from many tweets getting 200 or so likes, but also from tweets getting huge numbers of likes! Moreover, all the models left room for improvement.
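As an illustrative aside (this sketch is mine, not code from the talk), the idea of a prior predictive check for a count model like the tweet-likes example can be shown in a few lines of Python. The Normal priors and the log-link model here are hypothetical choices for illustration:

```python
import math
import random

def prior_predictive(intercept_sd, n_draws=5000, seed=1):
    """Simulate the mean like-counts implied by a prior alone, before
    seeing any data: draw the log-scale intercept of a count model from
    a Normal(0, intercept_sd) prior and push it through the log link."""
    rng = random.Random(seed)
    draws = [math.exp(rng.gauss(0.0, intercept_sd)) for _ in range(n_draws)]
    return sorted(draws)

regularizing = prior_predictive(1.0)   # Normal(0, 1) on the log scale
wide = prior_predictive(20.0)          # very diffuse, nearly flat prior

# Compare the 99th percentile of implied mean likes per tweet:
print(regularizing[4950])  # modest, plausible values
print(wide[4950])          # astronomically large like-counts
```

Before any data are collected, the regularizing prior implies plausible like-counts, while the diffuse prior implies tweets with absurdly many likes; that mismatch with what we already know about the world is exactly what a prior predictive check is designed to catch.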

The session concluded with pre-registering priors in Bayesian data analysis: Dr Horne suggested using regularizing priors for the parameters of interest, especially when those parameters are expected to ‘do something’, and incorporating posterior information into the priors of subsequent related models.

This blog is written by Sumbul Syed



Edinburgh University Research Optimisation Course (EUROC) 19/11/2021 with Dr Gillian Currie

In this session, Dr Gillian Currie, a Postdoctoral Research Fellow in the CAMARADES group at the Centre for Clinical Brain Sciences, The University of Edinburgh, talked about EUROC (Edinburgh University Research Optimisation Course), which encourages open research practices in animal research. Dr Currie is a meta-researcher whose research interests include improving research methodology.

Dr Currie began by introducing EUROC, a course with a focus on the rigorous design, conduct, analysis and reporting of research using animals. She then mentioned some key points on research using animals:

  • In the year 2020, 2.8 million animals were used in research across the UK
  • These studies have helped our understanding of basic biology and complex diseases, and the development of potential treatments
  • However, there are concerns regarding difficulties in replication, reproducibility and translation

Dr Currie talked briefly about the translational pipeline, which aims to translate pre-clinical research into clinical research and, ultimately, improved health. A survey conducted by Nature involving 1,576 researchers found that 52% agreed there is a ‘reproducibility crisis’. The replication crisis can be attributed to the following reasons:

  1. Smaller sample size in studies
  2. Publication bias
  3. Limited randomization and blinding
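The first of these causes can be illustrated with a small, self-contained simulation (my own sketch, not part of Dr Currie’s talk): studies with small sample sizes detect a genuine effect only a fraction of the time, so an original finding and its replication can easily disagree.

```python
import random
import statistics

def detection_rate(n_per_group, true_effect=0.5, n_sims=2000, seed=7):
    """Simulate two-group studies with a genuine effect of 0.5 SD and
    count how often a simple z-style test reaches |z| > 1.96."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        control = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        treated = [rng.gauss(true_effect, 1.0) for _ in range(n_per_group)]
        se = ((statistics.variance(control) + statistics.variance(treated))
              / n_per_group) ** 0.5
        z = (statistics.mean(treated) - statistics.mean(control)) / se
        if abs(z) > 1.96:
            hits += 1
    return hits / n_sims

print(detection_rate(10))   # small study: the true effect is usually missed
print(detection_rate(100))  # larger study: the effect is found most of the time
```

An under-powered study that does reach significance also tends to overestimate the effect, which is one reason replications of such findings often come up empty.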

Dr Currie continued the discussion by talking about new opportunities in open research practices, including:

  1. An increased focus on methodological rigour which involves ensuring appropriate power, appropriate statistics and p values
  2. An increased transparency through pre-registration of studies, reporting of methods as well as sharing of data
  3. Measures to reduce risks of biases

It is important to realise that a small improvement, manifested across a large number of researchers, can have a substantial effect overall.

Course structure of EUROC:

EUROC comprises 3 modules, which can be completed across multiple sessions. Every module consists of 1 core and 1 extended lecture.

MODULE 1: Study Design and Data Analysis

In Module 1, the ‘Study Design’ section covers internal validity, risks of bias, construct and external validity, and exploratory vs confirmatory research. The ‘Data Analysis’ section covers statistical analysis, significance testing, sample size and statistical power, outliers, units of analysis, and multiple outcome testing.

MODULE 2: Experimental Procedure

Module 2 is divided into two sections: Maximising Study Validity and Study Design. The former includes topics like risks of bias, pilot studies, confounding characteristics and variables, validity of outcomes, and optimisation of complex treatment parameters. The latter covers the use of reference compounds, statistical analysis tips, replication, and standardisation.

MODULE 3: Pre-registration and Reporting

The final module deals with pre-registration (including study protocols) and reporting (data sharing, statements of conflicts of interest, and reporting standards).

The course is a contribution by The University of Edinburgh towards improving research, and it is also available to researchers outside the university through this link.

How to access EUROC on Learn (for people within the University of Edinburgh):

  1. Log in to Learn
  2. Click on ‘self-enrol’ (available on top right of the screen)
  3. Scroll down to Research Improvement
  4. Click on EUROC (Edinburgh University Research Optimisation Course)

The session concluded with Dr Currie talking about a research improvement project that is coming up soon. Delays in the dissemination of research findings impede scientific progress, so one of the most important aims of the project is to increase the speed at which findings are shared through the use of preprints. A preprint is an early version of a scholarly article that has not yet undergone peer review. It is open to comments and is a good means of establishing priority for new ideas.

This blog is written by Sumbul Syed


Session video on YouTube
