We aim to continually update this page to provide resources and support for good research practices, such as sound data and statistics practices, and open science.
- Kathawalla, U., Silverstein, P., & Syed, M. (2021). Easing Into Open Science: A Guide for Graduate Students and Their Advisors. Collabra: Psychology; 7 (1): 18684. https://doi.org/10.1525/collabra.18684
- Munafò, M., Nosek, B., Bishop, D. et al. (2017). A manifesto for reproducible science. Nat Hum Behav 1, 0021. https://doi.org/10.1038/s41562-016-0021
- Markowetz, F. (2015). Five selfish reasons to work reproducibly. Genome Biol 16, 274. https://doi.org/10.1186/s13059-015-0850-7
- Farran, E., Silverstein, P., Ameen, A., Misheva, I., & Gilmore, C. (2020). Open Research: Examples of good practice, and resources across disciplines. https://doi.org/10.31219/osf.io/3r8hb
- Edinburgh ReproducibiliTea: Ben Thomas – Introduction to Open Research
- Edinburgh ReproducibiliTea: Kaitlyn Hair – Selfish Reasons to Work Reproducibly
- Edinburgh ReproducibiliTea / UoE School of Psychology: Mini course on open research (originally designed for undergraduate students)
- RIOT science club: Dr Priya Silverstein – Easing into open science: No time like the present
Resources @ University of Edinburgh:
- Information Services Open Research webpage
- Information Services Making your Research Open Access webpage
- University’s Open Research Blog
- University’s Open Research Newsletter
- Edinburgh Open Research Initiative Teams Channel
- Everything Hertz – by Dan Quintana and James Heathers
- FAIRdata podcast – by Rory MacNeil (CEO of Electronic Notebook Service ResearchSpace)
- Science Integrity Digest – by Elisabeth Bik
- The 100% CI – by Anne Scheel, Ruben Arslan, Malte Elson & Julia Rohrer
- The 20% Statistician – by Daniel Lakens
- Centre for Open Science Blog
- #bropenscience is broken science
- Framework for Open and Reproducible Research Training
- UK Reproducibility Network
- Center for Open Science
- Open research primers @ UKRN
- Biostats teaching resources (courtesy of Dr. Crispin Jordan)
Here are a few terms commonly discussed in the context of open research (grouped by theme rather than alphabetically).
Please see the Framework for Open and Reproducible Research Training (FORRT) glossary for a more extensive list.
|Open Science/Open Research/Open Scholarship||An umbrella term for a set of principles and practices related to academic transparency, reproducibility, accessibility and integrity. These terms are often used interchangeably, yet there are some differences in how inclusive they are. “Open Science” is regarded as the least inclusive term because it implies that it is exclusive to the disciplines commonly considered “the sciences”. “Open Research” is more inclusive, as it refers to open practices in research across all disciplines, not just “the sciences”. “Open Scholarship” is considered the most inclusive term, as it extends to all disciplines and scholarly activities, including non-research activities such as teaching.|
|Open Access||Making scholarly outputs (e.g. published research papers, monographs, data) freely and openly accessible online, i.e. readers do not have to pay a subscription fee to read or access the work. Green Open Access: the work is openly accessible from a public repository (e.g. PURE, a preprint server). Gold Open Access: the work is immediately openly accessible upon publication via the journal website. Platinum/Diamond Open Access: a subset of Gold OA in which all works in the journal are immediately accessible upon publication from the journal website without the authors paying article processing charges (APCs).|
|Plan S||Plan S is an initiative for moving towards full open-access scholarly publishing, which was launched in 2018 by “cOAlition S”, a consortium of national research agencies and funders from 12 European countries. Read more here|
|Research metrics||Quantitative measurements designed to evaluate research outputs and their impacts.|
|Journal Impact Factor (JIF)||A metric for journals. The JIF of a journal for a given year is the total number of citations received that year by articles the journal published during the previous two years, divided by the total number of citable articles it published during those two years. For example, if a journal published 100 citable articles in 2019–2020 and those articles were cited 250 times in 2021, its 2021 JIF is 250/100 = 2.5. This gives the average number of recent citations per recently published article. Based on the premise that good-quality articles, published in respected and well-read journals with rigorous peer review processes, receive more citations, JIFs are often taken as a proxy for the quality of a journal and its papers. However, there are several important reasons why we should NOT rely on JIFs as a measure of research quality: the JIF can be influenced in several ways and be misleading (e.g. a few highly cited articles can inflate the number significantly); the two-year window in the calculation does not allow fair comparison between disciplines (in some fields knowledge and citations accrue more slowly); and, importantly, the academic world has come to rely so heavily on the impact factor that it has become a quick way to evaluate individual researchers (i.e. by the impact factors of the journals their work was published in) when deciding who gets a promotion or grant, rather than actually reading and properly evaluating their work. Read more here and here|
|H-Index||A metric for individual researchers, used as a proxy for how productive and influential a researcher is. It is the largest number h such that the researcher has published at least h papers that have each been cited at least h times (e.g. an h-index of 17 means the researcher has published at least 17 papers that have each been cited at least 17 times).|
|The Research Excellence Framework (REF)||The Research Excellence Framework (REF) is the UK’s system for assessing the excellence of research in UK higher education providers. The REF outcomes are used to inform the allocation of around £2 billion per year of public funding for universities’ research. Read more here|
|Reproducibility||Re-performing the same analysis on the same data with the same methods, by a different analyst (to see whether the methods are reproducible and give the same result)|
|Replicability||Re-performing the experiment on a different data set (to see if the study outcome can be replicated)|
|P-hacking||Also known as data dredging, data fishing, data snooping or data butchery. The misuse of data analysis to uncover patterns that can be presented as statistically significant when, in reality, there is no underlying effect (i.e. trying different analysis methods until you get a significant result, and then reporting only that result).|
|HARKing||Hypothesizing After Results are Known: presenting a post hoc hypothesis (one formed by interpreting the results) in a research report as if it were an a priori hypothesis (one determined before running the analysis, which the analysis would then test).|
|Pre-print||A preprint is a full draft of a research paper that is shared publicly before it has been peer reviewed. Most preprints are given a digital object identifier (DOI) so they can be cited in other research papers. Examples of preprint servers are bioRxiv and PsyArXiv.|
|Pre-registration||Specifying your research plan in advance of your study (particularly distinguishing your confirmatory and exploratory questions and strategies) and submitting it to a registry. Read more here. Registered Reports are a publishing format in which the introduction and methods (and sometimes pilot data analysis) for a proposed study are submitted for Stage 1 peer review. If the report receives in-principle acceptance, the journal commits to publishing the study regardless of its results and outcomes, provided the research is conducted as stated. The authors then conduct the study as described in the methods, write up the results, and submit the full report for Stage 2 peer review before publication. Read more here|
|FAIR practice||Particularly refers to data and project management practices and how to ensure these are Findable, Accessible, Interoperable and Reusable. Read more here|
|San Francisco Declaration on Research Assessment (DORA)||A set of recommendations to improve the ways in which the outputs of scientific research are evaluated by funding agencies, academic institutions, and other parties. Read more here|
|League of European Research Universities (LERU)||An established network of 23 research-intensive universities in Europe. LERU develops and disseminates views on research, innovation and higher education, helping to shape policy at the EU level. One of its publications is a roadmap template for universities containing 41 recommendations on open science. In January 2022, the Library published the Edinburgh Open Research Roadmap, which assesses the progress made at the University of Edinburgh and details the steps forward.|
|Citizen science||Citizen science is the practice of public participation and collaboration in scientific research to increase scientific knowledge, for example through co-production of research or consultancy.|
|Public engagement||The term public engagement describes the many ways in which the activity and benefits of higher education and research can be shared with the public for mutual benefit. This includes citizen science projects, but also events such as public lectures, science festivals and exhibitions, and communication through publications, radio and TV.|
|Open Research Culture||A research culture/environment in which principles of transparency, openness, and reproducibility are considered the norm and valued as important features of research, which are reflected by policies and incentive structures. Read more here|
|Equity||Equity (as opposed to “equality”) recognizes that each person has different circumstances and allocates the exact resources and opportunities needed to reach an equal outcome.|
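The h-index definition in the glossary above translates directly into a short computation. Here is a minimal Python sketch (the function name and the sample citation counts are illustrative, not taken from any real researcher's record):

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts.

    The h-index is the largest h such that the author has at least
    h papers with at least h citations each.
    """
    # Rank papers from most to least cited.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        # h can keep growing as long as the rank-th most cited
        # paper has at least `rank` citations.
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A researcher with papers cited [10, 8, 5, 4, 3] times has an
# h-index of 4: four papers are each cited at least 4 times, but
# there are not five papers each cited at least 5 times.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

Sorting first makes the definition easy to apply: at each rank we only need to check whether the citation count still keeps up with the rank.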