This post is by Rachel Proudfoot, Research Data Management Advisor.

Researchers’ attitudes to open research

What are the barriers to – and benefits of – open research? Is open research practice of personal and professional benefit? Is it something to be done just because your funder says you have to? Does the time and effort required outweigh the benefits? What is open research, anyway, and what does it look like in practice?

Introducing this event, Karen Abel from the Library Research Support Service outlined the preliminary findings of the open research survey running across the University of Leeds since last year. The survey asks whether researchers practise open research, what motivations come into play and what barriers are encountered. Respondents (so far) have identified a lack of dedicated funding, funder mandates, training, time and information as key barriers to open research practice.

This Open Lunch session explored some of these barriers in more detail, offering an international perspective from Professor Hugh Shanahan and a personal practice perspective from Dr Marlène Mengoni.

The Open Research Survey is still running, and it would be great to get more responses from University of Leeds research staff, PGRs, and professional services and support staff. If you have a few minutes, please take part: https://leeds.onlinesurveys.ac.uk/open-research

A recording of the event is available on YouTube:

Research Objects and FAIR

Hugh Shanahan, Professor of Open Science at Royal Holloway

Twitter: @hughshanahan

ORCID: 0000-0003-1374-6015

Don’t try to do everything at once

Hugh emphasised that openness is a continuum rather than “you’re either open or you’re not!” It’s easy to become overwhelmed by the ‘grand vision’ of all research outputs being fully open and interoperable. However, it’s possible to take small, practical and useful steps which can be built on incrementally. One small thing, well implemented, helps you move to the next. For examples of small steps, see the list at the bottom of this post.

Multiple paths to the same ideas

To illustrate how much open research has progressed over the past few years, Hugh referenced a 2014 literature review paper by Fecher and Friesike: Open Science: One Term, Five Schools of Thought. The five schools (Infrastructure, Public, Measurement, Democratic, Pragmatic – see the linked paper for definitions) illustrate that there are multiple pathways to similar ideas. New schools of thought are emerging as open science matures: for example, reproducibility is not reflected in Fecher and Friesike’s 2014 analysis but is at the forefront of open science today.

The paper also draws a rather barbed comparison between electric cars and open science: “Open Science appears to be somewhat like the proverbial electric car—an indeed sensible but expenseful thing which would do better to be parked in the neighbor’s garage; an idea everybody agrees upon but urges others to take the first step for.” This assertion is becoming outdated – in terms of both open science and, indeed, electric cars!

How infrastructure can change perception

How is the value of a research output assessed? What is a ‘first class research object’? The Research Excellence Framework (REF) tends to prioritise peer-reviewed publications as the most ‘REF-able’ output. It is possible to submit other output types to the REF (e.g. software), but Hugh pointed to work by the Software Sustainability Institute which shows that “Despite around 70% of research being reliant on software, in the last REF only 0.02% of research outputs were related to software”. There is a perception that ‘other’ research outputs can’t or shouldn’t be submitted.

How can we change perceptions? Infrastructure plays an important role. For example, we’re all used to DOIs for journal articles and take them for granted. Work is well underway to have persistent identifiers for research data and for software. If we routinely see these identifiers in reference lists and bibliographies – rather than sitting awkwardly in the text – perceptions will shift and there will be more acceptance that diverse outputs are ‘first class research objects’.

FAIR

The FAIR principles (15 principles in four ‘buckets’) state that data should be Findable, Accessible, Interoperable and Reusable. See a summary of the FAIR principles here. Standards for how FAIR can be delivered in practice are emerging; the FAIR4RS working group is currently defining FAIR Guiding Principles for research software.

Hugh’s slides are available: https://doi.org/10.5281/zenodo.4808081.

Hugh’s talk on YouTube starts here: Research Objects and FAIR

Open from beginning to end: addressing barriers to open research – a personal experience

Marlène Mengoni, Lecturer, Institute of Medical and Biological Engineering, University of Leeds

Twitter: @mengomarlene

ORCID: 0000-0003-0986-2769

Motivations and research culture

I managed to complete my PhD only because others were sharing their data openly

Dr Marlène Mengoni

Marlène undertook her PhD at the University of Liège, which has a strong open access policy; openness and sharing were embedded in the research culture.

Understanding and influencing research culture is a good way to tackle some of the barriers, or perceived barriers, to open research.

What should I share? What’s useful?

Consider how much research was involved in generating the data and how difficult they were to generate. Data deposit mandates tend to require that the data underpinning the published research are shared; however, you could also consider whether your data have value in their own right. This might lead to the release of raw as well as processed data. What are the consequences if a key team member leaves: would the data be difficult to regenerate? Consider having open and transparent methods. A key question to ask yourself: how would you reuse your data?

Marlène offered a table analysing these decision processes with her own data (biomechanics of joints).

Marlène positions open science as a driver of research integrity: open methods and data enable scrutiny and reduce the risk of malpractice. Although sharing methods, protocols and data might feel like giving away a laboratory’s ‘unique selling point’, Marlène suggests “Opening know-hows and protocols are drivers, not inhibitors, of collaborations”.

I’m going to get scooped!

What if someone beats me to publishing that all-important paper? Marlène offered an example of open data which has been available for two years – no other researchers have published from it yet. However, the open data was available for use by dissertation students (UG and MSc) and new PGRs – easy to access and to cite. Releasing the data has mostly been a win internally, and a paper associated with the data is being published this year. Always consider whether you have potentially exploitable intellectual property in the data – that would be a reason not to share – otherwise you have more to gain than to lose.

I’ve no time – and what are the benefits anyway?

Reproducibility should encompass the idea of self-reproducibility. Marlène gave a great example of how reproducibility has been embedded into UG training: a researcher produces a detailed description of their methodology, and students aim to reproduce what has been done, both to make sure the methods are complete and to assess how sensitive the outcome is to the methods. This can help researchers develop more robust methods – so it’s a win-win.

Being an open research advocate and practitioner can have a positive impact on your profile and career. Marlène gave examples of reviewer comments where her open research practice was acknowledged: “It is good to see plans for dissemination of raw data which can progress the field much beyond the aims of the grant.”

Like Hugh, Marlène stressed the importance of incremental steps in realising the full benefits of open research and changing the research culture. Normalise open research through practice. It would be great to see more reward and recognition of open science practice!

Themes from the Question and Answer session

Equality and access

“Do the standards of Open Research (like FAIR) factor in low-income countries/labs?”

It’s difficult to change the culture without the supporting infrastructure in place, but there may be small and practical wins. Training and open source tools may support researcher engagement and help build supportive communities. We could top-slice a proportion of funding to support better infrastructure (repositories, standards) – in order to travel on the road, you need to build it! We also need to consider data ‘justice’ – for example, data being generated in a low-income country and then being stored and analysed in a high-income country (for more information, see the CARE Principles for Indigenous Data Governance).

How to disseminate data

“What’s an appropriate dissemination route for a relatively raw dataset which hasn’t been processed/analysed?”

Various options were discussed and opinions varied. If data are accessible and findable in a repository with appropriate documentation, publishing something like a ‘Data in Brief’ or data paper doesn’t necessarily add value. However, you might consider whether appearing in a venue where people are actively looking for data will help researchers find your work – a data paper or Data in Brief may therefore have a role in promoting the data. The questioner had spotted a paper in a leading journal which was effectively ‘look at our new data’ without much analysis around it. Perhaps this indicates the direction of travel?

Do we emphasise “the data under the paper” too much, not treating data as a primary research output in its own right?

Practice varies widely across research communities. Data journals and software journals are becoming more common. We have standards for data citation (for example, the recommendations from the Research Data Alliance). Hugh stressed that data and software should appear in the bibliography alongside all the other resources cited in the paper, so that contributors get appropriate credit and citations for their outputs. Marlène acknowledged that requirements to share data – for instance, data that underpin a paper – are a driver for data sharing; on the other hand, this type of requirement can encourage poor practice (e.g. sharing only the mean values and error bars from a published figure), which misses the main rationale and benefits of sharing the data.
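
To make this concrete, here is a purely illustrative sketch of the elements that data citation recommendations typically ask for – creators, year, title, repository, version and a persistent identifier. The names, title and DOI below are placeholders, not a real dataset or a mandated format:

```python
# Purely illustrative sketch: assembles the elements commonly recommended for
# data citation (creators, year, title, repository, version, identifier) into
# a reference-list entry. All names, titles and the DOI below are placeholders.

def format_data_citation(creators, year, title, repository, version, doi):
    """Return a human-readable reference-list entry for a dataset."""
    author_list = "; ".join(creators)
    return (f"{author_list} ({year}). {title} (Version {version}) [Data set]. "
            f"{repository}. https://doi.org/{doi}")

print(format_data_citation(
    creators=["Researcher, A.", "Colleague, B."],  # placeholder creators
    year=2021,
    title="Example joint biomechanics dataset",
    repository="Example Data Repository",
    version="1.0",
    doi="10.1234/example",  # placeholder identifier, not a real DOI
))
```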

Small, incremental steps? What would they be?

  • Make your data management plan meaningful and make it work for you – don’t see it as a tick-box exercise.
  • Adopt consistent file-naming conventions.
  • Create metadata for your data as you go along (a minimal sketch follows this list).
  • Think about how good documentation can help handovers – e.g. if a member of staff leaves.
  • Think about which repository might be a good place to deposit your data, and ensure you get DOIs for your data and software.
  • For software, use tools like Git to make incremental improvements and release your code fairly early.
  • Encourage your research community to develop a controlled vocabulary by agreeing definitions of key terms.
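
To make a couple of these steps concrete, here is a minimal, hypothetical sketch of recording basic metadata alongside a data file as it is produced, using a consistent file-naming convention. The naming pattern, metadata fields and licence shown are illustrative examples rather than a prescribed standard:

```python
# Minimal, hypothetical sketch: write a data file together with a sidecar JSON
# metadata record using matching, consistently named files. The naming pattern,
# metadata fields and licence are examples, not a prescribed standard.
import json
from datetime import date
from pathlib import Path


def save_with_metadata(data_dir, project, description, creator, csv_text):
    """Write a CSV data file and a sidecar metadata record with matching names."""
    data_dir = Path(data_dir)
    data_dir.mkdir(parents=True, exist_ok=True)

    # Consistent naming: <project>_<YYYY-MM-DD>_v01
    stem = f"{project}_{date.today().isoformat()}_v01"
    data_path = data_dir / f"{stem}.csv"
    meta_path = data_dir / f"{stem}_metadata.json"

    data_path.write_text(csv_text)
    meta_path.write_text(json.dumps({
        "title": description,
        "creator": creator,
        "created": date.today().isoformat(),
        "files": [data_path.name],
        "licence": "CC-BY-4.0",  # example licence choice
    }, indent=2))
    return data_path, meta_path


# Example usage with placeholder content
save_with_metadata("data", "jointmodel", "Example processed measurements",
                   "A. Researcher", "specimen,stiffness\nS01,12.3\n")
```

A small record like this, created as the data are generated, makes handovers easier and gives you most of what a repository will ask for when you come to deposit the data and mint a DOI.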