Open Research Advisor Nick Sheppard reports on this virtual conference that brought together more than 100 delegates from 13 countries across the globe. This post covers day 1 with sessions on Incentives and Improving Research Culture. Future posts will discuss sessions on days 2 and 3. Full programme here.

Day 1 – Keynote

As a journalist himself, Michael Blastland knows bad journalism when he sees it and is concerned that science is all too prone to similar issues, warped by humanity’s “rage to conclude”. His keynote set the scene for many of the fundamental issues explored during the rest of the conference, notably research evaluation that relies excessively on metrics, often lacks nuance, and emphasises storytelling and problematic notions of ‘impact’.

Incentives

Indeed, the first session focussed on incentives. Sarah de Rijcke considered how metrics are interrupting good science, with scientists, for example, prioritising the first-author papers required for tenure over good-quality co-authored papers. Sandra Schmid then discussed how the impact factor can influence reproducibility, owing to the perception that papers in high-impact journals need to be “complete and definitive”.

With metrics the dominant way of attributing worth, many researchers inevitably “think with indicators”, which affects the types of work they consider viable and interesting.

Grace Gottlieb described an online training course on research transparency being developed at UCL. Researchers who complete the course will receive a badge for their webpage or email signature, linking to an open science profile and allowing them to signal a commitment to open research. The badges are based on the Open Science Badges from the Center for Open Science.

Finally for the first session, Merle Marie Pittelkow asked which studies are worth replicating, proposing new ways to justify replicating some studies over others.

An online Q&amp;A with all speakers demonstrated just how effective and discursive an online conference format can be in the age of Covid-19. It’s also worth reviewing the conference Twitter hashtag, #RRTS20.

Improving Research Culture

Study replication was a theme picked up in the next session by Olavo Amaral, who presented a really interesting initiative from South America: the Brazilian Reproducibility Initiative, which is seeking to reproduce a sample of 60–80 experiments across participating laboratories in Brazil. Each experiment will be reproduced in three different laboratories, with preregistered protocols developed to be as close as possible to those of the original study.

You can read more at the links below:

Brazilian biomedical science faces reproducibility test (Nature news)

Two years into the Brazilian Reproducibility Initiative: reflections on conducting a large-scale replication of Brazilian biomedical science (MetaArXiv preprint)

Anne-Marie Coriat from Wellcome also picked up themes from the keynote, emphasising that we shouldn’t take trust in science for granted and that the system prioritises quantity over quality. The well-publicised Wellcome survey from last year makes for sobering reading, with researchers calling out unhealthy competition, bullying, a focus on metrics, and pressure to publish and to produce positive results.

Wellcome provides a “Café Culture” kit to facilitate a discussion around research culture in your institution.

Next up was Peter McQuilton demonstrating FAIRsharing.org, a community resource that interlinks community standards (metadata schemas and vocabularies) with databases, repositories and data policies (from funders and journals) across disciplines. Naturally we are proponents of the FAIR principles (Findable, Accessible, Interoperable, Reusable), and I’ve now contributed a record about the Research Data Leeds repository.

In the final talk of the session, Hannah Sonntag presented SourceData as a starting point for bridging publishing and open science infrastructure. The platform enables collaboration via “smart figures”: self-contained packages with a human-readable version of the figure (illustration and textual description) as well as machine-readable metadata and direct links to repositories holding the raw data. The resulting graph database is linked across common elements and already includes 43,000 experiments comprising half a million entities.
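To make the idea more concrete, here is a minimal illustrative sketch (in Python) of what such a self-contained “smart figure” package might bundle together – a human-readable description alongside machine-readable entities and repository links. The field names are purely hypothetical and are not the actual SourceData schema.

```python
# Illustrative only: hypothetical structure for a "smart figure" package,
# loosely based on the description above, NOT the real SourceData format.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SmartFigure:
    title: str                                            # human-readable figure title
    description: str                                      # textual description of the figure
    image_url: str                                        # link to the illustration itself
    entities: List[str] = field(default_factory=list)     # machine-readable tagged entities
    raw_data_links: List[str] = field(default_factory=list)  # direct links to raw data repositories

# Example usage with placeholder values
fig = SmartFigure(
    title="Example western blot",
    description="Protein X expression under conditions A and B",
    image_url="https://example.org/figures/fig1.png",
    entities=["Protein X", "Condition A", "Condition B"],
    raw_data_links=["https://example.org/repository/dataset-123"],
)
print(fig)
```

Because each package carries its own entities and data links, figures like this can be joined on shared elements, which is essentially how the graph database described above can link experiments together.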

Hannah also highlighted the need for ‘smart’ authoring tools to engage authors earlier in the research process. SDash, currently a pilot, is an example of such a tool, helping authors create smart figures as part of their research workflow.

Throughout day one, poster lightning talks provided a quick-fire overview of a range of fascinating projects:

Octopus – “a bit like a pre-print server but you don’t have to publish a full paper”

protocols.io – a secure platform for developing and sharing reproducible methods

Research Square – a novel take on a preprint server that “lets you share your work early, gain feedback from the community, and start making changes to your manuscript prior to peer review in a journal”

ripeta – focuses on assessing the quality of reporting and the robustness of the scientific method