Open Science is often equated with open access to articles and, increasingly, to data and software. It is also seen as driven primarily by the Biomedical and Life sciences and as having little real relevance to the Humanities, Arts and Social Sciences (HASS). But Open Science is about the nature of scholarship in the 21st Century. It is as much about process and practice – the ‘where, how and when’ – as it is about the ‘what’. With increasing concerns about the reliability of research, this session focuses on the ‘how’ of scholarship. If good practice leads to good research, how do more open methodologies improve that practice and provide benefits – not just for individual researchers but for research and scholarship itself – and what relevance, if any, does this have to HASS …?

Speaker abstracts:

Anita Bröllochs, Head of Outreach, Protocols.io
Abstract: ‘Reproducibility – the methods behind the data’
Research papers and protocol organization in private labs and companies often lack detailed instructions for repeating experiments. protocols.io is an open access platform for scientists to create step-by-step, interactive and dynamic protocols that can be run on mobile or web. Researchers can share protocols with lab mates, collaborators, the scientific community or make them public, with ease and efficiency. Real time communication and interaction keep protocols up to date with versioning, forking, Q&A, and troubleshooting. Public protocols receive a DOI and allow open communication with authors and researchers to encourage efficient experimentation and reproducibility.

Adrian Aldcroft, Editor-in-Chief, BMJ Open
Abstract: ‘Open publication of methodological aspects of research’
BMJ Journals aim to increase transparency in the research process by publishing articles related to methodological aspects of research, such as study protocols and registered reports. I will present a broad perspective of what we are trying to achieve with these article types along with a discussion of how they fit into our journal workflow.

Rik Peels, Assistant Professor in Philosophy, Vrije Universiteit Amsterdam
Abstract: ‘The Need for Replication in the Humanities’
In the debate on replication, all attention so far has gone to the sciences, such as the biomedical sciences, the social sciences and psychology. However, one may wonder why we should confine the need for replication to those fields. In my talk, I defend the claim that replication is also possible and desirable in the humanities, since many humanistic disciplines have a radically empirical orientation. I draw lessons from the replication crisis in other fields and explain how the drive for replication in the humanities should take various unique features of the humanities into account.

Download slides:
Parallel 1b – Anita Brollochs

Chair

Catriona J MacCallum
Director of Open Science, Hindawi Ltd
Catriona MacCallum is Director of Open Science at Hindawi. She has 20 years’ experience in scholarly publishing and 15 years in open access publishing. She joined PLOS from Elsevier in 2003 to launch PLOS Biology as one of the Senior Editors and left PLOS as Advocacy Director in 2017. She is currently a member of the European Commission’s Open Science Policy Platform and the UKRI Open Access Practitioners Group. She is on the steering committee of DORA and is a founding individual of the I4OC campaign.

Speakers

Mr. Adrian Aldcroft
Editor-in-Chief, BMJ Open
Adrian Aldcroft is the Editor in Chief of BMJ Open, an open access medical journal that promotes transparency in the publication process by publishing reviewer reports and previous versions of the manuscript alongside the article. Adrian has worked as a staff editor on PLOS ONE and as a Senior Executive Editor on the BMC Series. Prior to working in publishing, Adrian spent several years at The University of Western Ontario mapping functional areas of the brain using fMRI.

Anita Bröllochs
Head of Outreach, protocols.io
Anita Bröllochs is the Head of Outreach at protocols.io where she is working closely with the scientific community to help make research more reproducible. She has a strong passion for open access and tools that help accelerate scientific discoveries. Anita studied Life Science Engineering at the Friedrich-Alexander University Erlangen-Nürnberg, Germany. Her background is in optical clearing of skeletal muscle tissue for multi-photon microscopy.

Dr Rik Peels
Assistant Professor in Philosophy, Vrije Universiteit Amsterdam
Dr Rik Peels is Assistant Professor in Philosophy at the Vrije Universiteit Amsterdam, the Netherlands. His primary research interests are the ethics of belief, ignorance, scientism, and various issues in the philosophy of religion. He wrote Responsible Belief: A Theory in Ethics and Epistemology (Oxford University Press, 2017), edited Moral and Social Perspectives on Ignorance (Routledge, 2017), and co-edited The Epistemic Dimensions of Ignorance (Cambridge University Press, 2016) and Scientism: A Philosophical Exposition and Evaluation (Oxford University Press, 2018).

Transcript
[00:00:00.00] [MUSIC PLAYING] [00:00:16.88] CATRIONA MACCALLUM: I think those are people coming from that last session. So given the time, we’re just going to start. We’ve got 45 minutes. I’m not going to speak much. What I would like to say is I think that open peer review and open methods feed into open science. The common denominator is research integrity and transparency.
[00:00:40.91] And part of this is about, I think, not just increasing the way we might be able to replicate or reproduce, even in principle, the results of others, but it’s about increasing accountability of science, science in its broadest sense. So I’m using the European term to mean the humanities here as well. And it’s about increasing that public trust in what academics and the Academy writ large are doing. And I think this goes well beyond openness, per se, but speaks essentially to what the values of 21st century scholarship are.
[00:01:26.87] So I’m absolutely delighted to be able to chair this session. And thank you to the organizers. We’re going to start with Adrian Aldcroft, who’s from BMJ Open. And he’s going to talk a little bit about their work on both protocols and registered reports. Anita Brollochs is going to talk about Protocols.io, a platform where authors can independently deposit any structured method, with or without an associated article.
[00:01:56.06] And then Professor Rik Peels from the Vrije Universiteit Amsterdam, who– I have to say, I’m really excited about this, because I come from STEM, the life sciences. And I found – and I’m sure it’s been going on for many, many years – that this is a very, very active discussion in the humanities, which is fantastic, because so much of open science and open access appears to be led by the STEM subjects. And I think it’s absolutely fundamental to all subjects across the humanities, arts, social sciences, science, technology and medicine. So without more ado, Adrian.
[00:02:46.87] ADRIAN ALDCROFT: Thanks very much, Catriona. So I’m going to be talking about open publication of methodological aspects of research. And before I go into what BMJ is doing to improve the reporting of methods, I want to talk about why it’s important that we do this. The first is that we want to increase transparency. A really important part of research is that you say what you’re going to do before you do it. But we know that researchers aren’t always honest about this, and it has caused a huge problem of poor research. So we want to improve this.
[00:03:20.80] We also want to decrease publication bias for positive results. I think everyone here knows that publishing negative results is really important. It helps reduce research waste. And in the case of clinical trials, it can even save lives. We want to reduce selective presentation of data analysis. We want researchers to be honest about what they find. We don’t want them to tell a story to try to get published in the highest impact journal.
[00:03:45.43] We also want to distinguish between hypothesis-generating and hypothesis-testing analysis. So when a research article comes into a journal like BMJ Open, if there is no protocol, it’s really hard to tell whether it’s an exploratory analysis – whether they’ve found something and then structured the research around it – or whether they’re testing a hypothesis. And the validity of the analysis is really dependent on the order in which you do things. It’s never clear.
[00:04:12.68] But if I were to summarize why we want to do this, I would say it’s because it adds a layer of trust between authors and journals. So what can we do to improve this? I’m going to talk about BMJ Open, where I’m the editor – the broad open access general medical journal from BMJ. We’ve been publishing study protocols since our launch in 2011. And I’m briefly going to touch on our preclinical journal, BMJ Open Science, which considers the publication of registered reports.
[00:04:43.62] So first, looking at study protocols, there are a few considerations if you do want to publish study protocols. One – by no means the most important – is that the impact factor could take a hit. Study protocols are included in the calculation of the impact factor. And you might think that a study protocol will be cited exactly once, when the research article is published. That’s not quite true. We do have a number of study protocols that are cited quite well. But on the whole, they are cited less than the research article, as you’d expect.
[00:05:15.31] What do we mean by study protocol? It probably sounds intuitive, but actually, it’s not. If you look at a clinical trial protocol, it can be hundreds of pages long, really, really detailed, really specific to the study. A reader is probably not interested in that detail. What we publish as study protocol is probably better described as the research article without the results, so still a lot of detail, enough to reproduce it, but missing some of the real specifics.
[00:05:43.77] And there are a number of considerations when it comes to the peer review of the protocol. The first one that we find is quite practical. Some reviewers just don’t realize that journals publish study protocols, so we’ll get a peer review report saying, this is interesting, but where are the results? So we have to put a number of warnings or information sheets into the manuscript saying we know this is the study protocol, this is why we think it’s important to consider it, and this is what we’re looking for in the peer review.
[00:06:11.31] Timing is everything with study protocols, because you need the final study design to be in place and the ethical approval to be complete. But the data collection shouldn’t be complete, because otherwise, at the analysis stage, you don’t know whether the authors are doing what they planned to do from the very start. So sometimes you have quite a narrow window between when the study starts and when the data collection is complete. And it’s really important that you at least start the peer review process before it reaches that stage.
[00:06:41.64] Philosophical question, what type of review is required? Some people think you can be a little lighter with study protocols, because authors just have to say what they’re doing. But then some people might say you actually need to be really thorough. Because this is the foundation upon which the research that comes after is built, you need to be actually very strict about what the analysis plan is.
[00:07:03.37] And then there’s another issue in terms of fast tracking protocols that were peer reviewed by a funder. I used to work at BMC. And at BMJ, we do the same thing sometimes where, if authors can provide proof that a funder peer reviewed the protocol, then we can just accept it because we don’t think we need to duplicate the effort. And there are advantages and disadvantages to this.
[00:07:25.17] The obvious advantage is it’s fast. All the authors have to do is provide some documentation that it’s been peer reviewed, and you can just accept it. So it’s efficient as well. It reduces the duplication of effort inherent in research and peer review. Funders have already reviewed the study. Does the journal really need to do any more? And authors and publishers are happy because it’s really quick and easy and efficient for both authors and publishers.
[00:07:52.91] But there are a number of disadvantages, which I think are really important to take note of. The first is you need to trust somebody else, which is always a bit scary, because you don’t know what their process is, what’s gone into it. It is based on trust. And if you don’t have a formal relationship with that funder, then you really are reliant on them.
[00:08:15.06] You generally won’t know who the reviewers are and if they had a conflict of interest. I’ve yet to see a funder who does open peer review and conflict of interest declarations. I imagine the people overseeing the peer review of the funding checked for these things, but we don’t know. Funders and journals are often looking for different things. We sometimes see peer review reports from funders that say, this is a great group, this could have a big impact, but don’t actually comment on the study design, which is really what we want to see.
[00:08:45.06] Another question, which I don’t know the answer to but I’m assuming the answer is no, is the study protocol sent to the journal identical to the one reviewed by the funder? Funders and journals have different formatting requirements and, again, are looking for different things. It could be a completely different document that was peer reviewed by the funder than the one they’ve submitted to the journal.
[00:09:03.22] And peer review systems are often not compatible. One of the core principles of BMJ Open is that we do open peer review and we publish the peer review reports. We don’t want to just throw that away and fast track all of the study protocols that come in. So with any protocol for a drug or device which could really impact clinical patient care, we do our own peer review, because we don’t know whether there are competing interests.
[00:09:27.97] And the last disadvantage is that if you’re reliant on someone else’s peer review, it’s more difficult to review the research article if it does come in. Otherwise, you could just use your own peer reviewers, and you know who those people are – which I’ll get into a bit more later.
[00:09:44.25] Plans and opportunities for study protocols, I’d like to work with funders to consolidate our peer review processes. So that can mean that, if you talk to a funder, you say, we do open peer review, can you do the same thing? Then once they’ve peer reviewed the study protocol to gain funding, we could just publish it, because we’re using the same system.
[00:10:05.16] And I’d also like to make it easier and also encourage authors to submit the research article if we’ve peer reviewed the study protocol. And in that case, reviewers would just have to check that the authors have done in the research what they said they were going to do in the protocol, which might trigger some people to think of another hot topic in publishing, which is registered reports.
[00:10:25.76] And here’s a really vague, broad description of what registered reports are. So authors design their study, it goes to a journal, they do a peer review, then they actually perform the study. And then it undergoes peer review again, and the final report is published. This is gaining a lot of traction and attention. It’s seen as new and innovative. When you actually break it down, it’s quite similar to what we’ve been doing with study protocols since 2011.
[00:10:51.83] So how are they different? In many ways, we’re trying to accomplish the same goal with protocols and registered reports. The aim is methodological transparency. We want to reduce publication bias. We want to reduce dishonest methods in research like p-hacking and HARKing. The differences are in when the study protocol is published, and whether it’s published or just posted.
[00:11:15.30] So with protocols, authors get two different publications, the protocol and the research article; with registered reports it’s slightly inconsistent amongst publishers, but often both will be published at the same time, and it’ll just be one publication. The registered report model is quite rigid – you don’t completely lock yourself into one publisher; you could go elsewhere with the research.
[00:11:39.33] But there’s really an incentive to stay within that publisher, which could be a real advantage to the publisher. But to the author, it means they have less flexibility in publishing their research where they might want to publish it. And again, this could be both a positive and a negative element. But elements of registered reports can be quite hidden.
[00:11:55.71] With protocols, part of what we want to do is make it transparent that somebody is performing this study so other people don’t do the same thing. The risk is that somebody could scoop you. With registered reports, it stays hidden until the actual final research is published. I won’t go into much detail, but BMJ Open Science considers both protocols and registered reports. And this is the workflow. And I’m just trying to make the point that it’s really complicated.
[00:12:27.31] So it is a different way of doing things from the standard peer review of research articles. And it’s necessarily complex. But in order to really tick all of the boxes of what a registered report should be, it needs to go through this complicated workflow. So in summary: the publication of protocols and registered reports are two ways that publishers can help improve transparency in methods and research; the peer review and publication processes for both study protocols and registered reports vary a lot; and there are opportunities for improvement.
[00:13:02.34] And I think we should start focusing more on collaboration between funders and journals, which could be beneficial for all involved. And some publishers are already working on this and have had some success collaborating with funders and publishing registered reports and protocols. And that’s it for me. Thanks.
[00:13:18.91] [APPLAUSE] [00:13:25.20] CATRIONA MACCALLUM: Many thanks, Adrian. Now Anita’s going to tell us how we can all get involved in this.
[00:13:37.78] ANITA BROLLOCHS: Hello, everybody. My name is Anita Brollochs, and I’m from Protocols.io. And I would like to start with a little story. This is Rocky, and he’s very excited about his first day in a new lab. And he’s about to start his first experiment. But because this is something he has never worked with before, he needs to plan the experiment, and he needs to figure out what he should actually be doing for his first experiment.
[00:14:03.14] So his idea is, oh, I’ll just look at what the previous postdoc did, so I’ll look at his lab notebook. But it turns out the previous postdoc had a paper lab notebook, and nobody in the lab really knows where it is, whether the person took it with him or where it went. So it’s lost, it’s not accessible. He doesn’t know what he should do. So he’s like, oh well, let’s read some papers.
[00:14:27.37] He finds a paper that says, “We did what reference 45 did!” He’s like, oh, that’s great, I’ll check what reference 45 did. And reference 45, of course, said, oh, we did what others did. And he’s like, well, how did the others do this? And he finds the original paper, which is of course behind a paywall. And that says, “We did this with conventional methods.” So Rocky doesn’t really know what he should do.
[00:14:52.57] And this is not really a new issue. Reproducing somebody else’s research is often a problem. And a lot of times, the reason for this is that we are lacking the details of the methods, and methods are not communicated very well. And so at Protocols.io, our mission is to make it easier for scientists to share all the method details before, during, and after publication.
[00:15:16.75] And how we do this is– so the methods on Protocols.io are not static PDFs. They’re interactive and dynamic. And here, you can see what a protocol typically looks like. There is a lot of stuff going on. Down here, you can see the start of the step-by-step description of the protocol. So the actual method is not one long text. It’s single, easy-to-follow steps.
[00:15:42.85] And then on the right hand side, you can see a commenting functionality so everybody can comment on the methods, either at the protocol level, if they have a suggestion, comment or question on the protocol as a whole, or you can also create a comment on a specific step if you have a question about a step. And then everybody who uses this protocol will get a notification saying that there is a comment. And then the discussion happens right there on the protocol.
[00:16:05.26] And then also, every public protocol gets a DOI, as you can see there. And that makes the protocol citable. And also, one important thing is that every person can create forks of a protocol. So if somebody finds a protocol that they use in a modified version, they can create their own version of that person’s protocol. And the person who published it originally can always create a new version of the protocol. So if they create it and publish it, and then maybe keep working on the method, or some optimizations or corrections need to be made, they can always keep it up to date with versions.
[00:16:39.99] But there is one thing– or there are two things. Even if you do have the platform and all the tools to share all the detailed methods, there are two questions that I hear a lot from people. And one is, well, I used this method, and it’s great and it works, but I didn’t develop it, so I don’t want to share the details, because I’m not the right person to share this. And that’s a question a lot of people have.
[00:17:02.80] And another one is, well, I don’t know when it’s the right time to share it. Should I share my method after I’ve tried it once and it works? Do I do it 1,000 times, 100 times? When is the right time to share it? And regarding the first question – can I share it? – I’m not a lawyer, so I cannot give any legal advice. But if there are any lawyers in the room, I’d be happy to discuss this.
[00:17:25.96] But this is a very interesting page I found on the US Copyright Office site. They have a whole page on works that are not protected by copyright. And there’s an entire section on methods. And it turns out, actually, methods apparently are not protected by copyright. I mean, of course, if you’re copying the images or discussion parts and all that, that’s a different story. But the method part itself, the recipe for how to do something, according to this, is not protected.
[00:17:52.36] But still, even if you would be legally in the clear to do this, there’s also the question of ethics. Is it ethical to share somebody else’s method? And there’s this really interesting phrase that the Committee on Publication Ethics published. They say, “Use of similar or identical phrases in method sections where there are limited ways to describe a common method, however, is not uncommon.”
[00:18:19.28] So I think a lot of times, people try really hard to make something sound different, even though, I mean, if this is what you use, if this is the method you use, just say this is what you use. There’s no reason really to make it more complicated than it already is. But on Protocols.io, we solve this problem by also allowing you to differentiate between the contact and the author.
[00:18:43.47] So if you find somebody else’s protocol, you can list them as the author if you don’t feel like you’re the person who should be the author of this. And you can be the contact for this protocol. That means you will be the person who receives the notifications and all these things. And once the protocol is published, the author is a fixed field. So that can never be changed. But the contact can always change.
[00:19:05.99] That also means, for example, that if you are sharing a protocol and you’re leaving the lab, you can always transfer the contact ownership. So there’s the difference between the author and the contact. And for example, what we also see a lot is that if a protocol was gradually developed in the lab and nobody really knows who the author actually is, because it was the lab as a whole, you can also put the lab as the author. So that’s an option, too.
[00:19:31.73] So when to share protocols? So I think there’s not really a right answer to when is the perfect time to share a protocol. But I personally think the earlier you share, the better it is. But on Protocols.io, we leave it up to you to decide when you want to share it. And when you are publishing a protocol, we ask you to say what is the status of this protocol.
[00:19:52.56] So you can decide between working, in development, or other. And you can also add the details to what that means to you, like is this something that worked for you for years, or is this something you tried and it worked for now? Or you can say, this is something I’m still developing. Or you can even say, this is something I’m planning to do. So you can get feedback from the community early on.
[00:20:14.90] Or you can also say, like this for example, they say we attempted this, we used this set of protocols, but it didn’t work. So you can also say, this is what we used, and it didn’t work. So we really leave it up to you to say what is the status of the protocol and how confident are you about this method.
[00:20:35.03] And one thing I wanted to share with you that I think is very exciting, one thing that can happen when all the protocols are in one central repository and one central place where they are accessible– here’s an example from Twitter. Somebody was looking for someone with RNA extraction protocols for cortical neuron cultures. And then there was a thread.
[00:20:56.54] And then in the end, someone suggested a Trizol protocol that is on Protocols.io. And the interesting thing is, if you look at where that protocol actually came from, it came from a stickleback fish parasite protocol. So it has no relation at all to cortical neuron cultures. And I think that’s really exciting.
[00:21:16.76] If we have all the protocols and methods in one central repository where they’re all accessible, you might enable somebody else who is not necessarily from your field to find awesome methods that would be helpful for them. Because I don’t know if that person would have actually come up with the idea to look into a stickleback parasite protocol to find the method. So it’s really exciting what can happen.
[00:21:37.52] And I just want to finish with that I’m very excited that more and more organizations are encouraging the use of Protocols.io. That includes journals and publishers. There are more than 500 journals now that have actually added us to their author guidelines, just to have their authors produce more reproducible papers by reporting the exact method details. And also funders are requiring or recommending Protocols.io in their grant guidelines, which is really exciting.
[00:22:08.49] And my last slide is just our adoption. As you can see, we’re growing, which is really exciting. We’re growing at about 1,000 protocols every month now. And this is it for me. Thank you very much for your time.
[00:22:22.30] [APPLAUSE] [00:22:29.54] CATRIONA MACCALLUM: So we’re just hanging on, and we’ll have all the questions at the end. So much of this has been driven by the biomedical community. And really, does it have anything to do with the humanities? And Rik Peels from the Vrije University in Amsterdam is going to tell us if it does.
[00:22:47.00] RIK PEELS: Thank you. All right. Thanks to the organizer, and thank you in particular to Catriona for chairing this session. I made a short handout. And I’m starting to think that I might be the only one at this conference having a handout. But then I thought, well, I’m a philosopher, so I might get away with it. Some people would call it old fashioned. And someone suggested to me, well, call it retro. So I made a retro handout. And I’m going to walk you through it.
[00:23:16.45] All right. The main thesis is basically this: replication and replicability – that is, what is needed in order to do replication – is a desideratum for all empirical studies in the humanities. That’s the basic thesis I’m going to defend. So I want to exclude at the outset what you could call deductive studies, the studies that use merely a priori reasoning from intuitions – fields like ethics, epistemology, metaphysics, certain parts of argumentation theory. I don’t think it works exactly the same way there. So I’m focusing on the empirical humanities. All right. That’s the main thesis. Now, the definitions.
[00:23:54.44] First off, what is a replication study? A replication study is basically a study that is an independent, in a particular sense, repetition of an earlier published study using methods that are sufficiently similar and in circumstances that are sufficiently similar. And I take replicability to be the features of a study that make this possible. So it should have certain features in order for it to be possible in the first place to carry out a replication study.
[00:24:22.19] What do I mean by the humanities? Well, that’s a large field. I’m thinking of classics, history, archeology, linguistics, law and politics, [? digital, ?] the study of the performing arts, the visual arts, philosophy, theology, religious studies, and a few others. And philosophers like to discuss what exactly counts as humanities and what does not. I’m not going to delve into that. So this gives you a rough idea.
[00:24:44.87] Finally, what is it to be a desideratum? Well, basically, something that is desirable. It’s a virtue of a study if it is replicable, as it is a virtue of theories, for instance, if they have predictive power, coherence with background knowledge, simplicity, explanatory scope, internal consistency, and so on. These are all things we strive after. And so I suggest we should strive after replicability in the humanities.
[00:25:09.46] All right. Those were the definitions. Now, three kinds of replications in the humanities and in all academic fields. First, there is what is sometimes called a reproduction. That is a replication with existing data and the same research protocol. So that’s why it’s called a reproduction, or sometimes repetition. So we repeat it using the original data, same protocol.
[00:25:30.80] Second, you can use new data but the same research protocol. That is often referred to as a direct replication. And third and finally, there is conceptual replication, where you not only use new data but also new research protocols. So you use, for instance, new methods.
[00:25:44.96] So my qualified thesis is this– some kind of replication, so either number one or two or three or a combination of those, is a desideratum, something valuable, something to be desired, for all empirical humanistic studies. All right. Now the argument. And you may wonder, why an argument? Isn’t this obvious? Well, to many scholars in the humanities, this is not obvious. There is a raging debate these days.
[00:26:11.96] So let me start with an argument, and then I’ll get back to the raging part. All right. Here’s the argument. First, there are certain facts and certain truths about the objects of inquiry. So the objects of inquiry are, say, objects of art or historical facts, certain texts that have a particular meaning, certain human artifacts and their relations to people or a specific individual human being. Those are the objects that the humanities study.
[00:26:40.24] Well, there are facts, there are truths about those objects. And I would say – and I can argue this in some more detail – that the object of the humanities is to acquire knowledge and understanding and insight into these facts, into these truths. Some people might say there is way too much disagreement in the humanities. But I would say that disagreement, in fact, entails that there is truth out there and knowledge to be had.
[00:27:04.31] So it’s not a matter of you like chocolate and I like vanilla – that’s a matter of taste. There’s true disagreement. And if there is true disagreement, there must be something on which scholars in the humanities disagree. All right. Now, how do we come to know those truths? I suggest by systematic academic inquiry.
[00:27:22.07] Third, you can come to know something only if you can return to that fact. So it should be a reliable process. Of course, you can hit upon it by accident – that’s the context of discovery – but in the context of justification, you should be able to return to it time and again. Fourth, the more often you arrive at that truth, preferably by way of different methods, the more reliable it becomes, so the more you can trust the particular results in question.
[00:27:48.78] And the final step is this– replicability is the desideratum that makes that possible. So if the study is replicable, other people can arrive at that same truth using that method or maybe another method. It follows that replicability in the humanities is a desideratum. And I would like to stress that there is an extra urgency these days, because there are lots of questionable research practices, as we all know.
[00:28:12.38] And there is some evidence, maybe not enough yet, but there is evidence to think that those also widely occur in the humanities, including, for instance, a certain publication bias. There’s also sloppy science. So no strong violations of any norms, just sloppy science. You forget something, you overlook something, that can happen all the time. These are just human errors and also human epistemic and moral wrongs that occur. And there’s no reason to think that that would not occur in the humanities, no principled reason.
[00:28:44.21] So that gives extra urgency to the matter of replicability. So that’s the a priori argument, an argument from insight into what it is to know something and what the humanities are. But you could add an empirical argument to this. And I will be doing this over the next three years. So I will be replicating, with a team at the Vrije University in Amsterdam, several studies in economic history, for instance, part of the Rembrandt project, and a few other things. So we will also show it to be the case. And that might even be more convincing to some people.
[00:29:14.05] All right. Now a few objections. So as I said, there has been a whole debate about this since we and a couple of other people raised the issue – you will find a few references at the end of the handout. Alongside us, [INAUDIBLE], Britt Holbrook, and a few others have been involved in this, as you will see on Twitter as well and also in the references that I mentioned.
[00:29:32.27] What kinds of objections? Here’s one. The object is unique. For instance, there is one novel To the Lighthouse by Virginia Woolf, whereas there are lots of H2O molecules that you can study, for instance. My reply is that that is true but irrelevant. Scientists also study unique objects, like the whole of spacetime, for instance.
[00:29:55.04] And in the humanities, you can return to various kinds of multiple objects. So you can compare, for instance, Virginia Woolf’s work with that of others or make a comparison between different novels that she wrote. What matters is whether you can return time and again to that single object, say To the Lighthouse, to that particular novel, maybe using different methods. So the unicity is irrelevant.
[00:30:19.43] Second, some people say, well, this is copying from the natural sciences. This is scientism. We should keep that outside the door. I reply: of course we should be careful, but nothing of what I said relies on the way the natural sciences do this. It has to do with the nature of knowledge and the nature of what inquiry in the humanities is. So that objection is not entirely convincing either.
[00:30:43.64] Some people say, well, what about biographies or other grand theories? I say, well, maybe replicating the whole thing is difficult, but you can replicate particular results, particular ideas from biographies or large studies. Doesn’t the subject inevitably play a role? I say, yes, that’s the case, but then, that’s by the way also the case in the natural sciences.
[00:31:05.90] So I would say that what that means is that you should be transparent about certain untested assumptions that play a role. And that’s also the case for normative assumptions, the other point. So when you do inquiry that involves normative assumptions, you should just be open and transparent and explicit about that so that others can either replicate it or say, I can’t replicate it, because I don’t share this particular normative assumption.
[00:31:31.49] Finally, isn’t there a wide variety of methods in the humanities? Yeah, that’s definitely true: conceptual analysis, literature analysis, historical analysis, constructivism, the Socratic method, and so on. But that’s, of course, also the case for the natural sciences. There’s an even wider variety of methods there. And that doesn’t impede replicability and the value of replication. So I think the case stands.
[00:31:56.15] This brings me to the conclusion. What should we do? I think we should share information about the original studies and the replication studies. For instance, as you all know, there’s this thing about preregistration going on. I see no reason why you couldn’t do that for many studies in the humanities. You can just preregister: formulate the hypotheses, which data you’ll use, and so on. So we could do that.
[00:32:16.95] So one big thing, one great thing we can do is learn from the debates in the biomedical sciences – and ‘we’ here means the scholars in the humanities. Second, know when and how to perform a replication study, so you can, for instance, assess the expected overall costs and benefits. You can educate people – especially young scholars, but also somewhat more senior scholars – about the value of replicability and actually carrying out replication studies. And finally, of course, present various kinds of incentives. For instance, journals can do such a thing, and so can funding agencies.
[00:32:52.55] And finally, I think we should focus on cornerstone studies. Because we need to convince scholars in the humanities that this is valuable and also needed. And if we focus on cornerstone studies, then it might be more successful, because people will see what the ramifications are. Thank you very much for your attention.
[00:33:09.98] [APPLAUSE] [00:33:16.08] CATRIONA MACCALLUM: Thank you very much. If the three panelists would like to come up– that was fascinating. It’s interesting, because in some ways, I think the biomedical community is talking about replication and reproducibility in terms of all the nuts and bolts – how do you do it, where do you put the information – and the first two talks reflect that. Whereas in the humanities – and you are in the philosophy department – it still comes more from a philosophical perspective at one level.
[00:33:51.46] And I think, when you said that – I think most senior researchers actually do need to be engaged in this, because often, they’re the ones that don’t appreciate it. I’m speaking in very general terms here. Whereas I think the younger researchers tend to be able to do that. And having that conversation about assumptions across all subject areas would be very interesting. Anyway, enough of me talking. Are there any questions from the audience?
[00:34:22.43] None? That means I get to ask them all. So I’ll start with Adrian. I mean, BMJ wanted to do this. And BMJ Open does it within the journal. What benefits do you see in having protocols separate from the journal, in the way that Anita is doing? Are there benefits of being with the journal or having some completely independent platform?
[00:34:56.31] ADRIAN ALDCROFT: Yeah, I think by publishing them in the journal you offer all the advantages that publishers offer in terms of peer reviewing them; the authors get a publication out of it, and it’s searchable on PubMed. It becomes more like a regular article. And it’s not just the methods, because it still has the introduction section outlining the rationale, and we also ask them to go into a bit more detail than they would in the research article in terms of the ethics behind it, because that is more central here than when we usually get the methods. We also ask authors to provide a dissemination plan.
[00:35:40.64] So we see it more as sitting somewhere in between, maybe, something like Protocols.io and a trial registry. It provides a bit more detail, a bit more background. Those things are really important as well. I guess the disadvantage is that it has to take the time to undergo peer review. So it’s not instantly posted online and available to everyone; it does take a bit more time, like any other article.
[00:36:10.95] CATRIONA MACCALLUM: Ah, there we go.
[00:36:12.59] AUDIENCE: I have a question.
[00:36:13.57] CATRIONA MACCALLUM: Good. That’s great. Thank you.
[00:36:15.45] AUDIENCE: [INAUDIBLE] University of [INAUDIBLE]. So my question for you is, considering that we take care about the format of the articles in all disciplines, and in some disciplines all articles follow the [INAUDIBLE] format and the methodology is described quite well in all articles, how did we come to the situation where 80% of published research results are not reproducible?
[00:36:56.86] CATRIONA MACCALLUM: That’s a great question.
[00:36:59.90] ANITA BROLLOCHS: I think a lot of times the methods are just lacking so much detail. And all the little details are so important to replicate something. Even where you ordered it from – the vendor – is really important. Sometimes even the lot number is important. Because if you have a different batch or a different lot, the whole experiment could fail, and it doesn’t work anymore. And all the little details are so important.
[00:37:24.35] And usually, materials and methods sections in papers are not really long, and there’s just not so much detail. And I think a lot of times we just leave out the details that are actually important for being able to replicate what has been done. But even which machine you use to shake something, or something like that, is just so important. And I think that’s the reason why we cannot reproduce.
[00:37:50.70] RIK PEELS: Right. So there might be several causes for this. So one might be lack of detail and transparency. And another one might be those questionable research practices. So studies show that about 35% of people are likely to engage in questionable research practices. At least, that’s what colleagues ascribe to each other rather than to themselves.
[00:38:11.15] CATRIONA MACCALLUM: Is this across all disciplines? You’re talking about all?
[00:38:13.71] RIK PEELS: So I think it’s been studied for the biomedical sciences, a few of the big ones.
[00:38:17.45] CATRIONA MACCALLUM: Yeah, I think it’s mostly big ones.
[00:38:19.37] RIK PEELS: I don’t know about philosophy, but it would hold for the biomedical sciences. And if that’s the case, then say, if you do p-hacking, for instance, that might cause trouble, for instance, in trying to replicate a study. So there might be multiple causes for the replicability crisis or crises.
[00:38:36.56] ADRIAN ALDCROFT: The only thing I’ll add is that I think research articles where they’ve already done everything tend to be poorly reported. We know with clinical trials, adverse events are hardly ever reported. But we know that they’re not unusual. And things like p-hacking and just a lack of transparency, and also things like outcome switching or people emphasizing the positive results that they do find in a study. So it tells us to wait.
[00:39:05.93] But those could be due to chance, because they weren’t what they were looking for in the first place, and they’re emphasizing this. And there’s no way for editors and reviewers to know that they’ve switched this at some point, unless they’ve provided a study protocol or their preregistered plan.
[00:39:23.25] AUDIENCE: [INAUDIBLE] [00:39:25.66] CATRIONA MACCALLUM: Yes.
[00:39:26.01] AUDIENCE: –very related.
[00:39:27.97] CATRIONA MACCALLUM: Is there a microphone? Oh, sorry.
[00:39:29.21] AUDIENCE: Sorry. But it’s very important. So do you think that the way research results are published nowadays – that journals are forcing authors to present research results in 10 pages, including the literature review – is actually the reason for most of what you explained? Should we make a step forward somehow? We are in the digital world, but we are still publishing 10-page papers.
[00:40:12.44] CATRIONA MACCALLUM: So how much does the way we’ve become accustomed to communicating research in journal format, small and high impact journals, affect the replicability?
[00:40:25.88] ADRIAN ALDCROFT: I think that’s definitely an element there. So I mentioned in my talk that the study protocols that we publish tend to not be the full, long, 100-page documents, but more like a research article without the results. And we do see a lot of authors submit their study protocol as an article type and then in their supplementary [INAUDIBLE] provide that big long document, which I think is a really good thing, because it is more transparent.
[00:40:51.89] I think the practical challenge is that if you’re asking peer reviewers, for no payment, to review these long documents, they probably wouldn’t be able to do it. So we have to make it digestible. But I think, in this day and age, there’s nothing stopping people from providing the full details in case somebody wants to look at it or needs to look at it.
[00:41:10.47] CATRIONA MACCALLUM: I’m aware of time in the members’ meeting, but perhaps we can just wrap the–
[00:41:14.70] AUDIENCE: I’ll try and keep this quick. I guess to be very brief, my question is about whether there’s money available for this. And Adrian, you showed a very complex diagram. Are you finding that this is adding cost in terms of time to your process and resources as well? And Anita, what’s your business model funding, you finding people interested in funding Protocols.io? I have to say, all three talks were fascinating. I really enjoyed them.
[00:41:45.27] ANITA BROLLOCHS: Should I start?
[00:41:46.06] ADRIAN ALDCROFT: Yeah, go ahead.
[00:41:46.85] ANITA BROLLOCHS: So our business model is that everything that’s on the public side is free. It’s free for everybody to read – all the public content is open access – and it’s free for everybody to publish protocols. But then, if you want to use it for private collaboration – so if you want to create an internal lab group and you don’t want to share your protocols – then it’s a subscription model. So it’s mostly biotechs that pay. And we also now have a campus subscription, so an entire campus can get a campus-wide license for unlimited usage on campus.
[00:42:23.42] CATRIONA MACCALLUM: I think we’d better get going– I would love to ask about the extent of publication bias in the humanities, actually, although we don’t have time, and what evidence there might be, because I think publication bias might be slightly different between the humanities and the sciences. I don’t know if you want to say the last word.
[00:42:39.41] RIK PEELS: Right. But they are still human beings. So there will be, for instance, a greater interest in new research, original research, for instance, and not that much interest in negative results, for instance. So some of the same factors will inevitably play a role.
[00:42:55.07] CATRIONA MACCALLUM: So one of the things that would be great, I think, is to have this conversation more. And I know that all the speakers will be here later today. So if you’d like to speak to them, please do. I know certainly I’m going to grill Rik Peels about the humanities. And I hope this is just the start of this conversation across all the disciplines. Thank you very much for coming. And thank you to all the speakers for taking the time.
[00:43:24.65] [MUSIC PLAYING]