In order to support new models and workflows effectively, processes and systems often need to be updated, replaced or streamlined. Covering different aspects of the publishing process, speakers will share their experiences of developing technology solutions to support business transformation.

Speaker abstracts:

Stephen Wilkes, Product Manager, Royal Society of Chemistry
Abstract: ‘Licence & Payment Portal – an Author Experience case study’
The Royal Society of Chemistry has built an author-centric licensing and payment system that supports our strategic initiatives to invest in and support Open Access (OA) workflows. The system speeds up the entire process for the author and for the first time automatically recommends licence and OA options based on funder and institution preferences (including recognising our Read & Publish institutes), taking the headache away from authors. This presentation will focus on how we deployed Agile product and service development, getting prototypes in front of our authors fast and iterating.

Jonathan Hevenstone, SVP of Business Development, Atypon
Abstract: ‘Technologies for making new business models viable’
Meeting the challenges that open access, piracy, increased submission volume, and “Open Science” pose requires a two-pronged approach that addresses the goals of both researchers and publishers: accelerating scientific communication, reducing publishing costs, and enriching the scientific record.
By offering both the content and tools researchers need, publishers can make the research process easier and more productive for authors, add new value to their websites and brand, keep their content and websites relevant, and reduce time to market and publishing costs.
Such tools, all of which support reproducible science, enable researchers to discover and access relevant content from across the web more effectively; author collaboratively using all the artifacts of research, from references to data sets; easily access needed content across all publisher sites through a single login; publish their research quickly; and integrate all of these services—and content—into a single interface.
These tools would both complement and become components of an end-to-end publishing workflow that includes the tight integration of publishers’ distribution platforms with submission and peer review systems, automated content conversion technology, quality-controlled pre-publication platforms, and other technologies for the benefit of researchers and publishers.

Andrew Smeall, Chief Digital Officer, Hindawi Ltd
Abstract: ‘Finding Business Value in Open Source’
Hindawi is rebuilding our publishing platform and making our new solution open source. Open source can be expensive: when we develop features, we can’t always pick the simplest solution for our needs. We seek feedback from our open source collaborators to ensure our solution is interoperable and reusable for the entire community. Our goal is a modular system where we have the option to mix and match open source and legacy components to provide a variety of solutions.
We see business value in this approach. Building momentum in open source projects is hard, but making our code reusable can help jump start other platform development projects. These new platforms reduce the barriers for new open access journals to come online. Our business grows as open access grows, not only through publications in our journals but also through the publishing partnerships we support as a service provider.
As long as more platforms move towards a shared set of open standards, the research community as a whole will benefit. Open infrastructure in the form of independent, interoperable platforms in loose competition with one another will lead to faster innovation than a landscape dominated by one or two providers. It will remove barriers to data moving between systems and ultimately benefit researchers by standardizing workflows and reducing unnecessary repetition of clerical work.

Download slides:
Parallel 3a – Andrew Smeall
Parallel 3a – Jonathan Hevenstone
Parallel 3a – Stephen Wilkes


Jennifer Schivas
Head of Strategy and Industry Engagement, 67 Bricks Ltd
Jennifer Schivas is Head of Strategy and Industry Engagement at 67 Bricks, a technology company that helps publishers build information products for the data-driven world. Jennifer joined 67 Bricks from the partnerships and innovation team at Oxford University Press and has previously held roles at Taylor & Francis and Intellect Books. She brings practical experience of using technology to address challenges and harness opportunities in a rapidly changing market and a vision of how technology will impact the scholarly ecosystem of the future.


Andrew Smeall
Chief Digital Officer, Hindawi Ltd
Andrew leads Hindawi’s product and technology teams in London. He started at Hindawi in 2011, working on new ventures and product development. He has worked previously on privacy technology at Enliken and as a multimedia producer at Asia Society. Andrew has a B.A. in Chinese from Yale University and an M.B.A. from NYU Stern.

Jonathan Hevenstone
SVP of Business Development, Atypon
Jonathan is responsible for setting and executing Atypon’s market and sales strategies as well as managing the sales and marketing teams. He has 25 years’ experience with publishing technologies and services, including 15 years in a business development leadership role establishing relationships with the world’s largest publishers and online retailers. His areas of expertise include content strategy, publishing workflows, professional services, product marketing, eBook production and distribution, content conversion, and print on demand. Jonathan holds a BA in English and creative writing from Dartmouth College and an MA in English literature from New York University.

Stephen Wilkes
Product Manager, Royal Society of Chemistry
Stephen Wilkes is a Product Manager at the Royal Society of Chemistry, developing the product vision and roadmap to enable the company to deliver a range of digital author services. Stephen has 22 years’ experience in STM publishing, covering journals, databases, product management, project management, business process improvement, and systems improvement.

Transcript:
[00:00:00.49] [MUSIC PLAYING] [00:00:21.22] JENNIFER SCHIVAS: My name’s Jennifer Schivas from 67 Bricks. And I am really pleased to be chairing this session today on Technology in a Changing Landscape.
[00:00:29.58] Ongoing changes in our industry are bringing increasing opportunities and challenges– so for example, in the form of open access, piracy, increasing submission volumes, and open science, leading to a whole range of new workflows and models in response to that.
[00:00:46.15] And because of this, in order to support those new models, our internal systems and processes often need to be streamlined, updated, or, indeed, replaced entirely. So I’m really pleased to be joined today by three fantastic speakers who are going to cover different aspects of the publishing workflow and share their experiences of developing technology solutions to support business transformation.
[00:01:08.16] With me, I have Stephen Wilkes, Product Manager from the Royal Society of Chemistry, Jonathan Hevenstone, Senior Vice President of Business Development from Atypon, and Andrew Smeall, Chief Digital Officer from Hindawi.
[00:01:21.57] Now each of those speakers is going to spend 10 to 12 minutes sharing their experience with us. And that will leave us time, hopefully, for a joint Q&A session at the end.
[00:01:30.90] So without further ado, it’s my absolute pleasure to introduce our first speaker, Stephen Wilkes from the Royal Society of Chemistry.
[00:01:36.85] [APPLAUSE] [00:01:42.80] STEPHEN WILKES: OK. Thank you, Jennifer. Yes. So my name is Steve Wilkes. It’s a pleasure to open this session this morning for you.
[00:01:51.35] So I’m a product manager at the Royal Society of Chemistry. And this morning, if these work, I’m going to talk about a license and payment portal that we have developed, together with a development consultancy partner. And that’s 67 Bricks.
[00:02:06.84] We work with them to develop the license and payment portal. And I’m going to talk about that, how we developed that, but also the methodology we used in that development cycle.
[00:02:15.95] But first off, just to give you a bit of background information about the Royal Society of Chemistry. So we publish 44 journals. Three of those journals are open access. So our largest chemistry journal, RSC Advances is open access, also, our flagship journal, Chemical Science. And we flipped both those journals from hybrids to open access a few years ago.
[00:02:38.64] We also have one journal that’s open access that was born open access. That’s Nanoscale Advances. We publish about 35,000 articles a year. And about a quarter of that content is open access. So last year, it was about 9,000 articles published open access.
[00:02:55.48] But of course, it wasn’t always that way. So back in 2005, we published only 5,000 articles a year. And we published 5 open access articles.
[00:03:05.58] So of course, at that point, license signing and paying for APCs could afford to be manual and spreadsheet-based. And as we grew, our open access content grew 1% or 2% per year.
[00:03:17.95] And again, we could afford to be manual and using spreadsheets and to-ing and fro-ing. But of course, at some point, that was causing delays and potential frustration to our authors, our customers, librarians. And as our Read and Publish deals came on board, those consortia as well, and institutions.
[00:03:36.12] So we really wanted to grow– we had a growing demand for an improved customer service. And we have bold open access ambitions as well. So we want to grow our content to 50% open access by 2025. So that’s 25,000 articles out of 50,000 publications. So of course, we need a flexible robust system to handle that volume of papers that will grow with our open access.
[00:04:01.06] So today’s agenda– so I’m going to whistle-stop tour, just 10 minutes. So it’s going to be a very brief but strategic statement, where we are at the RSC, the desired outcomes for the license and payment portal, what we are trying to achieve. I’m going to talk about the development methodology we used. So Agile, I’ll talk a bit more about that.
[00:04:20.60] Then what we delivered as part of the project, the impact it’s had and, also, the legacy of the Agile project group, the work we’re doing, and how we’ve used that to develop a new submission system and also an article tracker.
[00:04:36.57] So here we have our strategic statement– Digital First Publisher focused on delivering high-quality, accessible, impactful content. And really that’s about re-architecting legacy systems that really were their print-oriented systems that haven’t really evolved into the digital world. So we need to really bring those up to speed.
[00:04:59.11] And we have a program of work called the Future Publishing Platform Program of work, which hopes to do that over a two, three year cycle. And then, finally, an excellent customer experience. This is really about a greater shift to providing customers what they want. So it’s really the author-as-customer approach.
[00:05:16.98] And how are we going to do that? So two reasons, here. Develop a deeper understanding about authors, readers, and customers. So this is really getting our customers, getting our authors involved in every step of the process. And that is user research. And I’ll touch a bit more on that in a short while.
[00:05:32.93] And a second point using Agile methodology getting prototypes in front of users fast. And then we iterate. So we get feedback and we iterate. And again, I’ll touch a bit more on that in a second.
[00:05:45.76] So what were the desired outcomes of the license and payment portal? Primarily, it was to support the Read and Publish business model we have. So we currently have 70 institutions on our Read and Publish program– so from Europe, USA, Japan, and also Saudi Arabia.
[00:06:02.95] So really, we wanted to integrate that into the license system so that the system automatically recognizes an author and what institute they’re from. And if they’re from a Read and Publish institute, it gives them those options in the payment system and the license system.
[00:06:19.58] Second point, improve the user experience and licensing. So, yeah. We had a fragmented system, as I said, and lots of manual processes in place.
[00:06:28.99] And finally, to automate the process for managing Open Access payments– so not just the APCs, but also, discounts, waivers, and how we handle sanctions through the system.
[00:06:41.93] So Agile. So we used Agile as a methodology for the project to deliver the system. And as I said, Agile gets– the system gets prototypes, gets beta versions out to users really quickly. And we can test with our users. We can get the feedback straightaway. We can make improvements and iterate.
[00:07:02.11] So here is the Agile Manifesto. Individuals and interactions over processes and tools, Working software over comprehensive documentation, Customer collaboration over contract negotiation, and Responding to change over following a plan.
[00:07:14.54] And while there is value on the items on the right, of course, we value the items on the left more in Agile. And this contrasts to say, waterfall, where you have lots of planning up front, you get requirements all up front, you have an end-point that you’re going to deliver the system, and then two or three years later, you deliver it. And that may or may not be what your user wants. Because you haven’t actually involved them, generally, as part of that process.
[00:07:39.47] So that’s the Agile manifesto. I’ll move quickly through these points, here. So individuals and interactions over processes and tools. So our requirements and solutions evolved through a collaborative effort with a team. The team is very cross-functional. So we had people from technology, publishing– across the business.
[00:07:57.26] The core team was made up of a product manager– we had back-end developers internally and externally, from 67 Bricks, who were working on the code. We had front-end developers working on the UI. We also had quality assurance specialists on the team. We had a business analyst. And we involved subject matter experts whenever we could from the business. We brought them into the team. And they joined us when we had those discussions and elaboration sessions, as I’ll talk about.
[00:08:29.65] So working software over comprehensive documentation. So as I’ve alluded to, we move quite quickly in Agile. So we released quite a few different versions. We had over 6,500 real users use the system before it was even complete. So we’re getting lots of feedback on a regular basis. We held five external user research sessions. That’s external. We had a lot of internal user research, as well, carried out. But again, we were getting feedback from those external people. And they were telling us what they wanted from the system, what they wanted to see.
[00:09:01.21] Phased rollout to different journals. So we started on one large journal first. And that was really just a– to mitigate risk a bit. And the large journal had limited complexity to it. So we’re now able to see how it worked with that large journal. And then we could roll out to more journals as we moved along the project. And then, yeah, we released up to four times per week. So again, that’s very different to waterfall.
[00:09:27.40] Customer collaboration over contract negotiation. So we substituted traditional supplier contracts for more contractual flexibility. And that required quite a high level of trust between us and our partners. Also, we held weekly elaboration sessions, as I’ve said. So each week, we would talk about the requirements, gathering the sort of things that we needed to think about for the short, mid, and long-term for the project. And again, that brought in stakeholders from the business.
[00:09:56.50] We had weekly showcases where we would show progress to the business. And that has proved really popular. We would demo something. It may not be working or not. We just put something up there that we were working on for the week. And we’d quickly get feedback from the audience members, anyone in the business, as to whether they thought that was useful or not.
[00:10:13.72] And then, also, we held regular retrospectives and health checks. So this is really to make sure that the team’s functioning correctly, everyone’s comfortable in the ceremonies we’re using, the meetings we’re having, the development cycle we’re doing. And again, that was about being open and open about our mistakes that are happening during the weeks. We make mistakes during Agile. And everybody had to be open and trustworthy in that environment. And again, things we learned from the retrospectives we would feed into the project team and– to make it better.
[00:10:46.58] In terms of responding to change over following a plan, again, user research is really key in this, in Agile. So here we are visiting an academic in Milton Keynes. So we involved academics whenever we could in the license and payment portal development.
[00:11:02.60] And a couple of examples, here. We took the system out quite early to an academic. And at the time, we had an email confirmation page in the system, but we didn’t have an email– sorry, a confirmation page, but we didn’t have an email that came to them afterwards as a receipt of the transaction. And they quickly told us they wanted a receipt of the transaction. So we were able to build that into the system and develop that functionality.
[00:11:27.60] We also, at the beginning, didn’t quite distinguish between the author and the payee in the system. So the author and the payee are quite different people. So we had to make sure that that distinction was a bit clearer. And we could send over an email link that the author could then send on to their finance department or wherever to pay for it in a separate workflow. So that’s how Agile worked. We were learning as we were going along and testing our assumptions.
[00:11:54.84] And in terms of delivery of value, here are the things we delivered. So new license system released. We supported our Read and Publish sales model through automated identification of authors from Read and Publish institutes. We managed our waivers and discounts. We could calculate APCs and VAT, facilitate invoice generation, and manage vouchers.
[00:12:15.05] So that’s the first six there. Originally, we had number seven in there, allow card payments. So this was allow card payments directly from the system. We could kick off into a manual process, and that was fine. But actually, from the system, we wanted to do that in that development cycle. But because of these new additional steps– and again, this is how Agile works– we had to build these in instead, and we delayed the card payments. And in fact, that’s only just gone live now. So that’s an example of where, in a traditional project, that might be seen as a bit of a failure. But actually, in Agile, that’s kind of a success. We’ve got other things in there that were more important at the time that we needed to do.
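[As an illustration only, not the RSC’s actual implementation: the licence-recommendation workflow described here, recognising Read & Publish institutes and then applying waivers, discounts, and VAT, could be sketched roughly as follows. All names, institutions, and figures are hypothetical.]

```python
from dataclasses import dataclass

# Hypothetical lookup data; the real system integrates with
# institutional agreements and funder records.
READ_AND_PUBLISH_INSTITUTES = {"University of Example"}
WAIVER_COUNTRIES = {"Examplestan"}

@dataclass
class Submission:
    institution: str
    country: str
    list_apc: float      # list-price article processing charge
    discount_pct: float  # negotiated discount, e.g. 0.15 for 15%
    vat_rate: float      # e.g. 0.20 for 20% VAT

def licence_options(sub: Submission) -> dict:
    """Recommend a licence/payment route for a submission."""
    if sub.institution in READ_AND_PUBLISH_INSTITUTES:
        # Covered by the institution's Read & Publish agreement:
        # no APC is charged to the author.
        return {"route": "read-and-publish", "apc_due": 0.0}
    if sub.country in WAIVER_COUNTRIES:
        # Full waiver applies.
        return {"route": "waiver", "apc_due": 0.0}
    # Otherwise: apply any discount, then add VAT, and invoice.
    net = sub.list_apc * (1 - sub.discount_pct)
    return {"route": "apc-invoice", "apc_due": round(net * (1 + sub.vat_rate), 2)}

print(licence_options(Submission("University of Example", "UK", 2000.0, 0.0, 0.2)))
print(licence_options(Submission("Other University", "UK", 2000.0, 0.15, 0.2)))
```

[The point of automating even this simple decision tree is the one made in the talk: the author sees only the options that apply to them, rather than negotiating licence and payment by email and spreadsheet.]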
[00:12:57.75] And then in terms of impact, automatic identification of Read and Publish authors. So it took three weeks, on and off, to get that open access license signed before– so [INAUDIBLE] between the authors and us. It’s down to three minutes, now, in the system. We freed up two dedicated members of staff through the automation. And probably more importantly, we introduced Agile to the business.
[00:13:20.42] So you can see here, we developed a submission system. And we used Agile methodology, as well, to build this system. So authors were telling us they wanted an improved submission flow from the user research we were doing in other areas. So we decided we’d have a go at building a submission system over our external peer review system. So this isn’t a peer review system. It’s just a method of submitting the paper quickly.
[00:13:47.70] And our system automatically extracts information from the paper. The key fields are populated for the author as they bring the paper in. And together with the license signing process and through the license system, we’ve managed to save about an hour on submission time for every single author. So that’s pretty impressive for a new system that’s only just really come out of beta. And the system usability score– we measure that, as well. And that’s improved significantly from the baseline metric in the original submission system we used.
[00:14:24.11] And then finally, article tracker. We’ve employed the same methodology, Agile, to build an article tracker. Again, authors were telling us they wanted more granularity in the process. They don’t see enough of what’s happening with the paper, particularly when it’s in peer review. So we’ve developed a tracker that– a simple tracker here that outlines all the key stages in the peer review process.
[00:14:45.41] But more importantly, we’ve got a bit more granularity there in the peer review. So they know who the associate editor is. They know how many reviewers have agreed, how many reports have been received. So this is just a very initial MVP, Minimum Viable Product, at the moment. We hope to develop this a bit more in the future.
[00:15:06.73] But we’ll be using Agile. So as I said, we’re working on the Future Publishing Platform program of work for the next two to three years. And we’re going to be using Agile and Agile teams to deliver that work in the next two to three years, as well. And I think– am I in time? I think that’s it.
[00:15:22.33] JENNIFER SCHIVAS: Good.
[00:15:23.23] STEPHEN WILKES: There should be a thank you there.
[00:15:24.79] [LAUGHTER] [00:15:25.75] [APPLAUSE] [00:15:30.55] JENNIFER SCHIVAS: Thanks, Stephen. So I’d like, now, to introduce our second speaker, who’s Jonathan Hevenstone from Atypon.
[00:15:41.90] JONATHAN HEVENSTONE: Thanks, Jennifer. So I’m here to talk about tools for open science that we’ve been launching through a number of acquisitions we’ve made over the past few years. Many of you who know us might wonder, why is Atypon doing this? Because Atypon is a provider of a publishing platform to publishers. And so I wanted to explain why we’re doing this, what it’s about, what the purpose is, and what the benefits are to the publishers we work with.
[00:16:10.66] There’s two main themes. One is, we’re all familiar with the litany of complaints that researchers have about all of us– publishers, publishing. My sister– I always use her as the straw man researcher. So she– my sister Deborah is a social scientist. All she wants to do is send around her PDFs of her articles, maybe have a WordPress site that she creates, do it all herself. It’s very easy; doesn’t cost anything. Why do I need all of this? You’re making me jump through hoops. People are– I’m paying or my institution’s paying to access the content or paying to publish the content. What is all this?
[00:16:46.75] There’s a tremendous value proposition that publishers can offer to researchers that’s right there. But the friction is– the friction and the how– and the staleness of it is interfering with that perception. And so to do things to enable researchers to interact with you more seamlessly, to get their research out there faster, to enable preprinting right away, to be able to represent the richness of all the work that they’re doing, all of the different artifacts, and support reproducibility– all of these things are value adds that they crave and that I think would reinforce publishers’ value proposition.
[00:17:31.07] The second part of it is, we need to make the world safe for the business that we’re in. We build publishing websites that are really all separate. And it becomes problematic when users are going to aggregated services, whether they’re doing a search on Google, finding stuff on PubMed, going to Sci-Hub for pirated content or shared content on ResearchGate. It fits their needs right now in a way that the websites that we build, in some ways, do not.
[00:18:03.49] And what we want to do is reinvigorate the ecosystem, build connections across the publisher site, bring discovery to the researchers in ways that bring them back to the authoritative version that’s on your websites and to enable them to author and submit very seamlessly with you and keep them in a pro-publisher environment that’s really built for everybody, not just for us. I mean our competitors, other players in this space. And so that’s really what this is about, is try to make the– make this whole ecosystem work for all of us. And we’re self-interested, of course, because we make our money from publishers. And so if it’s not working for you, it’s not working for us.
[00:18:46.00] So I’ll start with Manuscripts, which is a company that we acquired several years back. I don’t know if Matias, the founder, is here in the room, but I know he’s going to be here at ALPSP today. Manuscripts is a collaborative authoring tool that started as an OS X native app. It’s being relaunched this month, along with several of these other applications. It’s going to now be an online tool, fully digital native, also running native locally on OS X, Windows, and Linux.
[00:19:17.71] Scitrus is a personalized AI discovery tool. It’s meant to address, I think, the concerns I was sharing about Google. And people have been trained to imagine that the way to get to the content that you need is by doing searches. And we know– we focus a lot on making search really effective on the sites that we host. But the amount of activity on the search is actually relatively low. And you can understand why, because it’s much easier to do a search somewhere else, where you can access everything, rather than just one publisher’s set of content.
[00:19:50.85] Scitrus flips it around by providing a machine learning powered feed of exactly what it is that you’re interested in coming from the most up-to-date content that we’re pulling from across the entire scholarly web. So it includes peer-reviewed published articles and book chapters. It includes preprints. It includes news that’s associated with science and scholarly information.
[00:20:19.56] And when you sign up, you just answer a couple of questions about what your major fields of interest are or what discipline you focus on. And then it starts to learn from your interactions. And when you spend time reading something, it learns from that. When you dismiss something, it learns from that. And I’ll give you a couple screenshots later. But we think this is going to be a very powerful tool for research.
[00:20:39.33] Unlike Meta, it spans the entire space. So there’s competitors. There’s lots of different tools like this. I think ours– there’s room for all of them, really. But ours is a little bit different from the other ones that are out there.
[00:20:53.29] Connect is part of the connective tissue for creating this kind of an environment. And I think it’s almost like Atypon putting something on the table and saying, we feel like this is something that needs to be there for the things that we’re doing to work. But it’s not going to work unless it works for third-party app developers, other platform providers, editorial system companies. It’s a way for an individual user to set up a single account that will give them access.
[00:21:26.28] It’s basically authori– authorization and access to any participating publisher site or third-party service. They can sign up with an ORCID ID or a Twitter ID, Google. I think we’re going to add LinkedIn– to make it really easy for these– all these different resources to be connected in a seamless user experience.
[00:21:50.25] Obviously, it’s not going to work if there’s not adoption. So anybody who might think that working with us on that is antithetical because they compete in some way, I want to make sure that you’re comfortable. We can talk about it later, but we really feel like this is not going to work if it’s a world of walled gardens. And we want to make sure that there’s fair data practices and representation of other tools and services in this environment.
[00:22:17.26] Lastly, I’ll be talking about Remarq. Remarq was originally an annotation platform. We acquired RedLink, and with the RedLink business, we got Remarq. And we’ve beefed it up into a visual environment that connects our apps and any other apps that might be using Connect to become part of this shared ecosystem. And we’re trying to build rich connections across the tools to make it really easy for researchers to take advantage of all of them, and to promote the different tools to the researcher when they’re in one so they see the benefit of using the other.
[00:22:58.58] Here’s a quick screenshot of Manuscripts. So you can see that it’s really an environment that was built by researchers, for researchers. It includes robust handling of figures, tables. There’s integration with Jupyter, lab notebooks. And we can round trip these kinds of resources into Literatum. Code Ocean is actually the first third-party app that is integrated with Connect. And so you can actually round trip content that you load from Jupyter or other computational resources, put them in a Code Ocean capsule, and then that’ll be visible in line in Literatum on the journal page.
[00:23:39.32] I don’t have a slide for it, but a key element of Manuscripts is, we know that there are going to be lots of people who are going to be authoring in Word, or LaTeX, or who have PDFs and other resources. So we’re building really robust conversion capabilities to make sure that you can import that content into Manuscripts.
[00:23:55.92] And in fact, we acquired Inera very recently. Are people familiar with Inera? They have the eXtyles product. So it’s been around for quite some time. So we’re going to still continue to market all the Inera eXtyles products. But we’re also baking some of those capabilities into Manuscripts to enable us to really accurately ingest Microsoft Word, convert it to XML, and then render the content in Manuscripts or push it into the publishing platforms.
[00:24:28.07] The last aspect of Manuscripts I’m going to mention is that it includes a large library of templates that enable researchers, at the click of a button, to reformat their manuscript for another journal. These are mostly maintained by researchers, not by the publishers. We’d love to see publishers adopt their manuscript templates both as a service to the authors– because if there is an accurate template there, you take away hours of work for the authors to change their manuscript and reformat it for submission to another journal, whether it’s yours or somebody else’s– and you also take away work from yourself when people submit content that you have to change and munge and mess around with or send back to them. It just, again, eases friction for both you and for the authors that you’re working with.
[00:25:19.51] Next, a quick snapshot of Connect. I’m not going to get into all these details. I think in the coming weeks, as we’re rolling these things out, if our marketing team is any good, you’re going to get lots more information and opportunities to see demos and stuff like that. So I’m not going to go into tremendous depth.
[00:25:34.75] But the whole idea around it is to give users granular control around their privacy settings for all the different sites and services that participate beyond just clicking a GDPR sort of, “are you OK with cookies” or something like that. We want to make them feel like it’s a service to let them customize their experience in participating environments. We understand that those users are not our users, that they’re the users of the third-party app or of the publisher website.
[00:26:01.32] And I think I mentioned the comparison to RA-21 earlier. This is perfectly compatible with RA-21: RA-21 is solving the institutional problem and making that seamless, and we're solving the individual problem and making that seamless.
[00:26:19.48] Importantly, unlike CASA– with CASA, Google is collecting all of their information and not passing it to the publishers. Whereas with Connect, the publishers get whatever it is that the researcher or user is willing to share with them. And it’s based on open standards, OpenID Connect and OAuth.
[00:26:42.63] Here’s a quick look at the sign-up page. You can see that you can sign up for a Connect account with your ORCID iD, with Google, or with Twitter. And we're going to be adding LinkedIn as well. We're promoting our apps at the bottom because they're all part of Connect. I could imagine adding a link or another box promoting the other apps as they sign on and start to participate, so that we make it fair and equitable for all.
[00:27:10.53] Here’s the landing page of somebody's Scitrus feed. So Maria is a physicist. It knows exactly the domain of physics that she works in. And as she's interacted with it, it's learned more and more from her. The content that you see here is really just the key image, the metadata, and the abstract that we're getting from the various resources that I mentioned in my intro.
[00:27:37.68] So we’re not taking the full text and bringing it to Scitrus. We’re bringing the user to the publisher– the authoritative publisher website. So we’re driving traffic to publishers’ websites with this tool, because again, we know where our bread is buttered. And we’re not going to be pulling users away from your websites onto some additional service that we’re adding.
[00:28:00.00] And the articles are laid out in an attractive, magazine-like layout in an automated way. The articles at the top are the most recent and most relevant; as you go down, they are somewhat less relevant. The user has a dial where they can set the amount of relevance or noise– filter it down to the top 50% relevant, or the top 25% relevant. Less relevant articles are represented smaller here.
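The relevance dial described here can be sketched as keeping only the top fraction of a scored feed. This is an illustrative guess at the behavior, not Scitrus's actual algorithm, and the article data is invented.

```python
# Hypothetical sketch of a relevance "dial": rank articles by score and
# keep only the top fraction the user selects (e.g. 0.5 = top 50%).
def filter_feed(articles, keep_fraction):
    ranked = sorted(articles, key=lambda a: a["relevance"], reverse=True)
    cutoff = max(1, round(len(ranked) * keep_fraction))  # always show at least one
    return ranked[:cutoff]

feed = [
    {"title": "Quantum dots review",  "relevance": 0.91},
    {"title": "Dark matter survey",   "relevance": 0.78},
    {"title": "Lab safety update",    "relevance": 0.40},
    {"title": "Campus news roundup",  "relevance": 0.12},
]

top_half = filter_feed(feed, 0.5)  # dial set to "top 50% relevant"
```

Turning the dial down to 0.25 would leave only the single most relevant article; the layout engine could then size tiles by rank, as the talk describes.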
[00:28:31.71] And you'll see that this user has a sidebar over there, where it says American Institute of Physics. One of the ways we're engaging with publishers is, in exchange for promoting Scitrus to your subscribers or to your members, when they sign up using the link that you provide to them, they'll have a feed that you can control in the sidebar. And you could use that to provide either a feed of your journal articles, other news about your organization or your field, or even promotional messaging or advertising. So further down the path, we see this as actually a revenue generator for publishers, because you can sell ads into a space that didn't exist before.
[00:29:19.41] I'm going to lastly show Remarq. This is still in development– the others are all launching this month. This is newer because the RedLink acquisition is more recent. So the look and feel may change, but the core concept is to have a shared library across our apps, and the ability to represent any app, in some manner, within this environment– to put all of the tools that are part of the user's workflow at their fingertips.
[00:29:50.11] One of the things that we’ve learned about researchers– the problem with engaging with publishers, for them, is that they feel like they’re having to come to the publisher and do all these things as if they’re playing different roles and are separate people. And what they would love to see is to have that universe of content discovery, content authoring, collaboration, sharing– hey, I read this thing, and we’re working on this paper together. Check it out– all of that in one place so that it’s coming to them and it’s part of their workflow.
[00:30:23.02] So quickly, here, you can see filters being used to group articles in a library by their relationship to this author's paper or thesis– whether they're contradicting, supporting, or complementary. Here, you can see filters inside Remarq being used to group manuscripts by their review status. So you can submit to conferences or submit to journals and have one snapshot of your submission status everywhere. And all your collaborators can see the same thing.
[00:30:51.24] And here, the user is using filters to group their contacts by institution. These are the people you work with who also have Connect accounts. So they can work with you on papers, and you can discover things in Scitrus and send them along. And you can also have private annotation within your little group– they're looking at this paper and flagging things that they think would be useful in the paper that they're authoring.
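The grouping filters described in these two examples – by relationship, by review status, by institution – all reduce to the same operation: bucketing library records by an attribute. A minimal sketch, with invented records (not Remarq's actual data model):

```python
from collections import defaultdict

# Generic grouping filter: bucket records by any shared attribute,
# e.g. review status, institution, or relation to a working paper.
def group_by(items, key):
    groups = defaultdict(list)
    for item in items:
        groups[item[key]].append(item)
    return dict(groups)

manuscripts = [
    {"title": "Paper A", "status": "under review"},
    {"title": "Paper B", "status": "accepted"},
    {"title": "Paper C", "status": "under review"},
]

by_status = group_by(manuscripts, "status")
```

The same `group_by` call with `key="institution"` would reproduce the contacts view, which is presumably why a single filter mechanism can power all three screens.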
[00:31:21.26] I’m going to wrap up with this slide. So this is a very busy month for our team, because several of these are launching in the next week or two. We have a cold start with Scitrus. We’ve been doing some private betas. So we have users. We have user feedback. We’ve made a lot of changes. We’ve worked with two societies to optimize how we interact with them. And Connect is also a cold start [INAUDIBLE] app, but fostered by the activity across all the other apps. And potential participation by the publishers who adopt Connect, as well.
[00:32:00.94] Authorea and Manuscripts are actually being combined into a single solution. And that has a significant user base. Manuscripts had about 65,000 users when they stopped developing their OS X app and moved in the direction of launching the V1 online version this month. And then Authorea had almost 200,000 users at the time that we acquired Authorea. It’s still up and running, but we’re migrating the authoring Microsoft Word-like capability to Manuscripts, which is more powerful and more modern in that respect, and turning Authorea into a preprinting capability.
[00:32:38.44] So one of the opportunities for publishers to work with Authorea is going to be, when an author submits a manuscript, they can check a box in a submission portal that we're developing– a separate initiative that I'm not going to have time to talk about here– saying, I would like this to be available in a journal-branded preprint environment. And if that's supported by the journal, the article will immediately, upon submission, be visible with a badge saying "under review at such and such journal."
[00:33:10.48] And then, at the end of that review process, if it's rejected, it can either go into a generic Authorea preprint environment, which already exists– there are tons of papers on there– or it can be pushed to arXiv, or bioRxiv, or whatever community preprint server is relevant to or preferred by that particular author. If it's accepted, it can stay there as an open access version of the article, with a link resolving to the version of record on the publisher's website.
[00:33:39.99] So I’ve probably used up my 12 minutes. I’m going to stop there. And I’m looking forward to questions.
[00:33:45.17] [APPLAUSE] [00:33:51.05] JENNIFER SCHIVAS: Brilliant. So I would like to introduce our third and final speaker, Andrew Smeall from Hindawi.
[00:34:03.90] ANDREW SMEALL: Hi, everyone. So I’m Andrew Smeall from Hindawi. So that’s Hindawi. We’re a publisher, about 200 journals. But the thing that may be less known about us is that we’re also a technology provider to partners like Wiley and AAAS. So I’m going to talk a little bit about the projects that we’re doing, but not really focusing on that. I just thought it might be interesting to people to talk a little bit more generically about open source as a software development approach and why we think it works well for Hindawi, and a little bit about how businesses can make money on open source projects.
[00:34:39.40] So we build open source software primarily through a community called PubSweet, which is a framework of components that can be used to support publishing workflows. It's technology that's stewarded by a foundation called the Collaborative Knowledge Foundation, otherwise known as Coko. You may have seen them or heard about them at other conferences.
[00:35:03.13] So PubSweet– like I said, it's really a set of raw ingredients that can be used to create publishing workflows, developed in the open by teams at different organizations– primarily Hindawi, eLife, and Europe PMC, plus a bunch of smaller organizations. There's also a parallel group in the book publishing world called Editoria, which reuses some of the same technology.
[00:35:26.57] But the idea is, PubSweet is not itself a product or a platform or something like that. It's just components that we make open so that perhaps we can reuse each other's work. We share certain philosophies about software development and architecture– things that we think are good ideas. But we're all free to go build and customize our own solutions that work better for us.
[00:35:47.86] So we’re a community of– I said developers, but I should say also designers, product managers– people from all different areas of the business, really– who have a like-minded approach to software development. And as individuals, we may have idealistic goals for what we’re trying to do with open source software. But really, we’re also– we all work for organizations that have business goals. So at the end of the day, we’re trying to support those, as well.
[00:36:14.16] So these are just some quick examples. Europe PMC built a metadata review portal to ingest articles into Europe PMC. So it's really a simple version of a peer review process, where you can see the metadata of an article that's going into Europe PMC and a reviewer can make improvements to that metadata. eLife has built a quite interesting collaborative, HTML-first peer review system. Really forward looking. Right now, it's live for the submission of new articles to eLife, and they're working on the piece that will actually handle the review process.
[00:36:47.85] And Hindawi has built a really more traditional, straightforward peer review process that is live on two of our journals now. But something quite simple that would look familiar to anyone that's worked with any other peer review system.
[00:37:04.08] So that's it on the PubSweet stuff. Of course, I'm always happy to answer questions. But really, I want to talk a little bit more about open source more generally.
[00:37:10.98] So what is free and open source software? There's a debate in the community whether you should even call it open source software or free software. I'm not going to get too much into that. But the most important thing is that it doesn't mean free as in the software's free to use or free to run or free to maintain. It means free in the sense of free speech. It's a licensing scheme, really: a license that allows some part of the code to be viewable by people, and delegates certain rights to users of the code.
[00:37:44.07] So there’s lots of different variations. Those rights could be rights to reuse, rights to copy, rights to modify, rights to distribute, rights to build commercial applications on top of it. There’s a whole number of different licenses that can be applied. But the point is “free speech, not free beer,” as Richard Stallman said.
[00:38:03.42] Does it make better software? Not necessarily. Open source doesn’t say anything about the quality of the software. You could build terrible open source software or you can build great open source software. There’s some groups within the software community that think exposing yourself to public scrutiny has some kind of forcing function on making higher quality software. But I don’t think there’s any conclusive evidence that that’s true or not true. You can, of course, build terrible software. You can build open software that no one cares about and no one looks at. So there’s not always even that forcing function. The point is that it’s just an approach. It’s not something that says your software is better or not better.
[00:38:44.21] Is it better for researchers? Not necessarily. I think researchers care, ultimately, about the experience that they get. And you can build a great user experience on open source software. You can build a terrible open source experience. So there may be some mission-driven researchers or people that– certain governments and funders that care about whether the underlying technology is open source. But ultimately, I don’t think authors are really that concerned about it. Ultimately, they care about publishing their paper and getting it done in a timely way with a minimal amount of obstacles.
[00:39:17.61] Now, is it better for developers? Now we’re getting into the area where maybe there are some advantages to talk about here. I think when open source projects are going well, you’re part of a broader community. So you really can be inspired by the fact that you’re contributing to something bigger than yourself, something that might live on and be reused by others. Of course, that’s not true of every open source community. But there is that– there is that element of inspiration there.
[00:39:43.13] You also get– your contributions are publicly visible. So it’s easier to get rewarded for your work. You can see what you’ve done, what you’ve contributed. You can share that with others. You can take it with you from job to job. You also can have fun, because you’re meeting new people. You’re going to meetings. You’re interacting.
[00:40:00.51] So there’s something that’s fundamentally interactive and collaborative about it that can be fun. Not every developer cares about these things, but it’s certainly important to some subset of developers. And as a business, recruiting developers can be really challenging and really expensive. So things that attract certain people to your projects can make running a software team easier.
[00:40:21.41] Now, is it better for technology customers? And this is where I’ll come back to why we think this approach works for Hindawi. I think there’s real arguments that open source software can be better for technology customers, the point being that with open source platforms, some form of lock-in goes away. That code and that data can be owned by the customer.
[00:40:45.80] If you think that the service provider maintaining it for you– whether it's, say, Hindawi or anyone else– is doing a bad job, you can replace that service provider with someone different. And if something were to happen to that service provider– if they were acquired or went out of business– that platform can live on, and you can take it and maintain it yourself or bring it to someone else to maintain. That doesn't mean this will be cheap or easy. It just means that it's possible. You're protected from certain forms of unpredictable risk.
[00:41:13.97] So what are people paying for, though, if something’s open– if something’s open source? Why are they paying you for something that you’re giving away for free? So I thought I would run through, just quickly, some example business models that have worked in open source in the past, and some companies that have applied them.
[00:41:28.46] So a very common one is called Open Core. And this would be, you make some part of your code open source. And other things are closed source. And you sell access to those closed source features. This is a really common model. Some people argue that it’s not really open source, because you’re really selling closed source software on top of an open core.
[00:41:47.57] But GitLab is one example. Like GitHub, they’re a repository for code bases that allow you to do version control. And they’ve built a business that does over $100 million in revenue off of selling what is, essentially, a free software.
[00:42:05.05] Another model is dual licensing. So this would be the idea that you have an open version of your platform that has some kind of viral license– a so-called copyleft license, like the GNU General Public License, where anything that you build with that software has to carry the same open license. So it's viral– like a share-alike license, so to speak.
[00:42:27.19] And companies may say, well, look, I can't make my whole platform open. But I want to use your technology. Can you sell me a version of the software that has a more permissive license? And that's how those companies make money. So MySQL, which is a database technology company, offered a version of MySQL that was freely available. But if a commercial user wanted to integrate that technology into a closed, proprietary system, they would pay for a differently licensed version of the system. MySQL was doing about $70 million in revenue before they were sold to Sun, which Oracle acquired in 2010.
[00:43:03.16] Another option is software as a service. A classic example of this is WordPress, which is run by a company called Automattic. You can download and install WordPress for yourself absolutely free, but it requires some technical skills. You need to upload the packages using FTP to a server somewhere and do some customization. If you'd rather not do that, you can pay WordPress for a fully hosted version where they'll take care of all that for you. Automattic is still a private company, but they do about $100 million in revenue off of something where you have a free option if you want to choose it.
[00:43:40.75] Another idea is support. So Red Hat is the company that does a distribution of Linux that is very popular with enterprise users. Linux is a free operating system, and they give their distribution away for free. But you would then sign up for a support subscription with them: training and 24/7 support on your system, and maybe integrations and some custom work. And that should say revenue, not valuation– they were doing about $3.5 billion in revenue when they were acquired by IBM for, I think, $35 billion last summer.
[00:44:18.13] Another option is advertising and royalties. So Mozilla, who run Firefox– they can do things like charge to integrate a certain search engine as the default tool in their app. And they have a really large user base, so a search engine provider might pay a royalty to make sure that their search engine is used by users of the app. Mozilla can also make money off of things like selling merchandise. That number comes from their Form 990, so it's an estimate of how much money they're bringing in from these types of deals. But that's a very widely used and sustainable model for certain kinds of open source projects.
[00:44:56.65] And finally, grants and donations. That’s something used, say, for Jupyter Notebook. They’re part of a foundation. That foundation has grant funding from Sloan and Moore and some other funders that supports their ongoing work. They also use underlying technologies like Python, for example, which is itself a grant-funded community. And also, you have projects, say, like Wikipedia, for example, which are funded by private donations. So there’s lots of models in this area. I think it’s hard to scale this to a really, really large project, but it has proven sustainable for lots of really interesting and important work.
[00:45:38.10] So why open source for Hindawi? This is just our thesis. It may not be true. It’s what we believe, but it doesn’t necessarily mean it’ll work for everyone. We think that, first of all, open source aligns with some of our mission goals. So we have open science goals that we’d like to achieve. We believe in preserving a vibrant and competitive publishing community. And that means having lots of options out there that are publicly available and not necessarily privately held walled gardens.
[00:46:07.08] And of course, closed source tools can follow open standards and integrate with other systems. But we think, by making things open source, it’s easier for people to see how to integrate with our systems. It’s a little bit easier for us to get feedback on how to implement certain open standards.
[00:46:25.38] We also, as a business, want to attract customers for software as a service or support business models. So we think that by having publicly available technology, it makes it easier for people to transition to open access. So societies and other publishers that are maybe struggling with the question of how to launch an open access journal will have some example implementations that they can look at for how to get up and running.
[00:46:50.55] And they'll know people that they can contact and ask, can you help me support this? Can you provide customization for this? And we ultimately want to attract and keep those customers because we provide a good service, not because we've locked them into a long-term contract. By making the underlying code and data easily transferable, that provides a discipline for us to make sure we're providing a good service to those people, because they can always walk away.
[00:47:18.66] And then finally, that additional public scrutiny– we feel, for us, it really fosters a good mindset of, everything you develop, you’re developing in the open. So don’t take shortcuts. Always look for feedback on what you’re doing, because it’s going to be found out eventually.
[00:47:35.04] And then finally, there’s, of course, the idea around reducing costs with open source. So reusing things that already exist, reusing other open source projects that are out there, reducing variation and complexity in publishing workflows. This is a goal that’s really bigger than software. I think it’s just making sure that if something’s already built and editors and authors are used to doing things a certain way, don’t complicate it by coming up with your own flavor. Try to follow existing workflows.
[00:48:04.21] Collaborate to distribute costs– so we can work on some areas of the tool, eLife can work on other areas, and ultimately we can share with each other everything that we've built. We benefit from shared expertise. eLife has an amazing UX team that contributes lots of research and work to this. We operate at a scale where we have challenges around processing large numbers of papers, so we can share expertise in that area. And attract talented developers– anything that can help us recruit and retain our team is really important.
[00:48:35.11] So our progress so far– we started working with PubSweet in 2017 on what we call Phenom, the brand name for our software. In 2018, we released our review platform. In 2019, we're releasing a new series of applications in the same area– a screening and quality check tool, a finance and invoice processing tool, a hosting platform, and a syndication tool to A&I (abstracting and indexing) databases. And next year, we'll shift focus to the production side of things– working on typesetting, proofing, and also hosted solutions and continuous improvements to everything that we're doing.
[00:49:11.93] So that’s all. You can go find the code for yourself in our GitLab. You can, of course, go take it and do anything you want with it. It’s open source. Also, you can come talk to us if you have questions. Thanks.
[00:49:23.98] [APPLAUSE] [00:49:30.84] JENNIFER SCHIVAS: Thank you, Andrew. So I think we are a little bit over time, so I’m afraid we’re not going to have time for questions now. But I’d encourage anyone with questions to catch up with any of our three speakers or myself during one of the networking breaks. But I’d like you all just to join me in thanking our three speakers again. Thanks.
[00:49:47.89] [APPLAUSE] [00:49:48.79] [MUSIC PLAYING]