Thank you to everyone who responded with feedback on the Op Cit proposal. This post clarifies, defends, and amends the original proposal in light of the responses that have been sent. We have endeavoured to respond to every point that was raised, either here or in the document comments themselves.
We strongly prefer for this to be developed in collaboration with CLOCKSS, LOCKSS, and/or Portico, i.e. through established preservation services that already have existing arrangements in place, are properly funded, and understand the problem space.
I’m pleased to share the 2023 board election slate. Crossref’s Nominating Committee received 87 submissions from members worldwide to fill seven open board seats.
We maintain a balance of eight large member seats and eight small member seats. A member’s size is determined based on the membership fee tier they pay. We look at how our total revenue is generated across the membership tiers and split it down the middle. Like last year, about half of our revenue came from members in the tiers $0 - $1,650, and the other half came from members in tiers $3,900 - $50,000.
Crossref acquires Retraction Watch data and opens it for the scientific community
Agreement to combine and publicly distribute data about tens of thousands of retracted research papers, and grow the service together
12th September 2023 — The Center for Scientific Integrity, the organisation behind the Retraction Watch blog and database, and Crossref, the global infrastructure underpinning research communications, both not-for-profits, announced today that the Retraction Watch database has been acquired by Crossref and made a public resource.
Today, we are announcing a long-term plan to deprecate the Open Funder Registry. For some time, we have understood that there is significant overlap between the Funder Registry and the Research Organization Registry (ROR), and funders and publishers have been asking us whether they should use Funder IDs or ROR IDs to identify funders. It has therefore become clear that merging the two registries will make workflows more efficient and less confusing for all concerned.
Back in 2014, Geoffrey Bilder blogged about the kick-off of an initiative between Crossref and Wikimedia to better integrate scholarly literature into the world’s largest knowledge space, Wikipedia. Since then, Crossref has been working to coordinate activities with Wikimedia: Joe Wass has worked with them to create a live stream of content being cited in Wikipedia; and we’re including Wikipedia in Event Data, a new service to launch later this year. In that time, we’ve also seen Wikipedia’s importance grow in terms of the volume of DOI referrals.
How can we keep this momentum going and continue to improve the way we link Wikipedia articles with the formal literature? We invited Alex Stinson, a project manager at The Wikipedia Library (and one of our first guest bloggers) to explain more:
Wikipedia provides the most public gateway to academic and scholarly research. With millions of citations to academic as well as non-academic but reliable sources, like those produced by newspapers, its ecosystem of 5 million English Wikipedia articles and 35 million articles in hundreds of languages provides the first stop for researchers in both scholarly and informal research situations. The practice of “checking Wikipedia” has become ubiquitous in a number of fields; for example, Wikipedia is the most visited source of medical information online, even providing the first stop for many medical students and medical practitioners when looking for medical literature.
The Wikipedia Library program helps Wikipedia’s volunteer editors access and use the best sources in their research and citations. Through partnerships with over fifty leading publishers and aggregators, like JSTOR, Project Muse, Elsevier, Newspapers.com, Highbeam, Oxford University Press and others, we have been able to give over 3000 of our most prolific volunteers access to over 5500 accounts. These are clear, win-win relationships where Wikipedia editors get to use these databases to improve Wikipedia, while in turn linking to authoritative resources and enhancing their discovery.
JSTOR has been working with us since 2012, providing over 500 accounts to our editors. Kristen Garlock at JSTOR writes:
“We’re very happy to collaborate with the Wikipedia Library to provide JSTOR access to Wikipedia editors. Supporting the initiative to increase editor access to scholarly resources and improve the quality of information and sources on Wikipedia has the potential to help all Wikipedia readers. In addition to providing more discoverability for our institutional subscribers, introducing new audiences to the scholarship on JSTOR can help them discover access opportunities like our Register & Read program.”
There are strong signals that Wikipedia’s role in the citation ecosystem helps ensure the best materials reach the public through its over 400 million monthly readers:
Two of our access partners have found that around half of the referrals arriving from Wikipedia were able to authenticate into their subscription resources, suggesting that a large portion of our readers can take advantage of subscriptions provided by scholarly institutions.
Altmetrics tools (such as Altmetric.com, ImpactStory or Plum Analytics) are recognizing Wikipedia’s importance by including Wikipedia citations in their impact metrics.
Despite these advances, we think this is only the beginning of Wikipedia’s impact on the landscape of scholarly research and discovery. Wikipedia can become a highly integrated research platform within the broader research ecosystem, where the best scholarship is summarized and discoverable, and where Wikipedia effectively becomes the front matter to all research.
However, there are some clear barriers to fulfilling this vision. Currently, most citations on Wikipedia are stored as free text and are not readily available in machine-readable formats; our community is working to fix this. Wikipedia also has major systematic gaps in topics where either we lack volunteer interest or Wikipedia reflects larger systemic biases within society or scholarship. We need the help of volunteers, experts, industry partners, and information technologists to grow Wikipedia’s collection of citations, especially around key missing areas, and to transform existing citations into structured formats.
Wikidata, Wikipedia’s sister project which crowdsources structured metadata, offers an excellent opportunity for improving the impact of Wikipedia in research. Having Wikipedia citations stored in this structured ecosystem, connecting metadata with semantic meaning, would allow the citations in Wikipedia to become the backbone for discovery tools which emphasize the hand-curated interrelationships between authoritative sources and the knowledge collected by Wikipedia and Wikidata editors.
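To give a flavour of what structured citations make possible, here is a minimal, hypothetical sketch of building a SPARQL query against Wikidata. The properties used are real Wikidata properties (P2860, “cites work”, and P356, DOI), but the item ID in the usage line is purely illustrative, not a reference to any particular article.

```python
def build_citation_query(item_id: str) -> str:
    """Build a SPARQL query listing the works cited by a Wikidata item,
    along with their DOIs where one is recorded."""
    # P2860 = "cites work"; P356 = DOI. Both are real Wikidata properties.
    return (
        "SELECT ?cited ?doi WHERE {\n"
        f"  wd:{item_id} wdt:P2860 ?cited .\n"
        "  OPTIONAL { ?cited wdt:P356 ?doi . }\n"
        "}"
    )

# Illustrative item ID; any item with structured citations would work.
print(build_citation_query("Q12345"))
```

A query like this can be submitted to Wikidata’s public SPARQL endpoint (query.wikidata.org) to walk the hand-curated citation graph, which is exactly the kind of machine-readable backbone for discovery tools described above.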
We need more collaborators to realize the full vision of Wikipedia supporting research in the most effective ways:
We need help from publishers with subscription databases, to help us give our editors access to the databases through The Wikipedia Library’s access partnership program. These high-quality source materials allow our editors to expose that research in a number of languages and for millions of readers.
We need the larger research community to promote Wikipedia as a scholarly communications tool and make contributing to Wikipedia an important part of the social responsibility of experts. Wider citation of sources in Wikipedia ensures widespread discovery and dissemination of that research.