When each line of code is written it is surrounded by a sea of context: who in the community this is for, what problem we’re trying to solve, what technical assumptions we’re making, what we already tried but didn’t work, how much coffee we’ve had today. All of these have an effect on the software we write.
By the time the next person looks at that code, some of that context will have evaporated.
It turns out that one of the surprisingly difficult things at Crossref is checking whether a set of Crossref credentials has permission to act on a specific DOI prefix. This difficulty is the result of many legacy systems storing various mappings across different software components, from our Content System through to our CRM. To address this, I wrote a basic application, credcheck, that allows you to test a Crossref credential against an API.
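To give a feel for the problem, here is a minimal sketch of a credcheck-style permission check. Everything here is a simplifying assumption: the grant tables and function names are hypothetical stand-ins for the mappings that, in reality, live in several legacy Crossref systems.

```python
# Hedged sketch, not credcheck's actual implementation: the real tool
# consults live Crossref systems rather than in-memory dictionaries.

def merge_grants(*sources: dict) -> dict:
    """Union prefix grants from several systems (e.g. Content System, CRM)."""
    merged: dict = {}
    for source in sources:
        for credential, prefixes in source.items():
            merged.setdefault(credential, set()).update(prefixes)
    return merged

def has_permission(credential: str, prefix: str, grants: dict) -> bool:
    """Return True if the credential may act on the given DOI prefix."""
    return prefix in grants.get(credential, set())

# Hypothetical grant tables, standing in for mappings spread across components.
content_system = {"member_user": {"10.5555"}}
crm = {"member_user": {"10.32013"}, "service_user": {"10.9999"}}

grants = merge_grants(content_system, crm)
print(has_permission("member_user", "10.32013", grants))  # True
print(has_permission("service_user", "10.5555", grants))  # False
```

The point of the merge step is that no single system holds the whole answer; a credential is valid for a prefix if any of the underlying systems says so.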
Subject classifications have been available via the REST API for many years, but they have never been complete or reliable, and they will soon be deprecated.
The subject metadata element was born out of a Labs experiment intended to enrich the metadata returned via Crossref Metadata Search with All Science Journal Classification (ASJC) codes from Scopus. This feature was developed when the REST API was still fairly new, and we now recognize that the initial implementation worked its way into the service prematurely.
Crossref and DOAJ share the aim of encouraging the dissemination and use of scholarly research using online technologies, and of working with and through regional and international networks, partners, and user communities to build local institutional capacity and sustainability. Both organisations agreed in 2021 to work together in a variety of ways, but primarily to ‘encourage the dissemination and use of scholarly research using online technologies, and regional and international networks, partners and communities, helping to build local institutional capacity and sustainability around the world’.
Back in 2014, Geoffrey Bilder blogged about the kick-off of an initiative between Crossref and Wikimedia to better integrate scholarly literature into the world’s largest knowledge space, Wikipedia. Since then, Crossref has been working to coordinate activities with Wikimedia: Joe Wass has worked with them to create a live stream of content being cited in Wikipedia, and we’re including Wikipedia in Event Data, a new service to launch later this year. In that time, we’ve also seen Wikipedia’s importance grow in terms of the volume of DOI referrals.
How can we keep this momentum going and continue to improve the way we link Wikipedia articles with the formal literature? We invited Alex Stinson, a project manager at The Wikipedia Library (and one of our first guest bloggers) to explain more:
Wikipedia is the most public gateway to scholarly research. With millions of citations to academic sources as well as reliable non-academic ones, like those produced by newspapers, its ecosystem of 5 million English Wikipedia articles and 35 million articles across hundreds of languages provides the first stop for researchers in both scholarly and informal research situations. The practice of “checking Wikipedia” has become ubiquitous in a number of fields; for example, Wikipedia is the most visited source of medical information online, and it is even the first stop for many medical students and practitioners looking for the literature.
The Wikipedia Library program helps Wikipedia’s volunteer editors access and use the best sources in their research and citations. Through partnerships with over fifty leading publishers and aggregators, like JSTOR, Project Muse, Elsevier, Newspapers.com, Highbeam, Oxford University Press and others, we have been able to give over 3000 of our most prolific volunteers access to over 5500 accounts. These are clear, win-win relationships where Wikipedia editors get to use these databases to improve Wikipedia, while in turn linking to authoritative resources and enhancing their discovery.
JSTOR has been working with us since 2012, providing over 500 accounts to our editors. Kristen Garlock at JSTOR writes:
“We’re very happy to collaborate with the Wikipedia Library to provide JSTOR access to Wikipedia editors. Supporting the initiative to increase editor access to scholarly resources and improve the quality of information and sources on Wikipedia has the potential to help all Wikipedia readers. In addition to providing more discoverability for our institutional subscribers, introducing new audiences to the scholarship on JSTOR helps them discover access opportunities like our Register & Read program.”
There are strong signals that Wikipedia’s role in the citation ecosystem helps ensure the best materials reach the public through its over 400 million monthly readers:
Two of our access partners have found that around half of the referrals arriving from Wikipedia were able to authenticate into their subscription resources, suggesting that a large portion of our readers can take advantage of subscriptions provided by scholarly institutions.
Wikipedia is highly influential in the open access ecosystem as well, with a recent study showing higher citation rates for OA materials than those behind a paywall.
Altmetrics tools (such as Altmetric.com, ImpactStory or Plum Analytics) are recognizing Wikipedia’s importance by including Wikipedia citations in their impact metrics.
Despite these advances, we think this is only the beginning of Wikipedia’s impact on the landscape of scholarly research and discovery. Wikipedia can become a highly integrated research platform within the broader research ecosystem, where the best scholarship is summarized and discoverable, and where Wikipedia effectively becomes the front matter to all research.
However, there are some clear barriers to fulfilling this vision. Currently, most citations on Wikipedia are stored as free text and are not readily available in machine-readable formats; our community is working to fix this. Wikipedia also has major systematic gaps in topics where either we lack volunteer interest or Wikipedia reflects larger systemic biases within society or scholarship. We need the help of volunteers, experts, industry partners, and information technologists to grow Wikipedia’s collection of citations, especially around key missing areas, and to transform existing citations into structured formats.
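As a toy illustration of what transforming free-text citations into structured formats could involve, here is a hedged sketch that pulls DOIs out of wikitext citation templates. The regex and the template shape are simplifications: real citation templates have many more fields and far messier formatting.

```python
import re

# Simplified pattern for {{cite journal}} templates carrying a |doi= field.
# Real wikitext parsing needs a proper template parser, not a regex.
CITE_DOI = re.compile(r"\{\{cite journal[^}]*?\|\s*doi\s*=\s*([^|}\s]+)", re.IGNORECASE)

def extract_dois(wikitext: str) -> list:
    """Return the DOI values found in {{cite journal}} templates."""
    return CITE_DOI.findall(wikitext)

sample = (
    "Some article text.<ref>{{cite journal |title=Example |doi=10.1000/xyz123 }}</ref> "
    "More text.<ref>{{cite journal |doi=10.5555/abc |year=2015}}</ref>"
)
print(extract_dois(sample))  # ['10.1000/xyz123', '10.5555/abc']
```

Once extracted, identifiers like these could be matched against registries and stored as structured statements, which is exactly the kind of work the Wikidata community is taking on.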
Wikidata, Wikipedia’s sister project which crowdsources structured metadata, offers an excellent opportunity for improving the impact of Wikipedia in research. Having Wikipedia citations stored in this structured ecosystem, connecting metadata with semantic meaning, would allow the citations in Wikipedia to become the backbone for discovery tools that emphasize the hand-curated interrelationships between authoritative sources and the knowledge collected by Wikipedia and Wikidata editors.
We need more collaborators to realize the full vision of Wikipedia supporting research in the most effective ways:
We need help from publishers with subscription databases: by joining The Wikipedia Library’s access partnership program, you give our editors access to high-quality source materials, which they then expose to millions of readers in a number of languages.
We need your expertise to build our structured metadata ecosystem, by helping Wikidata map and collect citation data.
We need the larger research community to promote Wikipedia as a scholarly communications tool and make contributing to Wikipedia an important part of the social responsibility of experts. Wider citation of sources in Wikipedia ensures widespread discovery and dissemination of that research.