Quality metadata is foundational to the research nexus and all Crossref services. When inaccuracies creep in, they create problems that compound down the line. It's no wonder that reports of metadata errors from authors, members, and other metadata users are some of the most common messages our technical support team receives (we encourage you to continue to report these metadata errors).
We make members' metadata openly available via our APIs, which means people and machines can incorporate it into their research tools and services, so we all want it to be accurate.
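As a minimal sketch of what that openness looks like in practice, the snippet below fetches one work's registered metadata from the public REST API at api.crossref.org. The DOI shown is just a placeholder, and any HTTP client would do:

```python
# Minimal sketch: look up one work's registered metadata via the
# Crossref REST API. The DOI here is a placeholder; any registered
# DOI will work.
import json
import urllib.request

doi = "10.5555/12345678"  # placeholder DOI
url = f"https://api.crossref.org/works/{doi}"

with urllib.request.urlopen(url) as response:
    record = json.load(response)

work = record["message"]  # the API wraps the work record in a "message" envelope
print(work["DOI"], work.get("title"))
```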
We test a broad sample of DOIs to ensure resolution. For each journal crawled, we select a sample equal to 5% of the journal's total DOIs, up to a maximum of 50 DOIs. The selected DOIs span prefixes and issues.
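In code terms, the sampling rule works out as below. This is a minimal sketch of the stated 5%-capped-at-50 rule only; the function name and the use of random.sample are illustrative assumptions, and the spread across prefixes and issues is not shown:

```python
# A minimal sketch of the sampling rule described above: 5% of the
# journal's DOIs (rounded up), capped at 50. The real crawler also
# spreads the sample across prefixes and issues, which this omits.
import math
import random

def select_crawl_sample(journal_dois: list[str]) -> list[str]:
    sample_size = min(math.ceil(len(journal_dois) * 0.05), 50)
    return random.sample(journal_dois, sample_size)
```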
The results are recorded in crawler reports, which you can access from the depositor report expanded view. If a title has been crawled, the last crawl date is shown in the appropriate column. Crawled DOIs that generate errors appear as a bold link.
Click Last Crawl Date to view a crawler status report for a title.
The crawler status report lists the following (a sketch of how these statuses might be assigned follows the list):
Total DOIs: total number of DOI names for the title in the system as of the last crawl date
Checked: number of DOIs crawled
Confirmed: the crawler found both the DOI and the article title on the landing page
Semi-confirmed: the crawler found either the DOI or the article title on the landing page
Not Confirmed: the crawler found neither the DOI nor the article title on the landing page
Bad: the page contains known phrases indicating the article is not available (for example, "article not found" or "no longer available")
Login Page: the crawler is prompted to log in, and finds no article title or DOI
Exception: indicates an error in the crawler code
httpCode: the resolution attempt resulted in an HTTP error (such as 400, 403, 404, or 500)
httpFailure: the HTTP server connection failed
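To make the categories concrete, here is a hedged sketch of how a single crawl result could map to one of these statuses. The CrawlResult fields and the simple substring matching are illustrative assumptions, not Crossref's actual crawler logic:

```python
# A hedged sketch mapping one crawl result to the statuses listed above.
# The CrawlResult fields and substring matching are assumptions for
# illustration. (The "Exception" status would correspond to the crawler
# code itself raising an error.)
from dataclasses import dataclass

@dataclass
class CrawlResult:
    connection_failed: bool      # could not reach the HTTP server
    http_status: int | None     # None when the connection failed
    login_prompt: bool           # landing page asked for credentials
    page_text: str               # text extracted from the landing page

NOT_AVAILABLE_PHRASES = ("article not found", "no longer available")

def classify(result: CrawlResult, doi: str, article_title: str) -> str:
    if result.connection_failed:
        return "httpFailure"
    if result.http_status is not None and result.http_status >= 400:
        return "httpCode"
    if result.login_prompt:
        return "Login Page"
    text = result.page_text.lower()
    if any(phrase in text for phrase in NOT_AVAILABLE_PHRASES):
        return "Bad"
    found_doi = doi.lower() in text
    found_title = article_title.lower() in text
    if found_doi and found_title:
        return "Confirmed"
    if found_doi or found_title:
        return "Semi-confirmed"
    return "Not Confirmed"
```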
Select each number to view details. Select re-crawl and enter an email address to crawl again.
Page owner: Isaac Farley | Last updated 2020-April-08