Quality metadata is foundational to the research nexus and all Crossref services. When inaccuracies creep in, they create problems that compound down the line. No wonder reports of metadata errors from authors, members, and other metadata users are among the most common messages our technical support team receives (we encourage you to continue to report these errors).
We make members’ metadata openly available via our APIs, which means people and machines can incorporate it into their research tools and services, so we all want it to be accurate.
The other day I was out and about and got into a conversation with someone who asked me about my doctoral work in English literature. I’ve had the same conversation many times: I tell someone (only if they ask!) that my dissertation was a history of the villanelle, and then they cheerfully admit that they don’t know what a villanelle is, and then I ask them if they’re familiar with Dylan Thomas’s poem “Do not go gentle into that good night.”
Having joined the Crossref team just a week earlier, I found the mid-year community update on June 14th a fantastic opportunity to learn about the Research Nexus vision. We explored its building blocks and the practical implementation steps within our reach, and within our imagination of the future.
Read on (or watch the recording) for a whistlestop tour of everything: what on Earth the Research Nexus is, how it’s taking shape at Crossref, how you are involved, and finally, what concerns the community has about the vision and how we plan to address them.
TL;DR A year ago, we announced that we were putting the “R” back in R&D. That was when Rachael Lammey joined the R&D team as the Head of Strategic Initiatives.
And now, with Rachael assuming the role of Product Director, I’m delighted to announce that Dominika Tkaczyk has agreed to take over Rachael’s role as the Head of Strategic Initiatives. Of course, you might already know her.
We will also immediately start recruiting for a new Principal R&D Developer to work with Esha and Dominika on the R&D team.
To work out which version you’re on, take a look at the website address that you use to access iThenticate. If you go to ithenticate.com, you are using v1. If you use a bespoke URL, such as https://crossref-[your member ID].turnitin.com/, you are using v2.
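If you need to check which version a set of users is on programmatically, a minimal sketch like the following would do. It relies only on the two URL patterns described above; the member-ID portion of the v2 URL is matched loosely, since its exact format is an assumption here.

```python
import re

def ithenticate_version(url: str) -> str:
    """Classify an iThenticate access URL as v1 or v2.

    Illustrative sketch based only on the URL patterns described above.
    """
    if "ithenticate.com" in url:
        return "v1"
    # v2 uses a bespoke per-member subdomain of turnitin.com
    if re.match(r"https://crossref-\w+\.turnitin\.com/?", url):
        return "v2"
    return "unknown"
```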
If you’re using v1, keep reading to learn about creating and finding your Similarity Report:
For each document you submit for checking, the Similarity Report provides an overall similarity breakdown, displayed as a percentage of similarity between the document and existing published content in the iThenticate database. iThenticate’s repositories include the published content provided by Crossref members, plus billions of web pages (both current and archived), work that has previously been submitted to Turnitin, and a collection of thousands of periodicals, journals, and publications.
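The overall score is, in essence, the matched proportion of the document expressed as a percentage. A sketch of that arithmetic follows; exactly what iThenticate counts (words versus characters) and how it rounds is not specified here, so treat this as illustrative only.

```python
def overall_similarity(matching_words: int, total_words: int) -> int:
    """Overall similarity as a whole-number percentage (illustrative).

    The precise counting and rounding rules are iThenticate's own;
    this just shows the shape of the calculation.
    """
    if total_words == 0:
        return 0
    return round(100 * matching_words / total_words)

# Example: 1,200 matching words in a 6,000-word document -> 20%
assert overall_similarity(1200, 6000) == 20
```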
Matches are highlighted, and the best matches are listed in the report sidebar. Other matches are called underlying sources, and these are listed in the content tracking mode. Learn more about the different viewing modes (Similarity Report mode, Content tracking mode, Summary report mode, Largest matches mode).
If two sources have exactly the same amount of matching text, the best match depends on which content repository contains the source of the match. For example, for two identical internet source matches, the most recently crawled internet source would be the best match. If an identical match is found to an internet source and a publication source, the publication source would be the best match.
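Taken together, the ranking behaves like a three-level sort: amount of matching text first, then repository type (publication over internet), then crawl recency for internet sources. A sketch of that ordering is below; the field names and numeric priorities are illustrative assumptions, not Turnitin’s actual data model.

```python
from dataclasses import dataclass

# Publication sources outrank internet sources when match sizes tie;
# the numeric priorities are an illustrative assumption.
REPO_PRIORITY = {"publication": 2, "internet": 1}

@dataclass
class Source:
    name: str
    repo: str              # "publication" or "internet"
    matching_words: int    # amount of overlapping text
    crawl_date: str = ""   # ISO date; only meaningful for internet sources

def best_match(sources: list[Source]) -> Source:
    """Pick the 'best match' using the tie-break rules described above."""
    return max(
        sources,
        key=lambda s: (
            s.matching_words,              # most matching text wins
            REPO_PRIORITY.get(s.repo, 0),  # then publication over internet
            s.crawl_date,                  # then most recently crawled
        ),
    )
```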
Accessing the Similarity Report (v1)
To access the Similarity Report through iThenticate, start from the folder that contains the submission, and go to the Documents tab. In the Report column you will see a Similarity Score button; click it to open the document in the Document Viewer.
The Document Viewer (v1)
The Document Viewer screen opens in the last used viewing mode. There are three sections:
Along the top of the screen, the document information bar shows details about the submitted document. This includes the document title, the date the report was processed, the word count and the number of matching sources found in the selected databases.
The left panel is the document text. This shows the full text of the submitted document, highlighting areas of overlap with existing published content.
The colors correspond to the matching sources, listed in the sources panel on the right.
The layout will depend on your chosen report mode:
Match Overview (show highest matches together) shows the best matches between the submitted document and content from the selected search repositories. Matches are color-coded and listed from highest to lowest percentage of matching words. Only the top or best matches are shown; you can see all other matches in the Match Breakdown and All Sources modes.
All Sources shows matches between the submission and a specifically selected source from the content repositories. This is the full list of all matches found, not just the top matches per area of similarity, including those not seen in the Match Overview because they are the same or similar to other areas which are better matches.
Match Breakdown shows all matches, including those that are hidden by a top source and therefore don’t appear in Match Overview. To see the underlying sources, hover over a match, and click the arrow icon. Select a source to highlight the matching text in the submitted document. Click the back arrow next to Match Breakdown to return to Match Overview mode.
Side-By-Side Comparison is an in-depth view that shows a document’s match compared side-by-side with the original source from the content repositories. From the All Sources view, choose a source from the sources panel, and a source box appears on the submitted document, highlighting the similar content within a snippet of the text from the repository source. In Match Overview, select the colored number at the start of the highlighted text to open this source box. To see the entire repository source, click Full Source View, which opens the full text of the repository source, with all its matching instances, in the sources panel. The sidebar shows the source’s full text with each match to the document highlighted in red. Click the X icon in the top right corner of the full source text panel to close it.
Use the view mode icons to switch between the Match Overview (default, left icon) and All Sources (right icon) Similarity Report viewing modes.
Viewing live web pages for a source (v1)
You may access web-based sources by clicking on the source title/URL. If there are multiple matches to this source, use the arrow icons to quickly navigate through them.
If a source is restricted or paywalled (for example, subscription-based academic resources), you won’t be able to view the full text of the source, but you’ll still see the source box snippet for context. Some internet sources may no longer be live.
From Match Overview, click the colored number at the start of a piece of highlighted text on the submitted document. A source box will appear on the document text, showing the similar content highlighted within a snippet of the text from the repository source. The source website appears in blue above the source snippet; click the link to access it.
From Match Breakdown or All Sources, select the source for which you want to view the website, and a diagonal icon will appear to the right of the source. Click this to access it.
Page owner: Kathleen Luschek | Last updated 2020-May-19