Blog

Improved processes, and more via Metadata Manager

Hi, Crossref blog-readers. I’m Shayn, from Crossref’s support team. I’ve been fielding member questions about how to effectively deposit metadata and register content (among other things) for the past three years. In this post, I’ll take you through some of the improvements that Metadata Manager provides to those who currently use the Web Deposit form.

Resolutions 2019: Journal Title Transfers = Metadata Manager

UPDATE, 12 December 2022
Due to the scheduled sunsetting of Metadata Manager, this title transfer process has been deprecated. Please find detailed guidance for transferring titles on our documentation site.

When you thought about your resolutions for 2019, Crossref probably didn’t cross your mind—but, maybe it should have…

LIVE18: Roaring attendees, incomplete zebras, and missing tablecloths

Running a smooth event is always the goal, but not always the case! No matter how well managed an event is, there is always a chance that things will not go according to plan. And so it was with LIVE18. On the first day we were without the tablecloths we had ordered, which actually gave the room quite a nice, if unintentional, ‘rustic’ look. When they finally arrived the following day, we realized we preferred the rustic look!

Reference matching: for real this time

In my previous blog post, Matchmaker, matchmaker, make me a match, I compared four approaches for reference matching. The comparison was done using a dataset composed of automatically-generated reference strings. Now it’s time for the matching algorithms to face the real enemy: the unstructured reference strings deposited with Crossref by some members. Are the matching algorithms ready for this challenge? Which algorithm will prove worthy of becoming the guardian of the mighty citation network? Buckle up and enjoy our second matching battle!

Phew - it’s been quite a year

As the end of the year approaches it’s useful to look back and reflect on what we’ve achieved over the last 12 months—a lot! To be honest, there were some things we didn’t get done—or didn’t make as much progress with as we hoped—but that happens when you have an ambitious agenda. However, we also got some things done that we didn’t expect to or that weren’t even on our radar at the end of 2017—this is inevitable as the research and scholarly communications landscape is rapidly changing.

Newly approved membership terms will replace existing agreement

In its July 2018 meeting, the Crossref Board voted unanimously to approve and introduce a new set of membership terms. At the same meeting, the board also voted to change the description of membership eligibility in our Bylaws, officially broadening our remit beyond publishers, in line with current practice and positioning us for future growth.

Updates to our by-laws

Good governance is important, and something that Crossref thinks about regularly, so the board frequently discusses the topic; this year even more so. At the November 2017 meeting, a motion was passed to create an ad-hoc Governance Committee to develop a set of governance-related questions and recommendations. The Committee has met regularly this year and is deliberating questions about term limits, the role of the Nominating Committee, the implications of contested elections, and more.

Data Citation: what and how for publishers

We’ve mentioned why data citation is important to the research community. Now it’s time to roll up our sleeves and get into the ‘how’. This part is important, as citing data in a standard way helps those citations be recognised, tracked, and used in a host of different services.

Matchmaker, matchmaker, make me a match

Matching (or resolving) bibliographic references to target records in the collection is a crucial task in the Crossref ecosystem. Automatic reference matching lets us discover citation relations in large document collections, calculate citation counts, H-indexes, impact factors, etc. At Crossref, we currently use a matching approach based on reference string parsing. Some time ago we realized there is a much simpler approach. And now it is finally battle time: which of the two approaches is better?
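As a taste of what a search-based match can look like (the post doesn’t spell out the algorithms here, so this is a minimal sketch rather than Crossref’s actual implementation): pass the whole unstructured reference string to the REST API’s `query.bibliographic` parameter and accept the top hit if its relevance score clears a threshold. The threshold below is an illustrative placeholder, not a tuned value.

```python
import requests  # assumes the requests library is installed

API = "https://api.crossref.org/works"

def match_reference(reference: str, threshold: float = 75.0):
    """Search-based matching sketch: send the raw reference string to
    the Crossref REST API and accept the top hit if its relevance score
    clears the threshold (an illustrative value, not a tuned one)."""
    resp = requests.get(
        API,
        params={"query.bibliographic": reference, "rows": 1},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if items and items[0].get("score", 0.0) >= threshold:
        return items[0]["DOI"]
    return None  # no confident match

print(match_reference(
    "Watson, J. D. & Crick, F. H. C. Molecular structure of "
    "nucleic acids. Nature 171, 737-738 (1953)."
))
```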

What does the sample say?

At Crossref Labs, we often come across interesting research questions and try to answer them by analyzing our data. Depending on the nature of the experiment, processing over 100M records might be time-consuming or even impossible. In those dark moments we turn to sampling and statistical tools. But what can we infer from only a sample of the data?
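As a taste of the statistical tools involved, here is a minimal sketch (with a simulated stand-in for the real corpus, since the post doesn’t share its data): estimating what fraction of records have some property from a random sample, together with a 95% confidence interval from the normal approximation.

```python
import math
import random

random.seed(42)

# Simulated stand-in for the real 100M-record corpus: each record
# either has some property of interest (1) or does not (0).
TRUE_RATE = 0.23

# Inspect a modest random sample instead of scanning every record.
sample_size = 10_000
sample = [1 if random.random() < TRUE_RATE else 0
          for _ in range(sample_size)]

# Sample proportion and a 95% confidence interval (normal approximation).
p_hat = sum(sample) / sample_size
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / sample_size)
print(f"estimated rate: {p_hat:.3f} +/- {margin:.3f}")
```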