create a function to remove duplicates in "works"
also, update the spelling of "infromation" to "information"!
please enable a function to remove duplicates in "works"
We’ve just released a new interface that addresses the issue with duplicate works added from different sources. The new interface groups works together by the work’s identifiers so that duplicates are shown as multiple versions of a single work.
The full announcement about the new interface is at http://orcid.org/blog/2014/12/11/new-feature-friday-new-orcid-record-interface
Israel Hanukoglu commented
I confirm the problem reported by Stuart Ray on March 17, 2017.
Duplicate records in ORCID are a very serious problem that should be fixed.
AdminORCID (APAC/MEA) (Admin, ORCID) commented
Your issue appears to be related to case (in)sensitivity of identifiers. All identifiers are presently case sensitive in our Registry, which is an issue because some identifier schemes are case insensitive. We're working on this and recommend that you follow this related iDea for more information and updates: https://support.orcid.org/forums/175591/suggestions/16897273
ORCID Community Team
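For what it's worth, the case-sensitivity problem described above comes down to normalizing identifiers before comparing them. A minimal sketch in Python (DOIs are case-insensitive by specification; the list of resolver prefixes is an assumption about common input forms, not ORCID's actual logic):

```python
def normalize_doi(doi: str) -> str:
    """Normalize a DOI for comparison.

    DOIs are case-insensitive by specification, so lowercasing before
    comparison treats 10.1000/ABC and 10.1000/abc as the same work.
    Common resolver prefixes are stripped so bare and URL forms match.
    """
    doi = doi.strip()
    for prefix in ("https://doi.org/", "http://doi.org/",
                   "https://dx.doi.org/", "http://dx.doi.org/", "doi:"):
        if doi.lower().startswith(prefix):
            doi = doi[len(prefix):]
            break
    return doi.lower()
```

With this, `normalize_doi("10.1000/ABC")` and `normalize_doi("doi:10.1000/abc")` compare equal, which is exactly the capitalization mismatch commenters below report.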
Stuart Ray commented
It appears (from my profile) that the DOI merge function fails to match (and merge) duplicates when there are differences in DOI capitalization from different sources (e.g. Scopus vs. ResearcherID vs. CrossRef).
Vladimir Alexiev commented
It matches "items in your record that share the same DOI or other identifier". As remarked by others, that's not good enough. E.g., in my profile http://orcid.org/0000-0001-7508-7428 there are two copies of "Implementing CIDOC CRM Search Based on Fundamental Relations and OWLIM Rules".
If I import http://vladimiralexiev.github.io/pubs/Alexiev-bibliography.bib to https://orcid.org/my-orcid, the number grows from 71 to 102 works!!
The matching should be by Title and Authors.
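Matching by title could be done with fuzzy string comparison so that small differences in capitalization or punctuation between sources don't defeat the match. A minimal sketch, assuming plain title strings and using Python's standard-library `difflib` (this is an illustration of the idea, not ORCID's implementation):

```python
from difflib import SequenceMatcher

def titles_match(a: str, b: str, threshold: float = 0.9) -> bool:
    """Fuzzy title comparison, ignoring case and punctuation.

    Two titles are considered the same work if their cleaned forms
    are at least `threshold` similar by SequenceMatcher ratio.
    """
    def clean(s: str) -> str:
        # Keep only letters, digits, and spaces, then collapse to words.
        kept = "".join(ch for ch in s.lower() if ch.isalnum() or ch.isspace())
        return " ".join(kept.split())
    return SequenceMatcher(None, clean(a), clean(b)).ratio() >= threshold
```

In practice author names would be compared as a second signal, since two distinct papers can share short or generic titles.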
Paul Fowler commented
Useless, absolutely useless. The link does NOT take the reader to any useful information, like most "help" links in ORCID or sent by ORCID staff. NOTHING on the ORCID website addresses duplicate works. Between ORCID and Researchfish, a lot of researcher time is being wasted.
The duplicate problem still exists!!!!
The lack of this capability in ORCID has so far prevented me from taking ORCID seriously, or including my ORCID on my website.
Bob Grove commented
Admin: PLEASE GET YOUR ACT TOGETHER ON THIS ONE. As noted by many, many people over a long period of time, the one-at-a-time delete function is indeed VERY cumbersome and time-consuming.
This really isn't good enough. This substantial problem was acknowledged almost a year ago and still no solution is offered. DOI matching is surely simple to implement. Many other publication databases also offer a manual merge function for records.
Rolf Sander commented
Unfortunately, ORCID is still useless for me as long as I have all these duplicates in my list of papers. There is no need to wait months and years for a fancy solution. Simply create a button saying: "Keep this entry and delete all others with the same DOI".
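A button like the one described could be backed by a single dedupe pass over the works list. A minimal sketch, assuming each work is a dict with a `doi` field (an illustrative data shape, not ORCID's schema):

```python
def deduplicate_by_doi(works: list[dict]) -> list[dict]:
    """Keep the first work seen for each DOI; drop the rest.

    Works without a DOI are always kept, since they cannot be
    confirmed as duplicates by this criterion alone.
    """
    seen = set()
    kept = []
    for work in works:
        doi = (work.get("doi") or "").strip().lower()
        if doi and doi in seen:
            continue  # duplicate of an already-kept entry
        if doi:
            seen.add(doi)
        kept.append(work)
    return kept
```

Lowercasing the DOI here also covers the capitalization mismatches between sources that other commenters describe.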
A tool to assist the user with duplicate removal should have been developed well before thinking about integration with other databases such as ResearcherID, Scopus, etc. Multiple selection is also missing. Come on!
This major problem is still hanging around. As others have commented, this makes ORCID unusable. I had thought of updating my records from ResearcherID today, but until ORCID fixes this issue I'll stay well away.
The easiest way to do this would be to allow free sorting of works in the editing window. This would be a useful feature on its own and should take all of 30 min to implement. Then it would be relatively easy to remove (delete) duplicate works. Although I do think that automatic duplicate removal would be useful too (for those with hundreds of papers). I gave this feature the highest priority.
Igor Reva commented
My ORCID number is 0000-0001-5983-7743
I have co-authored more than 100 publications.
I imported them once from ResearcherID, and twice from Scopus.
Currently my ORCID record has more than 300 entries, most of them triplicated.
Cleaning this is a big pain...
The easiest way of identifying articles as unique might be by DOI (Digital Object Identifier).
Can it be somehow implemented to avoid multiplication of already existing records?
Elston Van Steenburgh commented
I don't think an author should be subjected to this ORCID torture. I want to submit a paper for publication, not fool around with expired links just to get started. Some start! I quit.
David W. Lawrence commented
For journal articles, matching can approach 100% through the use of:
- the first few title words
- the first author and second author names
- the beginning page number (or location number for online articles), or the DOI
Publication year isn't so important because there can be confusion between ahead-of-print items and final versions that cross years.
It is unnecessary to perfectly match everything every time. Give us an easy way to edit and delete our things.
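The heuristic above can be sketched as a composite match key: two records with the same key are treated as the same article. A minimal Python sketch with assumed field names (`title`, `authors`, `doi`, `first_page` are illustrative, not ORCID's schema):

```python
def match_key(work: dict) -> tuple:
    """Composite key per the heuristic above: first few title words,
    first two author surnames, and the DOI or starting page.
    Publication year is deliberately excluded, since ahead-of-print
    and final versions can cross years.
    """
    title_words = tuple(work["title"].lower().split()[:4])
    authors = tuple(a.lower() for a in work.get("authors", [])[:2])
    locator = str(work.get("doi") or work.get("first_page") or "").lower()
    return (title_words, authors, locator)
```

Records whose keys compare equal would be candidates for merging rather than silent deletion, so the user can confirm.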
Just re-checked, I still have loads of duplicates (from ResearcherID) and can't delete all at once. Makes ORCID unusable. As people have been saying for more than a year!
Well, as a minimum, you must create an automatic system which can identify when the same item has been imported from both ResearcherID and Scopus.
It is a big pain for the author to go through all the duplicates that your automatic system has created. And this duplication is re-created each time I import, e.g., from Scopus.
I will refuse to remove the duplicates until you provide something which works better for the authors.
So if you want a clean, good database, and if you want authors to do work for free for you, you must take care of this issue immediately!
A merge tool could be used, so that the user could combine duplicate entries into one with complete information, such as page numbers and external links.
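A merge along these lines could prefer whichever copy has a non-empty value for each field, so the combined entry ends up with the most complete information. A minimal sketch, assuming each work is a plain dict (field names are illustrative):

```python
def merge_works(a: dict, b: dict) -> dict:
    """Combine two duplicate work records into one.

    Starts from record `a` and fills in any field that is missing
    or empty there with the value from record `b`.
    """
    merged = dict(a)
    for key, value in b.items():
        if not merged.get(key):
            merged[key] = value
    return merged
```

A real tool would also need conflict handling for fields where both copies have different non-empty values, letting the user pick.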
It would seem a simple tool to identify duplicate DOIs would be the first step in creating a tool to do this. Either that, or a simple "select all works" then delete, followed by re-import from another source, would do the trick.