Showing posts with label ORCID.

Friday, March 10, 2017

DataCite services, Zenodo and the many ORCID integration services (and Impactstory)



DataCite services

Locate, identify, and cite research data with DataCite, a global provider of DOIs for research data.

https://www.datacite.org/
(fewer services than Crossref).

http://stephane-mottin.blogspot.fr/2017/01/datacite-inist-cern-metadata-schema.html

DOI handbook
http://www.doi.org/hb.html

In order to create new DataCite DOIs and assign them to your content, it is necessary to become a DataCite member or work with one of the current members.

Through the web interface or the API of the DataCite Metadata Store you can submit a name, a metadata description following the DataCite Metadata Schema, and at least one URL of the object in order to create a DOI. Once created, information about a DOI is available through DataCite's different services (search, event data, OAI-PMH and others).

The DataCite Metadata Store is a service for data publishers to mint DOIs and register associated metadata. The service requires organisations to first register for an account with a DataCite member.
https://mds.datacite.org/
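
If you have an MDS account, the whole flow can be scripted. Here is a minimal Python sketch, assuming HTTP Basic authentication with a datacentre symbol and password, a DataCite XML file that already contains the DOI in its <identifier> element, and placeholder DOI/URL values; check the MDS documentation before relying on the exact endpoints and content types:

import requests

AUTH = ("MY.DATACENTRE", "password")   # placeholder MDS credentials
DOI = "10.5072/example-doi"            # hypothetical test DOI
URL = "https://example.org/dataset"    # landing page of the object

# 1. Register the metadata (DataCite XML following the Metadata Schema).
with open("metadata.xml", "rb") as f:
    r = requests.post("https://mds.datacite.org/metadata", data=f.read(),
                      headers={"Content-Type": "application/xml;charset=UTF-8"},
                      auth=AUTH)
r.raise_for_status()

# 2. Mint the DOI by pointing it at (at least) one URL.
r = requests.put("https://mds.datacite.org/doi/" + DOI,
                 data="doi={}\nurl={}".format(DOI, URL),
                 headers={"Content-Type": "text/plain;charset=UTF-8"},
                 auth=AUTH)
r.raise_for_status()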

http://schema.datacite.org/

status of the services
http://status.datacite.org/

https://blog.datacite.org/

Services of DataCite profiles

DataCite Profiles integrates DataCite services from a user’s perspective and provides tools for personal use. In particular, it is a key piece of integration with ORCID, where researchers can connect their profiles and automatically update their ORCID record when any of their works contain a DOI.

https://profiles.datacite.org/
example
https://profiles.datacite.org/users/0000-0002-7088-4353
0000-0002-7088-4353 is my ORCID Id.

You can get

  • your ORCID Token 
  • your API Key
    (if you want to use the ORCID API).
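
If you do want to call the ORCID API yourself, here is a minimal sketch (assumptions: the public API v2.0 endpoint and a valid token obtained elsewhere; the iD is the one used in this post):

import requests

ORCID_ID = "0000-0002-7088-4353"
TOKEN = "my-orcid-token"  # placeholder: the token shown in your DataCite profile

# List the works of a record on the public ORCID API (v2.0, JSON representation).
r = requests.get("https://pub.orcid.org/v2.0/" + ORCID_ID + "/works",
                 headers={"Accept": "application/json",
                          "Authorization": "Bearer " + TOKEN})
r.raise_for_status()
for group in r.json().get("group", []):
    summary = group["work-summary"][0]
    print(summary["title"]["title"]["value"])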

In your profile, you can select how to connect:

You can also follow ORCID claims.
For example: "You have 5 successful claims, 0 notification claims, 0 queued claims and 0 failed claims"

You are also linked to Impactstory
Impactstory is an open-source website that helps researchers explore and share the online impact of their research.
https://impactstory.org/u/0000-0002-7088-4353
0000-0002-7088-4353 is my ORCID id.

You must authorize impactstory.org to link with your ORCID record.
You must also connect impactstory.org with your Twitter account.
In Impactstory, Zenodo records imported from ORCID appear as "datasets".

Zenodo and DataCite METADATA

If you use Zenodo, an open archive that assigns DataCite DOIs, the DataCite services are useful.

Zenodo assigns a DataCite DOI and provides an export to clean DataCite metadata (DataCite XML 3.1).

(see also my posts on this blog with the tag "zenodo")

Zenodo, DataCite and ORCID

If you use Zenodo together with your ORCID iD, then you get some additional services:

You must allow Zenodo to "Get your ORCID iD".

You must also grant DataCite the ORCID "Add works" permission.
However, only 5 metadata fields are automatically sent to ORCID by DataCite:

  • Title, 
  • Year, 
  • Description (the full field of Zenodo), 
  • Contributor (the field 'creator' of Zenodo = authors),
  • DOI

You can complete the metadata in ORCID afterwards...
Change the "Work category" and "Work type".
The Source then changes from "zenodo" to "Stéphane MOTTIN".

In Impactstory, Zenodo links (coming from ORCID) are treated as "datasets" with only 4 metadata fields:
  • Title, 
  • Year, 
  • Contributor (the field 'creator' of Zenodo = authors),
  • DOI

ORCID

You can see the list of ORCID "trusted organizations" at https://orcid.org/account.
DataCite appears there with the "Add works" permission.


For other ORCID Search & link wizards:
http://support.orcid.org/knowledgebase/articles/188278-link-works-to-your-orcid-record-from-another-syste

for example
  • The Crossref Metadata Search integration allows you to search and add works by title or DOI. Once you have authorized the connection and are logged into ORCID, Crossref search results will also include a button to add works to your ORCID record.
    http://search.crossref.org/
  • The DataCite integration allows you to find your research datasets, images, and other works. Recommended for locating works other than articles and works that can be found by DOI.
  • The ISNI2ORCID integration allows you to link your ORCID and ISNI records and can be used to import books associated with your ISNI. Recommended for adding books.
    http://isni2orcid.labs.orcid-eu.org/
  • The ResearcherID integration lets you link your ResearcherID account and its works to your ORCID record, and send biographical and works information between ORCID and ResearcherID.
    http://wokinfo.com/researcherid/integration/
  • The Scopus to ORCID wizard imports works associated with your Scopus Author ID; see Manage My [Scopus] Author Profile for more information. Recommended for adding multiple published articles to your ORCID record.
    https://www.elsevier.com/solutions/scopus/support/authorprofile

Tuesday, February 14, 2017

Zotero translators: importing bibliographic metadata and exporting (TEI P5)


Translators are at the core of one of Zotero’s most popular features: its ability to import and export item metadata from and to a variety of formats. Below we describe how translators work, and how you can write your own.

Zotero translators are stored as individual JavaScript files in the “translators” subdirectory of the Zotero data directory.
https://www.zotero.org/support/zotero_data#locating_your_zotero_library
for example:
/Users/<username>/Library/Application Support/Firefox/Profiles/<randomstring>/zotero/Translators

Each translator contains a JSON metadata header, followed by the translator’s JavaScript code.

https://www.zotero.org/support/dev/translators

https://github.com/zotero/translators
(with each update, this GitHub list is copied into the "translators" folder)

Data formats

List of all import/export formats:
https://www.zotero.org/support/dev/data_formats

The main problem is the quality of the JavaScript code that performs the field mapping.
https://www.zotero.org/support/kb/field_mappings

For example, the "Hal archive ouverte.js" does not import the Id of authors (and affiliation (in Hal it's an internal number; no Id of affiliation))

export file "TEI" in Hal (the other formats have no Id).


see ORCID
ORCID identifier
This element is used to record the ORCID iD of users. The ORCID iD is formed of a 16-digit number.
Within <orcid-identifier> the iD will be recorded in the following child elements:
    <uri> the full path to the ORCID record
    <path> just the 16 digit ORCID identifier
    <host> the domain of the uri
 <orcid-identifier>
    <uri>http://orcid.org/0000-0001-5727-2427</uri>
    <path>0000-0001-5727-2427</path>
    <host>orcid.org</host>
 </orcid-identifier>
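
As an illustration, a short Python sketch that extracts these three child elements from such a block (real ORCID XML uses namespaces, which are omitted here for simplicity):

import xml.etree.ElementTree as ET

xml = """<orcid-identifier>
   <uri>http://orcid.org/0000-0001-5727-2427</uri>
   <path>0000-0001-5727-2427</path>
   <host>orcid.org</host>
</orcid-identifier>"""

elem = ET.fromstring(xml)
print(elem.findtext("uri"))   # full path to the ORCID record
print(elem.findtext("path"))  # just the 16-digit identifier
print(elem.findtext("host"))  # domain of the uri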

Organization Affiliation
The affiliations section of an ORCID record in XML is recorded under <orcid-activities>. This area is separate from the <orcid-bio> fields.
See XML for Affiliations for more information: http://members.orcid.org/api/xml-affiliations

Funding information
The Funding section of an ORCID record is recorded under <orcid-activities>.  For information about adding funding via the API see XML for funding.

https://members.orcid.org/api/record-xml-structure

ORCID can be used in repository systems to clearly link authors - and all their name variants - with their research work, improving search and retrieval. Repository systems can also exchange data with the ORCID registry - for example, retrieving ORCID record information in order to populate author profiles, and updating ORCID records with publication information each time a repository deposit is made.
https://members.orcid.org/repositories

Import

https://www.zotero.org/support/getting_stuff_into_your_library

 Zotero can import from various bibliographic formats:
  • Zotero RDF, Bibliontology RDF
  • MODS (Metadata Object Description Schema)
  • BibTeX (rich format but lacks standardization)
  • CSL JSON
  • Endnote XML (this is actually very similar to RIS in importing from Endnote, but may have some small advantages and is one of the few styles that will preserve italics across imports.)
  • MAB2, MARC, MARCXML
  • MEDLINE/nbib
  • OVID Tagged
  • PubMed XML
  • RIS (this can be convenient for quick edits between export & import because of its simple structure)
  • RefWorks Tagged (recommended for RefWorks)
  • Web of Science Tagged
  • Refer/BibIX (discouraged if any other option is available)
  • XML ContextObject
  • Unqualified Dublin Core RDF

export via TEI.js

The tei-zotero-translator is a simple translator that seeks to bridge the gap between editing documents following the TEI Guidelines and maintaining the bibliographies with Zotero <http://www.zotero.org>.

The translator exports items from the Zotero database to TEI biblStruct elements. It integrates with Zotero, such that it is possible to select TEI as a target export format. Initially, it has been developed to create bibliographies for papers written in TEI P5, but should as well be useful for other projects.
http://wiki.tei-c.org/index.php/TEIZoteroTranslator



Friday, February 3, 2017

metadata, DOI and crossRef 2.0 (2017)

Intro

Metadata Search is our primary user interface for searching and filtering our metadata. It allows users to quickly enter any term and to search and filter on a number of elements, including ISSN, ORCID iDs, funding data and more. It can be used to look up the DOI for a reference, a partial reference, or a set of references.
https://www.crossref.org/services/metadata-delivery/
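
The same lookups can also be scripted against the Crossref REST API; a minimal sketch (the query string below is only an example):

import requests

# Find candidate DOIs for a partial reference via the public REST API.
params = {"query.bibliographic": "an example partial reference", "rows": 3}
r = requests.get("https://api.crossref.org/works", params=params)
r.raise_for_status()
for item in r.json()["message"]["items"]:
    print(item["DOI"], (item.get("title") or [""])[0])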

In order to encourage publishers and other content producers to embed metadata into their PDFs, we have released an experimental tool called "pdfmark". This open source tool allows you to add XMP metadata to a PDF. What's really cool is that if you give the tool a Crossref DOI, it will look up the metadata in Crossref and then apply said metadata to the PDF. More detail can be found on the pdfmark page on the Crossref Labs site. The usual weasel words and excuses about "experiments" apply.
dec 2009
https://www.crossref.org/blog/add-crossref-metadata-to-pdfs-using-xmp/

Using Crossref metadata to enable auditing of conformance to funder mandates: A Guide for publishers
Funders are increasingly setting mandates around publications that result from research they have funded. The mandates include specifications about licenses, embargoes, and notifications of publication acceptance and/or publication. This poses logistical problems for all the parties involved. Funders will need a way to track outputs from thousands of publishers. Publishers will need a standard and efficient way to demonstrate conformance to the mandates. All the stakeholders in the process (funders, publishers, institutions and researchers) will span disciplines, institutions, geographies and jurisdictions. Crossref was set up specifically to deal with these sorts of multiple bilateral relationships.

Crossref has extended its metadata schemas and Application Programming Interfaces (APIs) to enable funding agencies, institutions and publishers to use Crossref as a metadata source that can be used to track research that is subject to these mandates and to ensure that said research is being disseminated according to the requirements of the mandates.

https://data.crossref.org/schemas/

Ref.
https://github.com/CrossRef/rest-api-doc/blob/master/funder_kpi_metadata_best_practice.md

60 references in
https://www.crossref.org/categories/metadata/

Get ready for Crossmark 2.0

Publishers can upgrade to the new and improved Crossmark 2.0, including a mobile-friendly pop-up box and a new button. We will provide a new snippet of code for your landing pages, and we'll support version 1.5 until March 2017.
We recently revealed a new look for the Crossmark box, bringing it up-to-date in design and offering extra space for more metadata. The new box pulls all of a publication’s Crossmark metadata into the same space, so readers no longer have to click between tabs. 
Linked Clinical Trials and author names (including ORCID iDs) now have their own sections alongside funding information and licenses.

https://www.crossref.org/blog/get-ready-for-crossmark-2.0/

https://www.crossref.org/blog/crossref-to-auto-update-orcid-records/

This is a summary of the technical and production steps that a publisher will need to follow to participate in CrossMark.
Sign up

Drop an email to crossmark_info@crossref.org to let us know that you want to get started with CrossMark. CrossMark fees are activated when you start to deposit, and are US$0.20 for current content and US$0.02 for backfile content (older than two years).

CrossMark metadata should be deposited as part of a regular CrossRef DOI deposit, but can also be deposited as stand-alone data to help publishers populate their backfiles.

Record the DOI in HTML metadata

The publisher should ensure that the DOI is embedded in the HTML metadata for all content to which CrossMark buttons are being applied as follows:

<meta name="dc.identifier" content="doi:10.5555/12345678" />

http://crossmarksupport.crossref.org/technical-implementation-guidelines/

DOI

Over 20,000 DOI name prefixes within the DOI System
Over 5 billion DOI resolutions per year

Relation to other schemes

Strong focus on interoperability and on working with existing and new schemes; technical, syntactic, and semantic interoperability


http://www.doi.org/factsheets/DOIKeyFacts.html

Thursday, January 12, 2017

API import into Zenodo, Zenodo-GitHub integration. Research data repositories and open access archives. ORCID and DataCite metadata


Zenodo is a research data repository. It was created by OpenAIRE and CERN to provide a place for researchers to deposit datasets.
https://home.cern/about/updates/2013/05/cern-and-openaireplus-launch-european-research-repository

some examples

an example of a .zip with many PDFs

https://zenodo.org/record/168580#.WIeylGrNzdQ

The record page is organised in 7 blocks:

  1. Title
  2. author
  3. abstract
  4. acknowledgments
  5. frame with pdf or zip...
  6. Files
  7. References

DOI

(screenshot of the record's DOI badge)

many export solutions

  1. BibTeX Export
  2. Citation Style Language JSON Export
  3. DataCite XML Export
  4. Dublin Core Export
  5. JSON Export
  6. MARC21 XML Export
  7. a link to Mendeley:
    https://www.mendeley.com/sign/in/?acw=&utt=


If you select JSON, for example, the record's metadata is displayed directly in the browser window.
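
The same JSON is also available programmatically; a sketch assuming the /api/records endpoint for this public record:

import requests

# Fetch the public metadata of record 168580 as JSON (no token needed for open records).
r = requests.get("https://zenodo.org/api/records/168580")
r.raise_for_status()
record = r.json()
print(record["doi"])
print(record["metadata"]["title"])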


another example, communities COAR

Publications and outputs from or related to the Confederation of Open Access Repositories (COAR). Topics on open access repositories, interoperability, usage data, vocabularies, training, licenses and more.
https://zenodo.org/communities/coar

another example, an article

https://zenodo.org/communities/2249-0205/?page=1&size=20
A Google search returns it in second position.

a web service

Zenodo, a CERN service, is an open dependable home for the long-tail of science, enabling researchers to share and preserve any research outputs in any size, any format and from any science.

DOI

Zenodo assigns all publicly available uploads a Digital Object Identifier (DOI) to make the upload easily and uniquely citeable. Zenodo further supports harvesting of all content via the OAI-PMH protocol.
Withdrawal of data and revocation of DOIs:
Content not considered to fall under the scope of the repository will be removed and associated DOIs issued by Zenodo revoked. Please signal promptly, ideally no later than 24 hours from upload, any suspected policy violation. Alternatively, content found to already have an external DOI will have the Zenodo DOI invalidated and the record updated to indicate the original external DOI. User access may be revoked on violation of Terms of Use.

The DOI comes from DataCite, not Crossref, so you cannot use Crossref's services.
http://stephane-mottin.blogspot.fr/2017/02/tous-les-doi-noffrent-pas-des-services.html

Login

You can log in with:
  • ORCID iD / ORCID password
  • GitHub username/pass
  • email/pass

Upload

What can I upload?

All research outputs from all fields of science are welcome. In the upload form you can choose between types of files: publications (book, book section, conference paper, journal article, patent, preprint, report, thesis, technical note, working paper, etc.), posters, presentations, datasets, images (figures, plots, drawings, diagrams, photos), software, videos/audio and interactive materials such as lessons. We do check every piece of content being uploaded to ensure it is research related.

In the "description" field, which has a rich text editor, you cannot even copy/paste HTML, for example from a PLOS article. You cannot even insert a link.
You can enter an equation in TeX, for example in the form (between braces):
x = {-b \pm \sqrt{b^2-4ac} \over 2a}

community

Zenodo allows you to create your own collection and accept or reject uploads submitted to it. Creating a space for your next workshop or project has never been easier. Plus, everything is citeable and discoverable!

Want your own community?
It's easy. Just click the button to get started.
  • Curate — accept/reject what goes in your community collection.
  • Export — your community collection is automatically exported via OAI-PMH
  • Upload — get custom upload link to send to people
We currently accept up to 50GB per dataset (you can have multiple datasets); there is no size limit on communities.

Metadata types and sources

All metadata is stored internally in MARC according to the schema defined in http://inveniosoftware.org/wiki/Project/OpenAIREplus/DevelopmentRecordMarkup.
Metadata is exported in several standard formats such as MARCXML, Dublin Core, and DataCite Metadata Schema according to OpenAIRE Guidelines.
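
One quick way to see those exports is the OAI-PMH endpoint; a sketch, assuming the zenodo.org/oai2d endpoint, the oai_dc prefix and a hypothetical community set name:

import requests

# List Dublin Core records of a community set via OAI-PMH.
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc",
          "set": "user-coar"}  # hypothetical set name for the COAR community
r = requests.get("https://zenodo.org/oai2d", params=params)
r.raise_for_status()
print(r.text[:500])  # raw XML; parse it with xml.etree or an OAI-PMH client for real use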


Open source

Powered by Invenio
Zenodo is a small layer on top of Invenio (http://github.com/inveniosoftware/invenio), a free software suite enabling you to run your own digital library or document repository on the web.

code:
https://github.com/zenodo/zenodo


GitHub

Zenodo has integration with GitHub to make code hosted in GitHub citable.
  • Select the repository you want to preserve, and toggle the switch below to turn on automatic preservation of your software.
  • Go to GitHub and create a release. Zenodo will automatically download a .zip-ball of each new release and register a DOI.
  • After your first release, a DOI badge that you can include in GitHub README will appear next to your repository below.

https://zenodo.org/account/settings/github/

---

Ref.

https://en.wikipedia.org/wiki/Zenodo
https://en.wikipedia.org/wiki/Category:Open-access_archives

IMPORT in zenodo

resources

Invenio

Invenio is a free software suite enabling you to run your own digital library or document repository on the web. The technology offered by the software covers all aspects of digital library management, from document ingestion through classification, indexing, and curation up to document dissemination. Invenio complies with standards such as the Open Archives Initiative and uses MARC 21 as its underlying bibliographic format. The flexibility and performance of Invenio make it a comprehensive solution for management of document repositories of moderate to large sizes.

Invenio was originally developed at CERN to run the CERN Document Server, managing over 1,000,000 bibliographic records in high-energy physics since 2002, covering articles, books, journals, photos, videos, and more. Invenio is nowadays co-developed by an international collaboration comprising institutes such as CERN, DESY, EPFL, FNAL and SLAC, and is used by many more scientific institutions worldwide.

zenodo interface

For a manual upload, there are 11 categories of fields:

  1. Upload type 
    1. Book section 
    2. ... Journal article, etc
  2. Basic Info
    1. date
    2. Title
    3. Authors (one by one)!!!
    4. Description (only text (and math formula) without link!!!)
    5. Keyword
    6. Additional notes, for example a table of contents (sommaire)
  3. License
    1. Open
    2. CC 4.0; you must add its category
  4. Communities
    1. integrations (for example)
  5. Funding
    1. CNRS (for example)
  6. Related/alternative identifiers
    1. ISSN, ISBN, URL
  7. Contributors, for example the series editor (directeur de collection)
  8. reference
  9. journal
  10. c
  11. Book
    1. Publisher
    2. Place
    3. ISBN
    4. Book Title
    5. Page (of this book)

zenodo API

The process

an example:
Similar to figshare, Zenodo can store your data and give you a DOI to make it citable.
We have started to deposit all Brain Catalogue’s data at Zenodo, and soon you should be able to cite your favourite brains in your works.
Initially, we uploaded the data manually, but that became tedious very soon. Luckily, Zenodo has a very simple-to-use and well-documented API. In just 3 lines of code using curl you can easily deposit a data file and make it citable (full information is available at https://zenodo.org/dev).

Before starting anything you need to obtain a token, which is a random alphanumeric string that identifies your queries. You only need to do this once. With your token safely stored (I keep it in the $token variable), data uploading takes just 3 steps:

1. Create a new deposit and obtain a deposit ID:

curl -i -H "Content-Type: application/json" -X POST --data '{"metadata":{"access_right": "open","creators": [{"affiliation": "Brain Catalogue", "name": "Toro, Roberto"}],"description": "Brain MRI","keywords": ["MRI", "Brain"],"license": "cc-by-nc-4.0", "title": "Brain MRI", "upload_type": "dataset"}}' https://zenodo.org/api/deposit/depositions/?access_token=$token |tee zenodo.json

Zenodo responds with a json file, which here I’m saving to zenodo.json. Now you can use awk to parse that file and recover the deposit id. I do that like this:
zid=$(cat zenodo.json|tr , '\n'|awk '/"id"/{printf"%i",$2}')

With your deposit ID in hand, you are ready to upload your data file

2. Upload data file:

curl -i -F name=MRI.nii.gz -F file=@/path/to/the/data/file/MRI.nii.gz https://zenodo.org/api/deposit/depositions/$zid/files?access_token=$token

The server will respond with an HTTP 100 'Continue' message, and depending on the size of your file you'll have to wait some time. Once the upload is finished you are ready to

3. Publish your dataset:

curl -i -X POST https://zenodo.org/api/deposit/depositions/$zid/actions/publish?access_token=$token

And that's it. You can now go to Zenodo and view the web page for your data.
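
If you prefer not to parse the JSON with awk, the same three steps can be written with Python's requests; a sketch under the same assumptions as the curl commands above (a personal access token, and the deposition-files endpoint used by the curl example):

import requests

token = "my-zenodo-token"  # placeholder personal access token
base = "https://zenodo.org/api/deposit/depositions"
params = {"access_token": token}

metadata = {"metadata": {
    "title": "Brain MRI", "description": "Brain MRI", "upload_type": "dataset",
    "creators": [{"name": "Toro, Roberto", "affiliation": "Brain Catalogue"}],
    "keywords": ["MRI", "Brain"], "license": "cc-by-nc-4.0", "access_right": "open"}}

# 1. Create a new deposition and read its id from the JSON response.
r = requests.post(base, params=params, json=metadata)
r.raise_for_status()
zid = r.json()["id"]

# 2. Upload the data file to the deposition.
with open("/path/to/the/data/file/MRI.nii.gz", "rb") as f:
    r = requests.post("{}/{}/files".format(base, zid), params=params,
                      data={"name": "MRI.nii.gz"}, files={"file": f})
r.raise_for_status()

# 3. Publish the deposition.
r = requests.post("{}/{}/actions/publish".format(base, zid), params=params)
r.raise_for_status()
print(r.json()["doi"])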


Ref.
http://siphonophore.org/blog/2016/01/16/at-brain-catalogue-we-love-zenodo/

---
A bug in JSON object
https://github.com/zenodo/zenodo/issues/865
In the online API documentation for developers (https://zenodo.org/dev), under
Resources > Representations > Deposition metadata > subjects

the example JSON object for a subject is:
[{"term": "Astronomy",
"id": "http://id.loc.gov/authorities/subjects/sh85009003",
"scheme": "url"}]
but "id" is not supported and the JSON is rejected:
the field must be named "identifier".
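
For reference, here is the corrected object as a Python dict, ready to be placed in the deposition metadata (the only change is the key name):

# Corrected "subjects" entry: Zenodo expects "identifier", not "id".
subjects = [{"term": "Astronomy",
             "identifier": "http://id.loc.gov/authorities/subjects/sh85009003",
             "scheme": "url"}]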

resources

http://developers.zenodo.org/ 
(Zenodo REST API documentation uses Slate. )

less useful: https://zenodo.readthedocs.io/



Ref. https://indico.cern.ch/event/533421/contributions/2330179/attachments/1378438/2094268/kumasi2016-practical-exercises-rest-api.pdf

blog zenodo

http://blog.zenodo.org/
Zenodo docs have landed!
by  Krzysztof Nowak on January 23, 2017

wiki zenodo

https://github.com/zenodo/zenodo/wiki/What's-new%3F

YAML Github
Zenodio is a Python package we’re building to interact with Zenodo. For our various doc/technote/publishing projects we want to use YAML files (embedded in a Git repository, for example) to maintain deposition metadata so that the upload process itself can be automated.
The zenodio.metadata sub package provides a Python representation of Zenodo metadata (but not File or Zenodo deposition metadata).
Zenodio is a simple Python interface for getting data into and out of Zenodo, the digital archive developed by CERN. Zenodo is an awesome tool for scientists to archive the products of research, including datasets, codes, and documents. Zenodio adds a layer of mechanization to Zenodo, allowing you to grab metadata about records in a Zenodo collection, or upload new artifacts to Zenodo with a smart Python API.
We’re still designing the upload API, but metadata harvesting is ready to go.
Zenodio is built by SQuaRE for the Large Synoptic Survey Telescope.
https://github.com/lsst-sqre/zenodio/tree/metadata_api
http://zenodio.lsst.io/en/latest/
https://jira.lsstcorp.org/browse/DM-4852

Differences between ORCID and DataCite (DOI) Metadata

THOR is a 30-month project funded by the European Commission under the Horizon 2020 programme. It will establish seamless integration between articles, data, and researchers across the research lifecycle. This will create a wealth of open resources and foster a sustainable international e-infrastructure.

Differences between ORCID and DataCite Metadata
One of the first tasks for DataCite in the European Commission-funded THOR project, which started in June 2015, was to contribute to a comparison of the ORCID and DataCite metadata standards. Together with ORCID, CERN, the British Library and Dryad we looked at how contributors, organizations and artefacts - and the relations between them - are described in the respective metadata schemata, and how they are implemented in two example data repositories, Archaeology Data Service and Dryad Digital Repository. The focus of our work was on identifying major gaps. Our report was finished and made publicly available in September 2015. The key findings are on these topics:
  • Common Approach to Personal Names
  • Standardized Contributor Roles
  • Standardized Relation Types
  • Metadata for Organisations
  • Persistent Identifiers for Projects
  • Harmonization of ORCID and DataCite Metadata

https://project-thor.readme.io/docs/differences-between-orcid-and-datacite-metadata

This document identifies gaps in existing PID infrastructures, with a focus on ORCID and DataCite metadata and links between contributors, organizations and artefacts. What prevents us from establishing interoperability and overcoming barriers between PID platforms for contributors, artefacts and organisations, and research solutions for federated attribution, claiming, publishing and direct data access? It goes on to propose strategies to overcome these gaps:
https://zenodo.org/record/30799#.WIi5DmrNzdQ

Saturday, January 7, 2017

IdHAL; ORCID and my QR code, ResearcherID, arXiv, IdRef, researcher identifiers, and bibliography

Identifier

You can also add URLs of social networks.


You can build your CV with your IdHal

The CV is composed of 3 parts:
- a title and some text,
- the list of publications deposited in HAL,
- metadata extracted from the publications deposited in HAL (disciplines, keywords, publication years, co-authors, journals), from the account (photo), from the idHAL (external identifiers), and from external widgets (Twitter, Facebook, ...).

No import is possible (only documents deposited in HAL are listed).

You can export the displayed publications.

Interoperability, ORCID and ResearcherID

In ORCID: "You haven't added any works, add some now"
you can import/export directly to your ResearcherID