Tom Richardson is BookNet Canada’s Bibliographic Manager and friend to all ONIX producers and users. He will be speaking at Tech Forum about Standards & Certification and What’s in a Date: Best practices regarding dates in the Canadian Bibliographic Standard.
In the mid-’90s, when I first started working with metadata, dBase IV and FileMaker Pro were going to solve all our problems, and Amazon had staff who keyed copy. Seriously. For a brief few months, I had contact names for actual workers at Amazon and they’d even ask me questions. Back in those days, if a retailer wanted to sell products online they took direct responsibility for the copy. Some still do. But even then the responsibility of producing product metadata was starting to shift to publishers and distributors. And by the late ’90s the ONIX standard was in development, using then-cutting-edge XML to support data exchange.
The point of my nostalgia is simply to say that who pays for metadata isn’t written in stone. But don’t get me wrong: I think that the responsibility — I’d argue the “natural” responsibility — for selling products rests with the company creating or representing that product. They know their book best, should know their primary market, and should take the lead in providing retailers the information needed to sell it to consumers. BookNet Canada has always held the position that publishers are responsible for their product’s metadata.
But not all the metadata that retailers — or libraries, other services, or “discovery” in general — need is product-based. My main point here is that book-related metadata creation isn’t a problem restricted to publishers.
Publishers can take a lesson from EDItEUR. They maintain ONIX for Books as a neutral player in metadata. Their responsibility is to listen to publishers, retailers, and a range of industry players worldwide and to ensure that if there’s something that needs to be communicated there’s a sensible way to do it. If you have information to trade about a book’s supply, then you have to name the supplier — but even then there are two ways to do it: by identifier or by name. EDItEUR doesn’t force much, and its guidelines focus on how to communicate clearly and use the standard. The actual communication is open, and any two trading partners — or an entire supply chain — can use ONIX liberally and ignore many, if not most, of those guidelines.
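To make that concrete: in an ONIX 3.0 Supplier composite, the supplier can be given by identifier, by name, or both. A minimal sketch is below — the company name and identifier value are illustrative placeholders, not real data:

```xml
<!-- Sketch of an ONIX 3.0 Supplier composite; values are placeholders -->
<SupplyDetail>
  <Supplier>
    <SupplierRole>01</SupplierRole><!-- Publisher to retailers -->
    <SupplierIdentifier>
      <SupplierIDType>06</SupplierIDType><!-- GLN -->
      <IDValue>1234567890123</IDValue><!-- placeholder value -->
    </SupplierIdentifier>
    <SupplierName>Example Distribution Co.</SupplierName>
  </Supplier>
  <!-- availability, prices, etc. follow -->
</SupplyDetail>
```

Either the identifier or the name alone satisfies the standard; trading partners decide which they actually need.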
I think data producers — and their aggregators — should follow that model. Metadata creation isn’t any one company’s problem and that’s especially true if it’s not product-based.
Publishers demonstrated this in a negative way when the ISTC (International Standard Text Code), the work identifier standard, was famously rejected. Publishers understood what it was and how it would benefit both retailers and discovery in general, but when it came down to it they wanted to know why they should bear the not-insignificant cost of its production and pay to make it easier for a foreign market to link its products to theirs. A publisher doesn’t have the “natural” responsibility for the ISTC. The copyright holder, end users, and discovery tool creators have it. Retailers need it more than publishers do. I’d say publishers have a responsibility to support a persistent universal work identifier if they are asked to; it should be the natural consequence of working with other companies.
ISNI, the International Standard Name Identifier, is another example where publishers are at best a secondary beneficiary while authors and copyright holders would benefit the most. ISNI would link together all content associated with a particular name. Publishers would benefit if their book products are identified as the “best” product associated with that name — superior to the magazine, blog, or social media that might also be linked to it — and if their book data is feeding back into a general discovery process with that name. So why not promote ISNI at the contract level as a bonus you can support in the metadata?
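Supporting it in the metadata is cheap once the identifier exists: ONIX already lets an ISNI ride along with a contributor’s name. A sketch, assuming ONIX 3.0 — the ISNI shown is a placeholder, not a real assignment:

```xml
<!-- Sketch of an ONIX 3.0 Contributor composite carrying an ISNI -->
<Contributor>
  <ContributorRole>A01</ContributorRole><!-- By (author) -->
  <NameIdentifier>
    <NameIDType>16</NameIDType><!-- ISNI, per ONIX codelist 44 -->
    <IDValue>0000000000000000</IDValue><!-- placeholder, not a real ISNI -->
  </NameIdentifier>
  <PersonName>Jane Example</PersonName>
</Contributor>
```

The publisher’s only job here is to store the number and pass it through the feed; the linking benefit accrues downstream.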
Similarly, retailers and publishers would benefit from easier brand name identification, but I don’t understand why the licensing agencies aren’t specifying it as a metadata requirement. Publishers should simply ensure that they can support it and supply it to their retailers in the feed. For that matter, if a retailer has access to a universal identifier — or has developed series information that improves sales and isn’t available in the publisher’s feed — why not supply it back to the publisher?
Quality metadata is defined by a willingness to change: to add a value, improve a data point, and subdivide data to its “natural” level of maximum utility. Everyone should be working together towards quality metadata.
We hope to see you at Tech Forum to hear more about metadata standards and lots of other interesting topics!