Schema.org Déjà vu

The Web has been around for getting on for a couple of decades now, and massive industries have grown up around the magic of making it work for you and your organisation.  Some of it, it has to be said, can be considered snake-oil.  Much of it is the output of some of the best brains on the planet.  Where the Web sits on the hit parade of technological revolutions to influence mankind is oft disputed, but it is definitely up there with fire, steam, electricity, computing, and of course the wheel.  Similar debates rage, and will continue to rage, around the hit parade of web features that will in retrospect have been most influential – pick your favourites: HTTP, XML, REST, Flash, RSS, SVG, the URL, the href, CSS, RDF – the list is a long one.

I have observed a pattern as each of the successful new enhancements to the web has been introduced and then generally adopted.  Firstly there is a disconnect between the proponents of the new approach/technology/feature and the rest of us.  The former split their passions between focusing on the detailed application, rules, and syntax of its use; and broadcasting its worth to the world, not quite understanding why the web masses do not ‘get it’ and adopt it immediately.  This phase is then followed by one of post-hype disillusionment from the creators, especially when others start suggesting simplifications to their baby.  Also at this time, back-room adoption starts among those who find it interesting but are not evangelistic about it.  The real kick for the web comes from those back-room folks who just use this next thing to deliver stuff and solve problems in a better way.  It is the results of their work that the wider world starts to emulate, so that they can keep up with the pack and remain competitive.  Soon this new feature is adopted by the majority, because all the big boys are using it, and it becomes just part of the tool kit.

A great example of this was RSS.  Not a technological leap, but a pragmatic mix of current techniques and technologies combined with some lateral thinking and a group of people agreeing to do it in ‘this way’, then sharing it with the world.  As you will see from the Wikipedia page on RSS, the syntax wars raged in the early days – I remember them well: 0.9, 0.91, 1.0, 1.1, 2.0, 2.01, etc.  I also remember trying, not always with success, to convince people around me to use it, because it was so simple.  Looking back it is difficult to say exactly when it became mainstream, but this line from Wikipedia gives me a clue: “In December 2005, the Microsoft Internet Explorer team and Microsoft Outlook team announced on their blogs that they were adopting the feed icon first used in the Mozilla Firefox browser. In February 2006, Opera Software followed suit.”  From then on, the majority of consumers of RSS were not aware of what they were using, and it became just one of the web technologies you use to get stuff done.

I am now seeing the pattern starting to repeat itself, this time with structured and linked data.  Many, including me, have been evangelising the benefits of web-friendly, structured, linked data for some time now – preaching to a crowd that has been slow in growing, but growing it is.   Serious benefit is now being gained by organisations adopting these techniques and technologies, as our selection of case studies demonstrates.  They are getting on with it, often with our help, using it to deliver stuff.  We haven’t hit the mainstream yet.  For instance, the SEO folks still need to get their heads around the difference between content and data.

Something is stirring around the edge of the Semantic Web/Linked Data community that has the potential to give structured, web-enabled data the kick towards the mainstream that RSS got when Microsoft adopted the RSS logo and all that came with it.   That something is schema.org, an initiative backed by the heavyweights of the search engine world: Google, Yahoo, and Bing.  For the SEO and web developer folks, schema.org offers a simple, attractive proposition – embed some structured data in your HTML and, via things like Google’s Rich Snippets, we will give you a value-added display in our search results.  Result: happy web developers, with their sites getting an improved listing display.  Result: lots of structured data starting to be published by people you would have had an impossible task convincing that publishing structured data on the web was a good idea.
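
To make that proposition concrete, here is a minimal sketch of what such embedding can look like, using schema.org’s microdata syntax.  The Book type and its properties are real schema.org terms; the book details themselves are invented for illustration:

    <!-- Ordinary HTML with schema.org microdata attributes added.
         A crawler can extract the itemtype/itemprop values as
         structured data and use them to build a rich snippet. -->
    <div itemscope itemtype="http://schema.org/Book">
      <h1 itemprop="name">A History of Copenhagen</h1>
      by <span itemprop="author">Anna Jensen</span>,
      published <span itemprop="datePublished">2011</span>
    </div>

The page stays perfectly readable in the browser; only the extra attributes carry the data.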

I was at Semtech in San Francisco in June, just after schema.org was launched, and it caused a bit of a stir.  “They’ve oversimplified the standards that we have been working on for years, dumbing down RDF, diluting the capability, with too small a set of attributes”, etc., etc.  When you get under the skin of schema.org, you see that, with support for RDFa in the shape of RDFa 1.1 Lite, they are not that far from the RDF/Linked Data community.
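
To show how small that gap is, here is the same hypothetical book description recast in RDFa 1.1 Lite – the vocab, typeof, and property attributes map directly onto RDF triples:

    <!-- The same description using RDFa 1.1 Lite attributes.
         An RDFa processor extracts this as RDF triples, with
         schema.org terms as the predicates. -->
    <div vocab="http://schema.org/" typeof="Book">
      <h1 property="name">A History of Copenhagen</h1>
      by <span property="author">Anna Jensen</span>,
      published <span property="datePublished">2011</span>
    </div>

Anyone already publishing RDFa is most of the way to schema.org, and vice versa.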

Schema.org should be welcomed as an enabler for getting loads more structured and linked data on the web.  Is their approach perfect yet? No.  Will it influence the development of Linked Data? Yes.  Will the introduction be messy? Yes.  Is it about more than just rich snippets?  Oh yes.  Do the webmasters care at the moment? No.

If you want a friendly insight into what schema.org is about, I suggest a listen to this month’s Semantic Link podcast, with their guest from Google/schema.org, Ramanathan V. Guha.

Now where have I seen that name before?  Oh yes, back on the Wikipedia RSS page: “The basic idea of restructuring information about websites goes back to as early as 1995, when Ramanathan V. Guha and others in Apple Computer’s Advanced Technology Group developed the Meta Content Framework.”  So it probably isn’t just me who is getting a feeling of déjà vu.

This post was also published on the Talis Consulting Blog

Will Europe’s National Libraries Open Data In An Open Way?

A significant step towards open bibliographic data was made in Copenhagen this week at the 25th anniversary meeting of the Conference of European National Librarians (CENL) hosted by the Royal Library of Denmark. From the CENL announcement:

…the Conference of European National Librarians (CENL), has voted overwhelmingly to support the open licensing of their data. What does that mean in practice? It means that the datasets describing all the millions of books and texts ever published in Europe – the title, author, date, imprint, place of publication and so on, which exists in the vast library catalogues of Europe – will become increasingly accessible for anybody to re-use for whatever purpose they want. The first outcome of the open licence agreement is that the metadata provided by national libraries to Europeana.eu, Europe’s digital library, museum and archive, via the CENL service The European Library, will have a Creative Commons Universal Public Domain Dedication, or CC0 licence. This metadata relates to millions of digitised texts and images coming into Europeana from initiatives that include Google’s mass digitisations of books in the national libraries of the Netherlands and Austria. …it will mean that vast quantities of trustworthy data are available for Linked Open Data developments

There is much to be welcomed here. Firstly, that the vote was overwhelming.  Secondly, that the open licence chosen for releasing this data is Creative Commons CC0, thus enabling reuse for any purpose. You cannot expect such a vote to cover all the detail, but the phrase ‘trustworthy data are available for Linked Open Data developments’ does give rise to some possible concerns for me.  My concern is not that this implies the data will need to be published as Linked Data – that too should be welcomed. My concern comes from some of the library-focused Linked Data conversations, presentations and initiatives I have experienced over the last few months and years. Many in the library community who have worked with Linked Data lean towards using Linked Data techniques to reproduce the very fine detailed structure and terminology of their bibliographic records as a representation of those records in RDF (the Linked Data data format).  Two examples of this come to mind:

  1. The recent release of an RDF representation of the MARC21 elements and vocabularies by MMA – possibly of internal use only to someone transforming a library’s MARC record collection, to identify concepts and entities to then describe as Linked Data.  Mind-numbingly impenetrable for anyone looking for useful data who is not a librarian.
  2. The Europeana Data Model (EDM) – an impressive and elegant Linked Data RDF representation of the internal record structure and process concerns of Europeana.  However, it is again not modelled in a way that makes it easy for those outside the [Europeana] library community to engage with it, understand it, and extract meaning from it.

The fundamental issue I have with the first of these, and other examples, is that their authors have approached this from the direction of wishing to encode their vast collections of bibliographic records as Linked Data.  They would have ended up with a result more open [to the wider world] if they had used the contents of their records as a rich resource from which to build descriptions of the resources they hold.  That way you end up with descriptions of things (books, authors, places, publishers, events, etc.) as against descriptions of records created by libraries – a rough sketch of the difference is given at the end of this post.

Fortunately there is an excellent example of a national library publishing Linked Data which describes the things they hold.   The British Library have published descriptions of 2.6 million items they hold, in the form of the British National Bibliography. I urge those within Europeana and the European national libraries community who will be involved in this opening-up initiative to take a close look at the evolving data model that the BL have shared, to kick-start the conversation on the most appropriate [Linked Data] techniques to apply to bibliographic data.  For more detail see this Overview of the British Library Data Model.

This opening up of data is a great opportunity, which should not be missed, for trusted librarian-curated data to become a core part of the growing web of data.  We must be aware of previous missed opportunities, such as the way MARCXML just slavishly recreated an old structure in a new format.   Otherwise we could end up with what could be characterised, in web integration terms, as a significant open data white elephant.

Nevertheless I am optimistic. With examples such as the British Library BnB backing up this enthusiastic move to open up a vast collection of metadata in a useful way that will stimulate Linked Data development, I have some confidence in a good outcome.

Disclosure: Bibliographic domain experts from the British Library worked with Linked Data experts from the Talis team in the evolution of the BnB data model – something that could be extended and/or repeated with other national and international library organisations.
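
Postscript: to make the records-versus-things distinction concrete, here is a rough, hypothetical sketch in Turtle.  All identifiers and values are invented; the thing-centric half uses the kind of widely understood vocabularies (Dublin Core terms, BIBO, FOAF) that the BL data model draws on:

    @prefix ex:   <http://example.org/marc/> .
    @prefix dct:  <http://purl.org/dc/terms/> .
    @prefix bibo: <http://purl.org/ontology/bibo/> .
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .

    # Record-centric: the subject is the catalogue record, and the
    # properties mirror internal MARC field terminology.
    <http://example.org/records/0123456>
        a ex:MarcRecord ;
        ex:field245a "A History of Copenhagen" ;
        ex:field100a "Jensen, Anna" .

    # Thing-centric: the subjects are the book and its author,
    # described with terms the wider web of data already understands.
    <http://example.org/books/a-history-of-copenhagen>
        a bibo:Book ;
        dct:title "A History of Copenhagen" ;
        dct:creator <http://example.org/people/anna-jensen> .

    <http://example.org/people/anna-jensen>
        a foaf:Person ;
        foaf:name "Anna Jensen" .

Both say much the same thing, but only the second is readily consumed, linked to, and understood by people and applications outside the library world.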

This post was also published on the Talis Consulting Blog