The latest release of Schema.org (v3.3, August 2017) included enhancements proposed by the Tourism Structured Web Data Community Group.

Although fairly small, these enhancements have significantly improved the capability for describing tourist attractions, and will hopefully enable more tourist discoveries like the one pictured here.

The TouristAttraction Type

The TouristAttraction type has been around, as a subtype of Place, since the earliest days back in 2011.  However, it did not have any specific properties of its own, or examples of its use.  For those interested in promoting and sharing data to help with tourism and the discovery of tourist attractions, the situation was a bit limiting.

As a result of the group's efforts, TouristAttraction now has two properties — availableLanguage (a language someone may use with or at the item, service, or place) and touristType (the type(s) of tourist for which an attraction is suitable, e.g. children, visitors from a particular country, etc.).   So now we can say, in data, that an attraction is, for example, suitable for Spanish- and French-speaking tourists who are interested in wine.
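As a sketch, that statement could be expressed in JSON-LD along these lines (the winery, its name, and the touristType value are invented for illustration):

```json
{
  "@context": "http://schema.org",
  "@type": ["Winery", "TouristAttraction"],
  "name": "An Example Winery",
  "availableLanguage": ["Spanish", "French"],
  "touristType": "Wine lovers"
}
```

Note that touristType accepts plain Text (as here) or a more structured Audience value.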

Application and Examples

At first glance, the addition of a couple of general-purpose properties does not seem much of an advancement.  However, the set of examples that the Group has provided demonstrates the power and flexibility of this enhanced type for use with tourism.

The principle behind their approach, as demonstrated in the examples, is that almost anything can be of interest to a tourist — a beach, a mountain, a coastal highway, an ancient building, a work of art, a war cemetery, a winery, an amusement park — the list is endless.  It was soon clear that it would not be practical to add tourist-relevant properties to all such relevant types within Schema.org.

Multi-Typed Entities (MTEs)
The Multi-Typed Entity is a powerful feature of Schema.org which has often caused confusion and has a bit of a reputation for being complex.  When describing a thing (an entity in data speak), Schema.org gives you the capability to indicate that it is of more than one type.

For example, you could be describing a Book, with an author, subject, isbn, etc., and in order to represent a physical example of that book you can also say it is a Product, with weight, width, purchaseDate, itemCondition, etc.  In Schema.org you simply achieve that by indicating in your markup that the thing you are describing is both a Book and a Product.
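A minimal JSON-LD sketch of such a dual-typed book (the title, author, and values here are invented for illustration):

```json
{
  "@context": "http://schema.org",
  "@type": ["Book", "Product"],
  "name": "An Example Book",
  "author": { "@type": "Person", "name": "A. N. Author" },
  "isbn": "978-0-00-000000-0",
  "weight": { "@type": "QuantitativeValue", "value": "450", "unitCode": "GRM" },
  "itemCondition": "http://schema.org/UsedCondition"
}
```

Consumers reading this markup see a single entity that carries both the bibliographic properties of Book and the commercial properties of Product.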

Utilising the MTE principle for tourism, you would first describe the thing itself, using the appropriate type(s) (Mountain, Church, Beach, AmusementPark, etc.).  Having done that, you then add the TouristAttraction type.

In Microdata it would look like this:

  <div itemtype="http://schema.org/Cemetery" itemscope>
    <link itemprop="additionalType" href="http://schema.org/TouristAttraction" />
    <meta itemprop="name" content="Villers-Bretonneux Australian National Memorial" />
  </div>

And in JSON-LD:

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": ["Cemetery", "TouristAttraction"],
  "name": "Villers-Bretonneux Australian National Memorial"
}
</script>

If there is no specific type for the thing you are describing, you would just identify it as a TouristAttraction, using the properties inherited from Place to describe it as best you can.

Following the ‘anything can be a tourist attraction’ principle, it is perfectly possible for a Person to also be a TouristAttraction.  Take the human statue street performer who dresses up as the Statue of Liberty and stands in Times Square, New York, on most days (or at least seems to be there every time I visit).

Using Schema.org, he or she can be described as a Person with name, email, jobTitle (Human Statue of Liberty), etc., and in addition as a TouristAttraction.  Sharing this information in Schema.org markup on a page describing this performer would possibly increase their discoverability to those looking for tourist things in New York.
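A sketch of such a description in JSON-LD (the name is a placeholder; a real description would add image, address, and so on):

```json
{
  "@context": "http://schema.org",
  "@type": ["Person", "TouristAttraction"],
  "name": "An Example Performer",
  "jobTitle": "Human Statue of Liberty",
  "touristType": "Sightseers"
}
```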

Public Access

In addition to the properties and examples for TouristAttraction, the Group also proposed a new publicAccess property.  It was felt that this had wider benefit than just for tourism, and hence it was added to Place, and so is inherited by TouristAttraction.  publicAccess is a boolean property enabling you to state whether or not a Place is accessible to the public.  There is no assumed default value for this property if it is omitted from markup.
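For example, to state that a (hypothetical) memorial garden is open to the public:

```json
{
  "@context": "http://schema.org",
  "@type": ["Park", "TouristAttraction"],
  "name": "An Example Memorial Garden",
  "publicAccess": true
}
```

Because publicAccess has no assumed default, omitting it says nothing either way; only an explicit true or false carries meaning.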


(Tourist image by Lain
Statue image by Bruce Crummy)




A theme of my last few years has been enabling the increased discoverability of Cultural Heritage resources by making the metadata about them more open and consumable.

Much of this work has been at the libraries end of the sector, but I have always had an eye on the broad Libraries, Archives, and Museums world — not forgetting Galleries, of course.

Two years ago, at the LODLAM Summit 2015, I ran a session to explore whether it would be possible to duplicate in some way the efforts of the Schema Bib Extend W3C Community Group, which proposed and introduced an extension and enhancements to the Schema.org vocabulary to improve its capability for describing bibliographic resources — but this time for archives: physical, digital, and web.

Interest was sufficient for me to set up and chair a new W3C Community Group, Schema Architypes. The main activity of the group has been the creation and discussion of a straw-man proposal for adding new types to the vocabulary.

Not least, the discussion has focused on how the concepts from the world of archives (collections, fonds, etc.) can be represented by taking advantage of the many modelling patterns and terms already established in that generic vocabulary, and what few things would need to be added to expose archive metadata to aid discovery.

Coming up next week is LODLAM Summit 2017, where I have proposed a session to further the discussion on the proposal.

So why am I now suggesting that there may be an opportunity for the discovery of archives and their resources?

In web terms, for something to be discoverable it, or a description of it, needs to be visible on a web page somewhere. To take advantage of the current structured web data revolution, driven by the search engines and the knowledge graphs they are building, those pages should contain structured metadata in the form of Schema.org markup.

Through initiatives such as ArchivesSpace and its application, and ArcLight, it is clear that many in the world of archives have been focused on web-based management, search, and delivery views of archives and the resources and references they hold and describe. As these mature, it is clear that the need for visibility on the web is starting to be addressed.

So archives are now in a great place to grab the opportunity to take advantage of the benefits of Schema.org to aid discovery of their archives and what they contain. At least, with these projects, they have the pages on which to embed that structured web data, once a consensus around the proposals from the Schema Architypes Group has formed.

I call out to those involved with the practical application of systems for the management, searching, and delivery of archives to at least take a look at the work of the group, and possibly engage on a practical basis, exploring the potential and challenges of implementing Schema.org.

So if you want to understand more behind this opportunity, and how you might get involved, either join the W3C Group or contact me directly.


*Image acknowledgement to The Oberlin College Archives


I spend a significant amount of time working with Google folks, especially Dan Brickley, and others on the supporting software, vocabulary contents, and application of Schema.org.  So it is with great pleasure, and a certain amount of relief, that I share the announcement of the release of Schema.org 3.1.

That announcement lists several improvements, enhancements and additions to the vocabulary that appeared in versions 3.0 & 3.1. These include:

  • Health Terms – A significant reorganisation of the extensive collection of medical/health terms, introduced back in 2012, into the ‘health-lifesci’ extension, which now contains 99 Types, 179 Properties and 149 Enumeration values.
  • Finance Terms – Following an initiative and work by the Financial Industry Business Ontology (FIBO) project (which I have the pleasure to be part of), in support of the W3C Financial Industry Business Ontology Community Group, several terms have been added to improve the capability for describing things such as banks, bank accounts, financial products such as loans, and monetary amounts.
  • Spatial and Temporal and Datasets – CreativeWork now includes spatialCoverage and temporalCoverage, which I know my cultural heritage colleagues and clients will find very useful.  Like many enhancements in the community, this work came out of a parallel interest — in this case datasets, in which the Dataset type has received some attention.
  • Hotels and Accommodation – Substantial new vocabulary for describing hotels and accommodation has been added, and documented.
  • Pending Extension – Version 3.0 introduced a special extension called “pending“, which provides a place for newly proposed terms to be documented, tested and revised.  The anticipation is that this area will be updated with proposals relatively frequently, in between formal releases.
  • How We Work – A HowWeWork document has been added to the site. This comprehensive document details the many aspects of the operation of the community, the site, the vocabulary, etc. – a useful way in for casual users through to those who want to immerse themselves in the vocabulary, its use, and its development.

For fuller details on what is in 3.1 and other releases, check out the Releases document.

Hidden Gems

Often working in the depths of the vocabulary, and the site that supports it, I get up close to improvements that are not obvious on the surface.  Those who immerse themselves may find some of these interesting, so I would like to share them:

  • Snappy Performance – The Schema.org site, a Python app hosted on the Google App Engine, is, shall we say, a very popular site.  Over the last 3-4 releases I have been working on taking full advantage of multi-threaded, multi-instance, memcache, and shared datastore capabilities. Add in page caching improvements plus an implementation of ETags, and we can see improved site performance which can best be described as snappiness. The only downsides are that to see a new version update you sometimes have to hard-reload your browser page, and that I have learnt far more about these technologies than I ever thought I would need!
  • Data Downloads – We are often asked for a copy of the latest version of the vocabulary so that people can examine it, develop from it, build tools on it, or whatever takes their fancy.  This has been partially possible in the past, but now we have introduced (on a developers page we hope to expand with other useful stuff in the future – suggestions welcome) a download area for vocabulary definition files.  From here you can download, in your favourite format (Triples, Quads, JSON-LD, Turtle), files containing the core vocabulary, individual extensions, or the whole vocabulary.  (Tip: the page displays the link to the file that will always return the latest version.)
  • Data Model Documentation – Version 3.1 introduced updated contents for the Data Model documentation page, especially in the area of conformance.  I know from working with colleagues and clients that it is sometimes difficult to get your head around Schema.org’s use of Multi-Typed Entities (MTEs) and the ability to use a Text, a URL, or a Role for any property value.  It is good to now have somewhere to point people when they question such things.
  • Markdown – This is a great addition for those enhancing, developing, and proposing updates to the vocabulary.  The rdfs:comment section of term definitions is now passed through a Markdown processor.  This means that any formatting or links to be embedded in a term description do not have to be escaped with horrible coding such as &amp; and &gt;.  So for example a link can be input as [The Link](http://example.com/) and italic text as *italic*.  The processor also supports WikiLinks-style links, which enable direct linking to a page within the site — so [[CreativeWork]] will result in the user being taken directly to the CreativeWork page via a correctly formatted link.   This makes the correct formatting of type descriptions a much nicer experience, as it does my debugging of the definition files.

I could go on, but won’t.  If you are new to Schema.org, or very familiar with it, I suggest you take a look.


Marketing Hype! I hear you thinking – well, at least I didn’t use the tired old ‘Next Generation’ label.

Let me explain this fundamental component of what I am seeing, potentially, as a New Web — and what I mean by New Web.

This fundamental component, you might be surprised to learn, is a vocabulary – Schema.org.  But let me first set the context by explaining my thoughts on this New Web.

Having once been considered an expert on Web 2.0 (I hasten to add, by others, not myself) I know how dangerous it can be to attach labels to things.  It tends to spawn screenfuls of passionate opinions on the relevance of the name, the date of the revolution, and over-detailed analysis of isolated parts of what is a general movement.   I know I am on dangerous ground here!

To my mind something is new when it feels different.  The Internet felt different when the Web (aka HTTP + HTML + browsers) arrived.  The Web felt different (Web 2.0?) when it became more immersive (write as well as read) and visually we stopped trying to emulate in a graphical style what we saw on character terminals. Oh, and yes we started to round our corners. 

There have been many times over the last few years when it felt new – when it suddenly arrived in our pockets (the mobile web) – when the inner thoughts, and eating habits, of more friends than you ever remember meeting became of apparent headline importance (the social web) – when [the contents of] the web broke out of the boundaries of the browser and appeared embedded in every app, TV show, and voice activated device.

The feeling different phase I think we are going through at the moment, like previous times, is building on what went before.  It is exemplified by information [data] breaking out of the boundaries of our web sites and appearing where it is useful for the user. 

We are seeing the tip of this iceberg in the search engine Knowledge Panels, answer boxes, and rich snippets.   The effect of this is that often your potential user can get what they need without having to find and visit your site – answering questions such as: what is the customer service phone number for an organisation; is the local branch open at the moment; give me driving directions to it; what is available and on offer.  Increasingly these interactions can occur without the user even being aware they are using the web – “Siri! Where is my nearest library?”
A great way to build relationships with your customers — but a new and interesting challenge for those trying to measure the impact of your web site.

So, what is fundamental to this New Web?

There are several things – HTTP, the light-weight protocol designed to transfer text, links, and latterly data across an internet previously used to specific protocols for specific purposes – HTML, that open, standard, easily copied, light-weight, extensible generic format for describing web pages that all browsers can understand – Microdata, RDFa, JSON, JSON-LD – open standards for easily embedding data into HTML – RDF, an open data format for describing things of any sort, in the form of triples, using shared vocabularies.  Building upon those is Schema.org – an open, [de facto] standard, generic vocabulary for describing things in most areas of interest.

Why is one vocabulary fundamental when there are so many others to choose from? Check out the 500+ referenced on the Linked Open Vocabularies (LOV) site. Schema.org, however, differs from most of the others in a few key respects:

  • Size and scope – its current 642 Types and 992 Properties make it significantly larger, covering far more domains of interest, than most others.  This means that if you are looking to describe something, you are highly likely to find enough to at least start.  Despite its size, it is as yet far from capable of describing everything on, or off, the planet.
  • Adoption – it is estimated to be in use on over 12 million sites.  A sample of 10 billion pages showed over 30% containing Schema.org markup.  Check out this article for more detail: Evolution of Structured Data on the Web – Big data makes common schemas even more necessary, by Guha, Brickley and Macbeth.
  • Evolution – it is under continuous evolutionary development and extension, driven and guided by an open community under the wing of the W3C and accessible in a GitHub repository.
  • Flexibility – from the beginning Schema.org was designed to be used in a choice of your favourite serialisation – Microdata, RDFa, or JSON-LD – with the flexibility of allowing values to default to text if you have not got a URI available.
  • Consumers – the major search engines Google, Bing, Yahoo!, and Yandex not only back the open initiative behind Schema.org but actively search out Schema.org markup to add to their Knowledge Graphs when crawling your sites.
  • Guidance – if you search for guidance on supplying structured data to those major search engines, you are soon supplied with recommendations and examples for using Schema.org, such as this from Google.  They even supply testing tools for you to validate your markup.

With this support and adoption, the Schema.org initiative has become self-fulfilling.  If your objective is to share or market structured data about your site, organisation, resources, and/or products with the wider world, it would be difficult to come up with a good reason not to use Schema.org.

Is it a fully ontologically correct semantic web vocabulary? Although you can see many semantic web and linked data principles within it, no it is not.  That is not its objective. It is a pragmatic compromise between such things, and the general needs of webmasters with ambitions to have their resources become an authoritative part of the global knowledge graphs, that are emerging as key to the future of the development of search engines and the web they inhabit.

Note that I question whether Schema.org is a fundamental component of what I am feeling is a New Web. It is not the fundamental component, but one of many that over time will become just the way we do things.


In the previous post in this series, Pt1: The Bits and Pieces, I stepped through the process of obtaining your own fork of the Schema.org GitHub repository; working on it locally; uploading your version for sharing; and proposing those changes to the community in the form of a GitHub Pull Request.

Having covered the working environment in that post, in this one I intend to describe some of the important files that make up Schema.org, and how you can work with them to create or update examples and term definitions within your local forked version, in preparation for proposing them in a Pull Request.

The File Structure
If you inspect the repository you will see a simple directory structure.  At the top level you will find a few files sporting a .py suffix.  These contain the Python application code to run the site you see at schema.org.  They load the configuration files and build an in-memory version of the vocabulary that is used to build the html pages containing the definitions of the terms, schema listings, examples displays, etc.  They are joined by a file named app.yaml, which contains the configuration used by the Google App Engine to run that code.

At this level there are some directories containing supporting files: docs & templates contain static content for some pages; tests & scripts are used in the building and testing of the site; data contains the files that define the vocabulary, its extensions, and the examples used to demonstrate its use.

The Data Files
The data directory itself contains various files and directories.  schema.rdfa is the most important file; it contains the core definitions for the majority of the vocabulary.  Although, most of the time, you will see schema.rdfa as the only file with a .rdfa suffix in the data directory, the application will look for and load any .rdfa files it finds there.  This is a very useful feature when working on a local version – you can keep your enhancements together, only merging them into the main schema.rdfa file when ready to propose them.

Also in the data directory you will find an examples.txt file and several others ending with -examples.txt.  These contain the examples used on the term pages; the application loads all of them.

Amongst the directories in data, there are a couple of important ones.  releases contains snapshots of versions of the vocabulary from version 2.0 onwards.  The directory named ext contains the files that define the vocabulary extensions and the examples that relate to them.  Currently you will find auto and bib directories within ext, corresponding to the extensions currently supported.  The format within these directories follows the basic pattern of the data directory – one or more .rdfa files containing the term definitions, and -examples.txt files containing relevant examples.

Getting to grips with the RDFa

Enough preparation – let’s get stuck into some vocabulary!

Take your favourite text/code editing application and open up schema.rdfa. You will notice two things – it is large [well over 12,500 lines!], and it is in the format of an html file.  This second attribute makes it easy for non-technical viewing – you can open it with a browser.

Once you get past a bit of CSS formatting information and a brief introduction text, you arrive [about 35 lines down] at the first couple of definitions – for Thing and CreativeWork.

The Anatomy of a Type Definition
Standard RDFa (RDF in attributes) html formatting is used to define each term.  A vocabulary Type is defined as an RDFa marked-up <div> element, with its attributes contained in marked-up <span> elements.

The Thing Type definition:
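
Reconstructed here from the description that follows (check schema.rdfa itself for the exact current form), the definition looks approximately like this:

```html
<div typeof="rdfs:Class" resource="http://schema.org/Thing">
  <span class="h" property="rdfs:label">Thing</span>
  <span property="rdfs:comment">The most generic type of item.</span>
</div>
```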

The attributes of the <div> element indicate that this is the definition of a Type (typeof="rdfs:Class") and give its canonical identifier (resource="http://schema.org/Thing").  The <span> elements fill in the details – it has a label (rdfs:label) of ‘Thing‘ and a descriptive comment (rdfs:comment) of ‘The most generic type of item‘.  There is one formatting addition to the <span> containing the label: the class="h" is there to make the labels stand out when viewing in a browser – it has no direct relevance to the structure of the vocabulary.

The CreativeWork Type definition
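
An approximate reconstruction (the comment text and the href values shown are indicative, not copied verbatim from the file):

```html
<div typeof="rdfs:Class" resource="http://schema.org/CreativeWork">
  <span class="h" property="rdfs:label">CreativeWork</span>
  <span property="rdfs:comment">The most generic kind of creative work, including books, movies, photographs, software programs, etc.</span>
  <span>Subclass of: <a property="rdfs:subClassOf" href="http://schema.org/Thing">Thing</a></span>
  <span>Source: <a property="dc:source" href="http://www.w3.org/wiki/WebSchemas/SchemaDotOrgSources">rNews</a></span>
</div>
```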

Inspecting the CreativeWork definition reveals a few other attributes of a Type defined in <span> elements.  The rdfs:subClassOf property, with the associated href on the <a> element, indicates that CreativeWork is a sub-type of Thing.

Finally, there is the dc:source property and its associated href value.  This has no structural impact on the vocabulary; its purpose is to acknowledge and reference the source of the inspiration for the term.  It is this reference that results in the display of a paragraph under the Acknowledgements section of a term page.

Defining Properties
The properties that can be used with a Type are defined in a very similar way to the Types themselves.

The name Property definition:
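
Again reconstructed from the description that follows, the definition looks approximately like this:

```html
<div typeof="rdf:Property" resource="http://schema.org/name">
  <span class="h" property="rdfs:label">name</span>
  <span property="rdfs:comment">The name of the item.</span>
  <span>Domain: <a property="http://schema.org/domainIncludes" href="http://schema.org/Thing">Thing</a></span>
  <span>Range: <a property="http://schema.org/rangeIncludes" href="http://schema.org/Text">Text</a></span>
</div>
```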

The attributes of the <div> element indicate that this is the definition of a Property (typeof="rdf:Property") and give its canonical identifier (resource="http://schema.org/name").  As with Types, the <span> elements fill in the details.

Properties have two specific <span> elements to define the domain and range of a property.  If these concepts are new to you, they are basically simple.  The Type(s) defined as being in the domain of a property are those for which the property is a valid attribute.  The Type(s) defined as being in the range of a property are those that are the expected values for that property.  So, inspecting the above name example, we can see that name is a valid property of the Thing Type with an expected value type of Text.  Also specific to property definitions is rdfs:subPropertyOf, which defines that one property is a sub-property of another.  For html/RDFa format reasons this is defined using a link element thus: <link property="rdfs:subPropertyOf" href="" />.

Those used to defining other RDF vocabularies may question the use of domainIncludes and rangeIncludes to define these relationships. This is a pragmatic approach to producing a flexible data model for the web.  For a more in-depth explanation I refer you to the Data Model documentation.

This is not an exhaustive tutorial in editing the defining RDFa, but hopefully enough to get you going!


Making Examples

One of the most powerful features of the Schema.org documentation is the Examples section on most of the term pages.  These provide markup examples for most of the terms in the vocabulary that can be used and built upon by those adding data to their web pages.  These examples represent how the html of a page, or page section, may be marked up.  To set context, the examples are provided in several serialisations – basic html, html plus Microdata, html plus RDFa, and JSON-LD.  As the objective is to aid the understanding of how Schema.org may be used, it is usual to provide simple, basic html formatting in the examples.

Examples in File
As described earlier, the sources for examples are held in files with a -examples.txt suffix, stored in the data directory or in individual extension directories.

One or more examples per file are defined in a very simplistic format.

An example begins in the file with a line that starts with TYPES:, such as this:
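
A hypothetical example (the #museum-1 identifier is invented; the real files use identifiers of this general shape):

```
TYPES: #museum-1 Museum, TouristAttraction
```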

The example has a unique identifier, prefixed with a # character; there should be only one of these per example.  These identifiers are intended for future feedback mechanisms and as such are not particularly controlled – I recommend you create your own when creating your examples.  Next comes a comma-separated list of term names.  Adding a term to this list will result in the example appearing on the page for that term.  This is true for both Types and Properties.

Next come four sections, each preceded by a line containing a single label, in the following order: PRE-MARKUP:, MICRODATA:, RDFA:, JSON:.  Each section ends when the next label line, or the end of the file, is reached.  The contents of each section of the example are then inserted into the appropriate tabbed area on the term page.  The process that does this is not a sophisticated one; there is no error or syntax checking involved – if you want to insert the text of the Gettysburg Address as your RDFa example, it will let you do it.
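Put together, a minimal (and entirely invented) example file entry would look something like this:

```
TYPES: #museum-1 Museum, TouristAttraction

PRE-MARKUP:
<div>
  <h1>The Example Museum</h1>
</div>
MICRODATA:
<div itemscope itemtype="http://schema.org/Museum">
  <h1 itemprop="name">The Example Museum</h1>
</div>
RDFA:
<div vocab="http://schema.org/" typeof="Museum">
  <h1 property="name">The Example Museum</h1>
</div>
JSON:
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Museum",
  "name": "The Example Museum"
}
</script>
```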

I am not going to provide tutorials for html, Microdata, RDFa, or JSON-LD here – there are a few of those about.  I will, however, recommend a tool I use to convert between these formats when creating examples.  RDF Translator is a simple online tool that will validate and translate between RDFa, Microdata, RDF/XML, N3, N-Triples, and JSON-LD.  A suggestion, to make your examples as informative as possible: when converting between formats, especially when converting to JSON-LD, most conversion tools reorder the statements. It is worth investing some time in ensuring that the markup order in your example is consistent across all serialisations.

Hopefully this post has cleared away some of the mystery of how Schema.org is structured and managed. If you have proposals in mind to enhance and extend the vocabulary or examples, have a go – see if they make sense in a version on your own system, then suggest them to the community on GitHub.

In my next post I will look more at extensions, hosted and external, and how you work with those, including some hints on choosing where to propose changes – in the core vocabulary, in a hosted extension, or in an external extension.


A bit of a profound question – triggered by a guest post on Museums Computer Group by Nick Poole, CEO of The Collections Trust, about Culture Grid and an overview of recent announcements about it.

Broadly the changes are that:

  • The Culture Grid closed to ‘new accessions’ (i.e. new collections of metadata) on the 30th April
  • The existing index and API will continue to operate in order to ensure legacy support
  • Museums, galleries, libraries and archives wishing to contribute material to Europeana can still do so via the ‘dark aggregator’, which the Collections Trust will continue to fund
  • Interested parties are invited to investigate using the Europeana Connection Kit to automate the batch-submission of records into Europeana

The reasons he gave for the ending of this aggregation service are enlightening for all engaged with or thinking about data aggregation in the library, museum, and archives sectors.

Throughout its history, the Culture Grid has been tough going. Looking back over the past 7 years, I think there are 3 primary and connected reasons for this:

  • The value proposition for aggregation doesn’t stack up in terms that appeal to museums, libraries and archives. The investment of time and effort required to participate in platforms like the Culture Grid isn’t matched by an equal return on that investment in terms of profile, audience, visits or political benefit. Why would you spend 4 days tidying up your collections information so that you can give it to someone else to put on their website? Where’s the kudos, increased visitor numbers or financial return?
  • Museum data (and to a lesser extent library and archive data) is non-standard, largely unstructured and dependent on complex relations. In the 7 years of running the Culture Grid, we have yet to find a single museum whose data conforms to its own published standard, with the result that every single data source has required a minimum of 3-5 days and frequently much longer to prepare for aggregation. This has been particularly salutary in that it comes after 17 years of the SPECTRUM standard providing, in theory at least, a rich common data standard for museums;
  • Metadata is incidental. After many years of pump-priming applications which seek to make use of museum metadata it is increasingly clear that metadata is the salt and pepper on the table, not the main meal. It serves a variety of use cases, but none of them is ‘proper’ as a cultural experience in its own right. The most ‘real’ value proposition for metadata is in powering additional services like related search & context-rich browsing.

The first two of these issues represent a fundamental challenge for anyone aiming to promote aggregation. Countering them requires a huge upfront investment in user support and promotion, quality control, training and standards development.

The 3rd is the killer though – countering these investment challenges would be possible if doing so were to lead directly to rich end-user experiences. But they don’t. Instead, you have to spend a huge amount of time, effort and money to deliver something which the vast majority of users essentially regard as background texture.

As an old friend of mine would depressingly say – Makes you feel like packing up your tent and going home!

Interestingly, earlier in the post Nick gives us an insight into the purpose of Culture Grid:

… we created the Culture Grid with the aim of opening up digital collections for discovery and use …

That basic purpose is still very valid for both physical and digital collections of all types.  The what [helping people find, discover, view and use cultural resources] is as valid as it has ever been.  It is the how [aggregating metadata and building shared discovery interfaces and landing pages for it] that has been too difficult to justify continuing in Culture Grid’s case.

In my recent presentations to library audiences I have been asking a simple question: “Why do we catalogue?”  Sometimes immediately, sometimes after some embarrassed shuffling of feet, I inevitably get the answer “So we can find stuff!“.  In libraries, archives, and museums, helping people find the stuff we have is core to what we do – all the other things we do are a little pointless if people can’t find, or even be aware of, what we have.

If you are hoping your resources will be found they have to be referenced where people are looking.  Where are they looking?

It is exceedingly likely they are not looking in your aggregated discovery interface, or your local library, archive or museum interface either.  Take a look at the chart detailing the discovery starting point for college students and others.  Starting in a search engine is up in the high eighty percents, with things like library web sites and other targeted sources only just making it over the 1% hurdle to get on the chart.  We have known about this for some time – the chart comes from an OCLC report, ‘College Students’ Perceptions of Libraries and Information Resources’, published in 2005.  I would love to see a similar report from recent times; it would have to include elements such as Siri, Cortana, and other discovery tools built into our mobile devices, which of course are powered by the search engines.  It makes me wonder how few cultural heritage specific sources would actually make that 1% cut today.

Our potential users are in the search engines in one way or another; however, in the vast majority of cases our [cultural heritage] resources are not there for them to discover.

Culture Grid, I would suggest, is probably not the only organisation with an ‘aggregate for discovery’ reason for its existence that may be struggling to stay relevant, or even to stay in existence.

You may well ask about OCLC, with its iconic WorldCat.org discovery interface. It is a bit simplistic to say that its 320 million plus bibliographic records are in WorldCat only for people to search and discover through the user interface.  Those records also underpin many of the services, such as cooperative cataloguing, record supply, inter-library loan, and general library back office tasks, that OCLC members and partners benefit from.  Also, for many years WorldCat has been at the heart of syndication partnerships supplying data to prominent organisations, including Google, that help them reference resources within WorldCat, which in turn, via find-in-a-library capability, lead to clicks onwards to individual libraries. [Declaration: OCLC is the company name on my current salary check]   Nevertheless, even though WorldCat has a broad spectrum of objectives, it is not totally immune from the influences that are troubling the likes of Culture Grid.  In fact they are one of the web trends that have been driving the Linked Data and Schema.org efforts from the WorldCat team, but more of that later.

How do we get our resources visible in the search engines then?  By telling the search engines what we [individual organisations] have. We do that by sharing a relevant view of our metadata about our resources, not necessarily all of it, in a form that the search engines can easily consume. Basically this means sharing data embedded in your web pages, marked up using the Schema.org vocabulary. To see how this works, we need look no further than the rest of the web – commerce, news, entertainment, etc.  There are already millions of organisations, measured by domains, that share structured data in their web pages with the search engines using the Schema.org vocabulary.  This data is being used to direct users with more confidence directly to a site, and is contributing to the global web of data.

There used to be a time when people in the commercial world complained of always ending up being directed to shopping [aggregation] sites instead of directly to where they could buy the TV or washing machine they were looking for.  Today you are far more likely to be given some options in the search engine that link you directly to the retailer.  I believe this is symptomatic of the disintermediation of the aggregators by individual syndication of metadata from those retailers.

Can these lessons be carried through to the cultural heritage sector – of course they can.  This is where there might be a bit of light at the end of the tunnel for those behind the aggregations such as Culture Grid.  Not for their continuation as aggregation/discovery sites, but as facilitators for the individual contributors.  This stuff, when you first get into it, is not simple, and many organisations do not have the time and resources to understand how to share data about their resources with the web.  The technology itself is comparatively simple, in web terms; it is the transition and implementation that many may need help with.

Schema.org is not the perfect solution to describing resources; it is not designed to be. It is there to describe them sufficiently for them to be found on the web. Nevertheless it is also being evolved by community groups to enhance its capabilities. Through my work with the Schema Bib Extend W3C Community Group, enhancements to Schema.org to enable better description of bibliographic resources have been successfully proposed and adopted, and this work is continuing towards a bibliographic extension.  There is obvious potential for other communities to help evolve and extend Schema to better represent their particular resources – archives for example. I would be happy to talk with others who want insights into how they may do this for their benefit.

Schema.org is not a replacement for our rich common data standards, such as MARC for libraries and SPECTRUM for museums as Nick describes. Those serve purposes beyond sharing information with the wider world, and should continue to be used for those purposes whilst relevant. However we cannot expect the rest of the world to get its head around our internal vocabularies and formats in order to point people at our resources. It needs to be a compromise. We can continue to use what is relevant in our own sectors whilst sharing data so that our resources can be discovered and then explored further.

So to return to the question I posed – Is There Still a Case for Cultural Heritage Data Aggregation? – If the aggregation is purely for the purpose of supporting discovery, I think the answer is a simple no.  If it has broader purpose, such as for WorldCat, it is not as clear cut.

I do believe, nevertheless, that many of the people behind the aggregations are in the ideal place to help facilitate the eventual goal of making cultural heritage resources easily discoverable – with some creative thinking, adoption of ‘web’ techniques, technologies and approaches to provide facilitation services, and a review of what their real goals are [which may not include running a search interface]. I believe we are moving into an era where shared authoritative sources of easily consumable data could make our resources more visible than we previously could have hoped.

Are there any black clouds on this hopeful horizon?  Yes, there is one, in the shape of traditional cultural heritage technology conservatism.  The tendency to assume that our vocabulary or ontology is the only way to describe our resources, coupled with a reticence to be seen to engage with the commercial discovery world, could still hold back the potential.

If you are an individual library, archive, or museum scratching your head about how to get your resources visible in Google, and you do not have the in-house ability to react, try talking within the communities around and behind the aggregation services you already know.  They all should be learning, and a problem shared is more easily solved.  None of this is rocket science, but trying something new is often better as a group.

Comment   or   Contact us

About a month ago Version 2.0 of the Schema.org vocabulary hit the streets.

This update includes loads of tweaks, additions and fixes that can be found in the release information.  The automotive folks have got new vocabulary for describing Cars, including useful properties such as numberOfAirbags, fuelEfficiency, and knownVehicleDamages. The new property mainEntityOfPage (and its inverse, mainEntity) provides the ability to tell the search engine crawlers which thing a web page is really about.  With the new type ScreeningEvent to support movie/video screenings, and a gtin12 property for Product, amongst others, there is much useful stuff in there.

But does this warrant the version number clicking over from 1.xx to 2.0?

These new types and properties are only the tip of the 2.0 iceberg.  There is a heck of a lot of other stuff going on in this release apart from these additions.  Some of it is in the vocabulary itself; some of it is in the potential, documentation, supporting software, and organisational processes around it.

Sticking with the vocabulary for the moment, there has been a bit of cleanup around property names. As the vocabulary has grown organically since its release in 2011, inconsistencies and conflicts between different proposals have been introduced, so part of the 2.0 effort has included some rationalisation.  For instance the Code type is being superseded by SoftwareSourceCode – the term code has many different meanings, many of which have nothing to do with software; surface has been superseded by artworkSurface, and area is being superseded by serviceArea, for similar reasons. Check out the release information for full details.  If you are using any of the superseded terms there is no need to panic, as the original terms are still valid, but with updated descriptions to indicate that they have been superseded.  However you are encouraged to move towards the updated terminology as convenient.  The question of what is in which version brings me to an enhancement to the supporting documentation: starting with Version 2.0, a snapshot view of the full vocabulary will be published for each release, so if you want to refer to a term at a particular version you now can.

How often is Schema being used? – is a question often asked. A new feature has been introduced to give you some indication.  Check out the description of one of the newly introduced properties, mainEntityOfPage, and you will see the following: ‘Usage: Fewer than 10 domains‘.  Unsurprisingly for a newly introduced property, there is virtually no usage of it yet.  If you look at the description for the type this term is used with, CreativeWork, you will see ‘Usage: Between 250,000 and 500,000 domains‘.  Not a direct answer to the question, but a good and useful indication of the popularity of a particular term across the web.

In the release information you will find the following cryptic reference: ‘Fix to #429: Implementation of new extension system.’

This refers to the introduction of the functionality, on the Schema.org site, to host extensions to the core vocabulary.  The motivation for this new approach to extending Schema.org is explained thus: Schema.org provides a core, basic vocabulary for describing the kind of entities the most common web applications need. There is often a need for more specialized and/or deeper vocabularies, that build upon the core. The extension mechanisms facilitate the creation of such additional vocabularies.
With most extensions, we expect that some small frequently used set of terms will be in core, with a long tail of more specialized terms in the extension.

As yet there are no extensions published.  However, there are some on the way.

As Chair of the Schema Bib Extend W3C Community Group I have been closely involved with a proposal by the group for an initial bibliographic extension (bib.schema.org) to Schema.org.  The proposal includes new types for Chapter, Collection, Agent, Atlas, Newspaper & Thesis, CreativeWork properties to describe the relationship between translations, plus types & properties to describe comics.  I am also following the proposal’s progress through the system – a bit of a learning exercise for everyone.  Hopefully I can share the news in the not too distant future that bib will be one of the first released extensions.

W3C Community Group for Schema.org
A subtle change has also taken place in the way the vocabulary, its proposals, extensions and direction can be followed and contributed to.  The creation of the Schema.org Community Group has now provided an open forum for this.

So is 2.0 a bit of a milestone?  Yes, taking all things together, I believe it is. I get the feeling that Schema.org is maturing into the kind of vocabulary, supported by a professional community, that will add confidence to those using it and recommending that others should.


Schema.org is basically a simple vocabulary for describing stuff on the web.  Embed it in your HTML and the search engines will pick it up as they crawl, and add it to their structured data knowledge graphs.  They even give you three formats to choose from — Microdata, RDFa, and JSON-LD — when doing the embedding.  I’m assuming, for this post, that the benefits of being part of the knowledge graphs that underpin so-called Semantic Search, and hopefully triggering some Rich Snippet enhanced results display as a side benefit, are self-evident.

The vocabulary itself is comparatively easy to apply once you get your head around it — find the appropriate type (Person, CreativeWork, Place, Organization, etc.) for the thing you are describing, check out the properties in the documentation, and code up the ones you have values for.  Ideally provide a URI (a URL, in Schema.org terms) for a property that references another thing, but if you don’t have one a simple string will do.
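For example, a minimal description of a person in JSON-LD, one of the three formats mentioned above, might look like this (the names and values here are invented for illustration):

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Person",
  "name": "Jane Example",
  "jobTitle": "Curator",
  "worksFor": {
    "@type": "Organization",
    "name": "Example Museum"
  }
}
</script>
```

Drop that in the head or body of a page and a crawler can pick up the description without any further work on your part.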

There are a few strangenesses that hit you when you first delve into using the vocabulary.  For example, there is no problem in describing something that is of multiple types — a LocalBusiness is both an Organization and a Place.  This post is about another unusual, but very useful, aspect of the vocabulary — the Role type.

At first look at the documentation, Role looks like a very simple type with a handful of properties.  On closer inspection, however, it doesn’t seem to fit in with the rest of the vocabulary.  That is because it is capable of fitting almost anywhere.  Anywhere there is a relationship between one type and another, that is.  It is a special case type that allows a relationship, say between a Person and an Organization, to be given extra attributes.  Some might term this as a form of annotation.

So what need is this satisfying, you may ask.  It must be a significant need to cause the creation of a special case in the vocabulary.  Let me walk through a case, used in a blog post on the topic, to explain a need scenario and how Role satisfies that need.

Starting With American Football

Say you are describing members of an American Football Team.  Firstly you would describe the team using the SportsOrganization type, giving it a name, sport, etc. Using RDFa:
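A sketch of such markup (the team details follow the example used throughout this post; attribute details may differ from the original):

```html
<div vocab="http://schema.org/" typeof="SportsOrganization" resource="#team">
  <span property="name">Touchline Gods</span>
  <span property="sport">American Football</span>
</div>
```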

Then describe a player using a Person type, providing name, gender, etc.:
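Again as an RDFa sketch (the player’s details follow the example discussed below):

```html
<div vocab="http://schema.org/" typeof="Person" resource="#player">
  <span property="name">Chucker Roberts</span>
  <meta property="gender" content="Male">
</div>
```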

Now let’s relate them together by adding an athlete relationship to the Person description:
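Sketched in RDFa, the athlete property on the team now points at the Person:

```html
<div vocab="http://schema.org/" typeof="SportsOrganization" resource="#team">
  <span property="name">Touchline Gods</span>
  <div property="athlete" typeof="Person" resource="#player">
    <span property="name">Chucker Roberts</span>
  </div>
</div>
```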


Let’s take a look at the data structure we have created, using Turtle – not an HTML markup syntax, but an excellent way to visualise the data structures isolated from the HTML:
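A sketch of that structure in Turtle:

```turtle
@prefix schema: <http://schema.org/> .

<#team> a schema:SportsOrganization ;
    schema:name "Touchline Gods" ;
    schema:athlete <#player> .

<#player> a schema:Person ;
    schema:name "Chucker Roberts" .
```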

So we now have Chucker Roberts described as an athlete on the Touchline Gods team.  The obvious question then is how do we describe the position he plays in the team.  We could have extended the SportsOrganization type with a property for every position, but scaling that across every position for every team sport type would have soon ended up with far more properties than would have been sensible, and beyond the maintenance scope of a generic vocabulary such as Schema.org.

This is where Role comes in handy.  Regardless of the range defined for any property in Schema.org, it is acceptable to provide a Role as a value.  The convention then is to use a property with the same property name, that the Role is a value for, to then remake the connection to the referenced thing (in this case the Person).  In simple terms, we have just inserted a Role type between the original two descriptions.
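In Turtle, the reshaped structure looks something like this:

```turtle
@prefix schema: <http://schema.org/> .

<#team> a schema:SportsOrganization ;
    schema:name "Touchline Gods" ;
    schema:athlete <#role> .

<#role> a schema:Role ;
    schema:athlete <#player> .

<#player> a schema:Person ;
    schema:name "Chucker Roberts" .
```

Note how the athlete property is used twice – once pointing into the Role, and once out of it to the Person.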


This indirection has not added much, you might initially think, but Role has some properties of its own (startDate, endDate, roleName) that can help us qualify the relationship between the SportsOrganization and the athlete (Person).  For the field of organizations there is a subtype of Role (OrganizationRole) which allows the relationship to be qualified slightly more.
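In RDFa that might be marked up as follows (the position name and start date are illustrative values, not taken from the original post):

```html
<div vocab="http://schema.org/" typeof="SportsOrganization" resource="#team">
  <span property="name">Touchline Gods</span>
  <div property="athlete" typeof="OrganizationRole" resource="#role">
    <span property="roleName">Quarterback</span>
    <meta property="startDate" content="2013-09-01">
    <div property="athlete" typeof="Person" resource="#player">
      <span property="name">Chucker Roberts</span>
    </div>
  </div>
</div>
```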



and in Turtle:
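A sketch of the same structure (the position name and start date are illustrative values):

```turtle
@prefix schema: <http://schema.org/> .

<#team> a schema:SportsOrganization ;
    schema:name "Touchline Gods" ;
    schema:athlete <#role> .

<#role> a schema:OrganizationRole ;
    schema:roleName "Quarterback" ;
    schema:startDate "2013-09-01" ;
    schema:athlete <#player> .

<#player> a schema:Person ;
    schema:name "Chucker Roberts" .
```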

Beyond American Football

So far I have just been stepping through the example provided in the blog post on this topic.  Let’s take a look at an example from another domain – the one I spend my life immersed in – libraries.

There are many relationships between creative works that libraries curate and describe (books, articles, theses, manuscripts, etc.) and people & organisations that are not covered adequately by the properties available (author, illustrator,  contributor, publisher, character, etc.) in CreativeWork and its subtypes.  By using Role, in the same way as in the sports example above,  we have the flexibility to describe what is needed.

Take a book (How to be Orange: an alternative Dutch assimilation course) authored by Gregory Scott Shapiro, that has a preface written by Floor de Goede. As there is no writerOfPreface property we can use, the best we could do is to put Floor de Goede in as a contributor.  However, by using Role we can qualify the contribution role that he played to be that of the writer of preface.


In Turtle:
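A sketch of the description – the VIAF identifiers shown are placeholders (substitute the real URIs for the two people), and the Work URI is illustrative:

```turtle
@prefix schema: <http://schema.org/> .

<#book> a schema:Book ;
    schema:name "How to be Orange: an alternative Dutch assimilation course" ;
    schema:author <http://viaf.org/viaf/000000001> ;        # Gregory Scott Shapiro (placeholder VIAF URI)
    schema:contributor <#role> ;
    schema:exampleOfWork <http://example.org/work/how-to-be-orange> .

<#role> a schema:Role ;
    schema:roleName <http://id.loc.gov/vocabulary/relators/wpr> ;   # LoC relator: writer of preface
    schema:contributor <http://viaf.org/viaf/000000002> .           # Floor de Goede (placeholder VIAF URI)
```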

and RDFa:
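An RDFa sketch of the same description (the VIAF identifiers are placeholders for the real ones, and the Work URI is illustrative):

```html
<div vocab="http://schema.org/" typeof="Book" resource="#book">
  <span property="name">How to be Orange: an alternative Dutch assimilation course</span>
  <link property="author" href="http://viaf.org/viaf/000000001">
  <link property="exampleOfWork" href="http://example.org/work/how-to-be-orange">
  <div property="contributor" typeof="Role" resource="#role">
    <link property="roleName" href="http://id.loc.gov/vocabulary/relators/wpr">
    <link property="contributor" href="http://viaf.org/viaf/000000002">
  </div>
</div>
```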

You will note that in this example I have made use of URLs to external resources – VIAF for defining the Persons and the Library of Congress relator codes – instead of defining them myself as strings.  I have also linked the book to its Work definition so that someone exploring the data can discover other editions of the same work.

Do I always use Role?
In the above example I relate a book to two people, the author and the writer of preface.  I could have linked to the author via another Role, with the roleName being ‘Author’ or the equivalent relator URI.  Although possible, it is not a recommended approach.  Wherever possible use the properties defined for a type.  This is what data consumers such as search engines are going to be initially looking for.

One last example

To demonstrate the flexibility of using the Role type here is the markup that shows a small diversion in my early career:
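It went something like this – the organisation name and dates are invented for illustration; the roleName string is the point of the example:

```turtle
@prefix schema: <http://schema.org/> .

<#band> a schema:Organization ;
    schema:name "A Touring Rock Band" ;
    schema:employee <#role> .

<#role> a schema:EmployeeRole ;
    schema:roleName "Keyboards Roadie" ;
    schema:startDate "1976" ;
    schema:endDate "1977" ;
    schema:employee <#me> .

<#me> a schema:Person .
```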

This demonstrates the ability of Role to be used to provide added information about most relationships between entities, in this case the employee relationship. Often Role itself is sufficient, with the ability for the vocabulary to be extended with subtypes of Role to provide further use-case specific properties.

Whenever possible use URLs for roleName
In the above example, it is exceedingly unlikely that there is a citeable definition on the web that I could link to for the roleName, so it is perfectly acceptable to just use the string “Keyboards Roadie”.  However, to help the search engines understand unambiguously what role you are describing, it is always better to use a URL.  If you can’t find one, for example in the Library of Congress Relator Codes or in Wikidata, consider creating one yourself in Wikipedia or Wikidata for others to share. Another spin-off benefit of using URIs (URLs) is that they are language independent; regardless of the language of the labels in the data, the URI always means the same thing.  Sources like Wikidata often have names and descriptions for things defined in multiple languages, which can be useful in itself.

Final advice
This very flexible mechanism has many potential uses when describing your resources in Schema.org. There is always a danger in over-using useful techniques such as this. Before using it, be sure that there is not already a defined way of describing your relationship within Schema, or one worth proposing to those that look after the vocabulary.

Good luck in your role in describing your resources and the relationships between them using Schema.org.


Google announced yesterday that it is the end of the line for Freebase, and they have “decided to help transfer the data in Freebase to Wikidata, and in mid-2015 we’ll wind down the Freebase service as a standalone project”.

As well as retiring access for data creation and reading, they are also retiring API access – not good news for those who have built services on top of them.  The timetable they shared for the move is as follows:

Before the end of March 2015
– We’ll launch a Wikidata import review tool
– We’ll announce a transition plan for the Freebase Search API & Suggest Widget to a Knowledge Graph-based solution

March 31, 2015
– Freebase as a service will become read-only
– The website will no longer accept edits
– We’ll retire the MQL write API

June 30, 2015
– We’ll retire the Freebase website and APIs
– The last Freebase data dump will remain available, but developers should check out the Wikidata dump

The crystal ball gazers could probably have predicted a move such as this when Google employed Denny Vrandečić, the then lead of Wikidata, a couple of years back. However they could have predicted a load of other outcomes too. 😉

In the long term this should be good news for Wikidata, but in the short term they may have a severe case of indigestion as they attempt to consume data that will, in some estimations, treble the size of Wikidata adding about 40 million Freebase facts into its current 12 million.  It won’t be a simple copy job.

Loading Freebase into Wikidata as-is wouldn’t meet the Wikidata community’s guidelines for citation and sourcing of facts — while a significant portion of the facts in Freebase came from Wikipedia itself, those facts were attributed to Wikipedia and not the actual original non-Wikipedia sources. So we’ll be launching a tool for Wikidata community members to match Freebase assertions to potential citations from either Google Search or our Knowledge Vault, so these individual facts can then be properly loaded to Wikidata.

There are obvious murmurings on the community groups about things such as how strict the differing policies for confirming facts are, and how useful the APIs are. There are bound to be some hiccups on this path – more of an arranged marriage than one of love at first sight between the parties.

I have spent many a presentation telling the world how Google have based their Knowledge Graph on the data from Freebase, which they got when acquiring Metaweb in 2010.

So what does this mean for the Knowledge Graph?  I believe it is a symptom of the Knowledge Graph coming of age as a core feature of the Google infrastructure.  They have used Freebase to seed the Knowledge Graph, but now that seed has grown into a young tree fed by the twin sources of Google search logs, and the rich nutrients delivered by structured data embedded in millions of pages on the web. Following the analogy, the seed of Freebase, as a standalone project/brand, just doesn’t fit anymore with the core tree of knowledge that Google is creating and building.  It is no coincidence that they’ll “announce a transition plan for the Freebase Search API & Suggest Widget to a Knowledge Graph-based solution”.

As for Wikidata, if the marriage of data is successful, it will establish it as the source for open structured data on the web and for facts within Wikipedia.

As the live source for information that will often be broader than the Wikipedia it sprang from, I suspect Wikidata’s rise will spur the eventual demise of that other source of structured data from Wikipedia – DBpedia.   How in the long term will it be able to compete, as a transformation of occasional dumps of Wikipedia, with a live evolving broader source?   Such a demise would be a slow process however – DBpedia has been the de facto link source for such a long time, its URIs are everywhere!

However you see the eventual outcomes for Freebase, Wikidata, and DBpedia, this is big news for structured data on the web.


It is one thing to have a vision – regular readers of this blog will know I have them all the time – it’s yet another to see it starting to form through the mist into a reality. Several times in the recent past I have spoken of some of the building blocks for bibliographic data to play a prominent part in the Web of Data.  The Web of Data that is starting to take shape and drive benefits for everyone.  Benefits that for many are hiding in plain sight on the results pages of search engines: in those informational panels with links to people’s parents, universities, and movies, or maps showing the location of mountains, and retail outlets; incongruously named Knowledge Graphs.

Building blocks such as: Linked Data in WorldCat.org; moves to enhance Schema.org capabilities for bibliographic resource description; recognition that Linked Data has a beneficial place in library data, and initiatives to turn that into a reality; the release of Work entity data mined from, and linked to, the huge WorldCat.org data set.

OK, you may say, we’ve heard all that before, so what is new now?

As always it is a couple of seemingly unconnected events that throw things into focus.

Event 1:  An article by David Weinberger in the DigitalShift section of Library Journal entitled Let The Future Go.  An excellent article telling libraries that they should not be so parochially focused on their own domain whilst looking to how they are going to serve their users’ needs in the future.  Get our data out there, everywhere, so it can find its way to those users, wherever they are, making it accessible to all.  David references three main ways to provide this access:

  1. APIs – to allow systems to directly access our library system data and functionality
  2. Linked Data – can help us open up the future of libraries. By making clouds of linked data available, people can pull together data from across domains
  3. The Library Graph –  an ambitious project libraries could choose to undertake as a group that would jump-start the web presence of what libraries know: a library graph. A graph, such as Facebook’s Social Graph and Google’s Knowledge Graph, associates entities (“nodes”) with other entities

(I am fortunate to be a part of an organisation, OCLC, making significant progress on making all three of these a reality – the first one is already baked into the core of OCLC products and services)

It is the 3rd of those, however, that triggered recognition for me.  Personally, I believe that we should not be focusing on a specific ‘Library Graph’ but more on the ‘Library Corner of a Giant Global Graph’  – if graphs can have corners that is.  Libraries have rich specialised resources and have specific needs and processes that may need special attention to enable opening up of our data.  However, when opened up in context of a graph, it should be part of the same graph that we all navigate in search of information whoever and wherever we are.

Event 2: A posting by ZBW Labs Other editions of this work: An experiment with OCLC’s LOD work identifiers detailing experiments in using the OCLC WorldCat Works Data.

ZBW contributes to WorldCat, and has 1.2 million oclc numbers attached to it’s bibliographic records. So it seemed interesting, how many of these editions link to works and furthermore to other editions of the very same work.

The post is interesting from a couple of points of view.  Firstly, the simple steps they took to get at the data, really well demonstrated by the command-line calls used to access it – get OCLCNum data from WorldCat.org in JSON format – extract the schema:exampleOfWork link to the Work – get the Work data from WorldCat, also in JSON – parse out the links to other editions of the work and compare with their own data.  Command-line calls that were no doubt embedded in simple scripts.
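Assuming JSON responses shaped around the schema.org terms mentioned above – exampleOfWork on an edition, workExample on a Work – the parsing steps might be scripted like this (the sample data is made up for illustration, not real WorldCat records):

```python
# Sketch of the ZBW-style steps: from an edition description, follow
# exampleOfWork to the Work, then list the Work's other editions.

def work_uri_for_edition(edition):
    """Return the schema:exampleOfWork link from an edition description."""
    return edition.get("exampleOfWork")

def other_editions(work, edition_uri):
    """Return the Work's workExample links, excluding the edition we started from."""
    examples = work.get("workExample", [])
    if not isinstance(examples, list):  # a single link may not be wrapped in a list
        examples = [examples]
    return [uri for uri in examples if uri != edition_uri]

# Illustrative data in the shape described above
edition = {
    "@id": "http://www.worldcat.org/oclc/123456",
    "exampleOfWork": "http://worldcat.org/entity/work/id/999",
}
work = {
    "@id": "http://worldcat.org/entity/work/id/999",
    "workExample": [
        "http://www.worldcat.org/oclc/123456",
        "http://www.worldcat.org/oclc/654321",
    ],
}

work_uri = work_uri_for_edition(edition)
siblings = other_editions(work, edition["@id"])
print(work_uri)   # the Work this edition links to
print(siblings)   # other editions of the same Work
```

In practice the two dictionaries would be fetched with simple HTTP calls, exactly as in the command-line version.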

Secondly, there was the implicit way that the corpus of WorldCat Work entity descriptions, and their canonical identifying URIs, is used as an authoritative hub for Works and their editions.  A concept that is not new in the library world – we have been doing this sort of thing with names and person identities via other authoritative hubs, such as VIAF, for ages.  What is new here is that it is a hub for Works and their relationships, and the bidirectional nature of those relationships – work to edition, edition to work – the beginnings of a library graph linked to other hubs for subjects, people, etc.

The ZBW Labs experiment is interesting in its own right – simple approach, enlightening results.  What is more interesting for me is that it demonstrates a baby step towards the way the Library corner of that Global Web of Data will not only naturally form (as we expose and share data in this way – as linked entity descriptions), but will naturally fit into future library workflows, with all sorts of consequential benefits.

The experiment is exactly the type of initiative that we hoped to stimulate by releasing the Works data.  Using it for things we never envisaged, delivering unexpected value to our community.  I can’t wait to hear about other initiatives like this that we can all learn from.

So who is going to be doing this kind of thing – describing entities and sharing them to establish the hubs (nodes) that will form the graph?  Some are already there, in the traditional authority file hubs: the Library of Congress LC Linked Data Service for authorities and vocabularies (id.loc.gov), VIAF, ISNI, FAST, the Getty vocabularies, etc.

As previously mentioned Work is only the first of several entity descriptions that are being developed in OCLC for exposure and sharing.  When others, such as Person, Place, etc., emerge we will have a foundation of part of a library graph – a graph that can and will be used, and added to, across the library domain and then on into the rest of the Global Web of Data.  An important authoritative corner, of a corner, of the Giant Global Graph.

As I said at the start these are baby steps towards a vision that is forming out of the mist.  I hope you and others can see it too.

(Toddler image: Harumi Ueda)
