It is great to launch a new venture, and I am looking forward to launching this one – Data Liberate.
Having said that, there is much continuity in this step. Those who know me from the conference circuit, my work with the Talis Group and, more recently, with Talis Consulting will recognise much in the core of what Data Liberate has to offer.
What will I be doing at Data Liberate? The simple answer is much of the same, but with a wider audience and client base, and less restricted to specifically Linked Data and Semantic Web techniques and technologies. Extracting value from data, financial or otherwise, is at the core of the next big wave of innovation on the web. The Web has become central to what we all do commercially, corporately, socially and as individuals. This data-driven next wave of innovation will therefore influence, and potentially benefit, us all.
As with all ‘new waves of innovation’ there is much technological gobbledegook, buzzwords and marketing hype surrounding it. My driving focus for many years has been to make new stuff understandable to those who can benefit from it – a focus I intend to continue and increase over the coming months and years. This is not just a one-way process. Experience has shown me that those with the technology are often not very adept at promoting its value to others in terms that they can connect with. I therefore intend to spend some of my efforts with these technology providers, helping them get their message over for the benefit of all.
Trawling the blogosphere you will find proponents of Open Data, Big Data, Linked Data, Linked Open Data, Enterprise Data and Cloud Computing, all trying to convince us that their core technology is the key to the future. As ever, the future of extracting and benefiting from the value within data will be a mixture of most, if not all, of the above. Hence Data Liberate’s focus is on data in all its forms.
Contact me if you want to talk through what Data Liberate will be doing and how we can help you.
The Library of Congress made an announcement earlier this week that has left some usually vocal library pundits speechless.
MARC is Dead! – RDA made irrelevant! – cries that can be heard rattling around the bibliographic blogo-twittersphere. My opinion is that this is an inevitable move based upon serious consideration, and has been building on several initiatives that have been brewing for many months.
Bold though – very bold. I am sure that there are many in the library community, who have invested much of their careers in MARC and its slightly more hip cousin RDA, who are now suffering from vertigo as they feel the floor being pulled from beneath their feet.
The Working Group of the Future of Bibliographic Control, as it examined technology for the future, wrote that the Library community’s data carrier, MARC, is “based on forty-year-old techniques for data management and is out of step with programming styles of today.”
Many of the libraries taking part in the test [of RDA] indicated that they had little confidence RDA changes would yield significant benefits…
And on a more positive note:
The Library of Congress (LC) and its MARC partners are interested in a deliberate change that allows the community to move into the future with a more robust, open, and extensible carrier for our rich bibliographic data….
….The new bibliographic framework project will be focused on the Web environment, Linked Data principles and mechanisms, and the Resource Description Framework (RDF) as a basic data model.
There is still a bit of confusion there between a data carrier and a framework for describing resources. Linked Data is about linking descriptions of things, not necessarily transporting silos of data from place to place. But maybe I quibble a little too much at this early stage.
So now what:
The Library of Congress will be developing a grant application over the next few months to support this initiative. The two-year grant will provide funding for the Library of Congress to organize consultative groups (national and international) and to support development and prototyping activities. Some of the supported activities will be those described above: developing models and scenarios for interaction within the information community, assembling and reviewing ontologies currently used or under development, developing domain ontologies for the description of resources and related data in scope, organizing prototypes and reference implementations.
I know that this is the way that LoC and the library community do things, but I do hope that this doesn’t mean that they will disappear into an insular huddle for a couple of years, to re-emerge with something that is almost right yet missing some of the evolution that has gone on around them over that period.
One very relevant example of the success of applying open thinking and approach to the bibliographic world using Linked Data is the open publishing of the British National Bibliography (BnB). Readers of this blog will know that we at Talis have worked closely with the team at the BL in their ground-breaking work. The data model they produced is an example of one of those things that may induce that feeling of vertigo that I mentioned. It doesn’t look much like a MARC record! I can assure the sceptical that although it may be very different to what you are used to, it is easy to get your head around. (Drop us a line if you want some guidance.)
As Talis host the BnB Linked Data for the BL, I can testify to the success of this work – only launched in mid July. Its use is growing rapidly, receiving just short of 2 million hits in the last month alone.
With the British Library, along with the National Libraries of Canada and Germany, being quoted as partners with the LoC in this initiative, plus their work being referenced as an exemplar in the other reports I mention, I hold out a great hope that things are headed in the right direction.
As comments to some of my previous posts attest, there is concern from some in the community of domain experts that this RDF stuff is too simple and light-weight and will not enable them to capture the rich detail that they need. They are missing a couple of points. Firstly, it is this simplicity that will help non-domain experts to understand, reference and link to their rich resources. Secondly, RDF is more than capable of describing the rich detail that they require – using several emerging ontologies including the RDA ontology, FRBR, etc. Finally, and most importantly, it is not a binary choice between widely comprehended simplicity and domain-specific detailed description. The RDF for a resource can, and probably should, contain both.
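That last point – one description carrying both simple and rich detail side by side – can be sketched in a few lines. This is a minimal illustration, not real library data: the triples are modelled as plain tuples, and the book URI, the `rda:`-style property names and the filter prefixes are all invented for the example.

```python
# One RDF description mixing widely understood properties (Dublin Core
# Terms, Bibliographic Ontology) with domain-specific detail (RDA-style
# terms). A general web consumer can use the simple subset and ignore
# the rest; a domain expert gets the full richness from the same graph.

BOOK = "http://example.org/id/book/1"  # illustrative identifier

description = [
    # Simple, widely comprehended properties
    (BOOK, "rdf:type", "bibo:Book"),
    (BOOK, "dct:title", "A History of the Web"),
    (BOOK, "dct:creator", "http://example.org/id/person/jane-doe"),
    # Richer, domain-specific detail alongside, in the same description
    (BOOK, "rda:extentOfText", "xii, 348 pages"),
    (BOOK, "rda:modeOfIssuance", "single unit"),
]

def simple_view(triples, general_prefixes=("rdf:", "dct:", "bibo:")):
    """Return just the triples a non-domain-expert consumer would use."""
    return [t for t in triples if t[1].startswith(general_prefixes)]

for s, p, o in simple_view(description):
    print(p, "->", o)
```

The point of the sketch is that nothing has to be thrown away: the consumer chooses the level of detail, not the publisher.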
So Library of Congress, I welcome your announcement and offer a friendly reminder that you not only need to draw expertise from the forward thinking library community, but also from the wider Linked Data world. I am sure your partners from the British Library will reinforce this message.
This post was also published on the Talis Consulting Blog
The mission of the Library Linked Data Incubator Group was to help increase the global interoperability of library data on the Web by focusing on the potential role of Linked Data technologies.
This report contains several messages that are not just interesting and relevant for the Linked Data enthusiast in the library community. It contains some home truths for those in libraries who think that a slight tweak to the status quo, such as adopting RDA, will be sufficient to keep libraries [data] relevant in the rapidly evolving world of the web.
Linked Data is not about creating a different Web, but rather about enhancing the Web through the addition of structured data.
By promoting a bottom-up approach to publishing data, Linked Data creates an opportunity for libraries to improve the value proposition of describing their assets.
Linked Data may be a first step toward a “cloud-based” approach to managing cultural information, which could be more cost-effective than stand-alone systems in institutions.
With Linked Open Data, libraries can increase their presence on the Web, where most information seekers can be found.
The use of the Web and Web-based identifiers will make up-to-date resource descriptions directly citable by catalogers.
History shows that all technologies are transitory, and the history of information technology suggests that specific data formats are especially short-lived.
Library developers and vendors will directly benefit from not being tied to library-specific data formats.
Most information in library data is encoded as display-oriented, natural-language text.
Work on library Linked Data can be hampered by the disparity in concepts and terminology between libraries and the Semantic Web community.
Relatively few bibliographic datasets have been made available as Linked Data, and even less metadata has been produced for journal articles, citations, or circulation data.
A major advantage of Linked Data technology is realized with the establishment of connections between and across datasets.
Libraries should embrace the web of information, both by making their data available for use as Linked Data and by using the web of data in library services. Ideally, library data should integrate fully with other resources on the Web, creating greater visibility for libraries and bringing library services to information seekers.
Also, from the report:
Relatively few bibliographic datasets have been made available as Linked Data, and even less metadata has been produced for journal articles, citations, or circulation data — information which could be put to effective use in environments where data is integrated seamlessly across contexts. Pioneering initiatives such as the release of the British National Bibliography reveal the effort required to address challenges such as licensing, data modeling, the handling of legacy data, and collaboration with multiple user communities. However, these also demonstrate the considerable benefits of releasing bibliographic databases as Linked Data. As the community’s experience increases, the number of datasets released as Linked Data is growing rapidly.
Talis Consulting has been closely and actively involved in the modelling, data transformation, publishing, and hosting of the British National Bibliography (BnB) as Linked Data. A great overview of the approach taken to modelling bibliographic data in a way that makes it easily compatible with the wider Web of Data is provided by Tim Hodson in his post – British Library Data Model: Overview. As can be seen from their work, the modelling used for the BnB differs from the approach taken by many attempting to publish bibliographic data as Linked Data – it describes the resources (the books, authors, publishers, etc.) as people, places, events, and things, as against attempting to represent the records that libraries keep about their stock of resources.
With intentions to release open library data specifically mentioning Linked Data, the sentiments from this report are already influencing the wider forward thinking library community. I will leave the last word to the report’s final paragraph which some, in the traditional record-based cataloguing community, may have difficulty in getting their head around. I encourage them to look at libraries from the point of view of the wider [non-library] web consumers, and read it again.
One final caveat: data consumers should bear in mind that, in contrast to traditional, closed IT systems, Linked Data follows an open-world assumption: the assumption that data cannot generally be assumed to be complete and that, in principle, more data may become available for any given entity. We hope that more “data linking” will happen in the library domain in line with the projects mentioned here.
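The open-world assumption quoted above is easier to grasp with a toy example. This is a hedged sketch, with invented URIs: two sources each hold a partial description of the same book, a consumer simply merges them, and nothing anywhere implies the merged description is complete – more data may always arrive.

```python
# Open-world assumption in miniature: descriptions are sets of triples,
# merging is set union, and "complete" is never asserted.

BOOK = "http://example.org/id/book/1"  # illustrative identifier

# Two independent, partial descriptions of the same resource
library_a = {(BOOK, "dct:title", "A History of the Web")}
library_b = {(BOOK, "dct:creator", "http://example.org/id/person/jane-doe")}

# A consumer merges them: no conflict to resolve, just more facts
merged = library_a | library_b

# Later, more data becomes available for the same entity
later = merged | {(BOOK, "dct:issued", "1999")}
```

Contrast this with a closed, record-based system, where a record with a missing field is simply an incomplete record; on the web of data, absence of a triple carries no such implication.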
This post was also published on the Talis Consulting Blog
The Web has been around for getting on for a couple of decades now, and massive industries have grown up around the magic of making it work for you and your organisation. Some of it, it has to be said, can be considered snake-oil. Much of it is the output of some of the best brains on the planet. Where, on the hit parade of technological revolutions to influence mankind, the Web is placed is oft disputed, but it is definitely up there with fire, steam, electricity, computing, and of course the wheel. Similar debates rage, and will continue to rage, around the hit parade of web features that will in retrospect have been most influential – pick your favourites: http, XML, REST, Flash, RSS, SVG, the URL, the href, CSS, RDF – the list is a long one.
I have observed a pattern as each successful new enhancement to the web has been introduced and then generally adopted. Firstly there is a disconnect between the proponents of the new approach/technology/feature and the rest of us. The former split their passions between focusing on the detailed application, rules, and syntax of its use, and broadcasting its worth to the world, not quite understanding why the web masses do not ‘get it’ and adopt it immediately. This phase is then followed by one of post-hype disillusionment from the creators, especially when others start suggesting simplifications to their baby. Also at this time back-room adoption starts to occur, by those who find it interesting but are not evangelistic about it. The real kick for the web comes from those back-room folks who just use this new thing to deliver stuff and solve problems in a better way. It is the results of their work that the wider world starts to emulate, so that they can keep up with the pack and remain competitive. Soon this new feature is adopted by the majority, because all the big boys are using it, and it becomes just part of the tool kit.
A great example of this was RSS. Not a technological leap but a pragmatic mix of current techniques and technologies, mixed in with some lateral thinking and a group of people agreeing to do it in ‘this way’ then sharing it with the world. As you will see from the Wikipedia page on RSS, the syntax wars raged in the early days – I remember it well: 0.9, 0.91, 1.0, 1.1, 2.0, 2.01, etc. I also remember trying, not always with success, to convince people around me to use it, because it was so simple. Looking back it is difficult to say exactly when it became mainstream, but this line from Wikipedia gives me a clue: “In December 2005, the Microsoft Internet Explorer team and Microsoft Outlook team announced on their blogs that they were adopting the feed icon first used in the Mozilla Firefox browser. In February 2006, Opera Software followed suit.” From then on, the majority of consumers of RSS were not aware of what they were using, and it became just one of the web technologies you use to get stuff done.
I am now seeing the pattern starting to repeat itself again, with structured and linked data. Many, including me, have been evangelising the benefits of web friendly, structured, linked data for some time now – preaching to a crowd that has been slow in growing, but growing it is. Serious benefit is now being gained by organisations adopting these techniques and technologies, as our selection of case studies demonstrate. They are getting on with it, often with our help, using it to deliver stuff. We haven’t hit the mainstream yet. For instance, the SEO folks still need to get their head around the difference between content and data.
Something is stirring around the edge of the Semantic Web/Linked Data community that has the potential to give structured web enabled data the kick towards mainstream that RSS got when Microsoft adopted the RSS logo and all that came with it. That something is schema.org, an initiative backed by the heavyweights of the search engine world, Google, Yahoo, and Bing. For the SEO and web developer folks, schema.org offers a simple attractive proposition – embed some structured data in your html and, via things like Google’s Rich Snippets, we will give you a value added display in our search results. Result: happy web developers with their sites getting an improved listing display. Result: lots of structured data starting to be published by people that you would have had an impossible task in convincing that it would be a good idea to publish structured data on the web.
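Whatever the carrying syntax – microdata and RDFa at launch, and later JSON-LD as well – the structured data a webmaster embeds boils down to a typed description with a handful of properties. Here is a hedged sketch of that underlying shape, built with nothing but the standard library; the book, author name and ISBN are invented, while `Book`, `name`, `author` and `isbn` are genuine schema.org terms.

```python
import json

# The kind of schema.org description that earns rich-snippet treatment:
# a typed thing ("Book") with properties the search engines recognise.
# Expressed here as JSON-LD, one of the syntaxes schema.org accepts.
snippet = {
    "@context": "http://schema.org",
    "@type": "Book",
    "name": "A History of the Web",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "isbn": "978-0-00-000000-2",
}

# What would be embedded in the page inside a script element
print(json.dumps(snippet, indent=2))
```

Note how little the webmaster needs to understand about RDF or ontologies to publish this – which is exactly the point: the barrier to entry is a dictionary of plain properties, not a data-modelling exercise.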
I was at SemTech in San Francisco in June, just after schema.org was launched, and it caused a bit of a stir. They’ve over-simplified the standards that we have been working on for years, dumbing down RDF, diluting the capability, with too small a set of attributes, etc., etc. When you get under the skin of schema.org, you see that with support for RDFa, via RDFa 1.1 Lite, they are not that far from the RDF/Linked Data community.
Schema.org should be welcomed as an enabler for getting loads more structured and linked data on the web. Is their approach perfect? No. Will it influence the development of Linked Data? Yes. Will the introduction be messy? Yes. Is it about more than just rich snippets? Oh yes. Do the webmasters care at the moment? No.
If you want a friendly insight into what schema.org is about, I suggest a listen to this month’s Semantic Link podcast, with their guest from Google/schema.org, Ramanathan V. Guha.
Now where have I seen that name before? – Oh yes, back on the Wikipedia RSS page “The basic idea of restructuring information about websites goes back to as early as 1995, when Ramanathan V. Guha and others in Apple Computer’s Advanced Technology Group developed the Meta Content Framework.” So it probably isn’t just me who is getting a feeling of Déjà vu.
This post was also published on the Talis Consulting Blog
A significant step towards open bibliographic data was made in Copenhagen this week at the 25th anniversary meeting of the Conference of European National Librarians (CENL) hosted by the Royal Library of Denmark. From the CENL announcement:
…the Conference of European National Librarians (CENL), has voted overwhelmingly to support the open licensing of their data. What does that mean in practice? It means that the datasets describing all the millions of books and texts ever published in Europe – the title, author, date, imprint, place of publication and so on, which exists in the vast library catalogues of Europe – will become increasingly accessible for anybody to re-use for whatever purpose they want. The first outcome of the open licence agreement is that the metadata provided by national libraries to Europeana.eu, Europe’s digital library, museum and archive, via the CENL service The European Library, will have a Creative Commons Universal Public Domain Dedication, or CC0 licence. This metadata relates to millions of digitised texts and images coming into Europeana from initiatives that include Google’s mass digitisations of books in the national libraries of the Netherlands and Austria. ….it will mean that vast quantities of trustworthy data are available for Linked Open Data developments
There is much to be welcomed here. Firstly, that the vote was overwhelming. Secondly, that the open licence chosen to release this data under is Creative Commons CC0, thus enabling reuse for any purpose. You cannot expect such a vote to cover all the detail, but the phrase ‘trustworthy data are available for Linked Open Data developments’ does give rise to some possible concerns for me. My concern is not from the point of view that this implies that the data will need to be published as Linked Data – this also should be welcomed. My concern comes from some of the library focused Linked Data conversations, presentations and initiatives I have experienced over the last few months and years. Many in the library community who have worked with Linked Data lean towards the approach of using Linked Data techniques to reproduce the very fine detailed structure and terminology of their bibliographic records as a representation of those records in RDF (the Linked Data data format). Two examples of this come to mind:
The recent release of an RDF representation of the MARC21 elements and vocabularies by MMA – Possibly of internal use only to someone transforming a library’s MARC record collection to identify concepts and entities to then describe as linked data. Mind-numbingly impenetrable for anyone who is not a librarian looking for useful data.
The Europeana Data Model (EDM). An impressive and elegant Linked Data RDF representation of the internal record structure and process concerns of Europeana. However again not modelled in a way to make it easy for those outside the [Europeana] library community to engage with, understand and extract meaning from.
The fundamental issue I have with the first of these, and other examples, is that their authors have approached this from the direction of wishing to encode their vast collections of bibliographic records as Linked Data. They would have ended up with a more open [to the wider world] result if they had instead used the contents of their records as a rich resource from which to build descriptions of the resources they hold. In that way you end up with descriptions of things (books, authors, places, publishers, events, etc.), as against descriptions of records created by libraries. Fortunately there is an excellent example of a national library publishing Linked Data which describes the things they hold. The British Library have published descriptions of 2.6 million items they hold in the form of the British National Bibliography. I urge those within Europeana and the European National libraries community, who will be involved in this opening up initiative, to take a close look at the evolving data model that the BL have shared, to kick-start the conversation on the most appropriate [Linked Data] techniques to apply to bibliographic data. For more detail see this Overview of the British Library Data Model. This opening up of data is a great opportunity for trusted librarian-curated data to become a core part of the growing web of data – an opportunity that should not be missed. We must be aware of previous missed opportunities, such as the way MARCXML just slavishly recreated an old structure in a new format. Otherwise we could end up with what could be characterised, in web integration terms, as a significant open data white elephant. Nevertheless I am optimistic: with examples such as the British Library BnB backing up this enthusiastic move to open up a vast collection of metadata, in a useful way that will stimulate Linked Data development, I have some confidence in a good outcome.
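The records-versus-things distinction above can be made concrete with a small sketch. This is purely illustrative: the MARC-like field tags are real tag/subfield conventions, but the record contents, minted URIs and property names are all invented for the example.

```python
# Records versus things. A MARC-like record is one flat bundle of
# strings; the "thing" approach mints identifiers for the book, its
# author, and its publisher, and describes each as a separate,
# linkable resource that others on the web can point at.

record = {
    "245$a": "A History of the Web",   # title statement
    "100$a": "Doe, Jane",              # main entry, personal name
    "260$b": "Example Press",          # publisher name
}

def record_to_things(rec, base="http://example.org/id/"):
    """Use the record as raw material for descriptions of things."""
    book = base + "book/1"
    author = base + "person/doe-jane"
    publisher = base + "org/example-press"
    return [
        (book, "dct:title", rec["245$a"]),
        (book, "dct:creator", author),      # a link to a thing, not a string
        (author, "foaf:name", rec["100$a"]),
        (book, "dct:publisher", publisher),
        (publisher, "foaf:name", rec["260$b"]),
    ]

triples = record_to_things(record)
```

The payoff is that the author and publisher now have their own identifiers: another dataset can link to `person/doe-jane` directly, something impossible when "Doe, Jane" is just a string buried in a record.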
Disclosure: Bibliographic domain experts from the British Library worked with Linked Data experts from the Talis team in the evolution of the BnB data model – something that could be extended and/or repeated with other national and international library organisations.
This post was also published on the Talis Consulting Blog