The Costs of Cultural Heritage Data Services: The CIDOC CRM or Aggregator formats?

Martin Doerr (Research Director at the Information Systems Laboratory and Head of the Centre for Cultural Informatics, FORTH)
Dominic Oldman (Principal Investigator of ResearchSpace, Deputy Head of IS, British Museum)

June 2013

Many larger cultural institutions are gradually increasing their engagement with the Internet and contributing to the growing provision of integrated and collaborative data services. This occurs in parallel with the rise of so-called aggregation services, which seemingly strive to achieve the same goal. On closer inspection, however, there are quite fundamental differences that produce very different outcomes.

Traditional knowledge production occurred in an author’s private space or a lab with local records, notwithstanding field research. This space or lab may be part of an institution such as a museum. The author (scholar or scientist) would publish results, and by making content accessible it would then be collected by libraries. The author ultimately knows how to interpret the statements in his/her publication and relate them to the reality referred to in the publication, whether from the field, from a lab or from a collection. Many authors are also curators of knowledge and things.

The librarian would not know this context, would not be a specialist in the respective field, and therefore must not alter the content in any way. However, (s)he would integrate the literature under common dominant generic concepts and references, such as “Shakespeare studies”, and preserve the content.

In the current cultural-historical knowledge life-cycle, we may distinguish three levels of stewardship of knowledge: (1) the curator or academic, (2) the disciplinary institution (such as the Smithsonian, the British Museum or smaller cultural heritage bodies), and (3) the discipline-neutral aggregator (such as Europeana or IMLS-DCC). Level (2) typically acts as “provider” to the “aggregator”.

Obviously, the highest level can make the fewest assumptions about common concepts, in particular about a data model, in order to integrate content. Therefore, it can offer services only for very general relationships in the provided content. On the other hand, questions needing such a global level of knowledge will be equally generic. Therefore, the challenge is NOT to find the most common fields in the provider schemata (“core fields”), but the most relevant generalizations (such as “refers to”), avoiding overgeneralizations (such as “has date”). These generalizations are for accessing content, but should NOT be confused with the demands of documenting knowledge. At that level, some dozens of generic properties may be effective.
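The idea of querying specific provider statements through a small set of relevant generalizations can be sketched very simply. The following Python fragment is purely illustrative: the property names and data are invented for the example and are not actual CRM identifiers.

```python
# Minimal sketch: each specific provider property declares which generic
# generalization it falls under, so a global service can query by the
# generic term without the providers stripping their data down.
# All property names and statements here are hypothetical.

GENERALIZES = {
    "depicts": "refers to",
    "commemorates": "refers to",
    "was produced at": "has location",
}

# Provider statements keep their full specificity.
statements = [
    ("Rosetta Stone", "commemorates", "Ptolemy V"),
    ("Portland Vase", "depicts", "mythological scene"),
]

def query(generic_property, data):
    """Return every statement whose specific property falls under the
    requested generic property (or matches it directly)."""
    return [(s, p, o) for (s, p, o) in data
            if GENERALIZES.get(p) == generic_property or p == generic_property]

print(query("refers to", statements))
```

A generic query for “refers to” finds both statements, even though neither provider ever recorded a field of that name; the specific meaning (“commemorates”, “depicts”) is preserved and remains available for more specialized questions.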

The preoccupation of providers and aggregators with a common set of fields has the result that they only support rudimentary connections between the datasets they collect and as a result reduce the ability for researchers to determine where the most relevant knowledge may be located. As with the library, the aggregator’s infrastructure can only support views of the data (search interfaces) that reflect their own limited knowledge because the data arrives with little or no context and over-generalized cross-correlations (“see also”, “relation”, “coverage”).

The common aggregation process itself strips context away from the data, creating silos within the aggregator’s repository. Without adequate contextual information, searching becomes increasingly inadequate the larger the aggregation becomes. This limitation is passed on through any Application Programming Interfaces that the aggregator offers. Aggregators are slowly beginning to understand that metadata is an important form of content, and not only a means to query according to current technical constraints. Some aggregators, such as the German Digital Library, store and return the rich “original metadata” received from providers and derive indexing data on the aggregator side, rather than asking providers to strip down their data.

The institution actually curating content must document it so that it will not only be found, but understood in the future. It therefore needs an adequate [1] representation of the contexts objects come from and of their meaning. This representation already has some disciplinary focus, and ultimately allows for integrating the more specialized author knowledge or lab data. For instance, chronological data curves from a carbon dating (C14) lab should be integrated at the museum level (2) by exact reference to the excavation event and records, but at the aggregator level (3) may be described just by a creation date.

The current practice of provider institutions of spending millions of pounds, dollars or euros to manually normalize their data directly into aggregator formats appears to be an unbelievable waste of money and knowledge. The cost of doing so exceeds by far the cost of software of whatever sophistication. It appears much more prudent to normalize data at an institutional level into an adequate representation, from which the generic properties of a global aggregator service can be produced automatically, rather than producing, in advance of the aggregation services, another huge set of simplified data for manual integration.

This is precisely the relationship between the CRM and aggregation formats like the EDM. The EDM is the minimal common generalization at the aggregator level, a form to index data at a first level. The CRM is a container, open for specialization, for data about cultural-historical contexts and objects. The CRM is not a format prescription. Concepts of the CRM are used as needed when respective data appear at the provider side. There is no notion of any mandatory field. Each department can select what it regards as mandatory for its own purpose, and even specialize further, without losing the capacity of consistent global querying by CRM concepts. CRM data can automatically be transformed to other data formats, but even quite complex data in a CRM compatible form can effectively be queried by quite simple terms [3].
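The claim that aggregator records can be produced automatically from a richer representation, while the reverse is not possible, can be illustrated with a toy transformation. The field and structure names below are invented for the example, loosely inspired by the event-centred style of the CRM and the flat style of “core field” formats; they are not the actual CRM or EDM specifications.

```python
# Sketch: deriving a flat, aggregator-style record automatically from a
# richer event-centred description. All names are illustrative only.

rich_record = {
    "object": "Hoa Hakananai'a",
    "production_event": {
        "carried_out_by": "Rapa Nui carvers",
        "took_place_at": "Easter Island",
        "timespan": {"begin": 1100, "end": 1600},
    },
}

def to_aggregator_format(record):
    """Collapse the event structure into simple 'core fields'.
    Note that the reverse direction (flat -> rich) is not computable:
    the explicit link binding maker, place and date to one production
    event is lost in the flat form."""
    event = record["production_event"]
    return {
        "title": record["object"],
        "creator": event["carried_out_by"],
        "coverage": event["took_place_at"],
        "date": f"{event['timespan']['begin']}-{event['timespan']['end']}",
    }

print(to_aggregator_format(rich_record))
```

Once such a transformation exists, delivering to an aggregator is a marginal, repeatable cost, whereas manually re-keying data into the flat form discards the context that cannot later be recovered.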

Similarly, institutions may revise their data formats such that the more generic CRM concepts can automatically be produced from them, i.e., make their formats specializations of the CRM to the degree this is needed for more global questions. For instance, the features of the detailed curve of a C14 measurement are not a subject for a query at an institutional level. Researchers would rather query to retrieve the curve as a whole.

The British Museum understands this fundamental distinction and therefore understands the different risks and costs. This means both the long-term financial costs of providing data services, important to organizations with scarce resources, and the cost to cultural heritage knowledge communities and to society in general. As a consequence, they publish using the CRM standard. They also realize that data in the richer CRM format is much more likely to be comprehensible in the future than in “core metadata” form.

Summarizing, we regard publishing and providing information in a CRM compatible form [2] at the institutional or disciplinary level to be much more effective in terms of research utility (and the benefits of this research to other educational and engagement activities). The long-term costs are reduced even with further specializations of such a form, and the costs of secondary transformation algorithms to aggregation formats like EDM are marginal.

Dominic Oldman


[1] Smith, B. Ontology. In Floridi, L. (ed), The Blackwell Guide to the Philosophy of Computing and Information, pages 155–166. Oxford: Blackwell, 2003.

[2] Nick Crofts, Martin Doerr, Tony Gill, Stephen Stead, Matthew Stiff (editors), Definition of the CIDOC Conceptual Reference Model, version 5.0.4 of the official reference document, December 2011.

[3] Tzompanaki, K., & Doerr, M. (2012). A New Framework For Querying Semantic Networks. Museums and the Web 2012: the international conference for culture and heritage on-line, April 11–14, San Diego, CA, USA.


The British Museum, CIDOC CRM and the Shaping of Knowledge

At the British Museum we are fast approaching a new production version of our currently beta Semantic Endpoint. The production version will remove some of the current restrictions and provide a more robust environment to develop applications against. It will also come with much needed documentation detailing a new mapping to the CIDOC CRM (Conceptual Reference Model) prompted by feedback received from the current version and by requirements to support the ResearchSpace project.

The use of the CIDOC CRM itself has raised questions and criticisms, mostly from developers. This comes about for a variety of reasons: a lack of current CRM resources; a lack of experience of using it (an issue with any new method or approach); a lack of documentation about particular implementations; and also, particular to this type of publication, a lack of domain knowledge among those creating cultural heritage web applications. The CRM exposes a real issue in the production and publication of cultural heritage information about the extent to which domain experts are involved in digital publication and, as a result, its quality.

The debate between providing data in a simple format for others to use in web pages and at hack days, and a richer and more ontological approach (requiring a deeper understanding of collection data), is one in which the former position is currently dominant. To support this there are some exceptional projects using simple schemas designed to achieve specific and collaborative objectives. However, many linked data points lack the quality to be more than basic information jukeboxes that, in turn, support applications with limited usefulness and shelf life. In short, the current cultural heritage linked data movement, concentrating on access (a fundamental objective), may have ignored some of the reasons for establishing networks of knowledge in the first place.

The British Museum’s source of object data has its stronger and weaker elements but it has descriptions, associations and taxonomies developed over the last 30 years of digitisation. In order to exploit this accumulated knowledge and provide support for a wide range of users, including humanist scholars, it needs to be described within a rich semantic framework. This is a first step to developing the new taxonomies needed to allow different relationships and interpretations of harmonised collections to be exposed. Semantic data harmonisation is not just about linking database records together but is about exploring and discovering (inferring) new knowledge.

The full power of the CRM comes when there is a sufficient mass of conforming data providing coverage of topics such that the density of information and events generates a resource from which the inference of knowledge can occur. Research tool-kits built around such a collaboration of data would uncover new facts that could never be discovered using traditional methodologies. In this respect it is an ontology tailor-made for making intelligent sense of the mass of online cultural heritage data. Its adoption continues to grow, but it has also reached a ‘chicken and egg’ stage, needing the implementation of public applications to demonstrate clearly its unique properties and value to humanities research.
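The kind of cross-collection inference described above can be sketched in miniature: two institutions independently record objects against the same event, and harmonised data makes the connection queryable even though it was never stated in any single source database. All data in this sketch is invented for illustration.

```python
# Sketch: inferring a connection across collections via a shared event.
# The datasets and event names are hypothetical.

museum_a = [("vase fragment A", "found_in", "Excavation at Knossos 1901")]
museum_b = [("tablet B", "found_in", "Excavation at Knossos 1901")]

def co_located_finds(*datasets):
    """Group objects from any number of datasets by the event they
    reference; a group spanning datasets is a relationship that no
    single source database ever recorded explicitly."""
    by_event = {}
    for data in datasets:
        for obj, _, event in data:
            by_event.setdefault(event, []).append(obj)
    # Keep only events linking more than one object.
    return {e: objs for e, objs in by_event.items() if len(objs) > 1}

print(co_located_finds(museum_a, museum_b))
```

Scaled up to millions of event-centred statements, this is the density effect the text describes: each additional conforming dataset multiplies the number of such discoverable connections.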

By bringing data together in a meaningful way rather than just treating it as a technical process or act of systems integration we can start to deconstruct the years of separation and institutional classifications designed to support narrower curatorial and administrative aims. Regardless of the resources available to research projects, this historical limitation, and the lack of any cost effective digital solution, has made the problem of asking a broader range of questions a difficult challenge. But to ask the broader questions that may lead to more interesting, valuable and sustainable web applications, requires appropriate semantic infrastructures. The CRM provides a starting point.

The publication of BM data in the CRM format comes from a concern that many Semantic Web / Linked Data implementations will not provide adequate support for a next generation of collaborative data centric humanities projects. They may not support the types of tools necessary for examining, modelling and discovering relationships between knowledge owned by different organisations at a level currently limited to more controlled and localized data-sets. Indeed, the proliferation of different uncoordinated linked data schemas may create a confusing and complex environment of mappings between data stores and thereby limit the overall effectiveness of semantic technology and produce outputs that don’t push digital publications much beyond those achieved using existing database technology.

The CRM is difficult not because of what it is (a distillation of existing and known cultural heritage concepts and relationships) but because it requires real cross-disciplinary collaboration to implement properly, and this type of collaboration is difficult. The aim of the British Museum Endpoint is to deliver a technical interface, but also to demystify the processes underlying the implementation of the CRM, as well as the BM’s CRM mapping itself. By doing this the Endpoint should support a wide range of publication objectives for different audiences and a wide range of developers with varying experience and domain knowledge, and crucially fulfill the future needs of humanities scholars.

In particular, the aim is to raise the bar on what can be achieved on the Internet and to allow researchers to transfer data modelling techniques that are currently only serviced by specialist relational database models into the online world. These techniques will allow scholars with access to CRM-aligned datasets to make sense of and tackle ‘big data’ littered with many different classifications and taxonomies, and allow a broader, specialist and contextual re-examination of historical data and historical events.

Dominic Oldman