The Costs of Cultural Heritage Data Services: The CIDOC CRM or Aggregator formats?

Martin Doerr (Research Director at the Information Systems Laboratory and Head of the Centre for Cultural Informatics, FORTH)
Dominic Oldman (Principal Investigator of ResearchSpace, Deputy Head IS, British Museum)

June 2013

Many larger cultural institutions are gradually increasing their engagement with the Internet and contributing to the growing provision of integrated and collaborative data services. This occurs in parallel with the rise of so-called aggregation services, which seemingly strive to achieve the same goal. On closer inspection, however, there are quite fundamental differences that produce very different outcomes.

Traditional knowledge production occurred in an author’s private space or a lab with local records, notwithstanding field research. This space or lab may be part of an institution such as a museum. The author (scholar or scientist) would publish results and, by making content accessible, it would then be collected by libraries. The author ultimately knows how to interpret the statements in his/her publication and relate them to the reality referred to in the publication, whether from the field, from a lab or from a collection. Many authors are also curators of knowledge and things.

The librarian would not know this context, would not be a specialist in the respective field, and therefore must not alter the content in any way. However, (s)he would integrate the literature under common dominant generic concepts and references, such as “Shakespeare studies”, and preserve the content.

In the current cultural-historical knowledge life-cycle, we may distinguish three levels of stewardship of knowledge: (1) the curator or academic, (2) the disciplinary institution (such as the Smithsonian, the British Museum or smaller cultural heritage bodies), and (3) the discipline-neutral aggregator (such as Europeana or IMLS-DCC). Level (2) typically acts as “provider” to the “aggregator”.

Obviously, the highest level can make the fewest assumptions about common concepts, in particular a data model, in order to integrate content. Therefore, it can offer services only for very general relationships in the provided content. On the other hand, questions needing such a global level of knowledge will be equally generic. Therefore, the challenge is NOT to find the most common fields in the provider schemata (“core fields”), but the most relevant generalizations (such as “refers to”), avoiding overgeneralizations (such as “has date”). These generalizations are for accessing content, but should NOT be confused with the demands of documenting knowledge. At that level some dozens of generic properties may be effective.

The preoccupation of providers and aggregators with a common set of fields means that they support only rudimentary connections between the datasets they collect and, as a result, reduce researchers’ ability to determine where the most relevant knowledge may be located. As with the library, the aggregator’s infrastructure can only support views of the data (search interfaces) that reflect its own limited knowledge, because the data arrives with little or no context and over-generalized cross-correlations (“see also”, “relation”, “coverage”).

The common aggregation process itself strips context away from the data, creating silos within the aggregator’s repository. Without adequate contextual information, searching becomes increasingly inadequate the larger the aggregation becomes. This limitation is passed on through any Application Programming Interfaces that the aggregator offers. Aggregators are slowly beginning to understand that metadata is an important form of content, and not only a means to query according to current technical constraints. Some aggregators, such as the German Digital Library, store and return rich “original metadata” received from providers and derive indexing data at the aggregator side, rather than asking providers to strip down their data.

The institution actually curating content must document it so that it will not only be found, but understood in the future. It therefore needs an adequate representation [1] of the contexts objects come from and of their meaning. This representation already has some disciplinary focus, and ultimately allows for integrating the more specialized author knowledge or lab data. For instance, chronological data curves from a carbon dating (C14) lab should be integrated at the museum level (2) by exact reference to the excavation event and records, but at the aggregator level (3) may be described just by a creation date.

The current practice of provider institutions spending millions of pounds, dollars or euros on manually normalizing their data directly to aggregator formats appears to be an unbelievable waste of money and knowledge. The cost of doing so far exceeds the cost of the software, of whatever sophistication. It appears much more prudent to normalize data at an institutional level to an adequate representation, from which the generic properties of a global aggregator service can be produced automatically, rather than producing, in advance of the aggregation services, another huge set of simplified data for manual integration.
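The direction argued here can be sketched in a few lines: keep one rich, event-based record at the institutional level and derive the flat aggregator view from it automatically. In this illustrative sketch the record structure, the object, and the flat field names (Dublin-Core-like `dc_*` keys) are all invented stand-ins, not a real institutional schema or aggregator format.

```python
# Hypothetical rich record: the production of the object is an explicit
# event with its own actor, place and timespan (CRM-style thinking).
rich_record = {
    "identifier": "obj_1907_0101_1",          # invented example object
    "title": "Terracotta figurine",
    "production": {
        "carried_out_by": "Unknown Boeotian workshop",
        "took_place_at": "Tanagra",
        "timespan": {"begin": "-0330", "end": "-0300"},
    },
    "acquisition": {"from": "Private collector", "date": "1907"},
}

def to_aggregator(record):
    """Flatten the rich event-based record into simple 'core fields'.
    The rich record remains the source of truth; this view is disposable
    and can be regenerated whenever the aggregator format changes."""
    prod = record.get("production", {})
    span = prod.get("timespan", {})
    return {
        "dc_identifier": record["identifier"],
        "dc_title": record["title"],
        "dc_creator": prod.get("carried_out_by"),
        # the detailed timespan collapses to a single display date
        "dc_date": f"{span.get('begin')} to {span.get('end')}",
        "dc_coverage": prod.get("took_place_at"),
    }

print(to_aggregator(rich_record))
```

The point of the sketch is the asymmetry: the flat view can always be computed from the rich record, but the rich record can never be recovered from the flat view, which is why normalizing directly to the aggregator format discards knowledge.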

This is precisely the relationship between the CRM and aggregation formats such as the EDM. The EDM is the minimal common generalization at the aggregator level, a form for indexing data at a first level. The CRM is a container, open for specialization, for data about cultural-historical contexts and objects. The CRM is not a format prescription. Concepts of the CRM are used as needed when respective data appear at the provider side. There is no notion of any mandatory field. Each department can select what it regards as mandatory for its own purpose, and even specialize further, without losing the capacity for consistent global querying by CRM concepts. CRM data can automatically be transformed to other data formats, but even quite complex data in a CRM-compatible form can effectively be queried by quite simple terms [3].
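The mechanism that allows complex data to be queried by simple terms is the sub-property hierarchy: each specialized property is declared a specialization of a more generic one, so a query phrased with the generic term also matches the specialized statements. The following is an illustrative sketch only; the property names and hierarchy below are invented for the example and are not the CRM's actual property set.

```python
# Providers document their data with specialized properties...
triples = [
    ("vase_001", "was_excavated_at", "site_knossos"),
    ("coin_042", "was_minted_by", "mint_of_athens"),
    ("print_117", "refers_to", "treaty_of_utrecht"),
]

# ...but each specialization is declared a sub-property of a generic one
# (an over-simplified, invented hierarchy for illustration).
sub_property_of = {
    "was_excavated_at": "was_found_at",
    "was_found_at": "refers_to",
    "was_minted_by": "was_produced_by",
    "was_produced_by": "refers_to",
}

def generalizations(prop):
    """Yield prop and every ancestor in the sub-property hierarchy."""
    while prop is not None:
        yield prop
        prop = sub_property_of.get(prop)

def query(generic):
    """Return all triples whose predicate is the generic property or any
    specialization of it: the data stays rich, the query stays simple."""
    return [(s, p, o) for (s, p, o) in triples
            if generic in generalizations(p)]

# A global query by the generic property finds all three statements,
# although each was documented with a more specific term.
print(query("refers_to"))
```

A query by the intermediate property `was_produced_by` would return only the coin, showing how departments can specialize further without breaking global querying.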

Similarly, institutions may revise their data formats such that the more generic CRM concepts can automatically be produced from them, i.e., make their formats specializations of the CRM to the degree this is needed for more global questions. For instance, the features of the detailed curve of a C14 measurement are not a subject for a query at an institutional level. Researchers would rather query to retrieve the curve as a whole.

The British Museum understands this fundamental distinction and therefore understands the different risks and costs. This concerns both the long-term financial costs of providing data services, important to organizations with scarce resources, and the cost to cultural heritage knowledge communities and to society in general. As a consequence, they publish using the CRM standard. They also realize that data in the richer CRM format is much more likely to be comprehensible in the future than in “core metadata” form.

Summarizing, we regard publishing and providing information in a CRM compatible form [2] at the institutional or disciplinary level to be much more effective in terms of research utility (and the benefits of this research to other educational and engagement activities). The long-term costs are reduced even with further specializations of such a form, and the costs of secondary transformation algorithms to aggregation formats like EDM are marginal.

Dominic Oldman


[1] Smith, B. (2003). Ontology. In Floridi, L. (ed.), The Blackwell Guide to the Philosophy of Computing and Information, pages 155–166. Oxford: Blackwell.

[2] Official version of the CIDOC CRM, version 5.0.4 of the reference document: Crofts, N., Doerr, M., Gill, T., Stead, S., & Stiff, M. (eds.), Definition of the CIDOC Conceptual Reference Model, December 2011. Available as doc file (3.64 Mb) and pdf file (1.56 Mb).

[3] Tzompanaki, K., & Doerr, M. (2012). A New Framework for Querying Semantic Networks. In Museums and the Web 2012: the international conference for culture and heritage on-line, April 11–14, San Diego, CA, USA.


The Semantic Web: The new Enlightenment in an Age of Unreason

Located in the King’s Library of the British Museum, off the east side of the Great Court, you will find the Enlightenment Gallery. This gallery is unique, being the only permanent space that comes close to a genuine time machine. It takes visitors back to the age of the eighteenth-century collector and organises objects to show the broad historical concerns studied by the wealthy scholars of the day. Their vigorous interest, underpinned by a position of economic dominance, was partly directed towards developing a more detailed and systematic (scientific) understanding of the world and humankind from ancient times. It was a period also known as the ‘Age of Reason’.

However, the efforts of these private collectors meant that even the Royal Society’s collection came under increasing pressure due to competition with individual collectors for artefacts and specimens, generated often by its own members, most notably Sir Hans Sloane. It was therefore hugely significant that Sloane’s own extensive collection of over 71,000 objects, including flora and fauna, coins, prints and drawings, books, manuscripts and other curiosities, found its way to the world’s first national public museum. In one single act (enshrined by Parliament) a collection previously accessible only to the privileged few became available to visitors who, as today, came to London from around the world.

As a result of this transfer from private to public, the British Museum of the time spanned both the artificial and natural world (including a substantial library), and would have been an awe-inspiring (albeit sometimes confusing) experience for the new visitors, and for the new administrators difficult to organise and manage. Nevertheless the themes of scholarship previously available only to a privileged few became available for any visitor to cast their eye over and, over time, the British Museum would become a natural home for other previously private and inaccessible collections.

The more objects in the Museum’s collection, the more evidence available to scholars to support developing theories and improved interpretations of our history. In some ways the eighteenth-century preoccupation with collecting objects to solve the big questions of humanity equates to the modern-day call of Tim Berners-Lee to support the web of data. To the modern-day researcher, the more data available, the more comprehensive and valid the research and the better the quality of the conclusions. There are of course further comparisons with Sloane, the Royal Society and the British Museum in terms of levels of accessibility, competing interests and the ability to manage and make sense of ever increasing bodies of information.

However, the transfer of private collections into public museums and libraries went hand in hand with the development of different classifications that departed from some of the broader (or period) concerns of Sloane and his colleagues, evolving to match the academic and administrative agendas of more specialist museums. The eventual division of the Sloane collection is associated with the creation of the Natural History Museum and the British Library, but the result was not just a physical separation but the start of viewing objects and managing collections with different approaches, separate taxonomies and, as these new organisations established themselves, with very different organisational cultures. These new cultures created further internal divisions along departmental and administrative lines often resulting in more narrow agendas, perspectives and cataloguing habits.

Today an initiative to reconstruct the Sloane collection (‘Reconstructing Sloane’) is confronted with 250 years of separation. This means that the organisations involved need to embrace collaboration and attempt to bring together their accumulated knowledge, stored using different information schema and different terminologies, to answer a new set of questions prompted by digital unification. In this respect the Sloane project confronts the issue of how researchers will transfer the type of analysis currently reserved for smaller more narrowly focused and controlled datasets, to the issue of ‘big data’ typically dispersed and controlled by many different organisations.

The Internet provides the physical infrastructure to bring together different cultural heritage organisations, and the Semantic Web provides the protocols by which we may harmonise our data (or knowledge) and find new ‘enlightenment’. However, to establish true networks of knowledge will require new attitudes towards research, analysis, interaction and collaboration. The Sloane project may provide an interesting model for understanding the dynamics of collections working together to harness the potential of the Internet and to break the current collaborative stalemate created by a continued reliance on ‘Gutenberg’ publication models.

To manually sift through the different materials owned by the Sloane partners and attempt to uncover and understand their relationships would require more person-years than any normal cultural heritage project team could hope to allocate. The use of already digitised material and further digitisation efforts mean that computers (and computing) can be used to help perform the analysis. But the requirement to search across natural history, textual, art and antiquity datasets from different institutions and answer questions as if the collection had never been separated requires a new and radical approach. The different proprietary schemas need to be mapped to a common (and, the author would argue, Semantic Web) framework to create a digital version of the Enlightenment Gallery capable of supporting, not just one, but a multitude of different interpretations.
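The mapping step described above can be reduced to a toy illustration: two institutions with different proprietary field names declare how their fields map onto a shared framework, after which a single query spans both collections. Everything here is invented for the example (the records, the field names, and the target vocabulary); in a real project the targets would be CRM classes and properties.

```python
# Two hypothetical institutional records using different proprietary schemas.
museum_a = [{"ObjName": "Herbarium sheet", "Collector": "Hans Sloane"}]
museum_b = [{"title": "Album of drawings", "assoc_person": "Hans Sloane"}]

# Each institution declares its own mapping to the shared vocabulary.
mapping_a = {"ObjName": "has_title", "Collector": "associated_with"}
mapping_b = {"title": "has_title", "assoc_person": "associated_with"}

def harmonise(records, mapping):
    """Re-express records in the shared vocabulary, keeping all values."""
    return [{mapping[k]: v for k, v in rec.items() if k in mapping}
            for rec in records]

unified = harmonise(museum_a, mapping_a) + harmonise(museum_b, mapping_b)

# One question, phrased once in the common vocabulary, now spans both
# datasets -- as if the collection had never been separated.
sloane_items = [r["has_title"] for r in unified
                if r.get("associated_with") == "Hans Sloane"]
print(sloane_items)
```

The design choice worth noting is that each institution maintains only its own mapping; no institution needs to understand any other institution’s schema, which is what makes the approach scale beyond pairwise agreements.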

But what happens when organisations restrict access to knowledge and assets and insist on applying barriers for the sake of licensing revenue and off-setting publication costs? Putting aside the administrative overheads that these restrictions create, it means that semantic relationships, and the potential inferences derived from combining and harmonising knowledge from different organisations, will be frustrated. Paywalls applied at any stage of this process will simply limit its effectiveness and reduce digital projects to staged productions, perpetuating the charges of charlatanism and blandness thrown at many cultural heritage web sites.

The Semantic Web works by bringing together data so that relationships and connections can be discovered and explored rather than predetermined by individual museums’ views of the world. But the process of modelling and analysis of data across networks is fundamentally precluded by primitive commercial barriers and treasure house mentalities. For collaborations such as ‘Reconstructing Sloane’ the only feasible way forward is a reciprocal agreement to provide digital material to the project without access limitations and free from charges.

Why isn’t the principle of reciprocation (the cancelling out of charging between cultural organisations to reduce costs) applied universally and outside formal projects? We now have the strange situation in which anyone can reuse data and high resolution images online (and in real time) from the Yale Center for British Art (YCBA) without any correspondence with them whatsoever, using open access and open standard computer interfaces, and others, like the National Gallery in Washington, are set on a similar path. Yet if these organisations wish to create their own web resource, say on the work of Constable or Turner (artists of whose work the Center owns very important examples), they are charged. Specific reciprocation agreements between certain organisations for certain limited projects, however, do not solve the problem of semantic and knowledge networks.

When you take into account the overheads of managing image licensing (and many organisations still do not understand the full cost); the savings that a free exchange of assets would provide (the costs of purchasing assets are never set against licensing income); and the benefits created by friction free networks of knowledge; then one can only conclude that the major concern for museums must be the perception that by providing free access they will somehow miss out on a bonanza of income that might present itself sometime in the future.

In reality any income streams are more likely to be associated with innovative services (what you do with digital assets rather than the assets themselves) which require an engagement, financial investment, resource and a degree of risk that most, if not all, museums are unable to consistently sustain. In any event, services of sufficient interest to a large audience will typically require the raw assets of a number of different institutions – a prime reason why they have not materialised (see the Constable example above).

Nevertheless the cultural heritage sector is fearful that the commercial sector will make profits they have missed or have been unable to generate themselves over the last 20 years. But what would happen if, like Yale University, the whole sector provided complete open and free access? It may well attract interest and may result in services with business potential (in spite of the free availability of the assets used in those services). The assets may be used for merchandising, they may provide services that make better sense of the mass of information made available, and they may or may not be successful in creating a profitable business model. It would certainly allow many more open access Sloane-type projects at a fraction of the current cost and provide greater incentives for more organisations to contribute to larger networks of public knowledge.

For those who insist on finding additional income streams, wouldn’t it be better to let others (commercial and non-commercial) take some of the risks and to encourage innovation from third parties, from which we (in our aim to disseminate and educate) can only benefit? Shouldn’t the cultural heritage sector feel confident that successful models could easily be improved upon (if so desired) using those other assets that set the sector apart: our knowledge, expertise and reputations? Alternatively, we can reward the innovation of others, and potentially share in any benefits, by endorsing successful services that meet with our standards and approval.

In this new digital world museums have the opportunity to better use their knowledge, expertise and reputation to more fully and wholeheartedly engage with the cultural Internet if barriers to knowledge and content are lifted. They can still produce their own digital services (perhaps invigorated by a more vibrant digital economy), they can still attempt to generate income through their own services or through the endorsement of others work. But most importantly they can concentrate on their main reason for being and, extending the hopes of the private collectors of the eighteenth century, initiate a more inclusive, accessible and collaborative enlightenment towards a new digital age of reason.

Dominic Oldman

The British Museum, CIDOC CRM and the Shaping of Knowledge

At the British Museum we are fast approaching a new production version of our currently beta Semantic Endpoint. The production version will remove some of the current restrictions and provide a more robust environment to develop applications against. It will also come with much needed documentation detailing a new mapping to the CIDOC CRM (Conceptual Reference Model), prompted by feedback received from the current version and by requirements to support the ResearchSpace project.
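For developers, querying such an endpoint typically means sending a SPARQL query over HTTP. The following is a hedged sketch only: the endpoint URL below is a placeholder, and the request is merely constructed, not sent; a real client should consult the endpoint's own documentation and mapping notes. The property `P108i_was_produced_by` is used as a typical example of a CRM property name in RDF form.

```python
from urllib.parse import urlencode

# Placeholder address -- not the actual British Museum endpoint URL.
ENDPOINT = "https://example.org/sparql"

# A simple CRM-phrased question: which objects have a recorded
# production event?
query = """
PREFIX crm: <http://www.cidoc-crm.org/cidoc-crm/>
SELECT ?object ?production WHERE {
  ?object crm:P108i_was_produced_by ?production .
} LIMIT 10
"""

# SPARQL endpoints conventionally accept GET requests with a 'query'
# parameter; urlencode handles the escaping of the query text.
request_url = ENDPOINT + "?" + urlencode({"query": query, "format": "json"})
print(request_url[:60])
```

The shape of the query is the point: the application asks a question in terms of CRM concepts (here, a production event), not in terms of any one institution's database fields.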

The use of the CIDOC CRM itself has raised questions and criticisms, mostly from developers. This comes about for a variety of reasons; the lack of current CRM resources; a lack of experience of using it (an issue with any new method or approach); a lack of documentation about particular implementations; but also, particular to this type of publication, a lack of domain knowledge by those creating cultural heritage web applications. The CRM exposes a real issue in the production and publication of cultural heritage information about the extent to which domain experts are involved in digital publication and, as a result, its quality.

The debate about whether we should focus on providing data in a simple format for others to use in web pages and at hack days, against a richer and more ontological approach (requiring a deeper understanding of collection data), is one in which the former position is currently dominant. To support this there are some exceptional projects using simple schemas designed to achieve specific and collaborative objectives. However, many linked data points lack the quality to be more than basic information jukeboxes that, in turn, support applications with limited usefulness and shelf life. In short, the current cultural heritage linked data movement, concentrating on access (a fundamental objective), may have ignored some of the reasons for establishing networks of knowledge in the first place.

The British Museum’s source of object data has its stronger and weaker elements but it has descriptions, associations and taxonomies developed over the last 30 years of digitisation. In order to exploit this accumulated knowledge and provide support for a wide range of users, including humanist scholars, it needs to be described within a rich semantic framework. This is a first step to developing the new taxonomies needed to allow different relationships and interpretations of harmonised collections to be exposed. Semantic data harmonisation is not just about linking database records together but is about exploring and discovering (inferring) new knowledge.

The full power of the CRM comes when there is a sufficient mass of conforming data providing a coverage of topics such that the density of information and events generates a resource from which the inference of knowledge can occur. Research tool-kits built around such a collaboration of data would uncover new facts that could never be discovered using traditional methodologies. In this respect it is an ontology tailor-made for making intelligent sense of the mass of online cultural heritage data. Its adoption continues to grow, but it has also reached a ‘chicken and egg’ stage, needing the implementation of public applications to clearly demonstrate its unique properties and value to humanities research.
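The kind of inference meant here can be shown with a toy sketch: once data from different sources conforms to the same event-centric model, connections that no single source stated explicitly fall out mechanically. The statements and event names below are invented for the example.

```python
# Event-centric statements, as if harmonised from several institutions:
# each pair says a thing was present at an event.
present_at = [
    ("sloane_ms_4078", "sale_of_1753"),      # invented examples
    ("herbarium_vol_2", "sale_of_1753"),
    ("coin_hoard_17", "excavation_1932"),
]

def co_occurrences(statements):
    """Infer pairs of things linked through a shared event -- a connection
    that no single source dataset recorded as such."""
    by_event = {}
    for thing, event in statements:
        by_event.setdefault(event, []).append(thing)
    pairs = []
    for event, things in by_event.items():
        for i, a in enumerate(things):
            for b in things[i + 1:]:
                pairs.append((a, b, event))
    return pairs

# The manuscript and the herbarium volume are connected through the
# 1753 sale, although neither contributing record mentions the other.
print(co_occurrences(present_at))
```

The density argument in the text shows up directly in this sketch: with only one or two statements per event nothing can be inferred, but as coverage grows, the number of discoverable connections grows much faster than the data itself.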

By bringing data together in a meaningful way rather than just treating it as a technical process or act of systems integration we can start to deconstruct the years of separation and institutional classifications designed to support narrower curatorial and administrative aims. Regardless of the resources available to research projects, this historical limitation, and the lack of any cost effective digital solution, has made the problem of asking a broader range of questions a difficult challenge. But to ask the broader questions that may lead to more interesting, valuable and sustainable web applications, requires appropriate semantic infrastructures. The CRM provides a starting point.

The publication of BM data in the CRM format comes from a concern that many Semantic Web / Linked Data implementations will not provide adequate support for a next generation of collaborative, data-centric humanities projects. They may not support the types of tools necessary for examining, modelling and discovering relationships between knowledge owned by different organisations at a level currently limited to more controlled and localized datasets. Indeed, the proliferation of different uncoordinated linked data schemas may create a confusing and complex environment of mappings between data stores and thereby limit the overall effectiveness of semantic technology and produce outputs that don’t push digital publications much beyond those achieved using existing database technology.

The CRM is difficult not because of what it is (a distillation of existing and known cultural heritage concepts and relationships) but because it requires real cross-disciplinary collaboration to implement properly, and this type of collaboration is difficult. The aim of the British Museum Endpoint is to deliver a technical interface, but also to demystify the processes underlying the implementation of the CRM as well as the BM’s CRM mapping itself. By doing this the Endpoint should support a wide range of publication objectives for different audiences and a wide range of developers with varying experience and domain knowledge, and crucially fulfill the future needs of humanities scholars.

In particular the aim is to raise the bar on what can be achieved on the Internet and allow researchers to transfer data modelling techniques that are currently only serviced by specialist relational database models into the online world. These techniques will allow scholars, with access to CRM-aligned datasets, to make sense of and tackle ‘big data’ littered with many different classifications and taxonomies, and allow a broader, specialist and contextual re-examination of historical data and historical events.

Dominic Oldman