Barriers and solutions to assessing digital library reuse: preliminary findings

Genya Morgan O’Gara (VIVA – The Virtual Library of Virginia, George Mason University, Fairfax, Virginia, USA)
Liz Woolcott (Utah State University, Logan, Utah, USA)
Elizabeth Joan Kelly (Loyola University New Orleans, New Orleans, Louisiana, USA)
Caroline Muglia (University of Southern California, Los Angeles, California, USA)
Ayla Stein (University of Illinois at Urbana-Champaign, Urbana-Champaign, Illinois, USA)
Santi Thompson (University of Houston, Houston, Texas, USA)

Performance Measurement and Metrics

ISSN: 1467-8047

Article publication date: 4 October 2018

Issue publication date: 18 October 2018

Abstract

Purpose

The purpose of this paper is to highlight the initial top-level findings of a year-long comprehensive needs assessment, conducted with the digital library community, to reveal reuse assessment practices and requirements for digital assets held by cultural heritage and research organizations. The type of assessment examined contrasts with traditional library analytics: rather than focusing on access statistics, it examines how users utilize and transform unique materials from digital collections.

Design/methodology/approach

This paper takes a variety of investigative approaches to explore the current landscape, and future needs, of digital library reuse assessment. This includes the development and analysis of pre- and post-study surveys, in-person and virtual focus group sessions, a literature review, and the incorporation of community and advisory board feedback.

Findings

The digital library community is searching for ways to better understand how materials are reused and repurposed. This paper shares the initial quantitative and qualitative analysis and results of a community needs assessment conducted in 2017 and 2018 that illuminates the current and hoped-for landscape of digital library reuse assessment, its strengths, weaknesses and community applications.

Originality/value

In so far as the authors are aware, this is the first paper to examine with a broad lens the reuse assessment needs of the digital library community. The preliminary analysis and initial findings have not been previously published.

Citation

O’Gara, G.M., Woolcott, L., Kelly, E.J., Muglia, C., Stein, A. and Thompson, S. (2018), "Barriers and solutions to assessing digital library reuse: preliminary findings", Performance Measurement and Metrics, Vol. 19 No. 3, pp. 130-141. https://doi.org/10.1108/PMM-03-2018-0012

Publisher: Emerald Publishing Limited

Copyright © 2018, Genya Morgan O’Gara, Liz Woolcott, Elizabeth Joan Kelly, Caroline Muglia, Ayla Stein and Santi Thompson

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


Introduction and problem statement

Current assessment efforts that focus on the unique content found in digital libraries are not meeting the needs of the practitioners who serve them, or of the communities that use them[1]. Although numerous quantitative ways to view digital object use exist, from page views, to downloads, to newly developed multimedia standards for institutional repository use[2], these measurements do not reliably demonstrate the value of these materials in ways that resonate with the digital library community. Specifically, they are too often unfavorably, and perhaps unfairly, compared to very different types of library use, such as journal article downloads, chapter views or circulation statistics.

A key indicator of the impact of digital collections is content reuse, specifically how materials are utilized and repurposed. There have been ongoing efforts within the digital library community to demonstrate and measure reuse. These types of investigations have produced meaningful results, but efforts have been scattered among institutions and organization types. Additionally, in this relatively new arena, few recommended practices or gold standards have emerged. The lack of reuse metrics, combined with the lack of community assessment norms, has a two-fold effect: it makes it difficult for institutions to develop, using appropriate data, strong infrastructures and collections that are responsive to user needs; and it suppresses the ability of a digital library to demonstrate its value to stakeholders, including administrators, funding bodies and potential users.

An Institute of Museum and Library Services (IMLS) national leadership grant, “Developing a Framework for Measuring Reuse of Digital Objects” (Thompson et al., 2017), attempts to address the challenges faced in assessing content reuse through a comprehensive needs assessment of the digital library community. The eventual goal behind the grant is a multidimensional framework to support digital libraries in demonstrating their value and better advocating for the resources and platforms that will best serve their communities. In order to do this, a deeper understanding of how digital objects are used and repurposed is critical. This must be coupled with an understanding of what types of resources already exist, how reuse is or is not valued differently from access use, and what approaches and tools need to be developed. This paper shares the initial quantitative and qualitative analysis and results of the community needs assessment conducted in 2017 and 2018 that aims to illuminate digital library reuse strengths, weaknesses and community applications.

Literature review

In 2015, the white paper, “Surveying the Landscape: Use and Usability Assessment of Digital Libraries,” detailed both the efforts and the challenges in assessing content reuse (Chapman et al., 2015). Analyzing 26 articles published between 2010 and 2014, the authors focused on understanding current strengths and gaps in the research, with the aim of highlighting areas that could benefit from future study. This review found that there were promising assessment developments, including a growing body of research on reuse patterns and practices among content related to work in the humanities and arts. It also identified a series of deficiencies, including: difficulty in identifying who uses digital libraries (sometimes due to user privacy safeguards) and how research focus (e.g. general vs scholarly use) affects how and what is reused; a lack of research on methods to track online reuse through hyperlinking; and difficulties measuring the reuse of digital objects in virtual and analog environments. The white paper made several recommendations, including a call for additional studies to better understand diverse user groups, the connection between reuse and repository design, and when users can ethically and legally reuse digital library content. Most importantly for this study, the authors recommended “the development of a reuse assessment framework and an accompanying toolkit or best practices to help unify future studies and discussions of reuse in the digital library field.”

Since the release of “Surveying the Landscape,” more recent literature has addressed some of the identified research gaps, in particular, reuse assessment work done beyond humanities and arts user groups. One such area is data set reuse. Open access funding mandates, coupled with strong data management outreach efforts and a growing open science movement, appear to be facilitating reuse and its measurement in this arena. In the article “Discovery and Reuse of Open Datasets: An Exploratory Study,” the authors suggest that these efforts have produced digital data objects that are well documented, usable with different software, shared in open access repositories and assigned persistent identifiers, allowing reuse to be both tracked and assessed (Mannheimer et al., 2016). Further, projects such as “Always Already Computational” start to bridge the gap between largely science-focused research and humanities user groups by positioning digital collections themselves as data sets (Padilla et al., 2016).

Efforts within the digital library community to develop and broadly apply standards that demonstrate the impact of repositories abound. Although not specifically focused on reuse, two current projects should be noted. One, JISC’s Institutional Repository Usage Statistics (IRUS) UK[3], is working to aggregate and compare usage statistics across repositories using Counting Online Usage of NeTworked Electronic Resources (COUNTER)-compliant metrics[4]. Another, the Repository Analytics & Metrics Portal (RAMP)[5], is investigating the difficulties digital repositories face in tracking and evaluating use. Benchmarking at the aggregate level has been problematic for institutional repositories, and these efforts highlight the current drive for adaptable, transparent metrics.

Rights issues have also risen to the surface, particularly in research highlighting the relationship between copyright, permissions and reuse of digital materials. Clearer terms of use, the encouragement to embrace open access policies, and object attribution for digital content are critical to maximizing the potential for reuse (Terras, 2015). Unstandardized rights metadata can complicate users’ attempts to reuse content, such as that found in the Digital Public Library of America (DPLA), and this has accelerated efforts by such aggregators to standardize aggregated rights metadata (Frick, 2016).

Recent literature has also highlighted some of the approaches information professionals can use to track and measure reuse over the web. Several studies have documented the benefits and limitations of using reverse image lookup to trace digital image reuse (Kelly, 2015; Reilly and Thompson, 2017). Kelly has also evaluated both the reuse of digital collection content in Wikipedia entries through tracking citations (Kelly, 2017a) and the employment of Google Alerts to signal image reuse (Kelly, 2017b). Further research has tested the viability of using embedded metadata to identify reuse (Bushey, 2013; Thompson and Reilly, 2018).
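To make the embedded-metadata approach concrete, the following Python sketch (an illustration under assumed conventions, not a method taken from the cited studies) uses the Pillow library to check whether a copy of an image found elsewhere on the web still carries an institutional identifier in its EXIF fields; the ark:/ prefix and file name are assumptions.

```python
# Minimal sketch: look for an institutional identifier embedded in EXIF.
# Assumes the institution writes an ARK-style identifier into a text field
# such as ImageDescription before distributing the image.
from PIL import Image
from PIL.ExifTags import TAGS

def find_embedded_identifier(image_path, prefix="ark:/"):
    """Return (tag name, value) for any EXIF string containing the prefix."""
    exif = Image.open(image_path).getexif()
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)
        if isinstance(value, str) and prefix in value:
            return name, value
    return None

match = find_embedded_identifier("downloaded_copy.jpg")  # hypothetical file
if match:
    print(f"Possible reuse of a held object: {match[1]} (EXIF tag {match[0]})")
```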

Fewer articles have tackled the challenge of formulating analytical frameworks for assessing reuse. In “Beyond Clicks, Likes, and Downloads: Identifying Meaningful Impacts for Digitized Ethnographic Archives,” the authors devised a framework for “documenting, demonstrating, and assessing the impact of digitized ethnographic collections” (Punzalan et al., 2017). The researchers formulated six topical areas of potential impact, including: knowledge, professional discourse, attitudes, institutional capacity, policy and relationships. They note that these areas can assist how “institutions and communities articulate and assess major sorts of impact that are most relevant to institutional projects to digitize and share knowledge.” This work shows the potential for information professionals to re-define the current framework for measuring impact. The research detailed here extends these kinds of analyses by collecting and analyzing community feedback to provide important use cases and functional requirements that expand the profession’s perspective on assessment.

Methodology

The structure for the needs assessment is both multi-part and iterative. It includes data collection and analysis of pre- and post-community surveys, in-person and virtual focus groups, and regular incorporation of advisory board and community feedback. The culmination of this work will be the identification of shared tools and best practices for reuse assessment within the digital library and cultural heritage communities.

The research team began the needs assessment by administering a pre-survey to digital library practitioners; its purpose was to identify potential gaps in reuse assessment practices. The survey instrument was developed on the Qualtrics platform and included 19 questions aimed at gathering institutional demographic information, as well as statistics gathered for digital collection use, available infrastructure support, existing assessment barriers and what kinds of data would be useful for practitioners. The survey was distributed on 25 listservs related to research data professionals, cultural heritage organizations, and digital library and library assessment groups. The Digital Library Federation also advertised the survey on its social media channels. The full survey instrument can be found at https://osf.io/ptvh5/. Of the responses, 302 were kept for analysis: these came from respondents who had agreed to the institutional review board statement and answered at least one question.

The focus groups were structured after the pre-survey was completed. Their purpose was to gain qualitative insight into the use cases for assessment metrics, and to drive the investigations of the research team. Close attention was paid to gathering a broad perspective from academic, public and special focus institutions. Suggested participant criteria included: demonstrated experience building, preserving and/or assessing digital library collections; self-identified individuals from underrepresented groups; representatives from organizations with diverse and inclusive collections; and self-identified representatives from cultural heritage organizational types that might not be represented in the research team membership.

The focus group questions centered on identifying reuse assessment needs within different communities, tools and methods currently being applied, financial and institutional costs involved in reuse assessment, user privacy concerns, functionalities needed in assessment tools, and the costs of building and maintaining assessment instruments. Each focus group session was attended by a facilitator and a note-taker. Prior to the session, participants received an outline of topics to be covered, and sessions were recorded to ensure all themes were captured. First-round focus groups were held in October and December of 2017; participants came from academic libraries or archives (29 percent), public libraries (21 percent), museums (21 percent), consortia (21 percent) and special focus institutions (8 percent). Participants included practitioners (58 percent) and administrative decision makers (21 percent), as well as some in more removed roles at consortial levels (21 percent).

The second round of focus groups, held in February and March 2018, built on the themes found in the Exploratory Focus Groups. Participants in this round were typically from academic institutions (80 percent), but there was also representation from public libraries, museums and special focus institutions, further detailed in the findings. Participant makeup was almost evenly split between practitioners and decision makers.

Researchers applied a grounded theory approach to analyzing data generated from the focus groups (Charmaz, 1983). The team used Dedoose to code the notes generated during in-person and virtual focus group sessions.

The full needs assessment is structured so that each component builds upon the completion of the last. For example, upon the completion of the pre-survey, the research team worked with the grant advisory board[6] to refine the approach to analyzing the results, and re-examine how to best structure future surveys. Likewise, the results of the first focus groups shifted the next focus group topics to areas that had warranted more discussion, including the definition of reuse, sticky metadata and privacy concerns.

This paper, and the methodology detailed, centers on the initial findings of the year-long research project, including the results of the pre-survey, and preliminary analysis and noteworthy themes from two of the in-person focus groups and two of the virtual focus groups. Future analysis, research and subsequent steps are detailed below, but as with the approach thus far, future research will build upon these results. Next steps already informed by this approach include an upcoming post-survey, the results and analysis of an additional focus group, and the final recommendations for the functional requirements of a reuse toolkit.

Survey findings

Pre-survey

The first survey was sent in the Fall of 2017 and provided a critical snapshot of current assessment practices within the digital library community. In addition to the demographic data captured, questions explored whether and which use and reuse metrics were collected, and what the major pain points were for assessment work. The majority of survey respondents (56 percent) came from academic institutions. Other respondents represented public libraries (10 percent) and museums (8 percent), with the remaining respondents (26 percent) hailing from government libraries, historical societies, archives, community archives, data archives, consortia, for-profit organizations, or “other” institution types. In total, 30 percent of respondents reported serving traditionally underserved communities, with the largest segment of this group serving tribal institutions. While more intricate demographic data about institution and population size were collected, analysis was not possible due to a design flaw in the survey question.

The survey found that a majority (80 percent) of respondents were doing some manner of assessment, typically with Google Analytics. This was fairly consistent across institution types, with 61 to 100 percent of respondents in each type reporting that they collected use information. Respondents were gathering basic data points such as number of visitors, downloads or clicks. Respondents who were not engaged in any data gathering were typically just getting their digital collections online or lacked the staffing to dedicate toward assessment.

Tracking reuse was more difficult, with only 40 percent of respondents noting that they were gathering reuse data. Most often reuse was tracked by gathering social media metrics, or through citation analysis. When asked why they did not gather reuse data, 37.5 percent of respondents cited the lack of an accepted methodology for gathering and/or interpreting reuse data. In all, 16 percent of respondents also cited lack of staffing and time as a significant barrier. Academic institutions were the least likely institution type to measure reuse, with only 28 percent (n=40) of respondents reporting in the affirmative. Comparatively, respondents from other institutions reported much higher engagement with reuse assessment: 45 percent (n=14) of public libraries respondents, 52 percent (n=11) of museum respondents, 81 percent (n=13) of government library respondents, 62 percent (n=5) of historical society respondents, 60 percent (n=9) of archives respondents, 40 percent (n=2) of community archives or libraries respondents, 40 percent (n=2) of consortia respondents, 100 percent (n=2) of for-profit or corporate organizations respondents and 35 percent (n=6) of “other” institution respondents.

Ultimately, respondents wanted to do more assessment and felt that documented standards would be the best support for digital library assessment work. They reported needing technologies that functioned across platforms, were simple to implement and were reliable. Respondents wanted more training on data interpretation and communicating results. They also expressed concern about patron privacy and desired thoughtful consideration about the ethical implications of data collection.

The results of this pre-survey were documented in depth in the conference proceedings paper, “Measuring Reuse of Digital Objects: Preliminary Findings from the IMLS-funded project,” presented at the Joint Conference on Digital Libraries in June 2018 (Kelly et al., 2018).

Focus group findings

The following summaries are broken out by type of focus group (Exploratory or Technologies and Standards), and within those categories, by major topics. The project team conducted a total of three rounds of focus groups, with each round offering participants the opportunity to partake virtually or in person. At the time of publication, only two types of focus groups (Exploratory and Technologies and Standards) had been completed, and that analysis is summarized here. In total, the project team interacted with 38 people through these groups. Nearly half of all participants (47.4 percent) represented academic libraries. Special focus libraries, such as corporate and government libraries, made up nearly one-fifth of the participants (18.4 percent), with remaining participants representing museums (10.5 percent), public libraries (10.5 percent), consortia (7.9 percent) and academic archives (5.3 percent).

The project team conducted each focus group with two grant team members present: a facilitator and a note-taker. A set of questions was used to frame and guide the focus groups, and these were distributed to participants in advance of the conversation. Each session was recorded, and these recordings were used to supplement notes. Per the grant specification, audio recordings were destroyed after 48 hours. The qualitative analysis software, Dedoose, was used to code the focus group notes for further analysis.

Exploratory Focus Groups

This first set of focus groups concentrated on developing an in-depth understanding of what assessment metrics were and were not being collected by digital libraries. Participants analyzed the researchers’ definitions of use and reuse, critiquing the clarity of the descriptions and adding examples from their own practices (Woolcott et al., 2018). They pointed out the overly academic tone of the definition and examples, and shared ways to be more inclusive. One way the group attempted to distinguish use from reuse was to treat an interaction with a digital object that happened within a repository as use, and any interaction with a digital object outside of a repository as reuse. Participants noted that without context, use and subsequent impact could not be assessed, and that lack of usage could be indicative of a range of issues, such as discovery platform shortcomings, rather than object relevance.
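As a rough illustration only, this working distinction can be expressed as a simple rule over where an interaction takes place; the event URL structure and repository domain below are hypothetical, and real classification would be considerably messier.

```python
# Toy classifier for the focus groups' working definition: interactions
# within the repository count as use; interactions elsewhere count as reuse.
from urllib.parse import urlparse

REPOSITORY_HOSTS = {"digital.example.edu"}  # assumed repository domain

def classify_interaction(event_url: str) -> str:
    """Label an interaction with a digital object as 'use' or 'reuse'."""
    host = urlparse(event_url).netloc
    return "use" if host in REPOSITORY_HOSTS else "reuse"

print(classify_interaction("https://digital.example.edu/items/42"))   # use
print(classify_interaction("https://en.wikipedia.org/wiki/Example"))  # reuse
```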

Many participants noted that they had not considered the concept of reuse when gathering assessment metrics; instead they tended to focus solely on use. The majority reported doing some manner of assessment, and were most likely to use Google Analytics to gather data. The use of platform-supplied analytics was also reported (e.g. YouTube, Facebook or CONTENTdm). Additionally, participants reported collecting data on analog usage, including visitor logs, visitor surveys or requests for materials. A few reported that online usage was not collected at all.

Current collection and application of use data

Usage data were often represented as the number of visitors to a site, the time spent on a page, the number of times a page was visited or the number of object downloads. These kinds of data were used to inform UX decisions. For example, participants reported that page loading time was analyzed to find problems that might impact how users interacted with a repository. Occasionally usage data were reported to show impact for decision makers or funders. For analog collections, usage data were used to determine demand for digitization. Typically, if usage was analyzed, participants reported that they looked for aggregated data that showed trends. Many noted that they gathered use data but did not do much with it, often because analysis takes time, staffing, expertise and training.
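The trend-oriented aggregation participants described might, as a minimal pandas sketch with assumed column names, look something like the following.

```python
# Roll raw page-view and download events up to monthly, per-collection totals
# so staff can examine trends rather than individual hits. Column names
# (timestamp, collection, event) are assumptions about the log format.
import pandas as pd

log = pd.read_csv("usage_log.csv", parse_dates=["timestamp"])
monthly = (
    log.groupby([log["timestamp"].dt.to_period("M"), "collection", "event"])
       .size()
       .unstack("event", fill_value=0)  # columns become e.g. view, download
)
print(monthly.head())
```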

Current collection and application of reuse data

The collection of reuse data typically took the form of social media analytics such as likes or shares on Facebook, retweets on Twitter or likes on Instagram. One participant reported analyzing the number of times a specific object was re-pinned on Pinterest. Other data included Google Alerts or measuring citations on Wikipedia. Participants noted that in addition to these quantifications, qualitative information including stories of how collections impacted individuals was important to assessment.

Reuse data were used to show the reach of digital objects. Some participants reported using social media analytics to demonstrate platform efficiency (e.g. images on Instagram, documents on Facebook). Institutions also used social media metrics to make collection development and digitization decisions. Some noted that reuse data provide deeper context than use data because they have more potential for showing “who” is using something.

Use cases for assessment

The application of specific use case examples was explored. Common themes revolved around monetary considerations, specifically use cases for reporting impact to show good use of current or future funding, justifying existing expenses or personnel, and justifying continued membership in consortia that helped broaden outreach. Also discussed were ways metrics could be used to improve work, including better understanding who users are and how they interact with digital repositories. Assessment metrics could better inform which collections would be worth full processing at the descriptive level, or which collections might need specific treatment, such as transcription, to better serve a community. The focus groups found that assessment provided the ability to measure the effectiveness of outreach to specific communities. In some cases, digital repositories may use such metrics to tie work to the outcomes of parent institutions, such as support for online learning.

Concerns and negative implications of assessment

When asked about ways that use and reuse assessment could be controversial, participants noted that, while not directly related to assessment, digital content can be reused in ways that are inappropriate or hurtful. Participants suggested developing a common code of conduct for both online users and cultural heritage institutions. Closely related discussions included concerns about cultural appropriation of publicly posted content meant for specific audiences, and about unauthorized commercial use of content. Concerns about patron privacy, and about data collection policies that unintentionally informed third parties, weighed heavily; participants felt these concerns warranted deep consideration by the digital library community in the development of tools and standards.

What practitioners need

When asked about what a toolkit could provide the digital library community, participants identified a number of must-haves, including the need to understand statistics and assessment practices, training on tools that could help with data visualization, and methods for translating data points into impact statements that would be relatable to stakeholders. Certain technologies and standards were mentioned as imperative, such as unique identifiers, sticky metadata that stays with digital objects across platforms, and citation standards for digital objects. Participants noted that having objects on multiple platforms made it impossible to see larger trends. A dashboard that pulled metrics into a single location was identified as a way to make assessment more attainable for institutions with fewer resources.

The information gathered during the Exploratory Focus Groups and the pre-survey set the stage for the composition and questions of the Technology and Standards Focus Groups.

Technology and Standards Focus Groups

Building on the previous groups’ findings, the second focus group discussions began with a review of use and reuse definitions. Participants cited many of the same things that earlier focus groups had identified, and continued the discussion around the context of digital content use. Participants again noted that stories were as important as quantitative metrics, particularly as they created space for meaningful context. They stressed that understanding what was learned or gained from interaction with a digital object was critical to their work.

Metrics and standards

When discussing what metrics should be collected as part of an assessment framework, participant examples included social media metrics, clicks and downloads. Participants also discussed the impact of versioning on tracking reuse and use, particularly as related to pre-prints. Participants again brought up the need for embedded metadata that is consistent across platforms; this was of particular import for tracking object identifiers or relational metadata. They also noted that existing non-library databases already maintain such consistent metadata.
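One hedged sketch of what such “sticky” metadata could involve at the object level is embedding a persistent identifier in a standard EXIF field before distribution, so the identifier travels with copies of the file; the identifier, tag choice and Pillow-based approach below are illustrative assumptions, not a community standard.

```python
# Embed a persistent identifier into an image's EXIF ImageDescription field
# before distribution, so downstream copies can be recognized later.
from PIL import Image

ARK = "ark:/12345/xt56ghq"       # hypothetical persistent identifier
IMAGE_DESCRIPTION = 0x010E       # standard EXIF ImageDescription tag number

img = Image.open("master.jpg")   # hypothetical master file
exif = img.getexif()
exif[IMAGE_DESCRIPTION] = ARK    # make the identifier travel with the file
img.save("distribution_copy.jpg", exif=exif)
```

In practice many platforms strip embedded metadata on upload, which is part of why participants wanted more robust, cross-platform solutions.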

In developing a common standard or set of metrics, participants in all of the focus groups placed heavy emphasis on articulating a purpose for measurements specific to the outcomes and goals prioritized by a given organization. In the case of academic institutions, this could be learning outcomes for students, or demonstrations of research impact. For public libraries or museums, this could be community engagement. Participants noted that missions and priorities naturally differed, making a single master standard problematic. They recommended that a toolkit focus on a variety of methods that institutions could choose from, including instructions on how to use and set up reports in Google Analytics, best practices for interpreting data, examples of statistics and metrics, examples of qualitative and quantitative methodologies, and rubric templates that take into account institution demographics and discrete collection parameters.

Content management systems and cross-platform tools

The Technology and Standards Focus Groups discussed what kinds of system architecture would be needed to facilitate the collection of reuse data. Responses showed that flexibility and modularity were crucial. Content management systems or data collection systems should be able to collect use and reuse data on a granular level, including specific kinds of object, system or audience data, with the ability to turn the gathering of specific data points on or off at the collection level. The ability to develop reports from the data that would show trends across or within collections or repositories was identified as key, as was the ability to send alerts when digital collection material was getting more traffic than usual, along with the ability to identify its source. Additionally, system architecture would ideally interface with aggregators’ data (e.g. DPLA or Internet Archive). Two of the focus groups discussed the International Image Interoperability Framework (IIIF)[7] as a model to investigate options for measuring reuse, as it provides embeddable images that are not downloads or derivatives.
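Part of IIIF’s appeal for reuse measurement is that an embedded IIIF image is requested from the holding institution’s own image server each time it is rendered, so embeds can surface in server logs rather than vanishing as downloaded copies. The sketch below composes a request URL following the published IIIF Image API URI template; the server address and identifier are hypothetical.

```python
# Compose a IIIF Image API request URL from its published URI template:
# {server}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
def iiif_image_url(server, identifier, region="full", size="max",
                   rotation="0", quality="default", fmt="jpg"):
    return f"{server}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

print(iiif_image_url("https://iiif.example.edu/iiif/3", "photo0042"))
# https://iiif.example.edu/iiif/3/photo0042/full/max/0/default.jpg
```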

The concern about using third-party software for gathering and analyzing assessment data came up in all groups. There was anxiety that using Google Analytics fed information on user behaviors to Google without digital libraries determining what kinds of information they were willing to share. Participants wanted an analytics gathering tool that functions independently of third-party vendors and gives control over what is shared. In lieu of this type of software, participants noted that a toolkit demonstrating options for licensing language about data collection, usable in vendor negotiations, would be helpful. They also pointed out that a toolkit should explain how content management systems and analytics software gather and expose data.
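A minimal sketch of the kind of independent, institution-controlled collection participants described is a first-party endpoint that records only the fields the institution has chosen to keep and forwards nothing to outside vendors; Flask is used here for brevity, and the field names are assumptions.

```python
# First-party event logging: keep only chosen fields, share nothing upstream.
import json
import time

from flask import Flask, request

app = Flask(__name__)
KEPT_FIELDS = {"object_id", "event", "referrer"}  # deliberately no user identifiers

@app.route("/event", methods=["POST"])
def record_event():
    payload = request.get_json(force=True) or {}
    event = {k: payload.get(k) for k in KEPT_FIELDS}
    event["ts"] = int(time.time())
    with open("events.jsonl", "a") as f:  # append-only local log
        f.write(json.dumps(event) + "\n")
    return {"ok": True}

if __name__ == "__main__":
    app.run(port=8080)
```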

Collaborative data sets and ethical implications of assessment

Building on the suggestion raised in earlier sessions, a collaborative statistical database was envisioned as a repository for use and reuse data that could show larger impact trends for digital collections across institutions. Such a database would allow for benchmarking, and provide information about which collections could be digitized for the biggest impact, or which collections still need to be digitized in order to better serve underrepresented groups. Participants also mentioned using such a system to leverage data in work with vendors to build or modify existing systems to provide better user experiences and data tracking.
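As a rough sketch, such a shared repository might store only aggregated, contextualized counts rather than raw user data; the schema below is an assumption for illustration, not a proposed standard.

```python
# Hypothetical schema for a collaborative benchmarking database: aggregated
# monthly use/reuse counts plus the measurement method, for context.
import sqlite3

conn = sqlite3.connect("benchmark.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS reuse_stats (
    institution      TEXT NOT NULL,   -- reporting organization
    institution_type TEXT NOT NULL,   -- e.g. academic, public, museum
    collection       TEXT NOT NULL,
    period           TEXT NOT NULL,   -- e.g. '2018-03'
    use_count        INTEGER NOT NULL,
    reuse_count      INTEGER NOT NULL,
    method           TEXT             -- how reuse was measured
);
""")
conn.commit()
```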

Even with these benefits, participants noted that there were issues to consider. For instance, benchmarking could help institutions set and meet goals, but it could also set inappropriate expectations for use of collections. Participants warned that many institutions may not provide data for such a data set due to fear that comparisons might lead institutions to overstate impact, nullifying the data. Additionally, participants felt that a collaborative database would require careful construction to ensure that metrics were appropriately contextualized.

Participants felt an assessment toolkit should address ethical implications for both individual institutional assessment and collaborative assessment, including recommendations on what kinds of data to collect. Determining “good” or “bad” data collection was seen as problematic. Detailed individual user data collection helps to provide a clear picture of outcomes, but may also be used by third-party vendors in ways that are out of sync with the general values of the digital library community. A toolkit may provide more utility in simply explaining how data could potentially be used, both appropriately and inappropriately, thereby empowering institutions to make individual decisions about assessment practices.

Conclusions

Analysis of the survey and focus group data highlights several key themes in reuse assessment. The most prominent of these is that the digital library community is looking for field-wide approaches for assessing the impact of reuse, in order to better understand, and tell the story of, what has been learned or gained by a user when they repurpose a digital object. Although standardized assessment approaches are critical, it is equally important that they be modular and flexible enough that a range of institution types can apply them within the context of community-specific values and needs.

Although a majority of institutions do track limited types of use data, very few track reuse data, and fewer still do so consistently across collections. For those that are tracking use data, analysis and dissemination of this information is haphazard. It is clear from this research that the metrics developed must be purposeful and link directly to the outcomes prioritized by the organization. Too often practitioners felt they were collecting data for data’s sake, and adding to that data noise would be a mistake.

Embedded metadata that is consistent across platforms will be important to tracking object identifiers and relational metadata. Ideally, content management or data collection systems will collect use and reuse data on a granular level, and institutions will have the ability to turn the gathering of specific data points on or off. An ability to interface with aggregator data is crucial for smaller institutions, and specific approaches to data collection and dissemination should be detailed in licensing negotiations with vendors. The IIIF, with its community research focus, defined APIs and compatible software, can serve as a model for the reuse assessment toolkit. Similarly, RAMP and JISC’s IRUS can serve as models for large-scale aggregation and implementation of standards across institutional repositories.

This research also points to the need within the digital library community for benchmarking, and for the ability to show relationships and patterns among digital objects that are being missed by currently collected data. Although software and approaches have been identified, foundational assessment techniques and training opportunities may be more immediately useful to the community at large. This includes resources on everything from data visualization, to methods for translating data into impact statements, to building and normalizing assessment practices that are specific to cultural heritage, data and digital repositories, and digital library organizations.

Ethical considerations and community values should be at the forefront of all discussions, and a future toolkit should explore how data can be used, the good and the bad, to empower institutions to make decisions.

Finally, this research has made clear that the digital library community is hungry for ways to understand the users and uses of digital collections by applying a new type of assessment lens – a lens that examines how materials are reused and repurposed within various communities, rather than simply measuring volume of use.

Further study

Beyond the final analysis of the upcoming focus groups and post-survey, and the development of framework recommendations, significant themes have emerged. Any year-long project that engages a large group of practitioners and interacts with a hands-on advisory board will inspire additional areas of study. During the course of the analysis to date, the following areas have emerged that warrant deeper interrogation outside the parameters of the current research project.

Re-contextualization concerns were surfaced by the focus groups in two ways. First, ethical concerns arise from the tension between capturing detailed reuse metrics and the risk of third-party vendors exploiting that information. In fact, this theme generated so much conversation that the project team added related questions to subsequent focus group outlines. Digital librarians expressed an interest in utilizing specific personal information within a digital library to inform priorities. Yet the potential for third-party vendors (such as Google) to use the same information in a context misaligned with the goals of the institution poses a threat to the information and its custodians. As a community keenly sensitive to the misuse (or perhaps this too is a form of reuse) of personal information, how should standards be set? What is the digital library community’s role, as a group that regularly interfaces with for-profit, third-party entities, in shaping this conversation?

Second, focus groups revealed that digital librarians prepare for, but may not feel empowered to handle, instances where digital assets are being used in a controversial context. Re-contextualization of digital assets, especially by extremist, revisionist or hate groups, likely occurs, yet digital librarians rarely have a vehicle to consistently track usage of their images in any forum. In situations where digital assets are being used to promote a credo that stands in opposition to that of the original collection or home institution, digital librarians and other stakeholders may find themselves in a difficult position. While legal deeds and rights may be cited to prevent institutional liability, digital librarians must consider the implications for the relationships forged, partnerships cultivated and personal histories misused.

A third emergent theme is that platforms need to keep up with practitioner needs and community usage of digital assets. Perhaps the only sustainable approach for digital librarians to interact with third-party systems that meet immediate (and changing) demands is to be involved in the development of new platforms and tools. Do digital librarians and information management professionals leverage positions with vendors who supply tools that are integral to daily operations in all the ways they can? Every focus group raised the topic of Google Analytics as a platform through which usage was collected and analyzed, a portrait of its significance in institutions and communities. However, most focus group participants did not report using the same platform to track reuse. If reuse metrics are as important as initial use metrics, how can these types of tools assist?

Finally, the importance of shaping a strong narrative to demonstrate the value and impact of a digital library on relevant communities stands out as critical to the sustainability of digital libraries in a competitive environment. Stakeholders and funding sources in academic and cultural heritage settings are quite diverse, and one way this diversity impacts digital libraries is through the need to tell stories to all the entities that need to hear them. Beyond use and reuse metrics, how can we empower the community to better convey the impact of the digital library and related services to stakeholders?

Through engagement with the digital library community, the researchers initiated a series of conversations with experts around topics of reuse where few were occurring. As a result, the research team, along with the digital library community at large, still has much to do to build fluency in the importance of reuse for the future of digital libraries.

Notes

1. For the purposes of this paper “digital library community” refers to practitioners employed by cultural heritage organizations including academic digital libraries, special, government, public and museum digital special collections and archives, and institutional and subject repositories.

2. In the COUNTER 5 release an Item Master Report (IR) is used to assess multimedia usage from institutional repositories. An example of the report components can be seen here: http://bit.ly/2n0w34m

3. JISC’s IRUS-UK: www.jisc.ac.uk/irus

4. COUNTER is the non-profit organization that maintains a Code of Practice, or standards, for vendors and publishers to follow when providing usage statistics of online resources to libraries: www.projectcounter.org/

7. International Image Interoperability Framework (IIIF) website: http://iiif.io/

References

Bushey, J. (2013), “Trustworthy digital images and the cloud: early findings of the records in the cloud project”, in Gathegi, J.N., Tonta, Y., Kurbanoğlu, S., Al, U. and Taşkın, Z. (Eds), International Symposium on Information Management in a Changing World, Springer, Berlin and Heidelberg, pp. 43-53.

Chapman, J., DeRidder, J., Hurst, M., Kelly, E.J., Kyrillidou, M., Muglia, C., O’Gara, G., Stein, A., Thompson, S., Trent, R., Woolcott, L. and Zhang, T. (2015), “Surveying the landscape: use and usability assessment of digital libraries”, working paper, Digital Library Federation Assessment Interest Group, User Studies Working Group, December.

Charmaz, K. (1983), “The grounded theory method: an explication and interpretation”, in Emerson, R.M. (Ed.), Contemporary Field Research, Little, Brown, Boston, MA, pp. 109-126.

Frick, R. (2016), “The state of practice and use of digital collections: the Digital Public Library of America as a platform for research”, 2016 IEEE/ACM Joint Conference on Digital Libraries (JCDL), IEEE, June, pp. 3-3.

Kelly, E.J. (2015), “Reverse image lookup of a small academic library digital collection”, Codex: The Journal of the Louisiana Chapter of the ACRL, Vol. 3 No. 2, pp. 80-92.

Kelly, E.J. (2017a), “Use of Louisiana’s digital cultural heritage by Wikipedians”, Journal of Web Librarianship, Vol. 12 No. 2, pp. 1-22.

Kelly, E.J. (2017b), “Content analysis of Google Alerts for cultural heritage institutions”, Journal of Web Librarianship, Vol. 12 No. 1, pp. 28-45.

Kelly, E.J., Muglia, C., O’Gara, G., Stein, A., Thompson, S. and Woolcott, L. (2018), “Measuring Reuse of Digital Objects: Preliminary Findings from the IMLS-funded project”, Proceedings of the 18th ACM/IEEE-CS on Joint Conference on Digital Libraries, New York, NY, June.

Mannheimer, S., Sterman, L.B. and Borda, S. (2016), “Discovery and reuse of open datasets: an exploratory study”, Journal of eScience Librarianship, Vol. 5 No. 1, pp. 1-14.

Padilla, T., Allen, L., Varner, S., Potvin, S., Roke, E.R. and Frost, H. (2016), “IMLS National Leadership Grant (LG-73-16-0096-16) ‘Always already computational’”, Grant Award, available at: www.imls.gov/grants/awarded/LG-73-16-0096-16 (accessed March 29, 2018).

Punzalan, R.L., Marsh, D.E. and Cools, K. (2017), “Beyond clicks, likes, and downloads: identifying meaningful impacts for digitized ethnographic archives”, Archivaria, Vol. 84 No. 1, pp. 61-102.

Reilly, M. and Thompson, S. (2017), “Reverse image lookup: assessing digital library users and reuses”, Journal of Web Librarianship, Vol. 11 No. 1, pp. 56-68.

Terras, M. (2015), “Opening access to collections: the making and using of open digitised cultural content”, Online Information Review, Vol. 39 No. 5, pp. 733-752.

Thompson, S. and Reilly, M. (2018), “Embedded metadata patterns across web sharing environments”, International Journal of Digital Curation, Vol. 13 No. 1, pp. 1-12, available at: https://uh-ir.tdl.org/uh-ir/handle/10657/3072

Thompson, S., O’Gara, G., Kelly, E., Stein, A., Muglia, C. and Woolcott, L. (2017), “IMLS National Leadership Grant (LG-73-17-0002-17) ‘Developing a framework for measuring reuse of digital objects’”, Grant Award, available at: www.imls.gov/grants/awarded/lg-73-17-0002-17 (accessed March 29, 2018).

Woolcott, L., Kelly, E.J., Muglia, C., O’Gara, G., Stein, A. and Thompson, S. (2018), “Use vs reuse: assessing the value of our digital collections”, paper presented to Code4Lib, Washington, DC, February 15, available at: https://osf.io/cv3jt/ (accessed March 27, 2018).

Acknowledgements

The authors would like to acknowledge that this project was made possible in part by the Institute of Museum and Library Services National Forum Grant lg-73-17-0002-17. The views, findings, conclusions or recommendations expressed in this paper do not necessarily represent those of the Institute of Museum and Library Services. The authors would also like to acknowledge the Digital Library Federation for its support of the Assessment Interest Group, and for its aid in amplifying the work of this grant project and its deliverables. Finally, the authors are grateful to their home institutions for their encouragement of this research, and for their support in participating in this grant project.

Corresponding author

Genya Morgan O’Gara is the corresponding author and can be contacted at: gogara@gmu.edu

About the authors

Genya Morgan O’Gara serves as Deputy Director of VIVA, the academic library consortium of Virginia, where she has been working since 2015. Prior to this she worked as Director of Collections at James Madison University, and in the Collection Management and Special Collections Departments of North Carolina State University. She publishes and presents on emerging models of content development and assessment, with a focus on digital collections, scholarly publishing and collaborative collection development.

Liz Woolcott is Head of Cataloging and Metadata Services at Utah State University where she manages the MARC and non-MARC metadata creation of the University Libraries and is the co-founder of the Library Workflow Exchange. She publishes and presents on workflow and assessment strategies for library technical services, innovative collaboration models, the impact of organizational structures on library work, creating strategic partnerships for libraries and building consortial consensus for metadata standards.

Elizabeth Joan Kelly is Digital Programs Coordinator at Loyola University New Orleans where she manages digitization activities for Special Collections and Archives and is also responsible for collecting, maintaining and assessing usage data for the library’s digitized collections. Kelly publishes and presents on archives, digital library assessment and library pedagogy, and co-founded the DLF Digital Library Pedagogy group.

Caroline Muglia is Co-Associate Dean of Collections; Head, Resource Sharing and Collection Assessment Librarian at the University of Southern California (USC) where she has worked since 2015. Prior to her role at USC, Caroline worked as a Manuscript and Digital Archivist at the Library of Congress, and later as a Data Librarian for an educational technology firm in Washington, DC. She serves as Adjunct Professor in the USC Marshall Business School’s Masters in Management in Library and Information Science (MMLIS) Program where she teaches a course in Collection Assessment, Evaluation and Analysis. Caroline was recently selected as a cohort member of the 2018-2019 ARL Leaders Fellowship.

Ayla Stein is the Metadata Librarian at the University of Illinois at Urbana-Champaign (UIUC). She supports the metadata needs for scholarly communication, data curation and preservation in the Library. She has published and presented on digital repository evaluation, metadata development for data repositories and digital library system migration. Her research interests include digital repositories; metadata and linked data; and the place of metadata in critical librarianship.

Santi Thompson is Head of Digital Research Services at the University of Houston Libraries. He develops policies and workflows for the digital components of scholarly communications, including digital research support and digital repositories. Santi publishes on the assessment of digital repository metadata, software and content reuse. He also currently serves as Principal Investigator for the IMLS-funded “Developing a Framework for Measuring Reuse of Digital Objects” grant project and Co-principal Investigator for the IMLS-funded “Bridge2Hyku Toolkit: Developing Migration Strategies for Hyku.” He earned the MA degree in Public History and MLIS from the University of South Carolina.