Metadata, Sense-Making, and the Changing Role of the Library: A Report on the CIC Metadata Conference


Library Hi Tech News

ISSN: 0741-9058

Article publication date: 1 May 1999


Citation

Bregman, A. and Sandore, B. (1999), "Metadata, Sense-Making, and the Changing Role of the Library: A Report on the CIC metadata Conference", Library Hi Tech News, Vol. 16 No. 5. https://doi.org/10.1108/lhtn.1999.23916eac.001

Publisher

Emerald Group Publishing Limited

Copyright © 1999, MCB UP Limited


Metadata, Sense-Making, and the Changing Role of the Library: A Report on the CIC Metadata Conference

Alvan Bregman and Beth Sandore

Introduction

The Committee on Institutional Cooperation (CIC) Metadata Conference, which was held 17-19 November 1998 at the Beckman Institute, University of Illinois at Urbana-Champaign (UIUC), was conceived to present a survey of the latest developments in a field of increasing importance to member institutions, especially as they affected technical services operations in libraries. We proposed the conference to the CIC Technical Services Directors' Group, following Sandore's attendance at the May 1998 LITA/ALCTS Metadata Institute held in Washington, DC. Barbara McFadden Allen, Director for the CIC Center for Library Initiatives, spearheaded the effort within the CIC, secured widespread support for the conference among the CIC library directors, and worked closely with Sandore and Bregman in the formation of the proposal and the subsequent conference planning.

As the conference planners discussed the purpose and goals of the conference, a number of points became clear to us about the evolving nature of electronic information and the role of libraries. First, the conceptual parameters of the library catalog are expanding exponentially. There now exists an integratable world of diverse information that libraries seek to bring together in order to serve our users better. What we describe as information has broadened considerably in scope, content, and format, while the process of creating and organizing metadata involves significant changes in the ways in which we provide access to this information. Where libraries determine it is important to integrate diverse forms of information, it is critical that we possess the ability to build metadata structures and effective schemes for mapping from one format to another. We need to agree on common goals in creating and managing metadata, and work collaboratively to create tools that will be useful to these ends. Standards currently under development will enable us to define and create this new information environment. The conference planners felt that CIC librarians could benefit from a shared understanding of metadata resource development, of the interrelated nature of their colleagues' seemingly separate digital library efforts, and of the different metadata formats that support description of electronic and print resources in diverse disciplines.

With these points in mind, we tried to design a conference that would introduce CIC librarians to metadata issues and terms. We also sought to provide a forum for the exchange of information by those already involved in metadata projects, which could be showcased at the conference. While most of the presentations were given in plenary session, an afternoon of concurrent sessions was held to allow participants to get a closer, "hands-on" feel for the details of either the Dublin Core or Text Encoding Initiative (TEI) metadata formats. The conference Web site was created by Karen Singer at CIC. Mark Jacobs of the University of Illinois Library put together a portal page of useful metadata links at http://www.cic.uiuc.edu/cli/metadatalinks.htm. The conference was fully subscribed, with about 120 registrants representing all the CIC libraries.

Students from the UIUC Graduate School of Library and Information Science provided assistance to speakers with audio-visual equipment and transportation. They also shared the task of keeping notes on the talks. This helped us greatly in preparing this report, and we would like to recognize May Chang, Alex Dunkel, David Hamilton, Ann Hanlon, and Qin He for their contributions. We also wish to express our thanks to Elaine Wolff and the staff of the University of Illinois Office of Conferences and Institutes for their excellent support from the early planning stages through the final conference evaluation.

In this conference report, we have tried to present a clear précis of each presentation; all of the presentations were of exceptionally high quality. We take responsibility for any shortcomings in the reporting. Special presentations and poster sessions are described in Appendixes 1 and 2, respectively.

Presentations

Sense-Making in the Network Environment: Beyond Access in Digital Libraries

Wendy Pradt Lougee, of the University of Michigan, delivered the opening keynote address of the conference. She introduced the concept of "sense-making", the task of bringing coherence and intellectual order to a complex and inchoate "information landscape". This landscape is buffeted by a number of winds of change affecting the library, information technology, education, publishing and other communities that inhabit this environment. Digital libraries are seen as attempts to "make sense" of this situation.

Lougee then presented a memorable analogy between the growth of such institutions and the stages of human development. In its infancy, innovation takes the form of tentative projects. If the field develops, it will go through stages of adolescence and young adulthood before reaching maturity. In these intermediate stages, discrete projects will give way to collaborative groupings that use common standards. Finally, innovation will be accepted and institutionalized: it will be mature. Most metadata systems and digital library initiatives are in relatively early stages of their life cycles.

Lougee discussed three terms as they relate to the new information landscape: heterogeneity, granularity and interoperability. Attempts to create a heterogeneous solution are not fated to succeed, because the environment is too complex. Hierarchical structures that allow for the expression of greater "granularity" will provide an intermediate solution. But finally, interoperability, a term that would crop up often at the conference, will be required, so that issues such as policies, rules of behavior and economics can be included in metadata structures along with discovery and description techniques.

Metadata from a Bird's Eye View

The talk by Robin Wendler of Harvard University built upon Lougee's by drawing out the distinctions to be made among types of metadata. It is not enough to say that metadata is simply "data about data" and to define it as an alternate (and inferior) kind of cataloging. Depending on context, metadata may itself have independent content, such as information used to identify, locate, manage, access and deliver an electronic resource. With this in mind, metadata may also be defined as machine-readable information about any resource, whether or not the resource is online.

The "bird's-eye" perspective detects a shift away from libraries as society's focal point for the production of information for resource discovery purposes. Other institutions and communities are now pursuing similar ends, and a new infrastructure is required to control and ensure access to digital information created in a variety of places. Moreover, the kinds of metadata being created are proliferating. Descriptive metadata are now found alongside administrative and structural metadata. The open question for librarians becomes, "What functions other than local access and delivery do we want to support through the creation of metadata?". Wendler ended by discussing the need to bring all stakeholders together in the design of the new metadata environment.

Concurrent Track No.1: Dublin Core

The Dublin Core Metadata Initiative: A Semantic Framework for Resource Discovery on the Web

Stuart Weibel, Senior Research Scientist at OCLC's Office of Research, presented a long and lively session introducing Dublin Core (DC) in the contexts of its development, structural features and use. Once one enters the "Internet Commons", one enters a world of varying semantics, structures and syntaxes. The DC Metadata Initiative is an attempt to provide standard conventions for the interchange of information about resources in this world. DC itself consists of a 15-element set of metadata categories, some of which are applicable for the description of the content of a resource (e.g. title, subject), some of which pertain to intellectual property information (e.g. creator, publisher), and some of which reflect "instantiation" of a resource (e.g. date, format).
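
By way of illustration (ours, not Weibel's), the short Python sketch below lists the 15 elements under one common reading of that three-part division and emits a hypothetical record as HTML meta tags, an early convention for embedding Dublin Core in Web pages; the record values are invented.

    # The 15 Dublin Core elements, grouped roughly along the lines Weibel
    # described: content, intellectual property, and instantiation.
    DUBLIN_CORE = {
        "content": ["Title", "Subject", "Description", "Type",
                    "Source", "Relation", "Coverage"],
        "intellectual_property": ["Creator", "Publisher", "Contributor", "Rights"],
        "instantiation": ["Date", "Format", "Identifier", "Language"],
    }

    # A hypothetical record for a Web resource, using simple element/value pairs.
    record = {
        "Title": "Metadata Links",
        "Creator": "Example Library",
        "Date": "1998-11",
        "Format": "text/html",
        "Identifier": "http://www.example.edu/metadata/links.html",
    }

    # Emit the record as HTML <meta> tags.
    for element, value in record.items():
        print(f'<meta name="DC.{element}" content="{value}">')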

Dublin Core and other metadata formats are themselves being placed within a modular infrastructure defined by the Warwick Framework, which supports the specification, collection, encoding and exchange of different packages of metadata. Weibel described the importance of modularization to allow for the distributed management of much more than simple description of a resource. The implications, as we learned from others at this conference, were that online public access catalogs can no longer be considered the only source of discovery for information and that libraries must begin making some accommodation to this fact.

Weibel also discussed at length several issues regarding the content to be carried by metadata structures. The DC element set needs to serve a variety of communities that use varying controlled vocabularies and taxonomies. To do this, the elements are being made extensible by the use of qualifiers. (This has to be done without promoting "element creep", the proliferation of elements.) While some trade-offs must be made, interoperability has been set as the highest priority. Still, the battle will rage between those who desire simplicity and those who desire precision.

Finally, Weibel gave an overview of RDF, the Resource Description Framework, a set of conventions developed by the World Wide Web Consortium "to support interoperability among applications that exchange metadata". Complex relationships were described and discussed with the help of clear diagrams. Those who attended this session came away with a better understanding of the latest activities in metadata development.

Concurrent Track No. 2: Text Encoding Initiative

An Introduction to the Text Encoding Initiative: Structure, Content, and Its Relationship to Other Metadata Initiatives

William Fietzer, of University of Minnesota, presented a comprehensive overview of the structure, content, and the uses of the Text Encoding Initiative (TEI) header. By way of introduction, he described how the growth of Internet access and the need for structured access to similar types of electronic documents had strongly encouraged the development of full-text markup languages. SGML (Standard Generalized Markup Language) is one of the most powerful of the existing markup languages because it enables the encoding of both format and content information within a text document. TEI is one of the best-known and most fully developed of the SGML subsets used for the description and markup of full text in the humanities. The TEI header, Fietzer explained, is a critical component in the preparation of the text that is marked up in SGML because it contains information about the print source of the electronic text and the type of encoding that has been performed on the text, and because it provides access points such as keywords, subjects, and dates that can be used in the information discovery and retrieval process. Furthermore, MARC records can be derived from the TEI header.

Fietzer introduced the group to the structure of TEI headers, as well as to their potential content. He described how TEI can be modified to fit the needs of specific user groups. Best practice models suggest that TEI headers should be structured as much as possible to produce the most richly indexed and most easily retrievable document. Information about provenance and creation documentation can also be recorded in the TEI header, as well as information about the encoder. Not only does this help to authenticate the document, but it gives the user helpful information about its history. A glossary of terms and a bibliography (print and electronic resources) accompanied the session.
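
As a rough illustration of the header structure described above (ours, not Fietzer's), the Python sketch below assembles a minimal TEI header string. The element names follow the TEI Guidelines, but the helper function and its content are invented.

    def minimal_tei_header(title, author, source, keywords):
        """Assemble a bare-bones TEI header: file description, encoding
        description, and keyword access points."""
        terms = "".join(f"<term>{k}</term>" for k in keywords)
        return (
            "<teiHeader>"
            "<fileDesc>"
            f"<titleStmt><title>{title}</title><author>{author}</author></titleStmt>"
            "<publicationStmt><p>Electronic edition prepared by the library.</p></publicationStmt>"
            f"<sourceDesc><p>Transcribed from: {source}</p></sourceDesc>"
            "</fileDesc>"
            "<encodingDesc><p>Basic structural markup only.</p></encodingDesc>"
            f"<profileDesc><textClass><keywords>{terms}</keywords></textClass></profileDesc>"
            "</teiHeader>"
        )

    print(minimal_tei_header(
        "A Hypothetical Novel", "Anonymous",
        "London: Example Press, 1851",
        ["fiction", "19th century"]))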

TEI and Header Processing and Best Practices

In the second TEI presentation, Judy Ahronheim and Tom Champagne, of the University of Michigan, reviewed the development of best practices for creating and processing TEI headers. The University of Michigan has been operating a TEI Text Service since 1988, and therefore has had an opportunity to experiment with and refine best practices for TEI header creation. The presenters illustrated the University of Michigan's TEI practices, and described their efforts to develop a best practice handbook for TEI headers in collaboration with other groups, such as the University of Virginia Electronic Text Center.

At Michigan, an automated routine that provides the first layer of SGML is applied to an electronic full-text document. Next, a careful analysis of content is done to determine what chunks of information the user will want to retrieve from the document. This step can be labor intensive and works best when applied to batches of the same kind of text. The text is then parsed, normalized and validated to match the conditions of the DTD (Document Type Definition). Basic header information is then entered and sent to a cataloger, and a MARC record is created for the electronic document. The MARC record number is then re-entered into the TEI header.
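
The sequence just described can be pictured as a small pipeline. The Python stubs below are only our schematic of that flow (automated markup, validation against the DTD, header entry, MARC record creation, and writing the record number back into the header); the function names and data are illustrative, not Michigan's actual tools.

    def auto_markup(plain_text):
        """First, automated layer of SGML tagging."""
        return f"<text><body><p>{plain_text}</p></body></text>"

    def parses_against_dtd(sgml_doc):
        """Stand-in for parsing, normalizing, and validating against the DTD."""
        return sgml_doc.startswith("<text>") and sgml_doc.endswith("</text>")

    def process_document(plain_text, create_marc_record):
        doc = auto_markup(plain_text)
        if not parses_against_dtd(doc):
            raise ValueError("document does not validate against the DTD")
        header = {"title": "Untitled", "encoding": "basic structural markup"}
        marc_number = create_marc_record(header)   # cataloger creates the MARC record
        header["marc_id"] = marc_number            # record number re-entered in the header
        return doc, header

    # Usage with a dummy cataloging step that returns an invented record number.
    doc, header = process_document("Sample text.", lambda hdr: "ocm00000000")
    print(header)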

Ahronheim and Champagne reviewed the several commercial SGML editors that are currently available, but emphasized that the needs of particular institutions must be taken into account when choosing SGML editing software. The University of Michigan uses a combination of applications, including Emacs, a basic ASCII text editor used for initial markup, and Author/Editor, a WYSIWYG editing program used for more complex tagging and validation.

The session included examples of best practices, following the working group recommendations that emanated from the June 1998 Library of Congress Workshop on TEI and XML in Digital Libraries. The Michigan Humanities Best Practices handbook is available on the Web at: http://www.lib.umich.edu/libhome/ocu/teiguide.html.

Instructional Management Systems

Instructional Management Systems (IMS) is a national learning infrastructure project aimed at building an Internet architecture for describing instructional resources by managing this area's evolving metadata systems. There are more than 30 investing members in the project, including the CIC, individual academic institutions, and other groups from the educational community. Because of its concentration on learning resources, IMS has a narrower focus than Dublin Core, a focus that Tom Wason, of the IMS Project, feels allows a balance between control and change.

Wason presented a technical survey of the IMS project, starting with the general IMS architecture, which consists of four main components: the user's browser, a metadata tool, a metadata schema and a metadata repository. The metadata schema offers tight controls because of its structure and simple syntax. Schemas are built using logical structures called elements, which are defined in a dictionary. To allow greater flexibility, the dictionary focuses on defining ideas rather than concrete terms. Some of the important schemas were explained in detail, and the structure of the IMS repository and schema repository were also introduced. More information about IMS can be found at its Web site, http://www.imsproject.org/metadata/.
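
The relationship between the element dictionary and a schema built from it can be suggested with a small Python sketch of our own; the element names and definitions here are invented for illustration and are not the IMS dictionary itself.

    # Elements are defined once, in a dictionary of ideas rather than concrete terms.
    ELEMENT_DICTIONARY = {
        "Title": "The name given to the learning resource.",
        "Description": "A textual account of what the resource contains.",
        "LearningLevel": "The audience level the resource is intended for.",
    }

    def build_schema(element_names):
        """Assemble a schema from elements already defined in the dictionary."""
        undefined = [name for name in element_names if name not in ELEMENT_DICTIONARY]
        if undefined:
            raise ValueError(f"elements not in the dictionary: {undefined}")
        return {name: ELEMENT_DICTIONARY[name] for name in element_names}

    course_schema = build_schema(["Title", "Description", "LearningLevel"])
    for name, definition in course_schema.items():
        print(f"{name}: {definition}")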

W3C and RDF: Resource Description Framework

The second day of the conference began with another lively presentation by Eric Miller, a member of OCLC's Office of Research. Miller described how the W3C, the World Wide Web Consortium, aims to lead the World Wide Web to its full potential by developing common protocols that promote its evolution and ensure interoperability among Web systems. To that end, the RDF (Resource Description Framework) is designed to provide an infrastructure to support metadata across many Web-based activities.

Currently, metadata are transmitted across the Web in three ways: embedded in other data, associated through HTTP header streams, or delivered by trusted third parties. The interoperability of metadata systems requires conventions on semantics, structures and syntax; Dublin Core, RDF and XML are proposed to address these requirements at different levels. RDF is the result of a number of metadata communities bringing together their needs and designing a robust and flexible architecture for supporting metadata on the Internet and the WWW. Example applications of RDF include resource description, site maps, content rating, electronic commerce, collaboration and privacy protection. The components of RDF include model, syntax, schema, and query. The RDF model links a resource to a value (or to another resource) based on properties. RDF syntax is similar to that of SGML. The RDF schema reuses vocabularies to enable communities to share machine-readable tokens as well as local definitions. Rules are used to specify relationships among vocabulary elements.
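
The model Miller outlined can be reduced to a minimal sketch of our own: each RDF statement links a resource to a value, or to another resource, through a named property. The triples below borrow Dublin Core property names by way of example; the resources themselves are hypothetical.

    # (resource, property, value) statements; a value may itself be a resource.
    statements = [
        ("http://example.edu/report", "dc:title", "Conference Report"),
        ("http://example.edu/report", "dc:creator", "http://example.edu/staff/42"),
        ("http://example.edu/staff/42", "dc:description", "Metadata librarian"),
    ]

    def describe(resource, triples):
        """Collect every property/value pair asserted about one resource."""
        return [(prop, value) for (subj, prop, value) in triples if subj == resource]

    print(describe("http://example.edu/report", statements))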

Much more time could have been allotted to Miller and his topic, but participants nevertheless got a good sense of the more complex issues underlying metadata creation and use.

Perspectives on Cataloging and Metadata Creation for the Digital Library: Challenges and Working Models

This session focused on the practical implications of developments in metadata with respect to library operations, and especially cataloging. Since cataloging records are essentially metadata constructs, the issue breaks down into whether MARC-format metadata maintained in dedicated databases are to be superseded by other metadata formats in other computer-based architectures. Carol Mandel, from Columbia University, outlined the many advantages of the current model of library cataloging, as well as the substantial challenges being mounted to this model. Some of these challenges include the "rapidly increasing array and complexity of new forms of digital resources"; the need to manage access to components of packaged electronic resources, and to remote and locally created digital resources; and the need to provide both multiple access paths to metadata and multiple presentations of metadata. Unfortunately, MARC does not deal adequately with several key aspects of the new metadata environment and library staff often do not have the training to work with new forms of metadata or technology. The traditional view of the OPAC as the center of the library universe also contrasts with the increased reality of a decentralized information-providing environment.

Mandel then described Columbia University's "Master Metadata File Project", which mediates among numerous forms of input and output. Examples of input include MARC records and non-MARC metadata, and various kinds of image records; examples of output include HTML menus and pages, MARC- and XML-based objects. Finally, Mandel considered the implications that new metadata environments will have on all aspects of the library, from the organization of services and the design of systems, to the training of staff and professional librarians.
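
The mediating role of such a file can be suggested with a deliberately simplified sketch of our own: heterogeneous inputs are normalized into one internal form, which can then be rendered in several output formats. The field names and the HTML rendering below are illustrative only and do not reflect Columbia's actual design.

    def from_marc(marc_fields):
        """Normalize a MARC-like record (tag -> value) to the internal form."""
        return {"title": marc_fields.get("245"), "creator": marc_fields.get("100")}

    def from_image_record(image_record):
        """Normalize a non-MARC image record to the same internal form."""
        return {"title": image_record.get("caption"),
                "creator": image_record.get("photographer")}

    def to_html(record):
        """One of several possible outputs: an HTML list item."""
        return f"<li><em>{record['title']}</em> / {record['creator']}</li>"

    master_file = [
        from_marc({"245": "Annual Report", "100": "Example University"}),
        from_image_record({"caption": "Campus in winter", "photographer": "A. Photographer"}),
    ]
    print("\n".join(to_html(record) for record in master_file))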

The presentation by Lynn Marko, from University of Michigan, began with an examination of the recent report of the ALCTS CC:DA Task Force on Metadata and the Cataloging Rules (June 1998), which reached a number of very cautious conclusions. The report confirmed the notion of the traditional catalog as the primary source of information resources within libraries. External information resources were to be molded to mirror standards for library catalogs. Metadata was not to be seen as an alternative to traditional cataloging standards.

Marko offered a different view of the issues. To her, changes in the digital and online environment should not be seen as threats to an old order, but as opportunities. Reference librarians have learned that they are not the only service providers for information discovery, an activity which has become democratized. They have reacted by becoming experts in gaining access to resources outside their institutions and their institutions' catalogs. Similarly, catalogers are urged to reinvent their profession by becoming expert in non-MARC metadata systems. Catalogers should realize that descriptive metadata resides alongside administrative and structural metadata in importance, and that rights management and information delivery issues are becoming increasingly important aspects of object records. Once again, this expansion of the "professional perspective" is an exciting opportunity to be grasped, not something threatening to be shunned.

Challenges to Building Visual Repositories

John Weise, from University of Michigan, and Beth Sandore, from University of Illinois, identified the unique aspects of working with metadata for visual image repositories, presented observations about the state of metadata format development and the challenges of multidisciplinary image repository creation, reviewed several models for building repositories, and provided examples of works in progress at both the University of Illinois and the University of Michigan.

Numerous types of image collections exist across disciplines and institutions. For example, in the academic setting, image collections can be found in many departments including art and art history, archaeology, natural history, medicine, and architecture. Further, there are several types of cultural heritage institutions, including museums, libraries, special collections, and archives, that collect and provide access to visual resources. Physical access to these types of collections is often restricted by location, handling, rights, and simply limited knowledge about what is available and where. The creation of multidisciplinary, online repositories can provide better access and knowledge of visual resources collections to many more potential users than is currently the case.

Online access requires rich descriptive, structural, and administrative resources to encompass the rights, structure, and content of electronic visual resources. Often the type of information actually required to view, manipulate, and use the content of an image (and other media as well) is more complex than that for texts. Sandore and Weise introduced the concept of the "managed image" (an image plus its descriptive, administrative, and structural text), a term coined by Besser and Yamashita in their suite of 1998 studies of the Museum Educational Site Licensing Project. The managed image is the ideal representation of a digital image because the information necessary to identify, view, and understand the context of the image accompanies it. However, they cautioned that the degree to which an image can be "managed" depends largely on the institutional resources available for indexing or cataloging, as well as on the extent of local practice. The presenters also noted that, due to the multidisciplinary nature of image use, there is a serious lack of consistency in metadata production related to images, which consequently affects the user's ability to retrieve related information.

Weise and Sandore described two models for mapping image metadata from one format to another (a short illustrative sketch follows the list):

  1. physical mapping, to migrate or convert information from an obsolete format to a new system; and

  2. virtual mapping, to provide federated or unified access across diverse collections based on the identification of common data elements.
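
By way of illustration (ours, not the presenters'), the toy Python sketch below contrasts the two models under invented field names and collections: physical mapping converts stored records once, while virtual mapping leaves each collection in its native format and maps a common element to local field names only at query time.

    def physical_mapping(old_records):
        """Migrate records out of an obsolete format into the new system's fields."""
        return [{"title": rec["ttl"], "date": rec["yr"]} for rec in old_records]

    def virtual_mapping(common_element, value, collections):
        """Search diverse collections through a shared element -> local field map."""
        hits = []
        for collection in collections:
            local_field = collection["field_map"][common_element]
            hits += [rec for rec in collection["records"] if rec.get(local_field) == value]
        return hits

    # Physical: a one-time conversion of legacy records.
    print(physical_mapping([{"ttl": "Prairie View", "yr": "1901"}]))

    # Virtual: federated access across two collections with different local fields.
    collections = [
        {"field_map": {"creator": "photographer"},
         "records": [{"photographer": "A. Adams", "caption": "Canyon"}]},
        {"field_map": {"creator": "artist"},
         "records": [{"artist": "A. Adams", "title": "Print"}]},
    ]
    print(virtual_mapping("creator", "A. Adams", collections))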

Examples from the Museum Educational Site Licensing (MESL) Project, the University of Michigan Digital Library Production Service (http://images.umdl.umich.edu), and the University of Illinois Digital Imaging Initiative (http://images.grainger.uiuc.edu) were presented and discussed.

Information Architectures for Interoperability

Clifford Lynch, Director of the Coalition for Networked Information (CNI), gave a fascinating closing keynote talk that focused on three main topics: interoperability, the nature of metadata, and infrastructure. Lynch warned that we need to be careful when talking about interoperability, since there is general uncertainty about what that term connotes. He pointed out that the goal of achieving interoperability did not involve the reconciliation of different world views. Indeed, a focus on metadata alone may be problematic. Rather, Lynch suggested, the information community ought to concentrate first on identifying what goals it wants to achieve through the interoperation of entities, determine what processes we want to carry out, and then decide how the interoperability of metadata systems can further these ends.

In discussing the nature of metadata, Lynch focused on the Warwick Framework. He emphasized that there is no single metadata set; there are many, each of them developed for the most part independently to serve the needs of specific communities. Furthermore, the number and diversity of metadata formats are likely to increase as more disciplines develop their own metadata sets to accommodate their own subject-specific information.

As we heard often during the conference, three categories of metadata have begun to emerge that are critical in the discovery of networked resources: administrative metadata, dealing with rights, permissions, and ownership information; structural metadata, carrying information about the types of applications or environment necessary to view a resource; and descriptive metadata, or information describing the content of a resource. Given such diversity, we need multiple approaches to metadata, and major research still needs to be pursued to clarify the subject. Because the creation and ongoing maintenance of metadata represent a significant investment (a cost that is easy to overlook), they call for ongoing evaluation.

Lynch concluded with a discussion of the enormous need for metadata infrastructure. On the indexing side, it is necessary to develop filtering mechanisms. Trust models need to be systematized, but both of these areas are basically untouched. We need to look at new types of metadata, descriptive practices, how to map metadata from one format to another, the long-term evolution of metadata, and what standards and practices will be of best use. Finally, by far the biggest challenge is deciding how we will exploit metadata and to what ends.

Conclusion

According to the evaluations we received, the CIC Metadata Conference amply succeeded in its goals to elevate participants' general understanding of metadata issues and to showcase metadata-related projects. While respondents wanted more opportunities to interact with speakers (a tribute to the quality of the presentations), time and space limitations made this difficult. When asked what future conferences might cover, respondents suggested more "practical applications and training", "hands-on workshops", case studies on the implementation of different metadata formats, and just plain "more of the same". The CIC will continue to coordinate important metadata activities, such as member participation in OCLC's Cooperative Online Resources Catalog (CORC) project, which utilizes the Dublin Core metadata element set.

Clearly, the development of data structure standards such as RDF and XML, the use of qualified Dublin Core and other metadata element sets, and the interaction of various knowledge communities in metadata creation will have a great impact on the online information environment. There are an increasing number of reasons for consortia like the CIC to work collaboratively on metadata creation projects. Building on the advantages of the traditional cataloging model, cooperative description of commonly used network resources has the potential of saving considerable duplication of effort in individual institutions. By deploying common metadata formats or establishing agreed-upon schemes to map one metadata format to another, we can greatly facilitate information sharing and access, especially to unique collections.

We would happily support calls for follow-up conferences on these and other topics.

Appendix 1: Special Presentations

Special presentations were held describing important digital library projects at the University of Illinois at Urbana-Champaign. In addition, a number of poster sessions were mounted by researchers at CIC member libraries and affiliates. The synopses which follow are based on abstracts supplied by the presenters.

The University of Illinois Digital Libraries Initiative

William Mischo and Timothy Cole, of the University of Illinois at Urbana-Champaign (UIUC), presented an overview of the NSF/DARPA/NASA Digital Library Initiative at UIUC and its current status. The research project focuses on providing online access to the full text of selected scientific journals, and on the development of an experimental large-scale test bed that uses document structure to provide federated searching across publisher collections.

Dynamic metadata are used in the normalization of the description as well as in the management and processing of links across the multitude of publications. Metadata are added on the fly. Other aspects of this research include end-user search behavior and needs, and identifying models for effective retrieval in an electronic full-text publishing environment.

An outline of their presentation is available at http://magni.grainger.uiuc.edu/cicmeta/DLImetadata/index.htm. Please note that some pages mentioned are only accessible on the UIUC campus due to licensing and other agreements. The Digital Library Initiative homepage is at http://www.dli.grainger.uiuc.edu.

The Kolb-Proust Archive for Research

The Kolb-Proust Archive was established in 1993 to make available to scholars worldwide, through the Internet, the research notes and documentation of Philip Kolb, professor of French at the University of Illinois and editor of the correspondence of Marcel Proust. The first series of documents from the collection are now accessible on the World Wide Web through a searchable SGML-encoded Virtual Archive. The presentation was by Thomas Kilton, Carolyn Szylowicz, and Patrick Reidenbach, of the University of Illinois at Urbana-Champaign Library. Further information is available at http://www.library.uiuc.edu/kolbp/.

Appendix 2: Poster Sessions

MARC + SGML: A Dual Catalog for Poster Images

The Preservation Department of Northwestern University provides Web searching access to a database of images and descriptive text from two poster collections: a set of over 300 propaganda posters issued by federal agencies to support home front efforts during World War II, and another set of 365 posters from the Melville J. Herskovits Africana collection documenting the international anti-apartheid movement, liberation movements in former Portuguese colonies, life under apartheid, and the South African elections of 1994. To provide the most effective access to these collections, it was decided to make the posters searchable both in the online catalog and via the Web. Rebecca Routh and Virginia Kerr, of Northwestern University Library, described their work to map MARC cataloging records into SGML-encoded text, which is searchable on the Web using Open Text's Livelink software. Further information is available at: http://www.library.nwu.edu/preservation/afpostpg.html.
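
The mapping step can be sketched in a few lines of Python (our illustration, not Northwestern's); the tag-to-element table and the POSTER element below are invented and do not reflect their actual DTD.

    # A hypothetical mapping from MARC tags to SGML elements.
    MARC_TO_SGML = {
        "245": "TITLE",    # title statement
        "260": "IMPRINT",  # publication information
        "520": "SUMMARY",  # summary note
        "650": "SUBJECT",  # topical subject heading
    }

    def marc_record_to_sgml(marc_record):
        """Render the mapped fields of a MARC record (tag -> list of values) as SGML."""
        elements = []
        for tag, values in marc_record.items():
            element = MARC_TO_SGML.get(tag)
            if element is None:
                continue  # fields with no mapping are omitted from the SGML view
            for value in values:
                elements.append(f"<{element}>{value}</{element}>")
        return "<POSTER>" + "".join(elements) + "</POSTER>"

    print(marc_record_to_sgml({
        "245": ["Buy War Bonds"],
        "650": ["World War, 1939-1945 -- Posters"],
    }))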

The Emerge Distributed Search Architecture

Emerge is an effort by the National Center for Supercomputing Applications (NCSA), Urbana, Illinois, to develop middleware components of a new distributed search infrastructure which addresses the scale and heterogeneity of scientific data. Its components enable search services to interoperate across scientific domains by providing user-configurable tools for mapping between metadata schemas, performing search queries against multiple data sources, and performing query pre- and post-processing. Access to the Emerge search services is through platform-neutral standard and emerging-standard tools such as Z39.50, XML, and Java. The presentation was by NCSA's Joe Futrelle. Further information is available at: http://emerge.ncsa.uiuc.edu/.

Michigan's Federating Model for Image Collection Access

The University of Michigan Digital Library Production Service (DLPS) Image Services has implemented broad library access to image collections with a federating model. Central to the federating model is the notion that public access must be separate from the management of collections. Museums, and visual resources collections in general, have well-established practices for collection management, but generalized access is typically not a strength, and usefulness to academia is therefore constrained. DLPS acknowledges, and in many ways values, inherent disparities among collections which have long histories of distributed management. The federating model respects and accommodates these differences, and adds value by providing a library approach to access. The presentation was by John Weise, Coordinator of DLPS Image Services. Further information is available at: http://images.umdl.umich.edu.

Metadata Considerations for an Outreach Information System

An outreach information system (OIS) is being developed through a cooperative effort among the Agricultural, Consumer and Environmental Sciences Library, the Grainger Engineering Library, and the AIM Lab of the Agricultural, Consumer and Environmental Sciences College at the University of Illinois at Urbana-Champaign. This OIS will be a Web-based digital library of full-text electronic information searchable through a common interface. The project aims to examine publication practices of document creators, and will investigate methods that publishers can employ to foster metadata creation and document markup. Methods will be developed to generate metadata automatically when possible. The presentation was made by Pat Allen, University of Illinois at Urbana-Champaign Library.

Metadata in Biodiversity Research

In order to produce useful biodiversity information for the range of governmental and other agencies which have urgent need of reliable data, there is a great need to standardize and integrate data formats over a very wide range of scientific disciplines: ecology, palaeoecology, systematics, botany, biology, genetics and GIS, to name but a few. A Web site was constructed to act as a guide to metadata issues in biodiversity; it is the first part of a project to survey database owners in the field of biodiversity and to report on issues arising as they relate to recent government initiatives on biodiversity. This presentation was by Geoffrey C. Bowker, Graduate School of Library and Information Science, and Shubha Nagarkar, Mortensen Center for International Library Programs, University of Illinois at Urbana-Champaign Library.

Further information is available at: http://www.lis.uiuc.edu/~nagarkar/biodiversity/main2.html.

Alvan Bregman is Head, Research and Planning, Technical Services, and Visiting Professor of Library Administration, abregman@uiuc.edu. Beth Sandore is Coordinator, Digital Imaging Initiative, and Associate Professor of Library Administration, sandore@uiuc.edu, University of Illinois at Urbana-Champaign.
