The Role of Content Analysis in Evaluating Metadata for the U.S. Government Information Locator Service (GILS): Results from an Exploratory Study

William E. Moen
School of Library and Information Sciences
University of North Texas
P.O. Box 13796
Denton, TX 76203
Phone: 940-565-3563
Fax: 940-565-3101

Erin L. Stewart
School of Library and Information Sciences
University of North Texas
P.O. Box 13796
Denton, TX 76203
Phone: 940-565-3563
Fax: 940-565-3101

Charles R. McClure
Distinguished Professor
School of Information Studies
Syracuse University
4-218 Center for Science and Technology
Syracuse, NY 13244-4100
Phone: 315-443-2743
Fax: 315-443-5806


This paper discusses the application of qualitative and quantitative content analysis techniques to assess metadata records from 42 Federal agencies' implementations of the Government Information Locator Service (GILS). GILS databases respond to a late-1994 initiative to "identify public information resources throughout the Federal government, describe the information available in those resources, and provide assistance in obtaining the information [and] serve as a tool to improve agency electronic records management practices" [1]. GILS metadata records describe agencies' automated information systems, Privacy Act systems of records, and locators that cover their information dissemination products. The authors used record content analysis, among several other methods, to examine whether GILS is helping agencies fulfill information dissemination and management responsibilities and the extent to which GILS is meeting user expectations [2]. Criteria used in the current analysis were informed in part by results of user and service-implementor questionnaires and focus groups. The record content analysis itself, in turn, informed creation of a scripted online assessment for users, and data from that user assessment supplemented results of the content analysis.

The quality of metadata for networked resources is as yet a relatively unexplored research area. At this point, no consensus has been reached on operational and conceptual definitions of quality; likewise, validated procedures for assessing metadata are lacking. On the basis of the exploratory analysis described here, the authors conclude that a range of criteria and procedures may be needed for different types of metadata (e.g., descriptive, transactional, etc.). In addition to supporting the larger evaluation study of GILS, the results of this analysis of metadata content will contribute to a developing dialog about assessing the quality of metadata.

Copyright 1997 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.


During September 1996-June 1997, the researchers conducted a multi-method evaluation of the U.S. government's implementation of GILS [2], the results of which are reported in the study's final report [3]. The goal of the study was to understand how:

Recognizing the complexity of GILS as a networked information service, the researchers considered five aspects of GILS: users, policy, technology, content, and standards.

One activity of this expansive evaluative research effort centered on GILS records and an assessment of the metadata they comprise. Metadata, in the context of GILS, are a set of standardized elements that can be used to describe government agency information resources, serve as surrogates for those resources, and support networked information discovery and retrieval. The researchers applied quantitative and qualitative content analysis techniques to explore metadata quality and investigate assessment methods. In preparation for this activity, a review of the literature confirmed that while there is considerable discussion about metadata, methods and tools to test metadata are less evident.

The researchers found, however, an evolving and propitious synergy among the study's multiple methods; the record content analysis both was informed by, and served to inform, other data collection and instrument-development activities. For example, implementor discussions at the 1996 GILS Conference and GILS focus groups highlighted recurring issues about GILS records such as resource aggregation, suitability of metadata elements, consistency, and quality of presentation. In turn, the record content analysis proved invaluable in developing the scripted online user assessment.

At least two levels of quality assessment can be defined for metadata records. First, given documented requirements for metadata composition, an audit can be conducted to determine compliance (the extent to which records are free from errors, complete, current, etc.). Second, the outcomes of metadata records can be assessed for utility and appropriateness of elements, in terms of whether they support the purpose and goals of the metadata scheme. This study focused on the first level.

This paper describes techniques and procedures used in an exploratory, systematic assessment of GILS metadata records. The assessment included identifying and operationalizing a set of criteria for quality, and developing a procedural framework for the collection and analysis of sample records. The paper also reports selected results from the assessment and describes how the findings led to pragmatic recommendations for GILS implementors to address deficiencies in GILS records.

A primary objective of this paper is to demonstrate the utility of metadata assessment for identifying systemic problems and for developing recommendations to improve record quality. Another objective is to identify conceptual and methodological issues in metadata assessment that require additional research attention. Systematic methods for evaluating metadata are essential for system designers and implementors to refine and improve metadata in support of networked information discovery and retrieval.


GILS is a U.S. Federal initiative to improve access to publicly available government information [1][4]. The Clinton Administration's National Information Infrastructure: Agenda for Action intended GILS to be a "virtual card catalogue that will indicate the availability of government information in whatever form it takes" [5]. In keeping with the analogy of a library card catalog, GILS uses a standardized metadata scheme (comparable to a collection of bibliographic records) to describe information resources. Users access one or more agencies' GILS and submit key word/term queries against selected metadata elements (e.g., "Originator") or all metadata available (called "full text") to retrieve records. Searchers then use the information presented in the GILS record to access or acquire the information resource. The reader is directed to the Environmental Protection Agency's GILS site [6] as an example implementation; Appendix B in the study's final report [3] presents a list of GILS URLs known to the researchers.

GILS embodies a metadata approach to information resource description and uses discrete data elements in a standardized record structure to create surrogates that assist users in identifying, locating, and accessing government information. The Office of Management and Budget (OMB) Bulletin 95-01 [1] directed agencies to describe three categories of information resources, regardless of format, by means of GILS records: locators that cover agency information dissemination products, information systems, and Privacy Act systems. Some GILS implementors define government information resources broadly to include "people, organizations, events, artifacts, etc." and these too may be subject to GILS description [7].

The architecture of GILS comprises several underlying components including a set of agency-based network-accessible information servers, a collection of structured records (metadata) describing agency information resources, and a standard search and retrieval protocol, ANSI/NISO Z39.50 [8][9][10]. GILS decentralizes record creation, and more than 40 U.S. Federal agencies have created records of their information resources in accordance with the Federal Information Processing Standard 192 [10] and National Archives and Records Administration's The Government Information Locator Service: Guidelines for the Preparation of GILS Core Entries [11]. Moen and McClure [12] noted that "an important factor in the overall utility of a GILS will be the quality of the data in GILS records. Quality criteria will include accuracy, consistency, completeness, and currency. In order to encourage the creation of high quality information that will populate GILS servers, the development of written guidelines for creating GILS records is essential." The NARA Guidelines [11] represents a first effort to fulfill this requirement.


The researchers believe that the effectiveness of metadata in supporting networked information discovery and retrieval (NIDR) rests on their quality. However, to date, no consensus has been reached on conceptual and operational definitions of metadata quality. A significant body of research on library catalogs' bibliographic records has documented the effects of quality on user satisfaction and collection management; however, research is only beginning to explore the role of metadata quality in access to networked resources.

Traditional practices of bibliographic description, ongoing development of metadata schemes, and digital library initiatives lay a foundation for assessing metadata quality. A review of the literature on these activities provides possible direction for defining criteria and developing measurement procedures. Table 1 summarizes criteria from several researchers.

Table 1--Assessment Criteria Identified in the Literature

(Table content largely lost in extraction; recoverable criteria include: Data Structure; Ease Of Creation; Ease Of Use; Fitness For Use.)

A parallel can be drawn between contemporary usage of the term metadata and the concept of bibliographic description familiar to library and information professionals. The bibliographic record, as manifest in library catalogs, is in essence metadata representing entities such as books, journals, and other forms of recorded information [19]. In the environment of library catalogs, and especially large bibliographic databases that support record sharing, the quality of bibliographic records has been the subject of significant research for 20 years.

Principles of bibliographic control certainly apply to the representation of networked resources in terms of rule-based creation (emphasis on structure and consistency to facilitate access), guidance by experts, and a consideration of user needs. But in practice, creation of metadata differs in several key respects. The resources to be described are volatile and distributed; no single, professional group has authority to dictate procedures; and not only are rules absent (at the Anglo-American Cataloguing Rules [20] level of detail), there is no consensus that they should be created.

Networked resources are highly heterogeneous, and various metadata schemes appear to reflect attributes assigned in a de facto fashion by different user communities. (For example, Gluck [21] explores the usability of geospatial metadata.) Given this force of user perspective on the representation of volatile information, and the lack of proven standards, systems of metadata for NIDR may require uniquely tailored approaches to quality assessment. As articulated by Younger [18], "The prescribed level of 'quality' must relate to user expectations and the environment." Schemes inevitably represent a state of compromise among considerations of cost, efficiency, flexibility, completeness, and usability, and, thus, standards for quality must be based on the essential characteristics of each of these considerations. This flexible approach recognizes that various metadata life forms already exist in the "metadata ecology" [22].


Within the overall evaluation study, the analysis of GILS record content served three purposes:

This analysis attempted to describe the "quality" of GILS records in terms of character or attributes rather than strict compliance with specifications. The latter, which constitutes an audit, would require a greater level of operational detail than current policy and standards provide and is a technique better suited to a more mature information service. Where adherence to published direction was relevant, FIPS Pub. 192 [10] and the NARA Guidelines [11] served as bases for evaluation.

The following objectives guided development and selection of tools and techniques:

1. Assess the accuracy of GILS records in terms of errors in format and spelling.

2. Gauge and compare the relative record "completeness" or level of description.

3. Characterize a general profile of the GILS product in terms of record types, aggregation levels, and containers (dissemination media).

4. Evaluate records' serviceability, including factors affecting NIDR, convenience, aesthetics and readability, and relevance judgments.

These objectives were served by the methodology described in detail below.

5.0 METHOD

The study method falls into the broad category of content analysis in that the researchers used a "set of procedures to make valid inferences from text" [23]. Specifically, the analysis comprised an assessment of the presence or absence of data (counting), comparison with published specifications and usage guidance (conformance), and expert evaluation of the content of the data elements (interpretation). As a research technique, the researchers' approach reflects Krippendorff's sentiments about content analysis, the purpose of which "is to provide knowledge, new insights, a representation of 'facts,' and a practical guide to action" [24].
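The counting and conformance steps described above can be sketched in code. The following is a minimal illustration, assuming each GILS record has been parsed into a mapping of element labels to values; the element names and the mandatory-element list shown are illustrative stand-ins, not the official GILS set.

```python
# Sketch of the "counting" and "conformance" steps of the content
# analysis: tally populated elements and flag missing mandatory ones.
# The element labels and mandatory-element list are illustrative only.

MANDATORY_ELEMENTS = {"Title", "Originator", "Record Source",
                      "Date of Last Modification"}

def assess_record(record):
    """Count populated elements and check mandatory-element conformance."""
    populated = {k for k, v in record.items() if v and v.strip()}
    blank = set(record) - populated   # labeled but null-valued elements
    missing_mandatory = MANDATORY_ELEMENTS - populated
    return {
        "populated_count": len(populated),
        "blank_elements": sorted(blank),
        "missing_mandatory": sorted(missing_mandatory),
        "conformant": not missing_mandatory,
    }

record = {
    "Title": "Publications Catalog",
    "Originator": "Federal Emergency Management Agency",
    "Record Source": "FEMA",
    "Date of Last Modification": "",   # a "blank" element
    "Abstract": "Listing of agency publications.",
}
result = assess_record(record)
```

The interpretation step, by contrast, required expert judgment and does not reduce to mechanical checks of this kind.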

The analysis occurred in two phases. Phase 1 involved examination of 80 records from the known pool of participating Federal agencies. To create this sample, researchers deliberately retrieved records to represent a range of information resource characteristics (e.g., media, resource type, file sizes, formats, etc.). These records, two records from each agency's database, served as the basis for developing and operationalizing a set of more than 50 qualitative and quantitative evaluative criteria. The researchers examined and compared the records to produce the assessment categories shown in Figure 1.

Figure 1--Record Content Analysis Categories and Assessment Criteria

• Accuracy
• Completeness
• Profile
• Serviceability

In Phase 2, the researchers systematically applied these criteria to a second sample of 83 GILS records retrieved January 13 and 14, 1997, from 42 agencies' GILS. Results, therefore, reflect record content at the time of retrieval and represent a "snapshot" at a single, arbitrary point in the GILS system lifecycle.

In creating the Phase 2 sample, researchers accessed agency GILS databases by means of the presented user interface. For GILS featuring a search engine, the researchers retrieved the first and last "hits" resulting from a "full-text" query of the agency acronym (using the default "number of records to return"). For GILS on which this was not possible (those mounted on a web server of HTML files that present only a picklist of record titles as if for known-item retrieval or browsing), the researchers retrieved the first and last items listed. In the event of multiple file formats per record (e.g., HTML, SGML, PDF, ASCII), HTML was selected. Bearing in mind that generalizability of findings was not a primary goal of the analysis, these procedures served to reduce selection bias and provide randomization in the sample. Appendix C-4 of the study's final report [3] presents a full discussion of methodology.
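The selection rule just described (first and last hits, with HTML preferred among multiple file formats) is simple enough to express directly. A sketch follows; the hit lists and format names are invented for illustration.

```python
# Sketch of the Phase 2 record-selection rule: take the first and last
# hits of a query result, and when a record is offered in several file
# formats, prefer HTML. Sample data are invented.

PREFERRED_FORMATS = ("HTML", "SGML", "PDF", "ASCII")

def select_sample(hits):
    """Return the first and last hits of a result list."""
    if not hits:
        return []
    if len(hits) == 1:
        return [hits[0]]
    return [hits[0], hits[-1]]

def choose_format(available):
    """Pick the most-preferred available format (HTML first)."""
    for fmt in PREFERRED_FORMATS:
        if fmt in available:
            return fmt
    return available[0]

sample = select_sample(["Record A", "Record B", "Record C"])
fmt = choose_format(["PDF", "HTML", "ASCII"])
```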

The sample records were printed for ease of study and comparative reference, and to preserve their state "as discovered." Record characteristics were assessed and recorded in a relational database containing controlled values and subsequently transferred to a spreadsheet for analysis using descriptive statistics. (Appendix D-4 of the study's final report [3] presents the database fields used to collect results.) Researchers evaluated more than 3,500 instances of metadata for incidence and/or content, and entered results into the database for coding and analysis. In addition, the researchers maintained a log of lessons learned and areas for further research that may be utilized by system developers, specification and procedures writers, and people with direct responsibility for GILS record quality.


The following discussion of results from the GILS record content analysis is offered to illustrate the outcomes of the exploratory methodology. (For complete results and discussion see Appendix E-2 of the final report [3].) In addition, this section discusses the process of deriving recommendations from the analysis.

6.1 High-Level Profile of GILS Resources

To accomplish a principal objective of the analysis, the researchers characterized the aggregation of information resources described by the 83 records. This effort assessed the granularity of featured resources, an attribute known in the bibliographic community as the "unit of analysis." The following conceptual definitions served as a starting point for describing the phenomenon of aggregation:

During performance of Phase 1 of the content analysis, those definitions were refined to provide the following operational guidelines for coding (percent of Phase 2 samples in each category is shown in parentheses):

The researchers found the coding of aggregation levels to provide a low return on time invested in terms of profiling the record sample, in part because of the deficiency in descriptive information in the records. The researchers, however, developed a categorization scheme for the information objects described by the sample records (see Table 2). This approach assisted in enumerating the range of objects described by GILS metadata and could serve as a preliminary taxonomy of government agency information resources.

Table 2--Information Object Semantics

Each entry gives the information object, its operational definition, an example, and the percent of the sample.*

Administrative Catalog: A locator listing of procedural actions related to the conduct of agency business. Example: FERC's "Directory Of External Information Collection Requirements". (4%)

Agency Homepage: Information mounted on an http server. Example: "Superintendent Of Documents Home Page On The World Wide Web". (10%)

Bibliographic Database: An automated information system comprising metadata about bibliographic entities/publications. Example: DOE's "OPENNET". (4%)

Form: A document designed to elicit and transmit specific information from the user to the supplier. Example: "Request For Registration For Political Risk Insurance". (5%)

Job Line: A telephonic recording of employment opportunities. Example: "DOI Employment Center". (1%)

Miscellaneous Documents in an Ad Hoc Collection: Plurality of documents grouped by function or subject. Examples: bulletins and memoranda; public comments; speeches.

Organization: A set of human resources defined by an agency to provide specific products or services. Example: information center/library. (7%)

Program: A prescribed set of activities and functions performed to accomplish an objective. Example: report management. (2%)

Publication: Discrete monographic document published one time or in serial mode to disseminate information. Examples: user's manual; "The Federal Register".

Publications Catalog: A fixed, flat (non-machine-searchable) listing of selected or all agency publications. Example: FEMA's "Publications Catalog". (5%)

Subject Matter Database: Single, stand-alone automated information system (AIS) comprising data, records, or multiple documents on technical or administrative subject(s) and/or definable reference themes. Example: aviation accidents.

System Of Systems: Macro-AIS comprising or integrating multiple databases and/or single AISs. Example: DOD's "Enterprise Information System". (4%)

* For 1% of the records, the lack of descriptive metadata precluded coding.

6.2 Accuracy and Completeness of GILS Records

For the criterion of "accuracy," researchers counted the number of "visible" errors in each record (e.g., spelling or typographical errors, file formatting errors, or incorrect date formats); in the current sample, 10-30% of records featured such errors. The importance of accuracy was confirmed by the evaluation study's scripted online user assessment, which revealed users' poor tolerance of formatting errors. The criterion of "completeness" addressed the fullness of sampled records in terms of inclusion of elements in the record. The GILS record structure defines 67 metadata elements, some of which are "mandatory" and some "optional" [10][11]. Table 3 summarizes selected results that address the criterion of completeness.

Table 3--Significant Findings Related to the Criterion of Completeness

Number of populated elements per record: max 190*; min 11; avg 42
Records containing "blank" (labeled but null value) elements: 36%
Utilization of 12 mandatory elements: 96%
Utilization of optional Controlled Vocabulary: 12%
Utilization of optional Local Subject Index: 54%

* This figure largely reflects the incidence of repeatable elements.
These measurements of record completeness, or metadata utilization rates, provide an opportunity to demonstrate interpretation of content analysis results holistically, or within the context of other study data. For example, occurrence of "blank" elements (an element label appears in the displayed record, but no data value is associated with the element) may be perceived by users as agency negligence or system error, an idea confirmed by the user assessment. Another useful association of these data may arise from interviewing record creators as to their rationale for omitting metadata: for example, do they perceive the elements as irrelevant or not mission-critical? This type of supplementary interpretation provides a depth of understanding and much-needed context for the results of a "completeness" assessment, which constitutes a relatively shallow audit when conducted in isolation.
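Aggregating per-record measurements into summary figures of the kind reported in Table 3 is mechanical once each record has been coded. The following sketch computes the max/min/average populated-element counts and the share of records with blank elements; the sample data are invented for illustration.

```python
# Aggregating per-record completeness measures (cf. Table 3). Each
# record is reduced to a (populated_count, blank_count) pair during
# coding; the sample below is invented.

def summarize_completeness(records):
    """records: list of (populated_count, blank_count) per GILS record."""
    populated = [p for p, _ in records]
    with_blanks = sum(1 for _, b in records if b > 0)
    return {
        "max_populated": max(populated),
        "min_populated": min(populated),
        "avg_populated": sum(populated) / len(populated),
        "pct_with_blanks": round(100 * with_blanks / len(records)),
    }

sample = [(42, 0), (11, 3), (190, 0), (25, 1)]
stats = summarize_completeness(sample)
```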

6.3 Characterizing the Serviceability of GILS Records

The criterion of "serviceability" considered the sampled records' effectiveness in enhancing NIDR (including promotion of relevance judgments), the convenience they provide to the user, and record readability. Table 4 summarizes selected results.

Table 4--Significant Findings Related to the Criterion of Serviceability

Records with spelling or typographical errors: 10%
Records featuring Controlled Vocabulary: 12%
Elements in preferred display order: 64%
Point of Contact types: 50% offices; 23% personal names; 9% job titles
Availability-Order Process element utilized: 86%
Records with hypertext: 25%
Records with descriptive Title: 75%
Records with descriptive Abstract: 86%

The assessment of serviceability relied more heavily on the researchers' judgment and interpretation than did the criteria of accuracy and completeness, which were more amenable to objective counting of occurrences. For example, the researchers determined the degree of "descriptiveness" of titles and abstracts based on instructions and examples in the NARA Guidelines [11]. Titles such as "Annual Reports" absent the year of coverage and "General Files" and "Minutes" were judged nondescriptive from the perspective of assisting users in making selection and relevance decisions.

In summary, accuracy, completeness, and delineation of information resource type as criteria for quality are interdependent and best evaluated against an overarching standard for "serviceability." While the prescription and exploitation of GILS metadata have yet to receive a systematic user-oriented analysis that would result in such standards, content analysis of existing records, coupled with a technique such as directed searching by users, provides a promising inroad.

6.4 From Results to Recommendations

One objective of the overall GILS evaluation study was to recommend actions to improve GILS. Results of the exploratory content analysis identified some aspects of GILS metadata that are amenable to remedy in both concept and execution. Table 5 presents selected examples to demonstrate the orientation of results interpretation within a service-wide, or system-level, context as a prelude to development of a user-based GILS specification. The process of deriving recommendations assumes that a cause for the "error" or "lack of quality" has been determined; while the record content analysis alone might not identify a "cause," complementary research activities such as focus groups and interviews with personnel responsible for record creation could illuminate the cause. Other research components in the GILS evaluation confirmed that deficiencies and problems identified in the metadata assessment were affecting the records' usability. For example, the scripted online user assessment results suggested that variation in metadata presentation and content reduced users' confidence in GILS as a coherent information locator service.

Table 5--Selected Examples of Recommendation Development

Each entry gives the GILS record content analysis finding, the synergistic assessment used to determine its significance, and the recommendation to improve quality.

Finding: File formatting errors, resulting in faulty record display.
Assessment: User assessment by means of scripted searching questionnaire and/or talk-aloud protocol.
Recommendation: Devise a hardware-/software-independent template and/or HTML editor for record formatting, or limit formatting responsibility to personnel with web browsers.

Finding: Dates not presented in prescribed format.
Assessment: Known-item search tests.
Recommendation: Use "auto-correct/auto-format" macros to standardize dates.

Finding: Spelling and typographical errors.
Assessment: Known-item search tests.
Recommendation: Use machine-based spell checkers, or assign checking responsibility to someone other than the inputter.

Finding: Difficulty in recognizing/characterizing the information object.
Assessment: User assessment by means of scripted searching questionnaire and/or talk-aloud protocol.
Recommendation: Provide the additional element "object represented" in order to evaluate aggregation and container; revise the "resource description" element definition to contain values recognizable to the user rather than the distributor.

Finding: Variation in record appearance.
Assessment: User assessment by means of scripted searching questionnaire and/or talk-aloud protocol.
Recommendation: Standardize record display, including type font, weight, and size, as well as indentation and capitalization, to "moor" users in GILS information space and promote the concept of a government-wide rather than agency-centric program.

Finding: Misapplication of the date of last modification element.
Assessment: Known-item search tests.
Recommendation: Rename the element "record revision date".
The final example in Table 5, concerning the element "date of last modification," is especially instructive. The analysis indicated that some record creators, rather than entering the date the GILS record was last modified in accordance with the specification [10][11], provided a date associated with the information resource described by the record. The scripted online user assessment verified this element's susceptibility to misinterpretation; thus, the researchers recommended less ambiguous semantics. A pragmatically oriented assessment such as that conducted for this study, by casting light on quality-related assumptions made by scheme architects, record creators, and users, can lead to concrete recommendations for improvement.
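The "auto-correct/auto-format" remedy for date values recommended in Table 5 is also straightforward to automate. The sketch below assumes, for illustration, that the prescribed layout is the eight-digit YYYYMMDD form; FIPS 192 [10] gives the authoritative format, and the non-conforming layouts tried here are invented examples.

```python
import re
from datetime import datetime

# Sketch of a date-standardization macro (cf. Table 5). The target
# YYYYMMDD pattern is an assumption for illustration; FIPS 192 defines
# the actual prescribed format.

DATE_PATTERN = re.compile(r"^\d{8}$")

def normalize_date(value):
    """Return the value as YYYYMMDD, or None if it cannot be parsed."""
    value = value.strip()
    if DATE_PATTERN.match(value):
        return value
    # Try a few common non-conforming layouts a record creator might use.
    for fmt in ("%m/%d/%Y", "%B %d, %Y", "%Y-%m-%d"):
        try:
            return datetime.strptime(value, fmt).strftime("%Y%m%d")
        except ValueError:
            continue
    return None
```

Values the routine cannot parse would still need human review, which is consistent with the study's finding that some "dates" were not dates for the record at all but for the resource it describes.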


This examination of GILS records provided an evidentiary basis for exploring factors related to user satisfaction with the current implementation of U.S. Federal GILS. It also served the purposes of informing subsequently deployed methodologies, such as the scripted online user assessment, and corroborating results of earlier data collection activities. In these key respects, the researchers conclude that metadata quality analysis achieves maximum utility when placed in the context of user-based assessments of metadata in service.

The goal of this analysis was not to characterize or infer from this small sample the quality of the GILS record universe. The procedures used may be modifiable to support a more comprehensive exercise; however, in the current study generalizability (i.e., the extent to which results can be assumed valid for the entire population) was neither planned nor achieved. The sample was small, less than 2% of the estimated total of approximately 5,000 GILS records, and the sampling technique was largely convenience-driven due to resource constraints. The assessment, however, provided valuable evidence to substantiate anecdotal data gathered through interviews and focus groups with cognizant personnel. In other words, the record content analysis enabled the researchers to clarify, or set forth in a way to facilitate constructive dialog, systemic issues such as ambiguity of element semantics and uneven levels of description.

The procedures developed for this assessment were very time-consuming and labor-intensive in terms of the development of criteria, their operationalization, actual examination, coding, and data entry for thousands of instances of metadata. Much of the burden could be alleviated by machine processing (e.g., for element counts, incidence of hypertext, etc.). Additionally, "content analysis" in the traditional sense of characterizing the "message" of textual information (e.g., newspaper stories) could be applied to narrative metadata such as Abstract and Access Constraints. Results of such an effort would assist in agencies' accurate portrayal of the resource collection represented by their GILS and, perhaps, in standardizing values to assist users in NIDR.
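Measures such as incidence of hypertext illustrate how readily some of this burden yields to machine processing. A sketch follows, scanning raw record text for hyperlinks; the detection pattern and sample records are illustrative only.

```python
import re

# Sketch of machine-assisted assessment: the fraction of records
# containing hypertext, computed from raw HTML/text. The regex and
# sample records are illustrative only.

HYPERLINK = re.compile(r"<a\s+[^>]*href\s*=", re.IGNORECASE)

def hypertext_incidence(record_texts):
    """Fraction of records containing at least one hyperlink."""
    with_links = sum(1 for t in record_texts if HYPERLINK.search(t))
    return with_links / len(record_texts)

records = [
    '<p>Abstract: see <a href="http://www.epa.gov/gils">EPA GILS</a></p>',
    "<p>Point of Contact: Office of Information Resources</p>",
    "<p>Availability: paper copy only</p>",
    '<p><a href="index.html">Publications Catalog</a></p>',
]
rate = hypertext_incidence(records)
```

Element counts could be computed the same way; only interpretive judgments, such as the descriptiveness of a title, resist this kind of automation.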


The researchers discovered that this study's approach for assessing metadata will likely find optimal utility when employed in circumstances where specific user-defined criteria are known. An understanding of how users read, evaluate, and "use" GILS records could inform a re-evaluation of which data elements are critical to NIDR, the content and presentation of metadata values, and a refined set of criteria for assessing the records. Additional user-based research is essential to cultivating criteria for metadata quality.

This study focused on an assessment of metadata records in terms of their alignment with GILS standards and record creation guidelines. The researchers identified a range of areas for further research on record content as an indicator of how well GILS is meeting user needs. Selected areas include:

Such supplementary research would fill in many gaps in current understanding concerning user perceptions about and acceptance of the utility of GILS metadata records in discovering and acquiring information resources.


    In summary, the method employed to analyze the content of GILS records proved highly satisfactory in rendering the type of results that would inform the overall evaluation. By providing a bird's-eye view of the "product on the shelf" at a given point in time, this method allowed a comparison of planned vs. actual outcomes for quality.

    The researchers defined relevant areas of observation (e.g., accuracy, completeness, serviceability) and proposed operational definitions for the criteria and well as a preliminary set of systematic procedures that may be useful in ongoing assessment of GILS records. Agencies' continuous analysis and reporting of record content will serve well in complementing evaluations of the effectiveness of the NARA Guidelines [11], implementation maturity, and user satisfaction. In addition, the pragmatic approach of this analysis resulted in actionable recommendations to improve the quality of GILS metadata. Finally, the record content analysis methodology strongly supported the holistic, multi-method research strategy adopted for the overall GILS evaluation study.

    The results of this assessment indicate an uneven understanding or appreciation among GILS implementors of the value of metadata to support a distributed information service. Although GILS is a standards-based networked service, the records are not predictable or consistent in terms of metadata utilization or content quality. The GILS study concluded that what currently exists is a set of agency GILS rather than a uniform and coherent government-wide locator service. Different points of view exist on the appropriate metadata elements, the content of records, and their presentation. Policy guidance should promote standard application of metadata elements according to the unit of analysis and character of the information object described, uniform levels of description, and optimal presentation. Reaching agreements on standards for GILS metadata will require a "standards process" through which primary stakeholders (e.g., agency staff, Federal policymakers, librarians, public interest groups, citizen representatives) can build consensus on GILS metadata standards. In addition, continuous user evaluation is essential to inform implementors whether GILS is achieving its objectives, and if not, how agencies' selection, application, and display of metadata influence user acceptance.

    The development and deployment of GILS metadata require a cooperative effort that includes not only the technologists, but also those responsible for agency information resources, the various classes of users of GILS, and the policy makers. Assessments of metadata such as those reported here will provide valuable feedback for the ongoing refinement of GILS records and support continuous improvement of the service.


    [1] U.S. Office of Management and Budget. (December 7, 1994). Office of Management and Budget Bulletin 95-01: Establishment of the Government Information Locator Service. Washington, DC: Office of Management and Budget. URL

    [2] Moen, William E. and McClure, Charles R. (1996). Technical Proposal: An Evaluation of the Federal Government's Implementation of the Government Information Locator Service (GILS). URL

    [3] Moen, William E. and McClure, Charles R. (1997). An Evaluation of the Federal Government's Implementation of the Government Information Locator Service (GILS): Final Report. URL

    [4] U.S. Congress. (1995). Paperwork Reduction Act of 1995: Public Law 104-13, 104th Congress, 1st Session.

    [5] Information Infrastructure Task Force. (1993). The National Information Infrastructure: Agenda for Action. Washington, D.C.: Information Infrastructure Task Force. URL gopher://

    [6] Environmental Protection Agency. (1997). Government Information Locator Service. URL

    [7] Christian, Eliot J. (1996, December). GILS: What is it? Where is it going? D-Lib Magazine. URL

    [8] Information Infrastructure Task Force. (1994). The Government Information Locator Service (GILS): Report to the Information Infrastructure Task Force, May 2, 1994. Washington, D.C.: Information Infrastructure Task Force. URL

    [9] National Information Standards Organization. (1992). ANSI/NISO Z39.50-1992, Information Retrieval Application Service Definition and Protocol Specification for Open Systems Interconnection. Bethesda, MD: NISO Press. URL

    [10] National Institute of Standards and Technology. (1994). Federal Information Processing Standards Publication 192, Application Profile for the Government Information Locator Service (GILS). Federal Register, 59 (December 7): 63075-63077. URL

    [11] National Archives and Records Administration. (1995). The Government Information Locator Service: Guidelines for the Preparation of GILS Core Entries. Washington, D.C.: National Archives and Records Administration. URL

    [12] Moen, William E. & McClure, Charles R. (1994). The Government Information Locator Service (GILS): Expanding Research and Development on the ANSI/NISO Z39.50 Information Retrieval Standard, Final Report. Prepared for the United States Geological Survey and the Interagency Working Group on Data Management for Global Change, Washington, DC [USGS, Cooperative Agreement No. 143493A1182]. Bethesda, MD: NISO Press.

    [13] Ede, Stuart. (1995). Fitness for purpose: The Future Evolution of Bibliographic Records and Their Delivery. Catalogue & Index, No. 116, 1-3.

    [14] Heery, Rachel. (1996). Review of Metadata Formats. Program, 30(4). URL

    [15] Mangan, Elizabeth U. (1995). The Making of a Standard. Information Technology and Libraries, 14(2): 99-110.

    [16] Taylor, Arlene G. (1992). Introduction to Cataloging and Classification (8th ed.). Englewood, CO: Libraries Unlimited.

    [17] Xu, Amanda. (1996). Accessing Information on the Internet: Feasibility Study of USMARC Format and AACR2. Proceedings of the OCLC Internet Cataloging Colloquium, January 19, 1996, San Antonio, Texas. URL

    [18] Younger, Jennifer A. (1996). Interview with Jennifer A. Younger, Ohio State University. OCLC Newsletter, No. 221. URL

    [19] Dempsey, Lorcan. (1996). Meta detectors. Ariadne, Issue 3. URL

    [20] Gorman, Michael and Paul W. Winkler, eds. (1988). Anglo-American Cataloguing Rules. 2nd ed. Chicago: American Library Association.

    [21] Gluck, Myke and Bruce T. Fraser. (in progress). Descriptive Study of the Usability of Geospatial Metadata. Partially funded by OCLC LISRG Program. Preliminary results to be presented at GIS/LIS Conference, Cincinnati, OH, November 1997.

    [22] Dempsey, Lorcan, and Weibel, Stuart L. (1996, July/August). The Warwick Metadata Workshop: A Framework for the Deployment of Resource Description. D-Lib Magazine. URL

    [23] Weber, Robert Philip. (1990). Basic Content Analysis. 2nd ed. Newbury Park, CA: Sage Publications, p. 9.

    [24] Krippendorff, Klaus. (1980). Content Analysis: An Introduction to its Methodology. Newbury Park, CA: Sage Publications, p. 21.


    This research was supported by the General Services Administration Office of Information Technology Integration, Washington, DC, under a proposal submitted by William E. Moen and Charles R. McClure on August 28, 1996, in response to Solicitation No. KECI-96-006.