CC:DA/TF/Logical Structure of AACR/3
August 16, 1999
Committee on Cataloging: Description & Access
Task Force on the Review of The Logical Structure of AACR
Please note that the purpose of this document is to facilitate the work of the Committee and to provide a means for outreach to both library and non-library cataloging communities. This document is intended for the exclusive use of CC:DA and its cataloging constituencies, and is presented for discussion in the ongoing process of rule revision. Under no circumstances should the information here be copied or re-transmitted without prior consultation with the current Chair of CC:DA.
The Task Force on the Review of The Logical Structure of the Anglo-American Cataloguing Rules was charged with a detailed review of Parts One and Two of Tom Delsey's The Logical Structure of the Anglo-American Cataloguing Rules. The Task Force was instructed to focus on the key issues and recommendations contained in Delsey's logical analysis of AACR, not on the structure of the model itself. The Delsey model is quite complex, and the Task Force hopes it has done it justice in the short time it has had to compile a response to the recommendations contained therein.
Delsey undertook his analysis at the request of the Joint Steering Committee for Revision of AACR as an outgrowth of the International Conference on the Principles and Future Development of AACR (October 1997). The analysis is intended both to reveal the current structure of AACR and to serve as a logical starting point for its next developmental stage. In creating his analysis, Delsey confined himself to what was explicit or implicit in the code itself. A cataloger, of course, brings a wealth of experience to his/her perception of the code, and it is this context that informs his/her judgment in interpreting and applying the rules. It is this experience which has rightly been excluded from Delsey's analysis, for even though it is an essential element in the code's interpretation, it may obscure the true logical structure of the code as written. This absence of interpretation and context at times makes the analysis seem artificial, but the result is a model that is thorough, exacting, and brutally direct.
As such, it is an excellent tool for revealing the structure of the code and for testing possible revisions. However, it will be necessary to evaluate such revisions against the knowledge gained by experience in applying and interpreting the code.
Does the concept of class of materials as currently reflected in the code serve as a viable basis for an extended structure accommodating new forms of digital materials?
Delsey Recommendation: Use the model developed for this study to assess options for structuring Part 1 of the code to facilitate the integration of rules for new forms of expression and new media. One option for consideration would be to use ISBD(G) areas of description as the primary organizing element for the overall structure of Part 1.
Task Force Response: The Task Force agrees that this is a significant issue and a problem that must be addressed. The model presents convincing evidence that the chapters in Part I are based on a confusing mixture of intellectual content (and assumptions about the intellectual content), physical carrier, type of publication, published/unpublished status, and other factors. The "CLASS OF MATERIALS" entity is fundamental to the rules, particularly in its function in rule 0.24. In an age when material that belongs to more than one class is increasingly common, conflicts in applying rule 0.24 are inevitable. The rules need a clear directive on how, and in what order, the cataloger should deal with the many facets of an item to be cataloged.
As to the recommendation, the Task Force notes that the CC:DA Task Force on Rule 0.24 has accepted the suggestion to reorganize Part I of the rules according to the ISBD(G) areas of description. This Task Force supports that recommendation.
In such a reorganized code, the "CLASS OF MATERIALS" entity could be deconstructed. Since it would no longer define a complete set of rules for describing the material in question (a separate chapter in Part I), distinct categories could be defined as the scope for each special rule. (These categories could still be called "classes of materials," although it might be a good idea to use a new term.) Categories could be defined under each of the relevant facets (intellectual content, physical carrier, type of publication, etc.) for which there are special rules in the code. Special rules could thus be phrased "For [category], do [rule]," and this explicit labeling would help catalogers find the relevant special rules in each area. This should make the code easier to use.
The distinction between general and special rules is perhaps one of the chief benefits of the reorganization of the code. Some of us participated not long ago in "format integration," the merging of the different USMARC bibliographic formats into a single format. In the course of this exercise, it was surprising how often there were different specifications for the same data elements in different formats. The prototypes of reorganized chapters for Area 1 and Area 2 given in Appendix A of the report of the CC:DA Task Force on Rule 0.24 show that, despite our best collective efforts, the same situation occurs in AACR: rules that are probably intended to be applied to all relevant material, but are currently stated with slight variations in the different chapters. The editorial process of preparing a reorganized Part I will need to resolve all of these discrepancies and decide what should be a general rule and what should be the scope of each special rule. The result should be a code that is not only simpler, but also more consistent.
The prototypes referred to above suggest that conflicts among special rules would not be frequent. In most cases, the applicable rules can be applied in sequence because they deal with different facets of the item (content and carrier, for instance). However, there will be cases in which conflicts occur. The case that comes most readily to mind is the rules for selecting the chief source or, in the reorganized rules, the source for the title and statement of responsibility area. The current rules are again based on a mixture of intellectual content, physical carrier, and other facets. If the current rules were simply brought together, there would be conflicts for those items to which both a content-based rule and a carrier-based rule applied. This may be one of the aspects of Part I that will require the most careful substantive re-writing: determining which factors should be most important in selecting sources for transcription, and developing an order of precedence.
Summary: This Task Force supports the reorganization of Part I of AACR into chapters for each ISBD(G) area of description. We feel that the result is likely to be a simpler and more consistent set of rules. The new rules should include a set of categories that cover all significant facets of the items to be described and for which there are special rules in the code.
Does the physicality inherent in the concept of DOCUMENT constrain the logical development of the code to accommodate the cataloguing of electronic resources?
Delsey Recommendation: Use the model developed for this study as the basis for examining the feasibility of modifying the internal logic of the code to accommodate documents that are defined in non-physical terms. Consultation should be undertaken with experts in the area of electronic document architecture.
Task Force Response: The Task Force suspects that this question is aimed at electronic resources available over the Web. Other electronic resources (e.g., a music CD) have data that is fixed (i.e., uneditable) and closely tied to a physical carrier.
However, documents that are available over the Web are not without a type of physical existence as well. Even though an information resource is digital, it still must exist and be stored in at least one location. This physicality, however, is elusive, as it cannot be perceived by our senses. Because it cannot be readily perceived, changes to it cannot be detected directly. The instability of this medium is thus twofold. The stored data may be altered with no immediately perceived change to the display, and the same data may be displayed in alternate forms, giving the impression of differing documents. This is a true dilemma for a cataloger trying to determine the relationship between two items.
This confusion over the content/display of a digital document extends yet further to the definition of its "boundaries". Does the content included by the hypertext links constitute part of the document? If these links are broken or the content of the hyperlinked material is altered, has the document itself been altered? Are these alterations serious enough that we should consider a new work to have been created, or trivial enough that they need not even be mentioned in the catalog record? Catalogers have become comfortable with the metaphysical nature of the "work". Extending this concept to the document itself is counterintuitive. We allow certain variations in the expressions of the "work" to exist and still consider the work to be the same. Will we allow the same variance in the expressions of a document?
The Delsey model also shows that the code is "inconsistent". For example, the logic does not always use physical existence as the determining factor in Area 5; at times the determining factor is intellectual content. The Task Force agrees. The rules for Area 5 in Chapter 2 assume that an item is text in book or pamphlet form, black ink on white paper. And so the fact that the document's intellectual content is text is never mentioned; only the extent of the carrier is recorded. Yet the specific material designations in AACR2 are a mixture of content types (such as "map"), carrier types (such as "sound recording"), and terms that are both (such as "motion picture"). To be consistent in its treatment of all forms of content, Area 5 should always contain both the specific type of intellectual content and the specific type of carrier, as has been the practice for cartographic materials for many years. As new forms of artistic expression and new material types multiply, and as we face organizing large numbers of digital resources available over the Web, the assumption that our bibliographic world will be predominantly printed text is no longer valid, and the expression of that assumption in AACR2 must be re-evaluated.
Summary: The Delsey report states that "... networked electronic resources ... effectively have no physical dimension." It might be more accurate to say that networked electronic resources have no STABLE physical dimension. It is this instability of storage and display, and the articulation of acceptable variance within the defined boundaries of an electronic resource that AACR2 needs to address. Delsey has picked up on a key assumption in the structure of the rules and a difficult, subtle problem to resolve.
Is the division of the universe of objects described into two categories, published and unpublished, adequate to accommodate the description of digital objects disseminated on-line?
Delsey Recommendation: Using the model developed for this study as a frame of reference, examine the issues raised with respect to the notion of "publication" in a networked context in consultation with experts in the area of electronic documents.
Task Force Response: Delsey's recommendation in this simple form makes sense; however, his expanded explanation is troublesome. Below are a few pertinent definitions from the report:
The "real world" entities that make up an unpublished document are, to a large extent, the same that make up a published document. The additional "real world" entities attached to a published document are: manufacture, release, copy, impression, issue, and edition.
The model fails to recognize a major change of flow with digital objects disseminated on-line. The acts relating to a physical object are active ones: publication, dissemination, manufacture. With digital objects, the transitive acts of production, manufacture, and release are replaced by the passive (with respect to the document) aspect of access. The transitive aspects have been placed in the hands of the user rather than the producer. Even the realm of creation becomes blurry as electronic "documents" are broken down into constituent parts and reconstituted in unique combinations by individual researchers.
The vocabulary of the model, reflecting AACR2 as currently written, was designed to describe the production, manufacture, and release of physical items. The question posed is whether the dissemination of non-physical documents can be accommodated in this model. Because the vocabulary was not geared to describe electronic documents adequately, the fit can never be right. The vocabulary used determines the result. Delsey states, "The question to be addressed, therefore, is whether the concepts underlying the entities defined as RELEASE and COPY can be extended ...". Again, the assumption is that the model will still apply with an extension of vocabulary.
The concept of copy is itself equally confusing for digital documents. Digital documents, of course, do have a physical component in that they exist as stored information. However, this stored information may be displayed in various ways. If someone captures this display in a hard copy, is it the same item as the stored digital information? Another suite of problems centers on the ease with which the data that comprise the object may change, and how this relates to the mode of access. A cataloger may catalog a digital item accessed remotely. The content of this item is out of the hands of the accessing agency and may very well change with time. Obtaining a "copy" of the digital information may help an institution fix it, but the original from which it was copied may continue to evolve until the two no longer contain similar content. What is the relationship between the two, and how is this reflected in the catalog?
Another perplexing term is manufacture. For many digital objects disseminated on-line, the acts of production and manufacture are the same: there may be only one copy of an item to which multiple users are allowed access (e.g., websites). Or a resource may be manufactured in the traditional sense, with the separate copies then made available on-line to multiple users (e.g., Cataloger's Desktop). Perhaps the only valid distinction is made by "release"? If no access is allowed to an electronic item, should it be considered unpublished? Does the mere act of accessibility constitute publication (e.g., allowing networked access to a digital "copy" of a dissertation)? Where would the fixing (through downloading to disk) of a networked serial resource that is meant to be continuously updated fit into the scheme?
Summary: The question posed by Delsey is an important one. However, his implied solution (expansion of the current model vocabulary) is inadequate. The vocabulary of Delsey's model is simply not rich enough to capture the myriad possibilities in the description of digital objects disseminated on-line. If the terms used to discuss physical items are merely stretched to encompass digital objects, the result becomes artificial. Unless we are cautious, we will move from a set of outdated rules to an equally constrained model.
Can the notion of "seriality" as reflected in the code be extended to accommodate electronic forms of "publication" or dissemination of documents "intended to be continued indefinitely"?
Delsey Recommendation: Continue the examination of the "seriality" issue initiated as a follow-up to the Conference on the Principles and Future Development of AACR2, using the frame of reference set out in the model developed for this study as a tool to assist in the analysis of the issues.
Task Force Response: This recommendation has been overtaken by subsequent events. The report on "Revising AACR2 to Accommodate Seriality" has since been issued. It did not use the Delsey model to formulate or test its recommendations, but it does address most of the issues raised in Recommendation 4.
The Seriality report does not address one issue raised in the Delsey report: the suggestion that electronic transmission of digital objects needs to be worked into the concepts of publication, manufacture, release, copy, edition, etc. Current U.S. practice has raised, without resolving, the controversy as to whether electronic transmission should be treated as publication. Further work is clearly needed.
Summary: The Seriality report addresses the issues raised in this section of the Delsey report and provides a tight conceptual structure which is also workable in practice. The report does not squarely face one implication of its recommendations: the transformation of Type of Publication into Type of Release or Type of Issuance needs to be examined more carefully, as does the nature of electronic transmission as a form of publication.
What are the implications of applying the logic of the code to documents in which the intellectual content is not permanently "fixed" within a physical object?
Delsey Recommendation: Review the conventions and rules for reflecting change in the attributes of an item described, as currently established, to determine their applicability to changes in the attributes of digital objects, and extend them as necessary to accommodate a broader range of variables.
Task Force Response: Internet resources can, and sometimes do, change frequently and, at times, quite drastically. There is no guarantee that the resource one sees on the screen today will be exactly the same tomorrow. This has several implications for descriptive cataloging.
First, how much can a document change before it must be considered a new document, or even a new work? The boundaries between items now shift dynamically while we aren't looking. The code needs to provide guidelines for determining how much change can be accommodated within a single bibliographic description, and when it is necessary to make a new description. The LC Rule Interpretation for rule 1.0 gives some guidelines for determining whether a given item is a new edition or just a copy (with perhaps some modifications) of an existing item. AACR2 and the rule interpretation assume, however, that one has ready access to the various versions of the item, or at least to their surrogates (the catalog records). With digital documents, in most cases the new version completely replaces the old version, and thus the original "document" no longer exists as a separate entity. Even the bibliographic description for that original document may not contain sufficient information to determine whether what we see now is a new work or a new expression.
It seems important that AACR2 be revised to accommodate this reality of digital documents, giving guidelines on how much a document might change before it must be considered an entirely new item. The CC:DA Task Force on Rule 0.24 has done some good work on this issue, and we support its recommendation that guidelines on when to create a new record be added to the code. It is important that such guidelines be applicable to all materials, but also be able to deal with the challenges of digital materials. The concept of acceptable variance is an important one to consider across all types of materials, and the resulting guidelines should be consistent across material boundaries.
Second, within a single bibliographic description, how should changing data be reflected? Delsey provides a good summary of the existing techniques and recommends an analytical approach to the question. He suggests identifying the attributes that change and then organizing them (and the rules) in such a manner that future revisions to the code will be minimized. We would add that the changing attributes and the nature of the changes also need to be evaluated for their significance to users, so that revisions to bibliographic records themselves will be minimized and confined to those attributes that are truly significant. This analytical and evaluative approach constitutes a positive step towards keeping the code viable and relevant for meeting future needs.