ALTERNATIVE ASSESSMENT METHODS FOR
RAIL-HIGHWAY GRADE CROSSING REGULATIONS

By Ludwig Benner, Jr. and Jeffrey Chapman


ABSTRACT

Current regulation assessment practices are a mess and awash in controversy. A national safety issue of major importance is the present inability of reasonable, dedicated, conscientious and informed individuals to find a convincing and harmonious way to resolve controversies about safety regulations, such as the “stop” requirement for certain vehicles at grade crossings. Far too much safety regulation is driven by seat-of-the-pants opinions, without clearly defined safety performance goals. Serious ethical considerations are also involved.

The purpose of this paper is to examine this issue, the reasons it exists, and possible actions to resolve it. A proposed change in a Federal rail-highway grade crossing regulation, a study to assess the proposed change, and another study assessing the assessment study are used to illustrate the problems and alternative approaches available now. Continuing existence of this issue is attributed to four interacting failures. Technical and managerial actions and insights into alternatives that could bring about improved safety regulation assessment capabilities and safety performance are discussed.

INTRODUCTION

This paper is written for presentation in January 1987. Over four years ago, an Advanced Notice of Proposed Rule Making was published in the Federal Register by the Federal Highway Administration (FHWA) of the U.S. Department of Transportation, asking the public for data or information about grade crossing accidents. [1]

The information was requested to help FHWA decide whether or not to change a mandatory “stop” regulation applying to certain vehicles at rail-highway grade crossings. The response was quite underwhelming, considering the long history of such accidents, and inadequate to support a decision to change or retain the “stop” regulation.

Aware that its decision involved controversy, the FHWA contracted for a study “designed to determine the difference between the potential consequences of requiring and not requiring certain vehicles to stop at [rail/highway] crossings with active warning devices.” [2]

The study contained 12 diverse conclusions. It concluded, for example, that in essence there would be a net decrease in train-involved accidents. It also estimated that the “excess annual expenditures” attributable to requiring vehicles to stop at crossings with active devices when not activated amounted to about $14 million. A conclusion about the expected reduction in accident losses to offset these costs was not included among these conclusions. Several of the main conclusions were actually restatements of data, and at least two contained obviously speculative or intuitive assertions. The trade-offs developed in the study were not presented in a readily assimilable form, and to our knowledge were not required to be so presented. However, the study is very valuable for a discussion of the controversy about the assessment.

After the study was completed, the FHWA let another contract to do an assessment of the first study, in part because of controversy which the first study had generated, as we understood the situation. The second study, concluding that the first should not be relied on as the basis for rule-making, apparently generated still more controversy. [3] For convenience, the first study will be called Study 1 and the other Study 2.

Two assessment studies, and two different conclusions! More controversy.

THE ISSUE: CONTINUING CONTROVERSY

Disagreement is not confined to the persons who did the studies, or to those attending the presentation of this paper. Irrefutable evidence of continuing disagreement is found in the variations in the State “stop” safety regulations adopted by state officials. Some states continue to require stops at grade crossings; others do not. Which “safety” regulation is the “best,” and what does “best” mean? The disagreement and controversy have existed for a long time, and a consensus, or even a pathway to a consensus to resolve the differences, is not in sight. It is this issue, the continuing disagreement and the lack of a pathway to its resolution, that needs to be addressed and resolved. The differences within the studies are only reflections of this deeper issue. Current safety regulation assessment practices are a mess.


SYMPTOMS OF RAIL-HIGHWAY GRADE CROSSING
SAFETY ANALYSIS AND ASSESSMENT PROBLEMS
Differences in requirements among jurisdictions
Controversy about which is best requirement
Duration of controversy
Assumptions now required to evaluate opinions
No agreed proofs to resolve disagreements

Figure 1.

Why has there been no clear pathway leading to harmonious and convincing resolution of these differences about assessing a safety regulation? The reasons suggested by over 18 years of inquiry into related safety problems can be observed in the two studies. To present those reasons, some background will be necessary.

Unfortunately, research and analysis of the regulatory assessment issues and technology has not addressed the underlying assumptions and concepts on which assessments are based. Thus, little information about technical or managerial assessment choices and their significance is available. Interest in risk analysis and risk acceptance as an approach to regulatory assessment is growing, and the state of the art is advancing, but this has not produced the needed harmonious pathway to resolution of differences. For reasons that will be shown, it suffers from similar difficulties. The bottom line is that there is no technical pathway to support a consensus on assessments.

This lack of agreed technical safety proofs leaves the resolution of assessment differences to an adversarial debate, where debating skills rather than technical proofs are most decisive to the “jury” of observers. This is true not only of our legal processes, but also of our regulatory processes. Is this an acceptable way to settle issues dealing with the life and death of people, environmental risks, and substantial economic costs?

I think not, and therefore would like to examine the continuing assessment controversy in greater detail, using the grade-crossing stop regulation to illustrate the issues.

WHY DOES THE ISSUE EXIST?

The two studies provide useful insights. Neither study described the research methodology selection decision or the selection criteria on which the work was based. In retrospect, both should have, for reasons that will be apparent shortly. Nevertheless, Study 1 is particularly valuable for the insights it provides into the issues reported in Study 2, and Study 2 provides a springboard to raise the issues.

The assumptions in Study 1 provide a starting point for our examination. As is typical of regulatory assessment studies, Study 1 contained many
assumptions and relied on them for its conclusions. These assumptions were apparently made for a variety of reasons, including

  • reflections of the “conventional wisdom” in the field.

  • absence of previously defined criteria for assessment.

  • misperceptions of the accident phenomenon.

  • inadequacies of accident data.

  • demands imposed by the analytical methodology used.

  • their toleration by the assessment methodology used.

Some assumptions reflected uncritical acceptance of what is probably best termed the “conventional wisdom” in the field, of the “everybody knows that...” variety. For example, the first sentence in the introduction of Study 1 begins with such an assumption, which leads it inexorably down a regulatory assessment path that observations of literally thousands of accidents show to be misdirected. It asserts that “collisions between trains and vehicles transporting either hazardous materials ... have potential for catastrophic consequences.” Thus, Study 1 begins by assuming “the problem.”

This is not supportable with facts about rail—highway grade crossing accident consequences or hazardous materials accidents in general. Only a few types of hazmat shipments truly have that potential, and any regulation assessment must take these variations into account in its problem statement.

A second assumption is related to the objective of the “stop” regulation. The objective had to be assumed for the assessment study because no record of its original safety goals or objectives and the trade-offs in the regulation decision had been documented and kept with the regulatory history. Therefore, Study 1's authors were compelled to assume the criteria and trade-offs against which their assessment was made. (Study 1, p 14)

A third type of assumption reflects perceptions of the nature of the accident phenomena. In Study 1, these assumptions were extremely subtle, and likely to go unnoticed by most readers who have not been sensitized to the issues. For example, it is assumed that the types of accidents selected covered the full range of accidents that might be affected by the present regulations. Additionally, it is assumed that the recorded attributes in the accident cases used by the study could reflect the full range of accidents influenced by the “stop” regulation. Further, it assumed that the accidents could be adequately described and classified for analysis by a very small number of attributes, such as train-struck-truck accidents, or vice versa, contrary to numerous research findings. [4],[5],[6],[7]

Assumptions about accident data were also pivotal to the outcome of Study 1. It went to great lengths to “validate” the accident data before it was used. However, close examination of the validation process disclosed that it validated data for consistency of the selected attributes or “characteristics” with the case selection criteria for accident types, and assumed the attributes in the surviving records were properly recorded and trustworthy, despite a dropout rate of over 60% for some of the records. It also assumed uncritically the data’s fidelity with respect to the processes constituting the accident phenomena.

Significantly, both these assumptions satisfied the methodology selected for the assessment. While Study 1 highlighted data inadequacies, its complaints about the accident records flowed from criteria that had to be met to satisfy the assessment methodology, rather than the accident risks to be controlled. The methodology selected for Study 1 provided no technical method for validating the fidelity of the data to the accident phenomena — only their representativeness relative to each other.

Other implicit assumptions relate to accommodation of changes. Study 1 used data from a 9-year time period. Many changes occurred during that period, including numerous programs to reduce the number of grade crossings, better training of drivers, the cooperative Life-Saver programs, and increasing use of double bottoms, among others. The use of the data in the study assumed that these changes would somehow be accommodated by the assessment methodology used. That methodology provides no way to predictively assess the effects of these changes on the accident attributes; such predictions are better performed with different data and analyses.

It is noteworthy that the study turned to process-oriented analysis methodology in Chapter 6, after the assessment problems and vagaries of the statistical data analyses were observed. That decision indicates the low value of the attribute data for assessment purposes. That low value, in turn, raises doubts about the value of the effort to acquire the data, and suggests that the acquisition effort and the resources expended on it were wasted.

Study 2 was an evaluation of the evaluation - again, not an uncommon practice. It addressed primarily issues relating to the merits of the assessment and its methodology and practices, rather than the merits of the regulation being assessed. Study 2 was critical of the conclusions reached in Study 1, and concluded that the agency should not base its rule making on those conclusions. The report concluded that the Study 1 authors may have lacked a sufficient understanding of the accident phenomenon to ... satisfy the study objectives. A major concern was the well-known logical fallacy of assumptions that were “poisoning the wells.”

In essence, Study 2 spoke to the assessment issues from a different view than Study 1. Study 1 uncritically adopted an “attribute-based” assessment methodology and manipulated “validated” data with statistical tests to reach certain conclusions about the regulations. Study 2, using primarily an event-based process approach, criticized the assessment methodology, the assumptions, the use of the data, and the conclusions.

THE BROADER ISSUE

These unresolved assessment differences represent symptoms of what is truly a national safety issue of major importance. That issue is the continuing inability of reasonable, dedicated, conscientious and informed individuals to find a way to resolve controversies about the most desirable safety action in this and many other comparable safety problem areas. This conclusion about its importance results from being a participant in many such controversies, and observing the players, roles and actions, interests and outcomes in those experiences, including continuing losses from inaction or misdirected action.

Controversy about safety actions has grave consequences. It breeds delay, it breeds misdirection or the wrong action, and it breeds excuses for no action at all. It can result in adversarial relationships among the very people who need to work together to control the risks. Is that acceptable where lives and societal resources are at stake? To me, the situation is intolerable.

To resolve the controversy issue, we need to examine it in terms of the goals of science: we need to understand and be able to predict it, and then control it.

Why does controversy exist? My observations indicate that the controversy flows from interacting failures in current safety regulation formulation and assessment practices.



A DIFFERENT VIEW OF THE PROBLEM

  • Failure to demand regulatory objectives
  • Failure to acknowledge nature of accidents
  • Failure to use investigators properly
  • Failure to recognize consequences of regulation assessment methodology selection decision

    Figure 2.

Let’s examine each of these failures in detail.


FAILURE 1: Failure to demand safety
objectives for safety regulations.

One of the most significant and valuable contributions of Study 1 was the reported lack of documentation describing why the safety regulation in question was established in the first place. There is little doubt that the grade-crossing stop regulation was established in the name of “improved safety.” How much more safety was expected, and how was that safety improvement to be weighed against offsetting trade-offs? My experience, with few exceptions, has been that such an estimate of the expected safety improvement has rarely been demanded or offered. The grade-crossing regulation is not an exception.

Why is that significant? It means that there has been no basis for measuring the success of the regulation over the years. It also means anyone can now offer any objective they wish to introduce into the controversy as an assessment criterion. Is it any wonder controversy follows?

Such objectives can be developed and used, from a technical perspective. In some instances, estimates were provided, such as in the railroad head shield and coupler retrofit regulatory initiatives. In the Federal Railroad Administration's regulatory docket HM 144, the safety objectives were specified as a part of the rule making, but only after the accident process was defined and demonstrated, with motion pictures of reproduced accidents. The accidents could be reproduced, which means they were adequately understood and predictable.

The objectives were originally formulated to demonstrate the consequences of delayed regulatory action, but it was found they could serve as safety objectives for the rule. However, it must be recognized that they were introduced originally as a peripheral issue, forced into a predominantly economic framework. The safety objectives were not demanded as a routine part of the regulation development process. It should be noted that the objectives were used to track the success of the regulations.

The absence of a statement of safety objectives for the original stopping regulation precludes a direct comparison between the intended safety performance and the actual safety performance achieved by the regulation. Thus, the authors were forced to do the study without explicit objectives against which to measure the current or changed regulation's success. In other words, they had no assessment criteria to determine if the present regulation was achieving its intended safety performance.

Why don’t we routinely have safety objectives for regulations? My research and subsequent observations indicate this failure flows from two other technical failures.

Extensive analysis of accident investigations and the data they produce has disclosed that two major kinds of accident data are gathered and used as a basis for safety and regulatory action:

  • accident attributes.

  • accident process descriptions.

Attribute data is data describing the characteristics of an accident. These characteristics usually are static characteristics. Example: “grade-crossing” is used as an attribute of some accidents. Other attributes included train-involved, did-not-stop, unknown, less-than-10 mi/h, etc. (Study 1, p 44) Careful examination discloses that there are no consistent forms or contents of the attributes. As practiced, attributes can be anything the creator desires to propose. Process data is data describing interactions among system elements that produce an outcome. Process data is most often described in narrative form or modeled with flow-chart-type descriptions. It is constructed of dynamic events.
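To make the distinction concrete, here is a minimal sketch of the two data forms in Python. The field names are illustrative, echoing the attributes quoted from Study 1; they are assumptions for this sketch, not an actual reporting format.

    from dataclasses import dataclass, field

    # Attribute record: a flat set of static characteristics, one per accident.
    @dataclass
    class AttributeRecord:
        crossing_type: str    # e.g. "grade-crossing"
        train_involved: bool
        driver_action: str    # e.g. "did-not-stop" or "unknown"
        speed_class: str      # e.g. "less-than-10 mi/h"

    # Process description: an ordered series of actor/action events whose
    # interactions produced the outcome.
    @dataclass
    class Event:
        time: float           # when the event began
        actor: str            # the person or thing that acted
        action: str           # the concrete action taken

    @dataclass
    class ProcessDescription:
        events: list[Event] = field(default_factory=list)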


RAIL-HIGHWAY HAZMAT GRADE CROSSING ACCIDENTS
ANALYSIS AND EVALUATION CHOICES

 Attribute-based          Event-based
 o Accident factors       o Accident process
 o Abstractions           o Descriptions
 o Statistical tests      o Sequential logic tests
 o Long-term evaluation   o Real-time validation

Figure 3.

Distinctions between the two kinds of data need to be recognized to understand the controversy issue. Attribute data can be any “factor” or characteristic of an accident that anyone wishes to record; relationships among attributes are established by statistical analysis. Attributes are usually conclusions, high on S. I. Hayakawa's ladder of abstraction, introducing ambiguities that mask uncertainties. [8] Process data is interdependent; process data must be in a descriptive form that will accommodate spatial and temporal relationships and satisfy sequential logic tests. Attribute data is predominantly static; process data is dynamic. Attribute data is abstract; process data must be concrete. Attribute data lends itself to counts and their mathematical manipulation or testing; process data lends itself to interactive change analyses. Attribute-based analyses require validation by future occurrences of the same attributes; event-based process analyses are validated by observing operations for the occurrence of event sets.
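The sequential logic tests mentioned here can be illustrated with a short sketch, reusing the Event type from the earlier sketch. It checks only elementary temporal ordering and concreteness; it is a simplification offered under those assumptions, not the paper's actual test procedure.

    def passes_sequential_logic(events: list[Event]) -> bool:
        """Accept a candidate process description only if each event
        follows its predecessor in time and names a concrete actor and
        action. Real tests would also cover spatial relationships."""
        for earlier, later in zip(events, events[1:]):
            if later.time < earlier.time:
                return False   # a later event may not precede an earlier one
        return all(e.actor and e.action for e in events)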


FAILURE 2: Failure to acknowledge
nature of accidents

The many assumptions in Study 1 provide an instructive example of the problems created by using attribute data describing static abstractions about accidents. During the accident validation process in a very familiar area, hazardous materials truck grade-crossing accidents, only 161 of 440 reported accidents (less than 37%) could be “verified” as acceptable for the study. A close look at the verification process discloses it only verified that the vehicle was a truck or tractor-trailer while simultaneously indicating that hazardous materials were being transported by either the highway user or both the highway user and the railroad. In other words, the validation required only that the attributes be consistently reported. IT DID NOT VALIDATE THAT THE DATA ACCURATELY DESCRIBED THE ACCIDENT PHENOMENA IN A PROCESS SENSE. The 63+% dropout rate, based on a relatively simple reporting decision by the accident investigators and reporting organizations, raises grave doubts about the quality of the remaining data about the accident characteristics the study authors were compelled to use, and even graver doubts about their validity as process descriptors.
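A sketch of that kind of consistency-only screen makes the limitation visible. The record fields here are hypothetical, and, as the comment notes, nothing in the check touches whether the attributes faithfully describe the accident process.

    # A record survives if its attributes agree with the case selection
    # criteria; fidelity to the accident process is never tested.
    def verify(record: dict) -> bool:
        is_truck = record.get("vehicle") in ("truck", "tractor-trailer")
        hazmat = record.get("hazmat_carrier") in ("highway user", "both")
        return is_truck and hazmat

    reported, verified = 440, 161   # figures quoted from Study 1
    print(f"dropout rate: {1 - verified / reported:.0%}")   # about 63%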

A more subtle problem with the attribute-based approach is the inherent adoption of the “single cause” perception of the accident phenomena. This results from the treatment of attributes in isolation from other elements of the accident phenomena, to distinguish the independent from the dependent variables. Tables in Study 1 have extensive listings of individual attributes and break them down into percentages from which conclusions are drawn. This quickly and almost inexorably leads to the “single fix” presentation of data and mindset. For example, if we show you the relative proportion of driver actions related to accidents in a pie chart, what is your initial reaction? Most frequently, it is to do something about the “did not stop” accidents.

Does this tell us that the stop regulation is ineffective, or being ignored, or what? Obviously, we need more data. Don't we always need more data when we start trying to interpret attribute data to determine what action to take?

Study 2 commented on some of the strained interpretations of the attribute data and the conclusions drawn from that data. It is revealing that Study 1 shifts away from the attribute data and toward process data in Chapter 6, toward the end of the study.


FAILURE 3: The failure to use
accident investigators properly.

The domination of the attribute-based approach for regulatory assessments also leads to misuse of accident investigation and reporting resources. The fact that only 161 of 440 reported accidents (less than 37%) could be “verified” as acceptable suggests a shameful waste of effort. From previous research, the reasons this occurs are clear.

Two types of accident investigation and reporting approaches drive the development of accident data. Investigators function either as

(a) data gatherers, filling in forms to provide data for others to analyze and use, or

(b) researchers trying to understand the phenomenon they are investigating.

Most accident data (probably 99% in one author's experience) is generated in support of (a). For forms preparation, replicability of the entries is of far greater concern than replicability of the accident phenomenon. [8] Yet understanding, prediction and control of the accident phenomena is required to produce effective safety regulation and assessment. Despite this, only the MORT accident investigation program routinely makes an effort to satisfy the latter need, but it relies heavily on abstractions in checklists. [9] Some governmental agencies (NTSB, DOT) do (b) type investigations in limited numbers, but usually only in major accidents. A new book about accident investigation takes the issue a step further. [10]

The significance of the extensive use of data gatherers is that the approach de-emphasizes the demand to understand the phenomena, so they can be controlled promptly. This approach poses ethical questions, too: it is a disguised form of testing on the public, an issue that the late Henry Wakeland expressed so well at the NTSB in the late 1960s. Yet it is still widely practiced.

The bottom line is that most investigation time is wasted.


FAILURE 4: The failure to acknowledge the
consequences of assessment methodology decisions

Subtly influencing each of the above failures is the decision about the analysis and assessment methodology selected for safety issues. The authors of Study 1 initially selected the popular analysis methodology currently driven by statistical inference technology and inductive or deductive logic. Is this important?

Our research has shown that the selection of statistical inference techniques using currently available accident data has grave consequences for the

  • development of regulatory actions

  • design of investigation programs

  • work of investigators

  • safety analyses and assessments that can be produced

  • monitoring of safety performance

These issues have been reported elsewhere in detail. [6],[11],[12]

Note that the judicial sector is awakening to the problem, too. [11] The reason for highlighting these issues again here is that we are convinced, without reservation, that the methodology selection decisions contribute directly to the controversies about proper regulatory safety action, and to our inability to resolve these controversies.

RESOLVING THE ASSESSMENT ISSUES

For 12 years at the National Transportation Safety Board, one of the authors was faced with producing recommendations from single hazmat accidents, because the types of major accidents investigated were so infrequent that it was not possible to accumulate enough cases in a database to draw statistical inferences or analyze trends. In other words, we just couldn't wait to build a database from more Bhopals or Texas Citys. Also, it was considered unethical to adopt a methodology that required more major accidents to occur before our premises could be “proven.” Therefore, a search for alternative approaches was initiated. Since leaving the Board, additional research and applications have provided even clearer insights into these solutions.

In retrospect, existing methodological alternatives rest on two fundamental approaches. One is based on accident attributes and the other on accident events descriptions. Both were tried, to perform various functions, including accident investigations, safety problem definition, development of safety action recommendations, assessment of safety programs, assessment of regulations and others. Differences became clear during these applications. How are they different and what do those differences mean to the assessment controversies?

The consequences resulting from the selection of either approach were found to be very significant.

One of the major consequences was the difference in the rigor of the “building blocks” that are used in the tasks. Attribute-based work uses “factors,” a term that does not disqualify anything related to an accident from consideration or analysis. Event-based work, on the other hand, uses “events” which, while not defined with precision for many years, were disciplined by the need to satisfy at least temporal and spatial sequential logic tests. In applications, the main problem with “factors” has been and still is their ambiguity, and the delays encountered in determining relevance. Since any input could and would be entertained with statistical analysis methods, the data screening for relevance did not occur until substantial data collection effort had been expended. As a result, a factor could gain a life of its own before its validity was disproved using conventional statistical methods.

A second result was that the huge numbers of hypotheses and quantities of data overloaded the analysis systems. That meant analysts had to be selective about the data used, which created cascading problems and opportunities for “second guessing.” Event-based work, on the other hand, dealt with a different type of data which was organized by the end of the investigation. You knew what part of the accident process you understood, and what you didn’t understand. The surviving data had also been tested for relevance by logically determining if the description of the accident process was feasible and fit all the “evidence” left by that process. Controversy was easy to narrow and resolve with further investigation, with simulations or testing, or even with structured and disciplined speculations by experts.

Another major technical consequence was the level of abstraction that became involved. The greater the degree of abstraction tolerated, the less the discipline exerted on investigators. Attribute-based work was observed to resolve ambiguities by moving UP Hayakawa's ladder of abstraction, toward restatement of factors in more general terms to overcome the ambiguities. Thus it is easy to report “human error” as an accident cause with attribute-based investigations, but almost impossible to do so with event-based methods.

Another consequence was that the more abstract the “factor,” the more difficult it became to use it as a basis for daily control of the risk. For example, human error is an abstraction that must be restated in more concrete terms before an effective control action can be determined or recommended. Event-based work, on the other hand, tended to move down the ladder of abstraction toward more and more concrete descriptions of the deviation from expected actions. In other words, attribute-based work tended to overcome ambiguity and controversy by moving toward more generalized descriptions, increasing the potential for controversy, while event-based work tended to become more concrete, decreasing the area for disagreement.

Another major consequence was the credibility of the results of the “truth” tests applied to the accident data. By truth tests, we mean validating the role of a “factor” vs. an “event” in the accident. Without going into arguments about the stochastic vs. deterministic nature of phenomena, let us point out that the attribute work (driven by stochastic views) tested data of any kind with count or measurement-oriented statistical tests, while event-based work tested data with precede/follow interactions and sequential logic tests.

Now, the significance lies in the discovery that attribute-based data was rarely expressed conclusively, while event-based methodological choices led to descriptions that could be demonstrated and found credible by reasonable observers. In addition, event-based choices provided for the real-time definition of accident process unknowns, and thus defined the remaining data needs during an investigation, rather than months or years later when aggregated data from more accidents was analyzed. Thus, the technical testing did not require additional accidents to occur to determine data validity or additional data needs, overcoming a major ethical problem.
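One way to picture that real-time definition of unknowns is a sketch that flags gaps in a partially built event sequence. The gap threshold and the Event type from the earlier sketch are assumptions for illustration, not the paper's method.

    def unknown_gaps(events: list[Event], max_gap: float) -> list[tuple[Event, Event]]:
        """Flag successive events separated by more time than a direct
        interaction plausibly allows; each gap marks a part of the
        process that is not yet understood, i.e. a remaining data need."""
        return [(a, b) for a, b in zip(events, events[1:])
                if b.time - a.time > max_gap]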

Another technical benefit was that the event sequencing provided an orderly method for identifying and evaluating event control actions, producing a much larger choice of control actions; the accident process could be controlled at any point in the process by introducing a change in one of the events or an event pair.
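A sketch of that enumeration, under the same assumed Event type: every event and every adjacent event pair becomes a candidate point at which a change could interrupt the process.

    def candidate_control_points(events: list[Event]):
        """List intervention candidates: each single event, plus each
        adjacent event pair, is a place where introducing a change
        could break the accident sequence."""
        singles = [(e,) for e in events]
        pairs = list(zip(events, events[1:]))
        return singles + pairs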

Another technical benefit was the ability to observe real-time operations to determine whether accident events or event sets were still present in the operations, and to assess the success of controls that had been implemented in controlling the phenomenon. The difference is like being able to watch vehicle and train behaviors at crossings to look for specific actions by each vehicle, versus sitting there waiting for an accident to happen so you could capture its abstract attributes.

The dwell time work introduced in Study 1 illustrates the ultimate need to try to understand the accident process before acting, and is very commendable in that respect. As this is written, the outcome of that technical work is not known, but it is likely that it can contribute to resolution of controversy rather than intensifying it. Because it moves down the ladder of abstraction toward a more concrete process description, the new work will contribute to the more definitive control of the accident process.


CONSEQUENCES OF ANALYSIS METHODOLOGY SELECTION

ATTRIBUTE-BASED
o Regulate factor
o Investigate to test hypothesis
o Accident investigators = data gatherers
o Analyze data to test for significance
o Monitor trends

EVENT-BASED
o Control process
o Investigate to understand accident process
o Accident investigators = hypothesis developers
o Analyze event sets for consequence
o Monitor operations

Figure 4.

NON-TECHNICAL IMPLICATIONS FOR REGULATION ASSESSMENT

Thus far, the technical implications of the alternative approaches have been discussed. Of far greater significance to the issue are the managerial implications of the alternative assessment methodologies. A shift toward event-based analyses can provide regulatory, operational and scientific managers new opportunities to overcome the failures cited above. From a managerial perspective, objectives for regulatory safety actions can be established promptly by focusing on the specific event pairs or sets that are to be controlled by the regulatory action. The regulation can be assessed by watching future operations to see if those event pairs or sets have been eliminated or controlled, as sketched below. The objectives can be defined and assessed in terms of the exposure, frequency and consequence ranges, as illustrated by the HM 144 rule making.
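A minimal sketch of that monitoring idea, assuming event labels are simple strings recorded by observers watching operations at crossings; the function name and data shapes are hypothetical.

    def regulation_effective(target_pairs: set[tuple[str, str]],
                             observed_sequences: list[list[str]]):
        """Scan observed operation sequences for the event pairs the
        regulation was meant to eliminate; the regulation is judged
        effective if none of the target pairs still occur."""
        hits = [pair
                for seq in observed_sequences
                for pair in zip(seq, seq[1:])
                if pair in target_pairs]
        return len(hits) == 0, hits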

From that analysis, target safety performance objectives can be established by the regulating organization, whether it be a governmental organization, industry group setting standards, or individual organizations establishing safety procedures. Thus we can move away from the ambiguities and potential for controversy introduced by the attribute-based approaches, toward an approach that fair-minded persons can understand and consider reasonable.

A second benefit is the opportunity for redirection and reorganization of the data acquisition efforts required to support safety actions and assessments. The redirection would include

  • a restatement of the grade-crossing accident investigation mission, objectives and approaches to produce accident process descriptions,

  • establishment of a plan for coordination and conduct of those investigations and assessments that are undertaken,

  • changes in the investigative and assessment methodologies,

  • coordination of the accumulation and analyses of the investigation work products, and

  • new feedback and assessment schemes after actions are taken.

The new mission of accident data acquisition and assessments could be to understand, predict and control accident processes to achieve a reasonable and predetermined level of safety performance in the operation of interest, such as highway operations at grade crossings. The new mission should acknowledge that both pre-mishap assessment and ongoing assessments of control actions such as regulations are needed.

The new objectives could be to ensure thorough understanding of the accident process, and presentation of that understanding in a new descriptive format to which everyone could contribute harmoniously and in which the uncertainties could be described in persuasive ways. Event-based displays have been found to achieve both objectives.

The event-based displays have also been used as a vehicle to coordinate the contributions and facilitate the acquisition of data to fill gaps in accident understanding. The primary benefit has been the ability of event-based work to define unknowns on which attention should be focused to reach a common understanding of an accident process. This has permitted parties with widely divergent interests to work together toward a common goal, understanding the accidents and how to control them, from which informed control actions can be undertaken with minimal controversy. The experience has also included participation in investigations where a thorough understanding of an accident has prompted unilateral action by several parties. That understanding has also provided the basis for subsequent monitoring of their effectiveness.

Accompanying these experiences has been an awareness of the need to reconsider assessment methodologies, and also of a way to do the assessment better. Time and again, we have observed a lack of confidence in “the numbers,” and seen a process description accepted without dispute if it looks plausible. An unequivocal and complete description of an accident has been found to be a persuasive motivator of action, when presented candidly to show all the knowns and unknowns.

Another need is a way to aggregate data, but for reasons that differ from current practices. Accident data needs to be aggregated in a way that permits the widest possible dissemination of each accident description as soon as we know what happened. Here again, it has been found that event-based descriptions of mishaps, properly done, enable both operational and research actions to be taken on individual accident cases. Unfortunately, aggregation of data today is aimed primarily at supporting hypothesis testing. We really need to take a new look at what we are doing, why we are doing it that way, and what the payoff is.

Another need is to change our perspectives about what we investigate. Attribute-based approaches essentially demand that we gather data about significant accidents that have occurred, where “significant” usually means above some threshold loss value. With event-based approaches, it has been found that lesser mishaps and even near misses, in which the learning potential is not distorted by the need to determine who pays for the loss, are much more valuable to understanding mishap processes than accidents where the witnesses are gone or the things involved are demolished during the mishap. Further, the experiences can be related in a non-threatening way to current operations and work processes.

Finally, event-based approaches provide the basis for future assessment of current safety actions, in that the structure of the mishap descriptions permits real-time monitoring of current activities and how they are performed. Thus, high-risk event sets can be pre-investigated, that is, addressed before serious mishaps occur, in a constructive rather than a punitive way.

The technology to support these changes is in place. However, administrative and scientific managers' determination to apply it is not in place. It seems a safe prediction that, as the potential for reducing controversy, the ability to assess safety and economic effectiveness, and the ethical considerations become more widely recognized, we will begin to see both administrative and scientific managers' demands of investigators and assessment analysts changing rapidly. It was a costly experience, but we commend the history of the resolution of the tank car head-shield controversy [13] to you as must reading if you really want to find out how to begin resolving controversy about regulatory actions and their assessment.


FOOTNOTES


[1] Advanced Notice of Proposed Rule Making 82-10, FR 47:22
[2] Bowman, B.L. and McCarthy, K.P., (1985), CONSEQUENCES OF MANDATORY STOPS AT RAILROAD-HIGHWAY CROSSINGS, Goodall-Grivas, Inc., Southfield, MI.
[3] Events Analysis, Inc., (1986), TASK ORDER NO. 1 REPORT UNDER BASIC ORDERING AGREEMENT NO. NTFH61-85-A-00002, February 1986.
[4] King, K., (1978), FEASIBILITY OF SECURING RESEARCH DEFINING ACCIDENT STATISTICS, Publication 78-180, National Institute for Occupational Safety and Health
[5] Surry, J., (1969), INDUSTRIAL SAFETY RESEARCH: A HUMAN ENGINEERING APPROACH, University of Toronto, Toronto, Ontario
[6] Benner, L., (1985), RATING ACCIDENT MODELS AND INVESTIGATION METHODOLOGIES, Journal of Safety Research, Fall, 1985
[7] Kjellen,U., (1983), ANALYSIS AND DEVELOPMENT OF CORPORATE PRACTICES FOR ACCIDENT CONTROL, OARU, Royal Institute of Technology, Stockholm, Sweden
[8] American National Standards Institute, (1969), METHOD OF RECORDING BASIC FACTS RELATING TO NATURE AND OCCURRENCE OF WORK INJURIES: Z16.2, ANSI, New York.
[9] Johnson, W.G., (1980), MORT SAFETY ASSURANCE SYSTEMS, Marcel Dekker, New York.
[10] Hendrick, K.M., and Benner, L., (1986) INVESTIGATING ACCIDENTS WITH STEP, Marcel Dekker, New York.
[11] Marshall, E., (1986), IMMUNE SYSTEM THEORIES ON TRIAL, Science 234:1490
[12] Hayakawa, S.I., (1978), LANGUAGE IN THOUGHT AND ACTION, Harcourt Brace Jovanovich, New York.
[13] Amendment 179—19 (HM 144), FR 42: 46313, Sept. 15, 1977