Research Report

(See note about more recent development)

RANKING SAFETY RECOMMENDATION EFFECTIVENESS

 By Ludwig Benner, Jr.
12101 Toreador Lane
Oakton, VA 22124
© Copyright 1992 by Ludwig Benner, Jr.


Abstract

This paper describes observed deficiencies in the investigation-related recommendation development process, discusses their adverse effects, and presents alternatives available to abate those deficiencies. Main deficiencies found include present practices used to identify, rank and select problems to address with recommended actions; practices used to identify, evaluate, rank and select recommended actions to eliminate or control those problems; and present recommendation follow-up practices. Additional deficiencies in investigator training programs found during the development of the paper are also identified.

The author ranks safety recommendation effectiveness low.

Introduction

This paper started out to rank the effectiveness of safety recommendations. The premise was that some ranking system was needed, and it would be worthwhile to discuss how that might be accomplished. As the paper evolved, a need to present the issue in the broader context of the recommendation development process became clear.

 For many years accident investigations were based on the premise that we investigate accidents and make recommendations to prevent similar future occurrences. Prevention requires that we understand what happened, and why it happened, and from that understanding develop recommendations to prevent recurrence.

This paper focuses on what investigators do after they have identified what happened during an investigation. My research into this process during the past 20 years has disclosed many deficiencies in safety recommendation development practices. By any measure these deficiencies constitute a major weakness of the investigation system. My experiences indicate that these deficiencies need to be recognized and acknowledged by our investigation community before we can hope to achieve genuine "investigative excellence" in the future.

Investigation Recommendation Practices

To initiate this presentation it is helpful to look at what has been happening in the investigation field with respect to recommendation development.

Some practices have been changing.

Recommendation practices have changed somewhat over the years. In the good old days when one cause was reported for each accident, a recommendation was typically advanced to fix the cause and prevent future recurrences. That view still influences many observed small private investigation programs, and a few not-so-small programs.[1]

 As the investigating community acknowledged that accidents were more complicated than indicated by a single cause, investigators started making more than one recommendation after their investigation. That view also influences present practices[3].

 The introduction of the view that anything "unsafe" observed during an accident should be addressed by a recommended action has also evolved in some circles. That practice also continues to this date.[4]

Still another change was the practice of sending recommendations to fix a problem to more than one party. The basis for this practice has not been identified or articulated, and on its face it does not seem unreasonable. It may result from perceptions of the responsibilities borne by each recommendation recipient, relative to the "problem" or "issue" as defined by the investigator.

One result of such changes is that recommendations have proliferated. It is not uncommon for accident reports to have many recommendations. For example, a recent NTSB report contained 16 recommendations[5]. Interestingly, all carried a Class II Priority follow-up rating. (More about that shortly.)

 The arrival of system safety ideas and methods has changed the framework for thinking about the identification of risks and their elimination or control through the application of sound management and technical processes. This has affected the ways risk raisers and risks are identified, ranked and eliminated or controlled. These ideas and practices are beginning to influence what is happening in the accident investigation field. They influence or provide criteria for most of the points made in this paper.

Changes that haven't happened.

Except for the changes noted above, most aspects of the recommendation development process have remained essentially unchanged over the years. Accident investigators are usually tasked with developing the recommendations that flow from an investigation[6]. The conventional wisdom among investigators and program managers seems to be that investigators can and will make good recommendations after they determine what happened and the causes. This assumption is rarely reviewed or challenged.

Another aspect of recommendation development that has not changed much is the perception of the knowledge, skills and workload required to produce recommendations. Everyone seems to assume investigators have the required capabilities. Appendix 18 of the ICAO Annex 13 Investigation Manual, for example, describes topics in investigators' training courses; recommendation development training is not among topics listed in that Appendix. The 192-page MORT Accident Investigation Manual contains 2 paragraphs about making recommendations. Additional evidence of this point is the distribution of resources for determining what happened vs. the development of recommendations during an investigation. Within my experience, the observed ratio is typically about 90% man-hours invested in determining what happened and preparing a report of those findings, versus about 10% in recommendation development.[7]

Another noteworthy "unchanged" is recommendation follow-up practice. Follow-up continues to focus on implementation of recommendations, rather than their effectiveness in solving the problems they address.[8] The occurrence of another accident is typically the basis for determining the success of the recommendation or prevention program.[9] Some organizations link subsequent events to previous recommendations in their reports. I have observed that repetitive accidents are viewed as implementation failures or management follow-up failures[10] rather than failures of the investigation or recommendation development process. For some reason, recommendations usually are considered above reproach.

What is their significance?

These circumstances exist in a changing safety environment. They are significant because the investigation community has not kept pace with the new environment.

New system safety thinking involves new ideas about the accident phenomenon, safety management, technical analysis methods, and risk estimation and acceptance decision making, among others. This change has introduced new but thus far largely ignored issues for investigators.[11] The perception of the accident phenomenon as a process to be understood and controlled - prospectively - is spreading rapidly. Contemporary safety management practices call for the discovery, understanding, prediction and control of risks and risk levels before significant losses occur. New understanding of the risk acceptance process elements requires prediction of uncontrolled and residual risks. It also requires monitoring future activities after a risk has been accepted.

The bad news is that most investigators' approach to recommendation development today is not geared to these needs. The deficiencies arise because of unacknowledged differences in the tasks involved. Investigations to determine what happened are retrospective; the investigator must figure out what happened from "historical" data, after the fact. Recommendations, on the other hand, are designed to influence future behavior and performance, and must therefore be predictive in nature.

Development of safety recommendations involves forward-looking tasks and methods. Investigation involves a backward-looking effort to find out what happened and why. Differences include data sources, data acquisition methods, problem assessment methods, effectiveness assessment and prediction methods, quality control processes and future monitoring and verification tasks. Writing approaches, styles and contents also differ markedly.[12] Yet to my knowledge only one document now makes these distinctions and describes these process elements in detail. If recommendations are so important, why does this situation exist? Others have made the same point.[13]

The good news is that compatible approaches are readily available for investigators to adopt. They are congruent with some very basic principles, with which I believe we can all agree.

 First, I think all competent investigators and managers agree that there is not enough money in the world to fix every safety "problem" or "hazard" investigators can find.

Secondly, competent investigators will agree that during accidents many events must occur to produce the observed outcome. Removing any event in the scenario would prevent that precise accident outcome from recurring.

 Third, competent investigators will also agree that changing certain of these events will expand the benefits from preventing that accident to preventing a broader group of similar scenarios in the future, e.g., prevent similar kinds of accidents.

 Fourth, competent investigators will agree to the corollary of this principle - that it is not necessary to remove or control all hazards discovered during an investigation to prevent similar accidents. The most successful recommendation will have the broadest loss-reduction effects.

In my research into the accident investigation and recommendation development processes, these principles helped provide insights into the kinds of observations to make, and questions to ask. I discovered that the following questions helped expose some serious problems with the present recommendation process:

* How was a specific accident event (read problem) determined to require a recommendation?

* What action choices to eliminate or control that event/problem were considered?

* What is the predicted safety effectiveness of each action choice considered?

* What was the rationale for the selection of the action finally recommended?

* How can the predicted effectiveness and benefits of the recommendations be verified over the life of the system, with minimal losses?

How was a specific problem determined to require a recommendation?

One major insight from my research was my recognition of this act as the fundamental decision leading to all recommendations. The investigator thinks there is a problem that should be fixed. So he or she decides to start working on a recommendation to fix it. From that decision flow all other recommendation development actions.

I considered it noteworthy that investigators are typically charged, uncritically, with this task. We have already shown why they may not be the best ones to exercise this recommendation development function. At this point, I would add another objection that arose. Investigators need to recognize that this action may compromise their independence, because they are usurping a managerial prerogative to decide whether a risk (problem) is acceptable or unacceptable. But that's how it is, so let's acknowledge this reality and go on.

How do investigators reach this decision? The predominant method for selecting problems to fix that I have observed is based on concepts of "cause." Investigators make judgment calls and draw conclusions about what they deem "the cause" or "causes" of the accident, or its "causal factors." The need for a recommendation is then clear to the investigator: if you act on the cause(s) or causal factors, the accident will be prevented in the future.[14] Thus a judgment about cause(s) or a conclusion about an issue in a very subtle way drives the determination of a need to act.

The usual result of this approach is to propose one or more recommendations per cause or "issue." Because the need is already established by their judgment call, investigators can proceed with their recommendation work without any analysis of the future significance of the problem. The approach provides one easy quality control criterion: is there a recommendation for each cause? It also reduces investigator "thinkload." [15]

The bad news is that the approach circumvents important basic questions raised by new system safety-based risk management ideas: is the problem properly defined, is it worth fixing, will the proposed actions fix it for the remaining life cycle, and how do you expect to validate your answers?

Another observed approach, in energy-related activities, was to use DOE's Management Oversight and Risk Tree (MORT[16]) program to define the problems that will be fixed. MORT provides a check list of 1500 safety program elements that can be used to define problems (if the problems are on the check list). Unfortunately, the MORT method reverts to a judgment call to determine what is a problem. The investigator (or agency) must decide if the check list's safety program element observed in the accident investigation is "less than adequate."

Another observed method was to discuss, in an analysis section of a report, the accident events and the writer's interpretation of the problem that discussion demonstrates. This approach relies on the logic of the arguments supporting the problem statement. The good news is that it is superior to the "cause" approach in that the rationale for defining the problem is somewhat more logical, documented and published. The bad news is that it too neglects the basic questions: is the problem worth fixing, will the proposed actions fix it, and how will you validate your answer?

What action choices to eliminate or control each event/problem were considered?

Once an investigator decides a problem needs fixing, for whatever reason, how does the investigator identify potential or candidate actions to "fix" the problem?

My observations disclosed that the investigators' technical approach determines how many options they will identify, and how those options are treated. Some investigators sign off when they are satisfied they have fixed "the cause." Others thoughtfully look at all the information they got from the accident. Each approach produces different results. Observed "methods" include reliance on the investigator's intuition and good "common sense" judgment, reliance on the investigator's "experience" and knowledge of the system, forms of technology transfer techniques, a form of change control, and group brainstorming, among others. The observed processes, individually or in the aggregate, provided no replicable procedure, because they are individualized techniques, not systematic, methodical, formalized or validated.

 The deficiency here was twofold. The processes used to search for options were well intentioned but ambiguous, uniquely personal, unstructured or undisciplined. Observed results were erratic, controversial or unconvincing, frequently addressed "last year's problems" and left unaddressed questions about their efficacy.

What is the relative safety effectiveness of each choice considered?

Another set of observations relates to how choices are treated when they are considered. I have observed during investigations that alternative actions to solve a problem or meet a need are "considered" or thought about. When this occurs, the relative effectiveness may be discussed but the relative safety effectiveness rarely is evaluated, or documented and reported.

The result is that we rarely see predictions about the expected effectiveness of recommendations, and thus can't monitor performance to verify their predicted effectiveness.[17]

We don't do it because few managers know it can be done, and therefore few ask for it. Until now, criteria for recommendations have been couched in generalized, abstract terms such as clear, concise, logical, thorough, etc. I couldn't find a single instance where the predicted safety effectiveness of a recommended action was a requirement in any widely-used investigation manual[18] before we resolved this issue with our development work. In recent years the NTSB has published a requirement to analyze proposed recommendations[19]. Item 5 requires consideration of alternatives and Item 12 could be interpreted to require consideration of the effectiveness of the recommendations selected.

 
Figure 1 NTSB Recommendation Evaluation Criteria
Proposed Safety Recommendation
Analysis and Justification
  1. Accident location and date: (or special study title)
  2. Text of proposed recommendation:
  3. Proposed addressee:
  4. Problem addressed by the proposed recommendation: (state page of report where facts, analysis and conclusions are found in support of this recommendation)
  5. Describe any alternate approaches considered:
  6. If regulatory action recommended, discuss reasons for this approach over voluntary industry action or less formal "guideline" approach:
  7. What organizations have capability to implement this recommendation?
  8. Why was proposed addressee chosen to receive this recommendation?
  9. What constituency will benefit from proposed action? (e.g., public, users, operators, management, etc.)
  10. What previous safety recommendations have been issued related to this problem? Should old recommendations be closed in any way? (Attach list)
  11. What is the estimated first response from the addressee?
  12. How is completion of action to be measured? What is the final result of the recommended action to be?
  13. Other comments to justify this proposed recommendation.
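One way to make a form like Figure 1 auditable is to capture it as a structured record rather than free text, so that items such as alternatives considered (Item 5) and completion measures (Item 12) can be checked later. The sketch below is a minimal illustration; the field names paraphrase the figure's items, and the filled-in values are hypothetical, not drawn from any NTSB report.

```python
# Sketch: the Figure 1 analysis form as a structured record, so blank
# items can be detected mechanically. Field names paraphrase the figure;
# the example values are hypothetical.

from dataclasses import dataclass, field

@dataclass
class RecommendationAnalysis:
    accident: str                       # Item 1: accident location and date
    text: str                           # Item 2: text of proposed recommendation
    addressee: str                      # Item 3: proposed addressee
    problem: str                        # Item 4: problem addressed
    alternatives_considered: list[str] = field(default_factory=list)  # Item 5
    completion_measure: str = ""        # Item 12: how completion is measured

r = RecommendationAnalysis(
    accident="Hypothetical rail accident, 1991",
    text="Install derailment detection devices on tank cars.",
    addressee="Hypothetical regulatory agency",
    problem="Undetected derailments propagate to tank car breaches.",
    alternatives_considered=["voluntary industry program", "lower speed limits"],
    completion_measure="reduction in breach scenarios over 5 years",
)
# A blank Item 5 would be visible immediately, instead of buried in prose.
assert r.alternatives_considered, "Item 5 left blank: no alternatives recorded"
```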

The NTSB has only rarely reported the options considered and the predicted safety effects of its recommendations. As a result, neither the investigation community (nor anyone else) has any way to determine whether the requirements were observed during the development process, or what safety effects the 8000+ NTSB recommendations were expected to produce. Of course, there is now no way to validate the recommendations. That also means we don't truly know if the problems addressed were solved permanently or not. NTSB's follow-up system addresses implementation but not safety results achieved.

The previous questions must be acknowledged before this question has relevance. Since the options are not documented, their relative effectiveness or value need not be addressed and reported. I have observed several adverse consequences of this deficiency. The line supervisor or manager responsible for the activity, who must act, does not have a genuine choice or a decision to make, because the wrong person (the investigator) preempted the decision, and went public with it - at the wrong time. I have also seen investigators pass over or prematurely dismiss some options which could solve more than one problem or issue, diminishing the effectiveness of the investigation and recommendation processes and their outputs.

A proper answer requires a prediction of the safety effects of an action option on the future operation of the system over its remaining life cycle. When the requirement is addressed, I have observed that the recommendation developer thinks of the longer term more often. The "thinkload" is greater, but the results were worth the effort.

 A final observation. Knowledge of the system and its operation is required for this step to be reasonably effective and credible. This requires a different form of system definition - one that is compatible with the prediction needs. But that is another whole new deficiency recommendation developers need to confront.

What was the rationale for selecting the recommendations made?

I can't recall any investigations where only one recommendation could be identified to solve a problem when systematic development procedures were used. That meant the investigators got involved in deciding which solution to select. I could not identify any particular thought process or procedure to guide this process in any documents, manuals or investigator discussions. The influence of cause or causal factors does not address this need.

Reflecting on my experiences during investigations, the recommendation selection process involves trade-offs besides safety effectiveness any time more than one recommendation possibility surfaces. Figure 1 shows some of these considerations. These trade-offs can get increasingly complicated as the breadth of the effects of the recommendation expands beyond the immediate operator or operation. That experience led to our attempt to define and document the trade-off identification, weighting and weighing tasks in our book.[20]
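To illustrate the weighting and weighing idea in miniature, a weighted decision-matrix sketch follows. The criteria, weights and scores are hypothetical placeholders, not the procedure documented in the book; the point is only that the trade-off judgment becomes explicit and reviewable once it is written down.

```python
# Minimal weighted decision-matrix sketch for comparing candidate
# recommendations. Criteria, weights and scores are hypothetical.

criteria = {"predicted risk reduction": 0.5,   # weights sum to 1.0
            "ease/cost of implementation": 0.2,
            "breadth of scenarios covered": 0.3}

candidates = {
    "add interlock":   {"predicted risk reduction": 9,
                        "ease/cost of implementation": 4,
                        "breadth of scenarios covered": 7},
    "warning placard": {"predicted risk reduction": 3,
                        "ease/cost of implementation": 9,
                        "breadth of scenarios covered": 4},
}

# Weighted score per candidate; higher is better under these assumptions.
for name, scores in candidates.items():
    total = sum(weight * scores[c] for c, weight in criteria.items())
    print(f"{name}: weighted score {total:.1f}")
# -> add interlock: 7.4, warning placard: 4.5
```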

When more than one recommendation is identified, the investigator can propose all the choices, or may choose to recommend only a few or even just one of the choices. In the report cited earlier, why were 16 recommendations necessary? Did they all merit equal priority, as suggested by the follow-up classification? Were they all equally necessary to get "the problems" fixed? What reduction in risks is expected for each in the years ahead?

Some investigative organizations take the position that sound trade-off analyses are not a part of the mission of a safety recommendation organization. This is irresponsible, because once a recommendation becomes public knowledge, the recipient, who is already on the defensive because of the occurrence of the accident, is faced with a loaded gun pointed at the forehead. If he argues the validity of the recommendation, he takes an anti-safety position. Time for rational consideration of trade-offs has been preempted by the recommending organization, which shoulders no responsibility for the consequences, particularly in view of the monitoring deficiency described next. More detrimentally, controversies of this kind divert energies that could be devoted to action on priority problem lists based on bona fide safety improvements.

How will the predicted effectiveness and benefits of the recommendations be verified over the life of the system?

Among the most troubling observations during my research into the recommendation development processes were the deficiencies noted in the so-called "recommendation follow-up" process.

First, follow-up systems are misdirected, subverting accountability for the recommending function in an organization. They focus on implementation of recommendations rather than the effectiveness achieved by their implementation.[21] The bad news is that without any estimate of the risk associated with a problem, and the reduction likely to be achieved with a recommendation, this will not be remedied without significant changes.

 I also observed another subtle but much graver problem created by these deficiencies. Nobody has a way to measure the worth or value or effectiveness of the recommending organization. For example, what safety improvement was the expenditure of roughly a half billion dollars over the past 40 years for the NTSB function expected to buy? What improvement DID it actually buy?

Other deficiencies in the investigation process create cascading problems. For example, the emphasis on judgments of cause or causal factors, and the resultant ambiguity about specific actions to monitor in future accidents, make it impossible to design an operational monitoring program to decide whether the accident process steps addressed have been eliminated or controlled. I have yet to see a "cause" that would produce an accident every time it was observed, and thus could serve as a basis for proactive monitoring of activities. Until this deficiency is abated, measurement of recommendation success will continue to depend on retrospective body counts and trends in rates.[22]

Deficiency abatement.

Can these deficiencies be remedied? My unequivocal answer is yes, by taking advantage of technology that has emerged in non-investigation fields.

a. Problem selection deficiency.

The first deficiency can be abated by changing our use of the term "event." By disciplining our use of the term rigorously, investigators can identify unambiguous actions and pairs of actions that need to be controlled. Our ISASI forum paper on Quality Control[23] shows how this is done. A research report for the Department of Transportation expands on the approach, describing the event pairing method for defining problems, and results obtained.[24]
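To make the disciplined usage concrete: in the multilinear events sequencing/STEP literature an "event" is one actor performing one action, and problems are framed as ordered pairs of such events. The sketch below illustrates that representation; the accident events shown are hypothetical, invented for the example.

```python
# Minimal sketch of disciplined "event" usage: each event is one actor
# performing one action, and candidate problems are ordered event pairs.
# The accident data below are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    actor: str    # the person or thing that acted
    action: str   # what the actor did

@dataclass(frozen=True)
class EventPair:
    earlier: Event   # event that enabled or triggered...
    later: Event     # ...this succeeding event

valve = Event("operator", "opened drain valve")
spill = Event("solvent", "flowed onto hot manifold")
fire  = Event("solvent vapor", "ignited")

# Each pair is a candidate "problem": a relationship a recommendation could
# break by changing the earlier event, the later event, or the link itself.
pairs = [EventPair(valve, spill), EventPair(spill, fire)]
for p in pairs:
    print(f"{p.earlier.actor} {p.earlier.action}  ->  {p.later.actor} {p.later.action}")
```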

To help investigators identify problems, the MORT investigation approach provides a 1500-item check list of technical and management safety system elements that can be used to identify problem relationships, to a degree. With MORT, the relationships are focused on the adequacy of the safety management system and technical factors, rather than individual events in an accident. As with any check list, new problem discovery is inhibited by any methods based on capturing "experience."

 Other techniques use "logic tree" approaches or "root cause" selection methods or unsafe acts/unsafe conditions judgments to identify problems. However, at this writing their outputs do not meet the need I've identified. As far as I have been able to learn, the STEP investigation system, resulting from research into the process, is the only system to provide the needed tools for the systematic discovery, documentation, evaluation, ranking and quality control of problem statements.

A widely-used, proven approach to showing the significance of problems that need fixing, and ranking them, is to use the MIL-STD-882 Risk Assessment Code (RAC) procedures. Showing the RAC for a problem when no action is taken can show the relative gravity of a problem. Hazards discovered during the analysis process are regularly assigned RAC codes for that purpose. Numerous refinements are available. The approach has been adapted to military systems, among many others.
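As a rough illustration of how a RAC assignment can rank problems, the sketch below pairs a severity category with a probability level and looks up a code, in the spirit of MIL-STD-882. The category labels and matrix values here are illustrative assumptions, not the standard's exact tables.

```python
# Sketch of a Risk Assessment Code (RAC) lookup in the spirit of
# MIL-STD-882. Severity/probability labels and matrix values are
# illustrative assumptions, not the standard's exact tables.

SEVERITY = {"catastrophic": 1, "critical": 2, "marginal": 3, "negligible": 4}
PROBABILITY = {"frequent": "A", "probable": "B", "occasional": "C",
               "remote": "D", "improbable": "E"}

# RAC matrix: 1 = highest risk (act first), 4 = lowest (accept or monitor).
RAC_MATRIX = {
    (1, "A"): 1, (1, "B"): 1, (1, "C"): 1, (1, "D"): 2, (1, "E"): 3,
    (2, "A"): 1, (2, "B"): 1, (2, "C"): 2, (2, "D"): 3, (2, "E"): 4,
    (3, "A"): 2, (3, "B"): 2, (3, "C"): 3, (3, "D"): 4, (3, "E"): 4,
    (4, "A"): 3, (4, "B"): 4, (4, "C"): 4, (4, "D"): 4, (4, "E"): 4,
}

def rac(severity: str, probability: str) -> int:
    """Return the RAC for a problem if no action is taken."""
    return RAC_MATRIX[(SEVERITY[severity], PROBABILITY[probability])]

# Example: an uncorrected problem judged "critical" and "occasional".
print(rac("critical", "occasional"))  # -> 2: ranks ahead of RAC 3-4 problems
```

Codes like these let problems be ranked for attention before any action is chosen, which is precisely the step the cause-driven approach skips.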

b. Alternatives identification deficiency

If the problem identification deficiencies are remedied properly, resolution of the deficiencies with identification of alternatives will follow. The key is to examine each event and relationships among events systematically, from the perspective of the changes that might be conceived and introduced to change the event or relationship. By using safety principles from the system safety and nuclear fields, as well as other sources, event pairs can be examined against those principles both to redefine problems and to find relationships that might be changed - the basis for developing recommendations. My experience in applying this technique has usually led to the discovery of many new ideas and insights for the control or elimination of the risks, and many options to consider FOR EACH RELATIONSHIP. One nice result is that you can make informed judgments about the possibility of an action solving more than one of the target problems, which helps address trade-offs much more effectively.
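A minimal sketch of this examination step follows. The safety principles listed are a hypothetical distillation of common system safety strategies (eliminate, substitute, barrier, warn, proceduralize), not a published list from this paper, and the event pairs are the same kind of hypothetical data used earlier.

```python
# Sketch: examine each event relationship against a list of safety
# principles to generate candidate changes - many options per
# relationship, as the text describes. Principles and pairs are
# hypothetical illustrations.

pairs = [
    ("operator opened drain valve", "solvent flowed onto hot manifold"),
    ("solvent flowed onto hot manifold", "solvent vapor ignited"),
]

PRINCIPLES = [
    "eliminate the earlier event entirely",
    "substitute a less hazardous actor or action",
    "interpose a barrier between the two events",
    "provide a warning between the two events",
    "change procedures governing the earlier event",
]

# One candidate option per (relationship, principle) combination.
for earlier, later in pairs:
    for principle in PRINCIPLES:
        print(f"'{earlier}' -> '{later}': {principle}")
```

Because every relationship is swept against every principle, options that touch more than one relationship surface naturally, instead of depending on one investigator's intuition.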

The MORT check list or "tree" also can provide guidance to corrective actions after the investigator has properly described the accident. The process is to use each of the 1500 elements as thought starters to think about possible changes after the accident events and causal factors are described in the MORT events and causal factors chart.

 Unsafe acts and unsafe conditions, as well as cause or cause factor or root cause approaches have been found very inhibiting, from the perspective of discovery of new options or redefinition of known problems, and thus are not particularly useful for these purposes.

c. Safety effectiveness prediction deficiency.

Using the "no action" RAC to estimate the problem, and the new RAC after an action is implemented (Old RAC/New RAC) provides a relative indicator of a candidate recommendation's predicted safety effectiveness. Estimating the number of target accidents that would occur without action vs. the number expected to occur after the action is implemented is another approach. Our company has developed another approach using a control rating coding system that provides an indicator of the effectiveness of a recommendation, based on technical safety concepts for hazard control.[25]

MORT teaches the use of a "priority problem list" [26] in recognition of managers' need to sequence actions recommended for risk elimination or control reasons. The method for arriving at the problems and priorities is described, and the goal is clear.

The method for developing specific corrective action recommendations or options is unstructured beyond the MORT events and causal factors flow charting and problem identification step.

d. Trade-off assessment deficiency.

The NTSB's assessment approach could be expanded to overcome this deficiency. One of the main needs is to consult recommendation recipients during the evaluation, to clarify and get agreement on the problem and the trade-offs involved[27], and to discuss the trade-offs considered - pros and cons - in reports, so the process can be incrementally evaluated and improved. The discussion in our STEP investigation book also offers additional evaluation criteria, and workable procedures for defining, weighting and weighing trade-offs during this aspect of the recommendation process.

e. Monitoring deficiency

If the remedies in a, b, c, and d are implemented, the identification and definition of events and relationships that should be monitored can be accomplished relatively easily. With STEP events worksheet displays, for example, systems and accidents are described in terms of what someone or something did to sustain the accident process to its outcome. With this description, it is a short step to identify and assign observation tasks that provide the observed feedback for this purpose - without waiting for the body counts or trends to occur. Even narrative descriptions can be addressed this way if the narrative description is complete and reflects the event sets that occurred during the accident. These approaches relate directly to other approaches such as job safety analyses, or task analysis-based safety activities, which have recognized value for this purpose.
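A minimal sketch of turning such event descriptions into standing observation assignments follows. The monitored events and the review cycle are hypothetical, and real STEP worksheets carry far more structure than shown here; the point is that monitoring attaches to observable actor-action events rather than to abstract "causes."

```python
# Sketch: derive proactive observation tasks from actor-action event
# descriptions, instead of waiting for loss counts or rate trends.
# Events and the review cycle are hypothetical.

events_to_monitor = [
    ("operator", "opens drain valve while the manifold is hot"),
    ("solvent", "accumulates near an ignition source"),
]

# Each monitored event becomes a recurring observation assignment that
# feeds back whether the recommended change actually took hold.
for actor, action in events_to_monitor:
    print(f"Observation task: log each occasion when {actor} {action}; "
          f"review counts quarterly against the predicted post-fix rate.")
```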

Conclusions

The foregoing convinces me that the investigation-related recommendation development process is inadequately conceived, organized, staffed or monitored. Those problems result in major deficiencies in the recommendation development process as presently practiced. Deficiency areas include:

  • discovering, defining, evaluating, ranking and selecting problems to be addressed by recommendations flowing from investigations.

  • discovering, defining, evaluating, ranking and selecting actions proposed to eliminate or control those problems in the future.

  • recommendation effectiveness validation practices.

A first point is that each of us needs to acknowledge that we face a problem.

A second point is that alternatives to abate the deficiencies are available, if the investigative community decides to acknowledge the deficiencies and do something about them.

A third point is that investigators need to start thinking in terms of a complete investigation system that is fully integrated from the initiation of the investigation to the final validation of the effectiveness of recommended actions after they are implemented.

In summary, I'd rank the effectiveness of safety recommendations low.

Epilogue

I'd like to close by sharing one other observation. During the preparation of this paper, I became aware that these deficiencies are reflected in or perhaps attributable in part to investigator training programs. I did not initiate a comprehensive survey of all available courses. However, the training courses described in the ICAO Accident Investigation Manual, for example, and others with which I am familiar, practically ignore the recommendation development process elements, knowledge and skill requirements discussed above. Without a consensus of what the recommendation development tasks are, how they should be performed, and the knowledge and skills that need to be taught to prepare investigators for these tasks, it is unfair to expect trainers to solve the training problems.

 It would seem useful for the ISASI Working Group on Investigation Policy and Standards to address this issue, to try to provide suitable guidance for the resolution of these deficiencies, both in training and in practice.

Footnotes

[1] See ICAO Annex 13 Accident Investigation Manual, p I-1-1 (1970): "...establishing probable cause thereof, so that appropriate steps may be taken to prevent recurrence of the accident and the factors which led to it." "Cause" or "cause factor" data and "causation models" are widely used in present safety research projects to develop recommended safety actions from aggregated accident data.

 [2] - blank

[3] See ICAO Accident Prevention Manual 1984, 4.2.25, "Accident investigation includes an analysis of the evidence to determine all causes - a process leading to the formulation of safety recommendations." The Department of Energy's Accident Investigation Manual, DOE/SSDC 76-45/27, p 122 contains similar guidance: "...judgments of needs specify what needs to be done now in response to the accident investigation findings and probable causes."

 [4] See ICAO Accident Prevention Manual 1984, p 31, "Recommendations must cover all hazards revealed during the investigation - not just those directly concerned with the causes."

[5] See NTSB/RAR-91-04, PB 916304, pp 47-49.

[6] See NTSB Order 82, June 11, 1987.

[7] My experience suggests that this ratio should be more nearly in the range of a 50/50 to 60/40 split, but no hard data about this distribution are available.

 [8] See Tables 1-3 and p 6 in Benner, L., APPLYING SYSTEM SAFETY TO THE SAFETY RECOMMENDATION PROCESS, in the Proceedings of the 10th International System Safety Conference, Dallas, TX 1991, 4.4-5-1

 [9] See ICAO Accident Prevention Manual 1984, 4.5.1 " ... Valid comparisons can be based on rate [of accidents, incidents, fatalities, etc.] information."

[10] For an instructive case study of this phenomenon, consult the legislative history of PL 93-633, which established the independent National Transportation Safety Board in 1974 after the Turkish Air accident in Paris.

 [11] Wood (HOW DOES THE INVESTIGATOR DEVELOP RECOMMENDATIONS?, 1979) has written on how investigators develop recommendations for the International Society of Air Safety Investigators, (ISASI) whose members' explicit goal is accident prevention through investigation, but does not offer a process description. Bruggink and Fritsch (THE SAFETY RECOMMENDATION PROCESS, 1989) make the point to ISASI members that the safety recommendation process is far from standardized, and for that reason is not as effective as it should be, but do not describe the processes or specific deficiencies.

 [12] For a fuller discussion of this difference, see Hendrick, K.M. and Benner, L. INVESTIGATING ACCIDENTS WITH STEP, 1987, Marcel Dekker, NY p 197.

 [13] See Ferry, T., MODERN ACCIDENT INVESTIGATION AND ANALYSIS, 2nd Edition, Wiley Interscience, New York, NY 1988, p 236. "If recommendations are so important, why don't we develop expert recommenders?"

 [14] See ICAO Accident Prevention Manual, First Edition, 1984, 4.2.25.

[15] "Thinkload" is the work effort devoted to the mental processes that drive actions taken, and include the conceptual and knowledge inputs, the mental processes employed to gather, organize, integrate, and otherwise mentally work with the inputs to arrive at outputs such as the decisions, viewpoints, concerns, comments, conclusions, judgments and mental outputs flowing from the mental processes. The term is used to differentiate between the performance of purely thinking tasks, as contrasted with all other kinds of tasks .

 [16] See SSDC 27, 1976 MORT Accident Investigation Manual or Johnson, W., MORT SAFETY ASSURANCE SYSTEMS, 1980, Marcel Dekker, New York, NY

 [17] See Department of Transportation Docket HM 144, covering tank car head shield regulations, for a major exception to this observation. A 95% effectiveness of the recommended action was predicted; my last knowledge of the follow-up record showed an actual reduction in the target scenarios of over 96% during a 4 year follow-up period.

 [18] See list of governmental agency investigation manuals reviewed in Benner, L., RATING ACCIDENT MODELS AND INVESTIGATION METHODOLOGIES, Journal of Safety Research, Vol. 16, 1985, p 125-6.

[19] Appendix B from NTSB Order 82, published in 1987, adapted from work encouraged by H. H. Wakeland, former Director of the Bureau of Surface Transportation Safety at the NTSB in the mid-1970s.

 [20] See Hendrick & Benner, op cit, Chapter 8-10 on recommendation development tasks and procedures, as well as evaluation criteria and quality control ideas.

[21] See the full discussion of this issue of misdirected follow-up efforts in Reference 7, p 11.

[22] See ICAO ACCIDENT PREVENTION MANUAL, section 4.5 Measurement of Safety, which asserts that only loss counts and rates can be used, a totally retrospective approach that ignores system safety approaches.

 [23] See Benner, L. and Rimson, I. , ACCIDENT INVESTIGATION QUALITY CONTROL, ISASI forum 25:1 for a discussion of the ideas and methods.

 [24] Benner, L. "FIRE RISKS IN CARLOAD/TRUCKLOAD TRANSPORTATION OF CLASS A EXPLOSIVES", Report to Department of Transportation, OHMT, Contract No. DTRS57-88-P-82656, March, 1989

 [25] See White, L and Benner, L., "Corrective Action Evaluation" Proceedings of 1985 System Safety Conference, 3.4.5.1, System Safety Society

[26] See Johnson, op cit, pp 444-448 for one of the most condensed, most useful and most practical discussions of the need for and use of priorities in establishing safety program action agendas. Without indicators of the significance of safety problems, such a priority problem list is impossible to compile or use for guidance.

 [27] The Institute for Nuclear Power Operations uses a facility evaluation process that might serve as a model for this kind of exchange process.


Notes:
Within Goldratt's Theory of Constraints framework, Dettmer provides an alternative approach to introducing change with Current Reality Trees, Future Reality Trees, Conflict Resolution Trees, Prerequisite Trees and Transition Trees, which expand the views and approaches described in this paper.

A 2008 paper, "GRADE: an emerging consensus on rating quality of evidence and strength of recommendations," BMJ 2008 Apr 26; 336(7650): 924-926, doi:10.1136/bmj.39489.470347.AD, offers guidelines based on evidence quality that address some of the points in this paper.

Biographical Sketch

Ludwig Benner, Jr. is currently Exec. V.P., Events Analysis, Inc. BSChE, Carnegie Tech. 9 years advanced safety and risk assessment consulting services with Events Analysis, Inc., including safety, fire and environmental risk assessment, control and investigations; 17 years experience in the chemical industry, including extensive transport equipment design, operating and investigation experience, safety compliance assurance, and industry consensus standards activities; 12 years with the National Transportation Safety Board, including 7 years as Chief, Hazardous Materials Division; 2 years as field faculty member for the University of Southern California (plus 6 years on adjunct faculty for the USC Graduate Safety Program); Adjunct Professor (Fire Science) at Montgomery College, Rockville, MD. Member, INPO Advisory Council; served on the Committee on Alternatives for Inspection of Outer Continental Shelf Operations of the Marine Board of the National Research Council, Commission on Engineering and Technical Systems; the Virginia Metropolitan Areas Transportation Study Commission; the Virginia Governor's Land Use Committee; and the Editorial Board of the Journal of Safety Research. Over 60 publications and papers on investigation, risk assessment, safety and hazardous materials; co-author of the book "Investigating Accidents with STEP." Secretary and member of International Council, ISASI; Fellow and former Executive Secretary of the System Safety Society. Developer of the multilinear events sequencing-based investigation system and the Time/Loss analysis method for assessment of emergency response effectiveness; co-developer of the hazard reduction control rating coding system for assessing relative effectiveness of risk control measures; and developer of many investigation and risk assessment courses.
