This paper was published in two parts in the ISASI forum, October 1991 (24:3) and March 1992 (25:2). Content is still relevant. Reprints are available from the International Society of Air Safety Investigators, 107 E. Holly Ave, Suite 11, Sterling, VA 20164 USA.
(Part 2 of a Two-Part Article)
(Part One appeared in ISASI forum, V 24, #3, October 1991)
Useful techniques evolved for defining criteria to describe what happened during the process of listening to reports of accidents, and the verbal exchanges that followed. Content of investigators' verbal reports varies significantly. When an accident is described in concrete terms - where it happened, who did what and when, and the actions are sequenced properly - relatively few questions are required to find out what happened. As the investigator paints the word picture the listener's mind easily follows, visualizing what happened as a "mental motion picture". We can easily "picture" what happened so long as no blank or double-exposed frames appear on our mental screen. The key criteria are clear statements of who did what, when and where.
What creates confusion when we listen to the description of an accident? When the narrator places the time or location of events out of order, confusion begins almost immediately. We cannot construct an orderly picture. When the narrator assigns ambiguous names ("he, they, it") we have difficulty picturing who is doing what. When the narrator uses passive voice ("He was struck on the arm") we can't visualize who did the striking, and sometimes we can't visualize who was struck, either. The picture gets confused when the narrator attributes an action to two people, or to the wrong person. Quality control requirements become evident once the sources of confusion are identified.
WHO, WHAT, WHEN, WHERE and WHY
When describing what someone or something did, the investigator must describe when it happened relative to at least one other event reference point. One way to do it is to display events graphically, as they occurred in their relative time and spatial sequence during the accident. A simple matrix permits organizing the data by providing a method for positioning each event building block (Actor+Action set) relative to every other event building block. (Figure 2) As each event is added to the display it is positioned relative to the events already posted. A new event is placed to the right of any which occurred at an earlier time, and to the left of those which occurred at a later time.
To make sense, events must be ordered in spatial sequence as well as in time. (One cannot fall up a flight of stairs, for example.) Events which establish spatial relationships should be illustrated by photographs, sketches, drawings, renderings, etc., which depict the setting, and from which "mental movies" can be derived.
In Figure 3, the arrows represent causal links. Fig 3a represents a direct causal link from A to B; A will always cause B to occur. (A is the necessary and sufficient cause of B.) Fig 3b represents a process in which events A, a and (a)n are all required to effect the occurrence of B. Fig 3c illustrates the case in which A alone is necessary and sufficient to cause the occurrence of B, b and (b)n. Fig 3d illustrates the case where the events A, a and (a)n are required to effect the occurrence of B, b and (b)n.
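The linkage patterns in Figure 3 can be expressed compactly if the events and arrows are held as data. The sketch below (in Python, with event labels taken from the figure; the helper name is our own, not part of any published method) models each pattern as a set of (cause, effect) pairs:

```python
# Causal-link patterns from Figure 3, modeled as directed edges:
# an edge (cause, effect) means the cause was necessary for the effect.
fig_3a = {("A", "B")}                                # A alone causes B
fig_3b = {("A", "B"), ("a", "B"), ("(a)n", "B")}     # A, a and (a)n all required for B
fig_3c = {("A", "B"), ("A", "b"), ("A", "(b)n")}     # A alone causes B, b and (b)n
fig_3d = {(c, e) for c in ("A", "a", "(a)n")
                 for e in ("B", "b", "(b)n")}        # many causes, many effects

def causes_of(effect, links):
    """Every event the links say must have occurred for `effect` to occur."""
    return {cause for (cause, eff) in links if eff == effect}

print(sorted(causes_of("B", fig_3b)))   # ['(a)n', 'A', 'a']
```

Listing the causes of an effect this way makes the "all required" (AND) relationship of Fig 3b explicit.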
Critical logic tests should be applied to accident descriptions to select the events which must be reported to describe what happened and why. Testing for causal linkages provides a means to ensure that precede->follow and cause->effect logic govern the accident description.
The following procedure provides the investigator or evaluator with a technique to verify that the accident description is valid and complete, explain why the accident events proceeded in the way they did, eliminate unnecessary data from the description, and identify uncertainties. This procedure does not deal directly with subsequent analytical uses of the accident description, judgments or opinions about the accident or events leading to its occurrence, recommendations, or other aspects of the post-investigation quality control issue. Its objective is to assure that the elementary factual data, and descriptions derived therefrom, are supportably true and accurate. Our premise is that the quality of subsequent functions of the investigative process depends on the quality of the accident description. If the basic description is unsupportable, then all subsequent outputs from that flawed description are useless or, worse, misleading if used.
This example uses an NTSB accident report previously published in forum. (Figure 4a) The Quality Audit procedure is designed to test that Who, What, When, Where and Why are discovered, substantiated and properly reported.
(Example of application of the accident investigation quality audit procedure)
Begin the procedure with any description of an accident and take the following steps:
(1) Find the reported events. Review the text and mark or highlight each set of descriptors in which an actor performs a concrete action. Each actor/action set is called an "event". Highlight all the events described in the accident narrative. On the first pass, look for text that describes what someone or something did. This initial review is the foundation for assessing data quality. If the data are not presented in actor/action format they may be difficult, or impossible, to use. It is often difficult to identify the event sets; the actor and the action may be separated by many words, or even sentences. Investigators frequently fail to name the actor when reporting an action. "Passive voice" construction, pronouns and ambiguous actor names create imprecision which carries forward throughout the description. Abstract action words like "failed", "erred", etc., generate blank frames in the mental motion pictures of what happened. They are results (outcomes), not actions. In the example the actor/action sets are underlined. (Figure 4b) Note the problems that pronouns can introduce.
Even though it is easy to identify the actor (WHO did something) in this example, some confusion is created. In one place "He" means the pilot ("He straightened its path..."); in another, "He" may refer to the aircraft ("He traveled through 10 feet of grass."). As presented, the reader is expected to assign the correct identity to each actor. In a different accident this distinction might be important.
Use names of people or objects as they are presented in the report, and the action words, recording the noun/verb description of the event precisely as given. Record inconsistencies, ambiguities or missing data with either a question mark or a blank. Do not assume or infer event sequences which are not reported. You are evaluating the report, not drafting it.
If you adopt strict quality standards, you may accept only specifically stated actor/action events (those events underlined on the Narrative Statement). If you wish to be more liberal you may permit use of inferred events. Both were used in the next example.
(2) Organize the highlighted events. Record all the event sets (actor + action + any words needed to describe the action) onto individual cards. ("Post-It" sticky notes are ideal.) Display the events in their sequence. The most convenient way is to set up a matrix with a time line on the abscissa (horizontal scale), and a list of actors on the ordinate (vertical scale). As you add each new actor, you add a row for the new actor's actions. Add actions into the proper actor's row, one at a time, always placing events that follow to the right of those which precede. (Figure 5 shows how the earlier events are placed at the left of the worksheet, and later events flow to the right.) At this stage, we are only concerned about getting the events into the proper actor's row on the matrix, and into the proper time sequence.
As the display progresses, reexamine each event relative to the others and ensure their time sequence is correct. If two actors were doing things at the same time the events should be placed above and below each other; i.e., in parallel, rather than in series.
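For investigators who maintain the worksheet electronically, the event building blocks and the actor/time matrix can be modeled in a few lines. This Python sketch is our own illustration, not part of the published procedure; the events are paraphrased from the example narrative and the sequence values are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Event:
    actor: str   # WHO or WHAT acted; "?" when the report leaves it unnamed
    action: str  # a concrete action, not an outcome word like "failed"
    seq: int     # relative time position; equal values mean parallel events

# Hypothetical event set paraphrased from the example narrative.
events = [
    Event("pilot", "applied brakes", 1),
    Event("N4668J", "drifted left on runway", 2),
    Event("N4668J", "struck ditch", 3),
    Event("?", "deformed left wing", 4),   # actor unreported -> "?"
]

def worksheet(events):
    """Rows keyed by actor (ordinate); each row time-ordered (abscissa)."""
    rows = {}
    for ev in sorted(events, key=lambda e: e.seq):
        rows.setdefault(ev.actor, []).append(ev.action)
    return rows

print(worksheet(events))
```

Each actor gets one row, and within a row the actions read left to right in time, exactly as on the paper matrix.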
The accident events' time sequence is not always clear from their description. It is not clear from our descriptive example whether the brake assembly "failure" occurred before or after the aircraft began to drift left on the runway, or before or after it struck the ditch.
(3) Apply sequencing tests. When all the highlighted events have been located in rows and columns along the worksheet, review their logic to ensure that each is properly aligned according to its spatial sequence during the accident (the "falling upstairs" test). Add incomplete events for which you have only the actor or the action - not both - at proper sequence locations on the worksheet, and use a ? to indicate the missing part of your building block. (Figure 5)
Review the narrative, looking for words or phrases that imply actions that occurred. In the example, we have highlighted these phrases on the Narrative Statement. These are identified with question marks for the appropriate actor or action.
Note what a passive construction - identified by "was" or "were" forms - does to you; e.g., "The left wing was deformed". WHAT actor deformed the left wing? While not significant to the events in this case, it might be in another. A similar problem arises with the "was badly corroded" phrase. WHY was it corroded? Until you can define the actor+action sets you don't know where to aim corrective actions. Uncertain events are shown with a question mark to indicate uncertainties in the report. You can't picture which tire made the skid mark until you have more data; a question mark precedes "tire" to show that part of the description is missing. The disk brake assembly "failure" is ambiguous; it isn't clear whether the weld or base metal failed. The difference, in terms of where you would apply corrective action, would be significant. Question marks are also used to show that an actor's name is unknown. When the actor is identified, change the "?" to the name.
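A mechanical screen can help surface these "was/were" constructions before the manual review. The sketch below uses a deliberately crude regular expression of our own devising; a match is only a prompt for the reviewer, and the pattern will miss irregular participles ("was broken", "were torn"):

```python
import re

# Crude screen for "was/were <participle>" phrases that hide the actor.
# Allows one optional "-ly" adverb ("was badly corroded"); matches only
# regular "-ed" participles, so it is a reviewer's aid, not a parser.
PASSIVE = re.compile(r"\b(?:was|were)\s+(?:\w+ly\s+)?\w+ed\b", re.IGNORECASE)

def hidden_actor_phrases(narrative):
    """Phrases whose actor must be identified or recorded as '?'."""
    return [m.group(0) for m in PASSIVE.finditer(narrative)]

print(hidden_actor_phrases(
    "The left wing was deformed. The weld was badly corroded."))
# ['was deformed', 'was badly corroded']
```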
Next, check the sequence of each pair of events on the worksheet against the logical order of the scenario. Must the earlier event have occurred prior to the time of the later event? For example, did the tire skid mark begin before N4668J began to drift toward the left side of the runway?
In this case, the answers may not be so ambiguous as in other reports but the principle must be addressed in all reviews.
After all individual events have been located, review all event pairs to ensure that their relationship is logical in time and space. If not, put them into proper order. When all time and space relationships are satisfactory, you will have checked the sequential logic of the events on the worksheet.
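The pairwise time-and-space check is mechanical once the posted order and the scenario's precedence rules are written down. In this sketch (our own illustration) the event names and rules are hypothetical, paraphrased from the example:

```python
def sequence_violations(posted_order, must_precede):
    """posted_order: event names left-to-right as posted on the worksheet.
    must_precede: (earlier, later) pairs demanded by the scenario's physical
    logic (the "falling upstairs" test). Returns pairs posted out of order."""
    pos = {name: i for i, name in enumerate(posted_order)}
    return [(a, b) for (a, b) in must_precede
            if a in pos and b in pos and pos[a] >= pos[b]]

# Hypothetical posting with one spatial-logic error: the ditch strike
# cannot have been posted before the drift that led to it.
posted = ["struck ditch", "applied brakes", "drifted left"]
rules = [("applied brakes", "drifted left"), ("drifted left", "struck ditch")]
print(sequence_violations(posted, rules))
# [('drifted left', 'struck ditch')]
```

Any pair the check returns must be reordered on the worksheet before causal linking begins.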
(4) Add causal links. The next, and most useful, step in validation is to apply "necessary and sufficient" tests to each event pair. We can validate the accident description only by demonstrating links which establish the sequence or flow of events which resulted in the outcome, from the first through the last event in the accident process. If we cannot identify causal links, we have a quality problem. Each "missing link" is a breach of the sequential logic of the description, whether it stems from inaccurate investigation data, faulty data processing, or a reporting failure. Each "missing link" identifies a deficiency in the accident description or the reported data.
The test requires answering a series of questions to assure that cause-> effect relationships really existed in the accident process. The first question determines whether the earlier event made the later event happen; i.e., did Event "A" cause Event "B" to occur in this accident? If the earlier event must have occurred first in order for the later event to occur, then the two events may be linked with an arrowhead showing the logical relationship leading from the "cause" event to the "effect" event.
The evaluator must examine each pair of events (event sets) on the chronological worksheet, from the earliest (in accident time) event pair to the last, for their causal relationships. If the first event must have occurred in order for the next event to occur, then a causal relationship exists between the events. In relation to the time line, no event to the right (later) could have occurred unless preceded by the event to its left (earlier) in this accident.
Beware of assuming expected relationships; forget about what usually happens and focus on what the report says actually happened in this accident. The investigator/reviewer must know how the system being examined is supposed to work; if not, competent advice must be sought from someone who knows the correct answers to these questions.
The next questions determine whether the "cause" event is the only occurrence required to cause the "effect" event to occur. Ask yourself if "A" would cause "B" every time "A" happened. If the left (earlier) event was the only requirement in order for the right (later) event to occur, a "sufficient" causal relationship exists between the events. (In other words, the left event of the pair was both necessary and sufficient to produce the right event, and in this accident it did so.) If the answer is no, you must find the other event(s) that was present and necessary. More than one event is frequently required to occur before another event can occur. This is equivalent to an "AND" logic gate in a Fault Tree (Figure 6). If the left event alone was not sufficient to produce the right event, look for the other event(s) that had to occur to produce the right event. Draw linking arrows showing all causal linkages. "OR" logic gates are not permitted in a final accident description; they indicate uncertainty about what happened. Uncertainties in the descriptions should be labeled with a question mark if causal links cannot be established. Is there some other cause/effect relationship that, by itself, was sufficient to precipitate the following event? If the answer is still "no" then you must determine what else had to occur. This may lead to several choices:
1. If another event had to happen, and you know what it was, draw a second (or third or fourth - as many as required to cause the next event!) causal link from each "causing" event to the "effect" event. This leads to a converging flow of events. In our example more than one "causing" event was clearly required before the aircraft rolled off the end of the runway.
2. If another event had to happen, but you don't know what it was, indicate the uncertainty with a "?" as shown in Figure 7.
Once sufficiency is established, draw a linking arrow from the left event to the right event, signifying a causal link between the two events. Each linking arrow on the worksheet identifies a causal relationship between two events.
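One convenient way to keep track of the "?" convention is to store each effect with its full set of required causes - an AND gate per effect. The event names in this sketch are illustrative, paraphrased from the example; the representation is our assumption, not a prescribed format:

```python
# Causal links as effect -> set of jointly necessary-and-sufficient causes
# (an AND gate per effect). "?" records a required co-cause that has not
# yet been identified, following the Figure 7 convention.
links = {
    "drifted left": {"applied brakes", "?"},   # brakes alone not sufficient
    "rolled off runway end": {"drifted left"},
}

def unresolved_effects(links):
    """Effects whose causal set still contains an unknown ('?') event."""
    return sorted(e for e, causes in links.items() if "?" in causes)

print(unresolved_effects(links))   # ['drifted left']
```

Every effect returned here still needs investigation before its AND gate can be closed.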
After all possible "necessary and sufficient" causal relationships have been identified among the accident's event pairs, there may be single events left unlinked to another by any relationship. Those "loner" events identify problems with the accident description that require resolution. They may be observed systemic problems which are irrelevant to this specific accident. They may be relevant occurrences which have not been "tied into" the description of this accident properly. Whatever the reason, they represent quality deficiencies which detract from the credibility of the accident report.
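Finding the "loner" events is mechanical once events and links are recorded: anything that appears in no link gets flagged. A sketch, with hypothetical worksheet contents of our own choosing:

```python
def loner_events(events, links):
    """Events tied to no other event by any causal link.
    links: set of (cause, effect) pairs from the worksheet."""
    linked = {name for pair in links for name in pair}
    return [e for e in events if e not in linked]

# Hypothetical worksheet: the corrosion observation was never linked in.
events = ["applied brakes", "drifted left", "struck ditch", "metal corroded"]
links = {("applied brakes", "drifted left"), ("drifted left", "struck ditch")}
print(loner_events(events, links))   # ['metal corroded']
```

Each flagged event must either be linked into the description or deleted from it.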
Figure 8 shows the investigation problems disclosed by the audit procedure. Each gap and question mark represents a problem. These range from missing identification of actors (see entry at E-2 - What corroded the metal?) to uncertainties about the action (F-6 - "failed" is ambiguous; did the weld break, or the parent metal of the housing, or something else?) The limited existing causal linkages demonstrate other deficiencies: brake application alone should not cause the aircraft to "pull" to a side absent another influencing factor - was a crosswind acting on the aircraft at that time? (See the "Brief Report Format") What actions caused the change to the brake assembly from its original condition? Each question mark indicates a deficiency in the description.
The linking arrows quickly demonstrate the deficiencies in a report which purports to show What Happened. They also identify specific questions for which answers are needed before the proposed investigative work product can be judged acceptable.
When the matrix is complete we can trace the linkages back from the last event (harm) to the first event for which evidence exists. If that cannot be done, as in this example, the report's specific problems must be resolved with additional data or acknowledged to avoid its misuse.
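The back-trace itself is a simple walk against the arrows. In this sketch (our own illustration, with hypothetical event names) the links deliberately contain a gap, so the trace from the harm event never reaches the first reported event:

```python
def trace_back(harm, links):
    """Walk causal links backward from the final (harm) event.
    links: set of (cause, effect) pairs. Returns every event the trace
    reaches; events it never reaches expose gaps in the description."""
    reached, frontier = {harm}, [harm]
    while frontier:
        effect = frontier.pop()
        for (cause, eff) in links:
            if eff == effect and cause not in reached:
                reached.add(cause)
                frontier.append(cause)
    return reached

# Hypothetical worksheet with a gap: nothing links "applied brakes" forward.
links = {("drifted left", "struck ditch"), ("struck ditch", "wing deformed")}
all_events = {"applied brakes", "drifted left", "struck ditch", "wing deformed"}
print(sorted(all_events - trace_back("wing deformed", links)))
# ['applied brakes'] - the event the trace never reaches
```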
This analysis enables a reviewer to define the content of the accident description precisely, and evaluate how easily users can visualize and understand the accident process. If gaps or flawed logic show up in the worksheet display they are immediately apparent. Deficient outputs can be returned for revision with specific shortcomings identified.
Causally-linked events may also be used as the foundation on which to build options for controlling future risks, for controlling the quality of recommendations arising from the accident, and for other analytic and practical purposes.
(5) Handling problems disclosed by the QC task. Several generic families of problems arise during attempts to establish causal links during accident analysis:
(a) Gaps in linkage from the first event to the last in the accident process. We can seek more data through investigation, accept the gaps as limitations on our understanding of the accident, or use systematic methods (e.g., BackSTEP, Fault Trees, etc.) or combinations of methods to develop hypotheses to fill the gaps. Or we can limit the scope of the report to those events for which we can substantiate linkage. If it chooses to retain the gaps, the responsible issuing authority must acknowledge that it is depriving not only itself, but all other potential subsequent users, of the facts on which to base choices for corrective actions, leading to less effective safety recommendations and misdirected safety efforts.
(b) Extraneous events left over after completing the causal linkages. At best they divert efforts to control future risks from bona fide needs demonstrated by the accident. Worse, they provide "handles" for others to grasp to raise irrelevant, unnecessary and invalid questions about the accident. The best course of action is to delete them from the description, because they mislead rather than illuminate future users of reports.
(c) The report does not transform the accident data into useful building blocks. Displays cannot be established without actor-action sets and causal links. This is clearly unacceptable report quality, usually indicative of inadequately trained investigators or incompetent organizational management.
(d) Managers want a "simple" description of a complex accident. The worksheet will display an accident's complexity with sufficient clarity and accuracy that all but the most obtuse can accept its portrayal.
(e) Accident "safety" statisticians want their forms completed whether or not the accident data fit the blanks. This very frequent problem often arises after statisticians design forms for data collection, then declare that the statistical elements on the forms are significant investigative data and train investigators to "fill out the form" rather than investigate the accident. The use of identical forms to establish the data base for all accidents deliberately ignores the uniqueness of each specific accident, undermining any attempt at systematic analysis. Despite investigators' "best efforts" to fill in the blanks, few forms will prove to be either valid or reliable by any quality control method. Be assured the problem is with the forms, not the method. (Narrative sections of forms rarely fare much better when analyzed by quality assurance methods. However, that problem can be overcome by using the worksheet analysis to identify the deficiencies in the description.)
(f) Political self-interest. This final problem is probably the greatest obstacle to improving the quality of current investigative methodologies. Individuals who want to make unwarranted judgments, draw unjustified conclusions, insist on single (or specific) causes of the accident, or propose self-serving recommendations designed to improve their competitive positions rather than solve specific safety problems demonstrated by an accident, will rebel at attempts to eliminate their meddling with truth. Quality management methods expose incompetent investigative techniques, sloppy logic, loose interpretations of data, and unsubstantiated hypotheses. They frustrate "hip shooters" who are accustomed to hiding poor outputs and abstractions behind statistical babble. Only by persistently exposing reality can an acceptable modus operandi of accident investigation be established.
The Brief Report Format (Figure 9)
Controlling the quality of abbreviated "fill in the blanks" data is a much more daunting task. If you apply the audit procedure outlined above you will see that only two events can be extracted from the data provided in the example. Interpretations reported in this format do not lend themselves to validation by any quality control procedure.
However, the Brief does contain new information about the accident - the cross-wind component during landing - that might be relevant to the narrative.
REFERENCES
Benner, L., "Accident Theory and Accident Investigation". Proceedings of the Society of Air Safety Investigators Annual Seminar, 1975, p. 149.
Benner, L., Accident Models and Investigation Methodologies Employed by Selected U.S. Government Agencies. Report to the Occupational Safety and Health Administration, U.S. Department of Labor. Washington, DC, February 21, 1983.
Benner, L., "Four Accident Investigation Games". Events Analysis, Inc., Oakton, VA, 1976.
Hendrick and Benner, Investigating Accidents with STEP. Marcel Dekker, New York, 1987.
Johnson, W., MORT Safety Assurance Systems. Marcel Dekker, New York, 1980.
Rimson, I. J., "Are These the Same Accident?". ISASI forum, 1983, #3, pp. 12-13.
Rimson, I. J., "Standards for the Conduct of Air Safety Investigation". ISASI forum, V. 23, #4, p. 51.