
Investigating "Causes" and Assigning "Blame"

Ira J. Rimson, P.E.; Forensic Engineer

2120 Kirby St. NE; Albuquerque, NM 87112-3476

irimson02@comcast.net

Abstract

 

Determining "Causes" of mishaps has long been a source of ignition for arguments in the safety community. Few safety practitioners comprehend how seriously investigations have been contaminated by the uncritically accepted goals of "Probable Cause" and "Blame," and how severely they have hindered prevention initiatives.

Those who originally established causal determination as an investigative goal neglected to establish definitions against which investigators' results could be measured. As a result, weak investigative rigor has been accepted uncritically. Simple early systems accommodated rudimentary investigations; current technological complexity does not. We can no longer afford to indulge in unscientific, unvalidated, non-standard investigation methodologies which produce unverifiable conclusions. Vague recommendations arising from these unsubstantiable data lack any reliable means for determining their usefulness for correcting systemic deficiencies.

Probabilistic techniques employ historic and assumed data to identify and rank risks, attempting to predict vulnerability of new systems. Yet after those new systems start operating, anomalies, deviations, disruptions, failures, accidents and catastrophes occur. These real world mishaps afford opportunities to identify limitations and faulty assumptions of early predictions. The safety community must evaluate the validity of both predictive and deterministic (after the fact) methodologies. Concentration on assigning causes and blame has failed to halt preventable mishaps by creating illusions of understanding.

This paper will propose alternative methodologies which analyze the Logic of Causation, and propose reviving a 160-year-old definition of "Cause" --that of precursor (as in cause-and-effect). The increased rigor which results will generate more robust evaluation data. Better data will validate both predictive and deterministic models. Valid models will be benchmarks against which standardized, logic-based investigations may be verified. Verified investigation outcomes will identify potentially successful intervention strategies which deter occurrence.

"The greatest obstacle to discovering the shape of the earth, the continents

and the oceans was not ignorance, but the illusion of knowledge."
          

Daniel J. Boorstin
Librarian of Congress, Emeritus

 


What is a "Cause"?

 

If "probable cause" is OK for safety (and I doubt it), it certainly is unsatisfactory for prevention, and entirely unacceptable for investigations. Who wants to pay for an investigation whose result is "probable cause" except someone who doesn't want to know for sure anyhow?

                                                            -Hughes Chicoine

                                                         Certified Fire/Explosion Investigator, Canada

 

At the outset let's take a brief look at a well-known modern vision of cause: the concept of causation as expressed in the phrase "Probable Cause" by the National Transportation Safety Board (NTSB). "Cause", as applied to the objectives of government air safety investigations, evolved without definition along an inconsistent pathway:

 

"Cause" first appeared as a duty of the Department of Commerce in §2 of the Air Commerce Act of 1926:

 

            [The Department] shall... (e) investigate, record and make public the causes of accidents....

 

"Probable Cause" materialized in the 1934 Amendment to the Air Commerce Act of 1926[1], which merged the formerly scattered aviation functions of the Department of Commerce's Aviation Branch into the Bureau of Air Commerce:

           . . . the Secretary of Commerce shall, if he deems it in the public interest, make public a statement of the probable cause or causes of the accident....

 

The Civil Aeronautics Act of 1938[2] established a Safety Board within the Civil Aeronautics Authority, assigning to it in §702 duties to:

            (2) investigate such accidents and report to the Authority the facts, conditions, and circumstances relating to each accident and the probable cause thereof; and

            (3) make such recommendations to the Authority as, in its opinion, will tend to prevent similar accidents in the future;[3]

 

The Federal Aviation Act of 1958[4] established both the Federal Aviation Administration and the Civil Aeronautics Board, assigning to the CAB responsibility for "...the promotion of safety in air commerce."[5] A duty of the CAB specified in §701 was to:

 

            (a) investigate such accidents and report the facts, conditions, and circumstances relating to each accident and the probable cause thereof;

 

The Department of Transportation Act of 1966[6] established the Department of Transportation, interposing it as a managerial layer superior to the FAA, and created the National Transportation Safety Board. In §5 it established the duties of the NTSB, including:

            (b)(1) determining the cause or probable cause of transportation accidents;[7]

 

The Independent Safety Board Act of 1974[8] established the NTSB as an independent agency of the U.S. federal government, and established among its duties in §304:

            (a)(1) investigate or cause to be investigated (in such detail as it shall prescribe), and determine the facts, conditions, and circumstances and the cause or probable cause or causes of accidents...[9]

The roles of successive investigative agencies evolved from determining "cause" to "probable cause" to "cause or probable cause" to "cause or probable cause or causes", yet agency managers and investigators were denied any comprehensive statutory definitions of "cause" or "probable cause." It was as though the bureaucrats who drafted the legislation didn't really "...want to know for sure anyhow."

A 1942 description of the CAA Air Safety Board's interpretation of "probable cause" by Jerome F. "Jerry" Lederer, then its Director, has survived:

            ...we, therefore, endeavor to state how the accident happened and why. The "why" is our conclusion expressed in terms of probable cause and contributing factors.... It has been our endeavor to stick to a practical pattern which establishes the proximate cause as the probable cause and sets up the underlying or more remote causes as contributing factors. (Le:42)[10]

Unfortunately, both "proximate cause" and "probable cause" are legal terms of art; i.e., they possess specific meanings within the context in which they are applied. The two terms are defined as follows in their legal applications[11]:

 

            "Proximate Cause":

                        - That which, in a natural and continuous sequence, unbroken by any efficient intervening cause, produces injury, and without which the result would not have occurred.

                        - That which is next in causation to the effect, not necessarily in time or space but in causal relation.

                        - The proximate cause of an injury is the primary or moving cause, or that which, in a natural and continuous sequence, unbroken by any efficient intervening cause, produces the injury and without which the accident could not have happened, if the injury be one which might be reasonably anticipated or foreseen as a natural consequence of the wrongful act.

 

                        - An injury or damage is proximately caused by an act or a failure to act, whenever it appears from the evidence in the case, that the act or omission played a substantial part in bringing about or actually causing injury or damage; and that the injury or damage was either a direct result or a reasonably probable consequence of the act or omission.

            As for "Probable Cause":

                        - Reasonable cause; having more evidence for than against.

                        - A reasonable ground for belief in certain alleged facts.

                        - A set of probabilities grounded in the factual and practical considerations which govern the decisions of reasonable and prudent persons and is more than mere suspicion but less than the quantum of evidence required for conviction.

                        - An apparent state of facts found to exist upon reasonable inquiry (that is, such inquiry as the given case renders convenient and proper), which would induce a reasonably intelligent and prudent man to believe, in a criminal case, that the accused had committed the crime charged, or, in a civil case, that a cause of action existed.

                        - The evidentiary criterion necessary to sustain an arrest or the issuance of an arrest or search warrant.

Attempting to apply the legal definitions of "proximate cause" and "probable cause" to an investigation context results in instant incomprehensibility. Worse yet, these legal definitions add the following vague abstractions to the original confusion:

 

            reasonable anticipation           foreseeability              natural consequence

            substantial part                       direct result                 reasonably probable

            reasonable & prudent              mere suspicion            apparent state of facts

            reasonable inquiry                   convenient and proper

            less than the quantum of evidence needed to convict

as well as still more undefined concepts of cause:

            intervening cause                     primary cause             moving cause

            reasonable cause.

 

Rules of law often retain some vagueness to permit flexibility in applying them to unforeseen circumstances. (CoCo:90) Equivocation has utility for an attorney attempting to prove or refute the elements of legal proof required to convince a jury of a client's guilt or innocence. However, that kind of definitional elasticity is not conducive to the foundation of investigation: scientific hypothesis testing. By establishing undefined "causes" or "probable causes" as their principal investigation objective, investigation authorities created a quandary for investigators who try to apply objective methodologies to the study and understanding of how mishaps occur. In the absence of any more precise definitions against which to measure their conclusions, it is impossible for investigators to determine the causes or probable causes of mishaps.

Other Concepts of Causation

It is usually between a consequent and the sum of several antecedents; the concurrence of them all being requisite to produce, that is, to be certain of being followed by the consequent. In such cases it is very common to single out only one of the antecedents under the denomination of Cause, calling the others merely conditions.... The real Cause is the whole of these antecedents; and we have, philosophically speaking, no right to give the name of causes to one of them exclusively of the others.

                                                            John Stuart Mill (Mi:43)[12]

Few formal methodologies have been designed specifically to investigate deterministic events (which have already happened), compared to the myriad which have been developed to support probabilistic risk assessments (which attempt to predict what might happen in the future). Two prominent methodologies have been used for deterministic analyses of industrial, chemical and nuclear mishaps:

 


            Management Oversight and Risk Tree (MORT) Analysis.[13]

            "Method: Apply a pre-designed, systematized logic tree to the identification of total system risks, both those inherent in physical equipment and processes, and those which arise from operational/management inadequacies.

            "Application: The pre-designed tree, intended as a comparison tool, generally describes all phases of a safety program and is applicable to systems and processes of all kinds. The technique is of particular value in accident/incident investigation as a means of discovering system or program weaknesses or errors which provide an environment conducive to mishaps.[14]

                       

            "Product: Comparison between the actual system and the model "tree", which results in assignments of general weaknesses or errors as "Less-Than-Adequate" factors to which causation may be attributed."[15]

            Root Cause Analysis[16]

 

            "Method: The process [uses] a structured MORT-based system with adjustments made to suit a particular usage. The MORT method defines 23 categories of surface causes or investigation findings that help provide the foundation for the subsequent development of root causes. By arranging "why" statements into surface and root cause categories, [investigators] can focus on why the investigation's findings occurred rather than the specifics of the findings themselves. Analyzing this information consists of observing the frequency and weight placed on items, noting common threads that seem to flow through the statements, and making professional interpretations and intuitive judgments to deduce underlying root causes for each investigation finding.[17]

            "Application: ...to determine the underlying contributing reasons (root causes) for the observed deficiencies documented in the findings of an investigation. By emphasizing a few root causes, management can correct a wide variety of deficiencies or faults in a system.

            "Product: Identified dependent causes are grouped to common causes, which are then reduced where possible to single root causes applicable to a set of related deficiencies, shortcomings or observations."

In merely two methodologies we have uncovered the following new, and as yet still undefined, descriptions of causation:

            inherent risks              inadequacies                weaknesses                 errors

            less-than-adequate factors      surface causes             findings           root causes

            contributing reasons                observed deficiencies              faults

            common causes           shortcomings               observations

Uncertainty, Ambiguity and Vagueness

 

            Careful and correct use of language is a powerful aid to straight thinking, for putting into words precisely what we mean necessitates getting our own minds quite clear on what we mean.                           - William, Lord Beveridge

                                                                                               

These three linguistic concepts are probably known abstractly to investigators, but they apply quite specifically to investigations and their objectives:[18]

- Uncertainty = "not definitely ascertainable or fixed, as in time of occurrence, number, dimensions, quality, or the like"; in scientific terms, the object cannot be "classified".

 

- Ambiguity = "having several possible meanings or interpretations, equivocal"; in scientific terms, the object cannot be "discriminated".

- Vagueness = "indefinite or indistinct in nature or character, as ideas, feelings, etc."; thus incapable of being either classified or discriminated.

These concepts are significant to investigators, who must eliminate as much uncertainty, ambiguity and vagueness as possible during the investigation process. Both uncertainty and ambiguity can eventually be resolved by sufficient data, information and facts. Vagueness, on the other hand, cannot. A vague term is itself inherently incapable of classification; its very definition lacks sufficient precision to enable either classification or discrimination. (McFr:93)

 

Transposing the terms "cause" or "probable cause" from their legal definitions to scientific applications led to increasing vagueness and indeterminacy. No amount of additional information or data can help us identify "cause" or "probable cause": because the terms are undefined, they cannot tell us what we are looking for. Likewise, substitution of euphemisms or terms of art detracts from precise meaning. As a result, we cannot determine what it is we're looking for, where to find it, how to get to where it might be, how to recognize it (if by chance we do stumble upon it), or how to verify its identity.

 

Assigning Blame

 

A can always exceed B if not all of B is counted and/or if A is exaggerated.

                                                                                    - Thomas Sowell

 

Within the past generation our society has become fixated on assigning blame. This phenomenon may have arisen from a false expectation of perfection which originated from quantum leaps of technological advancement. It may be an outgrowth of our society's unwillingness to require individuals to be accountable for the results of their actions. (It is just as likely a byproduct of a plethora of hungry lawyers.) In any case, we no longer accept with passivity the motto of the '60s, "S*** Happens!". Instead, we assume that someone must be blameworthy, so we find a hungry lawyer and sue.

 

One corollary of ill-defined concepts of causation has been the ease with which causes can be assigned. Assigning causes requires little more than pointing fingers and assessing blame. Gerard Bruggink, then Deputy Director of the NTSB's Bureau of Air Safety, rebutted those who argued that investigators do not indulge in casting blame:

 

By emphasizing 'Who Caused the Accident?' rather than 'What Might Have Prevented It?', investigation authorities engage in weighing causes and, therefore, weighing blame. Causal summaries identify the individuals and organizations that seem to be most at fault, balancing between probable cause and contributing factors. (Br:87)

 

Despite his muddying the semantic waters with references to "fault" and the still-undefined "probable cause" and "contributing factors", Bruggink raised a valid point. Investigators may assign blame for any number of reasons. In my experience the most prevalent is that it is easier than spending the time and effort to identify specifically what went wrong and why. They conclude that the event itself is evidence that some person or persons did something that caused, or failed to do what was necessary to avoid, the occurrence. It is consistent with the bottom line of MORT analyses, that something or other was "Less Than Adequate." Our friends at the UK's Air Accidents Investigation Branch call it the "BGO" - Blinding Glimpse of the Obvious! Lawyers have a Latin term which encompasses the mind-set: Res Ipsa Loquitur - "the thing speaks for itself." Once the thing has spoken, as a result of a mishap, the simplest response is to lay the blame on the most convenient culprit.

 

There are various reasons why investigators should not assign blame, not the least of which is the likelihood of tainting their investigation by selective acceptance of facts which support their charges. Blame requires subjective evaluation of alleged acts committed or omitted by persons involved in a mishap. Mishap reports often contain obvious clues to investigator subjectivity:

 

      Judgmental verbs; e.g., "failed to" or "did not", without specifying precisely the alleged error(s) of omission, and the law, regulation or procedure with which the actor allegedly failed to comply, and how that contributed to the outcome.

      Comparative adjectives; e.g., "improperly" or "incorrectly", likewise absent supporting data specifying how the act varied from expectation, and how that influenced the outcome.

 

Temptation to indulge in such judgments occurs because:

 

      The investigator "knows" from personal experience what the actor "should have" done, and assumes that it wasn't done properly because the mishap occurred; or

      The investigator doesn't know what the actor should have done, and assumes that obviously something wasn't done properly because the mishap occurred.[19] (Be:95)

The Investigators' Role

The greatest tragedy of science is that you often slay a beautiful hypothesis with an ugly fact.                                             - Thomas Huxley

 

The investigators' role, quite simply, is to determine What Happened, with scientific rigor and proof. Investigators must identify facts, conditions and circumstances as their first order of business.[20] This is not a simple task. Benner has suggested that an accident "...can be viewed as an unscheduled and largely uninstrumented scientific experiment performed to test a hypothesis (or theory)." (Be:75) The investigator is privy to the results, and is faced with the task of determining where, when and how it began, and what route it took to get to the ending. Investigation has frequently been characterized as "more art than science," an oversimplification which has not benefited from such popular video dramas as "CSI." Only within the past few decades have investigative methodologies emerged which combine robust, scientifically based hypothesis testing with the discipline of formal logical analysis.

As technological advances have led to more complex and interactive systems, it has become obvious that one, or a few, "probable causes" cannot account for the evolutionary processes that lead to accidental outcomes. Accidents themselves are complex processes, often developing over extended time periods. Prerequisite conditions for undesired outcomes may have been established years, or even decades, before.[21]

Differences amongst investigative philosophies can be seen by contrasting the investigations into the mishaps which befell NASA's Challenger and Columbia. Challenger was the subject of three separate investigations[22], each of which was tainted by the pre-ordaining mind-set of its investigators. Yet it was an independent examination of the organizational behavior of persons within the involved agencies that first considered corporate cultures which were prerequisite to the decisions that enabled the explosion. (Va:96) Although the final report of the Columbia Accident Investigation Board is not complete as of this writing, the Board's independence from pressures by interested parties has enabled it to address issues that were ignored, if recognized at all, during the initial Challenger investigations. Although NASA's management apparently learned little from Challenger's loss, Columbia's Investigation Board certainly learned a great deal from Challenger's investigations.

When the first priority of the investigation is identifying the "facts, conditions and circumstances" relating to the mishap under investigation, the issue of causal vagueness is eliminated. Facts, conditions and circumstances may be uncertain or ambiguous, but those problems can be overcome by additional facts, data and information. Facts, conditions and circumstances are never vague. They are tangible, measurable descriptions of what happened. Once we know what happened, we can recast the scenario into specific events and conditions which permit applying the tests and proofs of formal logic.

 

The Logic of Cause→Effect[23]

 

Four hundred years ago Sir Francis Bacon recognized that simple enumeration of events was inadequate methodology with which to conduct inductive logical analyses. John Stuart Mill (1806-1873) developed his classical canons of inductive inference which encompass concepts of causation and effect. (CoCo:90) These methodologies are essential tools which enable investigators to develop what happened - establishing with rigor the structure of causation which was precursor to the effect. Mill's enduring principles have been adapted to the task of accident investigation and analysis in at least two specific applications.

 

Ludwig Benner, Jr.,[24] has developed "Multilinear Events Sequencing" ("MES") over more than two decades. It posits that a mishap is a total process, of which only the end event is initially accessible to the investigator. MES affords the investigator an ordering discipline within which a matrix of predecessor events is arrayed to reconstruct the process which led to the undesired outcome, over the time period of the mishap. Once the matrix is generated the investigator or analyst compares discrete patterns of "event-pairs" which occurred during the mishap process, and applies sequential, cause→effect, and necessity and sufficiency logic tests.[25] (HeBe:86) and (Be:97) Benner's most recent application uses the reported facts of an industrial accident investigation to compare analyses by competing investigation methodologies. (Be:03)

 

Ladkin et al.[26] have developed "WB-analysis" (from "Why-Because"). It is a suite of methods for explaining complex system failures based on formal semantics and logic. WB-analysis is primarily concerned with analyzing reported causality a posteriori. The lists of events, states and processes (short coherent sequences of actions and states that do not need to be analyzed into components) stated in accident reports are taken to be inputs to the causal analysis. WB-a enables the investigator or analyst to identify significant system states and events, express them as propositional variables, build a chronological graph of causal-factor relationships among sequential sets of variables, and apply counterfactual testing. Semantic testing is applied to these pairwise, to obtain the WB-graph, first in textual form and then in graphical form.[27] (GeHo:97) and (GeLa:97a)[28]
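The core idea of a Why-Because graph can be illustrated with a short sketch: events become nodes, and an edge A → B is admitted only if a counterfactual test for the pair holds. This is a rough, simplified illustration of the principle only, not Ladkin's formal WB semantics; the event names and the toy `necessary_for` relation are invented assumptions for the example.

```python
# Sketch only: admit a causal edge cause -> effect into the graph when a
# (simplified) counterfactual test holds. The NECESSARY relation stands in
# for evidence-based counterfactual reasoning; it is a hypothetical example.
from dataclasses import dataclass, field

@dataclass
class WBGraph:
    nodes: set = field(default_factory=set)
    edges: list = field(default_factory=list)   # (cause, effect) pairs

    def add_factor(self, cause, effect, necessary_for):
        """Admit cause -> effect only if the counterfactual test holds:
        had `cause` not occurred, `effect` would not have occurred."""
        if necessary_for(cause, effect):
            self.nodes.update({cause, effect})
            self.edges.append((cause, effect))
            return True
        return False

    def textual(self):
        """Textual form of the WB-graph, one edge per entry."""
        return [f"{c} -> {e}" for c, e in self.edges]

# Toy accident record: which antecedents were necessary for which effects.
NECESSARY = {("valve left open", "fuel leak"), ("fuel leak", "fire")}

graph = WBGraph()
for cause, effect in [("valve left open", "fuel leak"),
                      ("fuel leak", "fire"),
                      ("night shift", "fire")]:      # fails the test, excluded
    graph.add_factor(cause, effect, lambda c, e: (c, e) in NECESSARY)

print(graph.textual())   # ['valve left open -> fuel leak', 'fuel leak -> fire']
```

Note how the coincidental factor ("night shift") never enters the graph: the counterfactual gate, not the investigator's intuition, decides which relationships survive.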

The differences between MES and WB-a lie principally in their evolution: MES is designed to assist the investigator by formalizing the investigation process; WB-a is a tool for conducting formal analyses of investigation conclusions. Despite their differences, both MES and WB-a have utility for investigators and investigation managers.

 

The Need for Testing

 

            For every expert there is an equal and opposite expert, but for every fact there is not necessarily an equal and opposite fact.                - Thomas Sowell

 

Investigators can reduce their vulnerability to criticism from special interests by aggressively applying Cause→Effect logic, testing their hypotheses and demonstrating replicability of their conclusions. Testing need not be complicated. Simple counterfactual tests which demonstrate logical reasoning are far more convincing than subjective, intuitive "causes" which require blind acceptance of the investigators' ipse dixit.[29]

Cause→Effect logic self-tests by hypothesizing counterfactual arguments; i.e.,

 

            Hypothesis: If Cause A results in Effect B, then

                                    Absence of Cause A will result in absence of Effect B.

 

            Test: Remove Cause A.

 

            If Effect B still happens, then Cause A cannot "cause" Effect B.
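The self-test above can be sketched as a simple predicate over a set of recorded cases: Cause A is refuted as a cause of Effect B if B ever occurred without A. This is a minimal illustration under invented data, not any cited methodology; the case records and event names are hypothetical.

```python
# Hedged sketch of the counterfactual test: reject A as a cause of B if,
# in any recorded case where A was absent, B still occurred.
def counterfactual_test(cases, cause, effect):
    """Return False if `effect` occurred without `cause` in any case,
    i.e. removing Cause A did not remove Effect B."""
    for case in cases:
        if effect in case and cause not in case:
            return False          # B happened without A: A cannot cause B
    return True                   # the test does not refute A -> B

# Invented case records: each set lists the conditions present in one case.
cases = [
    {"brake worn", "overspeed", "runway overrun"},
    {"brake worn", "runway overrun"},
    {"overspeed"},                # overspeed alone, no overrun
]

print(counterfactual_test(cases, "brake worn", "runway overrun"))  # True
print(counterfactual_test(cases, "overspeed", "runway overrun"))   # False
```

Passing the test does not prove causation; it only means the hypothesis survives refutation, which is exactly the asymmetry the counterfactual argument exploits.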

In his Theory of Constraints, Goldratt proposed that each cause normally has more than one effect. To test the validity of the investigator's assumed Cause→Effect logic, the analyst must find evidence of yet another expected coincident effect. (Go:90) Goldratt calls this phenomenon the "Effect-Cause-Effect" test. Dettmer restates it as follows (De:97):

 

If we accept that [CAUSE] is the reason for [ORIGINAL EFFECT], then it must also lead to [PREDICTED EFFECT(S)], which [do/do not] exist.[30]
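Dettmer's restatement lends itself to a short sketch: accept a hypothesized cause only if every predicted coincident effect of that cause is also found in the evidence. The evidence set, the hypothesized cause, and the predicted effects below are all hypothetical examples, not drawn from any cited investigation.

```python
# Sketch of the "Effect-Cause-Effect" test as restated by Dettmer: if CAUSE
# explains ORIGINAL EFFECT, it must also produce PREDICTED EFFECT(S).
def effect_cause_effect(evidence, cause, original_effect, predicted_effects):
    """Accept `cause` only if every predicted coincident effect
    is also found in the evidence; report any that are missing."""
    missing = [e for e in sorted(predicted_effects) if e not in evidence]
    verdict = "supported" if not missing else "refuted"
    return verdict, missing

# Invented evidence collected during a hypothetical investigation.
evidence = {"engine fire", "scorched cowling", "fuel residue on wing"}

# Hypothesis: a fuel-line leak caused the engine fire. A real leak should
# also leave fuel residue AND a fuel-pressure drop in the recorder data.
verdict, missing = effect_cause_effect(
    evidence,
    cause="fuel-line leak",
    original_effect="engine fire",
    predicted_effects={"fuel residue on wing", "fuel-pressure drop recorded"},
)
print(verdict, missing)   # refuted ['fuel-pressure drop recorded']
```

The hypothesis fails not because it is implausible, but because one predicted coincident effect is absent from the evidence: exactly the discipline Goldratt's test imposes on intuitive causal claims.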

Dettmer cites eight Categories of Legitimate Reservation which should be tested to assure that analytical logic has been verified[31]:

            1. Clarity

            2. Entity Existence

            3. Causality Existence

            4. Cause Insufficiency

            5. Additional Cause

            6. Cause-Effect Reversal

            7. Predicted-Effect Existence, and

            8. Tautology.

 

Whatever manner of hypothesis testing investigators choose to employ, its objectivity will discourage the kind of controversies that arise, and survive to do mischief long after the mishap may be forgotten. By removing "judgment calls" from the investigation and analysis process, robust logical assessments force critics to demonstrate that their theories conform more logically with the factually-derived chronology of what happened. [32]

 

Prevention: A Measure of Investigation Accuracy

            1. Anyone can make a decision, given enough facts.

            2. A good manager can make a decision without enough facts.

            3. A perfect manager can operate in perfect ignorance.

                                                                                    - Spencer's Laws of Data[33]

As early as 1938 the Safety Board within the Civil Aeronautics Authority was empowered to investigate and report the facts, conditions, and circumstances relating to each accident, and make recommendations that would tend to prevent similar accidents in the future.[34] It seems evident that the "facts, conditions and circumstances" were intended to be prerequisite to, and form the bases for, recommendations for correcting the process defects which enabled the mishap, and thereby preventing recurrence. Nevertheless many (if not most) organizations which control investigations choose to elevate the task of determining "cause(s)" to preeminence, a choice that has never seriously been questioned. These agencies have maintained the priority of causal determination even as it has become obvious that many assigned "cause(s)" cannot be substantiated by the facts, conditions and circumstances in their specific cases. "Cause" is, by definition, vague; "probable cause," "root cause," and other adjective-causes are even more vague. "Accidents" are the concluding events of specific, and usually unique, mishap processes.

One, or several, "cause(s)" cannot account for complex, evolutionary mishap processes. Predetermined menus are a futile attempt to generate standard categories from which analysts can assign causation by picking "one from column A" or "two from column B," confident of the conventional wisdom that one size cause fits all facts.[35]

Ineffectual corrective and preventive action is corollary to vaguely specified causation. Recommendations for mitigation or prevention have been largely ineffective in forestalling the seeming inevitability of common accident mechanisms.[36] Effecting corrections and preventing recurrence require specific identification of what went wrong, and precise remedies to change the behaviors which either enabled the progress of the mishap process, or failed to recognize and arrest it. They demand action to identify specific dysfunctionalities, trace their origins and change the behavior which led both to and from them. Not only do rational investigations prevent similar mishaps, they also identify and mitigate against perpetuation of process inefficiencies and wasted investment.

The two objectives - determining causation, and improving system performance -- are countervalent; that is, they are so fundamentally inconsistent that increasing concentration on one diminishes the worth of the other. So long as we cannot even define what "causes" are, efforts expended in their quest are squandered. Worse yet, the more vague the subsumed causal elements, the more efforts must be devoted to searching for things which have not, and cannot, be defined.

Amending suboptimal system performance requires identification of specific human behaviors which facilitated departure from the original planned mission scenario and objectives, into an unplanned path to an undesired outcome. Make no mistake, human behavior sets the stage. Inanimate objects are incapable of making volitional decisions.[37] In the words of the eminent aviator, mishap investigator and psychologist Captain Robert O. Besco, Ph.D:

            Many humans are caused by accident, but all accidents are caused by humans.[38]

Investigators must identify the specific human behaviors which led to disrupting the original plan. Facts, conditions and circumstances are never vague. They are tangible, measurable descriptions of what happened. Once we know what happened, we can dissect the scenario into events and conditions which admit formal logic's tests and proofs. Legitimate cause→effect relationships which determine the progress of a mishap process can identify potential early intervention points which possess realistic probabilities for effecting prevention and process improvement.[39]
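The dissection described above - breaking a mishap scenario into discrete events and testing which of them are logically necessary for the outcome - can be sketched in code. The following is a minimal illustration, not any established investigation tool; the event names, the mishap sequence and the function names are all hypothetical:

```python
# A sketch of testing candidate cause->effect links in a mishap sequence.
# An event is treated as a "necessary condition" if removing it severs
# every path from the initiating event to the undesired outcome -- such
# events are candidate early-intervention points.

from itertools import chain

# Directed graph: each event maps to the events it enables.
# Entirely hypothetical sequence, for illustration only.
ENABLES = {
    "plan_deviation":       ["unnoticed_descent"],
    "fatigued_crew":        ["unnoticed_descent"],
    "unnoticed_descent":    ["terrain_warning_late"],
    "terrain_warning_late": ["impact"],
}

def paths_exist(graph, start, goal, removed=frozenset()):
    """Depth-first search: is goal still reachable from start
    after the events in `removed` are deleted?"""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node == goal:
            return True
        if node in seen or node in removed:
            continue
        seen.add(node)
        stack.extend(graph.get(node, []))
    return False

def necessary_events(graph, start, goal):
    """Events whose removal breaks every path to the outcome."""
    events = set(graph) | set(chain.from_iterable(graph.values()))
    return sorted(
        e for e in events
        if e not in (start, goal)
        and not paths_exist(graph, start, goal, removed={e})
    )
```

Here removing "fatigued_crew" would not, by itself, break the path to "impact", so it fails the necessary-condition test even though it contributed; "unnoticed_descent" and "terrain_warning_late" pass, marking them as points where intervention could have arrested the process. This is the same necessary/sufficient-condition reasoning discussed in the text, applied mechanically.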

For the most part we have no idea whether investigations and their recommendations support the objectives we profess for them. We do not measure objectively whether recommended "fixes" actually work. It is possible - even probable - that a substantial proportion of current "safety" regulations, policies and operational procedures have no effect on achieving system improvement or, worse yet, are actually inimical to our objectives. Investigators' work products should be the principal sources for identifying factors upon which to base systems' improvements. The usefulness of those work products depends on investigations' incorporating rigorous methodologies which establish, test and verify their conclusions and recommendations. It is not enough for investigation sponsors merely to generate arbitrary recommendations. Those which arise from fallacious "causes" cannot contribute to improvement. Even those derived from the findings of competent investigations should be tracked after implementation to verify their efficacy.

Where Do We Go From Here?

He that will not apply new remedies must expect new evils, for time is the greatest innovator.
                                                      - Sir Francis Bacon


Early in his book Managing Risk, Dr. Vernon Grose makes the point that:


...many managers, acting as though an accident is a random stroke of fate, have to be reminded to seek and remove causes prior to a loss. (emphasis in the original)[40]

The investigator's success depends a great deal upon the culture of the organization that empowers him. It is appropriate for the investigator to establish the fundamentals of his duties early on. For example, which objective offers the greater opportunity to benefit the company, its industry and its customers: determining "cause(s)," or improving the operation?

In the aftermath of the ValuJet crash in Florida in 1996, William Langewiesche compiled a roster of lessons that we - not least those responsible for investigating accidents and attempting to prevent their recurrence - need to learn.(La:98) Although directed at the aviation industry and its government regulatory agency, it is applicable to all investigators and investigation managers:

            We can find fault among those directly involved - and we probably need to. But if our purpose is to attack the roots of such an accident, we may find them so entwined with the system that they are impossible to extract without toppling the whole structure....Beyond the question of blame, it requires us to consider that our solutions, by adding to the complexity and obscurity of the airline business, may actually increase the risks of accidents. ...

            The ValuJet case...fits the most basic definitions of an accident caused by the very functioning of the system or industry within which it occurred.... The two unfortunate mechanics who signed off on the nonexistent safety caps just happened to be the slowest to slip away when the supervisors needed signatures. Other mechanics almost certainly would have signed too, as did the inspectors.... The falsification they committed was part of a larger deception - the creation of an entire pretend reality that includes unworkable chains of command, unlearnable training programs, unreadable manuals, and the fiction of regulations, checks and controls. Such pretend realities extend even into the most self-consciously progressive large organizations, with their attempts to formalize informality, to deregulate the workplace, to share profits and responsibilities, to respect the integrity and initiative of the individual. The systems work in principle, and usually in practice as well, but the two may have little to do with each other. Paperwork floats free of the ground and obscures the murky workplaces where, in the confusion of real life, system accidents are born.

Investigators and the organizations on whose behalf they investigate must recognize and expose systemic unsuitabilities, unfitness and irrelevance, and recommend changes even to those regulatory and administrative dogmas that have survived unquestioned since they were first written. Gerard Bruggink once averred that the principal factor in accident causation is the "...uncritical acceptance of easily verifiable assumptions." We would establish much more credibility and achieve much greater success were we to replace "uncritical acceptance" with more stringent verification.


References


(Be:75)

Ludwig Benner, Jr., "Accident theory and Accident Investigation." Proceedings of the Society of Air Safety Investigators Annual Seminar, Ottawa, Canada, October 7-9, 1975, pp. 148-154. At: http://www.iprr.org/papers/75iasiatheory.html

(Be:95)

Ludwig Benner, Jr., "Words Mean Something." ISASI forum, V. 28, #3, (September 1995). Sterling, VA., International Society of Air Safety Investigators. At: http://www.bjr05.net/papers/Words.htm

(Be:97)

Ludwig Benner, Jr., Introduction to Investigation. (Stillwater, Oklahoma State University Fire Protection Publications, 1997). Available from Emergency Film Group, Edgartown, MA

(Be:03)

Ludwig Benner, Jr., "Investigating Investigation Methodologies." Presented at the 2nd Workshop on the Investigation and Reporting of Incidents and Accidents (IRIA03), 16-19 September 2003, Williamsburg, VA

(Br:87)

Gerard M. Bruggink, "To Kill a Myth." Proceedings of the Eighteenth International Seminar of the International Society of Air Safety Investigators, Atlanta, Georgia, October 6-9, 1988. ISASI forum, V. 20, #4, February 1988, pp. 4-9.

(CoCo:90)

Irving M. Copi & Carl Cohen, Introduction to Logic, 8th ed. (New York, Macmillan, 1990) [ISBN 0-02-325035-6]

(De:97)

H. William Dettmer, Goldratt's Theory of Constraints. (Milwaukee, American Society for Quality Press, 1997) [ISBN 0-87389-370-0]

(GeHo:97)

Thorsten Gerdsmeier, Michael Höhl, Peter Ladkin & Karsten Loer, "How Aircraft Crash," 11 June 1997. RVS Group, Technical Faculty, University of Bielefeld. At: http://www.rvs.uni-bielefeld.de/~ladkin/Journalism/ForMag.html

(GeLa:97a)

Thorsten Gerdsmeier, Peter Ladkin & Karsten Loer, "Formalising Failure Analysis." RVS Group, Technical Faculty, University of Bielefeld. At: http://www.rvs.uni-bielefeld.de/~ladkin/Reports/AMAS97.html

(GeLa:97b)

Thorsten Gerdsmeier, Peter Ladkin & Karsten Loer, "Analysing the Cali Accident With a W-B Graph." Presented at the Human Error and Systems Development Workshop, Glasgow, March 1997. (Second Version, 13 March 1997). At: http://www.rvs.uni-bielefeld.de/~ladkin/Reports/caliWB.html

(Go:90)

Eliyahu M. Goldratt, Theory of Constraints. (Great Barrington, North River Press, 1990) [ISBN 0-88427-085-8]

(Gr:87)

Vernon L. Grose, Managing Risk: Systematic Loss Prevention for Executives. (Englewood Cliffs, Prentice-Hall, 1987) [ISBN 0-13-551110-0]

(HeBe:86)

Kingsley Hendrick & Ludwig Benner, Jr., Investigating Accidents with STEP. (New York, Marcel Dekker, 1986) [ISBN 0-8247-7510-4]

(La:98)

William Langewiesche, "The Lessons of ValuJet 592." The Atlantic Monthly, March 1998, pp. 81-98.

(Le:42)

Jerome F. Lederer, (Director, Safety Bureau), Memorandum to the Civil Aeronautics Board dated June 12, 1942. Subj: "Basic system of analyzing aircraft accidents", pp. 2-3.

(Le:92)

Jerome F. Lederer, "Is Probable Cause(s) Sacrosanct?." ISASI forum, V. 25, #1, March 1992, pp. 8-9.

(McFr:93)

Daniel McNeill & Paul Freiberger, Fuzzy Logic. (New York, Simon & Schuster, 1993) [ISBN 0-671-73843-7]

(Mi:43)

John Stuart Mill, A System of Logic (1843), 8th ed. (London, Longmans, 1873)

(Pe:84)

Charles Perrow, Normal Accidents. (New York, Basic Books, 1984) [ISBN 0-465-05142-1 (paper)]

(USSC:96)

U.S. Supreme Court opinion in the case of General Electric Company, et al., Petitioners v. Robert K. Joiner et ux. (No. 96-188)

(Va:96)

Diane Vaughan, The Challenger Launch Decision. (University of Chicago Press, 1996) [ISBN 0-226-85175-3]




[1]. P.L. 73-418, 48 Stat. 1113-1114.

[2]. P.L. 75-706, 52 Stat. 973 et seq.

[3]. The tasks of reporting the facts, conditions and circumstances relating to accidents and recommending actions to prevent recurrence are included in all subsequent legislation, and not repeated herein.

[4]. P.L. 85-726, 72 Stat. 731-811.

[5]. Id. 102(e).

[6]. P.L. 89-670, 80 Stat. 931-950.

[7]. This was the first time that a requirement to determine "probable cause" was imposed on transportation modes other than aviation.

[8]. Title III of P.L. 93-633, 88 Stat. 2166-2173.

[9]. Emphasis added in all the preceding quotations.

[10]. Fifty years later (Le:92) Lederer reflected on the origins of "Probable Cause" and suggested that much of the semantic controversy might be obviated by adopting "Findings", "Significant Factors" and/or "Recommendations" instead. He once again failed to address the definition issue, which would remain an enduring source of vagueness absent more precise specification.

[11]. Black's Law Dictionary, 6th ed., St. Paul, West Publishing Co., 1990.

[12] That's 1843!

[13] System Safety Analysis Handbook (1993) pp. 3-167 & 3-168

[14] Such discoveries will, presumably, identify factors inherent in causation.

[15] One may assume, apparently, that absent the "Less-Than-Adequate" factors the mishap would not have occurred.

[16] Id. pp.3-235 & 3-236

[17] Note the instantiation of subjective judgment calls which immediately contaminate the objective data.

[18] Definitions from Random House Dictionary of the English Language. New York, Random House, 1967.

[19] Investigators' version of Hobson's Choice.

[20] This task was set forth as the initial responsibility of the Civil Aeronautics Authority's Safety Board by the Civil Aeronautics Act of 1938 (see footnotes 2 & 3 supra), but seems to have gotten lost amidst the rush to assign causes and blame.

[21] Perrow cites century-old "Rules of the Road" as one of many major factors which have led to a static loss rate in the marine "error-inducing system." (Pe:84)

[22] First by an internal NASA board of inquiry; subsequently by a Special Congressional Sub-committee and a Presidential "Blue-Ribbon" Commission.

[23]. After demonizing the idea of "cause" as currently used in investigation, I apologize for thrusting yet another definition at you: "cause" as specified in the logic of formal argument. Bear with me; I think you will be able to see how this one is on your side.

[24]. Starline Software, Ltd.; 12101 Toreador Lane; Oakton, VA 22124

[25]. Copi & Cohen (Op. cit. pp. 377-380) posit that "We can legitimately infer cause from effect only in the sense of necessary condition. And we can legitimately infer effect from cause only in the sense of sufficient condition. Where inferences are made both from cause to effect and from effect to cause, the term 'cause' must be used in the sense of 'necessary and sufficient condition.' "

[26]. Technical Faculty, AG RVS, University of Bielefeld, D-33501 Bielefeld, Germany

[27]. From http://www.rvs.uni-bielefeld.de/research/WBA/

[28]. An example of WB analysis of the American Airlines accident at Cali, Colombia, is available at (GeLa:97b).


[29]. "...nothing...requires a district court to admit opinion evidence which is connected to existing data only by the ipse dixit of the expert." (USSC:96) "Ipse dixit" = "That which he says", or "Because I say so."

[30]. (De:97) pp. 50-51. Example: " 'I have appendicitis' might be offered as the cause of the effect 'I have a pain in my abdomen.' But if the cause is really valid, we might also expect to see a couple of other effects: 'I have a fever' and 'My white cell count is elevated.' "

[31]. Dettmer, Op.Cit., Chapter 2.

[32]. Investigators who are appropriately educated and trained can incorporate logic testing during the investigation process, where it can be much more efficiently employed to detect logic problems before the report is publicized and becomes subjected to the reactive adversarial process.

[33]. Origin unknown, but contained in every compilation of "Murphy" Laws.

[34]. See p. 2, supra.

[35]. Thus the NTSB can assure the public that the principal "Probable Cause" of General Aviation accidents is "Pilot failed to obtain or maintain flying speed," not yet having parsed the syntax well enough during 37 years to discover that its proclaimed cause factor is not a "cause" at all; it is a description of what happened, and an incomplete one at that.


[36]. See Perrow, op. cit.

[37]. Thus it was not that Russell Weller's brakes failed on that July morning in Santa Monica, but that he chose, or was permitted, to drive even though he lacked the cognitive capability to process the fact that his Buick was not responding as he intended, and react upon that input correctly to avoid the resultant carnage.

[38]. (multiple attributions)

[39]. Thus incidents, when considered to be "accidents which were avoided", possess intrinsic identification of successful interventions which may be instantiated into revised procedures and practices.

[40]. (Gr:87), p. 12.