THE MOST SIGNIFICANT HUMAN ERROR
IN THE AVIATION SYSTEM
By
C. O. Miller
Consultant - System Safety
Sedona, Arizona

Prepared for presentation at the Canadian Aviation Safety Seminar,
Vancouver, B.C. May 11, 1999


ABSTRACT

"Learn from the mistakes of others; you'll never live long enough to make them all yourself" ... an often repeated advisory to anyone associated with influencing the safety of air travel. The problem remains, however, that we do not heed this advice as well as we can and should, the "we" being all of us in the aviation system. We observe hazards to which prior knowledge has not been applied effectively or efficiently to keep them from maturing into accidents. We see too many repeat mishaps. Accordingly, this paper provides examples of such cases but goes beyond them into the WHYs of this situation. Some recommendations are also offered for consideration by members of the aviation community, including air safety agencies.

Introduction

The title of this paper implies a question. What is the most significant human error in the aviation system? The answer is simple but what to do about it is another matter. Those of us who have been in the aviation accident prevention business longer than we care to (or can?) remember have seen this problem: discussed it, cussed it, tried frequently to resolve it and continually wonder if our efforts are really worthwhile.

Granted, the recent air carrier accident situation, in the Western World at least, has been rather good, notwithstanding an occasional catastrophe. Even the public seems to "buy" the system, literally and figuratively. They purchase tickets in increasing numbers. They quiet down or otherwise accept what safety authorities seem to say once the headlines and TV talk shows find a new subject of the week. However, the public gets particularly distressed if a current accident seems to involve a repeat performance of previous causal factors. One cannot blame them. Known precedent from accidents/incidents has produced countless suggested remedial actions. Some are even implemented! Too many are not and/or are forgotten. I submit this is a form of human error.

Recall that a typical definition of human error would be "...observable or inferable (unwanted) phenomena occurring during or subsequent to the execution of human functions". [1] Various commentators have amplified this with a series of "failures to...": failure to perform required actions, failure to perform them correctly, failure to perform them in the proper sequence, failure to perform them within the required time. To these can be added the performance of a non-required action with unwanted results. [2]

Given this scope of human error then, the most significant human error in the aviation system is the continuing failure by persons in that system to apply, effectively and efficiently, the bitter lessons of past mishaps in order to prevent future accidents. "Persons in that system" include passengers as well as pilots, media personnel as well as maintenance professionals, lawyers as well as legislators, educators, you, me, etc. "Effectively" is in this indictment because we must concern ourselves with real world results, not just intentions or efforts accomplished theoretically on paper. "Efficiently" is in there because it deals with skills, energy, time and cost expenditure; in engineering terms, a ratio of output divided by input.

What follows are illustrations of the problem, a very small selection of examples that could be presented. Next comes a discussion of why this form of human error occurs. Included are some suggested remedies, topped off by a challenge to the aviation community to recognize the situation and do something about it.

Failures to Apply Known Precedent to Prevent Accidents

At least two ways are available to identify such failures. First, one may choose a recent accident and ask (research) where findings from previous cases could have been applied to prevent the latest one. Alternatively, one may pick a hazard category and identify commonalities between a series of such accidents. Hazard categories in this sense would be Controlled Flight Into Terrain (CFIT), icing, loss of control/upset, wake turbulence, runway incursions, pilot malfeasance, etc. Consider some examples illustrating each approach, beginning with similar accidents.

Swissair Flight 111's investigation is not complete; however, the issues which have become a matter of public record allow us to wonder about previous accidents with similar concerns. For example, a Nov. 3, 1973 PanAm cargo flight which tried unsuccessfully to get back to Boston's Logan Airport not only raised hazardous materials questions but also illustrated the need to manage in-flight smoke/fire and get down as soon as possible. It is assumed that the Transportation Safety Board of Canada will be studying what was experienced and known about this problem well before the Nova Scotia tragedy occurred. Perhaps they will read a succinct paper on this very subject by Gerry Bruggink written in 1986. [3]

Consider the TWA Flight 800 tragedy and the resultant concern for aging aircraft systems, not just structure per se. Early in 1997, the Gore Commission recommended that the FAA et al expand its aging aircraft program to include a variety of systems including electrical wiring, connectors, electro-mechanical systems, etc. [4] On October 1, 1998, the DoT Secretary and the FAA Administrator announced the "FAA's Aging Transport Non-Structural Systems Plan". Still later, a Notice of Proposed Rulemaking (NPRM) was issued by the FAA April 2, 1999 entitled "Aging Airplane Safety". [5] Not mentioned was that suggestions regarding non-structural system age problems were made to an FAA/Industry aging aircraft committee formed soon after the famous 1988 Aloha Airlines "top down convertible" case ... within only a matter of months after that accident. Then, to the amazement of anyone who has followed this issue, the above noted NPRM speaks almost entirely to "damage-tolerance" (structural) problems and the frequency of inspections related thereto! Some people still do not seem to appreciate the scope of the aircraft age problem.

Consider one more pair of similar accident cases. On March 10, 1989, an Air Ontario F-28 crashed on takeoff from the Dryden airport for the simple reason that sufficient ice was on the wings to preclude the aircraft from flying. On March 22, 1992, a USAir F-28 crashed on takeoff from LaGuardia Airport for the simple reason that sufficient ice was on the wings to preclude the aircraft from flying. How tragic! The Air Ontario pilots had been trained at USAir and Piedmont Airlines (later absorbed by USAir). The Dryden Commission investigating team had communicated with the U.S. airlines' F-28 training personnel specifically about the Canadian experience. This was a case of hands across the border that did not reach the right people!

Now for similar hazards. Referring back to TWA 800, wiring faults, fires and explosions are hardly new to aviation. Some 30-40 years ago, the Lockheed Corp. conducted considerable research concerning the "wet wire phenomenon". When writing about this in 1968, Jerry Lederer recommended, among other things, "Devise non-destructive inspection techniques to detect damaged insulation in installed assemblies." [6] Sound familiar? It should; that is only being done now. Also, of course, explosions had occurred in B-747's before Jamaica Bay: near Madrid, May 7, 1976, and in China recently.

Problems with Kapton-coated wiring were initially experienced with the Titan missile system placed in operation in 1965. [7] In 1988, the FAA presumably was prepared to issue an Advisory Circular (AC) concerning the "explosive" nature of Kapton when subjected to "improper maintenance", otherwise known as performance degradation due to the wear and tear that inevitably comes with age. (The hardware, not the mechanic!) The AC was never issued since the experience defining the Kapton insulated wire problems was gained during military operations. It was argued myopically that military experience had no relevancy to civilian flight and maintenance activity, disregarding the fact that both are manned by human beings.

Consider the wake turbulence hazard. On May 16, 1996, a cargo MD-11 experienced severe fuselage damage attempting to land at Anchorage on a runway parallel to the one being used by a B-747 three miles ahead. Fortunately, only one crewman was injured. Wake turbulence produced an uncontrollably high sink rate in the -11 even though both aircraft were "heavies". The -11 flight crew was criticized for not maintaining a flight path above that of the 747, which theoretically would have avoided the hazard.

Now, rewind time back to May 30, 1972, when a DC-9 rolled and crashed on approach behind a DC-10 at the Ft. Worth International Airport. Not until then did real appreciation begin of the strength of wakes behind wings of high aerodynamic efficiency. That one jet could not handle the wake of another was quite a surprise; however, size is not the only factor. This was realized during a personal investigation of a fatal Piper PA-28 accident of Dec. 16, 1973, during which a student pilot was flying a rectangular "circuits and bumps" pattern solo at a field also served by a few large aircraft. This time, a B-707 was cleared straight in from about 8-10 miles out. Meanwhile, as the 707 got near the airport, the light plane had reached the point of turning crosswind and onward to final approach. There were reasons to believe the pilot was aware of the standing "stay higher than the path of the airplane ahead of you" advisory, and the tower had issued a "Caution, wake turbulence" radio call. Then why the accident? It became obvious upon analyzing where the two aircraft would have been relative to each other on a common time base, as sketched below. Perception of the 707's path from the light plane, descending and otherwise maneuvering 90 degrees abeam, was a guess at best. The pilot guessed wrong.
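
To make the "common time base" idea concrete, here is a minimal sketch of that kind of reconstruction. All speeds, distances and timings are hypothetical, the geometry is reduced to a straight-in final, and it is in no way a reconstruction of the actual 1973 accident; the point is simply that placing both aircraft on one clock makes their converging paths visible in a way neither pilot could perceive from the cockpit.

```python
# Hypothetical numbers only: two aircraft placed on a common time base so
# their relative positions along the approach can be read off directly.

def distance_to_threshold(start_nm, speed_kts, t0_s, t_s):
    """Position relative to the runway threshold (negative = still out) at
    time t_s, for an aircraft starting its approach at t0_s from start_nm."""
    return -start_nm + speed_kts / 3600.0 * max(0.0, t_s - t0_s)

for t in range(0, 241, 30):
    b707 = distance_to_threshold(9.0, 140.0, t0_s=0.0, t_s=t)    # straight-in from 9 nm
    pa28 = distance_to_threshold(2.0, 80.0, t0_s=150.0, t_s=t)   # turns final at t = 150 s
    print(f"t={t:3d} s  B-707 {b707:+6.2f} nm  PA-28 {pa28:+6.2f} nm")
```

Run over the common timeline, the table shows the two flight paths converging near the threshold within the same half-minute, which is exactly the kind of relationship that is invisible to a pilot maneuvering 90 degrees abeam.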

Had this kind of accident happened between 1973 and 1996? Answer: yes. Examine the PA-31-350 commuter wake turbulence case at the Philadelphia airport, July 25, 1980. The lighter aircraft was making a right-hand semi-circular approach, having been cleared behind a jetliner on a straight-in. The resulting scenario was the same as with the aforementioned PA-28.

Will such an accident happen again? Probably; at least until pilots and the regulation/advisory circular writers appreciate the perception-decision-reaction triad of human behavior in the three dimensional operational world of air traffic control. Ironically, the NTSB released an otherwise excellent report in 1994 regarding wake vortices. Unfortunately, it concentrated almost entirely on aircraft types and in-trail separation distances, not the flight path dynamics of wake vortex encounters. [8]

My relatively recent favorite example of lessons not being learned until the body count gets too high starts in the Italian Alps, Oct. 22, 1987. An ATR-42 fell from the sky near Lake Como during an icing encounter. A former chief test pilot for a major airframe manufacturer of jet transports and I were retained by the airline to assist in their investigation. Within a matter of days of reviewing the excellent CVR/DFDR records (about a month or so after the accident), the sequence of events was apparent. While climbing on autopilot, the aircraft was allowed to get 5-7 knots slower than the latest recommended flight-in-icing speed. (The cockpit crew did not have this information, but that is another story.) Of significance here was the fact that the autopilot was fighting the ice-induced degradation of aileron control until it snapped off in roll without warning to the crew. At night, over mountains, in obvious instrument meteorological conditions (IMC), with an aerodynamically degraded airframe and numerous varying g forces and audio, visual and proprioceptive cues, the craft's impact was decided at the time of the roll.

The key element in all this, prevention-wise, during the in-flight accident sequence was the masking of the flight control forces by the autopilot. Had it not been on, the crew very likely would have realized flight control limits were being approached. This information was conveyed quietly to NTSB and FAA staff members upon my return from Italy, along with other factors germane to the accident. This was done "quietly" because of information dissemination limits associated with consultants' work. Nevertheless, while people in Italy were arguing "fault", a specific accident prevention practice was available; simply, do not use the autopilot in icing. Incidentally, throughout this period, several incidents occurred where pilots had experiences similar to that of the Italian crew but did regain control.

Now, go forward to Roselawn, Indiana, October 31, 1994. An ATR-72 crashed in a virtual carbon copy of the hazards experienced at Lake Como. Granted, the initial flight conditions were slightly different, and the Safety Board's investigation revealed more details as to why roll control was lost when the autopilot disconnected. Nevertheless, hand flying that aircraft before, or even possibly during, the changing flap configuration which precipitated the roll would have avoided the tragedy. Interestingly, the NTSB Roselawn report made no reference to the Como case except to list it in an obscure table of prior icing accidents.

Next, on Jan. 9, 1997, a commuter Embraer EMB-120 crashed in Monroe, Michigan, having encountered ice while being vectored for landing. The now familiar roll departure occurred on autopilot, this time while the flight was slowing to an assigned, allowable airspeed of 150 knots and leveling at 4000 feet. The NTSB report finally did discuss the autopilot-in-icing question and made appropriate recommendations. Also, the Board very wisely, albeit belatedly, challenged aircraft flight-in-icing certification requirements which had been unchanged in the U.S. for decades. They too had numerous incidents and near-accidents supporting their position, as we had in Italy. Icing certification had also been a major controversial area during the Italian case but became lost in a maze of bureaucratic politicizing. Again, a story in itself.

Finally, on Jan. 7, 1999, near St. Louis, a commuter ATR-42 departed controlled flight in icing during an ILS approach, with recovery at 1600' MSL. Only the NTSB's preliminary report was available at the time of preparing this text; however, it will be interesting to learn if this flight was on autopilot during the approach. [9]

The Etiology of These Human Error Problems ... the "Whys"

Some might question the logic of blaming such problems as those noted above on human error. After all, some say, human performance is just one kind of accident cause factor, others being structural fatigue, engine failure, weather, etc. But therein lies an issue which will be discussed further shortly: people tend to stress "cause" in over-simplified terms at the expense of prevention. Applying accident prevention lessons requires human assessments, decisions, actions and so on. Thus, when one examines the WHYs of prevention failure, the answers always rest with personnel and their interactions with other system elements. Such reasons are not easy to classify; however, discussed below are seven areas for review. Included therein are some remedies suggested by this author, shown in bold print for ease of recognition.

Normal Human Behavior

Perhaps it seems strange to list "normal" behavior as a reason for failure to do something. However, therein lies a gross misconception among managers in particular. To err is indeed human. We all exhibit characteristics which, upon reflection, we might wish we did not have. To place this in perspective to the theme of this paper, consider one of the finest human factors reports issued in years, "The Interfaces Between Flightcrews and Modern Flight Deck Systems", prepared by an FAA Human Factors Team headed by Dr. Kathy Abbott, Stephan M. Slotte and Donald K. Stimson. [10] Aside from the subject of their work as implicit in its title, a remarkable Appendix was included entitled "Potential Barriers to Implementation of the (Report's) Recommendations". It rightly pointed out such normal reactions as "resistance to change", "turf protection", "defensiveness", "finger pointing" and "misunderstandings about human factors", among many other phenomena which could be encountered when trying to get someone to pay attention to one's ideas. It is reminiscent of past discourses seen or heard regarding resistance to innovation. So, what could be done about all this, besides suggesting people read the FAA Team's report?

A wooden plaque sits on the wall beside my desk. It depicts a Cherokee Indian Prayer. Questionable source accuracy notwithstanding, it reads:

"O Great Spirit, grant me the serenity to accept the things
I cannot change, courage to change the things I can, and
wisdom to know the difference."

Another homily indicative of appreciating the limits of normal human behavior was presented by "Mr. Aviation Safety", Jerry Lederer, in a paper about thirty years ago. [11] He said man/womankind should change from N.I.H. (Not Invented Here) to N.I.H. (Now I Hear).

Finally, my human factors branch chief back in the mid-50s, Jack Latham, used to preach, "There's a difference in what a person can do and what he/she will do in the operational situation". I have never forgotten that, since it is perhaps the most misunderstood but accurate principle related to human behavior and accidents.

These thoughts are indeed fine guidelines among thousands that could be listed to improve human behavior in resolving problems. Of course, we should not forget the possibility of exchanging the person providing the unsatisfactory behavior for one more competent or motivated. In any event, one overall recommendation that comes to mind for this behavior problem is for continued effort by all of us to educate, especially managers and decision-makers, on real world human capabilities and limitations...with emphasis on prevention actions, not blame.

Another suggestion in this area, offered by a colleague, was to bring the people who introduce the risks to the people who take them. [12] That reminds one of the old days when mechanics always went along on test flights.

The Endless Ocean of Accident Prevention Knowledge

We have nearly a century of powered flight from which we have produced an ocean of mistakes, not to mention knowledge acquired through tributaries from other sources. Accident prevention lessons are part of that ocean and have been applied to varying degrees. The trouble is the ocean has become so vast it denies accurate navigation. In a sense, it has not been charted. Then, occasionally, a tsunami (tidal wave) occurs, derived from some major undersea disturbance (major aircraft accident). The wave crashes upon the shores of public interest and emotion, sometimes severely damaging certain airlines, a given model aircraft and/or careers, not to mention a few score passengers and crew here and there. Some remedial changes are then made based upon the information the wave revealed. Remember, however, a tsunami does not produce new water. Through wave dynamics, it just reminds people of the power of the ocean. Similarly, a major accident really only gives us a hint of the immense aviation knowledge that had existed already.

To carry this metaphor a bit farther, a smooth ocean can be likened to those periods in air carrier safety where accidents have been absent for a while (e.g., the perfect CY98 record for the U.S.). When discussing this with the aforementioned Jerry Lederer a few months ago, he offered a thought for all of us. He cautioned, "a danger to aviation is success". Perhaps one could thus equate a smooth ocean to complacency.

Whether the problem is defined as merely a plethora of accident prevention knowledge so large as to be unwieldy or one of simple complacency, we have advances in computer technology which give us markedly improved access to past lessons. Project GAIN [13] seeks to take advantage of this capability. Unfortunately, however, that project is geared to contemporary monitoring of undesired events which, while important in many respects, does not plan for researching prior experiences and prevention lessons.

We need a hazard based system of prevention knowledge which can be acquired, evaluated/analyzed, stored and retrieved in a timely, cost effective manner. This does not exist anywhere today, despite past attempts by a couple of universities, the U.S. Air Force and NASA, among possibly others. Libraries do not provide this. Accident data bases do not have it. Neither has the necessary taxonomy of hazard identification and prevention methodology. They seem content with logging documents by author, title, date and/or generalized subject. They do not classify their information sufficiently to meet the needs of safety persons researching a particular hazard area without an excessive expenditure of time and resources.
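
As a sketch of what such hazard-based indexing might mean in practice, consider keying each document by hazard category and prevention method rather than author/title/date alone. All field names, tags and records below are invented for illustration only:

```python
# Illustrative only: a prevention-knowledge record indexed by hazard and
# prevention-method tags, so a researcher can pull every lesson on a given
# hazard directly instead of scanning author/title/date catalogs.
from dataclasses import dataclass

@dataclass
class PreventionRecord:
    title: str
    date: str
    hazards: set        # e.g. {"icing", "autopilot-masking"}
    preventions: set    # e.g. {"procedure", "certification-change"}
    lesson: str

LIBRARY = [
    PreventionRecord("ATR-42 Lake Como findings", "1987",
                     {"icing", "autopilot-masking", "roll-upset"},
                     {"procedure"},
                     "Autopilot masks rising control forces; hand-fly in icing."),
    PreventionRecord("EMB-120 Monroe report", "1997",
                     {"icing", "autopilot-masking", "certification"},
                     {"procedure", "certification-change"},
                     "Autopilot-in-icing recommendations; certification challenged."),
]

def records_for_hazard(tag):
    """Every stored lesson indexed under a given hazard tag."""
    return [r for r in LIBRARY if tag in r.hazards]

for rec in records_for_hazard("autopilot-masking"):
    print(f"{rec.date}  {rec.title}: {rec.lesson}")
```

The design point is the index, not the records: with hazard and prevention tags as first-class keys, the icing/autopilot commonality across Como, Roselawn and Monroe would have been a one-line query rather than a research project.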

As will be discussed shortly, accident investigating agencies could easily take a first step in meeting this challenge. In the meantime, three actions are being taken by this author. First is an attempt to induce Embry-Riddle Aeronautical University to use at least part of a significant grant they recently received for air safety purposes on a program as outlined above. Second, the subject is expected to arise at the ISASI Annual Seminar in Boston, Aug. 22-26, 1999. As one of the scheduled participants, I shall once again raise this subject. Third is an attempt to have this issue discussed at a forthcoming ICAO Divisional Meeting concerning accident investigation and prevention, Sept. 14-24, 1999. Since this is a meeting limited to "States", one or more countries will have to get behind the effort if it is to have any impact internationally.

No Time For the Past

Closely associated with the foregoing discussion is the fact that not all participants in the aviation system are disciplined in professional study protocols, which should always ask what is known from the past that is relevant to the current inquiry. Frequently, time pressures on the project result in cutting this corner; hence the plea above to make retrieval of information easier. Moreover, managers often fail to check for this "look back" feature in what they are reviewing. Of course, the legal system does this, almost to a fault, demanding case citations which literally sometimes go back centuries.

Another dimension to this problem is illustrated by something told to me by the late Bill Stieglitz, one of the first - and best - flight safety engineers in the world (circa the 40's into the 60's with Republic Aircraft). He noted that about every five years he would have to start all over indoctrinating design engineers in accident prevention principles and practices. Design safety handbooks and regulatory requirements notwithstanding, people change jobs or the jobs are redefined. People are promoted; new people arrive. Organization changes produce changed communication interfaces. In short, the dynamics of all organizations tend to dilute if not ignore institutional knowledge.

As implied above, many aviation personnel depend upon laws such as Federal Air Regulations (FARs) to indirectly document the lessons of the past. After all, it is said, we learn from our mistakes, so we write a rule that forbids certain actions or demands others...all in the name of safety. This is like the order reportedly posted on an Army Air Corps bulletin board during WW I; namely, "By the order of the Commanding General, there will be no more aircraft accidents".

The problem with regulations is twofold. First, it is impossible to write a rule for all ramifications of human behavior. Second, rules are out of date almost by definition. The administrative law hoops through which rules must jump usually cost a few years, even from the time of general agreement on the provisions. Regulations are, or should be, only a floor for desired performance...and the floor covering needs replacement expeditiously as new knowledge is obtained. What is needed too are walls or other structure to which higher standards may be attached as they are implemented.

To fight "No Time For the Past" takes an improvement in the professionalism of workers and management alike to make the necessary time available to learn and apply the accident prevention lessons of the past. This includes accident investigators even to the point of having a standard report section devoted to documenting such effort. Similarly, operators should make bitter lessons of the past part of every training program for flight and ground crews, tailored to that particular phase of training such as upgrading to Captain. This would include at least a review of Crew Resource Management cases, particularly as experienced in the model aircraft in question. [14]

Influential Persons and Levels of Ignorance

Unfortunately, blood priority (a.k.a. tombstone safety) seems to become recognized too often only when special task forces or commissions are formed following an attention-getting tragedy in some major back yard. (For example, TWA 800 near New York, or the Air Florida accident which sort of disrupted traffic on Washington's 14th Street bridge.) With some exceptions, Canada's Dryden Commission being one of them, these reactive studies are too often staffed by people whose first real exposure to safety engineering, operations or management is the study being conducted. They do not have the necessary institutional knowledge. Worse, they seem reluctant to seek it from others except from direct parties to the tragedy, most of whom always have their own ox to protect. The "neutral" people seem to emerge from naive legislative staffs or certain hallowed halls of ivy which have yet to integrate accident prevention into their curricula.

For example, during studies by the U.S. Gore Commission and a U.S. National Civil Aviation Review Commission, one could count on a single hand the number of true, experienced safety professionals, safety educators, independent safety consultants, and retirees from safety positions who were on the team...people who do not need to have their testimony pass muster with some office of general counsel or political party hierarchy. [15] A long-time and highly respected aviation correspondent, when discussing this phenomenon, told me it reminded him of the infamous military-industrial complex that has influenced our country's defense posture, not particularly to the benefit of the public's pocketbook.

A specific example of this problem involved an acknowledged "expert" in the physics of ice formation who could not understand why the solution to the icing hazard had to consider landing approach speeds and, consequently, how long airport runways had to be. In other words, another "level of ignorance" seen frequently in these studies is the lack of a systems orientation.

Fortunately, some groups such as the Flight Safety Foundation (FSF) are a step removed from government directed studies. They also have the capability to, and do, cajole many of their members with qualified safety personnel into supporting accident prevention studies on a relatively unbiased basis. Still, FSF members pay fees and part of the Foundation's income comes from the Government. Thus, FSF too must be careful so as to maintain informed objectivity.

A couple of specific recommendations can be made in this touchy area of challenging the efforts of very well meaning people. First, appreciate that several professional safety groups exist, such as the International Society of Air Safety Investigators (ISASI), the System Safety Society (SSS), the National Safety Management Society (NSMS), the American Society of Safety Engineers (ASSE) and the safety committee of the Human Factors and Ergonomics Society (HF&E). These professional safety societies should take a more aggressive role with legislative committees et al to ensure safety qualified representation during safety reviews. They can certainly provide more accident prevention input than most of the accepted public interest groups and the new, self-styled "safety institutes" which seem to spring up, at least on the TV talk shows.

Second, these review commissions should include personnel of potentially opposing views on the issues at hand. Take a lesson from the legal field: notwithstanding the negative sides of the adversary system, truth will more probably come out when the opinions of both sides are equally visible.

Antiquated, Excessive Emphasis on Accident Causation

Most countries throughout the world continue to determine and emphasize "Cause", "Probable Cause", "Primary Cause", "Contributing Cause" and the like during accident inquiries. Apparently, these expressions portend "closure"; but closure for what? Pain and suffering of friends and relatives of victims? Possibly. As a determinant in tort or criminal litigation? Probably. As the answer to accident prevention? Not really. "Cause" is part of the cognitive, accident analytical process but only one step, and often a self-defeating step: people tend to stop thinking when they reach one cause. Indeed, this is where known precedent often gets lost.

Accidents are a combination of events, sequential or otherwise...cause-effect relationships (plural). All the links of a chain comprise an accident, not just one link. Thus, if all the links, all potential prevention possibilities, are not illuminated under reasonably similar lighting, many are never seen and/or are forgotten. This is exactly what happened in the trio of icing accidents described earlier. In Italy, the political-cultural environment demanded blaming someone; hence the dead Captain took the brunt of the official report. The Roselawn case emphasized the aerodynamic deficiencies of an iced wing. The Michigan case highlighted flight-in-icing certification deficiencies with commuter aircraft. All three of these factors, among others, were present in all the cases and were brought out during the Lake Como investigation, let alone in the later accidents/incidents.

Attempts to resolve this "cause" issue have been frustrating, to say the very least. Australia and Canada have made strides towards an "all cause" concept, as have the U.S. military forces. Those who fought for that are to be congratulated indeed. ICAO took a very tiny step in that direction during its 1992 Accident Investigation Division meeting by leaving the door open for multiple causes in "Annex 13" (the international accident investigation guideline). Generally speaking, however, groups like the NTSB have not openly debated the issue. My personal research and suggestions on the subject have been published several times, with references 16 and 17 being the most comprehensive. [16], [17] There is some reason to believe that a current study of NTSB operations by the Rand Corporation, funded by the Board, might include a look at this matter. At least Rand has heard and seen this author's views on the subject and, it is believed, heard similar comments from others.

My recommendations to the NTSB have been basically to avoid any emphasis on singular or prioritized approaches to accident causation in conducting or reporting investigations. In the U.S., some argue the Board's enabling legislation would have to be modified to adopt such a course, a view with which I do not concur. Government agencies have a right, let alone an obligation, to interpret legislation and promulgate regulations accordingly. The "cause" language in the Board's charter is broad enough to allow deemphasizing "cause" in Board reports. In any event, NTSB regulations and procedures, particularly in 49 CFR 831, would require only relatively minor modifications. Admittedly, a public relations effort would be needed to explain the reason for the change. Getting out of the "cause" business removes safety agencies from assessing blame and allows more needed concentration on their primary if not exclusive mission, accident prevention.

Recommendations and Action Failure

Implicit in the basic theme of these remarks is the failure of the aviation system to adequately track and analyze what happens to recommendations emanating from accident/incident investigations. This is a major shortcoming of investigating bodies but by no means limited thereto. The best of recommendation intentions is frequently lost in real world responses like "Study it", "Redirect it", "Clarify it", "Enlarge it" and "Dilute it", even when the public pronouncement by the action agency seems to be in agreement with the given proposal. [18] Sometimes the action is so protracted that knowledge of the issue is simply lost through the sheer passage of time and changes in personnel (notwithstanding more similar accidents). As proof of this situation, compare the 1972 NTSB Approach and Landing Forum report with relatively recent special studies on "Approach-and-Landing Accidents" (ALR) and "Controlled-Flight-Into-Terrain Accidents" (CFIT). [19], [20], [21] Superb as these recent studies are, they failed to go back to what was stated in 1972. They would have found numerous issues and suggestions that have been around for over three decades: how to deal with non-precision approaches, three-pointer altimeters and the importance of Visual Approach Slope Indicators (VASI), just to name a few.

Incidentally, it was fascinating to learn recently that the FAA is not replacing VASI's that might have worn out or otherwise have been shut down due to runway changes. They argue VASI's are not needed given modern Instrument Landing Systems (ILSs) and airborne approach and landing instrumentation, especially under instrument meteorological conditions. [22] This is blatantly stupid, as any qualified person who has studied approach and landing accident investigations or the aforementioned 1972 NTSB report can testify.

But back to the "action failure" point: it is quite possible that, even if given reasonable processing, an original recommendation might have been flawed, albeit accepted all around. However, how is this to be established unless the recommendation's effectiveness is studied long term? Without such closed-loop thinking, accident investigation can lose much of its impact on the accident prevention system. [23]

Resolution of the recommendations followup issue could be relatively easy. Expand accident/incident and recommendation databases to emphasize prevention (not "cause", as noted above). Place a spotlight on the people/agencies who fail to make or consider recommendations. Accident reports should contain a section for the investigator(s) to record at least what he/she believes the mode of remedial action should be. The military system safety process provides the start of a logic hierarchy for such expanded recommendation analysis via its "System Safety Sequence". This important section of MIL-STD-882 begins with designing out the identified hazard, continues with provisions for hazard severity mitigation and the use of warnings, and ends, as a last resort, with control through procedures, education and training. Other parts of the system safety process acknowledge the possibility of simply accepting the hazard and notifying appropriate parties thereof. [24] Admittedly, this can become tricky in a civil aviation environment where policy making politicians are wont to imply perfect safety to a critical public.
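
That logic hierarchy can be made explicit. The sketch below encodes the sequence as an ordered precedence list that an expanded recommendation record could reference; the wording is paraphrased from the description above, not quoted from MIL-STD-882 itself:

```python
# The "System Safety Sequence" as an ordered precedence list, paraphrased
# from the text above; an investigator's recommendation could record which
# rung of this ladder its proposed remedial action sits on.
ORDER_OF_PRECEDENCE = [
    "design out the identified hazard",
    "mitigate hazard severity",
    "provide warnings",
    "control through procedures, education and training",          # last resort
    "accept the hazard, with notification to appropriate parties",
]

def preferred_remedy(feasible_actions):
    """Return the highest-precedence remedial action judged feasible."""
    for action in ORDER_OF_PRECEDENCE:
        if action in feasible_actions:
            return action
    return ORDER_OF_PRECEDENCE[-1]

print(preferred_remedy({"provide warnings",
                        "control through procedures, education and training"}))
```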

Recommendation databases need to be expanded not only to document what investigations suggest but also to provide a way of tracking hazards, the successes/failures of the formal recommendations that the investigating body issues, and where the action breakdowns occur, if any. A periodic report to the public on these items would be much more meaningful than the gobbledygook that permeates most current statistical studies of cause factors. It would be wise also to go back and code at least 20-30 years of past recommendations in this manner. We may never gain the full intelligence the past offers, but we can improve what we know and can teach today...which leads to another suggestion.
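
A minimal sketch of such a closed-loop recommendation record might look like the following. All statuses, field names and the sample recommendation number are hypothetical; the point is that "Study it", "Dilute it" and plain inaction become visible, queryable states rather than silence:

```python
# Hypothetical closed-loop tracking of a safety recommendation: hazard,
# action agency and a dated status history, so action breakdowns can be
# queried and reported rather than lost to time and personnel turnover.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    rec_id: str
    hazard: str
    action_agency: str
    issued_year: int
    history: list = field(default_factory=list)   # (year, status) pairs

    def update(self, year, status):
        self.history.append((year, status))

    def is_stalled(self, current_year, limit_years=5):
        """True if no closing action has been recorded within limit_years."""
        closed = any(s in ("implemented", "closed-acceptable")
                     for _, s in self.history)
        return not closed and current_year - self.issued_year >= limit_years

# Invented example: a 1972-era recommendation that never closed.
rec = Recommendation("A-72-XX", "wake vortex encounter geometry", "FAA", 1972)
rec.update(1974, "study initiated")
print(rec.is_stalled(1999))   # True: decades without closing action
```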

Educational institutions should incorporate into their safety curricula a survey of past hazard identification and control knowledge. Such a course at the graduate level was conducted at George Washington University in the late 60's. Using major problem areas such as approach and landing, turbulence and thunderstorms, aircraft configuration accidents, etc., each class session was conducted in essentially two parts. The first was student review and presentation of only the facts of an accident typifying a particular hazard category. The second part was an explanation of the technical aspects of the numerous cause-effect relationships (again, plural) found in every accident, followed by what people had done or tried to do prevention-wise. Cause, as such, rarely arose. This approach seemed to be well received because it linked the students to past knowledge and experiences. As an anonymous sage once put it:

"Good judgment comes from experience. Good experience
comes from bad judgment."

Increased Aviation System Complexity

Last but by no means least of these big picture human error issues is what now comprises our aviation "system". In brief, we have seen a mushrooming expansion of the system's socio-technological dimensions: international air travel on a scale hard to even acknowledge, real-time news media coverage and dissemination of accident investigation discoveries, litigious societies growing and continuing to adversely affect accident prevention communications, aircraft and ground subsystems which challenge us mere humans to understand why the machines sometimes fail. These are perhaps just some of the expanding system factors which are curtailing our ability to learn from the past.

On the positive side of the ledger are advances in communication methods such as automatic data processing, the internet and E-mail; that is, most of the time. Will we ever get over that often heard excuse, "It was computer error"? One also wonders, for example, does privacy protection of computer keyboard strokes really exist? If not, does advanced information processing have this as a disbenefit to safety?

The deleterious effects of these problems involve difficulties in understanding known precedent in the first place, let alone communicating it. Tremendous pressures are exerted on investigating agencies and individual investigators. No one who has followed the trials and tribulations of TWA 800 and Swissair 111 can possibly dispute this.

To alleviate these adverse influences is a toughy. Solutions most certainly rest beyond any particular professional discipline. This suggests a systems approach to hazard identification and control should be emphasized not only by the requirements deciders but also the designers, builders, operators, investigators, managers, et al. The influence of the "et al", often not thought of as being in the aviation system, must be considered; for example, news media personnel, legislators, litigators, public interest groups, and educators.

The beginning of an understanding in this regard is apparent in the FAA in that some of their recent safety initiatives have expanded to include industry and operational personnel. Nevertheless, they are a far cry from accepting applicable principles of the aforementioned MIL-STD-882, among other documents describing what system safety is all about; not that this matter has not been brought to their attention from both inside and outside the agency.

As to the technical complexities, they might be the easiest issues to resolve as long as new people continue to make wheels round instead of square (Y2K notwithstanding). We have astounding technology breakthrough capabilities. The chapter remains open as to whether or not we apply them with reasonable lessons from the past.

Concluding Remarks

Let us not forget that the aviation system works hard to prevent accidents. Expanded if not totally new programs have been put in place in recent years: a Flight Operations Quality Assurance (FOQA) program, an Aging Transport Non-Structural Systems Plan, an Industry Safety Strategic Plan, a Commercial Aviation Safety Team and the previously mentioned FSF programs. At the forthcoming ISASI annual seminar in August in Boston, and the ICAO meeting mentioned earlier, opportunities will be present to discuss and, hopefully, do something about the recommendation question. Nevertheless, when I read of these potentially meaningful efforts, I cannot help being concerned over the prospect of continuing repeat business, if it may be called that, because we do not look backward as well as forward.

Pogo was right: "We have met the enemy and he is us". We are all part of the aviation system; thus it is up to all of us to keep working at accident prevention. For those of you who might be getting tired of it all, let me share something I observed on the tube a few months ago. It came up during one of those evening soap operas but nevertheless hit home.

A young boy was terminally ill. He kept fighting and fighting to sustain life. A question arose in the dialog between the parents and the attending physicians: when should they give up and terminate use of the boy's life support equipment? The predominant response was, "When we stop caring".

When we stop caring about preventing accidents is when we can forget what the past has - or should have - taught us.



************
About the Author

C. O. ("Chuck") Miller holds degrees in Aeronautical Engineering, Systems Management, and Law. He is also a graduate of the U.S. Federal Executive Institute's principal course. He was a pilot in the U.S. Marine Corps in World War II and an industry test pilot for three years after his postwar undergraduate studies. He has been involved specifically as a safety professional since then in industry, research, university, government and private consulting environments, including being the Director of NTSB's Bureau of Aviation Safety from 1968 to 1974 and one of the principal consultants to Canada's Dryden Commission in 1989. He is a "Fellow" of the American Institute of Aeronautics and Astronautics, the Human Factors Society, the System Safety Society and the International Society of Air Safety Investigators, among many other honors bestowed over the years. He was inducted into the Arizona Aviation Hall of Fame in 1993 and the Safety and Health Hall of Fame International in 1996.



FOOTNOTES:

[1] Rabideau, Gerald. (Personal notes from teaching human error prevention, circa 1975; precise source unknown.)
[2] Meister, David, and Allan D. Swain. (Personal notes as in Reference 1.)
[3] Bruggink, G. M., "The Uncontrollable Cabin Fire - Land and Evacuate", 101 BAS 1986.
[4] "Final Report to President Clinton from the White House Commission on Safety and Security", Feb. 12, 1997.
[5] FAA, "14 CFR Parts 119, 121, et al, Aging Aircraft Safety, Proposed Rule", Federal Register, April 2, 1999.
[6] Lederer, Jerome, "Hazard of Wet Wire Fire", Flight Safety Foundation, circa 1968. Republished in the ISASI Forum, Apr.-June 1999.
[7] National Avionics Society, Avionics Newsletter, March 1989.
[8] NTSB, "Special Investigation Report: Safety Issues Related to Wake Vortex Encounters During Visual Approach to Landing", NTSB/SIR 94-01.
[9] NTSB Reporter, March 1999. (Verified by a check of the NTSB Web Site, Apr. 6, 1999.)
[10] FAA Human Factors Team, "The Interfaces Between Flightcrews and Modern Flightdeck Systems", June 18, 1996. Also published in the Flight Safety Foundation's Flight Safety Digest, Sept.-Oct. 1996.
[11] Lederer, Jerome, "Ideal Safety System for Accident Prevention", Southern Methodist University Journal of Air Law and Commerce, Vol. 34, 1968.
[12] Personal communication with Ludwig Benner, Mar. 21, 1999.
[13] Federal Aviation Administration, "Global Analysis and Information Network (GAIN)", May 1996.
[14] This "No Time for the Past" issue was considered so important by a colleague (Gerry Bruggink), who kindly reviewed a draft of the text, that he felt it should be the title of the presentation. As usual, he was probably correct; however, the title shown on the cover was already printed in the seminar program.
[15] Miller, C. O., "Commentary on the Gore Commission Report As It Concerns Aviation Safety", ISASI Forum, Apr.-June 1997.
[16] Miller, C. O., "Aircraft Accident Investigations: Functions and Legal Perspectives", Southern Methodist University Journal of Air Law and Commerce, Vol. 46-2, Winter 1981.
[17] Miller, C. O., "Down With Probable Cause...", presented at the ISASI Seminar, Canberra, Australia, Nov. 7, 1991; published in the Proceedings.
[18] Hendrick, K., and Benner, L., "Investigating Accidents with STEP" (Table 15.6), Marcel Dekker, New York, NY, 1986.
[19] NTSB, "Special Study Report on Approach and Landing Accident Prevention Forum", Oct. 24-25, 1972.
[20] Flight Safety Foundation, "Approach-and-Landing Accident Reduction Task Force, Final Reports" (Four Groups), FSF Flight Safety Digest, Nov. 1998-Feb. 1999.
[21] Khatwa, R., and A. L. C. Roelen, "An Analysis of Controlled-Flight-Into-Terrain Accidents of Commercial Operators, 1988 through 1994", FSF Flight Safety Digest, Apr.-May 1996 (reprinted Nov. 1998-Feb. 1999).
[22] Rees, Wally, "Spend Money on VASI", Letter to the Editor, Aviation Week and Space Technology, Oct. 26, 1998 (and personal followup thereto with Capt. Rees and FAA personnel, Nov. 9, 1998).
[23] Miller, C. O., "System Safety", Chapter 3 in E. L. Wiener and D. C. Nagel (Eds.), Human Factors in Aviation, Academic Press, 1988.
[24] U.S. Department of Defense, "System Safety Program for Systems and Associated Subsystems and Equipment, Requirements for", MIL-STD-882B, July 15, 1969.