Thursday, March 1, 2012

The Birth of Design for Testability

Over the last four decades, industry has slowly begun to give Design for Testability the attention it truly deserves. The concept was pioneered by Ralph De Paul, Jr. in the mid-1960’s, based on diagnostic ideas that he had developed in the 1950’s. At that time, industry was not yet receptive to the idea of implementing, within the design process, new techniques oriented solely toward improving the testing and “troubleshooting” of a device or system. During its first two decades, Design for Testability remained largely an “outsider” discipline, accepted only by a relatively small group of industry experts, until the idea finally began to take hold in the mid-1980’s.

The Initial Two-Decade Investment

In the 1960’s and early 1970’s, De Paul, who would later found DETEX Systems (today’s DSI International), began pushing for innovative ways to assure our servicemen of dependable equipment in the field. Recognizing that dependability was best ensured during the design process, De Paul developed a way of representing design functionality as a causal model in what he called the Design Disclosure Format, or DDF (later to be known as a dependency model). From DDF, Maintenance Dependency Charts (MDCs) were developed to improve troubleshooting and equipment maintenance. The DDF approach would prove equally effective for electronic and non-electronic systems. Emerging from this work in the mid-1960’s was military standard MIL-M-24100, co-developed by Ralph De Paul and the first precursor to future Testability standards. A careful study of this initial standard reveals that at its core was a design description format nearly identical to the one used by LOGMOD, DSI’s first commercially available Testability and Maintenance tool.
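
Although the details of DDF and LOGMOD are beyond the scope of this post, the underlying idea of dependency-model diagnosis can be illustrated in a few lines of Python. The sketch below is a deliberately simplified illustration assuming a single fault; the structure and the names (dependencies, isolate) are our own, not DSI’s actual DDF or LOGMOD format.

    # Each test point maps to the set of components whose correct
    # operation it depends on (its dependency closure).
    dependencies = {
        "T1": {"power_supply", "oscillator"},
        "T2": {"power_supply", "oscillator", "mixer"},
        "T3": {"power_supply", "amplifier"},
    }

    def isolate(outcomes):
        """Return the suspect components consistent with the observed
        test outcomes (True = pass, False = fail), assuming one fault."""
        suspects = set().union(*dependencies.values())
        for test, passed in outcomes.items():
            if passed:
                # A passing test exonerates everything it depends on.
                suspects -= dependencies[test]
            else:
                # A failing test narrows suspects to its own dependencies.
                suspects &= dependencies[test]
        return suspects

    print(isolate({"T1": True, "T2": False, "T3": True}))  # -> {'mixer'}

The appeal of this style of reasoning is that the same model drives both troubleshooting (which component to suspect) and testability assessment (which faults the chosen test points can distinguish), for electronic and non-electronic systems alike.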

With LOGMOD enjoying continued field successes through the late 1970’s, United States Department of Defense studies were conducted, and William Keiner (US Navy) sought out DSI and Ralph De Paul to bring the need for Testability analysis before the U.S. Congress. Keiner visited DSI on numerous occasions and was ultimately credited with authoring MIL-STD 2165, the first recognized “Testability” standard. This document, however, was heavily influenced by the ideals, techniques and writings that De Paul, ever the diagnostic evangelist, had freely shared with Keiner during this period.

Because DSI had been intimately involved in the initial “push” of MIL-STD 2165 (now MIL-HDBK 2165), DSI was also instrumental in the development of MIL-HDBK 1814, the Integrated Diagnostics standard. The purpose of this document was to better define the broadening scope and diagnostic responsibilities of all contributing parties, both those involved in design and those involved in support activities. This was the era when Design for Testability began to branch into additional areas, and when the multitude of design and support disciplines began to coalesce into a more unified diagnostic engineering process, one better suited to evaluating diagnostic performance and support objectives during the design phase of large-scale, complex programs.

Riding on the Shoulders of Giants

In 1993, the IEEE recognized Ralph De Paul by posthumously awarding him the John Slattery Award for his contributions to diagnostic engineering, referring to him as “The Father of Testability”. Throughout the 1990’s and early 2000’s, DSI continued to push industry to strive toward higher levels of commitment to Design for Testability. Industry experts remained in contact with DSI, and when the IEEE began to develop its own standard on testability and diagnosability, Eric Gould (DSI’s current subject matter expert) was recruited to aid in its development. As a member of the Diagnostic and Maintenance Control subcommittee of IEEE SCC20, Gould was heavily involved in the writing of this standard, not only drafting the metrics section of the document but also participating in the ballot resolution process.

The resulting IEEE Std 1522 (2004) provides a formal basis for the analytical component of the Design for Testability process. Ultimately, this standard was the result of years of collaborative effort by many of the most résumé-rich and well-known subject matter experts in the field; moreover, the strict ballot resolution process required by the IEEE ensured that the standard withstood a “trial by fire” as it was reviewed by members of the Design for Testability community at large. Because the metrics defined within this standard were intended to be applied within a wide variety of domains, care was taken not only to make the metrics computationally precise, but also to keep their definitions sufficiently general to ensure universal applicability. This awareness of the full spectrum of concern is one of the areas in which the document differs from many of the more provincial descriptions of the testability process that can be found by searching the Internet.
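
For a concrete sense of what “computationally precise” means here, consider two failure-rate-weighted figures of merit commonly used in testability analysis: fraction of faults detected and fraction of faults isolatable. The sketch below uses hypothetical data and commonly cited forms of these metrics; it should not be read as the exact definitions given in IEEE Std 1522.

    # Hypothetical fault universe: predicted failure rate (failures per
    # 10^6 hours), whether any test detects the fault, and the size of
    # the ambiguity group the test suite isolates it to.
    faults = [
        {"rate": 12.0, "detected": True,  "ambiguity": 1},
        {"rate":  4.5, "detected": True,  "ambiguity": 3},
        {"rate":  7.2, "detected": False, "ambiguity": None},
    ]

    total_rate = sum(f["rate"] for f in faults)

    # Fraction of Faults Detected (FFD): failure-rate-weighted share of
    # all predicted faults that the test suite detects.
    ffd = sum(f["rate"] for f in faults if f["detected"]) / total_rate

    # Fraction of Faults Isolatable (FFI) to an ambiguity group of n or
    # fewer replaceable units, over detected faults (conventions vary).
    def ffi(n):
        detected = [f for f in faults if f["detected"]]
        detected_rate = sum(f["rate"] for f in detected)
        return sum(f["rate"] for f in detected if f["ambiguity"] <= n) / detected_rate

    print(f"FFD = {ffd:.1%}, FFI(1) = {ffi(1):.1%}")  # FFD = 69.6%, FFI(1) = 72.7%

Weighting by failure rate rather than by simple fault count is what allows such metrics to be applied uniformly across domains, since it reflects how often each fault will actually be encountered in the field.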

Defining Testability

Design for Testability is not an endeavor solely owned by the chip, board or software test ambassadors, although these segments of industry oddly appear to believe that the Design for Testability process is peculiarly theirs. Many of the definitions of testability or Design for Testability that can be found on the Internet are extremely narrow, funneled towards the low-level testing methodologies performed for circuit or software test. Furthermore, many of these definitions are closely tied to specific test methodologies (JTAG, Boundary Scan, etc.), implying that their intended domain is limited to low-level, electronics-specific or software test applications. It is hard to imagine how designers of fuel systems, radar arrays, military vehicles and communication satellites, to name just a few of the many system technologies to which Design for Testability must be contractually applied, could find a use for the process described in the provincial definitions that are becoming alarmingly widespread.

Moreover, excessively low-level descriptions of the benefits of good testability usually result in a disconnect when engineers attempt to sell Design for Testability to management. It is important that the process be described in terms general enough to encompass its benefits across all of the disciplines (Maintainability, Safety, Supportability, etc.) that Design for Testability is ultimately responsible for sustaining. It is beyond the scope of this paper to propose a new definition of testability, especially since there are plenty of serviceable definitions already afloat that may serve as excellent starting points. One could do worse than to begin with the extremely (and intentionally) general definition served up in IEEE 1522:

Testability: A design characteristic that allows its operational status to be determined and the isolation of faults to be performed efficiently.

We could then hazard the following, equally general definition of Design for Testability:

Design for Testability: The aspects of the product design process whose goal is to ensure that the testability of the end product is competently and sufficiently developed.

