Applying Software Test Generation Methods to Large Programs
SBIR 2004 Topic 9.06: Information Technology
Department of Commerce (DOC)
National Institute of Standards and Technology (NIST)
Solicitation Closing Date: January 30, 2004 at 3:00 PM EST

The entire solicitation may be viewed at http://www.nist.gov/sbir


9.06.05 Subtopic: Applying Software Test Generation Methods to Large Programs

Testing accounts for up to 50% of the cost of software development. In addition, adequately secure software can no longer depend on user testing to find security bugs. Writing tests manually is labor-intensive and often misses important cases. Specification-based formal test generation methods, based on model checking [1], have been developed, but applying these methods to large programs requires several innovative solutions.
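The core idea behind model-checking-based test generation is to assert the negation of each coverage goal and harvest the checker's counterexample trace as a test. The following is only a minimal sketch of that idea, using breadth-first reachability over a toy finite transition system; the function names and the example system are illustrative, not taken from the cited report.

```python
from collections import deque

def counterexample(init, step, violates):
    """Breadth-first search over a finite transition system: return a
    shortest trace from `init` to a state where `violates` holds, or
    None if no such state is reachable.  Asserting 'the coverage goal
    is never reached' and taking the counterexample trace as a test is
    the essence of model-checking-based test generation."""
    frontier = deque([(init, [init])])
    seen = {init}
    while frontier:
        state, path = frontier.popleft()
        if violates(state):
            return path          # this trace becomes one test case
        for nxt in step(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None
```

For example, over states 0..4 with transitions s -> (s+1) mod 5 and s -> (2s) mod 5, asking for a state equal to 3 yields the trace [0, 1, 2, 3], which can be replayed as a test.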

First, we need ways to semi-automatically partition and/or abstract specifications into pieces small enough to process [2]. Tests generated from these pieces must then be merged and elaborated to form a test suite. The pieces must be sound; that is, tests generated from a piece must be valid against the original specification. One must also be able to gauge how much of the original specification the generated tests cover, to judge whether further abstraction or a different partitioning is needed.
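The merging and coverage-gauging steps might be sketched as follows. All names here are hypothetical, and the sketch assumes each test is a hashable tuple and that each test can be mapped to the specification clauses it exercises.

```python
def merge_suites(suites):
    """Merge per-partition test suites into one, dropping duplicates.
    Each test is assumed to be a hashable tuple of test steps."""
    merged, seen = [], set()
    for suite in suites:
        for test in suite:
            if test not in seen:
                seen.add(test)
                merged.append(test)
    return merged

def clause_coverage(tests, all_clauses, clauses_hit_by):
    """Fraction of the original specification's clauses exercised by
    at least one merged test.  A low figure suggests the partitioning
    or abstraction needs revisiting."""
    covered = set()
    for t in tests:
        covered |= clauses_hit_by(t)
    return len(covered & all_clauses) / len(all_clauses)
```

In practice the `clauses_hit_by` mapping would come from the generation step itself, since each test originates from a known piece of the specification.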

Second, limited nondeterminism must be allowed. In the simplest programs, each input produces exactly one output. In practice, a range of responses may be acceptable: different orders of executing subtasks, small differences in numerical results, or arbitrary program-generated tags. The expected output in each test scenario must cover the range of acceptable actual outputs, either explicitly or by reference to an abstract class of outputs.
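A test oracle tolerating this kind of limited nondeterminism might look like the sketch below, where the `ANY_TAG` marker and the encoding conventions (sets for order-insensitive results, floats compared within a tolerance) are assumptions for illustration, not part of any particular method.

```python
import math

ANY_TAG = object()   # hypothetical wildcard: matches any program-generated tag

def outputs_equivalent(expected, actual, tol=1e-6):
    """True if `actual` falls within the acceptable range described by
    `expected`: exact values, numbers within a tolerance, sets compared
    regardless of order, or the ANY_TAG wildcard."""
    if expected is ANY_TAG:
        return True
    if isinstance(expected, float):
        return isinstance(actual, (int, float)) and \
            math.isclose(expected, actual, rel_tol=tol, abs_tol=tol)
    if isinstance(expected, (set, frozenset)):
        # subtasks may complete in any order
        return set(actual) == set(expected)
    if isinstance(expected, (list, tuple)):
        return len(expected) == len(actual) and \
            all(outputs_equivalent(e, a, tol) for e, a in zip(expected, actual))
    return expected == actual
```

Encoding the acceptable range in the expected output keeps the generated tests deterministic to evaluate even when the program under test is not.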

Third, there must be a way to efficiently turn test suites into self-evaluating source code. Conceptually, one specifies which calls correspond to each abstract test step, the test suite is macro-expanded into code, and dispatch and reporting code is added. In practice, this is far more complex. Innovative adaptations of existing test code generators are one possible approach.

Finally, tests must be traceable back to the original specification. The final executable test suite must report not only that a particular test failed, but also the part(s) of the specification that yielded that test. In other words, it must identify which lines, paragraphs, etc. of the specification the software does not satisfy.
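If each generated test carries a record of the specification locations it came from, the failure report is a straightforward join; the shape of the trace map below is an assumption for illustration.

```python
def report_failures(results, trace):
    """Map failing tests back to the specification text that produced
    them.  `results` is {test_id: passed?}; `trace` maps each test id
    to the spec locations (lines, paragraphs, ...) it was generated
    from."""
    report = []
    for test_id, passed in results.items():
        if not passed:
            locs = trace.get(test_id, ["<untraced>"])
            report.append("FAIL {0}  (spec: {1})".format(
                test_id, "; ".join(locs)))
    return report
```

The essential point is that the trace map must be built during generation, when the link between spec clause and test is still known, rather than reconstructed afterward.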

As software becomes a bigger part of NIST standards, we want to spend our time developing standards, not generating acceptance tests. A highly automated, flexible, formally specified software test generation method would allow us to put our time to best use and would be widely used in industry.

[1] “Model Checkers in Software Testing”, NIST-IR 6777

[2] “Abstracting Formal Specifications to Generate Software Tests via Model Checking”, NIST-IR 6405


Requests for general information on the NIST SBIR program may be addressed to:

SBIR Program
100 Bureau Drive, Stop 2200
Gaithersburg, MD
Telephone:
Fax:
Email:

