Putting Formal Specifications under the Magnifying Glass: Model-based Testing for Validation

Emine G. Aydal, Richard F. Paige, Mark Utting, Jim Woodcock

Research output: Contribution to conference › Paper › peer-review


A software development process is conceptually an abstract form of model transformation, starting from an end-user model of requirements and proceeding to a system model from which code can be automatically generated. The success (or failure) of such a transformation depends substantially on obtaining a correct, well-formed initial model that captures user concerns. Model-based testing automates black-box testing based on a model of the system under analysis. This paper proposes and evaluates a novel model-based testing technique that aims to reveal specification- and requirement-related errors by generating test cases from a test model and exercising them on the design model. The case study outlined in the paper shows that a separate test model not only increases the objectivity of the requirements, but also supports the validation of the system under test through test case generation. The results obtained from the case study support the hypothesis that there may be discrepancies between the formal specification of the system modelled and the problem to be solved, and that formal verification methods alone may not be sufficient to reveal them. The approach presented in this paper aims to provide greater confidence in the design model that is used as the basis for code generation.
Original language: Undefined/Unknown
Publication status: Published - 2009

Bibliographical note

Print ISBN: 978-1-4244-3775-7
