Posted on 2012-11-02, 12:37. Authored by Tamim Ahmed Khan.
Web services expose only interface-level information, abstracting away implementation details. Testing is a time-consuming and resource-intensive activity, so it is important to minimize the set of test cases executed without compromising quality. Since white-box techniques and traditional structural coverage criteria require access to the code, web service testing calls for a model-based approach. Testing relies on oracles to provide expected outcomes for test cases; if implemented manually, oracles depend on the testers' understanding of the functional requirements to decide the correct response of the system for each test case. As a result, they are costly to create and maintain, and their quality depends on a correct interpretation of the requirements. If suitable specifications are available, however, oracles can be generated automatically at lower cost and with better quality. We propose to specify service operations as visual contracts: executable formal specifications given as rules of a typed attributed graph transformation system. We associate operation signatures with these rules to provide test oracles.
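To make the idea concrete, here is a minimal sketch, not the thesis's formalism: a visual contract for a service operation modelled as a graph transformation rule over a typed, attributed graph, with the rule doubling as a test oracle. All names (Graph, Rule, reserveItem, the "fault"/"ok" replies) are hypothetical.

```python
# A visual contract as a graph transformation rule used as a test oracle.

class Graph:
    """Typed, attributed graph: node id -> (type, attribute dict)."""
    def __init__(self):
        self.nodes = {}

    def add(self, nid, ntype, **attrs):
        self.nodes[nid] = (ntype, dict(attrs))

    def matches(self, pattern):
        """True if every (type, attrs) entry of the pattern occurs."""
        return all(any(t == pt and pa.items() <= a.items()
                       for t, a in self.nodes.values())
                   for pt, pa in pattern)

class Rule:
    """Precondition (LHS), effect on the model (RHS), expected reply."""
    def __init__(self, name, pre, effect, expected):
        self.name, self.pre = name, pre
        self.effect, self.expected = effect, expected

    def oracle(self, model, observed):
        """Simulate the rule and compare with the observed response."""
        if not model.matches(self.pre):
            return observed == "fault"   # not applicable: expect a fault
        self.effect(model)               # advance the simulated state
        return observed == self.expected

def reserve_effect(g):
    g.nodes["i1"][1]["inStock"] = False    # item leaves stock
    g.add("r1", "Reservation", item="i1")  # reservation appears

reserveItem = Rule("reserveItem",
                   pre=[("Item", {"inStock": True})],
                   effect=reserve_effect, expected="ok")

model = Graph()
model.add("i1", "Item", inStock=True)
assert reserveItem.oracle(model, observed="ok")     # response as contracted
assert reserveItem.oracle(model, observed="fault")  # re-reserve must fault
```

If the precondition matches the current model state, the rule's effect yields the expected post-state and return value; the oracle flags any service response that deviates from it.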
We analyze dependencies and conflicts between visual contracts to derive a dependency graph, and propose model-based coverage criteria over this graph to assess the completeness of test suites. We also propose a mechanism to determine which of the potential dependencies and conflicts were exercised by a given test case. While the tests execute, the model is simulated, and coverage is recorded and measured against the criteria. The criteria are formalized, and the dynamic detection of conflicts and dependencies is developed. This requires keeping track of occurrences and overlaps of pre- and post-conditions, and of their enabling and disabling, across successive model states, and interpreting these in terms of the static dependency graph.
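The intuition can be sketched as follows: a rule that creates elements another rule's precondition needs is a potential dependency, and one that deletes such elements is a potential conflict. Below is a hedged illustration, not the thesis's actual analysis, abstracting each rule to the element types it needs, creates, and deletes (the real analysis matches graph patterns); the rule names are hypothetical.

```python
# Static dependency/conflict graphs over abstracted rules, plus a
# dynamic check of which dependency edges a test trace exercised.
from itertools import product

RULES = {
    # name: (needs, creates, deletes)
    "addItem":     (set(),           {"Item"},        set()),
    "reserveItem": ({"Item"},        {"Reservation"}, set()),
    "checkout":    ({"Reservation"}, {"Order"},       {"Reservation"}),
}

def dependency_graph(rules):
    """Potential 'A enables B' edges: A creates something B needs."""
    return {(a, b) for (a, ra), (b, rb) in product(rules.items(), repeat=2)
            if ra[1] & rb[0]}

def conflict_graph(rules):
    """Potential 'A disables B' edges: A deletes something B needs."""
    return {(a, b) for (a, ra), (b, rb) in product(rules.items(), repeat=2)
            if ra[2] & rb[0]}

def exercised(trace, rules):
    """Dependencies actually hit by a test: a later step consumed an
    element type that an earlier step created (simplified check)."""
    hit, created = set(), {}
    for rule in trace:
        needs = rules[rule][0]
        for earlier, their_creates in created.items():
            if their_creates & needs:
                hit.add((earlier, rule))
        created[rule] = rules[rule][1]
    return hit

potential = dependency_graph(RULES)
covered = exercised(["addItem", "reserveItem", "checkout"], RULES)
print(f"dependency coverage: {len(covered & potential)}/{len(potential)}")
```

The ratio printed at the end is one possible coverage measure against the criteria: the fraction of potential dependency edges that the executed tests actually exercised.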
Systems evolve over time and need retesting after each change. To verify that the quality of the system is maintained, we use regression testing. Since regression test suites tend to be large, we isolate the affected parts of the system and retest only those by rerunning a selected subset of the full test suite. We analyze the test cases that were executed on both versions and propose a mechanism to transfer the coverage they provide to the new version. This information helps us assess the completeness of the test suite on the new version without executing all of it.
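As an illustration of the selection idea only, and not the thesis's actual mechanism, the sketch below reruns just the tests whose traces touch a changed rule and transfers the recorded coverage of the rest to the new version. The rule versions and test traces are hypothetical.

```python
# Regression test selection and coverage transfer between two versions.

def changed_rules(old, new):
    """Rules added, removed, or redefined between the two versions."""
    return {r for r in old.keys() | new.keys() if old.get(r) != new.get(r)}

def select_and_transfer(tests, old, new):
    """Split tests into those to rerun (touch a changed rule) and those
    whose recorded coverage carries over to the new version."""
    dirty = changed_rules(old, new)
    rerun = {t for t, trace in tests.items() if set(trace) & dirty}
    transferred = {t: tests[t] for t in tests.keys() - rerun}
    return rerun, transferred

OLD = {"addItem": "v1", "reserveItem": "v1", "checkout": "v1"}
NEW = {"addItem": "v1", "reserveItem": "v2", "checkout": "v1"}

TESTS = {  # test id -> trace of rules it exercised on the old version
    "t1": ["addItem"],
    "t2": ["addItem", "reserveItem", "checkout"],
}

rerun, transferred = select_and_transfer(TESTS, OLD, NEW)
print("rerun:", rerun)                # {'t2'} -- touches the changed rule
print("coverage kept:", transferred)  # {'t1': ['addItem']}
```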