Software Testing Material


Wednesday, February 20, 2008

Data-Driven Testing

Introduction

When deciding to automate the testing of an application, one expected benefit is that the automated tests live as long as the application under test and don't need to be rewritten all the time. Additionally, it should be possible to extend existing tests easily as the application evolves.

To develop tests which meet these requirements, it isn't enough to just record a test and use it as-is. In most cases it will be necessary to edit the test script to refactor its code and make other changes. One such change is to separate the recorded test data (like the values entered into line edits) from the code by introducing data-driven testing. This also allows test cases to be extended later by just adding new records to the test data, without changing the actual test script code.

One prerequisite for data-driven testing is of course that the test tool in use goes beyond plain event recording and replaying and offers a full-fledged scripting language for test scripts. This is the case for Squish, which currently lets its users choose between the Tcl and Python scripting languages. In this article we will use Python.

This article will show how to create such a data-driven test and also how to execute an external tool from a test script to handle tasks such as comparing files.

What exactly is data-driven testing?

Before we go on to implement a data-driven test, I'd like to give a short introduction to the idea behind data-driven testing. Basically it means that a test case uses data files which the application under test (AUT) processes in some way during the test. In the simplest case these might be files the application needs to read during test execution. For this case, the test framework should offer an API to manage such data files.

A more sophisticated way of data-driven testing is that the testing framework actually understands the contents of the data files and it is possible to iterate over the contents in the test script. This is useful to specify the data which should be entered into input fields, etc. during the test. Instead of hard-coding this data into the test script, it will be stored in records in a data file which can be easily extended with new data. Such data might also be read from a database or any other data source.
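To make this concrete, here is a minimal, Squish-independent sketch of the idea in plain Python. The TSV content and field names are invented for the illustration; the point is that each record drives one pass through the same test logic.

```python
import csv
import io

# Invented tab-separated test data: a header row naming the fields,
# followed by one record per line.
TSV = "Input\tExpected\nhello\tHELLO\nworld\tWORLD\n"

results = []
for record in csv.DictReader(io.StringIO(TSV), delimiter="\t"):
    # The same "test" runs once per record; a real GUI test would
    # drive the AUT here instead of calling upper().
    results.append(record["Input"].upper() == record["Expected"])
```

Adding a new case is then just a matter of adding a line to the data, without touching the loop.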

Setting up the test

We need to think about what should happen in the test case so we can record the parts which we don't want to code manually. As an example, we will create a test for the textedit example which can be found in the examples directory of Qt. Afterwards we will edit the generated script to refactor its code and make it data-driven.

We won't discuss the details of setting up a test suite here since this is covered extensively in Squish's manual. The only step I want to stress is choosing Python as the scripting language in the test suite settings. We call the test suite suite_textedit and create a test case called tst_simpleinput.

Recording the GUI interactions

In this test case we want to open a file, edit the file, save it, and compare the result against an expected output. The input file will be a test data file; the commands to edit the file will also be specified in a test data file, along with the expected result. This way the test can be easily extended in the future by just adding more test data. To compare the saved output file against the expected result, we will use the tool diff, which is installed on any Unix system and which is available on Windows as well, e.g. through Cygwin. But first we will record the GUI interactions, and then we will go on and modify the generated script.

For the recording itself we already need a test data file which we will open in the test case. So we create a file called simple.html in the test case's testdata directory with the following contents:

First paragraph

Bold paragraph

When recording, we can choose this test data file in the record settings dialog which pops up, so it will be copied to the AUT side automatically. Then we record the following test flow:

1. Choose File->Open in the menu

2. Choose the file simple.html in the file dialog by double clicking on it

3. Enter something in the editor (this part of the test will be modified later on anyway, so the entered text doesn't matter)

4. Choose File->Save As in the menu

5. Enter out.html as filename in the file dialog

6. Press Return in the file dialog

7. Choose File->Close in the menu

8. Choose File->Exit in the menu

This will generate a test script like the following:

import time
 
def main():
    testData.put("simple.html")
    time.sleep(0.007)
    sendEvent("QMoveEvent", ":Richtext Editor", 756, 218, 665, 649)
    time.sleep(0.047)
    clickButton(":Richtext Editor.qt_top_dock.File Actions.fileOpen_action_button")
    time.sleep(0.189)
    sendEvent("QMoveEvent", ":Richtext Editor.qt_filedlg_gofn", 800, 423, 805, 261)
    time.sleep(0.1)
    doubleClickItem(":Richtext Editor.qt_filedlg_gofn.qt_splitter.files and more files.filelistbox"+\
        ".qt_viewport", "simple.html", 48, 11, 0, Qt.LeftButton)
    time.sleep(2.14)
    type(":Richtext Editor.QTabWidget1.tab pages.QTextEdit2", "Hello")
    time.sleep(2.876)
    activateItem(":Richtext Editor.automatic menu bar", "File")
    activateItem(":Richtext Editor.QPopupMenu1", "Save As...")
    time.sleep(0.157)
    sendEvent("QMoveEvent", ":Richtext Editor.qt_filedlg_gsfn", 800, 423, 788, 325)
    time.sleep(1.715)
    type(":Richtext Editor.qt_filedlg_gsfn.name/filter editor", "out.html")
    time.sleep(0.484)
    type(":Richtext Editor.qt_filedlg_gsfn.name/filter editor", "")
    time.sleep(1.654)
    activateItem(":Richtext Editor.automatic menu bar", "File")
    activateItem(":Richtext Editor.QPopupMenu1", "Close")
    time.sleep(2.876)
    activateItem(":Richtext Editor.automatic menu bar", "File")
    activateItem(":Richtext Editor.QPopupMenu1", "Exit")
 

Setting up the test data

Now we have the basic test script which we can edit to use test data. But before we can do this, we need to set up the test data itself. We will use the following files as an example:

  • tests.tsv Specifies the input files and the related expected output files.
  • inputwords.tsv Specifies input commands for a test which enters some words in the editor.
  • expwords.tsv Contains the expected output from the inputwords.tsv test.
  • inputpara.tsv Specifies input commands for a test which inserts a new paragraph in the editor.
  • exppara.tsv Contains the expected output from the inputpara.tsv test.

The contents of tests.tsv look like this (note that the fields are separated by tabs, not by spaces):

CommandFile        ExpectedOutput
inputwords.tsv        expwords.tsv
inputpara.tsv        exppara.tsv

The first line specifies the column names, which will later be used in the test script to access a field in a record. The other lines contain the data. In our case these are the input and expected output files.
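The mechanism can be sketched in plain Python: splitting the header line on tabs yields the field names, and pairing them with a data line gives named access to each field. The values mirror the tests.tsv example above; in real Squish scripts, the testData API does this for us.

```python
# Header and one data row from the tests.tsv example, tab-separated.
header = "CommandFile\tExpectedOutput".split("\t")
row = "inputwords.tsv\texpwords.tsv".split("\t")

# Pairing column names with values gives field access by name,
# much like testData.field(record, "ExpectedOutput") in Squish.
record = dict(zip(header, row))
```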

We won't show the contents of all files here. But as an example let's look at the test which inserts a new paragraph. Here is the content of the inputpara.tsv file:

Argument
New Paragraph

The first line again specifies the column title. The other lines contain the data which should be passed to Squish's type() function, which sends key events to an edit widget. In this case, first the text New Paragraph will be inserted and then the Return key will be pressed to split the paragraph in two.

The corresponding expected result file exppara.tsv looks like this:

New Paragraph

First paragraph

Bold paragraph

This is the HTML output which the AUT should save when we save the contents of the editor after executing the above actions.

Editing the test script

Now that we have the test data files in place we will go on and edit the recorded test script to use this test data. What we want to achieve is that the test executes the following actions for each record in tests.tsv:

1. Choose File->Open in the menu

2. Choose the file simple.html in the file dialog by double clicking on it

3. Enter the text in the editor as specified in the input data file

4. Choose File->Save As in the menu

5. Enter out.html as filename in the file dialog

6. Press Return in the file dialog

7. Get out.html from the AUT side and compare it against the expected output file

8. Choose File->Close in the menu

Basically we need to put most of the test script into a loop which iterates over all records in tests.tsv. Then we have to implement step 3 to enter the commands as specified in the test data instead of entering some hard-coded data. We also have to implement step 7.

First we put most of the test into the loop. We will use a for loop which loops over all records in the data set tests.tsv and stores the current record in the variable t. In Python we use the following code for this:

    for t in testData.dataset("tests.tsv"):

In Python the body of the loop consists of all lines below the loop statement whose indentation is deeper than the loop statement's indentation. The first line which returns to the same indentation as the loop statement is the first line outside the loop body. So we insert the loop statement after the line sendEvent("QMoveEvent", ...) and then increase the indentation of all lines up to and including the line activateItem(":Richtext Editor.QPopupMenu1", "Close").
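The indentation rule itself can be seen in a tiny stand-alone example, unrelated to the AUT: the indented lines form the loop body, and the first line back at the outer level runs once, after the loop.

```python
collected = []
for i in range(3):
    collected.append(i)       # inside the loop body
    collected.append(i * 10)  # still inside the loop body
collected.append(99)          # same indentation as "for": outside the loop
```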

The next step is to modify the line type(":Richtext Editor.QTabWidget1.tab pages.QTextEdit2", "Hello") to type in the commands as specified in the current test data record's input file. So we replace this line with the following code:

        file = testData.field(t, "CommandFile")
        for arg in testData.dataset(file):
            type(":Richtext Editor.QTabWidget1.tab pages.QTextEdit2", testData.field(arg, "Argument"))

First we retrieve the name of the input file in the current test data record. We address the field containing the file name of the command file using the column title CommandFile. Then we loop over all commands in the specified input file and pass the commands retrieved from the field labeled Argument to the type() function to insert the text into the text editor.

The last bit is to compare the saved result against the expected result. To do this we will use the command line tool diff. First we will implement a function which runs diff on two files and adds a test result to the result log. We add the following code to the beginning of the script:

def diff(file1, file2):
    out = commands.getoutput("diff -u " + findFile("testdata", file1) +\
        " " + findFile("testdata", file2))
    if out == "":
        test.passes("Diff", file1 + " and " + file2 + " are equal")
    else:
        test.fail("Diff", out)

We use the Python module commands to execute the external diff program, so we also need to import the commands module. We call the module's getoutput function, which executes the command given as a string and returns the command's output. We assemble the diff command line, which has to contain the two files to be compared, and assign the output to the variable out.

The output will contain the differences between the two files. This means if the string is empty, there was no difference. In this case we add a PASS test result with the information about the two files to the test result log. Otherwise we add a FAIL test result with the output generated by diff to the test result log.
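As a side note, the same empty-means-equal check could be done without an external diff binary using Python's standard difflib module. This is just an alternative sketch, not what the article's script uses:

```python
import difflib

def diff_lines(expected, actual):
    # difflib.unified_diff yields no lines when the inputs are equal,
    # mirroring diff's empty output for identical files.
    return list(difflib.unified_diff(expected, actual,
                                     "expected", "actual"))

equal = diff_lines(["a\n", "b\n"], ["a\n", "b\n"])
changed = diff_lines(["a\n", "b\n"], ["a\n", "c\n"])
```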

Now we only need to call this function from within the test data loop. We add the following code after we pressed Return in the file dialog to save the editor's contents to the file out.html:

        testData.get("out.html")
        diff("out.html", testData.field(t, "ExpectedOutput"))

First we need to copy the file out.html from the AUT side to the squishrunner side; we use Squish's testData.get() function for this. Then we call the diff() function we just implemented, passing out.html and the file specified as expected output in the current test data record's ExpectedOutput column.

Now we have implemented all the steps and can execute the test. The test case can be extended just by adding new input and expected output files and listing them in tests.tsv; no test script code changes are necessary. The complete test script now looks like this:

import time
import commands
 
# Function which looks for file1 and file2 and invokes diff on them. If the files don't differ
# a test pass is added to the test results, a test fail otherwise
def diff(file1, file2):
    out = commands.getoutput("diff -u " + findFile("testdata", file1) +\
         " " + findFile("testdata", file2))
    if out == "":
        test.passes("Diff", file1 + " and " + file2 + " are equal")
    else:
        test.fail("Diff", out)
 
def main():
    # copy simple.html to the AUT side
    testData.put("simple.html")
    time.sleep(0.007)
    sendEvent("QMoveEvent", ":Richtext Editor", 756, 218, 665, 649)
    time.sleep(0.047)
 
    # loop over all tests specified in tests.tsv
    for t in testData.dataset("tests.tsv"):
        # open the simple.html file
        clickButton(":Richtext Editor.qt_top_dock.File Actions.fileOpen_action_button")
        time.sleep(0.189)
        sendEvent("QMoveEvent", ":Richtext Editor.qt_filedlg_gofn", 800, 423, 805, 261)
        time.sleep(0.1)
        doubleClickItem(":Richtext Editor.qt_filedlg_gofn.qt_splitter.files and more files.filelistbox"+\
        ".qt_viewport", "simple.html", 48, 11, 0, Qt.LeftButton)
        time.sleep(2.14)
 
        # get the file which specifies the type commands which should be executed
        file = testData.field(t, "CommandFile")
 
        # iterate over all commands in the command file and execute them via type()
        for arg in testData.dataset(file):
            type(":Richtext Editor.QTabWidget1.tab pages.QTextEdit2", testData.field(arg, "Argument"))
 
        # save the resulting contents to out.html
        time.sleep(2.876)
        activateItem(":Richtext Editor.automatic menu bar", "File")
        activateItem(":Richtext Editor.QPopupMenu1", "Save As...")
        time.sleep(0.157)
        sendEvent("QMoveEvent", ":Richtext Editor.qt_filedlg_gsfn", 800, 423, 788, 325)
        time.sleep(1.715)
        type(":Richtext Editor.qt_filedlg_gsfn.name/filter editor", "out.html")
        time.sleep(0.484)
        type(":Richtext Editor.qt_filedlg_gsfn.name/filter editor", "")
        time.sleep(1.654)
        
        # copy the saved out.html to the squishrunner side
        testData.get("out.html")
        
        # diff out.html against the expected result
        diff("out.html", testData.field(t, "ExpectedOutput"))
        
        # close the editor tab
        activateItem(":Richtext Editor.automatic menu bar", "File")
        activateItem(":Richtext Editor.QPopupMenu1", "Close")
        
    # exit the AUT
    time.sleep(2.876)
    activateItem(":Richtext Editor.automatic menu bar", "File")
    activateItem(":Richtext Editor.QPopupMenu1", "Exit")
Conclusion

This article showed the complete process of creating a data-driven test case. While event capture and replay eases the first steps of creating a test case, a test tool has to offer much more (a versatile scripting language, extension APIs for test data handling, etc.) to allow test engineers to create robust, maintainable and extensible test cases which will pay off in the long run.
