Documentation for 3.10

Guide to the TextTest Dynamic GUI
Administering interactive test runs
How to start the Dynamic GUI
The dynamic GUI is selected on the command line via the “-g” option. Alternatively, it is started by clicking “Run” in the static GUI toolbar.
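For example, assuming an application whose config file has the extension “myapp” and that the TextTest script (typically called “texttest.py” in this release) is on your path, a run of the dynamic GUI might be started like this (the application name is only an illustration):

    texttest.py -g -a myapp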



The Test Tree View
The initial left window is a tree view of all the tests that are being run. These are colour-coded: white tests have not started yet, yellow tests are currently in progress, green tests are reckoned to have succeeded and red tests are reckoned to have failed. Clicking a row in this view lets you examine the details of what happened to that particular test.
When a whole test suite completes and all the tests in it have succeeded, the dynamic GUI will automatically collapse the tree view for that suite and simply label that line with the number of tests that were successful. This helps you see which tests still need attention. If this behaviour is found to be undesirable for any reason, it can be disabled by setting the config file value “auto_collapse_successful” to 0.
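In the config file this is a single line, using the ordinary key:value syntax:

    auto_collapse_successful:0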
The Status tab
The initial right window is a summary of how many tests are in a given state, and it is continually updated as tests change state. When tests complete, those that fail are sorted under “Failed” into categories according to which files differed. Note that, as several files can differ in the same test, these categories do not necessarily add up to the total number of failed tests.
It also doubles as a way of selecting and hiding whole categories of tests at once. Clicking a line will select all such tests in the test tree view. Unchecking the check boxes on the right will cause all tests in that category to be hidden in the test tree view on the left. If you uncheck one of the “<file> different” lines, note that a test will only be hidden if all of its differing files are marked as hidden, so a test where more than one file differed will not be hidden until you have unchecked all the files that differed in it. In this example there is only one such file anyway.
Note that whether categories are visible by default can be configured using the config file list entry “hide_test_category” (a common usage is to hide all successful tests automatically). To see how to refer to the various categories, use the keys from, for example, the “test_colours” entry in the config file table.
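As an illustration, the following config file line would hide successful tests by default (this assumes “success” is the relevant category key, as listed under “test_colours” in the config file table):

    hide_test_category:success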
Viewing Tests
When tests are selected via either of the above means, a new “Test” tab is created on the right, containing the details of what's going on in that test. Another tree view, this time of the result files of that test, appears in the top right window, while a textual summary of what happened for this test can be seen in the “Text Info” tab below. The files are also colour-coded, depending on whether TextTest thought they were different (red) or not (green).
Double-clicking files in the test view will bring up views of those files, using the external tools specified in the config file. The relevant entries are “diff_program” for viewing differences, “follow_program” for following a running test and “view_program” for viewing a static file. These default to “tkdiff”, “tail -f” and “xemacs” respectively on UNIX systems, and to “tkdiff”, “baretail” and “notepad” on Windows. By default, differences will be shown if they are thought to exist (red files); otherwise the file is simply viewed. To choose another way to view a file, right-click it and select a viewer from the popup menu.
Note that “view_program” is a “composite dictionary” entry and can thus be configured per file type, using just the file stems as keys. It is thus easy to plug in custom viewers if particular files produced by the system are more conveniently viewed that way.
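A sketch of how a per-file-type viewer might be configured, using TextTest's [section]...[end] form for dictionary entries (the “mylog” stem and the “less” viewer are purely illustrative):

    [view_program]
    mylog:less
    [end]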
On the Text Info tab you will see a textual preview of all the file differences, along with a summary of what happened to the test. This textual preview is also used by the batch mode email report. The tool used to generate the diff can be configured via the config file entry “text_diff_program” (it defaults to “diff”, which is also used internally by “tkdiff”). Each file diff is truncated: the number of lines included can be set via “lines_of_text_difference”, which defaults to 30.
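For instance, a config file might keep the default diff tool but extend the preview to 60 lines per file (both entry names are from the paragraph above; the values are only illustrative):

    text_diff_program:diff
    lines_of_text_difference:60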
If several files differ they will be displayed in alphabetical order. Sometimes this is not ideal, as some files give better information than others. In these cases you can use the config file dictionary setting “failure_display_priority” to ensure that the most informative file comes at the top. Lower numbers are displayed first.
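For example, assuming the standard “errors” and “output” result files, the following would make the stderr differences appear before the stdout ones (the numbers are only illustrative):

    [failure_display_priority]
    errors:10
    output:50
    [end]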
To protect against very large files being compared with diff, you can specify a maximum file size for this, using the “text_diff_program_max_file_size” config file entry. (Otherwise difference programs can hang forever trying to compare the files.)
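This would be set with a line such as the following (the value shown is purely illustrative and its units are an assumption here; consult the config file table for the exact meaning of the number):

    text_diff_program_max_file_size:1000000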
Saving Test Results
When tests fail, you can examine the differences as above, and sometimes you will decide that the change in behaviour is in fact desirable. In this case, you should “Save” the test, or group of tests. This operation will overwrite the permanent “standard” files with the “temporary” files produced by this run.
To achieve this, the dynamic GUI has a “Save” button in the toolbar and a corresponding “Saving” option tab at the bottom right. Select the tests you wish to save from the left-hand test window by single-clicking, using Ctrl+left click to select further tests (or press Ctrl+A to select all tests). On pressing “Save” (or Ctrl+S), all selected tests will be saved.
On saving a test, by default all files registered as different will be saved. You can however save only some of the files by selecting the files you wish to save from the file view (under the Test tab) in much the way you select multiple tests in the tree view window.
Further configuration options are available under the “Saving” tab.
You can configure which version the results are saved as (look here for a description of versions). By default, they will be saved as the version that you ran the dynamic GUI as. There is a drop-down list so that you can select other versions if you want to, which will generally include the current version and all versions more general than it. Sometimes you don't want results to be saved for particular versions; this can be configured via the “unsaveable_version” entry, which will cause these versions not to appear in the list.
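For example, if there were a version called “11” whose results should never be overwritten, a line like the following would keep it out of the drop-down list (the version name is hypothetical; as a list entry, “unsaveable_version” can be repeated on several lines for several versions):

    unsaveable_version:11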
You can also overwrite all the files, even if they were regarded as the same, via the option “replace successfully compared files also”: this is a way to re-generate the run-dependent text for a test.
Marking tests
Sometimes you have a lot of tests failing for different reasons. It can be helpful to be able to manually classify these tests as you discover what caused individual failures, so that you can see which tests still need to be checked. You can therefore "mark" tests with a particular text, which will cause them to be classified differently and be easy to hide as a group from the Status view. This is achieved by right-clicking on the test and selecting the relevant item from the popup menu.


Last updated: 28 February 2020