Documentation for 4.3
Guide to the TextTest Static GUI
Examining and maintaining the test suites
The static GUI is started by default unless you have specified
otherwise in your config file. If you have, it can still be started explicitly via the command-line option "-gx".
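For example, assuming a "texttest" executable on your path and an application
whose files use the (illustrative) extension "myapp", the static GUI could be
started explicitly with something like:

    texttest -gx -a myapp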
The initial left window is a tree view of all the tests in the test suite.
This view can be manipulated either with the mouse or via the Selection tab on the right: many actions apply
to the currently selected tests.
If you select a test, the right window shows the files
for that test and its textual description. The files can be
double-clicked to view them in a viewer, which is controlled by the
“view_program” entry (by default, emacs on UNIX and notepad on Windows).
Note that this setting can be configured per file "type", as described here.
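For example, to use a different viewer for one particular file type, a config
file entry along the following lines could be used (the dictionary-style layout
and the "firefox" viewer here are illustrative; see the per-type documentation
referenced above for the exact keys available):

    [view_program]
    default:emacs
    html:firefox
    [end]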
In addition, you can use this view to create new files of any of the three types:
use "Create File" from the popup menu and select the type of file you wish to create.
If the static GUI is started with no pre-existing test applications, or if
the -new flag is given on the command line, you will initially be given a dialog to define
a new application for TextTest to test. You can also do this later on by selecting "Add Application"
from the Edit menu.
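For instance, a sketch of starting the static GUI and going straight to the
new-application dialog, again assuming a "texttest" executable on the path:

    texttest -gx -new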
The static GUI can also be used to create new tests or test
suites. By right-clicking a test suite and selecting "Add Test" or "Add Suite" from
the popup menu, you get a dialog that can be filled in and will result in a new test
or suite under the chosen one.
In order to run tests, you first need to select some. You can
do this by single-clicking tests in the test window
on the left. Use Ctrl + left click to build up a multiple
selection one test at a time, Shift + left click to select
everything between the current selection and the line
clicked, and Ctrl + A to select everything. Alternatively, you can
select tests according to search criteria using the “Select”
button and “Selection” tab on the right (see below
for details of what can be done).
At the top right is a tab called “Running”, which
has three sub-tabs. The tabs “Basic” and
“Advanced” can be used to configure a multitude of
things about how the tests will be run. To start with, the default
options should be sufficient. (Note that these tabs are
essentially a graphical representation of all the command-line
options that can be given to the dynamic GUI. See the table which lists them
all and describes what they do.)
Once you are happy with these, press “Run” (on
the toolbar or in one of the above tabs). This will start the
dynamic GUI on the selected tests.
On the right there is a “Selection” tab (it should be
visible when you start TextTest). This provides a simple search mechanism for
finding tests, useful when the test suite grows too large to
comfortably select the tests you want to run via the
test tree view alone. When the “Select” button is pressed, all
tests that fulfil all of the criteria
specified by the text boxes in the “Selection” tab will be selected. It
follows that if no filters are provided and “Select”
is pressed, all tests will be selected.
Using the same criteria on the same tab, it is also
possible to filter the tests, so that instead of selecting tests that match
the criteria, TextTest will hide those that do not match the criteria. This
is often useful when you want to work with a subset of the test suite for some time.
In addition, the View menu contains actions to turn a selection into a filtering
by hiding all tests that aren't currently selected.
The respective frames for selection and filtering
contain various “modes” represented by radio buttons. These operate
independently of each other. “Discard”, the default, will ignore the current
selection or filtering. “Extend”
will keep the current selection or filtering and add to it. “Refine”
will match only tests that were already selected or shown and match the
search criteria, while “Exclude” will match only
tests that were not already selected (and isn't currently implemented for filtering).
Note that the number of selected tests (and the total number
of tests, and the number of hidden tests) is displayed in the column header of the test view at
all times. The various selection criteria can also be tried out
from the command line, using the plugin
script “default.CountTest”.
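As an illustrative sketch, assuming the usual -s option for running plugin
scripts, the following would count all tests whose names contain the
(hypothetical) string "payment" instead of running them:

    texttest -s default.CountTest -t payment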
The simplest filters focus on matching the names of tests and
the test suites they are in. The “Test Names Containing”
field (-t on the command line) will select all test cases which
have the indicated text as a substring of their names. If
instead a standard regular expression is used, all tests whose
name matches that expression will be selected.
In a similar way, the “Test Paths Containing”
field (-ts on the command line) provides a way to select tests based on their
full path relative to the root, i.e. including all parent suite names. The components
may be separated either with "/" or " ". As above, substring and regular expression
matching may be used. This replaces the previous "Test Suites Containing" field and can
of course be used for that purpose too, as a test suite's full path is a substring of
the full paths of the tests it contains.
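For example, the following illustrative command lines select tests whose names
contain "payment" and tests somewhere under a suite path containing
"network/failover" respectively (both names are hypothetical):

    texttest -gx -t payment
    texttest -gx -ts network/failover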
Likewise, it is possible to match on the name of the application,
which can be useful if several different applications are loaded into the GUI at the same time,
or on the description of the test, using the same substring/regular expression matching
described above. This can also be done on the command line via "-a" and "-desc" respectively.
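A similar sketch for these two filters, with purely illustrative values:

    texttest -gx -a myapp -desc "known bug"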
Sometimes test suites contain different tests depending on
the version identifier. In
this case, fill in the “Tests for Version” filter to
select the tests applicable to a particular version. This is
filled automatically if the static GUI is itself started with a
version identifier. It is not generally useful to do this on the
command line - simply running with a version will have the same
effect.
You can also search for certain logged text in the result
files. This is done via the “Test-files containing”
filter (-grep on the command line). By default, this will search
in the file identified by the “log_file” config file
entry. If the “Test-file to search” filter is also
provided (-grepfile on the command line), that file will be
searched instead. This allows selecting all tests that exercise
a certain part of the system's functionality, for example. Regular expressions
may be used in the text string to search for, while UNIX-style file expansions
may be used for the file name (note that these are different syntaxes!).
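For example, the following hypothetical commands select all tests whose default
log file contains the text "timeout", and all tests where that text appears in
files matching the expansion "output*" instead:

    texttest -gx -grep timeout
    texttest -gx -grep timeout -grepfile 'output*'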
If system resource usage testing
is enabled, you can select tests based on how
much time they are expected to consume.
By default this will use the total CPU time ("performance" files) but this can be configured via the "default_performance_stem" setting.
Selecting based on time is done via the “Execution Time” filter (-r on the command line). A
single number will be interpreted as a maximum time to
select. Two comma-separated numbers will be interpreted as a
minimum and a maximum. All times are in minutes. In addition,
you can use the format mm:ss, rather than needing to convert
times into a fraction of a minute, and can also use the
operators <, >, <= and >= to specify ranges of times
to include.
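A few illustrative uses of this filter on the command line, following the
formats described above:

    texttest -gx -r 3            (at most 3 minutes)
    texttest -gx -r 0:30,2:00    (between 30 seconds and 2 minutes)
    texttest -gx -r '>=10'       (10 minutes or more)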
Sometimes it may be useful to save such a subselection of
the tests so that it can be reused later. To do this, select “Save
Selection” from the “File” menu, which brings
up a file chooser dialog so you can choose a file to save in.
Note that the dialog has two different options, allowing you to specify that
either the exact tests currently selected are to be saved, or
the criteria which were used to select them. Either way, a new
“filter file” is created, which can be selected
again via “Load Selection” in the same menu, and
also via the “Tests listed in file” tab under
“Selection”.
The difference between the two variants becomes apparent when
somebody tries to load this file. Loading an explicit list of
tests will probably be faster than re-selecting them according
to some criteria, but if new tests have been added since the selection
was saved, it will naturally not pick them up.
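As a sketch, a filter file saved with the criteria simply records the
corresponding selection options, so its contents might look something like this
(the exact options shown are hypothetical):

    -t payment
    -r 0:30,2:00

whereas a file saved with the exact tests lists the selected tests themselves.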
By default, the static GUI will save these files in a directory
called “filter_files” under the directory where your
config file is. The dynamic GUI will save them in a temporary
location which is removed when the static GUI is closed. These
locations are used to generate the shortcut list for locations to
search when doing "Load Selection" or "Save Selection" in the GUI,
or filling in the “Tests listed in file” option, and are
also those searched if -f is provided on the command line. These
locations can be extended or replaced by defining the config file entry
“filter_file_directory”.
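For example, to make a shared directory available in addition to the default
location, entries along the following lines could be added to the config file
(the path is illustrative, and the exact list behaviour should be checked
against the config file documentation):

    filter_file_directory:filter_files
    filter_file_directory:/shared/texttest/filter_files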
These files may refer to each other, though you will need to create them
by hand for this to happen. This can be used to combine stored selections
into other selections, using the command-line options -funion, -fintersect and -finverse
to create unions, intersections and inverses of selections specified in pre-existing filter
files. These options are generally most useful inside other filter files; for example,
TextTest's self-tests contain a filter file called "gui" which contains the string
"--funion static_gui,dynamic_gui".
It is also possible to define which tests
to run by default based on such filter files. The config file setting
"default_filter_file" will make sure that only the tests that match
the criteria in the given file (found via the path mechanism described above)
are included in the run. This is primarily useful for defining a
version of the test suite. A closely related concept is also available in
batch mode via the setting "batch_filter_file".
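A minimal sketch of such an entry, assuming a filter file called "smoke_tests"
has been saved as described above:

    default_filter_file:smoke_tests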
It's fairly common that the files in a test suite
get changed outside of the static GUI, for example if they are version-controlled
and you update them (which cannot be done from the GUI - yet...). As they
are all plain text it is also easy to edit them in a text editor independently
of the GUI. There is therefore a "refresh" action in the Actions menu, which
will re-read the whole test suite, including the config file settings, from the files.
It doesn't take account of which tests are selected so it isn't possible to refresh
just individual tests right now.
The order of the test suites is primarily defined by the
testsuite.<app> files, unless automatic sorting is enabled
(see the guide
to files and directories). However, there are some quick ways
to sort the tests after the fact. By simply clicking on the
column header they can be sorted “transiently” (i.e.
nothing is saved in any files and the sort is gone if you
restart). You can also sort more permanently by selecting the
relevant options from the Edit menu, which also contains
options for manual sorting by moving the selected tests up and
down. These options are also available via the “Re-order”
submenu in the popup menu for the test window.
Note that by default, sorting a test suite is done
recursively (i.e. all contained test suites will also be
sorted). To disable this behaviour, set the config file entry
“sort_test_suites_recursively” to 0.
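For example, in the config file:

    sort_test_suites_recursively:0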
Note that you can also simply edit the testsuite file via the file view, and
any re-ordering of the tests performed this way
will show up in the GUI without needing to restart it.
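For illustration, a testsuite.<app> file is just a plain-text list of the
contained test and suite names, so re-ordering means re-ordering its lines,
for example (names hypothetical):

    login
    payment
    failover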
Anything that is written on standard error by a dynamic GUI run will
be placed in a popup dialog in the static GUI when the dynamic GUI is closed down. Such messages usually
indicate a bug in TextTest, but they can also indicate environmental problems, for example
GTK issues or shell startup problems. These can usually be fixed, but occasionally you
end up with environmental issues that cannot easily be resolved.
Such popups can therefore be suppressed if desired. Simply
add a config file entry "suppress_stderr_text" followed by a substring or regular expression
that matches the line that is being repeatedly printed. From version 3.16 onwards you can in fact write
anything here that you can use for "run_dependent_text", which for example makes it easier to filter several
lines at once. All lines that match these entries
will be filtered out before displaying a dialog, and of course if no lines are left, no dialog
will be displayed.
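A minimal sketch of such an entry, suppressing a hypothetical warning line that
keeps appearing:

    suppress_stderr_text:Gtk-WARNING.*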