Documentation for 3.9.1
Guide to TextTest's User Interfaces
Static GUI, Dynamic GUI and Console
TextTest can be operated in two modes: interactive mode
which expects a user to be present and able to respond, and
batch mode which does not. Batch
mode provides the test results in the form of an email
and/or HTML report. This document aims to describe the various
interactive modes.
Interactive mode now consists primarily of the PyGTK GUIs:
the dynamic GUI, for monitoring tests as they run, and
the static GUI, for examining and changing test suites
and starting the dynamic GUI. The older console interface is
still present, though it is no longer being actively developed.
It is thus possible to operate with TextTest in any of three
ways: console only, dynamic GUI only (started from a command
prompt for each test run) or static and dynamic GUIs. These
possibilities have arisen in that order: TextTest was
traditionally a command-line UNIX script, indeed the very early
versions were actually Bourne shell scripts! It is generally
best to pick one of these approaches and stick to it: they are
more or less equivalent.
Newcomers to TextTest, unless opposed to GUIs in principle,
should generally use both the static and dynamic GUIs. This is
really how TextTest is meant to be used now (anyone on Windows
will probably find any other way of operating painful). It can
still be useful to know about the other interfaces in case of
problems: they can help in error-finding because they are
simpler.
The “default_interface” config file setting can
be used to choose your preferred way of running. It can take the
values “static_gui”, “dynamic_gui” or
“console”, and defaults to the first of these. Any
interface can also be chosen on the command line, via the
options -gx, -g or -con respectively.
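For example, to make the dynamic GUI your preferred interface,
your config file could contain:
default_interface:dynamic_gui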
The dynamic GUI is selected on the command line via the “-g”
option. Alternatively, it is started by clicking “Run”
in the static GUI toolbar.
The initial left window is a tree view of all the tests that
are being run. These are colour-coded: white tests have not
started yet, yellow tests are currently in progress, green tests
are reckoned to have succeeded while red tests are reckoned to
have failed. By clicking the rows in this view, you can
examine the details of what happened to a particular test.
When whole test suites become complete and all tests in them
have succeeded, the dynamic GUI will automatically collapse the
tree view for that suite and simply identify that line by the
number of tests that were successful. This helps the user
see which tests need their attention. If this behaviour is
undesirable for any reason, it can be disabled by setting the
config file value “auto_collapse_successful” to 0.
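For example, to keep completed suites expanded:
auto_collapse_successful:0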
The initial right window is a summary of how many tests are
in a given state and is continually updated as they change. When
they complete, failing tests are sorted into categories of
which files differed, underneath the “Failed” category. Note
that as several files can differ in the same test, these counts
don't necessarily add up to the total number of failed tests.
It also doubles as a way of selecting and hiding whole
categories of tests at once. Clicking the lines will select all
such tests in the test tree view. Unchecking the check boxes on
the right will cause all tests in that category to be hidden in
the test tree view on the left. If you uncheck one of the
“<file> different” lines, note that a test will only be
hidden if all of its differing files are marked as hidden: if
a test differs in more than one file, it will not be hidden
until you uncheck all the files where it was different.
Note that whether categories are visible by default can be
configured using the config file list entry “hide_test_category”
(a common usage is to hide all successful tests automatically).
To see how to refer to the various categories, use the keys
from, for example, the “test_colours” entry in the table of
config file settings.
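For example, to hide successful tests automatically (assuming
“success” is the relevant category key, as used in the
“test_colours” defaults):
[hide_test_category]
success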
When tests are selected via either of the above means, a new
“Test” tab is created on the right, containing the
details of what's going on in that test. Another tree view, this
time of the result files of that test, appears in the top right
window, while a textual summary of what happened for this test
can be seen in the “Text Info” tab below. The files
are also colour-coded, depending on whether TextTest thought
they were different (red) or not (green).
Double clicking on files from the test view will bring up
views of those files, using the external tools specified in the
config file. The relevant entries are "diff_program"
for viewing differences, "follow_program" for
following a running test and "view_program" for
viewing a static file. These default to “tkdiff”,
“tail -f” and “xemacs” respectively on
UNIX systems, “tkdiff”, “baretail” and
“notepad” on Windows. By default differences will be
shown if they are thought to exist (red files) otherwise the
file is simply viewed. To select other ways to view the files,
right-click and select a viewer from the popup menu.
Note that “view_program” is a “composite
dictionary” entry and can thus be configured per file
type, using just the stems as keys. It is thus easy to plug in
custom viewers if particular files produced by the system are
more conveniently viewed that way.
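For example, to view files with the hypothetical stem “mylog”
in a custom viewer (here the made-up program “myviewer”),
while keeping a standard viewer as the fallback via the
“default” key:
[view_program]
default:emacs
mylog:myviewer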
On the Text Info tab you will see a textual preview of all
the file differences, along with a summary of what happened to
the test. This textual preview is also used by the batch
mode email report. The tool used to generate the diff can be
configured via the config file entry “text_diff_program”
(it defaults to “diff”, which is also used
internally by “tkdiff”). Each file diff is
truncated: the number of lines included can be set via
“lines_of_text_difference”, which defaults to 30.
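For example, to use a different diff tool (the hypothetical
“mydiff”) and show longer previews:
text_diff_program:mydiff
lines_of_text_difference:60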
If several files differ they will be displayed in
alphabetical order. Sometimes this is not ideal as some files
give better information than others. In these cases you can
use the config file dictionary setting
“failure_display_priority” to ensure the most informative
file comes at the top. Lower numbers are displayed first.
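For example, to ensure a file with the stem “errors” is always
displayed before one with the stem “output” (illustrative
stems):
[failure_display_priority]
errors:1
output:2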
To protect from very large files being compared with diff,
you can specify a maximum file size for this, using the
“text_diff_program_max_file_size” config file entry.
(Otherwise difference programs can hang forever trying to
compare the files.)
When tests fail, you can examine the differences as above,
and sometimes you will decide that the change in behaviour is in
fact desirable. In this case, you should “Save” the
test, or group of tests. This operation will overwrite the
permanent “standard” files with the “temporary”
files produced by this run.
To achieve this, the dynamic GUI has a “Save”
button in the toolbar and a corresponding “Saving”
option tab at the bottom right. Select the tests you wish to
save from the left-hand test window by single-clicking, and
using Ctrl+left click to select further tests (press Ctrl+A to
select all tests). On pressing “Save” (or Ctrl+S),
all selected tests will be saved.
On saving a test, by default all files registered as
different will be saved. You can however save only some of the
files by selecting the files you wish to save from the file view
(under the Test tab), in much the same way as you select
multiple tests in the tree view window.
Further configuration options are available under the
“Saving” tab.
You can configure which version the results are saved as (see
the guide to files and
directories for a description of versions). By default, they
will be saved as the version with which you ran the dynamic GUI.
There is a drop-down list so that you can select other versions
if you want to, which will generally include the current version
and all versions more general than it. Sometimes you don't want
results to be saved for particular versions; this can be
configured via the “unsaveable_version” entry, which will
cause these versions not to appear in the list.
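For example, to prevent results being saved for the version
“sparc” (a sketch assuming list syntax, with an illustrative
version name):
[unsaveable_version]
sparc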
You can also overwrite all the files, even if they were
regarded as the same, via the option “replace successfully
compared files also”: this is a way to re-generate the
run-dependent text
for a test.
The static GUI is started by default unless you specified
otherwise in your config file.
The structure of the static GUI is similar to
that of the dynamic GUI, and tests can be viewed in much the
same way. On the left is a colour-coded tree view of the whole
test suite.
The files can be double-clicked in a similar way: in the static
GUI they will invariably be viewed with “view_program”
(xemacs/notepad by default). Note this setting can also be
configured per result type, as described above.
The static GUI can be used to create new tests or test
suites. By clicking a test suite, and filling in the forms
marked "Adding Suite" and "Adding Test" (and
pressing the corresponding button when complete), the test suite
can be extended without the need to create directories and edit
files by hand. Note that you can also edit the test suite file
from this view, and re-ordering of the tests performed this way
will show up in the GUI without needing to restart it.
In order to run tests, you first need to select some. You can
do this by single-clicking tests in the test window
on the left. Use Ctrl+left click to build up a multiple
selection one test at a time, Shift+left click to select
everything between what is currently selected and the line
clicked, and Ctrl+A to select everything. Alternatively, you can
select tests according to search criteria using the “Select”
button and “Selection” tab on the right (see below
for details of what can be done).
At the top right is a tab called “Running” which
will have three sub-tabs. The tabs “Basic” and
“Advanced” can be used to configure a multitude of
things about how the tests will be run. At the start the default
options should be sufficient. (Note that the tabs are
essentially a graphical representation of all the command line
options that can be given to the dynamic GUI.)
Once you are happy with these, press “Run” (on
the toolbar or in one of the above tabs). This will start the
dynamic GUI on the selected tests.
On the right there is a “Selection” tab which has
a sub-tab “Select Tests” (it should be visible when
you start TextTest). This provides a simple search mechanism for
finding tests, useful when the test suite grows too large to
comfortably select the tests you want to run via the
GUI alone. When the “Select” button is pressed, all
tests will be selected which fulfil all of the criteria
specified by the text boxes in the “Select” tab. It
follows that if no filters are provided and “Select”
pressed, all tests will be selected.
There are four “modes” for selection represented
by radio buttons at the bottom. “Discard”, the
default, will ignore what is already selected. “Extend”
will keep the current selection and add to it. “Refine”
will match only tests that were already selected and match the
search criteria, while “Exclude” will match only
tests that were not already selected.
Note that the number of selected tests (and the total number
of tests) is displayed in the column header of the test view at
all times. The various selection criteria can also be tried out
from the command line, using the plugin
script “default.CountTest”.
The simplest filters focus on matching the names of tests and
the test suites they are in. The “Test Names Containing”
field (-t on the command line) will select all test cases which
have the indicated text as a substring of their names. If
instead a standard regular expression is used, all tests whose
name matches that expression will be selected.
In a similar way, the “Suite Names Containing”
field (-ts on the command line) provides a way to select entire
test suites based on a similar substring/regular expression
search. Note that the string matched is the whole path
of the test suite: test suites can contain other test suites.
Sometimes test suites contain different tests depending on
the version identifier. In
this case, fill in the “Tests for Version” filter to
select the tests applicable to a particular version. This is
filled automatically if the static GUI is itself started with a
version identifier. It is not generally useful to do this on the
command line - simply running with a version will have the same
effect.
You can also search for certain logged text in the result
files. This is done via the “Log files containing”
filter (-grep on the command line). By default, this will search
in the file identified by the “log_file” config file
entry. If the “Log file to search” filter is also
provided (-grepfile on the command line), that file will be
searched instead. This allows selecting all tests that exercise
a certain part of the system's functionality, for example.
If system resource usage testing
is enabled for CPU time, you can select tests based on how
much CPU time they are expected to consume. This is done via the
“Execution Time” filter (-r on the command line). A
single number will be interpreted as a maximum CPU time to
select. Two comma-separated numbers will be interpreted as a
minimum and a maximum. All times are in minutes. In addition,
you can use the format mm:ss, rather than needing to convert
times into a fraction of a minute, and can also use the
operators <, >, <= and >= to specify ranges of times
to include.
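For example (illustrative values): “3” selects tests expected
to take at most three minutes, “0:30,2:30” selects tests
expected to take between thirty seconds and two and a half
minutes, and “>=1:00” selects tests expected to take at least
one minute.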
Sometimes it is useful to save such a subselection of
the tests for later reuse. To do this, select “Save
Selection” from the “File” menu, which brings
up a file chooser dialog so you can choose a file to save in.
Note it has two different options, allowing you to specify that
either the exact tests currently selected are to be saved, or
the criteria which were used to select them. Either way, a new
“filter file” is created, which can be selected
again via “Load Selection” in the same menu, and
also via the “Tests listed in file” tab under
“Selection”.
The differences between the two variants become apparent when
somebody tries to load this file. Loading an explicit list of
tests will probably be faster than re-selecting them according
to some criteria, but if new tests have been added since the
selection was saved, it will naturally not pick them up.
By default, the static GUI will save these files in a directory
called “filter_files” under the directory where your
config file is. The dynamic GUI will save them in a temporary
location which is removed when the static GUI is closed. These
locations are used to generate the drop-down list for the “Tests
listed in file” option, and are also those searched if -f
is provided on the command line. These locations can be extended
or replaced by defining the config file entry
“test_list_files_directory”.
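For example, to add a shared location (a sketch assuming list
syntax; “shared_filters” is a hypothetical directory name):
[test_list_files_directory]
shared_filters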
The order of the test suites is primarily defined by the
testsuite.<app> files, unless automatic sorting is enabled
(see the guide
to files and directories). However, there are some quick ways
to sort the tests after the fact. By simply clicking on the
column header they can be sorted “transiently” (i.e.
nothing is saved in any files and the sort is gone if you
restart). You can also sort more permanently by selecting the
various options from the Edit menu, which also contains various
options for manual sorting by moving the selected tests up and
down. These options are also available via the “Re-order”
submenu in the popup menu for the test window.
Note that by default, sorting a test suite does so
recursively (i.e. all contained test suites will also be
sorted). To disable this behaviour, set the config file entry
“sort_test_suites_recursively” to 0.
By default the whole test suite will be expanded on starting
up the static GUI. This can sometimes be awkward, especially for
test writers who are only interested in a small part of the test
suite. For them, it is best that everything starts collapsed so
they can just view the parts that matter to them.
To this end there is a setting “static_collapse_suites”.
This should be set to 1 to disable the automatic expansion of
the test suite: instead, only the first level of
suites/tests will be expanded.
There are many things which can be configured about the
TextTest GUIs, some of which are mostly a matter of personal
taste. To this end, it is possible to have a personal config
file where you place any entries that are supported by the
config files (although it is advisable to stick to GUI
appearance). This file should be called “config” and
placed in a subdirectory of your home directory called
“.texttest”. You can also place it anywhere at all
and identify the directory with the environment variable
$TEXTTEST_PERSONAL_CONFIG. (“Home directories” on
Windows are formed either from the $HOME variable or from a
combination of the variables $HOMEDRIVE and $HOMEPATH.)
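For example, a minimal personal config file placed at
“.texttest/config” under your home directory might contain
settings described elsewhere in this document:
static_collapse_suites:1
sort_test_suites_recursively:0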
You can edit and view these files by going to the “Config”
tab in the static GUI, where you will also see the config files
for the applications you're currently running on.
The look and feel of the GUI widgets themselves can be
configured by providing a GTK configuration file. This file
should be called “.gtkrc-2.0” and should be placed
in the same directory as the above “config” file, if
you want it to only affect TextTest, or in your home directory
if you want it to affect all GTK GUIs you might run. The syntax
of these files is described, for example, at the PyGTK
homepage.
You can also configure the contents of the toolbar via XML
files placed in this directory. Such an XML file should be named
to indicate to TextTest when it should kick in. For example:
“default_gui.xml”
(affects every time you start TextTest)
“default_dynamic_gui.xml”
(affects the dynamic GUI only)
“queuesystem_static_gui.xml” (affects the
static GUI only when running the queuesystem configuration)
The first element indicates the configuration module run (and
any parent modules). The second should be "static",
"dynamic" or absent. The file name should always end
in _gui.xml.
As for the contents, the easiest thing is to
look at the files in the source/layout directory and
pattern-match. Note you only need to add extra XML sections, you
don't need to copy these files, though they give you the names
of all possible elements. For an example which extends the
standard toolbar, look in the self-tests under
GUI/Appearance/UserDefinedGUIDescription/personaldir.
TextTest comes by default with four bars, all of which are
optional: a menubar and toolbar at the top, and a shortcut bar
and status bar at the bottom. The menubar and toolbar are fairly
standard and generally provide access to the same functionality.
The shortcut bar at the bottom allows you to create GUI
shortcuts for sequences of clicks that you do regularly. This is
the GUI shortcut
functionality as provided by PyUseCase,
which TextTest itself relies on, primarily for its own testing,
but also to allow for this customisation possibility.
The status bar at the very bottom tries to indicate what
TextTest is doing right now or has just done. The “throbber”
at the far right indicates whether it is doing something:
sometimes searching a large test suite for example may take a
little time.
All of these can be hidden by default using the
“hide_gui_element” entry; see the table
of config file settings for the key format. If you don't
hide the menubar via this mechanism, you can also show and hide
them via the “View” menu.
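For example, to hide the toolbar by default (a sketch: the key
“toolbar” is assumed here, so consult the table of config file
settings for the exact key format):
[hide_gui_element]
toolbar:1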
There are plenty of configuration options in various tabs
around the GUI, and all are identified with a label next to
them. This label is used to identify them in the config file:
take any label, make it lower case and replace all spaces
with underscores (“_”), and you have the key that
identifies the control in the config file.
Default values are changed by using the config entry
“gui_entry_overrides”. For text boxes, simply
provide the key as given above with the text you want as the
default. For check boxes provide “0” or “1”
as appropriate. For radio buttons form the key by concatenating
the label with the label for the specific button you want
selected, and providing “1” also.
In a similar way you can configure what options are presented
in drop-down lists to the user, in the case of the text boxes.
This is done via the “gui_entry_options” config file
entry, which is keyed in the same way. For example:
[gui_entry_overrides]
show_differences_where_present:0
current_selection_refine:1
run_this_version:sparc
[gui_entry_options]
run_this_version:linux
This will cause the dynamic GUI saving tab not to automatically
check the box for "Show differences where present", as
in the example above. It will also cause the static GUI to
include the version "linux" in a drop-down list for
the "Run this version" text box and to set it to
“sparc” by default. And finally, the radio button on
the “Select Tests” tab will select “Refine”
instead of the default “Discard”.
The colours for the test tree view (left window) and the file
view (top right window under Test tab) can be configured via the
GUI dictionary entries “test_colours” and
“file_colours” respectively. These are keyed with
particular pre-defined names for the different test states: to
see what they are, look at the default values in the table
of config file settings. The values should be text strings
as recognised by RGB files.
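For example, to change the colours used for succeeded and
failed tests (assuming the state names “success” and
“failure” from the defaults):
[test_colours]
success:DarkGreen
failure:IndianRed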
The toolbar actions generally have keyboard accelerators,
whose values can be seen from the menus which also contain them.
These can be configured via the “gui_accelerators”
dictionary entry. The keys in this dictionary should correspond
to the labels on the relevant buttons, and the values should be
for example “<control><alt>r” or “F4”.
If in doubt, consult the format of the default ones in the
table of config file settings.
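For example, to rebind the “Save” and “Quit” actions (a
sketch, forming the keys from the button labels as described
earlier):
[gui_accelerators]
save:<control>s
quit:<control>q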
If you find the TextTest window sizes to be inconvenient you
can also configure this. There is a config file dictionary entry
“window_size”. This has various keys, which can be
prefixed by “static_” or “dynamic_” to
make them specific to the particular GUIs if desired.
“maximize”, if set to 1, will maximise the window
on startup.
“height_pixels” and “width_pixels”
give the window an absolute size at startup (not recommended
outside personal files!)
“height_screen” and “width_screen”
give the window a size as a fraction (not percentage!) of the
size of your screen.
“horizontal_separator_position” and
“vertical_separator_position” allow a default
configuration of where the pane separators start out, also as a
fraction of screen size.
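For example, the following would maximise the dynamic GUI on
startup and give the static GUI a window 90% of the screen
height and 60% of its width (illustrative values):
[window_size]
dynamic_maximize:1
static_height_screen:0.9
static_width_screen:0.6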
Naturally, when you press Quit in the GUI it will try to
clean up everything it has created in terms of both files and
processes. For what happens to the temporary files created, see
the section on the temporary
directory.
As we've seen above, it is quite possible to start many
different viewers and editors from TextTest and many different
dynamic GUI runs from the static GUI. If all of these had to be
closed by hand it would probably be cumbersome – so the
default operation of TextTest is to find all of these processes
and kill them. On UNIX, they will be sent SIGINT, SIGTERM and
SIGKILL with a pause in between. On Windows, they will be
terminated with “pskill” which tends to be fairly
final. (If you don't have administrator rights on Windows they
will all be leaked because pskill requires such rights!)
It can be useful to configure a questioning dialog such that
TextTest will ask you before killing such processes. This is the
purpose of the “query_kill_processes” config file
entry. This is a composite dictionary whose keys are “static”,
“dynamic” or “default” (this last for
both static and dynamic GUIs) and the values are patterns of
“process names”: i.e. names of editors, viewers, the
dynamic GUI etc. For example:
[query_kill_processes]
default:.*emacs
static:tkdiff
dynamic:texttest.*
The console interface is started if the “-con” option is
provided, or if “default_interface” is set to “console”.
It is simpler and much more restricted than the GUIs.
Essentially, it will run each test in turn, and if it fails,
will ask whether you wish to view the differences, save it, or
continue. Viewing the differences will write a (truncated) text
version of all file differences to the standard output, and will
start the graphical difference viewer on the file specified by
the config file entry “log_file” (the standard
output of the SUT, by default). Saving works much as it does
from the dynamic GUI, except that there is no possibility to
save single files or multiple tests at the same time (but see
below).
Continuing will do nothing and leave everything in place.
There are a couple of command-line options relevant to the
console interface only, both related to saving. Specifying “-o”
will cause all files judged different to be overwritten (the
equivalent of the GUI “Save” button applied to all
tests, except you have to decide before the run starts). The
“-n” option will cause all files regarded as the
same to be updated: a way of updating the run
dependent text contained in them. Specifying both these
options will cause all files to be updated, regardless of what
happens.