
Automatic Failure Interpretation in TextTest
Introduction
There are situations when it saves a fair bit of time to inform TextTest about what your system is likely to log and what it means. TextTest can then find such patterns automatically and, instead of just saying “differences in stdout”, give you a much more succinct description of what went wrong. It can also automatically try to collect information about what happened from the system: for example, core files that have been dumped on UNIX. This document aims to be a guide to how TextTest can interpret failures at a higher level than file differences.
Collecting Core Files (UNIX)
This will just happen. Each test is run with a unique temporary directory as the current working directory. This means that TextTest will pick up any core file written there and try to extract the stack trace from it. If it succeeds, it will report the test as having “CRASHED” and display the stack trace in the “Text Info” window.
Since version 3.7, it does this by using an external script, “interpretcore.py”, which outputs stack traces given core files. It has been tested on quite a few flavours of UNIX (HPUX-11, PowerPC, Sparc Solaris, and Linux RHEL3 and RHEL4) and seems to be quite portable (much more so than the old code that was part of TextTest until version 3.6). This script is plugged in by default via the default value of the “collate_script” entry on UNIX.
It is provided as standard with TextTest, but can also be used independently of it. It works by using the standard debugger 'gdb'. If it fails to find the stack trace for any reason, the test will still be reported as “CRASHED”, but the reason for failure will be given in the “Text Info” window instead.
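To illustrate the idea, here is a minimal Python sketch of how a stack trace can be pulled out of a core file using gdb in batch mode. This is a simplified illustration of the technique, not the actual interpretcore.py code, and it assumes the binary that produced the core is already known:

import subprocess

def extract_stack_trace(binary, core_file):
    # Run gdb non-interactively and request a backtrace ("bt") from the core
    result = subprocess.run(["gdb", "-batch", "-ex", "bt", binary, core_file],
                            capture_output=True, text=True)
    return result.stdout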
Known Bugs : associating logged text patterns with a summary description
It can be useful to associate a succinct problem description with the appearance of certain logged text. For example, your application may well report “Internal Error!!!” when something bad happens. In this case you could tell TextTest about this so it can describe such a test failure as an Internal Error and save you the trouble of reading the log file.
Another common usage, as the title hints, is for known bugs in the system under test. This may be needed because the bug appears only intermittently, or because time has not yet been found to fix it even though its presence is known. In either case, telling TextTest about it can prevent lots of people examining lots of failed tests in order to discover the same issue again.
The easiest way to do this is by using the dialog in the GUI. Right-click the test(s) or test suite where the bug should apply and select "Enter Failure Information". This can (since TextTest 3.23) be done in either the dynamic or the static GUI: the dynamic GUI can also provide immediate feedback if the selected test matches the information given.
We can then fill in the dialog as shown:
[Screenshot: the "Enter Failure Information" dialog filled in]
Here we imagine that when our application logs “after <n> seconds”, this implies it has gone into an inappropriate delay. So we fill in what text it should look for and which file it should look in, and then in the bottom section provide the descriptive information: a full description to display and a brief summary. This creates a special file knownbugs.<app> in the test or suite's directory, which has a format that is easy to edit in a normal editor.
If we then run the test and it produces the indicated text, we get a nice summary as well as the usual complete file differences. Note that it has used our “brief description” given above in the Details column of the test tree view, while the full description appears in the Text Info window at the bottom right. (Here I've run two copies of the test to show what tests failing with known bugs look like in the test tree when not selected.)
[Screenshot: the dynamic GUI showing two tests failing with the known bug]
Ordinarily, you will search for some text that will be produced in a certain file, as given by the "file to search in" entry. This will search the unfiltered version of the file (not the diff produced in the Text Info window). This can be inverted via the radio buttons in the top section, so that the bug is triggered when the text is not present. It can also be converted to a check for an exact match, which is mostly useful for small diffs in the "full difference report" (see below) where things have been removed and you don't want to trigger the bug if there are other differences, like the test not having even got to that point.
In addition to searching in specific files, you can also select "brief text/details" or "full difference report" for the matching. This will look in (respectively) the text that appears in the Details column or the text in the bottom right window, i.e. the actual difference report. Note however that it isn't generally possible to match on the first line of this text, which is an additional explanatory line added by the GUI and not part of the state of the test. This line will be different in the case of running in batch mode, which is why it isn't a good idea for TextTest to use it for matching.
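To illustrate the inverted form, a hypothetical knownbugs section (the file format is described in full below; all values here are made up) that triggers when an expected message is absent might look like this:

[Reported by geoff at 01Jun09:00:00]
search_string:Server started OK
search_file:output
trigger_on_absence:1
full_description:The server never reported a successful startup
brief_description:Server failed to start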
There is now also a control to trigger reruns when known bugs are found, which is useful for intermittent problems. In that case the test will simply be restarted, up to the maximum number of times you specify. When running with a grid engine, the tests will not necessarily be rerun on the same machine, so this is also a way to circumvent temporary hardware issues. The check box beneath it, "Skip known bug reporting", allows this mechanism to be used only to trigger reruns: in that case the test will not be reported any differently if the reruns continue to produce the same result.
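In the knownbugs file this corresponds to the rerun_count entry; for example, a section for an intermittent failure (all values made up) might contain:

[Reported by geoff at 01Jun09:05:00]
search_string:Connection reset by peer
search_file:errors
rerun_count:3
brief_description:Intermittent network glitch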
When the tests are run, TextTest will then find all such "knownbugs" files, using its usual mechanism for finding and prioritising files in the hierarchy. All information from all such files will be used: the files do not overwrite each other, as versioned files did up to version 3.10.
Searching for previously reported bugs
Sometimes you end up in a situation where a test fails with a previously identified bug that has been reported on some other test. TextTest can help you find such bugs, via the dynamic GUI's "Find Failure Information" option. This will give you a list of all reported bugs in the test suite which match the selected test(s) and you can choose which one you want to apply, if any.
Extracting information from bug systems (particularly Bugzilla, Jira or Trac)
If you have a bug-tracking system with an API of some sort, you can probably get it to talk to TextTest without very much effort. Instead of providing textual descriptions you can then just provide the bug ID from the bug system, and TextTest will extract the (current) information for you. It will try to determine whether the bug has been fixed (closed in the bug system) and if so the failure will be reported as “internal error” rather than “known bug”, as the bug text would not be expected to keep occurring if the bug had really been closed.
To set this up, you need to give TextTest the URL of your bug system, using the config file entry "bug_system_location". The key should be the name of the bug system, currently one of "bugzilla", "bugzillav2", "trac" or "jira". For example, if you use bugzilla version 3 or later you might add this to your config file:
[bug_system_location]
bugzilla:http://www.mysite.com/bugzilla
[end]
Extracting information from Bugzilla
If you use Bugzilla to track your bugs, there are two plugins already written and bundled with TextTest. For bugzilla version 3 and onwards there is a plugin called "bugzilla" that calls bugzilla's native web service API to extract the relevant information. This should work against any bugzilla installation out of the box.
There is also a plugin called "bugzillav2" that can interface with older versions of bugzilla that don't have a web service. For this to work, however, you also need to install the “bugcli” program, which is essentially an additional open-source CGI script that runs on the bugzilla server. Note that bugcli is no longer supported by anyone: if you find problems you'll just need to upgrade Bugzilla.
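The configuration is analogous to the bugzilla example above, just with the "bugzillav2" key (the server address here is made up):

[bug_system_location]
bugzillav2:http://www.mysite.com/bugzilla
[end]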
Extracting information from Atlassian's Jira
If you use Jira to track your bugs, help is also at hand. Simply set the location of your Jira installation as below. Note that in some circumstances it has been found necessary to include the port number (8443 is Jira's default) to make this work. Jira's web service also requires a login before it will release information, so you also need to provide the settings "bug_system_username" and "bug_system_password" to TextTest in a similar way. A sample config file might therefore look like this:
[bug_system_location]
jira:https://jira.blah.com:8443

[bug_system_username]
jira:texttest

[bug_system_password]
jira:the_password
[end]
Settings like these are often useful to put in a site-specific config file.
Extracting information from Trac
If you use Trac to track your bugs, you can do something similar. Simply set the location of your Trac installation as below.
[bug_system_location]
trac:http://trac.edgewall.org/demo-0.11
[end]
Extracting information from other bug systems
If you use some other bug tracker with an API, it should be fairly easy to copy the "bugzilla.py" or "jira.py" module from the TextTest source (under default/knownbugs) and change it to implement the 'findBugInfo' method as appropriate for your bug system. If you then provide the name of your module in the bug system field when reporting the bug, TextTest will load the module and extract the relevant information. Naturally, it's appreciated if you submit your changes back to the project if you do this.
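As a sketch, a plugin module for a hypothetical JSON-over-HTTP tracker might look like the following. The function name 'findBugInfo' comes from the existing plugins, but the argument list and return values shown here are assumptions made for illustration: check the bundled bugzilla.py or jira.py for the exact interface expected.

# mytracker.py - hypothetical plugin for a custom bug system
import json
import urllib.request

def findBugInfo(bugId, location, username, password):
    # NOTE: the argument list and return tuple here are assumptions;
    # consult the bundled plugins under default/knownbugs for the real contract
    try:
        with urllib.request.urlopen(location + "/api/bugs/" + bugId) as response:
            info = json.load(response)
    except Exception as e:
        return "NONEXISTENT", "Could not contact bug system: " + str(e), False
    status = info.get("status", "unknown")
    text = "Bug " + bugId + " (" + status + "): " + info.get("summary", "")
    return status, text, status in ("closed", "resolved")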
Knownbugs file format
The above dialog will generate a knownbugs.<app> file from the information entered into the dialog. When updating tests it can sometimes be useful to just change these files directly rather than using the dialog again. To aid in this, we describe here the knownbugs file format.
The file format is, like the config file, broadly .ini style. Each reported bug gains a section of its own, with an autogenerated title indicating the user that reported the bug and the time it was reported. For example, the screenshot above would produce this file:
[Reported by geoff at 02May14:55:55]
search_string:after [0-9]* seconds
search_file:output
full_description:Known problem with delay
brief_description:Delay bug
The following table details everything that can be done in the "Enter Failure Information" dialog, and the entry in the reported bug section in the knownbugs file that would correspond to it. The order here corresponds to the order of controls in the dialog.

Failure Information dialog action -> Knownbugs file line

Enter <text> in "Text or regexp to match" field -> search_string:<text>
Uncheck box "Enable regular expressions" -> use_regexp:0
Check radio button "NOT present" -> trigger_on_absence:1
Check radio button "Exactly as given" -> trigger_on_identical:1
Check radio button "Brief text/details" -> search_file:brief_text
Check radio button "Full difference report" -> search_file:free_text
Select <file_stem> from "File to search in" combo box -> search_file:<file_stem>
Check box "Trigger even if other files differ" -> ignore_other_errors:1
Check box "Trigger even if file to search would otherwise compare as equal" -> trigger_on_success:1
Enter <text> in "Version to report for" field -> version:<text>
Enter <text> in "Trigger only when run on machines" field -> execution_hosts:<text>
Select <system> from "Extract info from bug_system" combo box -> bug_system:<system>
Enter <id> in "Bug ID" field -> bug_id:<id>
Enter <text> in "Full description" field -> full_description:<text>
Enter <text> in "Few-word summary" field -> brief_description:<text>
Check box "Report as 'internal error' rather than 'known bug'" -> internal_error:1
Choose <number> for "Number of times to try to rerun the test" field -> rerun_count:<number>
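Putting some of these together, a hand-written section that ties a failure to an issue in a hypothetical Jira installation (the issue key and details are made up) could look like this:

[Reported by geoff at 03May11:20:10]
search_string:Internal Error!!!
search_file:errors
bug_system:jira
bug_id:MYPROJ-123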
Advanced knownbug configuration (requires hand-hacking knownbugs files currently)
There are also a couple of possible entries in knownbugs file sections that don't have an equivalent in the creation dialog yet. If you have several known bugs that might trigger at the same time it can be useful to set priorities between them. The default prioritisation assigns priority 10 to "internal errors", priority 20 to reported bugs in bug systems, and priority 30 to those with only a textual description. (Lower numbers mean higher priority!). To change this, just add e.g. "priority:5" in the relevant section.
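For example, given the two hypothetical sections below, the first (priority 5) would take precedence over the second (priority 15) if both matched:

[Reported by geoff at 01Jun09:00:00]
search_string:Internal Error!!!
search_file:errors
priority:5
brief_description:Internal error

[Reported by geoff at 01Jun09:05:00]
search_string:ERROR
search_file:errors
priority:15
brief_description:Generic error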
You can also write your own trigger in Python to do anything at all. This is often useful for checking exceptional environment conditions that might cause failures, e.g. some server being down. In this case you add "custom_trigger:pythonmodule.pythonFunction" and make sure your "pythonmodule" can be found on PYTHONPATH. The signature of pythonFunction should then be
def pythonFunction(execHosts, tmpDir):
   ...
where execHosts is a list of all the machines where the test ran (usually one element only) and tmpDir is the location of the TextTest sandbox, which can be used to check other test files or to write your own temporary files. The bug will be triggered if the function returns True and the other triggers in the bug also apply.
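For example, a hypothetical module (all names here are made up) that triggers the bug whenever a required server cannot be reached might look like this, referenced from the knownbugs file as "custom_trigger:checkenv.serverIsDown":

# checkenv.py - must be importable via PYTHONPATH
import socket

def serverIsDown(execHosts, tmpDir):
    # execHosts: machines the test ran on; tmpDir: the TextTest sandbox.
    # The server host and port are made-up examples.
    try:
        socket.create_connection(("myserver.example.com", 8080), timeout=5).close()
        return False  # server reachable: do not trigger the bug
    except OSError:
        return True   # server unreachable: trigger the bug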


Last updated: 08 July 2015