Documentation for 3.14

Files written by the System Under Test
Monitoring files that are created, edited and deleted by the tests
Introduction
By default, the standard output of the system under test will be collected to a file called output.<app> and the standard error will be collected to a file called errors.<app> (see the guide to files and directories). Any other files that might be written by the system under test will be ignored. However, it is possible to tell TextTest to "collate" individual files and compare them in a similar way to how it compares standard output and standard error. It is also possible to tell it to create an additional file which will list all files that were created, edited or deleted by the system under test (a "catalogue" file), in case comparing every single file is overkill.
Telling TextTest to collect additional files
This can be done by specifying the config file entry “collate_file”. This entry is a dictionary and so takes the following form:
[collate_file]
<texttest_name>:<source_file_path>
<texttest_name>:<another_source_file>
where <source_file_path> is some file your application writes and <texttest_name> is what you want it to be called by TextTest.
If you plan to do this, make sure you read the document describing how the TextTest temporary directory works first. <source_file_path> here should in principle never be an absolute path: it should be relative (implicitly to the temporary directory described above). This is because your tests will otherwise have global side effects, making them harder to understand and more prone to occasional failure, particularly if run more than once simultaneously.
Note that this ordering can seem counter-intuitive: in effect, you are asking TextTest to copy the text file located at <source_file_path> to <texttest_name>.<app> in the temporary directory of that test, where it will be picked up and compared. You might expect the source to be named before the target, but many different config dictionary entries use these TextTest names for result files as keys, so this one works the same way for consistency.
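For instance, with a hypothetical entry such as:
[collate_file]
solution:results/solution.txt
the file results/solution.txt written by the test (relative to its temporary directory) would be copied to solution.<app> there, and then compared against the saved solution.<app> standard result.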
Standard UNIX file pattern matching (globbing) is allowed in both <texttest_name> and <source_file_path>. When a pattern is used in the path to the source file, it simply means that the exact name of the file that will be produced may vary: whatever file matches the pattern will be copied and given the same name each time by TextTest, provided it was created or modified by the test run (unchanged files will not be collected in this way). It is also possible to provide multiple source patterns or names to look for, for situations where the names of the produced files vary in such a way that writing a single pattern isn't possible.
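As an illustration (the file names here are invented), a program that writes a log whose name contains a timestamp could be collated with a pattern:
[collate_file]
runlog:logs/run_*.log
Whichever run_*.log file the test run creates or modifies is then collected as runlog.<app>, so the result always has a stable name.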
If comparison of a collected file is not desired for any reason, it can be added to the config file list entry “discard_file”. The most common usage of this is to disable the collection of standard output and/or standard error (i.e. by adding “errors” or “output” to the list).
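For example, to stop collecting and comparing standard error while keeping everything else:
[discard_file]
errors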
Running an arbitrary script on the collected files
If the file you refer to via "collate_file" is not plain-text or needs to be pre-processed before it can easily be compared, you can tell TextTest to run an arbitrary script on the file. This script should take a single argument (the file name to read) and should write its output to the standard output. You do this by specifying the composite dictionary entry “collate_script”, which has the same form as “collate_file” except the value should be the name of the script to run. “collate_script” has no effect unless “collate_file” is also specified for the same file.
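A sketch of how the two entries fit together, using invented names (a report.xml result file and a filter script called strip_timestamps) purely for illustration:
[collate_file]
report:report.xml
[collate_script]
report:strip_timestamps
The script is only run because “collate_file” also declares the “report” entry.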
There are several ways that TextTest can find the script. Obviously a full absolute path will work. If a relative path is given, TextTest will also look in its own "libexec" directories where its standard collate scripts live: to avoid mixing your scripts with the standard ones you can create a directory "site/libexec", and scripts placed there will also be found. The scripts can often just be placed somewhere on your PATH, which will work with any file type on UNIX but only with .exe files on Windows (the shell is not used).
If the script fails, a file will be written at $TEXTTEST_SANDBOX/framework_tmp/<stem>.collate_errs, which you can then go and look at. Hopefully in future releases this information will be more readily available.
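A minimal sketch of such a script in Python (the name strip_timestamps and the idea of removing times of the form HH:MM:SS are assumptions made for the example; any language that can read a file and write to standard output will do):
import re
import sys

# A collate script receives the file to read as its single argument
# and must write the processed text to standard output.
pattern = re.compile(r"\d{2}:\d{2}:\d{2}")
with open(sys.argv[1]) as f:
    for line in f:
        sys.stdout.write(pattern.sub("<time>", line))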
Collecting multiple related files at the same time (advanced)
When patterns are used in the TextTest name, it means that all previously saved files that match this “target pattern” and all files written by the test that match the “source pattern” become collated files. E.g. suppose we have the following entry in the config file:
[collate_file]
data*:data*.dump
Suppose also that an earlier saved run had produced data1.<app> and data2.<app>, and the latest run produced data1.dump and data3.dump. Then the list of collated files becomes: data1, data2, data3. This means that the latest run's data1 will be compared against the file saved in data1.<app>, data2 will be flagged as missing and data3 will be flagged as a new result.
Some care is required in writing collate patterns. Completely general patterns like “*:*” would cause confusion since anything could relate to anything, in theory. The current implementation assumes that files have a common stem, i.e. it can handle stems like the example above, but not unrelated stems like “*good*:*bad*”.
Binary Files
Binary files should be identified as such by listing them in the “binary_file” config file entry - this ensures that TextTest will check whether they are identical but no attempt will be made to filter them or run a difference tool on them. If they differ, it's then up to the user to examine both files using whatever tools they have available to them.
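For example, if the system under test produces an image file that is collated as “screendump” (a name invented for illustration), it could be declared binary like this:
[binary_file]
screendump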
New Files and Missing Files
If standard results have not already been collected for a particular file produced when a test is run (as they won't be when you've just enabled the mechanism above), the file is reported under “New Files” and should be checked carefully by hand and saved if correct. The appearance of new files is also sufficient reason to fail a test, so every test will fail the first time unless the expected results are imported externally. The standard output and standard error are also treated this way.
In the same way, if files are not produced that are present and expected in the standard results, these will be reported under “Missing Files” and the test will fail. Saving such a result will cause the missing files to be removed from the standard results.
Generating a catalogue of file/process changes
Sometimes a system will potentially create and remove a great many files in a directory structure (TextTest itself is one example!). Collecting and comparing every single file might be overkill. Instead, you can create a catalogue file, which essentially compares which files (under the test's temporary directory) are present before and after the test has run, and records which of the files and directories present before have been edited during the test run.
It will then report what has been created, what has been removed and what has been edited. This is done by setting the config file entry “create_catalogues” to true. It will generate result files called catalogue.<app>. If no differences are found, this is noted briefly at the top of the file: catalogue files are always created from version 3.6 onwards.
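Unlike “collate_file”, this is a plain (non-dictionary) entry, so enabling it should just be a matter of adding:
create_catalogues:true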
Note that this feature can also be used to aid test data isolation.
In addition, you can request that the catalogue functionality checks for processes that were created (leaked!) by the test. This is done by getting the SUT to log, in a predictable way, when it creates a process. The text identifying the created process should be provided in the “catalogue_process_string” config file entry. TextTest will then search the result file indicated by “log_file” for matches with this string, assuming the process ID immediately follows it. If such a process is found to still be running, it will be reported to the catalogue file and automatically terminated.
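For instance, if the system under test logs a line like “Started subprocess with pid 12345” whenever it spawns a process (an invented log format, purely for illustration), the entry could be:
catalogue_process_string:Started subprocess with pid
TextTest would then look in the “log_file” result for that text and treat the number following it as a process ID to check.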
The Severity of Differences in particular Files
This is controlled by the dictionary entry “failure_severity”, and takes the form:
[failure_severity]
<texttest_name>:<severity>
<severity> here is a number, where 1 is the most severe and increasing the number means decreasing the severity. If the entry is not present, both “output” and “errors” files will be given severity 1, while everything else will have severity 99.
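As an illustrative example, a “performance” result could be marked as severity 2 and a hypothetical “data” file as severity 99 like this:
[failure_severity]
performance:2
data:99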
The severity has two effects on how TextTest behaves:
  1. When multiple files are found to be different, the difference is always reported in the dynamic GUI “details” column as a difference in the most “severe” file found to be different.
  2. If a severity 1 file is found to be different, the whole line will turn red, otherwise only the “details” column will turn red.
As an example, the test below has failed in “performance”, which is a severity 2 file. If the output had also been different, the whole line on the left would be red and the details would report “output different(+)”.
[Screenshot: dynamic GUI showing the “performance” failure described above]
Last updated: 05 October 2012