Documentation for 3.27

The TextTest Sandbox
handling test data and avoiding global side effects
Introduction
When tests are run, TextTest writes the output of the system under test to a temporary directory structure created specifically for this purpose. This is known as the “TextTest sandbox”: its purpose is to provide a totally separate environment where your system can create, edit and delete as much as it wants without doing anything permanent, and also an environment where test data can be provided in an easily accessed way.
There is one created for each run of each test. It is created under the directory indicated by the environment variable TEXTTEST_TMP. If this variable is not set then the config file entry "default_texttest_tmp" will be read, which in turn defaults to the value of $HOME/.texttest/tmp (on Windows $HOME is formed from $HOMEDRIVE and $HOMEPATH). This is hereafter referred to as the “root temporary directory”.
Each time TextTest is started, it is assigned a unique identifier based on the version, the process ID and the time stamp at which the run was submitted (the string “static_gui” is prepended in the case of the static GUI). A subdirectory of the root temporary directory is then created with this name. All temporary files and directories created by this run will then be created under this directory.
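As an illustration, a run's temporary files might be laid out as below. The identifier format shown is schematic (version, process ID and time stamp, per the description above), not the exact string TextTest generates, and "myapp", "TestSuiteA" etc. are hypothetical names:

```
$TEXTTEST_TMP/
  myapp.<version>.<pid>.<timestamp>/   <- unique directory for this run
    TestSuiteA/
      Test1/                           <- sandbox for Test1
      Test2/                           <- sandbox for Test2
```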
What it looks like, and what it is used for
In the case of the dynamic GUI or the console interface, tests are actually being run. This means that, for every test being run, a temporary directory structure is created which essentially mirrors the permanent directories which represent the tests (see the guide to TextTest test suites), so that each test is assigned a unique temporary directory. All temporary files corresponding to particular tests are then written to these directories. When the tests are run, each test starts the system under test with the corresponding temporary directory as current working directory, with its standard output and standard error redirected to local files.
It will set the environment variable TEXTTEST_SANDBOX to point out this directory, to aid in providing correct absolute paths to programs that insist on them, or change current working directory internally. In addition it will set the environment variable TEXTTEST_SANDBOX_ROOT which points to the root of this structure (which in turn will be a subdirectory of TEXTTEST_TMP). This will live for as long as the whole test run, and can be used for temporary storage which needs to be shared between multiple tests.
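For a program that insists on absolute paths, a small helper along these lines can resolve file names against the sandbox. This is a sketch, not part of TextTest's API; the helper name sandbox_path is hypothetical:

```python
import os

def sandbox_path(filename):
    """Return an absolute path to a file in the test sandbox.

    Falls back to the current working directory when TEXTTEST_SANDBOX
    is unset, e.g. when the program runs outside TextTest.
    (sandbox_path is a hypothetical helper, not part of TextTest.)
    """
    sandbox = os.environ.get("TEXTTEST_SANDBOX", os.getcwd())
    return os.path.join(sandbox, filename)
```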
It is imperative that you ensure any other files created by the system under test are created relative to this temporary directory, to avoid global side effects and to aid TextTest in finding them. This should be possible by always specifying relative paths in your test configuration files, which will be interpreted relative to this directory for each run.
(In the temporary directory for each test case, TextTest creates a subdirectory called “framework_tmp”. It uses this to write its own temporary files, such as filtered versions of the output, performance data etc.)
In the case of the static GUI, the temporary directory will contain logs of each dynamic GUI run that is started from it. Each such run writes its files in a subdirectory labelled dynamic_run<n>, with the number increasing for each run that is started. When the dynamic GUI is closed, the contents of whatever it wrote on standard error will be displayed in a message box by the static GUI, as well as in a file in this directory.
Populating the temporary directory with test data files (for reading or editing)
Sometimes the system under test needs to read some file relative to the current working directory. TextTest allows you to place such files in the permanent test directory structure. You should then specify the “link_test_path” config file entry as the (local) file name of the file you want to provide. You can then refer to a local file of the appropriate name in your options file in that test case, for example.
TextTest will look for the file name you specify, using its mechanism for finding and prioritising files in the hierarchy. If it finds such a file (or directory), it will create a symbolic link to it from the temporary directory (UNIX) or copy it (Windows). If it doesn't, it will silently continue, as it is regarded as a normal situation to need test data files for some tests but not others.
The files can be given any name at all (unless the system under test requires a particular name), and the normal extensions of application and/or version identifiers can be applied to them as with other files. These identifiers will be stripped from the copied or linked file name in the sandbox, which will be as given in the config file.
Sometimes the system under test will itself edit existing files. In this case, you will want to copy to the temporary directory the file or directory structure which it plans to edit, so that test runs are repeatable and do not have global side effects. You can do this using the “copy_test_path” config file entry, which will find files or directories to copy in the same way as link_test_path, and indeed is equivalent to link_test_path on Windows.
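For example, to copy a file or directory that the system under test will edit, rather than link it ("editable_data" is a hypothetical name):

```
copy_test_path:editable_data
```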
Requiring test data to exist
Note that TextTest will not insist on such test data existing for every test just because you have specified something in "link_test_path" or "copy_test_path". If you want to be given an error message when a particular test data type is not present for some reason, use the config setting "test_data_require". (Sometimes the system under test will not work at all without certain data available; in that case it can be useful for the tool to know this.)
So to insist on the read-only data file "my_file" existing, you can write
link_test_path:my_file

test_data_require:my_file
Merging test data files and directories with each other
Sometimes it can be useful to have test data files and data structures stored in tests that can be merged with more general versions higher up the hierarchy. This avoids having to copy information and maintain multiple copies. This can be achieved by using
copy_test_path_merge:my_file_or_dirname
If my_file_or_dirname refers to a file here, this means that all versions of the file in the test hierarchy will be found and an amalgamated file created in the test sandbox that consists of all of them appended together, with the most "general" at the top and the most test-specific at the bottom. This can for example be used for "settings" files.
If it instead refers to a directory, an amalgamated directory will be created from all of the ones in the test hierarchy, picking the files from the most test-specific directory in case they appear in several of them. This is useful in case most of the directories contain the same files but you need to make small tweaks to individual files in the directory structure. (Note that in this case it will not amalgamate the files themselves to each other if there are several of them - as data directory structures do not normally want this in our experience)
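The two merge behaviours can be sketched in Python, assuming flat directories for simplicity (this is an illustration of the semantics described above, not TextTest's actual implementation):

```python
import os
import shutil

def merge_file_versions(versions, dest):
    """Append file versions, most general first and most test-specific
    last -- the behaviour copy_test_path_merge applies to plain files."""
    with open(dest, "w") as out:
        for path in versions:          # ordered general -> specific
            with open(path) as src:
                out.write(src.read())

def merge_directories(versions, dest):
    """Union of directory contents; when a file appears in several
    versions the most test-specific copy wins, and the files themselves
    are not amalgamated. (Flat directories only, for illustration.)"""
    os.makedirs(dest, exist_ok=True)
    for directory in versions:         # ordered general -> specific
        for name in os.listdir(directory):
            shutil.copy(os.path.join(directory, name),
                        os.path.join(dest, name))
```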
In contrast to this directory behaviour, using "copy_test_path" instead will treat each directory as a separate unit, i.e. it will take the entire directory from the most specific place in the hierarchy.
Making "partial copies" of large data structures (UNIX only)
Sometimes an application may need to read from a very large directory structure, and potentially edit some files in it. Copying the whole structure for each test run is possible but time consuming. It's better to be able to copy just the parts that will be changed and link the rest. This is done with the “partial_copy_test_path” config file entry, in conjunction with the catalogue creation feature (“create_catalogues” in the config file). The first time the test is run, all the files are copied, and the catalogue records which files are created, edited and deleted. The next time, the structure will be copied and linked as determined by what is in the catalogue file.
If any use is made of symbolic links to the master data, it is generally recommended to make the entire “master copy” of the data read-only, in case bugs in the application would cause it to corrupt the test data. It is possible to tell TextTest to ignore the catalogue file and copy everything again if the file-changing properties of the test change: check the “Ignore catalogue when isolating data” box (-ignorecat on the command line).
Configuring the copy operation
It is also possible to take control over the copying operation and insert your own script to do it, by making use of the "copy_test_path_script" setting.
copy_test_path:my_data

[copy_test_path_script]
my_data:/path/to/script.sh
The script in question will receive two arguments: the source file and the destination. It is called instead of (not as well as) the default copy operation, so it often consists of performing the copy and then making some adjustments to the copied data. From TextTest 3.25 it is also possible to refer to environment variables set in your environment files from within this script (e.g. TEXTTEST_SANDBOX, TEXTTEST_LOG_DIR).
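Such a script might look like the sketch below, shown in Python although any executable script works. The placeholder substitution is a hypothetical example of a post-copy adjustment:

```python
import shutil

def copy_and_adjust(src, dest):
    """Sketch of a copy_test_path_script: TextTest calls the script with
    the source and destination paths instead of doing its default copy,
    so the script must perform the copy itself and may then adjust the
    result. A real script would end with:
        copy_and_adjust(sys.argv[1], sys.argv[2])
    """
    shutil.copy(src, dest)            # the copy TextTest would have done
    with open(dest) as f:
        text = f.read()
    with open(dest, "w") as f:        # hypothetical adjustment step
        f.write(text.replace("@DATA_ROOT@", "/local/data"))
```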
Associating environment variables with test data
Applications will often reference their test data structures via environment variables. When these structures are isolated by TextTest as described above, it can be helpful to update the variables accordingly. There are two ways to do this, which are fairly similar in effect.
The first is to simply refer to the environment variable using one of the test data config file settings described above (link_test_path, copy_test_path or partial_copy_test_path). For example, you could write
copy_test_path:$MY_ENV_VAR

This would take the value of the environment variable MY_ENV_VAR as determined by the environment files and the external environment, identify if it refers to an existing file or directory, and if so, copy that as test data. The environment variable will also be updated to point at the absolute path of the copied location.
Alternatively, you can associate environment variables with test data found via the normal mechanism. This is done via the “test_data_environment” config file setting, which is a dictionary. For example
copy_test_path:data

[test_data_environment]
data:MY_ENV_VAR
For each name identified by link_test_path, copy_test_path or partial_copy_test_path, you can provide an entry which will be the name of an environment variable to set to the isolated version of the data.
Note! In both of these cases the environment variables will be set even if no data is found. The assumption is that the system under test might in that case want to create such data in an equivalent position.
Ignoring parts of test data directory structures
If you specify a directory as test data, via any of the three ways described above, it will be treated as test data recursively in its entirety. Sometimes, however, some parts of it are not really part of the data and should not be displayed as such, either in the GUIs or in the catalogue files. If the data is version controlled via CVS, for example, it is likely to contain CVS directories which we will want to ignore.
To achieve this, use the setting “test_data_ignore”. The keys are names identified by “link_test_path”, “copy_test_path” or “partial_copy_test_path”. The values should be names (or regular expressions) of files or directories under the relevant structure which should be ignored. If a directory is ignored, so are its contents.
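For example, to ignore CVS bookkeeping directories inside a data directory named "data" (the name is illustrative):

```
copy_test_path:data

[test_data_ignore]
data:CVS
```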
This has several effects. These files/directories will not be shown in the static GUI (which will behave as if they didn't exist) and changes in them will not appear in the catalogue file, if there is one. If you are using “partial_copy_test_path”, that means they will also not be copied, ever: changes there are regarded as uninteresting. It is thus a very bad idea to use this setting in conjunction with partially copied structures, if there is any chance of a write-conflict between several tests, or if files will be created (and not deleted) by test runs in that place. In the last case the directory would grow without limit.
What happens to the sandbox when TextTest terminates?
When you quit the GUI (or the console interface terminates), the temporary directory associated with the run is by default automatically removed. It is thus important to approve any test results that you wish to use again, either as the default result or as a version.
It is also possible to run TextTest in “keeptmp” mode. This means that the temporary directory structure of the run is not removed when TextTest exits.
Running in batch mode automatically selects “keeptmp” mode for the temporary directories. It may also be requested explicitly using the “-keeptmp” option on the command line, or by checking the “keep temporary write directories” box in the “side effects” tab of the static GUI.
Another option in batch mode is to provide "-keeptmp 0", which will remove all the temporary files at the end of the run exactly as in interactive mode. This is not normally desirable, because it precludes “reconnecting” to the run to view the results in the GUI, or doing any detailed examination of any failures.
Separating the location where the result files are written from the test run location
Ordinarily the TextTest sandbox is used both as a place to store the result files the test writes (stdout, stderr, log files etc), and where the application manipulates test data and writes its own files. It can be useful to separate these things, particularly when using a grid engine to run tests in parallel. The test data and the application's own files do not need to be visible anywhere else, so they can be directed to a local "tmp" disk. The result files, meanwhile, should be visible to the machine where TextTest is running, and may want to be archived etc.
In this case you can either set "default_texttest_local_tmp" in your config file, or set the environment variable TEXTTEST_LOCAL_TMP, to an appropriate location. A common choice on POSIX systems is
default_texttest_local_tmp:/tmp/$USER
The "sandbox" used for running the test (current working directory, and the location of $TEXTTEST_SANDBOX) will then be under this local location. But stdout and stderr files will be written to the "normal" location under TEXTTEST_TMP, and any collated files will be written there. A new environment variable, TEXTTEST_LOG_DIR, points out this location where result files/logs are written.
Writing logfiles to the current working directory and naming them with your application suffix, as suggested elsewhere, will thus not work in this setup, because the current working directory is the local sandbox, not the log directory. You either need to add them to [collate_file], in which case you won't see them until the test finishes, or use $TEXTTEST_LOG_DIR in your log configuration file. If your logging framework does not expand environment variables (some don't), use "copy_test_path_script" as described above, and provide a script that copies the file while expanding its variables.
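A minimal sketch of such a variable-expanding copy script, using Python's os.path.expandvars (the function name copy_expanding_vars is hypothetical):

```python
import os

def copy_expanding_vars(src, dest):
    """Copy a text file, expanding $VAR and ${VAR} references from the
    environment while copying. Useful as a copy_test_path_script when
    the logging framework cannot expand environment variables itself.
    A real script would end with:
        copy_expanding_vars(sys.argv[1], sys.argv[2])
    """
    with open(src) as f:
        text = f.read()
    with open(dest, "w") as f:
        f.write(os.path.expandvars(text))
```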
Example: configuring application logging by using log4x-style configuration files as test data
Up to TextTest 3.10 there was a separate mechanism for plugging in log4x-style configuration files. As TextTest 3.11 can handle application and version-specific suffixes for test data, this became redundant: it now reduces to a special case of the test data mechanism. As this is a very common usage of the mechanism, it's documented here for completeness. If you don't know what a logging framework is, see the appendix below.
We start by deciding on a name for our logging configuration files. For consistency with TextTest 3.10 we will choose "logging". So we need to tell TextTest to treat these files as readonly test data:
link_test_path:logging
We should then place a file called "logging" in the root test suite of our application, with only those logs enabled that we want turned on for all tests, which is probably not many of them. We can then create test-specific logging files for particular tests, by selecting that test, right-clicking on "Data Files" in the file view, and selecting "create file" from the popup menu. TextTest will then choose a logging file via its mechanism for finding and prioritising data files.
The system under test can then be configured to read a local file called "logging" from the current working directory for its log configuration. In practice though, this is likely to be inconvenient for uses other than testing, so you'll probably want to use an environment variable or Java property to point it out. This is done as follows:
Example configuration (environment variable)
Assumes the SUT locates the log configuration file via the environment variable $DIAG_INPUT_FILE.
[test_data_environment]
logging:DIAG_INPUT_FILE
[end]
In a similar way you can make sure your "logging" file writes all the logs to the current working directory, and names them with the appropriate suffix that your config file also has. Then you won't need to do anything further. Sometimes this leads to problems if your application changes directory internally; in that case it can be a good idea to identify the absolute path. The easiest way to do this is via the environment variable $TEXTTEST_SANDBOX described above (or a proxy variable that is set to be the same as it in the tests).
Appendix - what is a logging framework and why do I care?
It is naturally possible to conduct all your logging for TextTest by writing just to standard output. However, there are drawbacks to doing this.
  1. It isn't possible to have some log statements present for some tests and absent for others.
  2. Where logs cannot be easily disabled, they can slow down the system in production.
  3. You are compelled to log at one level only: it isn't possible to separate high-level domain-relevant logs from lower-level debug logs that will only be understood by the developers.
Logging frameworks exist to solve these problems. TextTest aims to handle their configuration smoothly and seamlessly, to make it easy to use them in your program when testing it.
We recommend you look at the log4x family of tools, for example log4j (Java) and log4cpp (C++). Python has its own builtin "logging" module which works in a similar way. However, it should be possible to plug in a wide variety of logging frameworks, provided they support the features that TextTest assumes, as described above.


Last updated: 08 July 2015