Documentation for 3.10
Running TextTest Unattended
A Guide to Batch Mode
It can be very useful to have TextTest run lots of longer
tests, say overnight, and provide the results in an email or
HTML report rather than presenting them in one of the interactive
user interfaces. That is the purpose of “batch
mode”. To select batch mode, provide the command line
option “-b <batch_session>” or fill in the
“Run Batch Mode Session” tab under “How to
Run” in the static GUI. In general, you will probably want
to start such batch runs via a script, for example using crontab
on UNIX.
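For instance, a crontab entry along these lines (the path to
texttest.py and the session name are illustrative) would start the
“nightjob” batch session every night at 01:00:

    # Run the nightly batch session at 01:00 (illustrative path and session name)
    0 1 * * * /path/to/texttest.py -b nightjob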
The batch mode “session” is simply an identifier
that defines a particular sort of batch run. Most of the batch
mode configuration can be defined per session. Any identifier at
all can be provided, and if no configuration is recognised for
that session name, default settings will be used. All of the
batch mode config file settings that start with “batch_”
(described below) are “composite dictionary entries”
with the batch session names as keys; it is recommended to read
the file format documentation to understand what this means.
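For example, a sketch of such an entry, giving a setting different
values for two illustrative sessions (using the square-bracket
dictionary syntax from the file format documentation):

    [batch_recipients]
    nightjob:nightly-results@example.com
    release:release-team@example.com
    [end]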
TextTest batch mode generates an email report. For a
multiple-developer project it is often useful to direct such
reports to a newsgroup, providing everyone the chance to see at
a glance what works and what doesn't. This will then generally
look something like this (example newsgroup viewed in Mozilla):
As you see from this example, the title of the mail consists
of the date and a summary of what tests were run and what
happened to them (for the application “Tail”, in
this case). If the -name option is provided to the run on the
command line, that name is used to identify the run instead of the
date. (In general, use -name for testing actual named releases, and
the default date behaviour for nightly jobs.)
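For instance, a run for a named release might be started like this
(session name and release name are illustrative):

    texttest.py -b release -name Release_1.2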
The body of the mail contains two further sections, one which
summarises exactly which tests failed and a further section
which endeavours to give some details as to why they failed.
These sections can be explored or ignored depending on how
involved the reader is in the project. Managers will generally
only need to look at the subject lines...
The name of the application, as provided here, can be
configured via the config file entry “full_name”. By
default a capitalised version of the file extension used for the
application will be used here, but this doesn't always look so
nice in reports.
The “details” section consists of the textual
previews as generated by the dynamic GUI's “text info”
tab when a test fails. It can be configured in the same way.
In addition, it can be useful to configure the maximum width of
lines allowed: some newsgroups have maximum line length limits
and you don't want test reports bouncing. This can be done via
the config file entry “max_width_text_difference”.
Where the mail is sent is controlled by the config file
entry “batch_recipients”. This can be configured per
batch session, and may be a comma-separated list for multiple
recipients. The sender address can be controlled by the
“batch_sender” config file entry, while the SMTP
server to use for sending mail can also be configured via
“smtp_server”.
All of these will need to be configured on Windows as no
defaults are provided. On UNIX, the SMTP server defaults to
“localhost” and both sender and recipient addresses
default to “$USER@localhost”, so it is generally
only necessary to configure the recipients.
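Putting these together, a sketch of the email-related settings
(addresses, server and values are illustrative; the “batch_”
entries take session names as keys as described above, while
“smtp_server”, “full_name” and
“max_width_text_difference” are written here as plain entries):

    full_name:Tail
    max_width_text_difference:80
    smtp_server:mailhost.example.com

    [batch_sender]
    nightjob:texttest-daemon@example.com
    [end]

    [batch_recipients]
    nightjob:dev-team@example.com,qa-team@example.com
    [end]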
For more flexibility in viewing and analysing a lot of
results, as well as being able to easily monitor the behaviour
of particular tests over time, it can be very useful to store
the batch results in a repository and generate HTML reports from
them. In order to store the information from the batch runs, the
config file entry “batch_result_repository” should
be set to a directory under which batch results can be stored.
Results are then stored per test and day and are never
overwritten: to recreate results for a particular day it is
necessary to explicitly remove the previous ones, either
manually or via the archiving script described below.
For the location of the actual reports, set the config file
entry “historical_report_location” to another
directory. Both of these are composite dictionaries as described
above so both can be varied per batch session. In order to
actually generate the report, run the script
'batch.GenerateHistoricalReport', which will rebuild all the
reports from scratch based on what is in the repository. This
script is also run when the -coll flag is provided; see below.
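A sketch of these two settings, followed by the command to rebuild
the report (the directories are illustrative and match the example
further down; the script is run via the -s option like other
TextTest scripts):

    [batch_result_repository]
    nightjob:/some/central/directory
    [end]

    [historical_report_location]
    nightjob:/our/documents/html/testreports
    [end]

    texttest.py -s batch.GenerateHistoricalReport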
The easiest way to get a handle on what this looks like is to
look at this example,
which is generated by TextTest's tests for itself. Each day's
results correspond to a column, while each test has a row. The
results can be explored by clicking around.
The colours in the site are also configurable: use the config
file dictionary setting “testoverview_colours”. To
see how to set this, look at the config
file table and pattern match on the default value.
After a while, very old test results in the repository cease
to be interesting and can safely be archived. This is done via
the script batch.ArchiveRepository, with arguments 'after' and
'before' for the time period to archive (and 'session' for the
batch session to operate on, which defaults to all known sessions). The
dates should be in the same format as the dates on the pages,
e.g. 21Jan2005.
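For example, assuming script arguments are passed as name=value
pairs inside the -s option, archiving everything between 01Jun2004
and 21Jan2005 for the “nightjob” session might look like this:

    texttest.py -s "batch.ArchiveRepository session=nightjob after=01Jun2004 before=21Jan2005"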
The config file entry “batch_timelimit”, if
present, causes only tests expected to take up to that amount of
time (in minutes) to be run. This is of course only useful
if performance testing is enabled for
CPU time.
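For example, to have a “nightjob” session run only tests
expected to take at most 180 minutes (a sketch; the session name is
illustrative):

    [batch_timelimit]
    nightjob:180
    [end]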
More generically, you can use the “batch_filter_file”
entry to identify filter files
to be associated with a particular batch run. These can either
contain a list of tests or search criteria to apply and can be
edited using the static GUI. In this context it is worth noting
that such filter files can have application- and version-specific
suffixes, so that the same batch_filter_file entry can select
different tests for different applications and versions where
similar criteria imply different tests.
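A sketch, where “nightly_tests” is an illustrative filter
file name that would have been created from the static GUI:

    [batch_filter_file]
    nightjob:nightly_tests
    [end]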
If certain versions should be run automatically as part of a batch
mode run without needing to explicitly specify them on the command line,
the entry “batch_extra_version” can be used for this purpose. This
is a more specialised version of the “extra_version” setting.
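For example, to have a “nightjob” session automatically
also run the “linux” version (a sketch):

    [batch_extra_version]
    nightjob:linux
    [end]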
If the entry “batch_use_version_filtering” is set
to “true”, all versions are assumed to be disabled
unless explicitly enabled by being included in the
“batch_version” list setting. The point of this is
that, when there are multiple test applications and multiple
releases of the system, a single run of TextTest can be started
with a particular version identifier and each application can
decide in its config file whether it wants to run tests for that
version of the system. This is generally easier than trying to
set up separate nightjob runs for each application.
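A sketch of such a setup, assuming list values are built up by
repeating the session key:

    [batch_use_version_filtering]
    nightjob:true
    [end]

    [batch_version]
    nightjob:11
    nightjob:12
    nightjob:linux
    [end]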
Both of these things act in concert with any test
selection filters selected on the command line or from the
static GUI. As described there, only tests which satisfy all
filters present will be selected.
When many versions of the system under test are active, and
many different hardware platforms are used, you may want to test
the system on all of these combinations. This can lead to a
great many test runs and consequently a lot of emails. It is
often easier to read these if they are collected into a single
larger email: otherwise it is hard to get an overview of what is
happening.
To do this, set the config entry “batch_use_collection”
to "true" for the batch session in question. This will
ignore the email-sending settings and send the batch report to
an intermediate file. When all tests have been run in this way,
the collection script can be run via “texttest.py -s
batch.CollectFiles”. Alternatively, the -coll flag can be
provided, which will perform both the scripts
batch.GenerateHistoricalReport and batch.CollectFiles (i.e. it
will generate the HTML report described above as well). The
collection script will search for all such intermediate files
and amalgamate them into a single mail per application. If a
version is provided to this script via -v <version>, only
runs which ran with that version identifier will be collected.
This applies to the HTML report as well.
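As an illustration, here is a sketch of a config file using the
dictionary syntax described earlier (the mail address and
directories are those mentioned in the description that follows;
the “local” session has no entries of its own and so relies on
the UNIX defaults):

    [batch_recipients]
    nightjob:carmen.test_newsgroup
    wkendjob:carmen.test_newsgroup
    [end]

    [batch_timelimit]
    nightjob:180
    [end]

    [batch_version]
    nightjob:11
    nightjob:12
    nightjob:linux
    wkendjob:11
    wkendjob:12
    wkendjob:linux
    wkendjob:sparc
    wkendjob:powerpc
    [end]

    [batch_use_collection]
    nightjob:true
    wkendjob:true
    [end]

    [batch_result_repository]
    local:/some/central/directory
    nightjob:/some/central/directory
    wkendjob:/some/central/directory
    [end]

    [historical_report_location]
    nightjob:/our/documents/html/testreports
    wkendjob:/our/documents/html/testreports
    [end]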
This config file configures TextTest so that:
- “-b local” will send email to the sender
directly
- “-b nightjob” will run all tests that take
up to 180 minutes (3 hours). It will only accept the version
identifiers “11”, “12” and “linux”
(if we also set “batch_use_version_filtering” to
true, which we didn't here...) It will write its results to an
intermediate collection file, where they can be collected
later.
- “-b wkendjob” will run all the tests there
are. It will also accept the version identifiers “sparc”
and “powerpc” (evidently we want to test on more
flavours of UNIX at the weekends!). It will use collection in a
similar way to the “nightjob” session.
- All of these will write their results under the location
/some/central/directory, provided no previous results have been
calculated that day. When the collection is run, the files from
the 'nightjob' or 'wkendjob' sessions will be amalgamated and
mailed to the carmen.test_newsgroup mail address. The website
at /our/documents/html/testreports will also be regenerated
from scratch from the repository described above.
Batch mode's email report is all very well, but alone it
doesn't give you the power of the GUI to view results in detail
or to save them if that would be appropriate. It can be very
useful to “reconnect” the GUI as if a batch run had
been run using it. To do this, go to the “Reconnect”
tab under “Running” in the static GUI (or provide
the “-reconnect <directory>” option on the
command line).
In “Temporary results directory” you should enter
the full path to wherever the run would have written its
results. This defaults to where your own normal runs would write
theirs. (For backwards compatibility, on UNIX you can also enter a
user name, which will use the directory where that person would
normally write them.) Any version identifiers provided in the original
run should also be provided in this reconnect run, in the
“Version to Reconnect to” field.
There is a switch at the bottom which allows you to choose
between a quick re-display of what was displayed in the original
batch email report, and an option to recalculate the results
from the raw files. If for any reason the quick re-display isn't
possible, it may trigger a recomputation anyway.
The recomputation, whether explicitly requested (-reconnfull
on the command line has the same effect) or auto-triggered, will
take the raw output of the run being reconnected to and reapply the
text filtering mechanisms to it, and also re-evaluate any
automatic failure interpretation
that could be triggered. This is useful if you have updated your
config file filters in the meantime and want to see if they are
applied correctly. It is, of course, a good deal slower than
simply re-reporting what was present before, as occurs by
default.
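On the command line, a reconnect run might look like this (a
sketch: the -a option selecting the application, the results
directory and the version are all illustrative):

    texttest.py -a myapp -v 11 -reconnect /path/to/temporary/results -reconnfull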