Documentation for TextTest 3.9.1
Measuring system resource usage
CPU time and memory consumption
TextTest is well suited to testing systems that place
substantial demands on system resources: for example, they
may run for long periods and consume a lot of CPU time, or they
may be very demanding in terms of memory. In such cases it is
generally necessary to keep this under control, so that changes to your
system do not suddenly result in it being much slower or
consuming much more memory. This document provides a guide to
how to set up such testing using TextTest.
In general, such values should be measured by the system
under test and logged to some TextTest result file. TextTest can
then be configured to extract them and compare them. On UNIX,
CPU time can also be measured and extracted automatically by
TextTest with changes only needed in the config file.
For each type of system resource tested, a small result file
is generated by TextTest containing a single line with the
relevant information in it. The name of this file doubles up as
a way to refer to the tested resource in the various config file
entries. For example, the file might contain this:
CPU time : 30.39 seconds on apple
Most of the performance configuration can be defined per tested
resource file in this way. Any identifier at all can be
provided, and if no configuration is recognised for that name,
default settings will be used. All of the performance config
file settings that start with “performance_”
(described below) are “composite dictionary entries”
with the file stems as keys; it is recommended to read the file
format documentation for an explanation of what this means.
It is generally assumed that you may have more than one
machine at your disposal in order to run tests. Further, these
machines may not be identical: some may be faster than others.
For any comparison involving times, it is therefore essential to
state on which machines you intend performance testing to be
performed, or it will be impossible to get reliable results.
This is done using the config file entry
“performance_test_machine”, which has the following form:
[performance_test_machine]
<system_resource_id>:<machine_name1>
<system_resource_id>:<machine_name2>
The machine names should be either names of machines where you
want performance testing to be enabled, or the string “any”,
which indicates that performance should be tested on any machine
for this system resource id. Note that it may be very useful to
use the key “default” here, to save lots of repeated
typing.
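For example, assuming two dedicated test machines called “host1”
and “host2” (hypothetical names), the following sketch enables
timing comparisons only on those machines, while memory is checked
wherever the test happens to run:
[performance_test_machine]
default:host1
default:host2
memory:any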
Running the test on any machine outside the list will still
work, but no performance-related information will be generated
or compared. It is also possible to disable collecting
performance-related information: use the “-noperf”
flag on the command line, or select “Disable any
performance testing” from the static GUI's “How to
Run” tab.
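For example, to run a hypothetical application “myapp” with all
performance checking disabled for that run, you might use:
texttest -a myapp -noperf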
Automatic CPU time measurement is keyed in the config file by the
identifier “cputime” but generates a file called “performance.<app>”
for historical reasons (it used to be the only kind of
performance measurement). Enabling it is a matter of defining
the config file entry “performance_test_machine” for
the system resource id “cputime”, as described
above.
It uses the shell tool “time” to time the run of
the system under test, which works pretty reliably. Of course,
the CPU time of the whole test isn't always what you want –
in this case you will need to take over the measurement yourself
via the mechanism described below.
By default, only the user time is collected. You can include
the system time also by setting the config file entry
“cputime_include_system_time”.
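Putting this together, a minimal sketch enabling automatic CPU time
measurement on any machine, and including system time as well as user
time, might look like the following (the exact value syntax for the
boolean entry is an assumption here):
[performance_test_machine]
cputime:any

cputime_include_system_time:true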
More generally, any information present in any TextTest result file can be
extracted and treated as a system resource. First, choose a
system resource id for the resource concerned. You then
need to make sure that “performance_test_machine” is
enabled for this key (or “default”, naturally).
Additionally, you need to tell TextTest how to extract it. This
is done via the config file entry
“performance_logfile_extractor”, which has the
following form:
[performance_logfile_extractor]
<system_resource_id>:<text_identifier>
The <text_identifier> here is a string to be found in the
file to be searched. When TextTest finds that string, it will
take the word immediately following it and try to interpret it
as a number (note that you need to provide the whole string up
to the word beforehand, not just a part of it as for
“run_dependent_text”). It will also parse the format
hh:mm:ss for times. It will then generate a file
<system_resource_id>.<app> in a similar way to the
performance file above.
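As a sketch, suppose the system under test logs a line such as
“Total memory consumed : 450 MB” (a hypothetical message). The figure
could then be extracted under the system resource id “memory” like this:
[performance_logfile_extractor]
memory:Total memory consumed :
This would produce a file “memory.<app>” for each test, containing the
extracted value.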
The file to be searched is identified primarily by the config
file entry “log_file”. This defaults to “output”,
i.e. the standard output of the SUT. You may need to search
different files for different entries, in which case the entry
“performance_logfile” is of use. This has the same
form as above, except that the value is the file stem of the
file you wish to extract the information from.
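Continuing the sketch, if that figure were written to a file collated
under the stem “diagnostics” (a hypothetical stem) rather than to
standard output, you might add:
[performance_logfile]
memory:diagnostics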
The number, once identified, is assumed to be a time (in
seconds or hh:mm:ss) and will be displayed accordingly. To
override this, choose a system_resource_id containing the
substring “mem” and it will be interpreted as a
memory value in MB (!). The special system resource id “memory”
will cause performance_test_machine to be set to “any”
by default, as memory is assumed to be less dependent on what
machine is used.
When you have just enabled such resource usage extraction,
you generally want to automatically extract the current values
from your existing result files for all tests, creating the
auto-generated system resource files. This avoids getting a lot
of “new file” results first time around. There is a
plugin script for this called
“default.ExtractStandardPerformance”.
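For a hypothetical application “myapp”, running the script over the
whole test suite might look like this:
texttest -a myapp -s default.ExtractStandardPerformance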
Performance result files are not compared exactly. Part of
the point of testing things like this is that the figures are never
exactly the same: you need to set a tolerance. This is done via
the config file entry “performance_variation_%”,
which has a similar format to the entries already described. It
verifies that the percentage difference between the two figures
is no more than a given threshold. There is also an entry
“minimum_performance_for_test”, which can be used to
say, for example, that a test must run for at least 5
CPU-seconds before it is worth comparing at all.
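As an illustrative sketch (the thresholds are placeholder values, and
“minimum_performance_for_test” is assumed here to be keyed per resource
id in the same way), a 3% tolerance on CPU time, a 5% tolerance on
memory and the 5 CPU-second minimum mentioned above would be written:
[performance_variation_%]
cputime:3
memory:5

[minimum_performance_for_test]
cputime:5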
Note that TextTest's policy with percentages is to always
use the “percentage increase” (defined as (<larger>
– <smaller>) / <smaller>), which can be
surprising at first, especially if your test shows “100%
faster” as below! It was found to be easier to set the
tolerances this way because it leads to a symmetric situation:
7% slower and 7% faster mean the same thing. The more
immediately intuitive way of defining decreases as a percentage
of the current value leads to the situation where a 100%
slowdown today is counterbalanced by a 50% speedup tomorrow,
which can also become hard to follow.
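For instance, if a test previously took 30 CPU-seconds and now takes
33, the percentage increase is (33 – 30) / 30 = 10%, reported as “10%
slower”. If it instead drops from 33 back to 30 seconds, the same
calculation gives 10% again, this time reported as “10% faster”, so a
single tolerance covers both directions.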
When differences in performance are reported, you are given
the option to save as described below. An extra dialogue appears
asking whether you want to save “Exact Performance”
or “Average Performance”. “Exact Performance”
will simply overwrite the old file with the new one. “Average Performance”
will, not surprisingly, take the average of the old and new figures
and save that as the new result.
Below is what you will see in the dynamic GUI when you run
tests that fail in performance in this way.
(As a footnote, there is also the possibility to plug in a
mechanism to test the execution machine for processes which
might interfere with the performance of the system under test.
If such interference is detected, a larger tolerance can be
used, indicated by the config file entry
“cputime_slowdown_variation_%” (only for CPU time
measurement, as indicated). This does not have any effect in the
default configuration, but can be implemented in derived
configurations. See the guide to writing your own configuration.)
There are two plugin scripts
for reporting summarised performance information. For CPU time, use the script
“performance.PerformanceStatistics”. This will print
the amount of CPU time used by each test to standard output. Use
the option “file=<sys_resource_id>” to control
which resource is reported on. You can also compare with another
version, percentage-wise for each test, using the option
“compv=version”. To compare memory usage of “myapp”
version 1.3 with the previous version 1.2, for example, use
texttest -a myapp -v 1.3 -s "performance.PerformanceStatistics file=memory compv=1.2"