
Measuring system resource usage
CPU time and memory consumption
Introduction
TextTest is a very suitable tool for testing systems that place substantial demands on system resources: for example, they may run for long periods and consume lots of CPU time, or they may be very demanding in terms of memory. In such cases it is generally necessary to keep these figures under control, so that changes to your system do not suddenly result in it being much slower or consuming much more memory. This document provides a guide to setting up such testing using TextTest.
In general, such values should be measured by the system under test and logged to some TextTest result file. TextTest can then be configured to extract them and compare them. On UNIX, CPU time can also be measured and extracted automatically by TextTest with changes only needed in the config file.
For each type of system resource tested, a small result file is generated by TextTest containing a single line with the relevant information in it. The name of this file doubles up as a way to refer to the tested resource in the various config file entries. For example, the file might contain this:
CPU time   : 30.39 seconds on apple
Most of the performance configuration can be defined per tested resource file in this way. Any identifier at all can be provided, and if no configuration is recognised for that name, default settings will be used. All of the performance config file settings that start with "performance_" (described below) are "composite dictionary entries" with the file stems as keys; it is recommended to read the file format documentation for what this means.
A note on machines and when information is collected
It is generally assumed that you may have more than one machine at your disposal for running tests. Further, these machines may not be identical: some may be faster than others. For any comparison involving times, it is therefore essential to say on which machines performance testing should be performed, or it will be impossible to get reliable results.
This is done using the config file entry "performance_test_machine", which has the following form:
[performance_test_machine]
<system_resource_id>:<machine_name1>
<system_resource_id>:<machine_name2>
The machine names should be either names of machines where you want performance testing to be enabled, or the string “any”, which indicates that performance should be tested on any machine for this system resource id. Note that it may be very useful to use the key “default” here, to save lots of repeated typing.
A common move is therefore to set
[performance_test_machine]
default:any
which has the dual effect of disabling machine-based checks for all kinds of performance testing and enabling the builtin CPU time measurement (see below).
Running the test on any machine outside the list will still work, but no performance-related information will be generated or compared.
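If you only trust timings from particular hosts, you can instead list them explicitly. Here is a minimal sketch (the host names "apple" and "pear" are placeholders for your own machines):
[performance_test_machine]
cputime:apple
cputime:pear
memory:any
With this setup, CPU times are only measured and compared when a test happens to run on one of the two listed hosts, while memory figures are accepted from any machine.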
Measuring CPU time consumption directly with TextTest (UNIX only)
This is keyed in the config file by the identifier "cputime", but generates a file called "performance.<app>" for historical reasons (it used to be the only kind of performance measurement). It is switched on simply by enabling the config file entry "performance_test_machine" for the system resource id "cputime", as described above.
It uses the shell tool "time" to time the run of the system under test, which works pretty reliably. Of course, the CPU time of the whole test isn't always what you want; in that case you will need to take over the measurement yourself via the mechanism described below.
By default, only the user time is collected. You can include the system time also by setting the config file entry “cputime_include_system_time”.
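Putting these entries together, a minimal sketch enabling the builtin measurement on any machine and including system time in the figure might look like this (tolerances are configured separately, see below):
[performance_test_machine]
cputime:any
[end]

cputime_include_system_time:true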
Extracting other system resource usage from logged files
Any information present in any TextTest result file can be extracted and treated as a system resource. First, you should choose a system resource id for the resource concerned. You then need to make sure that “performance_test_machine” is enabled for this key (or “default”, naturally). Additionally, you need to tell TextTest how to extract it. This is done via the config file entry “performance_logfile_extractor”, which has the following form:
[performance_logfile_extractor]
<system_resource_id>:<text_identifier>
The <text_identifier> here is a string to be found in the file to be searched. When TextTest finds that string, it will take the word immediately following it and try to interpret it as a number (note that you need to provide the whole string that immediately precedes the number, not just a part of it as you would for "run_dependent_text"). It will also parse the format hh:mm:ss for times.
A more general form of this is also possible since TextTest 3.23. You can identify where in the string the number is found by using a regular expression group, identified using brackets. TextTest will then iterate through all the groups in the match and take the first one which seems to match its expected format.
In either case, it will then generate a file <system_resource_id>.<app> in a similar way to the performance file above.
For example, suppose we have this information logged to our file:
...
Time taken to load: 40 seconds
300 MB of memory were consumed.
...
We can then keep track of performance and memory by adding this to our config file:
[performance_logfile_extractor]
load_time:Time taken to load:
memory:([0-9]*) MB of memory
The first case assumes the number appears immediately after the string given; the second assumes it appears in the regular expression group at the start. Note that in the first case, if the second colon were omitted, the extraction would not work, as the number must come immediately after the string provided when no groups are given.
Identifying which file to search in
The file to be searched is identified primarily by the config file entry "log_file". This defaults to the standard output of the SUT, i.e. the string "stdout" or "output" depending on your naming scheme. You may need to search different files for different entries, in which case the entry "performance_logfile" is of use. This has the same form as above, except that the value is the file stem of the file you wish to extract the information from. Since TextTest 3.22 it also works to refer to a file (by filename or UNIX file expansion) that has not necessarily been collated to form part of the test comparison. For example, the following will get CPU time information from standard output and memory information from a file that matches the expansion given:
[performance_logfile]
cputime:stdout
memory:logs/*/memory.log
[end]
Choosing a system resource id and units
If you choose the system_resource_id “memory”, the number will be interpreted as a memory value in megabytes. Otherwise it will be assumed to be a time in seconds. These units can be configured by setting the config file entry "performance_unit" to whatever you want reported for your particular resource.
The special system resource ID "memory" will also cause "performance_test_machine" to be set to "any" by default, as memory consumption is assumed to be less dependent on which machine is used.
If you choose some other name for the system_resource_id, the number is assumed to be a time (in seconds or hh:mm:ss) and will be displayed accordingly. If it decreases, the change will be reported as "faster(<system_resource_id>)". This reporting can however be configured, via the config file entries "performance_descriptor_decrease" and "performance_descriptor_increase". Each of these should be a comma-separated 3-tuple of <name>,<brief_description>,<full_description>. (The easiest thing is probably to look in the table of config file settings and examine the default values for "cputime" and "memory".)
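As an illustrative sketch only (the resource id "load_time" is taken from the example above, while the unit and descriptor wordings here are made up), the relevant entries might look like this:
[performance_unit]
memory:GB
[end]

[performance_descriptor_decrease]
load_time:faster,faster,load time decrease
[end]

[performance_descriptor_increase]
load_time:slower,slower,load time increase
[end]
With these settings, memory figures would be reported in GB rather than the default MB, and changes in "load_time" would be summarised using the given descriptors rather than the generic defaults.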
Choosing the default performance measure
Having set up such collection, it's probably a good idea to set (e.g. from the example above)
default_performance_stem:load_time
so that TextTest can use this measure with various other forms of performance-related functionality (which will otherwise use the total CPU time if measured, i.e. the "performance" files described above).
The functionality affected is
  1. Which measure is displayed in the Text Info panel in the GUI when a test is selected
  2. Which measure is used in the GUI's Selection tab and equivalent command line settings (all the fields related to selection via performance)
  3. Which measure is used in the time field of junit-format batch results
  4. Which measure is used for "min_time_for_performance_force", if using parallel testing.
Generating data from existing files
When you have just enabled such resource usage extraction, you generally want to automatically extract the current values from your existing result files for all tests, creating the auto-generated system resource files. This avoids getting a lot of “new file” results first time around. There is a plugin script for this called “default.ExtractStandardPerformance”.
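For example, with an application named "myapp" as in the command near the end of this page, such a run might look like this (add whatever version or selection flags you normally use):
texttest -a myapp -s default.ExtractStandardPerformance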
Comparing System Resource Usage Files
The files for comparison are not compared exactly. Part of the point of testing things like this is that the result is never exactly the same: you need to set a tolerance. This is done via the config file entry "performance_variation_%", which has a similar format to the entries already described. It verifies that the percentage difference between the two figures is no more than the given tolerance.
There is also an entry “performance_test_minimum”, which can be used to say, for example, that a test must run for at least 5 CPU-seconds before it is worth comparing it.
By default the tests will show "failure" on both performance improvements and degradations. It is generally assumed that the user wants to be informed of performance improvements in this way, so that the expected result can be updated and such improvements preserved. However, particularly in cases where the tests are managed by people other than the developers, it can also be useful to define an acceptable worst performance and have the test always succeed if it does better than that. In that case, set "performance_ignore_improvements" to "true" and manually edit the stored performance figure to be this acceptable worst value.
(There is also a setting "performance_variation_serious_%", which works in the same way as "performance_variation_%" described above. If set to a larger value than "performance_variation_%", it indicates that a difference exceeding this value should be treated as a full failure of the test, not just marked as a performance failure. Currently it only affects the colouring in the historical HTML batch report, but at some point it will be applied in the GUI also.)
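A combined sketch of these entries might look as follows (the numeric tolerances are arbitrary examples, and it is assumed, per the note at the top of this page, that they are all composite dictionary entries keyed by system resource id):
[performance_variation_%]
cputime:3
memory:5
[end]

[performance_test_minimum]
cputime:5
[end]

[performance_variation_serious_%]
cputime:25
[end]

[performance_ignore_improvements]
default:true
[end]
With these values a CPU time difference is only reported if the test runs for at least 5 CPU-seconds and the figures differ by more than 3%; a difference of more than 25% is treated as a full failure, and improvements are ignored altogether.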
Important: on the interpretation of percentage changes
Note that TextTest's policy with percentages is to always use the "percentage increase" (defined as (<larger> - <smaller>) / <smaller>), which can be surprising at first, especially if your test shows "100% faster" as below! It was found to be easier to set the tolerances this way because it leads to a symmetric situation: 7% slower and 7% faster mean the same thing. The more immediately intuitive way of defining decreases as a percentage of the current value leads to a situation where a 100% slowdown today is counterbalanced by a 50% speedup tomorrow, which can also become hard to follow.
This can now be overridden by setting the config file setting "use_normalised_percentage_change" to "false". Percentages will then use the more immediately intuitive "percentage change" (defined as abs(<newer> - <older>) / <older>). Note that changes are then no longer symmetric, and "performance_variation_%" is also interpreted in this way, which means that a test can succeed when making a change but fail when reverting it. This setting should therefore be used with care.
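As a worked example, suppose the stored CPU time is 10 seconds and a new run takes 20 seconds, or vice versa. With the default normalised interpretation, both directions give (20 - 10) / 10 = 100%, reported symmetrically as "100% slower" or "100% faster". With "use_normalised_percentage_change" set to "false", going from 10 to 20 seconds is a 100% change, but going from 20 back to 10 seconds is only a 50% change, since the divisor is now the older value of 20.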
Approving System Resource Usage Files
When approving tests where differences in performance have been reported, it is possible to approve an average of the old performance figure and the new one, to prevent too much oscillation. To do this, fire up the "Approve As..." dialog, and select “Average Performance” from the radio button at the bottom of the dialog.
Below is what you will see in the dynamic GUI when you run tests that fail in performance in this way.

[Screenshot: dynamic GUI showing tests with performance differences]
Statistical reports on System Resource Usage
There are two plugin scripts for this. For CPU time, use the script “performance.PerformanceStatistics”. This will print the amount of CPU time used by each test to standard output. Use the option “file=<sys_resource_id>” to control which resource is reported on. You can also compare with another version, percentage-wise for each test, using the option “compv=version”. To compare memory usage of “myapp” version 1.3 with the previous version 1.2, for example, use
texttest -a myapp -v 1.3 -s "performance.PerformanceStatistics file=memory compv=1.2"


Last updated: 23 September 2014