Documentation for 3.27
TextTest Course Material
A series of hands-on exercises to get a firmer grasp of the TextTest tool
This document assumes you have already installed TextTest as described in the Installation Guide. To do these exercises, start by downloading the "systems under test" and test data from here. Unzip it and then set the environment variable TEXTTEST_HOME to point at its "tests" directory.
For each exercise there is a subdirectory of "tests" containing the program you are to test and any test-data. Some exercises
use TextTest to create tests under that directory; others start with an existing suite and make it work or improve it.
Exercise 3 is the exception: it has no directory because the point is to modify the tests you have made in the other exercises.
There are a total of 8 exercises. It is suggested to start with Exercise 1 and then Exercise 2, which covers most of the things a normal test suite is likely to run into. The others are mostly useful when you wish to use the features they are aimed at exploring.
Note that Exercise 2 comes in two versions that cover mostly the same functionality. The suggested version now is the "TextTest Koans"
which involves fixing up an existing test suite by filling in "blanks" in various files, and works more by experiment and trial and error
than by following detailed instructions. The original version is more like the other exercises and involves building up the test suite from scratch, with detailed instructions.
You can also download my own solutions
to these exercises in case you get stuck or just prefer to browse a solution rather than try to create one yourself...
Before you start it might be worth setting up TextTest's text editor to use something you're familiar with. By default it will use "emacs" on POSIX-based systems and "notepad" on Windows. To e.g. use "gedit" instead, create a file at ~/.texttest/config containing the line
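view_program:gedit
(The setting name "view_program" is an assumption; check the TextTest configuration reference if it doesn't take effect.)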
Exercise 2 (new version): The TextTest Koans
You can do this via
texttest -a lesson1,lesson2
The aim of the Koans exercise is to select each test in turn, and make them all succeed by filling in the blanks.
Each of them contains, in one of its files, something looking like "__fill in this blank__" or just "____".
Replace these with the correct content to make the test work. Many of them contain additional hints in their
Description; look at the bottom right pane when the test is selected.
After you have got each test working, discuss briefly with your partner what you learnt about TextTest from it.
(NOTE: The first two tests do not have any blanks, they just get you used to running a test and approving a test. Pressing "Approve" to make any test after the second one green is cheating!)
Exercise 2 (original version): The Search/Replace Script
Change directory to "tests/ex2_searchreplace". Here you will find the script "searchreplace.py" and a
file "file.txt" which is meant as test data. Start by trying it out a bit so you understand what it does
and what you're trying to test. For example, try something like the following:
gewoia : cat file.txt
gewoia : ./searchreplace.py bar foo file.txt
searchreplace.py running at 22Oct11:47:03
Replacing 'bar' with 'foo'
Replacing in /nfs/vm/texttest/geoff/course/tests/searchreplace/file.txt
OK to commit?
gewoia : cat file.txt
It's probably easiest to close the TextTest static GUI from "Hello World" and
restart it with
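texttest --new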
which will ask you for the details of your new program. (Starting without arguments
again will reload your hello world test). You can also do "Add Application" with the
Hello World tests still loaded if you prefer.
Select the script, choose the "ex2_searchreplace" directory as subdirectory, choose a suitable extension as you
did for Hello World (don't choose "txt" as that will cause confusion with "file.txt").
The easiest test to specify is one that contains no arguments. Create a test as for Hello
World. You should get some kind of "Usage" error from the script. Approve this behaviour as correct.
This time enter e.g. "foo bar file.txt" (if you changed the file as in my example above)
in the "Command Line Options" field in the "Add Test" dialog box. (Or copy the test,
right click "Definition" files and add an "options" file with the same contents). Either
way, you get a test containing an "options" file. If you run it you will get different
text, probably the first two lines of the "trial" output from above. It won't actually
do any replacement yet (bear with it until the next step). Approve the behaviour.
Run the test again. Note that it fails, because it records the current time which has now
moved on. We need to tell TextTest to ignore this difference.
To rectify this you'll need to edit your "config file", which you do by selecting the "config" tab
(top right in the "static GUI"). In this tab you will find a file "config.<extension>" under "Files For <your application name>".
If you have defined a personal configuration file it will also be present: don't edit that
as it is specific to your user. Double-click the application file described, which will open it in the editor described in the introduction. A lot of what's hard about TextTest is editing this
file correctly and most of the exercises involve doing so.
Read the documentation on filtering the output; there are lots of more or less sophisticated
ways to do this, from ignoring the entire line to replacing any date of that format via a regular
expression. Choose one and proceed to the next step when you can run the test and it goes green. You can test any changes
you make without needing to rerun the test every time, by pressing "F5" (Recompute Status) in the dynamic GUI, which will
rerun the filtering on an existing test run. The filtered versions of the files can be viewed by right-clicking
on the files also.
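For example, a minimal sketch that simply throws away the whole line containing the timestamp (assuming "stdout" is the stem of the result file being filtered, as referred to later in these exercises) would be an entry like
[run_dependent_text]
stdout:searchreplace.py running at
[end]
A regular-expression replacement that keeps the line but masks the date is neater, and the filtering documentation describes how to do that.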
You may wonder why the last test didn't try to update "file.txt". The reason
is that TextTest doesn't yet know that this file is supposed to be test data.
The test is running in TextTest's temporary "sandbox" environment where there is no such file.
We should rectify this by ensuring the sandbox environment gets populated with the file. TextTest identifies test data via local file names (in this case "file.txt") and searches the test and then each parent suite in turn for such a file. So you can tell it to pick up this file just by adding
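copy_test_path:file.txt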
to your config file as above. To understand better what is happening here you can look up "copy_test_path" in the TextTest configuration reference
for help (or the page on "Test Data"
for a wider overview).
As there is already such a file in the "root suite" (the top level of the hierarchy)
that file will now be copied for all tests. So the test you made in step 4 will now
behave differently. If you want to make a new test and preserve the old test as it was,
make a copy of the old test using TextTest, and then go to the shell and move "file.txt" to
the appropriate directory. (This is a good opportunity to explore the file structure
TextTest is creating for you a bit: everything is plain text files and can usually be edited fairly
easily outside of the tool also.)
If you run this test again it will fail: the reason is that it writes out the absolute path to
the file it has edited, so you can see where the "TextTest sandbox" is in this case. TextTest has a built-in
filter for this path as many applications need to filter it. Look for "INTERNAL" in the documentation and
try to replace the path with something, so that we're still verifying that the correct file is being edited. View
your filtered file as before and make sure it looks OK.
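A sketch of the kind of entry involved, assuming the "{INTERNAL writedir}" keyword described in the documentation can be combined with a REPLACE clause ("<sandbox>" is just an arbitrary marker text):
[run_dependent_text]
stdout:{INTERNAL writedir}{REPLACE <sandbox>}
[end]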
The edit is rejected in the test above because the test asks for a response on standard input which is not
provided. So take a new copy, select it and right click on "Definition Files", picking "Create/Import File".
Select "stdin" for standard input, create a new file and type "y" in it. This will provide this response to
standard input. If you run the test
now the text saying "Not editing the file" will go away.
The test is hopefully now editing the file as we request, but we need to prove that. Start by setting
"create_catalogues:true" in the config file, which will give us a check
on all the files it's producing. This will affect all 4 tests so you should run them all.
You should get 4 rows all saying "catalogue new". On the right you have a status summary which is worth getting
to know. There should be a row saying "Group 1: 3". This is TextTest's way of saying these 3 tests have changed
in the same way. Click on this row and it will select the tests in the test view. If you view the "Test" tab
you can see that the first three tests are now saying that no files were changed, as we expect. You can now approve
them without needing to examine each one individually.
Hopefully our new test will tell us that file.txt is being edited. Approve it.
That's good, but we still can't see the new text in the file itself. To do this, refer to
the docs section on "tests that write files"
for how to do this using "collate_file".
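A sketch of what such an entry might look like, where "edited_file" is just an illustrative name for the new result file to be created:
[collate_file]
edited_file:file.txt
[end]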
You've hopefully got 3 or 4 tests that work now. You may well have several identical files for different
tests. Of course, this isn't a problem for this size of testsuite but can become a major pain when you've got
a few hundred tests.
The way to reduce this duplication is to rearrange the hierarchy. If several tests require the
same contents in a particular file, create a Test Suite and move those tests to it. You can then have a single
copy of the file in the test suite instead of several identical ones in each test.
Your last two tests could move to a suite containing "file.txt", for example. You could also define the "options"
file at the root suite level and clear them in the single test that doesn't want any command line options (search for "Options Files"
in the Test Suite Guide).
Exercise 3: Setting up unattended test runs
Start by running, e.g.
texttest.py -b continuous -zen
which will run all the tests from the other exercises from the command line. The -zen flag gives you coloured console output so you can
more easily see when tests succeed and fail.
This logfile-only report isn't very useful on its own. The main way to view unattended runs is via the HTML report now.
Read the information about generating HTML reports
and try to produce one that
looks something like the example linked there. You might also want to try to make sure all your applications
write their results on the same page given that they're quite small.
Note you will need to add configuration
entries to all your "config" files, though you probably won't need the TextTest GUI. Note also that by default
runs are identified by date, so once you have a page with a single column, further runs won't appear there
unless you explicitly name the run (-name on the command line)
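The exact entries are described in the documentation linked above, but the general shape, assuming the settings involved are "batch_result_repository" and "historical_report_location" and using made-up paths, is something like
[batch_result_repository]
continuous:/some/shared/path/batchdata
[end]
[historical_report_location]
continuous:/some/shared/path/reports
[end]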
(Besides a colourful logfile and the HTML report, other options include a basic plain-text report sent by email, or outputting results as XML
so that tools like Jenkins can display them in their own format. But we're not doing those today)
It's not so nice that we've had to copy the same information to several different files. Try to extract it out to a
separate file and "import" it into your config files. Look at "import_config_file" in the TextTest configuration reference
for information on how to do this.
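For example, with a shared file (hypothetically called "common_site.config") visible to all the suites, each config file would then only need a line like
import_config_file:common_site.config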
Exercise 4: The Regular Expression Koans
You can do this via
texttest -a lesson3
This works in much the same way as Exercise 2. The tests under "Filtering" are running "grep -E -v" as their system under test,
which simulates filtering with regular expressions in TextTest. Replace the underscores in the "options" files with a regular
expression which will filter away the relevant lines and make the test green. Do not Approve or change anything else! Read
the descriptions of the tests for further hints, in the bottom right pane.
The tests under "Replacing" use TextTest's "run_dependent_text" filtering, encountered in Exercise 2. By replacing the blank in the
replace clause in the config files with something appropriate, you can see the result in the output, and practice with back references
that are very useful for replacement.
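As an illustration of the kind of thing back references make possible (this is not one of the koan answers, just a sketch based on the Exercise 2 output), an entry like
[run_dependent_text]
stdout:Replacing '(.*)' with '(.*)'{REPLACE Replacing '\1' with '<new word>'}
[end]
keeps the first captured word in place via \1 while masking the actual replacement word.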
Output from TextTest tests frequently changes in predictable ways. Then it can be very useful to update lots of stored behaviour
and configuration with a single regular expression replacement.
Play around with this feature a bit (from the Actions menu) and try to change the test results.
Note that each row is treated independently so it's possible
to replace multiple lines, and also to remove and add lines. Try this out, and experiment with the back references you just learned about also. The feature will perform the replacement in a dynamic GUI run so nothing will be changed permanently unless you do Approve there.
Exercise 5: Known Bugs and Jira Integration
Log in to Jira and create a new issue using "Product Development Sandbox" as the project and filling in all the required fields.
At the end you should get a bug ID like SANDBOX-XXX which will be used later.
There are a couple of tests failing and one going green. Look at the differences to see what could have gone wrong.
Right-click the first failed test, "Test2", from the dynamic GUI and select "Enter Failure Information". In the dialog that comes up, enter
a suitable value for the "Text or regexp to match" text field (note that by default, it will search the "stdout" file for this text).
In "Extract info from bug system" select "Jira" and enter the bug ID created in 5.1.
If you do this correctly, the test will change status to "Known bug" and go "half-green", i.e. green in the first column and red in the
second. This indicates to future people running it that the failure is known and that they don't need to worry about it.
Jira information is now shown in the "Details" column of the test tree view and in the Text Info window at the
bottom right of the dynamic GUI.
Select Test3 and repeat the same steps as you did in 5.4. Adjust the text and the file to search as desired to match this new behaviour.
Note that in this case the check box "trigger even if other files differ" is checked; take a look at the tooltip for an explanation.
If you uncheck it the bug will not match, because several files differ in this case.
If you select Test2 and Test3 in the static GUI, you will see that a new file "knownbugs.kb" has been created there.
Double-click one and see what has been created.
Information about the knownbugs file format can be found in the documentation.
It is often convenient to adjust the contents directly in the files.
This test's behaviour is familiar: we have another test (Test2) with the same behaviour.
Rather than report the same info again, we find the existing mapping and apply it to this test.
Right-click on it and choose "Find Failure Information". Select the bug you reported and
leave "Apply to whole suite" as the default.
TextTest's hierarchical structure allows for file placement to determine which information applies to which
tests. By simply moving the file here it now applies to all the tests.
Log in to Jira and resolve the issue.
Note that all the tests that previously had known bugs are now "fully red" again.
TextTest checks status in Jira, and if bugs are marked as fixed, it complains
and demands action. Check the status tab: they are now referred to as "internal error".
Sometimes you get this situation when the bug has been fixed, but this fix is not available in your environment because it isn't
released yet. In this situation we need to stop TextTest failing and replace the Jira reference with a comment.
Select the root suite in the static GUI, right-click and choose Enter Failure information. Put the same text matching from Test2 in,
but make sure that "Extract info from bug system" field is "<none>" and
clear the "bug ID" field. Enter suitable text in the "Full description" and "Few-word summary" fields at the bottom of the dialog
Open the knownbugs.kb file in a text editor and remove the section referring to the previous reported bug.
Rerun the tests and preview the results. Test2 and Test4 show your comment now instead of the Jira number and are back to "half-red".
This test is known to be non-deterministic, in that a key output line is sometimes missing.
So run 10 copies of it (use the "Times to Run" field in the Running tab) to see this effect.
We could match this in a few ways:
1. use the "NOT present" switch at the top
2. match using the "Full difference report", and paste text directly from the preview window
3. as (2) but provide the entire report and use "Exactly as given".
Text that is missing is hazardous and needs to be treated with care: after all, it could be missing because of a crash on startup
rather than for the issue we're thinking about now. The first two options are prone to this. Note that the whole of the full difference
report includes the entire bottom right pane contents EXCEPT the first line, which is additional explanatory text!
If we assume the problem is in the environment, rather than something that can be fixed in the system, it can be useful to trigger a
rerun. So set the rerun count to 2 or 3 also.
If you run Test5 10 times again you should now get 10 green. Some of them will say "after 1 rerun" in their details column
if you expand them in the test tree.
If you want to actually view how the known bug looks you can comment the rerun line out from the knownbugs file and rerun.
Exercise 6: The SWT/Eclipse GUI
Instructions for how to do this can be found here
From the command line, type
This is an Eclipse/SWT example application which allows you to create an address book with contacts in it. Play around a little
with it so you know what you're about to test.
Note that the GUI testing tutorial in the TextTest documentation
has a large overlap with this, although it is based around a different app. If anything here is unclear
it may help to look at that for more detailed descriptions and screenshots.
Run "texttest --new" again and fill in the initial dialog. Fill in "AddressBook" as the Java Class name and select SWT for the GUI testing option.
Don't try to locate the AddressBook program as an executable, as you may have done with other programs in previous exercises!
Then, create a test as before, and do "Record Use-Case" on it. Create a contact in the address book, fill in some data (you don't
need to fill in all the fields) and close the Address book. When you do this, you will be prompted to enter "use-case names" for
the actions you have performed, and maybe to adjust how they map to the GUI. Fill in this dialog with suitable names.
If you made a mistake recording, you should just press Quit at this point. If you like what you created, press Approve and then Quit.
The test will then be replayed in the background and the expected behaviour collected. When it is done, open the stdout file and
examine it so you can see what is being compared. Visit the config tab also and view the UI map file (ui_map.conf) which is what
has been created by the usecase name entry dialog.
If you press "Run" now, the test will run without the GUI showing, using the virtual display program "Xvfb". Xvfb produces warnings
on some Linux systems, which you might need to add run_dependent_text for as in Exercise 2. To see the test execute in the GUI, check
the "Show GUI" option, and increase the "Replay pause" setting a bit so it goes slowly enough to view it.
This time we want to test the search functionality. We could just create our contact again, but that would be a pain, and we might
make the data in it subtly different by mistake. So instead of recording from scratch, we will start from part of the previous test.
Copy the test you have created, and edit the copied usecase file to remove the step that closes the GUI. Then run the test with the
"Show GUI" button checked. It will create the contact and then record anything else you do. Search for the contact and verify it
gets selected. Enter the new usecase names and approve the usecase file as before. This time you need to run the test by hand to collect
the correct information in the stdout file.
This latest test now has a rather mechanical description of what is happening. It would be useful to raise the abstraction level and get
a very succinct description. This would also allow us to easily create tests containing more contacts.
Double-click the usecase file in this latest test. This brings up the StoryText Editor dialog. Select all the steps that consist of
creating the contact, right-click and press "Create shortcut".
It will suggest a very long name, referring to all the data you entered. Change it to something sensible and see how the contents evolve
in the preview window below. Note that if you delete any of the data references this data will be hardcoded: if you leave them in
they will be treated as variables.
You can also create a shortcut for the search function.
When you close the StoryText editor, TextTest will ask to insert the created shortcuts into other tests. Answer Yes to this and it should
insert into the test we created first.
Hint: do a partial recording again, and write the second contact by hand using the shortcut that is now there, varying the data.
StoryText assumes everything is important until you tell it otherwise. Under "Definition Files", create a "storytext_options" file
containing the string "-X Menu". This should mean that this test will not care about menu changes in future.
Exercise 7: Performance and Memory Testing
This can be done via
texttest -a cpumem
There are three tests, which take a random amount of time to execute and report some other timing and memory usage in their standard
output. They should succeed, as they define some filters which prevent the varying resource usage from failing the tests.
See the docs on performance and memory testing for how to proceed.
The second test runs nearly instantaneously, so you should set a minimum using "performance_test_minimum" to prevent this
test being included in the performance runs. The others vary somewhat, set the tolerance ("performance_variation_%") accordingly
until they're reliably green. Try not to overdo it, obviously 1000% tolerance will work but it won't be very useful in practice!
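A sketch of the sort of entries involved (the numbers are placeholders to tune against your own runs, and whether the key should be "cputime", "memory" or another resource name is explained in the configuration reference):
[performance_test_minimum]
cputime:2
[end]
[performance_variation_%]
cputime:15
[end]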
This involves using the setting "performance_logfile_extractor". When you have results, try to set tolerances accordingly as above
until the tests are reliably green. Note it is possible to store the average of the stored and received performances, by doing
"Approve As" and then checking the relevant option. This helps make sure the stored performance is in the middle of the range of
performances, which allows you to have a lower tolerance than if it's at one end of the range.
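The entry maps a resource name of your choice to the text that precedes the number in the program's output, along these lines (both the name and the search text here are made up):
[performance_logfile_extractor]
own_time:Total time taken :
[end]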
Do something similar to what you did in Exercise 3 for these tests. Run two or three named runs to gather some data. The data is
more interesting if the tests still have some failures sometimes, so you can sabotage the tolerance a bit if you like!
You can then generate an additional page with just the various performance and memory data on it. As there aren't many tests it makes
sense to try and make sure this is on the same page. You can look up the config file settings for "historical_report_resource_pages"
in the configuration file references
to work out how to do this.
Exercise 8: CaptureMock/The Continuous Integration Script
This exercise assumes you have the Mercurial version-control system
and the GCC C compiler
installed. If you don't, you will need to install them.
In the directory for exercise 8, under scripts/automatic_build.py you will find a small "continuous integration"
script. The basic idea is to update some code (in fact a C hello world program) from Mercurial source control, if
there are changes, trigger a build on several machines in parallel, and send an email if any of them fail.
The aim of the exercise is to create repeatable TextTest tests for this apparently hard-to-test script without
even making any changes to it...
Go to the ex8_ci_script directory and run "scripts/automatic_build.py".
(It expects to be run from this directory.) There are no updates from source control,
so it does not do anything. Note however that it created a timestamped directory under "logs"
containing a file showing what the source control did.
Run texttest --new, select the script above, check the box to enable CaptureMock and make sure you choose
"ex8_ci_script" in the subdirectory field,
otherwise it won't find the test data which is there! Create a test for no changes, as done before.
The script tries to update "source" from "repo" so you'll need to add both of these as test data
as you did in exercise 2. "repo" can be linked with "link_test_path" as we don't expect the script
to make changes there.
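For example, something like the following entries, matching the directory names the script uses:
copy_test_path:source
link_test_path:repo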
It will however fail if you run it again, because it tells you about its
log directory which is timestamped. Filter it in the same way as you did with exercise 2.
The test is now repeatable, but it tells us it's writing some logs,
which we can't see. Let's make sure they're sensible. Set
"create_catalogues:true" in the config file as before, which will give us a check
on all the files it's producing. It shows us we're creating a file
"src_update" in our timestamped log directory, and that some Mercurial control
file is being edited. Generalise the filter for the timestamp so it filters
the catalogue file also (you can duplicate it but it's neater to use a file-expansion
wildcard in the key name). We don't care about the Mercurial control file, so tell
TextTest to ignore changes there by setting "test_data_ignore:.hg".
We should now check what's in src_update. Use "collate_file" as before to make this file
part of the baseline for the test. You'll need to use a file expansion this time: note
that directories beginning with "." do not match the simple expansion "*", so you'll need
to provide part of the name also.
There is one problem still: the test still relies on the Mercurial
checkout ("source") being up to date. You should capture this state using CaptureMock so that
the test doesn't fail if further checkins are made using Mercurial. Read the
CaptureMock documentation for guidance. You should record the interaction with the "hg" program and check it looks sensible. You'll need to filter the
sandbox directory too, but we did that in exercise 2 also.
We now have a perfect test for no changes in source control!
Investigate what the script does in these circumstances outside of TextTest first,
so you understand what you're testing. Go to the shell in the exercise directory.
As we've seen, the script uses Mercurial ("hg") to update the directory "source" from the directory
"repo". So trigger a change and see what happens. Make an edit in
repo/main.c, check it in via "hg commit -m 'change' -u 'me' repo", and then rerun
scripts/automatic_build.py. The local build should succeed, the remote
one should fail (can't reach "my_other_machine" / SSH isn't installed) and an email should be delivered
(though as we saw in exercise 3 this may not work, depending on your machine setup).
We can now add a test for this. Trigger another change as we did above, but create
a test instead. Note that the "source" directory will be copied before each test
run and the updates performed on the copy, so the test can be run repeatedly
without needing to do more checkins.
If you've handled Mercurial correctly in step 8.5 you should be able to
capture the current Mercurial behaviour and protect your tests from future
changes in the repository also. Note that TextTest also captures the file
edit made by Mercurial and replays it, even when you run the test without
running Mercurial for real.
When the test for the build triggering and succeeding is working, you can
then deliberately introduce a compilation failure and repeat, to create a
test for the build failing.
You should now have 3 repeatable tests, congratulations!
Each time you run the tests where builds fail it tries to send an email. We probably
don't want to be sending these emails for real, but we do want to check that
they're sent correctly. All the more so if our "real" mail sending is broken and
we can't see it being sent at all...
Our test program is written in Python so we can use a feature specific to Python programs
that can intercept the email-sending module "smtplib" in a similar way to how we handled
the command-line "hg" program above. See here
for how to enable Python interception using CaptureMock.
We want to "record" the email sending now, but we don't want to have to set up the Mercurial repositories and re-record the
Mercurial interactions again. So this time we choose "Mixed Mode" from the Running tab, which will replay what we have and
record what we don't have. In this way we can check the email arrives and looks right, and then when running the test check our
interaction with the "smtplib" module remains the same as when we recorded it.
The remote build is always failing: it's trying to reach a machine that
doesn't exist with ssh.
Create a fake "ssh" program as "executable test data" for the "build
succeeds" test, as described on the "mocking" documentation page, so that
we have control of this. Just write a script in any language you want,
and make sure that it has execute permissions.
Your "fake ssh" should probably say what machine it's supposed to be
running on, and perform the build locally, remembering to pass on the
exit code which the build script makes use of. If you haven't done so
already, collate the remote build log also so you can see the text you write out.
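For instance, a minimal Python sketch of such a fake "ssh" (the argument handling is a guess at what the build script passes; adapt it to the recorded traffic you actually see):

#!/usr/bin/env python
import subprocess
import sys

# Assume the first argument is the machine name the script is trying to reach
machine = sys.argv[1]
print("Fake ssh: pretending to run on " + machine)

# Run the rest of the command line locally and pass on its exit code,
# since the build script makes use of it
sys.exit(subprocess.call(sys.argv[2:]))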