Test framework for pyglet. It reads details of components and capabilities from a requirements document and runs the appropriate unit tests.
    python tests/test.py top app graphics clock resource   # these all run automatically
    python tests/test.py font media text
    python tests/test.py image
    python tests/test.py window
Because the tests are interactive, they can take quite a while to complete. The ‘window’ section in particular takes a long time. It can be frustrating to get almost through the tests and then something gets messed up, so we suggest you run the tests in sections as listed above. If you are curious, the sections are defined in tests/plan.txt.
Here are the different sections and how long they take.
    Section     Time to Run
    ----------  -----------
    top         automatic
    app         automatic
    graphics    automatic
    clock       automatic
    resource    automatic
    font        1 minute
    media       1 minute
    text        1 minute
    image       5 minutes
    window      10 minutes
First, some definitions:
A capability is a tag that can be applied to a test-case, which specifies a particular instance of the test. The tester can select which capabilities are present on their system; and only test cases matching those capabilities will be run.
There are platform capabilities “WIN”, “OSX” and “X11”, which are automatically selected by default.
The “DEVELOPER” capability is used to mark test cases which test a feature under active development.
The “GENERIC” capability signifies that the test case is equivalent under all platforms, and is selected by default.
Other capabilities can be specified and selected as needed. For example, we may wish to use an “NVIDIA” or “ATI” capability to specialise a test-case for a particular video card make.
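To make the selection rule concrete, here is a minimal stdlib-only sketch (the `select_cases` function and the case list are hypothetical, not part of pyglet's harness), assuming a test case runs when at least one of its capability tags is among the selected capabilities:

```python
# Hypothetical sketch of capability-based test selection; not pyglet's
# actual API. Each case is a (name, capability-tags) pair.

def select_cases(cases, selected):
    """Return the names of test cases whose tags overlap the selected set."""
    selected = set(selected)
    return [name for name, caps in cases if set(caps) & selected]

cases = [
    ("window.FULLSCREEN_TOGGLE", {"WIN", "OSX", "X11"}),
    ("image.NVIDIA_DXT", {"NVIDIA"}),
    ("text.WRAP", {"GENERIC"}),
]

# On an X11 machine with the default selection, only matching cases run.
print(select_cases(cases, {"GENERIC", "X11"}))
# → ['window.FULLSCREEN_TOGGLE', 'text.WRAP']
```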
Some tests can generate regression images, so you only need to run through the interactive procedure once. During subsequent runs, the image shown on screen is compared with the saved regression image, and the test passes automatically if they match. Command-line options enable this feature.
By default regression images are saved in tests/regression/images/
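The idea can be pictured with a small stdlib-only sketch (hypothetical; pyglet's real harness captures and compares actual screen images): on the first run the capture is saved, and on later runs a matching capture passes automatically.

```python
# Hedged sketch of the regression-image flow described above; this is an
# illustration, not pyglet's implementation. "Images" are plain bytes here.
import os

def check_regression(capture: bytes, path: str) -> bool:
    """Return True if `capture` matches the stored regression image."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            return f.read() == capture
    # First run: save the capture and fall back to interactive confirmation.
    with open(path, "wb") as f:
        f.write(capture)
    return False
```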
The test procedure is interactive: this is necessary for the many GUI-related tests, which cannot be completely automated. With no command-line arguments, all test cases in all sections will be run:

    python tests/test.py
Before each test, a description of the test is printed, including what you should look for and what interactivity is provided (including how to stop the test). Press ENTER to begin the test.
When the test is complete, and assuming there were no detectable errors (for example, a failed assertion or an exception), you will be asked to enter [P]ass or [F]ail. You should fail the test if the behaviour was not as described, and enter a short reason.
Details of each test session are logged for future use.
After the command line options, you can specify a list of sections or test cases to run.
    python tests/test.py --capabilities=GENERIC,NVIDIA,WIN window
This runs all tests in the window section with the given capabilities. A single test case such as FULLSCREEN_TOGGLE can also be run on its own without prompting for input, which is useful during development.
A single test script can also be run directly, outside of the test harness. This is handy for development; it is equivalent to specifying --no-interactive.
Add the test case to the appropriate section in the test plan (plan.txt). Create one unit test script per test case. For example, the test for window.FULLSCREEN_TOGGLE is located at:
The test file must contain:
During development, test cases should be marked with DEVELOPER. Once finished, add the WIN, OSX and X11 capabilities, or GENERIC if the test is platform independent.
Your test case should subclass tests.regression.ImageRegressionTestCase instead of unittest.TestCase. At the point where the buffer (window image) should be checked or saved, call self.capture_regression_image(). If this method returns True, you can exit straight away (the regression test passed); otherwise, continue running interactively (a regression image was captured, so wait for user confirmation). You can call capture_regression_image() several times; only the final image will be used.
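The control flow above can be sketched with a stdlib-only stand-in. `RegressionStore` and the fake test case below are hypothetical; in a real pyglet test you would subclass tests.regression.ImageRegressionTestCase instead:

```python
# Stdlib-only sketch of the capture_regression_image() control flow; these
# classes are illustrative stand-ins, not pyglet's implementation.
import unittest

class RegressionStore:
    """Holds previously confirmed 'images' (here, plain bytes)."""
    def __init__(self):
        self.saved = {}

    def capture(self, name, buffer):
        # True means the capture matches a stored image (regression passed);
        # otherwise the new capture is stored for later confirmation.
        matched = self.saved.get(name) == buffer
        self.saved[name] = buffer
        return matched

STORE = RegressionStore()

class FakeFullscreenToggleTest(unittest.TestCase):
    def test_toggle(self):
        buffer = b"rendered-frame"  # stands in for the window image
        if STORE.capture("FULLSCREEN_TOGGLE", buffer):
            return  # regression test passed; exit straight away
        # Otherwise the harness would continue interactively and wait
        # for the user to confirm the captured image.
```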
The tests have to be processed by 2to3 in order to run them with Python 3.
This can be done with:
2to3 --output-dir=tests3 -W -n tests
Then run the tests in the tests3 directory.