...


Note: eWater has a suite of over 600 regression test projects that are used to check for changes to Source during software development. The regression test suite runs on almost every change made to the software code, so any code change that causes regression test failures can be tracked down easily. If a test project has trouble running, or its results differ from the expected results, then the regression test has failed. The software development team decides whether the change in results is intended. For each beta software release, there is a list of known changes to these regression tests. To view the regression test changes and a report about the regression test suite for a specific release, see Source Revision History.

You can create and run local regression tests, and it is good practice to do so for your models before you upgrade to a newer version of Source. Your projects can also be included in the eWater Regression Test Suite; please contact eWater if you are interested.

Building regression tests

The first steps are to set up the test project, create results to act as the baseline for the regression test and then build the test using the Test Builder.

...

  • Sub-folders for each scenario within your project, named S01, S02, … etc. Each of these folders contains *.csv files, which are the recorded results for that scenario. These are the baseline files for your regression test comparison, that is, your expected results;
  • A Source project file that the Test Runner loads and runs to compare against the baseline files; and
  • A Plugins.csv file that lists all plugins currently installed in Source, regardless of whether they are used by your test project. When you run your regression test in a later version of Source, all plugins listed in this file must be installed. Therefore, if your test project does not use some of the plugins, you can manually edit Plugins.csv and delete the rows listing the unused plugins. If no plugins are used by your test project, you can delete the Plugins.csv file entirely.
Figure 1. Regression tests, Test Builder
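As a quick sanity check before running a test in a later version, you can verify that the folder the Test Builder produced still has the expected layout described above. The following is a minimal Python sketch, not part of Source itself; the `.rsproj` project extension and the function name are assumptions for illustration.

```python
import os

def check_test_layout(test_dir):
    """Verify a regression-test folder has scenario sub-folders
    (S01, S02, ...) containing baseline *.csv files, plus a Source
    project file (assumed here to be *.rsproj).
    Returns a list of human-readable problems (empty if OK)."""
    problems = []
    # Scenario sub-folders: an "S" followed by digits, e.g. S01, S02.
    scenarios = [d for d in sorted(os.listdir(test_dir))
                 if d.startswith("S") and d[1:].isdigit()
                 and os.path.isdir(os.path.join(test_dir, d))]
    if not scenarios:
        problems.append("no scenario sub-folders (S01, S02, ...) found")
    for s in scenarios:
        csvs = [f for f in os.listdir(os.path.join(test_dir, s))
                if f.lower().endswith(".csv")]
        if not csvs:
            problems.append(f"{s} has no baseline .csv files")
    if not any(f.lower().endswith(".rsproj") for f in os.listdir(test_dir)):
        problems.append("no Source project (*.rsproj) file found")
    return problems
```

A check like this can catch a baseline folder that was moved or only partially copied before you invest time in a full Test Runner session.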

...

Running regression tests

You use Test Runner to run regression tests on your previously created test projects. You do this with a more recent version of Source than the one in which you created your test project. For example, you could run a regression test comparing the results generated by the last production release and the most recent beta version.

...

The Run slow regression test option is for debugging by developers, including plugin writers. By default, a standard regression test is performed: the project is loaded and run, and the results of each scenario are compared to the baseline. In a slow regression test, the steps of a standard regression test are run multiple times and combined with copy scenario and import scenario operations, to check that all parameters and inputs are loaded, copied, imported, saved and reset correctly at the beginning of each run.
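The comparison step of a standard regression test amounts to checking each scenario's freshly generated CSVs against the recorded baseline CSVs. This is a conceptual Python sketch of such a comparison, not Source's actual implementation; the function name and the numeric tolerance are assumptions.

```python
import csv

def compare_to_baseline(baseline_path, result_path, tol=1e-9):
    """Compare two result CSVs cell by cell. Returns a list of
    (row, column, baseline_value, result_value) tuples where numeric
    values differ by more than `tol`; non-numeric cells (headers,
    dates) must match exactly."""
    diffs = []
    with open(baseline_path, newline="") as fb, \
         open(result_path, newline="") as fr:
        for i, (brow, rrow) in enumerate(zip(csv.reader(fb), csv.reader(fr))):
            for j, (b, r) in enumerate(zip(brow, rrow)):
                try:
                    if abs(float(b) - float(r)) > tol:
                        diffs.append((i, j, b, r))
                except ValueError:
                    # Not a number: compare as text.
                    if b != r:
                        diffs.append((i, j, b, r))
    return diffs
```

An empty list means the scenario matches its baseline; any tuples returned point at the first places to look when diagnosing a failure.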


Figure 7. Regression tests, Test Runner


Troubleshooting regression tests

If your regression test fails, read the error message(s) and review the release notes and regression test changes for the releases between the version of Source in which you created your test project and the one in which you ran your regression test. You can also run the regression test in some of these intermediate releases to determine the version in which the errors start. If you wish to investigate the errors further, or do not have access to the beta releases, please contact Source support.
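Narrowing down the version in which the errors start is a binary search over the ordered list of releases. The sketch below illustrates the idea only; `test_passes` stands in for "run the regression test in this release and check whether it passes", which in practice you do manually with the Test Runner.

```python
def first_failing_release(releases, test_passes):
    """Given releases ordered oldest to newest, where the regression
    test passes on the first release and fails on the last, binary-
    search for the first release in which the test fails."""
    lo, hi = 0, len(releases) - 1  # passes at lo, fails at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if test_passes(releases[mid]):
            lo = mid  # still passing: failure starts later
        else:
            hi = mid  # already failing: failure starts here or earlier
    return releases[hi]
```

With this approach, a dozen intermediate releases need only three or four test runs rather than one per release.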

Updating regression tests

If the regression test reveals that your results have changed, and after troubleshooting you are satisfied that these changes are intended (e.g. caused by known software improvements), you can overwrite the baseline data saved with your regression test by enabling Overwrite Results and then clicking Run again. This updates your regression test to the version of Source you are currently using for testing.
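Conceptually, Overwrite Results replaces the baseline CSVs in each scenario sub-folder with the newly generated results. The sketch below shows that effect on the folder layout described earlier; it is an illustration with assumed names, not how Source performs the update internally.

```python
import os
import shutil

def overwrite_baselines(test_dir, new_results_dir):
    """Replace baseline CSVs in each scenario sub-folder (S01, S02, ...)
    of `test_dir` with freshly generated results laid out the same way
    in `new_results_dir`. Returns the relative paths that were updated."""
    updated = []
    for scenario in sorted(os.listdir(new_results_dir)):
        src = os.path.join(new_results_dir, scenario)
        dst = os.path.join(test_dir, scenario)
        if not os.path.isdir(src):
            continue
        os.makedirs(dst, exist_ok=True)
        for f in os.listdir(src):
            if f.lower().endswith(".csv"):
                # copy2 preserves timestamps along with contents
                shutil.copy2(os.path.join(src, f), os.path.join(dst, f))
                updated.append(os.path.join(scenario, f))
    return updated
```

After such an update, the test will pass against the current version of Source, so keep a copy of the old baselines if you may still need to test against earlier releases.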