Regression testing

Regression testing allows users and developers to identify when modifications to the software lead to changes in model results. A regression test is a project file that has been converted to a format that saves the project together with the results of selected Source scenarios as 'baseline' results. The user can then run the regression test in later Source versions, and Source will notify the user if the results differ from the baseline results.

Regression tests seek to ensure that: 

  • Changes (such as bug fixes) in Source do not introduce new or additional problems, 
  • Old projects are compatible with future versions of Source, and
  • Software changes do not inadvertently alter correct results.

Note: eWater has a suite of over 600 regression test projects that are used to check for changes introduced in Source during software development. This regression test suite is run on almost every change made to the software code, allowing any code changes that cause regression test failures to be readily identified. A regression test fails if the test project has trouble running or its results are not as expected. eWater's software developers and hydrologists then decide whether the change in results is intended. Each Beta and Production software release includes a list of known changes to these regression tests. To view the regression test changes and a report about the regression test suite for a specific release, see Source Revision History.

Source users can also create and run local regression tests. Establishing local regression tests is good practice, as it allows users to identify potential result changes when upgrading models to newer versions of Source. In some circumstances, these local tests can be included in the eWater Regression Test Suite; please contact eWater if interested.

The user interface for the Regression Test tool was developed for both Source users and software developers, and includes functionality for both. Users can access all of the available functionality from the interface, but some of it is intended for developers to debug the software or investigate system performance, and its results are not meaningful to regular users. This document does not cover the functionality that can only be used by the software developers.

Regression testing involves four steps: 

  • Building regression tests, 
  • Running regression tests, 
  • Monitoring and assessing the results of regression tests, and  
  • Determining the implications of the regression test results. 

How to build and run regression tests is described below, along with tips for troubleshooting errors and updating tests. The final two steps of the process are undertaken by the user.

Building regression tests

To build a regression test, you must first set up the test project and create the results that will provide the baseline for comparison. Once this is done, the regression test is built using the Test Builder. The following steps illustrate this process: 

  1. Open or create a project with the scenarios that you want to test.
  2. For each scenario that you wish to test: 
    1. Before running, ensure that you are recording only the minimum set of parameters relevant to what you want to test in your regression test,
    2. Configure and run your scenario, noting that you must run at least one analysis type (e.g., Single analysis) for each of the test scenarios, and
    3. Ensure that all the recorded results are giving you the answers you expect.
  3. Open the Regression Test Builder (Figure 1) using Tools » Regression Tests » Test Builder.
  4. Click on the ellipsis button (…) to open a Select Folder window and navigate to the folder (e.g., C:\) where you want to save your baseline data and test project for the regression test. Click the Select Folder button to confirm.
  5. Select one option (e.g., Fast) from the Configuration Type dropdown (Figure 1), select the desired options from the Scenario and Analysis box (e.g., Automatic Calibration - Single Analysis), and leave the Ignore checkbox unchecked.
  6. Click Save to save the Test Builder files to the chosen folder location. 
Figure 1. Regression tests, Test Builder

Figure 1 is an example from a Source project (e.g., RegressionTests_example_V520.rsproj) with two scenarios: (a) Automatic Calibration and (b) Manual Calibration. The Automatic Calibration scenario is run under the Single analysis analysis type, while the Manual Calibration scenario is run under Flow Calibration analysis.

Once the user saves, the Test Builder will create a folder with the same name as the Source project (e.g., RegressionTests_example_V520 in C:\) that contains:

  • Sub-folders (e.g., “Automatic Calibration”, “Manual Calibration”) with the same name as each run scenario in the user’s Source project. Each scenario sub-folder contains additional sub-folders, each labelled with the selected run configuration analysis type (e.g., “Single analysis”/“Flow Calibration analysis”). Each of these sub-folders contains csv files holding the recorded results for that scenario run. These csv files are your expected results and serve as the baseline files for the regression test comparison.
  • A Source project file (e.g., RegressionTests_example_V520.rsproj in C:\RegressionTests_example_V520\) that the Test Runner will then load and run to compare to the baseline files. 
  • A Plugins.csv that lists all the plugins currently installed in your copy of Source, regardless of whether they are used in your test project. If you run your regression test in a later version of Source, you need to have all plugins listed in this file installed. If your test project does not use some of these plugins, you can manually edit Plugins.csv and delete the rows listing the unused plugins; if no plugins are used by your test project, you can delete the Plugins.csv file altogether (a hypothetical example of the file is sketched after this list).
  • An XML file whose name combines the Source project file name and “.rgt.xml”. This is the regression test setup file.
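
The exact layout of Plugins.csv may vary with the Source version; as a purely hypothetical illustration, assuming one installed plugin assembly per row:

```
C:\Program Files\eWater\Source\Plugins\CommunityPlugins.dll
C:\Plugins\MyLocalPlugin.dll
```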

The regression test folder (e.g., C:\RegressionTests_example_V520\) and its sub-folders are self-contained, and can be shared with others for regression testing. 
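
Using the example above, the resulting folder is laid out something like this (the recorded-result file names are placeholders, since they depend on which recorders were enabled):

```
C:\RegressionTests_example_V520\
  Automatic Calibration\
    Single analysis\
      <recorded result>.csv               (baseline results)
  Manual Calibration\
    Flow Calibration analysis\
      <recorded result>.csv               (baseline results)
  RegressionTests_example_V520.rsproj     (test project)
  Plugins.csv                             (installed plugins)
  RegressionTests_example_V520.rgt.xml    (regression test setup file)
```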

The Regression Test Builder interface (Figure 1) includes a range of information, which is described below: 

  • Details frame: 

The fields in this frame are read from the Source project details and are editable.

Author - identifies the author of the regression test; by default this is filled automatically from the Source project file. The content is editable.

Is Test Failure - if ticked, the Test Runner checks whether a previous regression test failed and displays the error details for debugging; note that Source displays this information even when there is no testing error. It is recommended that users leave this checkbox unchecked.

  • Configuration frame: 

Type option: the Configuration Type dropdown allows you to select one of five different types:

Fast: This is the default type and is recommended in most cases. The standard regression test is performed: the project is loaded and run, and the results of each scenario are compared against the baseline.

Slow: The Run slow regression test option is used by software developers, including plugin writers, for debugging. In the slow regression test, the steps of a standard regression test are run multiple times, combined with copy scenario and import scenario operations, to check that all parameters and inputs are loaded, copied, imported, saved and reset correctly at the beginning of each run (a conceptual sketch appears after this list).

ModelledVariable: The Run Modelled Variable Checker option is used for debugging by software developers. Non-developers should not use this option. 

Perf:  The Run Performance Test option is used for debugging by software developers. Non-developers should not use this option. 

Commandline: The regression test files will be generated for running via the command line method.
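
The difference between the Fast and Slow types can be pictured with the following self-contained sketch. Everything in it (the Scenario class, copy_scenario, import_scenario) is a hypothetical stand-in for illustration, not a Source API:

```python
# Conceptual sketch only: these names are hypothetical stand-ins, not Source APIs.

class Scenario:
    def __init__(self, name, baseline):
        self.name = name
        self.baseline = baseline

    def run(self):
        return self.baseline  # stand-in for a real model run

def copy_scenario(s):    # stand-in for Source's "copy scenario" step
    return Scenario(s.name + " (copy)", s.baseline)

def import_scenario(s):  # stand-in for Source's "import scenario" step
    return Scenario(s.name + " (import)", s.baseline)

def fast_test(scenarios):
    # Fast: load and run each selected scenario once, comparing against the baseline.
    return all(s.run() == s.baseline for s in scenarios)

def slow_test(scenarios):
    # Slow: run multiple times, interleaved with copy/import, to check that
    # parameters and inputs are loaded, copied, imported, saved and reset
    # correctly at the beginning of each run.
    return all(variant.run() == s.baseline
               for s in scenarios
               for variant in (s, copy_scenario(s), import_scenario(s)))

scenarios = [Scenario("Automatic Calibration", baseline=[1.0, 2.0, 3.0])]
print("fast:", fast_test(scenarios), "slow:", slow_test(scenarios))
```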

Ignore check box: This option is used by software developers for debugging. Checking this box means that you will not receive feedback if results change, so most users should leave it unchecked.

Scenario and analysis list: this box displays all the available scenarios in your Source project file, combined with all possible run configuration analysis types (information on run configuration analysis types can be found at Configuring Scenarios). Only scenario and analysis type combinations that have been run (e.g., Manual Calibration - Flow Calibration analysis) are enabled for selection. The user needs to select the applicable scenario (such as Automatic Calibration) and analysis type (such as Single Analysis); multiple scenario and analysis type combinations can be selected. Combinations that are not relevant for a project (such as Run with warm up, which is not applicable for the catchment model in this example) will not be available.

All selected recorders for all run scenarios and analysis types (e.g., Manual Calibration - Flow Calibration analysis) in a Source project file are written to the regression test sub-folders when the Test Builder saves, but only the selected items (such as Automatic Calibration - Single Analysis in Figure 1) will be checked by the Test Runner. Therefore, to keep regression test file size and run time to a minimum, the user should preferably create very specific Source projects for regression testing, including only the scenarios and recorders required.

Running regression tests

Use the Test Runner to run regression tests on a test project created by the Regression Test Builder. Typically, this is done with a more recent version of Source than the one in which the user created their test project. For example, the user could run a regression test comparing the results generated by the latest production release with those generated in a previous production release or a more recent beta version. 

To run a regression test: 

  1.  Choose Tools » Regression Tests » Test Runner to open the dialog shown in Figure 2,
  2. Click on the ellipsis button (…) to load a test project from the folder where the regression test was saved during the Test Builder process (e.g., C:\RegressionTests_example_V520\RegressionTests_example_V520.rsproj), 
  3. Leave the four options (i.e., Overwrite Results, Run Slow Regression Test, Run Performance Test and Run Modelled Variable Checker) unchecked, and
  4. Click the Run button to run the scenario(s) in the test project.
Figure 2. Regression tests, Test Runner

The Regression Test Runner interface (Figure 2) options are explained below: 

  • The Overwrite Results option can be used to write the current Test Runner results to replace previous results. More information is provided in the Updating regression tests section. 
  • The Run Slow Regression Test, Run Performance Test and Run Modelled Variable Checker checkboxes are used by software developers for debugging and should not be used by regular users. These options work as a group: only one can be selected at a time, and selecting one of them in the Regression Test Runner overrides the configuration type chosen in the Regression Test Builder.

Once a regression test is run, all processing information and checked results are displayed in the Test Runner window (Figure 3). Use the Show/Hide Info. button to toggle the information on the regression test run on or off. The Export All Errors button allows the user to save the errors and information to a file. If any difference is found, a message and a red cross icon are shown at the bottom left of the window, and the Diff. Chart icon appears. To graphically compare differences in results, click on the Diff. Chart icon to open a new chart screen (Figure 4) showing the expected results from the Test Builder, the current run's results, and the difference between them.

Figure 3. Regression tests, Test Runner results

The error message helps the user to identify the problem. For example, in Figure 3 the error messages in the first two lines show there is one failed item, from the scenario Automatic Calibration running under Single Analysis: the modelled downstream flow at the gauge node Gauge site has changed. Note that in this case a large difference (50%) was introduced manually for demonstration purposes.

Figure 4. Regression tests, the difference chart opened via the Diff. Chart icon

If no error is found during the test run, a green tick with the label Success will be displayed at the bottom left of the Test Runner window instead of a red cross, and the Diff. Chart icon will not be displayed.
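
Conceptually, the per-recorder check that the Test Runner performs resembles the following self-contained sketch. This is not Source code: the file paths and the two-column (date, value) CSV layout are assumptions for illustration only.

```python
# Compare one baseline results CSV against the current run's CSV and report
# any differing timesteps, mirroring the pass/fail behaviour described above.
import csv

def read_series(path):
    """Read a results CSV of 'date,value' rows into a dict keyed by date."""
    with open(path, newline="") as f:
        return {date: float(value) for date, value in csv.reader(f)}

baseline = read_series("Automatic Calibration/Single analysis/Downstream Flow.csv")
current = read_series("current_run/Downstream Flow.csv")

# Collect every timestep where the current result differs from the baseline.
diffs = {}
for date, expected in baseline.items():
    actual = current.get(date)
    if actual != expected:
        diffs[date] = (expected, actual)

if diffs:
    print(f"Failed: {len(diffs)} differing timesteps")
    for date, (expected, actual) in list(diffs.items())[:5]:  # first few only
        print(f"  {date}: expected {expected}, got {actual}")
else:
    print("Success: results match the baseline")
```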

Troubleshooting regression tests 

If your regression test fails, read the error message(s) and look at the release notes and regression test changes for the releases between the version of Source in which you created your test project and the one in which you ran the regression test. You can also run the regression test in some of these intermediate (beta) releases to determine the version in which the errors first appear. If you still cannot resolve the issue or do not have access to the beta releases, please contact Source support.

Updating regression tests 

If the regression test reveals that your results have changed, and after troubleshooting you are satisfied these changes are intended (e.g., caused by known software improvements), you can overwrite the baseline data saved with your regression test by enabling Overwrite Results and then clicking Run again. This will update your regression test to the version of Source you are currently using for testing.
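
In effect, enabling Overwrite Results replaces the saved baseline files with the current run's outputs, so subsequent runs compare against the new results. A minimal sketch of the idea, reusing the illustrative (not real) file paths from the earlier example:

```python
# Sketch only: Source performs this internally when Overwrite Results is ticked.
import shutil

current_result = "current_run/Downstream Flow.csv"                           # illustrative path
baseline_file = "Automatic Calibration/Single analysis/Downstream Flow.csv"  # illustrative path
shutil.copyfile(current_result, baseline_file)  # the current results become the new baseline
```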