Regression testing allows users and developers to identify when modifications to the software lead to changes in model results. A regression test is a project file that has been converted to a format that saves the results for the selected Source scenarios as 'baseline' results, along with the project. The user can then run the regression test in later Source versions, and Source will notify the user if the results differ from the baseline results.

Regression tests seek to ensure that:

  • Changes (such as bug fixes) in Source do not introduce new or additional problems,
  • Old projects are compatible with future versions of Source, and
  • Software changes do not inadvertently alter correct results.

Note: eWater has a suite of over 600 regression test projects that are used to check for changes introduced in Source during software development. This regression test suite is run on almost every change made to the software code, allowing any code changes that cause regression test failures to be readily identified. If a test project has trouble running or the results are not as expected, then the regression test has failed. eWater's software developers and hydrologists decide if the change in results is intended or not. Each Beta and Production software release includes a list of known changes to these regression tests. To view the regression test changes and a report about the regression test suite for a specific release, see Source Revision History.

Source users can also create and run local regression tests, and it is good practice to do so for your models, as it allows you to identify potential result changes when upgrading models to newer versions of Source. In some circumstances, these local tests can be included in the eWater Regression Test Suite; please contact eWater if interested.

The user interface for the Regression Test tool provides access for both Source users and software developers, and includes functionality for both. Although users can access all of this functionality from the interface, some of it is intended only for developers to debug or investigate system performance. This document does not cover the functionality that is only used by software developers.

Regression testing involves four steps: 

  • Building regression tests,
  • Running regression tests,
  • Monitoring and assessing the results of regression tests, and
  • Determining the implications of the regression test results.

How to build and run regression tests is described below, along with tips for troubleshooting errors and updating tests. The final two steps of the process are undertaken by the user.

Building regression tests

To build a regression test, you must first set up the test project and create the results that will provide the baseline for comparison. Once this is done, the regression test is built using the Test Builder. The following steps illustrate this process:

  1. Open or create a project with the scenarios that you want to test.
  2. For each scenario that you wish to test: 
    1. Before running, ensure that you are recording only the minimum number of parameters relevant to what you would like to test in your regression test. It is important to have only a small number of recorders turned on, such as storage levels or the volume at the outlet node.
    2. Configure and run your scenario, noting that you must run at least one analysis type (e.g., Single analysis) from any of the test scenarios, and
    3. Ensure that all the recorded results are giving you the answers you expect.
  3. Open the Regression Test Builder (Figure 1) using Tools » Regression Tests » Test Builder.
  4. Click on the ellipsis button (…) to open a Select Folder window, navigate to the folder (e.g., C:\) where you want to save your baseline data and test project for the regression test, and click Select Folder.
  5. Select one option (e.g., Fast) from the Configuration Type dropdown (Figure 1), select the applicable options from the Scenario and Analysis box (e.g., Automatic Calibration - Single analysis), and leave the Ignore checkbox unchecked.
  6. Click Save to save the Test Builder files to the chosen folder location.
Figure 1. Regression tests, Test Builder


Figure 1 is an example from a Source project (e.g., RegressionTests_example_V520.rsproj) with two scenarios: (a) Automatic Calibration and (b) Manual Calibration. The Automatic Calibration scenario is run under the Single analysis type, while the Manual Calibration scenario is run under the Flow Calibration analysis type.

Once the user saves, the Test Builder will create a folder with the same name as the Source project (e.g., RegressionTests_example_V520 in C:\). This folder, illustrated in the sketch after this list, contains:

  • Sub-folders (e.g., “Automatic Calibration”, “Manual Calibration”) with the same name as each run scenario in the user’s Source project. Each scenario sub-folder contains additional sub-folders labelled with the selected run configuration analysis type (e.g., “Single analysis”, “Flow Calibration analysis”). Each analysis sub-folder includes csv files that are the recorded results for that scenario run. These csv files are the baseline files for your regression test comparison, that is, your expected results;
  • A Source project file (e.g., RegressionTests_example_V520.rsproj in C:\RegressionTests_example_V520\) that the Test Runner will load and run to compare to the baseline files;
  • A Plugins.csv file that lists all the plugins currently installed in Source, regardless of whether they are used by your test project. When you run your regression test in a later version of Source, all plugins listed in this file must be installed. Therefore, if your test project does not use some of these plugins, you can manually edit Plugins.csv, deleting the rows listing the plugins that are not used (see the sketch following the folder layout below). If no plugins are used by your test project, you can delete the Plugins.csv file entirely; and
  • An XML file whose name combines the Source project file name and “.rgt.xml”. This is the regression testing setup file.
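Based on the example above, the saved folder might look like the following. This is an illustrative sketch only: the csv file names depend on your recorders, and the setup file name here assumes “.rgt.xml” is appended to the project name.

    C:\RegressionTests_example_V520\
        RegressionTests_example_V520.rsproj
        RegressionTests_example_V520.rgt.xml
        Plugins.csv
        Automatic Calibration\
            Single analysis\
                <recorded results>.csv
        Manual Calibration\
            Flow Calibration analysis\
                <recorded results>.csv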

The regression test folder (e.g., C:\RegressionTests_example_V520\) and its sub-folders are self-contained and can be shared with others for regression testing.
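As noted above, unused plugin rows can simply be deleted from Plugins.csv by hand. For projects with many installed plugins, a small script can do the same. The following is a minimal Python sketch only: it assumes the first column of each row holds the plugin name, so check the actual layout of your Plugins.csv before using anything like it.

    import csv

    # Plugins the test project actually uses; everything else is dropped.
    # ("MyPlugin" is a hypothetical name; replace with your own plugins.)
    used_plugins = {"MyPlugin"}

    path = r"C:\RegressionTests_example_V520\Plugins.csv"
    with open(path, newline="") as f:
        rows = list(csv.reader(f))

    # Keep only rows whose first column is a plugin the project uses.
    kept = [row for row in rows if row and row[0] in used_plugins]

    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(kept)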

The Regression Test Builder interface (Figure 1) includes a range of information and options, which are described below:

  • Details frame: 

Details are read from the Source project details and the content is editable.


Author - identifies the author of the regression test and is, by default, automatically filled from the Source project file. The content is editable.

Is Test Failure - if ticked, the Test Runner checks whether a previous regression test failed and displays the error details for debugging; Source displays this information even when there is no testing error. It is recommended that users leave this option unchecked.

  • Configuration frame: 

Type option: the Configuration Type dropdown allows you to select one of five different types:

Fast: This is the default and is recommended in most cases. The standard regression test is performed; the project is loaded, run, and the results of each scenario compared against the baseline.

Slow: This option is used by software developers, including plugin writers, for debugging. In the slow regression test, the steps of a standard regression test are run multiple times and combined with copy scenario and import scenario to check that all parameters and inputs are loaded, copied, imported, saved and reset correctly at the beginning of each run.

ModelledVariable: The Modelled Variable Checker is used for debugging by software developers. Non-developers should not use this option.

Perf: The Performance Test is used for debugging by software developers. Non-developers should not use this option.

Commandline: The regression test files will be generated for running via the command line method.

Ignore check box: This option is used by software developers for debugging. Checking this box means that users will not receive feedback if results change. Most users should not tick this option.

Scenario and analysis list: this box displays all the available scenarios combined with all possible run configuration analysis types in your Source project file (information on run configuration analysis types can be found at https://wiki.ewater.org.au/display/SD540/Configuring+Scenarios). The user needs to select the applicable scenario (such as Automatic Calibration) and the applicable analysis type (such as Single analysis); multiple scenario and analysis type combinations can be selected. Combinations that are not relevant for a project (such as Run with warm up, which is not applicable for the catchment model in this example) will not be available.

All selected recorders for all the scenarios and analysis types in a Source project file (e.g., Manual Calibration - Flow Calibration analysis) will be written to the regression test sub-folders by the Test Builder, but only the selected items (such as Automatic Calibration - Single analysis) will be checked by the Test Runner. Therefore, to keep regression test file size and run time to a minimum, the user should preferably create very specific Source projects for regression testing, including only the scenarios and recorders required for testing. A conceptual sketch of the kind of check the Test Runner performs follows.
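The following Python sketch illustrates the kind of value-by-value comparison described above, between a baseline csv written by the Test Builder and the equivalent result from a current run. It is conceptual only: the file names are hypothetical, and Source's actual comparison (tolerances, metadata handling, csv layout) may differ.

    import csv

    def read_series(path):
        # Read numeric results from a csv, skipping the header row and
        # taking the last column as the recorded value.
        with open(path, newline="") as f:
            rows = list(csv.reader(f))
        return [float(row[-1]) for row in rows[1:]]

    def compare(baseline_path, current_path, tolerance=1e-9):
        # Return the timestep indices where baseline and current differ.
        baseline = read_series(baseline_path)
        current = read_series(current_path)
        if len(baseline) != len(current):
            return ["series lengths differ"]
        return [i for i, (b, c) in enumerate(zip(baseline, current))
                if abs(b - c) > tolerance]

    # Hypothetical file names, following the example folder layout above.
    diffs = compare(
        r"C:\RegressionTests_example_V520\Automatic Calibration\Single analysis\flow.csv",
        r"C:\CurrentRunResults\flow.csv")
    print("PASS" if not diffs else "FAIL: %d differing values" % len(diffs))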

Running regression tests

Use the Test Runner to run regression tests on a test project created by the Regression Test Builder. Typically, this is done with a more recent version of Source than the one in which the test project was created. For example, the user could run a regression test comparing the results generated by the latest production release, or a more recent beta version, with those generated in a previous production release.

To run a regression test: 

  1. Choose Tools » Regression Tests » Test Runner to open the dialog shown in Figure 2,
  2. Click on the ellipsis button (…) to load a test project from the folder where the regression test was saved during the Test Builder process (e.g., C:\RegressionTests_example_V520\RegressionTests_example_V520.rsproj),
  3. Leave the four options (i.e., Overwrite Results, Run Slow Regression Test, Run Performance Test and Run Modelled Variable Checker) unchecked, and
  4. Click the Run button to run the scenario(s) in the test project.
Figure 2. Regression tests, Test Runner


The Regression Test Runner interface (Figure 2) options are explained below: 

  • The Overwrite Results option can be used to replace the previous baseline results with the current Test Runner results. More information is provided in the Updating regression tests section.
  • The Run Slow Regression Test, Run Performance Test and Run Modelled Variable Checker checkboxes are used by software developers for debugging and should not be used by regular users. These options work as a group, and only one can be selected at a time. Selecting one of these three options in the Regression Test Runner overrides the option chosen in the Regression Test Builder.

Once a regression test is run, all processing information and checked results are displayed in the Test Runner window (Figure 3). Use the Show/Hide Info. button to toggle the display of information on the regression test run on or off. The Export All Errors button allows the user to save the errors and information to a file.


If any difference is found, a message and a red cross icon will be shown at the bottom left of the window, and the Diff. Chart icon will appear. To graphically compare differences in results, the user should click on the Diff. Chart icon to open a new chart screen (Figure 4) showing the expected results from the Test Builder, the current run results, and the difference between the two.

Figure 3. Regression tests, Test Runner results


The error message helps the user to identify the problem. For example, in Figure 3 the error messages in the first two lines show there is one failed item, from the scenario Automatic Calibration running under Single analysis: the modelled flow (the downstream flow at the gauge node of Gauge site) has changed. Note that in this case a big difference (50%) was introduced manually for demonstration purposes.

Figure 4. Regression tests, Diff. Chart

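In terms of the chart in Figure 4, the 'difference' series is conceptually just the expected (baseline) series subtracted from the current series. As a worked illustration of the 50% change mentioned above, with made-up values:

    # Illustrative values only: one timestep of expected vs. current results.
    expected = 100.0   # baseline result saved by the Test Builder
    current = 150.0    # result from the current run

    difference = current - expected          # the 'difference' series: 50.0
    relative_change = difference / expected  # 0.5, i.e., a 50% change
    print(difference, relative_change)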

If no error is found during the test run, a green tick with the label Success will be displayed at the bottom left of the Test Runner window instead of a red cross, and the Diff. Chart icon will not be displayed.

Troubleshooting regression tests 

If your regression test fails, read the error message(s) and look at the release notes and regression test changes for the releases between the version of Source in which you created your test project and the one in which you ran your regression test. You can also run the regression test in some of these intermediate (beta) releases to determine in which version the errors start. If you still cannot resolve the issue, or do not have access to the beta releases, please contact Source support.

Updating regression tests 

If the regression test reveals that your results have changed, and after troubleshooting you are satisfied these changes are intended (e.g., caused by known software improvements), you can overwrite the baseline data saved with your regression test by enabling Overwrite Results and then clicking Run again. This will update your regression test to the version of Source you are currently using for testing. If you want to be able to compare old and new baselines later, consider copying the test folder first, as sketched below.
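Because Overwrite Results replaces the saved baseline files in place, it can be worth copying the old test folder before overwriting, so that old and new baselines can still be compared afterwards. A minimal sketch using Python's standard library, with the example folder name from above:

    import shutil

    # Copy the self-contained regression test folder before overwriting,
    # e.g., tagging the copy with the Source version that produced it.
    src = r"C:\RegressionTests_example_V520"
    dst = r"C:\RegressionTests_example_V520_old_baseline"
    shutil.copytree(src, dst)  # fails if dst already exists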