Manual Regression Tests

Regression Testing is a type of software testing that confirms a recent program or code change has not adversely affected existing features.

In practice, it means re-executing a full or partial selection of already executed test cases to verify that existing functionality still works.

This testing is done to make sure that new code changes have no side effects on existing functionality, i.e. that the old code still works once the new changes are in place.

How to Test

A) Retest All - run all defined test cases. This is very expensive and should not be done often; reserve it for larger releases or important events (e.g. BEAT81 events with 1000 participants).

B) Selective Regression Test - run a selected subset of test cases. Better!

  • The selected test cases can be categorized as 1) Reusable Test Cases and 2) Obsolete Test Cases.

    • Reusable Test Cases can be used again in succeeding regression cycles.

    • Obsolete Test Cases can't be used in succeeding cycles.

  • Prioritize test cases by business impact and by how critical and frequently used the functionality is. Selecting test cases by priority greatly reduces the size of the regression test suite (see the sketch below for one way to model this).
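
A minimal sketch of how such a prioritized selection could be modelled, assuming a simple in-house test case registry; the TestCase fields, Priority levels, and select_regression_suite helper are hypothetical and only illustrate the idea of dropping obsolete cases and running the highest-impact ones first:

```python
from dataclasses import dataclass
from enum import Enum


class Priority(Enum):
    """Hypothetical priority levels based on business impact."""
    CRITICAL = 1  # core, frequently used functionality
    HIGH = 2
    MEDIUM = 3
    LOW = 4


@dataclass
class TestCase:
    # Hypothetical fields; adapt to however test cases are actually tracked.
    name: str
    priority: Priority
    obsolete: bool = False  # obsolete cases are excluded from future cycles


def select_regression_suite(cases: list[TestCase],
                            max_priority: Priority) -> list[TestCase]:
    """Keep only reusable (non-obsolete) cases at or above the given priority,
    ordered so the highest-impact cases are run first."""
    reusable = [c for c in cases if not c.obsolete]
    selected = [c for c in reusable if c.priority.value <= max_priority.value]
    return sorted(selected, key=lambda c: c.priority.value)


if __name__ == "__main__":
    cases = [
        TestCase("Signup happy path", Priority.CRITICAL),
        TestCase("Booking without credit", Priority.HIGH),
        TestCase("Legacy payment flow", Priority.MEDIUM, obsolete=True),
        TestCase("Profile picture upload", Priority.LOW),
    ]
    for case in select_regression_suite(cases, Priority.HIGH):
        print(case.name)  # prints the reusable CRITICAL and HIGH cases only
```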

Selecting Test Cases for Regression Testing

An effective regression test suite can be built by selecting the following test cases:

  • Test cases which have frequent defects (D)

  • Functionality which is highly visible to users (U)

  • Test cases which verify core features of the product (C)

  • Test cases for functionality which has undergone frequent or recent changes (R)

  • All integration test cases (e.g. the UI flow of Signup)

  • All complex test cases (e.g. different error messages depending on the provider)

  • Boundary value test cases (min/max weight in the health data form; see the sketch after this list)

  • A sample of successful test cases (booking, login, signup)

  • A sample of failure test cases (booking without credit, login with a wrong password)
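
To illustrate the boundary value cases above, here is a minimal sketch in pytest style; the validate_weight helper and the 30 kg / 300 kg limits are assumptions for illustration, not the real health data form rules. Even when these checks are executed manually, the same boundary pairs (just below, on, and just above each limit) are the ones worth covering:

```python
import pytest

# Hypothetical limits for the health data form; the real min/max weights
# and the validate_weight helper are assumptions for illustration only.
MIN_WEIGHT_KG = 30
MAX_WEIGHT_KG = 300


def validate_weight(weight_kg: float) -> bool:
    """Stand-in for the real form validation logic."""
    return MIN_WEIGHT_KG <= weight_kg <= MAX_WEIGHT_KG


@pytest.mark.parametrize("weight, expected", [
    (MIN_WEIGHT_KG - 1, False),  # just below the lower boundary
    (MIN_WEIGHT_KG, True),       # exactly on the lower boundary
    (MAX_WEIGHT_KG, True),       # exactly on the upper boundary
    (MAX_WEIGHT_KG + 1, False),  # just above the upper boundary
])
def test_weight_boundaries(weight, expected):
    assert validate_weight(weight) is expected
```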
