Specifying Requirements: Identify and validate the functional and performance requirements that need to be tested.
1. Defining Requirements (see the scripted example below)
2. Viewing Requirements
3. Modifying Requirements
4. Converting Requirements: Convert a requirement into a test in the Test Plan module.
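These requirement operations can also be scripted against Quality Center's Open Test Architecture (OTA) COM interface. The sketch below is a minimal example, assuming a Windows machine with the Quality Center client and pywin32 installed; the ProgID, server URL, credentials, domain, project, and requirement name are all placeholders, not values from this outline.

    import win32com.client

    # Connect to the Quality Center server (placeholder URL, credentials,
    # domain, and project).
    td = win32com.client.Dispatch("TDApiOle80.TDConnection")
    td.InitConnectionEx("http://qcserver:8080/qcbin")
    td.Login("qc_user", "qc_password")
    td.Connect("DEFAULT", "DEMO_PROJECT")

    # Defining a requirement.
    req_factory = td.ReqFactory
    req = req_factory.AddItem(None)          # new, unsaved requirement
    req.Name = "Lock the account after three failed logins"
    # req.ParentId = <parent requirement ID> # optionally place it under an existing requirement
    req.Post()                               # save it to the project

    # Viewing requirements ("" means no filter).
    req_list = req_factory.NewList("")
    for i in range(1, req_list.Count + 1):
        item = req_list.Item(i)
        print(item.ID, item.Name)

    # Modifying a requirement: change a field and post again.
    req.Name = req.Name + " (reviewed)"
    req.Post()

    td.Disconnect()
    td.Logout()
    td.ReleaseConnection()

Converting a requirement into a test is done with the Convert to Tests wizard in the Quality Center UI and is not shown here.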
Planning Tests: Decide which tests need to be performed and how they should be run.
1. Developing a Test Plan Tree (see the scripted example below)
2. Designing Test Steps
3. Copying Test Steps
4. Calling Tests with Parameters
5. Creating and Viewing Requirements Coverage
6. Generating Automated Test Scripts (using QTP or WinRunner)
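The test-plan steps can be driven through the same OTA interface. The sketch below continues from the previous one and assumes the connected td object; the subject folder path, test name, and step texts are invented for illustration. Generating QTP or WinRunner scripts is still done from those tools themselves; the test type field only marks the test as manual or automated.

    # Assumes `td` is the connected TDConnection from the previous sketch.

    # Developing the test plan tree: locate a subject folder (placeholder path)
    # and add a manual test under it.
    folder = td.TreeManager.NodeByPath("Subject\\Flight Reservation")
    test_factory = folder.TestFactory
    test = test_factory.AddItem(None)
    test.Name = "Verify login lockout"
    test.Type = "MANUAL"    # e.g. "QUICKTEST_TEST" or "WR-AUTOMATED" for QTP / WinRunner tests
    test.Post()

    # Designing test steps.
    steps = test.DesignStepFactory
    step = steps.AddItem(None)
    step.StepName = "Step 1"
    step.StepDescription = "Enter an invalid password three times."
    step.StepExpectedResult = "The account is locked and a warning message is shown."
    step.Post()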
Running Tests: Organize test sets, schedule their execution, perform test runs, and analyze the results of these runs.
1. Defining Test Sets (see the scripted example below)
Sanity: Tests the entire application at a basic level to check that it is functional and stable.
Normal: Tests the system in more depth than the sanity test. A normal test set can contain both positive and negative checks.
Advanced: Checks the entire application, including its most advanced features.
Regression: Verifies that a change to one part of the application does not prevent the rest of the application from functioning.
Function: Tests a specific feature or a group of features in the application.
Configure an email notification to be sent if the test set fails.
2. Adding Tests to a Test Set
3. Scheduling Test Runs (in the "Execution Flow" tab)
Schedule the run sequence and the dependencies between tests.
4. Running Tests Manually
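The Test Lab steps can also be scripted. The following sketch (same assumptions as the earlier ones; the folder path, test set name, run name, and status value are placeholders) defines a test set, adds a planned test to it, and records a manual run. Execution Flow scheduling and email notifications are normally configured in the Quality Center UI rather than through this kind of script.

    # Assumes `td` is the connected TDConnection and `test` is the Test object
    # created in the previous sketch.

    # Defining a test set in a Test Lab folder (placeholder path and names).
    lab_folder = td.TestSetTreeManager.NodeByPath("Root\\Sanity")
    tset_factory = lab_folder.TestSetFactory
    test_set = tset_factory.AddItem(None)
    test_set.Name = "Sanity - Build 42"
    test_set.Post()

    # Adding a test from the test plan to the test set.
    ts_test = test_set.TSTestFactory.AddItem(test)
    ts_test.Post()

    # Recording a manual run and its result.
    run = ts_test.RunFactory.AddItem("Run_Build42_1")
    run.Status = "Passed"
    run.CopyDesignSteps()    # copy the design steps into the run
    run.Post()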
Tracking Defects: Add defects that were detected in the application and track how repairs are progressing.
1. Adding New Defects (see the scripted example below)
2. Matching Defects (by filtering or searching for similar defects)
3. Updating Defects
4. Mailing Defects
5. Linking Defects to Tests
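Defect tracking follows the same factory pattern in OTA. The sketch below (same assumptions; all field values, user names, and the summary text are placeholders) adds a defect, lists existing defects so possible duplicates can be matched before filing, and updates a defect's status. Mailing a defect and linking it to a test are also possible through OTA, but are left to the Quality Center UI here.

    # Assumes `td` is the connected TDConnection from the earlier sketches.
    bug_factory = td.BugFactory

    # Adding a new defect (field values are placeholders).
    bug = bug_factory.AddItem(None)
    bug.Summary = "Account is not locked after the third failed login"
    bug.DetectedBy = "qc_user"
    bug.Status = "New"
    bug.Priority = "2-Medium"
    bug.Post()

    # Matching defects: list existing defects and compare before filing a duplicate.
    bug_list = bug_factory.NewList("")
    for i in range(1, bug_list.Count + 1):
        existing = bug_list.Item(i)
        print(existing.ID, existing.Status, existing.Summary)

    # Updating a defect.
    bug.Status = "Open"
    bug.AssignedTo = "dev_lead"
    bug.Post()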