Managing Campaign Test Cases

Campaign test cases are the heart of a Test Campaign. This is where you decide what will be executed and in which order. A clean, well-ordered campaign makes execution faster, more consistent, and easier to review.


Why it matters

  • Consistency: every run follows the same sequence, so results are comparable across builds and testers.
  • Speed: a well-structured list reduces time wasted jumping between unrelated areas.
  • Coverage control: you can clearly see what’s included in the campaign (and what isn’t).

When to use it

  • When creating a new campaign and you need to define its execution scope.
  • When updating a campaign to match a new build, release scope, or bug-fix verification list.
  • When coordinating multiple testers and you need a stable, shared run order.

Core concepts

  • Campaign Case: a single test case linked to a specific campaign
  • Order: the sequence number used to control execution order (1, 2, 3…)
  • Project scope: campaigns only use test cases that belong to the same project
  • Stable ordering: keeping order consistent helps compare results over time
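The concepts above can be sketched as a minimal data model. This is an illustrative Python sketch, not QXQA's actual API; the `CampaignCase` class and `add_case` helper are hypothetical names used only to show how project scope and order fit together.

```python
from dataclasses import dataclass

@dataclass
class CampaignCase:
    """One test case linked to a campaign, with an execution Order value."""
    case_id: str
    project: str
    order: int

def add_case(campaign_cases, campaign_project, case):
    """Add a case only if it belongs to the campaign's project (project scope)."""
    if case.project != campaign_project:
        raise ValueError(f"{case.case_id} is outside project {campaign_project}")
    campaign_cases.append(case)

cases = []
add_case(cases, "ProjectA", CampaignCase("TC-1", "ProjectA", order=1))
add_case(cases, "ProjectA", CampaignCase("TC-2", "ProjectA", order=2))
print([c.case_id for c in cases])  # → ['TC-1', 'TC-2']
```

Trying to add a case from another project raises an error, mirroring the rule that campaigns only use test cases from the same project.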

How it works

  1. You select the test cases you want to include in the campaign.
  2. Each test case is added as a campaign item with an Order value.
  3. The campaign list is displayed and executed in ascending order.
  4. You can update the list later (add, remove, or reorder) to match changing scope.
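The four steps above boil down to "each item carries an Order value, and the list runs in ascending order." A minimal sketch, using hypothetical `(order, case_id)` pairs rather than the tool's real data structures:

```python
# Hypothetical campaign items as (order, case_id) pairs.
items = [(3, "TC-payment"), (1, "TC-login"), (2, "TC-gameplay")]

# The campaign list is displayed and executed in ascending Order.
run_list = [case_id for _, case_id in sorted(items)]
print(run_list)  # → ['TC-login', 'TC-gameplay', 'TC-payment']
```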

How to use it

Step 1: Add test cases to the campaign

Open the campaign and add the test cases you want to execute as part of this run. Keep the list aligned with the campaign’s goal (regression, smoke, release validation, etc.).

Step 2: Assign the execution order

Set an Order number for each test case. The system uses this value to determine the execution sequence. A simple approach is to start from 1 and increase by 1.
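The "start from 1 and increase by 1" approach can be expressed in one line. The case IDs below are made up for illustration:

```python
selected = ["TC-login", "TC-navigation", "TC-checkout"]

# Assign sequential Order values starting from 1.
ordered = {case_id: order for order, case_id in enumerate(selected, start=1)}
print(ordered)  # → {'TC-login': 1, 'TC-navigation': 2, 'TC-checkout': 3}
```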

Step 3: Reorder test cases when needed

If the campaign changes, update the order to reflect the most logical run flow (for example: login first, then core gameplay, then store/payment, then edge cases).
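Conceptually, reordering means moving a case to a new position and renumbering so the sequence stays gap-free. A sketch with a hypothetical `move_case` helper (not a QXQA function):

```python
def move_case(run_list, case_id, new_position):
    """Move a case to a new 1-based position and renumber the whole list."""
    run_list.remove(case_id)
    run_list.insert(new_position - 1, case_id)
    return {cid: order for order, cid in enumerate(run_list, start=1)}

run = ["TC-gameplay", "TC-login", "TC-store"]
orders = move_case(run, "TC-login", 1)  # login should run first
print(run)     # → ['TC-login', 'TC-gameplay', 'TC-store']
print(orders)  # → {'TC-login': 1, 'TC-gameplay': 2, 'TC-store': 3}
```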

Step 4: Remove test cases that are out of scope

If a test case no longer belongs in the campaign (feature removed, scope changed, or it’s moved to another campaign), remove it to keep the campaign clean and focused.
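Removal pairs naturally with renumbering, so the remaining cases keep a clean 1, 2, 3… sequence. An illustrative sketch (the case IDs and structures are hypothetical):

```python
run = [("TC-login", 1), ("TC-old-feature", 2), ("TC-store", 3)]
out_of_scope = {"TC-old-feature"}  # e.g. the feature was removed this build

# Drop out-of-scope cases, then renumber so the Order values stay gap-free.
kept = [cid for cid, _ in run if cid not in out_of_scope]
run = [(cid, order) for order, cid in enumerate(kept, start=1)]
print(run)  # → [('TC-login', 1), ('TC-store', 2)]
```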

Step 5: Review the final run list

Before running, scan the list to confirm three things: the scope is correct, the order makes sense, and there are no duplicates or missing essentials.
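Two of those three checks (no duplicates, no missing essentials) are easy to automate. A sketch with a hypothetical `review` helper; "essentials" here just means a must-have list you define yourself:

```python
from collections import Counter

def review(run_list, essentials):
    """Flag duplicate cases and missing must-have cases before a run."""
    counts = Counter(run_list)
    duplicates = sorted(cid for cid, n in counts.items() if n > 1)
    missing = sorted(set(essentials) - set(run_list))
    return duplicates, missing

dups, missing = review(
    ["TC-login", "TC-store", "TC-login"],
    essentials=["TC-login", "TC-gameplay"],
)
print(dups)     # → ['TC-login']
print(missing)  # → ['TC-gameplay']
```

The third check, whether the order makes sense, still needs a human eye on the run list.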


Best practices

  • Start with a “spine”: put the critical path first (login → navigation → main flow).
  • Keep order meaningful: don’t sort randomly; sort by user journey or feature dependency.
  • Group by area: keep related test cases together (UI flow, API validations, error handling).
  • Reserve a block for bug verification: if the campaign is for a build with fixes, keep verification cases near the end or in a clearly named section.
  • Keep the list short when it should be short: smoke campaigns should stay lightweight and fast.

Common mistakes

Adding too many test cases “just in case”
Create separate campaigns (Smoke vs Regression) instead of overloading one campaign.

Messy or unstable ordering
Use a consistent order strategy (critical path first, then deep coverage).

Leaving outdated cases inside the campaign
Remove cases that no longer match the build scope to avoid noise in results.

Mixing unrelated features in one run list
Split campaigns by goal or feature area so execution stays focused.


Security & permissions

  • Campaign test cases are managed within the boundaries of the campaign’s project.
  • Only authorized users can add, remove, or reorder test cases in a campaign.
  • Execution follows the campaign’s run permissions in the project.

Related documentation

  • Test Campaign Overview
  • Creating a Test Campaign
  • Running a Test Campaign
  • Campaign Comments
  • Campaign To-Dos


Version

1