Running a Test Campaign means executing all linked test cases in a defined order and collecting their results under one structured campaign run. It gives you a clear picture of overall progress, individual case outcomes, and what needs follow-up.
Why it matters
- Structured execution: all test cases run in a predictable sequence.
- Centralized results: outcomes are grouped under one campaign context.
- Release confidence: you can validate a full scope before sign-off.
When to use it
- Before releasing a build to ensure all required cases pass.
- After a hotfix to revalidate affected areas.
- During regression cycles to confirm stability across features.
Core concepts
- Execution Order – test cases run in the ascending order defined in the campaign
- Campaign Run – a single execution cycle of the full campaign
- Per-Case Result – each test case produces its own status and details
- Campaign-Level View – a combined overview of progress and outcomes
- Execution Type – how the campaign run is carried out, as set in the campaign's configuration
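The concepts above can be sketched as a minimal data model. This is a hedged illustration only; the class and field names are assumptions for clarity, not the product's actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Status(Enum):
    PENDING = "pending"
    PASSED = "passed"
    FAILED = "failed"

@dataclass
class CaseResult:
    # Per-Case Result: each test case produces its own status and details
    case_name: str
    status: Status
    details: Optional[str] = None

@dataclass
class CampaignRun:
    # Campaign Run: a single execution cycle of the full campaign
    campaign_name: str
    results: List[CaseResult] = field(default_factory=list)

    def summary(self) -> dict:
        # Campaign-Level View: a combined overview of outcomes per status
        counts = {status: 0 for status in Status}
        for result in self.results:
            counts[result.status] += 1
        return counts
```

The key relationship: one Campaign Run aggregates many Per-Case Results, and the Campaign-Level View is derived from them rather than stored separately.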
How it works
- The system loads all test cases linked to the campaign.
- Test cases are sorted by their defined Order.
- Each test case is executed sequentially.
- Individual results are collected and reflected in the campaign summary.
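The steps above can be sketched in a few lines. The function and parameter names here are hypothetical; the real system's internals are not exposed in this article:

```python
def run_campaign(test_cases, execute):
    """Run linked test cases sequentially and collect per-case results.

    test_cases: list of (order, name) tuples linked to the campaign.
    execute: callable(name) -> bool, True if the case passes.
    """
    results = {}
    # Sort by the defined Order, then execute each case one at a time
    for order, name in sorted(test_cases):
        passed = execute(name)
        # Each individual result feeds the campaign summary
        results[name] = "passed" if passed else "failed"
    return results
```

For example, `run_campaign([(2, "checkout"), (1, "login")], my_executor)` would run "login" first, then "checkout", regardless of the order the cases were linked in.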
How to use it
Step 1: Review the campaign before execution
Confirm that the test case list, order, environment, and scope match the build you are validating.
Step 2: Start the campaign run
Trigger the campaign execution from the campaign view. The system will begin executing test cases in ascending order.
Step 3: Monitor progress
As execution progresses, individual test case statuses are updated. You can follow completion, failures, and partial progress directly from the campaign view.
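The completion figure you watch during a run boils down to counting collected results against the total number of linked cases. A minimal sketch, with assumed inputs:

```python
def progress_percent(results, total_cases):
    """Return campaign completion as a percentage.

    results: per-case statuses collected so far (e.g., a dict of name -> status).
    total_cases: number of test cases linked to the campaign.
    """
    if total_cases == 0:
        return 0.0
    return round(100 * len(results) / total_cases, 1)
```

Partial progress is visible at any point: one finished case out of four reports 25.0, whether that case passed or failed.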
Step 4: Analyze results
After completion, review:
- Which test cases passed
- Which failed
- Any execution notes or comments
Use campaign comments and to-dos to document next steps.
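The review in this step amounts to splitting per-case outcomes into pass and follow-up buckets. A small illustrative helper (the names are assumptions, not a product API):

```python
def analyze_results(results):
    """Split campaign results into passed and failed lists for follow-up.

    results: dict mapping test case name -> status string.
    """
    passed = [name for name, status in results.items() if status == "passed"]
    failed = [name for name, status in results.items() if status != "passed"]
    return passed, failed
```

Anything in the failed bucket is a candidate for a to-do, with execution notes attached as context.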
Step 5: Follow up
If fixes are applied, rerun the campaign or create a new campaign for the updated build to maintain a clean tracking history.
Best practices
- Always review scope before running: avoid executing outdated campaigns.
- Keep execution environments clear: label campaigns accurately (e.g., Staging, Production-like).
- Use to-dos immediately after failures: don’t wait to assign follow-ups.
- Separate smoke from regression: faster validation cycles improve team efficiency.
- Document context: add notes when failures are environment-related or known issues.
Common mistakes
❌ Running a campaign without verifying order
✔ Double-check execution order to ensure logical flow.
❌ Using the same campaign across unrelated builds
✔ Create a new campaign per major build or milestone for clean tracking.
❌ Ignoring partial failures
✔ Review individual case details before closing the campaign.
❌ Not documenting outcomes
✔ Use comments and to-dos to keep communication transparent.
Security & permissions
- Campaign execution respects project-level permissions.
- Only authorized users can trigger a campaign run.
- Execution behavior follows the campaign’s configured execution type.
Related documentation
- Test Campaign Overview
- Managing Campaign Test Cases
- Executing Test Cases
- Campaign To-Dos