Regression Campaign Reporting in AXQA Execution Intelligence Platform
The Test Campaign Report provides a complete overview of a grouped execution where multiple test cases run together in a structured sequence. It gives you both a high-level summary and detailed results for every included test case.
Why it matters
- Gives management-level visibility into overall release quality.
- Shows how multiple test cases perform within the same execution cycle.
- Helps identify patterns across related failures.
- Supports regression tracking across builds and releases.
What the report includes
- Campaign execution timestamp.
- Execution source (manual, automated, or agent-based).
- Associated build or version reference.
- Total number of test cases executed.
- Overall campaign result summary.
- Individual result for each test case.
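As a rough illustration only (the field names here are hypothetical, not AXQA's actual schema), a campaign report record carrying the information above could be modeled like this:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical data model for a campaign report; names and types are
# illustrative assumptions, not the platform's real structures.
@dataclass
class TestCaseResult:
    name: str
    status: str       # "passed" or "failed"
    report_link: str  # link to the detailed execution report

@dataclass
class CampaignReport:
    executed_at: datetime  # campaign execution timestamp
    source: str            # "manual", "automated", or "agent-based"
    build_ref: str         # associated build or version reference
    results: list = field(default_factory=list)  # per-test-case results

    @property
    def total(self) -> int:
        # total number of test cases executed
        return len(self.results)

    @property
    def summary(self) -> dict:
        # overall campaign result summary
        passed = sum(1 for r in self.results if r.status == "passed")
        return {"passed": passed, "failed": self.total - passed}
```

A report built this way exposes both the per-case detail and the aggregate view described in the following sections.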
Execution flow visibility
Test cases inside a campaign are executed in sequence. The report reflects the exact order of execution and captures the outcome of each one.
- Status per test case (passed / failed).
- Linked detailed execution report for deeper inspection.
- Structured result aggregation.
Aggregated outcome
The campaign report summarizes overall performance:
- Totals of passed vs. failed test cases.
- Execution completion status.
- Clear overview for release decision-making.
This makes it easy to determine whether a build is stable enough to move forward.
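As a minimal sketch of that decision step (the "no failures means stable" rule is an assumed policy, not one defined by the platform), the aggregated outcome can be reduced to a simple release gate:

```python
# Hypothetical release gate: treat a build as stable only when every
# test case in the campaign passed. Status strings are illustrative.
def campaign_summary(statuses: list) -> dict:
    passed = statuses.count("passed")
    failed = statuses.count("failed")
    return {"passed": passed, "failed": failed, "stable": failed == 0}

summary = campaign_summary(["passed", "passed", "failed"])
```

Teams with different risk tolerances might instead gate on a pass-rate threshold rather than requiring zero failures.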
How it works
- A Test Campaign is triggered.
- The system executes each test case sequentially.
- Each test case stores its own execution record.
- The campaign aggregates those results into a unified report.
- The campaign execution becomes part of the project’s history.
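The steps above can be sketched as follows; `run_case` and the record structure are hypothetical stand-ins for the platform's internals:

```python
# Minimal sketch of the campaign flow: sequential execution, one record
# per test case, then aggregation into a unified report.
def run_campaign(test_cases: list, run_case) -> dict:
    records = []
    for case in test_cases:           # executed sequentially, in order
        outcome = run_case(case)      # each case produces its own record
        records.append({"case": case, "status": outcome})
    passed = sum(r["status"] == "passed" for r in records)
    # aggregate the individual records into one unified report
    return {"results": records,
            "passed": passed,
            "failed": len(records) - passed}
```

In the real platform this aggregation happens server-side and the resulting report is stored as part of the project's execution history.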
Why campaign reports are valuable
- They provide a structured regression overview.
- They connect individual test results into one execution event.
- They support auditing and release tracking.
- They allow comparison between multiple campaign runs.
Best practices
- Group related test cases logically within a campaign.
- Use build references to track release stability.
- Review both summary and individual failures before making release decisions.
- Re-run campaigns after major fixes to validate resolution.
Common mistakes
❌ Looking only at the campaign summary
✔ Investigate individual failed test cases for context.
❌ Mixing unrelated test cases in a single campaign
✔ Keep campaigns structured and purpose-driven.
Security & access
- Campaign reports follow project-level access permissions.
- Execution data is preserved for historical review.
- Each campaign run is clearly identifiable and traceable.
Related documentation
- Execution History Overview
- Test Case Execution Report
- Compare Executions