Test Automation Overview

Automated Test Case & Test Campaign Execution in AXQA

Automation in AXQA allows you to execute test cases and Test Campaigns without manual intervention. Instead of relying on someone to click “Run,” the system can trigger, execute, and record results automatically — based on defined rules or schedules.


Why it matters

  • Saves time by reducing repetitive manual execution.
  • Improves consistency across regression cycles.
  • Ensures critical tests are never forgotten.
  • Supports continuous testing workflows.

When to use it

  • You run the same regression Test Campaign after every release.
  • You want nightly or weekly automated checks.
  • You need fast feedback after backend changes.
  • You want automatic follow-up actions when failures occur.

Core concepts

  • Automated Execution: Running test cases or Test Campaigns without manual input.
  • Automation Rule: A defined instruction that tells the system when and how to execute.
  • Trigger: The event or condition that starts an automated run.
  • Execution Environment: Where the test runs (server-side or via an external agent).
  • Execution History: A recorded log of every automated run.
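
The concepts above can be modeled as a small data sketch. The following Python is purely illustrative: AXQA does not expose these classes, and every name here (AutomationRule, ExecutionRecord, ExecutionHistory) is an assumption made for this example.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

# Hypothetical model of the core concepts; NOT an AXQA API.

@dataclass
class AutomationRule:
    name: str
    trigger: str      # e.g. "schedule:nightly" or "event:release"
    target: str       # ID of a test case or Test Campaign

@dataclass
class ExecutionRecord:
    rule_name: str
    started_at: datetime
    status: str       # "passed" or "failed"

@dataclass
class ExecutionHistory:
    """A recorded log of every automated run."""
    records: List[ExecutionRecord] = field(default_factory=list)

    def log(self, record: ExecutionRecord) -> None:
        self.records.append(record)

history = ExecutionHistory()
history.log(ExecutionRecord("nightly-regression", datetime(2024, 1, 1), "passed"))
print(len(history.records))  # → 1
```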

How it works

  1. You define what should be automated (a test case or Test Campaign).
  2. You create a rule that determines when it should run.
  3. The system monitors the trigger condition.
  4. When the condition is met, execution starts automatically.
  5. Results are recorded just like manual executions.
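
The five steps above can be sketched as a simple trigger loop. This is a conceptual illustration only; the function and rule names are invented for the sketch and do not describe AXQA's internals.

```python
# Conceptual sketch of the automation lifecycle (steps 1-5 above).
# All names are hypothetical; this is not AXQA code.

def run_campaign(target: str) -> str:
    """Stand-in for an actual test run; always reports 'passed' here."""
    return "passed"

def automation_cycle(rules, event):
    """Check each rule's trigger against an incoming event;
    run the matching targets and record the results."""
    results = []
    for rule in rules:
        if rule["trigger"] == event:               # steps 3-4: condition met
            status = run_campaign(rule["target"])  # step 4: execute
            results.append({"rule": rule["name"], "status": status})  # step 5: record
    return results

rules = [
    {"name": "nightly-regression", "trigger": "schedule:nightly", "target": "CAMP-12"},
    {"name": "post-release-smoke", "trigger": "event:release", "target": "CAMP-07"},
]
print(automation_cycle(rules, "schedule:nightly"))
# → [{'rule': 'nightly-regression', 'status': 'passed'}]
```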

How to use it

Step 1: Decide what to automate

Choose stable, repeatable test cases or a full Test Campaign. Automation works best when the workflow is predictable.

Step 2: Create an automation rule

Define when the system should trigger the execution — manually, on schedule, or based on a condition.
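
As an illustration of the three trigger styles (manual, scheduled, conditional), here is a hypothetical set of rule definitions. The field names and cron syntax are assumptions made for this sketch and do not reflect AXQA's actual rule format.

```python
# Hypothetical rule definitions showing the three trigger styles;
# field names are illustrative only, not an AXQA schema.
rules = [
    {"name": "ad-hoc-smoke",       "trigger": {"type": "manual"}},
    {"name": "nightly-regression", "trigger": {"type": "schedule", "cron": "0 2 * * *"}},
    {"name": "post-deploy-check",  "trigger": {"type": "condition", "event": "backend_deployed"}},
]

scheduled = [r["name"] for r in rules if r["trigger"]["type"] == "schedule"]
print(scheduled)  # → ['nightly-regression']
```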

Step 3: Select execution environment

Decide whether the automation should run directly from the platform or through an external agent.

Step 4: Monitor results

Review execution history and logs to track performance and detect recurring failures.
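
Detecting recurring failures in execution history could look like the following sketch. The record shape is invented for the example; the real log format is specific to AXQA.

```python
from collections import Counter

# Hypothetical execution-history records; real log fields will differ.
history = [
    {"rule": "nightly-regression", "status": "failed"},
    {"rule": "nightly-regression", "status": "failed"},
    {"rule": "post-deploy-check",  "status": "passed"},
    {"rule": "nightly-regression", "status": "passed"},
]

# Flag any rule that has failed more than once as a recurring failure.
fail_counts = Counter(r["rule"] for r in history if r["status"] == "failed")
recurring = [rule for rule, n in fail_counts.items() if n > 1]
print(recurring)  # → ['nightly-regression']
```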


Best practices

  • Start with stable, high-value regression tests.
  • Keep automated suites focused — avoid automating everything at once.
  • Review automation rules regularly.
  • Combine automation with build/version tracking for accurate reporting.

Common mistakes

  • Automating unstable or constantly changing test cases: automate only mature and reliable workflows.
  • Ignoring failed automation runs: monitor logs and address failures quickly.
  • Overcomplicating automation logic early on: start simple and expand gradually.


Security & permissions

  • Only authorized users can create or modify automation rules.
  • Execution permissions are validated before any automated run.
  • Automation respects project-level visibility and access control.

Related documentation

  • Execution Types
  • Automation Rules
  • Test Campaign Overview
