Running a Test Case

Running a Test Case executes all steps in order and records results at both step and test case level. Depending on the execution source, steps may run manually, on the server, or through the Smart Agent, while keeping the same step structure and validation rules.


Why it matters

  • Reliable execution flow – steps run in a predictable order with clear progress tracking.
  • Action + validation in one run – execute APIs and validate outcomes using expected logic.
  • Traceable results – every run creates a history entry you can review and compare later.

When to use it

  • You want to validate a feature or regression scenario end-to-end using ordered steps.
  • You need backend verification by running Action APIs and comparing against Expected logic.
  • You want a recorded execution history for reporting, auditing, or debugging.

Core concepts

  • Execution Source – where the test runs (Manual, Server, Smart Agent)
  • Test Run – a single execution instance of a test case
  • Step Order – the sequence in which steps are executed
  • Action API – the API executed to perform the step action
  • Expected Logic – the rules used to decide pass/fail (a literal value or an Expected API)

How it works

  1. You start a run from the Test Case page (or from within a plan if applicable).
  2. The system prepares the execution context (project, build/environment if selected, and step inputs).
  3. Steps execute in order from lowest step order to highest, applying Action/Expected logic per step.
  4. Results are recorded per step and aggregated into the final Test Case run status and history entry.
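The flow above can be sketched as a minimal runner (an illustrative sketch with hypothetical names such as `run_test_case`; the real engine runs server-side or through the Smart Agent):

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    order: int
    name: str
    action: callable    # performs the step action (e.g., calls the Action API)
    expected: callable  # applies Expected logic to the action result -> bool

@dataclass
class TestRun:
    results: dict = field(default_factory=dict)  # per-step results

    @property
    def status(self):
        # The run passes only if every step passed (aggregated status).
        return "Passed" if all(self.results.values()) else "Failed"

def run_test_case(steps):
    run = TestRun()
    # Steps execute from lowest step order to highest.
    for step in sorted(steps, key=lambda s: s.order):
        outcome = step.action()
        run.results[step.name] = step.expected(outcome)
    return run

steps = [
    Step(2, "verify user", lambda: {"status": 200}, lambda r: r["status"] == 200),
    Step(1, "create user", lambda: {"status": 201}, lambda r: r["status"] == 201),
]
print(run_test_case(steps).status)  # Passed
```

Note that the per-step results dictionary mirrors the history entry: each step's outcome is recorded individually before the overall status is derived.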

How to use it

Step 1: Open the Test Case and start a run

From the Test Case details page, click the run action (for example, Run / Run Now). The available run options may vary depending on the selected Execution Source.

Step 2: Confirm execution context (if shown)

If your workflow includes selecting a build, version, or environment, choose the correct context before running. This ensures the run is associated with the right target and your results remain traceable.

Step 3: Monitor step-by-step progress

During execution, each step updates independently. API-based steps execute the Action API (if defined), then apply Expected logic (Expected API and/or expected literals). Manual steps can be updated by the tester according to the scenario.
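The pass/fail decision for an API-based step can be pictured like this (a simplified sketch, not the product's actual code; `evaluate_step` is a hypothetical name):

```python
def evaluate_step(actual_response, expected_literal=None, expected_api=None):
    """Decide pass/fail for one step.

    expected_literal: a fixed value compared against the actual response.
    expected_api: a callable that fetches the reference value at run time.
    """
    if expected_api is not None:
        # Expected API: resolve the reference value dynamically,
        # so the check stays valid across environments.
        expected = expected_api()
    else:
        # Expected literal: compare against a hardcoded value.
        expected = expected_literal
    return actual_response == expected

# Literal comparison
print(evaluate_step({"count": 3}, expected_literal={"count": 3}))      # True
# Expected API comparison
print(evaluate_step({"count": 3}, expected_api=lambda: {"count": 3}))  # True
```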

Step 4: Review the results and history

After completion, review the overall Test Case status and drill into step results for details such as extracted values, comparisons, and API responses. Each run is stored in history for later review.


Best practices

  • Run once after any change to steps, APIs, parameters, authentication, or body mode.
  • Keep steps atomic so failures are easy to locate and diagnose.
  • Prefer Expected API for dynamic systems instead of hardcoding values that change between environments.

Common mistakes

  • Starting a run without the required inputs (e.g., missing required parameters) – confirm required parameters and step inputs are provided before running.
  • Using fixed array indexes when response order is not guaranteed – use the API Tree and wildcard paths (e.g., [*]) when validating collections.
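To see why wildcards help, here is a minimal sketch of [*] fanning out across a JSON collection (illustrative only; in the product, the API Tree builds these paths for you):

```python
def extract(data, path):
    """Extract values from nested data using a dotted path.

    '[*]' fans out across every element of a list, so the check
    does not depend on the order of items in the response.
    """
    head, _, rest = path.partition(".")
    if head == "[*]":
        results = []
        for item in data:
            results.extend(extract(item, rest) if rest else [item])
        return results
    value = data[head]
    return extract(value, rest) if rest else [value]

response = {"users": [{"id": 7, "active": True}, {"id": 9, "active": True}]}
# A fixed index like users[0].active breaks if ordering changes;
# the wildcard validates every element instead.
print(extract(response, "users.[*].active"))  # [True, True]
```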


Security & permissions

  • Execution follows project access rules and may be restricted by role.
  • API credentials are not exposed during execution or in user-visible outputs.
  • Smart Agent runs are controlled by the agent’s authenticated session and project permissions.

Related documentation

  • Test Steps Fundamentals
  • Action API vs Expected API
  • API Tree (Response Tree)
  • Execution Results & History
