Workflow Testing

The Workflow Testing feature allows you to validate a workflow before publishing. You can run tests against either historical evaluations (backtesting) or custom JSON payloads to identify and resolve issues safely. Testing ensures workflows behave as expected without impacting live systems.


When to use workflow testing

  • Validate against historical data before deployment.
  • Analyze hypothetical outcomes retroactively, e.g., how a changed workflow would have decided past evaluations.
  • Simulate edge cases by running tests with custom JSON payloads.
  • Run regression checks after making workflow changes.
  • Experiment safely in a controlled environment.
📘 Note:

Workflow testing is currently disabled for workflows that include any asynchronous steps (e.g., DocV, OTP, Wait step) or asynchronous data services (e.g., Middesk, Thomson Reuters).

This is a temporary limitation we plan to lift in Q4 2025. If you need to test such workflows today, we can provide synchronous drop-in replacements. Please work with your Solutions Consultant or email [email protected] for details.


Testing modes

RiskOS™ supports two testing modes:

  • Historical evaluations: Previously processed records are re-executed against the workflow.

    • No live decisions or cases are created.
    • Results are available for analytical comparison.
  • Custom JSON payloads: Input a single JSON object to simulate a scenario.

    • Ideal for debugging or validating behavior with synthetic data.
    • Results are immediate but lightweight — there is no persistent storage and responses cannot be opened in Case Management.
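As an illustration, a custom payload for the JSON input mode might look like the snippet below. The field names here are purely hypothetical, not a documented RiskOS schema; your payload must match the input fields your workflow actually expects.

```json
{
  "evaluation_id": "test-eval-001",
  "first_name": "Jane",
  "last_name": "Doe",
  "email": "jane.doe@example.com",
  "date_of_birth": "1990-04-12"
}
```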

1. Run a workflow test

  1. Save your workflow draft.
  2. Click Run a backtest in the Workflow Canvas.
  3. Choose an input method:
    • Select cases from a list: Pick historical cases from your RiskOS account by ID or date range.
    • Input IDs manually: Enter up to 100 IDs (comma-separated).
    • Use custom JSON input: Paste a JSON payload into the input field.
  4. Click Run Test to begin.

2. Interpret the results

When testing with multiple IDs or historical cases, a Test Results panel displays the following sections.


Evaluations completed

Shows how many test cases completed without workflow errors versus how many completed with errors.


Comparative outcome chart

A bar graph compares:

  • Previous Outcome → The original workflow’s decision.
  • Test Outcome → The updated workflow’s decision.

Evaluation results table

For each ID tested, the table displays:

  • The decision from the original workflow.
  • The decision from the updated workflow.

Click the > icon to drill into step-by-step execution details.


Limitations

  • Live workflows cannot be backtested — clone into a new minor version first.
  • When testing with a single custom JSON payload, the comparative results panel is not shown.
  • Test results cannot currently be exported or downloaded (JSON, CSV, etc.).
📘 Note:

Testing may use cached enrichment results. If the same evaluation ID was used in past runs, enrichment responses are reused.

  • Adding a new service or renaming a step bypasses the cache and may trigger new calls, potentially incurring additional costs.
  • A pop-up will ask you to confirm before costs are applied.
  • To avoid cached responses entirely, provide a new evaluation ID.
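If you control the evaluation ID in your test input, one simple way to guarantee a fresh ID on every run is to generate a UUID. A minimal sketch in Python, assuming your payload carries the ID in a field such as `evaluation_id` (an illustrative name, not a documented RiskOS schema key):

```python
import uuid


def fresh_test_payload(base_payload: dict) -> dict:
    """Return a copy of the payload with a new, never-before-used
    evaluation ID so cached enrichment responses are not reused."""
    payload = dict(base_payload)
    # NOTE: "evaluation_id" is a hypothetical field name used for
    # illustration; substitute the identifier field your workflow expects.
    payload["evaluation_id"] = f"test-{uuid.uuid4()}"
    return payload


example = fresh_test_payload({"first_name": "Jane", "last_name": "Doe"})
print(example["evaluation_id"])
```

Because each run gets a unique ID, enrichment services are called anew, so keep the cost-confirmation pop-up in mind before running large batches this way.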

Best practices

  • Test iteratively: Run small sets of IDs or synthetic cases before large-scale backtests.
  • Check error cases: Review both “completed” and “completed with errors” results to spot workflow misconfigurations.
  • Validate deltas: Focus on how outcomes differ between the original and updated workflows.
  • Document limitations: Remember that test results are view-only; keep separate notes if you need a record.
  • Interpret errors carefully: RiskOS often completes workflows even if individual steps throw errors.
    • A red error symbol in the side panel usually means a missing or null field.
    • Many such errors are expected and not harmful (e.g., null values used in conditions).
    • Pay special attention to errors from data services and confirm they returned a valid API response.