Integration Guide

Learn how to call the RiskOS™ Evaluation API to assess fraud risk and correlation using Email Risk.


This guide walks you through how to integrate with Socure’s /api/evaluation endpoint using the Email Risk enrichment. You’ll learn how to send identity data with email inputs, parse the risk and correlation results, and apply decision logic to support onboarding, trust and safety, account integrity, and fraud prevention workflows.


Before you start

Make sure your RiskOS™ environment is provisioned with:

A workflow configured for the Email Risk enrichment.
Regional support coverage for the Email Risk enrichment.

Choose your environment

Start with Sandbox for development and testing, then move to Production for live applications.

https://riskos.sandbox.socure.com/api/evaluation
  • No real customer data
  • Free testing environment
  • Unlimited API calls

Get an API key

  1. In the Sandbox RiskOS™ Dashboard, go to Developer Workbench > API Keys.
  2. Copy your API key securely.

How it works

  1. Send a POST request to /api/evaluation with at least an email address.
  2. Socure runs the request through the configured RiskOS™ workflow and applies the Email Risk enrichment.
  3. Receive a decision (ACCEPT, REVIEW, or REJECT) and supporting metadata.
  4. Apply your routing logic based on thresholds, correlation levels, and risk signals.


Start a new Risk Evaluation

Endpoint

POST https://riskos.sandbox.socure.com/api/evaluation
POST https://riskos.socure.com/api/evaluation

Authentication and headers

Include your API key in the Authorization header as a Bearer token, along with standard JSON headers:

Authorization: Bearer YOUR_API_KEY
Content-Type: application/json
Accept: application/json
X-API-Version: 2025-01-01.orion   # optional – pins a specific API version

Example request

{
  "id": "APP-123456",
  "timestamp": "2025-08-27T06:10:54.298Z",
  "workflow": "consumer_onboarding",
  "data": {
    "individual": {
      "given_name": "Guillermo",
      "family_name": "McNeil",
      "email": "[email protected]",
      "address": {
        "country": "US"
      }
    }
  }
}
curl --location --request POST 'https://riskos.sandbox.socure.com/api/evaluation' \
  --header 'Content-Type: application/json' \
  --header 'Accept: application/json' \
  --header 'Authorization: Bearer YOUR_API_KEY' \
  --data-raw '{
    "id": "APP-123456",
    "timestamp": "2025-08-27T06:10:54.298Z",
    "workflow": "consumer_onboarding",
    "data": {
      "individual": {
        "given_name": "Guillermo",
        "family_name": "McNeil",
        "email": "[email protected]",
        "address": {
          "country": "US"
        }
      }
    }
  }'
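The same request can be made from code. Below is a minimal Python sketch of the curl call above using only the standard library; YOUR_API_KEY is a placeholder for your Sandbox key, and the network call itself is left commented out so you can adapt error handling to your own stack.

```python
# Sketch of the /api/evaluation POST request (Sandbox). Placeholders:
# YOUR_API_KEY and the "consumer_onboarding" workflow come from your environment.
import json
import urllib.request

SANDBOX_URL = "https://riskos.sandbox.socure.com/api/evaluation"

def build_request(api_key: str, payload: dict) -> urllib.request.Request:
    """Assemble the POST request with the standard JSON headers."""
    return urllib.request.Request(
        SANDBOX_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
            "Accept": "application/json",
        },
        method="POST",
    )

payload = {
    "id": "APP-123456",
    "timestamp": "2025-08-27T06:10:54.298Z",
    "workflow": "consumer_onboarding",
    "data": {
        "individual": {
            "given_name": "Guillermo",
            "family_name": "McNeil",
            "email": "[email protected]",
            "address": {"country": "US"},
        }
    },
}

req = build_request("YOUR_API_KEY", payload)
# with urllib.request.urlopen(req) as resp:   # network call omitted in this sketch
#     result = json.load(resp)
```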

Request schema

Top-level fields

📘 Note: Email Risk requires only a data.individual.email value to return a risk score, but for improved score accuracy we recommend also including the optional Email Risk fields in the request.

id (String, Required): Customer-defined unique identifier for the request. This value must be unique for each evaluation; reusing an ID causes RiskOS™ to treat the request as a re-run and can impact processing behavior, results, and downstream workflows. Example: "APP-123456"
timestamp (String, Required): RFC 3339 timestamp indicating when the evaluation was initiated. Example: "2025-08-27T06:10:54.298Z"
workflow (String, Required): RiskOS™ workflow name configured in your environment. Example: "consumer_onboarding"
data (Object, Required): Main payload for the RiskOS™ workflow containing all request details.
individual (Object, Required): Identity object containing personal and address information. See the individual schema below.


individual fields

email (String, Required): The consumer's email address. Example: "[email protected]"
given_name (String, Optional): First name. Improves correlation accuracy. Example: "Guillermo"
family_name (String, Optional): Last name. Improves correlation accuracy. Example: "McNeil"
address (Object, Optional): Individual address fields. See the address schema below.

address fields

line_1 (String, Optional): Street address line 1. Example: "123 Main Street"
line_2 (String, Optional): Street address line 2.
locality (String, Optional): City or locality. Example: "Miami"
major_admin_division (String, Optional): State, province, or region (ISO 3166-2 format). Example: "FL"
postal_code (String, Optional): ZIP or postal code (hyphens optional). Example: "33101"
country (String, Optional): Country in ISO 3166-1 alpha-2 format (e.g., US, CA). Example: "US"

Example response

When you call the Evaluation API, RiskOS™ returns a JSON payload that includes the final decision, evaluation metadata, and enrichment-specific results.

{
  "id": "APP-123456",
  "eval_id": "6dc8f39c-ecc3-4fe0-9283-fc8e5f99e816",
  "decision": "REVIEW",
  "data_enrichments": [
    {
      "enrichment_name": "Socure Email Risk",
      "enrichment_provider": "Socure",
      "status_code": 200,
      "request": {
        "email": "[email protected]",
        "given_name": "Guillermo",
        "family_name": "McNeil",
        "address": {
          "country": "US"
        },
        "modules": [
          "emailrisk"
        ]
      },
      "response": {
        "referenceId": "f3863a33-69ca-43c2-90e0-8b4344a41a09",
        "nameEmailCorrelation": {
          "reasonCodes": [
            "I557",
            "R551"
          ],
          "score": 0.99
        },
        "emailRisk": {
          "reasonCodes": [
            "I520",
            "R563"
          ],
          "scores": [
            {
              "name": "Email Risk Model (US) Norm",
              "version": "7.0",
              "score": 0.887
            }
          ]
        }
      }
    }
  ]
}
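As a sketch of response handling, the helper below pulls the key Email Risk signals out of a decoded evaluation payload shaped like the example above. The field paths follow that example; the helper name is illustrative.

```python
# Extract Email Risk signals from a decoded /api/evaluation response dict.
def extract_email_risk(evaluation: dict) -> dict:
    """Return the first Email Risk enrichment's key signals, or {} if absent."""
    for enrichment in evaluation.get("data_enrichments", []):
        if enrichment.get("enrichment_name") == "Socure Email Risk":
            resp = enrichment.get("response", {})
            scores = resp.get("emailRisk", {}).get("scores", [])
            return {
                "decision": evaluation.get("decision"),
                "correlation": resp.get("nameEmailCorrelation", {}).get("score"),
                "risk_score": scores[0]["score"] if scores else None,
                "reason_codes": resp.get("emailRisk", {}).get("reasonCodes", []),
            }
    return {}

# Trimmed version of the example response above, used for a quick check.
example = {
    "decision": "REVIEW",
    "data_enrichments": [{
        "enrichment_name": "Socure Email Risk",
        "response": {
            "nameEmailCorrelation": {"score": 0.99, "reasonCodes": ["I557", "R551"]},
            "emailRisk": {
                "reasonCodes": ["I520", "R563"],
                "scores": [{"name": "Email Risk Model (US) Norm",
                            "version": "7.0", "score": 0.887}],
            },
        },
    }],
}
signals = extract_email_risk(example)
```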

Key response fields

RiskOS™ returns a consistent set of top-level fields that describe the outcome of an evaluation, along with enrichment-specific results that depend on your workflow configuration.


Where to find specific results

  • Decision and routing (decision, decision_at, tags, review_queues, notes, score): Primary control signals. Branch application logic using decision; use tags, queues, notes, and score for secondary routing, review, and explanation.
  • Module results (module-specific fields, for example reasonCodes, scores, extracted attributes): Evidence and signals produced by workflow modules. Use for escalation, compliance review, investigation, and audit.
  • Identifiers and traceability (id, eval_id): Persist these identifiers to correlate API calls, logs, webhooks, GET requests, and support cases.
  • Enrichment execution (data_enrichments[]: response, status_code, total_attempts, is_source_cache): Inspect enrichment outputs and detect provisioning issues, partial failures, retries, or cached responses.
  • Workflow context (workflow, workflow_id, workflow_version): Understand which workflow ran and which version produced the result. Useful for debugging and historical analysis.
  • Evaluation lifecycle (eval_status, status, sub_status): Execution and case state only. Useful for monitoring and asynchronous workflows. Do not use for business decisions.
  • Execution context (eval_source, eval_start_time, eval_end_time, environment_name): Observability and performance metadata for latency tracking, environment validation, and API vs. Dashboard attribution.

Decision and routing (primary control signals)

Use these fields to determine what action your application should take.

decision values are workflow-specific and may differ from the examples shown in this guide.

decision (String, enum): Final evaluation result. Possible values: ACCEPT, REVIEW, REJECT. Note: The fields returned can be customized to fit your integration or business needs. Example: "REVIEW"
decision_at (String <Date-Time>): RFC 3339 timestamp when the decision was finalized. Example: "2025-10-01T09:12:44.387Z"
score (Number): If configured for a workflow, provides an aggregate score of all steps. Can be used for risk banding, additional routing, or analytics alongside the primary decision value. Example: 0.61
tags (Array of Strings): Labels applied during the workflow to highlight routing choices, notable signals, or rule outcomes. Useful for reporting, segmentation, or UI highlighting in the RiskOS™ Dashboard. Example: ["manual_review_required"]
review_queues (Array of Strings): Manual review queues the evaluation was sent to. Empty when the case is fully auto-resolved without human review. Example: ["kyc-us"]
notes (String): Freeform text for analyst or system comments about the evaluation. Often used to capture manual review rationale or investigation context. Example: "Review triggered due to elevated risk indicators"
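A minimal routing sketch based on these fields might look like the following; the handler names (onboard, deny) and the fallback queue name are illustrative, and your workflow may return different decision values.

```python
# Illustrative routing on the primary decision field; adapt the branch
# targets and fallback queue ("manual-review") to your own application.
def route(evaluation: dict) -> str:
    """Return a routing action string for a decoded evaluation response."""
    decision = evaluation.get("decision")
    if decision == "ACCEPT":
        return "onboard"
    if decision == "REJECT":
        return "deny"
    # REVIEW (or any other workflow-specific value): send to the listed
    # review queues, falling back to a default manual-review queue.
    queues = evaluation.get("review_queues") or ["manual-review"]
    return f"review:{queues[0]}"
```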

Evaluation lifecycle and status

These fields describe where the evaluation is in its lifecycle and are useful for monitoring and asynchronous workflows.

eval_status (String, enum): Current state of an evaluation in RiskOS™. Possible values: evaluation_completed, evaluation_paused, evaluation_in_progress. Example: "evaluation_completed"
status (String, enum): Current state of an evaluation or case. Possible values: OPEN, CLOSED. Example: "CLOSED"
sub_status (String): Additional detail about the evaluation status. Example values: Under Review, Pending Verification, Accept, Reject. Example: "Under Review"

Identifiers and traceability

Use these fields to correlate requests, logs, webhooks, and support cases.

id (String, UUID or custom string): Your evaluation identifier within RiskOS™. Note: This value is customer-generated. Example: "APP-123456"
eval_id (String, UUID): RiskOS™-generated unique identifier for the evaluation. Example: "6dc8f39c-ecc3-4fe0-9283-fc8e5f99e816"
workflow (String): Name of the workflow executed. Example: "consumer_onboarding"
workflow_id (String, UUID): Unique identifier for the workflow run. Example: "dc7f261e-b158-477e-9770-7e4eae066156"
workflow_version (String): Version of the executed workflow. Example: "28.16.0"

Execution context

These fields provide timing and environment context for the evaluation.

eval_source (String, enum): Where the evaluation was initiated from. Possible values: API (request submitted via the Evaluation API) and Dashboard (case created or evaluated through the RiskOS™ Dashboard). Example: "API"
eval_start_time (String <Date-Time>): RFC 3339 timestamp for when RiskOS™ started processing the evaluation. Useful for latency and performance monitoring. Example: "2025-10-07T23:50:03.60187976Z"
eval_end_time (String <Date-Time>): RFC 3339 timestamp for when RiskOS™ finished processing the evaluation. Pair with eval_start_time to compute total processing time. Example: "2025-10-07T23:50:03.738794253Z"
environment_name (String): Environment the evaluation ran in, typically Sandbox for testing or Production for live traffic. Example: "Sandbox"
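Because eval_start_time and eval_end_time carry nanosecond precision while Python's datetime tracks only microseconds, a latency-computation sketch can truncate the extra digits before parsing. This assumes UTC timestamps in the Z-suffixed form shown above.

```python
import re
from datetime import datetime, timezone

def parse_rfc3339(ts: str) -> datetime:
    """Parse an RFC 3339 UTC timestamp, truncating sub-microsecond digits."""
    ts = re.sub(r"(\.\d{6})\d+", r"\1", ts)  # keep at most 6 fractional digits
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%fZ").replace(tzinfo=timezone.utc)

# Timestamps taken from the execution-context examples above.
start = parse_rfc3339("2025-10-07T23:50:03.60187976Z")
end = parse_rfc3339("2025-10-07T23:50:03.738794253Z")
latency_ms = (end - start).total_seconds() * 1000
```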

Enrichment results

Enrichment outputs are returned in the data_enrichments array.


enrichment_name (String): Product name. Example: "Socure Email Risk"
enrichment_endpoint (String, URL): Endpoint invoked.
enrichment_provider (String): Provider. Example: "Socure"
status_code (Integer): HTTP status code. Example: 200
request (Object): Payload sent to the provider. See the request schema below.
response (Object): Payload returned by the provider. See the response schema below.
is_source_cache (Boolean): Whether the response was served from cache. Example: false
total_attempts (Integer): Number of attempts made. Example: 1

request fields

email (String): Email address submitted for evaluation. Example: "[email protected]"
given_name (String): First name of the individual who submitted the information. Example: "Guillermo"
family_name (String): Last name (surname) of the individual who submitted the information. Example: "McNeil"
address (Object): Individual address fields.
country (String): Country code in ISO 3166-1 alpha-2 format. Example: "US"
modules (Array): Enrichment modules requested for evaluation. Example: ["emailrisk"]

response fields

referenceId (String, UUID): Unique identifier assigned to each enrichment after a RiskOS™ workflow is finalized. Example: "f3863a33-69ca-43c2-90e0-8b4344a41a09"
nameEmailCorrelation (Object): Correlation score and associated reason codes for name–email matching. See the nameEmailCorrelation schema below.
emailRisk (Object): Email risk assessment scores and corresponding reason codes. See the emailRisk schema below.

nameEmailCorrelation fields

score (Number): Correlation score between the provided name and email address; higher values indicate stronger association. Example: 0.99
reasonCodes (Array of Strings): Reason codes contributing to the name–email correlation result. Example: ["I556", "I520"]

The email correlation value is a number between 0.01 and 0.99 that indicates the strength of the relationship between the email address and the consumer's given_name and family_name.

The range of possible values can be interpreted as follows:

  • 0.95 - 0.99: Very high confidence. The full name match was verified by one or more sources.
  • 0.85 - 0.94: High confidence. The partial name match (includes fuzzy matches and nicknames) was verified by one or more sources.
  • 0.75 - 0.84: Medium-to-high confidence. The last name match was verified by one or more sources.
  • 0.65 - 0.74: Low confidence. Partial name match only.
  • 0.20 - 0.64: Correlation status is unknown. The Evaluation endpoint was unable to determine correlation.
  • 0.01 - 0.19: Disconnected identity elements. No correlation found.
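These bands can be sketched as a simple lookup; the labels are paraphrased from the list above.

```python
# Map a nameEmailCorrelation score (0.01-0.99) to its interpretation band.
def correlation_band(score: float) -> str:
    if score >= 0.95:
        return "very high confidence (full name match verified)"
    if score >= 0.85:
        return "high confidence (partial name match, incl. fuzzy/nicknames)"
    if score >= 0.75:
        return "medium-to-high confidence (last name match verified)"
    if score >= 0.65:
        return "low confidence (partial name match only)"
    if score >= 0.20:
        return "unknown correlation"
    return "disconnected identity elements"
```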

emailRisk fields

reasonCodes (Array of Strings): Reason codes indicating factors influencing the email risk score. Example: ["I520", "R563"]
scores (Array of Objects): Email risk model scores returned by the provider. Each entry contains:
  name (String): Name of the scoring model used. Example: "Email Risk Model (US) Norm"
  version (String): Version of the scoring model. Example: "7.0"
  score (Number): Confidence score between 0.001 and 0.999. Example: 0.887

Each email risk score is a probabilistic value between 0.001 and 0.999 that represents the level of risk associated with an email address. A higher score indicates a greater likelihood of fraud.

For example, a score of 0.800 means that the consumer is riskier than 80 percent of all consumers and less risky than 20 percent of consumers. In general, consumers with scores equal to or greater than 0.970 are considered high risk.
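As a sketch, the percentile reading and the 0.970 high-risk guideline can be encoded as follows; the threshold constant and wording are illustrative and should be tuned to your own risk tolerance.

```python
HIGH_RISK_THRESHOLD = 0.970  # guideline from above; tune per your risk tolerance

def describe_email_risk(score: float) -> str:
    """Interpret an email risk score as a percentile plus a high-risk flag."""
    label = "HIGH RISK" if score >= HIGH_RISK_THRESHOLD else "below high-risk threshold"
    return f"riskier than {score * 100:.1f}% of consumers ({label})"
```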

👍 Tip: When multiple risk scores are returned, we recommend following these guidelines:

  • Evaluate each model's performance individually, using metrics such as fraud capture rate, false positive rate, and AUC to gauge accuracy.
  • Understand the specific use cases and goals each model is optimized for, since differently tuned models can yield different scores.
  • Examine the input data and algorithms each model employs; models built on distinct techniques or data may produce divergent results.
  • Align scores with your risk strategies and thresholds, and decide how best to combine multiple outputs for decision-making.
  • Investigate substantial score discrepancies; conflicting scores may indicate an unusual case that warrants further review.
  • Refer to the documentation outlining the design and purpose of each model to guide score interpretation. Contact your Technical Account Manager to request these documents.

In cases of significant score conflicts, prioritize the model with superior performance based on metrics like the fraud capture rate to serve as the tie-breaker.



Best practices

  • Always include given_name and family_name for stronger correlation.
  • Use parallel model scoring to compare performance of challenger vs. champion models.
  • Investigate discrepancies between multiple model scores — they may indicate edge cases.
  • Tune thresholds (e.g., ≥0.97 for auto-fail) to balance fraud capture vs. false positives.
  • Log reason codes for auditability and continuous improvement.


Validation checklist

  • Requests include at least email and the recommended optional fields.
  • High-risk (≥0.97) and low-correlation (≤0.20) cases trigger additional review or step-up verification.
  • Parallel models are configured where available.
  • Logs capture the full enrichment payload and decision scores.