Build a UX Metrics Dashboard

Create a UX metrics dashboard with QXscore to track experience quality

Introduction

This tutorial shows how to use the Results API QXscore endpoint to build a UX metrics dashboard that tracks experience quality across tests and time.

QXscore is a composite benchmark metric that combines behavioral and attitudinal signals, making it ideal for dashboards, trend analysis, and executive reporting.

What you’ll build

A dashboard-ready data flow that:

  1. Retrieves QXscores for a test
  2. Understands QXscore structure and components
  3. Normalizes QXscore data for BI tools
  4. Enables trend and comparison views across tests

Target audience

  • Product managers
  • UX researchers
  • Analytics and BI engineers
  • Research operations teams

Prerequisites

What is QXscore?

A QXscore is a standardized metric (0–100) that evaluates experience quality by combining:

Behavioral component

  • Derived from navigational task success
  • Measures what participants did

Attitudinal component

  • Derived from questionnaire responses
  • Measures what participants felt
  • Divided into four categories:
    • Usability
    • Trust
    • Appearance
    • Loyalty

Each test can contain one or more QX task groups, each producing a QXscore. Task groups contain at least two tasks each.

📘

The QXscore endpoint returns aggregated data only: each returned score is an average across all completed participant sessions.

[Figure: QXscore table as seen in the UserTesting UI]


Steps

Step 1 — Retrieve QXscores for a test

Endpoint

GET /api/v2/testResults/TEST_ID/qxScores

What this returns

  • One or more QX task groups
  • Aggregate scores across completed sessions
  • Component-level and subcomponent-level values

Example (curl)

curl --location 'https://api.use2.usertesting.com/api/v2/testResults/TEST_ID/qxScores' \
  --header 'Authorization: Bearer ACCESS_TOKEN' \
  --header 'Content-Type: application/json'
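The same request can be made in Python using only the standard library. This is a minimal sketch: `TEST_ID` and `ACCESS_TOKEN` are placeholders you supply, and the helper names are our own.

```python
import json
import urllib.request

BASE_URL = "https://api.use2.usertesting.com/api/v2"

def qx_scores_url(test_id: str) -> str:
    """Build the QXscore endpoint URL for a test."""
    return f"{BASE_URL}/testResults/{test_id}/qxScores"

def fetch_qx_scores(test_id: str, access_token: str) -> dict:
    """GET the aggregated QXscores for a test and return the parsed JSON body."""
    request = urllib.request.Request(
        qx_scores_url(test_id),
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))
```

Usage: `scores = fetch_qx_scores("TEST_ID", "ACCESS_TOKEN")`.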

Step 2 — Understand the response structure

Top-level fields

{
  "testId": "TEST_ID",
  "qxScores": [...],
  "meta": {
    "totalQxTasks": 1,
    "completes": 12
  }
}
  • testId: the unique ID of the test
  • meta.completes: the number of completed sessions included in scoring

QX task group structure

Each entry in qxScores[] represents one QX task group.

{
  "taskGroupId": "TASK_GROUP_ID",
  "label": "QXscore 1",
  "qxScore": 38,
  "components": {
    "behavioral": 25,
    "attitudinal": 50
  },
  "values": {
    "behavioral": [50, 50],
    "usability": 50,
    "trust": 50,
    "appearance": 50,
    "loyalty": 50
  }
}
  • qxScore: the task group's overall QXscore, the average of the behavioral and attitudinal values in the components property
  • components.behavioral: the average of the navigational task scores in values.behavioral, each of which is the session average for one task
  • components.attitudinal: the average of values.usability, values.trust, values.appearance, and values.loyalty, each of which is the session average for one questionnaire category
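The arithmetic above can be sketched in a few lines of Python. Integer rounding is an assumption inferred from the example payload, where components of 25 and 50 produce a qxScore of 38; the function names are ours.

```python
def attitudinal_score(values: dict) -> float:
    """Average of the four questionnaire category session averages."""
    categories = ("usability", "trust", "appearance", "loyalty")
    return sum(values[c] for c in categories) / len(categories)

def qx_score(behavioral: float, attitudinal: float) -> int:
    """QXscore: average of the two component scores, rounded to an integer."""
    return round((behavioral + attitudinal) / 2)

# Using the example payload above:
values = {"usability": 50, "trust": 50, "appearance": 50, "loyalty": 50}
assert attitudinal_score(values) == 50
assert qx_score(25, 50) == 38
```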

Step 3 — Model QXscore data for analytics

Recommended warehouse table: qx_scores

| Column            | Description                         |
| ----------------- | ----------------------------------- |
| test_id           | UUID of the test                    |
| task_group_id     | QX task group UUID                  |
| label             | QX task group label                 |
| qx_score          | Overall QXscore (0–100)             |
| behavioral_score  | Behavioral component                |
| attitudinal_score | Attitudinal component               |
| usability         | Attitudinal sub-score               |
| trust             | Attitudinal sub-score               |
| appearance        | Attitudinal sub-score               |
| loyalty           | Attitudinal sub-score               |
| completes         | Number of sessions included         |
| extracted_at      | Timestamp of ingestion              |
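One way to flatten an API response into rows matching this table is shown below. This is a sketch: field names follow the example payloads in Step 2, and extracted_at is stamped at ingestion time.

```python
from datetime import datetime, timezone

def to_qx_rows(response: dict) -> list[dict]:
    """Flatten a qxScores API response into one warehouse row per task group."""
    extracted_at = datetime.now(timezone.utc).isoformat()
    rows = []
    for group in response["qxScores"]:
        rows.append({
            "test_id": response["testId"],
            "task_group_id": group["taskGroupId"],
            "label": group["label"],
            "qx_score": group["qxScore"],
            "behavioral_score": group["components"]["behavioral"],
            "attitudinal_score": group["components"]["attitudinal"],
            "usability": group["values"]["usability"],
            "trust": group["values"]["trust"],
            "appearance": group["values"]["appearance"],
            "loyalty": group["values"]["loyalty"],
            "completes": response["meta"]["completes"],
            "extracted_at": extracted_at,
        })
    return rows
```

Each row is self-describing, so the list can be handed directly to a bulk-insert call or a DataFrame constructor.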

Step 4 — Create dashboard views

Once loaded into a BI tool, QXscore enables several high-value UX views.

  1. Overall UX health

    • Metric tile showing latest QXscore
    • Compared against internal benchmarks
  2. Behavioral vs attitudinal split

    • Stacked bar or dual-axis chart
    • Identifies execution vs perception gaps
  3. Subcomponent breakdown

    • Radar or bar chart for:
      • Usability
      • Trust
      • Appearance
      • Loyalty
  4. Longitudinal trends

    • QXscore by test launch date
    • Track improvement over time

Step 5 — Compare across tests or releases

By storing QXscores per test, teams can:

  • Compare redesign A vs redesign B
  • Track UX impact across releases
  • Correlate QXscore with:
    • Conversion
    • Retention
    • NPS
    • Support tickets
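Once scores are stored per test, a comparison view reduces to per-metric deltas between two rows. A sketch, assuming the qx_scores row shape from Step 3:

```python
def qx_delta(baseline: dict, comparison: dict) -> dict:
    """Per-metric change from a baseline row to a comparison row."""
    metrics = ("qx_score", "behavioral_score", "attitudinal_score",
               "usability", "trust", "appearance", "loyalty")
    return {m: comparison[m] - baseline[m] for m in metrics}
```

A positive delta means the comparison test (for example, redesign B) scored higher than the baseline (redesign A) on that metric.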

Error handling

Consider the following:

  • For scheduled extractions, add automated retries and alerting for 503 (service unavailable) responses.
  • Apply exponential backoff to 429 (rate limit) responses.
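A retry-with-backoff loop around the request could look like the sketch below. The status handling and delay schedule are illustrative choices, not prescribed by the API; send_request stands in for whatever function issues the HTTP call.

```python
import time

def fetch_with_retries(send_request, max_attempts: int = 5, base_delay: float = 1.0):
    """Call send_request() until it succeeds, backing off on 429/503 responses.

    send_request must return a (status_code, body) tuple.
    """
    for attempt in range(max_attempts):
        status, body = send_request()
        if status == 200:
            return body
        if status in (429, 503):
            # Exponential backoff: base_delay * 1, 2, 4, ... before retrying.
            time.sleep(base_delay * 2 ** attempt)
            continue
        raise RuntimeError(f"Unexpected status: {status}")
    raise RuntimeError("Giving up after repeated 429/503 responses")
```

Pairing this with a notification (for example, a Slack webhook) when the final attempt fails covers the scheduling concern above.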

Common pitfalls

| Pitfall                        | Recommendation                       |
| ------------------------------ | ------------------------------------ |
| Mixing task groups             | Treat each taskGroupId independently |
| Overinterpreting small samples | Use meta.completes as context        |

What you can build next

With QXscore data in place, teams can:

  • Combine UX and product analytics in BI tools
  • Power executive dashboards with experience metrics
  • Feed scores into AI systems for automated insights
  • Detect UX regressions early

Summary

You now have a clean, scalable way to:

  • Retrieve standardized UX quality metrics
  • Store them in analytics-friendly formats
  • Visualize experience quality across tests and time

QXscore dashboards turn qualitative research into quantifiable, trackable UX signals that organizations can act on with confidence.