Model Session Details & Demographics

Understand and model session-level details and participant demographics

Introduction

This tutorial explains how to work with session-level details and participant demographics returned by the Results API. These data structures are rich and flexible, but they can be challenging to interpret and model correctly without guidance.

By the end of this tutorial, you will understand how to normalize session data, interpret task results, and safely join demographics to outcomes for enriched qualitative and quantitative analysis.

What you’ll build

A clear data model and processing approach that:

  1. Retrieves detailed session data
  2. Interprets participant demographics correctly
  3. Accounts for task result schemas and task types
  4. Normalizes responses for analysis
  5. Avoids common pitfalls with nullable and optional fields

Target audience

  • UX researchers
  • Data analysts
  • Research operations teams
  • Analytics engineers supporting research

Prerequisites

  • A valid access token (ACCESS_TOKEN). Go to Authorization for details.
  • A known session ID, referred to as SESSION_ID or sessionId. Use the GET /api/v2/sessionResults endpoint to find all completed sessions within a test.
  • Familiarity with JSON data structures

Steps

Step 1 — Retrieve session details

Endpoint

GET /api/v2/sessionResults/SESSION_ID

Request sample

curl --location 'https://api.use2.usertesting.com/api/v2/sessionResults/SESSION_ID' \
--header 'Authorization: Bearer ACCESS_TOKEN'

This endpoint returns all structured data collected during a session, including:

  • Participant identifier
  • Demographic answers
  • Task-by-task responses
  • Test and audience metadata
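
If you prefer to script this step, the sketch below shows one way to fetch the same payload with Python's requests library. The placeholder token and session ID are assumptions you replace with your own values; the endpoint and header match the curl sample above.

import requests

BASE_URL = "https://api.use2.usertesting.com/api/v2"
ACCESS_TOKEN = "your-access-token"   # placeholder
SESSION_ID = "your-session-id"       # placeholder

def get_session_results(session_id: str) -> dict:
    """Fetch the full session detail payload for one session."""
    response = requests.get(
        f"{BASE_URL}/sessionResults/{session_id}",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

session = get_session_results(SESSION_ID)
print(session.keys())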

Step 2 — Understand the session details structure

At a high level, the response contains:

{
  "sessionId": "UUID",
  "audienceId": "UUID",
  "testPlanId": "UUID",
  "sessionParticipant": { ... },
  "sessionTaskResults": [ ... ]
}

Each section serves a different analytical purpose and should be modeled separately.
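
As one way to act on that advice, the minimal sketch below splits a fetched payload into those three units. The key names mirror the structure shown above, and missing keys simply fall back to empty values.

def split_session(session: dict) -> tuple[dict, dict, list]:
    """Separate a session payload into its three analytical units."""
    header = {
        "session_id": session.get("sessionId"),
        "audience_id": session.get("audienceId"),
        "test_plan_id": session.get("testPlanId"),
    }
    participant = session.get("sessionParticipant") or {}
    task_results = session.get("sessionTaskResults") or []
    return header, participant, task_results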


Step 3 — Interpret participant demographics

Demographics are located under:

sessionParticipant.demographicsInfo[]

Each demographic item includes:

  • id — demographic item identifier
  • code — standardized category (e.g., GENDER, AGE_GROUP)
  • label — human-readable question
  • value — participant’s selected answer(s)
  • type — single or multiple choice

Important characteristics

  • All demographic fields may be nullable
  • Multiple answers may be comma-separated
  • Not all sessions include demographics

Recommended normalized table

demographics
- participant_id
- demographic_code
- label
- value
- question_type
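
A minimal normalization sketch, assuming the field names listed above and an id field on sessionParticipant as the participant identifier (adjust that name to whatever your payloads actually contain), might look like this. It skips null answers and splits comma-separated multi-answers into one row each.

def normalize_demographics(participant: dict) -> list[dict]:
    """Flatten demographicsInfo[] into one row per answered value."""
    rows = []
    participant_id = participant.get("id")   # field name assumed; use your actual identifier
    for item in participant.get("demographicsInfo") or []:
        raw_value = item.get("value")
        if raw_value is None:
            continue   # demographics may be absent or partially answered
        # Multiple answers may arrive comma-separated; keep one row per answer.
        if isinstance(raw_value, list):
            values = [str(v).strip() for v in raw_value]
        else:
            values = [v.strip() for v in str(raw_value).split(",") if v.strip()]
        for value in values:
            rows.append({
                "participant_id": participant_id,
                "demographic_code": item.get("code"),
                "label": item.get("label"),
                "value": value,
                "question_type": item.get("type"),
            })
    return rows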

Step 4 — Understand task results

Task responses are stored in:

sessionTaskResults[]

Each task result includes:

  • taskId — task item identifier
  • taskType — task types include: blank, rating scale, multiple choice, image, NPS, URL, rank order, written, Figma, matrix, and QXscore
  • taskResponse — task response shape depends on taskType
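
For intuition only, the illustrative snippet below shows how two entries in the same array can carry differently shaped taskResponse objects. The inner field names (rating, text) are assumptions, not documented fields; always inspect real payloads.

session_task_results = [
    {"taskId": "task-1", "taskType": "TASK_TYPE_RATING_SCALE",
     "taskResponse": {"rating": 4}},                        # hypothetical shape
    {"taskId": "task-2", "taskType": "TASK_TYPE_WRITTEN",
     "taskResponse": {"text": "It was easy to find."}},     # hypothetical shape
]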

Step 5 — Map task types to responses

Task types are returned as enumerated constants, for example:

  • TASK_TYPE_RATING_SCALE
  • TASK_TYPE_MULTIPLE_CHOICE
  • TASK_TYPE_WRITTEN
  • TASK_TYPE_NPS
  • TASK_TYPE_QX_SCORE

Key rule

Never assume a fixed schema for taskResponse.

Instead:

  • Inspect taskType
  • Parse the corresponding response fields dynamically
  • Store raw responses as JSON when possible

Recommended normalized table

task_results
- participant_id
- task_id
- task_type
- task_response_json
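
A sketch of that approach is shown below: the raw response is always preserved as JSON text, and only a few assumed fields are flattened for convenience, branching on taskType.

import json

def normalize_task_results(participant_id: str, task_results: list) -> list[dict]:
    """One row per task; the raw response is preserved as JSON text."""
    rows = []
    for task in task_results or []:
        response = task.get("taskResponse")
        rows.append({
            "participant_id": participant_id,
            "task_id": task.get("taskId"),
            "task_type": task.get("taskType"),
            "task_response_json": json.dumps(response) if response is not None else None,
        })
    return rows

def extract_convenience_value(task_type: str, response):
    """Optional flattening for analysis; the inner field names below are assumptions."""
    if not isinstance(response, dict):
        return None
    if task_type == "TASK_TYPE_RATING_SCALE":
        return response.get("rating")   # hypothetical field name
    if task_type == "TASK_TYPE_WRITTEN":
        return response.get("text")     # hypothetical field name
    return None   # fall back to the stored raw JSON for unhandled types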

Step 6 — Join demographics to outcomes

To analyze outcomes by demographic segment, join the two normalized tables on their shared participant key:

  • task_results.participant_id
  • demographics.participant_id
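
For example, with pandas the join is a single merge on that key. The rows below are illustrative stand-ins for the output of the earlier normalization sketches.

import pandas as pd

# Illustrative rows; in practice use the output of the normalization sketches above.
demographics_rows = [
    {"participant_id": "p1", "demographic_code": "AGE_GROUP", "value": "18-24"},
]
task_rows = [
    {"participant_id": "p1", "task_id": "t1", "task_type": "TASK_TYPE_NPS",
     "task_response_json": '{"score": 9}'},   # hypothetical response shape
]

segments = pd.DataFrame(task_rows).merge(
    pd.DataFrame(demographics_rows), on="participant_id", how="left"
)
print(segments)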

Example questions enabled

  • Do younger participants struggle more with Task A?
  • Does sentiment differ by income group?
  • Which demographic segments fail navigation tasks most often?

Step 7 — Handle nullable and optional fields

The Results API is intentionally flexible. As a result:

  • IDs may be null
  • Arrays may be empty
  • Fields may be missing depending on test design

Best practices:

  • Treat all fields as optional
  • Use defensive parsing
  • Avoid hard assumptions in ETL logic
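
A small defensive-access helper, sketched below under those assumptions, keeps ETL code from failing when a level of the payload is null or absent.

def safe_get(obj, *keys, default=None):
    """Walk nested dicts defensively; any missing or null level yields the default."""
    current = obj
    for key in keys:
        if not isinstance(current, dict) or current.get(key) is None:
            return default
        current = current[key]
    return current

# Example: a session with no participant block at all still parses cleanly.
session = {"sessionId": "abc", "sessionParticipant": None}
demographics = safe_get(session, "sessionParticipant", "demographicsInfo", default=[])
print(demographics)   # -> []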


Common pitfalls

  • Assuming all sessions have demographics: always check for nulls
  • Hard-coding task response schemas: branch logic by taskType
  • Losing raw responses: store the original JSON

What you can build next

Once session details are modeled correctly, you can:

  • Segment insights by demographic group
  • Combine task outcomes with QXscore metrics
  • Feed structured responses into AI pipelines
  • Power detailed dashboards and reports

Summary

You now have a clear approach to:

  • Interpret complex session detail structures
  • Normalize demographics and task results
  • Join participant context to outcomes safely
  • Avoid common modeling and parsing errors