Stream Insights into Slack or Jira
Near real-time polling
Introduction
This tutorial demonstrates how to build a near real-time automation that monitors UserTesting sessions as they complete, detects important signals (such as failed tasks or notable responses), and pushes alerts into Slack or Jira.
This pattern enables teams to act on research insights immediately—turning UX findings into operational workflows instead of static reports.
The approach uses polling, which is supported by the Results API, and sets the foundation for webhook-based integrations with other apps.
What you’ll build
A lightweight service or script that:
- Polls for newly completed sessions in a test
- Fetches session details for each new session
- Applies client-side logic to detect signals (e.g., failed tasks)
- Sends formatted notifications to Slack or Jira
Target audience
- DevOps engineers
- Product operations teams
- Research operations teams
- Platform and integration engineers
Prerequisites
- A valid access token (`ACCESS_TOKEN`). Go to Authorization for details.
- A known `testId`. Go to How to obtain a TestId (UUID) for details.
- A Slack Incoming Webhook URL or Jira API credentials
- A scheduler or lightweight worker (cron, serverless function, background job)
High-level architecture
graph LR
A["Scheduler / Worker<br/>(every N minutes)"] --> B["List sessions<br/>(polling)"]
B --> C["Identify newly<br/>completed sessions"]
C --> D["Fetch session<br/>details"]
D --> E["Apply rules<br/>(failed task,<br/>sentiment, etc.)"]
E --> F["Send message<br/>to Slack or Jira"]
Steps
Step 1 — Poll for sessions in a test
Endpoint
GET /api/v2/sessionResults
Why polling works
- Sessions complete over time
- This endpoint allows you to repeatedly check for updates
- Near real-time behavior is achieved by polling every few minutes
Example (curl)
curl --location 'https://api.use2.usertesting.com/api/v2/sessionResults/?testId=TEST_ID&limit=10&offset=0' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer ACCESS_TOKEN'

Best practice: Always use the pagination query parameters (`limit` and `offset`), along with the appropriate logic, to ensure all sessions are retrieved.
- Use `limit`, `offset`, and `totalCount` from the `meta.pagination` property of the response to calculate the number of page iterations needed to collect all sessions in the test.
- By default, `limit` is set to `25` and `offset` to `0`.
- In the response, sessions are sorted in descending order, i.e., from newest to oldest.
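The pagination logic can be sketched in Python. Here, `fetch_page` is a hypothetical helper that wraps the curl call above and returns the parsed JSON body; the `sessions` and `meta.pagination.totalCount` keys mirror the response shape described in this step.

```python
import math

def iterate_sessions(fetch_page, limit=25):
    """Yield every session in a test by walking the paginated endpoint.

    `fetch_page(limit, offset)` is assumed to return the parsed JSON body,
    containing a `sessions` list and `meta.pagination.totalCount`.
    """
    first = fetch_page(limit=limit, offset=0)
    total = first["meta"]["pagination"]["totalCount"]
    yield from first["sessions"]
    # totalCount tells us how many pages exist in total.
    pages = math.ceil(total / limit)
    for page in range(1, pages):
        body = fetch_page(limit=limit, offset=page * limit)
        yield from body["sessions"]
```

Because the API returns sessions newest first, the generator also yields them newest first, which pairs naturally with the checkpoint logic in Step 2.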
Step 2 — Track newly completed sessions
Best practice
Maintain a small state store (file, DB, cache) with:
- The last processed `finishTime`
- Or a set of processed `sessionId`s
Logic
Ignore sessions with a `finishTime` equal to or less than the last processed `finishTime`.
This prevents duplicate notifications.
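A minimal file-based state store might look like the following sketch. The file location and JSON shape are assumptions, not part of the API; any durable store (database row, cache key) works the same way.

```python
import json
import os
import tempfile

# Hypothetical checkpoint location; use a durable path in production.
CHECKPOINT_FILE = os.path.join(tempfile.gettempdir(), "ut_checkpoint.json")

def load_checkpoint():
    """Return the last processed finishTime, or None on the first run."""
    try:
        with open(CHECKPOINT_FILE) as f:
            return json.load(f)["finishTime"]
    except (FileNotFoundError, KeyError, json.JSONDecodeError):
        return None

def save_checkpoint(finish_time):
    """Persist the newest processed finishTime for the next run."""
    with open(CHECKPOINT_FILE, "w") as f:
        json.dump({"finishTime": finish_time}, f)
```

ISO 8601 timestamps in a consistent format compare correctly as strings, so the checkpoint can be stored and compared without parsing.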
Step 3 — Retrieve session details for new sessions
Endpoint
GET /api/v2/sessionResults/SESSION_ID
Purpose
Session details include:
- Task results
- Participant responses
- Context needed to detect failures or insights
Example (curl)
curl --location 'https://api.use2.usertesting.com/api/v2/sessionResults/SESSION_ID' \
--header 'Authorization: Bearer ACCESS_TOKEN'

Step 4 — Detect actionable signals (client-side logic)
The Results API returns raw, structured data. Insight detection happens in your code.
Example signals
Failed task detection
Identify task responses where:
- Rating scale is below a threshold
- Navigation task indicates failure
- NPS-style tasks score poorly
Notable qualitative responses
- Keyword matches in written responses
- Empty or abandoned tasks
- Unexpected task patterns
Example rule (pseudo-logic)
IF taskType == RATING_SCALE AND value <= 2
THEN flag as "Task Failure"
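The rating-scale rule above can be expressed in Python as a sketch. The `tasks`, `taskType`, and `value` keys are illustrative assumptions; adapt them to the actual shape of your session-details response.

```python
RATING_FAILURE_THRESHOLD = 2  # flag rating-scale answers at or below this value

def detect_failure(session_details):
    """Return a list of flagged tasks, applying the rating-scale rule.

    `session_details` is the parsed session-details response; the field
    names used here are placeholders for the real response schema.
    """
    flags = []
    for task in session_details.get("tasks", []):
        if (
            task.get("taskType") == "RATING_SCALE"
            and task.get("value") is not None
            and task["value"] <= RATING_FAILURE_THRESHOLD
        ):
            flags.append({"task": task, "reason": "Task Failure"})
    return flags
```

Additional rules (keyword matches, empty responses) can be added as further conditions that append to the same `flags` list.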
Step 5 — Send alerts to Slack
Slack integration pattern
- Use an Incoming Webhook
- Post a concise, actionable message
Example payload
{
"text": "🚨 UserTesting Alert\nA participant failed Task 3 in Test ABC.\nSession ID: 1234\nFinish Time: 2024-11-21T12:25Z"
}

Best practices
Include:
- Test name or ID
- Session ID
- What went wrong
Keep messages short and scannable.
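Building and posting the payload above could look like this sketch, using only the Python standard library. The message fields and helper names are illustrative; the webhook URL comes from your Slack app configuration.

```python
import json
import urllib.request

def build_slack_payload(test_name, session_id, finish_time, problem):
    """Build the concise, scannable message shape shown above."""
    text = (
        f"🚨 UserTesting Alert\n{problem} in {test_name}.\n"
        f"Session ID: {session_id}\nFinish Time: {finish_time}"
    )
    return {"text": text}

def notify_slack(webhook_url, payload):
    """POST the payload to a Slack Incoming Webhook; returns the HTTP status."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```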
Step 6 — Send issues to Jira (optional)
Instead of Slack, you can create Jira issues when critical signals occur.
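As a sketch, an issue can be created via Jira Cloud's REST v2 create-issue endpoint with API-token basic auth; the project key, labels, and helper names below are illustrative assumptions.

```python
import base64
import json
import urllib.request

def build_jira_issue(project_key, summary, description):
    """Build a create-issue payload in the shape shown below."""
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Task"},
            "labels": ["usertesting", "ux-research"],
        }
    }

def create_jira_issue(base_url, email, api_token, payload):
    """POST to Jira's create-issue endpoint using basic auth."""
    auth = base64.b64encode(f"{email}:{api_token}".encode()).decode()
    req = urllib.request.Request(
        f"{base_url}/rest/api/2/issue",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {auth}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```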
Common input data sample for creating a Jira issue:
{
"fields": {
"project":
{
"key": "TEST"
},
"summary": "UX Issue Detected in UserTesting Session",
"description": "Include task details and session context here.",
"issuetype": {
"name": "Task"
},
"labels": ["usertesting", "ux-research", "regression"]
}
}

Step 7 — Control frequency and load
Recommended polling interval
- Every 5–10 minutes for near real-time use
- Longer intervals for lower urgency
Rate-limit considerations
Watch for rate-limit error responses (429 status code) and use:
- Exponential backoff
- Small concurrency limits
- Caching, to avoid re-fetching session details unnecessarily
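Exponential backoff on 429s can be sketched as a small retry wrapper. `request_fn` is a hypothetical callable returning a `(status_code, body)` pair; the retry count and delays are tunable assumptions.

```python
import random
import time

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Retry `request_fn` on 429 responses with exponential backoff.

    Any status other than 429 is returned immediately. The delay doubles
    each attempt, with a little jitter to avoid synchronized retries.
    """
    for attempt in range(max_retries):
        status, body = request_fn()
        if status != 429:
            return status, body
        delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
        time.sleep(delay)
    raise RuntimeError("Rate limited: retries exhausted")
```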
Example polling loop (pseudo-code)
last_checkpoint = load_checkpoint()
newest_finish_time = last_checkpoint
for page in paginate_sessions(test_id):
    for session in page.sessions:
        if session.finishTime <= last_checkpoint:
            continue  # already processed in a previous run
        details = get_session_details(session.sessionId)
        if detect_failure(details):
            notify_slack(details)
        newest_finish_time = max(newest_finish_time, session.finishTime)
update_checkpoint(newest_finish_time)

Pseudo-code overview: The loop iterates through completed test sessions since the last checkpoint, analyzes each one for failures, sends a Slack notification when a failure is detected, and finally advances the checkpoint to the newest finishTime seen. Because sessions are returned newest first, the checkpoint is updated once, after the loop, so that an intermediate (older) finishTime is never persisted and sessions are not reprocessed in future runs.
Common pitfalls
| Pitfall | Recommendation |
|---|---|
| Duplicate alerts | Persist checkpoints |
| Alert noise | Start with strict rules |
| Over-polling | Increase interval if volume is high |
| Missing context | Include task and test info |
Summary
You’ve built a near real-time automation that:
- Monitors research sessions as they complete
- Detects meaningful UX signals
- Pushes insights directly into team workflows
This transforms UserTesting from a reporting tool into a real-time decision engine, enabling faster response and tighter collaboration across teams.
