UX Researchers

Run a usability study before your next standup.

Real participants test your product while an AI moderator asks the follow-up questions you would. No scheduling. No notetaking. Describe what you want to learn; Candor handles recruitment, moderation, and transcription. You get insights, not logistics.

Every evaluation on Candor is completed by a real person. Not an LLM. Not a synthetic label. Human judgment.

The Problem

You spend more time on ops than on research

📋

Recruiting takes longer than the research

You spend days finding participants, screening them, scheduling sessions, sending reminders, handling no-shows. By the time you run the study, the sprint has moved on.

⏱

Moderation is a bottleneck

You can only run as many sessions as you have moderators and hours in the day. Five sessions take a full week once you account for scheduling, running, and debriefing. And your best moderator is also your busiest person.

📉

Insight delivery is always late

By the time you've transcribed, coded, synthesized, and presented findings, the team has already shipped. Research becomes a retrospective exercise instead of a decision-making input.

Use Cases

Research workflows, not research theater

AI-moderated usability test

Real participants browse your product while an AI voice moderator asks questions, probes on friction points, and adapts follow-ups in real time. You write the interview guide (topics to cover, areas to probe) and the AI runs the session. Every session is transcribed with key moments annotated. Run 5 sessions overnight instead of across a week.

Participant view (live session, 1:47)
your-app.com/onboarding
AI Moderator: Walk me through what you see on this page.
Participant: I'm not sure where to click next...
AI Moderator: What would you expect to happen?
How you'd run it
$ claude "run a 5-person usability test of our
onboarding flow: probe on confusion and
drop-off points"
What you get back
Themes across 5 human sessions:
Onboarding step 3 is a wall (5/5 sessions)
Every participant hesitated at the workspace setup
screen. 3 tried to skip it, 2 asked what "workspace"
means in this context.
Pricing page erodes trust (4/5 sessions)
Participants expected to see a free tier. When they
didn't, 4 out of 5 said they'd leave.
First-run experience is fast (5/5 sessions)
Every participant noted how quickly they got to the
core feature. Positive signal on time-to-value.
Transcripts: candor.sh/studies/ux-onboarding-2024/transcripts
Learn more about Voice Interviews →

Preference testing on design variants

You have 3 design directions and need to know which one users prefer. Run pairwise comparisons with real users: they see two options side by side, pick a winner, and explain why. Get a ranked result with agreement metrics in hours, not days. Works for mockups, copy variants, icon options, anything visual or textual.

Participant view
A vs B (with a "Tie" option)

“Which of these two onboarding screens feels easier to get started with?”

How you'd run it
$ claude "compare these 3 onboarding screen variants
β€” which do users prefer and why?"
What you get back
Ranked by preference (20 human participants, pairwise):
#1 Variant B ("minimal + progressive"): 74% win rate
#2 Variant A ("full form upfront"): 52% win rate
#3 Variant C ("wizard with illustrations"): 34% win rate
Agreement: 0.82 (strong consensus)
Top reason for B: "I could start using it immediately
without filling out a bunch of fields I don't understand yet."
Learn more about Pairwise Comparison →
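
For the curious, the sketch below shows one simple way numbers like the win rates and the agreement figure above could be derived from raw pairwise picks. It is an illustration only, written in Python with made-up votes; Candor's actual implementation and exact agreement metric aren't specified here.

from collections import defaultdict

# Hypothetical raw pairwise votes: (left variant, right variant, participant's pick).
# "tie" means the participant couldn't choose. All data here is illustrative.
votes = [
    ("A", "B", "B"), ("B", "C", "B"), ("A", "C", "A"),
    ("A", "B", "B"), ("B", "C", "B"), ("A", "C", "C"),
    ("A", "B", "tie"), ("B", "C", "B"), ("A", "C", "A"),
]

wins = defaultdict(float)        # ties count as half a win for each side
appearances = defaultdict(int)   # comparisons each variant took part in

for left, right, pick in votes:
    appearances[left] += 1
    appearances[right] += 1
    if pick == "tie":
        wins[left] += 0.5
        wins[right] += 0.5
    else:
        wins[pick] += 1

# Win rate: share of a variant's comparisons that it won.
win_rates = {v: wins[v] / appearances[v] for v in appearances}

# One simple agreement score: for each pair, the fraction of participants
# who sided with that pair's majority choice, averaged across pairs.
picks_by_pair = defaultdict(list)
for left, right, pick in votes:
    picks_by_pair[frozenset((left, right))].append(pick)

agreement = sum(
    max(picks.count(p) for p in set(picks)) / len(picks)
    for picks in picks_by_pair.values()
) / len(picks_by_pair)

for variant, rate in sorted(win_rates.items(), key=lambda kv: -kv[1]):
    print(f"{variant}: {rate:.0%} win rate")
print(f"agreement: {agreement:.2f}")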

Concept validation with open-ended feedback

Show real participants a prototype, landing page, or concept description and collect open-ended reactions. Responses come back as free text, with optional follow-up from the AI moderator if you want richer signal. Use it to validate a direction before investing engineering time.

Participant view
Free text

“After looking at this page, what do you think this product does? What's clear and what's confusing?”

How you'd run it
$ claude "show 15 users this landing page and collect
their initial reactions: what's clear,
what's confusing?"
What you get back
Themes across 15 human participants:
Clear: core value prop (12/15)
Most participants accurately described what the product
does after 10 seconds on the page.
Confusing: pricing model (9/15)
"Per-session" pricing wasn't intuitive. Participants
expected per-seat or per-month.
Confusing: "AI-moderated" (7/15)
Participants weren't sure if AI means no human is ever
involved, or if it's AI-assisted with human oversight.
Positive: terminal-first positioning (10/15)
Developers and technical PMs found the CLI angle
refreshing. Non-technical PMs were neutral.
Learn more about Free Text →
How This Compares

What changes when you drop the overhead

⚡

Setup time

Traditional tools require project creation, screener surveys, panel selection, and scheduling windows. Candor: one command, participants recruited automatically via Prolific. Your study is live in minutes, not days.

🤖

Moderation

Traditional moderated research requires a live human moderator for every session. Candor's AI moderator runs sessions in parallel, 24/7, and never forgets to ask the follow-up question.

🎯

Time to insight

The traditional pipeline is sessions, transcription, coding, synthesis, report, presentation. Candor delivers transcripts with themes surfaced automatically. Your job starts at synthesis, not transcription.

Methodology

A note on research rigor

We know UX researchers care deeply about methodology, and so do we. Candor uses randomized presentation order to prevent position bias, counterbalanced pairwise comparisons, and attention checks to filter disengaged participants. Every study includes inter-rater agreement metrics so you can assess consensus at a glance. This isn't a survey tool with a nice UI; it's a research platform with proper methodology baked in.
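
To make "counterbalanced pairwise comparisons" and "randomized presentation order" concrete, here is a minimal sketch of how such a schedule can be built. The function and data are hypothetical, not Candor's internals: each unordered pair of variants appears in both left/right orders across participants, and each participant sees the pairs in an independently shuffled sequence.

import random
from itertools import combinations

def build_pairwise_schedule(variants, participants, seed=0):
    # Counterbalancing: alternate which variant of each pair appears on the
    # left from one participant to the next, so neither side is favored.
    # Randomized presentation order: shuffle the sequence of pairs per
    # participant to spread out order effects.
    rng = random.Random(seed)
    pairs = list(combinations(variants, 2))
    schedule = {}
    for i, participant in enumerate(participants):
        trials = [(a, b) if i % 2 == 0 else (b, a) for a, b in pairs]
        rng.shuffle(trials)
        schedule[participant] = trials
    return schedule

for participant, trials in build_pairwise_schedule(
        ["A", "B", "C"], ["p1", "p2", "p3", "p4"]).items():
    print(participant, trials)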

Your next research round starts here

Your next research round can start in 5 minutes.

$ curl -fsSL https://candor.sh | bash
Read the docs →