Your Users

You have the people.
Candor gives you the study.

Share a link with your customers, patients, beta users, or team. They complete the study. You get structured results with real methodology built in. Free — no recruitment fees, no subscriptions, no credit card.

$ claude "create a study for my beta users — 5-minute voice interview about onboarding"
Study created. Share this link:
https://candor.sh/s/abc123
Send to your users. Results stream in real time.

Every evaluation on Candor is completed by a real person. Not an LLM. Not a synthetic label. Human judgment.

The Difference

Why this isn't a Google Form

A form can't follow up.

Google Forms gives you static answers. Candor's AI moderator asks follow-up questions based on what the participant says — probing on friction, confusion, and the moments that matter. You get the depth of a 1:1 interview without running one yourself.

A form can't prevent bias.

Candor randomizes presentation order, counterbalances pairwise comparisons, and inserts attention checks automatically. Your results are methodologically sound, not just a pile of opinions.

A form can't synthesize.

Candor surfaces themes across responses, calculates agreement metrics, and delivers structured results — not a spreadsheet of raw text you have to read through yourself.
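To make "agreement metrics" concrete: one simple measure (illustrative only, not necessarily the statistic Candor reports) is the fraction of rater pairs that picked the same option:

```python
from itertools import combinations

def pairwise_agreement(votes):
    """Fraction of rater pairs that chose the same option."""
    pairs = list(combinations(votes, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

# 3 of the 6 possible pairs agree -> 0.5
print(pairwise_agreement(["A", "A", "A", "B"]))
```

A value of 1.0 means everyone agreed; chance-corrected statistics such as Krippendorff's α refine the same idea.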

Use Cases

Share a link. Get real signal.

Get voice feedback from your customers

Share a study link with your customers or patients. They click it, and an AI moderator conducts a conversation, probing on their experience with your product or service. You write the interview guide — what topics to cover, what to dig into — and the AI handles the rest. No scheduling, no Zoom calls, no notetaking. They do it on their own time; you get transcripts with themes.

Participant view

MODERATOR: Tell me about your last session with your coach.

PARTICIPANT: It was good but I felt rushed at the end...

MODERATOR: What would have made that ending feel better?

“Think about your most recent experience — what worked and what didn't?”

How you'd run it
$ claude "create a voice interview for my patients — probe on coach interactions, what felt helpful vs. scripted, and whether they'd recommend us"
What you get back
Themes across 8 participants:
Coach opening feels scripted (6/8)
"The first few messages felt like a template.
It got better once they asked about my week."
Accountability is the core value (7/8)
"I don't need more information, I need someone
who notices when I slip."
Session length is right (5/8)
Most participants said 15-20 minutes feels
natural. Two wanted shorter check-ins more often.
Transcripts: study/coach-feedback/transcripts
Learn more about Voice Interview →

Compare two versions with your beta testers

You're deciding between two approaches — two designs, two flows, two copy variants. Share a study link with your beta testers. They see both options side by side, pick the one they prefer, and explain why. You get a ranked winner with agreement metrics instead of a Slack poll with emoji reactions.

Participant view: both versions shown side by side, with A / Tie / B as the choices.

“Which version feels easier to use? Pick one and tell us why.”

How you'd run it
$ claude "compare our two onboarding flows — share with beta testers, collect preference + reasoning"
What you get back
Preference results (12 beta testers):
Flow A wins 75%
Flow B wins 17%
Tie 8%
Agreement: strong (α = 0.78)
Top reason for A: "I could skip what I didn't
need and come back to it later."
Results written to flow_comparison.json
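Those shares are simple tallies over the raw votes. As a minimal sketch (assuming flow_comparison.json holds one record per tester with a "vote" field, which is a guess, not the documented schema), you could recompute them yourself:

```python
import json
from collections import Counter

# Hypothetical payload: the real flow_comparison.json schema is not
# documented here; assume one record per tester with a "vote" field.
payload = json.dumps([{"tester": i, "vote": v}
                      for i, v in enumerate(["A"] * 9 + ["B"] * 2 + ["tie"])])

votes = [record["vote"] for record in json.loads(payload)]
counts = Counter(votes)

for option in ("A", "B", "tie"):
    share = 100 * counts[option] / len(votes)
    print(f"{option:>3}  {share:.0f}%")
```

With 12 votes split 9/2/1, this prints 75%, 17%, and 8%, matching the summary above.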
Learn more about Pairwise Comparison →

Rate your product with your existing users

Send your users a quick study: rate specific aspects of your product on a 1-5 scale. Get per-dimension scores — not a single NPS number, but specific signal on trust, clarity, usefulness, and whatever dimensions matter to you. Know exactly where you're strong and where you're losing people.

Participant view: a 1-5 scale for each dimension.

“Rate your experience across these dimensions.”

How you'd run it
$ claude "have my users rate our product on trust, clarity, usefulness, and likelihood to recommend — 1-5 scale"
What you get back
Ratings across 20 users (1-5 scale):
trust 4.1 ±0.6
clarity 3.4 ±1.1 ← high variance
usefulness 4.4 ±0.4
likelihood to rec. 3.8 ±0.9
Clarity is the weak spot — 6 users rated it
2 or below. Common thread: "I wasn't sure
what the dashboard numbers actually meant."
Results written to product_ratings.json
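The per-dimension summary is just a mean and standard deviation per scale. A minimal sketch of how the flagging might work, using made-up ratings (the actual product_ratings.json schema isn't shown here):

```python
import statistics

# Made-up 1-5 ratings per dimension; illustrative only.
ratings = {
    "trust":      [4, 4, 5, 4, 3, 5, 4, 4],
    "clarity":    [2, 4, 3, 5, 2, 4, 3, 4],
    "usefulness": [5, 4, 4, 5, 4, 5, 4, 4],
}

for dimension, scores in ratings.items():
    mean = statistics.mean(scores)
    spread = statistics.stdev(scores)  # sample standard deviation
    flag = "  <- high variance" if spread > 1.0 else ""
    print(f"{dimension:<12} {mean:.1f} ±{spread:.1f}{flag}")
```

A wide spread on a dimension (here, clarity) means users disagree, which is often a better prompt for follow-up interviews than a low mean alone.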
Learn more about Rating Scale →

Run discovery interviews with your community

You have a Slack community, a Discord, a mailing list, a patient base. You want to understand their world — not test a specific feature, but learn what they need, what they use, where the gaps are. Share a study link. The AI moderator conducts a conversation, adapts across sessions, and by session 10 it's probing the themes that emerged in sessions 1-5.

Participant view

MODERATOR: Walk me through a typical morning — what's the first health-related thing you do?

PARTICIPANT: I check my glucose app, then honestly forget about it...

MODERATOR: What would make you come back to it during the day?

“Tell me about your daily routine — we're trying to understand where we can help.”

How you'd run it
$ claude "run discovery interviews with my community — understand daily routines, what tools they use, and where things fall through the cracks"
What you get back
Themes across 10 sessions (study-level moderation):
Morning check-in is habitual (8/10)
Most participants check a health app first thing
but don't engage again until evening.
Midday is the gap (7/10)
"Lunch and afternoon is when things fall apart."
No tool fills this window.
App fatigue is real (9/10)
"I have 4 health apps and use 1.5 of them."
Resistant to adding another unless it replaces
something.
Coverage: 12/15 guide topics explored.
Gaps: sleep routines, weekend patterns.
Suggest 3 more sessions.
Transcripts: study/discovery/transcripts
Learn more about Voice Interview →
How It Works

Three steps. No account needed for participants.

01

Create the study

Tell Claude what you want to learn. It sets up the right task type, writes the moderator guide, configures the methodology. You approve.

02

Share the link

Copy the study URL and send it to your people however you normally reach them — email, Slack, text, in-app notification. No login required for participants.

03

Results stream in

As participants complete the study, results appear in real time. Themes, scores, agreement metrics, transcripts — all from your terminal.

Self-recruited studies are always free.

No credit card. No limits. Full platform.

Need Candor to find participants for you? That's pay-per-participant. See pricing →

Run your first study

Describe what you want to learn. Share the link. Get results.

$ curl -fsSL https://candor.sh | bash
Or talk to us about your use case →