Pairwise Comparison

Two items side by side. Participants pick a winner — or call it a tie. Candor handles pair generation, position counterbalancing, and statistical analysis so you don't have to.

[Interactive demo: "Which one is better?" Participants pick Option A, Option B, or a tie; the progress indicator reads "3 of 10 completed".]
Automated: Candor does this so you don't have to

🧮 Generates all pairs

N items produce N*(N-1)/2 unique pairs: 5 items yield 10 pairs, 10 items yield 45. The combinatorial explosion is handled automatically.

🔀 Counterbalances position

Each pair is randomly assigned AB or BA display order, with a 50/50 split across participants, so position bias cancels out in aggregate.

📦 Batches into assignments

Pairs are grouped into right-sized batches, and each participant completes one batch. Task order is shuffled to minimize fatigue effects.
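The batching step can be sketched in a few lines: shuffle the pairs, then slice them into fixed-size chunks. This is an illustrative sketch, not Candor's actual implementation; the function name and `batch_size` parameter are assumptions.

```python
import random

def batch_pairs(pairs, batch_size, seed=None):
    """Shuffle pairs, then split into batches of at most batch_size.
    Each participant would complete one batch."""
    rng = random.Random(seed)
    shuffled = list(pairs)
    rng.shuffle(shuffled)  # randomize task order to spread fatigue effects
    return [shuffled[i:i + batch_size] for i in range(0, len(shuffled), batch_size)]
```

For example, 10 pairs with a batch size of 4 produce three batches of sizes 4, 4, and 2, together covering every pair exactly once.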

📊 Computes statistics

Win rates, global rankings, and Krippendorff's alpha for inter-rater reliability. Per-pair disagreement analysis flags noisy comparisons.

01 Under the hood
1. Pair generation

N items produce N*(N-1)/2 unique pairs: 5 items yield 10 pairs, 10 items yield 45. Candor's engine handles the combinatorial explosion automatically.
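The pair-generation step reduces to enumerating all unordered 2-combinations, which Python's standard library does directly. A minimal sketch (the function name is illustrative, not Candor's API):

```python
from itertools import combinations

def generate_pairs(items):
    """All unordered unique pairs: n*(n-1)/2 pairs for n items."""
    return list(combinations(items, 2))

# 5 items -> 10 pairs; 10 items -> 45 pairs, matching n*(n-1)/2
pairs = generate_pairs(["item_a", "item_b", "item_c", "item_d", "item_e"])
```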

2. Display randomization

Each pair is randomly assigned AB or BA order with a 50/50 split. Randomizing display order on top of task order keeps position bias out of the signal.
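A minimal sketch of display randomization, assuming a simple per-pair coin flip for left/right placement (names are illustrative; a production system might instead enforce an exact 50/50 split across participants):

```python
import random

def randomize_display(pairs, seed=None):
    """Assign each pair an AB or BA display order with probability 1/2 each."""
    rng = random.Random(seed)
    displays = []
    for a, b in pairs:
        if rng.random() < 0.5:
            displays.append({"left": a, "right": b})   # AB order
        else:
            displays.append({"left": b, "right": a})   # BA order
    return displays
```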

3. Multiple participants per pair

Each pair is evaluated by multiple participants, not just one. Overlapping judgments let Candor measure inter-rater agreement and produce statistically reliable rankings.
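Overlapping judgments are what make agreement measurable. As a sketch of the metric named in this document, here is a compact Krippendorff's alpha for nominal data (each unit is the list of labels the raters gave one pair); this implementation is illustrative, not Candor's:

```python
from collections import Counter

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal labels.
    units: list of label lists, one per compared pair (only units with
    at least two ratings contribute)."""
    units = [u for u in units if len(u) >= 2]
    n = sum(len(u) for u in units)
    if n == 0:
        return None
    totals = Counter()
    observed = 0.0
    for u in units:
        m = len(u)
        counts = Counter(u)
        totals.update(counts)
        # ordered pairs of differing labels within the unit: m^2 - sum(n_c^2)
        observed += (m * m - sum(v * v for v in counts.values())) / (m - 1)
    disagreement_observed = observed / n
    disagreement_expected = (n * n - sum(v * v for v in totals.values())) / (n * (n - 1))
    if disagreement_expected == 0:
        return 1.0  # every rating identical: perfect agreement by convention
    return 1.0 - disagreement_observed / disagreement_expected
```

Perfect agreement yields alpha = 1.0; agreement at chance level yields 0.0.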

4. Response de-mapping

Responses are automatically mapped back to original item IDs regardless of display order, so you always see true, normalized item preferences.

{ original_id: "item_c" }
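De-mapping is just an inverse lookup through the display record. A sketch under the same assumed display shape as the randomization step above (function and field names are illustrative):

```python
def demap_response(display, choice):
    """Map a 'left'/'right'/'tie' choice back to the original item ID."""
    if choice == "tie":
        return {"original_id": None, "tie": True}
    return {"original_id": display[choice]}

# A BA display of the pair (item_a, item_c): picking "left" means item_c won.
display = {"left": "item_c", "right": "item_a"}
demap_response(display, "left")  # -> {"original_id": "item_c"}
```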
02 What you get back

Analysis: Global Ranking

Krippendorff's α = 0.89
Rank  Item Identifier  Win Rate  Confidence
#1    item_c           85%       High
#2    item_a           62%       Medium
#3    item_b           18%       Low

Disagreement analysis: high variance detected in item_b comparisons. Check rater logs for subjective bias.
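The win-rate column in a ranking like the one above can be sketched as wins divided by appearances, computed over de-mapped judgments (names are illustrative; ties are simply excluded in this sketch):

```python
from collections import defaultdict

def win_rates(judgments):
    """judgments: list of (winner_id, loser_id) tuples, ties excluded.
    Returns items sorted by win rate (wins / appearances), best first."""
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for winner, loser in judgments:
        wins[winner] += 1
        appearances[winner] += 1
        appearances[loser] += 1
    rates = {item: wins[item] / appearances[item] for item in appearances}
    return sorted(rates.items(), key=lambda kv: kv[1], reverse=True)

# item_c beats both others, item_a beats item_b:
win_rates([("item_c", "item_a"), ("item_c", "item_b"), ("item_a", "item_b")])
# -> [("item_c", 1.0), ("item_a", 0.5), ("item_b", 0.0)]
```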
03 Best for

Ready to launch your first pairwise study?