Categorization

Participants assign labels from your custom taxonomy to each item. You get label distributions, confidence scores, and inter-annotator agreement — without building any labeling infrastructure.

[Interactive demo: the participant view asks "What label best describes this item?" over a sample content preview (review_comment_42.txt), with progress shown as 7 of 20 completed.]
Automated

Candor does this so you don't have to

🏷️

Custom taxonomies

Define any set of labels — safe/borderline/violation, positive/neutral/negative, or anything else. Candor generates the participant UI automatically.

High-throughput batching

20 items per assignment. Categorization is fast once the taxonomy is learned, so participants stay in flow and produce consistent labels.

👥

Label distributions

Multiple participants label the same items. Candor shows the full distribution across labels — not just majority vote. See when annotators disagree.

🎯

Confidence scores

For each item, see the assigned label plus a confidence score based on annotator agreement. Low confidence flags items needing more data or clearer guidelines.

01 Under the hood
1

Define your label set

Provide any set of labels that match your classification needs. Safe/borderline/violation, positive/neutral/negative, or fully custom categories. Candor builds the participant interface automatically.

Safe
Borderline
Violation
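
For illustration, here is roughly what a label-set definition could look like as data. A minimal sketch; the field names and structure are hypothetical, not Candor's actual configuration schema.

# A hypothetical label-set definition for a moderation taxonomy.
# Field names are illustrative, not Candor's real configuration format.
taxonomy = {
    "name": "content-moderation",
    "labels": [
        {"id": "safe", "display": "Safe", "hint": "No policy concerns"},
        {"id": "borderline", "display": "Borderline", "hint": "Ambiguous; needs judgment"},
        {"id": "violation", "display": "Violation", "hint": "Clear policy violation"},
    ],
}
# The participant UI (one question per item, one option per label)
# would be generated from a definition like this.
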
2

20 items per batch

Items are grouped into batches of 20 for fast classification. Once participants learn the taxonomy, they stay in flow and produce consistent labels with minimal fatigue.
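
In code, the batching step amounts to simple fixed-size chunking. A minimal sketch (the function name is ours, not Candor's):

def batch_items(items, batch_size=20):
    """Group items into fixed-size batches; the final batch may be short."""
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

assignments = batch_items([f"review_{n}.txt" for n in range(1, 101)])
# 100 items -> 5 assignments of 20 items each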

3

Multiple annotators label the same items

Each item is labeled by multiple participants — not just one. Overlapping annotations let Candor measure inter-annotator agreement and produce reliable label distributions.
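
One simple way to schedule that overlap is to rotate items through the annotator pool so each item gets a fixed number of distinct annotators. A sketch under that assumption; a real scheduler would also weigh availability and load.

import itertools

def assign_overlapping(items, annotators, per_item=3):
    """Assign each item to `per_item` annotators by cycling through the pool.
    Consecutive draws from the cycle are distinct as long as the pool
    has at least `per_item` members."""
    pool = itertools.cycle(annotators)
    return {item: [next(pool) for _ in range(per_item)] for item in items}

plan = assign_overlapping(["item_1", "item_2"], ["ann_a", "ann_b", "ann_c", "ann_d"])
# {'item_1': ['ann_a', 'ann_b', 'ann_c'], 'item_2': ['ann_d', 'ann_a', 'ann_b']}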

4

Distributions and confidence computed

Label distributions and confidence scores are computed automatically from all annotations. You see the full picture — not just a majority vote — with disagreement clearly surfaced.

{ confidence: 0.85 }
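
One natural reading of that confidence number is the share of annotators who chose the winning label. A sketch under that assumption; Candor may weight or calibrate differently.

from collections import Counter

def summarize(annotations):
    """Label distribution plus a simple agreement-based confidence:
    the fraction of annotators who chose the modal label."""
    counts = Counter(annotations)
    total = len(annotations)
    label, top = counts.most_common(1)[0]
    return {
        "assigned_label": label,
        "confidence": top / total,
        "distribution": {lab: c / total for lab, c in counts.items()},
    }

summarize(["Safe", "Safe", "Safe", "Borderline"])
# {'assigned_label': 'Safe', 'confidence': 0.75,
#  'distribution': {'Safe': 0.75, 'Borderline': 0.25}}
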
02 What you get back

Analysis: Label Distribution

Inter-annotator α = 0.85
Item                 Assigned Label   Confidence
review_8812.json     Safe             98%
comment_9901.txt     Borderline       62%
thread_4412.md       Violation        89%
meta_data_002.csv    Safe             100%

Each row also shows the full per-label distribution as a bar.
Low confidence on comment_9901.txt: annotators split across Safe, Borderline, and Violation. Consider reviewing guidelines.
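
The α above is an agreement coefficient. Assuming it is Krippendorff's alpha over nominal labels (the report does not name the statistic), it can be computed from the raw annotations like this:

from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal labels.
    `units` is one label list per item (one label per annotator).
    Items with fewer than two annotations contribute no pairable values."""
    coincidence = Counter()
    for labels in units:
        m = len(labels)
        if m < 2:
            continue
        for a, b in permutations(labels, 2):  # ordered pairs within one item
            coincidence[(a, b)] += 1 / (m - 1)
    totals = Counter()
    for (a, _), w in coincidence.items():
        totals[a] += w
    n = sum(totals.values())
    observed = sum(w for (a, b), w in coincidence.items() if a != b)
    expected = sum(totals[a] * totals[b] for a in totals for b in totals if a != b) / (n - 1)
    if expected == 0:  # every annotation identical: perfect agreement
        return 1.0
    return 1.0 - observed / expected

krippendorff_alpha_nominal([
    ["Safe", "Safe", "Safe"],
    ["Borderline", "Borderline", "Safe"],
    ["Violation", "Violation", "Violation"],
])  # high alpha: annotators mostly agree
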
03 Best for

Ready to launch your first categorization study?