# API Reference

Every feature the CLI uses is available over HTTPS. Create an API key from the [API Keys](/dashboard/api-keys) page in the dashboard, then call the same endpoints the CLI does. All responses are JSON.

Base URL: `https://candor.sh/api`

## Authentication

Pass your API key as a bearer token in the `Authorization` header.

```bash
curl https://candor.sh/api/studies \
  -H "Authorization: Bearer ck_your_api_key_here"
```

Keys look like `ck_<48-hex>`. Treat them like passwords — anyone with a key can read and modify your studies. Revoke compromised keys from the dashboard.

## Studies

### The three study types

Every study has a **stimulus type** that decides what participants see and how they respond. You pick a stimulus by sending exactly one of `items`, `url`, or `topic` in the create payload — Candor infers the rest.

- **Item studies** — triggered by `items`. Participants evaluate a discrete set of things (images, audio clips, copy variants, model outputs). No voice — everything happens in a structured browser UI. *Use when you have alternatives to compare or want to label/rate a collection.*
- **URL studies** — triggered by `url`. Participants visit a live website or product and talk through their experience with an AI moderator. You can pass an interview guide; Candor generates one from your goal if you don't. *Use when you're testing a real product and want qualitative feedback on specific flows.*
- **Topic studies** — triggered by `topic`. Same moderated-interview format as URL studies, but without a product to test. Participants discuss a subject with the AI moderator. *Use for discovery research, concept testing, or any interview that isn't tied to a live UI.*
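The exactly-one-of rule is easy to enforce client-side before you send a payload. A minimal sketch — the `infer_stimulus` helper is our own illustration, not part of any official SDK:

```python
def infer_stimulus(payload: dict) -> str:
    """Return the stimulus type Candor would infer, or raise if the
    payload doesn't contain exactly one of items / url / topic."""
    present = [k for k in ("items", "url", "topic") if payload.get(k)]
    if len(present) != 1:
        raise ValueError(f"expected exactly one of items/url/topic, got {present}")
    return {"items": "item", "url": "url", "topic": "topic"}[present[0]]

print(infer_stimulus({"goal": "Test the pricing page", "url": "https://example.com/pricing"}))  # url
```

Sending two stimulus keys (or none) is a `400` from the API; checking locally just gives you a clearer error message sooner.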

### Create a study

`POST /api/studies`

`goal` is the only universally required field. Everything else depends on the stimulus type. Candor fills in sensible defaults (task, moderator scope, reward, batch size) based on the combination you pass, so a minimal request is usually enough to start.

### Parameters shared by all study types

- **`goal`** *(string, required)* — Plain-language description of what you want to learn. Used as the study name if `name` is not provided, and fed to the moderator / task generator to shape the participant experience.
- **`participants`** *(number, default: 5)* — For moderated studies (URL/topic): the number of sessions to run. For item studies on the direct platform: the number of independent shareable links. For item studies on a managed platform: the number of respondents per task batch.
- **`audience`** *(string)* — Natural-language audience description used when Candor is recruiting for you (e.g. `"US designers aged 25-40 who use Figma daily"`). Ignored for `recruitment: "direct"`.
- **`platform`** *(`"direct" | "managed"`, default: `"direct"`)* — Controls how participants are sourced. `direct` gives you a URL you share yourself (free, no recruitment fees). `managed` has Candor recruit from a vetted pool using your `audience` string.
- **`reward`** *(number, cents)* — Per-session (moderated) or per-assignment (items) payout. If omitted, Candor auto-calculates it from task type, media duration, and platform fees. Set it only if the auto-estimate feels wrong.
- **`rewardMultiplier`** *(number)* — Multiplier applied to the auto-calculated reward. Use for hard-to-recruit audiences where you want to pay above-market without picking an exact number.

### Item studies

Send a non-empty `items` array. Each item needs a `label` and optionally an `assetUrl` (for images, audio, video) plus `mimeType`. Candor probes audio and video durations at creation time to size the reward and batch correctly — if you pass `assetUrl`, make it publicly reachable.

```bash
curl https://candor.sh/api/studies \
  -H "Authorization: Bearer $CANDOR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "goal": "Which onboarding screenshot feels more trustworthy?",
    "items": [
      { "label": "Variant A", "assetUrl": "https://cdn.example.com/a.png", "mimeType": "image/png" },
      { "label": "Variant B", "assetUrl": "https://cdn.example.com/b.png", "mimeType": "image/png" },
      { "label": "Variant C", "assetUrl": "https://cdn.example.com/c.png", "mimeType": "image/png" }
    ],
    "participants": 10
  }'
```

The `task` field decides what participants do with each item. If you omit it, Candor picks one by reading your `goal` (and whether you passed `labels`).

### Task types for item studies

- **`task: "compare"`** *(alias: `rank` — 20 pairs per assignment)* — Shows two items side by side; the participant picks the winner (or a tie). No extra fields required. Candor generates all pairs for small item sets and samples pairs for sets larger than 100 items.
- **`task: "rate"`** *(alias: `score` — 20 items per assignment)* — Shows one item; the participant rates it 1–5. No extra fields required.
- **`task: "label"`** *(alias: `categorize` — 20 items per assignment)* — Shows one item; the participant picks from a fixed label set. `labels: string[]` is required — e.g. `["positive", "neutral", "negative"]`.
- **`task: "describe"`** *(aliases: `transcribe`, `respond`, `review` — 20 items per assignment)* — Shows one item; the participant writes an open-ended response. No extra fields. Pick the verb that best matches your instructions: `describe` for observations, `transcribe` for audio/video, `review` for evaluative writing.
- **`task: "scorecard"`** *(10 items per assignment)* — Shows one item; the participant evaluates it across multiple rubric dimensions. `criteria: { name, weight, levels[] }[]` is required. Each criterion produces a dimension score; the overall score is weighted.

**Batch size** (`batchSize`) is how many tasks go into a single worker's assignment — defaults above. Override it if you want shorter or longer sessions.

For very large item sets (more than a few hundred), post the first chunk in the create call and append the rest via `POST /api/studies/:id/items` in batches of up to 500.
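The chunking itself is a one-liner. A sketch of splitting an item list into create-call plus append-call batches — the slicing helper is ours, and the HTTP calls are left as comments:

```python
def chunked(seq: list, size: int = 500) -> list:
    """Split a list of items into batches no larger than `size`."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

items = [{"label": f"clip-{i}"} for i in range(1200)]
first, *rest = chunked(items)
# Send `first` in the POST /api/studies body, then for each batch in `rest`:
#   POST /api/studies/:id/items  with {"items": batch}
print(len(first), len(rest))  # 500 2
```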

### URL studies

Send a `url` to the page you want tested. Candor automatically generates a short interview guide from your goal — or pass your own in `interviewGuide` (plain text / Markdown; Candor converts it to the internal script format via Claude).

```bash
curl https://candor.sh/api/studies \
  -H "Authorization: Bearer $CANDOR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "goal": "Test whether new users understand the pricing page",
    "url": "https://example.com/pricing",
    "participants": 5,
    "durationMinutes": 8,
    "platform": "managed",
    "audience": "US adults 25-45 who subscribe to at least one SaaS product"
  }'
```

- **`url`** *(string, required)* — The page participants will interact with.
- **`displayMode`** *(`"iframe" | "tab"`, default: `"iframe"`)* — `iframe` embeds the page inside Candor's session UI alongside the moderator panel. `tab` opens it in a new browser tab — use this for sites that refuse to be iframed.
- **`durationMinutes`** *(number, default: 5)* — Target session length. Drives the auto-estimated reward and the generated interview script's section timing.
- **`interviewGuide`** *(string)* — Your own interview script as plain text. Sections, questions, and tasks are preserved verbatim. If omitted, Candor generates a 2–3 section script from your `goal`.
- **`moderator`** *(`"none" | "session" | "study"`, default: `"session"`)* — `session` runs one moderator per participant. `study` runs one moderator across all sessions with adaptive coverage — cheaper, but participants don't each get a full interview. `none` disables the moderator entirely (rarely useful for URL studies).
- **`moderatorOutput`** *(`"voice" | "text"`, default: `"voice"`)* — `voice`: the AI moderator speaks aloud via TTS and expects participants to reply with their voice (a full spoken interview). `text`: the AI monitors participant speech via STT but responds only with text prompts on screen. The default is `voice` for all moderated studies (URL, topic, and item follow-ups); pass `text` to opt out of TTS.
- **`inputModes`** *(`string[]`)* — Explicit list of input channels participants can use, e.g. `["voice", "text"]`. Default is voice only.

### Topic studies

Send a `topic` string. Everything else works the same as URL studies — same moderator options, same interview guide handling, same recruitment choices. The only difference is participants don't see a website during the session.

```bash
curl https://candor.sh/api/studies \
  -H "Authorization: Bearer $CANDOR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "goal": "Understand how indie devs decide whether to adopt a new LLM",
    "topic": "Choosing an LLM for a side project",
    "participants": 8,
    "platform": "managed",
    "audience": "Software engineers who ship side projects"
  }'
```

### Direct vs managed recruitment

Every study has a `platform` that controls how participants are sourced. It's a creation-time choice — you can't switch a study between modes later.

- **`platform: "direct"`** — Candor gives you one shareable URL per participant slot. You send the links to whoever you want — teammates, your own user list, a beta group. No recruitment fees. The `audience` field is ignored.
- **`platform: "managed"`** — Candor handles recruitment against a vetted participant pool using the `audience` string you provide. The total cost per participant is visible on the create response under `totalCostCents` — nothing charges until you approve the study.

### After creating a study

Every create returns `status: "draft"`. Nothing charges, nothing recruits, nothing is visible to participants until you explicitly approve and publish:

```bash
POST /api/studies/:id/approve   # draft -> ready_to_publish (or active for direct studies)
POST /api/studies/:id/publish   # ready_to_publish -> live recruitment
```

Direct studies skip the publish step — approve activates them and returns the shareable URLs inline.
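The approve transition therefore depends on the platform. A toy state function capturing the rule — our own sketch for reasoning about the lifecycle, not SDK code:

```python
def approve(status: str, platform: str) -> str:
    """Status after POST /api/studies/:id/approve, per the rules above."""
    if status != "draft":
        raise ValueError("approve is only valid on drafts")
    return "active" if platform == "direct" else "ready_to_publish"

def publish(status: str) -> str:
    """Status after POST /api/studies/:id/publish."""
    if status != "ready_to_publish":
        raise ValueError("publish requires ready_to_publish")
    return "active"

print(approve("draft", "direct"))              # active
print(publish(approve("draft", "managed")))    # active
```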

### List studies

`GET /api/studies`

Returns an array of studies. Append `?archived=true` to include archived studies.

### Get a study

`GET /api/studies/:id`

Append `?include=findings,participants` to include related entities in the response. Example response body:

```json
{
  "study": {
    "id": "study_a1b2c3",
    "name": "Onboarding walkthrough test",
    "goal": "Find friction in the signup flow",
    "stimulus": { "type": "url", "value": "https://example.com/signup" },
    "task": "use",
    "moderatorScope": "session",
    "moderatorOutput": "voice",
    "participants": 5,
    "status": "active",
    "platform": "managed",
    "estimatedCostCents": 6750,
    "createdAt": "2026-04-12T10:30:00.000Z",
    "updatedAt": "2026-04-12T10:45:00.000Z"
  },
  "findings": [],
  "participants": [],
  "activity": [
    { "event": "launched", "message": "Launching study...", "at": "..." },
    { "event": "published", "message": "Recruitment live", "at": "..." }
  ]
}
```

The `platform` field is `direct` for self-shared studies and `managed` when Candor handles recruitment.

### Add items to a study

`POST /api/studies/:id/items`

Append additional items to an existing item-based study. Useful for large studies where the initial `POST /api/studies` body would exceed request size limits — send the first batch on create, then stream the rest here in chunks of 500.

### Preview a task

```bash
POST   /api/studies/:id/preview  # create a preview assignment
DELETE /api/studies/:id/preview  # clean up preview assignments
```

The `POST` endpoint creates a disposable assignment and returns a `url` that opens the real participant UI in preview mode (no responses saved, preview rows filtered out of results). Works on drafts — you can preview before approving. For the browser-friendly shortcut, just visit `https://candor.sh/study/preview/:id` and the page handles the POST and redirect for you. See [Previewing a task](#guide-preview) in the guide for the full flow.

### Lifecycle transitions

```bash
POST   /api/studies/:id/approve   # move draft to ready-to-publish
POST   /api/studies/:id/publish   # go live with recruitment
POST   /api/studies/:id/pause     # temporarily stop recruiting
POST   /api/studies/:id/cancel    # permanently stop
POST   /api/studies/:id/archive   # archive
DELETE /api/studies/:id/delete    # permanent delete (billing records preserved)
```

The pause and archive endpoints also handle the inverse operation. Pause auto-toggles — calling it on a paused study resumes it — or pass `{ "action": "resume" }` in the body to force it. Archive takes `{ "action": "unarchive" }` to restore a study.

## Results & findings

The output shape depends on the study type. Item studies return computed results that depend on the task; moderated (URL/topic) studies return prioritized findings extracted from session transcripts.

```bash
GET /api/studies/:id/results       # computed results (item studies)
GET /api/studies/:id/findings      # synthesized findings (moderated)
GET /api/studies/:id/demographics  # participant demographics (managed recruitment only)
GET /api/studies/:id/coverage      # thematic coverage (moderated)
```

Append `?format=csv` to any of these to download as CSV. Worker IDs in results and demographics are exposed as `participantId` (JSON) or `external_participant_id` (CSV) so you can correlate responses across calls without depending on any recruitment provider's ID format.
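Because the same `participantId` appears across endpoints, correlating responses with demographics is a dictionary join. A sketch — the demographic field names here (`country`) are hypothetical, as the actual fields depend on the recruitment provider:

```python
def join_by_participant(responses: list, demographics: list) -> list:
    """Attach each response to its participant's demographic row (None if missing)."""
    by_id = {d["participantId"]: d for d in demographics}
    return [{**r, "demographics": by_id.get(r["participantId"])} for r in responses]

rows = join_by_participant(
    [{"participantId": "p_1", "text": "Felt cluttered"}],
    [{"participantId": "p_1", "country": "US"}],  # hypothetical demographic fields
)
print(rows[0]["demographics"]["country"])  # US
```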

### Item-study results — shape per task type

`GET /api/studies/:id/results` is valid for item studies only. The `results` object shape depends on the `task` you picked at creation time:

**Pairwise comparison (`task: "compare"`)**

```json
{
  "results": {
    "rankings": [
      { "rank": 1, "label": "Variant A", "winRate": 0.72, "totalWins": 18, "totalComparisons": 25 },
      { "rank": 2, "label": "Variant B", "winRate": 0.48, "totalWins": 12, "totalComparisons": 25 }
    ],
    "agreement": {
      "pairwiseAgreementRate": 0.84,
      "krippendorphAlpha": 0.68,
      "disagreedPairs": [ { "itemALabel": "Variant A", "itemBLabel": "Variant C" } ]
    },
    "totalResponses": 50,
    "totalPairs": 3
  },
  "status": "completed",
  "progress": { "totalTasks": 15, "completedTasks": 15, "totalResponses": 50,
                "uniqueParticipants": 10, "respondentsPerTask": 5 }
}
```
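The `rankings` block is what you would get by tallying pairwise wins yourself. A reference computation over raw votes — the `(winner, loser)` vote format is our assumption for illustration, not the API's internal representation (and it ignores ties):

```python
from collections import defaultdict

def rank_items(votes: list) -> list:
    """votes: list of (winner_label, loser_label) pairs.
    Returns items ranked by win rate, highest first."""
    wins, total = defaultdict(int), defaultdict(int)
    for winner, loser in votes:
        wins[winner] += 1
        total[winner] += 1
        total[loser] += 1
    ranked = sorted(total, key=lambda label: wins[label] / total[label], reverse=True)
    return [
        {"rank": i + 1, "label": label, "winRate": round(wins[label] / total[label], 2)}
        for i, label in enumerate(ranked)
    ]

print(rank_items([("A", "B"), ("A", "C"), ("B", "C")]))
```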

**Rating scale (`task: "rate"`)**

```json
{
  "results": {
    "items": [
      { "label": "Variant A", "meanRating": 4.2, "stdDev": 0.74, "median": 4, "totalRatings": 12 },
      { "label": "Variant B", "meanRating": 3.6, "stdDev": 0.92, "median": 4, "totalRatings": 12 }
    ]
  }
}
```

**Categorical label (`task: "label"`)**

```json
{
  "results": {
    "items": [
      {
        "label": "Screenshot 1",
        "assignedLabel": "positive",
        "confidence": 0.80,
        "totalVotes": 10,
        "labelDistribution": { "positive": 8, "neutral": 1, "negative": 1 }
      }
    ]
  }
}
```

**Free text (`task: "describe"`, `"review"`, etc.)**

```json
{
  "results": {
    "items": [
      {
        "label": "Variant A",
        "responses": [
          { "text": "Felt cluttered — I didn't know where to look first.", "participantId": "p_3f9e..." },
          { "text": "Clean and direct. Would click.", "participantId": "p_7a22..." }
        ]
      }
    ]
  }
}
```

**Scorecard (`task: "scorecard"`)**

```json
{
  "results": {
    "items": [
      {
        "label": "Model output A",
        "overallWeightedScore": 0.74,
        "totalResponses": 8,
        "dimensions": [
          { "name": "Accuracy",     "meanScore": 0.85, "weight": 5 },
          { "name": "Helpfulness",  "meanScore": 0.68, "weight": 3 }
        ]
      }
    ]
  }
}
```
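One plausible reading of "overall score is weighted" is a weight-normalized mean of the dimension means. Note that applying it to the sample dimensions above gives 0.786, not the sample's 0.74, so the server likely aggregates per-response rather than over dimension means — treat this as an approximation, not the documented formula:

```python
def overall_score(dimensions: list) -> float:
    """Weighted mean of dimension mean scores (weights need not sum to 1)."""
    total_weight = sum(d["weight"] for d in dimensions)
    return sum(d["meanScore"] * d["weight"] for d in dimensions) / total_weight

dims = [
    {"name": "Accuracy", "meanScore": 0.85, "weight": 5},
    {"name": "Helpfulness", "meanScore": 0.68, "weight": 3},
]
print(round(overall_score(dims), 3))  # 0.786
```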

### Moderated-study findings

`GET /api/studies/:id/findings` returns prioritized findings (P0–P3) that Candor extracts from session transcripts after sessions complete. This is the same P0–P3 scale the CLI's `findings` command uses — see the [Findings](#findings) concept section above for what each priority means.

```json
{
  "findings": [
    {
      "id": 42,
      "priority": "P0",
      "title": "Pricing page hides the free tier below the fold",
      "description": "4 out of 5 participants scrolled past the paid tiers...",
      "category": "information-architecture",
      "affectedFeature": "pricing",
      "timesMentioned": 4,
      "keyQuotes": ["I thought everything cost money — I was about to leave."],
      "suggestedAction": "Move the free-tier card above the comparison table.",
      "status": "open",
      "createdAt": "2026-04-12T11:05:00.000Z"
    }
  ]
}
```

### Coverage (moderated studies only)

`GET /api/studies/:id/coverage` returns the themes participants have explored so far and which expected themes are still missing — useful mid-study to decide whether to keep running sessions or stop early.

### Demographics (managed recruitment only)

`GET /api/studies/:id/demographics` works only for studies with managed recruitment. Returns one row per participant with demographic fields reported by the recruitment provider.

### Account balance

`GET /api/billing/balance`

Returns your organization's prepaid balance in cents and whether a payment method is on file. Useful for showing a top-up prompt before creating an expensive study.

```json
{
  "balanceCents": 12500,
  "hasPaymentMethod": true
}
```

## Webhooks

Instead of polling, subscribe to events and Candor will POST them to your endpoint as they happen. Create an endpoint from the [Webhooks](/dashboard/webhooks) page — you'll get back a signing secret that you should store securely.

### Payload format

```json
{
  "id": "evt_a1b2c3d4e5f6",
  "type": "study.completed",
  "createdAt": "2026-04-12T10:30:00.000Z",
  "data": {
    "studyId": "study_a1b2c3",
    "message": "All participants have submitted — study complete"
  }
}
```

Webhook bodies are intentionally minimal. Use `GET /api/studies/:id` to fetch the full state — this keeps payloads small and lets you process events in any order.

### Event types

Events are grouped into four namespaces. Subscribe to the ones you need.

- `study.*` — lifecycle transitions (launched, published, paused, completed, cancelled)
- `participant.*` — participant events (joined, session_started, submitted, no_show)
- `interaction.*` — clicks, media playback, responses. **High volume.**
- `transcript.*` — per-utterance events from the AI moderator and participant. **Very high volume.**

### Verifying the signature

Every request includes an `X-Candor-Signature` header with an HMAC-SHA256 of the raw body using your endpoint secret. Verify it before trusting the payload.

```javascript
// Node.js
import { createHmac, timingSafeEqual } from "crypto";

function verify(rawBody, headerSignature, secret) {
  const expected = "sha256=" + createHmac("sha256", secret)
    .update(rawBody)
    .digest("hex");
  const a = Buffer.from(headerSignature);
  const b = Buffer.from(expected);
  return a.length === b.length && timingSafeEqual(a, b);
}
```

```python
# Python
import hmac, hashlib

def verify(raw_body: bytes, header_signature: str, secret: str) -> bool:
    expected = "sha256=" + hmac.new(
        secret.encode(), raw_body, hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(header_signature, expected)
```
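You can sanity-check a verifier before wiring it to a live endpoint by signing a body yourself and confirming the round trip. A self-contained check — `whsec_demo` is a made-up secret, not a claim about Candor's secret format:

```python
import hashlib
import hmac

def sign(raw_body: bytes, secret: str) -> str:
    """Produce the sha256=<hex> signature the X-Candor-Signature header carries."""
    return "sha256=" + hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()

def verify(raw_body: bytes, header_signature: str, secret: str) -> bool:
    return hmac.compare_digest(header_signature, sign(raw_body, secret))

body = b'{"id":"evt_test","type":"study.completed"}'
assert verify(body, sign(body, "whsec_demo"), "whsec_demo")
assert not verify(body, sign(body, "whsec_demo"), "wrong-secret")
print("signature round-trip ok")
```

Always verify against the raw request bytes — re-serializing parsed JSON can reorder keys and change whitespace, which breaks the HMAC.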

### Delivery & retries

Your endpoint must respond with `2xx` within 10 seconds. Non-2xx responses and timeouts are retried with exponential backoff (up to 3 retries, 4 attempts total). Persistently failing endpoints are flagged as **Failing** in the dashboard but not automatically disabled — you'll see the state change on the Webhooks page and can pause or delete the endpoint from there.

Webhook delivery is at-least-once. Use `X-Candor-Delivery-Id` to deduplicate on your side if you're doing anything non-idempotent.

Other headers you'll see on each delivery:

- `X-Candor-Event` — event type (e.g. `study.completed`)
- `X-Candor-Delivery-Id` — unique delivery ID, useful for idempotency
- `X-Candor-Timestamp` — when the event was emitted
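A minimal in-memory deduplicator keyed on `X-Candor-Delivery-Id` — a sketch only (use a durable store in production, and `handle` is a placeholder for your own event handler):

```python
seen: set = set()  # delivery IDs already processed

def process_delivery(delivery_id: str, event: dict, handle) -> bool:
    """Run `handle` once per delivery ID; return False for duplicates."""
    if delivery_id in seen:
        return False
    seen.add(delivery_id)
    handle(event)
    return True

handled = []
process_delivery("dlv_1", {"type": "study.completed"}, handled.append)
process_delivery("dlv_1", {"type": "study.completed"}, handled.append)  # duplicate, skipped
print(len(handled))  # 1
```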

## Errors

All error responses use standard HTTP status codes and return a JSON body with an `error` field.

```json
{ "error": "Study not found" }
```

- **`400`** — Bad request — missing or invalid parameters
- **`401`** — Unauthorized — missing or invalid API key
- **`402`** — Payment required — insufficient account balance
- **`404`** — Not found — study, endpoint, or resource does not exist
- **`422`** — Unprocessable — validation failed (e.g. pre-flight check)
- **`500`** — Server error — try again or contact support
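In client code it's useful to split these into retryable and non-retryable failures. A sketch of one way to do that — which codes you treat as retryable is a judgment call, not something this API prescribes:

```python
import json

RETRYABLE = {500}  # our choice: only server errors are worth retrying

def check(status: int, body: str):
    """Parse a 2xx body, or raise; TimeoutError marks transient failures."""
    if 200 <= status < 300:
        return json.loads(body) if body else None
    message = json.loads(body).get("error", "unknown error")
    if status in RETRYABLE:
        raise TimeoutError(f"transient: {message}")  # safe to retry with backoff
    raise RuntimeError(f"{status}: {message}")       # fix the request instead

try:
    check(404, '{"error": "Study not found"}')
except RuntimeError as e:
    print(e)  # 404: Study not found
```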
