Card Sorting Category Optimization Calculator

Optimize the number of categories, cards per category, and participant count for card sorting UX research studies. Calculates agreement scores, optimal category ranges, and study reliability metrics.

Calculator Inputs

  • Number of cards: 20–100 per study recommended
  • Participants: minimum of 15 recommended for reliable results
  • Categories created: average number of groups participants created
  • Agreed pairs: card pairs placed together by a majority of participants

Formulas Used

Total Possible Card Pairs: C(n, 2) = n × (n − 1) / 2

Agreement Score (AS): AS = Agreed Pairs / Total Possible Pairs
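A minimal Python sketch of the two formulas above (function names are illustrative):

```python
from math import comb

def total_pairs(n_cards: int) -> int:
    """Total possible card pairs: C(n, 2) = n * (n - 1) / 2."""
    return comb(n_cards, 2)

def agreement_score(agreed_pairs: int, n_cards: int) -> float:
    """AS = agreed pairs / total possible pairs."""
    return agreed_pairs / total_pairs(n_cards)
```

For example, a 30-card study has C(30, 2) = 435 possible pairs, so 305 agreed pairs gives AS ≈ 0.70 (a strong result under the Spencer thresholds below).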

Optimal Category Range: Min = √(n/2), Max = √(n)  (square root rule)

Category Consistency Index (CCI): CCI = 1 − |actual_categories − optimal_midpoint| / optimal_midpoint
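The square root rule and CCI can be sketched together; taking the midpoint as the average of the range's min and max, and clamping CCI at zero, are assumptions not spelled out in the formulas above:

```python
from math import sqrt

def optimal_category_range(n_cards: int) -> tuple[float, float]:
    """Square root rule: min = sqrt(n/2), max = sqrt(n)."""
    return sqrt(n_cards / 2), sqrt(n_cards)

def category_consistency_index(actual_categories: int, n_cards: int) -> float:
    """CCI = 1 - |actual - midpoint| / midpoint, clamped to [0, 1]."""
    lo, hi = optimal_category_range(n_cards)
    midpoint = (lo + hi) / 2  # assumed definition of "optimal midpoint"
    return max(0.0, 1 - abs(actual_categories - midpoint) / midpoint)
```

For a 50-card sort the optimal range is 5 to about 7.1 categories, so a participant averaging 6 categories scores a CCI close to 1.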

Cognitive Load Score (CLS): Based on Miller's Law — ideal 5–9 items per category (7 ± 2)
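The exact CLS formula is not specified, so the following is only one plausible scoring: full score when the average cards-per-category falls in Miller's 5–9 band, decaying linearly outside it (the linear decay and the divisor of 7 are assumptions):

```python
def cognitive_load_score(n_cards: int, n_categories: int) -> float:
    """Illustrative CLS: 1.0 inside the 5-9 cards-per-category band,
    decreasing linearly with distance outside it (assumed scoring)."""
    per_category = n_cards / n_categories
    if 5 <= per_category <= 9:
        return 1.0
    # distance from the nearest edge of the 5-9 band, scaled by the ideal 7
    distance = 5 - per_category if per_category < 5 else per_category - 9
    return max(0.0, 1 - distance / 7)
```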

Study Reliability: R = 1 − (1 − p)ⁿ  where p = 0.20 (per-participant discovery probability) and n = number of participants

Category Overload Index: COI = actual_categories / optimal_max_categories

Assumptions & References

  • Agreement Score thresholds: ≥70% strong, 40–69% moderate, <40% weak (Spencer, 2009)
  • Optimal category count uses the square root rule: √(n/2) to √(n) for n total cards
  • Miller's Law: humans optimally process 7 ± 2 items per group (Miller, 1956, Psychological Review)
  • Study reliability formula adapted from Nielsen's usability testing model (Nielsen & Landauer, 1993)
  • Discovery probability p = 0.20 used for card sorting (more conservative than usability testing p = 0.31)
  • Minimum 15 participants recommended for open card sorts (Tullis & Wood, 2004)
  • Closed card sorts may require fewer participants (8–10) when validating known structures
  • Category Consistency Index measures deviation from the optimal midpoint category count
  • Results assume cards are representative of the full content domain being organized
