Choice-Based Conjoint Made Simple: From Design to Hierarchical Bayes Estimation

Understanding how customers make decisions is one of the hardest problems in marketing research. Consumers rarely evaluate product features in isolation — instead, they make trade-offs. For example, a buyer may accept a higher price if the brand is trusted, or settle for lower camera resolution in exchange for better battery life.
Choice-Based Conjoint (CBC) analysis is designed to reveal these trade-offs. It does so by simulating real-world purchase decisions, then applying statistical models to estimate how much value customers place on each product attribute. This makes CBC an essential tool for product design, pricing, and market strategy. Leading research partners such as Simbi Labs of India specialize in designing CBC studies that translate customer preferences into clear business insights.
What is Choice-Based Conjoint (CBC)?
At its core, Choice-Based Conjoint is a survey-based experimental method. Instead of asking customers to rate features individually, respondents face choice tasks where they must pick one product from a set of alternatives. Each product differs on attributes like price, brand, and features.
For example, imagine you’re designing a new electric car. A CBC study might include attributes like:
Price: $35,000 / $45,000 / $55,000
Range per Charge: 250 miles / 350 miles / 450 miles
Charging Time: 30 min / 60 min / 90 min
Brand: Tesla / Ford / Toyota
A typical choice task could look like this:
| Attribute | Car A | Car B | Car C | None |
|---|---|---|---|---|
| Price | $35,000 | $45,000 | $55,000 | – |
| Range per Charge | 250 miles | 350 miles | 450 miles | – |
| Charging Time | 90 min | 30 min | 60 min | – |
| Brand | Tesla | Ford | Toyota | – |
The respondent chooses the option they would most likely buy. Repeating this across multiple scenarios reveals their trade-offs.
Why Not Just Ask People Directly?
You might wonder: why not just ask customers which attribute is most important?
CBC forces respondents into realistic choices, making the results much closer to actual purchase behavior. For another approach that also emphasizes trade-offs—especially the “best-worst” method—check out MaxDiff.
Direct questioning often fails because:
i. People overstate preferences (e.g., “I’d always buy the eco-friendly option”) but behave differently at the point of purchase.
ii. Trade-offs are not intuitive — customers don’t always know how much more they’re willing to pay for an upgrade until they face a choice.
iii. Ratings/rankings don’t reflect competitive decision-making, while CBC simulates real market conditions.
By carefully designing such studies, teams like Simbi Labs of India ensure that businesses get realistic, actionable results rather than inflated or misleading responses.
Designing a CBC Study
Good design is crucial for reliable results. The process includes:
a. Selecting Attributes and Levels
i. Choose 5–7 attributes that truly drive decision-making.
ii. Each attribute should have 2–4 realistic levels.
iii. Avoid too many attributes, or respondents will get fatigued.
b. Constructing Choice Sets
i. Each choice task typically has 3–5 product alternatives plus a “none” option.
ii. Efficient experimental designs ensure that all levels are tested fairly without overwhelming respondents.
c. Number of Tasks
i. Respondents usually complete 8–15 choice tasks.
ii. Too few tasks → weak estimates. Too many → respondent fatigue.
Example:
For the electric car study, 300 respondents answering 12 choice tasks would generate 3,600 choice observations (with 3 alternatives per task, that's 10,800 product profiles evaluated) — enough data for robust modeling.
Collecting Data
Once the conjoint experiment is designed, the next step is administering the survey and collecting choice data. This stage is critical because the quality of responses directly determines the accuracy of your final results.
1. Online Surveys
Most CBC studies are run online for speed, scale, and cost-efficiency. Platforms can also include visuals to mimic real shopping experiences.
2. Randomization for Balance
Profiles are randomized across respondents to ensure every attribute and level is tested fairly, avoiding bias in the results.
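To make the randomization idea concrete, here is a minimal Python sketch that builds randomized choice tasks from the electric-car attributes above. Purely random sampling is shown only for simplicity; production studies typically use balanced or D-efficient designs generated by dedicated conjoint software.

```python
import random

# Attribute levels from the electric-car example in this article.
ATTRIBUTES = {
    "Price": ["$35,000", "$45,000", "$55,000"],
    "Range per Charge": ["250 miles", "350 miles", "450 miles"],
    "Charging Time": ["30 min", "60 min", "90 min"],
    "Brand": ["Tesla", "Ford", "Toyota"],
}

def random_profile(rng):
    """Draw one product profile by sampling a level for each attribute."""
    return {attr: rng.choice(levels) for attr, levels in ATTRIBUTES.items()}

def build_tasks(n_tasks=12, n_alternatives=3, seed=42):
    """Build one respondent's randomized choice tasks (a 'none' option
    would be appended to each task in a real survey)."""
    rng = random.Random(seed)
    return [[random_profile(rng) for _ in range(n_alternatives)]
            for _ in range(n_tasks)]

tasks = build_tasks()  # 12 tasks, 3 randomized alternatives each
```

Using a different seed per respondent varies which profiles each person sees, so levels get tested across the sample rather than within a single fixed questionnaire.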
3. Revealed vs. Stated Preferences
Stated Preferences → What people say is important (via ratings/rankings). Often inflated and inconsistent.
Revealed Preferences → What people show is important by making trade-offs in actual choices. More reliable and realistic.
4. Data Quality Checks
i. Keep tasks manageable (8–15 per respondent).
ii. Use attention checks to spot careless answers.
iii. Ensure a sufficient sample size (200–500+ depending on study goals).
Estimating Part-Worth Utilities
Once data is collected, the task is to estimate part-worth utilities — numerical values that reflect the preference strength for each attribute level.
For instance, a CBC model might show:
Price $35,000 = +1.8 utility
Price $55,000 = -2.2 utility
Range 450 miles = +2.5 utility
Range 250 miles = -1.7 utility
Utilities are relative values, not absolute. A positive value means higher preference, and the differences between levels matter more than the absolute numbers.
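The standard way to turn these relative utilities into choice predictions is the multinomial-logit rule, which converts utility differences into probabilities. A small sketch (the three total utilities are hypothetical):

```python
import math

def choice_probabilities(utilities):
    """Multinomial-logit rule: P(i) = exp(U_i) / sum_j exp(U_j)."""
    m = max(utilities)                        # subtract max for numerical stability
    exps = [math.exp(u - m) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

utils = [0.1, 0.6, 1.2]   # hypothetical total utilities for Cars A, B, C
probs = choice_probabilities(utils)

# Shifting every utility by the same constant leaves the probabilities
# unchanged — utilities are relative; only the differences matter.
shifted = choice_probabilities([u + 5.0 for u in utils])
```

The `shifted` check illustrates the point above: adding a constant to every alternative's utility changes nothing, because only differences between levels carry information.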
Estimation Methods
There are different ways to estimate utilities:
a. Aggregate Logit Model
i. Treats all respondents as identical.
ii. Easy to compute but ignores individual differences.
b. Latent Class Analysis
i. Segments respondents into groups with similar preferences.
ii. Useful for market segmentation.
c. Hierarchical Bayes (HB) Estimation (most widely used today)
i. Produces individual-level utilities while borrowing information from the overall sample.
ii. Provides rich detail for simulations and segmentation.
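As a sketch of what estimation involves, here is the aggregate logit model (option a above) fitted on synthetic data with plain gradient ascent. The feature vectors and the "true" taste weights are invented for illustration; real studies use dedicated estimation routines rather than hand-rolled optimizers.

```python
import math, random

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    t = sum(exps)
    return [e / t for e in exps]

def fit_aggregate_logit(tasks, choices, n_params, lr=0.2, iters=500):
    """Fit one shared utility vector for all respondents by maximizing the
    multinomial-logit log-likelihood with gradient ascent.
    tasks:   list of choice sets; each set is a list of feature vectors
    choices: index of the chosen alternative in each set"""
    beta = [0.0] * n_params
    for _ in range(iters):
        grad = [0.0] * n_params
        for feats, chosen in zip(tasks, choices):
            probs = softmax([sum(b * x for b, x in zip(beta, f)) for f in feats])
            for j, f in enumerate(feats):
                w = (1.0 if j == chosen else 0.0) - probs[j]
                for k in range(n_params):
                    grad[k] += w * f[k]
        beta = [b + lr * g / len(tasks) for b, g in zip(beta, grad)]
    return beta

# Synthetic data: two features (say, a range score and a price score),
# with a known hypothetical taste vector to recover.
rng = random.Random(0)
true_beta = [1.0, -0.8]                      # likes range, dislikes price
tasks, choices = [], []
for _ in range(400):
    feats = [[rng.random(), rng.random()] for _ in range(3)]
    probs = softmax([sum(b * x for b, x in zip(true_beta, f)) for f in feats])
    r, c, acc = rng.random(), 0, 0.0
    for j, p in enumerate(probs):
        acc += p
        if r <= acc:
            c = j
            break
    tasks.append(feats)
    choices.append(c)

beta_hat = fit_aggregate_logit(tasks, choices, n_params=2)
```

The fitted signs should match the simulated tastes (positive on the range feature, negative on price), which is exactly the kind of aggregate read-out this model gives — one taste vector for everyone, with no individual differences.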
Hierarchical Bayes Explained Simply
Hierarchical Bayes (HB) can sound intimidating, but the logic is simple:
Individual Preferences – Each person has their own utilities, but with limited data, we can’t estimate them perfectly.
Population Distribution – Preferences across the whole sample follow a general distribution.
Borrowing Strength – HB combines individual responses with population patterns, which prevents extreme or unreliable estimates for respondents who contribute only a few choices.
Think of it as blending:
i. What this person’s choices suggest, and
ii. What we know about people in general.
The result is stable, respondent-level utilities that reflect both individuality and common trends.
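The blending intuition can be shown with a one-line shrinkage formula: weight the individual's estimate by how much data they provided, and the population mean by the rest. (Real HB does this via MCMC over a full covariance structure; the numbers and the `prior_strength` parameter below are purely illustrative.)

```python
def shrunk_estimate(individual_mean, n_choices, population_mean, prior_strength=20):
    """Illustrative shrinkage: blend an individual's raw estimate with the
    population mean, weighting the individual by how much data they gave."""
    w = n_choices / (n_choices + prior_strength)
    return w * individual_mean + (1 - w) * population_mean

# A respondent's 10 choices suggest a strong price utility of -3.0, but the
# population average is -1.5 (hypothetical values).
estimate = shrunk_estimate(-3.0, n_choices=10, population_mean=-1.5)
```

With only 10 choices the estimate is pulled noticeably toward the population mean; give the same respondent 100 choices and the blend trusts their own data far more. That is "borrowing strength" in miniature.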
Practical Example: Hierarchical Bayes in Action
Imagine you’re running a Choice-Based Conjoint study for a new coffee brand.
1. Individual Preferences
i. Respondent A only answers 10 choice tasks.
ii. Their choices suggest they prefer organic coffee and dislike high prices, but with so little data, we can’t be fully confident.
2. Population Distribution
Across 300 respondents, we see a clear trend:
i. Most people value organic and fair-trade labels.
ii. Most dislike higher prices, but are willing to pay more for strong aroma.
3. Borrowing Strength
Hierarchical Bayes combines:
i. What Respondent A’s 10 choices tell us.
ii. What we know from the overall population trend.
So, instead of producing extreme or noisy estimates for Respondent A (e.g., showing they only care about price and ignore all other features), HB stabilizes their results by blending them with the broader patterns.
Result
i. Respondent A’s utilities reflect their personal tendency (price-sensitive, organic lover) while staying consistent with population patterns (aroma matters somewhat to everyone).
ii. This makes individual-level predictions more reliable, even for respondents with limited data.
From Utilities to Business Insights
Once you have utilities, you can derive powerful insights:
a. Attribute Importance
Calculate the share of decision-making weight each attribute carries. For example, price may account for 40% of choices, while brand accounts for 25%.
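A common way to compute this is range-based importance: each attribute's share is its utility range (best level minus worst level) divided by the sum of all ranges. A sketch with hypothetical part-worths echoing the article's example values:

```python
def attribute_importance(part_worths):
    """Importance of an attribute = its utility range divided by the sum
    of all attributes' ranges, expressed as a percentage."""
    ranges = {attr: max(u) - min(u) for attr, u in part_worths.items()}
    total = sum(ranges.values())
    return {attr: 100 * r / total for attr, r in ranges.items()}

# Hypothetical part-worth utilities per attribute level.
part_worths = {
    "Price": [1.8, 0.4, -2.2],
    "Range": [-1.7, -0.8, 2.5],
    "Charging Time": [0.6, 0.1, -0.7],
    "Brand": [0.9, -0.2, -0.7],
}
importance = attribute_importance(part_worths)  # percentages summing to 100
```

The importances always sum to 100%, which makes them easy to read as a share of decision-making weight.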
b. Market Simulations
Use a simulator to predict how consumers would respond to new products or pricing.
Example: Estimate market share if Tesla cuts price to $40,000.
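A simple simulator uses the "share of preference" rule: compute each respondent's logit choice probabilities over the competing products and average them across the sample. The respondent utilities below are hypothetical stand-ins for HB output:

```python
import math

def share_of_preference(respondent_utils):
    """Average each respondent's logit choice probabilities over the sample.
    respondent_utils: one list of [U_product1, U_product2, ...] per respondent."""
    n_products = len(respondent_utils[0])
    shares = [0.0] * n_products
    for utils in respondent_utils:
        m = max(utils)
        exps = [math.exp(u - m) for u in utils]
        t = sum(exps)
        for j, e in enumerate(exps):
            shares[j] += e / t
    return [s / len(respondent_utils) for s in shares]

# Hypothetical total utilities for three competing cars, per respondent.
sample = [
    [2.1, 0.5, -0.3],   # this respondent strongly prefers product 1
    [0.2, 1.4, 0.9],
    [-0.5, 0.1, 1.8],
]
shares = share_of_preference(sample)  # predicted market shares, summing to 1
```

To answer a what-if question (say, a price cut on one product), recompute each respondent's utility for the changed product from their part-worths and rerun the same function.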
c. Segmentation
Group respondents by preference patterns.
Example: “Performance Seekers” prioritize range and charging speed, while “Value Buyers” care more about price.
Limitations to Keep in Mind
i. Results depend on realistic attribute levels. Unrealistic options (like $10,000 for a Tesla) will distort utilities.
ii. Respondents make hypothetical choices, not real purchases, so results are approximations.
iii. Too many attributes or tasks can cause survey fatigue and unreliable data.
Conclusion
Choice-Based Conjoint (CBC) provides a powerful way to uncover how customers truly make decisions. By forcing trade-offs, CBC goes beyond what people say is important and reveals what actually drives their choices. With careful study design, thoughtful data collection, and modern estimation techniques like Hierarchical Bayes, businesses can gain deep insights into attribute importance, customer segments, and likely market outcomes. The result is more than just numbers — it’s a decision-making toolkit that helps companies design better products, set smarter prices, and compete more effectively. In a world where consumer preferences are complex and markets move fast, CBC transforms hidden trade-offs into actionable strategies for growth.
Partnering with experts such as Simbi Labs of India ensures that these insights are not just statistical outputs but practical roadmaps for business success.
For an in-depth understanding, please refer to our book, “Academic Research Fundamentals: Research Writing and Data Analysis”. It is available as an eBook here, or you may purchase the hard copy here.