Conjoint analysis is a survey methodology spreading rapidly across the social sciences and marketing because of its capacity to disentangle many causal effects in a single survey experiment. Unfortunately, conjoint designs are prone to substantial measurement error, which we reveal via intra-coder reliability evaluations. Although measurement error in (binary outcome) conjoint questions can in theory exaggerate, attenuate, or flip the sign of any estimated causal effect, we reanalyze many published applications and find a nearly universal empirical pattern: measurement error in this context attenuates causal effects, meaning that the true effect is larger than it appears. We show how to estimate and correct for this attenuation bias, which is endemic throughout the literature. We also offer easy-to-use, open-source software that implements all methods described herein.
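The attenuation mechanism can be illustrated with a minimal simulation. This is a sketch of the general logic, not the estimator developed in the paper: it assumes symmetric, nondifferential misclassification of a binary outcome at a hypothetical rate `flip`, under which the observed difference in means shrinks by a factor of (1 − 2·flip) and can be corrected by dividing by that factor.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200_000
true_effect = 0.10   # true causal effect on a binary outcome (hypothetical)
flip = 0.15          # symmetric misclassification rate (hypothetical)

# Simulate a binary conjoint-style outcome under random treatment assignment.
treat = rng.integers(0, 2, n)
p = 0.45 + true_effect * treat
y_true = rng.binomial(1, p)

# Nondifferential measurement error: each recorded response flips
# with probability `flip`, independent of treatment.
flips = rng.random(n) < flip
y_obs = np.where(flips, 1 - y_true, y_true)

# The naive difference in means is attenuated toward zero:
# E[y_obs] = flip + p * (1 - 2*flip), so the observed effect
# equals true_effect * (1 - 2*flip).
naive = y_obs[treat == 1].mean() - y_obs[treat == 0].mean()

# Misclassification-corrected estimate recovers the true effect.
corrected = naive / (1 - 2 * flip)

print(round(naive, 3), round(corrected, 3))
```

In practice the misclassification rate is unknown and must itself be estimated, for example from repeated (intra-coder reliability) measurements of the same respondents, which is the role such evaluations play in the design described above.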