While AI has demonstrated impressive performance on various tasks, humans struggle to work effectively with AI systems through current explanations. In this study, we examine three factors that might improve human-AI team performance: the type of explanation used in the training phase, the type of explanation used in the prediction phase, and the subdomain of the test set. Unlike static explanations, interactive explanations allow humans to learn about the model through an active learning process. We also argue that improvements in human-AI team performance may depend on the subdomain of the test set. For instance, while an AI alone may outperform a human-AI team on an in-domain test set, we hypothesize that the human-AI team may outperform the AI alone on an out-of-domain test set.