While techniques such as model pruning and precision reduction have improved the efficiency of Deep Neural Network (DNN) inference, DNN-based object detection still involves substantial computation, limiting DNN inference on compute-constrained platforms such as many IoT devices. Proposed methods for distinguishing easy from difficult classification cases, such as early exit, can reduce the average compute incurred, but are limited by how accurately they can predict the level of effort needed for classification from early-stage features alone. In this work, we propose an alternative approach to predicting necessary effort that leverages semantic context: any image will usually contain several objects that together provide context for the plausibility of the set of classifications as a whole, with greater effort applied only to outliers. Rather than relying on co-location of objects within training data, we derive our plausibility model from WordNet, a large pre-existing database of semantic relationships among objects, allowing classifier training to proceed in a traditional, relationship-agnostic fashion. We demonstrate the effectiveness of our approach, DoubtNet, with MobileNet as the initial low-power classifier, ResNet as the high-powered classifier, and an outlier-detection module that is independent of both networks. DoubtNet increases the mAP of our base classifier by as much as 22% and consistently outperforms prior, training-set-based context models.
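The core idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the hand-coded similarity table stands in for WordNet-derived semantic similarity, and the names `flag_outliers`, `similarity`, and the 0.2 threshold are hypothetical choices for demonstration. A detection whose mean similarity to its co-detections falls below the threshold is the "outlier" that would be handed to the high-powered classifier.

```python
# Sketch of DoubtNet-style semantic outlier detection.
# ASSUMPTIONS: SIM is a hypothetical stand-in for WordNet-derived
# similarity scores; the 0.2 threshold is illustrative only.

# Symmetric pairwise semantic similarity between labels (1.0 = identical).
SIM = {
    ("dog", "cat"): 0.8,
    ("dog", "sofa"): 0.3,
    ("cat", "sofa"): 0.3,
    ("dog", "airplane"): 0.05,
    ("cat", "airplane"): 0.05,
    ("sofa", "airplane"): 0.1,
}

def similarity(a, b):
    """Look up semantic similarity between two labels (order-insensitive)."""
    if a == b:
        return 1.0
    return SIM.get((a, b), SIM.get((b, a), 0.0))

def flag_outliers(labels, threshold=0.2):
    """Return labels whose mean similarity to the other detections in the
    same image falls below `threshold`; these would be re-classified by
    the heavier model in a DoubtNet-style cascade."""
    flagged = []
    for i, lab in enumerate(labels):
        others = [l for j, l in enumerate(labels) if j != i]
        if not others:
            continue
        mean_sim = sum(similarity(lab, o) for o in others) / len(others)
        if mean_sim < threshold:
            flagged.append(lab)
    return flagged

# An "airplane" among living-room objects is semantically implausible,
# so it alone is flagged for the high-powered classifier.
print(flag_outliers(["dog", "cat", "sofa", "airplane"]))  # → ['airplane']
```

In a full pipeline, the low-power classifier (e.g. MobileNet) labels every detection first; only the flagged detections are re-run through the high-powered classifier (e.g. ResNet), keeping average compute low.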