Deep Neural Networks (DNNs) have emerged as state-of-the-art solutions for complex machine intelligence problems. DNNs derive their predictive power by learning from millions of training examples in either a supervised or semi-supervised fashion. A critical aspect of the DNN system design procedure is therefore the collection of large annotated training datasets that exhibit high coverage of the problem space. While data synthesis and annotation techniques have been proposed to mitigate the burden of acquiring large datasets, these methods do not quantify the usefulness of each generated dataset or its subsequent impact on training effort. In this work we establish parallels between the autonomous design of DNNs for machine vision applications and the task of functionally verifying a hardware design. Analogous to automatic test vector generation, we propose a technique that progressively generates training datasets using virtual synthetic models. Furthermore, we propose an automated DNN design framework that, drawing on insights from functional verification, jointly maximizes training coverage in a stochastic manner while minimizing the number of training and validation cycles.