Bronchoscopy-guidance systems have been shown to improve the success rate of bronchoscopic procedures. A key technical cornerstone of bronchoscopy-guidance systems is the synchronization between the virtual world, derived from a patient's three-dimensional (3D) multidetector computed-tomography (MDCT) scan, and the real world, derived from the bronchoscope video during a live procedure. Two main approaches exist for synchronizing these worlds: electromagnetic navigation bronchoscopy (ENB) and image-based bronchoscopy. ENB systems require considerable extra hardware, and both approaches have drawbacks that hinder continuous robust guidance. In addition, both require an attending technician. We propose a technician-free strategy that enables real-time bronchoscopy guidance. The approach uses measurements of the bronchoscope's movement to predict its position in 3D virtual space. To this end, a bronchoscope model, which defines the device's shape in the airway tree up to a given point p, provides an insertion depth to p. In real time, our strategy compares the observed bronchoscope insertion depth and roll angle, measured by an optical sensor, to precalculated insertion depths along a predefined route in the virtual airway tree, yielding a prediction of the bronchoscope's location and orientation. To test the method, experiments with a PVC-pipe phantom and a human airway-tree phantom verified the bronchoscope models and the complete method, respectively. The method offers considerable potential for improving guidance robustness and simplicity over existing bronchoscopy-guidance systems.
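As a rough illustration of the depth-matching idea described above (a minimal sketch, not the authors' implementation), the observed insertion depth can be compared against precalculated insertion depths along the predefined route, and the corresponding 3D centerline point interpolated. All function and variable names here are hypothetical assumptions; the real system additionally uses the measured roll angle to resolve orientation.

```python
from bisect import bisect_left

def predict_location(observed_depth, route_depths, route_points):
    """Predict the bronchoscope tip's 3D position from insertion depth.

    route_depths: monotonically increasing precalculated insertion
                  depths (e.g., in mm) along the predefined route.
    route_points: corresponding 3D points on the virtual airway
                  centerline, one per depth.
    Returns a linearly interpolated 3D point for observed_depth.
    """
    i = bisect_left(route_depths, observed_depth)
    if i == 0:                      # before route start: clamp
        return route_points[0]
    if i == len(route_depths):      # past route end: clamp
        return route_points[-1]
    # Linear interpolation between the two bracketing route samples.
    d0, d1 = route_depths[i - 1], route_depths[i]
    t = (observed_depth - d0) / (d1 - d0)
    p0, p1 = route_points[i - 1], route_points[i]
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

# Hypothetical usage: a straight route sampled every 10 mm.
depths = [0.0, 10.0, 20.0, 30.0]
points = [(0, 0, 0), (0, 0, 10), (0, 0, 20), (0, 0, 30)]
print(predict_location(15.0, depths, points))  # midway between samples
```

In practice the precalculated depths would come from the bronchoscope model applied along the planned route in the MDCT-derived airway tree, and the observed depth from the optical sensor.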