Abstract
In apple fruit production, inexpensive enumeration of fruit on trees is an important goal with applications in many orchard planning tasks. Yield mapping, the combination of fruit enumeration with highly granular spatial information, is an important extension of this task with potential applications in the development of spatially targeted agronomic measures. Ideally, yield maps should have single-tree granularity, with a one-to-one correspondence between each apple detected and the tree to which that apple is attached. This study proposes that mapping each detected apple to the most spatially proximal tree trunk provides a strong biological basis for yield mapping with single-tree granularity. A deep learning object detector was developed to locate fruit and trunk positions in images. To produce consistent yield maps, a multiple-object-tracking method was combined with known information about tree spacing to estimate the position of each trunk across video sequences. The resulting position estimates were used to generate partitions between adjacent trees across each frame of the video sequences, making them suitable for integration with a video-based fruit counting system. On the test dataset, all trunks were successfully detected. For the subset of images where ground truth data was available, the root mean square error (RMSE) for trunk position was 15.6 pixels.
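The abstract describes combining tracked trunk detections with known tree spacing to estimate every trunk's position, including trunks that are occluded or missed in a given frame. The paper does not publish an implementation, so the following Python sketch is only one plausible reading of that idea: it snaps noisy per-frame trunk detections onto a uniformly spaced row model and fills gaps on the spacing grid. The function name, the assumption of uniform spacing expressed in pixels, and the median-anchor heuristic are all hypothetical.

```python
import numpy as np

def estimate_trunk_positions(detections, expected_spacing):
    """Snap noisy trunk detections onto a uniformly spaced row model.

    detections: 1-D array of detected trunk x-positions (pixels), any order
    expected_spacing: known tree spacing converted to pixels (assumption)
    """
    detections = np.sort(np.asarray(detections, dtype=float))
    # Anchor the row model using the median offset of the detections
    # from the nearest multiple of the expected spacing.
    anchor = np.median(detections % expected_spacing)
    # Index each detection on the spacing grid, then fill any gaps so
    # occluded or missed trunks still receive a position estimate.
    idx = np.round((detections - anchor) / expected_spacing).astype(int)
    full = np.arange(idx.min(), idx.max() + 1)
    return anchor + full * expected_spacing
```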
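The core proposal, mapping each detected apple to the most spatially proximal trunk, amounts to a nearest-neighbor assignment, and the partitions between adjacent trees fall naturally at the midpoints between trunk positions. The sketch below illustrates this in Python under the simplifying assumption that positions can be compared along a single image axis; the function names and example coordinates are hypothetical, not taken from the paper.

```python
import numpy as np

def assign_apples_to_trees(apple_x, trunk_x):
    """Map each apple detection to the nearest trunk (single-tree yield)."""
    trunk_x = np.sort(np.asarray(trunk_x, dtype=float))
    apple_x = np.asarray(apple_x, dtype=float)
    # Partition boundaries lie halfway between adjacent trunks.
    boundaries = (trunk_x[:-1] + trunk_x[1:]) / 2.0
    # For each apple, searchsorted returns the index of the tree
    # whose partition contains it.
    return np.searchsorted(boundaries, apple_x)

# Example: three trunks and five apples along the image x-axis.
trunks = [120.0, 480.0, 850.0]
apples = [90.0, 200.0, 430.0, 700.0, 900.0]
tree_ids = assign_apples_to_trees(apples, trunks)
counts = np.bincount(tree_ids, minlength=len(trunks))  # per-tree fruit counts
```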
| Original language | English (US) |
| --- | --- |
| State | Published - 2019 |
| Event | 2019 ASABE Annual International Meeting - Boston, United States |
| Duration | Jul 7 2019 → Jul 10 2019 |
Conference

| Conference | 2019 ASABE Annual International Meeting |
| --- | --- |
| Country/Territory | United States |
| City | Boston |
| Period | 7/7/19 → 7/10/19 |
All Science Journal Classification (ASJC) codes
- Agronomy and Crop Science
- Bioengineering