TY - JOUR
T1 - Green fruit segmentation and orientation estimation for robotic green fruit thinning of apples
AU - Hussain, Magni
AU - He, Long
AU - Schupp, James
AU - Lyons, David
AU - Heinemann, Paul
N1 - Publisher Copyright:
© 2023
PY - 2023/4
Y1 - 2023/4
N2 - Apple is a highly valued specialty crop in the U.S. Green fruit thinning, the removal of excess fruitlets in early summer, is an important operation in apple production. The task ensures that the fruits remaining at harvest grow to a good size and quality while reducing the risk of biennial bearing. Current thinning methods include hand, chemical, and mechanical thinning. However, hand thinning generally requires a large labor force, chemical thinning is non-selective and dependent on timing and weather during application, and mechanical thinning is also non-selective and destructive. A robotic green fruit thinning system could potentially avoid the drawbacks of these methods. A vision system is an essential component of such a robot, responsible for green fruit detection and segmentation, decision-making on which fruit to remove, and environment reconstruction for path planning. This study took the first step towards developing a vision system for robotic green fruit thinning. First, green fruit and stem instance segmentation was performed using Mask R-CNN. Then, green fruit and stem orientation was estimated using Principal Component Analysis (PCA). Average precision scores for green fruit and stem segmentation across all mask sizes were 83.4% and 38.9%, respectively, increasing to 91.3% and 67.7% when only fruits and stems with mask sizes greater than 32² pixels were considered. With orientation correction, 89.3% and 75.5% of green fruit orientation estimates were within 30° of the actual orientations for ground-truth and segmentation-generated masks, respectively; these figures rose to 97.4% and 84.0% when only unoccluded masks were considered. Orientation correction yielded considerable improvements in all cases of green fruit orientation estimation, with the greatest improvement on unoccluded ground-truth masks, where the share of estimates within 30° of the ground-truth orientations increased by 23.9%. Stem orientation estimation achieved very high accuracy, with corresponding scores of 99.8% and 99.7%. These outcomes provide guideline information for developing a robust machine vision system for robotic green fruit thinning.
UR - http://www.scopus.com/inward/record.url?scp=85149359426&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85149359426&partnerID=8YFLogxK
U2 - 10.1016/j.compag.2023.107734
DO - 10.1016/j.compag.2023.107734
M3 - Article
AN - SCOPUS:85149359426
SN - 0168-1699
VL - 207
JO - Computers and Electronics in Agriculture
JF - Computers and Electronics in Agriculture
M1 - 107734
ER -
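Companion note (not part of the bibliographic record): the abstract describes estimating fruit and stem orientation by applying PCA to segmentation masks. The following is a minimal Python/NumPy sketch of how a PCA-based in-plane orientation estimate can be computed from a binary instance mask. The function name mask_orientation_deg and all implementation details are illustrative assumptions, not the authors' code, and the abstract's orientation-correction step is not reproduced here.

# Minimal sketch (assumed, not the authors' implementation): in-plane orientation
# of a fruit or stem from a binary segmentation mask via PCA.
import numpy as np

def mask_orientation_deg(mask: np.ndarray) -> float:
    """Return the angle (degrees, in [0, 180)) of the mask's major axis.

    `mask` is a 2-D boolean/0-1 array such as a Mask R-CNN instance mask.
    """
    ys, xs = np.nonzero(mask)                       # pixel coordinates inside the mask
    pts = np.stack([xs, ys], axis=0).astype(float)  # 2 x N point cloud
    pts -= pts.mean(axis=1, keepdims=True)          # center the points
    cov = pts @ pts.T / pts.shape[1]                # 2 x 2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    major = eigvecs[:, np.argmax(eigvals)]          # first principal component
    angle = np.degrees(np.arctan2(major[1], major[0]))
    return angle % 180.0                            # orientation is axial, so fold to [0, 180)

if __name__ == "__main__":
    # Synthetic elongated blob tilted ~45 degrees, standing in for a stem mask.
    yy, xx = np.mgrid[0:100, 0:100]
    blob = ((xx - yy) ** 2 / 400 + (xx + yy - 100) ** 2 / 4000) < 1
    print(round(mask_orientation_deg(blob), 1))     # prints approximately 45.0

The eigenvector associated with the largest eigenvalue of the pixel covariance matrix gives the major axis of the mask; because orientation is axial rather than directional, the angle is folded into [0, 180).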