TY - JOUR
T1 - Identifying disaster related social media for rapid response
T2 - a visual-textual fused CNN architecture
AU - Huang, Xiao
AU - Li, Zhenlong
AU - Wang, Cuizhen
AU - Ning, Huan
N1 - Publisher Copyright:
© 2019 Informa UK Limited, trading as Taylor & Francis Group.
PY - 2020/9/1
Y1 - 2020/9/1
N2 - In recent years, social media platforms have played a critical role in mitigating a wide range of disasters. The highly up-to-date social responses and vast spatial coverage from millions of citizen sensors enable timely and comprehensive disaster investigation. However, automatic retrieval of on-topic social media posts, especially when both their visual and textual information is considered, remains a challenge. This paper presents an automatic approach to labeling on-topic social media posts using visual-textual fused features. Two convolutional neural networks (CNNs), an Inception-V3 CNN and a word-embedding CNN, are applied to extract visual and textual features, respectively, from social media posts. After the two CNNs are trained on our training sets, the extracted visual and textual features are concatenated into a fused feature that feeds the final classification process. The results suggest that both CNNs perform remarkably well in learning visual and textual features. The fused feature shows that adding the visual feature yields a more robust classification than using the textual feature alone. The on-topic posts, automatically classified by their texts and pictures, provide timely disaster documentation during an event. Coupled with rich spatial context when geotagged, social media could greatly aid a variety of disaster mitigation approaches.
AB - In recent years, social media platforms have played a critical role in mitigating a wide range of disasters. The highly up-to-date social responses and vast spatial coverage from millions of citizen sensors enable timely and comprehensive disaster investigation. However, automatic retrieval of on-topic social media posts, especially when both their visual and textual information is considered, remains a challenge. This paper presents an automatic approach to labeling on-topic social media posts using visual-textual fused features. Two convolutional neural networks (CNNs), an Inception-V3 CNN and a word-embedding CNN, are applied to extract visual and textual features, respectively, from social media posts. After the two CNNs are trained on our training sets, the extracted visual and textual features are concatenated into a fused feature that feeds the final classification process. The results suggest that both CNNs perform remarkably well in learning visual and textual features. The fused feature shows that adding the visual feature yields a more robust classification than using the textual feature alone. The on-topic posts, automatically classified by their texts and pictures, provide timely disaster documentation during an event. Coupled with rich spatial context when geotagged, social media could greatly aid a variety of disaster mitigation approaches.
UR - http://www.scopus.com/inward/record.url?scp=85068190560&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85068190560&partnerID=8YFLogxK
U2 - 10.1080/17538947.2019.1633425
DO - 10.1080/17538947.2019.1633425
M3 - Article
AN - SCOPUS:85068190560
SN - 1753-8947
VL - 13
SP - 1017
EP - 1039
JO - International Journal of Digital Earth
JF - International Journal of Digital Earth
IS - 9
ER -