TY - GEN
T1 - OmniLayout: Room Layout Reconstruction From Indoor Spherical Panoramas
T2 - 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2021
AU - Rao, Shivansh
AU - Kumar, Vikas
AU - Kifer, Daniel
AU - Giles, C. Lee
AU - Mali, Ankur
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/6
Y1 - 2021/6
N2 - Given a single RGB panorama, the goal of 3D layout reconstruction is to estimate the room layout by predicting the corners, floor boundary, and ceiling boundary. A common approach has been to use standard convolutional networks to predict the corners and boundaries, followed by post-processing to generate the 3D layout. However, the space-varying distortions in panoramic images are not compatible with the translational equivariance property of standard convolutions, thus degrading performance. Instead, we propose to use spherical convolutions. The resulting network, which we call OmniLayout, performs convolutions directly on the sphere surface, sampling according to the inverse equirectangular projection, and is hence invariant to equirectangular distortions. Using a new evaluation metric, we show that our network reduces the error in the heavily distorted regions (near the poles) by ≈25% when compared to standard convolutional networks. Experimental results show that OmniLayout outperforms the state of the art by ≈4% on two different benchmark datasets (PanoContext and Stanford 2D-3D). Code is available at https://github.com/rshivansh/OmniLayout.
AB - Given a single RGB panorama, the goal of 3D layout reconstruction is to estimate the room layout by predicting the corners, floor boundary, and ceiling boundary. A common approach has been to use standard convolutional networks to predict the corners and boundaries, followed by post-processing to generate the 3D layout. However, the space-varying distortions in panoramic images are not compatible with the translational equivariance property of standard convolutions, thus degrading performance. Instead, we propose to use spherical convolutions. The resulting network, which we call OmniLayout, performs convolutions directly on the sphere surface, sampling according to the inverse equirectangular projection, and is hence invariant to equirectangular distortions. Using a new evaluation metric, we show that our network reduces the error in the heavily distorted regions (near the poles) by ≈25% when compared to standard convolutional networks. Experimental results show that OmniLayout outperforms the state of the art by ≈4% on two different benchmark datasets (PanoContext and Stanford 2D-3D). Code is available at https://github.com/rshivansh/OmniLayout.
UR - http://www.scopus.com/inward/record.url?scp=85116072873&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85116072873&partnerID=8YFLogxK
U2 - 10.1109/CVPRW53098.2021.00411
DO - 10.1109/CVPRW53098.2021.00411
M3 - Conference contribution
AN - SCOPUS:85116072873
T3 - IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
SP - 3701
EP - 3710
BT - Proceedings - 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2021
PB - IEEE Computer Society
Y2 - 19 June 2021 through 25 June 2021
ER -