CVPR2016 Workshop Large Scale 3D Data: Acquisition, Modelling and Analysis

Details about important dates and submission instructions can be found on the workshop’s website: https://www.multimediauts.org/3DWorkshop_CVPR2016/

Recent research on large scale 3D data has been boosted by rapid academic and industrial advances. 3D sensing devices, ranging from consumer depth cameras such as Microsoft's Kinect to professional laser scanners, make 3D data capture readily available in real life. Moreover, structure from motion and dense multi-view stereo have matured to deliver large scale point clouds as well. These point clouds typically need to be processed further into higher-level geometric representations (for example, surface meshes) and analysed semantically (for example, for object detection). Such representations open up many exciting applications in mapping services, navigation systems, and virtual/augmented reality devices.

This full-day workshop is inspired by these exciting advances and will cover large scale 3D data research topics, including acquisition, modelling and analysis. One key feature of our workshop is to introduce two 3D data challenges. The first challenge addresses semantic segmentation of large outdoor point clouds (see https://www.semantic3d.net/). The second challenge aims to evaluate multiple-view stereo algorithms applied to large numbers of commercial satellite images (check the workshop web site for the latest information).

Moreover, the workshop is expected to demonstrate the convergence of state-of-the-art 3D sensor technology, 3D computer vision, and 3D applications such as augmented reality, through a forum of invited talks by leading researchers and oral and poster presentations of submitted research papers. Authors are invited to submit a full paper (two-column format, 8 pages) in accordance with the CVPR guidelines available on the conference website: https://cvpr2016.thecvf.com. The review process will be double-blind. Only electronic submissions will be accepted. Topics of interest include, but are not limited to:

  • Semantic segmentation of 3D outdoor point clouds in photogrammetry and mapping
  • Object description, detection and recognition on large scale point cloud data
  • Matching and registration of point cloud data from different sensor sources
  • 3D scene reconstruction through multi-sensory data fusion, alignment, and registration
  • Camera pose tracking on mobile devices
  • Appearance and illumination modelling and representation
  • 3D rendering and visualization of large scale models (e.g. for urban areas)
  • Augmented reality and merging of virtual and real worlds, augmented reality in street view, and web-based 3D map applications
  • Multiple-view stereo algorithms applied to large numbers of commercial satellite images.

Contact

Organizers (listed in alphabetical order of last names)

Mohammed Bennamoun, mohammed.bennamoun@uwa.edu.au
Myron Brown, myron.brown@jhuapl.edu
Lixin Fan, lixin.fan@nokia.com
Thomas Fevens, fevens@cse.concordia.ca
Hak Jae Kim, hakjae.kim@iarpa.gov
Florent Lafarge, florent.lafarge@inria.fr
Sudhir Mudur, mudur@cse.concordia.ca
Marc Pollefeys, marc.pollefeys@inf.ethz.ch
Tiberiu Popa, tiberiu.popa@concordia.ca
Fatih Porikli, fatih.porikli@anu.edu.au
Charalambos Poullis, charalambos@poullis.org
Konrad Schindler, schindler@geod.baug.ethz.ch
Qiang Wu, qiang.wu@uts.edu.au
Jian Zhang, jian.zhang@uts.edu.au
Qian-Yi Zhou, qianyi.zhou@gmail.com