Join us at the 25th ACM Multimedia conference in Mountain View, CA, USA.

ICT lab researcher Behnam Maneshgar will be presenting our work “Automatic Adjustment of Stereoscopic Content for Long-Range Projections in Outdoor Areas”. The work is co-authored with Leila Sujir, Sudhir Mudur, and Charalambos Poullis.

Abstract: Projecting stereoscopic content onto large, general outdoor surfaces such as building facades presents many challenges, particularly when using a red-cyan anaglyph stereo representation, if colour and depth perception are to remain as accurate as possible.
In this paper, we address the challenges relating to long-range projection mapping of stereoscopic content in outdoor areas and present a complete framework for the automatic adjustment of the content to compensate for any adverse projection-surface behaviour. We formulate the problem of modeling the projection surface as one of simultaneous recovery of shape and appearance. Our system is composed of two standard fixed cameras, a long-range fixed projector, and a roving video camera for multi-view capture. The overall computational framework comprises four modules: calibration of a long-range vision system using structure from motion, dense 3D reconstruction of the projection surface from the calibrated camera images, modeling of the light behaviour of the projection surface using the roving camera images, and iterative adjustment of the stereoscopic content. In addition to cleverly adapting some established computer vision techniques, the system design we present is distinct from previous work. The proposed framework has been tested in real-world applications with two non-trivial user experience studies, and the reported results show considerable improvements in the quality of 3D depth and colour perceived by human participants.
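For readers unfamiliar with the red-cyan anaglyph representation the paper builds on, the following sketch shows how a single anaglyph frame is conventionally composed from a stereo pair using OpenCV. It illustrates only the standard channel assignment, not the paper’s adjustment algorithm; the function name makeAnaglyph is ours.

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Conventional red-cyan anaglyph: the red channel comes from the
    // left-eye image; green and blue (cyan) come from the right-eye image.
    cv::Mat makeAnaglyph(const cv::Mat& left, const cv::Mat& right)
    {
        CV_Assert(left.size() == right.size() && left.type() == CV_8UC3);
        std::vector<cv::Mat> l, r;
        cv::split(left, l);   // OpenCV stores channels in B, G, R order
        cv::split(right, r);
        std::vector<cv::Mat> channels = { r[0], r[1], l[2] }; // B, G from right; R from left
        cv::Mat anaglyph;
        cv::merge(channels, anaglyph);
        return anaglyph;
    }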

ICT lab researcher Oliver Philbin-Briscoe has been awarded the NSERC Undergraduate Student Research Award.

The proposal is on “Interactive Technologies in Virtual Reality”. A summary of the project is shown below:

The impressive successes of low-cost VR technologies (Oculus Rift, Virtuix Omni, Microsoft HoloLens/IllumiRoom, movement detectors and sensors such as Fitbit, etc.) and the dramatic growth of many areas in computer vision (e.g. real-time monocular SLAM, 3D reconstruction, human activity recognition) have offered an early glimpse into the fundamental changes that Virtual Reality (VR) can bring to the way humans participate in social activities, workplaces, learning, entertainment and other daily experiences. Further, these major advances, fueled by the explosion of activity in the VR consumer market, are raising visions of tremendous economic potential.

However, present-day virtual reality systems require the use of specialized, and often cumbersome, equipment for experiencing Virtual Environments (VEs) (e.g. Head-Mounted Displays (HMDs), active- or passive-vision glasses with tracking markers, etc.). These shortcomings have a seriously detrimental effect on the user’s experience and, unless addressed well and fast, will prevent the realization of the societal and economic potential of VR technologies. Accordingly, the research proposed here aspires to address the above concerns and, if successful, will bring a transformational change in interaction technologies for Virtual Reality, pushing the state of the art towards natural interaction using hand and body gestures, eye-gaze, speech and other natural and intuitive human action modalities.

In cooperation with the iMareCulture project, we are organizing a Workshop on Serious Games and Cultural Heritage in conjunction with the 9th International Conference on Virtual Worlds and Games for Serious Applications 2017.

Scope

The overall objectives of the workshop are the discussion and sharing of knowledge and of scientific and technical results related to state-of-the-art solutions, technologies, and applications of serious games in cultural heritage, as well as the demonstration of such games. We believe the workshop will further enlarge the audience of the conference and thus help improve the quality of the papers submitted. The workshop is multi-disciplinary in nature, focusing on innovations in all technology and application aspects of serious games and cultural heritage. The target audience is everyone in the general computer graphics and computer games research community with an interest in cultural heritage.

Topics

The workshop seeks original high-quality research and application/system paper submissions in all aspects of Serious Games and Cultural Heritage. Suggested topics include, but are not limited to:
• Interactive digital storytelling for virtual cultural heritage applications
• Challenges and trends in Serious Games for Cultural Heritage
• User engagement and motivation
• Assessment of the learning impact
• Human-Computer Interaction
• Game mechanics suited for CH education
• Personalization, adaptivity and Artificial Intelligence
• Game architectures
• Psychology and Pedagogy
• Best practices in the development and adoption of SGs for CH
• Generation and representation of cultural content in Games
• Culturally relevant Non-Player Characters
• Applications and case studies

Co-chairs

Bart Simon (Concordia University)
Sudhir Mudur (Concordia University)
Charalambos Poullis (Concordia University)
Selma Rizvic (University of Sarajevo)
Dimitrios Skarlatos (Cyprus University of Technology)

 

Join us at the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, GRAPP 2017.

ICT Lab researcher Behnam Maneshgar will be presenting our paper “A Long-Range Vision System for Projection Mapping of Stereoscopic Content in Outdoor Areas”. The work is co-authored with Leila Sujir, Sudhir P. Mudur, and Charalambos Poullis.

Abstract: Spatial Augmented Reality, more commonly known as Projection Mapping (PM), is a projection technique which transforms a real-life object or scene into a surface for video projection (Raskar et al., 1998b). Although the technique has been pioneered and used by Disney since the seventies, it has gained significant popularity in recent years due to the availability of specialized software which simplifies the otherwise cumbersome calibration process (Raskar et al., 1998a). Currently, PM is widely used in advertising, marketing, cultural events, live performances, theatre, etc., as a way of enhancing an object/scene by superimposing visual content (Ridel et al., 2014). However, despite the wide availability of specialized software, several restrictions are still imposed on the type of objects/scenes to which PM can be applied. Most limitations are due to problems in handling objects/scenes with (a) complex reflectance properties and (b) low-intensity or distinct colors. In this work, we address these limitations and present solutions for mitigating these problems. We present a complete framework for calibration, geometry acquisition and reconstruction, estimation of reflectance properties, and finally color compensation, all within the context of outdoor long-range PM of stereoscopic content. Using the proposed technique, the observed projections are as close as possible (constrained by hardware limitations) to the actual content being projected, thereby ensuring the perception of depth and immersion when viewed with stereo glasses. We have performed extensive experiments and report the results.
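To make the colour-compensation step concrete: the general idea behind radiometric compensation is to pre-scale the content by the inverse of the estimated surface response, so that the observed projection approximates the intended image. The sketch below assumes a simple linear, per-channel albedo model and uses a hypothetical function name, compensate; the paper’s actual method estimates full reflectance properties and refines the content iteratively.

    #include <opencv2/opencv.hpp>

    // Illustrative per-pixel radiometric compensation under a linear,
    // per-channel model: observed ~ albedo * projected, so we project
    // target / albedo, clamped to the projector's displayable range.
    cv::Mat compensate(const cv::Mat& target, const cv::Mat& albedo)
    {
        cv::Mat t, a, out;
        target.convertTo(t, CV_32FC3, 1.0 / 255.0);
        albedo.convertTo(a, CV_32FC3, 1.0 / 255.0);
        cv::Mat safeAlbedo = cv::max(a, 0.05); // avoid dividing by near-zero albedo
        cv::divide(t, safeAlbedo, out);
        out = cv::min(out, 1.0);               // clamp to the projector gamut
        out.convertTo(out, CV_8UC3, 255.0);
        return out;
    }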

Details about important dates and submission instructions can be found on the workshop’s website: http://www.multimediauts.org/3DWorkshop_CVPR2016/

Recent research on large scale 3D data has been boosted by a number of rapid academic and industrial advances. 3D sensing devices, ranging from consumer depth cameras like Microsoft’s Kinect to professional laser-scanners, make 3D data capture readily available in real life. Moreover, structure from motion and dense multi-view stereo have matured to also deliver large scale point clouds. These point clouds typically need to be processed further into higher level geometric representations (for example, surface meshes), and semantically analysed (for example, object detection). These representations open up many exciting applications for mapping services, navigation systems and virtual/augmented reality devices.

This full-day workshop is inspired by these exciting advances and will cover large scale 3D data research topics, including acquisition, modelling and analysis. One key feature of the workshop is the introduction of two 3D data challenges. The first addresses semantic segmentation of large outdoor point clouds (see http://www.semantic3d.net/). The second aims to evaluate multiple-view stereo algorithms applied to large numbers of commercial satellite images (check the workshop website for the latest information).

Moreover, the full-day workshop is expected to demonstrate the convergence of state-of-the-art 3D sensor technology, 3D computer vision, and 3D applications such as augmented reality, through a forum of invited talks by leading researchers and of submitted research papers presented orally and as posters. Authors are invited to submit a full paper (two-column format, 8 pages) in accordance with the CVPR guidelines available on the conference website: http://cvpr2016.thecvf.com. Reviewing will be double-blind. Only electronic submissions will be accepted. Topics of interest include, but are not limited to:

  • Semantic segmentation of 3D outdoor point clouds in photogrammetry and mapping
  • Object description, detection and recognition on large scale point cloud data
  • Matching and registration of point cloud data across different sensor sources
  • 3D scene reconstruction through multi-sensory data fusion, alignment, and registration
  • Camera pose tracking on mobile devices
  • Appearance and illumination modelling and representation
  • 3D rendering and visualization of large scale models (e.g. for urban areas)
  • Augmented reality and merging of virtual and real worlds, augmented reality in street view, and web-based 3D map applications
  • Multiple-view stereo algorithms applied to large numbers of commercial satellite images.

 


Organizers (listed in alphabetical order of last names)

Mohammed Bennamoun, mohammed.bennamoun@uwa.edu.au
Myron Brown, myron.brown@jhuapl.edu
Lixin Fan, lixin.fan@nokia.com
Thomas Fevens, fevens@cse.concordia.ca
Hak Jae Kim, hakjae.kim@iarpa.gov
Florent Lafarge, florent.lafarge@inria.fr
Sudhir Mudur, mudur@cse.concordia.ca
Marc Pollefeys, marc.pollefeys@inf.ethz.ch
Tiberiu Popa, tiberiu.popa@concordia.ca
Fatih Porikli, fatih.porikli@anu.edu.au
Charalambos Poullis, charalambos@poullis.org
Konrad Schindler, schindler@geod.baug.ethz.ch
Qiang Wu, qiang.wu@uts.edu.au
Jian Zhang, jian.zhang@uts.edu.au
Qian-Yi Zhou, qianyi.zhou@gmail.com

During a four-day international research colloquium from February 24th to 27th, 2016, the Elastic 3D Space group of researchers, led by artists, designers and computer scientists, will explore the potential of stereoscopic technologies within artistic practices. The event brings together over 15 researchers, artists and industry experts from three continents to share their research explorations on elastic space, augmented and virtual reality, and future reality across multiple disciplines: the art departments of six universities, two cultural production and exhibition sites, and departments of Computer Science and Software Engineering, Architectural History, and Performance Studies and Design.

The February 24-27 event will start with a day of presentations, including a walking tour in the afternoon, followed by three days of workshop research exchange, with hands-on workshops, a session at the National Film Board stereoscopic studios, roundtable discussions, 3D drawing demos and virtual drawing prototypes.

This exchange will focus on the technical exploration of stereoscopic technologies and software while questioning their perceptual effects. It will deeply investigate the way our bodies relate to our built environment and interact within the illusory elastic 3D space.

There will be two keynote speeches, by Ken Perlin and Dorita Hannah.

Keynotes: Elastic 3D Space Keynotes

Program: Elastic 3D Space Colloquium


An API for the open-source scanning system “3DUNDERWORLD-SLS”, developed at the ICT Lab, is now part of OpenCV 3.1. The API and tutorials were developed by Roberta Ravanelli.

The module implements the time-multiplexing coding strategy based on Gray encoding, following the (stereo) approach described in our paper “3DUNDERWORLD-SLS: An Open-Source Structured Light Scanning System for Rapid Geometry Acquisition”.
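As a brief illustration of how the module is driven (C++, against the OpenCV 3.1 contrib API; see the linked tutorials for authoritative usage), generating the Gray-code pattern sequence looks roughly like this:

    #include <opencv2/core.hpp>
    #include <opencv2/structured_light.hpp>
    #include <vector>

    int main()
    {
        // Describe the projector resolution for which patterns are generated.
        cv::structured_light::GrayCodePattern::Params params;
        params.width  = 1024;
        params.height = 768;

        cv::Ptr<cv::structured_light::GrayCodePattern> pattern =
            cv::structured_light::GrayCodePattern::create(params);

        // One image per Gray-code bit plane (plus inverted versions), to be
        // projected in sequence and captured by the cameras.
        std::vector<cv::Mat> patternImages;
        pattern->generate(patternImages);

        // After capture, the pattern object decodes projector-camera
        // correspondences from which the geometry is triangulated.
        return 0;
    }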

More information about the API can be found here.

A video by OpenCV showcasing the GSOC projects can be found here.

The clustering module described in the IEEE PAMI 2013 journal paper “A Framework for Automatic Modeling from Point Cloud Data” has been made available on GitHub.

P2C clustering is a robust unsupervised clustering algorithm, based on a hierarchical statistical analysis of the geometric properties of the data, which was specifically designed for XYZ maps.
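For intuition only, the following toy sketch shows the general “test the statistics, then subdivide” pattern that hierarchical clustering of an XYZ map (an H x W image whose pixels are 3D points) can follow. It is a naive quadtree split on per-coordinate spread, not the P2C algorithm itself, and all names in it are ours.

    #include <opencv2/opencv.hpp>
    #include <algorithm>
    #include <vector>

    // Accept a region as one cluster if its points are tightly grouped.
    static bool isCoherent(const cv::Mat& xyz, const cv::Rect& r, double maxStdDev)
    {
        cv::Scalar mean, stddev;
        cv::meanStdDev(xyz(r), mean, stddev); // per-coordinate spread (X, Y, Z)
        return std::max({stddev[0], stddev[1], stddev[2]}) < maxStdDev;
    }

    // Recursively split a CV_32FC3 XYZ map into statistically coherent regions.
    static void cluster(const cv::Mat& xyz, const cv::Rect& r,
                        std::vector<cv::Rect>& clusters, double maxStdDev)
    {
        if (r.width < 8 || r.height < 8 || isCoherent(xyz, r, maxStdDev)) {
            clusters.push_back(r);
            return;
        }
        const int hw = r.width / 2, hh = r.height / 2; // recurse on quadrants
        cluster(xyz, cv::Rect(r.x,      r.y,      hw,           hh),            clusters, maxStdDev);
        cluster(xyz, cv::Rect(r.x + hw, r.y,      r.width - hw, hh),            clusters, maxStdDev);
        cluster(xyz, cv::Rect(r.x,      r.y + hh, hw,           r.height - hh), clusters, maxStdDev);
        cluster(xyz, cv::Rect(r.x + hw, r.y + hh, r.width - hw, r.height - hh), clusters, maxStdDev);
    }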

Join us at the 6th International Conference on Affective Computing and Intelligent Interaction – ACII2015.

ICT Lab researcher Chris Christou will be presenting our paper “Psychophysiological Responses to Virtual Crowds: Implications for Wearable Computing”. The work is co-authored with Kyriakos Herakleous, Aimilia Tzanavari, and Charalambos Poullis.

Abstract: People’s responses to crowds were investigated with a virtual reality simulation of a busy street. Both psychophysiological measures and a cognitive test were used to assess the influence of large crowds, or of individual agents who stood close to the participant, while participants performed a memory task. Results from most individuals revealed strong orienting responses to changes in the crowd, indicated by sharp increases in skin conductivity and reductions in peripheral blood volume amplitude. Furthermore, cognitive function appeared to be affected: results of the memory test were influenced by how closely virtual agents approached the participants. These findings are discussed with respect to wearable affective computing, which seeks robust, identifiable correlates of autonomic activity that can be used in everyday contexts.

Venue: Xi’an, China – Grand New World Hotel – Hua Shan, Floor 1
Date: 22 September 2015
Time: 10:30-12:10, Track O2: Affect and Psychophysiology

As of August 1st, 2015, the Immersive and Creative Technologies Lab is a member lab of the 3D Graphics Group.

The 3D Graphics Group is part of the Department of Computer Science and Software Engineering, Faculty of Engineering and Computer Science, Concordia University.