
Join us at the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, GRAPP 2017.

ICT Lab researcher Behnam Maneshgar will be presenting our paper “A Long-Range Vision System for Projection Mapping of Stereoscopic Content in Outdoor Areas”. The work is co-authored with Leila Sujir, Sudhir P. Mudur, and Charalambos Poullis.

Abstract: Spatial Augmented Reality, more commonly known as Projection Mapping (PM), is a projection technique that transforms a real-life object or scene into a surface for video projection (Raskar et al., 1998b). Although the technique was pioneered and used by Disney since the seventies, it has gained significant popularity only in recent years, owing to the availability of specialized software that simplifies the otherwise cumbersome calibration process (Raskar et al., 1998a). Currently, PM is widely used in advertising, marketing, cultural events, live performances, theater, etc., as a way of enhancing an object/scene by superimposing visual content (Ridel et al., 2014). However, despite the wide availability of specialized software, several restrictions are still imposed on the type of objects/scenes to which PM can be applied. Most limitations are due to problems in handling objects/scenes with (a) complex reflectance properties and (b) low-intensity or distinct colors. In this work, we address these limitations and present solutions for mitigating these problems. We present a complete framework for calibration, geometry acquisition and reconstruction, estimation of reflectance properties, and finally color compensation, all within the context of outdoor long-range PM of stereoscopic content. Using the proposed technique, the observed projections are as close as possible (constrained by hardware limitations) to the actual content being projected, thereby ensuring the perception of depth and immersion when viewed with stereo glasses. We have performed extensive experiments and report the results.
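The color-compensation stage mentioned in the abstract can be illustrated with a minimal per-pixel sketch. This is a hypothetical simplification, not the paper's actual method: it assumes a linear radiometric model, observed = albedo × projected + ambient, per color channel.

```python
def compensate(desired, albedo, ambient, eps=1e-6):
    """Solve the linear radiometric model observed = albedo * projected + ambient
    for the projector input that makes the observed color match the desired one.
    All values are normalized to [0, 1]; the result is clipped because hardware
    limits make exact compensation impossible on very dark or saturated surfaces."""
    projected = (desired - ambient) / max(albedo, eps)
    return min(1.0, max(0.0, projected))
```

The clipping step is exactly where the abstract's “as close as possible, constrained by hardware limitations” caveat comes from: a surface patch with very low albedo cannot be driven to an arbitrarily bright target color.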

Details about important dates and submission instructions can be found on the workshop’s website.

Recent research on large scale 3D data has been boosted by a number of rapid academic and industrial advances. 3D sensing devices, ranging from consumer depth cameras like Microsoft’s Kinect to professional laser-scanners, make 3D data capture readily available in real life. Moreover, structure from motion and dense multi-view stereo have matured to also deliver large scale point clouds. These point clouds typically need to be processed further into higher level geometric representations (for example, surface meshes), and semantically analysed (for example, object detection). These representations open up many exciting applications for mapping services, navigation systems and virtual/augmented reality devices.

This full-day workshop is inspired by these exciting advances and will cover large scale 3D data research topics, including acquisition, modelling and analysis. One key feature of our workshop is the introduction of two 3D data challenges. The first challenge addresses semantic segmentation of large outdoor point clouds. The second challenge aims to evaluate multiple-view stereo algorithms applied to large numbers of commercial satellite images (check the workshop website for the latest information on both).

Moreover, the full-day workshop is expected to demonstrate the convergence of state-of-the-art 3D sensor technology, 3D computer vision, and 3D applications such as augmented reality, through invited talks by leading researchers and submitted research papers presented orally and as posters. Authors are invited to submit a full paper (two-column format, 8 pages) in accordance with the CVPR guidelines available on the conference website. The review will be double-blind. Only electronic submissions will be accepted. Topics of interest include, but are not limited to:

  • Semantic segmentation of 3D outdoor point clouds in photogrammetry and mapping
  • Object description, detection and recognition on large scale point cloud data
  • Matching and registration of point cloud data across different sensor sources
  • 3D scene reconstruction through multi-sensory data fusion, alignment, and registration
  • Camera pose tracking on mobile devices
  • Appearance and illumination modelling and representation
  • 3D rendering and visualization of large scale models (e.g. for urban areas)
  • Augmented reality and merging of virtual and real worlds, augmented reality in street view, and web-based 3D map applications
  • Multiple-view stereo algorithms applied to large numbers of commercial satellite images.




Organizers (listed in alphabetical order of last names)

Mohammed Bennamoun
Myron Brown
Lixin Fan
Thomas Fevens
Hak Jae Kim
Florent Lafarge
Sudhir Mudur
Marc Pollefeys
Tiberiu Popa
Fatih Porikli
Charalambos Poullis
Konrad Schindler
Qiang Wu
Jian Zhang
Qian-Yi Zhou

During a four-day international research colloquium from February 24th to 27th, 2016, the Elastic 3D Space group of researchers, led by artists, designers, and computer scientists, will explore the potential of stereoscopic technologies within artistic practices. This event brings together over 15 researchers, artists, and industry experts to share their research explorations on elastic space, augmented and virtual reality, and future reality across multiple disciplines, drawing from six university art departments, two cultural production and exhibition sites, and departments of Computer Science and Software Engineering, Architectural History, and Performance Studies and Design across three continents.

The February 24-27 event will start with a day of presentations, including a walking tour in the afternoon, followed by three days of workshop research exchange, with hands-on workshops, a session at the National Film Board stereoscopic studios, roundtable discussions, 3D drawing demos, and virtual drawing prototypes.

This exchange will focus on the technical exploration of stereoscopic technologies and software while questioning their perceptual effects. It will deeply investigate the way our bodies relate to our built environment and interact within the illusory elastic 3D space.

There will be two keynote speeches, by Ken Perlin and Dorita Hannah.

Keynotes: Elastic 3D Space Keynotes

Program: Elastic 3D Space Colloquium

Newsletter: newsletter

An API for the open-source scanning system “3DUNDERWORLD-SLS”, developed at the ICT Lab, is now part of OpenCV 3.1. The API and tutorials were developed by Roberta Ravanelli.

The module implements the time-multiplexing coding strategy based on Gray encoding, following the (stereo) approach described in our paper “3DUNDERWORLD-SLS: An Open-Source Structured Light Scanning System for Rapid Geometry Acquisition”.
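The coding strategy itself is compact: each projector column is encoded across ⌈log₂(width)⌉ bit-plane images using its binary-reflected Gray code, so neighboring columns differ in exactly one bit, making decoding robust to single-bit errors at stripe boundaries. A minimal illustrative sketch follows (not the OpenCV module's actual code; the function names are hypothetical):

```python
def to_gray(n):
    """Binary-reflected Gray code of integer n."""
    return n ^ (n >> 1)

def from_gray(g):
    """Invert a Gray code back to the original integer."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def column_patterns(width):
    """Bit-plane patterns for time-multiplexed structured light:
    patterns[b][x] is the bit projected at column x in the b-th image
    (most significant bit first)."""
    bits = max(1, (width - 1).bit_length())
    codes = [to_gray(x) for x in range(width)]
    return [[(c >> (bits - 1 - b)) & 1 for c in codes] for b in range(bits)]
```

A camera pixel that observes the sequence of bits across the captured images recovers its projector column by decoding that code word with `from_gray`; combined with the stereo calibration, these correspondences drive the triangulation.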

More information about the API can be found here.

A video by OpenCV showcasing the GSOC projects can be found here.

The clustering module described in our IEEE PAMI 2013 journal paper “A Framework for Automatic Modeling from Point Cloud Data” has been made available on GitHub.

P2C clustering is a robust unsupervised clustering algorithm, specifically designed for XYZ maps, based on a hierarchical statistical analysis of the geometric properties of the data.

Join us at the 6th International Conference on Affective Computing and Intelligent Interaction – ACII2015.

ICT Lab researcher Chris Christou will be presenting our paper “Psychophysiological Responses to Virtual Crowds: Implications for Wearable Computing”. The work is co-authored by Kyriakos Herakleous, Aimilia Tzanavari, and Charalambos Poullis.

Abstract: People’s responses to crowds were investigated with a simulation of a busy street using virtual reality. Both psychophysiological measures and a cognitive test were used to assess the influence of large crowds, or of individual agents who stood close to the participant, while they performed a memory task. Results from most individuals revealed strong orienting responses to changes in the crowd, indicated by sharp increases in skin conductivity and reductions in peripheral blood volume amplitude. Furthermore, cognitive function appeared to be affected: results of the memory test appeared to be influenced by how closely virtual agents approached the participants. These findings are discussed with respect to wearable affective computing, which seeks robust, identifiable correlates of autonomic activity that can be used in everyday contexts.

Venue: Xi’an, China – Grand New World Hotel – Hua Shan, Floor 1
Date: 22 September 2015
Time: 10:30-12:10, Track O2: Affect and Psychophysiology

As of August 1st, 2015 the Immersive and Creative Technologies Lab is a member lab of the 3D Graphics Group.

The 3D Graphics Group is part of the Department of Computer Science and Software Engineering, Faculty of Engineering and Computer Science, Concordia University.


The Immersive & Creative Technologies Lab, part of the 3D Graphics Group at Concordia University, is recruiting two highly motivated PhD researchers. The positions are full-time and funded for 16 months starting January 2016, with a possible extension to 48 months.

Topic: The topic is flexible and can cover the following and related research areas:
– Computer Games and Virtual World Technologies
– Computer Vision and Graphics
– Immersive and Creative Technologies (Virtual/augmented reality)

Academic Requirements: We are looking for a highly motivated and creative individual who enjoys working in a collaborative research environment. Good communication skills and fluency in English are required. Applicants should have strong academic training, including an undergraduate or graduate degree in a relevant discipline (e.g., computer science, electrical engineering, mathematics, or statistics), and excellent mathematical skills. High proficiency in scientific coding (e.g., C/C++ with OpenCV, OpenGL, and/or MATLAB) is required. Experience with 3D computer vision, robotics, or a related area, as well as a background in machine learning techniques, is desirable and will be considered a plus.

How to Apply: Applications should be made via the Concordia University Graduate Admissions Portal.

Application deadline: October 1st, 2015

Contact: Charalambos Poullis

Join us at the 15th IEEE International Conference on Advanced Learning Technologies – ICALT2015.

ICT Lab researcher Kyriakos Herakleous will be presenting our paper “Effectiveness of an Immersive Virtual Environment (CAVE) for Teaching Pedestrian Crossing to Children with PDD-NOS”. The work is co-authored by Aimilia Tzanavari, Nefi Charalambous-Darden, Kyriakos Herakleous, and Charalambos Poullis.

Abstract: Children with Autism Spectrum Disorders (ASD) exhibit a range of developmental disabilities, with mild to severe effects on social interaction and communication. Children with PDD-NOS, autism, and co-existing conditions face enormous challenges in their lives, dealing with difficulties in sensory perception and with repetitive behaviors and interests. These challenges result in them being less independent, or not independent at all. Part of becoming independent involves being able to function in real-world settings that are not controlled. Pedestrian crossings fall under this category: as children (and later as adults) they have to learn to cross roads safely. In this paper, we report on a study we carried out with six children with PDD-NOS over a period of four days, using a VR CAVE virtual environment to teach them how to cross safely at a pedestrian crossing. Results indicated that most children were able to achieve the desired goal of learning the task, which was verified at the end of the 4-day period by having them cross a real pedestrian crossing (albeit with their parent/educator discreetly next to them for safety reasons).

Venue: Parkview Hotel, Hualien, Taiwan
Date: 07 July 2015
Time: 16:45-18:00, Track 13

Closing Seminar

On Friday, 20 March 2015, the seminar presenting the results of the research project ΙΠΕ/ΝΕΚΥΠ/0311/02 “VR CAVE” concluded with great success at the facilities of the Immersive and Creative Technologies (ICT) Lab research laboratories of the Department of Multimedia and Graphic Arts of the Cyprus University of Technology.
