Object classification is one of the holy grails of computer vision and, as such, a very large number of algorithms have already been proposed for it. In recent years there has been considerable progress in this area, primarily due to the increased efficiency and accessibility of deep learning techniques. In fact, for single-label object classification [i.e. only one object is present in the image], state-of-the-art techniques employ deep neural networks and report close to human-level performance. However, there are specialized applications for which single-label object-level classification does not suffice; for example, cases where the image contains multiple intertwined objects with different labels.
In this paper, we address the complex problem of multi-label pixelwise classification.

We present our distinct solution based on a convolutional neural network (CNN) for performing multi-label pixelwise classification and its application to large-scale urban reconstruction. A supervised learning approach is followed for training a 13-layer CNN using both LiDAR and satellite images. An empirical study was conducted to determine the hyperparameters which result in the optimal performance of the CNN. Scale invariance is introduced by training the network on five rescaled versions of the input and labeled data in addition to the original, which yields six pixelwise classifications, one per scale. An SVM is then trained to map the six pixelwise classifications into a single label per pixel. Lastly, we refine boundary pixel labels using graph cuts for maximum a posteriori (MAP) estimation with Markov Random Field (MRF) priors. The resulting pixelwise classification is then used to accurately extract and reconstruct the buildings in large-scale urban areas. The proposed approach has been extensively tested and the results are reported.
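To illustrate the fusion step described in the abstract, the sketch below trains an SVM that maps a pixel's per-scale class predictions to a single label. It is a minimal stand-in using synthetic data and scikit-learn's `SVC`; the variable names, data, and kernel choice are ours, not the paper's:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_pixels, n_scales, n_classes = 500, 6, 4

# Ground-truth label for each pixel (synthetic stand-in for the labeled data).
labels = rng.integers(0, n_classes, size=n_pixels)

# Simulated per-scale CNN outputs: mostly agree with the true label, with noise.
per_scale = np.repeat(labels[:, None], n_scales, axis=1)
noisy = rng.random((n_pixels, n_scales)) < 0.2
per_scale[noisy] = rng.integers(0, n_classes, size=noisy.sum())

# The SVM's feature vector for a pixel is its six per-scale predictions;
# the SVM learns to fuse them into one label.
svm = SVC(kernel="rbf").fit(per_scale, labels)
fused = svm.predict(per_scale)
acc = (fused == labels).mean()
```

On data like this the fused accuracy is well above that of any single noisy scale, which is the point of learning the fusion rather than, say, majority voting.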

Join us at the 25th ACM Multimedia conference in Mountain View, CA, USA.

ICT lab researcher Behnam Maneshgar will be presenting our work “Automatic Adjustment of Stereoscopic Content for Long-Range Projections in Outdoor Areas”. The work is co-authored with Leila Sujir, Sudhir Mudur, and Charalambos Poullis.

Abstract: Projecting stereoscopic content onto large general outdoor surfaces, such as building facades, presents many challenges, particularly when using a red-cyan anaglyph stereo representation, if colour and depth are to be perceived as accurately as possible.
In this paper, we address the challenges relating to long-range projection mapping of stereoscopic content in outdoor areas and present a complete framework for the automatic adjustment of the content to compensate for any adverse projection surface behaviour. We formulate the problem of modeling the projection surface as one of simultaneous recovery of shape and appearance. Our system is composed of two standard fixed cameras, a long-range fixed projector, and a roving video camera for multi-view capture. The overall computational framework comprises four modules: calibration of a long-range vision system using the structure-from-motion technique, dense 3D reconstruction of the projection surface from calibrated camera images, modeling the light behaviour of the projection surface using roving camera images, and iterative adjustment of the stereoscopic content. In addition to cleverly adapting some of the established computer vision techniques, the system design we present is distinct from previous work. The proposed framework has been tested in real-world applications with two non-trivial user experience studies and the results reported show considerable improvements in the quality of 3D depth and colour perceived by human participants.
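The final module, iterative adjustment of the content, can be sketched as a generic projector-camera feedback loop. This is an illustration of the idea under an assumed surface-response model, not the paper's actual algorithm; `surface_response`, the step size, and the stopping rule are all our hypothetical choices:

```python
import numpy as np

def iterative_adjust(target, surface_response, n_iters=50, step=0.5):
    """Feedback loop: 'project' the current content, observe the result
    through the modeled surface response, and nudge the content toward
    whatever makes the observed image match the target appearance."""
    content = target.copy()
    for _ in range(n_iters):
        observed = surface_response(content)  # what the cameras would see
        # Move the content against the observed error, clipped to the
        # projector's displayable range [0, 1].
        content = np.clip(content + step * (target - observed), 0.0, 1.0)
    return content
```

For a surface that dims and tints the projection (modeled here as a simple linear response), the loop converges to the pre-compensated content whose observed appearance matches the target, up to the projector's gamut limits.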

Join us at the 9th International Conference on Virtual Worlds and Games for Serious Applications 2017

ICT Lab researcher Oliver Philbin-Briscoe will be presenting our paper “A Serious Game for Understanding Ancient Seafaring in the Mediterranean Sea”. The work is co-authored with B. Simon, S. Mudur, C. Poullis, S. Rizvic, D. Boskovic, F. Liarokapis, D. Skarlatos, I. Katsouri, and S. Demesticha.

Abstract: Commercial sea routes joining Europe with other cultures are vivid examples of cultural interaction. In this work, we present a serious game which aims to provide better insight into and understanding of seaborne trade mechanisms and seafaring practices in the eastern Mediterranean during the Classical and Hellenistic periods. The game incorporates probabilistic geospatial analysis of possible ship routes through the re-use and spatial analysis of open GIS maritime, ocean, and weather data. These routes, along with naval engineering and sailing techniques from the period, are used as underlying information for the seafaring game. This work is part of the EU-funded project iMareCulture, whose purpose is to raise European identity awareness using maritime and underwater cultural interaction and exchange in the Mediterranean Sea.

Fabio Bruno [3D Research s.r.l. – University of Calabria] will be presenting our paper “Development and integration of digital technologies addressed to raise awareness and access to European underwater cultural heritage. An overview of the H2020 i-MARECULTURE project” at the IEEE OCEANS 2017 conference.

Abstract: The Underwater Cultural Heritage (UCH) represents a vast historical and scientific resource that, often, is not accessible to the general public due to the environment and depth in which it is located. Digital technologies (Virtual Museums, Virtual Guides and Virtual Reconstruction of Cultural Heritage) provide a unique opportunity for digital accessibility to both scholars and the general public interested in having a better grasp of underwater sites and maritime archaeology. This paper presents the architecture and the first results of the Horizon 2020 i-MARECULTURE (Advanced VR, iMmersive Serious Games and Augmented REality as Tools to Raise Awareness and Access to European Underwater CULTURal heritage) project that aims to develop and integrate digital technologies for supporting the wide public in acquiring knowledge about UCH. A Virtual Reality (VR) system will be developed to allow users to visit the underwater sites through the use of Head Mounted Displays (HMDs) or digital holographic screens. Two serious games will be implemented for supporting the understanding of ancient Mediterranean seafaring and underwater archaeological excavations. An Augmented Reality (AR) system based on an underwater tablet will be developed to serve as a virtual guide for divers who visit the underwater archaeological sites.

In conjunction with the project “Exploring Elastic 3D Spaces: bodies and belonging” funded by the Social Sciences and Humanities Research Council of Canada, and in collaboration with Elastic Spaces, the Immersive & Creative Technologies Lab at Concordia University is recruiting a highly motivated researcher for a Post-Doctoral position.

The position is full-time and is funded for two years, starting in September 2017 or January 2018.

Natural Interactions in Virtual Reality

Academic Requirements:
We are looking for a highly motivated and creative individual who enjoys working in a collaborative research environment. Good communication skills and fluency in English are required. Applicants should have strong academic training, including an undergraduate or graduate degree in a relevant discipline (e.g. HCI, computer games, computer science, electrical engineering, mathematics, or statistics), and excellent mathematical skills. Experience with augmented/virtual reality, interaction technologies, 3D computer vision, graphics, robotics, or a related area, and a background in machine learning techniques are desirable and will be considered a plus.

How to Apply:
More information about the application process can be found here:

Join us at the 2017 IEEE 3DTV-CON: True Vision – Capture, Transmission and Display of 3D Video.

ICT Lab researcher Xichen Zhou will be presenting our paper “Automatic 2D to Stereoscopic Video Conversion for 3D TVs”. The work is co-authored with Bipin C. Desai and Charalambos Poullis.

Abstract: In this paper we present a novel technique for automatically converting 2D videos to stereoscopic 3D. Uniquely, the proposed approach leverages the strengths of Deep Learning to address the complex problem of depth estimation from a single image. A Convolutional Neural Network is trained on input RGB images and their corresponding depth maps. We reformulate and simplify the process of generating the second camera’s depth map and present how this can be used to render an anaglyph image. The anaglyph image was used for demonstration only, because of the easy and wide availability of red/cyan glasses; however, this does not limit the applicability of the proposed technique to other stereo formats. Finally, we present preliminary results and discuss the challenges.
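The last step of the pipeline, rendering an anaglyph from an RGB frame and its estimated depth, can be sketched with NumPy. This is a deliberately naive depth-image-based-rendering illustration, not the paper's method: the disparity model, the backward warp, and the function names are all our assumptions, and occlusions/holes are ignored:

```python
import numpy as np

def render_anaglyph(rgb, depth, max_disparity=8):
    """Warp the left (input) view into a synthetic right view by shifting
    pixels horizontally in proportion to depth, then merge the two views
    into a red-cyan anaglyph."""
    h, w, _ = rgb.shape
    # Nearer pixels (smaller depth) get larger horizontal disparity.
    disp = (max_disparity * (1.0 - depth / depth.max())).astype(int)
    rows = np.arange(h)[:, None]
    cols = np.clip(np.arange(w)[None, :] - disp, 0, w - 1)
    right = rgb[rows, cols]  # naive backward warp; ignores occlusions and holes
    anaglyph = rgb.copy()
    anaglyph[..., 1:] = right[..., 1:]  # red from the left view, green/blue from the right
    return anaglyph
```

Viewed through red/cyan glasses, each eye then sees (approximately) its own view, which is why the demonstration used anaglyphs; the same warped right view could instead feed any other stereo format.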

ICT lab researcher Oliver Philbin-Briscoe has been awarded the NSERC Undergraduate Student Research Award.

The proposal is on “Interactive Technologies in Virtual Reality”. A summary of the project is shown below:

The impressive successes in low-cost VR technologies (Oculus Rift, Virtuix Omni, Microsoft HoloLens/IllumiRoom, movement detectors and sensors such as Fitbit, etc.) and the dramatic growth of many areas in computer vision (e.g. real-time monocular SLAM, 3D reconstruction, human activity recognition, etc.) have offered an early glimpse into the fundamental changes that Virtual Reality (VR) can bring to the way humans participate in social activities, workplaces, learning, entertainment and other daily experiences. Further, these major advances, fueled by the explosion of activity in the VR consumer market, are raising visions of tremendous economic potential.

However, present-day virtual reality systems require the use of specialized, and often cumbersome, equipment for experiencing Virtual Environments (VEs) (e.g. Head Mounted Displays (HMDs), active- or passive-vision glasses with tracking markers, etc.). These shortcomings have a seriously detrimental effect on the user’s experience and, unless addressed well and fast, will prevent the realization of the societal and economic potential of VR technologies. Accordingly, the research proposed here aspires to address the above concerns and, if successful, will bring a transformational change in interaction technologies relating to Virtual Reality, pushing the state of the art towards natural interaction using hand and body gestures, eye gaze, speech and other natural and intuitive human action modalities.

In cooperation with the iMareCulture project, we are organizing a Workshop on Serious Games and Cultural Heritage in conjunction with the 9th International Conference on Virtual Worlds and Games for Serious Applications 2017.


The overall objectives of the workshop are the discussion and sharing of knowledge, and scientific and technical results, related to state-of-the-art solutions, technologies, and applications of serious games in cultural heritage, as well as the demonstration of serious games in cultural heritage. We believe the workshop will further enlarge the audience of this conference and thus keep improving the quality of the papers submitted. The nature of the workshop is multi-disciplinary, focusing on innovations in all the technology and application aspects of serious games and cultural heritage. The target audience is everyone in the general computer graphics and computer games research community with an interest in cultural heritage.


The workshop seeks original high-quality research and application/system paper submissions in all aspects of Serious Games and Cultural Heritage. Suggested topics include, but are not limited to:
• Interactive digital storytelling for virtual cultural heritage applications
• Challenges and trends in Serious Games for Cultural Heritage
• User engagement and motivation
• Assessment of the learning impact
• Human-Computer Interaction
• Game mechanics suited for CH education
• Personalization, adaptivity and Artificial Intelligence
• Game architectures
• Psychology and Pedagogy
• Best practices in the development and adoption of SGs for CH
• Generation and representation of cultural content in Games
• Culturally relevant Non-Player Characters
• Applications and case studies


Bart Simon (Concordia University)
Sudhir Mudur (Concordia University)
Charalambos Poullis (Concordia University)
Selma Rizvic (University of Sarajevo)
Dimitrios Skarlatos (Cyprus University of Technology)


Join us at the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, GRAPP 2017.

ICT Lab researcher Behnam Maneshgar will be presenting our paper “A Long-Range Vision System for Projection Mapping of Stereoscopic Content in Outdoor Areas”. The work is co-authored with Leila Sujir, Sudhir P. Mudur, and Charalambos Poullis.

Abstract: Spatial Augmented Reality, more commonly known as Projection Mapping (PM), is a projection technique which transforms a real-life object or scene into a surface for video projection (Raskar et al., 1998b). Although this technique has been pioneered and used by Disney since the seventies, it is in recent years that it has gained significant popularity due to the availability of specialized software which simplifies the otherwise cumbersome calibration process (Raskar et al., 1998a). Currently, PM is being widely used in advertising, marketing, cultural events, live performances, theater, etc., as a way of enhancing an object/scene by superimposing visual content (Ridel et al., 2014). However, despite the wide availability of specialized software, several restrictions are still imposed on the type of objects/scenes on which PM can be applied. Most limitations are due to problems in handling objects/scenes with (a) complex reflectance properties and (b) low intensity or distinct colors. In this work, we address these limitations and present solutions for mitigating these problems. We present a complete framework for calibration, geometry acquisition and reconstruction, estimation of reflectance properties, and finally color compensation; all within the context of outdoor long-range PM of stereoscopic content. Using the proposed technique, the observed projections are as close as possible [constrained by hardware limitations] to the actual content being projected, thereby ensuring the perception of depth and immersion when viewed with stereo glasses. We have performed extensive experiments and the results are reported.
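The color-compensation idea can be illustrated with a per-pixel, per-channel diagonal reflectance model, a common simplification in projector-camera work and not necessarily the exact model used in the paper; the function and variable names here are our own:

```python
import numpy as np

def compensate(target, albedo, ambient, eps=1e-6):
    """Radiometric compensation under the simplified model
        observed = albedo * projected + ambient
    (all arrays H x W x 3, values in [0, 1]).  Solves for the projector
    input that would make the observed projection match the target,
    clipped to the projector's displayable range."""
    projected = (target - ambient) / np.maximum(albedo, eps)
    return np.clip(projected, 0.0, 1.0)
```

Wherever the clip saturates (dark or strongly colored surface regions), the target is physically unreachable; this is exactly the "constrained by hardware limitations" caveat in the abstract, and handling it well is part of what makes low-intensity or distinctly colored surfaces hard.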

iMARECULTURE: Advanced VR, iMmersive Serious Games and Augmented REality as Tools to Raise Awareness and Access to European Underwater CULTURal heritage.

The principal investigator of the project is Prof. D. Skarlatos from the Cyprus University of Technology.

The ICT Lab is a member of the consortium and is working on the serious games and virtual environments.

Project summary:
The project iMARECULTURE focuses on raising European identity awareness using maritime and underwater cultural interaction and exchange in the Mediterranean Sea. Commercial ship routes joining Europe with other cultures are vivid examples of cultural interaction, while shipwrecks and submerged sites, unreachable to the wide public, are excellent samples that can benefit from immersive technologies, augmented and virtual reality. iMARECULTURE will bring inherently unreachable underwater cultural heritage within digital reach of the wide public using virtual visits and immersive technologies. Apart from reusing existing 3D data of underwater shipwrecks and sites, with respect to ethics, rights and licensing, to provide a personalized dry visit to a museum visitor or augmented reality to the diver, it also emphasizes the pre- and post-encounter experiences of the digital visitor. The former is implemented by exploiting geospatially enabled technologies to develop a serious game of sailing over the ancient Mediterranean, and the latter through an underwater shipwreck excavation game. Both games are realized through social media, in order to facilitate information exchange among users. iMARECULTURE supports dry visits by providing an immersive experience through a VR cave and 3D info kiosks in museums or through the web. Additionally, it aims to significantly enhance the experience of the diver, visitor or scholar, using underwater augmented reality on a tablet with an underwater housing. iMARECULTURE is composed of universities and SMEs with experience in diverse underwater projects, existing digital libraries, and people, many of whom are divers themselves.

Duration: 36 months (1 November 2016 – 31 October 2019)
Funding body: European Commission – Research Executive Agency
Amount: €2,644,025

For more up-to-date information please visit/bookmark the project’s website: iMARECULTURE