All posts in ICT Lab

In conjunction with the project “Exploring Elastic 3D Spaces: bodies and belonging”, funded by the Social Sciences and Humanities Research Council of Canada, and in collaboration with Elastic Spaces (http://www.elasticspaces.hexagram.ca/), the Immersive & Creative Technologies Lab (www.theICTlab.org) at Concordia University is recruiting a highly motivated researcher for a Post-Doctoral position.

The position is full-time and is funded for two years, starting in September 2017 or January 2018.

Topic:
Natural Interactions in Virtual Reality

Academic Requirements:
We are looking for a highly motivated and creative individual who enjoys working in a collaborative research environment. Good communication skills and fluency in English are required. Applicants should have strong academic training, including an undergraduate or graduate degree in a relevant discipline, e.g. HCI, computer games, computer science, electrical engineering, mathematics, or statistics, and excellent mathematical skills. Experience with augmented/virtual reality, interaction technologies, 3D computer vision, graphics, robotics, or a related area, and a background in machine learning techniques, is desirable and will be considered a plus.

How to Apply:
More information about the application process can be found here: https://goo.gl/PPm1Kb

Join us at the 2017 IEEE 3DTV-CON: True Vision – Capture, Transmission and Display of 3D Video (3DTV-CON 2017).

ICT Lab researcher Xichen Zhou will be presenting our paper “Automatic 2D to Stereoscopic Video Conversion for 3D TVs”. The work is co-authored with Bipin C. Desai and Charalambos Poullis.

Abstract: In this paper we present a novel technique for automatically converting 2D videos to stereoscopic 3D. Uniquely, the proposed approach leverages the strengths of Deep Learning to address the complex problem of depth estimation from a single image. A Convolutional Neural Network is trained on input RGB images and their corresponding depth maps. We reformulate and simplify the process of generating the second camera’s depth map and present how this can be used to render an anaglyph image. The anaglyph image was used for demonstration only because of the easy and wide availability of red/cyan glasses; however, this does not limit the applicability of the proposed technique to other stereo formats. Finally, we present preliminary results and discuss the challenges.
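
To make the final rendering step concrete, below is a minimal illustrative sketch (Python/NumPy, not the paper’s implementation) of how a predicted depth map can drive a simple disparity-based warp to synthesize the second view and compose a red/cyan anaglyph. The linear depth-to-disparity mapping and the naive forward warp with unfilled holes are simplifying assumptions:

    import numpy as np

    def synthesize_right_view(left_rgb, depth, max_disparity=16):
        # Naive depth-image-based rendering: nearer pixels shift further left.
        # left_rgb: H x W x 3 uint8, depth: H x W (larger values = farther away).
        h, w, _ = left_rgb.shape
        d = depth.astype(np.float32)
        d = (d - d.min()) / (d.max() - d.min() + 1e-8)      # normalize to [0, 1]
        disparity = (max_disparity * (1.0 - d)).astype(np.int32)

        right = np.zeros_like(left_rgb)                      # unfilled holes stay black
        cols = np.arange(w)
        for y in range(h):
            x_new = np.clip(cols - disparity[y], 0, w - 1)   # forward warp
            right[y, x_new] = left_rgb[y, cols]
        return right

    def anaglyph(left_rgb, right_rgb):
        # Red/cyan anaglyph: red channel from the left eye, green/blue from the right.
        out = right_rgb.copy()
        out[..., 0] = left_rgb[..., 0]
        return out

In the paper the depth map itself comes from the trained CNN; any hole-filling or smoothing strategy would replace the black gaps left by this naive warp.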

ICT Lab researcher Oliver Philbin-Briscoe has been awarded the NSERC Undergraduate Student Research Award.

The proposal is on “Interactive Technologies in Virtual Reality”. A summary of the project is shown below:

The impressive successes in low-cost VR technologies (Oculus Rift, Virtuix Omni, Microsoft HoloLens/IllumiRoom, movement detectors and sensors such as Fitbit, etc.) and the dramatic growth of many areas in computer vision (e.g. real-time monocular SLAM, 3D reconstruction, human activity recognition, etc.) have offered an early glimpse into the fundamental changes that Virtual Reality (VR) can bring to the way humans participate in social activities, workplaces, learning, entertainment and other daily experiences. Further, these major advances, fueled by the explosion of activity in the VR consumer market, are raising visions of tremendous economic potential.

However, present-day virtual reality systems require the use of specialized, and often cumbersome, equipment for experiencing Virtual Environments (VEs) (e.g. Head-Mounted Displays (HMDs), active-vision or passive-vision glasses with tracking markers, etc.). These shortcomings have a seriously detrimental effect on the user’s experience and, unless addressed soon and well, will prevent the realization of the societal and economic potential of VR technologies. Accordingly, the research proposed here aspires to address the above concerns and, if successful, will bring a transformational change in interaction technologies relating to Virtual Reality, pushing the state of the art towards natural interaction using hand and body gestures, eye gaze, speech and other natural and intuitive human action modalities.

In cooperation with the iMARECULTURE project, we are organizing a Workshop on Serious Games and Cultural Heritage in conjunction with the 9th International Conference on Virtual Worlds and Games for Serious Applications 2017.

Scope

The overall objectives of the workshop are to discuss and share knowledge, scientific and technical results related to state-of-the-art solutions, technologies, and applications of serious games in cultural heritage, and to demonstrate serious games in cultural heritage. We believe the workshop will further enlarge the audience of the conference and thus help improve the quality of the papers submitted. The nature of the workshop is multi-disciplinary, focusing on innovations in all technology and application aspects of serious games and cultural heritage. The target audience is everyone in the general computer graphics and computer games research community with an interest in cultural heritage.

Topics

The workshop seeks original high-quality research and application/system paper submissions in all aspects of Serious Games and Cultural Heritage. Suggested topics include, but are not limited to:
• Interactive digital storytelling for virtual cultural heritage applications
• Challenges and trends in Serious Games for Cultural Heritage
• User engagement and motivation
• Assessment of the learning impact
• Human-Computer Interaction
• Game mechanics suited for CH education
• Personalization, adaptivity and Artificial Intelligence
• Game architectures
• Psychology and Pedagogy
• Best practices in the development and adoption of SGs for CH
• Generation and representation of cultural content in Games
• Culturally relevant Non-Player Characters
• Applications and case studies

Co-chairs

Bart Simon (Concordia University)
Sudhir Mudur (Concordia University)
Charalambos Poullis (Concordia University)
Selma Rizvic (University of Sarajevo)
Dimitrios Skarlatos (Cyprus University of Technology)


Join us at the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, GRAPP 2017.

ICT Lab researcher Behnam Maneshgar will be presenting our paper “A Long-Range Vision System for Projection Mapping of Stereoscopic Content in Outdoor Areas”. The work is co-authored with Leila Sujir, Sudhir P. Mudur, and Charalambos Poullis.

Abstract: Spatial Augmented Reality, more commonly known as Projection Mapping (PM), is a projection technique which transforms a real-life object or scene into a surface for video projection (Raskar et al., 1998b). Although this technique has been pioneered and used by Disney since the seventies, it is in recent years that it has gained significant popularity due to the availability of specialized software which simplifies the otherwise cumbersome calibration process (Raskar et al., 1998a). Currently, PM is widely used in advertising, marketing, cultural events, live performances, theater, etc., as a way of enhancing an object/scene by superimposing visual content (Ridel et al., 2014). However, despite the wide availability of specialized software, several restrictions are still imposed on the type of objects/scenes on which PM can be applied. Most limitations are due to problems in handling objects/scenes with (a) complex reflectance properties and (b) low-intensity or distinct colors. In this work, we address these limitations and present solutions for mitigating these problems. We present a complete framework for calibration, geometry acquisition and reconstruction, estimation of reflectance properties, and finally color compensation, all within the context of outdoor long-range PM of stereoscopic content. Using the proposed technique, the observed projections are as close as possible (constrained by hardware limitations) to the actual content being projected, thereby ensuring the perception of depth and immersion when viewed with stereo glasses. We have performed extensive experiments and report the results.
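
To give a flavor of the color compensation step, here is a minimal illustrative sketch (Python/NumPy, not the paper’s actual model), assuming a simplified per-pixel, per-channel image formation model, observed = albedo × projected + ambient. Real systems also need geometric calibration and the projector’s response curve, both omitted here:

    import numpy as np

    def compensate(target, albedo, ambient):
        # Invert the simplified model observed = albedo * projected + ambient
        # to find the projector input that makes the observed surface match
        # the target content. All inputs: H x W x 3 float arrays in [0, 1].
        projected = (target - ambient) / np.maximum(albedo, 1e-3)
        # Values outside [0, 1] are physically unachievable (out of gamut):
        # dark or strongly colored surfaces limit how well PM can compensate.
        return np.clip(projected, 0.0, 1.0)

The clipping step is exactly where the limitations mentioned in the abstract appear: on surfaces with low intensity or distinct colors, large parts of the compensated image fall out of gamut.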

iMARECULTURE: Advanced VR, iMmersive serious games and Augmented REality as tools to raise awareness and access to European underwater CULTURal heritagE.

The principal investigator of the project is Prof. D. Skarlatos from the Cyprus University of Technology.

The ICT Lab is a member of the consortium and is working on serious games and virtual environments.

Project summary:
The iMARECULTURE project focuses on raising European identity awareness using maritime and underwater cultural interaction and exchange in the Mediterranean Sea. Commercial ship routes joining Europe with other cultures are vivid examples of cultural interaction, while shipwrecks and submerged sites, unreachable to the wide public, are excellent examples that can benefit from immersive technologies and augmented and virtual reality. iMARECULTURE will bring inherently unreachable underwater cultural heritage within digital reach of the wide public using virtual visits and immersive technologies. Apart from reusing existing 3D data of underwater shipwrecks and sites (while respecting ethics, rights and licensing) to provide a personalized dry visit for the museum visitor or augmented reality for the diver, the project also emphasizes developing the digital visitor’s pre- and post-encounter experience. The former is implemented by exploiting geospatially enabled technologies to develop a serious game of sailing across the ancient Mediterranean, and the latter through an underwater shipwreck excavation game. Both games are delivered through social media in order to facilitate information exchange among users. iMARECULTURE supports dry visits by providing immersive experiences through a VR CAVE and 3D info kiosks in museums or through the web. Additionally, it aims to significantly enhance the experience of the diver, visitor or scholar, using underwater augmented reality on a tablet in an underwater housing. The iMARECULTURE consortium is composed of universities and SMEs with experience in diverse underwater projects and existing digital libraries, and includes many people who are divers themselves.

Duration: 36 months (1 November 2016 – 31 October 2019)
Funding body: European Commission – Research Executive Agency
Amount: €2,644,025

For more up-to-date information please visit/bookmark the project’s website: iMARECULTURE

3DUNDERWORLD-SLS v4 has been released.

The latest version features multiple optimizations in the processing pipeline. 3DUNDERWORLD-SLS v4.x requires two or more cameras and includes a CUDA GPU implementation as well as a CPU implementation for when an Nvidia card is not found. In this version, we provide a generic camera interface implementation, which the programmer can extend to support any kind of camera.
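
The library itself is C++; purely as a conceptual illustration of this extension pattern, a hypothetical equivalent of such a camera interface might look like the following Python sketch (all class and method names here are invented, not the library’s actual API):

    from abc import ABC, abstractmethod
    import numpy as np

    class Camera(ABC):
        # Hypothetical generic camera interface: the scanning pipeline only
        # talks to these three methods, so any device can be plugged in.
        @abstractmethod
        def open(self) -> None: ...

        @abstractmethod
        def capture(self) -> np.ndarray:
            """Return one frame as an H x W intensity array."""

        @abstractmethod
        def close(self) -> None: ...

    class FileCamera(Camera):
        # Example extension: "captures" pre-recorded frames stored on disk,
        # useful for testing the pipeline without physical hardware.
        def __init__(self, paths):
            self.paths = list(paths)
            self.index = 0

        def open(self):
            self.index = 0

        def capture(self):
            frame = np.load(self.paths[self.index])  # frames saved as .npy
            self.index += 1
            return frame

        def close(self):
            pass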

The source code and sample data can be found at the lab’s GitHub account.

The technical report can be found on arXiv.


This is an article about our 3DUNDERWORLD-SLS software, written by the European Commission’s Innovation Union.


Details about important dates and submission instructions can be found on the workshop’s website: http://www.multimediauts.org/3DWorkshop_CVPR2016/

Recent research on large scale 3D data has been boosted by a number of rapid academic and industrial advances. 3D sensing devices, ranging from consumer depth cameras like Microsoft’s Kinect to professional laser-scanners, make 3D data capture readily available in real life. Moreover, structure from motion and dense multi-view stereo have matured to also deliver large scale point clouds. These point clouds typically need to be processed further into higher level geometric representations (for example, surface meshes), and semantically analysed (for example, object detection). These representations open up many exciting applications for mapping services, navigation systems and virtual/augmented reality devices.
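
As a concrete example of the first processing step, the sketch below uses the open-source Open3D library (an assumption for illustration; the workshop does not prescribe any toolkit) to turn a raw point cloud into a surface mesh via Poisson reconstruction. The file names are hypothetical:

    import open3d as o3d  # assumed installed: pip install open3d

    # Load a raw point cloud and estimate per-point normals,
    # which Poisson surface reconstruction requires.
    pcd = o3d.io.read_point_cloud("scan.ply")  # hypothetical input file
    pcd.estimate_normals()

    # Reconstruct a triangle mesh: one common route from raw points
    # to the higher-level geometric representations mentioned above.
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
    o3d.io.write_triangle_mesh("scan_mesh.ply", mesh)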

This full-day workshop is inspired by these exciting advances and will cover large scale 3D data research topics, including acquisition, modelling and analysis. One key feature of our workshop is to introduce two 3D data challenges. The first challenge addresses semantic segmentation of large outdoor point clouds (see http://www.semantic3d.net/). The second challenge aims to evaluate multiple-view stereo algorithms applied to large numbers of commercial satellite images (check the workshop web site for the latest information).

Moreover, the full-day workshop is expected to demonstrate the convergence of state-of-the-art 3D sensor technology, 3D computer vision, and 3D applications such as augmented reality through a forum of invited talks by leading researchers and submitted research papers for oral and poster presentation. Authors are invited to submit a full paper (two-column format, 8 pages) in accordance with the CVPR guidelines available on the conference website: http://cvpr2016.thecvf.com. The review will be double-blind. Only electronic submissions will be accepted. Topics of interest include, but are not limited to:

  • Semantic segmentation of 3D outdoor point clouds in photogrammetry and mapping
  • Object description, detection and recognition on large scale point cloud data
  • Matching and registration of point cloud data across different sources of sensor
  • 3D scene reconstruction through multi-sensory data fusion, alignment, and registration
  • Camera pose tracking on mobile devices
  • Appearance and illumination modelling and representation
  • 3D rendering and visualization of large scale models (e.g. for urban areas)
  • Augmented reality and merging of virtual and real worlds, augmented reality in street view, and web-based 3D map applications
  • Multiple-view stereo algorithms applied to large numbers of commercial satellite images.

Organizers (listed in alphabetical order of last names)

Mohammed Bennamoun, mohammed.bennamoun@uwa.edu.au
Myron Brown, myron.brown@jhuapl.edu
Lixin Fan, lixin.fan@nokia.com
Thomas Fevens, fevens@cse.concordia.ca
Hak Jae Kim, hakjae.kim@iarpa.gov
Florent Lafarge, florent.lafarge@inria.fr
Sudhir Mudur, mudur@cse.concordia.ca
Marc Pollefeys, marc.pollefeys@inf.ethz.ch
Tiberiu Popa, tiberiu.popa@concordia.ca
Fatih Porikli, fatih.porikli@anu.edu.au
Charalambos Poullis, charalambos@poullis.org
Konrad Schindler, schindler@geod.baug.ethz.ch
Qiang Wu, qiang.wu@uts.edu.au
Jian Zhang, jian.zhang@uts.edu.au
Qian-Yi Zhou, qianyi.zhou@gmail.com

During a four-day international research colloquium from February 24th to 27th, 2016, the Elastic 3D Space group of researchers, led by artists, designers and computer scientists, will explore the potential of stereoscopic technologies within artistic practices. This event brings together over 15 researchers, artists and industry experts to share their research explorations on elastic space, augmented and virtual reality, and future reality, spanning multiple disciplines: the art departments of six universities, two cultural production and exhibition sites, and departments of Computer Science and Software Engineering, Architectural History, and Performance Studies and Design, across three continents.

The February 24-27 event will start with a day of presentations, including a walking tour in the afternoon, followed by three days of workshop research exchange, with hands-on workshops, a session at the National Film Board stereoscopic studios, roundtable discussions, 3D drawing demos and virtual drawing prototypes.

This exchange will focus on the technical exploration of stereoscopic technologies and software while questioning their perceptual effects. It will deeply investigate the way our bodies relate to our built environment and interact within the illusory elastic 3D space.

There will be two keynote speeches, by Ken Perlin and Dorita Hannah.

Keynotes: Elastic 3D Space Keynotes

Program: Elastic 3D Space Colloquium

Newsletter: newsletter