
Join us at the 3DTV Conference 2018

ICT lab researcher Chen Qiao will be presenting our work on “Single-shot Dense Reconstruction with Epic-flow”. The work is co-authored by Chen Qiao and Charalambos Poullis.

Abstract:

In this paper we present a novel method for generating dense reconstructions by applying only structure-from-motion (SfM) to large-scale datasets, without the need for multi-view stereo as a post-processing step. A state-of-the-art optical flow technique is used to generate dense matches. The matches are encoded such that verification for correctness becomes possible, and are stored in an on-disk database. This out-of-core approach transfers the requirement for large memory space to disk, thereby allowing for the processing of even larger-scale datasets than before. We compare our approach with the state-of-the-art and present results which verify our claims.
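As a rough illustration of the out-of-core idea, the sketch below stores dense flow-derived matches in an on-disk SQLite database rather than in memory. It is not the paper's actual code: the table layout, pair-id scheme, and packing of the flow field are all assumptions, and the paper's verification encoding is not reproduced.

```python
# Minimal sketch (assumptions throughout): dense per-pixel matches from an
# optical flow field are packed into a blob and written to an on-disk SQLite
# database instead of being held in RAM.
import sqlite3
import numpy as np

def store_dense_matches(db_path, image_id_a, image_id_b, flow):
    """Encode a dense flow field (H x W x 2) as match rows keyed by image pair."""
    h, w, _ = flow.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Each row: source pixel and its matched location in the second image.
    matches = np.column_stack([
        xs.ravel(), ys.ravel(),
        (xs + flow[..., 0]).ravel(),
        (ys + flow[..., 1]).ravel(),
    ]).astype(np.float32)
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS matches "
        "(pair_id INTEGER PRIMARY KEY, rows INTEGER, cols INTEGER, data BLOB)"
    )
    pair_id = image_id_a * 2147483647 + image_id_b  # simple unique pair key (assumed)
    con.execute(
        "INSERT OR REPLACE INTO matches VALUES (?, ?, ?, ?)",
        (pair_id, matches.shape[0], matches.shape[1], matches.tobytes()),
    )
    con.commit()
    con.close()
```

With matches on disk, memory use stays roughly constant regardless of dataset size, which is the point of the out-of-core design.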

The Photogrammetric Vision lab of the Cyprus University of Technology will be presenting the joint work “Underwater Photogrammetry in Very Shallow Waters: Caustics Effect Removal and Main Challenges” at the ISPRS Technical Commission II Symposium, 2018. This work was done in collaboration with the ICT lab and the Laboratory of Photogrammetry of the National Technical University of Athens, and is co-authored by P. Agrafiotis, D. Skarlatos, T. Forbes, C. Poullis, M. Skamantzari, and A. Georgopoulos.

Abstract:

In this paper, the main challenges of underwater photogrammetry in shallow waters are described and analysed. The very short camera-to-object distance in such cases, as well as buoyancy issues, wave effects, and the turbidity of the water, are challenges to be resolved. Additionally, the major challenge of all, caustics, is addressed by a new approach for caustics removal (Forbes et al., 2018), which is applied in order to investigate its performance in terms of SfM-MVS and 3D reconstruction results. In the proposed approach, the complex problem of removing caustic effects is addressed by classifying and then removing them from the images. We propose and test a novel solution based on two small and easily trainable Convolutional Neural Networks (CNNs). Real ground truth for caustics is not easily available. We show how a small set of synthetic data can be used to train the network and later transfer the learning to real data with robustness to intra-class variation. The proposed solution results in caustic-free images which can then be used for other tasks as needed.
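As a hedged sketch of the synthetic-to-real transfer described above, one might pretrain a small per-pixel classifier on rendered caustics and then fine-tune the same weights on a few labelled real frames. The network, the data loaders, and every hyperparameter below are illustrative assumptions, not the paper's configuration.

```python
# Hypothetical sketch of synthetic-to-real transfer for caustic classification.
# `SmallCausticNet`, the loaders, and all hyperparameters are assumptions.
import torch
import torch.nn as nn

class SmallCausticNet(nn.Module):
    """A deliberately small per-pixel classifier (caustic vs. non-caustic)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # per-pixel caustic logit
        )

    def forward(self, x):
        return self.body(x)

def train(net, loader, epochs, lr):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for images, masks in loader:  # masks: 1 where caustics occur
            opt.zero_grad()
            loss = loss_fn(net(images), masks)
            loss.backward()
            opt.step()

# net = SmallCausticNet()
# Stage 1: train from scratch on synthetic caustics (cheap ground truth):
# train(net, synthetic_loader, epochs=50, lr=1e-3)
# Stage 2: fine-tune the same weights on a few real frames at a lower LR:
# train(net, real_loader, epochs=10, lr=1e-4)
```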

Our work “DeepCaustics: Classification and Removal of Caustics from Underwater Imagery” will appear as a regular journal publication in the IEEE Journal of Oceanic Engineering, 2018. The work is co-authored by Timothy Forbes, Mark Goldsmith, Sudhir Mudur, and Charalambos Poullis.

Abstract:

Caustics are complex physical phenomena resulting from the projection of light rays being reflected or refracted by a curved surface. In this work, we address the problem of classifying and removing caustics from images and propose a novel solution based on two Convolutional Neural Networks (CNNs): SalienceNet and DeepCaustics. Caustics result in changes in illumination which are continuous in nature; therefore the first network is trained to produce a classification of caustics, represented as a saliency map of the likelihood of caustics occurring at a pixel. In applications where caustic removal is essential, the second network is trained to generate a caustic-free image. It is extremely hard to generate real ground truth for caustics. We demonstrate how synthetic caustic data can be used for training in such cases, and then transfer the learning to real data. To the best of our knowledge, out of the handful of techniques which have been proposed, this is the first time that the complex problem of caustic removal has been reformulated and addressed as a classification and learning problem. This work is motivated by the real-world challenges in underwater archaeology.
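The two-stage arrangement can be pictured with the schematic sketch below. Only the ordering comes from the abstract (a saliency network classifies caustics, a removal network generates the caustic-free image); every layer choice, and the decision to condition the second network on the image plus the saliency map, is an illustrative assumption.

```python
# Illustrative two-stage pipeline (layer sizes are assumptions): a saliency
# network flags likely caustic pixels, and a removal network, given the image
# plus that saliency map, synthesizes a caustic-free image.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class SalienceNetSketch(nn.Module):
    """Outputs a per-pixel likelihood of caustics (a saliency map)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 1, 1), nn.Sigmoid())
    def forward(self, image):
        return self.net(image)

class DeepCausticsSketch(nn.Module):
    """Generates a caustic-free image from the image + saliency map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(4, 64), conv_block(64, 64),
                                 nn.Conv2d(64, 3, 1))
    def forward(self, image, saliency):
        return self.net(torch.cat([image, saliency], dim=1))

image = torch.rand(1, 3, 128, 128)             # dummy underwater frame
saliency = SalienceNetSketch()(image)          # stage 1: classify caustics
clean = DeepCausticsSketch()(image, saliency)  # stage 2: remove them
```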

Join us at the 15th Conference on Computer and Robot Vision 2018

ICT lab researcher Timothy Forbes will be presenting our work on “Deep Autoencoders with Aggregated Residual Transformations for Urban Reconstruction from Remote Sensing Data”. The work is co-authored by Timothy Forbes and Charalambos Poullis.

Abstract:

In this work we investigate urban reconstruction and propose a complete and automatic framework for reconstructing urban areas from remote sensing data.

Firstly, we address the complex problem of semantic labeling and propose a novel network architecture named SegNeXT. It combines the strengths of deep autoencoders with feed-forward links, which generate smooth predictions and reduce the number of learning parameters, with the effectiveness that cardinality-enabled residual building blocks have shown in improving prediction accuracy and outperforming deeper/wider network architectures with fewer learning parameters. The network is trained on benchmark datasets, and the reported results show that it can provide classification that is at least comparable to, and in some cases better than, the state-of-the-art.
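For readers unfamiliar with cardinality-enabled residual blocks, the sketch below shows a generic ResNeXt-style block, in which capacity is added through parallel grouped-convolution paths (the cardinality) rather than extra depth or width. It illustrates the building-block family the abstract refers to; it is not SegNeXT's actual block, and the channel counts are assumptions.

```python
# Generic cardinality-enabled residual block (ResNeXt-style), shown only to
# illustrate the concept; SegNeXT's exact architecture is not reproduced here.
import torch
import torch.nn as nn

class CardinalityResidualBlock(nn.Module):
    def __init__(self, channels, cardinality=32, bottleneck=4):
        super().__init__()
        mid = cardinality * bottleneck
        self.transform = nn.Sequential(
            nn.Conv2d(channels, mid, 1, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            # Grouped convolution: `cardinality` parallel transformation paths.
            nn.Conv2d(mid, mid, 3, padding=1, groups=cardinality, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # The residual (identity) connection lets the block learn only the
        # correction to its input, which eases optimization in deep stacks.
        return self.relu(x + self.transform(x))

x = torch.rand(1, 256, 32, 32)
y = CardinalityResidualBlock(256)(x)  # output has the same shape as x
```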

Secondly, we address the problem of urban reconstruction and propose a complete pipeline for automatically converting semantic labels into virtual representations of the urban areas. Agglomerative clustering is performed on the points according to their classification, resulting in a set of contiguous and disjoint clusters. Finally, each cluster is processed according to the class it belongs to: tree clusters are substituted with procedural models, cars are replaced with simplified CAD models, building boundaries are extruded to form 3D models, and road, low-vegetation, and clutter clusters are triangulated and simplified.
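A minimal sketch of the clustering step, assuming scikit-learn's single-linkage agglomerative clustering and an illustrative distance threshold (the paper's own implementation and parameters are not reproduced here):

```python
# Sketch: group labelled 3D points into contiguous same-class clusters, then
# dispatch each cluster to a class-specific reconstruction routine. The
# distance threshold and the class handling below are illustrative assumptions.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def cluster_by_class(points, labels, distance_threshold=2.0):
    """points: (N, 3) array of coordinates; labels: (N,) semantic class ids."""
    clusters = []
    for cls in np.unique(labels):
        pts = points[labels == cls]
        if len(pts) < 2:
            clusters.append((cls, pts))
            continue
        algo = AgglomerativeClustering(n_clusters=None, linkage="single",
                                       distance_threshold=distance_threshold)
        ids = algo.fit_predict(pts)
        for cid in np.unique(ids):
            clusters.append((cls, pts[ids == cid]))  # one contiguous cluster
    return clusters

# Each cluster is then handled according to its class, e.g.:
#   tree     -> instantiate a procedural tree model
#   car      -> place a simplified CAD model
#   building -> extrude the boundary polygon into a 3D prism
#   road     -> triangulate and simplify the surface
```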

The result is a complete virtual representation of the urban area. The proposed framework has been extensively tested on large-scale benchmark datasets and the semantic labeling and reconstruction results are reported.

Join us at the 6th International Conference on Affective Computing and Intelligent Interaction – ACII2015.

ICT Lab researcher Chris Christou will be presenting our paper “Psychophysiological Responses to Virtual Crowds: Implications for Wearable Computing”. The work is co-authored by Kyriakos Herakleous, Aimilia Tzanavari, and Charalambos Poullis.

Abstract: People’s responses to crowds were investigated with a simulation of a busy street using virtual reality. Both psychophysiological measures and a cognitive test were used to assess the influence of large crowds, or of individual agents who stood close to the participant, while the participant performed a memory task. Results from most individuals revealed strong orienting responses to changes in the crowd, indicated by sharp increases in skin conductivity and reductions in peripheral blood volume amplitude. Furthermore, cognitive function appeared to be affected: results of the memory test appeared to be influenced by how closely virtual agents approached the participants. These findings are discussed with respect to wearable affective computing, which seeks robust, identifiable correlates of autonomic activity that can be used in everyday contexts.
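For readers curious how such orienting responses are typically located in a skin-conductance trace, the generic sketch below flags sharp rises via simple peak detection on the signal's derivative. The thresholds are illustrative assumptions, and this is not the study's analysis pipeline.

```python
# Generic sketch: detect skin-conductance responses (SCRs) as sharp rises in
# a conductance trace. Thresholds are illustrative; not the paper's method.
import numpy as np
from scipy.signal import find_peaks

def detect_scrs(conductance, fs, min_rise=0.05):
    """conductance: 1-D trace in microsiemens; fs: sampling rate in Hz."""
    rise = np.gradient(conductance) * fs          # rate of change (uS per s)
    peaks, _ = find_peaks(rise, height=min_rise,
                          distance=int(1.0 * fs))  # >= 1 s between responses
    return peaks / fs                              # response onsets in seconds

# Orienting responses to crowd changes would appear as onsets shortly after
# each stimulus event, e.g.:
# onsets = detect_scrs(trace, fs=32)
```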

Venue: Xi’an, China – Grand New World Hotel – Hua Shan, Floor 1
Date: 22 September 2015
Time: 10:30-12:10, Track O2: Affect and Psychophysiology

Open Information Day – Friday, 12/7/2013

The Immersive and Creative Technologies Lab is organizing an Open Information Day for the general public on 12 July 2013 at the lab’s premises. During the Open Information Day, interested parties will have the opportunity to try the VR CAVE.

Promotion Video:

Open Information Day @ ICT Lab from TheICTLab on Vimeo.

The source code for the high-accuracy structured-light scanning system has been made available in the website’s download section. The software is available for non-commercial, research purposes only. Added: mesh export to the PLY format, and support for color.
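For context, PLY is a simple header-plus-data format, so a colored mesh export is largely a matter of writing the header and the vertex and face lists. Below is a minimal ASCII writer for a triangle mesh with per-vertex color: a generic sketch of the format, not the scanner's actual export code.

```python
# Minimal ASCII PLY writer for a triangle mesh with per-vertex color.
# Generic illustration of the PLY format; not the 3DUNDERWORLD export code.
def write_ply(path, vertices, colors, faces):
    """vertices: [(x, y, z)]; colors: [(r, g, b)] in 0-255; faces: [(i, j, k)]."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(vertices)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write(f"element face {len(faces)}\n")
        f.write("property list uchar int vertex_indices\nend_header\n")
        for (x, y, z), (r, g, b) in zip(vertices, colors):
            f.write(f"{x} {y} {z} {r} {g} {b}\n")
        for i, j, k in faces:
            f.write(f"3 {i} {j} {k}\n")

# Example: a single red triangle.
# write_ply("scan.ply", [(0, 0, 0), (1, 0, 0), (0, 1, 0)],
#           [(255, 0, 0)] * 3, [(0, 1, 2)])
```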

Live Demos @ Researchers' Night, Nicosia, Cyprus

On 27 September 2012, the Research Promotion Foundation is organizing its annual information day, “Researchers’ Night”, in Nicosia, Cyprus. 3DUNDERWORLD will be present with its own kiosk, where the technologies developed during the project will be displayed and demonstrated. Come visit us and get scanned!