Immersive & Creative Technologies Lab

Strategic Incorporation of Synthetic Data for Performance Enhancement in Deep Learning: A Case Study on Object Tracking Tasks

Charalambos (Charis) Poullis

Sep 5, 2023
Our paper "Strategic Incorporation of Synthetic Data for Performance Enhancement in Deep Learning: A Case Study on Object Tracking Tasks" has been published as a conference paper at the 18th International Symposium on Visual Computing (ISVC), 2023. The work is co-authored by Jatin Katyal and Charalambos Poullis.
