Paper by I2SLAB Master's Student Inpyo Song (Advisor: Prof. Jangwon Lee) Accepted at IROS 2024
- Department of Computer Education, Sungkyunkwan University
- 2024-07-19
The paper "SFTrack: A Robust Scale and Motion Adaptive Algorithm for Tracking Small and Fast Moving Objects," authored by Inpyo Song (Department of Immersive Media Engineering) of I2SLAB (Advisor: Prof. Jangwon Lee), has been accepted to the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2024, which, together with ICRA, is one of the two most prestigious international conferences in robotics; it will be presented in October. In this paper, Prof. Jangwon Lee's research team proposes a new multi-object tracking algorithm that handles objects appearing small and indistinct in UAV footage due to the fast motion of the drone and its high-altitude, wide-angle views. Details of the paper are as follows.
[Paper]
Inpyo Song and Jangwon Lee, "SFTrack: A Robust Scale and Motion Adaptive Algorithm for Tracking Small and Fast Moving Objects," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024), Oct. 2024.
[Abstract]
This paper addresses the problem of multi-object tracking in Unmanned Aerial Vehicle (UAV) footage. It plays a critical role in various UAV applications, including traffic monitoring systems and real-time suspect tracking by the police. However, this task is highly challenging due to the fast motion of UAVs, as well as the small size of target objects in the videos caused by the high-altitude and wide-angle views of drones. In this study, we thus introduce a refined method to overcome these challenges. Our approach involves a new tracking strategy, which initiates the tracking of target objects from low-confidence detections commonly encountered in UAV application scenarios. Additionally, we propose revisiting traditional appearance-based matching algorithms to improve the association of low-confidence detections. To evaluate the effectiveness of our method, we conducted benchmark evaluations on two UAV-specific datasets (VisDrone2019, UAVDT) and a general dataset (MOT17). The results demonstrate that our approach surpasses current state-of-the-art methodologies, highlighting its robustness and adaptability in diverse tracking environments. Furthermore, we have improved the annotation of the UAVDT dataset by rectifying several errors and addressing omissions found in the original annotations. We will provide this refined version of the dataset to facilitate better benchmarking in the field.
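The abstract outlines two ideas: starting tracks from low-confidence detections (common for small, distant objects in UAV footage) and revisiting classical appearance-based matching to associate those detections. The sketch below illustrates that general two-stage idea only; the thresholds, the greedy matching, and the color-histogram descriptor are illustrative assumptions and not the paper's actual SFTrack implementation.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def color_histogram(patch, bins=16):
    """Simple per-channel color histogram as a classical appearance descriptor."""
    hist = [np.histogram(patch[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
    hist = np.concatenate(hist).astype(np.float32)
    return hist / (hist.sum() + 1e-9)

def associate(tracks, detections, frame, high_thr=0.6, low_thr=0.1,
              iou_gate=0.3, app_gate=0.5):
    """Two-stage association (illustrative only):
    1) high-confidence detections are matched to tracks by IoU,
    2) low-confidence detections are matched to remaining tracks by
       appearance (histogram-intersection) similarity,
    3) unmatched low-confidence detections are still allowed to start new tracks.
    `tracks`: list of dicts with 'box' and 'hist'; `detections`: list of dicts
    with 'box' and 'score'; `frame`: HxWx3 uint8 image.
    """
    high = [d for d in detections if d['score'] >= high_thr]
    low = [d for d in detections if low_thr <= d['score'] < high_thr]

    unmatched = list(range(len(tracks)))
    # Stage 1: greedy IoU matching with confident detections.
    # (Spawning new tracks from unmatched high-confidence detections is omitted.)
    for det in high:
        if not unmatched:
            break
        best = max(unmatched, key=lambda t: iou(tracks[t]['box'], det['box']))
        if iou(tracks[best]['box'], det['box']) >= iou_gate:
            tracks[best]['box'] = det['box']
            unmatched.remove(best)

    # Stage 2: appearance matching for low-confidence (small/blurry) detections.
    new_tracks = []
    for det in low:
        x1, y1, x2, y2 = map(int, det['box'])
        hist = color_histogram(frame[y1:y2, x1:x2])
        if unmatched:
            best = max(unmatched,
                       key=lambda t: float(np.minimum(tracks[t]['hist'], hist).sum()))
            sim = float(np.minimum(tracks[best]['hist'], hist).sum())
            if sim >= app_gate:
                tracks[best]['box'] = det['box']
                tracks[best]['hist'] = hist
                unmatched.remove(best)
                continue
        # Unlike high-threshold-only initialization, a low-confidence
        # detection may start a new track here.
        new_tracks.append({'box': det['box'], 'hist': hist})

    tracks.extend(new_tracks)
    return tracks
```

A complete tracker would also include motion prediction (e.g., Kalman filtering) and track lifecycle management, which are omitted from this sketch for brevity.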