
About iHunter Project

Intelligent UAV System for Autonomous Aerial Target Detection & Interception

The project aims to develop an autonomous drone system with onboard intelligence for tracking other drones or targets in surveillance and reconnaissance missions, using computer vision, machine learning, and robotics. The hunting drone is equipped with sensors and cameras to detect and track targets, together with decision-making algorithms that determine the best approach to track and follow them in preparation for interception. The system also includes navigation and communication capabilities for remote monitoring and high-level supervision by an operator. The development of interception methodologies is beyond the scope of this project. The ultimate goal is to provide a solution for defending against unauthorized or malicious drone activity in sensitive areas such as airports, prisons, and critical infrastructure, as well as for surveillance and tracking applications in fields such as search and rescue and security.

  • Target (drone) detection using a combination of AI models and Kalman Filter feedback algorithms

  • Multi-target state estimation using our field-tested Kalman filter implementation

  • Accurate trajectory prediction using AI models trained on our dataset of 5000 generated flight trajectories

  • Feasible trajectory generation and tracking using MPC and SE(3) controllers

Project Tasks

Four main pipelines: Perception, State Estimation, Prediction, Tracking

Drone Detection using AI models

We combine AI-based object detection and traditional computer vision techniques, coupled with Kalman filtering feedback, to dramatically enhance real-time drone detection. This work has been submitted to IEEE Robotics and Automation Letters (RA-L) and is currently under review.

Kalman-Filter Guided Detection

In this work, we introduce a novel approach to enhance autonomous object tracking in unmanned aerial vehicles (UAVs), crucial for applications such as security, surveillance, and drone-to-target interception. Our methodology synergizes advanced vision-based object detection, using the state-of-the-art YOLOv8, with Kalman Filter-based state estimation to ensure precise and stable tracking in dynamic environments. One of the primary challenges in UAV tracking is maintaining data continuity, especially in fast-paced scenarios where traditional sensor data can be unreliable. To overcome this, our method innovatively uses high-frequency state estimates from the Kalman Filter as feedback. This guides the search for new measurements when primary measurements from object detection methods are momentarily unavailable (even for extended time periods), thereby significantly reducing tracking discontinuity. We implement our approach with depth cameras (RGB+depth sensors) to detect target UAVs and validate its effectiveness in a realistic ROS2-based simulation environment. Our results demonstrate a marked improvement in tracking stability, maintaining accurate state estimations even when YOLOv8-based detection faces interruptions, with estimation RMSE as low as 0.04 m. This not only bolsters UAV tracking robustness but also paves the way for more dependable autonomous UAV operations in complex scenarios. Furthermore, we contribute to the robotics community by providing open-source ROS2-compatible implementations of both the Kalman Filter and our proposed tracking algorithms.

Paper: Submitted to RA-L (2024), under review

Code: To be released soon
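Until the code is released, here is a minimal sketch of the feedback idea (all function and parameter names are hypothetical, not the actual implementation): the Kalman Filter's high-rate state prediction is projected into the image with a pinhole camera model, and the resulting pixel defines a cropped region of interest in which the detector re-searches whenever the full-frame YOLOv8 detection drops out.

```python
import numpy as np

def predict_state(x, P, F, Q):
    """Kalman predict step: propagate state and covariance one step ahead."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def project_to_pixel(p_cam, fx, fy, cx, cy):
    """Pinhole projection of a 3D point in the camera frame to pixel coords.
    Assumes the point is in front of the camera (p_cam[2] > 0)."""
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return u, v

def search_roi(x_pred, intrinsics, half_size=80, img_w=640, img_h=480):
    """Build a square ROI around the KF-predicted target pixel.
    The detector is re-run only inside this crop when detection is lost."""
    fx, fy, cx, cy = intrinsics
    u, v = project_to_pixel(x_pred[:3], fx, fy, cx, cy)
    x0 = int(np.clip(u - half_size, 0, img_w - 1))
    y0 = int(np.clip(v - half_size, 0, img_h - 1))
    x1 = int(np.clip(u + half_size, 0, img_w - 1))
    y1 = int(np.clip(v + half_size, 0, img_h - 1))
    return x0, y0, x1, y1
```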

State Estimation

We use our own field-tested implementation of a multi-target Kalman filter to enable real-time state estimation of multiple drones.

Code: https://github.com/mzahana/multi_target_kf/tree/ros2_humble
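As a rough illustration of what the package provides (a toy constant-velocity filter, not the package's actual API; the repository supports multiple motion models and runs one filter per target with measurement gating for the multi-target case):

```python
import numpy as np

class ConstantVelocityKF:
    """Toy 3D constant-velocity Kalman filter (state: [x, y, z, vx, vy, vz])."""

    def __init__(self, dt, q=0.1, r=0.05):
        self.x = np.zeros(6)                # state estimate
        self.P = np.eye(6)                  # state covariance
        self.F = np.eye(6)                  # state transition
        self.F[:3, 3:] = dt * np.eye(3)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # position is measured
        self.Q = q * np.eye(6)              # process noise
        self.R = r * np.eye(3)              # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        y = z - self.H @ self.x                        # innovation
        S = self.H @ self.P @ self.H.T + self.R        # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x
```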

Trajectory Prediction

A precise prediction of the aerial target’s 3D trajectory is essential for accurate agile tracking and interception. We have been working on multiple frameworks that focus on real-time accurate trajectory prediction for multi-rotor UAVs.


Real-Time 3D UAV Trajectory Prediction Using Sequence-Based Neural Models

This work introduces a new prediction scheme for real-time 3D Unmanned Aerial Vehicle (UAV) trajectories using sequence-based neural networks with gated recurrent units (GRUs). Unlike existing solutions, ours combines both position and velocity histories to attain more accurate predictions of future UAV locations. Without loss of generality, our paper focuses on an experimental setting where a UAV is monitored by an observer, such as an interceptor drone, for aerial surveillance and defense purposes. A salient feature of our research lies in the use of both synthetic and real-world 3D UAV trajectories. These trajectories, characterized by varying agility patterns, curvatures, and speeds, are used to assess the performance of the proposed prediction scheme. Synthetic trajectories are simulated based on realistic configurations of the Gazebo robotics simulator and the PX4 Autopilot. To assess the suitability of our prediction algorithm, real-world drone trajectories are considered, including the UZH-FPV and Mid-Air drone racing datasets. To enhance the memory capabilities of the prediction process, sequence-based neural networks are considered: GRU cells encode the prediction memory and the hidden/latent states. Training data consisting of drone 3D position and velocity samples are used to train the GRU-based prediction algorithm. The reported results indicate the superiority of the velocity-based position prediction algorithm. Additionally, our algorithm outperforms existing ones based on recurrent neural networks (RNNs). On equal footing with respect to dataset size, our GRU-based model outperforms those using transformers, as the latter suffer from scaling problems. 24 different model configurations are assessed against simulated and real-world data, achieving average mean square errors (MSE) ranging from 2 × 10⁻⁸ to 2 × 10⁻⁷. Finally, the superiority of velocity-based predictions is confirmed in all the experiments performed.

Paper: Submitted (2024), under review.

Code: uav_dataset_generation_ros

Dataset: 5000 UAV trajectories
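A hedged PyTorch sketch of the kind of sequence model described above (layer sizes, horizon, and history length are illustrative placeholders, not the paper's configuration): the network consumes a history of concatenated position and velocity samples and regresses a short horizon of future positions.

```python
import torch
import torch.nn as nn

class GRUTrajectoryPredictor(nn.Module):
    """Predict future 3D positions from a history of position+velocity samples."""

    def __init__(self, hidden_size=64, horizon=10):
        super().__init__()
        self.gru = nn.GRU(input_size=6, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 3 * horizon)
        self.horizon = horizon

    def forward(self, history):
        # history: (batch, seq_len, 6) -> [x, y, z, vx, vy, vz] per time step
        _, h_n = self.gru(history)            # h_n: (1, batch, hidden)
        out = self.head(h_n.squeeze(0))       # (batch, 3 * horizon)
        return out.view(-1, self.horizon, 3)  # (batch, horizon, 3) positions

# Example: 32 sequences of 50 past samples -> 10 predicted positions each
model = GRUTrajectoryPredictor()
pred = model(torch.randn(32, 50, 6))
print(pred.shape)  # torch.Size([32, 10, 3])
```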

Enabling Real-Time Trajectory Prediction for Agile Drone-to-Drone Tracking via Adaptive Model Selection

Tracking a target drone using another drone is crucial in various scenarios, such as protecting critical infrastructure, securing public events, enforcing no-fly zones, and countering illegal activities. However, real-time drone-to-drone tracking poses significant challenges, particularly when the target drone exhibits agile 3D maneuvers, due to the complex dynamics of unmanned aerial vehicles and the need for accurate trajectory prediction. The quality of drone-to-drone tracking depends on the accuracy of the target's predicted trajectory (e.g., position and velocity). This paper proposes the D2DTracker framework, which generates accurate predictions of a target drone's trajectory in real time using onboard sensing and computation. D2DTracker's primary concept is to fit a library of predefined simple models using the target's past behavior and recent observations. The models are then used to generate multiple trajectory predictions in real time, and the best model is the one with the least root-mean-squared error (RMSE) with respect to the corresponding real-time observations. The fitting, prediction, and selection process is repeated as new observations arrive, adapting to the target's changing behavior and enabling the framework to maintain high tracking accuracy even in challenging scenarios. The framework is demonstrated in realistic simulations of a quadcopter using the Robot Operating System (ROS), the Gazebo simulator, and the PX4 autopilot. Simulations show that the proposed method selects models that generate predictions with an RMSE of 0.2 relative to actual observations for circular and infinity-shaped trajectories. We also provide open-source software packages of the proposed framework.


Paper: Accepted, 2024. Link will be provided once released on IEEE Xplore.

Code: d2dtracker_trajectory_prediction
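A condensed sketch of the model-fitting-and-selection loop (simplified for brevity: polynomial models stand in for the paper's model library, and the RMSE here is scored on the fitting window itself; the released d2dtracker_trajectory_prediction package is the authoritative implementation):

```python
import numpy as np

def fit_polynomial_model(t, positions, degree):
    """Fit an independent polynomial of the given degree to each axis."""
    return [np.polyfit(t, positions[:, k], degree) for k in range(3)]

def evaluate_model(coeffs, t):
    """Evaluate the per-axis polynomials at the given times -> (len(t), 3)."""
    return np.stack([np.polyval(c, t) for c in coeffs], axis=1)

def select_and_predict(t_obs, positions, t_future, degrees=(1, 2, 3)):
    """Fit each candidate model to the recent observation window, score it
    by RMSE against those observations, and extrapolate with the winner.
    Degree 1 ~ constant velocity, degree 2 ~ constant acceleration, etc."""
    best_rmse, best_coeffs = np.inf, None
    for d in degrees:
        coeffs = fit_polynomial_model(t_obs, positions, d)
        err = evaluate_model(coeffs, t_obs) - positions
        rmse = np.sqrt(np.mean(err ** 2))
        if rmse < best_rmse:
            best_rmse, best_coeffs = rmse, coeffs
    return evaluate_model(best_coeffs, t_future), best_rmse
```

In the full framework this loop is re-run on each new observation window, so the selected model changes as the target's behavior changes.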

Trajectory Tracking and Interception

For target interception, we use online trajectory generation based on Model Predictive Control (MPC): given the target's predicted trajectory, the MPC generates a feasible trajectory for the interceptor. The MPC output is then used as the reference signal for a lower-level nonlinear SE(3) controller that provides precise trajectory tracking.
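As a stripped-down illustration of the receding-horizon idea (one axis, no actuator constraints, and plain ridge least squares instead of our onboard solver), the problem can be posed as fitting a sequence of accelerations so the interceptor's predicted positions follow the target's predicted trajectory; the resulting plan is the reference the SE(3) tracker would follow:

```python
import numpy as np

def mpc_plan(p0, v0, p_ref, dt=0.05, lam=1e-2):
    """Unconstrained linear MPC for one axis of a double-integrator model.
    p0, v0: current position and velocity; p_ref: (N,) predicted target
    positions over the horizon. Returns the planned accelerations.
    Run once per axis (x, y, z)."""
    N = len(p_ref)
    # Position at step k: p0 + k*dt*v0 + sum_j A[k-1, j] * a_j, where
    # a_j contributes 0.5*dt^2 immediately plus dt^2 per subsequent step.
    A = np.zeros((N, N))
    for k in range(1, N + 1):
        for j in range(k):
            A[k - 1, j] = 0.5 * dt**2 + (k - 1 - j) * dt**2
    free = p0 + dt * v0 * np.arange(1, N + 1)   # input-free response
    # Solve: min ||A a + free - p_ref||^2 + lam * ||a||^2  (ridge least squares)
    a = np.linalg.solve(A.T @ A + lam * np.eye(N), A.T @ (p_ref - free))
    return a
```

Re-solving this problem at every control step with the latest target prediction gives the receding-horizon behavior; the regularization weight lam trades tracking aggressiveness against control effort.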

Automated Synthetic Data Generation

UAV Synthetic Dataset: A ROS2 Framework for Automated Generation of Large Dataset of UAV Trajectories

Predicting unmanned aerial vehicle (UAV) 3D trajectories using AI models poses significant challenges, primarily due to the need for comprehensive datasets. Addressing this, we introduce an open-source simulation framework, incorporating the Robot Operating System (ROS 2), Gazebo robotics simulator, and PX4 autopilot. This innovative tool is adept at creating extensive synthetic UAV trajectory datasets. It generates 3D position trajectories and provides diverse data types like GPS coordinates, IMU measurements, and color and depth images, meeting varied research needs in AI and robotics. The framework ensures realistic UAV trajectory simulations, leveraging Gazebo's high-fidelity environment and PX4's control systems to mirror real-world UAV behavior. This enhances the utility of the datasets, especially for high-precision applications. Our primary contribution includes a unique dataset of over 5000 random UAV simulated trajectories accumulated over 20 hours of flight time. The framework supports the automatic generation of varied trajectories and sensor data. It also includes a comprehensive preprocessing pipeline for dataset readiness, essential for AI model training, featuring uniform resampling, statistical normalization, and trajectory segment generation. This significantly aids in overcoming UAV dataset limitations and advances AI model development in UAV research.


Paper: Submitted to ICUAS (2024), under review.

Code: uav_dataset_generation_ros

Dataset: 5000 UAV trajectories
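A hedged sketch of the three preprocessing stages named in the abstract (parameter values are placeholders, not the framework's defaults):

```python
import numpy as np

def preprocess_trajectory(t, pos, dt=0.1, window=50, stride=10):
    """Toy version of the preprocessing pipeline: uniform resampling,
    statistical normalization, and trajectory segment generation.
    t: (M,) increasing timestamps; pos: (M, 3) raw 3D positions."""
    # 1) Uniform resampling onto a fixed-rate time grid (per axis)
    t_uniform = np.arange(t[0], t[-1], dt)
    pos_u = np.stack([np.interp(t_uniform, t, pos[:, k]) for k in range(3)],
                     axis=1)
    # 2) Statistical normalization (zero mean, unit variance per axis)
    mean, std = pos_u.mean(axis=0), pos_u.std(axis=0) + 1e-9
    pos_n = (pos_u - mean) / std
    # 3) Sliding-window segment generation for sequence-model training
    segments = [pos_n[i:i + window]
                for i in range(0, len(pos_n) - window + 1, stride)]
    return np.stack(segments), (mean, std)
```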

Meet The Team

Our Expert Team Members

Meet the dedicated professionals behind our groundbreaking work on autonomous drone systems at the RIOTU Lab, Prince Sultan University.

Research Center Director

Prof. Anis Koubaa

Robotics Team Lead

Dr. Mohamed Abdelkader

Postdoc

Dr. Imen Jarray

Research Engineer

Eng. Khaled Gabr

Research Engineer

Eng. Abdullah AlMusalami

Research Engineer

Eng. Omar Najar

Research Engineer

Eng. Abdulrahman AlBatati

Contact

GET IN TOUCH

Address

Building 101, Prince Sultan University, PPP2+CGP, Nasir Ben Farhan Street, King Salman Neighborhood, Riyadh 12435.

© Robotics and Internet-of-Things Lab.