Professors Barry O’Sullivan and V S Subrahmanian discuss the value of an AI-powered drone defence system paired with human judgement.
On 1 December of this year, several drones reportedly flew toward the flight path of Ukrainian president Volodymyr Zelensky’s airplane as it was over the Irish Sea, approaching Dublin. Though Zelensky and his wife arrived safely, the challenge of deterring, detecting and mitigating coordinated, multi-drone threats in the future is critically important.
Much of the public discussion about the incident has focused on attributing responsibility for it. An Taoiseach, Micheál Martin, has dismissed Russia’s denial of any role in the drone incident. This highlights the diplomatic challenges associated with incidents such as these, quite apart from the operational and technical challenges that arise. We wish to focus on those technical aspects here and consider them from a variety of perspectives.
Detecting drones
The first challenge lies in detecting the presence of drones. Detection techniques such as radar, radio-frequency scanning (which monitors communications between a drone and its handler), acoustic analysis and optical imaging all have their drawbacks, as do countermeasures such as jamming communications and GPS-spoofing, where the GPS signal used for navigation is replaced with a spurious one.
By the time a threatening drone is detected, it is often too late. Had the drones in this recent incident intended to physically attack Zelensky’s plane, they may well have succeeded.
Yet detecting the presence of a drone is not enough. Once a drone is detected, it is critical to rapidly assess the nature and severity of the threat it poses, and to respond swiftly.
Assessing the threat
Therefore, the second challenge is knowing whether a detected drone is threatening or not. We have seen mixed reporting about the number and type of drones that were flying in the region of the Irish Sea where they were spotted. Even the number of drones deemed dangerous in this incident has varied. If there were 30 drones flying in the zone of interest, determining which of those 30 poses a threat is key.
A third challenge is understanding the nature of the threat. Were the several drones alleged to be suspicious behaving in a coordinated manner? What was the likelihood that they were armed? Which of many possible targets is of interest to them?
Perhaps the most significant challenge relates to time. Had the drones involved planned to shoot at Zelensky’s plane, would security officials have been able to address the threat before shots were fired? How long did it take before security officials noticed the threat? How long did it take before they identified the several drones that were subsequently determined to constitute a threat? And could these decisions have been accurately made in real time during the incident itself, while still providing ample opportunity to security officials to counteract the threat?
Human-AI defence systems
The solution appears to be multi-tiered: a human-AI defence system that uses AI to achieve the speed that humans lack, and humans to minimise the errors made by AI systems.
The outputs of diverse traditional methods to detect the presence of drones (radar analysis, optical/thermal image analysis, acoustic analysis, radio frequency scan analysis) can feed into machine learning models that infer, with high accuracy, where and when a drone is present. Drones detected in this way can be fed into drone early warning systems that, again, use machine learning to predict whether an observed drone trajectory poses a threat or not.
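The fusion step described above can be sketched in a few lines of code. Everything in this illustration is hypothetical – the sensor names, weights, bias and threshold are invented for the example, not drawn from any deployed system; in practice the weights would be learned from labelled data by a machine learning model.

```python
from dataclasses import dataclass
from math import exp

# Hypothetical per-sensor confidence scores in [0, 1] that a drone is present.
@dataclass
class SensorReadings:
    radar: float
    rf_scan: float
    acoustic: float
    optical: float

def fuse_detections(r: SensorReadings) -> float:
    """Logistic fusion of sensor scores into a single probability-like value.
    The weights and bias here are illustrative placeholders; a real system
    would learn them from labelled training data."""
    weights = {"radar": 2.0, "rf_scan": 1.5, "acoustic": 1.0, "optical": 1.2}
    bias = -2.5
    z = bias + sum(weights[k] * getattr(r, k) for k in weights)
    return 1.0 / (1.0 + exp(-z))

def is_drone_present(r: SensorReadings, threshold: float = 0.5) -> bool:
    # Flag a detection when the fused score crosses the decision threshold.
    return fuse_detections(r) >= threshold
```

The point of fusing several weak signals is that no single sensor is reliable on its own: radar struggles with small, low-flying drones, and acoustic or optical methods degrade with distance and weather, but their combined evidence can be strong.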
Reports about the 1 December incident suggest that the drones in question violated a no-fly zone (something that should trigger an alert) and additionally flew toward Zelensky’s plane – a high-value asset that should also trigger an alert.
Such alerts could automatically trigger proportional responses, such as jamming communications between the suspicious drones and their handler, or launching a drone swarm from nearby naval vessels to intercept them. Such a swarm could intercept the flight path and use advanced machine learning models to dynamically track each offending drone and shape its path away from a plane such as the one Zelensky was on.
Should such measures fail, security officials using an online dashboard that highlights drones predicted to be highly threatening can quickly examine those drones and adjust the threat level based on their expertise. Authorised officials trained in drone defence could push a button authorising one of the interceptor drones to fire on any or all of the suspicious drones in a manner that does not jeopardise other entities.
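The human-in-the-loop arrangement described above – the AI proposes a threat level, a trained official may override it, and lethal action requires explicit human authorisation – can be sketched as follows. The class and function names are illustrative inventions, not part of any real dashboard.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrackedDrone:
    """A drone on the dashboard: the AI's predicted threat score,
    plus an optional override entered by a security official."""
    drone_id: str
    ai_threat: float                      # model-predicted threat in [0, 1]
    human_threat: Optional[float] = None  # official's override, if any

    @property
    def effective_threat(self) -> float:
        # A human override always takes precedence over the AI's score.
        return self.ai_threat if self.human_threat is None else self.human_threat

def authorise_interception(drone: TrackedDrone, official_confirms: bool,
                           threshold: float = 0.8) -> bool:
    """Firing requires BOTH a high effective threat score AND an explicit
    human confirmation; the AI alone can never authorise an interception."""
    return official_confirms and drone.effective_threat >= threshold
```

The design choice worth noting is the asymmetry: the AI supplies speed by triaging dozens of tracks at once, while the human supplies accountability, and the override path lets an expert downgrade a false alarm before anything irreversible happens.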
At this stage, of course, we do not know what defences the Irish military had in place, nor do we know what rationale was used to take or not take certain actions. Of course, an objective of hybrid activities like these is often to test how a country might respond. Incidents like these are sometimes designed to provoke a response, possibly a disproportionate one, that might cause a further escalation or might be exploited politically and diplomatically. Sometimes the best response is to not respond at all unless there is a clear and present danger.
Also, we do not know the timing involved in the decisions that were made on 1 December. What we do know is that AI and security officials, working together, can jointly arrive at the right decision – quickly and responsibly.
By Barry O’Sullivan and V S Subrahmanian
Prof Barry O’Sullivan is director of the Insight Research Ireland Centre for Data Analytics at University College Cork.
Prof V S Subrahmanian is director of the Northwestern Security and AI Lab at Northwestern University, Evanston, Illinois.