This post originally appeared on the IQT Blog.
IQT Labs recently completed an AI assurance audit of SkyScan, marking our third assurance audit using the AI Ethics Framework for the Intelligence Community.
Developed at IQT Labs, SkyScan is an automated system that collects and labels images of aircraft. By simultaneously capturing ADS-B (Automatic Dependent Surveillance–Broadcast) signals via a software-controllable radio receiver and images of aircraft in flight, SkyScan can generate a labeled dataset that can be used to train computer vision models to identify various types of aircraft.
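As a rough illustration of that pairing step, the sketch below matches each captured image to the ADS-B record closest in time. All names, the record format, and the time tolerance here are assumptions for illustration, not SkyScan's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class AdsbRecord:
    timestamp: float      # seconds since epoch when the message was decoded
    icao: str             # 24-bit aircraft address (hex)
    aircraft_type: str    # airframe type associated with the address

@dataclass
class ImageCapture:
    timestamp: float      # seconds since epoch when the frame was taken
    filename: str

def label_images(images, adsb_records, tolerance_s=2.0):
    """Pair each image with the ADS-B record nearest in time.

    Returns (filename, icao, aircraft_type) tuples; images with no
    record within `tolerance_s` seconds are skipped (left unlabeled).
    """
    labeled = []
    for img in images:
        nearest = min(adsb_records,
                      key=lambda r: abs(r.timestamp - img.timestamp),
                      default=None)
        if nearest and abs(nearest.timestamp - img.timestamp) <= tolerance_s:
            labeled.append((img.filename, nearest.icao, nearest.aircraft_type))
    return labeled

# Example: a frame taken 0.4 s after an ADS-B decode inherits that label;
# a frame 30 s away from any record stays unlabeled.
records = [AdsbRecord(100.0, "A1B2C3", "B738"), AdsbRecord(160.0, "D4E5F6", "A320")]
images = [ImageCapture(100.4, "frame_0001.jpg"), ImageCapture(130.0, "frame_0002.jpg")]
print(label_images(images, records))
```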
As in IQT Labs’ prior audits (FakeFinder, a deepfake detection tool, and RoBERTa, a large language model), we assessed a variety of risks, vulnerabilities, and potential concerns posed by the SkyScan system.
One key finding from this work is that hardware components complicate the auditing process in many ways, adding complexity and enlarging the attack surface. To characterize and mitigate these risks, we divided them into three categories: technical, mechanical, and architectural. Then, to help us assess the severity of these risks, we designed and implemented 10 different attacks, ranging from GPS and ADS-B spoofing, to model evasion and data poisoning, to a data science twist on a man-in-the-middle (MITM) attack that we called “Model-in-the-Middle.”
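Of those attacks, data poisoning is perhaps the simplest to illustrate. The sketch below is a generic label-flipping example, not the audit's actual attack code: it flips a small fraction of training labels to a different class, which can quietly degrade any model later trained on the poisoned set.

```python
import random

def poison_labels(labels, classes, fraction=0.05, seed=0):
    """Return a copy of `labels` with `fraction` of entries flipped
    to a different class, simulating a label-flipping poisoning attack."""
    rng = random.Random(seed)
    poisoned = list(labels)
    n_flip = int(len(poisoned) * fraction)
    # Pick distinct indices to corrupt, then assign each a wrong class.
    for i in rng.sample(range(len(poisoned)), n_flip):
        wrong_choices = [c for c in classes if c != poisoned[i]]
        poisoned[i] = rng.choice(wrong_choices)
    return poisoned

labels = ["B738"] * 50 + ["A320"] * 50
poisoned = poison_labels(labels, classes=["B738", "A320"], fraction=0.1)
print(sum(a != b for a, b in zip(labels, poisoned)))  # count of flipped labels
```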
For more information on IQT Labs’ AI audits, check out Interrogating RoBERTa: Inside the challenge of learning to audit AI models and tools and AI Assurance: What happened when we audited a deepfake detection tool called FakeFinder, or get in touch with us by emailing firstname.lastname@example.org.