When it comes to software, one thing we know for sure is that, some of the time, some of our tools are going to fail. Introducing Artificial Intelligence (AI) and Machine Learning (ML) into software tools creates additional vulnerabilities and risks.
We’ll never be able to prevent all AI incidents, but auditing AI/ML tools before they are deployed can help organizations identify and mitigate risks before they cause real-world harm. IQT Labs is developing a pragmatic, multi-disciplinary approach to auditing AI/ML tools. Our goal is to help people understand the limitations of AI/ML and identify risks before these tools are deployed in high-stakes situations. We believe auditing can help diverse stakeholders build trust in emerging technologies…when that trust is warranted.
In 2021 we audited an open-source deepfake detection tool called FakeFinder, which our IQT Labs colleagues developed earlier that year. FakeFinder uses several different crowd-sourced deep learning models to predict whether a video is a “deepfake,” that is, whether it has been modified or manipulated algorithmically.
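The multi-model idea can be sketched roughly like this. This is a minimal illustration only: the detector names, the simple averaging rule, and the 0.5 threshold are our assumptions for the sketch, not FakeFinder’s actual implementation.

```python
# Hypothetical sketch of combining several deepfake detectors' outputs.
# Each detector returns a probability that a video is a deepfake; the
# scores are pooled into a single prediction. Names and threshold are
# illustrative assumptions, not FakeFinder's real code.

def ensemble_deepfake_score(model_scores: dict) -> float:
    """Average per-model deepfake probabilities into one score."""
    return sum(model_scores.values()) / len(model_scores)

def is_deepfake(model_scores: dict, threshold: float = 0.5) -> bool:
    """Flag the video as a deepfake if the mean score clears the threshold."""
    return ensemble_deepfake_score(model_scores) >= threshold

# Example: three hypothetical detector outputs for one video.
scores = {"detector_a": 0.91, "detector_b": 0.78, "detector_c": 0.64}
print(is_deepfake(scores))  # mean of the three scores exceeds 0.5
```

Ensembling several independently trained detectors is one common way to hedge against any single model’s blind spots, though, as our audit explored, it does not eliminate them.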
To identify a broad assortment of risks, we examined FakeFinder from four perspectives: ethics, bias, cybersecurity, and the user experience, using the AI Ethics Framework for the Intelligence Community as our guide. For more information, check out our full audit report or blog posts in the Resources section.
We are currently working on an audit of RoBERTa, a pre-trained Natural Language Understanding (NLU) model. NLU models are trained on very large text datasets scraped from the internet, which makes it impossible to remove all personally identifiable information (PII) from training data and difficult to anticipate biases in model output. We are examining these (and many other!) concerns as part of our AI Assurance audit of RoBERTa.
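To see why PII removal at web scale is so hard, consider a simple pattern-based scrubber. The sketch below is purely illustrative (it is not part of our audit tooling): a regex reliably catches email-like strings, but names, addresses, and other contextual identifiers slip through, and no finite set of patterns covers them all.

```python
import re

# Illustrative sketch: pattern-based PII scrubbing catches obvious,
# well-structured identifiers but misses many others, which is one
# reason web-scale training corpora cannot be fully cleaned of PII.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact_emails(text: str) -> str:
    """Replace email-like strings with a placeholder token."""
    return EMAIL_RE.sub("[EMAIL]", text)

sample = "Contact Jane Doe at jane.doe@example.org or at her home address."
print(redact_emails(sample))
# The email address is redacted, but the person's name and the mention
# of her home address pass through untouched.
```

Detecting unstructured PII like names requires statistical named-entity recognition, which is itself imperfect, so some PII inevitably remains in any internet-scale training corpus.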
We are also actively seeking collaborators and advisors on this project, so if this work is of interest, please email email@example.com to get in touch with us!