AI Assurance Auditing

Identifying risks before AI is deployed

Resources

IQT Explains: The Importance of Bias Testing in AI Assurance
IQT Explains: AI Assurance
AI Assurance: What happened when we audited a deepfake detection tool called FakeFinder
AI Assurance: A deep dive into the cybersecurity portion of our FakeFinder audit
AI Assurance: An “ethical matrix” for FakeFinder
AI Assurance: Do deepfakes discriminate?
Open Source Software Nutrition Labels: An AI Assurance Application

When it comes to software, one thing we know for sure is that some of the time, some of our tools are going to fail. Introducing Artificial Intelligence (AI) and Machine Learning (ML) into software tools creates additional vulnerabilities and risks.

We’ll never be able to prevent all AI incidents, but auditing AI & ML tools before they are deployed can help organizations identify and mitigate risks before they cause real-world harm. IQT Labs is developing a pragmatic, multi-disciplinary approach to auditing AI & ML tools. Our goal is to help people understand the limitations of AI/ML and identify risks before these tools are deployed in high-stakes situations. We believe auditing can help diverse stakeholders build trust in emerging technologies… when that trust is warranted.

In 2021 we audited an open-source deepfake detection tool called FakeFinder, which our IQT Labs colleagues developed earlier that year. FakeFinder uses several different crowd-sourced deep learning models to predict whether a video is a “deepfake,” that is, whether it has been modified or manipulated algorithmically.
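
To give a flavor of what an ensemble of detectors involves, here is a minimal sketch of combining scores from several deepfake detectors into a single prediction. This is not FakeFinder’s actual code; the detector interface, names, and scores below are hypothetical placeholders.

```python
# Minimal sketch (not FakeFinder's actual implementation) of combining
# predictions from several deepfake detectors. The detector interface and
# names below are hypothetical placeholders.
from statistics import mean
from typing import Callable, Dict

# Each detector maps a video path to a probability that the video is a deepfake.
Detector = Callable[[str], float]

def ensemble_fake_probability(video_path: str, detectors: Dict[str, Detector]) -> Dict[str, float]:
    """Run every detector on the video and report individual and averaged scores."""
    scores = {name: detector(video_path) for name, detector in detectors.items()}
    scores["ensemble_mean"] = mean(scores.values())
    return scores

if __name__ == "__main__":
    # Stand-in detectors returning fixed scores; real detectors would load trained models.
    detectors = {
        "detector_a": lambda path: 0.91,
        "detector_b": lambda path: 0.78,
    }
    print(ensemble_fake_probability("example_video.mp4", detectors))
```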

To identify a broad assortment of risks, we examined FakeFinder from four perspectives: ethics, bias, cybersecurity, and user experience, using the AI Ethics Framework for the Intelligence Community as our guide. For more information, check out our full audit report or the blog posts in the Resources section.

We are currently working on an audit of RoBERTa, a pre-trained Natural Language Understanding (NLU) model. NLU models are trained on very large text datasets scraped from the internet, which makes it impossible to remove all personally identifiable information (PII) from the training data and difficult to anticipate biases in model output. We are examining these (and many other!) concerns as part of our AI Assurance audit of RoBERTa.
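
As one illustration of the kind of bias probing this involves, the sketch below uses the Hugging Face transformers fill-mask pipeline with a pretrained roberta-base checkpoint to compare the model’s top completions for paired template sentences. This is a generic example rather than our actual audit harness, and the template sentences are assumptions chosen for illustration.

```python
# A minimal sketch of probing a pretrained RoBERTa model for biased completions.
# This is an illustrative example, not IQT Labs' audit code; the template
# sentences are assumptions chosen for demonstration.
# Requires: pip install transformers torch
from transformers import pipeline

# RoBERTa uses "<mask>" as its mask token.
unmasker = pipeline("fill-mask", model="roberta-base")

templates = [
    "The man worked as a <mask>.",
    "The woman worked as a <mask>.",
]

for sentence in templates:
    predictions = unmasker(sentence, top_k=5)
    completions = [p["token_str"].strip() for p in predictions]
    print(f"{sentence} -> {completions}")
```

Comparing the two lists of completions side by side is a quick, informal way to surface occupational stereotypes the model may have absorbed from its training data.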

We are also actively seeking collaborators and advisors on this project, so if this work is of interest, please email labsinfo@iqt.org to get in touch with us!

