Trust

Are you sure your AI is doing what you think it’s doing?

Throughout human history, technological revolutions have brought opportunities to better human lives. This kind of change consistently puts immense stress on society’s ability to understand its impact and integrate it into long-established moral and ethical frameworks. Because the tempo of change is often overwhelming, it generates an understandable concern:

How exactly do I know technology is working with me and not against me?

Societal trust in emerging technology is not built upon the potential it may hold but is instead fundamentally rooted in a deep understanding of its faults and limitations. At IQT Labs we explore the shortcomings of emerging Open Source systems not as a reason to abandon their use, but precisely because we believe these explorations can help us all manage the staggering pace of change around us, and better use the emerging tools to benefit healthy, free societies.

When tools are used to make decisions that affect people’s lives, good design matters in a whole different way. So as much as we like to explore and build things with emerging Open Source, we like to stress test the limits of these technologies too. Breaking things to help build user trust – is there anything more fun than that?

Related Content

Is your AI a “Clever Hans”?

  • Blog, Trust

"Clever Hans," the horse mathematician, was a worldwide sensation in 1904, but his uncanny abilities were eventually debunked. As it turns out, Clever Hans had accidentally been trained to recognize his owners’ anticipation, and other subtle cues, rather than abstract mathematical concepts. Today, something similar is happening with many AI tools and products, which are capable of fooling us with specious performance. If you’re building — or looking to purchase — an AI system, how do you know if the technology does what you think it does? In this post we discuss two strategies: AI Audits and the creation of Evaluation Authorities for AI systems.
LEARN MORE
  • Report

AI Assurance Audit of RoBERTa, an Open source, Pretrained Large Language Model

  • Investigate, Trust

Large Language Models (LLMs) have been in the news a lot recently due to their tremendous power, but also due to concerns about their potential to generate offensive text. It's difficult to know how LLMs will perform in a specific context, or to anticipate undesirable biases in model output. Read our latest AI audit report, which focuses on a pretrained LLM called RoBERTa.
LEARN MORE

IQT Labs releases audit report of RoBERTa, an open source large language model

  • Blog, Trust

Large Language Models (LLMs) are tremendously powerful, but also concerning, in part because of their potential to generate offensive, stereotyped, and racist text. Since LLMs are trained on extremely large text datasets scraped from the Internet, it is difficult to know how they will perform in a specific context, or to anticipate undesirable biases in model output. Read more to learn what we found in our latest AI audit, which focuses on a pretrained LLM called RoBERTa.
LEARN MORE
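
One inexpensive way to see this difficulty for yourself is to probe a pretrained model’s masked-word predictions directly. Below is a minimal sketch, assuming the Hugging Face transformers library and the public roberta-base checkpoint; it illustrates the kind of bias probe an audit might start from, not the methodology used in the report.

    # Illustrative sketch only: probe roberta-base for biased fill-mask
    # completions. Assumes the Hugging Face `transformers` library; this
    # is not the audit's actual tooling.
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="roberta-base")

    # Compare the model's top completions for two otherwise identical prompts.
    for prompt in ["The man worked as a <mask>.", "The woman worked as a <mask>."]:
        print(prompt)
        for result in fill_mask(prompt, top_k=5):
            print(f"  {result['token_str'].strip()}  (score={result['score']:.3f})")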

Interrogating RoBERTa: A Case Study in Cybersecurity

  • Blog, Trust

Found it! How did we discover a security hole in popular software actively used by millions of people around the world? This blog details an unexpected discovery from our audit of a Large Language Model released by Facebook.
LEARN MORE
  • AI, natural language processing

Saisiyat is where it is at! Auditing multi-lingual AI models

  • Blog, Trust

How do you audit an AI model and what can you expect to learn from it? This blog explores auditing multi-lingual AI models.
LEARN MORE

Interrogating RoBERTa: Inside the challenge of learning to audit AI models and tools

  • Blog, Trust

This post offers a look inside the challenge of learning to audit AI models and tools, exploring an audit of RoBERTa, an open source, pretrained Large Language Model.
LEARN MORE

The Promises and Perils of Adversarial Camouflage

  • Blog, Trust

An introduction to the adversarial camouflage project, focusing on how effective legacy patches truly are.
LEARN MORE
  • Report

AI Assurance Audit of FakeFinder

  • Investigate, Trust

What we found when we audited a deepfake detection tool called FakeFinder.
LEARN MORE
  • Project

FakeFinder

Batch processing of deepfake detection within videos

  • Trust

Fake videos are increasingly popping up around us because deepfake generation models are getting better at producing realistic output at an alarming scale. FakeFinder was a hands-on project aimed at better understanding Open Source models that can debunk such videos at a comparable scale.
LEARN MORE
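
For a rough sense of what batch processing of videos can look like, here is a hypothetical sketch that samples frames from each video and averages per-frame scores. It assumes OpenCV for decoding, and score_frame is an invented stand-in for whichever Open Source detection model is being evaluated; it does not reflect FakeFinder’s actual API.

    # Hypothetical sketch of batch deepfake scoring. `score_frame` stands in
    # for a real detection model; this is not FakeFinder's API.
    import cv2  # assumes OpenCV (opencv-python) is installed

    def score_video(path, score_frame, sample_every=30):
        """Average per-frame deepfake scores over sampled frames of one video."""
        capture = cv2.VideoCapture(path)
        scores, index = [], 0
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            if index % sample_every == 0:
                scores.append(score_frame(frame))
            index += 1
        capture.release()
        return sum(scores) / len(scores) if scores else None

    # Batch mode: score every video in a collection.
    # results = {path: score_video(path, score_frame) for path in video_paths}
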
  • Project

AI Assurance Auditing

Identifying risks before AI is deployed

  • Trust

IQT Labs is developing a pragmatic, multi-disciplinary approach to auditing AI & ML tools. Our goal is to help people understand the limitations of AI/ML and identify risks before these tools are deployed in high stakes situations. We believe auditing can help diverse stakeholders build trust in emerging technologies...when that trust is warranted. For more info, check out this report, which describes our auditing approach and what we found when we audited FakeFinder, an Open Source deepfake detection tool.
LEARN MORE
  • Podcast

IQT Explains: The Importance of Bias Testing in AI Assurance

  • Trust, Watch & Listen

An IQT Podcast episode exploring how we test and assess AI technology to minimize unwanted bias. Listen in to discover how AI technology is being evaluated from a legal and ethical standpoint.
LEARN MORE
  • Podcast

IQT Explains: AI Assurance

  • Trust, Watch & Listen

What happened when we audited a deepfake detection tool called FakeFinder? Our AI Assurance series continues with a podcast detailing findings from our audit of this Open Source tool, which predicts whether a video is a deepfake.
LEARN MORE
  • Blog

AI Assurance: Do deepfakes discriminate?

  • Blog, Trust

Do deepfakes discriminate? We explore this question in the fifth blog post of our AI Assurance series, discussing the bias portion of our audit of the deepfake detection tool, FakeFinder.
LEARN MORE
  • Blog

Open Source Software Nutrition Labels: Scaling Up with Anaconda Python

  • Blog, Trust

Our first blog in this series introduced our Nutrition Label concept, focused on providing open source software transparency. This final post in the series explores how we scaled up this Open Source Software “Nutrition Label” prototype to analyze the Anaconda Linux/Python 3.9 ecosystem.
LEARN MORE
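
As a toy illustration of the label idea (not the Labs’ actual tooling), the sketch below collects basic label-style facts, package name, version, and declared license, for everything installed in a Python environment, using only the standard library.

    # Toy "nutrition label" facts for installed packages; illustrative only,
    # not the IQT Labs labeling tool. Uses only the Python standard library.
    from importlib.metadata import distributions

    for dist in sorted(distributions(), key=lambda d: (d.metadata["Name"] or "").lower()):
        name = dist.metadata["Name"] or "unknown"
        license_ = dist.metadata["License"] or "unspecified"
        print(f"{name} {dist.version}: license={license_}")
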
  • Blog

AI Assurance: An “ethical matrix” for FakeFinder

  • Blog, Trust

With so many ways that AI can fail, how do we know when an AI tool is “ready” and “safe” for deployment? The fourth blog post in our AI Assurance series explores the ethics portion of our audit of FakeFinder, a deepfake detection tool.
LEARN MORE
  • Blog

Open Source Software Nutrition Labels: An AI Assurance Application

  • Blog, Trust

Since software supply chains are often opaque and complex, we recently explored the concept of an Open Source Software “Nutrition Label” as part of our AI Assurance work.
LEARN MORE
  • Blog

AI Assurance: A deep dive into the cybersecurity portion of our FakeFinder audit

  • Blog, Trust

What happened when we dove into the cybersecurity portion of our FakeFinder audit? Read on to find out.
LEARN MORE
  • Video

IQT Labs Presents FakeFinder, an Open Source Tool to Help Detect Deepfakes

  • Trust, Watch & Listen

How do you know whether a video clip represents reality, or whether the person you’re talking to on the other side of a video call is who they appear to be? IQT Labs built the FakeFinder framework to highlight some of the top-performing deepfake detection algorithms developed in the Open Source.
LEARN MORE
  • Blog

AI Assurance: What happened when we audited a deepfake detection tool called FakeFinder

  • Blog, Trust

What happened when we audited a deepfake detection tool called FakeFinder? Read the first post in our AI Assurance blog series, which details findings from our audit of this Open Source tool that predicts whether or not a video is a deepfake.
LEARN MORE
  • Blog

FakeFinder: A Platform for Deepfake Detection

  • Blog, Trust

Machine learning techniques for generating synthetic media continue to grow more sophisticated, allowing bad actors to use deepfake tools to spread misinformation. Learn how we created FakeFinder to help detect such deepfakes using deep learning.
LEARN MORE
  • Blog

DeepFake Detection Challenge, Pt. II

  • Blog, Trust

Advances in deep learning and generative models over the past half decade have brought about a rise in deepfake content. This blog breaks down our approach to Facebook’s DeepFake Detection Challenge.
LEARN MORE
  • Blog

DeepFake Detection Challenge

  • Blog, Trust

How do you know if the content you’re consuming online is real? This blog reviews Facebook’s DeepFake Detection Challenge (DFDC) and the role IQT Labs played in identifying false information online.
LEARN MORE