DSpace@MIT
From Transparency to Accountability and Back

Author(s)
Cen, Sarah; Alur, Rohan
Download
3689904.3694711.pdf (545.4 KB)

Terms of use
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
Artificial intelligence (AI) is increasingly intervening in our lives, raising widespread concern about its unintended and undeclared side effects. These developments have brought attention to the problem of AI auditing: the systematic evaluation and analysis of an AI system, its development, and its behavior relative to a set of predetermined criteria. Auditing can take many forms, including pre-deployment risk assessments, ongoing monitoring, and compliance testing. It plays a critical role in providing assurances to various AI stakeholders, from developers to end users. Audits may, for instance, be used to verify that an algorithm complies with the law, is consistent with industry standards, and meets the developer’s claimed specifications. However, AI developers and companies will rarely grant auditors unfettered access to their systems. In this work, we examine a key consideration in AI auditing: what type of access to an AI system is needed to perform a meaningful audit? Addressing this question has direct policy relevance, as it can inform AI audit guidelines and requirements. We begin by discussing the factors that auditors balance when determining the appropriate type of access, and unpack the benefits and drawbacks of four types of access. We conclude that, at minimum, black-box access—providing query access to a model without exposing its internal implementation—should be granted to auditors. In particular, we argue that black-box access effectively balances concerns related to proprietary technology, data privacy, audit standardization, and audit efficiency. We then suggest a framework for determining how much further access (on top of black-box access) to provide to auditors. We show that auditing can be cast as a natural hypothesis test and argue that this framing provides clear and interpretable guidance on the implementation of AI audits. In particular, we draw parallels between aspects of hypothesis testing and those of legal procedure, such as legal presumption and burden of proof. As a result, hypothesis testing provides an approach to AI auditing that is both interpretable and effective, offering a potential path forward despite the challenges posed by AI’s opacity.
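The abstract's framing of a black-box audit as a hypothesis test can be illustrated with a minimal sketch (not taken from the paper; the claimed-accuracy specification, function names, and significance level are illustrative assumptions). The auditor only queries the model (black-box access), presumes the developer's claim holds (the legal presumption becomes the null hypothesis), and rejects it only when the observed evidence meets the burden of proof at a chosen significance level:

```python
import math

def binom_pvalue(n, k, p0):
    """P(X <= k) for X ~ Binomial(n, p0): the chance of seeing k or
    fewer correct answers if the model truly meets accuracy p0."""
    return sum(math.comb(n, i) * p0**i * (1 - p0)**(n - i)
               for i in range(k + 1))

def audit(model, test_cases, claimed_accuracy, alpha=0.05):
    """Black-box audit as a one-sided hypothesis test.

    H0 (presumption): the model meets the claimed accuracy.
    H1: it does not. Rejecting H0 (p < alpha) means the auditor's
    evidence meets the burden of proof at significance level alpha.
    The auditor never inspects the model's internals -- only queries it.
    """
    correct = sum(model(x) == y for x, y in test_cases)
    p = binom_pvalue(len(test_cases), correct, claimed_accuracy)
    return {"correct": correct, "n": len(test_cases),
            "p_value": p, "fails_audit": p < alpha}
```

For example, a model that answers 90 of 100 queries correctly against a claimed 95% accuracy yields a small p-value and fails the audit, while a model at or above the claimed accuracy retains the presumption of compliance. The design choice mirrors the paper's point: the test is standardized and efficient, and it requires nothing beyond query access.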
Description
EAAMO ’24, October 29–31, 2024, San Luis Potosi, Mexico
Date issued
2024-10-29
URI
https://hdl.handle.net/1721.1/157655
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
ACM / Equity and Access in Algorithms, Mechanisms, and Optimization
Citation
Cen, Sarah and Alur, Rohan. 2024. "From Transparency to Accountability and Back."
Version: Final published version
ISBN
979-8-4007-1222-7

Collections
  • MIT Open Access Articles
