
dc.contributor.author: Liu, Eric
dc.contributor.author: So, Wonyoung
dc.contributor.author: Hosoi, Peko
dc.contributor.author: D'Ignazio, Catherine
dc.date.accessioned: 2024-11-21T17:34:05Z
dc.date.available: 2024-11-21T17:34:05Z
dc.date.issued: 2024-10-29
dc.identifier.isbn: 979-8-4007-1222-7
dc.identifier.uri: https://hdl.handle.net/1721.1/157628
dc.description: EAAMO ’24, October 29–31, 2024, San Luis Potosi, Mexico
dc.description.abstract: The integration of Large Language Models (LLMs) into a wide range of rental and real estate platforms could exacerbate historical inequalities in housing, particularly given that LLMs have exhibited gender, racial, ethnic, nationality, and language-based biases in other contexts. Examples of use cases already exist, with real estate listing platforms having launched ChatGPT plugins in 2023. In response to the critical need to assess the ways that LLMs may contribute to housing discrimination, we analyze GPT-4 housing recommendations in response to N = 168,000 prompts for renting and buying in the ten largest majority-minority cities in the US, with prompts varying by demographic characteristics such as sexuality, race, gender, family status, and source of income, many of which are protected under federal, state, and local fair housing laws. We find evidence of racial steering, default whiteness, and steering of minority homeseekers toward neighborhoods with lower opportunity indices in GPT-4's housing recommendations to prospective buyers or renters, all of which could have the effect of exacerbating segregation in already segregated cities. Finally, we discuss potential legal implications of how LLMs could be held liable under fair housing laws and end with policy recommendations regarding the importance of auditing, understanding, and mitigating risks from AI systems before they are put to use.
dc.publisher: ACM | Equity and Access in Algorithms, Mechanisms, and Optimization
dc.relation.isversionof: https://doi.org/10.1145/3689904.3694709
dc.rights: Creative Commons Attribution
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source: Association for Computing Machinery
dc.title: Racial Steering by Large Language Models: A Prospective Audit of GPT-4 on Housing Recommendations
dc.type: Article
dc.identifier.citation: Liu, Eric, So, Wonyoung, Hosoi, Peko, and D'Ignazio, Catherine. 2024. "Racial Steering by Large Language Models: A Prospective Audit of GPT-4 on Housing Recommendations."
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.contributor.department: Massachusetts Institute of Technology. Department of Urban Studies and Planning
dc.contributor.department: Massachusetts Institute of Technology. Department of Mechanical Engineering
dc.identifier.mitlicense: PUBLISHER_CC
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2024-11-01T07:54:27Z
dc.language.rfc3066: en
dc.rights.holder: The author(s)
dspace.date.submission: 2024-11-01T07:54:27Z
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed

