AI Can Be Cognitively Biased: An Exploratory Study on Threshold Priming in LLM-based Batch Relevance Assessment

Authors: N Chen, J Liu, X Dong, Q Liu, T Sakai, XM Wu
Year: 2024
Venue: Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2024)

Abstract

This study explores cognitive bias in Large Language Models (LLMs) through the lens of threshold priming in batch relevance assessment. We investigate whether LLM-based assessors, like human assessors, are susceptible to threshold priming: the tendency for the relevance levels of documents judged earlier in a batch to systematically shift the judgments of documents that follow.

Keywords: LLM, cognitive bias, relevance assessment, threshold priming
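To make the batch setting concrete, the following is a minimal, hypothetical sketch of how a threshold-priming probe might be set up: the same target document is placed at the end of two batches whose preceding (priming) documents differ in relevance, and the LLM's rating of the target is compared across batches. The prompt wording, scale, and helper names here are illustrative assumptions, not the paper's actual protocol.

```python
# Hypothetical sketch of a threshold-priming probe for LLM batch
# relevance assessment. Not the authors' actual prompts or pipeline.

def build_batch_prompt(query, documents):
    """Assemble one prompt asking an LLM to grade every document in a
    batch on a 0-3 relevance scale (0 = not relevant, 3 = highly relevant)."""
    lines = [
        f"Query: {query}",
        "Rate the relevance of each document below on a 0-3 scale.",
    ]
    for i, doc in enumerate(documents, 1):
        lines.append(f"Document {i}: {doc}")
    lines.append("Answer with one rating per line, e.g. 'Document 1: 2'.")
    return "\n".join(lines)

# Same target document, different priming context. A priming effect would
# show up as the target receiving different ratings in the two batches.
query = "neural ranking models for information retrieval"
target = "A survey of neural ranking models for ad-hoc retrieval."
high_relevance_primes = [
    "BERT-based re-ranking for passage retrieval.",
    "Dense retrieval with dual encoders for web search.",
]
low_relevance_primes = [
    "A recipe for banana bread.",
    "A local weather report for the weekend.",
]

prompt_high = build_batch_prompt(query, high_relevance_primes + [target])
prompt_low = build_batch_prompt(query, low_relevance_primes + [target])
```

Sending `prompt_high` and `prompt_low` to the same model and parsing the rating of the final document would give one paired observation for the priming comparison.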

Citation

N. Chen, J. Liu, X. Dong, Q. Liu, T. Sakai, and X.-M. Wu, "AI Can Be Cognitively Biased: An Exploratory Study on Threshold Priming in LLM-based Batch Relevance Assessment," in Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2024), 2024.