LLM-augmented Preference Learning from Natural Language
Published in the Economics and Computation Workshop, 2024
This study uses large language models (LLMs) to address the scarcity of preference data. By generating preference data and benchmarking several LLMs, it identifies prompts that best elicit preference information from text. Llama 2 condenses long passages while preserving preference-relevant content, and prefixing BERT's input with instructive sentences improves classification accuracy. Masking and segment embeddings are also employed to help the model compare entities.
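As a rough illustration of the BERT-side setup described above, the sketch below pairs an instructive sentence with a (possibly pre-summarized) passage, masks entity mentions behind placeholder tokens, and relies on BERT's segment embeddings to separate the instruction from the passage. The checkpoint, instruction wording, and masking scheme are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal sketch, assuming the HuggingFace `transformers` library; the
# checkpoint, instruction wording, and masking scheme are illustrative
# assumptions, not the paper's exact configuration.
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=2,  # e.g., 0 = prefers entity A, 1 = prefers entity B
)

# Mask concrete entity names behind placeholder tokens so the classifier
# compares the two entities' roles rather than memorized surface names.
placeholders = ["[ENTITY_A]", "[ENTITY_B]"]
tokenizer.add_tokens(placeholders, special_tokens=True)
model.resize_token_embeddings(len(tokenizer))

# An instructive sentence prefixed to the input (hypothetical wording).
instruction = "Decide which of the two entities the writer prefers."

# A short passage, e.g., one already condensed by a summarizer, with the
# entity mentions replaced by the placeholders above.
passage = "[ENTITY_A] was far more reliable than [ENTITY_B] in daily use."

# Encode as a sentence pair: BERT's segment (token type) embeddings put
# the instruction in segment 0 and the passage in segment 1.
inputs = tokenizer(instruction, passage, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # untrained head, so output is arbitrary
```

In practice the classification head would be fine-tuned on the generated preference data before the predictions are meaningful.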
Recommended citation: Kang, I., Ruan, S., Ho, T., Lin, J. C., Mohsin, F., Seneviratne, O., & Xia, L. (2023). LLM-augmented Preference Learning from Natural Language. arXiv preprint arXiv:2310.08523.
Download Paper