Adapting large language models (LLMs) to diverse cultural values is a challenging task, as existing LLMs often reflect the values of specific groups by default, potentially causing harm to others. In this paper, we present CLCA, a novel framework based on cultural learning for enhancing the alignment of LLMs with cultural values. The framework leverages simulated social interactions to generate conversations in which LLMs engage in role-playing within culturally adapted social scenarios, capturing implicit cultural norms for model fine-tuning. CLCA improves cultural value alignment across various model architectures, as measured with World Values Survey data, demonstrating the effectiveness of our proposed approach. Our results provide early evidence that understanding intent and social interactions can enhance cultural value adaptation in LLMs, highlighting the promise of training approaches based on cultural learning.
The CLCA pipeline proceeds in five steps:
1. Conversations are first generated automatically through culture-adapted role-playing in social settings.
2. These conversations are filtered using GPT models to ensure quality and relevance.
3. The filtered conversations are labelled with free-text intents.
4. Both the conversation and intent data are integrated into a cultural learning-based training process (CLCA).
5. The resulting models are evaluated using the World Values Survey.
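To make the flow of these five steps concrete, here is a minimal Python sketch of how the stages might fit together. All names in it (`Conversation`, `generate_conversations`, `filter_conversations`, `label_intents`, `build_training_examples`) are hypothetical placeholders rather than this repository's actual API, and the stub bodies stand in for steps that in the real pipeline call LLMs.

```python
"""Minimal sketch of the CLCA pipeline stages. All names are illustrative
placeholders, not the repository's actual API; the real generation,
filtering, and labelling steps would call an LLM instead of these stubs."""
from dataclasses import dataclass, field


@dataclass
class Conversation:
    culture: str                # target culture for adaptation
    scenario: str               # culturally adapted social scenario
    turns: list[str]            # role-played dialogue turns
    intents: list[str] = field(default_factory=list)  # free-text intent labels


def generate_conversations(culture: str, scenarios: list[str]) -> list[Conversation]:
    # Step 1: culture-adapted role-play. A real implementation would prompt
    # an LLM to play the interlocutors in each social scenario.
    return [
        Conversation(culture, s, turns=[f"[role-played dialogue for: {s}]"])
        for s in scenarios
    ]


def filter_conversations(convs: list[Conversation]) -> list[Conversation]:
    # Step 2: quality/relevance filtering (done with GPT models in the paper);
    # here, a trivial stand-in that keeps only non-empty conversations.
    return [c for c in convs if c.turns]


def label_intents(convs: list[Conversation]) -> list[Conversation]:
    # Step 3: attach a free-text intent label to each dialogue turn.
    for c in convs:
        c.intents = [f"[intent behind: {t}]" for t in c.turns]
    return convs


def build_training_examples(convs: list[Conversation]) -> list[dict]:
    # Step 4: pack conversations and intents into fine-tuning records.
    return [
        {"culture": c.culture, "dialogue": c.turns, "intents": c.intents}
        for c in convs
    ]


if __name__ == "__main__":
    scenarios = ["declining a dinner invitation from a senior colleague"]
    convs = generate_conversations("German", scenarios)
    records = build_training_examples(label_intents(filter_conversations(convs)))
    print(records)  # fine-tuning records; step 5 evaluates the tuned model on the WVS
```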
@inproceedings{liu2025clca,
  title     = {Cultural Learning-Based Culture Adaptation of Language Models},
  author    = {Chen Cecilia Liu and Anna Korhonen and Iryna Gurevych},
  booktitle = {Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics},
  year      = {2025},
  publisher = {Association for Computational Linguistics},
  doi       = {10.48550/ARXIV.2504.02953},
  url       = {http://arxiv.org/abs/2504.02953},
}