Version 2 2025-09-02, 15:25
Version 1 2025-03-26, 12:30
conference contribution
posted on 2025-03-26, 12:30, authored by Fuxiang Chen, Iman Saberi, Amirreza Esmaeili, Fatemeh Fard
<p dir="ltr">Programming languages can benefit from one an-<br>other by utilizing a pre-trained model for software engineering<br>tasks such as code summarization and method name prediction.<br>While full fine-tuning of Code Language Models (Code-LMs) has<br>been explored for multilingual knowledge transfer, research on<br>Parameter Efficient Fine-Tuning (PEFT) for this purpose is lim-<br>ited. AdapterFusion, a PEFT architecture, aims to enhance task<br>performance by leveraging information from multiple languages<br>but primarily focuses on the target language.<br>To address this, we propose AdvFusion, a novel PEFT-based<br>approach that effectively learns from other languages before<br>adapting to the target task. Evaluated on code summarization and<br>method name prediction, AdvFusion outperforms AdapterFusion<br>by up to 1.7 points and surpasses LoRA with gains of 1.99, 1.26,<br>and 2.16 for Ruby, JavaScript, and Go, respectively. We open-<br>source our scripts for replication purposes1.</p>