Posted on 2024-06-03, 15:23. Authored by Cheng Kang, Jindrich Prokop, Lei Tong, Huiyu Zhou, Yong Hu, Daniel Novak
Fine-tuning pre-trained language models (LMs) may not always be the most practical approach for downstream tasks. While adaptation fine-tuning methods have shown promising results, their mechanisms call for a clearer explanation, and the transmission of irrelevant information needs to be further inhibited. To address this, we propose an Inhibition Adaptation (InA) fine-tuning method that reduces the number of added tunable weights and appropriately reweights knowledge derived from pre-trained LMs. InA (1) inserts a small trainable vector into each Transformer attention architecture and (2) sets a threshold to directly eliminate irrelevant knowledge. The approach draws inspiration from shunting inhibition, in which the inhibition of specific neurons gates other functional neurons. With this inhibition mechanism, InA achieves competitive or even superior performance compared to other fine-tuning methods on −, −, and − for text classification and question-answering tasks.
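The following is a minimal PyTorch sketch of the idea as described in the abstract only: a small trainable vector is inserted into an attention block while the pre-trained projections stay frozen, and a threshold gates (inhibits) low-relevance attention weights. The class and parameter names (InASelfAttention, inhibition, threshold) and the exact placement of the vector are illustrative assumptions, not the authors' implementation.

```python
import math
import torch
import torch.nn as nn

class InASelfAttention(nn.Module):
    """Self-attention with a small trainable inhibition vector and a
    threshold that zeroes out low-relevance attention weights (sketch)."""

    def __init__(self, d_model: int, n_heads: int, threshold: float = 0.01):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        # Pre-trained projections are kept frozen; only the small vector is tuned.
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        for p in (*self.q.parameters(), *self.k.parameters(), *self.v.parameters()):
            p.requires_grad = False
        # Small trainable vector inserted into the attention computation.
        self.inhibition = nn.Parameter(torch.zeros(d_model))
        self.threshold = threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q = self.q(x)
        # Re-weight the keys with the trainable vector before scoring.
        k = self.k(x) * (1.0 + self.inhibition)
        v = self.v(x)
        q, k, v = (z.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
                   for z in (q, k, v))
        attn = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(self.d_head), dim=-1)
        # Inhibition: eliminate attention weights below the threshold,
        # i.e. directly discard knowledge judged irrelevant, then renormalize.
        attn = torch.where(attn < self.threshold, torch.zeros_like(attn), attn)
        attn = attn / attn.sum(dim=-1, keepdim=True).clamp_min(1e-9)
        out = (attn @ v).transpose(1, 2).reshape(b, t, d)
        return out
```

In this reading, only the inhibition vector adds tunable weights, which is consistent with the stated goal of reducing the number of added parameters; how the vector interacts with the query/key/value projections in the actual method may differ.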
Funding
Czech Technical University in Prague (grant number: SGS22/165/OHK3/3T/13), the Research Centre for Informatics (grant number: CZ.02.1.01/0.0/0.0/160_19/0000765), and the Brain Dynamics (grant number: CZ.02.01.01/00/22_008/0004643).
Author affiliation
College of Science & Engineering
Computing & Mathematical Sciences