Robust Weight Imprinting: Insights from Neural Collapse and Proxy-Based Aggregation

Justus Westerhoff · Golzar Atefi · Mario Koddenbrock · Alexei Figueroa · Alexander Löser · Erik Rodner · Felix Alexander Gers


Abstract

The capacity of foundation models allows for their application to new, unseen tasks. Adapting a model to such tasks is called transfer learning. Imprinting is an efficient transfer learning method that circumvents parameter optimization. The conceptual differences between existing studies on imprinting form the basis for our systematic investigation. In this work, we propose the general $\texttt{IMPRINT}$ framework, identifying three main components: generation, normalization, and aggregation. Through the lens of this framework, we conduct an in-depth analysis and comparison of existing methods. Our findings reveal the benefits of representing novel data with multiple proxies in the generation step and show the importance of proper normalization. Beyond this extensive analytical grounding, our framework enables us to propose a novel variant of imprinting that outperforms previous work on transfer learning tasks by $4\%$. This variant determines proxies through clustering, motivated by the neural collapse phenomenon -- a connection that we draw for the first time. We publicly release our code at \url{https://github.com/DATEXIS/IMPRINT}.
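
To make the three components of the framework concrete, below is a minimal sketch of a clustering-based imprinting classifier in NumPy/scikit-learn. The function names (`imprint_proxies`, `predict`) and the choice of k-means as the clustering method are illustrative assumptions for exposition; consult the released code at the repository above for the paper's actual implementation.

```python
import numpy as np
from sklearn.cluster import KMeans


def imprint_proxies(features, labels, n_proxies=3, seed=0):
    """Generation + normalization (sketch): cluster each class's frozen
    foundation-model embeddings into multiple proxies, then L2-normalize.

    Illustrative only; k-means is one plausible way to obtain multiple
    proxies per class, not necessarily the paper's exact choice.
    """
    proxies, proxy_labels = [], []
    for c in np.unique(labels):
        class_feats = features[labels == c]
        k = min(n_proxies, len(class_feats))
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(class_feats)
        centers = km.cluster_centers_
        # Normalization: project each proxy onto the unit hypersphere.
        centers = centers / np.linalg.norm(centers, axis=1, keepdims=True)
        proxies.append(centers)
        proxy_labels.extend([c] * k)
    return np.vstack(proxies), np.array(proxy_labels)


def predict(features, proxies, proxy_labels):
    """Aggregation (sketch): score each sample against all proxies by cosine
    similarity and assign the label of the best-matching proxy."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = feats @ proxies.T  # shape: (n_samples, n_proxies)
    return proxy_labels[np.argmax(sims, axis=1)]


# Example usage with embeddings from any frozen backbone (hypothetical names):
# P, y_p = imprint_proxies(train_feats, train_labels, n_proxies=3)
# preds = predict(test_feats, P, y_p)
```

Note that no gradient-based optimization occurs anywhere in this pipeline: the classifier is constructed directly from the embeddings, which is what makes imprinting an efficient alternative to fine-tuning.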