The reusability prior: comparing deep learning models without training

Polat, Aydın Göze and Alpaslan, Ferda Nur (2023) The reusability prior: comparing deep learning models without training. Machine Learning: Science and Technology, 4 (2). 025011. ISSN 2632-2153


Abstract

Various choices can affect the performance of deep learning models. We conjecture that differences in the number of contexts for model components during training are critical. We generalize this notion by defining the reusability prior as follows: model components are forced to function in diverse contexts not only due to the training data, augmentation, and regularization choices, but also due to the model design itself. We focus on the design aspect and introduce a graph-based methodology to estimate the number of contexts for each learnable parameter. This allows a comparison of models without requiring any training. We provide supporting evidence with experiments using cross-layer parameter sharing on the CIFAR-10, CIFAR-100, and ImageNet-1K benchmarks. We give examples of parameter-sharing models that outperform baselines with at least 60% more parameters. The graph-analysis-based quantities we introduce for the reusability prior align well with the experimental results, including in at least two important edge cases. We conclude that the reusability prior provides a viable research direction for model analysis based on a very simple idea: counting the number of contexts for model parameters.
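
The abstract's central quantity is, informally, how many distinct contexts each learnable parameter participates in within a model's computation graph, and how design choices such as cross-layer parameter sharing change that count without any training. The sketch below is a minimal illustration of this idea only, not the paper's actual methodology: it assumes, purely for illustration, that a context count can be proxied by the number of input-to-output paths in a small layer-level DAG that pass through edges carrying a given parameter group; the function name, graph encoding, and toy models are hypothetical.

```python
# Illustrative sketch only: the paper's graph-analysis quantities are not
# reproduced here. We approximate the "number of contexts" of a parameter
# group by the number of distinct source->sink paths in a layer-level DAG
# that traverse an edge carrying that parameter group (an assumption).

from collections import defaultdict

def count_paths_through_edges(edges, source, sink):
    """For each parameter group, count source->sink paths using its edges.

    edges: list of (u, v, param_id) tuples describing a DAG; param_id names
    the parameter group attached to that edge (shared parameters reuse an id).
    """
    succ = defaultdict(list)
    pred = defaultdict(list)
    for u, v, _ in edges:
        succ[u].append(v)
        pred[v].append(u)

    def count(node, nbrs, goal, memo):
        # Number of paths from `node` to `goal` following `nbrs` (memoized).
        if node == goal:
            return 1
        if node not in memo:
            memo[node] = sum(count(n, nbrs, goal, memo) for n in nbrs[node])
        return memo[node]

    from_source, to_sink = {}, {}
    contexts = defaultdict(int)
    for u, v, p in edges:
        # A source->sink path uses edge (u, v) exactly when it reaches u
        # from the source and then continues from v to the sink.
        contexts[p] += (count(u, pred, source, from_source)
                        * count(v, succ, sink, to_sink))
    return dict(contexts)

# Toy 3-block chain: distinct weights per block vs. one block reused at
# every depth (cross-layer parameter sharing).
baseline = [("x", "h1", "W1"), ("h1", "h2", "W2"), ("h2", "y", "W3")]
shared   = [("x", "h1", "W"),  ("h1", "h2", "W"),  ("h2", "y", "W")]

print(count_paths_through_edges(baseline, "x", "y"))  # {'W1': 1, 'W2': 1, 'W3': 1}
print(count_paths_through_edges(shared, "x", "y"))    # {'W': 3}
```

On this toy chain, cross-layer sharing collapses three parameter groups into a single group that appears in three contexts, whereas each baseline group appears in only one. This is the kind of design-driven difference in context counts that the reusability prior is meant to expose before any training takes place.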

Item Type: Article
Subjects: Afro Asian Archive > Multidisciplinary
Depositing User: Unnamed user with email support@afroasianarchive.com
Date Deposited: 10 Jul 2023 05:43
Last Modified: 06 Sep 2024 09:09
URI: http://info.stmdigitallibrary.com/id/eprint/1226
