
Explainable Network Pruning for Model Acceleration Based on Filter Similarity and Importance

conference contribution
posted on 2024-11-03, 15:15 authored by Jinrong Wu, Phan Bach Su Nguyen, Damminda Alahakoon
Filter-level network pruning effectively reduces computational cost, as well as energy and memory usage, for over-parameterized deep networks without damaging performance, particularly in computer vision applications. Most filter-level pruning algorithms minimize the impact of pruning on network performance using either importance-based or similarity-based criteria. However, no study has compared the effectiveness of the two approaches across different network configurations and datasets. To address this gap, this paper compares two explainable network pruning methods, one importance-based and one similarity-based, to understand their key benefits and limitations. Based on the findings of this analysis, we propose a hybrid pruning method and demonstrate its effectiveness on several models and datasets. Comparisons with other state-of-the-art filter pruning methods show the superiority of the new hybrid method.
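The record does not reproduce the two criteria in detail. As a rough, hypothetical illustration only, the sketch below shows one common representative of each family for a single convolutional layer: L1-norm filter magnitude as an importance score, and a filter's highest cosine similarity to any other filter as a redundancy score. The function names, toy layer shape, and simple top-k selection are assumptions for illustration, not the method proposed in the paper.

    import torch
    import torch.nn.functional as F

    def importance_scores(weight):
        # Importance-based criterion (illustrative): L1 norm of each filter.
        # weight shape: (out_channels, in_channels, kH, kW)
        return weight.abs().sum(dim=(1, 2, 3))

    def redundancy_scores(weight):
        # Similarity-based criterion (illustrative): a filter's highest
        # cosine similarity to any other filter in the same layer.
        flat = F.normalize(weight.flatten(1), dim=1)  # (out_channels, D), unit rows
        sim = flat @ flat.t()                         # pairwise cosine similarities
        sim.fill_diagonal_(0.0)                       # ignore self-similarity
        return sim.max(dim=1).values

    weight = torch.randn(64, 32, 3, 3)  # a toy conv layer, not from the paper
    k = 8                               # number of filters to prune
    # Importance-based pruning removes the least important filters;
    # similarity-based pruning removes the most redundant ones.
    low_importance = torch.topk(importance_scores(weight), k, largest=False).indices
    high_redundancy = torch.topk(redundancy_scores(weight), k, largest=True).indices

A hybrid scheme of the kind the abstract describes might combine the two rankings before selecting filters; the paper's actual combination rule is not given in this record.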

History

Volume

13836 LNCS

Start page

214

End page

229

Total pages

16

Outlet

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Editors

Wei Qi Yan, Minh Nguyen, Martin Stommel

Name of conference

37th International Conference on Image and Vision Computing New Zealand (IVCNZ 2022)

Publisher

Springer

Place published

Switzerland

Start date

2022-11-24

End date

2022-11-25

Language

English

Copyright

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

Former Identifier

2006123746

Esploro creation date

2023-07-15
