To excel at their tasks, large language models consist of billions of parameters, which entails enormous memory requirements and restricts their applicability in resource-constrained environments. For neural network (NN) compression, Singular Value Decomposition (SVD) has served as a fundamental tool for compressing weight matrices through low-rank decomposition. To minimize the compression error and to maximize the efficacy of the compressed model on downstream tasks, prior work approached low-rank approximation of the NN’s weight matrices either from the perspective of parameter importance or from that of per-layer functional equivalence. Whereas prior work studied these two perspectives in isolation, we investigate the effectiveness of combining ideas from both in a single objective. A further aspect that affects compression quality is how the compression rate is distributed across layers and NN parameters. Earlier works mostly distributed the compression rate uniformly across layers and network weights, or relied on computationally expensive heuristic search. In contrast, we propose an enhanced and computationally efficient algorithm for dynamic compression-rate allocation. Experimental results support the efficacy of the proposed approach, which performs on par with or substantially better than the previous state of the art, especially at high compression ratios.
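As background for the SVD-based compression the abstract refers to, the basic building block is a rank-k truncated factorization of a weight matrix. The sketch below is illustrative only; it shows plain truncated SVD with NumPy, not the paper's importance- or equivalence-weighted objective or its rate-allocation algorithm:

```python
import numpy as np

def truncated_svd_compress(W: np.ndarray, k: int):
    """Return factors (A, B) such that A @ B is the best rank-k
    approximation of W in the Frobenius norm (Eckart-Young theorem)."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :k] * S[:k]   # shape (m, k): left singular vectors scaled by singular values
    B = Vt[:k, :]          # shape (k, n): top-k right singular vectors
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
A, B = truncated_svd_compress(W, k=8)

# Storing A and B costs 2 * 64 * 8 = 1024 parameters vs. 64 * 64 = 4096 for W,
# at the price of the approximation error below.
rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
```

At inference time the dense layer `x @ W` is replaced by `(x @ A) @ B`; choosing k per layer is exactly the compression-rate allocation problem the abstract discusses.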