Optimization of Direct Convolution Algorithms on ARM Processors for Deep Learning Inference
In deep learning, convolutional layers typically bear the majority of the computational workload and are often the primary performance bottleneck. The most widely used convolution algorithm applies the IM2COL transform so that the computation can be cast as a GEMM (General Matrix Multiplication) and accelerated with highly optimized BLAS (Basic Linear Algebra Subprograms) libraries, at the cost of considerable additional memory overhead. Recent studies have shown that direct convolution approaches can outperform such GEMM-based implementations while avoiding this memory overhead. In this paper, we propose a high-performance implementation of the direct convolution algorithm for inference that preserves the channel-first data layout of the convolutional layer inputs/outputs. We evaluate the proposed algorithm on a multi-core ARM CPU platform and compare it with state-of-the-art convolution optimization techniques.
Experimental results demonstrate that the proposed algorithm outperforms these state-of-the-art techniques across the evaluated scenarios.
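To make the contrast with the IM2COL+GEMM path concrete, below is a minimal sketch of a naive direct convolution over a channel-first (NCHW) tensor in C. It only illustrates the loop nest that an optimized direct-convolution kernel would block and vectorize for ARM; the function name, parameters, and simplifications (single image, stride 1, no padding) are illustrative assumptions, not the paper's implementation.

```c
#include <stddef.h>

/* Naive direct convolution over channel-first (NCHW) tensors.
 * Illustrative sketch only: names and parameters are assumptions,
 * not the optimized kernel described in the paper.
 *   in:  [C_in  x H x W]            (single input image)
 *   w:   [C_out x C_in x KH x KW]   (filters)
 *   out: [C_out x (H-KH+1) x (W-KW+1)]  (stride 1, no padding)
 */
static void direct_conv_nchw(const float *in, const float *w, float *out,
                             size_t c_in, size_t h, size_t wd,
                             size_t c_out, size_t kh, size_t kw)
{
    size_t oh = h - kh + 1, ow = wd - kw + 1;
    for (size_t co = 0; co < c_out; ++co)          /* output channel */
        for (size_t y = 0; y < oh; ++y)            /* output row     */
            for (size_t x = 0; x < ow; ++x) {      /* output column  */
                float acc = 0.0f;
                /* Accumulate over input channels and the filter window. */
                for (size_t ci = 0; ci < c_in; ++ci)
                    for (size_t ky = 0; ky < kh; ++ky)
                        for (size_t kx = 0; kx < kw; ++kx)
                            acc += in[(ci * h + y + ky) * wd + (x + kx)] *
                                   w[((co * c_in + ci) * kh + ky) * kw + kx];
                out[(co * oh + y) * ow + x] = acc;
            }
}
```

Unlike the IM2COL+GEMM approach, no intermediate matrix of shape [C_in*KH*KW x OH*OW] is ever materialized; the kernel reads the input tensor in place, which is precisely the memory overhead that direct convolution avoids.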