SqueezeNet explained

SqueezeNet
Authors: Forrest Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, Bill Dally, Kurt Keutzer
Latest release version: v1.1
Genre: Deep neural network
License: BSD license

In computer vision, SqueezeNet is the name of a deep neural network for image classification that was released in 2016. SqueezeNet was developed by researchers at DeepScale, University of California, Berkeley, and Stanford University. In designing SqueezeNet, the authors' goal was to create a smaller neural network with fewer parameters while achieving competitive accuracy.[1]

Framework support for SqueezeNet

SqueezeNet was originally released on February 22, 2016.[2] This original version of SqueezeNet was implemented on top of the Caffe deep learning software framework. Shortly thereafter, the open-source research community ported SqueezeNet to a number of other deep learning frameworks. On February 26, 2016, Eddie Bell released a port of SqueezeNet for the Chainer deep learning framework.[3] On March 2, 2016, Guo Haria released a port of SqueezeNet for the Apache MXNet framework.[4] On June 3, 2016, Tammy Yang released a port of SqueezeNet for the Keras framework.[5] In 2017, companies including Baidu, Xilinx, Imagination Technologies, and Synopsys demonstrated SqueezeNet running on low-power processing platforms such as smartphones, FPGAs, and custom processors.[6][7][8][9]

As of 2018, SqueezeNet ships "natively" as part of the source code of a number of deep learning frameworks such as PyTorch, Apache MXNet, and Apple CoreML. In addition, third-party developers have created implementations of SqueezeNet that are compatible with frameworks such as TensorFlow. Below is a summary of frameworks that support SqueezeNet.

Framework | SqueezeNet Support | References
Apache MXNet | Yes | [10]
Apple CoreML | Yes | [11]
Caffe2 | Yes | [12]
Keras | Yes |
MATLAB Deep Learning Toolbox | Yes | [13]
ONNX | Yes | [14]
PyTorch | Yes | [15]
TensorFlow | Yes | [16]
Wolfram Mathematica | Yes | [17]

Relationship to AlexNet

SqueezeNet was originally described in a paper entitled "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size."[18] AlexNet is a deep neural network that has 240 MB of parameters, while SqueezeNet has just 5 MB. This smaller model fits more easily into computer memory and can be transmitted more easily over a computer network. However, SqueezeNet is not a "squeezed version of AlexNet"; rather, it is an entirely different DNN architecture from AlexNet.[19] What SqueezeNet and AlexNet have in common is that both achieve approximately the same level of accuracy when evaluated on the ImageNet image classification validation dataset.
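The headline sizes follow from simple arithmetic: a 32-bit float occupies 4 bytes, so an uncompressed parameter file is roughly the parameter count times four. The counts below are approximate figures used only for illustration (about 60 million parameters for AlexNet, about 1.25 million for SqueezeNet):

```python
def model_size_mb(num_params, bytes_per_param=4):
    """Approximate size of an uncompressed fp32 parameter file, in MB."""
    return num_params * bytes_per_param / 1e6

# Approximate parameter counts (assumed figures, for illustration only).
alexnet_params = 60_000_000     # ~60 million
squeezenet_params = 1_250_000   # ~1.25 million

print(model_size_mb(alexnet_params))       # -> 240.0 (MB)
print(model_size_mb(squeezenet_params))    # -> 5.0 (MB)
print(alexnet_params / squeezenet_params)  # -> 48.0, i.e. roughly "50x fewer"
```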

Relationship to Deep Compression

Model compression (e.g. quantization and pruning of model parameters) can be applied to a deep neural network after it has been trained.[20] In the SqueezeNet paper, the authors demonstrated that a model compression technique called Deep Compression can be applied to SqueezeNet to further reduce the size of the parameter file from 5 MB to 500 KB. Deep Compression has also been applied to other DNNs, such as AlexNet and VGG.[21]
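Deep Compression combines magnitude pruning, weight quantization, and Huffman coding. The NumPy sketch below illustrates only the first two steps on a raw weight array; it uses a uniform-grid codebook in place of the paper's k-means clustering, and the function names and thresholds are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude `sparsity` fraction of weights (pruning)."""
    k = int(weights.size * sparsity)
    threshold = np.sort(np.abs(weights), axis=None)[k]
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

def quantize(weights, n_bits=8):
    """Snap each weight to the nearest of 2**n_bits shared values (quantization).
    Deep Compression learns the codebook with k-means; a uniform grid is
    used here purely for simplicity."""
    levels = np.linspace(weights.min(), weights.max(), 2 ** n_bits)
    codes = np.abs(weights[..., None] - levels).argmin(axis=-1)
    return levels[codes]

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64))
w_pruned = magnitude_prune(w)
print((w_pruned == 0).mean())        # ~0.9 of the weights are now zero
w_quantized = quantize(w_pruned)
print(np.unique(w_quantized).size)   # at most 256 distinct values remain
```

After these two steps, the sparse indices and the small codebook can be stored far more compactly than the original dense fp32 array, which is how the 5 MB parameter file shrinks toward 500 KB.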

Offshoots of SqueezeNet

Some of the members of the original SqueezeNet team have continued to develop resource-efficient deep neural networks for a variety of applications. A few of these works are noted in the following table. As with the original SqueezeNet model, the open-source research community has ported and adapted these newer "squeeze"-family models for compatibility with multiple deep learning frameworks.

DNN Model | Application | Original Implementation | Other Implementations
SqueezeDet[22][23] | Object Detection on Images | TensorFlow[24] | Caffe,[25] Keras[26][27][28]
SqueezeSeg[29] | Semantic Segmentation of LIDAR | TensorFlow[30] |
SqueezeNext[31] | Image Classification | Caffe[32] | TensorFlow,[33] Keras,[34] PyTorch[35]
SqueezeNAS[36][37] | Neural Architecture Search for Semantic Segmentation | PyTorch[38] |

In addition, the open-source research community has extended SqueezeNet to other applications, including semantic segmentation of images and style transfer.[39] [40] [41]

References

  1. News: Deep Learning Reading Group: SqueezeNet. Ganesh. Abhinav. KDnuggets. 2018-04-07.
  2. Web site: SqueezeNet. 2016-02-22. GitHub. 2018-05-12.
  3. Web site: An implementation of SqueezeNet in Chainer. Bell. Eddie. 2016-02-26. GitHub. 2018-05-12.
  4. Web site: SqueezeNet for MXNet. Haria. Guo. 2016-03-02. GitHub. 2018-05-12.
  5. Web site: SqueezeNet Keras Implementation. Yang. Tammy. 2016-06-03. GitHub. 2018-05-12.
  6. News: Baidu puts open source deep learning into smartphones. Chirgwin. Richard. 2017-09-26. The Register. 2018-04-07.
  7. News: Neural network SDK for PowerVR GPUs. Bush. Steve. 2018-01-25. Electronics Weekly. 2018-04-07.
  8. News: Xilinx AI Engine Steers New Course. Yoshida. Junko. 2017-03-13. EE Times. 2018-05-13.
  9. News: Deep learning computer vision algorithms ported to processor IP. Boughton. Paul. 2017-08-28. Engineer Live. 2018-04-07.
  10. Web site: squeezenet.py. GitHub: Apache MXNet. 2018-04-07.
  11. Web site: CoreML. Apple. 2018-04-10.
  12. Web site: SqueezeNet Model Quickload Tutorial. Inkawhich. Nathan. GitHub: Caffe2. 2018-04-07.
  13. Web site: SqueezeNet for MATLAB Deep Learning Toolbox. Mathworks. 2018-10-03.
  14. Web site: SqueezeNet for ONNX. Fang. Lu. Open Neural Network eXchange.
  15. Web site: squeezenet.py. GitHub: PyTorch. 2018-05-12.
  16. Web site: Tensorflow implementation of SqueezeNet. Poster. Domenick. GitHub. 2018-05-12.
  17. Web site: SqueezeNet V1.1 Trained on ImageNet Competition Data. Wolfram Neural Net Repository. 2018-05-12.
  18. 1602.07360. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. Iandola. Forrest N. Han. Song. Moskewicz. Matthew W. Ashraf. Khalid. Dally. William J. Keutzer. Kurt. cs.CV. 2016.
  19. Web site: SqueezeNet. Short Science. 2018-05-13.
  20. Web site: Lab41 Reading Group: Deep Compression. Gude. Alex. 2016-08-09. 2018-05-08.
  21. Web site: Compressing and regularizing deep neural networks. Han. Song. 2016-11-06. O'Reilly. 2018-05-08.
  22. 1612.01051. SqueezeDet: Unified, Small, Low Power Fully Convolutional Neural Networks for Real-Time Object Detection for Autonomous Driving. Wu. Bichen. Wan. Alvin. Iandola. Forrest. Jin. Peter H.. Keutzer. Kurt. cs.CV. 2016.
  23. Web site: Introducing SqueezeDet: low power fully convolutional neural network framework for autonomous driving. Nunes Fernandes. Edgar. 2017-03-02. The Intelligence of Information. 2019-03-31.
  24. Web site: SqueezeDet: Unified, Small, Low Power Fully Convolutional Neural Networks for Real-Time Object Detection for Autonomous Driving. Wu. Bichen. 2016-12-08. GitHub. 2018-12-26.
  25. Web site: Caffe SqueezeDet. Kuan. Xu. 2017-12-20. GitHub. 2018-12-26.
  26. Web site: SqueezeDet on Keras. Padmanabha. Nischal. 2017-03-20. GitHub. 2018-12-26.
  27. Web site: Fast object detection with SqueezeDet on Keras. Ehmann. Christopher. 2018-05-29. Medium. 2019-03-31.
  28. Web site: A deeper look into SqueezeDet on Keras. Ehmann. Christopher. 2018-05-02. Medium. 2019-03-31.
  29. 1710.07368. SqueezeSeg: Convolutional Neural Nets with Recurrent CRF for Real-Time Road-Object Segmentation from 3D LiDAR Point Cloud. Wu. Bichen. Wan. Alvin. Yue. Xiangyu. Keutzer. Kurt. cs.CV. 2017.
  30. Web site: SqueezeSeg: Convolutional Neural Nets with Recurrent CRF for Real-Time Road-Object Segmentation from 3D LiDAR Point Cloud. Wu. Bichen. 2017-12-06. GitHub. 2018-12-26.
  31. 1803.10615. SqueezeNext: Hardware-Aware Neural Network Design. Gholami. Amir. Kwon. Kiseok. Wu. Bichen. Tai. Zizheng. Yue. Xiangyu. Jin. Peter. Zhao. Sicheng. Keutzer. Kurt. cs.CV. 2018.
  32. Web site: SqueezeNext. Gholami. Amir. 2018-04-18. GitHub. 2018-12-29.
  33. Web site: SqueezeNext Tensorflow: A tensorflow Implementation of SqueezeNext. Verhulsdonck. Tijmen. 2018-07-09. GitHub. 2018-12-29.
  34. Web site: SqueezeNext, implemented in Keras. Sémery. Oleg. GitHub. 2018-09-24. 2018-12-29.
  35. Web site: SqueezeNext.PyTorch. Lu. Yi. 2018-06-21. GitHub. 2018-12-29.
  36. 1908.01748. SqueezeNAS: Fast neural architecture search for faster semantic segmentation. Shaw. Albert. Hunter. Daniel. Iandola. Forrest. Sidhu. Sammy. cs.LG. 2019.
  37. News: Does Your AI Chip Have Its Own DNN?. Yoshida. Junko. 2019-08-25. EE Times. 2019-09-12.
  38. Web site: SqueezeNAS. Shaw. Albert. 2019-08-27. GitHub. 2019-09-12.
  39. News: Speeding up Semantic Segmentation for Autonomous Driving. Treml. Michael. et al. 2016. NIPS MLITS Workshop. 2019-07-21.
  40. Web site: SqueezeNet Neural Style on PyTorch. Zeng. Li. 2017-03-22. GitHub. 2019-07-21.
  41. Web site: The Impact of SqueezeNet. Wu. Bichen. Keutzer. Kurt. 2017. UC Berkeley. 2019-07-21.