Neural Networks with Model Compression - David Doermann, Tiancheng Wang, Sheng Xu, Baochang Zhang

Neural Networks with Model Compression (eBook)

eBook Download: PDF
2024 | 1st ed. 2024
IX, 260 pages
Springer Nature Singapore (publisher)
978-981-99-5068-3 (ISBN)
System requirements
160,49 incl. VAT
  • Download available immediately

Deep learning has achieved impressive results in image classification, computer vision, and natural language processing. To achieve better performance, deeper and wider networks have been designed, which increases the demand for computational resources. The number of floating-point operations (FLOPs) has grown dramatically with larger networks, and this has become an obstacle to deploying convolutional neural networks (CNNs) on mobile and embedded devices. In this context, this book focuses on CNN compression and acceleration, which are important topics for the research community. It describes numerous methods, including parameter quantization, network pruning, low-rank decomposition, and knowledge distillation. More recently, to reduce the burden of handcrafted architecture design, neural architecture search (NAS) has been used to build neural networks automatically by searching over a vast architecture space. The book also introduces NAS because of its state-of-the-art performance in applications such as image classification and object detection. It further describes extensive applications of compressed deep models to image classification, speech recognition, object detection, and tracking. These topics can help researchers better understand the usefulness and potential of network compression in practical applications. Readers should have a basic knowledge of machine learning and deep learning to follow the methods described in this book.
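To make two of the named techniques concrete, here is a minimal, illustrative sketch (not taken from the book; the helper names magnitude_prune and uniform_quantize are hypothetical) of magnitude-based network pruning and uniform parameter quantization in NumPy:

# Minimal sketch of magnitude pruning and uniform quantization on a toy
# weight matrix; assumptions mine, not the book's API.
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the given fraction of weights with the smallest absolute value."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def uniform_quantize(weights, num_bits=8):
    """Map weights to 2**num_bits evenly spaced levels, then back to floats."""
    levels = 2 ** num_bits - 1
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / levels if w_max > w_min else 1.0
    codes = np.round((weights - w_min) / scale)   # integer codes in [0, levels]
    return codes * scale + w_min                  # de-quantized approximation

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(4, 4)).astype(np.float32)
    print(magnitude_prune(w, sparsity=0.5))       # roughly half of the entries become zero
    print(uniform_quantize(w, num_bits=4))        # 16-level approximation of w

In practice, compression pipelines typically fine-tune the network after such steps to recover the accuracy lost to pruning or quantization.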



Baochang Zhang is a full Professor with the Institute of Artificial Intelligence, Beihang University, Beijing, China. He was selected for the Program for New Century Excellent Talents in University by the Ministry of Education of China, serves as an Academic Advisor of the Deep Learning Lab of Baidu Inc., and is a distinguished researcher of the Beihang Hangzhou Institute in Zhejiang Province. His research interests include explainable deep learning, computer vision, and pattern recognition. His HGPP and LDP methods were state-of-the-art feature descriptors, with 1234 and 768 Google Scholar citations, respectively; both are 'Test-of-Time' works. His 1-bit methods achieved the best performance on ImageNet. His group also won the ECCV 2020 Tiny Object Detection, COCO Object Detection, and ICPR 2020 Pollen Recognition challenges.

 

Tiancheng Wang is pursuing his Ph.D. degree under the supervision of Baochang Zhang. His research topics include model compression and trustworthy deep learning, and he has published several high-quality papers on deep model compression. He was selected as a visiting student of Zhongguancun Laboratory, Beijing, China.

 

Sheng Xu is pursuing his Ph.D. degree under the supervision of Baochang Zhang. His research mainly focuses on low-bit model compression, and he is one of the most active researchers in the field of binary neural networks. He has published more than 10 top-tier papers in computer vision, two of which were selected as CVPR oral papers.
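As a rough illustration of the binary neural networks mentioned above (not the authors' specific method; the mean-absolute-value scaling below is a common textbook choice assumed here), 1-bit compression replaces each weight with its sign, rescaled to limit the approximation error:

# Illustrative sketch of 1-bit weight binarization; binarize_weights is a
# hypothetical helper, not an API from the book.
import numpy as np

def binarize_weights(weights):
    """Approximate weights with {-alpha, +alpha}, where alpha = mean(|w|)."""
    alpha = np.mean(np.abs(weights))
    return alpha * np.sign(weights)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(3, 3)).astype(np.float32)
    w_bin = binarize_weights(w)
    print(w_bin)                          # only two distinct values remain
    print(np.abs(w - w_bin).mean())       # average approximation error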

 

Dr. David Doermann is a Professor of Empire Innovation at the University at Buffalo (UB) and the Director of the University at Buffalo Artificial Intelligence Institute. Prior to coming to UB, he was a program manager at the Defense Advanced Research Projects Agency (DARPA), where he developed, selected, and oversaw approximately $150 million in research and transition funding in the areas of computer vision, human language technologies, and voice analytics. He coordinated performers on all of these projects, building consensus, evaluating cross-team management, and overseeing evolving program objectives.



Publication date (per publisher): February 5, 2024
Series: Computational Intelligence Methods and Applications
Additional information: IX, 260 p., 101 illus., 67 illus. in color
Language: English
Subject areas: Computer Science > Graphics / Design > Digital Image Processing
Computer Science > Theory / Studies > Artificial Intelligence / Robotics
Keywords: Artificial Intelligence • Binary Neural Network • computer vision • machine learning • Model compression
ISBN-10 981-99-5068-6 / 9819950686
ISBN-13 978-981-99-5068-3 / 9789819950683
PDF (watermarked)
Size: 11.3 MB

DRM: digital watermark
This eBook contains a digital watermark and is therefore personalized to you. If the eBook is improperly passed on to third parties, it can be traced back to its source.

File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly suitable for reference books with columns, tables, and figures. A PDF can be displayed on almost any device, but is only suitable to a limited extent for small displays (smartphone, eReader).

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You will need a PDF viewer, e.g. Adobe Reader or Adobe Digital Editions.
eReader: This eBook can be read on (almost) all eBook readers. However, it is not compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You will need a PDF viewer, e.g. the free Adobe Digital Editions app.

Buying eBooks from abroad
For tax law reasons, we can sell eBooks only within Germany and Switzerland. Regrettably, we cannot fulfill eBook orders from other countries.
