Invited Talk | Prof. Song Han, Massachusetts Institute of Technology (January 10, 2019)

The AI International Linkage Program Promotion Office will host an AI invited talk; everyone is welcome to attend. The details are as follows:
➤Time: Thursday, January 10, 2019, 14:00-15:30
➤Venue: B1 Lecture Hall, EECS Building (資電館), National Tsing Hua University
➤Speaker: Prof. Song Han, Massachusetts Institute of Technology
➤Title: Hardware-Centric AutoML: Design Automation for Efficient Deep Learning Computing
➤Co-organizer: AI Research Center, National Tsing Hua University

Speaker Bio:
Song Han is an assistant professor in the EECS Department at the Massachusetts Institute of Technology (MIT) and the PI of HAN Lab: Hardware, AI and Neural-nets. Dr. Han's research focuses on energy-efficient deep learning and domain-specific architectures. He proposed “Deep Compression,” which has had a wide impact on industry. He was the co-founder and chief scientist of DeePhi Tech, a company founded on the basis of his PhD thesis. Prior to joining MIT, he graduated from Stanford University.

Abstract:
In the post-Moore's Law era, the amount of computation per unit cost and power is no longer increasing at its historic rate. In the post-ImageNet era, researchers are solving more complicated AI problems using larger datasets, which drives the demand for more computation. This mismatch between the supply of and demand for computation highlights the need to co-design efficient machine learning algorithms and domain-specific hardware architectures. We introduce our recent work on using machine learning to optimize the machine learning system (Hardware-centric AutoML): learning the optimal pruning strategy (AMC) and quantization strategy (HAQ) on the target hardware; learning the optimal neural network architecture that is specialized for a target hardware architecture (ProxylessNAS); and learning to optimize analog circuit parameters, rather than relying on experienced analog engineers to tune those transistors (L2DC). For hardware-friendly machine learning algorithms, I'll introduce the temporal shift module (TSM) for efficient video understanding, which offers 8x lower latency and 12x higher throughput than 3D convolution-based methods while ranking first on both the Something-Something V1 and V2 leaderboards. On the hardware side, I'll describe efficient deep learning accelerators that can take advantage of these efficient algorithms, including both FPGA and ASIC designs for emerging deep learning architectures. I'll conclude the talk with an outlook on design automation for efficient deep learning computing.
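For readers unfamiliar with TSM: its core operation is a zero-FLOP shift of a small fraction of channels along the time axis, which lets ordinary 2D convolutions mix information across frames. Below is a minimal NumPy sketch of that idea, assuming activations laid out as (N, T, C, H, W); the fold_div parameter and direction of the shifts follow the published TSM paper and are illustrative, not the speaker's exact implementation.

import numpy as np

def temporal_shift(x, fold_div=8):
    # x: activations of shape (N, T, C, H, W) = (batch, time, channels, height, width).
    # fold_div: 1/fold_div of the channels shift backward in time,
    # another 1/fold_div shift forward; the rest stay in place.
    n, t, c, h, w = x.shape
    fold = c // fold_div
    out = np.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]                   # pull features from the next frame
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]   # push features to the next frame
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]              # remaining channels unchanged
    return out

# Example: 2 clips, 8 frames each, 64 channels, 7x7 feature maps.
x = np.random.randn(2, 8, 64, 7, 7)
y = temporal_shift(x)
assert y.shape == x.shape

Because the shift itself costs no multiply-adds, temporal modeling comes almost for free on top of a 2D backbone, which is the source of the latency and throughput advantages over 3D convolutions cited above.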

Click here to register.