Efficient Computing for AI and Robotics: From Hardware Accelerators to Algorithm Design
The computing demands of AI and robotics continue to rise due to the rapidly growing volume of data to be processed, the increasingly complex algorithms needed for higher-quality results, and the need for energy efficiency and real-time performance. In this talk, we will discuss the design of efficient hardware accelerators and the co-design of algorithms and hardware to reduce energy consumption while delivering real-time, robust performance for applications including deep neural networks, data analytics with sparse tensor algebra, and autonomous navigation. We will also discuss our recent work on balancing flexibility and efficiency in domain-specific accelerators and on reducing the cost of analog-to-digital converters in processing-in-memory accelerators. Throughout the talk, we will highlight important design principles, methodologies, and tools that can facilitate an effective design process.
Speaker Bio
Vivienne Sze is an associate professor in MIT’s Department of Electrical Engineering and Computer Science and leads the Energy-Efficient Multimedia Systems research group in the Research Laboratory of Electronics. Her group works on computing systems that enable energy-efficient machine learning, computer vision, and video compression/processing for a wide range of applications, including autonomous navigation, digital health, and the Internet of Things. She is widely recognized for her leading work in these areas and has received many awards, including faculty awards from Google, Facebook, and Qualcomm; the Symposium on VLSI Circuits Best Student Paper Award; the IEEE Custom Integrated Circuits Conference Outstanding Invited Paper Award; and the IEEE Micro Top Picks Award. As a member of the Joint Collaborative Team on Video Coding, she received the Primetime Engineering Emmy Award for the development of the High Efficiency Video Coding (HEVC) video compression standard. She is a co-author of the book “Efficient Processing of Deep Neural Networks.”