Alessandro Palla (Intel)
December 6
from 15:30 to 18:30
room SI.5, Polo B
Abstract
AI applications are increasingly shifting from the cloud to users' personal PCs. Integrating edge AI accelerators requires finely tuned hardware-software co-design to meet customer demands for high performance within an extremely tight power budget.
This talk will provide insights into the workings of neural networks on AI accelerators, highlighting essential hardware-software trade-offs and compiler optimizations necessary to meet stringent performance/watt requirements. A live demo will illustrate these concepts, showcasing how optimized AI workloads achieve exceptional efficiency and responsiveness on edge devices.
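To give a flavor of the kind of workload such a demo might target, the sketch below offloads inference of a pre-compiled model to an Intel NPU through the OpenVINO runtime. This is an illustrative assumption, not material from the talk: the model path, input shape, and the availability of an "NPU" device on the host are hypothetical.

# Minimal sketch: running an AI workload on an integrated NPU via OpenVINO.
# Assumptions: an OpenVINO release with NPU support is installed, and
# "model.xml" is a hypothetical IR model whose input matches the dummy tensor below.
import numpy as np
import openvino as ov

core = ov.Core()
print("Available devices:", core.available_devices)   # e.g. ['CPU', 'GPU', 'NPU']

model = core.read_model("model.xml")                   # hypothetical model path
compiled = core.compile_model(model, device_name="NPU")

dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed input shape
result = compiled([dummy_input])[compiled.output(0)]
print("Output shape:", result.shape)

In a setup like this, the heavy lifting (graph compilation, operator scheduling, quantization-aware mapping to the accelerator) happens inside the compiler stack, which is where the performance/watt trade-offs discussed in the talk are decided.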
Bio
Alessandro Palla is a staff deep learning engineer at Intel. He received his degree in electronic engineering and his PhD from the University of Pisa in 2014 and 2018, respectively. He has been with Intel Corporation since 2017, designing next-generation Neural Processing Unit (NPU) AI accelerators for Intel client CPUs. His areas of expertise are hardware/software co-design and compiler optimization techniques for AI accelerators.