I teach at the Department of Electrical and Computer Engineering at the University of Toronto, where I have the privilege of working with a team of very talented graduate students. I have also taught at Northwestern University, USA, the University of Athens, Greece, and the Hellenic Open University, Greece, and served as an invited professor at the École polytechnique fédérale de Lausanne, Switzerland. I received a Bachelor's and a Master's degree from the University of Crete, Greece, and a Ph.D. from the University of Wisconsin-Madison.
My research interests lie primarily in the design of performance-, energy-, and/or cost-optimized computing engines for various application domains. Most of my work thus far has been on high-performance general-purpose systems. My current work emphasizes highly specialized computing engines for Deep Learning. I will also be serving as the Director of the newly formed Natural Sciences and Engineering Research Council Strategic Partnership Network on Machine Learning Hardware Acceleration (NSERC COHESA), a partnership of 19 researchers across 7 universities involving 8 industrial partners.
For the work I have done with my students and collaborators, I have been awarded the ACM SIGARCH Maurice Wilkes mid-career award, a National Science Foundation CAREER Award, two IBM Faculty Partnership awards, a Semiconductor Research Innovation award, an IEEE Top Picks in Computer Architecture Research selection, and a MICRO conference Hall of Fame award. I have served as Program Chair for the ACM/IEEE International Symposium on Microarchitecture and the ACM/IEEE International Symposium on Performance Analysis of Systems and Software. I am also a Fellow of the ACM and a Faculty Affiliate of the Vector Institute.
Deep Learning Acceleration
Value-Based Acceleration: We are developing methods that reduce the work, storage, and communication needed to execute Deep Learning models. We target optimizations at the middleware (software) and hardware levels so that they benefit out-of-the-box models and require no intervention from the Machine Learning expert: developing models is hard enough already. Our methods rely on value properties exhibited by typical models, such as value- and bit-sparsity and variability in data type needs. Our methods do, however, reward model optimizations. For example, they reward quantization to smaller data widths where possible but still provide benefits for non-quantized models; similarly, they exploit sparsity where it exists without requiring it.
See the overview article here: Exploiting Typical Values to Accelerate Deep Learning
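To illustrate what value- and bit-sparsity mean in practice, here is a minimal sketch (a hypothetical, simplified example, not one of our accelerator designs; the function name, tensor, and bit width are made up). It measures both properties for an integer-quantized weight tensor; the reported fractions are rough upper bounds on the multiply work that value-skipping or bit-serial hardware could avoid.

```python
# Illustrative sketch only: estimates how much work value- and bit-sparsity
# could let specialized hardware skip. Not an actual accelerator model.
import numpy as np

def sparsity_stats(weights, bits=8):
    """Report value sparsity and bit sparsity of an integer-quantized tensor."""
    w = weights.astype(np.int64)
    total = w.size
    # Value sparsity: fraction of exact zeros, which a value-skipping engine
    # could skip entirely.
    value_sparsity = np.count_nonzero(w == 0) / total
    # Bit sparsity: fraction of zero bits in the magnitudes; a bit-serial engine
    # that processes only the '1' bits does proportionally less work.
    one_bits = sum(bin(abs(int(v))).count("1") for v in w.flat)
    bit_sparsity = 1.0 - one_bits / (total * bits)
    return value_sparsity, bit_sparsity

# Example: random 8-bit weights with many zeros, as pruned/quantized models often have.
rng = np.random.default_rng(0)
w = rng.integers(-127, 128, size=(64, 64)) * (rng.random((64, 64)) > 0.6)
vs, bs = sparsity_stats(w, bits=8)
print(f"value sparsity: {vs:.2%}, bit sparsity: {bs:.2%}")
```

Even this toy measurement shows why such properties are attractive targets: zero values and zero bits are abundant in typical models, and hardware that skips them gains performance and energy without requiring the model developer to change anything.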