Deep learning (DL) models are enabling a significant paradigm shift in a diverse range of fields, including natural language processing, computer vision, and the design and automation of complex integrated circuits. While deep models, and optimizations based on them such as Deep Reinforcement Learning (RL), demonstrate superior performance and a strong capability for automated representation learning, earlier works have revealed the vulnerability of DL models to various attacks. These vulnerabilities include adversarial samples, model poisoning, fault injection, and Intellectual Property (IP) infringement attacks. On the one hand, these security threats can divert the behavior of a DL model and lead to incorrect decisions in critical tasks. On the other hand, the susceptibility of DL models to IP piracy attacks may thwart trustworthy technology transfer as well as reliable DL deployment. In this talk, Farinaz Koushanfar investigates existing defense techniques for protecting DL models against the above-mentioned security threats. In particular, she reviews end-to-end defense schemes for robust deep learning in both centralized and federated learning settings. Her comprehensive taxonomy and horizontal comparisons reveal that defense strategies developed through DL/software/hardware co-design outperform their DL/software-only counterparts, and she shows how they can achieve highly efficient, latency-optimized defenses for real-world applications. She believes this systematization of knowledge sheds light on the promising performance of hardware-based DL security methodologies and can guide the development of future defenses. | Farinaz Koushanfar is a professor in the Electrical and Computer Engineering department at the University of California San Diego (UCSD), where she is also the co-founder and co-director of the UCSD Center for Machine-Intelligence, Computing & Security.