Inspiring Trust in Outsourced Computations: From Secure Chip Fabrication to Verifiable Deep Learning in the Cloud

Guest Speaker:
Siddharth Garg — New York University

Tuesday, December 5, 2017
SAL 126
1:30PM

ABSTRACT: Computations are often outsourced by computationally weak clients to computationally powerful external entities. Cloud computing is an obvious example of outsourced computation; outsourced chip manufacturing to off-shore foundries or “fabs” is another (perhaps less obvious) example. Indeed, many major semiconductor design companies have now adopted the so-called “fabless” model. However, outsourcing raises a fundamental question of trust: how can the client ascertain that the outsourced computations were correctly performed? Using fabless chip manufacturing and “machine-learning as a service (MLaaS)” as exemplars, this talk will highlight the security vulnerabilities introduced by outsourcing computations and describe solutions to mitigate these vulnerabilities.

First, we describe the design of “verifiable ASICs” to address the problem of secure chip fabrication at off-shore foundries. Building on a rich body of work on the “delegation of computation” problem, we enable untrusted chips to provide run-time proofs of the correctness of computations they perform. These proofs are checked by a slower verifier chip fabricated at a trusted foundry. The proposed approach is the first to defend against arbitrary Trojan misbehaviors (Trojans refer to malicious modifications of a chip’s blueprint by the foundry) while providing formal and comprehensive soundness guarantees.
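(The verifiable-ASICs work builds on far more general proof protocols than the following, but the flavor of “a cheap verifier checks an expensive, untrusted computation” can be illustrated with Freivalds’ classic matrix-multiplication check. This is a toy sketch in Python, not the protocol used in the talk; all names are illustrative.)

    import numpy as np

    def freivalds_check(A, B, C, trials=20):
        """Probabilistically verify that A @ B == C.

        Each trial costs only O(n^2) (three matrix-vector products),
        far cheaper than recomputing A @ B in O(n^3). If C is wrong,
        each trial catches the error with probability >= 1/2, so the
        chance of missing it after `trials` rounds is <= 2**-trials.
        """
        n = C.shape[1]
        for _ in range(trials):
            r = np.random.randint(0, 2, size=(n, 1))   # random 0/1 vector
            if not np.array_equal(A @ (B @ r), C @ r):
                return False                           # caught a bad result
        return True                                    # accept: correct w.h.p.

    # The untrusted "prover" returns C; the verifier checks it cheaply.
    A = np.random.randint(0, 10, (200, 200))
    B = np.random.randint(0, 10, (200, 200))
    C = A @ B                                          # honest result
    assert freivalds_check(A, B, C)

    C_bad = C.copy()
    C_bad[0, 0] += 1                                   # a single tampered entry
    assert not freivalds_check(A, B, C_bad)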

Next, we examine the “MLaaS” setting, in which the training and/or inference of machine learning models is outsourced to the cloud. We show that outsourced training introduces new security risks: an adversary can create a maliciously trained neural network (a backdoored neural network, or BadNet) that has state-of-the-art performance on the user’s training and validation samples, but behaves badly on specific attacker-chosen inputs. We conclude by showing how the same techniques we used to design “verifiable ASICs” can be used to verify the results of neural networks executed on the cloud.
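(As a rough sketch of the attack surface, outsourced training lets the adversary poison a fraction of the training data with a small trigger pattern and an attacker-chosen label; the trigger shape, fraction, and names below are illustrative assumptions, not the exact construction presented in the talk.)

    import numpy as np

    def poison_dataset(images, labels, target_label, poison_frac=0.1, seed=0):
        """Toy illustration of BadNet-style training-set poisoning.

        A small, fixed "trigger" (here, a bright 3x3 patch in one corner)
        is stamped onto a random fraction of the training images, and those
        images are relabeled to the attacker's target class. A network
        trained on the result behaves normally on clean inputs but predicts
        `target_label` whenever the trigger is present.
        """
        rng = np.random.default_rng(seed)
        images, labels = images.copy(), labels.copy()
        n_poison = int(poison_frac * len(images))
        idx = rng.choice(len(images), size=n_poison, replace=False)
        images[idx, -3:, -3:] = 1.0      # stamp trigger in bottom-right corner
        labels[idx] = target_label       # relabel to the attacker-chosen class
        return images, labels

    # e.g., for MNIST-shaped data (N, 28, 28) scaled to [0, 1]:
    # x_poisoned, y_poisoned = poison_dataset(x_train, y_train, target_label=7)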

BIO: Siddharth Garg has been an Assistant Professor in the ECE Department at NYU since Fall 2014; prior to that, he was an Assistant Professor at the University of Waterloo from 2010 to 2014. His research interests are in secure, reliable and energy-efficient computing. Siddharth was listed in Popular Science Magazine’s annual “Brilliant 10” list of researchers in 2016 for his work on hardware security, and is the recipient of an NSF CAREER Award (2015) and best paper awards at the IEEE Symposium on Security and Privacy (S&P) in 2016, the USENIX Security Symposium in 2013, the Semiconductor Research Corporation TECHCON in 2010, and the International Symposium on Quality Electronic Design (ISQED) in 2009. Siddharth also received the Angel G. Jordan Award from the ECE Department of Carnegie Mellon University for outstanding thesis contributions and service to the community. He received a Ph.D. in ECE from Carnegie Mellon University, an M.S. degree in EE from Stanford University, and a B.Tech. degree in EE from IIT Madras.

Hosted by: Paul Bogdan