Hyper-parameter Tuning for ML Models: A Monte-Carlo Tree Search (MCTS) Approach
The University of Texas at Austin
Wednesday, October 2, 2019
Abstract: We study the application of online learning techniques in the context of hyper-parameter tuning, which is of growing importance in general machine learning. Modern neural networks have many tunable hyper-parameters, and training a model under even a single configuration can take hours to days. We first cast hyper-parameter tuning as optimizing a multi-fidelity black-box function (which is noise-less) and propose a multi-fidelity tree search algorithm for this setting. We then present extensions of our model and algorithm so that they function even in the presence of noise. We show that our tree-search based algorithms can outperform state-of-the-art hyper-parameter tuning algorithms on several benchmark datasets.
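To make the setup concrete, here is a toy sketch (not the speaker's algorithm) of the general idea behind tree search over a hyper-parameter space with multi-fidelity evaluations: the search hierarchically partitions a one-dimensional hyper-parameter range, selects cells by a UCB rule, and evaluates deeper (more promising) cells at higher fidelity, i.e. with a larger training budget. The `blackbox` function is a hypothetical stand-in for "train at configuration x with a given fraction of the full budget and report validation loss", where lower fidelity means a noisier estimate.

```python
import math
import random


def blackbox(x, fidelity):
    # Hypothetical stand-in for "train with hyper-parameter x at this
    # fraction of the full budget, return validation loss". The true
    # optimum sits at x = 0.7; lower fidelity means noisier estimates.
    true_loss = (x - 0.7) ** 2
    noise = random.gauss(0.0, 0.05 * (1.0 - fidelity))
    return true_loss + noise


class Node:
    """One cell of a hierarchical partition of the search interval."""

    def __init__(self, lo, hi, depth):
        self.lo, self.hi, self.depth = lo, hi, depth
        self.children = []
        self.visits = 0
        self.reward_sum = 0.0  # accumulated reward = -loss


def ucb(child, t):
    # Standard UCB1 score; unvisited children are tried first.
    if child.visits == 0:
        return float("inf")
    mean = child.reward_sum / child.visits
    return mean + math.sqrt(2.0 * math.log(t) / child.visits)


def tree_search(budget=300, max_depth=10, seed=1):
    random.seed(seed)
    root = Node(0.0, 1.0, 0)
    best_x, best_loss = None, float("inf")
    for t in range(1, budget + 1):
        # Selection: descend by UCB until reaching a leaf cell.
        node, path = root, [root]
        while node.children:
            node = max(node.children, key=lambda c: ucb(c, t))
            path.append(node)
        # Expansion: split the leaf's interval in half.
        mid = 0.5 * (node.lo + node.hi)
        if node.depth < max_depth:
            node.children = [
                Node(node.lo, mid, node.depth + 1),
                Node(mid, node.hi, node.depth + 1),
            ]
        # Evaluation: deeper cells earn a higher-fidelity (costlier) run.
        fidelity = min(1.0, 0.2 + 0.1 * node.depth)
        x = random.uniform(node.lo, node.hi)
        loss = blackbox(x, fidelity)
        if loss < best_loss:
            best_x, best_loss = x, loss
        # Backpropagation: update statistics along the selected path.
        for n in path:
            n.visits += 1
            n.reward_sum += -loss
    return best_x, best_loss


if __name__ == "__main__":
    x, loss = tree_search()
    print(f"best x ~ {x:.3f}, observed loss {loss:.4f}")
```

The key design choice the sketch illustrates is spending cheap low-fidelity evaluations broadly near the root, and reserving expensive high-fidelity training runs for the deep, narrow cells that the bandit statistics already favor.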
Biography: Sanjay Shakkottai received his Ph.D. from the ECE Department at the University of Illinois at Urbana-Champaign in 2002. He is with The University of Texas at Austin, where he is currently the Temple Foundation Endowed Professor No. 3, and a Professor in the Department of Electrical and Computer Engineering. He received the NSF CAREER award in 2004, and was elected as an IEEE Fellow in 2014. His research interests lie at the intersection of algorithms for resource allocation, statistical learning and networks, with applications to wireless communication networks and online platforms.
Host: Paul Bogdan