Elad Hazan
Nationality: | Israeli-American |
Occupation: | Computer scientist, academic, author and researcher |
Awards: | Bell Labs Prize; Marie Curie Fellow, European Research Council; Google Research Award; Amazon Research Award |
Website: | https://www.ehazan.com/ |
Education: | B.Sc., Computer Science, Tel Aviv University; M.Sc., Computer Science, Tel Aviv University; Ph.D., Computer Science, Princeton University |
Workplaces: | Princeton University |
Elad Hazan is an Israeli-American computer scientist, academic, author and researcher. He is a Professor of Computer Science at Princeton University, and the co-founder and director of Google AI Princeton.[1] [2]
Hazan co-invented adaptive gradient methods and the AdaGrad algorithm. He has published over 150 articles and holds several patents. He has worked on machine learning and mathematical optimization, and more recently on control theory and reinforcement learning.[3] He is the author of the book Introduction to Online Convex Optimization. Hazan is the co-founder of In8 Inc., which was acquired by Google in 2018.[4]
Hazan studied at Tel Aviv University, receiving his bachelor's and master's degrees in Computer Science in 2001 and 2002, respectively. He then moved to the United States, earning his doctoral degree in Computer Science from Princeton University in 2006 under Sanjeev Arora.[1]
Upon receiving his doctoral degree, Hazan held an appointment as a Research Staff Member in the Theory Group at the IBM Almaden Research Center starting in 2006. He then joined the Technion – Israel Institute of Technology as an assistant professor in 2010, where he was tenured and promoted to associate professor in 2013.[1] In 2015, he joined Princeton University as an Assistant Professor of Computer Science, becoming Professor of Computer Science in 2016. Since 2018, he has served as director of Google AI Princeton.[5]
Hazan's research primarily focuses on machine learning, mathematical optimization, control theory and reinforcement learning. He is the co-inventor of five US patents.
Hazan co-introduced adaptive subgradient methods, which dynamically incorporate the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. The resulting AdaGrad algorithm reshaped optimization for deep learning and underlies widely used adaptive optimizers. He also made substantial contributions to the theory of online convex optimization, including the Online Newton Step and online Frank-Wolfe algorithms, projection-free methods, and adaptive-regret algorithms.[6]
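The core idea of AdaGrad can be illustrated with a short sketch: each coordinate accumulates its own squared-gradient history and is stepped with a correspondingly scaled learning rate. This is a minimal illustrative version (the function name, step size, and epsilon are assumptions, not taken from the original paper):

```python
import numpy as np

def adagrad_update(w, grad, accum, lr=0.5, eps=1e-8):
    """One AdaGrad step: per-coordinate learning rates from gradient history."""
    accum = accum + grad ** 2                    # accumulate squared gradients
    w = w - lr * grad / (np.sqrt(accum) + eps)   # scale the step per coordinate
    return w, accum

# Example: minimize f(w) = ||w||^2, whose gradient is 2w.
w, accum = np.array([1.0, -2.0]), np.zeros(2)
for _ in range(200):
    w, accum = adagrad_update(w, 2 * w, accum)
```

Coordinates with persistently large gradients receive smaller steps, while rarely updated coordinates keep larger ones, which is what makes the method well suited to sparse features.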
In mathematical optimization, Hazan proposed the first sublinear-time algorithms for linear classification and for semi-definite programming. He also gave the first linearly convergent Frank-Wolfe-type algorithm.[7]
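For context, the classical Frank-Wolfe method (the projection-free template that Hazan's linearly convergent variant builds on) replaces projections with a linear minimization oracle over the feasible set. A minimal sketch over the probability simplex, where that oracle simply returns a vertex; this is the textbook method, not Hazan's improved algorithm:

```python
import numpy as np

def frank_wolfe_simplex(grad_f, x0, T=500):
    """Classical Frank-Wolfe over the probability simplex: projection-free,
    each step moves toward the vertex minimizing the linearized objective."""
    x = x0.copy()
    for t in range(1, T + 1):
        g = grad_f(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0        # linear minimization oracle: a simplex vertex
        gamma = 2.0 / (t + 2)        # standard diminishing step size
        x = (1 - gamma) * x + gamma * s
    return x

# Example: minimize ||x - c||^2 over the simplex for a target c in the simplex.
c = np.array([0.2, 0.3, 0.5])
x = frank_wolfe_simplex(lambda x: 2 * (x - c), np.array([1.0, 0.0, 0.0]))
```

Because every iterate is a convex combination of vertices, the method never needs a projection, which is the property that makes Frank-Wolfe-type algorithms attractive for structured constraint sets.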
More recently, Hazan and his group proposed a new paradigm for differentiable reinforcement learning called non-stochastic control, which applies online convex optimization to control.[8]
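The flavor of applying online gradient methods to control can be conveyed with a toy sketch. This is not Hazan's actual non-stochastic control algorithm (which optimizes disturbance-action policies against counterfactual losses); it is a hypothetical one-dimensional example in which a linear feedback gain is tuned by online gradient descent on a one-step cost, under bounded non-stochastic disturbances:

```python
import numpy as np

def online_gradient_controller(a=0.9, b=1.0, T=500, lr=0.05):
    """Toy sketch: tune a feedback gain k (u = -k*x) by online gradient
    descent for the scalar system x_{t+1} = a*x_t + b*u_t + w_t,
    where w_t is a bounded, non-stochastic disturbance."""
    x, k = 1.0, 0.0
    costs = []
    for t in range(T):
        u = -k * x
        w = 0.1 * np.sin(0.5 * t)            # bounded adversarial-style disturbance
        x_next = a * x + b * u + w
        costs.append(x_next ** 2 + u ** 2)   # one-step quadratic cost
        # Gradient of the one-step cost w.r.t. k (holding x_t fixed):
        # d(x_next)/dk = -b*x and du/dk = -x.
        grad_k = 2 * x_next * (-b * x) + 2 * u * (-x)
        k -= lr * grad_k
        x = x_next
    return k, costs

k, costs = online_gradient_controller()
```

Even this myopic update settles near a gain that balances state regulation against control effort, illustrating why online convex optimization is a natural lens for control without stochastic noise assumptions.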