Fri 02/08

LMSS @ Cornell Tech: Percy Liang (Stanford)

Can Language Robustify Learning?

Machine learning is facing a robustness crisis. Despite reaching human-level performance on a wide range of benchmarks, state-of-the-art systems can be easily fooled by seemingly small perturbations that don’t affect humans. The problem is perhaps fundamental to learning: fitting huge, low-bias models can easily overfit to the superficial statistics of the benchmarks. In this talk, we explore natural language as a way to address this problem by providing stronger supervision: instead of requiring the machine learning algorithm to infer a function from raw examples, we specify the function directly using language. We present two initial forays into this space: first, we convert natural language explanations into a function that labels unlabeled data, which can then be used to train a predictor; second, users interactively teach high-level concepts using natural language definitions.
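To make the first idea concrete, here is a minimal sketch of the general pipeline the abstract describes: parse a natural language explanation into a labeling function, apply it to unlabeled data, and train an ordinary predictor on the resulting weak labels. The explanation format, the toy parser, and the data below are illustrative assumptions, not the actual system presented in the talk, which handles far richer language.

```python
# Sketch of weak supervision from natural language explanations.
# Assumed toy explanation format: 'label <class> because it contains "<word>"'.
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def explanation_to_lf(explanation):
    """Parse a toy explanation into a labeling function that maps a
    sentence to a label, or None to abstain."""
    m = re.match(r'label (\w+) because it contains "([^"]+)"', explanation)
    label, keyword = m.group(1), m.group(2)
    return lambda s: label if keyword in s.lower() else None

# Hypothetical user-provided explanations (assumed format).
explanations = [
    'label spam because it contains "free"',
    'label ham because it contains "meeting"',
]
lfs = [explanation_to_lf(e) for e in explanations]

# Each labeling function votes on (or abstains from) unlabeled sentences.
unlabeled = [
    "win a free vacation now",
    "free money, click here",
    "meeting moved to 3pm",
    "notes from today's meeting",
]
X_text, y = [], []
for s in unlabeled:
    votes = [lf(s) for lf in lfs if lf(s) is not None]
    if votes:                 # keep sentences where some function fired
        X_text.append(s)
        y.append(votes[0])    # naive aggregation: take the first vote

# Train a predictor on the weakly labeled data; it can generalize
# beyond the literal keywords mentioned in the explanations.
vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(X_text), y)
print(clf.predict(vec.transform(["free prize inside"])))  # -> ['spam']
```

The point of the design is leverage: one sentence of explanation labels many examples, whereas one hand-labeled example labels only itself.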

Speaker Bio

Percy Liang is an Assistant Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011). His research spans machine learning and natural language processing, with the goal of developing trustworthy agents that can communicate effectively with people and improve over time through interaction. Specific topics include question answering, dialogue, program induction, interactive learning, and reliable machine learning. His awards include the IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), and a Microsoft Research Faculty Fellowship (2014).