60 5th Avenue
New York, NY 10011
My research focuses on applications of statistical machine learning to natural language processing (NLP), natural language understanding (NLU), and reasoning. I am currently working primarily on ways (e.g., reasoning) to improve the generalizability and robustness of ML, NLP, and NLU models, especially on data and tasks beyond those encountered during training. More specifically, I have worked on algorithms that abduce world models (i.e., theories) from logical forms, and that reason over previously acquired knowledge to perform downstream tasks such as question answering. I am also interested in symbolic and neuro-symbolic representations of meaning and reasoning in the context of NLU. In the past, I developed generative probabilistic models of grammar to train semantic parsers (i.e., how can a machine learn to convert natural language into logical forms?) and to generate natural language utterances from logical forms (i.e., how can a machine learn to convert logical forms into natural language?). In my work, I have applied Bayesian nonparametrics, approximate posterior inference, combinatorial optimization, and real-time visualization. I am also broadly interested in other applications of statistical machine learning, such as to the natural sciences.
Prior to NYU, I received my Ph.D. and M.S. in Machine Learning from Carnegie Mellon University, advised by Professor Tom Mitchell. I received my B.S.E. in Computer Science from Princeton University, with certificates (minors) in Applied Mathematics and Neuroscience.