Johns Hopkins University, HLTCOE
Stieff Building / 810 Wyman Park Drive
Baltimore, MD 21211-2840, USA
Email: email@example.com where x=kevinduh
Hi! I am a senior research scientist at the Johns Hopkins University Human Language Technology Center of Excellence (HLTCOE). I am also an assistant research professor in the Department of Computer Science and a member of the Center for Language and Speech Processing (CLSP). Previously, I was an assistant professor at the Nara Institute of Science and Technology (2012-2015) and a research associate at NTT CS Labs (2009-2012). I received my B.S. in 2003 from Rice University and my PhD in 2009 from the University of Washington, both in Electrical Engineering. My research interests lie at the intersection of Natural Language Processing and Machine Learning, in particular in areas relating to machine translation, semantics, and deep learning.
My recent projects include: (a) domain adaptation for neural machine translation [see 1, 2, 3, 4, 5], (b) end-to-end cross-lingual information extraction and retrieval, and (c) multi-objective optimization for fast, small, and accurate deep learning models. I'm fortunate to have great collaborators.
- Current Doctoral Advisees: Shuo Sun, Xuan Zhang, Suzanna Sia, Jeremy Gwinnup, Neha Verma (co-advised with Kenton Murray)
- Previous lab members: Mitchell Gordon, Pamela Shapiro (now at Comcast), Sheng Zhang (now at Microsoft, co-advised with Ben Van Durme), Muhammad Rahman (now at NIH), Sorami Hisamoto (now at WorksApplications), Hiroki Ouchi (now at NAIST), Fei Cheng (now at NII), Xiaoyi Wu, Frances Yung (now at Saarland University), Masashi Tsubaki (now at AIST), Xiaodong Liu (now at Microsoft), Yanyan Luo (now at Baidu)
Selected Publications [click here for full list, or see my Google Scholar profile]:
- Reproducible and Efficient Benchmarks for Hyperparameter Optimization of NMT Systems (TACL2020)
- Membership Inference Attacks on Seq2Seq Models (TACL2020)
- Multilingual end-to-end speech translation (ASRU2019)
- HABLex: Human Annotated Bilingual Lexicons for Experiments in Machine Translation (EMNLP2019)
- Curriculum Learning for Domain Adaptation in Neural Machine Translation (NAACL2019)
- Stochastic Answer Networks for Machine Reading Comprehension (ACL2018)
- Representation Learning using Multi-Task Deep Neural Nets for Semantic Classification & IR (NAACL2015)
To prospective students: Thanks for your interest! I am not able to reply to your inquiries individually, due to the large volume of email I receive ("email event horizon"). If you are seeking admission, please refer to the official CS admissions page. I am currently not accepting internship students (including self-funded ones).