Bio: I am a member of the Mathematical Institute for Data Science (MINDS), the Center for Language and Speech Processing (CLSP), and the Institute for Data Intensive Engineering and Science (IDIES). Prior to joining Johns Hopkins, I was a Research Assistant Professor / postdoctoral scholar at the Toyota Technological Institute at Chicago (TTIC), a visiting researcher at Microsoft Research, Redmond, and a research associate at the University of Washington, Seattle. I received my Ph.D. from the University of Wisconsin-Madison.

Raman Arora

Assistant Professor

Department of Computer Science

Mathematical Institute for Data Science (MINDS)

Center for Language and Speech Processing (CLSP)

Johns Hopkins University


3400 N Charles Street

Malone Hall 331

Baltimore, MD 21218

arora at cs dot jhu dot edu


Google Scholar

Research Interests


Machine Learning: Provable methods for deep learning and representation learning, subspace learning, multiview learning, streaming algorithms for kernel methods, online learning


Stochastic Optimization: Non-convex optimization, stochastic approximation for large-scale problems, robust adversarial learning


Differential Privacy: Computational tradeoffs in private machine learning, local learning, federated learning, privacy in streaming algorithms and continual release models


My research is supported by an NSF CAREER award on Understanding Inductive Biases in Modern Machine Learning, an NSF BIGDATA award on Privacy in Machine Learning, a DARPA award on Robust Adversarial Learning, an NSF BIGDATA award on Stochastic Approximation for Subspace and Multiview Representation Learning, an NSF TRIPODS award on Foundations of Graph and Deep Learning, and an NSF CRCNS award on Computational Neuroscience. See here for more details.


During 2019-2020, I was a member of the Department of Mathematics at the Institute for Advanced Study. I was a visiting scientist at the Simons Institute for the Theory of Computing at UC Berkeley for the Summer 2019 program on Foundations of Deep Learning, the Fall 2020 program on Theory of Reinforcement Learning, and the Spring 2022 program on Learning and Games.

Group:

Thanh Nguyen-Tang (Postdoc, CS)

Enayat Ullah (Ph.D. student, CS)

Yunjuan Wang (Ph.D. student, CS)

Austin Watkins (Ph.D. student, CS)

Kaibo Zhang (Ph.D. student, CS)

Anh Do (Ph.D. student, CS)

Sihan Wei (Ph.D. student, CS)

Ashley Llorens (D.Eng. student, Microsoft)


Alumni

Teodor Marinov (Ph.D., CS, currently a researcher at Google) [THESIS]

Poorya Mianjy (Ph.D., CS, currently a quant researcher at Citadel) [THESIS]

Tuo Zhao (Ph.D., CS, currently an assistant professor at GaTech) [THESIS]

Jalaj Upadhyay (Postdoc, CS, currently a researcher at Apple)

Mo Yu (Visiting Ph.D. student, CLSP, currently at WeChat AI, Tencent)

Neil Mallinar (B.S., currently pursuing Ph.D. at UCSD)

Corbin Rosset (B.S., currently at Microsoft)

Chengzhi Shi (M.S., BME)

Openings: I am seeking highly motivated and self-driven M.S./Ph.D. students with a strong mathematical background in machine learning. Prospective Ph.D. students are encouraged to apply to the CS graduate program at Johns Hopkins: https://www.cs.jhu.edu/graduate-studies/phd-program/.


I am also looking for postdoctoral researchers with an interest in theoretical machine learning, online learning, robustness, and differential privacy. Contact me for more details.