Bio: I am a member of the Mathematical Institute for Data Science (MINDS), the Center for Language and Speech Processing (CLSP), and the Institute for Data Intensive Engineering and Science (IDIES). Prior to joining Johns Hopkins, I was a Research Assistant Professor / postdoctoral scholar at the Toyota Technological Institute at Chicago (TTIC), a visiting researcher at Microsoft Research, Redmond, and a research associate at the University of Washington, Seattle. I received my Ph.D. from the University of Wisconsin-Madison.

Raman Arora

Assistant Professor

Department of Computer Science

Mathematical Institute for Data Science (MINDS)

Center for Language and Speech Processing (CLSP)

Johns Hopkins University


3400 N Charles Street,

Malone Hall 331

Baltimore, MD 21218

arora at cs dot jhu dot edu


Google Scholar

Research Interests


Machine Learning: Provable methods for deep learning and representation learning, subspace learning, multiview learning, streaming algorithms for kernel methods, online learning


Stochastic Optimization: Non-convex optimization, stochastic approximation for large-scale problems, robust adversarial learning


Differential Privacy: Computational tradeoffs in private machine learning, local learning, federated learning, privacy in streaming algorithms and continual release models


My research is supported by an NSF CAREER award on Understanding Inductive Biases in Modern Machine Learning, an NSF BIGDATA award on Privacy in Machine Learning, a DARPA award on Robust Adversarial Learning, an NSF BIGDATA award on Stochastic Approximation for Subspace and Multiview Representation Learning, an NSF TRIPODS award on Foundations of Graph and Deep Learning, and an NSF CRCNS award on Computational Neuroscience. See here for more details.


During 2019-2020, I was a member of the Department of Mathematics at the Institute for Advanced Study. I was a visiting scientist at the Simons Institute for the Theory of Computing at UC Berkeley during Summer 2019 as part of the program on Foundations of Deep Learning, and during Fall 2020 as part of the program on Theory of Reinforcement Learning.

Group:

Poorya Mianjy (Ph.D. student, CS)

Teodor Marinov (Ph.D. student, CS)

Enayat Ullah (Ph.D. student, CS)

Yunjuan Wang (Ph.D. student, CS)

Ashley Llorens (D.Eng. student, JHU APL)

Chengzhi Shi (M.S. student, BME)


Alumni

Tuo Zhao (Ph.D. student, CS, co-advised with Han Liu, currently Assistant Professor at Georgia Tech)

Jalaj Upadhyay (Postdoc, CS, currently a researcher at Apple)

Mo Yu (Visiting Ph.D. student, CLSP, co-advised with Mark Dredze, currently at IBM Research)

Neil Mallinar (B.S., currently Ph.D. student at UCSD)

Corbin Rosset (B.S., currently at Microsoft)

Openings: We are seeking highly motivated and self-driven M.S./Ph.D. students with a strong mathematical background and an interest in machine learning. Prospective Ph.D. students are encouraged to apply to the CS graduate program at Johns Hopkins:

http://ml.jhu.edu/apply/

https://www.cs.jhu.edu/graduate-studies/phd-program/


We are also looking for postdoctoral researchers with an interest in theoretical machine learning, stochastic optimization, and differential privacy. Contact me for more details.