Bio: I am a member of the Center for Language and Speech Processing (CLSP) and the Institute for Data Intensive Engineering and Science (IDIES). Prior to joining Johns Hopkins, I was a Research Assistant Professor at the Toyota Technological Institute at Chicago (TTIC), a post-doctoral scholar at TTIC (hosted by Karen Livescu), a visiting researcher at Microsoft Research Redmond (hosted by Ofer Dekel), and a research associate at the University of Washington in Seattle (hosted by Maya Gupta). I received my Ph.D. from the University of Wisconsin-Madison in 2009.

Raman Arora

Assistant Professor

Department of Computer Science

Center for Language and Speech Processing

Institute for Data Intensive Engineering & Science

Johns Hopkins University


3400 N Charles Street

Malone Hall 331

Baltimore, MD 21218

410-516-1327

arora at cs dot jhu dot edu

Research: The nature of signal and information processing has evolved dramatically over the years as we investigate increasingly intricate, dynamic, and large-scale systems such as the Internet, gene regulatory networks, the human brain, financial markets, and social networks. We are witnessing an explosion in both the amount and complexity of data, and this poses new challenges for efficient information extraction from massive, multimodal, corrupted, and very high-dimensional datasets. My research focuses on developing representation learning techniques that can capitalize on unlabeled data, which is often cheap and abundant, and sometimes virtually unlimited. The goal of these techniques is to learn a representation that reveals intrinsic low-dimensional structure in the data, disentangles underlying factors of variation by incorporating universal priors such as smoothness and sparsity, and is useful across multiple tasks and domains.


I focus on representation learning techniques, including subspace learning, multi-view learning, deep learning, and spectral learning, in high-dimensional big-data settings. Central to my research is the theory and application of stochastic approximation algorithms that process one sample at a time and can thus be significantly more efficient, in both time and space, than batch learning algorithms on modern datasets with millions or billions of sample points. I am interested in applications of these techniques to speech and language processing, healthcare, and computational neuroscience. See here for more details.
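
To make the one-sample-at-a-time idea concrete, here is a minimal sketch, in Python/NumPy, of Oja's stochastic approximation rule for streaming PCA, a classical instance of stochastic subspace learning. The function name, step-size schedule, and all parameters below are illustrative assumptions, not an implementation from my papers.

```python
import numpy as np

def oja_streaming_pca(stream, dim, k, lr=0.1):
    """Estimate the top-k principal subspace from a stream of samples.

    Illustrative sketch of Oja's rule: each sample is seen exactly once,
    so memory is O(dim * k) regardless of how many samples arrive.
    """
    rng = np.random.default_rng(0)
    # Random orthonormal initialization of the k-dimensional subspace.
    U, _ = np.linalg.qr(rng.standard_normal((dim, k)))
    for t, x in enumerate(stream, start=1):
        eta = lr / np.sqrt(t)             # decaying step size
        U += eta * np.outer(x, x @ U)     # rank-one stochastic gradient step
        U, _ = np.linalg.qr(U)            # project back onto orthonormal matrices
    return U

# Usage: recover a planted 2-dimensional subspace from streamed samples.
rng = np.random.default_rng(1)
d, k, n = 50, 2, 10_000
basis, _ = np.linalg.qr(rng.standard_normal((d, k)))
samples = rng.standard_normal((n, k)) @ basis.T + 0.01 * rng.standard_normal((n, d))
U_hat = oja_streaming_pca(iter(samples), d, k)
print(np.linalg.norm(U_hat @ U_hat.T - basis @ basis.T))  # small => subspaces match
```

Because each update touches only the current sample and a dim-by-k matrix, the memory footprint is independent of the number of samples seen, which is what makes such methods practical at the scale described above.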


My research has been supported by an NSF BIGDATA award, a Science of Learning grant, an IDIES award, and an award from the Lieber Institute for Brain Development.

Group:

Poorya Mianjy (Ph.D. student, CLSP)

Teodor Marinov (Ph.D. student, CLSP)

Nils Holzenberger (Ph.D. student, CLSP)

Enayat Ullah (Ph.D. student, CS)

Corbin Rosset (B.S. student, CS)


Alumni:

Tuo Zhao (Ph.D. student, CS, co-advised with Han Liu, currently at GaTech)

Mo Yu (Visiting Ph.D. student, CLSP, co-advised with Mark Dredze, currently at IBM Research)

Neil Mallinar (B.S., currently at IBM Watson)

Openings: We are seeking highly motivated, self-driven M.S. and Ph.D. students with a strong mathematical background in machine learning. Prospective Ph.D. students are encouraged to apply to the CS graduate program at Johns Hopkins:

http://ml.jhu.edu/apply/

http://www.clsp.jhu.edu/about-clsp/admissions/

We are also looking for postdoctoral researchers with an interest in representation learning and stochastic optimization. Contact me for more details.

We host multiple undergraduate students for summer internships every year as part of the Summer Research Expeditions (SRE) program in computational sciences, systems, and engineering at JHU. Please check here for updates on our SRE program for the coming year.