Summer 2018

Computer Science Student Defense

June 18, 2018

Paraphrasing, or communicating the same meaning with different surface forms, is a core characteristic of natural language and one of the greatest challenges for automatic language processing techniques. In this research, we investigate approaches to paraphrasing entire sentences within the constraints of a given task, which we call monolingual sentence rewriting. We focus on three representative tasks: sentence compression, text simplification, and grammatical error correction.

Monolingual rewriting can be thought of as translating between two types of English (such as from complex to simple), and therefore our approach is inspired by statistical machine translation. In machine translation, a large quantity of parallel data is necessary to model the transformations from input to output text. Parallel bilingual data naturally occurs between common language pairs (such as English and French), but for monolingual sentence rewriting there is little existing parallel data, and annotation is costly. We modify the statistical machine translation pipeline to harness monolingual resources and insights into task constraints in order to drastically reduce the amount of annotated data necessary to train a robust system. Our method produces more meaning-preserving and grammatical sentences than earlier approaches while requiring less task-specific data.

Speaker Biography: Courtney Napoles is a PhD candidate in the Computer Science Department and the Center for Language and Speech Processing at Johns Hopkins University, where she is co-advised by Chris Callison-Burch and Benjamin Van Durme. During her PhD, she interned at Educational Testing Service (ETS) and Yahoo Research. She is the recipient of an NSF Graduate Research Fellowship and holds a Bachelor’s degree in Psychology from Princeton University with a Certificate in Linguistics. Before graduate school, she edited non-fiction books for a trade publisher.

Computer Science Student Defense

July 18, 2018

A new paradigm is beginning to emerge in radiology with the advent of increased computational capabilities and new algorithms. The future of radiological reading rooms is heading toward a unique collaboration between computer scientists and radiologists. The goal of computational radiology is to probe the underlying tissue using advanced computational algorithms and imaging parameters, and to produce a personalized diagnosis that can be correlated with pathology. This thesis presents a complete computational radiology framework for personalized clinical diagnosis, prognosis, and treatment planning using an integration of graph theory, radiomics, and deep learning (I-GRAD).

Speaker Biography: Vishwa Parekh is a PhD candidate in Computer Science at JHU. He is primarily advised by Dr. Michael Jacobs and co-advised by Dr. Russell Taylor and Dr. Jerry Prince. Vishwa received a B.E. in Computer Science from BITS, Pilani in 2011 and an M.S.E. in Computer Science from JHU in 2013.

Vishwa’s research interest lies in developing techniques that enable us to “see” patterns in high-dimensional imaging data that are not visually perceivable to the naked eye. During his Ph.D., Vishwa published 5 journal papers, 2 conference papers, and 8 abstracts, and filed 4 patents. His research in manifold and deep learning was covered on AuntMinnie.com in its “Road to RSNA” coverage for 2015 and 2017. In addition, his work on manifold learning in prostate imaging was selected for a power pitch presentation (top 2%) at the International Society for Magnetic Resonance in Medicine in 2017.

July 19, 2018

The reconstruction of the 3D world from images is among the central challenges in computer vision. Since the 2000s, researchers have pioneered algorithms that can reconstruct camera motion and sparse feature points in real time. In my talk, I will introduce direct methods for camera tracking and 3D reconstruction which do not require feature point estimation, exploit all available input data, and recover dense or semi-dense geometry rather than sparse point clouds. Applications include 3D photography, free-viewpoint television, and autonomous vehicles.

Speaker Biography: Daniel Cremers received Bachelor’s degrees in Mathematics (1994) and Physics (1994), and a Master’s degree in Theoretical Physics (1997) from the University of Heidelberg. In 2002 he obtained a PhD in Computer Science from the University of Mannheim, Germany. He subsequently spent two years as a postdoctoral researcher at the University of California, Los Angeles and one year as a permanent researcher at Siemens Corporate Research in Princeton. From 2005 until 2009 he was an associate professor at the University of Bonn, Germany. Since 2009 he has held the Chair of Computer Vision and Artificial Intelligence at the Technical University of Munich. He has coauthored over 300 publications that have received numerous awards, most recently the SGP 2016 Best Paper Award and the CVPR 2016 Best Paper Honorable Mention, and his work was a Best Paper Award finalist at IROS 2017 and ICRA 2018. For his pioneering research he received a Starting Grant (2009), a Proof of Concept Grant (2014), and a Consolidator Grant (2015) from the European Research Council. In December 2010 he was listed among “Germany’s top 40 researchers below 40” (Capital). Prof. Cremers received the Gottfried Wilhelm Leibniz Award 2016, the most important research award in German academia.