Fall 2015

Student Seminar

September 9, 2015

This work broadens the space of rich yet practical models for structured prediction. We introduce a general framework for modeling with four ingredients: (1) latent variables, (2) structural constraints, (3) learned (neural) feature representations of the inputs, and (4) training that takes the approximations made during inference into account. We build up to this framework through an empirical study of three NLP tasks: semantic role labeling, relation extraction, and dependency parsing, obtaining state-of-the-art results on the former two. We then apply the resulting graphical models, with structured and neural factors and approximation-aware learning, to jointly model part-of-speech tags, a syntactic dependency parse, and semantic roles in a low-resource setting where the syntax is unobserved. We also present an alternative view of these models as neural networks with a topology inspired by inference on graphical models that encode our intuitions about the data.
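
To make the modeling ingredients concrete, here is a minimal numpy sketch, under assumptions, of a chain-structured model whose unary factors come from a stand-in neural scorer and whose structured factor is a label-transition matrix, with marginals computed by forward-backward. It only illustrates the general idea, not the speaker's models, which use richer factors and the approximate inference that approximation-aware training targets.

    import numpy as np

    # Toy chain-structured factor graph: a single random linear layer stands in
    # for learned neural features; a transition matrix is the structured factor.
    rng = np.random.default_rng(0)
    T, K, D = 5, 3, 8                        # sequence length, labels, feature dim
    X = rng.normal(size=(T, D))              # stand-in input features (e.g., embeddings)
    W = rng.normal(size=(D, K))              # one-layer "neural" scorer
    unary = np.exp(X @ W)                    # unary factor potentials, shape (T, K)
    trans = np.exp(rng.normal(size=(K, K)))  # transition (structured) factor

    # Forward-backward (exact on a chain; loopier graphs need approximate inference).
    alpha = np.zeros((T, K))
    beta = np.zeros((T, K))
    alpha[0] = unary[0]
    for t in range(1, T):
        alpha[t] = unary[t] * (alpha[t - 1] @ trans)
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = trans @ (unary[t + 1] * beta[t + 1])
    Z = alpha[-1].sum()                      # partition function
    marginals = alpha * beta / Z             # per-position label marginals
    assert np.allclose(marginals.sum(axis=1), 1.0)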

Speaker Biography: Matt Gormley is a final year PhD candidate in Computer Science at Johns Hopkins University, co-advised by Mark Dredze and Jason Eisner. His current research focuses on joint modeling of multiple linguistic strata in learning settings where supervised resources are scarce. He has authored papers in a variety of areas including global optimization, joint inference and learning, topic modeling, and neural networks. He holds a Bachelor’s degree in Computer Science from Carnegie Mellon University (CMU). He will return to CMU this spring to join the faculty of the Machine Learning department.

September 22, 2015

Each year we gather our faculty and graduate students to update them on the state of the department. Many changes have taken place over the last few months and we want to make sure you are aware of them. And we want you to have the opportunity to get to know and interact with your fellow students before you get deeply engrossed in your classes and research.

This year we are asking you to RSVP so that we can make sure we have adequate seating and food for everyone. Please use this survey link (https://www.surveymonkey.com/r/656TCHM) and reply by Monday, Sept 14, 2015.

Student Seminar

October 1, 2015

Social media predictive analytics bring unique opportunities to study people and their behaviors in real time, at an unprecedented scale: who they are, what they like, and what they think and feel. Such large-scale, real-time analytics create a novel set of conditions for the construction of predictive models.

This work focuses on various approaches to handling this dynamic data for predicting latent user demographics, from constrained-resource batch classification, to incremental bootstrapping, to iterative learning via interactive rationale (feature) crowdsourcing. In addition, we study the relationships between a variety of perceived user properties (e.g., income, education) and opinions, emotions, and interests in a social network. Finally, we demonstrate how user demographics can be useful for downstream prediction tasks, e.g., gender-informed sentiment analysis.

Speaker Biography: Svitlana Volkova is a PhD candidate in Computer Science at the Center for Language and Speech Processing, Johns Hopkins University. Her PhD research focuses on building text-driven predictive models for socio-linguistic content analysis in social media. She has been working mainly on online models for streaming social media analytics, fine-grained emotion detection and multilingual sentiment analysis, and effective annotation techniques via crowdsourcing incorporated into the active learning framework. She interned at Microsoft Research in 2011, 2012, and 2014, with the Natural Language Processing team and the Machine Learning and Perception team. She was awarded the Google Anita Borg Memorial Scholarship in 2010 and the Fulbright Scholarship in 2008.

Student Seminar

October 5, 2015

In networks such as the IP-based networks that run the Internet, nodes trust one another to properly execute routing and forwarding. When a node is compromised (i.e., a Byzantine failure), this trust can be exploited to launch routing attacks that disrupt communication throughout the network. In addition, a compromised node can drop, delay, reorder, replay, or duplicate messages, or inject its own messages into the network to consume resources. While these examples relate to networking, a compromised node can in fact perform arbitrary actions. Therefore, addressing this vulnerability requires an attack-agnostic approach that maintains network functionality even in the presence of compromised nodes.

We introduce the first practical solution for intrusion-tolerant networking. Our approach guarantees well-defined semantics to applications, rather than solely routing packets, and allows multiple different semantics to coexist. Specifically, we define two semantics that fit the needs of many applications: one guarantees prioritized timely delivery, and the other guarantees reliable delivery. We introduce a Maximal Topology with Minimal Weights to prevent routing attacks, and provide generic support for source-based routing, limiting the power of the adversary. In particular, we discuss two source-based routing techniques: K Node-Disjoint Paths, which is resilient to K-1 compromised nodes, and Constrained Flooding, which provides the optimal guarantee of delivering messages whenever a correct path exists from source to destination. We also describe the resilient overlay architecture needed to deploy these ideas and make the solution holistic, allowing the resulting system to overcome benign faults as well as malicious and resource-consumption attacks in the underlying network. We present a formal specification of the guarantees and evaluate an implementation deployed on a global cloud spanning 12 data centers from East Asia to North America to Europe.
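
As a toy illustration of the K Node-Disjoint Paths guarantee described above (a small stdlib sketch with hypothetical node names, not the authors' overlay system): if each message is forwarded along K paths that share no intermediate nodes, then any K-1 compromised nodes can block at most K-1 of the copies, so at least one copy reaches the destination.

    from itertools import combinations

    # K = 3 node-disjoint paths from source "s" to destination "t".
    paths = [
        ["s", "a", "t"],
        ["s", "b", "t"],
        ["s", "c", "t"],
    ]
    K = len(paths)
    intermediates = {n for p in paths for n in p[1:-1]}

    # Every possible choice of K-1 compromised intermediate nodes still leaves
    # at least one path untouched, so one copy of the message gets through.
    for compromised in combinations(intermediates, K - 1):
        surviving = [p for p in paths if not set(p[1:-1]) & set(compromised)]
        assert surviving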

Speaker Biography: Daniel Obenshain is a final year PhD candidate in Computer Science at Johns Hopkins University, advised by Yair Amir, and is a Beauchamp Fellow. His research focuses on creating systems that are highly resilient, even to the point of tolerating intrusions, and applying theoretical analysis to give provable guarantees for those systems. He holds a Bachelor’s degree in Computer Science from the California Institute of Technology (Caltech). He will start at Facebook in January, working in their infrastructure team.

Video Recording >>

October 8, 2015

It is widely recognized that future generations of computer systems will be qualitatively different from current and past generations. Specifically, they will be built with thousands of homogeneous and heterogeneous processor cores per chip combined with heterogeneous memory hierarchies; their performance will be driven by parallelism (thousand-way parallelism in mobile devices, and million-way parallelism in a single-rack departmental server) and constrained by fundamental energy and data movement limitations. They will also be subject to frequent faults and failures. Unlike previous generations of hardware evolution, these “extreme scale” systems will have a profound impact on future software. The software challenges are further compounded by the need to support application domains on the frontier of Computer Science that traditionally have not had to worry much about hardware limitations. Addressing these challenges for extreme scale systems will require major advances in the core of Computer Science, including programming languages, compilers, runtime systems, and processor architectures. These advances are also critical for extending the end-game of Moore’s Law, by contributing to exponential performance improvements in successive generations of computer systems without relying solely on exponential increases in transistor density.

In this talk, we discuss approaches to addressing software challenges for extreme scale systems based on experiences gained in the Habanero Extreme Scale Software Research Project at Rice University [1]. Our approach is based on introducing a set of unified primitives for parallelism and concurrency, which can be used to support a wide range of application patterns including task parallelism, functional parallelism, loop parallelism, data flow parallelism, actor concurrency, and transactional concurrency. These primitives define a family of structured parallel programming models, each of which has a well-defined set of semantic guarantees with respect to serial elision, deadlock freedom, data-race freedom, functional determinism, structural determinism, and permission safety. Our claim is that these structured parallel primitives greatly simplify the job of developing correct and efficient software for extreme scale systems.

Application experiences discussed in this talk will be drawn from the medical imaging and genomics analysis domains, based on research conducted in the NSF Expeditions Center for Domain-Specific Computing (CDSC) [2]. Background material for this talk will be drawn in part from the DARPA Exascale Software Study report [3] led by the speaker. This talk will also draw from a recent study led by the speaker on Synergistic Challenges in Data-Intensive Science and Exascale Computing [4] for the US Department of Energy’s Office of Science. We would like to acknowledge the contributions of all participants in both studies and in CDSC.
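
As a rough illustration of the fork-join flavor of such primitives, here is a minimal sketch written with Python's concurrent.futures rather than the Habanero primitives themselves: one task is forked per chunk and then joined, and its serial elision (summing the chunks in a plain loop) computes the same result, which is the kind of semantic guarantee the abstract emphasizes.

    from concurrent.futures import ThreadPoolExecutor

    # Minimal fork-join sketch (not the Habanero runtime): fork one task per
    # chunk ("async"), then join them ("finish"). Deleting the executor and
    # summing the chunks sequentially yields the same answer.
    def parallel_sum(data, chunks=8):
        n = len(data)
        bounds = [(i * n // chunks, (i + 1) * n // chunks) for i in range(chunks)]
        with ThreadPoolExecutor(max_workers=chunks) as pool:
            futures = [pool.submit(sum, data[lo:hi]) for lo, hi in bounds]  # fork
            return sum(f.result() for f in futures)                          # join

    data = list(range(1_000_000))
    assert parallel_sum(data) == sum(data)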

Speaker Biography: Vivek Sarkar is Professor and Chair of Computer Science at Rice University. He conducts research in multiple aspects of parallel software including programming languages, program analysis, compiler optimizations and runtimes for parallel and high performance computer systems. He currently leads the Habanero Extreme Scale Software Research Laboratory at Rice University, and serves as Associate Director of the NSF Expeditions Center for Domain-Specific Computing and PI of the DARPA-funded Pliny project on “big code” analytics. Prior to joining Rice in July 2007, Vivek was Senior Manager of Programming Technologies at IBM Research. His responsibilities at IBM included leading IBM’s research efforts in programming models, tools, and productivity in the PERCS project during 2002-2007 as part of the DARPA High Productivity Computing Systems program. His prior research projects include the X10 programming language, the Jikes Research Virtual Machine for the Java language, the ASTI optimizer used in IBM’s XL Fortran product compilers, the PTRAN automatic parallelization system, and profile-directed partitioning and scheduling of Sisal programs. In 1997, he was on sabbatical as a visiting associate professor at MIT, where he was a founding member of the MIT Raw multicore project. Vivek became a member of the IBM Academy of Technology in 1995, became the E.D. Butcher Chair in Engineering at Rice University in 2007, and was inducted as an ACM Fellow in 2008. He holds a B.Tech. degree from the Indian Institute of Technology, Kanpur, an M.S. degree from the University of Wisconsin-Madison, and a Ph.D. from Stanford University. Vivek has been serving as a member of the US Department of Energy’s Advanced Scientific Computing Advisory Committee (ASCAC) since 2009.

Video Recording >>

October 29, 2015

From assisted-living to the hospital ward and operating room, robotic assistants with manipulation capabilities have the potential to significantly reduce costs while increasing consistency and quality of care. To create algorithms for robot manipulation in this broad class of domains, we must address the fundamentals of interacting with the world and collaborating with people. Significant progress has been made in manipulating rigid objects, where algorithms exist for general-purpose pick-and-place tasks regardless of the size and shape of the object. However, no such methods exist for a similarly broad and practical class of deformable object manipulation tasks, such as making a bed or retracting tissue during surgery. The problem is indeed challenging, as these objects are not straightforward to model and have infinite-dimensional configuration spaces, making it difficult to apply established motion planning approaches. Our approach seeks to bypass these difficulties by representing deformable objects using simplified geometric models at both the global and local planning levels. Though we do not predict the state of the object precisely, we nevertheless can perform tasks such as cable-routing, cloth folding, and surgical probe insertion in geometrically-complex environments. Building on this work, our new projects in this area aim to blend exploration of the model space with goal-directed manipulation of deformable objects and to generalize our methods to motion planning for inherently safe soft robot arms, where we can exploit contact to mitigate actuation uncertainty.

Speaker Biography: Dmitry Berenson received a BS in Electrical Engineering from Cornell University in 2005 and received his Ph.D. degree from the Robotics Institute at Carnegie Mellon University in 2011, where he was supported by an Intel PhD Fellowship. He completed a post-doc at UC Berkeley in 2011 and started as an Assistant Professor in Robotics Engineering and Computer Science at WPI in 2012. He founded and directs the Autonomous Robotic Collaboration (ARC) Lab at WPI, which focuses on motion planning, manipulation, and human-robot collaboration.

Student Seminar

November 4, 2015

Informal spoken content is being generated, stored, and shared on mind-boggling scales across the globe. Smart phones and social media, among other technologies, have enabled the creation of large volume repositories of user-generated, informal content in all spoken languages.

In spite of the lack of resources typically needed to train automatic speech recognition (ASR) systems in most languages, we are motivated by the robustness of topic information (content words, keywords, etc.) in the presence of ASR errors. We explore ways in which topic information can be used to enable efficient recognition and retrieval of this wealth of spoken documents. From this we consider the interrelated concepts of locality, repetition, and ‘subject of discourse’ in the context of speech processing applications: ASR, speech retrieval via keyword search (KWS), and topic identification of speech.

We demonstrate how supervised and unsupervised models of topics, applicable to any language, can improve accuracy in accessing spoken content. In particular we focus on two complementary aspects of topic information in lexical content: local context (locality or repetition of word usage) and broad context (the typical ‘subject matter’ definition of a topic). By augmenting ASR language models with topic information, we demonstrate consistent improvements in both recognition and retrieval metrics.
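
A toy sketch of topic-adapted language modeling in this spirit (hypothetical vocabulary and probabilities; the models in the talk are considerably richer): a background unigram model is interpolated with the topic-conditional model that best explains the document so far, so topic-typical words receive more probability mass.

    import math
    from collections import Counter

    background = {"the": 0.5, "game": 0.1, "court": 0.1, "vote": 0.1, "senate": 0.2}
    topic_models = {
        "sports":   {"the": 0.4, "game": 0.4, "court": 0.2},
        "politics": {"the": 0.4, "court": 0.1, "vote": 0.3, "senate": 0.2},
    }

    def adapted_prob(word, doc_words, lam=0.5):
        counts = Counter(doc_words)
        def loglik(topic):          # how well a topic explains the document so far
            model = topic_models[topic]
            return sum(c * math.log(model.get(w, 1e-9)) for w, c in counts.items())
        best = max(topic_models, key=loglik)   # hard topic choice, for simplicity
        return lam * background.get(word, 1e-9) + (1 - lam) * topic_models[best].get(word, 1e-9)

    doc = ["the", "game", "game", "court"]
    print(adapted_prob("game", doc))   # boosted relative to the background model alone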

We add locality to bags-of-words topic identification models, quantify the relationship between topic information and keyword retrieval, and consider word repetition both in terms of keyword based retrieval and language modeling. Finally, we combine these concepts and develop joint models of local and broad context via latent topic models.

Speaker Biography: Jonathan Wintrode received the A.B. degree cum laude in Computer Science from Harvard University in 2000 and the M.S. degree in Computer Science from the Naval Postgraduate School in 2005, enrolled in the Computer Science Ph.D. program at Johns Hopkins University in 2010, and completed the M.S.E. degree in Computer Science in 2014. He won the Naval Postgraduate School Computer Science Department’s Outstanding Department of Defense Student Award in 2005. His research focuses on identifying and leveraging topic content for speech recognition and retrieval systems.

November 12, 2015

The concept of personal privacy as a precious and fragile commodity worthy of protection has come under siege in today’s data-driven world. Users are eager to share their data online, and mobile applications and web services aggressively collect and monetize that information. This talk describes our vision for a new, privacy-preserving world; in it, users are more aware of the privacy implications of their online actions, and systems and applications are designed from the ground up with privacy in mind. In support of this vision, we describe our research agenda to design, build, and evaluate new transparency tools that increase users’ and privacy watchdogs’ visibility into how personal data is being used by applications, and programming abstractions and tools that facilitate the construction of privacy-mindful applications. We provide two examples of such tools and abstractions. First, we describe Sunlight, a new web transparency tool that helps privacy watchdogs track how web services use individuals’ personal data to target ads, personalize content, or adjust prices. Second, we describe FairTest, a new testing toolkit that helps programmers test for unfair or discriminatory effects within their data-driven applications. Overall, our tools and abstractions aim to increase privacy by promoting a more responsible, fair, and accountable approach to user data management.
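
As a rough illustration of the kind of check such a testing toolkit automates (a hypothetical sketch, not the FairTest API or its statistics): compare an application's decision rate across values of a protected attribute in a log of its outputs.

    # Hypothetical decision log: (protected_group, received_discount).
    decisions = [
        ("A", True), ("A", True), ("A", False), ("A", True),
        ("B", False), ("B", False), ("B", True), ("B", False),
    ]

    def rate(group):
        outcomes = [d for g, d in decisions if g == group]
        return sum(outcomes) / len(outcomes)

    # A large gap between the groups' rates flags a potentially unfair effect
    # that the developer should investigate.
    disparity = rate("A") - rate("B")
    print(f"rate(A)={rate('A'):.2f}  rate(B)={rate('B'):.2f}  disparity={disparity:.2f}")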

Speaker Biography: Roxana Geambasu is an Assistant Professor of Computer Science at Columbia University. She joined Columbia in Fall 2011 after finishing her Ph.D. at the University of Washington. For her work in cloud and mobile data privacy, she received an Early Career Award in Cybersecurity from the University of Washington Center for Academic Excellence, a Microsoft Research Faculty Fellowship, a 2014 “Brilliant 10” Popular Science nomination, an NSF CAREER award, an Honorable Mention for the 2013 inaugural Dennis M. Ritchie Doctoral Dissertation Award, a William Chan Dissertation Award, two best paper awards at top systems conferences, and the first Google Ph.D. Fellowship in Cloud Computing.

Video Recording >>

November 24, 2015

This talk will tell two stories, one about designing sustainable data centers and one about the underlying algorithmic challenges, which fall into the context of online convex optimization.

Story 1: The typical message surrounding data centers and energy is an extremely negative one: Data centers are energy hogs. This message is pervasive in both the popular press and academia, and it certainly rings true. However, the view of data centers as energy hogs is too simplistic. One goal of this talk is to highlight that, yes, data centers use a lot of energy, but data centers can also be a huge benefit in terms of integrating renewable energy into the grid and thus play a crucial role in improving the sustainability of our energy landscape. In particular, I will highlight a powerful alternative view: data centers as demand response opportunities.

Story 2: Typically in online convex optimization it is enough to exhibit an algorithm with low (sub-linear) regret, which implies that the algorithm can match the performance of the best static solution in retrospect. However, what if one additionally wants to maintain performance that is nearly as good as the dynamic optimal, i.e., a good competitive ratio? In this talk, I’ll highlight that it is impossible for an online algorithm to simultaneously achieve these goals. Luckily though, in practical settings (like data centers), noisy predictions about the future are often available, and I will show that, under a general model of prediction noise, even very limited predictions about the future are enough to overcome the impossibility result.
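
For readers less familiar with the two benchmarks being contrasted, a standard formalization (stated here in its basic form; the data-center formulations typically also include a switching cost between successive decisions): with convex cost functions c_t revealed online and decisions x_t chosen before c_t is seen,

    \mathrm{Regret}_T = \sum_{t=1}^{T} c_t(x_t) - \min_{x} \sum_{t=1}^{T} c_t(x)    (gap to the best static solution in hindsight)

    \mathrm{CR} = \sup_{c_1,\ldots,c_T} \frac{\sum_{t=1}^{T} c_t(x_t)}{\min_{x_1,\ldots,x_T} \sum_{t=1}^{T} c_t(x_t)}    (worst-case ratio to the dynamic optimum)

Sub-linear regret means the average per-round gap to the best fixed decision vanishes as T grows, while a constant competitive ratio bounds the multiplicative gap to the best time-varying sequence; the impossibility result mentioned above is that no online algorithm can guarantee both simultaneously in this setting, which is the trade-off that predictions help overcome.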

Speaker Biography: Adam Wierman is a Professor in the Department of Computing and Mathematical Sciences at the California Institute of Technology, where he is a founding member of the Rigorous Systems Research Group (RSRG) and maintains a popular blog called Rigor + Relevance. His research interests center around resource allocation and scheduling decisions in computer systems and services. He received the 2011 ACM SIGMETRICS Rising Star award and the 2014 IEEE Communications Society William R. Bennett Prize, and has coauthored papers that received best paper awards at ACM SIGMETRICS, IEEE INFOCOM, IFIP Performance (twice), IEEE Green Computing Conference, IEEE Power & Energy Society General Meeting, and ACM GREENMETRICS.

Video Recording >>

December 1, 2015

Geometric problems – such as finding corresponding points over a collection of shapes, or computing shape deformation under geometric constraints – pose various computational challenges. I will show that despite the very different nature of these two highly non-convex problems, Semidefinite Programming (SDP) can be leveraged to provide a tight convex approximation in both cases. A different approach is used for each problem, demonstrating the versatility of SDP: (i) For establishing point correspondences between shapes, we devise an SDP relaxation. I will show it is a hybrid of the popular spectral and doubly-stochastic relaxations, and is in fact tighter than both. (ii) For the computation of piecewise-linear mappings, we introduce a family of maximal SDP restrictions. Solving a sequence of such SDPs enables the optimization of functionals and constraints expressed in terms of singular values, which naturally model various geometry processing problems.
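
For context on the doubly-stochastic relaxation mentioned above (a toy numpy sketch of that baseline relaxation, not the speaker's SDP, which the abstract argues is tighter): permutation matrices are relaxed to doubly-stochastic matrices, and a matching-score matrix can be mapped into that set by Sinkhorn row/column normalization.

    import numpy as np

    def sinkhorn(scores, iters=200):
        P = np.exp(scores)                        # strictly positive matrix from matching scores
        for _ in range(iters):
            P = P / P.sum(axis=1, keepdims=True)  # normalize rows
            P = P / P.sum(axis=0, keepdims=True)  # normalize columns
        return P                                  # approximately doubly stochastic

    rng = np.random.default_rng(1)
    P = sinkhorn(rng.normal(size=(4, 4)))
    print(np.round(P, 3))
    print(P.sum(axis=0), P.sum(axis=1))           # both close to all-ones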

Speaker Biography: Shahar Kovalsky is a PhD student in the Department of Computer Science and Applied Mathematics at the Weizmann Institute of Science, Israel. His main research interests are in numerical optimization, computer graphics and vision, and in particular, applications of convex optimization to geometry processing. Shahar holds a B.Sc. in Mathematics and a B.Sc. and M.Sc. in Electrical Engineering from Ben-Gurion University.

Krasnopoler Lecturer

December 7, 2015

This talk will walk through the process Valve (developer of Half-Life, Team Fortress, DOTA, Counter-Strike, Portal, Left 4 Dead and Steam) uses to create, iterate on, and ship its products, and how that same process can yield the skills and experience necessary to succeed in the software industry. The key takeaway is that what works inside Valve will work just as well when you’re an undergraduate looking for your first job, and should help you develop the ability to get where you want to go.

Speaker Biography: Mike Ambinder is a senior experimental psychologist at Valve who joined the company in 2008. He has a PhD in Experimental Psychology (with an emphasis on visual cognition) from the University of Illinois, and a BA in Computer Science and Psychology from Yale University. He is the author or co-author of 11 published research papers and has given over 60 invited talks at various conferences. His work at Valve focuses on the application of knowledge and methodologies from psychology to game design.

Student Seminar

December 18, 2015

Many tasks in natural language processing require both human-annotated data and special-purpose models for separate domains and languages. In this work, we develop models and associated inference techniques that have milder data requirements and broader applicability across domains and languages. We develop our methods in the context of three specific problems: proper name modeling, named-entity recognition, and cross-document coreference resolution. Our technical approach is distinguished by combining ideas from representation learning and Bayesian inference.

Speaker Biography: Nick Andrews is a PhD candidate in Computer Science at Johns Hopkins University, co-advised by Jason Eisner and Mark Dredze. His research focuses on the problem of extracting structured information from unstructured natural language with minimal human supervision. His technical interests are in generative modeling, neural networks, and Markov chain Monte Carlo methods for approximate inference. He holds Bachelor’s degrees in Mathematics and Computer Science from Virginia Tech.