Fall 2014

Video Recording >>

September 11, 2014

In this talk, I will tell the story of our work with some truly remarkable undergraduate students at Rutgers-Camden, who despite many odds have achieved success that is unprecedented for the Camden campus. I will discuss the various challenges that we faced and some ideas that have worked very well (and some that have not) for us. We have been applying some of these ideas in our work with high school students and students at other institutions.

Speaker Biography: Dr. Rajiv Gandhi is an Associate Professor of Computer Science at Rutgers University-Camden. He received his Ph.D. in Computer Science from the University of Maryland, College Park in 2003. His research interests lie in the broad area of theoretical computer science. Specifically, he is interested in approximation and randomized algorithms, distributed algorithms, and graph theory. He has published papers in these areas in leading journals and conferences. He has been the recipient of several teaching excellence awards, at Rutgers and at other universities. He was also the recipient of the Chancellor’s Award for Civic Engagement at Rutgers-Camden in 2013. He was a Fulbright Fellow from January to June 2011, during which time he worked with students in Mumbai. Since 2009, he has also been working with high school students as part of the Program in Theoretical Computer Science.

Video Recording >>

September 23, 2014

Human language acquisition and use are central problems for the advancement of machine intelligence, and pose some of the deepest scientific challenges in accounting for the capabilities of the human mind. In this talk I describe several major advances we have recently made in this domain, made possible by combining leading ideas and techniques from computer science and cognitive science. Central to these advances is the use of generative probabilistic models over richly structured linguistic representations. In language comprehension, I describe how we have used these models to develop detailed theories of incremental parsing that unify the central problems of ambiguity resolution, prediction, and syntactic complexity, and that yield compelling quantitative fits to behavioral data from both controlled psycholinguistic experiments and reading of naturalistic text. I also describe noisy-channel models relating the accrual of uncertain perceptual input to sentence-level language comprehension that account for critical outstanding puzzles for previous theories, and that when combined with reinforcement learning yield state-of-the-art models of human eye movement control in reading. This work on comprehension sets the stage for a theory in language production of how speakers tend toward an optimal distribution of information content throughout their utterances, whose predictions we confirm in statistical analysis of a variety of types of optional function word omission. Finally, I conclude with examples of how we use nonparametric models to account for some of the most challenging problems in language acquisition, including how humans learn phonetic category inventories and acquire and rank phonological constraints.
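
As a rough illustration of the noisy-channel idea (a generic statement of the principle, not the specific model presented in the talk), the comprehender infers the intended sentence $s$ from noisy perceptual input $I$ by Bayesian inversion,

$P(s \mid I) \propto P(I \mid s)\, P(s),$

so that strong prior expectations $P(s)$ can override the literal input whenever the perceptual evidence $P(I \mid s)$ is weak or ambiguous.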

Speaker Biography: Roger Levy is Associate Professor of Linguistics at the University of California, San Diego, where he directs the world’s first Computational Psycholinguistics Laboratory. He received his B.S. from the University of Arizona and his M.S. and Ph.D. from Stanford University. He was a UK ESRC Postdoctoral Fellow at the University of Edinburgh before his current appointment. His awards include an NSF CAREER grant, an Alfred P. Sloan Research Fellowship, and a Fellowship at the Center for Advanced Study in the Behavioral Sciences. Levy’s research program is devoted to theoretical and applied questions at the intersection of cognition and computation, focusing on human language processing and acquisition. Inherently, linguistic communication involves the resolution of uncertainty over a potentially unbounded set of possible signals and meanings. How can a fixed set of knowledge and resources be acquired and deployed to manage this uncertainty? To address these questions Levy uses a combination of computational modeling and psycholinguistic experimentation. This work furthers our foundational understanding of linguistic cognition, and helps lay the groundwork for future generations of intelligent machines that can communicate with humans via natural language.

Video Recording >>

September 30, 2014

As far as we know, no single kind of cue carries sufficient information to enable a language to be successfully learnt, so some kind of cue integration seems essential. This talk uses computational models to study how a diverse range of information sources can be exploited in word learning. I describe a non-parametric Bayesian framework called Adaptor Grammars, which can express computational models that exploit information ranging from stress cues through to discourse and contextual cues for learning words. We use these models to compare two different approaches a learner could use to acquire a language. A staged learner learns different aspects of a language independently of each other, while a joint learner learns them simultaneously. A joint learner can take advantage of synergistic dependencies between linguistic components in ways that a staged learner cannot. By comparing “minimal pairs” of models we show that there are interactions between the non-local context, syllable structure and the lexicon that a joint learner could synergistically exploit. This suggests that it would be advantageous for a language learner to integrate different kinds of cues according to Bayesian principles. We end with a discussion of the broader implications of a non-parametric Bayesian approach, and survey other applications of these techniques.
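
As a schematic example of the kind of model expressible in this framework (the standard unigram word-segmentation grammar from this literature, not necessarily one of the specific models compared in the talk), sentences are sequences of words and words are sequences of phonemes, with the Word nonterminal adapted (marked here by underlining) so that previously generated words are cached and preferentially reused:

$\text{Sentence} \rightarrow \text{Word}^{+}$
$\underline{\text{Word}} \rightarrow \text{Phoneme}^{+}$

Because the adapted Word nonterminal memoizes whole subtrees, the model learns a lexicon as a by-product of segmentation; richer variants add syllable structure, collocations, or contextual dependencies as further adapted levels.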

Speaker Biography: Mark Johnson is a Professor of Language Science (CORE) in the Department of Computing at Macquarie University, and is Director of the Macquarie Centre for Language Sciences. He received a BSc (Hons) in 1979 from the University of Sydney, an MA in 1984 from the University of California, San Diego, and a PhD in 1987 from Stanford University. He held a postdoctoral fellowship at MIT from 1987 until 1988, and has been a visiting researcher at the University of Stuttgart, the Xerox Research Centre in Grenoble, CSAIL at MIT, and the Natural Language group at Microsoft Research. He has worked on a wide range of topics in computational linguistics, and is mainly known for his work on syntactic parsing and its applications to text and speech processing. Recently he has developed non-parametric Bayesian models of human language acquisition. He was President of the Association for Computational Linguistics in 2003 and will be President of ACL’s SIGDAT (the organisation that runs EMNLP) in 2015. From 1989 until 2009 he was a professor in the Departments of Cognitive and Linguistic Sciences and Computer Science at Brown University.

Video Recording >>

October 2, 2014

Characterizing human language processing as rational probabilistic inference has yielded a number of useful insights. For example, surprisal theory (Hale, Levy) represents an elegant formalization of incremental processing that has met with empirical success (and some challenges) in accounting for word-by-word reading times. A theoretical challenge now facing the field is integrating rational analyses with bounded computational/cognitive mechanisms, and with task-oriented perception and action. A standard approach to such challenges (Marr and others) is to posit (bounded) mechanisms/algorithms that approximate functions specified at a rational analysis level. I discuss an alternative approach, computational rationality, that incorporates the bounds themselves in the definition of rational problems of utility maximization. This approach naturally admits of two kinds of analyses: the derivation of control strategies (policies or programs) for bounded agents that are optimal in local task settings, and the identification of processing mechanisms that are optimal across a broad range of tasks. As an instance of the first kind of analysis, we consider the derivation of eye-movement strategies in a simple word reading task, given general assumptions about noisy lexical representations and oculomotor architecture. These analyses yield novel predictions of task and payoff effects on fixation durations that we have tested and confirmed in eye-tracking experiments. (The model can be seen as a kind of ideal-observer/actor model, and naturally extends to account for distractor-ratio and pop-out effects in visual search). As an instance of the second kind of analysis, we consider properties of an optimal short-term memory system for sentence parsing, given general assumptions about noisy representations of linguistic features. Such a system provides principled explanations of similarity-based interference slow-downs and certain speed-accuracy tradeoffs in sentence processing. I conclude by sketching steps required for an integrated theory that jointly derives task-driven parsing and eye-movement strategies constrained by noisy memory and perception.
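
For reference, the central quantity of surprisal theory is the negative log conditional probability of each word given its preceding context,

$\text{surprisal}(w_i) = -\log P(w_i \mid w_1, \ldots, w_{i-1}),$

which is predicted to track the word’s incremental processing difficulty, for example as reflected in word-by-word reading times.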

Speaker Biography: Richard Lewis is a cognitive scientist at the University of Michigan, where he is Professor of Psychology and Linguistics. He received his PhD in Computer Science at Carnegie Mellon with Allen Newell, followed by a McDonnell Fellowship in Psychology at Princeton and a position as Assistant Professor of Computer Science at Ohio State. His research interests include sentence processing, eye movements, short-term memory, cognitive architecture, reinforcement learning and intrinsic reward, and optimal control approaches to modeling human behavior. He was elected a Fellow of the Association for Psychological Science in 2010.

Video Recording >>

October 7, 2014

If a person is trained to recognize or categorize objects or events using one sensory modality, the person can often recognize or categorize those same (or similar) objects and events via a novel modality, an instance of cross-modal transfer of knowledge. How is this accomplished? The Multisensory Hypothesis states that people extract the intrinsic, modality-independent properties of objects and events, and represent these properties in multisensory representations. These representations mediate the transfer of knowledge across modality-specific representations. In this talk, I’ll present three studies of the Multisensory Hypothesis, using experimental and computational methodologies. The first study examines visual-haptic transfer of object shape knowledge, the second study examines visual-auditory transfer of sequence category knowledge, and the final study examines a novel latent variable model of multisensory perception.
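
As a generic illustration of the kind of structure the Multisensory Hypothesis suggests (not necessarily the specific latent variable model examined in the third study), a modality-independent latent variable $z$ can be assumed to generate both the visual observation $v$ and the haptic observation $h$,

$P(v, h) = \sum_{z} P(z)\, P(v \mid z)\, P(h \mid z),$

so that cross-modal transfer amounts to inferring $z$ from one modality and using it to predict or categorize input in the other.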

Speaker Biography: For my undergraduate studies, I attended the University of Pennsylvania where I majored in Psychology. I spent the next two years working as a Research Assistant in a biomedical research laboratory at Rockefeller University. For graduate school, I attended the University of Massachusetts at Amherst where I earned a Ph.D. degree in Computer and Information Science (graduate advisor: Andrew Barto). I then served in two postdoc positions, one in the Department of Brain & Cognitive Sciences at the Massachusetts Institute of Technology (postdoc advisor: Michael Jordan), and the other in the Department of Psychology at Harvard University (postdoc advisor: Stephen Kosslyn). I’m currently a faculty member at the University of Rochester where my title is Professor of Brain & Cognitive Sciences, of Computer Science, and of the Center for Visual Science. I am also a member of the Center for Computation and the Brain.

Video Recording >>

October 21, 2014

A fundamental problem of vision is how to deal with the astronomical complexity of images, scenes, and visual tasks. For example, considering the enormous input space of images and output space of objects, how can a human observer obtain a coarse interpretation of an image within less than 150 msec? And how can the observer, given more time, be able to parse the image into its components (objects, object parts, and scene structures) and reason about their relationships and actions? The same complexity problem arguably arises in most aspects of intelligence and addressing it is critical to understanding the brain and to designing artificial intelligence systems. This talk describes a research program which addresses this problem by using hierarchical compositional models which represent objects and scene structures in terms of elementary components which can be grouped together to form more complex structures, shared between different objects, and which are represented more abstractly in summary form. This program is illustrated by examples including: (i) low-level representations of images, (ii) segmentation and bottom-up attentional mechanisms, (iii) detection and parsing of objects, (iv) estimating the 3D shapes of objects and scene structures from single images. We briefly discuss ongoing work that relates these models to experimental studies of the brain, including psychophysics, electrophysiology, and fMRI.
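
A minimal, hypothetical sketch of the compositional idea (illustrative only, not a model from the talk): an object is scored by recursively combining the image evidence for its parts with terms for how the parts are arranged, so that parts can be shared across objects and reused at different levels of the hierarchy.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Component:
    name: str
    evidence: float                      # local image evidence for this component
    children: List["Component"] = field(default_factory=list)

def score(node: Component, relation_term: float = 0.1) -> float:
    """Score a composition: own evidence, plus children's scores, plus a
    fixed per-child bonus standing in for spatial-relation/geometry terms."""
    return node.evidence + sum(score(c, relation_term) + relation_term
                               for c in node.children)

# Example: a "face" built from lower-level parts that other objects could share.
face = Component("face", 0.5,
                 [Component("eye", 0.8), Component("eye", 0.7), Component("mouth", 0.6)])
print(score(face))   # 2.9 with the default relation term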

Speaker Biography: Professor Yuille is the Director of the UCLA Center for Cognition, Vision, and Learning, as well as a Professor in the UCLA Department of Statistics, with courtesy appointments in the Departments of Psychology, Computer Science, and Psychiatry. He is affiliated with the UCLA Staglin Center for Cognitive Neuroscience, the NSF Center for Brains, Minds and Machines, and the NSF Expedition in Visual Cortex On Silicon. His undergraduate degree was in Mathematics and his PhD in Theoretical Physics, both from the University of Cambridge. He has held appointments at MIT, Harvard, the Smith-Kettlewell Eye Research Institute, and UCLA. His research interests include computer vision, cognitive science, neural network modeling, and machine learning. He has over three hundred peer-reviewed publications. He has won several awards, including the Marr Prize and the Helmholtz Test of Time Award. He is a Fellow of the IEEE.

Video Recording >>

October 23, 2014

Preparing students for careers in professional computer security can be difficult, if for no other reason than the breadth of knowledge required today. The security profession includes widely diverse subfields: cryptography, network architectures, programming, programming languages, design, coding practices, software testing, pattern recognition, economic analysis, and even human psychology. While an individual may choose to specialize in one of these narrower elements, there is a pressing need for practitioners who have a solid understanding of the unifying principles of the whole.

In teaching network security to graduate security students, I created the PLAYGROUND network simulation as a pedagogical tool. The primary goals were to (1) provide a simulation sufficiently powerful to permit rigorous study of the desired principles, (2) at the same time reduce the unnecessary and distracting complexities inherent in real-life networks, and (3) enable the application of security concepts in a “from scratch” environment. Using this framework, I created a semester-long, multi-stage lab for student experimentation. The lab work provided an effective mix of a wide range of the security subfields previously mentioned, and a framework for “big picture” comprehension of all of the lectures, readings, and other non-lab assignments in the course. This talk will describe PLAYGROUND and explain the pedagogical theory behind it, as well as demonstrate some of the assignments and experiences teaching the course.

Speaker Biography: Seth James Nielson is a Principal at Harbor Labs and a lecturer in the Johns Hopkins Computer Science department. He received his Ph.D. from Rice University in 2009. For the past ten years, he has worked as a security consultant, both building and analyzing computer security systems, including advanced high-speed firewalls, anti-virus, intrusion detection, DRM, secure communications products, and so forth. Additionally, Dr. Nielson has consulted on projects related to the DMCA, trade secrets, code theft, wiretapping, and protecting PII.

Video Recording >>

Distinguished Lecturer

November 4, 2014

Since its introduction in 1982, the area of two-party and multi-party computation has been an exciting and vibrant research topic. Theoretical research in, and applications of, multi-party computation are a source of beautiful results and of great importance in the era of the internet and cloud computing. Solutions from this area provide enhanced security and privacy in our connected world. In this talk we will give a flavor of the techniques and discuss various applications introduced over the 30 years of innovation in the field.
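
As a toy illustration of the flavor of these techniques (a classic textbook example, not one of the specific protocols from the talk), parties can compute the sum of their private inputs without revealing them by additive secret sharing: each input is split into random shares that sum to it modulo a large number, the shares are exchanged, and only share totals are ever combined. A minimal sketch in Python:

import random

MOD = 2**61 - 1   # any sufficiently large modulus

def share(secret, n):
    """Split `secret` into n additive shares that sum to it modulo MOD."""
    shares = [random.randrange(MOD) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

def secure_sum(inputs):
    """Each party shares its input; each party then adds the one share it
    received from every other party, and only these partial sums are published."""
    n = len(inputs)
    all_shares = [share(x, n) for x in inputs]                 # all_shares[i][j]: party i's share sent to party j
    partial = [sum(all_shares[i][j] for i in range(n)) % MOD   # party j's published partial sum
               for j in range(n)]
    return sum(partial) % MOD

print(secure_sum([12, 30, 7]))   # 49, with no party ever seeing another party's input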

Speaker Biography: Tal Rabin is the manager and a research staff member of the Cryptography Research Group at IBM’s T.J. Watson Research Center. Her research focuses on the general area of cryptography and, more specifically, on secure multiparty computation, threshold cryptography, and proactive security, which the National Research Council Cybersecurity Report to Congress identified as “exactly the right primitives for building distributed systems that are more secure”. Rabin obtained her Ph.D. in Computer Science from the Hebrew University, Israel in 1994, and was an NSF Postdoctoral Fellow at MIT from 1994 to 1996. Following her postdoc, she joined the cryptography group at IBM Research in 1996 and began managing it in 1997. She has served as Program Chair and General Chair of leading cryptography conferences and is an editor of the Journal of Cryptology. She is a member of the SIGACT Executive Board, serves as a council member of the Computing Community Consortium, and is on the membership committee of the AWM (Association for Women in Mathematics). Rabin is the 2014 Anita Borg Women of Vision Award winner for innovation. She initiated and organizes the Women in Theory Workshop, a biennial event for graduate students in theoretical computer science. Rabin has appeared in the New York Times (“Women Atop Their Fields Dissect the Scientific Life”), at the World Science Festival, and on WNYC’s (NPR) Science Fair.

Video Recording >>

November 11, 2014

I will attempt to cover several interrelated topics in analysis of big biomedical data, spending more time on parts that generate feedback.

First, I will introduce our recent study analyzing phenotypic data harvested from over 100 million unique patients. Curiously, these non-genetic large-scale data can be used for genetic inferences. We discovered that complex diseases are associated with unique sets of rare Mendelian variants, referred to as the “Mendelian code.” We found that the genetic loci indicated by this code were enriched for common risk alleles. Moreover, we used probabilistic modeling to demonstrate for the first time that deleterious Mendelian variants likely contribute to complex disease risk in a non-additive fashion.

The second topic that I hope to cover is related to the analysis of apparent clusters of neurodevelopmental disorders. Disease clusters are defined as geographically compact areas where a particular disease, such as a cancer, shows a significantly increased rate. It is presently unclear how common such clusters are for neurodevelopmental maladies, such as autism spectrum disorders (ASD) and intellectual disability (ID). As in the first story, examining data for one third of the whole US population, we demonstrated that (1) ASD and ID manifest strong clustering across US counties; (2) counties with high ASD rates also appear to have high ID rates; and (3) the spatial variation of both phenotypes appears to be driven by the environment and, to a lesser extent, by economic incentives at the state level.

Speaker Biography: Andrey Rzhetsky is a Professor of Medicine and Human Genetics at the University of Chicago. He is also a Pritzker Scholar and a Senior Fellow of both the Computation Institute and the Institute for Genomics and Systems Biology at the University of Chicago. His research is focused on computational analysis of complex human phenotypes in the context of changes and perturbations of underlying molecular networks and environmental insults.

Video Recording >>

Distinguished Lecturer

November 13, 2014

In this talk, I will survey a number of efficient cryptographic techniques that can increase user privacy when storing and utilizing data stored in a cloud. I will first describe multiple vulnerabilities that exist (and are not protected by standard encryption and authentication mechanisms). I will then describe several general techniques that allow one to minimize these risks in case of a malicious (or negligent) cloud provider, including combatting insider threats. These techniques are based on several advances in cryptography that make theoretical results far more practical. I will also describe several applications of these techniques to real-life scenarios. The talk will be self-contained and accessible to the general audience.
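
As one small, generic example of a protection that goes beyond encrypting the data itself (an illustration only; the talk covers far more sophisticated techniques), a client can keep a short Merkle-root fingerprint of its outsourced blocks and later detect whether a negligent or malicious provider has altered or dropped any of them. A sketch in Python:

import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Hash each block, then repeatedly hash pairs until one root remains."""
    level = [sha256(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])            # duplicate the last node if the level is odd
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-0", b"block-1", b"block-2"]
fingerprint = merkle_root(blocks)              # kept locally before uploading the blocks

# Later: re-download the blocks and verify none were silently modified.
assert merkle_root(blocks) == fingerprint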

Speaker Biography: Rafail Ostrovsky is a Professor of Computer Science and Professor of Mathematics at UCLA. Dr. Ostrovsky received his Ph.D. in Computer Science from MIT in 1992. Prof. Ostrovsky’s research centers on various issues in theoretical computer science, including complexity theory, algorithms, cryptography, and computer security. Prof. Ostrovsky is a Fellow of the International Association for Cryptologic Research (IACR); he has 11 U.S. patents issued and over 200 papers published in refereed journals and conferences. Dr. Ostrovsky currently serves as Vice-Chair of the IEEE Technical Committee on Mathematical Foundations of Computing; he has served on 38 international conference program committees, including as Program Chair for FOCS 2011. He is a member of the editorial boards of JACM, Algorithmica, and the Journal of Cryptology. In 2011, Dr. Ostrovsky was invited by the Honorable Michael B. Donley (Secretary of the Air Force) to serve on the U.S. Air Force Third Annual National Security Scholars Conference. He was invited to be a plenary speaker at a conference organized by the FBI in 2009. Dr. Ostrovsky was a plenary keynote speaker for the Public Key Cryptography international conference in 2007. Dr. Ostrovsky is a recipient of multiple awards and honors, including the Henry Taub Prize. At UCLA, Prof. Ostrovsky heads the multi-disciplinary research center on security and cryptography at the Henry Samueli School of Engineering and Applied Science.

Video Recording >>

December 4, 2014

Sequencing of mRNA through RNA-seq has transformed our ability to identify the genes responsible for adaptive evolution, a fundamental topic in modern evolutionary biology. Using RNA-seq, scientists are now able to generate extensive transcriptome data from diverse eukaryotes in a timely and cost-effective manner, and simultaneously characterize transcribed genes in multiple cell types and changing environments. The enormous amounts of data generated by the sequencing projects require sophisticated, efficient, and innovative new algorithms to analyze them. Previous efforts to model genes de novo, via recognition of splice sites, coding regions, and other signals, have been superseded by more accurate methods based on RNA-seq data. Here we introduce a new transcript assembly algorithm, StringTie, which uses a combination of de novo assembly ideas and a novel application of a network flow algorithm, a method imported from other areas of computer science research.
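
A rough sketch of the network-flow idea (a toy illustration of the general principle, not StringTie's actual implementation): exons and splice junctions observed in the RNA-seq alignments form a directed splice graph, read coverage supplies edge capacities, and a maximum flow from transcript start to transcript end indicates how much expression the assembled isoform paths can jointly carry.

import networkx as nx

# Toy splice graph for one gene: two isoforms, one of which skips exon2.
G = nx.DiGraph()
G.add_edge("start", "exon1", capacity=100)   # capacities ~ read coverage
G.add_edge("exon1", "exon2", capacity=60)    # isoform A: exon1 -> exon2 -> exon3
G.add_edge("exon1", "exon3", capacity=40)    # isoform B: exon1 -> exon3 (skips exon2)
G.add_edge("exon2", "exon3", capacity=60)
G.add_edge("exon3", "end", capacity=100)

flow_value, flow = nx.maximum_flow(G, "start", "end")
print(flow_value)   # 100: total expression the graph supports
print(flow)         # per-edge flows, suggesting relative abundances of the two isoforms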

Speaker Biography: Mihaela Pertea is a computer scientist who since 2011 has been an Assistant Professor in the McKusick-Nathans Institute of Genetic Medicine at Johns Hopkins University. She received her B.S. and M.S. degrees in Computer Science from the University of Bucharest in Romania, and her M.S.E. and Ph.D. in Computer Science from Johns Hopkins University. In 2001 she joined The Institute for Genomic Research (TIGR) in Rockville, Maryland, one of the world’s leading DNA sequencing centers at the time, where she was a Bioinformatics Scientist until 2005. From 2005 to 2011 she was an Assistant Research Scientist in the Center for Bioinformatics and Computational Biology at the University of Maryland, College Park. Dr. Pertea’s major area of research is computational biology – an interdisciplinary field situated at the intersection of several scientific disciplines, including molecular biology, computer science, and statistical mathematics.

Video Recording >>

December 11, 2014

The goal of causal inference is the discovery of cause-effect relationships from observational data, using appropriate assumptions. Two innovations that proved key for this task are a formal representation of potential outcomes under a random treatment assignment (due to Neyman), and viewing cause-effect relationships via directed acyclic graphs (due to Wright). Using a modern synthesis of these two ideas, I consider the problem of mediation analysis, which decomposes an overall causal effect into component effects corresponding to particular causal pathways. Simple mediation problems involving direct and indirect effects and linear models were considered by Baron and Kenny in the 1980s, and a significant literature has developed since.

In this talk, I consider mediation analysis at its most general: I allow arbitrary models, the presence of hidden variables, multiple outcomes, longitudinal treatments, and effects along arbitrary sets of causal pathways. There are three distinct but related problems to solve — a representation problem (what sort of potential outcome does an effect along a set of paths correspond to), an identification problem (can a causal parameter of interest be expressed as a functional of observed data), and an estimation problem (what are good ways of estimating the resulting statistical parameter). I report a complete solution to the first two problems, and progress on the third. In particular, I show that for some parameters that arise in mediation settings a triply robust estimator exists, which relies on an outcome model, a mediator model, and a treatment model, and which remains consistent if any two of these three models are correct.
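
For concreteness, the simplest instance of the representation problem can be written in potential-outcome notation: with treatment $A$, mediator $M$, and outcome $Y$, and writing $Y(a, M(a'))$ for the outcome under treatment $a$ with the mediator set to the value it would have taken under treatment $a'$, the total effect of changing $A$ from $0$ to $1$ decomposes as

$E[Y(1) - Y(0)] \;=\; \underbrace{E[Y(1, M(0)) - Y(0, M(0))]}_{\text{natural direct effect}} \;+\; \underbrace{E[Y(1, M(1)) - Y(1, M(0))]}_{\text{natural indirect effect}}.$

Effects along arbitrary sets of causal pathways, as considered in the talk, generalize these nested potential outcomes to multiple mediators and path-specific interventions.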

Some of the reported results are joint work with Eric Tchetgen Tchetgen, Caleb Miles, Phyllis Kanki, and Seema Meloni.

Speaker Biography: Ilya Shpitser is a Lecturer in Statistics at the University of Southampton. Previously, he was a Research Associate at the Harvard School of Public Health, working in the causal inference group with James M. Robins, Tyler VanderWeele, and Eric Tchetgen Tchetgen. His dissertation work was done at UCLA under the supervision of Judea Pearl. The fundamental question driving his research is this: “what makes it possible (or impossible) to infer cause-effect relationships?” Ilya received his Ph.D. in Computer Science from UCLA in 2008. He then did a postdoctoral fellowship in the causal inference group at the Harvard School of Public Health until 2012.