January 19, 2021
To determine how the perception, Autopilot, and driver monitoring systems of Tesla Model 3s interact with one another, and to gauge the scale of between- and within-car variability, a series of four on-road tests was conducted: three on a closed track and one on a public highway. Results show wide variability across and within three Tesla Model 3s, with excellent performance in some cases but likely catastrophic performance in others. This presentation will highlight not only how such interactions can be tested, but also how the results can inform requirements and designs of future autonomous systems.
Speaker Biography: Professor Mary (Missy) Cummings received her B.S. in Mathematics from the US Naval Academy in 1988, her M.S. in Space Systems Engineering from the Naval Postgraduate School in 1994, and her Ph.D. in Systems Engineering from the University of Virginia in 2004. A naval officer and military pilot from 1988-1999, she was one of the U.S. Navy’s first female fighter pilots. She is currently a Professor in the Duke University Electrical and Computer Engineering Department, and the Director of the Humans and Autonomy Laboratory. She is an American Institute of Aeronautics and Astronautics (AIAA) Fellow, and a member of the Defense Innovation Board. Her research interests include human supervisory control, explainable artificial intelligence, human-autonomous system collaboration, human-robot interaction, human-systems engineering, and the ethical and social impact of technology.
February 4, 2021
Decision-making processes are prevalent in many applications, yet their exact mechanisms are often unknown, making the processes difficult to replicate. For instance, how do medical providers decide on treatment plans for patients, and how do chronic patients choose and adhere to their dietary recommendations? Much effort has focused on learning these decisions through data-intensive approaches. However, the decision-making process is usually complex and highly constrained. While the inner workings of these constrained optimizations may not be fully known, their outcomes (the decisions) are often observable and available, e.g., the historical data on clinical treatments. In this talk, we focus on inverse optimization techniques to recover the underlying optimization models that lead to the observed decisions. Inverse optimization can be employed to infer the utility function of a decision-maker or to inform the guidelines for a complicated process. We present a data-driven inverse linear optimization framework (called Inverse Learning) that finds the optimal solution to the forward problem based on the observed data. We discuss how combining inverse optimization with machine learning can draw on the strengths of both approaches. Finally, we validate the methods using examples in the context of precision nutrition and personalized daily diet recommendations.
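The core idea of recovering a cost function from an observed decision can be sketched for a tiny linear program. The following is only an illustrative toy, not the speaker's Inverse Learning framework: the constraint matrix, the diet-style example, and the helper names are all made up. It uses the standard LP duality fact that an observed decision is optimal for min c·x subject to Ax ≥ b exactly when c lies in the cone of the constraint rows tight at that decision.

```python
# Toy inverse linear optimization: recover a cost vector that rationalizes
# an observed decision x0 for the forward LP  min c.x  s.t.  A x >= b.
# By LP duality, x0 is optimal iff c is a nonnegative combination of the
# constraint rows that are active (tight) at x0. Illustrative sketch only.

def active_rows(A, b, x0, tol=1e-9):
    """Indices of constraints tight at the observed decision x0."""
    tight = []
    for i, (row, bi) in enumerate(zip(A, b)):
        value = sum(a * x for a, x in zip(row, x0))
        if abs(value - bi) <= tol:
            tight.append(i)
    return tight

def recover_cost(A, b, x0):
    """One admissible cost: an equal-weight combination of active rows."""
    tight = active_rows(A, b, x0)
    if not tight:
        raise ValueError("x0 is interior; no cost vector makes it optimal")
    return [sum(A[i][j] for i in tight) for j in range(len(x0))]

# Diet-style example: x = (units of food 1, units of food 2),
# one nutrient requirement x1 + x2 >= 1, plus nonnegativity.
A = [[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]]
b = [1.0, 0.0, 0.0]
x0 = [0.3, 0.7]                  # observed decision, tight on the nutrient row
print(recover_cost(A, b, x0))    # equal unit costs rationalize x0
```

Real inverse-optimization methods go further, e.g., by choosing among all admissible cost vectors the one closest to a prior, but the active-constraint condition above is the starting point.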
Speaker Biography: Kimia Ghobadi is a John C. Malone Assistant Professor of Civil and Systems Engineering, the associate director of the Center for Systems Science and Engineering (CSSE), and a member of the Malone Center for Engineering in Healthcare. She obtained her Ph.D. at the University of Toronto, and before joining Hopkins, was a postdoctoral fellow at the MIT Sloan School of Management. Her research interests include inverse optimization techniques, mathematical modeling, real-time algorithms, and analytics techniques with applications in healthcare systems, including healthcare operations and medical decision-making.
February 12, 2021
Pathogen genomic data are rich with information and growing exponentially. At the same time, new genomics-based technologies are transforming how we surveil and combat pathogens. Yet designing biological sequences for these technologies is still done largely by hand, without well-defined objectives and with a great deal of trial and error. We lack computational capabilities to efficiently design and optimize frontline public health and medical tools, such as diagnostics, based on emerging genomic information.
In this talk, I examine computational techniques, linked closely with biotechnologies, that comprehensively enhance how we proactively prepare for and respond to pathogens. I discuss CATCH, an algorithm that designs assays for simultaneously enriching the genomes of hundreds of viral species, including all their known variation; these assays enable hypothesis-free viral detection and sequencing from patient samples with high sensitivity. I also discuss ADAPT, which combines a deep learning model with combinatorial optimization to design CRISPR-based viral diagnostics that are maximally sensitive over viral variation. ADAPT rapidly and fully automatically designs diagnostics for thousands of viruses, and these diagnostics exhibit lower limits of detection than state-of-the-art design strategies. The results show that principled computational design will play a vital role in the arsenal against infectious diseases. Finally, I discuss promising directions for design methods and applications to other diseases.
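Designing probes that capture hundreds of viral species, including all their known variation, is at heart a covering problem. The sketch below illustrates that covering idea with a greedy set-cover heuristic over shared k-mers; it is not the actual CATCH algorithm, and the sequences, the k-mer matching rule, and the function names are invented for illustration.

```python
# Toy probe selection as greedy set cover: choose a small set of candidate
# probes whose k-mers together "capture" every target genome fragment.
# Illustrative sketch of the covering idea behind assay-design tools;
# not the CATCH algorithm itself.

def kmers(seq, k):
    """All length-k substrings of seq, as a set."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def greedy_probe_cover(targets, candidates, k=4):
    """Greedily pick probes until every target shares a k-mer with
    (is captured by) at least one chosen probe."""
    uncovered = set(range(len(targets)))
    chosen = []
    while uncovered:
        best, best_hits = None, set()
        for c in candidates:
            # Uncovered targets this candidate probe would capture.
            hits = {t for t in uncovered if kmers(targets[t], k) & kmers(c, k)}
            if len(hits) > len(best_hits):
                best, best_hits = c, hits
        if best is None:
            raise ValueError("some targets cannot be covered")
        chosen.append(best)
        uncovered -= best_hits
    return chosen

targets = ["ACGTACGT", "TTGGCCAA", "ACGTTTTT"]   # made-up fragments
candidates = ["ACGTAC", "TTGGCC", "GGGGGG"]      # made-up candidate probes
print(greedy_probe_cover(targets, candidates))   # two probes suffice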
Speaker Biography: Hayden Metsky is a postdoctoral researcher at the Broad Institute in Pardis Sabeti’s lab. He completed his PhD, MEng, and SB in computer science at MIT. His research focuses on developing and applying computational methods that enhance the tools we use to detect and treat disease, concentrating on viruses.
February 12, 2021
Precision medicine efforts propose leveraging complex molecular and medical data towards a better life. This ambitious objective requires advanced computational solutions. Here, however, deeper understanding will not simply diffuse from deeper machine learning, but from more insight into the details of molecular function and a mastery of applicable computational techniques.
My lab’s novel machine learning-based methods predict functional effects of genomic variants and leverage the identified patterns in functional changes to infer individual disease susceptibility. We have optimized our genome-to-disease mapping pipeline to both accommodate compute-resistant biologists and allow for custom variant scoring functions, feature selection, and machine learning techniques. We also built novel computational methods, including training the first general-purpose language model for bacterial short-read DNA sequences, to be used in high-throughput functional profiling of microbiome data that can further elaborate on health and disease. Our purely computational work motivates new experimentally testable hypotheses regarding the biological mechanisms of disease. It also provides a potential means for earlier prognosis, more accurate diagnosis, and the development of better treatments.
Speaker Biography: Research in Yana Bromberg’s lab at Rutgers University is focused on designing machine learning, network analysis, and other computational techniques for the molecular functional annotation of genes, genomes, and metagenomes in the context of specific environments and diseases. The lab also studies the evolution of life’s electron transfer reactions in Earth’s history and as potentially applicable to other planets. Dr. Bromberg received her Bachelor’s degrees in Biology and Computer Science from the State University of New York at Stony Brook and a Ph.D. in Biomedical Informatics from Columbia University. She is currently an Associate Professor in the Department of Biochemistry and Microbiology at Rutgers University. She also holds an adjunct position in the Department of Genetics at Rutgers and is a fellow of the Institute for Advanced Study at the Technical University of Munich, Germany. Dr. Bromberg is also the vice-president of the Board of Directors of the International Society for Computational Biology.
February 12, 2021
Twelve years ago, biologists developed the repertoire sequencing technology (Rep-seq) that samples millions out of a billion constantly changing antibodies (or immunoglobulins) circulating in each of us. Repertoire sequencing represented a paradigm shift as compared to the previous “one-antibody-at-a-time” approaches, raised novel algorithmic, statistical, information theory, and machine learning challenges, and led to the emergence of computational immunogenomics.
I will describe our recent work on reconstructing the evolution of antibody repertoires, inferring novel diversity (D) genes in the immunoglobulin loci, and solving the three-decade-old puzzle of explaining the mechanism for generating biomedically important ultralong antibodies via tandem D-D fusions. I will also describe several collaborative projects in the emerging fields of personalized immunogenomics (analyzing how mutations in the immunoglobulin loci affect our ability to develop antibodies that neutralize flu and HIV) and agricultural immunogenomics (analyzing cow antibody repertoires to assist in breeding efforts).
Speaker Biography: Yana Safonova received the B.Sc. (2010) and M.Sc. (2012) degrees in Computer Science from the Nizhny Novgorod State University, Russia, and the Ph.D. degree (2017) in Bioinformatics from the Saint Petersburg State University, Russia. Since 2017, she has been a Postdoctoral Scholar at the Computer Science and Engineering Department at University of California, San Diego (UCSD), USA. Since 2019, she has also been affiliated with the Department of Biochemistry and Molecular Genetics at the University of Louisville School of Medicine, USA.
Her research interests cover open problems in immunogenomics and computational immunology that include applications of the recently emerged repertoire sequencing technologies to the design of antibody drugs, prediction of vaccine efficacy, and population analysis of the immune loci. Dr. Safonova was selected as a recipient of the Data Science Postdoctoral Fellowship (2017) by UCSD and the Intersect Fellowship for Computational Scientists and Immunologists (2019) by the American Association of Immunologists. She is a member of The Adaptive Immune Receptor Repertoire (AIRR) Community of The Antibody Society and an author of a graduate Immunogenomics course.
February 16, 2021
Sustained space habitation is no longer a next-generation challenge. With NASA’s Artemis Plan, the advent of the US Space Force, and the “new space” sector’s scrappy enthusiasm, there is serious momentum to bring humans to space for extended periods in the coming decade. We can’t do this alone. We’ll need to adapt the highly automated systems we’ve been designing for everyday purposes to help us survive. However, if we build space-faring AI systems anything like how we have been building smart cities, we are going to have some problems. AI systems that have been designed for civil society are not built for digital or physical resilience. They have largely lacked human-centricity, a major contributor to this challenge. In this talk, we’ll discuss the calls for autonomous space systems and raise a question: if we do not have a track record of building safe, secure, human-centric AI systems on Earth, how can we build them for space? The stakes there are higher.
Speaker Biography: Prof. Gregory Falco is the first faculty hire at the Johns Hopkins Institute for Assured Autonomy (IAA), where he will be an Assistant Professor jointly between the IAA and the Civil and Systems Engineering Department starting in Fall 2021. He has been at the forefront of smart city and space system security and safety in both industry and academia for the past decade. His research entitled Cybersecurity Principles for Space Systems was highly influential in the recent Space Policy Directive-5, which shared the same title. He has worked closely with NASA’s Jet Propulsion Laboratory to help advance space asset security capabilities using AI. Falco led the inaugural university cohort research team for the United States Space Force’s Hyperspace Challenge. He has been listed in Forbes 30 Under 30 for his inventions and contributions to critical infrastructure cybersecurity. Falco has been published in Science for his work on cyber risk. Falco is a Cyber Research Fellow at Harvard University’s Belfer Center, a Research Affiliate at MIT’s Computer Science and Artificial Intelligence Laboratory, and a Postdoctoral Scholar at Stanford University. Falco completed his PhD at MIT’s Computer Science and Artificial Intelligence Laboratory, his master’s degree at Columbia University, and his bachelor’s degree at Cornell University.
February 16, 2021
Increasingly, practitioners are turning to ML to build causal models, and predictive models that perform well under distribution shifts. However, current techniques for causal inference typically rely on having access to large amounts of data, limiting their applicability to data-constrained settings. In addition, empirical evidence has shown that most predictive models are insufficiently robust with respect to shifts at test time. In this talk, I will present my work on building novel techniques addressing both of these problems.
Much of the causal literature focuses on learning accurate individual treatment effects, which can be complex and hard to estimate from small samples. However, it is often sufficient for the decision maker to have estimates of upper and lower bounds on the potential outcomes of decision alternatives to assess risks and benefits. I will show that in such cases we can improve sample efficiency by estimating simple functions that bound these outcomes instead of estimating their conditional expectations. I will present a novel algorithm that leverages these theoretical insights.
I will also talk about approaches to deal with distribution shifts using causal knowledge and auxiliary data. I will discuss how distribution shift arises when training models to predict contagious infections in the presence of asymptomatic carriers. I will present a causally-motivated regularization scheme that enables prediction of the true infection state with high accuracy even if the training data is collected under biased test administration.
Speaker Biography: Maggie Makar is a PhD student at CSAIL, MIT. While at MIT, Maggie interned at Microsoft Research and Google Brain. Prior to MIT, Maggie worked at Brigham and Women’s Hospital, studying end-of-life care. Her work has appeared in ICML, AAAI, JSM, the Journal of the American Medical Association (JAMA), Health Affairs, and Epidemiology, among others. Maggie received a B.Sc. in Math and Economics from the University of Massachusetts, Amherst.
February 19, 2021
To create trustworthy AI systems, we must safeguard machine learning methods from catastrophic failures. For example, we must account for uncertainty and guarantee performance for safety-critical systems, such as in autonomous driving and health care, before deploying them in the real world. A key challenge in such real-world applications is that the test cases are not well represented by the pre-collected training data. To properly leverage learning in such domains, we must go beyond the conventional learning paradigm of maximizing average prediction accuracy with generalization guarantees that rely on strong distributional relationships between training and test examples. In this talk, I will describe a distributionally robust learning framework that offers accurate uncertainty quantification and rigorous guarantees under data distribution shift. This framework yields appropriately conservative yet still accurate predictions to guide real-world decision-making and is easily integrated with modern deep learning. I will showcase the practicality of this framework in applications on agile robotic control and computer vision. I will also survey other real-world applications that could benefit from this framework in future work.
Speaker Biography: Anqi (Angie) Liu is a postdoctoral scholar research associate in the Department of Computing and Mathematical Sciences at the California Institute of Technology. She obtained her Ph.D. from the Department of Computer Science at the University of Illinois at Chicago. She is interested in machine learning for safety-critical tasks and the societal impact of AI. She aims to design principled learning methods and collaborate with domain experts to build more reliable systems for the real world. She was selected as an EECS Rising Star at UC Berkeley in 2020. Her publications appear in prestigious machine learning conferences such as NeurIPS, ICML, ICLR, AAAI, and AISTATS.
February 19, 2021
Why do some misleading articles go viral? Does partisan speech affect how people behave? Many pressing questions require understanding the effects of language. These are causal questions: did an article’s writing style cause it to go viral or would it have gone viral anyway? With text data from social media and news sites, we can build predictors with natural language processing (NLP) techniques but these methods can confuse correlation with causation. In this talk, I discuss my recent work on NLP methods for making causal inferences from text. Text data present unique challenges for disentangling causal effects from non-causal correlations. I present approaches that address these challenges by extending black box and probabilistic NLP methods. I outline the validity of these methods for causal inference, and demonstrate their applications to online forum comments and consumer complaints. I conclude with my research vision for a data analysis pipeline that bridges causal thinking and machine learning to enable better decision-making and scientific understanding.
Speaker Biography: Dhanya Sridhar is a postdoctoral researcher in the Data Science Institute at Columbia University. She completed her PhD at the University of California Santa Cruz. Her current research is at the intersection of machine learning and causal inference, focusing on applications to social science. Her thesis research focused on probabilistic models of relational data.
February 26, 2021
Video is becoming a core medium for communicating a wide range of content, including educational lectures, vlogs, and how-to tutorials. While videos are engaging and informative, they lack the familiar and useful affordances of text for browsing, skimming, and flexibly transforming information. This severely limits who can interact with video content and how they can interact with it, makes editing a laborious process, and means that much of the information in videos is not accessible to everyone.
But what future systems will make videos useful for all users?
In this talk, I’ll share my work creating interactive Human-AI systems that leverage multiple mediums of communication (e.g., text, video, and audio) across two main research areas: 1) helping domain-experts surface content of interest through interactive video abstractions, and 2) making videos non-visually accessible through interactions for video accessibility. First I will share core challenges of seeking information in videos from interviews with domain experts. Then, I will share new interactive systems that leverage AI, and evaluations that demonstrate system efficacy. I will conclude with how hybrid HCI-AI breakthroughs will make digital communication more effective and accessible in the future, and how new interactions can help us to realize the full potential of recent AI/ML advances.
Speaker Biography: Amy Pavel is a Postdoctoral Fellow at CMU HCII and a Research Scientist in AI/ML at Apple. Her research explores how interactive tools, augmented with machine learning techniques, can make digital communication more effective and accessible. She has published her work in conferences including UIST, CHI, ASSETS, and other ACM/IEEE venues. She previously received her Ph.D. in CS at UC Berkeley, where her work was supported by an NDSEG fellowship.
February 26, 2021
Algorithms play a central role in our lives today, mediating our access to civic engagement, social connections, employment opportunities, news media and more. While the sociotechnical systems deploying these algorithms—search engines, social networking sites, and others—have the potential to dramatically improve human life, they also run the risk of reproducing or intensifying social inequities. In my research, I ask whether and how these systems are biased, and how those biases impact users.
Understanding sociotechnical systems and their effects requires a combination of computational and social techniques. In this talk, I will describe my work conducting algorithm audits and randomized controlled user experiments to study representation and bias, focusing on my recent study of gender and racial bias in image search. By auditing gender and race in image search results for common U.S. occupations and comparing to baselines in the U.S. workforce we find that marginalized people are underrepresented relative to their workforce participation rates. When measuring people’s responses to synthetic search results in which the gender and racial composition are manipulated, however, we see that the effect of diverse image search results is complex and mediated by the user’s own identity. I will conclude by discussing the implications of these findings for building sociotechnical systems, and directions for future research studying algorithmic bias.
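The core comparison in such an audit can be sketched as computing, for each occupation, a group's share of search results relative to its share of the workforce. The numbers, occupation names, and threshold below are invented for illustration and are not the study's data or method.

```python
# Sketch of an audit comparison: for each occupation, compare a group's
# share of top image-search results to its share of the workforce.
# A ratio below 1.0 indicates underrepresentation relative to baseline.
# All figures below are made up for illustration.

def representation_ratio(result_share, workforce_share):
    """How represented a group is in results, relative to its baseline."""
    if workforce_share <= 0:
        raise ValueError("baseline share must be positive")
    return result_share / workforce_share

audits = {
    # occupation: (group share of top results, group share of workforce)
    "occupation A": (0.15, 0.20),
    "occupation B": (0.05, 0.12),
}
for occupation, (results, workforce) in audits.items():
    ratio = representation_ratio(results, workforce)
    flag = "underrepresented" if ratio < 1.0 else "at or above baseline"
    print(f"{occupation}: ratio {ratio:.2f} ({flag})")
```

Real audits of this kind must also account for result ranking, sampling variation across repeated queries, and how demographic labels are assigned, all of which this sketch omits.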
Speaker Biography: Danaë Metaxa (they/she) is a PhD candidate in Computer Science at Stanford University, advised by James Landay and Jeff Hancock. A member of the Human-Computer Interaction group, Danaë’s research interests focus on building and understanding sociotechnical systems and their effects on users in domains like employment and politics. Danaë has been a pre-doctoral scholar with Stanford’s Program on Democracy and the Internet, a fellow with the McCoy Center for Ethics in Society, and the winner of an NSF Graduate Research Fellowship.
March 2, 2021
As society progresses towards increasing levels of embedded, ubiquitous, and autonomous computation, one key societal opportunity is to leverage this technology to maximize human wellbeing. The challenge for wellbeing technology is two-fold: how to precisely measure wellbeing, and how to deliver long-term, engaging interventions to optimize wellbeing states and their fundamental components, such as stress. Ultimately, managing stress, for example, can have significant implications for health, wellbeing, productivity, and attention. The current approaches to assessing wellbeing and stress are somewhat limited, as these assessments are based on subjective observations and impose models of use that do not scale or adapt well to diverse populations. Additionally, little research has been done on developing human-centered intervention technology that maximizes engagement over the long term. In this talk, I present my research agenda, which focuses on unobtrusive sensing and interventions that are efficacious and engaging, i.e., that allow for long-term use, which is especially important for public health interventions. I present a series of research projects exploring and validating novel ideas on the design of passive “sensorless” sensors and subtle just-in-time personalized interventions. I show the promise of repurposing existing signals from computing peripherals (i.e., mouse and trackpad) or cars (steering wheel) and repurposing existing media as subtle just-in-time interventions. Finally, inspired by biology and the behavioral sciences, I propose we leverage technology to turn “mundane” devices, such as chairs, desks, cars, or even urban lights, into devices that deliver personalized, adaptive, and autonomous wellbeing interventions. I close with a brief discussion of the ethical implications and the research needed to systematically study ethics in pervasive wellbeing technology.
Speaker Biography: Pablo Paredes earned his Ph.D. in Computer Science from the University of California, Berkeley in 2015 with Prof. John Canny. He is currently a Clinical Assistant Professor in the Psychiatry and Behavioral Sciences Department, and the Epidemiology and Population Health Department (by courtesy) at the Stanford University School of Medicine. He leads the Pervasive Wellbeing Technology Lab, which houses a diverse group of students from multiple departments such as computer science, electrical engineering, mechanical engineering, anthropology, neuroscience, and linguistics. Prior to joining the School of Medicine, Dr. Paredes was a Postdoctoral Researcher in the Computer Science Department at Stanford University with Prof. James Landay. During his Ph.D. career, he held internships on behavior change and affective computing at Microsoft Research and Google. He has been an active associate editor for the Interactive, Mobile, Wireless, and Ubiquitous Technology Journal (IMWUT), as well as a reviewer and editor for multiple top CS and medical journals. Before 2010, he was a senior strategic manager with Intel in Sao Paulo, Brazil, a lead product manager with Telefonica in Quito, Ecuador, and an entrepreneur in his native Ecuador and more recently in the US. In these roles, he has had the opportunity to hire and closely evaluate designers, engineers, business people, and researchers in telecommunications and product development. During his academic career, Dr. Paredes has advised close to 40 mentees including postdocs, Ph.D., master’s, and undergraduate students, collaborated with colleagues from multiple departments across engineering, medicine, and the humanities, and raised funding from NSF, NIH, and large multidisciplinary intramural research projects.
IAA and ISI Speaker
March 4, 2021
Nicole Perlroth is The New York Times cybersecurity reporter and the author of This Is How They Tell Me the World Ends, the untold history of the global cyber arms trade and cyberweapons arms race spanning three decades. Perlroth reveals for the first time the classified market’s origins (a Russian attack on an American embassy), its godfather, brokers, mercenaries, and hackers, and its spread to the furthest corners of the globe, from the United States to Israel, the Middle East, South America, China, and beyond. She documents attacks across nations and how each new attack builds on the last, as nation states learn from and improve upon one another’s playbooks, extending into high-profile attacks on multinational companies and private organizations. Perlroth’s reporting spans the 1990s through the 2020 election and its aftermath, when Russia engaged in a months-long hack of the United States federal government itself, an attack that Perlroth continues to cover for the Times, building on her book’s extraordinary revelations.
Speaker Biography: Nicole Perlroth is an award-winning cybersecurity journalist for The New York Times, where her work has been optioned for both film and television. She is a regular lecturer at the Stanford Graduate School of Business and a graduate of Princeton University and Stanford University. She lives with her family in the Bay Area, but increasingly prefers life off the grid in their cabin in the woods.
IAA & CS Seminar Series
March 16, 2021
Robots will transform our everyday lives, from home service and personal mobility to large-scale warehouse management and agriculture monitoring. Across these applications, robots need to interact with humans and other robots in complex, dynamic environments. Understanding how robots interact allows us to design safer and more robust systems. This talk presents an overview of how we can integrate underlying cooperation and interaction models into the design of robot teams. We use tools from behavioral decision theory to design interaction models, combined with game theory and control theory to develop distributed control strategies with provable performance guarantees. This talk focuses on applications in autonomous driving, where a better understanding of human intent improves safety, as well as recent results in designing UVC-equipped mobile robots for human-centric environments.
Speaker Biography: Alyssa Pierson is an Assistant Professor of Mechanical Engineering at Boston University. Her research interests include trust and cooperation in multi-agent systems, distributed robotics control, and socially-compliant autonomous system design. She focuses on designing robotic systems that interact with humans and other robots in complex, dynamic environments.
Prior to joining BU, Professor Pierson was a research scientist with the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. She received her PhD degree from Boston University in 2017 and BS in Engineering from Harvey Mudd College. During her PhD, she was awarded the Clare Booth Luce Fellowship and was a Best Paper Finalist at the 2016 International Conference on Robotics and Automation.
IAA & CS Seminar Series
March 22, 2021
Isaac Asimov’s Laws for Robots placed intelligent robots under three ethical duties eerily similar to the Belmont Report’s respect for persons, beneficence, and justice. Law scholars Jack Balkin and Frank Pasquale suggest that laws for AI/ML systems are best directed not at the robots but at the humans who program them, use them, and let ourselves be governed by them. Recent theorizations of the perils of AI/ML software focus heavily on the problem of modern surveillance societies, where citizens are relentlessly tracked, analyzed, and scored as they go about their daily lives. It is tempting for bioethicists to draw on these rich theorizations, but doing so mis-frames the challenges and opportunities of the healthcare context in which AI/ML clinical decision support software operates. This talk identifies distinctive features of the healthcare setting that make AI/ML medical software likely to break the emerging rules about how to protect human dignity in a modern surveillance society. Protecting patients in an AI/ML-enabled clinical health care setting is a different problem. It requires fresh, context-appropriate thinking about a set of privacy, bias, and accountability issues that this talk sets out for debate.
Speaker Biography: Barbara J. Evans is Professor of Law and Stephen C. O’Connell Chair at the University of Florida’s Levin College of Law and Professor of Engineering at UF’s Herbert Wertheim College of Engineering. Her work focuses on data privacy and the regulation of machine-learning medical software, genomic technologies, and diagnostic testing. She is an elected member of the American Law Institute, a Senior Member of the Institute of Electrical and Electronics Engineers, and was named a Greenwall Foundation Faculty Scholar in Bioethics for 2010-2013. Before coming to academia, she was a partner in the international regulatory practice of a large New York law firm and is admitted to the practice of law in New York and Texas. She holds a BS in electrical engineering from the University of Texas at Austin, an MS and PhD from Stanford University, a JD from Yale Law School, and an LLM in Health Law from the University of Houston Law Center, and she completed a post-doctoral fellowship in Clinical Ethics at the MD Anderson Cancer Center.
April 15, 2021
In this talk, Dr. Pérez-Quiñones presents some of the somber statistics of underrepresentation in computing. He argues that computer science students and professionals should care deeply about this inequity: a lack of diversity in software development teams can have serious consequences for a fair society. Dr. Pérez-Quiñones presents examples of the negative effects that underrepresentation in computing teams can have. The presentation concludes with an open question: What can we do to broaden participation in computing?
Speaker Biography: Dr. Manuel A. Pérez-Quiñones is Professor of Software and Information Systems at UNC Charlotte. His research interests include HCI, CS education, and diversity in computing. He has held various administrative positions in academia, including Associate Dean for the Graduate School at Virginia Tech and Associate Dean of the College of Computing and Informatics. He was Chair of the Coalition to Diversify Computing, Program Chair for the 2014 Tapia Conference, and Symposium Co-Chair for SIGCSE 2019. He serves on the SIGCSE Board and the Advisory Board for CMD-IT, is a member of the Steering Committee for BPCNet, and is a Technical Consultant for the Center for Inclusive Computing at Northeastern. His service to diversify computing has been recognized with ACM Distinguished Member status, the A. Nico Habermann Award, and the Richard A. Tapia Achievement Award. In over 30 years of professional experience, he has worked at UNC Charlotte (6 years), Virginia Tech (15 years), and the University of Puerto Rico-Mayaguez (4 years), and has served as a Visiting Professor at the US Naval Academy and a Computer Scientist at the Naval Research Lab (6 years).
IAA & CS Seminar Series
April 20, 2021
Networks have historically been treated as plumbing, used to interconnect computing systems to build larger distributed computing systems, but advances in Software-Defined Networks (SDN) make it possible to treat the network, itself, as a programmable platform. Networks can now be programmed end-to-end and top-to-bottom. This talk discusses how this programmability can be used to support verifiable closed-loop control, including throughout 5G mobile networks. The talk also describes our experiences building Aether, an open source 5G-enabled edge cloud that demonstrates the value of treating the network as a programmable platform. Pilot deployments of Aether are underway in campuses and enterprises around the world.
Speaker Biography: Larry Peterson is the Robert E. Kahn Professor of Computer Science, Emeritus at Princeton University, where he served as Chair from 2003-2009. He is a co-author of the best-selling networking textbook Computer Networks: A Systems Approach (6e), which is now available as open source on GitHub. His research focuses on the design, implementation, and operation of Internet-scale distributed systems, including the widely used PlanetLab and MeasurementLab platforms. He is currently working on a new access edge cloud called CORD, an open source project of the Open Networking Foundation (ONF), where he serves as CTO. Professor Peterson is a former Editor-in-Chief of the ACM Transactions on Computer Systems, and served as program chair for SOSP, NSDI, and HotNets. He is a member of the National Academy of Engineering, a Fellow of the ACM and the IEEE, the 2010 recipient of the IEEE Kobayashi Computer and Communication Award, and the 2013 recipient of the ACM SIGCOMM Award. He received his Ph.D. degree from Purdue University in 1985.
April 29, 2021
By 2030, the old will begin to outnumber the young for the first time in recorded history. Population aging is poised to impose a significant strain on economies, health systems, and social structures. However, it also presents a unique opportunity for AI to introduce personalization and inclusiveness to ensure equity in aging. Vulnerable populations such as older adults learn, trust, and use new technologies differently. Any prediction algorithm that we develop must use high-quality and population-representative input data outside of the clinic and produce accurate, generalizable, and unbiased results. Therefore, the translational path for AI into clinical care needs deeper engagement with all the stakeholders to ensure that we solve a pressing problem with a practical solution that end-users, clinicians, and patients all find valuable. In this talk, I will provide some examples of working systems that have been evaluated in controlled experiments and could potentially be deployed in the real world to ensure equity and access among the aging population. In particular, I will highlight two examples: (1) innovating for Parkinson’s, the fastest-growing neurodegenerative disease, and (2) modeling end-of-life communication with terminal cancer patients so that their values and preferences are respected as they plan for a deeply personal human experience such as death.
Speaker Biography: Ehsan Hoque is an associate professor of computer science at the University of Rochester, where he co-leads the Rochester Human-Computer Interaction (ROC HCI) Group. From 2018-2019, he was also the Interim Director of the Goergen Institute for Data Science. Ehsan earned his Ph.D. from MIT in 2013, where the MIT Museum highlighted his dissertation, the development of an intelligent agent to improve human ability, as one of MIT’s most unconventional inventions. Building on this work and the associated patent, Microsoft released “Presenter Coach” in 2019 for integration into PowerPoint. Ehsan is best known for introducing and extensively validating the idea of using AI to train and enhance elements of basic human ability. Ehsan and his students’ work has been recognized with an NSF CAREER Award, MIT TR35 recognition, and a Young Investigator Award from the US Army Research Office (ARO). In 2017, Science News named him one of the 10 scientists to watch, and in 2020, the National Academy of Medicine recognized him as one of the emerging leaders in health and sciences. Ehsan is an inaugural member of the ACM’s Future of Computing Academy.
May 20, 2021
Autonomous driving needs machine learning, because it relies so heavily on perception. But machine learning is notoriously unpredictable and unverifiable. How then can an autonomous car ever be convincingly safe? Dr. Jackson and his research team have been exploring the classic idea of a runtime monitor: a small trusted base that executes in parallel and intervenes to prevent violations of safety.
Unfortunately, in this context, the traditional runtime monitor is not very plausible. If it processes sensor data itself, it is likely either to be no less complex than the main system or too crude to allow early intervention. And if it does not process sensor data, and instead relies on the main system for that, the key benefit of a small trusted base is lost.
The research team has been pursuing a new approach in which the main controller constructs a “certificate” that embodies a run-time safety case. The monitor is only responsible for checking the certificate, which gives the desired reduction in complexity, exploiting the typical gap between the cost of finding solutions to computational problems and the cost of checking them.
Dr. Jackson will illustrate this idea with some examples his team has implemented in simulation, with the disclaimer that this research is in the early stages. His hope is to provoke an interesting discussion.
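The solve-versus-check gap underlying this approach can be illustrated with a toy sketch (all names and the safety property are hypothetical illustrations, not taken from the talk): the untrusted planner does the hard work of finding a trajectory, while the trusted monitor only verifies a simple safety property of the proposed trajectory, here minimum obstacle clearance, in a single cheap pass.

```python
# Toy illustration of the solve/check gap behind run-time certificates.
# Here the "certificate" is simply the proposed trajectory; the trusted
# monitor checks a minimum-clearance safety property without re-running
# the (complex, untrusted) planner. All names are illustrative.

import math

def clearance(point, obstacles):
    """Distance from a 2-D point to the nearest obstacle center."""
    return min(math.dist(point, obs) for obs in obstacles)

def monitor_checks(trajectory, obstacles, min_clearance=1.0):
    """Trusted monitor: one linear pass over the certificate."""
    return all(clearance(p, obstacles) >= min_clearance for p in trajectory)

obstacles = [(3.0, 3.0), (6.0, 1.0)]
safe_path = [(0, 0), (1, 1), (2, 2.5), (4, 4.5), (7, 5)]
unsafe_path = [(0, 0), (2, 2), (3, 2.5), (5, 1.5)]  # passes within 0.5 of an obstacle

print(monitor_checks(safe_path, obstacles))    # True
print(monitor_checks(unsafe_path, obstacles))  # False
```

The point of the sketch is only that checking is far simpler than planning: the monitor's trusted base is a few lines, while the planner it guards may be arbitrarily complex.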
Speaker Biography: Daniel Jackson is a Professor of Computer Science at MIT, a MacVicar teaching fellow, and an Associate Director of the Computer Science and Artificial Intelligence Laboratory. His research has focused primarily on software modeling and design. Jackson is also a photographer; his most recent projects are Portraits of Resilience (http://portraitsofresilience.com), and At a Distance (https://dnj.photo/projects/distance). His book about software design, The Essence of Software: Why Concepts Matter for Great Design, will be published this fall by Princeton University Press.