Fall 2020

View the recording >>

Institute for Assured Autonomy & Computer Science Seminar Series

September 22, 2020

Abstract: Recent years have seen an astounding growth in the deployment of AI systems in critical domains—such as in autonomous vehicles, criminal justice, health care, hiring, housing, human resource management, law enforcement, and public safety—where decisions taken by AI agents directly impact human lives. Consequently, there is increasing concern about whether these decisions can be trusted to be correct, reliable, fair, and safe—especially under adversarial attacks. How, then, can we deliver on the promised benefits of AI while addressing these scenarios that have life-critical consequences for people and society? In short, how can we achieve trustworthy AI? Under the umbrella of trustworthy computing, there is a long-established framework employing formal methods and verification techniques for ensuring trust properties, like the reliability, security, and privacy of traditional software and hardware systems. Just as for trustworthy computing, formal verification could be an effective approach for building trust in AI-based systems. However, the existing set of properties needs to be extended beyond reliability, security, and privacy to include fairness, robustness, probabilistic accuracy under uncertainty, and other properties yet to be identified and defined. Further, there is a need for new property specifications and verification techniques to handle new kinds of artifacts—e.g., data distributions, probabilistic programs, and machine-learning-based models that may learn and adapt automatically over time. This talk will pose a new research agenda from a formal methods perspective for increasing trust in AI systems.

Speaker Biography: Jeannette M. Wing is the Avanessians Director of the Data Science Institute and a professor of computer science at Columbia University. From 2013 to 2017, she was a corporate vice president of Microsoft Research. She is an adjunct professor of computer science at Carnegie Mellon University, where she twice served as the head of the computer science department and where she has been a faculty member since 1985. From 2007 to 2010, she was the assistant director of the NSF Directorate for Computer and Information Science and Engineering. She received her SB, SM, and PhD degrees in computer science from the Massachusetts Institute of Technology. Wing’s general research interests are in the areas of trustworthy computing, specification and verification, concurrent and distributed systems, programming languages, and software engineering. Her current interests are in the foundations of security and privacy, with a new focus on trustworthy AI. She has served on the editorial boards of twelve journals, including the Journal of the ACM and Communications of the ACM. Wing is known for her work on linearizability, behavioral subtyping, attack graphs, and privacy-compliance checkers. Her 2006 seminal essay “Computational Thinking” is credited with helping to establish the centrality of computer science to problem-solving in fields where previously it had not been embraced. Wing is currently a member of: the National Library of Medicine Blue Ribbon Panel; the Science, Engineering, and Technology Advisory Committee for the American Academy of Arts and Sciences; the Board of Trustees for the Institute for Pure and Applied Mathematics; the Advisory Board for the Association for Women in Mathematics; and the Alibaba DAMO Technical Advisory Board. She has been chair and/or a member of many other academic, government, and industry advisory boards.
Wing received the Computing Research Association’s Distinguished Service Award in 2011 and the ACM’s Distinguished Service Award in 2014. She is a fellow of the American Academy of Arts and Sciences, the American Association for the Advancement of Science, the ACM, and the IEEE.

CS Seminar Series

September 24, 2020

Abstract: Recent technological advancements have enabled spatially resolved transcriptomic measurements of hundreds to thousands of mRNA species with a throughput of hundreds to thousands of single cells per day-long experiment. However, computational methods for statistical analysis that can take advantage of this new spatial dimension are still needed, both to connect transcriptional and spatial-contextual differences in single cells and to identify putative subpopulations and patterns in their spatial organization within a probabilistic framework. Here, we will demonstrate how we applied computational analysis of transcriptome-scale multiplexed error-robust FISH (MERFISH) data to identify RNA species enriched in different subcellular compartments, transcriptionally distinct cell states corresponding to different cell-cycle phases, and spatial patterning of transcriptionally distinct cells. We anticipate that such spatially resolved transcriptome profiling, coupled with spatial computational analyses, could help address a wide array of questions ranging from the regulation of gene expression in cells to the development of cell fate and organization in tissues.

Speaker Biography: I am an Assistant Professor in the Department of Biomedical Engineering at Johns Hopkins University. My lab is interested in understanding the molecular and spatial-contextual factors shaping cellular identity and heterogeneity, particularly in the context of cancer and how this heterogeneity impacts tumor progression, therapeutic resistance, and ultimately clinical prognosis. We develop new open-source computational software for analyzing single-cell multi-omic and imaging data that can be tailored and applied to diverse cancer types and biological systems. I was previously an NCI F99/K00 post-doctoral fellow in the lab of Dr. Xiaowei Zhuang at Harvard University. I received my PhD in Bioinformatics and Integrative Genomics at Harvard under the mentorship of Dr. Peter Kharchenko in the Department of Biomedical Informatics and in close collaboration with Dr. Catherine Wu at the Dana-Farber Cancer Institute.

View the recording >>

Gerald M. Masson Distinguished Lecture Series

October 13, 2020

Abstract: In the past 10 years, large network owners and operators have taken control of the software that controls their networks. They are now starting to take control of how packets are processed, too. Networks, for the first time, are on the cusp of being programmable end-to-end, specified top-to-bottom, and defined entirely by software. We will think of the network as a programmable platform, a distributed system, that we specify top-down using software rather than protocols. This has big ramifications for networks in the future, creating some interesting new possibilities to verify that a network is “correct by construction,” to measure and validate its behavior in real-time against a network specification, and to correct bugs through closed-loop control.

Speaker Biography: Nick McKeown is the Kleiner Perkins, Mayfield, Sequoia Professor of Electrical Engineering and Computer Science at Stanford University. His research work has focused mostly on how to improve and scale the internet. From 1988 to 2005, he focused mostly on making the internet faster, and since 2005, he has focused on how to evolve networks faster than before. McKeown co-founded several networking startups, including Nicira (software-defined networking and network virtualization) and Barefoot Networks (programmable switches and P4). He co-founded the Open Networking Foundation, the Open Networking Lab, the P4 Language Consortium, and an educational non-profit called CS Bridge dedicated to teaching high school students worldwide, in-person, how to program. McKeown is a member of the National Academy of Engineering and the American Association for the Advancement of Science and is a fellow of the Royal Academy of Engineering. He received the ACM Special Interest Group on Data Communication Award for Lifetime Achievement in 2012, the NEC C&C Prize in 2015, and an honorary doctorate from the Federal Institute of Technology Zurich in 2014.

View the recording >>

Computer Science Seminar Series

October 15, 2020

Abstract: As machine learning models are trained on ever-larger and more complex datasets, it has become standard practice to distribute this training across multiple physical computing devices. Such an approach offers a number of potential benefits, including reduced training time and storage needs due to parallelization. Distributed stochastic gradient descent (SGD) is a common iterative framework for training machine learning models. In each iteration, local workers compute parameter updates on a local dataset. These are then sent to a central server, which aggregates the local updates and pushes global parameters back to local workers to begin a new iteration. Distributed SGD, however, can be expensive in practice: Training a typical deep learning model might require several days and thousands of dollars on commercial cloud platforms. Cloud-based services that allow occasional worker failures (e.g., locating some workers on Amazon spot or Google preemptible instances) can reduce this cost, but may also reduce training accuracy. We quantify the effect of worker failure and recovery rates on model accuracy and wall-clock training time and show both analytically and experimentally that these performance bounds can be used to optimize the SGD worker configurations. In particular, we can optimize the number of workers that utilize spot or preemptible instances. Compared to heuristic worker configuration strategies and standard on-demand instances, we dramatically reduce the cost of training a model, with modest increases in training time and the same level of accuracy. Finally, we discuss implications of our work for federated learning environments, which use a variant of distributed SGD. Two major challenges in federated learning are unpredictable worker failures and a heterogeneous (non-IID) distribution of data across the workers; we show that our characterization of distributed SGD’s performance under worker failures can be adapted to this setting.
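The iterative framework the abstract describes can be sketched in a few lines. The following toy simulation (all dimensions, learning rates, and failure probabilities are illustrative assumptions, not figures from the talk) runs synchronous distributed SGD on a sharded linear-regression problem, where a random subset of preemptible workers drops out each round and the server averages the updates from whichever workers survived:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression problem sharded across workers.
# All dimensions and rates below are illustrative assumptions.
d, n_workers, n_local = 5, 8, 200
w_true = rng.normal(size=d)
shards = []
for _ in range(n_workers):
    X = rng.normal(size=(n_local, d))
    y = X @ w_true + 0.1 * rng.normal(size=n_local)
    shards.append((X, y))

def local_gradient(w, shard):
    """Worker-side step: gradient of mean squared error on the local shard."""
    X, y = shard
    return 2.0 * X.T @ (X @ w - y) / len(y)

def train(failure_prob, iters=300, lr=0.05):
    """Server loop: each round, average updates from the workers that survived."""
    w = np.zeros(d)
    for _ in range(iters):
        alive = [s for s in shards if rng.random() > failure_prob]
        if not alive:  # every preemptible worker was revoked this round
            continue
        grad = np.mean([local_gradient(w, s) for s in alive], axis=0)
        w -= lr * grad
    return w

w_cheap = train(failure_prob=0.5)   # half the fleet on spot/preemptible instances
w_stable = train(failure_prob=0.0)  # all workers on-demand
print(np.linalg.norm(w_cheap - w_true), np.linalg.norm(w_stable - w_true))
```

In this toy setting both configurations converge close to the true parameters; the talk's contribution is characterizing analytically how the failure and recovery rates trade off against wall-clock time and cost, so that the mix of spot and on-demand workers can be optimized.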

Speaker Biography: Carlee Joe-Wong is an assistant professor of electrical and computer engineering at Carnegie Mellon University. She received her AB, MA, and PhD degrees from Princeton University in 2011, 2013, and 2016, respectively. Joe-Wong’s research is in optimizing networked systems, particularly in applying machine learning and pricing to data and computing networks. From 2013 to 2014, she was the director of advanced research at DataMi, a startup she co-founded from her PhD research on mobile data pricing. She has received several awards for her work, including the Army Research Office Young Investigator Award in 2019, an NSF CAREER Award in 2018, and the INFORMS Information Systems Society Design Science Award in 2014.

View the recording >>

Computer Science Seminar Series

October 20, 2020

Abstract: Sparsity has been a driving force in signal and image processing and machine learning for decades. In this talk, we’ll explore sparse representations based on dictionary learning techniques from two perspectives: overparameterization and adversarial robustness. First, we will characterize the surprising phenomenon that dictionary recovery can be facilitated by searching over the space of larger (over-realized/overparameterized) models. This observation is general and independent of the specific dictionary learning algorithm used. We will demonstrate this observation in practice and provide a theoretical analysis of it by tying recovery measures to generalization bounds. We will further show that an efficient and provably correct distillation mechanism can be employed to recover the correct atoms from the over-realized model, consistently providing better recovery of the ground-truth model. We will then switch gears toward the analysis of adversarial examples, focusing on the hypothesis class obtained by coupling a sparsity-promoting encoder with a linear classifier, and show an interesting interplay between the flexibility and stability of the (supervised) representation map and a notion of margin in the feature space. Leveraging a mild encoder gap assumption in the learned representations, we will provide a bound on the generalization error of the robust risk to L2-bounded adversarial perturbations and a robustness certificate for end-to-end classification. We will demonstrate the applicability of our analysis by computing certified accuracy on real data and comparing with other alternatives for certified robustness. This analysis will shed light on how to characterize this interplay for more general models.
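The hypothesis class discussed in the abstract (a sparsity-promoting encoder followed by a linear classifier) can be illustrated with a minimal sketch. Here the encoder is iterative soft-thresholding (ISTA) for the lasso problem; the dictionary, dimensions, and regularization weight are illustrative assumptions, not details from the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

def ista(x, D, lam=0.1, steps=100):
    """Sparsity-promoting encoder: approximately solve
    min_z 0.5*||x - D z||^2 + lam*||z||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1 / Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(steps):
        z = z - step * (D.T @ (D @ z - x))                      # gradient step
        z = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0)  # soft threshold
    return z

# Toy setup (illustrative): an overcomplete dictionary with unit-norm atoms
# and a linear classifier acting on the sparse code.
n_features, n_atoms = 20, 40
D = rng.normal(size=(n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)
w = rng.normal(size=n_atoms)

def classify(x):
    """End-to-end hypothesis: sparse-encode, then apply the linear classifier."""
    return np.sign(w @ ista(x, D))

# A signal that truly is a sparse combination of a few atoms.
z_true = np.zeros(n_atoms)
z_true[:3] = rng.normal(size=3)
x = D @ z_true
code = ista(x, D)
print(classify(x), np.count_nonzero(np.abs(code) > 1e-8))
```

The stability of the code returned by `ista` under small perturbations of `x` is, roughly, what the encoder gap assumption in the talk quantifies when certifying the end-to-end classifier.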

Speaker Biography: Jeremias Sulam is an assistant professor of biomedical engineering at the Johns Hopkins University and a faculty member of its Mathematical Institute for Data Science and the Center for Imaging Science. He received his PhD in computer science from the Technion – Israel Institute of Technology in 2018. Sulam was named a Best Graduate from Engineering Careers of Argentine Universities by the National Academy of Engineering, Argentina. His research interests include machine learning, signal and image processing, and representation learning and their application to biomedical sciences.

View the recording >>

ACM Lecture Series in Memory of Nathan Krasnopoler

October 27, 2020

Speaker Biography: Ed Catmull, Turing Award winner for his contributions to 3D graphics and CGI filmmaking, is a co-founder of Pixar Animation Studios and is the former president of Pixar, Walt Disney Animation Studios, and Disneytoon Studios. For over 25 years, Pixar has dominated the world of animation, producing 14 consecutive #1 box office hits, which have grossed more than $8.7 billion at the worldwide box office to date and have won thirty Academy Awards®. Catmull’s book, Creativity, Inc.—co-written with journalist Amy Wallace and years in the making—is a distillation of the ideas and management principles he has used to develop a creative culture. A book for managers who want to lead their employees to new heights, it also grants readers an all-access trip into the nerve center of Pixar Animation Studios—into the meetings, postmortems, and “braintrust” sessions where some of the most successful films in history have been made. Catmull has been honored with five Academy Awards®, including an Oscar for Lifetime Achievement for his technical contributions and leadership in the field of computer graphics for the motion picture industry. He also has been awarded the Turing Award by the ACM, the world’s largest society of computing professionals, for his work on 3D computer graphics. Catmull earned BS degrees in computer science and physics and a PhD in computer science from the University of Utah. In 2005, the University of Utah presented him with an honorary doctoral degree in engineering. In 2018, Catmull announced his retirement from Pixar, but has cemented his legacy as an innovator in technology, entertainment, business, and leadership.

View the recording >>

Computer Science Seminar Series

October 29, 2020

Abstract: The meteoric rise in performance of modern AI raises many concerns when it comes to autonomous systems, their use, and their ethics. In this talk, Jim Hendler reviews some of the emerging challenges, with a particular emphasis on one of the issues faced by current deep learning technology (including neural symbolic approaches): How do AI systems know what they don’t know? Hendler avoids the generic issue, which has been raised by philosophers of AI, and looks more specifically at where the failures are coming from. The limitations of current systems, issues such as “personalization” that increase the challenge, and some of the governance issues that arise from these limitations will be covered.

Speaker Biography: Jim Hendler is the director of the Institute for Data Exploration and Applications and the Tetherless World Professor of Computer, Web, and Cognitive Sciences at Rensselaer Polytechnic Institute. He also heads the RPI-IBM Center for Health Empowerment by Analytics, Learning and Semantics. Hendler has authored over 400 books, technical papers, and articles in the areas of the Semantic Web, artificial intelligence, agent-based computing, and high-performance processing. One of the originators of the Semantic Web, Hendler was the recipient of a 1995 Fulbright Foundation Fellowship, is a former member of the U.S. Air Force Science Advisory Board, and is a fellow of the Association for the Advancement of Artificial Intelligence, the British Computer Society, the Institute of Electrical and Electronics Engineers, the American Association for the Advancement of Science, and the ACM. He is also the former chief scientist of the Information Systems Office at the U.S. Defense Advanced Research Projects Agency and was awarded a U.S. Air Force Exceptional Civilian Service Medal in 2002. In 2016, he became a member of the National Academies Board on Research Data and Information, in 2017 an advisor to the National Security Directorate at Pacific Northwest National Laboratory, and in 2018 was elected as a fellow of the National Academy of Public Administration.

CS Seminar Series

November 5, 2020

Abstract: Robotic-assisted surgery (RAS) systems incorporate highly dexterous tools, hand tremor filtering, and motion scaling to enable a minimally invasive surgical approach, reducing collateral damage and patient recovery times. However, current state-of-the-art telerobotic surgery requires a surgeon to operate every motion of the robot, resulting in long procedure times and inconsistent results. The advantages of autonomous robotic functionality have been demonstrated in applications outside of medicine, such as manufacturing and aviation. A limited form of autonomous RAS with pre-planned functionality was introduced in orthopedic procedures, radiotherapy, and cochlear implants. Efforts in automating soft tissue surgeries have so far been limited to elemental tasks such as knot tying, needle insertion, and executing predefined motions. The fundamental problems in soft tissue surgery include unpredictable shape changes, tissue deformations, and perception challenges.

My research goal is to transform current manual and teleoperated robotic soft tissue surgery into autonomous robotic surgery, improving patient outcomes by reducing the reliance on the operating surgeon, eliminating human errors, and increasing precision and speed. This presentation will introduce our Intelligent Medical Robotic Systems and Equipment (IMERSE) lab and discuss our novel strategies to overcome the challenges encountered in soft tissue autonomous surgery. Presentation topics will include: a) a robotic system for supervised autonomous laparoscopic anastomosis, b) magnetically steered robotic suturing, c) development of patient-specific biodegradable nanofiber tissue-engineered vascular grafts to optimally repair congenital heart defects (CHD), and d) our work on COVID-19 mitigation in ICU robotics, safe testing, and safe intubation.

Speaker Biography: Axel Krieger, PhD, and his IMERSE team joined LCSR in July 2020. He is an Assistant Professor in the Department of Mechanical Engineering at the Johns Hopkins University. He is leading a team of students, scientists, and engineers in the research and development of robotic tools and laparoscopic devices. Projects include the development of a surgical robot called the smart tissue autonomous robot (STAR) and the use of 3D printing for surgical planning and patient-specific implants. Professor Krieger is an inventor on over twenty patents and patent applications. Licensees of his patents include medical device start-ups Activ Surgical and PeriCor as well as industry leaders such as Siemens, Philips, and Intuitive Surgical. Before joining the Johns Hopkins University, Professor Krieger was Assistant Professor in Mechanical Engineering at the University of Maryland and Assistant Research Professor and Program Lead for Smart Tools at the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National. He has several years of experience in private industry at Sentinelle Medical Inc. and Hologic Inc., where he served as a product leader.

View the recording >>

Computer Science Seminar Series

November 10, 2020

Abstract: Recent advances in informatics, robotics, machine learning, and augmented reality provide exciting opportunities for developing integrated surgical systems for orthopaedic surgery. These systems have the potential to help surgeons improve conventional approaches and empower them to test novel surgical intervention techniques. These integrated interventional systems can assist surgeons to make use of both patient-specific and population-specific data to perform technical analysis for optimizing a surgical plan. Integrated within the surgical navigation system are dexterous tools and manipulators and enhanced visualization methods to improve the accessibility, precision, and perception of the surgeon. Finally, the overall system architecture can include tools for patient-specific outcome analysis; the resulting analytics can populate and improve existing informatics databases. This talk will discuss our current efforts and challenges in developing such computer/robot-assisted systems for applications in orthopaedic surgery.

Speaker Biography: Mehran Armand is a professor of orthopaedic surgery, a research professor of mechanical engineering, and a principal staff member at the Johns Hopkins Applied Physics Laboratory. He received PhD degrees in mechanical engineering and kinesiology from the University of Waterloo with a focus on bipedal locomotion. Prior to joining the APL in 2000, he completed postdoctoral fellowships at the Johns Hopkins Departments of Orthopaedic Surgery and Otolaryngology (ENT). Armand currently directs the laboratory for Biomechanical- and Image-Guided Surgical Systems within the Laboratory for Computational Sensing and Robotics. He also co-directs the newly established Neuroplastic Surgery Research Laboratory and the AVICENNA Laboratory for advancing surgical technologies, located at the Johns Hopkins Bayview Medical Center. Armand’s lab encompasses collaborative research in continuum manipulators, biomechanics, medical image analysis, and augmented reality for translation to clinical applications in the areas of orthopaedic, ENT, and craniofacial reconstructive surgery.

View the recording >>

Institute for Assured Autonomy & Computer Science Seminar Series

November 12, 2020

Abstract: Under most conditions, complex systems are imperfect. When errors occur, as they inevitably will, systems need to be able to (1) localize the error and (2) take appropriate action to mitigate the repercussions of that error. In this talk, Leilani Gilpin presents new methodologies for detecting and explaining errors in complex systems. Her novel contribution is a system-wide monitoring architecture, which is composed of introspective, overlapping committees of subsystems. Each subsystem is encapsulated in a “reasonableness” monitor, an adaptable framework that supplements local decisions with commonsense data and reasonableness rules. This framework is dynamic and introspective: It allows each subsystem to defend its decisions in different contexts—to the committees it participates in and to itself. For reconciling system-wide errors, she developed a comprehensive architecture called “Anomaly Detection through Explanations” (ADE). The ADE architecture contributes an explanation synthesizer that produces an argument tree, which in turn can be traced and queried to determine the support of a decision and to construct counterfactual explanations. She has applied this methodology to detect incorrect labels in semiautonomous vehicle data and to reconcile inconsistencies in simulated anomalous driving scenarios. Her work has opened up the new area of explanatory anomaly detection, working toward a vision in which complex systems will be articulate by design: They will be dynamic, internal explanations will be part of the design criteria, system-level explanations will be provided, and they can be challenged in an adversarial proceeding.

Speaker Biography: Leilani H. Gilpin is a research scientist at Sony AI and a collaborating researcher at the Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory. Her research focuses on enabling opaque autonomous systems to explain themselves for robust decision-making, system debugging, and accountability. Her current work integrates explainability into reinforcement learning. She has a PhD in computer science from MIT, an MS in computational and mathematical engineering from Stanford University, and a BS in mathematics with honors, a BS in computer science with highest honors, and a music minor from the University of California San Diego. She is currently co-organizing the Association for the Advancement of Artificial Intelligence’s Fall Symposium on Anticipatory Thinking, where she is the lead of the autonomous vehicle challenge problem. Outside of research, Gilpin enjoys swimming, cooking, rowing, and org-mode.

View the recording >>

Computer Science Seminar Series

November 17, 2020

Abstract: Rapid advances in computing, networking, and sensing technologies have resulted in ubiquitous deployment of medical cyber-physical systems (MCPS) in various clinical and personalized settings. However, with the growing complexity and connectivity of software, the increasing use of artificial intelligence for control and decision-making, and the inevitable involvement of human operators in supervision and control of MCPS, there are still significant challenges in ensuring their safety and security. In this talk, Homa Alemzadeh will present her recent work on the design of context-aware safety monitors that can be integrated with an MCPS controller and that can detect the early signs of adverse events through real-time analysis of measurements from operational, cyber, and physical layers of the system. Her proposed monitors are evaluated on a real-world system for robot-assisted surgery and are shown to be effective in the timely detection of unsafe control actions caused by accidental faults, unintentional human errors, or malicious attacks in cyberspace before they manifest in the physical system and lead to adverse consequences and harm to patients.

Speaker Biography: Homa Alemzadeh is an assistant professor in the Department of Electrical and Computer Engineering with a courtesy appointment in Computer Science at the University of Virginia. She is also a member of the Link Lab, a multidisciplinary center for research and education in cyber-physical systems (CPS). Before joining UVA, she was a research staff member at the IBM T. J. Watson Research Center. Alemzadeh received her PhD in electrical and computer engineering from the University of Illinois at Urbana-Champaign and her BSc and MSc degrees in computer engineering from the University of Tehran. Her research interests are at the intersection of computer systems dependability and data science, in particular data-driven resilience assessment and design of CPS with applications to medical devices, surgical robots, and autonomous systems. She is the recipient of the 2017 William C. Carter PhD Dissertation Award in Dependability from the Institute of Electrical and Electronics Engineers Technical Committee on Dependable Computing and Fault Tolerance and the International Federation for Information Processing Working Group 10.4 on Dependable Computing and Fault Tolerance. Her work on the analysis of safety incidents in robotic surgery was selected as the Maxwell Chamberlain Memorial Paper at the 50th annual meeting of the Society of Thoracic Surgeons and was featured in The Wall Street Journal, MIT Technology Review, and on the BBC, among other outlets.

Distinguished Lecturer

December 1, 2020

Abstract: The digitization of practically everything, coupled with advances in machine learning, the automation of knowledge work, and advanced robotics, promises a future with democratized use of machines and widespread use of AI, robots, and customization. While the last 60 years have defined the field of industrial robots and empowered hard-bodied robots to execute complex assembly tasks in constrained industrial settings, the next 60 years could usher in an era of pervasive robots that come in a diversity of forms and materials, helping people with physical and cognitive tasks. However, the pervasive use of machines remains a hard problem. How can we accelerate the creation of machines customized to specific tasks? Where are the gaps that we need to address in order to advance the bodies and brains of machines? How can we develop scalable and trustworthy reasoning engines?

In this talk, I will discuss recent developments in machine learning and robotics, focusing on how computation can play a role in (1) developing Neural Circuit Policies, an efficient approach to more interpretable machine learning engines, (2) making machines more capable of reasoning in the world, (3) making custom robots, and (4) making more intuitive interfaces between robots and people.

Speaker Biography: Daniela Rus is the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science, Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, and Deputy Dean of Research in the Schwarzman College of Computing at MIT. She is also a senior visiting fellow at Mitre Corporation. Rus’ research interests are in robotics and artificial intelligence. The key focus of her research is to develop the science and engineering of autonomy. Rus is a Class of 2002 MacArthur Fellow, a fellow of ACM, AAAI and IEEE, a member of the National Academy of Engineering, and of the American Academy of Arts and Sciences. She is the recipient of the Engelberger Award for robotics. She earned her PhD in Computer Science from Cornell University.

CS Seminar Series

December 8, 2020

Abstract: Technological advancements have led to a proliferation of robots using machine learning systems to assist humans in a wide range of tasks. However, we are still far from accurate, reliable, and resource-efficient operation of these systems. Despite the strengths of convolutional neural networks (CNNs) for object recognition, these discriminative techniques have several shortcomings that leave them vulnerable to exploitation by adversaries. In addition, the computational cost incurred to train these discriminative models can be quite significant. Discriminative-generative approaches offer a promising avenue for robust perception and action. Such methods combine inference by deep learning with sampling and probabilistic inference models to achieve robust and adaptive understanding. The focus is now on implementing a computationally efficient generative inference stage that can achieve real-time results in an energy-efficient manner. In this talk, I will present our work on Generative Robust Inference and Perception (GRIP), a discriminative-generative approach to pose estimation that offers high accuracy, especially in unstructured and adversarial environments. I will then describe how we have designed an all-hardware implementation of this algorithm to obtain real-time performance with high energy efficiency.

Speaker Biography: R. Iris Bahar received the B.S. and M.S. degrees in computer engineering from the University of Illinois, Urbana-Champaign, and the Ph.D. degree in electrical and computer engineering from the University of Colorado, Boulder. Before entering the Ph.D. program at CU-Boulder, she worked for Digital Equipment Corporation on their microprocessor designs. She has been on the faculty at Brown University since 1996 and now holds a dual appointment as Professor of Engineering and Professor of Computer Science. Her research interests have centered on energy-efficient and reliable computing, from the system level to the device level. Most recently, this includes the design of robotic systems. She served as the Program Chair and General Chair of the International Conference on Computer-Aided Design (ICCAD) in 2017 and 2018, respectively, and as the General Chair of the International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS) in 2019. She is the 2019 recipient of the Marie R. Pistilli Women in Engineering Achievement Award and the Brown University School of Engineering Award for Excellence in Teaching in Engineering. More information about her research can be found at http://cs.brown.edu/people/irisbahar.

View the recording >>

Computer Science Seminar Series

December 15, 2020

Abstract: In recent years, there has been an explosion of reports of automated systems exhibiting undesirable behavior, often manifesting in terms of gross violations of social norms like privacy and fairness. This poses new challenges for regulation and governance, in part because these bad algorithmic behaviors are not the result of malicious intent on the part of their designers, but are instead the unanticipated side effects of applying the standard tools of machine learning. The solution must therefore be in part algorithmic—we need to develop a scientific approach aiming to formalize the kinds of behaviors we want to avoid and design algorithms that avoid them. We will survey this area, focusing on both the more mature area of private algorithm design and the more nascent area of algorithmic fairness. We will also touch on other issues, including how we can think about the larger societal effects of imposing constraints on specific algorithmic parts of larger sociotechnical systems.

Speaker Biographies: Michael Kearns is a professor in the Department of Computer and Information Science at the University of Pennsylvania, where he holds the National Center Chair and has joint appointments in the Wharton School. He is the founder of Penn’s Networked and Social Systems Engineering (NETS) program and the director of Penn’s Warren Center for Network and Data Sciences. His research interests include topics in machine learning, algorithmic game theory, social networks, and computational finance. He is a fellow of the American Academy of Arts and Sciences, the ACM, and the Association for the Advancement of Artificial Intelligence. Kearns has consulted widely in the finance and technology industries, including in his current role as an Amazon Scholar. With Aaron Roth, he is the co-author of the recent general-audience book The Ethical Algorithm: The Science of Socially Aware Algorithm Design (Oxford University Press). Aaron Roth is a professor in the Department of Computer and Information Science at Penn, an affiliate of the Warren Center for Network and Data Sciences, and the co-director of the NETS program. He is also an Amazon Scholar at Amazon Web Services. He is the recipient of a Presidential Early Career Award for Scientists and Engineers awarded by President Obama in 2016, an Alfred P. Sloan Research Fellowship, an NSF CAREER award, and research awards from Yahoo, Amazon, and Google. His research focuses on the algorithmic foundations of data privacy, algorithmic fairness, game theory and mechanism design, learning theory, and the intersections of these topics. Together with Cynthia Dwork, he is the author of the book The Algorithmic Foundations of Differential Privacy. Together with Michael Kearns, he is the author of The Ethical Algorithm.