Fall 2020

Video Recording >>

IAA Speaker

September 22, 2020

Recent years have seen astounding growth in the deployment of AI systems in critical domains such as autonomous vehicles, criminal justice, healthcare, hiring, housing, human resource management, law enforcement, and public safety, where decisions taken by AI agents directly impact human lives. Consequently, there is increasing concern about whether these decisions can be trusted to be correct, reliable, fair, and safe, especially under adversarial attacks. How, then, can we deliver on the promised benefits of AI while addressing scenarios that have life-critical consequences for people and society? In short, how can we achieve trustworthy AI? Under the umbrella of trustworthy computing, there is a long-established framework employing formal methods and verification techniques for ensuring trust properties like reliability, security, and privacy of traditional software and hardware systems. Just as for trustworthy computing, formal verification could be an effective approach for building trust in AI-based systems. However, the set of properties needs to be extended beyond reliability, security, and privacy to include fairness, robustness, probabilistic accuracy under uncertainty, and other properties yet to be identified and defined. Further, there is a need for new property specifications and verification techniques to handle new kinds of artifacts, e.g., data distributions, probabilistic programs, and machine-learning-based models that may learn and adapt automatically over time. This talk will pose a new research agenda, from a formal methods perspective, for increasing trust in AI systems.

Speaker Biography: Jeannette M. Wing is Avanessians Director of the Data Science Institute and Professor of Computer Science at Columbia University. From 2013 to 2017, she was a Corporate Vice President of Microsoft Research. She is Adjunct Professor of Computer Science at Carnegie Mellon, where she twice served as the Head of the Computer Science Department and had been on the faculty since 1985. From 2007 to 2010 she was the Assistant Director of the Computer and Information Science and Engineering Directorate at the National Science Foundation. She received her S.B., S.M., and Ph.D. degrees in Computer Science, all from the Massachusetts Institute of Technology. Professor Wing’s general research interests are in the areas of trustworthy computing, specification and verification, concurrent and distributed systems, programming languages, and software engineering. Her current interests are in the foundations of security and privacy, with a new focus on trustworthy AI. She was or is on the editorial board of twelve journals, including the Journal of the ACM and Communications of the ACM. Professor Wing is known for her work on linearizability, behavioral subtyping, attack graphs, and privacy-compliance checkers. Her 2006 seminal essay, titled Computational Thinking, is credited with helping to establish the centrality of computer science to problem-solving in fields where previously it had not been embraced. She is currently a member of: the National Library of Medicine Blue Ribbon Panel; the Science, Engineering, and Technology Advisory Committee for the American Academy of Arts and Sciences; the Board of Trustees for the Institute of Pure and Applied Mathematics; the Advisory Board for the Association for Women in Mathematics; and the Alibaba DAMO Technical Advisory Board. She has been chair and/or a member of many other academic, government, and industry advisory boards. She received the CRA Distinguished Service Award in 2011 and the ACM Distinguished Service Award in 2014. She is a Fellow of the American Academy of Arts and Sciences, the American Association for the Advancement of Science, the Association for Computing Machinery (ACM), and the Institute of Electrical and Electronics Engineers (IEEE).

CS Seminar Series

September 24, 2020

Recent technological advancements have enabled spatially-resolved transcriptomic measurements of hundreds to thousands of mRNA species with a throughput of hundreds to thousands of single cells per single-day experiment. However, computational methods for statistical analysis capable of taking advantage of this new spatial dimension are still needed to connect transcriptional and spatial-contextual differences in single cells, as well as to identify putative subpopulations and patterns in their spatial organization from within a probabilistic framework. Here, we will demonstrate how we applied computational analysis of transcriptome-scale multiplexed error-robust FISH (MERFISH) data to identify RNA species enriched in different subcellular compartments, transcriptionally distinct cell states corresponding to different cell-cycle phases, and spatial patterning of transcriptionally distinct cells. We anticipate that such spatially resolved transcriptome profiling coupled with spatial computational analyses could help address a wide array of questions ranging from the regulation of gene expression in cells to the development of cell fate and organization in tissues.
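
As a rough illustration of the kind of analysis the abstract describes (clustering cells by expression, then asking whether the resulting subpopulations are spatially organized), here is a minimal sketch on synthetic data. It is not the speaker's MERFISH pipeline; the counts, coordinates, cluster number, and neighbor-agreement statistic are all illustrative assumptions.

```python
# Minimal sketch (synthetic data, not the speaker's pipeline): cluster cells by
# expression, then test whether clusters show spatial patterning via nearest neighbors.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n_cells, n_genes = 500, 100
expression = rng.poisson(lam=2.0, size=(n_cells, n_genes)).astype(float)  # synthetic counts
positions = rng.uniform(0, 1000, size=(n_cells, 2))                       # synthetic x, y coordinates

# Normalize and log-transform counts before clustering.
cpm = expression / expression.sum(axis=1, keepdims=True) * 1e4
log_expr = np.log1p(cpm)

# Identify putative transcriptionally distinct subpopulations.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(log_expr)

# Spatial patterning: fraction of each cell's k nearest spatial neighbors that share
# its cluster label; agreement above chance suggests spatial organization.
k = 10
nbrs = NearestNeighbors(n_neighbors=k + 1).fit(positions)
_, idx = nbrs.kneighbors(positions)
same_label = (labels[idx[:, 1:]] == labels[:, None]).mean()
freqs = np.bincount(labels) / n_cells
expected = np.sum(freqs ** 2)  # chance agreement under random labeling
print(f"neighbor label agreement: {same_label:.2f} (chance ~ {expected:.2f})")
```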

Speaker Biography: I am an Assistant Professor in the Department of Biomedical Engineering at Johns Hopkins University. My lab is interested in understanding the molecular and spatial-contextual factors shaping cellular identity and heterogeneity, particularly in the context of cancer and how this heterogeneity impacts tumor progression, therapeutic resistance, and ultimately clinical prognosis. We develop new open-source computational software for analyzing single-cell multi-omic and imaging data that can be tailored and applied to diverse cancer types and biological systems. I was previously an NCI F99/K00 post-doctoral fellow in the lab of Dr. Xiaowei Zhuang at Harvard University. I received my PhD in Bioinformatics and Integrative Genomics at Harvard under the mentorship of Dr. Peter Kharchenko in the Department of Biomedical Informatics and in close collaboration with Dr. Catherine Wu at the Dana-Farber Cancer Institute.

Video Recording >>

Gerald M. Masson Distinguished Lecturer

October 13, 2020

In the past 10 years, large network owners and operators have taken control of the software that controls their networks. They are now starting to take control of how packets are processed too. Networks, for the first time, are on the cusp of being programmable end-to-end, specified top-to-bottom, and defined entirely by software. We will think of the network as a programmable platform: a distributed system that we specify top-down using software rather than protocols. This has big ramifications for networks in the future, creating some interesting new possibilities to verify that a network is “correct by construction”, to measure and validate its behavior in real-time against a network specification, and to correct bugs through closed-loop control.

Speaker Biography: Nick McKeown is the Kleiner Perkins, Mayfield and Sequoia Professor of Electrical Engineering and Computer Science at Stanford University. His research work has focused mostly on how to improve and scale the Internet. From 1988 to 2005, he focused mostly on making the Internet faster; since 2005, he has focused on how to evolve networks faster than before. He co-founded several networking startups, including Nicira (SDN and network virtualization) and Barefoot Networks (programmable switches and P4). He co-founded the Open Networking Foundation (ONF), the Open Networking Lab (ON.Lab), the P4 Language Consortium (P4.org), and an educational non-profit called “CS Bridge” dedicated to teaching high school students worldwide, in-person, how to program. Nick is a member of the NAE, the AAAS, and a Fellow of the Royal Academy of Engineering (UK). He received the ACM Sigcomm Lifetime Achievement Award (2012), the NEC C&C Prize (2015) and an Honorary Doctorate from ETH (2014).

Video Recording >>

CS Seminar Series

October 15, 2020

As machine learning models are trained on ever-larger and more complex datasets, it has become standard to distribute this training across multiple physical computing devices. Such an approach offers a number of potential benefits, including reduced training time and storage needs due to parallelization. Distributed stochastic gradient descent (SGD) is a common iterative framework for training machine learning models: in each iteration, local workers compute parameter updates on a local dataset. These are then sent to a central server, which aggregates the local updates and pushes global parameters back to local workers to begin a new iteration. Distributed SGD, however, can be expensive in practice: training a typical deep learning model might require several days and thousands of dollars on commercial cloud platforms. Cloud-based services that allow occasional worker failures (e.g., locating some workers on Amazon spot or Google preemptible instances) can reduce this cost, but may also reduce the training accuracy. We quantify the effect of worker failure and recovery rates on the model accuracy and wall-clock training time, and show both analytically and experimentally that these performance bounds can be used to optimize the SGD worker configurations. In particular, we can optimize the number of workers that utilize spot or preemptible instances. Compared to heuristic worker configuration strategies and standard on-demand instances, we dramatically reduce the cost of training a model, with modest increases in training time and the same level of accuracy. Finally, we discuss implications of our work for federated learning environments, which use a variant of distributed SGD. Two major challenges in federated learning are unpredictable worker failures and a heterogeneous (non-i.i.d.) distribution of data across the workers, and we show that our characterization of distributed SGD’s performance under worker failures can be adapted to this setting.
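
To make the distributed SGD setup described above concrete, here is a minimal simulation sketch: workers compute gradients on local shards, some fail in a given round (as spot or preemptible instances might), and the server aggregates whatever updates arrive. It is an illustration of the general framework, not the speakers' model or experiments; the failure probability, learning rate, and least-squares task are made-up assumptions.

```python
# Minimal sketch (illustrative, not the speakers' implementation): synchronous distributed
# SGD on a least-squares problem, where each worker may be unavailable in a given round.
import numpy as np

rng = np.random.default_rng(1)
n_workers, n_features, n_local = 8, 10, 200
p_fail = 0.2   # per-round probability that a spot/preemptible worker fails (assumed value)
lr = 0.1

# Each worker holds a local data shard drawn from the same ground-truth model.
w_true = rng.normal(size=n_features)
shards = []
for _ in range(n_workers):
    X = rng.normal(size=(n_local, n_features))
    y = X @ w_true + 0.1 * rng.normal(size=n_local)
    shards.append((X, y))

w = np.zeros(n_features)  # global parameters held by the central server
for it in range(100):
    grads = []
    for X, y in shards:
        if rng.random() < p_fail:
            continue  # worker failed this round; its update is simply missing
        grads.append(X.T @ (X @ w - y) / n_local)  # local gradient on the worker's shard
    if grads:  # server aggregates the updates that arrived, then broadcasts new parameters
        w -= lr * np.mean(grads, axis=0)

print("parameter error:", np.linalg.norm(w - w_true))
```

Raising p_fail in this sketch slows convergence per round, which is the cost-versus-time-versus-accuracy trade-off the talk quantifies.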

Speaker Biography: Carlee Joe-Wong is an Assistant Professor of Electrical and Computer Engineering at Carnegie Mellon University. She received her A.B., M.A., and Ph.D. degrees from Princeton University in 2011, 2013, and 2016, respectively. Dr. Joe-Wong’s research is in optimizing networked systems, particularly on applying machine learning and pricing to data and computing networks. From 2013 to 2014, she was the Director of Advanced Research at DataMi, a startup she co-founded from her Ph.D. research on mobile data pricing. She has received several awards for her work, including the ARO Young Investigator Award in 2019, the NSF CAREER Award in 2018, and the INFORMS ISS Design Science Award in 2014.

Carlee will be available for a Q&A after her talk until 1 PM.

Video Recording >>

CS Seminar Series

October 20, 2020

Sparsity has been a driving force in signal & image processing and machine learning for decades. In this talk we’ll explore sparse representations based on dictionary learning techniques from two perspectives: over-parameterization and adversarial robustness. First, we will characterize the surprising phenomenon that dictionary recovery can be facilitated by searching over the space of larger (over-realized/parameterized) models. This observation is general and independent of the specific dictionary learning algorithm used. We will demonstrate this observation in practice and provide a theoretical analysis of it by tying recovery measures to generalization bounds. We will further show that an efficient and provably correct distillation mechanism can be employed to recover the correct atoms from the over-realized model, consistently providing better recovery of the ground-truth model. We will then switch gears towards the analysis of adversarial examples, focusing on the hypothesis class obtained by coupling a sparsity-promoting encoder with a linear classifier, and show an interesting interplay between the flexibility and stability of the (supervised) representation map and a notion of margin in the feature space. Leveraging a mild encoder gap assumption in the learned representations, we will provide a bound on the generalization error of the robust risk to L2-bounded adversarial perturbations and a robustness certificate for end-to-end classification. We will demonstrate the applicability of our analysis by computing certified accuracy on real data, and comparing with other alternatives for certified robustness. This analysis will shed light on how to characterize this interplay for more general models.
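
As a rough illustration of the over-realization idea (fitting a dictionary with more atoms than the generating model and then checking how many ground-truth atoms appear among the learned ones), here is a small sketch on synthetic sparse data. It uses a generic off-the-shelf dictionary learner rather than the speaker's algorithm or distillation mechanism; the dimensions, noise level, and 0.95 correlation threshold for "recovery" are assumptions made for the example.

```python
# Minimal sketch (synthetic data, not the speaker's method): learn an over-realized
# dictionary and count how many ground-truth atoms it recovers.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
dim, n_true_atoms, n_learned_atoms, n_samples, sparsity = 20, 30, 60, 1000, 3

# Ground-truth dictionary with unit-norm atoms, and k-sparse codes.
D_true = rng.normal(size=(dim, n_true_atoms))
D_true /= np.linalg.norm(D_true, axis=0)
codes = np.zeros((n_samples, n_true_atoms))
for i in range(n_samples):
    support = rng.choice(n_true_atoms, size=sparsity, replace=False)
    codes[i, support] = rng.normal(size=sparsity)
X = codes @ D_true.T + 0.01 * rng.normal(size=(n_samples, dim))

# Fit an over-realized model: twice as many atoms as the generating dictionary.
model = DictionaryLearning(n_components=n_learned_atoms, alpha=0.1,
                           max_iter=30, random_state=0)
model.fit(X)
D_learned = model.components_.T                       # shape (dim, n_learned_atoms)
D_learned /= np.linalg.norm(D_learned, axis=0) + 1e-12

# "Recovery" of a true atom: some learned atom is highly correlated with it (up to sign).
corr = np.abs(D_true.T @ D_learned)                    # (n_true_atoms, n_learned_atoms)
recovered = int((corr.max(axis=1) > 0.95).sum())
print(f"recovered {recovered}/{n_true_atoms} ground-truth atoms")
```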

Speaker Biography: Jeremias Sulam is an assistant professor in the Biomedical Engineering Department at JHU, and a faculty member of the Mathematical Institute for Data Science (MINDS) and the Center for Imaging Science (CIS). He received his PhD in Computer Science from the Technion-Israel Institute of Technology in 2018. He is the recipient of the Best Graduates Award of the Argentinean National Academy of Engineering. His research interests include machine learning, signal and image processing, representation learning and their application to biomedical sciences.

Krasnopoler Lecture

October 27, 2020

Ed Catmull will be hosting a live Q&A.

Speaker Biography: Ed Catmull, Turing Award winner for his contributions to 3D graphics and CGI filmmaking.

Dr. Ed Catmull is co-founder of Pixar Animation Studios and the former president of Pixar, Walt Disney Animation Studios, and Disneytoon Studios. For over twenty-five years, Pixar has dominated the world of animation, producing fourteen consecutive #1 box office hits, which have grossed more than $8.7 billion at the worldwide box office to date, and won thirty Academy Awards®.

His book Creativity, Inc.—co-written with journalist Amy Wallace and years in the making—is a distillation of the ideas and management principles Ed has used to develop a creative culture. A book for managers who want to lead their employees to new heights, it also grants readers an all-access trip into the nerve center of Pixar Animation Studios—into the meetings, postmortems, and “Braintrust” sessions where some of the most successful films in history have been made.

Dr. Catmull has been honored with five Academy Awards®, including an Oscar for Lifetime Achievement for his technical contributions and leadership in the field of computer graphics for the motion picture industry. He has also been awarded the Turing Award by the world’s largest society of computing professionals, the Association for Computing Machinery, for his work on three-dimensional computer graphics. Dr. Catmull earned B.S. degrees in computer science and physics and a Ph.D. in computer science from the University of Utah. In 2005, the University of Utah presented him with an Honorary Doctoral Degree in Engineering. In 2018, Catmull announced his retirement from Pixar, though he has cemented his legacy as an innovator in technology, entertainment, business, and leadership.

Video Recording >>

CS Seminar Series

October 29, 2020

The meteoric rise in performance of modern AI raises many concerns when it comes to autonomous systems, their use, and their ethics. In this talk, Jim Hendler reviews some of the emerging challenges, with a particular emphasis on one of the issues faced by current deep learning technology (including neural symbolic approaches) – how do AI systems know what they don’t know? Hendler avoids the generic issue, which has been raised by philosophers of AI, and looks more specifically at where the failures are coming from. The limitations of current systems, issues such as ‘personalization’ that increase the challenge, and some of the governance issues that arise from these limitations will be covered.

Speaker Biography: James Hendler is the Director of the Institute for Data Exploration and Applications and the Tetherless World Professor of Computer, Web and Cognitive Sciences at RPI. He also heads the RPI-IBM Center for Health Empowerment by Analytics, Learning and Semantics (HEALS). Hendler has authored over 400 books, technical papers and articles in the areas of Semantic Web, artificial intelligence, agent-based computing and high-performance processing. One of the originators of the “Semantic Web,” Hendler was the recipient of a 1995 Fulbright Foundation Fellowship, is a former member of the US Air Force Science Advisory Board, and is a Fellow of the AAAI, BCS, the IEEE, the AAAS and the ACM. He is also the former Chief Scientist of the Information Systems Office at the US Defense Advanced Research Projects Agency (DARPA) and was awarded a US Air Force Exceptional Civilian Service Medal in 2002. In 2016, he became a member of the National Academies Board on Research Data and Information, in 2017 an advisor to the National Security Directorate at PNNL, and in 2018 was elected a Fellow of the National Academy of Public Administration.

CS Seminar Series

November 5, 2020

Robot-assisted surgery (RAS) systems incorporate highly dexterous tools, hand tremor filtering, and motion scaling to enable a minimally invasive surgical approach, reducing collateral damage and patient recovery times. However, current state-of-the-art telerobotic surgery requires the surgeon to control every motion of the robot, resulting in long procedure times and inconsistent results. The advantages of autonomous robotic functionality have been demonstrated in applications outside of medicine, such as manufacturing and aviation. A limited form of autonomous RAS with pre-planned functionality was introduced in orthopedic procedures, radiotherapy, and cochlear implants. Efforts in automating soft tissue surgeries have been limited so far to elemental tasks such as knot tying, needle insertion, and executing predefined motions. The fundamental problems in soft tissue surgery include unpredictable shape changes, tissue deformations, and perception challenges.

My research goal is to transform current manual and teleoperated robotic soft tissue surgery to autonomous robotic surgery, improving patient outcomes by reducing the reliance on the operating surgeon, eliminating human errors, and increasing precision and speed. This presentation will introduce our Intelligent Medical Robotic Systems and Equipment (IMERSE) lab and discuss our novel strategies to overcome the challenges encountered in soft tissue autonomous surgery. Presentation topics will include: a) a robotic system for supervised autonomous laparoscopic anastomosis, b) magnetically steered robotic suturing, c) development of patient specific biodegradable nanofiber tissue-engineered vascular grafts to optimally repair congenital heart defects (CHD), and d) our work on COVID-19 mitigation in ICU robotics, safe testing, and safe intubation.

Speaker Biography: Axel Krieger, PhD, and his IMERSE team joined LCSR in July 2020. He is an Assistant Professor in the Department of Mechanical Engineering at the Johns Hopkins University. He is leading a team of students, scientists, and engineers in the research and development of robotic tools and laparoscopic devices. Projects include the development of a surgical robot called smart tissue autonomous robot (STAR) and the use of 3D printing for surgical planning and patient specific implants. Professor Krieger is an inventor of over twenty patents and patent applications. Licensees of his patents include medical device start-ups Activ Surgical and PeriCor as well as industry leaders such as Siemens, Philips, and Intuitive Surgical. Before joining the Johns Hopkins University, Professor Krieger was Assistant Professor in Mechanical Engineering at the University of Maryland and Assistant Research Professor and Program Lead for Smart Tools at the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National. He has several years of experience in private industry at Sentinelle Medical Inc. and Hologic Inc., where he served as a Product Leader.

Video Recording >>

CS Seminar Series

November 10, 2020

Recent advances in informatics, robotics, machine learning, and augmented reality provide exciting opportunities for developing integrated surgical systems for orthopaedic surgery. These systems have the potential to help surgeons improve conventional approaches and empower them to test novel surgical intervention techniques. These integrated interventional systems can assist the surgeon to make use of both patient-specific and population-specific data to perform technical analysis for optimizing a surgical plan. Integrated within the surgical navigation system are dexterous tools and manipulators as well as enhanced visualization to improve the accessibility, precision, and perception of the surgeon. Finally, the overall system architecture can include tools for patient-specific outcome analysis. The resulting analytics can populate and improve existing informatics databases. The talk will discuss our current efforts and challenges in developing such computer/robot assisted systems for applications in orthopaedic surgery.

Speaker Biography: Mehran Armand is Professor of Orthopaedic Surgery, Research Professor of Mechanical Engineering, and Principal Staff at the Johns Hopkins University Applied Physics Laboratory (JHU/APL). He received a Ph.D. degree in mechanical engineering and a Ph.D. degree in kinesiology from the University of Waterloo with a focus on bipedal locomotion. Prior to joining JHU/APL in 2000, he completed postdoctoral fellowships at JHU Departments of Orthopaedic Surgery and Otolaryngology (ENT). He currently directs the laboratory for Biomechanical- and Image-Guided Surgical Systems (BIGSS) within the Laboratory for Computational Sensing and Robotics (LCSR). He also co-directs the newly established Neuroplastic Surgery Research laboratory and AVICENNA laboratory for advancing surgical technologies, located at the Johns Hopkins Bayview Medical Center. His lab encompasses collaborative research in continuum manipulators, biomechanics, medical image analysis, and augmented reality for translation to clinical applications in the areas of orthopaedic, ENT, and craniofacial reconstructive surgery.

Video Recording >>

IAA & CS Seminar Series

November 12, 2020

Under most conditions, complex systems are imperfect. When errors occur, as they inevitably will, systems need to be able to (1) localize the error and (2) take appropriate action to mitigate the repercussions of that error. In this talk, I present new methodologies for detecting and explaining errors in complex systems. My novel contribution is a system-wide monitoring architecture, which is composed of introspective, overlapping committees of subsystems. Each subsystem is encapsulated in a “reasonableness” monitor, an adaptable framework that supplements local decisions with commonsense data and reasonableness rules. This framework is dynamic and introspective: it allows each subsystem to defend its decisions in different contexts: to the committees it participates in and to itself. For reconciling system-wide errors, I developed a comprehensive architecture that I call “Anomaly Detection through Explanations (ADE).” The ADE architecture contributes an explanation synthesizer that produces an argument tree, which in turn can be traced and queried to determine the support of a decision, and to construct counterfactual explanations. I have applied this methodology to detect incorrect labels in semiautonomous vehicle data, and to reconcile inconsistencies in simulated, anomalous driving scenarios. My work has opened up the new area of explanatory anomaly detection, working towards a vision in which complex systems will be articulate by design: they will be dynamic; internal explanations will be part of the design criteria; system-level explanations will be provided; and they can be challenged in an adversarial proceeding.
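
As a loose illustration of the argument-tree idea (recording why each subsystem claim was accepted or rejected so a decision can later be traced and queried), here is a toy sketch. It is not the ADE or reasonableness-monitor implementation; the node structure, the speed-plausibility rule, and all values are assumptions invented for the example.

```python
# Toy sketch (not the ADE implementation): an argument-tree node that records the
# support for a decision, plus one illustrative "reasonableness" rule.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ArgumentNode:
    claim: str                        # e.g., "perception: object #12 is a pedestrian"
    supported: bool                   # did this claim pass its reasonableness check?
    reason: str                       # human-readable justification
    children: List["ArgumentNode"] = field(default_factory=list)

    def trace(self, depth: int = 0) -> None:
        """Print the support structure of a decision, top-down."""
        mark = "+" if self.supported else "-"
        print("  " * depth + f"[{mark}] {self.claim}: {self.reason}")
        for child in self.children:
            child.trace(depth + 1)

# Illustrative commonsense rule: a detected object's speed should be plausible for its label.
def check_speed(label: str, speed_mps: float) -> ArgumentNode:
    limits = {"pedestrian": 4.0, "cyclist": 15.0, "car": 70.0}  # assumed bounds
    ok = speed_mps <= limits.get(label, float("inf"))
    return ArgumentNode(claim=f"{label} moving at {speed_mps} m/s",
                        supported=ok,
                        reason="within commonsense speed range" if ok
                               else "speed implausible for label")

root = ArgumentNode("accept perception output for planning", supported=True, reason="")
root.children.append(check_speed("pedestrian", 2.1))
root.children.append(check_speed("pedestrian", 12.0))   # implausible; flagged for explanation
root.supported = all(c.supported for c in root.children)
root.reason = "all checks passed" if root.supported else "a subsystem claim failed"
root.trace()
```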

Speaker Biography: Leilani H. Gilpin is a research scientist at Sony AI and a collaborating researcher at MIT CSAIL. Her research focuses on enabling opaque autonomous systems to explain themselves for robust decision-making, system debugging, and accountability. Her current work integrates explainability into reinforcement learning. She has a PhD in Computer Science from MIT, an M.S. in Computational and Mathematical Engineering from Stanford University, and a B.S. in Mathematics (with honors), B.S. in Computer Science (with highest honors), and a music minor from UC San Diego. She is currently co-organizing the AAAI Fall Symposium on Anticipatory Thinking, where she is the lead of the autonomous vehicle challenge problem. Outside of research, Leilani enjoys swimming, cooking, rowing, and org-mode.

View previous seminars at https://iaa.jhu.edu/event/

Video Recording >>

CS Seminar Series

November 17, 2020

Rapid advances in computing, networking, and sensing technologies have resulted in ubiquitous deployment of Medical Cyber-Physical Systems (MCPS) in various clinical and personalized settings. However, with the growing complexity and connectivity of software, the increasing use of artificial intelligence for control and decision making, and the inevitable involvement of human operators in supervision and control of MCPS, there are still significant challenges in ensuring safety and security. In this talk, I will present our recent work on the design of context-aware safety monitors that can be integrated with an MCPS controller and detect the early signs of adverse events through real-time analysis of measurements from operational, cyber, and physical layers of the system. Our proposed monitors are evaluated on a real-world system for robot-assisted surgery and are shown to be effective in timely detection of unsafe control actions caused by accidental faults, unintentional human errors, or malicious attacks in cyberspace before they manifest in the physical system and lead to adverse consequences and harm to patients.
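
As a generic illustration of the monitoring idea (vetting a proposed control action against context-dependent safety rules before it reaches the physical system), here is a small sketch. It is not the speaker's monitor for robot-assisted surgery; the context fields, thresholds, and rules are invented for the example.

```python
# Generic sketch (illustrative, not the speaker's system): a context-aware runtime check
# that blocks unsafe control commands before they reach the physical system.
from dataclasses import dataclass

@dataclass
class Context:
    task: str            # operational-layer context, e.g. "cutting" (assumed label)
    near_tissue: bool    # physical-layer estimate from sensing

@dataclass
class Command:
    velocity_mm_s: float
    grip_force_n: float

def is_safe(cmd: Command, ctx: Context) -> bool:
    """Return False for control actions that are unsafe in the current context."""
    # Illustrative rules: tighter limits while cutting near tissue.
    max_velocity = 5.0 if (ctx.task == "cutting" and ctx.near_tissue) else 20.0
    max_force = 2.0 if ctx.near_tissue else 5.0
    return cmd.velocity_mm_s <= max_velocity and cmd.grip_force_n <= max_force

# The monitor sits between the controller and the robot: unsafe commands are caught early,
# whether they stem from faults, human error, or malicious input.
ctx = Context(task="cutting", near_tissue=True)
for cmd in [Command(3.0, 1.5), Command(12.0, 1.0)]:
    action = "forward to robot" if is_safe(cmd, ctx) else "block and alert operator"
    print(cmd, "->", action)
```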

Speaker Biography: Homa Alemzadeh is an Assistant Professor in the Department of Electrical and Computer Engineering with a courtesy appointment in Computer Science at the University of Virginia. She is also a member of the Link Lab, a multi-disciplinary center for research and education in Cyber-Physical Systems (CPS). Before joining UVA, she was a research staff member at the IBM T. J. Watson Research Center. Homa received her Ph.D. in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign and her B.Sc. and M.Sc. degrees in Computer Engineering from the University of Tehran. Her research interests are at the intersection of computer systems dependability and data science, in particular data-driven resilience assessment and design of CPS with applications to medical devices, surgical robots, and autonomous systems. She is the recipient of the 2017 William C. Carter Ph.D. Dissertation Award in Dependability from the IEEE TC and IFIP Working Group 10.4 on Dependable Computing and Fault Tolerance. Her work on the analysis of safety incidents in robotic surgery was selected as the Maxwell Chamberlain Memorial Paper at the 50th annual meeting of the Society of Thoracic Surgeons (STS) and was featured in the Wall Street Journal, MIT Technology Review, and BBC, among others.

Distinguished Lecturer

December 1, 2020

The digitization of practically everything coupled with advances in machine learning, the automation of knowledge work, and advanced robotics promises a future with democratized use of machines and widespread use of AI, robots, and customization. While the last 60 years have defined the field of industrial robots and empowered hard-bodied robots to execute complex assembly tasks in constrained industrial settings, the next 60 years could usher in an era of pervasive robots that come in a diversity of forms and materials, helping people with physical and cognitive tasks. However, the pervasive use of machines remains a hard problem. How can we accelerate the creation of machines customized to specific tasks? Where are the gaps that we need to address in order to advance the bodies and brains of machines? How can we develop scalable and trustworthy reasoning engines?

In this talk I will discuss recent developments in machine learning and robotics, focusing on how computation can play a role in (1) developing Neural Circuit Policies, an efficient approach to more interpretable machine learning engines, (2) making machines more capable of reasoning in the world, (3) making custom robots, and (4) making more intuitive interfaces between robots and people.

Speaker Biography: Daniela Rus is the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science, Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, and Deputy Dean of Research in the Schwarzman College of Computing at MIT. She is also a senior visiting fellow at Mitre Corporation. Rus’ research interests are in robotics and artificial intelligence. The key focus of her research is to develop the science and engineering of autonomy. Rus is a Class of 2002 MacArthur Fellow, a fellow of ACM, AAAI and IEEE, a member of the National Academy of Engineering, and of the American Academy of Arts and Sciences. She is the recipient of the Engelberger Award for robotics. She earned her PhD in Computer Science from Cornell University.

CS Seminar Series

December 8, 2020

Technological advancements have led to a proliferation of robots using machine learning systems to assist humans in a wide range of tasks. However, we are still far from accurate, reliable, and resource-efficient operations of these systems. Despite the strengths of convolutional neural networks (CNNs) for object recognition, these discriminative techniques have several shortcomings that leave them vulnerable to exploitation from adversaries. In addition, the computational cost incurred to train these discriminative models can be quite significant. Discriminative-generative approaches offer a promising avenue for robust perception and action. Such methods combine inference by deep learning with sampling and probabilistic inference models to achieve robust and adaptive understanding. The focus is now on implementing a computationally efficient generative inference stage that can achieve real-time results in an energy efficient manner. In this talk, I will present our work on Generative Robust Inference and Perception (GRIP), a discriminative-generative approach for pose estimation that offers high accuracy especially in unstructured and adversarial environments. I will then describe how we have designed an all-hardware implementation of this algorithm to obtain real-time performance with high energy-efficiency.
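
To give a flavor of the discriminative-generative idea (a fast learned stage proposes a hypothesis, and a generative sample-and-score stage refines it against the observations), here is a toy 2D pose-refinement sketch. It is not the GRIP algorithm; the object model, the noisy "detector" output, and the random-search refinement are stand-ins chosen only to illustrate the two-stage structure.

```python
# Toy sketch (a stand-in, not GRIP): refine a noisy discriminative pose hypothesis with a
# generative sample-and-score stage that compares the model, rendered at each candidate
# pose, against observed points.
import numpy as np

rng = np.random.default_rng(2)

# Toy 2D object model (corners of a square) and a hidden true pose (x, y, theta).
model = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
true_pose = np.array([2.0, -1.0, 0.6])

def render(pose, pts):
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    return pts @ R.T + np.array([x, y])

observed = render(true_pose, model) + 0.02 * rng.normal(size=model.shape)

# Stage 1 (discriminative stand-in): a noisy initial estimate, as a learned detector might give.
initial = true_pose + np.array([0.3, -0.25, 0.2])

# Stage 2 (generative): sample candidate poses around the estimate, keep the best-scoring one.
def score(pose):
    return -np.sum((render(pose, model) - observed) ** 2)  # higher is better

best = initial
for _ in range(200):
    candidate = best + rng.normal(scale=[0.05, 0.05, 0.02])
    if score(candidate) > score(best):
        best = candidate

print("initial error:", np.linalg.norm(initial - true_pose))
print("refined error:", np.linalg.norm(best - true_pose))
```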

Speaker Biography: R. Iris Bahar received the B.S. and M.S. degrees in computer engineering from the University of Illinois, Urbana-Champaign, and the Ph.D. degree in electrical and computer engineering from the University of Colorado, Boulder. Before entering the Ph.D. program at CU-Boulder, she worked for Digital Equipment Corporation on their microprocessor designs. She has been on the faculty at Brown University since 1996 and now holds a dual appointment as Professor of Engineering and Professor of Computer Science. Her research interests have centered on energy-efficient and reliable computing, from the system level to device level. Most recently, this includes the design of robotic systems. Recently, she served as the Program Chair and General Chair of the International Conference on Computer-Aided Design (ICCAD) in 2017 and 2018, respectively, and the General Chair of the International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS) in 2019. She is the 2019 recipient of the Marie R. Pistilli Women in Engineering Achievement Award and the Brown University School of Engineering Award for Excellence in Teaching in Engineering. More information about her research can be found at http://cs.brown.edu/people/irisbahar

Video Recording >>

CS Seminar Series

December 15, 2020

In recent years, there has been an explosion of reports of automated systems exhibiting undesirable behavior, often manifesting itself in terms of gross violations of social norms like privacy and fairness. This poses new challenges for regulation and governance, in part because these bad algorithmic behaviors are not the result of mal-intent on the part of their designers, but are instead the unanticipated side effects of applying the standard tools of machine learning. The solution must therefore be in part algorithmic: we need to develop a scientific approach aiming to formalize the kinds of behaviors we want to avoid, and design algorithms that avoid them. We will survey this area, focusing on both the more mature area of private algorithm design and the more nascent area of algorithmic fairness. We will touch on other issues, including how we can think about the larger societal effects of imposing constraints on specific algorithmic parts of larger sociotechnical systems.
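
As a textbook illustration of private algorithm design (not drawn from the talk itself), the sketch below shows the Laplace mechanism: a counting query has sensitivity 1, so adding Laplace noise with scale 1/epsilon makes the released count epsilon-differentially private. The dataset and predicate are made up for the example.

```python
# Textbook illustration of the Laplace mechanism for differential privacy.
import numpy as np

rng = np.random.default_rng(3)

def private_count(data, predicate, epsilon):
    """Release a differentially private count of records satisfying `predicate`."""
    true_count = sum(1 for record in data if predicate(record))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)  # sensitivity of a count is 1
    return true_count + noise

ages = [23, 35, 41, 52, 29, 67, 44, 31, 58, 39]   # toy dataset
for eps in [0.1, 1.0, 10.0]:
    released = private_count(ages, lambda a: a >= 40, epsilon=eps)
    print(f"epsilon={eps:>4}: released count = {released:.1f} (true count = 5)")
```

Smaller epsilon means stronger privacy and noisier answers, the basic accuracy-privacy trade-off that this line of work formalizes.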

Speaker Biography: Dr. Michael Kearns is a professor in the Computer and Information Science department at the University of Pennsylvania, where he holds the National Center Chair and has joint appointments in the Wharton School. He is founder of Penn’s Networked and Social Systems Engineering (NETS) program, and director of Penn’s Warren Center for Network and Data Sciences. His research interests include topics in machine learning, algorithmic game theory, social networks, and computational finance. He has worked and consulted extensively in the technology and finance industries, including a current role as an Amazon Scholar. He is a fellow of the American Academy of Arts and Sciences, the Association for Computing Machinery, and the Association for the Advancement of Artificial Intelligence. With Aaron Roth, he is the co-author of the recent general-audience book “The Ethical Algorithm: The Science of Socially Aware Algorithm Design” (Oxford University Press).

Dr. Aaron Roth is a professor in the Computer and Information Science department at the University of Pennsylvania, affiliated with the Warren Center for Network and Data Sciences, and co-director of the Networked and Social Systems Engineering (NETS) program. He is also an Amazon Scholar at Amazon AWS. He is the recipient of a Presidential Early Career Award for Scientists and Engineers (PECASE) awarded by President Obama in 2016, an Alfred P. Sloan Research Fellowship, an NSF CAREER award, and research awards from Yahoo, Amazon, and Google. His research focuses on the algorithmic foundations of data privacy, algorithmic fairness, game theory and mechanism design, learning theory, and the intersections of these topics. Together with Cynthia Dwork, he is the author of the book “The Algorithmic Foundations of Differential Privacy.” Together with Michael Kearns, he is the author of “The Ethical Algorithm.”