Biases are a major source of harm in our world, and it is now widely recognized that the use of algorithms and AI can maintain, exacerbate, and even create social, structural, and psychological biases. In response, there have been many proposed measures of bias, aspirational principles, and even proposed regulations and policies to eliminate (algorithmic) biases. I will argue, though, that most of these responses fail to address the core ethical and societal challenges of bias; at best, they provide very noisy guides that might sometimes be helpful. After surveying these issues, I will offer a diagnosis: our solutions have focused on either technical or policy responses in isolation, rather than joint technical-policy solutions informed by domain expertise. I will provide an example of such a joint solution, based on our recent work, that integrates bias discovery (technical) with mechanism knowledge (domain expertise) to identify potential responses (policy). While this approach also has flaws, it is better able to identify both sources of problematic bias and potential mitigation actions.
David Danks is Professor of Data Science & Philosophy and affiliate faculty in Computer Science & Engineering at the University of California, San Diego. His research interests range widely across philosophy, cognitive science, and machine learning, including their intersection. Danks has examined the ethical, psychological, and policy issues around AI and robotics in multiple sectors, including transportation, healthcare, privacy, and security. He has also done significant research in computational cognitive science and developed multiple novel causal discovery algorithms for complex types of observational and experimental data. Danks is the recipient of a James S. McDonnell Foundation Scholar Award, as well as an Andrew Carnegie Fellowship. He currently serves on multiple advisory boards, including the National AI Advisory Committee.