Instructor: John W. Sheppard
Dr. John Sheppard is the RightNow Technologies Distinguished Professor in Computer Science at Montana State University. He was recently elected an IEEE Fellow "for contributions to system-level diagnosis and prognosis." Prior to joining Hopkins, he was a Fellow at ARINC Incorporated in Annapolis, MD, where he worked for almost 20 years. Dr. Sheppard performs research in Bayesian classification, factorial hidden Markov models, recurrent neural networks, and reinforcement learning. In addition, Dr. Sheppard is active in IEEE standards activities. Currently, he serves as a member of the IEEE Computer Society Standards Activities Board and is the Vice Chair of IEEE Standards Coordinating Committee 20 (SCC20) on Test and Diagnosis for Electronic Systems. He has served as co-chair of the Diagnostic and Maintenance Control Subcommittee of SCC20 and as an official US delegate to the International Electrotechnical Commission's Technical Committee 93 on Design Automation.
Course Description
This seminar course will look at current research in machine learning. Topics will be selected from those of mutual interest between students and the instructor. Sample topics include reinforcement learning, kernel methods, experimental methods in machine learning, computational learning theory, lazy learning, evolutionary computation, and neural networks. Students are expected to select papers and lead discussion.
Fall 2008 Topic
The machine learning topic for the Fall 2008 semester is learning in games and computational game theory. The seminar will examine material including lectures, draft textbooks, and papers that consider different types of games and different approaches to learning. It is expected that this seminar will build on prior work in reinforcement learning.
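To give a flavor of the learning-in-games theme before the readings begin, the sketch below runs fictitious play, one of the simplest learning rules studied in the game-theory literature, on matching pennies. This is an illustrative example only, not taken from any of the assigned papers; the payoff matrix and iteration count are arbitrary choices.

```python
import numpy as np

# Row player's payoff matrix for matching pennies (zero-sum: the column
# player receives the negation of these values).
A = np.array([[+1.0, -1.0],
              [-1.0, +1.0]])

row_counts = np.ones(2)  # how often the row player has chosen each action
col_counts = np.ones(2)  # how often the column player has chosen each action

for t in range(20000):
    # Each player best-responds to the empirical mix of the opponent's past play.
    row_action = int(np.argmax(A @ (col_counts / col_counts.sum())))
    col_action = int(np.argmin((row_counts / row_counts.sum()) @ A))
    row_counts[row_action] += 1
    col_counts[col_action] += 1

# For two-player zero-sum games, the empirical action frequencies of fictitious
# play converge to a mixed Nash equilibrium, here (1/2, 1/2) for both players.
print("row player frequencies:", row_counts / row_counts.sum())
print("column player frequencies:", col_counts / col_counts.sum())
```

Fictitious play is a convenient warm-up because its convergence for zero-sum games is a classical result; the readings replace this simple best-response rule with richer learners such as temporal-difference methods and co-evolution.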
Schedule
The seminar meets in the CS Conference Room on Thursdays, 1:30-2:45 Eastern time. The schedule below lists when students will lead discussion; it will be adapted based on the interests of those participating in the class. As papers are assigned and made available, they will be included here for download (password protected).
- September 18: Steve Aldrich
- Gerald Tesauro, "Programming Backgammon Using Self-Teaching Neural Nets," Artificial Intelligence, 134 (2002) 181-199.
- September 25: Brian Haberman
- Daphne Koller and Avi Pfeffer, "Representations and Solutions for Game-Theoretic Problems," Artificial Intelligence, 94:1 (1997) 167-215. [Part 1: Sections 1-3]
- October 2: Anthony Arnone
- Daphne Koller and Avi Pfeffer, "Representations and Solutions for Game-Theoretic Problems," Artificial Intelligence, 94:1 (1997) 167-215. [Part 2: Sections 4-7]
- October 9: Bob Wall
- Jordan Pollack and Alan Blair, "Co-Evolution in the Successful Learning of Backgammon Strategy," Machine Learning, 32 (1998) 225-240.
- Gerald Tesauro, "Comments on 'Co-Evolution in the Successful Learning of Backgammon Strategy'," Machine Learning, 32 (1998) 241-243.
- October 16: Patrick Donnelly
- Yevgeniy Vorobeychik, Michael P. Wellman, and Satinder Singh, "Learning Payoff Functions in Infinite Games," Machine Learning, 67 (2007) 145-168.
- October 23: Stephyn Butcher
- Pieter Spronck, Marc Ponsen, Ida Sprinkhuizen-Kuyper, and Eric Postma, "Adaptive Game AI with Dynamic Scripting," Machine Learning, 63 (2006) 217-248.
- October 30: Benjamin Mitchell
- David Aha, Matthew Molineaux, and Marc Ponsen, "Learning to Win: Case-Based Plan Selection in a Real-Time Strategy Game," Proceedings of the 6th International Conference on Case-Based Reasoning, Chicago: Springer, 2005, 5-10.
- November 6: Neal Richter
- Arthur L. Samuel, "Some Studies in Machine Learning Using the Game of Checkers," IBM Journal, July 1959, pp. 210-229.
- Arthur L. Samuel, "Some Studies in Machine Learning Using the Game of Checkers II--Recent Progress," IBM Journal, November 1967, pp. 601-617.
Game Theory References
- Drew Fudenberg and Jean Tirole, Game Theory, Cambridge, MA: The MIT Press, 1991.
- Elwyn Berlekamp, John Conway, and Richard Guy, Winning Ways for Your Mathematical Plays, Volumes 1-4, AK Peters, Ltd., 2001.
- Ian Millington, Artificial Intelligence for Games, Morgan Kaufmann Publishers, 2006.
- John Nash, Non-Cooperative Games, PhD Dissertation, Department of Mathematics, Princeton University, May 1950.
- Noam Nisan, Tim Roughgarden, Éva Tardos, and Vijay Vazirani, Algorithmic Game Theory, New York: Cambridge University Press, 2007.
- John von Neumann and Oskar Morgenstern, Theory of Games and Economic Behavior, (Commemorative Edition), Princeton University Press, 2007.
- NEW JOURNAL: IEEE Transactions on Computational Intelligence and AI in Games, a publication of the IEEE Computational Intelligence Society, the IEEE Computer Society, the IEEE Consumer Electronics Society, and the IEEE Sensors Council. (First issue scheduled for March 2009)
Machine Learning References
- S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 2nd edition, Prentice-Hall, 2003. (This is an excellent reference for basic artificial intelligence and provides a good introduction to machine learning as well. For this class, I would recommend the chapters on decision/utility theory (16), Markov decision processes (17), and reinforcement learning (21). There is a little discussion of genetic algorithms in chapter 4.)
- T. Mitchell, Machine Learning, McGraw-Hill, 1997. (This has become the "standard" textbook on machine learning and provides chapters on genetic algorithms (9) as well as reinforcement learning (13). While a bit dated, the book is still excellent.)
- E. Alpaydin, Introduction to Machine Learning, The MIT Press, 2004. (This is the newest textbook on machine learning, but I am not particularly excited by it. I offer it up as a more recent resource if the date of Mitchell's text is a concern. This book tends to combine elements of machine learning from a traditional AI perspective with machine learning from the statistical pattern recognition perspective. It does have a chapter on reinforcement learning, but none on genetic algorithms.)
- V. Cherkassky and F. Mulier, Learning from Data: Concepts, Theory, and Methods, Wiley Interscience, 1998. (While also a bit dated, I really like this book. Similar to the Alpaydin text, it approaches machine learning from a statistical point of view, but does so in both a rigorous and lucid manner.)
- R. Duda, P. Hart, and D. Stork, Pattern Classification, Wiley Interscience, 2001. (This is an update to the classic "Duda and Hart" text, Pattern Classification and Scene Analysis, from 1973. Written from the perspective of statistical pattern recognition, this book became the standard reference when machine learning first formed into its own discipline. It provides good descriptions of neural networks and even has a little bit on evolutionary algorithms.)
- S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice-Hall, 1999. (This provides a solid, mathematical introduction to neural networks with some interesting chapters on radial basis function networks (5) and neurodynamic programming (12), as well as all the traditional ANN topics.)
- K. DeJong, Evolutionary Computation: A Unified Approach, The MIT Press, 2006. (This is a brand new book that looks at the main topics of evolutionary computation from a consistent, unifying point of view. It is very readable and covers all of the main algorithms, including genetic algorithms, evolutionary programming, and evolution strategies. It treats genetic programming as a specialization of the genetic algorithm, so no special treatment of GP is provided.)
- M. Mitchell, An Introduction to Genetic Algorithms, The MIT Press, 1996. (This provides a nice introduction and overview to standard genetic algorithms. Similar to other MIT Press books, it is small but well written. It is a bit dated, but given the overview nature of the book, the material is still relevant.)
- R. Sutton and A. Barto, Reinforcement Learning: An Introduction, The MIT Press, 1998. (As far as I know, this is the only book dedicated to reinforcement learning. There is nothing specific to evolutionary or connectionist techniques in the book, except for a chapter on function approximation, but it still provides a good overview. A version of the book is available online at http://www.cs.ualberta.ca/%7Esutton/book/ebook/the-book.html.)