A (proper) graph coloring is a function that assigns a color to each vertex of a graph so that no two adjacent vertices receive the same color. A simple greedy strategy produces such a coloring in linear time using at most D+1 colors, where D is the maximum degree of the graph. If we want to produce not just any coloring but a uniformly random one, in polynomial time, how many colors are needed? To address this question, we will use the Markov chain Monte Carlo (MCMC) approach, a general and powerful technique for sampling random elements from combinatorially defined sets. Other well-known applications of MCMC include Google’s PageRank, many dynamical models in statistical physics (e.g., gas dynamics, ferromagnetism), and approximating the permanent. I will describe a simple Markov chain whose state space is the set of proper colorings of a graph, and whose distribution converges to uniform as the number of steps tends to infinity. Simulating this chain for a finite number of steps is the best known algorithm for sampling graph colorings approximately uniformly at random. I will tell you about a sequence of results on this chain, each of which introduced a new general technique for the analysis of MCMC algorithms.
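The greedy strategy mentioned above can be sketched as follows (a minimal illustration; the adjacency-list representation and the function name are my own, not from the abstract). Each vertex is visited once and assigned the smallest color not already used by a colored neighbor, so no vertex ever needs a color beyond D+1:

```python
def greedy_coloring(adj):
    """Color vertices one at a time, giving each vertex the smallest
    color unused among its already-colored neighbors.  Since a vertex
    has at most D neighbors, colors {0, ..., D} always suffice."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# Example: a 5-cycle has maximum degree D = 2, so at most 3 colors appear.
cycle = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
coloring = greedy_coloring(cycle)
assert all(coloring[u] != coloring[v] for u in cycle for v in cycle[u])
assert max(coloring.values()) <= 2  # colors used lie in {0, 1, 2} = {0, ..., D}
```

The running time is linear in the number of vertices plus edges, since each edge is examined a constant number of times.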
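One natural chain of the kind described, sketched here as an assumption about what the talk covers (the abstract does not specify the transition rule), is heat-bath Glauber dynamics: at each step, pick a uniformly random vertex and recolor it with a uniformly random color not appearing on any neighbor. With k > D colors a legal color always exists, so the chain stays inside the set of proper k-colorings:

```python
import random

def glauber_step(adj, coloring, k, rng):
    """One step of heat-bath Glauber dynamics on proper k-colorings:
    choose a uniformly random vertex, then replace its color with a
    uniformly random color absent from its neighborhood.  Requires
    k > max degree so that the set of allowed colors is nonempty."""
    v = rng.choice(list(adj))
    blocked = {coloring[u] for u in adj[v]}
    allowed = [c for c in range(k) if c not in blocked]
    coloring[v] = rng.choice(allowed)

# Simulate on a 5-cycle (D = 2) with k = 5 colors, starting from a
# proper coloring; every state visited remains a proper coloring.
rng = random.Random(0)
cycle = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
coloring = {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}
for _ in range(1000):
    glauber_step(cycle, coloring, k=5, rng=rng)
assert all(coloring[u] != coloring[v] for u in cycle for v in cycle[u])
```

Running the chain for enough steps and reading off the final state gives an approximately uniform sample; how many steps "enough" is, as a function of k and D, is exactly the question the analyses in the talk address.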