In “A Neuro-evolution Approach to General Atari Game Playing”, Hausknecht, Miikkulainen, and Stone (2013) describe and test four general game-learning AIs based on evolving neural nets. They apply the AIs to sixty-one Atari 2600 games, exceeding the best known human performance in three of them (Bowling, Kung Fu Master, and Video Pinball). This work improves on their previous Atari gaming AI, described in “HyperNEAT-GGP: A HyperNEAT-based Atari General Game Player” (2012) with P. Khandelwal.
The Atari 2600 presents a uniform interface for all its games: a 2D screen, a joystick, and one button. The games are simulated in the Arcade Learning Environment (ALE), which has enabled several research groups to develop AIs for the Atari.
The four algorithms tested are:
- Fixed-topology neural nets that adapt by changing the weights between neurons
- Neural nets that evolve both the weights and the topology of the network (NEAT created by Stanley and Miikkulainen (2002))
- “Indirect encoding of the network weights” (HyperNEAT created by Gauci and Stanley (2008))
- “A hybrid algorithm combining elements of both indirect encodings and individual weight evolution” (HybrID by Clune, Stanley, Pennock, and Ofria (2011))
All of the algorithms evolved a population of 100 neural nets over 150 generations, mostly using the topology shown below.
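To make the evolutionary setup concrete, here is a minimal sketch of the first and simplest variant: fixed-topology weight evolution with the paper's population size (100) and generation count (150). The network, mutation scheme, and toy fitness function below are illustrative assumptions, not the paper's actual implementation — the real fitness is the Atari game score, and the real networks take screen input.

```python
import random

random.seed(0)

# Toy stand-in for the game score: reward weight vectors whose output on a
# fixed probe input lands near a target value. (Assumption for the demo; the
# paper's fitness is the score achieved in the emulated Atari game.)
PROBE = [0.5, -0.3, 0.8]
TARGET = 1.0

def forward(weights, inputs):
    # Single linear unit: the topology is fixed, only the weights evolve.
    return sum(w * x for w, x in zip(weights, inputs))

def fitness(weights):
    # Higher is better; 0 is a perfect match to the target response.
    return -abs(forward(weights, PROBE) - TARGET)

def mutate(weights, sigma=0.1):
    # Gaussian perturbation of every weight.
    return [w + random.gauss(0, sigma) for w in weights]

def evolve(pop_size=100, generations=150, elite_frac=0.1):
    # Random initial population of weight vectors.
    population = [[random.uniform(-1, 1) for _ in PROBE]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        # Keep the top fraction unchanged, refill with mutated copies.
        elites = population[:max(1, int(pop_size * elite_frac))]
        population = elites + [mutate(random.choice(elites))
                               for _ in range(pop_size - len(elites))]
    return max(population, key=fitness)

best = evolve()
```

The other three algorithms replace this fixed network with NEAT's evolving topologies or HyperNEAT's indirect encoding, but the outer loop — evaluate, select, mutate, repeat for 150 generations — is the same shape.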
For movies of the AIs in action, click below.