Results of the CIG 2016 Level Generation Competition
The competition had two tracks. Participants had to submit a level generation program that creates game levels under several constraints. During the competition, we generated 100 levels with each submitted generator and randomly selected 20 of them for evaluation.
The aim of the first track, the "Fun Track", was to automatically create levels that are as fun and entertaining as possible for players. This was evaluated by a panel of four judges who played and analysed all randomly selected levels. All panel members agreed that the "Baseline Generator" and "Funny Quotes" created the most entertaining levels. The Baseline Generator created levels that were very hard to solve; some even appeared unsolvable, but some panel members managed to solve one of the seemingly unsolvable levels. This was quite challenging and thus entertaining. Overall, however, the levels all appeared somewhat similar, overloaded with blocks and a bit confusing.
Funny Quotes created levels that displayed text phrases and also mathematical equations, which were quite sophisticated in the way the individual characters were built from blocks. Target objects were mostly placed on the characters and sometimes formed part of the equations, so overall the levels were easier to solve. Despite being similar in style and despite not being very challenging to solve, the levels were always fun to play and never got boring.
After intensive discussions, the panel unanimously agreed that Funny Quotes created the most entertaining levels, and it was declared the winner of track 1. Congratulations to Yuxuan Jiang, Quentin Harscoet, Tomohiro Harada, and Ruck Thawonmas from Ritsumeikan University in Japan for being the first AIBIRDS Level Generation Champions!
The Baseline Generator by Matthew Stephenson came second and Nestware by Sergio González Montañés, Maria Pilar Paulet González, and José Javier Paulet González third. Congratulations to them as well!
The aim of the second track was to create levels that are very hard but solvable. This was evaluated both by people who played the levels (mainly CIG participants, playing through a web interface) and by AI agents who tried to solve them. The Baseline Generator by Matthew Stephenson was the clear winner of track 2, with levels that were very challenging for humans and AI agents alike. The Baseline Generator was the generator provided to all participants to build on and/or to compare their own generators against during development. A more detailed description of Matthew's approaches can be found in papers published at CIG 2016 and AIIDE 2016. Congratulations to Matthew for providing a very good baseline! Second place went to Nestware and third place to Funny Quotes.
This was our first level generation competition. We thank all participants for submitting sophisticated generators and all CIG participants for contributing to the evaluation. We hope to continue our competition next year and encourage all interested teams to participate in this exciting challenge.