RobotTournament


From http://blog.mattwynne.net/2010/05/20/random-notes-from-spa2010/

I ran a session, Robot Tournament, at the conference. Despite what I had considered thorough preparation, I had some rather exciting moments when the tournament engine spluttered and needed some running repairs. Overall, though, the feedback I got was positive. Some observations:

The (accidental) downtime gave people an opportunity to make build scripts and so on. I wonder whether this could be engineered deliberately another time.

More logging and visibility of exactly what’s going on when a player runs would be useful to help participants with debugging.
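For example, the engine could echo back exactly what it does around each run. A minimal Ruby sketch using the standard library's Open3 (the helper name and output format here are hypothetical, not the actual engine code):

 require 'open3'

 # Hypothetical helper: run a robot and show participants exactly what
 # the engine saw: the command line, both output streams and the exit
 # status.
 def run_robot(executable, state)
   stdout, stderr, status = Open3.capture3(executable, state)
   puts "$ #{executable} #{state} (exit #{status.exitstatus})"
   puts "stdout: #{stdout.inspect}"
   puts "stderr: #{stderr.inspect}" unless stderr.empty?
   stdout.strip # the robot's move
 end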

The warm-up should include calling a robot with a command-line argument so that any teething problems with reading the input can be resolved at a relaxed pace.
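Something like this minimal Ruby robot would do for the warm-up. It assumes, purely for illustration, that the engine passes the game state as a command-line argument and reads the move from stdout; "rock" is just a placeholder move:

 #!/usr/bin/env ruby
 # Warm-up robot: proves that input arrives and output gets back to the
 # engine. Assumes the game state comes in as ARGV[0] and the move goes
 # out on stdout.
 state = ARGV[0] || ""
 warn "received: #{state.inspect}" # stderr, for debugging only
 puts "rock"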

A better explanation (role play?) of how the tournament works would help.

Need to limit the number of players to one per team. Although it was worth experimenting with allowing more than one, there were a couple of disadvantages that seemed to outweigh the advantages. First, when people realised they could write scripts to add several robots, this really slowed down the time to run a round due to the number of match pairings (see the sketch below). You could deal with this by using a league system, but for now the simplest thing seems to be to just limit the number of players. Second, there is a strategy (which the winning team used) where you field a patsy player that can recognise a game against another player from the same team and throw the game, thus giving that player an advantage. By releasing several patsy players you leverage that advantage.
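The slowdown is quadratic: a full round robin over n robots needs n(n-1)/2 matches, so a few script-added robots multiply the round time quickly. A quick illustration with made-up numbers:

 # Matches in a full round robin over n robots: n choose 2.
 def matches(n)
   n * (n - 1) / 2
 end

 puts matches(6)  # 6 teams, 1 robot each  => 15 matches per round
 puts matches(18) # 6 teams, 3 robots each => 153 matches, ~10x slower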

I was surprised (and a bit disappointed) at how conservative most of the language choices were. I think we had 3 Ruby robots, 2 Java ones and one Haskell robot. Sadly I couldn't get Smalltalk working for the guy who wanted to use it. It seemed clear that, rather than one language being particularly better than another for the problem at hand, teams who used a language they were familiar with did best.

It was hard for people to see what was going on while they were getting their robots running. More visibility into exactly what happens when their program is run in the server environment would be helpful.

More functionality in the UI to slice the results and see exactly how your own robot had performed would also help.
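For instance, something as simple as filtering a round's matches down to one robot. A hypothetical sketch of the data model and filter:

 # Hypothetical results model: slice a round down to one robot's games.
 Match = Struct.new(:home, :away, :winner)

 def results_for(robot, matches)
   matches.select { |m| [m.home, m.away].include?(robot) }
          .map { |m| m.winner == robot ? "won" : "lost" }
 end

 # e.g. results_for("acme-bot", this_rounds_matches) # => ["won", "lost", ...]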

The problem was so small that tests were hardly needed. Pivoting, i.e. changing the rules of the game halfway through the tournament, might have helped here.

I would also be interested in trying out variable-length iterations: some long ones, some short ones. Shipping simple solutions early was definitely a strategy that worked for everyone.

People enjoyed the fact that the goal, getting points, was clear: rather than being about writing clean code or writing tests, it felt closer to a real business context.

Trying a more open game where you could learn more about your opponent might be interesting. Getting teams to swap code might also be interesting.

Doing a code show & tell wasn't in my plan but worked really well.

The session format ended up being something like this:

10 minutes introduction

25 minutes warm-up

30-45 minutes faffing around fixing the engine while people started to build their real robots

break

7 rounds x 7 minutes ≈ 50 minutes tournament rounds

25 minutes code show & tell

15 minutes retrospective looking at what we'd learned and any insights