The Tabletop Games Balancing Competition

Introduction

At its most basic form, Game Balancing is an optimisation problem. Solving this problem requires modifying game rules to achieve a goal; however, there is no agreed-upon definition of what this goal is. Despite this, most game designers will agree that Game Balancing is an important part of the game design process.


Tabletop Games are true to their name: games played on a flat surface, whether physical or digital. Common examples of Tabletop Games are Poker and Catan. Game Balance is particularly important for Tabletop Games. Their multiplayer nature means most players want to feel they have a fair chance to win, giving an achievable balance goal. Additionally, their predominantly physical nature means that any changes to the rules of a game have to be delivered in a material form, such as a new edition of the game, which is a bigger logistical challenge than changing the code of a video game and pushing out a patch.


There has been previous research on using AI to help automate the Game Balancing process. A common approach is to use an optimisation algorithm such as an Evolutionary Algorithm. These work by iteratively modifying the game parameters to maximise a metric, with this metric being created by the designer to gauge how balanced the game is. As there is no accepted definition of what makes a game balanced, this metric can be uniquely designed for each game.
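As a minimal sketch of this idea, the loop below is a simple (1+1) hill-climbing Evolutionary Algorithm: it mutates numeric game parameters and keeps a mutant whenever the designer-supplied balance metric improves. The parameter names and the metric itself are placeholders, not part of any specific game.

```python
import random

def evolve_parameters(initial_params, balance_metric, generations=100, sigma=0.1):
    """(1+1) evolutionary loop: mutate numeric game parameters and keep the
    mutant whenever it scores higher on the designer's balance metric."""
    best = dict(initial_params)
    best_score = balance_metric(best)
    for _ in range(generations):
        # Perturb each numeric parameter with small Gaussian noise.
        candidate = {k: v + random.gauss(0, sigma * abs(v) if v else sigma)
                     for k, v in best.items()}
        score = balance_metric(candidate)
        if score > best_score:  # keep the better rule-set
            best, best_score = candidate, score
    return best, best_score
```

For example, a toy metric such as `lambda p: -abs(p["unit_cost"] - 5.0)` (a hypothetical parameter) would drive the loop towards a unit cost of 5; in practice the metric would be computed from simulated gameplay.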


These optimisation algorithms need data derived from gameplay in order to find a set of balanced rules. A common way of achieving this is to play the game in a simulated environment, with AI agents acting as players. For Tabletop Games, there is the Tabletop Games Framework (TAG): a framework implemented in Java which provides an interface for implementing tabletop games in a state where AI agents can play them. TAG has an integrated forward model for agents such as Monte Carlo Tree Search (MCTS), and compatibility with Python-based Reinforcement Learning libraries. There are currently over 20 games implemented in TAG, and competitions have already been run using it as a simulator.


There have previously been competitions focused on balancing a particular game. However, to our knowledge, there has not been a competition where the objective is to create an optimisation agent able to balance multiple games of different genres within the Tabletop Games umbrella (e.g. Card Games, Eurogames, Wargames, Party Games, etc.).


Our proposed competition will require entrants to create an optimisation algorithm which is able to balance multiple games from the TAG framework. This presents a novel challenge, as the agent will have to successfully balance not only different games, but games of different genres within the Tabletop Games scope. To achieve this, we will create a language-agnostic API which allows agents to run games with modified rule-sets in TAG, and to retrieve data from this gameplay in order to iteratively achieve balanced games.
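To illustrate how such a language-agnostic API might be used from an entrant's side, the sketch below submits a modified rule-set as JSON over HTTP and reads back aggregate gameplay statistics. The server address, the `/evaluate` route, and the payload and response shapes are all assumptions for illustration, not the final API.

```python
import json
from urllib import request

API_URL = "http://localhost:8080"  # hypothetical address of the TAG evaluation server

def build_request(game: str, ruleset: dict, n_games: int = 100) -> request.Request:
    """Package a modified rule-set as a JSON POST request (route and field
    names are illustrative, not the final API)."""
    body = json.dumps({"game": game, "parameters": ruleset, "nGames": n_games}).encode()
    return request.Request(f"{API_URL}/evaluate", data=body,
                           headers={"Content-Type": "application/json"})

def run_games(game: str, ruleset: dict, n_games: int = 100) -> dict:
    """Run simulated games with AI agents and return aggregate statistics,
    e.g. {"winRates": [...], "avgTurns": ...} (response shape assumed)."""
    with request.urlopen(build_request(game, ruleset, n_games)) as resp:
        return json.load(resp)
```

An optimisation agent written in any language could make the same HTTP calls, which is what makes the API language-agnostic.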


Game Balance has been a topic of research at multiple editions of the IEEE Conference on Games (CoG), thus we imagine there will be interest in the competition from within the community. Additionally, we hope it may attract participants interested in optimisation problems from different disciplines. Finally, by designing the competition to be simple to enter (details will be outlined in the subsequent section), there may be submissions from participants not previously associated with Game Balancing or Optimisation.


After the competition we aim to write a collaborative paper following Togelius’ Big Hippie Family publication model. Our motivation for this model is to allow any discoveries of scientific value to be disseminated from one central paper, while also getting contributions from participants of the competition.

Logistics

The competition will be hosted on a bespoke website. Participants will be able to create an account on the website by signing up with an external login such as GitHub or Google (using OAuth2). Entries will be submitted through these accounts throughout the competition. The website will display a constantly updating leaderboard, giving participants instant feedback on the strength of their entry. In the final stage of the competition this leaderboard will be frozen to add some suspense to the announcement of the winners. The following subsections will go into detail on specific aspects of running the competition.


The winner of the competition will be the participant whose agent achieves the lowest average Balance Loss on a secret subset of the games implemented in TAG. We plan to apply for IEEE CIS funding to offer a prize to the winner, with $1000 being our target.
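The exact Balance Loss for each game will be defined by the organisers; purely as an illustration, one simple formulation measures how far per-player win rates deviate from a uniform share, averaged over the evaluation games:

```python
def balance_loss(win_rates):
    """Illustrative per-game loss: mean absolute deviation of each player's
    win rate from a uniform 1/n share (0.0 = perfectly balanced)."""
    n = len(win_rates)
    target = 1.0 / n
    return sum(abs(w - target) for w in win_rates) / n

def average_balance_loss(results_per_game):
    """Average the per-game loss over the (secret) evaluation set."""
    return sum(balance_loss(r) for r in results_per_game) / len(results_per_game)
```

Under this toy definition, a two-player game where each player wins half the time scores 0.0, while one player always winning scores 0.5; the actual competition metric may weigh other factors, such as game length or decision diversity.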

Participants

The two main target audiences are people interested in how Artificial Intelligence can be used to improve game design, and Artificial Intelligence researchers interested in a black-box optimisation problem which requires a general solution (due to multiple games being balanced).


This is a brand new competition, so we do not have any previous participants to advertise to. Instead we will post announcements in relevant communities such as the cigames Google group, as well as Discord channels related to Game Design and Artificial Intelligence. Finally, direct invitations will be sent to academics with previous work in the field of Automatic Game Balancing.

Timeline

We plan to start the competition on the 14th of April 2025. The competition will then run normally until the 14th of July 2025, at which point it will enter the final stage: the leaderboard will be frozen, and participants will not see how their entry compares to others. Entries will close on the 1st of August 2025. The results will then be announced live at the conference. All participants will be encouraged to attend the conference for the competition session and results announcement.

Organizers

George Long is a PhD Researcher at Queen Mary University of London. George got into the field of game balancing through the concept of min-maxing and how it impacts the design of games. His work on wargame balancing has been presented both at CoG and in the journal ToG.


Diego Perez-Liebana is a Senior Lecturer at Queen Mary University of London, with extensive experience in organising game AI competitions for the CIG/CoG community. Diego led the organisation of the PTSP and GVGAI competitions run between 2012 and 2019, receiving a high number of participants in most editions.


Dr Spyros Samothrakis is a Senior Lecturer in the Department of Computer Science and Electronic Engineering, University of Essex. He is also Chief Scientific Advisor to Essex County Council and Independent Scientific Advisor to the Alan Turing Institute for the Innovate UK Bridge AI programme. His research specialises in machine learning with a focus on out-of-distribution generalisation, meta-learning, causal inference, and reinforcement learning. He has published extensively within the Game AI domain.
