Dice Adventure Human-AI Teaming Competition

Introduction

The Dice Adventure Human-AI Teaming Competition is a novel competition focused on human-AI teaming. We are running the second iteration of this competition in 2025. We offer two tracks and welcome participants at all levels. To participate, you must sign up for one or both tracks. If you are interested in developing an agent, please check out the guidelines on the [Submit AI] page for the agent track. The starter code and training environment can be accessed at the [Dice-Adventure-Agents repo] on GitHub. If you do not wish to submit an agent but are still interested in this competition, please check out the details on the [Play] page for signing up as a player in the player track. We will be hosting several virtual matchmaking events to bring players together.


The game environment for the competition, Dice Adventure, is a turn-based, dungeon-crawling adventure game developed at Carnegie Mellon University. Three players take on the roles of a dwarf, a giant, and a human, coordinating to navigate maze-like dungeons and reach goals. Each role has unique, asymmetric abilities suited to different challenges in the game. Players encounter multiple obstacles (represented as monsters, traps, and stones) as they move around the maze. Upon encountering an obstacle, players must neutralize the threat by rolling their unique dice, each of which can give a player an advantage or disadvantage depending on the threat. Although players cannot communicate verbally or through text, the game offers a pinning system as a communication mechanism between teammates. Players can place pins during the pinning phase and must learn how their teammates use and interpret them. To complete a level, each player must reach their individual goal (called a “shrine”), and then at least one player must reach the final goal (the “tower”).


There are three possible team formations under the player track — 1) one human and two agents; 2) two humans and one agent; 3) three humans. Our matchmaking system randomly distributes players to one of the three team types and randomly assigns characters regardless of the order in which players enter the game. Players can play the game as many times as they like during the matchmaking sessions. Registered participants will receive a Competition ID by email to enter in the text box before the matchmaking sessions. Players who are ineligible for the prizes can also play anonymously. We will release the schedule for matchmaking sessions by June 1, 2025.


Please check out our website for the most up-to-date information. Happy Dice Adventuring!


Logistics

Track information

  1. Agent Track: The goal of this track is for participants to submit AI agents trained to play any of the three characters and team up with human players in Dice Adventure. The agents should be able to perform all game actions, such as moving around and using the pinning system to communicate. We will provide starter code and setup tutorials.

  2. Player Track: People who are interested in this competition but do not wish to submit to the agent track can also participate by joining the player track. They will be randomly assigned to a team and play with other human players or agents. We will provide a pre-trained Reinforcement Learning (RL) agent and a pre-trained Hierarchical Task Network (HTN) agent for players to play with in addition to submissions from the agent track.
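To give a feel for what an agent submission involves, here is a minimal baseline sketch in Python. The actual agent interface, action names, and state format are defined in the Dice-Adventure-Agents starter code; every name below (`RandomAgent`, `get_action`, the `"phase"` key, and the action strings) is a hypothetical placeholder for illustration only.

```python
import random

# Hypothetical action vocabularies -- the real action set comes from the
# Dice-Adventure-Agents starter code; these names are illustrative only.
MOVE_ACTIONS = ["up", "down", "left", "right", "wait"]
PIN_ACTIONS = ["pin_danger", "pin_assist", "pin_waypoint", "pin_ok"]


class RandomAgent:
    """A baseline agent that picks uniformly among placeholder actions.

    `character` would be one of the three roles (dwarf, giant, human).
    The constructor and method signatures are assumptions, not the
    official competition API.
    """

    def __init__(self, character: str):
        self.character = character

    def get_action(self, state: dict) -> str:
        # During the pinning phase, an agent communicates by placing pins;
        # otherwise it selects a movement action. The "phase" key is a
        # placeholder for however the real state encodes the game phase.
        if state.get("phase") == "pinning":
            return random.choice(PIN_ACTIONS)
        return random.choice(MOVE_ACTIONS)
```

A stronger submission would replace the random choices with a learned policy (e.g., the provided RL baseline) or a planner (e.g., the provided HTN baseline), but the same observe-state, return-action loop applies.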

Evaluation criteria

  • Participants will be evaluated as a team.

  • We are using a scoring function (available on our competition website) to calculate the total score a team achieves across several levels.

Registration and submission

  • Agents should be developed in Python.

  • Agent development template, local build of the game (available on Windows, macOS and Linux), environment setup tutorial, and GitHub repo for the Unity project will be provided for agent track participants.

  • Agent submissions will be handled by uploading a zip file to a Dropbox link. Agent track participants will also fill out a Qualtrics form to briefly explain their approach.

  • Registration for the player track will be handled through a Qualtrics survey.

  • We will be collecting anonymized gameplay data for academic research and future publications. Both agent track and player track participants will be asked to consent when they register for the competition.

Prizes

  • We received a $1000 competition fund from IEEE CIS in 2024 and are applying for the same fund this year.

  • If funded, we will award prizes to the three teams with the highest overall scores.

  • Winners will be announced on our website and at the conference, and will be awarded certificates.

Participants

Our competition welcomes people from all walks of life to participate and have fun. For the agent track, we encourage participants to submit creative solutions and think about 1) how to design agents that can work with humans; 2) how to incorporate human knowledge and strategies into agent training. For the player track, we hope to give players experience with human-AI teaming, raise their awareness of this topic, and gather their feedback on working with human and AI teammates. We encourage participants in the agent track to submit to the CoG Auxiliary Paper track and share their approaches.

Timeline

We had released the following schedule on our website (https://strong-tact.github.io/) by the time this competition proposal was submitted.

  • Agent track registration start: March 15, 2025

  • Player track registration start: May 1, 2025

  • Agent track submission deadline: May 31, 2025 (23:59 AoE)

  • Online matchmaking events: June 1 – June 30, 2025

  • Winner announcement: mid-August 2025

We will be hosting multiple online matchmaking events from June 1, 2025 to June 30, 2025. The event schedule will be posted on our website under the “Play” tab.
The winning teams and their scores will be announced on our website and at IEEE CoG.

Organizers

We are a team of researchers, game designers, developers, and data scientists with a shared enthusiasm for human-AI teaming research and game development. We develop games that are not only fun to play but can also be used to study critical human-AI teaming questions. Six members of our team organized the 2024 Dice Adventure Competition at CoG. Below are short bios for each team member.

  • Qiao Zhang

    Qiao is a PhD student in Computer Science at Georgia Institute of Technology. She holds a Master’s degree in Statistics from the University of Pennsylvania. Using multiple games as environments, her research focuses on understanding human-AI teaming dynamics in collaborative games from the perspectives of communication, coordination, and adaptation. Qiao led the organization of the Dice Adventure Competition in 2024.


  • Glen Smith

    Glen is a PhD student at Georgia Institute of Technology pursuing a degree in Computer Science with a focus in Intelligent Systems. He holds a Master’s degree in Data Science from City, University of London and has worked in industry as a Department of Defense contractor. His current research focuses on designing teachable agents that learn tasks much like people do. His work has wide-reaching applications and has been explored in the space of intelligent tutoring systems and human-machine teaming in video games.


  • Ziyu Li

    Ziyu is a Research Scientist at Carnegie Mellon University. He is a game designer and developer with extensive experience in game prototyping and systems design. Ziyu explores design paradigms and game mechanics in multi-player cooperative games that can be used to test AI algorithms.


  • Varun Girdhar

    Varun holds a Master’s degree in Entertainment Technology from Carnegie Mellon University. He is a producer with a background in software development and is skilled in managing cross-functional teams to deliver immersive, high-quality experiences.


  • Avery Gong

    Avery is pursuing an MSCS degree with a specialization in Machine Learning, focusing on the intersection of AI and human-computer interaction at Georgia Institute of Technology. She is passionate about developing innovative technologies that enhance human experiences and has worked on projects like the human-robot interaction of robotic guide dogs, exploring AI’s potential to improve accessibility and navigation.


  • Shreyas Ravishanker

    Shreyas is a Master’s student pursuing Computer Science with a specialization in Machine Learning. He is currently working with the Teachable AI Lab at Georgia Tech to develop agents that can be taught game knowledge by humans to play Dice Adventure. Prior to starting graduate school, he worked in a machine learning role at PNC Bank.


  • Dr. Erik Harpstead

    Dr. Erik Harpstead is a Senior Systems Scientist in the Human-Computer Interaction Institute at Carnegie Mellon University. His research focuses on the use of data in and around games to understand human learning and decision making. He currently leads a team at CMU developing a set of games to serve as testbeds for studying human-AI teaming. One of these games was the focus of a competition at CoG 2024.


  • Dr. Christopher J. MacLellan

    Dr. MacLellan is an Assistant Professor in the School of Interactive Computing at Georgia Institute of Technology and an expert in human-AI teaming. His research focuses on designing AI agents that can teach, learn, and collaborate with people. He has organized numerous symposia, hackathons, and competitions on the topic of human-AI teaming.

