Hanabi: DeepMind’s Answer to Theory of Mind in AI

Last week I retweeted an article about DeepMind and its new project to teach an AI to play a card game called Hanabi, which requires theory of mind and more sophisticated reasoning than most other card games. I wanted to bring it up in the Twitter discussion, but we decided to table the conversation on AI until this week. I couldn’t get this topic out of my head, though, so I decided to look into it further.

DeepMind and Hanabi

As we know, Google acquired DeepMind in 2014, and DeepMind focuses on advancing artificial intelligence to solve complex problems on its own. One of its recent ventures is getting its AI to play Hanabi, a cooperative game in which each player can see everyone’s hand except their own. The AI must give effective hints to help the other players succeed, while simultaneously converting the other players’ hints into useful information about its own cards. DeepMind hopes this will improve its AI’s ability to cooperate with humans, which is what the entire game depends on. The core skill the game demands, theory of mind, means comprehending others’ mental states and understanding that they differ from our own. This quality is obviously crucial to daily human interaction, and if AI can adopt it, it could change the quality of human-AI interaction tremendously.
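To make that hint mechanic concrete, here’s a minimal toy sketch in plain Python. This is my own illustration, not DeepMind’s actual code: a hinting player who can see a teammate’s hand reveals the positions of every card matching one color or one rank.

```python
from collections import namedtuple

# Toy model of a Hanabi card: a color and a rank (1-5).
# This is an illustrative sketch, not DeepMind's Hanabi Learning Environment.
Card = namedtuple("Card", ["color", "rank"])

def give_hint(hand, attribute, value):
    """Return the positions in `hand` matching the hinted attribute.

    In Hanabi, a hint must name a color or rank actually present in the
    teammate's hand, and it reveals *every* card that matches.
    """
    positions = [i for i, card in enumerate(hand)
                 if getattr(card, attribute) == value]
    if not positions:
        raise ValueError("A hint must refer to at least one card in the hand")
    return positions

# The hinting player sees the teammate's hand; the teammate does not.
teammate_hand = [Card("red", 1), Card("blue", 3), Card("red", 4)]

print(give_hint(teammate_hand, "color", "red"))  # -> [0, 2]
print(give_hint(teammate_hand, "rank", 3))       # -> [1]
```

The teammate only learns which positions match the hint, not the full cards, which is why inferring what a hint *means* requires reasoning about what the hinter knows and intends.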

DeepMind’s plan to advance this mission is to have the community actively participate in an open-source Hanabi environment. This approach isn’t new to DeepMind; it’s a common method in line with their “culture of collaboration and shared progress” (deepmind.com). DeepMind regularly releases open-source code, environments, and data sets to bring in the community and further its work. In this case, DeepMind released the Hanabi Learning Environment, in which people can write agents that achieve a high score in self-play, and can “test and train” (Wiggers) AI players to cooperate with both other AI agents and humans.

Hanabi Learning Environment platform for the community to contribute to.

Why Theory of Mind?

Theory of mind, which Hanabi can help AI develop, is crucial to bridging the gap in AI-human interaction, and it appears to be the next step for AI. Children develop theory of mind at approximately age four, and it remains crucial to our social interactions for the rest of our lives. It could give AI a kind of “common sense” that helps it understand people’s needs. We saw in our readings for the week that a large struggle with machine learning is the “black box” problem: we cannot effectively communicate a lot of what we know, because much of our knowledge is tacit. If we cannot put much of our knowledge into words, how can we code a robot to know the same things? I think this is where theory of mind can make a difference. Programmers wouldn’t necessarily need to tell the robot what to do; it would have the “mental” capacity to understand a human’s needs and respond appropriately.


DeepMind’s strategy for developing theory of mind includes a system of neural networks called ToMnet. The three separate networks have different functions: the first follows and learns from an agent’s past tendencies, the second builds a picture of the agent’s current mental state, including its beliefs, and the third predicts the agent’s actions based on the outputs of the other two. Here you can see how the AI learns from observation: by watching and reflecting, it can adapt and pick up new skills. However, this isn’t a fully formed theory of mind. Hopefully, in the process of learning to play Hanabi, the AI can develop a stronger understanding of others’ mindsets and how they differ from its own, creating a more complete theory of mind.
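As a rough illustration of that three-part structure, here is a toy sketch in plain Python. The real ToMnet components are trained neural networks; these made-up “nets” are just simple functions showing how a long-term summary and a current-episode summary can feed a prediction step.

```python
from collections import Counter

# Toy stand-ins for ToMnet's three components (not the real trained models).

def character_net(past_episodes):
    """Summarize an agent's long-term tendencies from its past episodes."""
    return Counter(action for episode in past_episodes for action in episode)

def mental_state_net(current_trajectory):
    """Summarize what the agent has done so far in the current episode."""
    return Counter(current_trajectory)

def prediction_net(character, mental_state):
    """Predict the next action by combining both summaries."""
    combined = character + mental_state
    return combined.most_common(1)[0][0]

past = [["left", "left", "grab"], ["left", "up", "grab"]]
now = ["left"]
print(prediction_net(character_net(past), mental_state_net(now)))  # -> left
```

The point of the sketch is the division of labor: one component captures who the agent is over many episodes, another captures what it is doing right now, and the predictor consumes both.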

As I mentioned above, ToMnet is DeepMind’s theory-of-mind AI, which has learned a great deal from observation but needs to keep improving. In one experiment, it watched three characters with different capabilities move around a room grabbing colored boxes to earn points. The blind character tended to stay near the walls, while the character that could not remember past runs went to the closest box. Finally, the character that could both see and remember past runs developed a strategy to earn the most points, and it improved with every run. ToMnet improved to the point that it could differentiate the characters and predict each one’s future moves. The AI is improving without explicit human programming, but it is still lacking in the area of theory of mind.

An important thing to remember is that this example covers how AI learns to read the “minds” of other computers, not humans. However, these characters simulate what a person with the corresponding traits would likely do. The missing piece is understanding human mindsets: how they differ from one another and from the AI’s own. Although these advancements seem to be happening quickly, I think there is still a big step left to create this understanding on a human level.

ToMnet’s past experiments

I cannot believe our trip is only two weeks away, and I cannot wait! This research has definitely opened up new questions for me to investigate when we visit Google. See you all Wednesday!


7 thoughts on “Hanabi: DeepMind’s Answer to Theory of Mind in AI”

  1. Really interesting; it’s hard to imagine how many applications an AI that can make use of theory of mind would have. I imagine its ability to understand people’s needs would make it very valuable for marketing applications, but I’m sure there are a lot more that I’m not thinking of. Theory of mind definitely seems like the next big step in AI development and should be interesting to follow!


  2. Love this post Maddie! Examples like this show that the theoretical talks we have about AI may not be as far-fetched as we believe. Also, you mentioned the parallels between AI development and the stages of child development; I think this comparison is becoming more and more common as researchers try to discover the next steps forward. Maybe Google has some child-development experts on retainer?


  3. It’s a scary thought that AI may one day truly capture theory of mind. It would be a huge step for AI, yet I would be curious to see what kinds of privacy concerns come about once robots develop enough to essentially have a mind of their own.


  4. Great post! I thought your point about how AI is learning to read the “minds” of other computers, not humans, was really interesting. I think this shows the power of AI, and whether we like it or not, it is going to be very present in our lives from now on. I wonder how it will play out when these AI machines develop enough to claim they are “smarter” than humans themselves…


  5. Cool post Maddie! Although AI learning theory of mind could pose some problems, I see a lot of potential in AI developing human characteristics, like helping people with anxiety become more comfortable in different situations by interacting with a human-like system. I also like how Google keeps most of these developments open source, even if it’s just to “test and train” them.


  6. Theory of mind isn’t something I had even considered with regard to AI. Great post! It will be interesting to see how we later tease apart performance on card games such as Hanabi and other ToM tasks from the AI actually having a theory of mind. Like you mentioned, the AI is a black box, so it is quite possible that some other component we ourselves cannot think of is allowing the AI to perform well on a ToM task. We are in for some wild times in the future.

