Anders Drachen

Anders Drachen, Ph.D. is a veteran Data Scientist, Game Analytics consultant and Professor at the DC Labs, University of York (UK).

How do we work with gameplay metrics at the fundamental level? In the first part of this post the concepts of analysis (“breaking down things”) and synthesis (“putting things together”) were introduced as two fundamental approaches used here at Game Analytics to help categorize and define workflows. In the second part, we dig deeper and look at the factors driving the work.

Both synthesis and analysis can be applied to explorative work (where we look for patterns in data) or hypothesis-driven work (where we have an idea what the answer is and need to confirm or reject the idea).

Explorative metrics work is needed when the possible answers cannot be predicted, or are hard to predict, from looking at the game design. For example, finding which of 2,000 virtual items are the most important drivers for converting non-paying users to paying users in a social online game, or the most effective build order in an RTS like StarCraft. A common data-driven method for explorative research is drill-down analysis, where you examine the gameplay metrics data at progressively more detailed levels until an answer is found.
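To make drill-down analysis concrete, here is a minimal sketch in Python using entirely made-up data: first we aggregate conversion rates at a coarse level (item category), then drill into the best-performing category at the individual-item level. The event records, categories and item names are all hypothetical, purely for illustration.

```python
from collections import defaultdict

# Hypothetical purchase-event records: (user_id, item_category, item_id, converted)
# "converted" marks whether the user later became a paying user.
events = [
    ("u1", "weapons", "sword_01", True),
    ("u2", "weapons", "sword_01", True),
    ("u3", "weapons", "axe_07", False),
    ("u4", "cosmetics", "hat_03", False),
    ("u5", "cosmetics", "hat_03", True),
    ("u6", "cosmetics", "cape_12", False),
]

def conversion_by(records, key_index):
    """Aggregate conversion rate at the chosen level of detail."""
    totals, converts = defaultdict(int), defaultdict(int)
    for row in records:
        totals[row[key_index]] += 1
        converts[row[key_index]] += row[3]
    return {k: converts[k] / totals[k] for k in totals}

# Level 1: drill by category.
by_category = conversion_by(events, 1)
# Level 2: drill into the best category at the item level.
best = max(by_category, key=by_category.get)
by_item = conversion_by([r for r in events if r[1] == best], 2)
```

In a real pipeline each level would be a query against the telemetry database rather than an in-memory loop, but the pattern is the same: aggregate, inspect, narrow, repeat.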
Hypothesis-driven metrics work is when we are looking to confirm conclusions or ideas, or when we can predict the answer. For example, we may suspect that Zombies are far too powerful on level 10, and perform a metrics analysis to confirm this suspicion, finding that we are either right or wrong in our hypothesis (wrong, in the case of the only zombie in the level being legless and suffering from bad eyesight).

Alternatively, we could have a hypothesis stating that the number of player deaths on a certain map correlates with the perceived difficulty level of the map. Checking metrics data on player death events against feedback from research study participants can either confirm or reject the hypothesis, possibly leading to the formulation of a new one.
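A simple way to check such a hypothesis is to compute the correlation between the death metric and the survey ratings. The sketch below uses the standard Pearson correlation coefficient on fabricated per-map numbers (both samples are invented for illustration); a coefficient near 1 would support the hypothesis, while one near zero would reject it.

```python
from math import sqrt

# Hypothetical per-map data for six maps:
deaths = [12, 45, 30, 80, 22, 65]           # average player deaths per map
ratings = [2.1, 6.0, 4.5, 8.8, 3.0, 7.4]    # mean perceived difficulty (1-10 survey)

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(deaths, ratings)
```

With real data you would also want a significance test and a much larger sample of maps and participants before drawing conclusions, but this is the core of the confirm-or-reject step.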

A commonly applied method in game data mining for answering these kinds of questions is prediction analysis – the application of specific algorithms to predict something, e.g. which users will convert to paying users, or when a person will stop playing (more on prediction analysis in future posts).
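As a toy illustration of prediction analysis, here is about the simplest possible predictor: a one-feature "decision stump" that learns a cutoff on early session counts to predict whether a user will convert to a payer. The feature, the training records and the decision rule are all hypothetical; real prediction work would use many features and a proper classifier, but the shape of the workflow (fit on historical data, then predict for new users) is the same.

```python
# Hypothetical training data: (sessions_in_first_week, converted_to_payer)
history = [(1, False), (2, False), (3, False), (5, True),
           (6, True), (8, True), (2, False), (7, True)]

def best_threshold(data):
    """Pick the session-count cutoff that best separates payers
    from non-payers on the training data (a decision stump)."""
    candidates = sorted({sessions for sessions, _ in data})

    def accuracy(t):
        return sum((sessions >= t) == paid for sessions, paid in data) / len(data)

    return max(candidates, key=accuracy)

cutoff = best_threshold(history)

def predict_converts(sessions):
    """Predict conversion for a new user from their first-week sessions."""
    return sessions >= cutoff
```

A production model would also be validated on held-out data and monitored over time, since player behaviour shifts as the game changes.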

In practice, as soon as you move outside the kind of questions that can be answered with synthesis, a quick analysis or standard algorithms – e.g. "what is the number of active users today?" or "what is the average playtime for level 22?" – you often end up mixing hypothesis-driven and explorative work.

In the experience of Alessandro, Janus and me, explorative questions are usually more time-consuming to answer and more often require analysis than hypothesis-driven, specific questions, which can more often be handled using synthesis (or very simple statistical analysis) of the relevant data.

Purely explorative questions are in our experience rare – a game developer usually does not have the luxury of throwing a dataset at some people and tasking them with seeing what interesting stuff they can find. This is not to say that purely explorative analysis of gameplay metrics data cannot be useful, but it is a kind of blue-sky research that companies often have a hard time justifying the expenditure for.

Hypothesis, explorative, synthesis, analysis … why do we care about these terms? 

The reason is that the fundamental ways we can approach gameplay metrics analysis are ordered according to these terms. They provide us with a means for classifying methods, and a terminology to use when discussing gameplay metrics work, something that is crucial for maturing analytics practices. Finally, this kind of structure provides guidance on planning how to answer particular problems – e.g. considering whether a problem is best solved analytically or using simple synthesis, whether we already have an idea about what the answer is and should test it, or not – and so forth.

Acknowledgements: The ideas and many of the examples in this post stem from a text that Alessandro Canossa from the IT University of Copenhagen; Janus Rau Sørensen, lead game user researcher at Square Enix; and I wrote for an upcoming book on game telemetry.
