Authors
Sebastian Berns, Vanessa Volz, Laurissa Tokarchuk, Sam Snodgrass, Christian Guckelsberger
Publication date
2024
Conference
Conference on Human Factors in Computing Systems (CHI)
Publisher
ACM
Description
Similarity estimation is essential for many game AI applications, from the procedural generation of distinct assets to automated exploration with game-playing agents. While similarity metrics often substitute for human evaluation, their alignment with our judgement is unclear. Consequently, their results can fall short of human expectations, leading, e.g., to unappreciated content or unbelievable agent behaviour. We address this gap through a multi-factorial study of two tile-based games in two representations, in which participants (N=456) judged the similarity of level triplets. Based on this data, we construct domain-specific perceptual spaces that encode similarity-relevant attributes. We compare 12 metrics to these spaces and evaluate their approximation quality through several quantitative lenses. Moreover, we conduct a qualitative labelling study to identify the features underlying the human similarity judgement …
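As a rough illustration of the kind of evaluation the abstract describes, the sketch below checks how often a candidate similarity metric agrees with human triplet judgements over tile-based levels. It is a minimal, hypothetical example: the tile-grid representation, the Hamming-style metric, and the toy triplet data are assumptions for illustration and do not reproduce the paper's metrics, data, or perceptual-space construction.

```python
import numpy as np

def hamming_distance(level_a: np.ndarray, level_b: np.ndarray) -> float:
    """Fraction of tile positions that differ between two same-size tile grids."""
    return float(np.mean(level_a != level_b))

def triplet_agreement(metric, triplets, levels) -> float:
    """Share of human triplet judgements (anchor, chosen, rejected) that the
    metric reproduces, i.e. where d(anchor, chosen) < d(anchor, rejected)."""
    hits = 0
    for anchor, chosen, rejected in triplets:
        if metric(levels[anchor], levels[chosen]) < metric(levels[anchor], levels[rejected]):
            hits += 1
    return hits / len(triplets)

# Toy example: three 4x4 levels with made-up tile ids and one triplet judgement
# in which participants found "b" more similar to the anchor "a" than "c".
levels = {
    "a": np.zeros((4, 4), dtype=int),
    "b": np.eye(4, dtype=int),
    "c": np.ones((4, 4), dtype=int),
}
triplets = [("a", "b", "c")]
print(triplet_agreement(hamming_distance, triplets, levels))  # 1.0 on this toy data
```

A higher agreement score would indicate that the metric orders level pairs more like the human participants do; the paper's own analysis uses richer representations and multiple quantitative lenses rather than this single statistic.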