Authors
Anne Rozinat, A.K. Alves de Medeiros, Christian W. Günther, Anton J.M.M. Weijters, Wil M.P. van der Aalst
Publication date
2007
Publisher
Technische Universiteit Eindhoven
Description
Although there has been a lot of progress in developing process mining algorithms in recent years, no effort has been put into developing a common means of assessing the quality of the models discovered by these algorithms. In this paper, we outline elements of an evaluation framework that is intended to enable (a) process mining researchers to compare the performance of their algorithms, and (b) end users to evaluate the validity of their process mining results. Furthermore, we describe two possible approaches to evaluate a discovered model: (i) using existing comparison metrics that have been developed by the process mining research community, and (ii) using the so-called k-fold cross-validation known from the machine learning community. To illustrate the application of these two approaches, we compare a set of models discovered by different algorithms from a simple example log.
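The k-fold cross-validation approach mentioned in the abstract can be sketched roughly as follows: the traces of an event log are partitioned into k folds, a model is discovered from k-1 folds, and its quality is measured against the held-out fold. The Python sketch below is only an illustration of that idea, not the paper's implementation; `discover_model` and `replay_fitness` are hypothetical placeholders standing in for a process discovery algorithm and a log-model conformance metric.

```python
import random

def k_fold_cross_validation(traces, k, discover_model, replay_fitness, seed=42):
    """Sketch: estimate the quality of a discovery algorithm via k-fold
    cross-validation over the traces of an event log.

    traces          -- list of traces (each trace is a tuple of activity names)
    discover_model  -- placeholder: callable mapping a training log to a model
    replay_fitness  -- placeholder: callable scoring a model against a test log
    """
    shuffled = list(traces)
    random.Random(seed).shuffle(shuffled)
    folds = [shuffled[i::k] for i in range(k)]  # k roughly equal partitions

    scores = []
    for i in range(k):
        test_log = folds[i]
        train_log = [t for j, fold in enumerate(folds) if j != i for t in fold]
        model = discover_model(train_log)                 # mine a model on k-1 folds
        scores.append(replay_fitness(model, test_log))    # evaluate on the held-out fold
    return sum(scores) / k                                # average score across folds


if __name__ == "__main__":
    # Toy log and trivial placeholder implementations for demonstration only:
    log = [("a", "b", "c"), ("a", "c", "b"), ("a", "b", "c", "d")] * 4
    discover = lambda train: set(train)                   # "model" = set of observed traces
    fitness = lambda model, test: sum(t in model for t in test) / max(len(test), 1)
    print(k_fold_cross_validation(log, k=3, discover_model=discover, replay_fitness=fitness))
```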
Total citations
[Per-year citation chart, 2008–2024; individual counts not recoverable]