Declarative process models as explainable and verifiable Artificial Intelligence

Research output: Book/Report › Ph.D. thesis › Research

As Artificial Intelligence-based decision support systems become an ever-increasing part of our society, regulators have found it necessary to devise laws and guidelines on the ethical use of AI. In this light, explainable AI has regained attention among researchers and businesses. Explainable AI has been with us for decades in the form of rule-based systems, where high-level declarative rules are often implemented as more imperative or procedural behaviour-based code. Process mining has emerged as a prominent AI field whose goal is to use process models to describe real-world business processes. The “white-box” qualities that make process models ideal for describing business processes also make them prime candidates for use in decision support systems.

With the goal of enhancing model explainability and expressiveness, this thesis addresses selected aspects of the process mining pipeline, from data extraction to model verification, focusing on the trustworthiness, understandability, and verifiability of declarative process models. The objective is to advance the process mining field towards providing explainable Artificial Intelligence-based decision support systems.

To improve trustworthiness, we propose a participatory design-based method for event log extraction that improves collaboration among stakeholders by extracting event logs that accurately and transparently capture business rules. To improve understandability, we propose new algorithms that discover simpler, more understandable declarative process models by taking the user's cognitive load into account. To enhance verifiability, we propose two approaches tailored to the Dynamic Condition Response (DCR) graph declarative process notation. The first is a bisimilar mapping from DCR graphs to Petri nets that preserves the declarative behaviour, allowing us to take advantage of the mature model checking tools available for Petri nets. The second allows for the verification of selected trace behaviour via test-driven modelling: specifically, we check that past modelling decisions are not violated as declarative models evolve over time.
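To make the verification idea concrete, the sketch below is a deliberately simplified DCR graph interpreter in Python; the event names, class layout, and the `accepts` helper are illustrative assumptions, not artefacts of the thesis. An event is enabled when it is included, all of its included conditions have been executed, and none of its included milestones are pending; a marking is accepting when no included event has an unfulfilled response. Test-driven modelling then reduces to replaying previously agreed traces against each new version of the model.

```python
import copy
from dataclasses import dataclass, field

@dataclass
class DCRGraph:
    """Toy DCR graph: events, the five core relations, and a marking."""
    events: set[str]
    conditions: dict[str, set[str]] = field(default_factory=dict)  # e -> events that must precede e
    responses: dict[str, set[str]] = field(default_factory=dict)   # e -> events made pending by e
    includes: dict[str, set[str]] = field(default_factory=dict)    # e -> events (re)included by e
    excludes: dict[str, set[str]] = field(default_factory=dict)    # e -> events excluded by e
    milestones: dict[str, set[str]] = field(default_factory=dict)  # e -> events that must not be pending
    executed: set[str] = field(default_factory=set)
    pending: set[str] = field(default_factory=set)
    included: set[str] = field(default_factory=set)

    def enabled(self, e: str) -> bool:
        if e not in self.included:
            return False
        # every *included* condition of e must already have been executed
        if self.conditions.get(e, set()) & (self.included - self.executed):
            return False
        # no *included* milestone of e may still be pending
        if self.milestones.get(e, set()) & self.included & self.pending:
            return False
        return True

    def execute(self, e: str) -> None:
        assert self.enabled(e), f"event {e!r} is not enabled"
        self.executed.add(e)
        self.pending.discard(e)
        self.pending |= self.responses.get(e, set())
        self.included -= self.excludes.get(e, set())
        self.included |= self.includes.get(e, set())

    def accepting(self) -> bool:
        # a marking is accepting when no included event awaits a response
        return not (self.pending & self.included)

def accepts(g: DCRGraph, trace: list[str]) -> bool:
    """Replay a trace on a copy of g; True iff every step is enabled
    and the final marking is accepting."""
    g = copy.deepcopy(g)
    for e in trace:
        if not g.enabled(e):
            return False
        g.execute(e)
    return g.accepting()

# Test-driven modelling in miniature: regression traces recorded from past
# modelling decisions are re-checked whenever the model evolves.
required = [["receive", "approve", "pay"]]
forbidden = [["pay"]]
model = DCRGraph(
    events={"receive", "approve", "pay"},
    conditions={"approve": {"receive"}, "pay": {"approve"}},
    responses={"receive": {"approve"}},
    included={"receive", "approve", "pay"},
)
assert all(accepts(model, t) for t in required)
assert not any(accepts(model, t) for t in forbidden)
```

If a later edit to `model` made a forbidden trace accepted, or a required one rejected, the replay would flag it, which is the essence of checking that past modelling decisions survive model evolution.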

Finally, to enhance the expressiveness of mined declarative models, we propose a timed extension of the declarative DisCoveR miner and show that verifiability is preserved by extending the bisimilar mapping from timed DCR graphs to a timed variant of Petri nets.
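As a rough intuition for the timed extension (again an illustrative sketch under assumed names, not the thesis's formal semantics), conditions can be annotated with a delay, so the source event must have fired sufficiently long ago, and responses with a deadline, before which the pending event must fire; a bisimilar mapping must carry these clocks over to the timing constructs of the timed Petri net variant.

```python
from dataclasses import dataclass, field

@dataclass
class TimedMarking:
    """Hypothetical timed state layered on top of the DCR marking."""
    now: float = 0.0
    last_executed: dict[str, float] = field(default_factory=dict)  # event -> last firing time
    deadlines: dict[str, float] = field(default_factory=dict)      # pending event -> latest allowed time

def condition_met(m: TimedMarking, source: str, delay: float) -> bool:
    # a timed condition holds once `source` fired at least `delay` time units ago
    t = m.last_executed.get(source)
    return t is not None and m.now - t >= delay

def advance(m: TimedMarking, dt: float) -> bool:
    # time may only advance if no response deadline would be overshot
    if any(m.now + dt > d for d in m.deadlines.values()):
        return False
    m.now += dt
    return True
```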
Original language: English
Publisher: Department of Computer Science, Faculty of Science, University of Copenhagen
Number of pages: 204
Publication status: Published - 2024
