Model Transformation Languages and Performance Engineering


Part 1: Investigating Claimed Advantages and Disadvantages of Model Transformation Languages

In a recent literature review, we systematized the explicit claims made in the reviewed publications about the advantages and disadvantages of model transformation languages compared to general-purpose programming languages. The claims touch on topics such as analysability, comprehensibility, expressiveness, tool support, and performance, but for the most part they remain unsubstantiated.

In this talk we present an excerpt of the results of the literature review, as well as results of two studies in which we explicitly investigated claimed advantages surrounding expressiveness, using the Atlas Transformation Language (ATL) and the general-purpose language Java as examples. We show that, while newer language features in GPLs do help reduce overhead, they are still far from matching the accessibility that dedicated languages offer for developing model transformations. This is due to the abstractions that MTLs bring with them, some of which have their origin in graph transformations.
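To illustrate the kind of comparison the studies make: what ATL expresses as a single declarative mapping rule still requires explicit traversal and object construction in Java, even with newer features such as records and streams. The metamodel elements below (`Person`, `Employee`) are purely hypothetical examples, not taken from the studies themselves.

```java
import java.util.List;
import java.util.stream.Collectors;

public class PersonToEmployee {
    // Hypothetical source and target metamodel elements.
    record Person(String firstName, String lastName) {}
    record Employee(String fullName) {}

    // In ATL this mapping would be one declarative rule; in Java,
    // records and streams shorten the code, but the traversal and
    // target-element creation must still be spelled out by hand.
    static List<Employee> transform(List<Person> persons) {
        return persons.stream()
                .map(p -> new Employee(p.firstName() + " " + p.lastName()))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Employee> out =
                transform(List.of(new Person("Ada", "Lovelace")));
        System.out.println(out.get(0).fullName()); // prints "Ada Lovelace"
    }
}
```

An MTL additionally handles concerns such as trace management and rule scheduling implicitly, which this sketch omits entirely.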

Part 2: Current State of Performance Engineering of Model Transformations

For general-purpose languages, various performance engineering techniques have been established for years. In contrast, such techniques remain largely unexplored for domain-specific languages. In our research, we focus in particular on the performance of model transformation languages, since their performance will gain in importance with the increasing use of models at runtime. The relevance of these concerns is highlighted by the results of our mixed-methods study, which we present in this talk.

Our study investigated how relevant the performance of model transformations is and how developers currently deal with performance issues. Our results show a need for suitable performance analysis techniques, because transformation developers already have expectations about execution time that are not always met. For performance analysis, very experienced transformation developers have developed their own methods, based in particular on their expert knowledge of the transformation engine. However, this makes these methods inaccessible to pure users of a transformation language. Furthermore, we compiled a list of information about a transformation execution that the study participants consider important for analyzing and improving performance.

Friday, October 8, 2021 15:00 Europe/Paris
GReTA seminar

References for Part 1

  • Claimed advantages and disadvantages of (dedicated) model transformation languages: a systematic literature review, Stefan Götz, Matthias Tichy, Raffaela Groner. In: Software and Systems Modeling (SoSyM) 2020

  • Investigating the Origins of Complexity and Expressiveness in ATL Transformations, Stefan Götz, Matthias Tichy. In: Journal of Object Technology, 19:12:1-21

  • Dedicated Model Transformation Languages vs. General-Purpose Languages: A Historical Perspective on ATL vs. Java, Stefan Götz, Matthias Tichy, Timo Kehrer. In: Proceedings of the 9th International Conference on Model-Driven Engineering and Software Development - Volume 1: MODELSWARD'21

References for Part 2

  • A Survey on the Relevance of Performance of Model Transformations, R. Groner, K. Juhnke, S. Höppner, M. Tichy, S. Becker, V. Vijayshree, S. Frank. In: Journal of Object Technology, Volume 20, no. 2 (2021)

  • An Exploratory Study on Performance Engineering in Model Transformations, R. Groner, L. Beaucamp, M. Tichy, S. Becker. In ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS ’20)

  • A Profiler for the Matching Process of Henshin, R. Groner, S. Gylstorff, M. Tichy. In: Proceedings of the 23rd ACM/IEEE International Conference on Model Driven Engineering Languages and Systems: Companion Proceedings (2020)

Stefan Höppner
PhD student

Stefan Höppner is a PhD student at Ulm University. Before that, he studied software engineering at the same university, where he received his M.Sc. degree. His research focuses on the application of empirical methods for the evaluation of model transformation languages.

Raffaela Groner
PhD student

Raffaela Groner is a PhD student at Ulm University, Germany. Prior to that, she studied Computer Science at Ulm University, where she received her M.Sc. in Computer Science in 2017. Her research focuses on the performance of model transformation languages, with a special interest in declarative transformation languages.