SIGS DATACOM – Technical Information for IT Professionals

German Testing Day 2020

The Independent Conference on Software Quality
Frankfurt am Main, 01. - 02. September 2020


Talk: GTD 3.3
Date: Fri, 08.06.2018
Time: 12:05 - 12:40

Testing languages, generators and runtimes for a safety-critical system

The use of domain-specific languages and custom code generators is often seen as incompatible with the requirements of software development in safety-critical contexts, mainly because DSLs and custom generators are by definition not proven-in-use, and formally proving them correct is practically hopeless. In this talk I present an architecture that uses systematic testing and redundant execution to work around this problem. We validate the approach with a case study from the medical domain, where it was used successfully to develop mobile phone apps for the prescription of medication and the monitoring of side effects in chemotherapies. The case study relies on JetBrains MPS, but the approach works with any DSL/generator technology.

Target Audience: Testers and developers, particularly in safety-critical domains
Prerequisites: Basic understanding of modeling and code generation
Level: Advanced
You will learn:
* The risks involved in DSLs and code generation
* How to systematically test languages, generators and interpreters
* The role of advanced testing concepts, such as mutation testing
* The interplay between testing and architecture, in particular, redundant execution
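To make the redundant-execution idea concrete, here is a minimal, hypothetical sketch (not the talk's actual implementation): a tiny arithmetic DSL is executed twice, once by a reference interpreter that walks the model and once via code emitted by a generator, and a result is accepted only when both paths agree. All names (`Num`, `Add`, `interpret`, `generate`, `run_redundantly`) are illustrative assumptions.

```python
# Illustrative sketch of redundant execution for a tiny arithmetic DSL.
# Names and structure are assumptions, not the architecture from the talk.
from dataclasses import dataclass

@dataclass
class Num:
    value: float

@dataclass
class Add:
    left: object
    right: object

def interpret(node):
    """Reference interpreter: evaluates the model directly."""
    if isinstance(node, Num):
        return node.value
    if isinstance(node, Add):
        return interpret(node.left) + interpret(node.right)
    raise TypeError(node)

def generate(node):
    """'Generator': emits target-language (here: Python) source from the model."""
    if isinstance(node, Num):
        return repr(node.value)
    if isinstance(node, Add):
        return f"({generate(node.left)} + {generate(node.right)})"
    raise TypeError(node)

def run_redundantly(model):
    """Run both execution paths and accept a result only if they agree."""
    interpreted = interpret(model)
    generated = eval(generate(model))  # stands in for compiled generated code
    if interpreted != generated:
        raise RuntimeError("divergence between interpreter and generated code")
    return interpreted

model = Add(Num(2.0), Add(Num(3.0), Num(4.0)))
print(run_redundantly(model))  # 9.0
```

A divergence between the two paths signals a defect in the generator, the interpreter, or the runtime, which is exactly the risk the redundant architecture is meant to mitigate.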

Extended Abstract:

Language workbenches allow developers to create, integrate and efficiently use domain-specific languages, typically by generating programming language code from models expressed with domain-specific languages. This can lead to increased productivity and higher quality. However, in safety-/mission-critical environments, such generated code may not be considered trustworthy, because of the lack of trust in the mechanisms used to generate the code. This makes it harder to justify the use of language workbenches in such an environment. In this talk we demonstrate an approach to using such tools in critical environments. We argue that models created with domain-specific languages are easier to validate, and that the additional risk resulting from the transformation to code can be mitigated by a suitably designed transformation and verification architecture. We validate the approach with an industrial case study from the healthcare domain.