Monthly Archives: September 2017

CASE 2017 – Special Session on Model-Based Testing of Hybrid Systems

At the 13th IEEE Conference on Automation Science and Engineering (CASE 2017), August 20-23, 2017, Wyndham Grand Hotel, Xi’an, China, two SyTeC colleagues, Knut Åkesson and Martin Fabian, organized and chaired an invited special session on Model-Based Testing of Hybrid Systems. The presented papers were:

  • Simulation of Hybrid Systems from Natural Language Specifications
    Oliveira, Bruno. Univ. Federal De Pernambuco
    Carvalho, Gustavo. Univ. Federal De Pernambuco
    Mousavi, Mohammad Reza. Halmstad Univ
    Sampaio, Augusto. Univ. Federal De Pernambuco
  • Objective Functions for Falsification of Signal Temporal Logic Properties in Cyber-Physical Systems
    Eddeland, Johan. Volvo Car Corp
    Miremadi, Sajed. Volvo Car Corp
    Fabian, Martin. Chalmers Univ. of Tech
    Åkesson, Knut. Chalmers Univ. of Tech
  • Vacuity Aware Falsification for MTL Request-Response Specifications
    Dokhanchi, Adel. Arizona State Univ
    Yaghoubi, Shakiba. Arizona State Univ
    Hoxha, Bardh. Southern Illinois Univ
    Fainekos, Georgios. Arizona State Univ
  • Temporal Specification Testing of Hybrid Systems
    Dang, Thao. CNRS/VERIMAG
    Dreossi, Tommaso. Univ. of California, Berkeley

Conference website

Volvo – 1st Workshop on Automatic Generation of Test Cases for Mechatronic Systems

Workshop: September 5, 13:00-16:00, Volvo Car Corporation

With the rapidly increasing complexity of mechatronic systems in the automotive industry, it is very hard to reach a sufficient level of test coverage during software testing in the different test environments. In the near future, it will be almost impossible to rely only on manually generated test cases.

In this workshop, we will present the latest results that have been achieved by integrating new tools and methodologies into the software development chain at Vehicle Propulsion (electric drive, traction battery, charging, and combustion engine), in an attempt to assist test engineers by automatically generating test cases. We will also talk about the next steps in this area. Our ambition is to have a highly interactive session with plenty of questions and discussion. We hope that participants from many different areas will join, in order to find synergies, exchange information, and together drive this area forward.

Relevant target groups include, but are not limited to, software testers, software developers, requirement engineers, technical leaders, and managers.

Angela Wallenburg – Safe and Secure Programming Using SPARK

Seminar September 1, 12:15, EA

Speaker: Angela Wallenburg, Altran, UK

Sometimes software really has to work. SPARK is a programming language, a verification toolset and a design method which, taken together, ensure that ultra-low defect software can be deployed. By design, the SPARK language is immune to many programming language vulnerabilities (such as those listed by NIST/CWE/SANS). Under the hood, the SPARK static analysis tools use formal verification (automatic mathematical proof).

Formal verification is rarely used in industry due to its high cost and the level of skill required. Over the past 25 years, however, SPARK has been applied worldwide in a range of industrial applications. A few examples include the Rolls-Royce Trent (engine control), the Eurofighter Typhoon (military aircraft), and NATS iFACTS (air traffic control). More recently, SPARK has been credited with the success of the Vermont Tech CubeSat, the only one of the twelve mini-satellites in NASA’s 2013 ELaNa launch that remained operational until reentry.

During this talk, you will learn about the rationale behind SPARK. We will cover topics such as strong typing, unambiguous semantics, modular verification, contracts, the verifying compiler, scalability, powerful static analysis, the combination of test and proof, and cost-competitive development of software to regulations (such as DO-178C).
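In SPARK, contracts are preconditions and postconditions attached to subprograms and discharged by automatic proof. As a language-neutral illustration of the same idea (this is not SPARK/Ada syntax, and the function is a made-up example), a contracted saturating subtraction might be sketched in Python with run-time assertions standing in for proved contracts:

```python
# Illustrative design-by-contract sketch (not SPARK syntax).
# SPARK states Pre/Post as aspects and proves them statically;
# here the contracts are merely checked at run time.

def saturating_sub(x: int, y: int) -> int:
    """Subtract y from x, clamping the result at zero."""
    # Precondition: both inputs are non-negative.
    assert x >= 0 and y >= 0, "precondition violated"
    result = x - y if x >= y else 0
    # Postcondition: the result is non-negative and never exceeds x.
    assert 0 <= result <= x, "postcondition violated"
    return result

print(saturating_sub(9, 3))  # -> 6
print(saturating_sub(5, 7))  # -> 0
```

The point of the static approach is that a verifying compiler shows these assertions can never fail, so the checks cost nothing at run time.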

(original announcement)

Jan Tretmans – Model-Based Testing: There is Nothing More Practical than a Good Theory

Seminar: 25 April, 2017, 13:15, EF

Speaker: Jan Tretmans
TNO – Embedded Systems Innovation, Eindhoven, NL,
Radboud University, Nijmegen, NL,
Halmstad University, Halmstad, SE.

We build ever larger and more complex software systems. Systematic testing plays an important role in assessing the quality of such systems. The effort for testing, however, turns out to grow even faster than the size and complexity of the systems under test themselves. One of the promising testing technologies to detect more bugs faster and cheaper is model-based testing.

Model-based testing starts with an abstract model of the system’s behaviour. This model constitutes a precise and concise specification of what the system shall do, and, consequently, is a good basis for the algorithmic generation of test cases and the analysis of test results. Model-based testing enables the next step in test automation by combining automatic test generation with test execution and providing more, longer, and more diversified test cases with less effort.
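The idea of generating test cases algorithmically from a behavioural model can be sketched very compactly. The following is a hedged illustration only (it is not TorXakis, and the turnstile model and all names are invented for the example): the model is a small Mealy machine, tests are random walks over it, and the implementation under test is judged by comparing its outputs against the model's expected outputs.

```python
import random

# Model of a turnstile: state -> input -> (next_state, expected_output)
MODEL = {
    "locked":   {"coin": ("unlocked", "unlock"), "push": ("locked", "alarm")},
    "unlocked": {"coin": ("unlocked", "refund"), "push": ("locked", "lock")},
}

class Turnstile:
    """Implementation under test (here, deliberately conforming)."""
    def __init__(self):
        self.locked = True
    def step(self, inp):
        if inp == "coin":
            if self.locked:
                self.locked = False
                return "unlock"
            return "refund"
        # push
        if self.locked:
            return "alarm"
        self.locked = True
        return "lock"

def generate_test(model, length, rng):
    """Random walk over the model: returns (inputs, expected outputs)."""
    state, inputs, expected = "locked", [], []
    for _ in range(length):
        inp = rng.choice(sorted(model[state]))
        state, out = model[state][inp]
        inputs.append(inp)
        expected.append(out)
    return inputs, expected

def run_test(sut, inputs, expected):
    """Execute the generated test and check conformance step by step."""
    return all(sut.step(i) == e for i, e in zip(inputs, expected))

rng = random.Random(0)
inputs, expected = generate_test(MODEL, 20, rng)
print("PASS" if run_test(Turnstile(), inputs, expected) else "FAIL")
```

Real model-based testing tools replace the random walk with guided or exhaustive generation and handle nondeterminism, but the division of labour is the same: the model supplies both the stimuli and the verdict.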

The presentation aims at covering the chain from theoretical concepts, via algorithms and tools, to industrial applications of model-based testing. The starting point is the ‘ioco’ testing theory for labelled transition systems, to which concepts from process algebra, the theory of testing equivalences, symbolic transition systems, algebraic data types, satisfiability-modulo-theories tools, and equational reasoning are added. We show how these theories have led to the development of the model-based testing tool ‘TorXakis’.

On the one hand, TorXakis provides provably sound and exhaustive (in the limit) test generation from models. These models combine state-based control flow with complex data definitions; they deal with uncertainty through nondeterminism, support compositionality by providing combinators for alternative, concurrent, sequential, exceptional, and interrupting behaviours, and allow abstraction and under-specification. On the other hand, TorXakis has shown practical usability in academia, both in research and in education, as well as in industrial applications, ranging from smart-card software to large systems-of-systems applications. So, for model-based testing, there is nothing more practical than a good theory.

Original announcement