With the realisation of the ADMORPH vision, embedded systems will gain the ability to change their behaviour. These systems will learn how to counteract specific threats. A robot may learn that a given path is not traversable and look for alternatives to reach its objective. A radar may use more or less power to detect objects. A controller may learn not to trust sensor data because the data have likely been compromised. However, one hard question to answer is: “how can we test that the software these systems execute behaves in the way we expect?” Even more: “are we really able to determine what we expect?”

Testing software in the presence of learning and adaptation is an extremely complex problem. Should we let the system learn for a while before starting the testing procedure? If it had learned something different, would the outcome be better or worse? Suppose, for example, that we have a camera trying to detect people in video images. Imagine we never feed it an image that contains people. Can we really say the camera has seen enough data to start working the way it is supposed to?

We try to answer some of these questions in our publication “Testing Self-Adaptive Software with Probabilistic Guarantees on Performance Metrics”, which received an ACM SIGSOFT Distinguished Paper Award at the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE) 2020.

In the paper we argue that the testing of adaptive software should switch paradigm, moving from deterministic guarantees to probabilistic ones, and we explain why it is not possible to do otherwise. We use scenario theory to test adaptive systems and obtain probabilistic guarantees on their performance metrics. We apply the theory to two case studies: an adaptive video encoder and a tele-assistance service.
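
To give a flavour of scenario-based testing, here is a minimal sketch in Python. It assumes a hypothetical run_test function that executes the system under test on one randomly drawn scenario and returns a performance metric (for instance, the frame latency of the video encoder). The function names and the basic single-bound scenario formula (1 − ε)^N ≤ β are illustrative assumptions for this sketch, not the exact formulation used in the paper.

```python
import math
import random

def scenario_sample_size(epsilon: float, beta: float) -> int:
    """Smallest N such that, with confidence at least 1 - beta, the worst
    metric value observed over N i.i.d. scenarios upper-bounds future runs
    with probability at least 1 - epsilon.
    Uses the basic scenario bound (1 - epsilon)^N <= beta."""
    return math.ceil(math.log(beta) / math.log(1.0 - epsilon))

def probabilistic_bound(run_test, epsilon=0.05, beta=1e-6, seed=0):
    """Run the system under test on randomly drawn scenarios and return
    the worst observed value of the performance metric."""
    rng = random.Random(seed)
    n = scenario_sample_size(epsilon, beta)
    worst = -math.inf
    for _ in range(n):
        scenario = rng.random()      # placeholder for a sampled test input
        metric = run_test(scenario)  # e.g. frame latency of one encoded frame
        worst = max(worst, metric)
    return worst, n

if __name__ == "__main__":
    # Toy system under test: the metric is a simple function of the scenario.
    bound, n = probabilistic_bound(lambda s: 10 + 5 * s)
    print(f"After {n} scenarios, worst observed metric: {bound:.2f}")
```

The resulting statement is probabilistic rather than deterministic: instead of claiming that the metric can never exceed the bound, we claim that, with high confidence, the probability of exceeding it in future executions is below the chosen ε.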