The Complexity Gap

Software systems are at the heart of modern business. While they vary greatly, they share common characteristics: they rarely work in isolation; they are all complex; and thorough testing is often very difficult, or even impossible. All of this is compounded by networking and the chaining together of multiple devices, systems, architectures, and applications.

The combinatorial complexity inherent in this level of interconnectivity cannot be overstated. Even the simplest metric currently used to measure 'complexity' within code, cyclomatic complexity [1], illustrates this. Such metrics are wholly inadequate because they do not capture the complexity that arises in operation, where each of these systems also maintains numerous independent states. Further, as the number of possible states within the application layer increases, so does the degree of interaction between them, producing an exponential increase in complexity.
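To make the limitation concrete, here is a minimal sketch of how cyclomatic complexity is typically computed: one plus the number of decision points in a function. The counter below is illustrative only (a naive walk over Python's syntax tree, not a production metric tool), and it shows exactly what the metric sees, and what it misses: runtime state is invisible to it.

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: one plus the number of branch points."""
    tree = ast.parse(source)
    # Node types that introduce an extra path through the code.
    branch_nodes = (ast.If, ast.For, ast.While, ast.IfExp,
                    ast.ExceptHandler, ast.And, ast.Or)
    decisions = sum(isinstance(node, branch_nodes) for node in ast.walk(tree))
    return decisions + 1

sample = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(sample))  # 3: two decision points plus one
```

The metric reports a tidy 3 for this function, yet it says nothing about how `classify` behaves when wired into a system holding its own independent state, which is precisely the gap argued above.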

This interlinking of discrete systems at scale has enabled an explosion of innovation and accelerated the rate at which applications and features can be delivered. But the increase in delivery speed has not been without cost. The verification and validation of these systems is where this is most apparent: despite huge increases in the time and resources dedicated to validating integration between systems, failure mechanisms still go unidentified.

The Coverage Gap 

As a new feature is added to an existing system, the potential for interaction with existing states rapidly increases; the impact is generally exponential as the size of the state space and the number of features grow. Unfortunately, the means of increasing test coverage is linear, as it invariably relies on either manual execution or manually written automated scripts [2]. The underlying risk this gap poses (see Figure 1) is rarely acknowledged by those developing the applications [3].

There have been new developments using Machine Learning to attempt to address the difficulty of achieving test coverage. However, many of these approaches currently either focus on the unit level [3] or on simulating user interaction with a user interface [4].

At this stage we do not have an adequate answer to this complexity gap, nor a solution at the integration level. However, we are actively researching this area to see whether we can contribute to solving this problem.

References 

[1] T. J. McCabe, “A Complexity Measure,” in Proceedings of the 2nd International Conference on Software Engineering, Los Alamitos, 1976.
[2] J. Arbon, “AI and Machine Learning for Testers,” 6 June 2017. [Online]. Available: https://www.slideshare.net/TechWellPresentations/ai-and-machine-learning-for-testers. [Accessed 8 October 2018].
[3] Microsoft MSDN, “IntelliTest, Test more with less (effort),” 30 September 2015. [Online]. Available: https://blogs.msdn.microsoft.com/visualstudio/2015/09/30/intellitest-for-net-test-more-with-less-effort/. [Accessed 20 October 2018].
[4] J. Colantonio, “How AI is changing test automation,” 29 May 2018. [Online]. Available: https://www.joecolantonio.com/how-ai-is-changing-test-automation/. [Accessed 18 October 2018].
[5] Z. Xu, P. Liu and J. Ai, “Probability Model-Based Test Suite Reduction,” SIGSOFT Softw. Eng. Notes, vol. 42, pp. 1-6, 2017.
