
Framework for Amount of Exploratory Testing

 This post provides a framework for deciding how much exploratory testing should be done on a particular feature. It starts by talking about risk, then outlines some factors that can affect the quality of a feature, and finally describes three factors that can be used to decide how much exploratory testing to do.

 Risk

 Testing is all about Risk.

 We manage that risk, and the amount of testing we should do, based upon how 'risky' the feature we are developing is. There are multiple ways you can test a piece of software: you can use automation, exploratory testing, or maybe not test at all. One technique that has gained popularity is exploratory testing, which Wikipedia defines as:

 "Exploratory testing is an approach to software testing that is concisely described as simultaneous learning, test design and test execution"

 In my experience with this method of testing, you typically timebox how long you want to spend on a particular charter and away you go. But how much exploratory testing should you do? How much is enough?

 When you think about the development of a specific feature, there are multiple factors that could pose a risk to its quality. These include things like:

  • Tester and developer experience
  • The quality of the codebase
  • The relevant documentation available
  • How much access the team has to the product owner
  • How well understood the domain is

 As you can see, there are many factors that can impact the quality of a feature, so when we decide how much time to assign to exploratory testing we need to consider some of these things. How can you do this? You can look at some of the risks associated with that feature and, based upon those risks, assign an amount of time for testing.

 What I have come up with is a simple framework to help define the amount of effort that goes into exploratory testing, and it uses three risk factors to help you. The factors are:

  • Impact on the users should the feature not work or go wrong
  • Complexity of the feature (technical and business)
  • Experience of the developers implementing the feature

 So what do I mean by each of these factors…..

 Impact

This is asking the question….

  "What impact will this feature have on our customer/user if it doesn’t work or goes wrong?"

 Will the user lose money? Will it dent confidence in our product in the eyes of the user? This factor looks at the feature from the customer's or user's perspective. The higher the negative impact, the more testing you should do, because if an issue got through, the effect on the business could be serious, for example loss of revenue.

 Complexity

This is asking the question….

 "How complex is the feature from both a technical perspective and business logic perspective?"

 From a technical perspective, it could be that the feature requires updates to legacy code that is bloated and over-engineered, where just one small change could cause issues in other areas of the system. From a business logic perspective, it could be that there is a complex algorithm involved in the feature. The more complex the change in either of these areas, the more exploratory testing you will probably want to perform than if the complexity were low.

 Experience

This is asking the question….

 "How experienced are the developer(s) implementing the feature?"

 If you have a new team member, it may be that they do not have much knowledge of the domain or the technology stack you are using. Where a developer is not very experienced, you would probably want to do a little more exploratory testing to mitigate this risk. As time goes by the developer will gain valuable experience, so the amount of testing time moving forward should reflect that.

 The framework uses these three factors together to come up with an amount of exploratory testing that should be done on a specific feature.
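
 To make this concrete, here is a minimal sketch in Python of how the three factors might be combined into a single recommendation. The Rating scale, the inverted experience score, and the thresholds are illustrative assumptions of mine, not part of the framework itself; substitute whatever scoring scheme suits your team.

```python
from enum import IntEnum


class Rating(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


def exploratory_testing_effort(impact: Rating,
                               complexity: Rating,
                               experience: Rating) -> str:
    """Combine the three risk factors into a rough amount of exploratory testing.

    Impact and complexity add risk; developer experience reduces it,
    so it is inverted before being added to the score.
    """
    risk_score = impact + complexity + (Rating.HIGH + 1 - experience)

    # Illustrative thresholds - tune these to your own context.
    if risk_score >= 8:
        return "High"
    if risk_score >= 6:
        return "Medium"
    return "Low"
```

 The arithmetic itself is not the point; the point is that all three factors feed into one decision rather than any single factor dominating.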

 I will now work through an example of how this works…..

Let's say we have a feature for a bank that, based upon a customer's various attributes, assigns a rating. Depending on that rating, a customer can be offered various products, ranging from a bank account to a large loan. This feature is being written by a new team with little experience in the domain. Finally, the codebase is complex as it uses an old, bloated framework on an old technology stack.

So let's work through the factors:

Impact

The negative impact of this change not working is that customers could be offered products they should not be. This could lead to reputational damage: someone could get into a lot of debt because they were offered the wrong product, and existing customers may lose confidence if they hear reports of people being offered the wrong products and getting into financial difficulty. The impact would therefore be High.

Complexity

This feature is built on an old framework on an old technology stack, so the complexity will be High as there will potentially be technical issues to overcome. From a business perspective the logic is also complex, as there are multiple customer attributes to consider and the rating needs to be accurate.

Experience

This would be Low, as the team implementing the feature is new.

Now that we have these ratings, we can use the framework to decide how much to test. Here is an example:

[Image: example table mapping the Impact, Complexity and Experience ratings to an amount of exploratory testing]

So in our example above we would do a High amount of exploratory testing.
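
Plugging the bank example into the sketch from earlier (with the same caveat that the scoring is purely illustrative):

```python
effort = exploratory_testing_effort(
    impact=Rating.HIGH,      # customers could be offered the wrong products
    complexity=Rating.HIGH,  # legacy framework plus complex rating logic
    experience=Rating.LOW,   # new team with little domain knowledge
)
print(effort)  # -> "High"
```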

 The 'amount of exploratory testing' column can be whatever you want: it could be man-days, or it could be the number of charters to do. It really is up to you. What it gives you is an amount of testing to do based upon the three risk factors.

 So how might this look in the real world?

 Let's take a couple of extreme examples:

 Extreme 1

[Image: risk factor ratings for Extreme 1]

 A change like this may involve:

  • Multiple exploratory testing charters each lasting at least an hour
  • Mob exploratory testing sessions with the whole team

 Extreme 2

[Image: risk factor ratings for Extreme 2]

 A change like this may involve:

  • A timeboxed exploratory testing session lasting no more than 30 mins
  • Timeboxed pair exploratory testing with the developer on the feature branch 

Issues will always get through testing. There is a phrase that goes something like this: "Nothing is certain except death and taxes". It really should be "Nothing is certain except death, taxes and software bugs". But by using a framework similar to the one I have described, you can reduce the number of issues that get through.
