
Building a test strategy for a new team

Teams, we have all been on them. Some are good and some are bad. Some we never wanted to leave and others we probably couldn't wait to leave. Now most of the time (well in my experience anyway) you tend to get put into a team that already exists. Maybe you are a new hire or maybe you have asked to change to a different product team. 


When you do this, more than likely there will already be a testing strategy in place. It may be that you adapt and change it in whatever way you see fit to improve the testing. But what if everyone on the team were new? How would you decide your testing strategy? This post will go through some useful things you can do to help a new team develop a test strategy.

Table of Contents

📈 What is a Test Strategy?

🤔 Where should I start?

🎯 Understand the company and their goals

💪 Play to the team's strengths

👁️‍🗨️ Understand what quality looks like

📏 Understand Scope

🧪 Understand the type of tests you need

📊 Measure your success

🤝 Collaborate

📝 Summary

📈 What is a Test Strategy?

Like all good posts I’m going to start with a definition, in this case of strategy.


‘A plan of action designed to achieve a long-term or overall aim’ (Wikipedia)


A test strategy is anything that describes the approach that you are going to take to testing a particular thing. That thing could be a product, project, feature or even a bug. It typically includes things such as:


  • A description of the tools that will be used

  • What environments will be used for testing

  • What you are going to test

  • How you are going to test

  • Who will test


Now with a new team this can be a challenge as, potentially, no one knows anyone else or the skills that they bring to the team. It can also be frustrating: not having a clear testing strategy can lead to effort being put into the wrong areas and create a disjointed vision of what testing should look like in the team. So what can a team do to come up with a good test strategy when everyone is new? Below are a few things that can be done.

🤔 Where should I start?

🎯 Understand the company and their goals

Understanding the company and their goals is essential to coming up with the right strategy. If your strategy doesn’t align with the aims of the business then it will be working against what the business is trying to achieve, and it will probably fail.


Let’s say a goal of the business is to provide users with cutting-edge features before anyone else. Now how would this affect your test strategy? Well, it would mean that you need to think about making your testing fast and efficient, so that releases can be done quickly and you can get the jump on your competitors.


So your strategy needs to include things such as test automation and fast feedback loops. If instead you decided that the testing would be all manual, with developers throwing releases over the wall to the testers, then your strategy would not be in line with the business. The business wants x and you are doing y, which isn’t enabling x - this will ultimately cause conflict, and the business will typically win.
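To make "fast feedback loops" slightly more concrete, here is a minimal sketch of what that could look like in a C#/NUnit test suite. The Basket class and the "FastFeedback" category name are purely illustrative; the idea is simply that quick, deterministic checks are tagged so the pipeline can run them on every commit.

```csharp
using NUnit.Framework;

// Hypothetical example: Basket and the "FastFeedback" category are made up
// purely to illustrate tagging quick, deterministic checks for CI.
public class Basket
{
    public decimal Total { get; private set; }
    public void Add(decimal price) => Total += price;
}

[TestFixture]
public class BasketSmokeTests
{
    [Test, Category("FastFeedback")]
    public void EmptyBasket_HasZeroTotal()
    {
        Assert.That(new Basket().Total, Is.EqualTo(0m));
    }

    [Test, Category("FastFeedback")]
    public void AddingAnItem_IncreasesTheTotal()
    {
        var basket = new Basket();
        basket.Add(9.99m);
        Assert.That(basket.Total, Is.EqualTo(9.99m));
    }
}

// On every push the pipeline could run just the quick checks, e.g.:
//   dotnet test --filter TestCategory=FastFeedback
```

The point is not the specific test framework; it is that the fastest checks run first and most often, so the team gets feedback in minutes rather than days.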

💪 Play to the team's strengths

Play to the team's strengths. Like with all teams, old or new, you need to utilise the strengths that the team has. So if everyone is new, understanding what strengths everyone has, and at what level, is essential. If everyone on the team is very technical then the strategy needs to be one where you utilise those technical skills. This may mean lots of automation and bespoke testing tools.


Understanding the level of a skill is important as well, as the team needs to know how well team members can do a certain thing. So let's say, for example, you have a team member who says “Hey, I know C#”. You think great, I can let them loose on UI automation, only to find out that their C# skills come from a beginner's online course. Yes, they have the skill, but not necessarily at the level you want.


How the level is measured will vary, but it should be agreed amongst the group so that everyone can give a true reflection of how much they know about a particular thing. This then enables the team to make sure that no one feels under too much pressure and that everyone is happy with what they are required to do.

👁️‍🗨️ Understand what quality looks like

Understanding what quality looks like for a product or even a feature will help you come up with the right strategy that focuses on the right things. Now it may be an internal view of quality, i.e. what the team are happy to release, or it could be an external view of quality, i.e. what the user expects from the product. It doesn't really matter which it is; what's important is knowing what it is.


For example, if one aspect of quality is that the application should return search results in less than a second, then the test strategy should include some performance tests and relevant metrics to make sure that that performance quality indicator stays under the second. Knowing what quality looks like can also help the test strategy from a different viewpoint, by shifting the testing left.
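As a rough sketch of how that one-second indicator could be checked automatically, here is an NUnit-style test that times a search request. The base address, route and query are assumptions made up for the example, not a real API.

```csharp
using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;
using NUnit.Framework;

[TestFixture]
public class SearchPerformanceTests
{
    // Illustrative base address and route; swap in the real search endpoint.
    private static readonly HttpClient Client = new HttpClient
    {
        BaseAddress = new Uri("https://staging.example.com")
    };

    [Test]
    public async Task Search_ReturnsResults_InUnderOneSecond()
    {
        var stopwatch = Stopwatch.StartNew();
        var response = await Client.GetAsync("/api/search?q=guitar");
        stopwatch.Stop();

        Assert.That(response.IsSuccessStatusCode, Is.True, "search request failed");
        // The one-second budget is the quality indicator agreed with the team.
        Assert.That(stopwatch.ElapsedMilliseconds, Is.LessThan(1000));
    }
}
```

A single timing assertion like this is only a smoke-level check; a fuller strategy would back it up with proper load testing and production monitoring.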


An example of shifting testing left would be user experience. It may be that users' view of quality is that their user experience is exceptional. One way to test this would be to have prototypes of the UX to run by the users prior to delivery of a feature. It’s still testing, just at an earlier stage of the development lifecycle.

๐Ÿ“ Understand Scope

Now this may seem silly and you may think “We know what we are supposed to test”. And you may well be right. However, in order to make sure you don’t start testing things you don’t need to, it is best to check. By understanding the scope you can focus on what matters and not get distracted. 


For example, you are tasked with testing a new feature that enables a user to add music to a playlist. What about the API that is available? Do you need to test that? If you did, and then found out another team was already testing it, you would have duplicated effort for probably no benefit. I know it sounds obvious but it is always best to understand the scope of what you are testing.

🧪 Understand the type of tests you need

Understanding the types of testing you need is important, and by doing so you can save your team a lot of wasted effort and potential future issues. Now the type of testing you need will be product and feature specific; for example, you may be implementing an API, in which case UI tests would be kind of pointless.


So you need to think about what you are testing to understand the type of tests that you need. Also, just because you can doesn't mean you should. Say you have a UI framework: that does not mean you should write thousands of UI tests. If you did, you may end up with a maintenance nightmare or tests that don't actually add any value. Remember that tests should add value; if they don't, you don't need them.
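For instance, if a piece of behaviour that is in your scope is exposed through an API, a small API-level check often adds more value than yet another UI test for the same thing. A minimal sketch, assuming an NUnit test against a hypothetical endpoint (the URL, route and payload are all made up):

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using NUnit.Framework;

[TestFixture]
public class PlaylistApiTests
{
    // Hypothetical endpoint and payload, purely for illustration.
    private static readonly HttpClient Client = new HttpClient
    {
        BaseAddress = new Uri("https://staging.example.com")
    };

    [Test]
    public async Task AddingATrack_ReturnsCreated()
    {
        var response = await Client.PostAsJsonAsync(
            "/api/playlists/42/tracks", new { trackId = 1001 });

        Assert.That(response.StatusCode, Is.EqualTo(HttpStatusCode.Created));
    }
}
```

A handful of checks at this level, plus one or two UI journeys on top, usually gives better value than thousands of UI tests exercising the same logic.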

📊 Measure your success

Once you have your strategy it’s important to measure its success. Now how do you measure that success? Well, the answer is that it depends and will vary. It could be the number of sev 1 issues that got through to production. It could be the number of bugs caught before the feature or product gets released.


Whatever it is, make sure that everyone in the team agrees what success looks like. If you find that your measure is getting worse then tweak the strategy. No test strategy is ever set in stone, it often needs to evolve and change as other factors in the team change. 
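As one hedged example of turning "bugs caught before release" into a number the team can track over time, here is a trivial defect detection percentage calculation; the figures are invented.

```csharp
using System;

// Illustrative only: made-up numbers showing one possible success measure,
// the proportion of defects caught before release (defect detection percentage).
int caughtBeforeRelease = 18;
int escapedToProduction = 2;

double detectionPercentage =
    100.0 * caughtBeforeRelease / (caughtBeforeRelease + escapedToProduction);

Console.WriteLine($"Defect detection percentage: {detectionPercentage:F1}%");
// Prints: Defect detection percentage: 90.0%
```

Whatever measure you pick, the trend matters more than any single number.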

๐Ÿค Collaborate

Once you have your test strategy, remember to collaborate.


It may sound silly if I just said “work together”, as you may say “well, we are working together, we are in the same team”, and you would be right. However, what I mean is working together through collaboration. This is useful when there are elements outside of your control, for example when you have to use a certain tool, programming language or framework. You can pair program in the programming language that no one in the team knows, or pair write yaml files for your CI pipeline.


By collaborating, you are enhancing everyone's learning and disseminating skills to more than just one person in the team. For example, say no one had any yaml experience and just one member of the team goes off and learns it. That's great, you now have a CI pipeline that can be used. Now let's say that person is off ill for two weeks. Not so bad, you say… but what happens when the pipeline breaks? Who's going to fix it? You now have a problem. If two or more people had paired on that CI pipeline, you would have someone else who could help resolve the issue. Also, by collaborating you will get to know one another better, and this will help build relationships that will be valuable in the future.
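To give the CI example some shape, here is roughly the kind of small yaml file two people might pair on. This assumes a GitHub Actions style pipeline and a .NET test project; every name and version in it is illustrative, and your CI tool's syntax may well differ.

```yaml
# Illustrative only: a deliberately small pipeline to pair on.
name: build-and-test
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'
      # Quick checks first for fast feedback, then the full suite.
      - run: dotnet test --filter TestCategory=FastFeedback
      - run: dotnet test
```

Pairing on something this small is a cheap way to make sure more than one person understands how the pipeline hangs together.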

๐Ÿ“ Summary

In summary, there are some things you can do to help a brand new team come up with a test strategy. They are not difficult and will help you come up with a strategy that everyone in the team is comfortable with. Having a brand new team thrown together is a rarity in my experience, but hopefully, if you do find yourself in that position, the above points will help you and your team. Even if you are not in a new team, the points above can still be useful and help make sure that your existing test strategy is applicable in your context.








