
What does quality look like?

How do you know the software product your team produces is a quality one?

Well, the answer is… you don't really know. The only people who can truly evaluate the quality of a piece of software are the customers. Some of you may disagree, but having been part of teams that develop software we don't use on an everyday basis in real-world situations, I don't think we are in a position to say whether it is quality or not. It's like building a car and never driving it - how do you know it's any good? What we can do, however, is understand under what conditions we are happy to ship our product to our clients: we can define what we think our 'internal quality' is and what it looks like.

 So how can we define what quality looks like?

In the dim and distant past I have worked in places where, so long as the software worked, it was OK to be shipped. There may have been a quick check to make sure the system wasn't slow, but outside of that it was pretty much all about functionality. These days, when there are many competing products and negative experiences with your software can spread through social media like wildfire, just making sure the software works from a functional perspective is not good enough. How, then, can we make sure we look at and understand other areas of our product and their respective quality?

 Define Quality Attributes

One way is to define a set of quality attributes for our product, and within each attribute have some factors that can be measured to give us an idea as to whether the product satisfies our view of quality. Like I said earlier, only the customer knows what quality looks like, but we can improve what we provide to our customers by having this internal view. To do this we need to define some areas that we want to measure the quality of. What areas could we use? Here are some examples:

  • Functionality
  • Performance
  • Security
  • Testability

Now that we have these areas, we can define factors within each of them that we can measure - either through metrics or anecdotally. How you decide what areas and factors to cover depends on your product and the industry you are in, but by defining them, and what constitutes quality for each, you now have a quality bar to aim for.
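To make this a little more concrete, here is a minimal sketch in Python of how attributes and their factors might be modelled. It's purely illustrative - the class and factor names are my own assumptions, not a prescribed structure.

```python
from dataclasses import dataclass, field

@dataclass
class Factor:
    """A single measurable (or anecdotal) statement about quality."""
    description: str

@dataclass
class QualityAttribute:
    """An area of quality, made up of one or more factors."""
    name: str
    factors: list[Factor] = field(default_factory=list)

# Illustrative names only - your areas and factors will depend on
# your product and industry.
attributes = [
    QualityAttribute("Performance", [
        Factor("Logging in should take less than 2 seconds"),
        Factor("Adding an exercise should take less than 1 second to save"),
    ]),
    QualityAttribute("Testability", [
        Factor("Deploying the application locally takes less than 15 mins"),
        Factor("Database access needs no post-deployment work"),
    ]),
]
```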

Let's go through an example…

So let's take the example of a desktop application that enables you to log in and add and view your exercises, and let's look at the areas of performance and testability.

 Performance

So from a performance perspective, let's look at two factors we could use:

  • Logging in should take less than 2 seconds
  • After logging in, adding an exercise should take less than 1 second to save

Now if we look at these factors we can add some parameters to them, and each parameter can map to a RAG (Red/Amber/Green) status - a sketch of these mappings in code follows the lists below.

Logging in should take less than 2 seconds

  • Green – Logging in takes under 2 seconds
  • Amber – Logging in takes between 2 and 4 seconds
  • Red – Logging in takes more than 4 seconds

Adding an exercise should take less than 1 second to save

  • Green – Exercise takes less than 1 second to save
  • Amber – Exercise takes between 1 and 1.5 seconds to save
  • Red – Exercise takes over 1.5 seconds to save
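As a rough sketch of how those parameters could be turned into code - the function names are my own, and the thresholds are simply the ones from the lists above:

```python
def login_rag(seconds: float) -> str:
    """Green under 2s, amber between 2 and 4s, red over 4s."""
    if seconds < 2:
        return "Green"
    if seconds <= 4:
        return "Amber"
    return "Red"

def save_exercise_rag(seconds: float) -> str:
    """Green under 1s, amber between 1 and 1.5s, red over 1.5s."""
    if seconds < 1:
        return "Green"
    if seconds <= 1.5:
        return "Amber"
    return "Red"

print(login_rag(1.4))          # Green
print(save_exercise_rag(1.2))  # Amber
```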

Testability

From a testability perspective we could use these factors:

  • Deploying the application locally takes less than 15 mins
  • Access to the test environment database should not require any post deployment work

Again, we can add some parameters to these factors, each mapping to a RAG status - a code sketch follows the lists below.

Deploying the application locally takes less than 15 mins

  • Green – Deployment takes less than 15 mins
  • Amber – Deployment takes between 15 and 20 mins
  • Red – Deployment takes more than 20 mins

Access to the test environment database should not require any post-deployment work

  • Green – Post deployment, the database can be accessed without additional steps
  • Amber – A couple of manual steps are required post deployment to get database access
  • Red – No access to the database is available post deployment
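The same pattern works for testability. A sketch, assuming deployment time is measured in minutes and database access is recorded as a flag plus a count of manual steps (all names are illustrative):

```python
def deploy_rag(minutes: float) -> str:
    """Green under 15 mins, amber between 15 and 20 mins, red over 20."""
    if minutes < 15:
        return "Green"
    if minutes <= 20:
        return "Amber"
    return "Red"

def db_access_rag(has_access: bool, manual_steps: int) -> str:
    """No access at all is red; access needing a couple of manual
    steps post deployment is amber; immediate access is green."""
    if not has_access:
        return "Red"
    return "Amber" if manual_steps > 0 else "Green"
```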

 Now what does this give us?  

We now have a set of quality attributes (with measures) we can use to start monitoring the internal quality of our product. The factors need to be aggregated up to create an overall RAG for each area, and this can be done in various ways - the choice is up to you. If you then put each area's RAG status on a dashboard, everyone can see the current quality of the application.
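Aggregation could be as simple as "worst status wins". Here is a sketch of that one rule - it is just one of the various ways mentioned above, and your team may well choose a different one:

```python
RAG_ORDER = {"Green": 0, "Amber": 1, "Red": 2}

def aggregate(factor_statuses):
    """Roll the factor RAGs for an area up into one overall RAG.
    Here the worst factor status wins."""
    return max(factor_statuses, key=RAG_ORDER.get)

print(aggregate(["Green", "Amber", "Green"]))  # Amber
print(aggregate(["Green", "Red", "Amber"]))    # Red
```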

 So let's say that our factors were as follows:

[Table: Performance factors with their individual RAG statuses]

[Table: Testability factors with their individual RAG statuses]

To keep it simple, let's say we aggregate out the amber: this would give us a performance quality RAG of red and a testability RAG of green. So our quality dashboard would look something like this:

[Image: Quality RAG status dashboard]

Obviously it can look a lot more professional and slick, but what you now have is something the whole company can see to know the current quality of the product. When someone asks what quality looks like, they can look at the dashboard. How often you update it is up to you; in my current company we do this monthly.

Now, all the areas don't have to be green for a team to be happy to ship the product. In the above example performance is red. The key question to ask in this scenario is: what does that mean, and what's the impact? If an area is red, tasks can be added to future sprints to try and rectify the issue. So, for example, if there are performance issues when saving, it may be that a task is added to a future sprint to understand why and to fix it, because we think the impact to our customers is low. Or it may be the opposite: an urgent issue that needs rectifying before a new release goes out to our clients. What these statuses give us is a nudge towards the question "Do we have a problem here?" If they're red, you probably do, but it may not be one that you have to deal with right away. These RAG statuses also do not have to be measurable metrics. They could be anecdotal, as certain things cannot be measured. One example is usability - that's hard to measure, and it may be that your team feeds back on usability and you decide together what constitutes the RAG status.

Now, I mentioned earlier that only the customer can define what quality looks like, so ideally some of these factors come from the users. For example, user forums may be a good source of what users expect, as is feedback on the application via a feedback option that a user can use if they want to. The more of these factors that come from customers, and the more you know about what users expect of them, the better picture you have of how the application compares to users' expectations.

When coming up with areas and factors, the team needs to agree on what the areas are and how they are going to be measured. That way everyone knows what quality looks like and everyone is on the same page.

 So why would you do this?

  • Gives us a benchmark to measure potential releases against. As you perform releases you can look at these factors and understand which have improved or got worse over time across releases.
  • Will help improve the design and implementation of the code, e.g. performance and accessibility. When designing features, knowing what constitutes quality will help influence the design so that it keeps or improves the level of quality defined.
  • Can see degradation that occurs over time. If the metrics show degradation over time, you will get early sight of this and can rectify it before it becomes an issue. If you can't see this degradation, then the rework to resolve an issue could be huge.
  • Focuses the team on what's important. If everyone has a different view on what quality is, then people could be focusing their efforts in the wrong areas.
  • Stops bun fights over bugs. If your team doesn't know what quality looks like, bugs may be raised which have no impact on these quality areas. In that case the time could have been better utilized adding new features than fixing bugs that don't impact the quality.

 What happens if we don’t do it?

  • No one knows what we are working towards or trying to maintain from a quality perspective. If you don't know what quality looks like, there is no sense of what the team is aiming for.
  • Slower to release. Time could be spent on areas that do not improve the quality of the product.
  • Cannot improve something we don't know about. If areas are not measured, then there is no way to improve them, as you know nothing about them.
  • Bites us in years to come (progressive quality degradation). Not measuring certain things, like performance, may mean that those areas slowly degrade over time, and resolving them would cost a lot of time and money.
  • Lack of confidence in releases. If you don't know about each area, then when a release goes into production it could have a major impact in one of those areas. This could lead to more support calls and unhappy customers.

So, as you can see, knowing what quality looks like from an internal perspective is a useful way to make sure your product does not disappoint.

