How do you know the software product your team produces is quality?
Well, the answer is… you don't really know; the only person who can truly evaluate the quality of a piece of software is the customer. Some of you may disagree, but having been part of teams that develop software we don't use on an everyday basis in real-world situations, I don't think we are in a position to say whether it is quality or not. It's like building a car and never driving it: how do you know it's any good? However, what we can do is understand the conditions under which we are happy to ship our product to our clients. We can define what we think our 'internal quality' is and what it looks like.
So how can we define what quality looks like?
In the dim and distant past I have worked in places where, so long as the software worked, it was OK to be shipped. Sure, there may have been a quick check to make sure the system wasn't slow, but outside of that it was pretty much all about functionality. These days, when there are many competing products and negative experiences with your software can spread through social media like wildfire, just making sure the software works from a functional perspective is not good enough. How, then, can we make sure we look at and understand the other areas of our product and their respective quality?
Define Quality Attributes
One way is to define a set of quality attributes for our product and, within each attribute, some factors that can be measured to give us an idea as to whether the product satisfies our view of quality. Like I said earlier, only the customer knows what quality looks like, but we can improve what we provide to our customers by having this internal view. To do this we need to define some areas that we want to measure the quality of. What areas could we use? Here are some examples:
- Functionality
- Performance
- Security
- Testability
Now that we have these areas, we can define some factors within each of them that we can measure, either through metrics or anecdotally. How you decide which areas and factors to cover depends on your product and the industry you are in, but by agreeing on them, and on what constitutes quality for each, you now have a quality bar that you can aim for.
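To make this a little more concrete, here is a minimal sketch in Python of how attributes and their measurable factors could be written down. The factor names and "measured by" labels are placeholders for illustration only, loosely borrowed from the example later in this post:

```python
# A minimal sketch of quality attributes, each with factors we can measure.
# The factor names and "measured_by" values are placeholders for illustration.
quality_attributes = {
    "Functionality": [
        {"factor": "Core user journeys pass acceptance tests", "measured_by": "test results"},
    ],
    "Performance": [
        {"factor": "Login time", "measured_by": "metric (seconds)"},
    ],
    "Security": [
        {"factor": "Open critical vulnerabilities", "measured_by": "scan results"},
    ],
    "Testability": [
        {"factor": "Time to deploy the application locally", "measured_by": "metric (minutes)"},
    ],
}

for attribute, factors in quality_attributes.items():
    print(attribute)
    for f in factors:
        print(f"  - {f['factor']} ({f['measured_by']})")
```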
Let's go through an example….
So let's take the example of a desktop application that lets you log in, then add and view your exercise, and let's look at the areas of performance and testability.
Performance
So from a performance perspective let's look at 2 factors we could use:
- Logging in should take less than 2 seconds
- After logging in, adding an exercise should take less than 1 second to save
Now if we look at these factors we can add some parameters to them, and each parameter can map to a RAG (Red/Amber/Green) status.
Logging in should take less than 2 seconds
- Green - Log in takes under 2 seconds
- Amber - Log in takes between 2 and 4 seconds
- Red - Log in takes more than 4 seconds
Adding an exercise should take less than 1 second to save
- Green - Exercise takes less than 1 second to save
- Amber - Exercise takes between 1 and 1.5 seconds to save
- Red - Exercise takes over 1.5 seconds to save
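One way to turn those thresholds into something you can check automatically is a small helper like the sketch below. The function name is just illustrative; the numbers are the thresholds above:

```python
def rag_for_time(seconds: float, green_under: float, amber_upto: float) -> str:
    """Map a measured duration onto a RAG status given two thresholds."""
    if seconds < green_under:
        return "Green"
    if seconds <= amber_upto:
        return "Amber"
    return "Red"

# Logging in: Green under 2s, Amber between 2 and 4s, Red over 4s.
print(rag_for_time(1.4, green_under=2.0, amber_upto=4.0))  # Green
print(rag_for_time(3.2, green_under=2.0, amber_upto=4.0))  # Amber

# Saving an exercise: Green under 1s, Amber between 1 and 1.5s, Red over 1.5s.
print(rag_for_time(1.8, green_under=1.0, amber_upto=1.5))  # Red
```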
Testability
From a testability perspective we could use these factors:
- Deploying the application locally takes less than 15 mins
- Access to the test environment database should not require any post deployment work
Again, if we look at these factors we can add some parameters to them, and each parameter can map to a RAG status.
Deploying the application locally takes less than 15 mins
- Green - Deploying takes less than 15 mins
- Amber - Deployment takes between 15 and 20 mins
- Red - Deploying takes more than 20 mins
Access to the test environment database should not require any post deployment work
- Green - Post deployment the database can be accessed without additional steps
- Amber - A couple of manual steps are required post deployment to get database access
- Red - No access to the database is available post deployment
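The database-access factor isn't a timing at all, so (as a sketch, with made-up outcome labels) it can simply be a lookup from what the team observes after a deployment to a status:

```python
# The database-access factor is categorical rather than numeric, so a plain
# lookup is enough. The outcome labels are invented for illustration.
DB_ACCESS_RAG = {
    "accessible with no extra steps": "Green",
    "a couple of manual steps needed": "Amber",
    "no access available": "Red",
}

print(DB_ACCESS_RAG["a couple of manual steps needed"])  # Amber
```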
Now what does this give us?
We now have a set of quality attributes (with measures) we can use to start monitoring the internal quality of our product. The factors need to be aggregated up to create an overall RAG for each area, and this can be done in various ways; the choice is up to you. Now, if you put these areas' RAG statuses on a dashboard, everyone can see the current quality of the application.
So let's say that when we measure our factors, performance comes out with a red while testability has nothing worse than an amber. To keep it simple, let's say we aggregate out the amber; this would give us a performance quality RAG of red and a testability RAG of green. So our quality dashboard would look something like this:
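As a rough sketch of how that aggregation could be wired up, here is one way in Python. The factor-level statuses are invented purely for illustration, and "aggregate out the amber" is read here as "ignore amber factors unless every factor is amber"; your team may well pick a different rule:

```python
RAG_ORDER = {"Green": 0, "Amber": 1, "Red": 2}

def aggregate(statuses: list[str]) -> str:
    """Roll factor-level statuses up to an area-level RAG, ignoring Ambers
    (one possible reading of 'aggregate out the amber'); if every factor
    is Amber, the area stays Amber."""
    decisive = [s for s in statuses if s != "Amber"]
    if not decisive:
        return "Amber"
    return max(decisive, key=lambda s: RAG_ORDER[s])

# Invented factor statuses, consistent with the outcome described above.
areas = {
    "Performance": ["Amber", "Red"],    # login Amber, saving Red
    "Testability": ["Green", "Amber"],  # local deploy Green, DB access Amber
}

for area, statuses in areas.items():
    print(f"{area}: {aggregate(statuses)}")
# Performance: Red
# Testability: Green
```

A simpler alternative is just to take the worst status in each area; the important thing is that the team agrees on the rule and applies it consistently.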
Obviously it can look a lot more professional and slick, but what you have now is something that the whole company can see to know the current quality of the product. When someone asks what quality looks like, they can look at the dashboard. How often you refresh it is up to you; in my current company we do this monthly.
Now, the areas don't all have to be green for a team to be happy to ship the product. In the above example performance is red. The key question to ask in this scenario is: what does that mean and what's the impact? If an area is red, tasks can be added to future sprints to try to rectify the issue. So, for example, if there are performance issues when saving, it may be that a task is added to a future sprint to understand why and fix it, because we think the impact on our customers is low. Or it may be the opposite: it is an urgent issue and needs rectifying before a new release goes out to our clients. What these statuses give us is a nudge towards the question "Do we have a problem here?" If they're red you probably do, but it may not be one that you have to deal with right away. These RAG statuses also do not have to be measurable metrics; they could be anecdotal, as certain things cannot be measured. One example is usability: that's hard to measure, and it may be that your team feeds back on usability and you decide together what constitutes the RAG status.
Now, I mentioned earlier that only the customer can define what quality looks like, so ideally some of these factors come from the users themselves. For example, user forums can be a good source of what users expect, as can feedback submitted through an in-application feedback option. The more of your factors that reflect what your customers and users actually expect, the better the picture you have of how the application compares to their expectations.
When coming up with areas and factors, the team needs to agree on what the areas are and how they are going to be measured. That way everyone knows what quality looks like and everyone is on the same page.
So why would you do this?
- Gives us a benchmark to measure potential releases against. As you perform releases you can look at these factors and understand which have improved or got worse over time.
- Helps improve the design and implementation of the code, e.g. performance and accessibility. When designing features, knowing what constitutes quality will help influence the design so that it keeps or improves the level of quality defined.
- Lets you see degradation that occurs over time. If the metrics show degradation, you get early sight of it and can rectify it before it becomes an issue. If you can't see this degradation, the rework to resolve an issue later could be huge.
- Focuses the team on what's important. If everyone has a different view of what quality is, people could be focusing their efforts in the wrong areas.
- Stops bun fights over bugs. If your team doesn't know what quality looks like, bugs may be raised that have no impact on these quality areas, in which case the time could have been better spent adding new features than fixing bugs that don't affect the quality.
What happens if we don’t do it?
- No one knows what we are working towards or trying to maintain from a quality perspective. If you don't know what quality looks like, there is no sense of what the team is aiming for.
- Slower releases. Time could be spent on areas that do not improve the quality of the product.
- We cannot improve something we don't know about. If areas are not measured, there is no way to improve them because you know nothing about them.
- It will bite us in years to come (progressive quality degradation). Not measuring things like performance may mean those areas slowly degrade over time, and resolving them later would cost a lot of time and money.
- Lack of confidence in releases. If you don't know about each area, then when a release goes into production it could have a major impact on one of those areas. This could lead to more support calls and unhappy customers.
So, as you can see, knowing what quality looks like from an internal perspective is a useful way to make sure your product does not disappoint.