
Conflicting Information

In this blog post I will be talking about testing and conflicting information. Now, I am a new Mac user, and whilst I was playing around with my new toy a while ago I wanted to see how much storage I had left available. So I went into 'About This Mac' and brought up the storage dialog box.

Can you spot any issues?

[Screenshot: the storage tab of the 'About This Mac' dialog]
Now, I did not notice the issues when I first looked at the dialog box, as my eye was drawn to the text that stated how much space I had left. I thought, "Great, I have loads of space left", as I had over 140 GB free. I then had a closer look.

You will notice that the total amount of space I have left is 140.41 GB, but the white space in the storage bar (which represents free space) looks a little too short to represent 140.41 GB: that is over half of the total, yet much less than half of the bar appears free. It also states that I have 249.78 GB of space in total, while the disk icon text states that I have 251 GB of flash storage.

All of this tells me that there is some conflicting information being displayed. How do I know which figure is correct? Maybe they are both incorrect.
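Before going any further, it is worth pinning down exactly how far apart the displayed figures are. The snippet below is a quick illustrative calculation using the numbers from the dialog above; it is plain arithmetic and makes no claim about the cause of the mismatch:

```csharp
using System;

class DialogFigures
{
    static void Main()
    {
        // Figures as displayed in the storage dialog.
        double freeGb = 140.41;         // "140.41 GB" free
        double dialogTotalGb = 249.78;  // total shown in the dialog text
        double iconTotalGb = 251.0;     // total shown under the disk icon

        // The two totals disagree by 1.22 GB.
        Console.WriteLine($"Total mismatch: {iconTotalGb - dialogTotalGb:F2} GB");

        // Free space is roughly 56% of the dialog total, so well over half
        // of the storage bar should be shown as free space.
        Console.WriteLine($"Expected free proportion: {freeGb / dialogTotalGb:P1}");
    }
}
```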


As testers we can do a few things when we find conflicting information.
  • Find out if the conflicting information is an issue
How do you know this is a problem? Look at and analyse your oracles and see if any of them will give you an idea as to whether this is an issue or not. Usually (and I'm making an assumption here), when conflicting information is found there will be a bug present in the system. I can't think of an example where the same piece of information, represented multiple times with different values, would be seen as correct. Multiple pieces of information that represent the same thing should be consistent, and any difference should be investigated.
  • Find out which piece of information is correct
Instead of raising an issue simply stating that the data is conflicting and wrong, digging a little deeper into which data is wrong can be a useful exercise, and the findings can be used by the developer to fix the problem. In my example I could run a third-party application to confirm the actual amount of space I have left; I could then share this with the developer, which in turn would help them to identify the wrong piece of data (a minimal sketch of this kind of check appears after this list). This investigation could potentially take some time, and the time spent really should be linked to how severe the effect of the conflicting information is (see below).
  • Evaluate how severe the effects of the difference could be
How severe is the effect of the data difference? It may not be too much of an issue if the conflicting information is small. So in my example, if the difference was, say, 0.5 MB, then this could be seen as not particularly severe. On the other hand, a small difference could be very severe. Imagine a piece of medical machinery that gives doses of a drug to a patient at set times every hour. If the amount that was actually given was different to the amount that was configured to be administered, the consequences could be fatal. So small variances in information do not mean that the conflicting data does not matter.
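As an illustration of the second and third points, here is a minimal C# sketch of the kind of check I have in mind. It queries the operating system directly (via .NET's DriveInfo) as an independent oracle, and flags figures that disagree by more than a chosen tolerance. The hard-coded dialog figures and the tolerance value are illustrative assumptions, not part of any real test suite:

```csharp
using System;
using System.IO;

class StorageConsistencyCheck
{
    // Hypothetical tolerance: how big a disagreement we are willing to
    // accept before investigating. The right value is entirely context
    // dependent - fine for a storage dialog, fatal for a dosing device.
    const double ToleranceGb = 0.5;

    static void Main()
    {
        // The two totals reported by the dialog in this post.
        double dialogTotalGb = 249.78; // dialog text
        double iconTotalGb = 251.0;    // disk icon text

        // Independent oracle: ask the OS directly. DriveInfo reports
        // bytes, so divide by 1e9 to get decimal gigabytes.
        // "/" is the root volume on macOS; adjust for other platforms.
        var drive = new DriveInfo("/");
        double osTotalGb = drive.TotalSize / 1e9;
        double osFreeGb = drive.AvailableFreeSpace / 1e9;
        Console.WriteLine($"OS reports total {osTotalGb:F2} GB, free {osFreeGb:F2} GB");

        // Flag the conflicting totals if they differ by more than the tolerance.
        double difference = Math.Abs(dialogTotalGb - iconTotalGb);
        if (difference > ToleranceGb)
            Console.WriteLine($"Totals differ by {difference:F2} GB - worth investigating.");
    }
}
```

The OS-reported figures give the developer a third, independent data point to compare the two conflicting displayed values against, which is exactly the kind of extra digging described above.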

We should be alert to conflicting information, although this can be difficult due to the potential subtlety of the difference. We also need to be aware of how severe that difference could be, so, like most things in testing, it is all about context.

If you have any thoughts on this please feel free to comment. 

