
Developer and Tester Walkthroughs


In this blog post I am going to talk through a new technique that I have started, where the developer and I (the tester) walk through a change that the developer has made for a particular feature.

So what are these walkthroughs? 

These walkthroughs are a time for the developer and tester to get together so that the developer can talk through and explain the code changes they have made. By talking through the code, I mean the actual code that the developer has written, not a demo of the new behaviour. During the session the tester is free to ask any questions, ranging from questions about the code itself to questions about the effect the changes have on existing behaviour. These sessions are time-boxed to 30 minutes and typically take place before the developer has raised a pull request to merge the changes into the master branch.

Why do we do them?

In short - to find issues before I get a release. But the two main reasons are based on a couple of common statements that I've heard multiple times:


"Testing is something that should happen as early in the development process as possible"


Now this statement is quite common and one which I agree is true. By doing these walkthroughs we are performing 'testing'* earlier in the SDLC, and any bugs that are found are cheaper to fix because we are fixing them before any code has been checked in.

* By testing I mean learning about the system, which does not necessarily limit it to just testing through the UI.

"Testing should happen at any stage of the SDLC"

Now this statement is not so common, but again it is one that I agree with. Testing is not just exercising the UI via Selenium or a REST API via a tool like Postman. It is also learning about the product, building a model of the product and coming to a conclusion about its quality. By having these reviews I can start to build a model of the product, which in turn can influence how I decide to test it when I get a release. Depending on the change, I also get an idea of its quality; this comes from experience and is usually only applicable to changes that are similar to ones we have made in the past.


How did these walkthroughs help?

These walkthroughs had a positive impact on the quality of the product as well as on the development and testing process.


1) Develops the tester's understanding of the code

By being exposed to the code, the tester can (over time) start to understand the syntax and what the code does. This is great for the tester in that, should they need to look at any code in the future for whatever reason, they will have a better understanding. For example, if they decide to investigate why a particular bug is occurring, they can look at the actual code where the bug is present and try to figure out why, which in turn can help the developer.
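
To make this concrete, below is a minimal, purely hypothetical sketch (in C#) of the kind of change a developer might talk through in one of these sessions, along with the sort of question a tester might ask. The PasswordValidator class and its rules are invented for illustration and are not from a real feature.

// Purely hypothetical example of a change a developer might walk through.
public static class PasswordValidator
{
    // Returns true when the supplied password meets the minimum rules.
    public static bool IsValid(string password)
    {
        if (string.IsNullOrEmpty(password))
        {
            return false;
        }

        // A tester seeing this line might ask: "Why 8? Is the minimum
        // length defined in the acceptance criteria, and should there
        // be a maximum length as well?"
        return password.Length >= 8;
    }
}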

2) Issues found early

After implementing these reviews multiple times, issues were being found early. Examples include:
  • The developer, when explaining a change, realised that what they had done was incorrect.
  • A particular change's acceptance criteria were not defined thoroughly enough, so the developer and I sought clarification.
  • I was reviewing the logic and highlighted a potential gap that required some investigation (see the sketch after this list).
So the payback from these reviews has been finding issues that would otherwise not have been found until I had received a release and started testing.
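
As an example of the third bullet, here is a hypothetical sketch of the sort of boundary gap these reviews can catch. The DiscountCalculator class, its rules and its thresholds are invented for this illustration and are not taken from the product I was testing.

// Hypothetical illustration of a logic gap a walkthrough can surface;
// the discount rules and thresholds are invented for this sketch.
public static class DiscountCalculator
{
    public static decimal ApplyDiscount(decimal orderTotal)
    {
        if (orderTotal > 100m)
        {
            return orderTotal * 0.90m; // 10% off large orders
        }
        if (orderTotal > 50m)
        {
            return orderTotal * 0.95m; // 5% off medium orders
        }

        // Question raised while reviewing the logic: what should happen
        // when orderTotal is exactly 100 or exactly 50? If the acceptance
        // criteria say "orders of 100 or more", the first check should
        // be >= rather than >.
        return orderTotal;
    }
}

A question like this is cheap to raise in a 30-minute session, and much cheaper than finding the same gap after the change has shipped in a release.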

3) Builds relationships

These walkthroughs were usually done remotely, which was due to the developer being located away from the office. Normally I might not speak to a developer unless there was an issue, but by doing these walkthroughs a more in-depth relationship was built between myself and the developers.

4) Helps developers understand testers

When a developer has a tester asking them questions, they can start to gain an understanding of how that particular tester thinks. This, in turn, will help the developer with future changes. For example, say a tester keeps asking questions about security when a user enters personal details. The next time the developer writes a similar feature, they will know what questions the tester will ask and can therefore look at security more thoroughly before the walkthrough.

Conclusion

Looking back at these reviews, they are definitely something that I am glad we implemented. The time saved on bug fixes later in the SDLC has paved the way for other types of testing to be planned in, for example API testing, performance testing and so on. As the sessions are time-boxed they don't take up too much time, but the payback is definitely worth it.




