Go the extra mile

Wherever I’ve worked, I’ve often been asked to test a new piece of functionality that had only vague requirements and came with little in the way of instruction. In addition to testing it, I also had to work out what the process was, either by asking people or by good old-fashioned trial and error.

I’d usually then document that process for myself, either as test scripts or as a cheat sheet to remind me of the next step when I came back from documenting a defect.

For quite a few of these, I would then find myself co-ordinating UAT: sitting with the user and walking them through the process to determine whether what they’ve got at the end matches up with what they wanted in the first place.

This user usually isn’t the only person who’ll be using the new functionality, and they’ll end up going back and walking their team through it all.

What struck me after going through this a couple of times was that I already had the process documented, and that it would be of real benefit as a training aid. In the form of a test script or cheat sheet it’s not usually especially user-friendly, but spend just an extra 20 minutes tidying up the formatting and adding a few screenshots, and you’ve suddenly got an attractive one- or two-sheet training document that can be handed to the user along with the new code.

Most of the work is being done anyway, so it’s not much extra effort, and I’ve found that users really appreciate it. I’ve written before about adding value and contributing to the overall solution; this is just another example of how a tester can bring more to the table than might be immediately apparent.

What Could Possibly Go Wrong?

Oh, how many times have I seen this mistake made? Probably about as many times as I’ve made it myself!
It’s all too easy to do: the work is ‘just an upgrade’, a ‘straightforward change’, or even a ‘simple addition’. The testing is therefore assumed to be minimal and to take very little time, as everything is bound to pass.

Of course, what actually happens is the same as on most IT projects: the scope was underestimated and the finished product has more bugs than we thought it would. What was scheduled to take a day suddenly lasts a week, the developers needed to fix the bugs have already had their time assigned to other projects, and deadlines are either fast approaching or have already been and gone.

There’s no real remedy to this that I know of, outside of pessimism from the outset. That pessimism leads to increased estimates and contingency, although there’s no guarantee that the project will agree to either! Assume that the application you’re going to get will fall over as soon as you so much as glance at it. Assume that everything will collapse under even the most basic of tests; that way you can only ever be pleasantly surprised.

Beautiful Scripts

Most testers I’ve seen just write test scripts without ever going near the formatting options. This is such a waste!
Consider the two examples below:

1. Login to application
2. Access Subscriptions form
3. Add a subscription with Name="Test Subscription AD01", Start date="01/01/11", End date="31/12/15"
4. Save record
5. Close the form

1. Login to application

2. Access Subscriptions form

3. Add a subscription with:
Name = "Test Subscription AD01",
Start date = "01/01/11",
End date = "31/12/15"

4. Save record

5. Close the form

Which of the two is easier to read? Imagine that simple test scaled up to 30 or 40 steps, and how much easier it would be to interpret with formatting.

Even Excel allows different formatting within the same cell, so there’s no excuse not to do it.
Choose your own standard, but I tend to make the names of forms, windows, columns, fields, etc. bold. Key actions (e.g. Login) are also bolded, and the actual data is marked bold and red.
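
If your scripts live in Excel and you generate them with a tool rather than by hand, the same mixed formatting can even be applied programmatically. Here’s a minimal sketch in Python, assuming openpyxl 3.1 or later (which added in-cell rich-text support); the cell contents are just illustrative:

from openpyxl import Workbook
from openpyxl.cell.rich_text import CellRichText, TextBlock
from openpyxl.cell.text import InlineFont

bold = InlineFont(b=True)                      # form/field names, key actions
bold_red = InlineFont(b=True, color="FF0000")  # the actual test data

wb = Workbook()
ws = wb.active
# One step of the script, with mixed formatting inside a single cell.
ws["A1"] = CellRichText(
    "3. Add a subscription with ",
    TextBlock(bold, "Name"),
    " = ",
    TextBlock(bold_red, '"Test Subscription AD01"'),
)
wb.save("scripts.xlsx")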

Of course, I’m not advocating spending hours needlessly turning your scripts into works of art, but adding some basic formatting improves readability and doubles as a review pass: a chance to step back for a second and look at what you’ve just written.

Why Write Scripts?

How detailed should test scripts be?

At the start of any project you normally think at a high level about the approach you’re going to take, and possibly present this in a plan. At some point you get that down to a finer level of detail, thinking about individual scenarios or processes to look at. Beyond that you have permutations, and beyond that you have your test steps (and, in most cases, expected results).

Let’s take an example: a new form has been created that holds subscriptions and their associated from/to dates.

Our plan would be along the lines of creating new subscriptions, then editing and deleting existing ones. We would also intend to look at date range validity.

We’d then look at scenarios:

  • Add record
  • Modify record
  • Delete record
  • Check for valid dates
  • Check for start date before end date
  • Check for start date same as end date
  • Check for invalid date types/strings/symbols, etc.
  • Do each of the above checks for new records and then again for edits

The list goes on… (a sketch of the sort of rules these checks probe follows below).
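
To make those date checks concrete, here is a hypothetical sketch, in Python, of the kind of validation rules the scenarios are probing. The function and its rules are illustrative assumptions, not the real application’s logic:

from datetime import datetime

def validate_subscription(start: str, end: str) -> str:
    """Illustrative subscription-form date rules (hypothetical)."""
    try:
        # Rejects non-dates, bad strings, symbols, etc.
        start_d = datetime.strptime(start, "%d/%m/%y")
        end_d = datetime.strptime(end, "%d/%m/%y")
    except ValueError:
        return "Error: invalid date"
    if end_d < start_d:
        return "Error: end date is before start date"
    # Assumes a start date equal to the end date is allowed.
    return "OK"

print(validate_subscription("01/01/11", "31/12/15"))  # OK
print(validate_subscription("31/12/15", "01/01/11"))  # end before start
print(validate_subscription("31/02/11", "01/01/12"))  # invalid date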

You can then look at permutations, which are essentially the above, but with actual data filled in, along the lines of the sketch below.
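
A cheap way to enumerate those permutations is to cross a handful of sample values for each field. A minimal Python sketch; the sample values are illustrative assumptions, not real project data:

from itertools import product

# Illustrative sample values for the subscription date fields.
start_dates = ["01/01/11", "31/12/15", "31/02/11", "not-a-date"]
end_dates = ["31/12/15", "01/01/11"]

# Every pairing becomes one permutation of the date-validity scenarios.
for start, end in product(start_dates, end_dates):
    print(f'Add subscription: Start date="{start}", End date="{end}"')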

You then, finally, get down to the test steps:

1. Login to application

2. Access Subscriptions form

3. Add a subscription with:

Name = "Test Subscription AD01", Start date = "01/01/11", End date = "31/12/15"

4. Save record

5. Close the form

Now, the above steps would actually hold a lot more detail, such as how we access that subscription form: is it reached directly, or is it attached to a customer? If it’s reached directly, then how do we associate the subscription with a customer? All that detail, though, as well as steps 1, 2 and 4, remains the same in almost every scenario we’ve posited.

I’ve seen some arguments suggesting that we don’t need to go to that level of detail: a competent tester can work from the scenarios or, at most, the permutations.

I’ve seen other arguments that insist that full steps must be written out for each test. You’ve no guarantee who will be running that test, and it’s your job to write it such that the proverbial ‘man on the street’ can run it.

I’ve actually been told this one so often that I started to believe it, despite the fact that it cheapens our role. The biggest problem with this approach is that it is tantamount to writing an automated test script: it has most of the disadvantages that come with that while offering few of the advantages, since the script is still run by a human.

I take a different approach.

I believe that full steps for the process are necessary. Our example is a completely new form: what if the link has been put in the wrong place, or the link, once found, actually opens the wrong form? What if saving the record or closing the form gives an error? Essentially, by eliminating steps you are introducing assumptions, and none of those assumptions is guaranteed to be true.

So, by writing the full steps we are forced to discard all assumptions. We’ve got to state up front where the link for the form will be, and in the absence of a working application we’ll have to rely on documentation, so we’re now effectively testing the documentation for completeness as we write our scripts.

That being said, with all the scenarios given there’s a huge amount of repetition, and although computers make it easy to copy and paste, it’s wasteful to duplicate all the steps for each one.

I believe that we should start out being as complete as possible, and then for subsequent tests we can gradually drop the detail.

Using the example above, we keep what we’ve got for test one, but test two becomes:

1. Add a new subscription:

Name = “Test subscription AD02”

Start Date = “31/12/2015”

End Date = “01/01/2011”

2. Save the form – note that an error is given, indicating that the end date is before the start date.

This way, we’ve struck a balance between completeness and efficiency. We’ve made no assumptions as our process is fully documented, but we haven’t endlessly and needlessly repeated ourselves.

Most importantly, it allows us to add further ideas for tests quickly and easily. No amount of documentation is a substitute for getting your hands on the application, and once you’ve got some experience of actually using the application you’re testing, you’ll come up with more ideas inspired by how the UI works, what sort of responses you’re getting, defects you’ve raised, etc.

Not all of this exploratory testing will be documented, but it’s far less likely to be documented at all if every instance of a test has to have a full wrapper of basic starting steps.