How detailed should test scripts be?
At the start of any project you would normally think at a high level about the approach you were going to take, and possibly present this in a plan. At some point you’ll refine that to a finer level of detail, thinking about individual scenarios or processes to look at. Beyond that come the permutations, and beyond those your test steps (in most cases with expected results).
Let’s take an example – a new form has been created that holds subscriptions and the associated from/to dates.
Our plan would be along the lines of creating new subscriptions, then editing and deleting existing ones. We would also intend to look at date range validity.
We’d then look at scenarios:
- Add record
- Modify record
- Delete record
- Check for valid dates
- Check for start date before end date
- Check for start date same as end date
- Check for invalid date types/strings/symbols etc.
- Do each of the above checks for new records and then edits
The list goes on…
You can then look at permutations, which are essentially the above, but with actual data in.
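Those permutations can be pictured as a data-driven check: the same scenarios, with concrete values plugged in. Here’s a minimal sketch – the `validate_subscription` function, its error message and the sample dates are all hypothetical stand-ins for the form’s real behaviour, not taken from any actual application:

```python
from datetime import date

# Hypothetical validator standing in for the form's date-range check;
# the name and error message are illustrative only.
def validate_subscription(start, end):
    """Return an error message, or None if the range is valid."""
    if end < start:
        return "End date is before start date"
    return None

# Permutations: the scenario list above with actual data filled in.
cases = [
    (date(2011, 1, 1), date(2015, 12, 31), None),   # valid range
    (date(2011, 1, 1), date(2011, 1, 1), None),     # start date same as end date
    (date(2015, 12, 31), date(2011, 1, 1),
     "End date is before start date"),              # start date after end date
]

for start, end, expected in cases:
    assert validate_subscription(start, end) == expected
```

Each tuple is one permutation; adding another is a one-line change, which is exactly why permutations sit between scenarios and fully written-out steps.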
Then, finally, you get down to the test steps:
1. Login to application
2. Access Subscriptions form
3. Add a subscription with:
Name = “Test Subscription AD01”, Start date = “01/01/2011”, End date = “31/12/2015”
4. Save record
5. Close the form
Now, the above steps would actually hold a lot more detail, such as how we access that subscription form – is it reached directly, or is it attached to a customer? If it’s reached directly, then how do we associate the subscription with a customer? All that detail, though, as well as steps 1, 2 and 4, remains the same in almost every scenario we’ve posited.
I’ve seen some arguments that suggest that we don’t need to go to that level of detail – a competent tester can work from scenarios or at most, the permutations.
I’ve seen other arguments that insist that full steps must be written out for each test. You’ve no guarantee who will be running that test, and it’s your job to write it such that the proverbial ‘man on the street’ can run it.
I’ve actually been told this one so often that I started to believe it, despite the fact that it cheapens our role. The biggest problem with this approach is that it is tantamount to writing an automated test script, and has most of the disadvantages that come with that, while having few of the advantages, as the script is still to be run by a human.
I take a different approach.
I believe that full steps for the process are necessary. Our example is a completely new form – what if the link has been put in the wrong place, or if the link, once found, opens the wrong form? What if saving the record or closing the form gives an error? Essentially, by eliminating steps you are introducing assumptions – and none of those assumptions are guaranteed to be true.
So, by writing the full steps we are forced to discard all assumptions. We’ve got to state upfront where the link for the form will be, and in the absence of a working application, we’ll have to rely on documentation – so we’re now effectively testing the documentation for completeness as we write our scripts.
That being said, with all the scenarios given there’s a huge amount of repetition, and although computers make it easy to copy and paste, it’s wasteful to duplicate all the steps for each one.
I believe that we should start out being as complete as possible, and then for subsequent tests we can gradually drop the detail.
Using the example above, we keep what we’ve got for test one, but test two becomes:
1. Add a new subscription:
Name = “Test subscription AD02”
Start Date = “31/12/2015”
End Date = “01/01/2011”
2. Save the form – note that an error is given, indicating that the end date is before the start date.
This way, we’ve struck a balance between completeness and efficiency. We’ve made no assumptions as our process is fully documented, but we haven’t endlessly and needlessly repeated ourselves.
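In automated-test terms, this trade-off is like factoring the repeated preamble (log in, open the form) into shared helpers, so that each subsequent test states only what is unique to it. A minimal sketch – every function name, message and data structure below is hypothetical, invented for illustration:

```python
# Hypothetical helpers standing in for the repeated steps 1 and 2.
def login():
    """Stand-in for 'Login to application'."""
    return {"logged_in": True}

def open_subscriptions_form(session):
    """Stand-in for 'Access Subscriptions form'."""
    session["form"] = "Subscriptions"
    return session

def add_subscription(session, name, start, end):
    """Stand-in for filling in and saving the form (ISO dates compare lexically)."""
    if end < start:
        return "Error: end date is before start date"
    session.setdefault("subscriptions", []).append((name, start, end))
    return "Saved"

# Test one spells out the full process; test two reuses the preamble
# and documents only what differs: the data and the expected error.
session = open_subscriptions_form(login())
assert add_subscription(session, "Test Subscription AD01",
                        "2011-01-01", "2015-12-31") == "Saved"
assert add_subscription(session, "Test subscription AD02",
                        "2015-12-31", "2011-01-01").startswith("Error")
```

The helpers are written once and verified by the first, fully detailed test; everything after that leans on them rather than restating them.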
Most importantly, it allows us to add further ideas for tests quickly and easily. No amount of documentation is a substitute for getting your hands on the application, and once you’ve got some experience of actually using the application you’re testing, you’ll come up with more ideas inspired by how the UI works, what sort of responses you’re getting, defects you’ve raised, etc.
Not all of this exploratory testing will be documented, but it’s far less likely to be documented at all if every instance of a test has to have a full wrapper of basic starting steps.