Agile Testing with DSDM Atern

The objective of each timebox is to deliver an incremental subset of the solution.
Deliver. I will repeat that – deliver! To your customer.
That means the person (or people) who raised the requirements in the first place should be able to use, at least in a test environment, the features created during that timebox: obviously all of the Musts, and hopefully all of the Shoulds. In this context, anything that is partially completed or untested is effectively useless. Each timebox must therefore include the testing needed to ensure that the features coded actually work.
Testers are traditionally trained to plan, write and execute a series of tests against a software application or product in order to find any defects. In an agile team, that job changes hugely. Finding defects is no longer the goal. Instead, the software tester’s primary goal is to deliver working software.
Wait a minute. That sounds suspiciously like the developer’s job. Or the Project Manager’s for that matter.
You’re absolutely right. You see, in an agile team, we all have the same objective – delivering working software to the customer as soon and as frequently as possible. Time spent finding and fixing defects is waste, and we want to waste as little time as possible. The agile tester’s role is as much about preventing defects from arising in the first place as it is about finding and managing them afterwards.
To illustrate how the tester’s role has changed, I will compare a fictitious traditional tester (let’s call him Dave) operating in an agile team with an agile tester (Susie) operating in the same team, through the different project stages.

During requirements gathering workshops, Dave would be sitting quietly listening to the Business Ambassador and Advisors listing their requirements, and to the development team asking questions to ensure they understood each one sufficiently to start development. He might be making notes that would later help him remember what test cases to write.

Susie would be checking that each requirement, as written (as a user story), was testable, by asking the business how they intended to ‘sign off’ each requirement: how would they know it was done? These acceptance criteria would then be written on the back of each user story card. When it came to estimating the work, she would ensure that each estimate included the testing effort, so that each story could be delivered fully tested within the timebox.
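
For illustration, the back of a card might read something like this (the story and its criteria are invented for this post, not taken from a real project):

    Story: As a customer, I want a discount on large orders,
           so that I am rewarded for buying more.

    Acceptance criteria (how the business will sign it off):
    - Orders totalling 100 or more are discounted by 10%.
    - Orders under 100 are not discounted.
    - The discount is visible on the order summary before payment.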

Dave writes up his test cases and scripts them while the developers are coding. After all, he cannot test until the development is complete.

Susie, meanwhile, sits with whoever raised the requirements – and possibly the business analyst – to turn those acceptance criteria into Scenarios.
Doing this builds a clear, shared understanding of the desired behaviour of the feature under discussion. She then adds these scenarios into the test tool as executable tests.
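
Here is a minimal sketch of what one such scenario might look like as an executable test, assuming the team’s test tool is Python with pytest. The Cart class is a toy included only so the example runs, and the 10% discount rule is the invented story from earlier; in a real team the test would drive the actual system.

    # One acceptance scenario expressed as an executable Given/When/Then test.
    # Everything here (Cart, the 10% rule) is invented for illustration.

    class Cart:
        """Toy stand-in for the real system under test."""

        def __init__(self):
            self._prices = []

        def add(self, price):
            self._prices.append(price)

        def total(self):
            subtotal = sum(self._prices)
            # Business rule from the story card: 10% off orders of 100 or more.
            return subtotal * 0.9 if subtotal >= 100 else subtotal


    def test_orders_of_100_or_more_get_ten_percent_off():
        # Given a cart holding 100 worth of goods
        cart = Cart()
        cart.add(60)
        cart.add(40)
        # When the customer views the total
        total = cart.total()
        # Then 10% has been deducted
        assert total == 90

Note that this test lives in the test tool from day one and fails until the feature is built: the scenario is the specification.
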
She will also sit with the developers, explaining how she will test each story and helping them write their unit and integration tests, to ensure that each change accurately reflects the business need.
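
The unit tests she helps the developers write are narrower in scope than the acceptance scenario above, pinning down one rule at a time. A sketch, reusing the invented discount rule:

    def discount(subtotal):
        """Toy function implementing the invented 10%-off rule."""
        return subtotal * 0.9 if subtotal >= 100 else subtotal


    def test_no_discount_just_below_the_threshold():
        assert discount(99) == 99


    def test_discount_applies_exactly_at_the_threshold():
        # The boundary comes straight from the acceptance criteria,
        # which is exactly the business knowledge Susie brings to the pairing.
        assert discount(100) == 90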

Dave waits until the developers deploy a set of features into his controlled testing environment. He then starts running through his test cases. As he tests, he finds defects, and after three weeks’ worth of development there are quite a few of them. He raises those defects and assigns them to be resolved. He now waits until the fixes are deployed so that he can re-test. In the meantime, his test cases are marked as Failed.

Susie starts badgering the developers on a daily basis for something to test. She wants the smallest possible testable piece of functionality at a time, because she knows that the smaller the item deployed, the fewer defects it is likely to contain, and the quicker they will be to find and fix. So when code is deployed into the integration environment every day, she immediately runs the tests for the scenarios written for the stories just deployed. As she finds defects, she talks to the developers about them so that they can be resolved immediately.
The tests that she runs are integrated into a regression test suite, which is run as often as needed, ideally every time code is checked in.
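
As a sketch of how that might be wired up (pytest again; the “regression” marker name is my own convention, not something DSDM prescribes), each delivered story’s tests are tagged and then selected on every check-in:

    # pytest.ini registers the marker so pytest does not warn about it:
    #   [pytest]
    #   markers = regression: tests run on every check-in
    import pytest


    def discount(subtotal):
        """Toy rule from the earlier examples, repeated so this file runs alone."""
        return subtotal * 0.9 if subtotal >= 100 else subtotal


    @pytest.mark.regression
    def test_existing_discount_still_applies():
        # A test for a previously delivered story, kept in the suite so a
        # new check-in cannot silently break old behaviour.
        assert discount(100) == 90

The check-in hook or CI job then simply runs “pytest -m regression”; every commit is the ideal, as above.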

As you can see, the big difference between Dave and Susie is that Susie is by far the more proactive of the two. While Dave concentrates on defect management and resolution, Susie’s focus is on defect prevention: a subtle but important difference.
The agile tester needs to think about things in a different way. If you are a tester, ask yourself:
• Do we all understand the business need that this requirement represents?
• What’s the smallest part of that feature that I can test?
• How soon can I test it?
• How can I help ensure that it works the first time it’s delivered?


About aterny

Agile enthusiast and evangelist, DSDM practitioner, trainer and coach. Specialist in Agile project and programme management, governance and organisational transformation.

One Response to Agile Testing with DSDM Atern

  1. Chris says:

    This makes the difference in approach and ‘mindset’ in agile/DSDM clear. I am still seeking to understand how we then manage effective testing of the increasingly complex aspects of ‘integration’ in our service-oriented world…
