SPA Conference session: TDD and Non-Functionals
|One-line description:||How does TDD affect maintainability?|
|Session format:||Workshop (75 mins)|
|Abstract:||An informal experiment to see if TDD code and good code written in other ways have different non-functional characteristics, specifically "maintainability"|
Most code in industry is re-written much more than it is written. Most code in industry is read much more than it is written or re-written. The bulk of the TCO of most industrial software comes after initial development: in support, defect fixing, and enhancement. Studies of TDD tend to focus on how quickly new features can be added, or how few defects are found. That's fine, but it may slightly miss the point with regard to how useful TDD is as a long-term industrial practice.
In this session we will try to simulate the sort of work that mainstream developers spend most of their time doing, and see if there are any clues that TDD makes a difference, and, if so, what.
|Benefits of participating:||Gain new insights into the effect of TDD and other development practices on a relatively little-discussed area of software development practice: maintaining other people's code.|
|Materials provided:||Repo prepopulated with acceptance tests|
|Process:||Attendees will be provided with access to a build server (for Java) watching a repo (hg or similar) with a bunch of branches set up already with a substantial suite of acceptance tests.|
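The proposal doesn't specify what the prepopulated acceptance tests look like. As a purely illustrative sketch (the `totalPenceFor` API and its pricing rules are invented here, not part of the actual session materials), each branch might contain black-box checks of roughly this shape, with only the implementation left for pairs to write:

```java
// Illustrative only: a black-box acceptance check of the kind that might be
// prepopulated in each branch. The pricing API below is a hypothetical
// example, not the real session material.
public class AcceptanceTestSketch {

    // Stub implementation so this sketch compiles and runs; in the session,
    // pairs would write this part themselves (TDD or otherwise).
    static int totalPenceFor(int units, int pencePerUnit) {
        int total = units * pencePerUnit;
        // hypothetical business rule: 10% discount on orders of 100+ units
        return units >= 100 ? total - total / 10 : total;
    }

    public static void main(String[] args) {
        // Acceptance criteria expressed as executable checks
        check(totalPenceFor(1, 50) == 50, "single unit, no discount");
        check(totalPenceFor(100, 50) == 4500, "bulk order gets 10% off");
        System.out.println("all acceptance checks passed");
    }

    static void check(boolean condition, String description) {
        if (!condition) throw new AssertionError("FAILED: " + description);
    }
}
```

In the session itself the suite would live in the repo and run on the build server against each branch, so progress is simply the count of passing acceptance tests.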
Attendees will work in pairs and will be allocated to Team A or Team B; each pair gets a branch. Team A must use TDD and, as a simple check, the build server will fail builds with less than 100% line and branch coverage from unit tests. Team B must use any technique except TDD. SPA has enough regular attendees who are, shall we say, rich in experience and will be able to remember how code was written in the good old days. It might be worth having Team B builds fail if any unit tests are found.
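The proposal doesn't say how the build server would enforce the coverage gate. One way it could be expressed, assuming (this is not stated in the proposal) a Maven build with the JaCoCo plugin, is a `check` rule that fails the build below 100% line and branch coverage:

```xml
<!-- Hypothetical coverage gate for Team A branches: fail the build unless
     unit tests achieve 100% line and branch coverage. Assumes Maven + JaCoCo,
     neither of which is specified in the proposal. -->
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.11</version>
  <executions>
    <execution>
      <goals><goal>prepare-agent</goal></goals>
    </execution>
    <execution>
      <id>coverage-gate</id>
      <goals><goal>check</goal></goals>
      <configuration>
        <rules>
          <rule>
            <element>BUNDLE</element>
            <limits>
              <limit>
                <counter>LINE</counter>
                <value>COVEREDRATIO</value>
                <minimum>1.00</minimum>
              </limit>
              <limit>
                <counter>BRANCH</counter>
                <value>COVEREDRATIO</value>
                <minimum>1.00</minimum>
              </limit>
            </limits>
          </rule>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Team B branches would simply omit the rule (or, per the suggestion above, invert it and fail if any unit tests are present).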
Pairs will work to write an implementation that makes the supplied acceptance tests pass. At the first checkpoint (just after the tea break) we will see whether the TDD or the non-TDD teams are going faster at making tests pass, but this is of merely passing interest.
Then after tea the branches will be swapped between teams: Team A pairs will get Team B branches (with no unit tests in them). Pairs will then be reallocated to Teams 1 and 2. Pairs in Team 1 will now proceed in a TDD fashion, and pairs in Team 2 will proceed in a non-TDD fashion (no further unit tests to be added).
A new suite of acceptance tests will be presented and pairs will set off to implement them on someone else's code, which might or might not have been developed using TDD.
At the second checkpoint we will then have four kinds of branch:
1) TDD code maintained using TDD
2) TDD code maintained using other techniques
3) non-TDD code maintained using TDD
4) non-TDD code maintained using non-TDD techniques
The interesting point is then to see who made the most progress in the second "maintenance" phase.
All very unscientific (I doubt the sample size will be big enough to draw any firm conclusions) but I expect that the results will be interesting.
|Detailed timetable:||from to duration activity|
00:00 00:05 5 intro
00:05 00:20 15 sorting out the mechanics of connecting to servers etc.
00:20 00:30 10 explain the roles of Team A (TDD) and Team B (not TDD)
00:30 01:10 40 code phase 1: get acceptance tests passing
01:10 01:25 15 BREAK
01:25 01:35 10 Review stats from build server: is TDD or not TDD going faster?
01:35 01:40 5 swap branches, pull in new acceptance tests
01:40 02:20 40 code phase 2: get more tests passing on someone else's code…
02:20 02:25 5 review stats from build server: which combination is going faster now?
02:25 02:40 15 wrap up: what happened there? Lessons learned for the day job.
|Outputs:||Public access to and commentary on the repo where attendees checked in their code.|
1. Keith Braithwaite