SPA Conference session: Continuous source code analysis to help steer the super tanker and/or hit the iceberg

One-line description: A case study of the pros and cons drawn from over three years' experience of continuous source code analysis, followed by an interactive session using a real tool on real source code.
 
Session format: Case study (75 mins)
 
Abstract: Continuous measurement of an evolving source code base can give developers and architects incredible insight into real software quality as it changes. Such tooling can provide the feedback a development team needs to respond to the real issues that require attention. It can allow those tasked with quality and technical direction (typically the technical architect) to apply a lighter touch, enabling the team to become self-governing. The best source analysis tools have powerful visualisation capabilities, allowing clients, sponsors and managers to explore and investigate their codebase. They can begin to understand the underlying trends in development quality and the consequences of their management decisions on that code base. These visualisations allow architects and developers to assimilate large swathes of code and then drill down to specific areas in seconds, rather than spending hours wading through generated data and code reviews. Suddenly it feels possible to steer the super tanker of software development.

Continuous measurement of an evolving source code base can also supply developers and architects with a surprising amount of misinformation. The same tooling can supply metrics which developers and managers alike can abuse to prove or disprove pet theories. The act of looking at a metric often results in conscious or subconscious gaming of that metric for short-term gain, with no real benefit other than the temporarily improved remuneration or status of individuals. The best source analysis tools have powerful visualisation capabilities: well-meaning, intelligent individuals will become incredibly excited by the presentation of an attractive infographic whilst having zero understanding of what it is they are seeing. Architects and developers become obsessed by a gradient on a graph or the exact hue of a pulsating ball and forget about working software. Suddenly the development team hits the iceberg of software entropy.

This session incorporates a case study covering both of the above perspectives across several projects in diverse organisations. Positives and negatives will be highlighted using anonymised but empirical data capturing scenarios such as:

* Identifying areas of concern by using metrics to spot a 'smell', then drilling down using a variety of techniques.
* Demonstrating to managers and architects that a major refactoring exercise was meeting its goals.
* Using complexity and source control metrics to identify areas where tests would be most valuable in a test retrofitting exercise (a sketch of this ranking idea follows the list).
* Demonstrating how linking a single metric to remuneration can result in completely the opposite outcome to that intended.
* Showing how a very large code base can be monitored, and how effort was then focused at a component level when things started to unravel.
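
To make the test retrofitting scenario above concrete, here is a minimal, hypothetical sketch of the ranking idea: combine a complexity metric with source-control churn so that complex, frequently changed classes surface first. All class names and figures are invented for illustration; real values would come from the analysis tool and the source control history.

    import java.util.Comparator;
    import java.util.List;

    // Hypothetical sketch: rank classes for test retrofitting by combining
    // a complexity metric with source-control churn. All names and numbers
    // below are invented for illustration.
    public class TestRetrofitRanking {

        record Candidate(String className, int complexity, int commitsLastYear) {
            // Hotspot score: complex code that also changes often is the
            // riskiest place to be without tests.
            int score() {
                return complexity * commitsLastYear;
            }
        }

        public static void main(String[] args) {
            List<Candidate> candidates = List.of(
                    new Candidate("OrderPricingEngine", 42, 31),
                    new Candidate("CustomerValidator", 18, 5),
                    new Candidate("LegacyReportWriter", 55, 2));

            // Highest score first: the best candidates for new tests.
            candidates.stream()
                    .sorted(Comparator.comparingInt(Candidate::score).reversed())
                    .forEach(c -> System.out.printf("%-20s score=%d%n",
                            c.className(), c.score()));
        }
    }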

The session will then become more interactive as we use Sonar, an incredibly attractive open source code quality tool, to explore a selection of open source projects and illustrate the above points. This part of the session will briefly cover how to retrospectively apply Sonar analysis to an existing code base.
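
For anybody wanting to repeat that retrospective step afterwards, it is a small one. As a rough sketch, assuming a Maven-built project and a Sonar server already running locally (the host URL below is an assumption for such a local setup), a single additional goal triggers the analysis:

    # From the root of the Maven project; results appear on the Sonar dashboard.
    mvn clean install
    mvn sonar:sonar -Dsonar.host.url=http://localhost:9000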

The session will incorporate a brief definition of each of the key metrics applied.
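
As a flavour of those definitions, cyclomatic complexity features heavily in the case study: it counts the linearly independent paths through a method, starting at one for the method entry and adding one for each branch point. The small, invented Java example below scores four:

    // Illustrative only: cyclomatic complexity starts at one for the method
    // entry and adds one for each branch point (if, loop, case, &&, ||, ?:).
    // This method therefore scores 4: entry + two ifs + one loop.
    public class ComplexityExample {

        static int settleInvoice(int amount, boolean overdue, int[] payments) {
            int balance = amount;          // entry path: +1
            if (overdue) {                 // branch: +1
                balance += amount / 10;    // 10% late fee
            }
            for (int payment : payments) { // loop condition: +1
                balance -= payment;
            }
            if (balance < 0) {             // branch: +1
                balance = 0;               // never refund past zero
            }
            return balance;
        }

        public static void main(String[] args) {
            // 100 + 10 late fee - 90 in payments leaves a balance of 20.
            System.out.println(settleInvoice(100, true, new int[] {50, 40}));
        }
    }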
 
Audience background: The examples will be in Java but the majority of the session will be language agnostic. An understanding of O-O languages and concepts is essential.
An understanding of test driven development would be useful. Because each metric is briefly introduced, prior experience with software metrics is not required, though it would be helpful.

Technical architects, developers and indeed anybody with responsibility for software delivery who has to understand what happens inside the black box will find this session useful.
 
Benefits of participating:
* Real-world examples of the positives and negatives of applying continuous static source code analysis, enabling participants to make an informed decision on how best to use these tools in their own development communities.
* An overview of some of the more common and useful software metrics.
* The opportunity to use the Sonar analysis tool in anger, and thereby an understanding of what is required to use it on their own projects.
 
Materials provided: Slide pack containing graphs and anonymised empirical data to be discussed.
A server running Sonar and a code base to be analysed.
 
Process: Part one: A presentation of the case study.
Part two: An interactive section. This will be partly discussion based, but to take full advantage participants should have a laptop with internet access (for access to the Sonar server). Alternatively, I could provide a wifi hotspot and run Sonar and the supporting tools on my laptop if internet access is an issue.
The group will be invited to use the tools to explore a sufficiently large open source code base and to draw some positive and negative conclusions based on the metrics. As a group we will collate these ideas, then drill into the source code to ascertain whether each conclusion is valid. Where a conclusion is negative we will discuss how it could be corrected and how that correction could be measured. We will take particular care to explore how the metric could then be 'gamed' to the detriment of the project and how best to mitigate this. Where a conclusion is positive we will discuss whether that behaviour could be replicated across the code base, and again how the problem of gaming the metric could be addressed.
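
To ground that gaming discussion, here is a small, hypothetical JUnit example of the kind of test that inflates a coverage metric without adding any safety: it executes the production code, so line coverage rises, but it asserts nothing, so it can never fail.

    import org.junit.Test;

    // Hypothetical illustration of 'gaming' a coverage metric: this test
    // executes the production code (so line coverage rises) but asserts
    // nothing, so it can never fail and verifies no behaviour at all.
    public class GamedCoverageTest {

        static class PriceCalculator {
            double total(int quantity, double unitPrice) {
                return quantity * unitPrice;
            }
        }

        @Test
        public void looksLikeATestButProvesNothing() {
            new PriceCalculator().total(3, 19.99); // result silently discarded
        }
    }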
 
Detailed timetable: 00:00 - 00:05 Introduction and synopsis.
00:05 - 00:45 Run through the case study with questions and discussion.
00:45 - 01:15 Interactively explore a code base with Sonar and discuss corrective actions.
 
Outputs:
- Slides, including the data discussed, will be made available.
- Conclusions from the second part of the session, covering the issues identified and mitigations to abuse of the metrics used, will be published.
 
History: Part of this session was run during an internal professional services day at Valtech, where it was positively received; at least one participant went on to use the tool on a new client project, where it was later adopted for use across the enterprise. The case study section is likely to be run at Agile Edge 2011 this spring. The interactive section will be run again at a professional services day at Valtech.
 
Presenters
1. Andrew Rendell
Valtech