Tuesday, June 1, 2010

Introduction:

Each of the questions below can be answered very quickly with "yes" or "no". If you answered "no" to just one question, the quality in your environment is merely acceptable; more than one "no" and you almost certainly have serious quality issues. Here goes:

1. Do you use a source / version control system?
2. Do you write a spec before coding any project?
3. Do you use a bug tracking system?
4. Do you fix all bugs before writing new functionality?
5. Do you have a project schedule that is kept up to date?
6. Do you let the developer in charge of a task estimate its duration?
7. Do you encourage / enforce code ownership?
8. Do you have a one-step build process?
9. Do you have automatic (at least) daily builds?
10. Do you use the best tools available for the type of development you are doing?
11. Do you have testers?

The Details:

I will quickly explain why I feel the questions above are a very good indication of how much a software development shop cares about quality:

1. Use of source control: I won't bother explaining – if the importance of such a system is not obvious, you shouldn’t have bothered reading this far.

2. Writing spec(s) before writing code: the prerequisite to writing code is a good understanding of what the code is supposed to do. A spec captures exactly that and communicates to both the developers and the customers of that software what will be delivered. Everything else derives from the spec: the design document(s) (which describe how the code will do what's listed in the spec), the test plan and the test cases that need to be prepared to verify that the code does indeed do what the spec calls for.

A spec also very quickly identifies contradictory requirements (in large systems such contradictions are quite frequent) and areas that are not clearly defined. Identifying those early is a lot less expensive than hitting them later, when some of the code is already in place and will probably require changes. In my experience, software that is not written based on a spec ends up being badly designed, does not fulfill the original demands and goes way off the original schedule.
A spec also helps substantially in conflict mitigation: nobody can claim that the software was supposed to do X when X is not in the spec, or that feature Y was never requested when it is. If it's in the spec, it must be in the software; if it's not in the spec, it won't be in the software.

3. Use of bug tracking: another obvious one. Even for small systems, nobody can keep track of bugs without such a system. Just like the source control system, a bug tracking system pays for itself from day one.

4. Fixing bugs before writing new code: attempting to reach a "feature complete" milestone before fixing the known bugs, so that you can move on to the next one, is a guaranteed way to convert all the features into bugs. Developers will do everything to complete the code so that they can meet an aggressive "feature complete" milestone, and will therefore either rush the code or even knowingly implement mere stubs that don't do what's expected, then wait for the bug report to come back, buying themselves time.

This relates to other questions in my list: first, let each developer estimate his own features; validate the estimate with others if you need to, but don't impose an estimate on the developer. Buffer time for fixing bugs in the respective feature must be included in the estimate, and don't consider the feature complete while there are open bugs against it. Fixing bugs early, while the code is still fresh in the developer's mind, is a lot less expensive than doing it later (and, related to my "code ownership" quality criterion: if somebody else is assigned to fix the bug AND it is done later, you have just maximized the cost of fixing that bug).

5. Having an up-to-date project schedule: unless costs are of no relevance to your business, you want to know where the project is and how much longer it has to go. Far too many activities must be planned around the project's timelines for you to afford not knowing what the costs are and when the project is expected to reach certain milestones. The only way I know of to stay on top of costs (and to be able to answer "where are you?" and "how much longer?") is to keep a schedule that is updated periodically. Another benefit of a schedule is that it forces everyone (including your customers) to prioritize features and possibly cut out those that turn out to be unnecessary.
Personally, I'm not keen on project management tools (MS Project and the like), mainly because of their complexity: I prefer a simpler approach; an Excel spreadsheet at each team level would do just fine. For larger projects, simply consolidate the spreadsheets from each team and you'll get the bigger picture, and so on, all the way up to the department or company level.
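
For illustration, here is a minimal sketch of that consolidation in Python, assuming each team keeps a spreadsheet with Task, Days and Done columns; the folder layout and column names are my own hypothetical choices, not a prescription:

    # Minimal sketch: roll up per-team schedule spreadsheets into one view.
    # Folder layout and column names (Task, Days, Done) are hypothetical.
    import glob
    import pandas as pd

    frames = []
    for path in glob.glob("schedules/*_team.xlsx"):  # one spreadsheet per team
        df = pd.read_excel(path)
        df["Team"] = path  # tag each row with its source spreadsheet
        frames.append(df)

    schedule = pd.concat(frames, ignore_index=True)

    # Answer "where are you?" and "how much longer?" at a glance.
    print(schedule.groupby("Team")["Days"].sum())
    remaining = schedule.loc[~schedule["Done"], "Days"].sum()
    print(f"Total days of work remaining: {remaining}")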

6. Have each developer estimate his tasks: I have seen many cases in which a deadline is forced down onto the development team for whatever business reasons. Timelines derived purely from business considerations have no relationship to the actual time it takes to build the requested product.
This is such a self-destructive practice for a software shop that I don't even know where to start. There are only two possible outcomes of this scenario. There's the "happy" one, in which the developers burn themselves out but manage to deliver on time, very likely a product of far lesser quality than what they could have done. The whole process goes through immense frustration and irritation, and you end up with a bunch of grumpy, burnt-out developers who won't be able to deliver on the next project: a productivity decline that takes quite a while to recover from. There's also the unhappy outcome, in which the developers not only burn themselves out and get frustrated, but also start leaving; the project misses deadline after deadline and finally unravels completely. You end up with no product and a decimated development team, with those developers still standing being burnt-out, grumpy, demoralized and looking for the next job offer. This scenario is very difficult to recover from.
Therefore, the organization should have a policy of building estimates from the bottom up: as tasks are assigned to developers, each one of them should provide an estimate. There are many variants of the estimation process (buffering when the individual estimates are consolidated at project level, buffering at the individual level, etc., depending on the confidence management has in the development team and the team's experience with the specific type of project), but the fundamental principle is that whoever will write the code should be the one estimating it. This practice creates a culture of trust and developer self-confidence, as well as responsibility and accountability, not to mention that the organization gets timelines that actually have a good chance of being met.
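
To make the principle concrete, here is a minimal sketch in Python of consolidating bottom-up estimates with a project-level buffer; the tasks, numbers and buffer factor are all hypothetical:

    # Minimal sketch of bottom-up estimation: each developer estimates his own
    # tasks, and a buffer is applied when the estimates are consolidated at
    # project level. Names, estimates and buffer factor are hypothetical.
    estimates_days = {
        "login screen (Alice)": 5,
        "report engine (Bob)": 12,
        "import module (Carol)": 8,
    }

    raw_total = sum(estimates_days.values())
    buffer_factor = 1.3  # project-level buffer; tune to the team's track record
    buffered_total = raw_total * buffer_factor

    print(f"Raw total: {raw_total} days; with buffer: {buffered_total:.0f} days")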

7. Enforcing code ownership: by code ownership I mean that the code is owned by the developer who wrote it. Any bug identified in that code will be fixed by the owner of that code. This may seem obvious to many readers, but I have seen many software shops that have a "bug fixing team". This approach substantially reduces the quality of the product and increases the cost of bug fixing (and also causes indirect costs to the business, such as projecting an image of being slow and sloppy at fixing bugs). No matter what you say, the best bug fixing job can only be done by the original developer of that code, and the quicker after the code has been written, the better (this relates to my "fixing bugs before writing new code" quality criterion above), which reduces both the direct and indirect costs to the business.

Having a "development team" and a "bug fixing team" encourages a culture of sloppiness: no matter how good and ethical a developer is, I can guarantee that under the pressure of making a milestone he will be less careful when he knows his bug goes to some poor schmuck on the "bug fixing team". It also produces a "class system" in which those on the "development team" are the venerated Brahmins while those on the "bug fixing team" are seen as (and feel like) second-class developers. If you currently structure your teams this way, I can guarantee that the bug count will drop the moment you switch to the rule "he who created the bug fixes it". It translates into responsibility and accountability. As a very beneficial side effect, it lets you easily get a feel for the quality level of each developer (though it's not a rigorous metric, as different pieces of code can differ substantially in complexity).
I use the source file as the atomic unit of ownership. This choice has a lot to do with the personal coding preferences of individual developers (no matter how much you standardize the coding style, there always is, and should be, unless you want the coding process to become a tyranny, a certain degree of "personalization" by each developer) and with the code organization within the file. So as a rule: "if I wrote file A, only I touch it; anything wrong with it, I take the responsibility or blame".
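
As an illustration of the rule, here is a minimal Python sketch that routes a bug straight to the owner of the file it was found in; the file names and developer names are hypothetical:

    # Minimal sketch of file-level code ownership: map each source file to the
    # developer who wrote it, so a bug filed against a file goes straight to
    # its owner. File names and developer names are hypothetical.
    OWNERS = {
        "src/billing.c": "alice",
        "src/parser.c": "bob",
        "src/report.c": "carol",
    }

    def assign_bug(file_path: str) -> str:
        """Return the developer responsible for fixing a bug in file_path."""
        try:
            return OWNERS[file_path]
        except KeyError:
            raise ValueError(f"{file_path} has no owner - add it to OWNERS")

    print(assign_bug("src/parser.c"))  # -> bob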

8. Making one-step builds: by this I mean a script / program that you start and that produces the whole product, packaged as the user will get it. For example, the build script / program gets the code labeled for the build from the source control system, compiles, links, does whatever other steps are needed, then packages everything into an installer (or several, if the deliverable comes in multiple "flavors"), so at the end of the line you have exactly what the client should get.

Having something like this early in the development cycle ensures that the build / deployment work is done and out of the way and can become part of the daily development process (see my next quality criterion) rather than being a last-minute thought, when you can run into problems you did not have time to think about. Developed early in the cycle, the build "script" will have to evolve with the system it builds (which requires some periodic updating of the script), but it may also uncover deployment issues that actually impact the design, and the earlier you detect the need for such design changes, the cheaper they are to make.
Furthermore, if your build process has multiple manual steps, the chances that errors will be made in those steps increase as the deadlines get closer. You need a reproducible, automated system that runs through ALL the steps to create the product "from scratch", to avoid the silly mistakes that are quite common (especially under pressure, and especially in a routine operation that a person has to follow step by step).
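
For illustration, here is a minimal sketch of such a one-step build as a Python script; every command, URL and label in it is hypothetical (I use Subversion-style and make-style commands as stand-ins for whatever your shop uses):

    # Minimal sketch of a one-step build: check out the labeled code, compile,
    # and package. All commands, URLs and labels below are hypothetical.
    import subprocess
    import sys

    def run(cmd):
        print(">", " ".join(cmd))
        subprocess.run(cmd, check=True)  # abort the build on the first failure

    def build(label: str):
        run(["svn", "checkout", f"http://repo.example.com/tags/{label}", "build"])
        run(["make", "-C", "build", "all"])        # compile and link
        run(["make", "-C", "build", "installer"])  # package the deliverable(s)

    if __name__ == "__main__":
        build(sys.argv[1])  # e.g. python build.py release-1.2

The point is not the specific commands but that a single invocation takes you from labeled source to the packaged deliverable with no manual steps in between.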

9. Use automatic daily (or more frequent) builds: this relates to the quality criterion above. Ideally, you'd want a continuous integration system, which builds your product every time someone checks code into the source control system. This has many benefits: it ensures that any new files a developer has added to the project are also in the source control system (a frequent omission by developers), but more importantly it ensures that if a check-in breaks the build, it is flagged immediately and fixed as soon as possible. Waiting for an "integration phase" at the very end of the project cycle (the "classic" approach) is substantially more expensive.
And you don't necessarily need to buy and implement a continuous integration tool. At the very least, if you do have a one-step build process, why not run it periodically, for example nightly? It comes close to continuous integration, with pretty much all of its benefits, and all you need to do is leverage your existing one-step build process (configured to work on the latest code in source control rather than on a specific label, as the "real" build does) and schedule it to run nightly. Ideally, you'd also add automated test runs (such as unit tests) to the script that does the builds.
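
Here is a minimal sketch of that nightly variant, reusing the hypothetical commands from the one-step build sketch above; the cron line in the comment is one way to schedule it:

    # Minimal sketch of a nightly build: reuse the one-step build on the
    # latest code and bolt on an automated test run. Commands are hypothetical.
    # Schedule the script itself with cron, e.g.:
    #   0 2 * * * python nightly_build.py
    import subprocess

    def run(cmd):
        subprocess.run(cmd, check=True)

    run(["svn", "update", "build"])      # latest code, not a release label
    run(["make", "-C", "build", "all"])  # the same one-step build
    run(["make", "-C", "build", "test"]) # automated unit tests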

10. Use the best development tools: this seems like another obvious one, but I've seen environments where buying a $100 tool per developer was a big issue. The majority of the costs in a software shop are usually the programmers' salaries, so make sure you use their time as efficiently as possible. If a tool costs $1000 and saves two days of a senior developer's effort, it has paid for itself; after that, it starts saving money (not to mention that your developers will be less frustrated and grumpy, and therefore more productive). But this brings me to the other side of buying tools: get the tools that the developers want and need, not the ones that management thinks they need; ideally, the use of particular tools should not be mandated. Let the developers debate and reach a consensus on which tool in a category they'd prefer, and get that one. And you don't have to buy the tool for all developers; get it for those who need it.
Back in 1997 I was working for a large telecom company on a project that involved 200 developers. One day management announced that we would start using Rational Rose, as they had just purchased licenses for all of us. So I went to the indicated location to pick up my copy, and found a room full of Rational Rose boxes, stacked neatly from floor to ceiling, all 200 of them. Guess what: I installed it (after all, we had to use it) but never actually used it, and six months later, when I passed by the same room, pretty much all of the boxes were still there. The moral of the story is that a tool should be brought in to support a process that is keenly embraced by the developers, not one that management wants to force onto them. In my experience, processes that management mandates are usually circumvented, ignored or followed in a manner that renders them worthless, in which case buying a tool to support such a process is just a waste of money (unless the tool completely automates the process so that the developers don't have to worry about it anymore, but that is not what I'm talking about here).

11. Use of dedicated testers: another obvious one, yet there are still software shops trying to do without dedicated testers. In such environments the end product is either of low quality, or the costs are higher because you have highly paid developers doing the job of testers (who usually cost 50-75% of what a developer costs). Either way, the organization incurs direct and indirect costs that are substantially higher than the cost of having dedicated testers, which makes the whole idea of cutting costs by cutting testers completely unjustifiable.

Software Quality by:

Ahmed Abdelhamid
Software Quality Engineer
Interactive Saudi Arabia Ltd.
An Economic Offset Program Co.
http://www.il.com.sa/ahamid
