I originally wrote this as a post to the "Classic Traveller Starships" mailing list, a discussion group for a role-playing game. Someone had written a program to help automate the math-intensive starship design process for the game, and someone else complained that it was "full of bugs". The exchange went something like (names changed to protect the innocent):
Program Author wrote:

> > Early User wrote:
> > Nah, hoped for so much & it full of bugs.
> > Was designing K'Kree Cruiser & number of turrets way off.
> > Shame
>
> Thanks for this. I reviewed the code and found an error in the
> calculation of empty turret space *and* one in the space and costs of
> low berths for K'kree ships. They have now been fixed.

As a software developer, I was impressed with both Program Author's willingness and ability to track down the bug and his gracious response to Early User's vague bug report. This isn't intended as a mean-spirited slam on Early User; the problem was that he just hadn't ever been told how to report a bug. Plenty of testers who are getting paid to test much larger programs can't do much better, frankly.
This article isn't meant to tell professional software testers how to do their jobs; there's quite a bit more to the QA process than this. It's intended to educate end users, who, on low-budget labor-of-love software projects, are the only QA staff available. As I said to Early User:
"Speaking as a professional software developer, 'it's full of bugs' isn't a very helpful bug report. Besides being discouraging to the guy who's trying to construct a very useful piece of software for you to use for free, it doesn't tell him how to make it better. Making it better will make both him and you happy, so help him make it better."
I'll use the above exchange as an example, but don't worry too much about the details, which in this case are specific to the starship design system in Traveller. Regardless of what kind of software you're testing, a good bug report will contain, at minimum:
• A good description of the context in which the bug happens. In this case, Early User was partway there; he said what he was trying to do with the program: "designing K'Kree Cruiser". Better would have been to also give the ship's size and whatever non-turret armaments were installed, since those could be expected to alter the turret numbers. Better still would have been for Early User to send, or offer to send, the data file he was working on to Program Author, so that he could look at the whole design and at any weirdnesses visible in the file but not evident in the user interface.
• A description of what you expected to happen versus what actually did happen. In this case, something like "...should have had 150 turrets but instead had 95."
Those are the two things that you should be able to give the software author without even trying hard, especially in a case like this where the program isn't crashing. For extra credit, if you're willing to do some experimentation to help the author, you can try to provide some additional information:
• Steps to reproduce. This is most useful when the bug depends on a particular sequence of events, but even in a case like this you could say: "Start the program, set the race to K'Kree, set the size to 10000 tons, and go to the turrets page. It will say 95 turrets, but it should say 150." Again, not hard, and it gives the programmer a simple way to see the problem.
• Regressions. This is what begins to separate "Quality Assurance Engineers" from "Testers", in my opinion. Regression is the process of isolating variables in order to give the programmer a better idea of where the problem is occurring. In this case, I can think of three obvious variables. First, check the same design against an old version of the program, if you've got it: "Problem occurs in version 1.1 and in version 1.07. No other versions tested." Second, try a related action in the program to see if a similar problem results, or if the problem is particularly specific. In this case, an uncommon design sub-sequence (starships built by one relatively obscure race) was in use; the obvious question is whether the problem happens in more common sequences (e.g. starships built by humans). Third, try the same action in the program but with slightly different data. In this case, the number of available turrets on a ship is closely tied to ship size, so try a few other sizes: "At 1000 tons, it says 8 turrets instead of 15; at 50000 tons, it says 450 instead of 750." This kind of information can help the Author guess where to start looking for the problem. If the bug just started happening in a recent version, it probably has to do with changes made in that version. If it happens in one sequence but not another, the code which is common between the two is probably not at fault.
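To make the "same action, different data" idea concrete, here is a minimal Python sketch of tabulating those size regressions. The turret figures are just the example numbers from the text, not values from the real Traveller design rules, and the comparison itself is purely hypothetical:

```python
# A sketch of tabulating a size regression from a bug report.
# The figures are the example numbers from the text above; nothing here
# comes from the actual Traveller rules or the actual program.
cases = [
    # (ship size in tons, turrets the program reports, turrets the rules give)
    (1000, 8, 15),
    (10000, 95, 150),
    (50000, 450, 750),
]

for tons, reported, expected in cases:
    ratio = reported / expected
    print(f"{tons} tons: {reported} reported vs {expected} expected "
          f"(ratio {ratio:.2f})")
```

Even this small table gives the Author something to reason about: if the reported-to-expected ratio were constant across sizes, it would hint at a single wrong constant, while a ratio that drifts with tonnage would hint that the error depends on ship size.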
Now, granted, the Author can do those regressions in the same amount of time as the User, and he might have a better idea of which ones to try; that's why regression tests are considered "extra credit". However, if you offer some simple regressions without having to be asked, you'll suddenly discover the difference between a happy programmer and a cranky programmer.