If you believe what the avant-garde testing community says, test plans are hardly a good thing. Only exploratory testing, session-based testing, or context-driven testing can find many errors and improve quality effectively.
You may not know each of these styles, but in short they all criticize the classic test management approach, where a comprehensive test plan is created up front and then processed more or less mechanically by "test robots". The test plan, they argue, destroys the tester's analytical and creative skills.
And indeed, with common tools it is very easy to design test plans that exhibit exactly these negative traits.
So, an example test plan in TestLink looks like this. Quite intimidating, even at first glance:
For each test case, individual test steps can be stored. This puts the tester in a tight corset: when in doubt, the tester sticks to the test plan instead of thinking outside the box and finding an important error.
While the test case definition is extremely elaborate, the execution side is dull: only a text note and a result can be stored.
The UX of this GUI practically enforces the mindset: no marginal notes, no deviation, just check whether the test case passes.
TestLink is extremely poor in terms of usability, but the basic problems are shared by all test management tools. To exaggerate a little: any test tool that has a "test steps" feature is 90% scrap!
Still, I think that test plans have a certain value. Just as unit tests do for programmers, they describe the system and document behavior and knowledge.
A test plan also enables extensive reporting. Somehow it creates trust in the software when you see that many test cases were executed and passed.
A while back I came across the tool "Testpad". It works with checklists that can be processed quickly and easily, so they deliver the necessary confidence, but they also allow for testing "off-road". When I designed Quality Spy I was not yet aware of Testpad, but ultimately I pursued exactly the same design goal.
The checklist is only a rough agenda; it does not contain precise instructions for how a test must be carried out, such as forming equivalence classes for a form input. Either a tester has this know-how, or he is a beginner and doesn't know what equivalence classes are anyway.
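To make the equivalence-class idea concrete, here is a minimal sketch in Python. The `validate_age` function and its accepted range are hypothetical; the point is that one representative value stands in for each class of inputs, which is the know-how a checklist deliberately leaves implicit.

```python
# Hypothetical validator for an "age" form field accepting 18..120.
def validate_age(raw: str) -> bool:
    if not raw.isdigit():
        return False
    return 18 <= int(raw) <= 120

# Equivalence classes for this input: one representative value per class
# is tested, instead of every possible input.
equivalence_classes = {
    "non-numeric":   ("abc", False),
    "below minimum": ("17",  False),
    "valid range":   ("42",  True),
    "above maximum": ("121", False),
}

for name, (value, expected) in equivalence_classes.items():
    assert validate_age(value) == expected, name
```

An experienced tester forms these classes in her head while working through a checklist item; no tool support is required.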
For example, the test plan, i.e. the checklist, for Quality Spy's undo-redo function looks pretty spartan:
During execution, one can check off the points, so that one always has a rough overview of which areas have already been covered.
By the way, I do not like the typical "Passed/Failed" scale for manual testing. That is a scale for test robots. People can say "passed, but with small problems". This way the summary quickly gives a meaningful picture, instead of the situation "100% of the tests passed, but 50 errors found".
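The richer scale can be sketched with a few lines of Python. The result values and the session data below are made up for illustration, not taken from any particular tool:

```python
from collections import Counter
from enum import Enum

# Hypothetical result scale for manual testing: richer than pass/fail.
class Result(Enum):
    PASSED = "passed"
    PASSED_WITH_ISSUES = "passed, but with small problems"
    FAILED = "failed"

# Made-up checklist results from one test session.
results = [Result.PASSED, Result.PASSED_WITH_ISSUES,
           Result.PASSED, Result.FAILED]

# The summary distinguishes clean passes from passes with caveats,
# instead of collapsing everything into "passed" vs. "failed".
summary = Counter(r.value for r in results)
print(summary)
```

A report built on this summary shows at a glance that two items passed cleanly, one passed with caveats, and one failed, which is far more honest than a bare pass percentage.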
From my point of view, a checklist never replaces the test protocol, where problems and comments can easily be recorded:
But back to the avant-garde testers, like the people in the Software Testing Club, who are probably far better testers than I am. They are quite right that heavyweight test plans are not that valuable. But short, crisp checklists are a useful tool with a good cost-benefit ratio.