There’s been an interesting little discussion on the Test-Driven-Development list about redundant tests that goes something like this:
A) I have a unit test called “UpdatesDatabase” in my database-connector object that tests to make sure I can update the database.
B) I have the same test in my “Model”; all the model does is call the connector object, but I have a test for it.
C) I have the same test in my “Controller”
D) I have the same test in my GUI/View
E) My customer does the same thing as an acceptance test
It’s not one test; it’s dozens of tests in each layer, each repeated five times. Isn’t this redundant?
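To make the redundancy concrete, here is a minimal sketch (all names are hypothetical, with an in-memory fake standing in for a real database) of the “same” test living at two layers: the connector is tested directly, and the model test exercises the identical behavior one layer up.

```python
import unittest

class FakeDatabase:
    """Stand-in for a real database; records writes in memory."""
    def __init__(self):
        self.rows = {}

    def write(self, key, value):
        self.rows[key] = value

class Connector:
    """Thin wrapper around the database."""
    def __init__(self, db):
        self.db = db

    def update(self, key, value):
        self.db.write(key, value)

class Model:
    """All the model does is call the connector."""
    def __init__(self, connector):
        self.connector = connector

    def save(self, key, value):
        self.connector.update(key, value)

class UpdatesDatabaseTests(unittest.TestCase):
    def test_connector_updates_database(self):
        db = FakeDatabase()
        Connector(db).update("id-1", "hello")
        self.assertEqual(db.rows["id-1"], "hello")

    def test_model_updates_database(self):
        # Nearly the same assertion, one layer up -- the redundancy in question.
        db = FakeDatabase()
        Model(Connector(db)).save("id-1", "hello")
        self.assertEqual(db.rows["id-1"], "hello")

if __name__ == "__main__":
    unittest.main()
```

Add a controller test, a GUI test, and a customer acceptance test on top of these two, and every behavior gets checked five times.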
My short answer is both: yes, it’s redundant, and at the same time, that is not necessarily bad.
In any large, working system, at any one time, at least one system is failing, and another system is compensating(*).
If this was not true, we would not need tests, right?
So, first off, if your automated tests get to the point where they could be automatically generated by a code-generator, you aren’t thinking, and risk spending a lot of time on things that might not have much value. If you’ve got more than two copies of essentially the same test, you may be able to eliminate some of those tests by making a pointed decision about risk.
At the same time, if you get feedback like “It just HAS TO WORK” from management, well, recognize that systems fail, and the way to prevent failure is through redundancy and failover. One way to do that is through “redundant” tests at multiple levels; another is, yes, an independent test group.
UPDATE: Yes, it’s a complex architecture, probably win32, not web, and it could certainly be a heckofalot tighter. I suggest we keep that as a separate discussion.