> What is better — having a test or not having a test? The answer is obvious — any test is better than no tests at all.
Nooooooo....
For most people here, this is probably true - because you (and hopefully those you work with) know how to write tests well.
Examples of badly written tests I've encountered that wasted everyone's time:
* Unexpected environment - for example, Django doesn't flush the cache between tests, so cached state leaks from one test into the next (first sketch after this list)
* Tautological tests - where the test just repeats the logic that's in the code, so it can never catch a bug in that logic (second sketch below)
* Peeking too far into the implementation - this restricts refactoring and, depending on what's being asserted, produces false negatives or false positives
* Mocking out too much - tests that pass when they really shouldn't (also in the second sketch below)
* False assumptions/not thinking it through - why did this test start failing on New Year's Day? (third sketch below)
* Flaky integration tests
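To make the Django point concrete, here's a minimal sketch (class name and cache keys are made up; the runner behaviour is real - Django rolls the database back between tests but never flushes the cache):

```python
from django.core.cache import cache
from django.test import TestCase

class CacheIsolationTests(TestCase):
    def setUp(self):
        # django.test.TestCase rolls the database back after each test,
        # but it does NOT flush the cache - whatever one test cached is
        # still there for the next. Clearing it restores isolation.
        cache.clear()

    def test_fresh_cache(self):
        self.assertIsNone(cache.get("greeting"))
        cache.set("greeting", "hello")

    def test_still_fresh_cache(self):
        # Without cache.clear() in setUp, this assertion can pass or
        # fail depending on test execution order - the classic
        # "unexpected environment" failure.
        self.assertIsNone(cache.get("greeting"))
```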
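And a toy sketch of the tautology and over-mocking problems (all names invented): the first test derives its expected value with the same expression as the implementation, the second mocks away everything the function actually does.

```python
from unittest import mock

def apply_discount(price: float, rate: float) -> float:
    return price * (1 - rate)

def test_discount_tautology():
    # BAD: the expected value is computed with the same formula as the
    # implementation, so a bug in the formula is mirrored in the
    # assertion and the test can never fail because of it.
    price, rate = 100.0, 0.2
    assert apply_discount(price, rate) == price * (1 - rate)

def charge_customer(gateway, customer_id: str, amount_cents: int):
    token = gateway.tokenize(customer_id)
    return gateway.charge(token, amount_cents)

def test_charge_overmocked():
    # BAD: a Mock accepts any call with any arguments, so this still
    # passes if charge_customer swaps the argument order or charges the
    # wrong token - it mostly proves that mocks return mocks.
    gateway = mock.Mock()
    gateway.charge.return_value = "ok"
    assert charge_customer(gateway, "cust_42", 999) == "ok"
```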
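The New Year's Day failure is typically something like this (hypothetical example, not our actual code):

```python
from datetime import date

def age(born: date, today: date) -> int:
    years = today.year - born.year
    if (today.month, today.day) < (born.month, born.day):
        years -= 1  # this year's birthday hasn't happened yet
    return years

def test_age_hidden_date_assumption():
    # BAD: the expected value silently assumes the birthday has already
    # passed this year. Written in July it passes for months, then
    # "mysteriously" starts failing on January 1st.
    born = date(1990, 6, 15)
    today = date.today()
    assert age(born, today) == today.year - 1990
```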
In our case, about half of these could be fixed once the issue became apparent, but the rest had to be either scrapped or completely rewritten.
And then there are the issues with test coverage giving you a false sense of security - things like reaching 100% on a given piece of code while only thinking about the happy path.
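A quick sketch of that last point (toy function, made-up test): one happy-path test executes every line, so line coverage reports 100%, yet the obvious failure case is never exercised.

```python
def average(values):
    return sum(values) / len(values)

def test_average_happy_path():
    # This single test executes every line of average(), so coverage
    # tools report 100% - but average([]) raises ZeroDivisionError and
    # no test here would ever tell you.
    assert average([2, 4, 6]) == 4
```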
I partially agree with you - one can always delete a bad test, and then it's like you have no tests at all.
> And then there are the issues with test coverage giving you a false sense of security - things like reaching 100% on a given piece of code while only thinking about the happy path.
Yup, and this is why one writes tests against well-defined interfaces/boundaries.
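A rough illustration of what that buys you (Cart/Item are invented stand-ins): the test exercises only the public boundary, so the internals can be refactored freely without touching the test.

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    price: float

class Cart:
    # Minimal stand-in so the sketch runs; the internals are
    # deliberately arbitrary - nothing in the test depends on them.
    def __init__(self):
        self._lines = []
        self._discount = 0.0

    def add(self, item: Item, quantity: int = 1):
        self._lines.append((item, quantity))

    def apply_coupon(self, code: str):
        if code == "SAVE20":  # hypothetical 20%-off coupon
            self._discount = 0.20

    def total(self) -> float:
        subtotal = sum(item.price * qty for item, qty in self._lines)
        return subtotal * (1 - self._discount)

def test_cart_total_through_public_boundary():
    # Asserts only on observable behaviour at the interface (add /
    # apply_coupon / total). Swap the list for a dict, change how the
    # discount is stored - this test doesn't care.
    cart = Cart()
    cart.add(Item(name="widget", price=100.0), quantity=2)
    cart.apply_coupon("SAVE20")
    assert cart.total() == 160.0
```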