The introduction made me hope for real advice on how to "stop stopping", but it only repeated the old arguments on why we want tests. Why not deal with some of the real problems that actually prevent people from writing automated tests?
Real-world example 1: The system being developed talks to external system X. We don't want tests to litter the production database, especially since it's about accounting and we would get into legal trouble for that. However, there is no possibility to open a test account on the production system, and no budget for a license for a test installation of X. The main trouble with X is that its public API (web service) changes from version to version and there is no useful documentation about it. How would one write integration tests for that?
Real-world example 2: How would you write tests for a system whose requirements are unspecified, even at a very coarse level, after the deadline at which management forces it into production?
I'm pretty sure both examples happen in places other than the ones where I've seen them, too.
> However, there is no possibility to open a test account on the production system, and no budget for a license for a test installation of X. The main trouble with X is that its public API (web service) changes from version to version and there is no useful documentation about it. How would one write integration tests for that?
How would you manually test in a situation like that? If you can't interact with the production API because it would corrupt data, and there is no dev API, would you just be guessing that everything works before you deploy your code?
Nothing prevents people from writing automated tests except their own shortsightedness.
You're going to spend extra time on the problems you identified. You can either plan for it up front with tests, or be unprofessional and firefight later.
1. Create a facade abstraction (it could be a microservice, API or even a command-line app) over the expensive-to-test thing. Make the abstraction really dumb and easy to test manually in isolation from the rest of the app. Integration test only against the abstraction, not the real thing, and do a minimal level of end-to-end manual testing against the real thing (a minimal sketch of this follows the list).
2. Don't. If you don't have fixed requirements your tests will have negative ROI.
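To make point 1 concrete, here is a minimal sketch of the facade idea in Python. Everything in it (AccountingFacade, FakeXClient, the PostBooking call) is made up for illustration; the point is only that the layer talking to X stays thin, and everything else is tested against a fake.

```python
class AccountingFacade:
    """Deliberately dumb pass-through layer: the only code that talks to X."""

    def __init__(self, client):
        self._client = client  # real X client in production, a fake in tests

    def post_booking(self, account_id, amount_cents, reference):
        # Keep this thin: translate arguments, forward the call, return the result.
        return self._client.call("PostBooking", {
            "account": account_id,
            "amount": amount_cents,
            "ref": reference,
        })


class FakeXClient:
    """In-memory stand-in for X, used by the automated integration tests."""

    def __init__(self):
        self.calls = []

    def call(self, method, payload):
        self.calls.append((method, payload))
        return {"status": "ok"}


def test_booking_goes_through_the_facade():
    fake = FakeXClient()
    facade = AccountingFacade(fake)
    facade.post_booking("4711", 1999, "invoice-42")
    assert fake.calls == [
        ("PostBooking", {"account": "4711", "amount": 1999, "ref": "invoice-42"})
    ]
```

Only the pass-through layer is left to the "minimal level of end-to-end manual testing"; when X changes its API between versions, the breakage is confined to that one thin class.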
1. Unfortunately, the main issue was a changing interface of the third-party system. The manual testing against the real thing was the problem. I fully agree with automating everything where an abstraction is sufficient.
2. This is good to know. I might have worked with vague requirements for too long, but knowing this may help identify the parts where testing is indeed possible.
I don't think a blog post can possibly cover all the cases. A book will get you closer. For a great book on exactly that sort of subject matter, I couldn't agree more with the author of TFA: Working Effectively with Legacy Code is a book that everyone should read at least once.
It's been a while since I personally last read it, but, as I recall, it has an entire chapter devoted to each of your examples.
If it's an accounting service, make a new account/table named "Software test". Book-keeping was invented to spot errors. If you have a non-zero balance in your "Software test" account, you probably have a bug in the software.
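If the ledger were scriptable, that invariant itself could be the assertion. A toy sketch, with an in-memory Ledger standing in for the real accounting system (the class and method names are made up):

```python
from collections import defaultdict


class Ledger:
    """Toy stand-in for the real accounting system."""

    def __init__(self):
        self._balances = defaultdict(int)

    def book(self, account, amount_cents, reference):
        self._balances[account] += amount_cents

    def balance(self, account):
        return self._balances[account]


def test_software_test_account_balances_to_zero():
    ledger = Ledger()
    # A booking and its reversal against the dedicated test account...
    ledger.book("Software test", +100, "selftest")
    ledger.book("Software test", -100, "selftest reversal")
    # ...should cancel out; any residue points to a bug in the booking code.
    assert ledger.balance("Software test") == 0
```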
Unfortunately, this wasn't possible for two reasons. The first was that we would need not just one account, but several. Though it might have been possible to limit this for the integration test, and test all the stuff that deals with multiple accounts by mocking the accounting system.
The second was that even a fake account doesn't belong on production, not in accounting. They are very strict about such things.
So I write some code and a test, and of course the first time I run the code, the test is going to fail.
So now the balance in our "Software test" account is nonzero whereas it should be zero.
But audit requires us to record all bookings, so how do we explain these bogus bookings (both from the erroneous code that made the mistake in the first place, and from the manual adjustment later that fixes the mistake for the next test run)?
You can have many types of tests. First you have assertions and unit tests with mocks/injections that run at compile time together with the type checker and other low-level tests. Then you have the integration tests that make sure your code works together with third parties and other APIs. Those tests might seem unnecessary when you already have unit tests, but it's very nice to know if some third party has made breaking changes and your software has stopped working because of it.
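A sketch of those two layers, with made-up names (convert, fetch_rate, RealRateClient are illustrative, not a real API):

```python
from unittest import mock


def convert(amount_cents, rate_client):
    """Code under test: converts an amount using a third-party rate service."""
    rate = rate_client.fetch_rate("EUR", "USD")
    return round(amount_cents * rate)


def test_convert_unit():
    # Unit test: the third party is mocked, so this runs fast and deterministically.
    fake_client = mock.Mock()
    fake_client.fetch_rate.return_value = 1.10
    assert convert(100, fake_client) == 110


# The integration test lives in a separate (e.g. nightly) suite and talks to
# the real service, so a breaking change on their side shows up here first:
#
# def test_convert_integration():
#     assert convert(100, RealRateClient()) > 0
```

The unit test keeps failing fast on your own bugs; the integration test is the one that tells you a third party broke something.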