A lot of Robert C. Martin's pieces are variations on his strong belief that ill-defined concepts like "craftsmanship" and "clean code" (which amount to whatever his opinions are on any given day) are how you reduce defects and increase quality, not built-in safety and better tools, and that if you think built-in safety and better tools are desirable, you're not a Real Programmer (tm).
The current state of the software safety discussion resembles the state of the medical safety discussion two or three decades ago (yes, software really is that far behind the times).
Back then, too, thinking on medical safety was divided into two schools: the professionalism school and the process-oriented school. The former argued more or less what Uncle Bob argues: blame the damned * who made the mistakes; be more careful, damn it.
But of course, that stupidity fell out of favor. After all, when mistakes kill, people get serious about them. Eventually, serious people realized that blaming and clamoring for care backfires big time. That's when they applied, you know, science and statistics to safety.
So tools were upgraded: better color-coded medicine boxes, for example, and checklists in surgery. But it goes further. They figured out which kinds of training and process have high impact and applied them rigorously. Nurses are taught (I am not kidding you) how to question doctors when weird things happen; identity verification (ever notice why nurses ask your birthday like a thousand times a day?) got extremely serious; etc.
My take: give it a few more years and software, too, will probably follow the same path. We need more data, though.
I'm not sure any of them do, really. It's been 22 years since TDD made its entry into our field, and it's still worse than the runtime assertions that helped put people on the moon. I know I was lashing out at Uncle Bob before, but it's really all of them.
I do agree with these people that nobody has ever regretted writing a test. Well, someone probably has, but the idea is fairly solid. It's just also useless, because it's so vague: you can write a lot of tests and never be safe at runtime.
There isn't much consensus on the right kind of test to write with TDD, but getting that choice right or wrong is what makes or breaks TDD.
Recently I've been writing mostly "end-to-end unit tests" with TDD: stateless, faking all external services (database, message queue, etc.), and it works great.
There is a sweet spot in default test types - at as high a level as possible while staying hermetic seems to be ideal.
The other under-discussed thing is that to always be able to write this kind of test you need test infrastructure, which isn't cheap to build (all those fakes).
My biggest problem with TDD is that it doesn't actually protect you from the things you miss. This is great for the consultants who sell TDD courses, because they get to tell you that you did it wrong. They'll be right, but that fact is also useless. I don't have a problem with testing though, as I said I think it's hard to regret doing it, but I'm not sure why we wouldn't push something like runtime assertions instead if we were to do the whole "best practice" thing. With runtime assertions you'll get both the executable documentation and the ability to implement things like triple modular redundancy, so basically everything TDD does but much better.
Yet many developers don't even know what a runtime assertion is, while everyone knows what TDD is. I guess it doesn't really matter if you're working on something that's allowed to crash; everyone will just be like "oh, it's just IT, it does that".
It often protects me from things I miss. E.g., when I'm TDD'ing a feature and implementing the 5th or 6th test, I often find myself inadvertently breaking one of the earlier tests.
I do think there is such a thing as overtesting, i.e. regretted tests. TDD actually protects you from this to an extent by tying each test (or modification of a test) to a change in code.
Runtime assertions definitely give you more bang for the buck (they are ridiculously cheap), but they are complementary to tests, not a replacement: they attack the same problem from the bottom up instead of the top down.
I also find that when you combine the two, the tests become more useful - previously passing tests will fail when they trigger those assertions.
I'm not the only one who is skeptical of this toxic, holier-than-thou and dangerous attitude.
Removing braces from if statements is a great example of another dangerous thing he advocates, for no justifiable reason:
https://softwareengineering.stackexchange.com/questions/3202...
Which caused the big OSX/iOS SSL bug in 2014, see https://www.imperialviolet.org/2014/02/22/applebug.html
This link and thread on Hacker News are good too:
https://news.ycombinator.com/item?id=15440848