
I might be missing something obvious, but from my (small) reading about TLA+ (and the Lamport video course), my biggest concerns were:

A: even if you understand the spec you want to write, it's possible to get the transcription of that spec into the spec language wrong (see the Die Hard exercise in the course, which I try to sketch below); isn't there some kind of recursive catch-22 there?

B: even if you understand the spec you want to write, manage to write it, get the model checked, etc., you still have 0 lines of working code, and (unless I'm mistaken) you're going to have plenty of occasions to not implement the spec properly, and still hit all the bugs that you proved would not happen if only you had written the code right (you dummy).
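
To make A concrete, here is roughly what I have in mind, sketched in Python rather than TLA+ (the function names and the breadth-first search are mine, not from the course): the "spec" is just the jug rules of the Die Hard puzzle, and the "model check" is an exhaustive walk over the reachable states, looking for one that violates the claimed invariant (the big jug never holds exactly 4 gallons).

    # Rough Python analogue of the Die Hard exercise (not the course's TLA+):
    # states are (small, big) for a 3- and a 5-gallon jug, and we exhaustively
    # search for a reachable state where the big jug holds exactly 4 gallons.
    from collections import deque

    def moves(small, big):
        """All states reachable in one step: fill, empty, or pour a jug."""
        yield (3, big)                      # fill the small jug
        yield (small, 5)                    # fill the big jug
        yield (0, big)                      # empty the small jug
        yield (small, 0)                    # empty the big jug
        pour = min(small, 5 - big)
        yield (small - pour, big + pour)    # pour small into big
        pour = min(big, 3 - small)
        yield (small + pour, big - pour)    # pour big into small

    def check(invariant):
        """Breadth-first search of all reachable states; return a violating trace."""
        start = (0, 0)
        parent = {start: None}
        queue = deque([start])
        while queue:
            state = queue.popleft()
            if not invariant(*state):
                trace = []
                while state is not None:
                    trace.append(state)
                    state = parent[state]
                return list(reversed(trace))
            for nxt in moves(*state):
                if nxt not in parent:
                    parent[nxt] = state
                    queue.append(nxt)
        return None  # invariant holds in every reachable state

    # The "counterexample" is, of course, the puzzle's solution:
    print(check(lambda small, big: big != 4))
    # e.g. [(0, 0), (0, 5), (3, 2), (0, 2), (2, 0), (2, 5), (3, 4)]

Nothing in there stops me from mistyping one of the pour rules and happily checking a system that isn't the one I meant; that is the catch-22 I'm worried about.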

So it seems like the interesting part is the "game of thinking harder about what you want to do", and that is (arguably) done, at least in part, by writing automated tests beforehand; and the tests also keep their value after you've written the production code.
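
For comparison, here is the same invariant written as a property-based test (using hypothesis; the move encoding is again my own), which is closer to the "think beforehand with tests" discipline I mean:

    # Same invariant as a property-based test: hypothesis throws random move
    # sequences at the jugs and asserts the big jug never holds 4 gallons.
    from hypothesis import given, strategies as st

    def moves(small, big):
        """All states reachable in one step (same rules as the sketch above)."""
        p1, p2 = min(small, 5 - big), min(big, 3 - small)
        return [(3, big), (small, 5), (0, big), (small, 0),
                (small - p1, big + p1), (small + p2, big - p2)]

    @given(st.lists(st.integers(min_value=0, max_value=5), max_size=20))
    def test_big_jug_never_holds_four(choices):
        state = (0, 0)
        for i in choices:
            state = moves(*state)[i]      # apply the i-th possible move
            assert state[1] != 4          # the invariant, now as a runtime check

The property is the same; the test just samples move sequences instead of exhausting the state space, and, unlike the model, it can keep running against the production code later.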

That, at least, could explain why the jump to formal methods doesn't feel natural.

Looking forward to being wrong, though.



There’s a huge misunderstanding that pops up in every discussion about “formal methods.”

These methods are not perfect silver bullets that guarantee bug-free software forever. They are meant to be useful engineering tools that help you think, and verify your thinking, especially for tricky corner cases and interactions.

So then it’s a bit like “Why would I think about the code I’m going to write when I could later make a mistake in implementing what I thought?” Well, even so, it has advantages over not thinking.

Not to suggest that “formal methods” are the only way of thinking. They’re just not really what people imagine they are when they criticize them.


Fair enough.

If you have experience, what would be the rebuttal to my fear of the "catch-22"?

What I mean is: how often does it happen that you think about a problem correctly but fail to transcribe that thinking into the formal method, which either gives you too much confidence in a model that doesn't match reality, or makes you feel you've wasted your time on pointless mental contortions?

How do you approach "experimenting" with formal methods to avoid this risk?



