
There are many programming languages specifically structured to prevent programmers from making a certain subset of bad programs. This is incredibly useful.


Yeah, but these languages tend not to be a good fit for inherently complex problems. When you try to express complex ideas in those languages, you end up with code that's either absurdly verbose, or code that experts in that language's idioms would call "bad".

One can write simple, structured code for simple problems in any language. But complex problems require languages that don't attempt to babysit programmers.

(IMNSHO, YMMV)


Some languages like Haskell put you in a purity (chastity?) belt; if you can code your solution there, you can code it anywhere. The problem is knowing your problem well enough to code it in Haskell...while a lot of the time we are writing code to understand our problem (prototyping, experimentation). Some people live in a world where they understand their problems before writing code, and all their code is expected to immediately hit production.

Scala is quite flexible and can accept good and bad code, which is good. I was only calling out FP abuses and its promotion as some sort of holy grail that is so much better than objects. If you compare Scala FP to Java OOP, ya, Scala FP is better, but if you compare Scala FP to Scala OOP, it's a much fairer battle and I put my faith in the latter (with the caveat that I can always use Scala FP when it makes sense to do so!).


So what you are saying is that Haskell forces you to understand the problem for which you are implementing a solution, and this is bad because often that is too hard.

Instead one should be able to just hack away until the code kind of does what they want (even though they can't prove anything about that piece of code) because that is the only way to really manage inherently complex problems?

The older I get the less I find writing code an attractive method for understanding a problem. I also find utilizing something like a type system to enforce invariants in code liberating - and actually conducive to solving problems.


Well, it helps to be able to inspect your code at various stages to isolate which part of it isn't working right. In the pure functional paradigm, my programs are typically the composition of numerous functions. Now what if my program isn't doing what it should? (Contra FP advocates, it's possible for this to happen even if your program successfully compiles.)

You could have it output the value after each function is applied, but that would either break purity (by having I/O) or, per the GP's point, be tedious to write. At this point, the Haskell community made the decision that in debugging, "screw purity", and the output is effectively untyped.

Certainly, you can use tests of expected invariants (e.g. QuickCheck), but that just tells you that the whole thing fails.

That is, I think, also the GP's point: that the same things that make your final code good in Haskell, also make it hard to write.

(Not to say I don't like Haskell; this is just a peeve.)
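
(To make that "screw purity" escape hatch concrete, here's a minimal sketch using Debug.Trace; step1, step2, and pipeline are made-up names:)

    import Debug.Trace (traceShowId)

    step1 :: Int -> Int
    step1 = (+ 1)

    step2 :: Int -> Int
    step2 = (* 2)

    -- traceShowId prints the value flowing between stages (to stderr)
    -- without restructuring the composition or changing any types.
    pipeline :: Int -> Int
    pipeline = step2 . traceShowId . step1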


you can't sprinkle debug prints in Haskell? really? (honest question)


Short answer: No, you can't.

Long answer: You can, but Haskell is very serious about keeping pure code separate from impure code. Any function which performs I/O is impure, so its output must be an IO Thing instead of just a Thing. This means anything that uses its output must accept an IO Thing instead of an ordinary Thing, and so on. Of course, the way I/O works means that a 'Type1 -> Type2' function can be transformed into an 'IO Type1 -> IO Type2' or a 'Type1 -> IO Type2' function (this applies to many things other than I/O -- consult one of the many monad tutorials for more on that). Haskell even includes some special syntax for invoking these transformations that makes things look rather like an imperative language, but adding in a debug print still requires you to rewrite a lot more code than you would have to in another language. Alternatively, you could use unsafePerformIO (whose type is 'IO a -> a') to hide from the type system the fact that you've done some I/O, but you may run into trouble with the printing not behaving as you expect (due to Haskell's lazy evaluation).
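
(To make the "rewrite a lot more code" point concrete, a minimal sketch with made-up names -- note how one putStrLn changes the function's type, and that change ripples out to every caller:)

    double :: Int -> Int
    double x = x * 2

    -- Adding a debug print forces the result into IO...
    doubleLogged :: Int -> IO Int
    doubleLogged x = do
        putStrLn ("doubling " ++ show x)
        return (x * 2)

    -- ...so every caller must now run in IO as well:
    main :: IO ()
    main = do
        y <- doubleLogged 21
        print (double y)  -- pure functions can still be called from IO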


Your long answer is correct. I think I disagree with your short answer.

You can, and this is not just a technicality - it can be useful in debugging.

It's certainly true that laziness means that *when* things print will be unintuitive if your intuition is trained on strict languages, so any short answer conveying as little information as a bare "yes" or "no" is liable to be confusing, though...


Whoa, really? You have to use a monad just to print text to stdout in Haskell?


The concept is that affecting stdout isn't "just" anything - it requires a clear definition of when (and if) that function will be executed, and in Haskell you don't pin that down if you can avoid it - this allows for lazy execution, actually running the function fewer times (or zero times) depending on how its result is used.

I.e., if your program needs the top 3 results in a race, then you can safely use a function that returns all the results; since you afterwards use only the first three, the remainder aren't calculated - that code most likely never runs, and the order of any "embedded" print statements would be undefined.

That being said, the standard Debug.Trace module makes it simple to add debug prints anywhere.
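
(A minimal sketch of both points, with made-up names -- the traces fire only for the values that are actually demanded:)

    import Debug.Trace (trace)

    -- Each result announces when it is actually computed.
    results :: [Int]
    results = map (\x -> trace ("computed " ++ show x) x) [12, 9, 15, 11, 10]

    main :: IO ()
    main = print (take 3 results)
    -- Only three "computed ..." lines appear; the last two elements
    -- are never evaluated, so their traces never fire.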


Remember that Haskell is lazy. In broad strokes, the way you control when things happen is to sequence them by combining them (directly or indirectly) into the massive IO action that becomes main.

If you don't care when it happens, you can use unsafePerformIO - which is just fine for debugging, though there are often better approaches.
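
(A minimal sketch of that sequencing; do-notation is just a convenient way of combining the actions into main:)

    main :: IO ()
    main = do
        putStrLn "first"            -- runs first...
        line <- getLine             -- ...then this; the ordering is explicit
        putStrLn ("got: " ++ line)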


You can. You use a function called `trace` in module Debug.Trace in the standard library.

For instance, if you defined

    f x y = g (x + y)
then you could instead write

    f x y = trace "calling f" (g (x + y))
to insert a debug statement.

(dllthomas linked to the library but didn't point out how it works, and unrealistically expected people to like, follow the link or something. argv_empty is correct that side effects cannot be obtained without either wrapping functions in the IO monad or calling unsafePerformIO. Debug.Trace.trace calls unsafePerformIO behind the scenes.)


http://www.haskell.org/ghc/docs/latest/html/libraries/base/D...

As mentioned, it'll only print when the wrapped value is actually evaluated, which won't always correspond to its position very well.

However, it's very much worth noting: if you're in IO then you can just sprinkle debug statements. If you're not in IO, then your function isn't effect-full and you can run it (or pieces of it) safely with whatever args you care about (also, QuickCheck is amazing).


That doesn't help with debugging pure functions. Many times I've been in a position where something is going wrong in the composition of several functions and I want to know where the error is in all of that. Normally, I'd just add a print statement to each of the functions, but in Haskell I'd have to rewrite the program to allow output at each stage or else break purity.

Again, QuickCheck doesn't help with that.


I'm not following your first sentence. What doesn't help with debugging pure functions? Debug.Trace is quite precisely for the situation you describe.

Another option would be to pull the file in question up in ghci and play directly with the components of the function - in pure code that should be safe in a way that it won't be in effect-full code.

I also dispute the assertion that QuickCheck doesn't help - if you're getting weird behavior, then hopefully you can characterize that weird behavior and get some example failing values out of QuickCheck - and getting a picture of which values the function fails for can absolutely be useful.
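
(For instance, with a toy property and a made-up name -- on failure, QuickCheck prints a shrunk counterexample, which is often enough to see which inputs a stage of your pipeline mishandles:)

    import Test.QuickCheck

    -- Toy invariant: reversing a list twice should give it back.
    prop_reverseTwice :: [Int] -> Bool
    prop_reverseTwice xs = reverse (reverse xs) == xs

    main :: IO ()
    main = quickCheck prop_reverseTwice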


The computer is a tool I use to help me solve problems. Part of the process of solving problems, for me at least, is to write code that executes, and spend some time in the debugger to see if my assumptions were right. The computer is a tool for solving problems, dammit! It is ridiculous to think that you should solve the problem using pencil and paper (b/c computers no good for problem solving!?!), with various mathematical proofs (b/c we can and should prove everything?!?) and the computer is just an annoying end point for getting solutions out.

Programming is not just about implementing solutions, but also about finding solutions.


I actually agree with you. My point was that types are a much better tool than hacking code and debugging. Once types are embraced and used as a tool, instead of thought of as a straitjacket, that is where the real power lies.


Types, especially with some good inference, don't necessarily get in the way of exploration, but purity most certainly does. Not being able to take shortcuts, and being expected to have a well-formed, elegant understanding of your problem up front, can be a big blocker.

But types are fairly conservative; dynamic languages still provide quicker turnaround times even if they can't provide semantic feedback. One of my research projects focuses on getting the benefits of both.


Some people like to think in terms of "proofs"; it seems reasonable to me that they would think of a compiler as a tool for "solving problems" in those terms. But most of us seem to be more empirical.

Interestingly, the best reverse engineers I've ever met, who are total descriptivists with a debugger, are also some of the most pedantic prescriptivists when it comes to writing actual code. It comes full circle, I think, because they understand the reasoning behind the API contracts better than the API documentation conveys.


I think what is being said is more that one of the things computers are best at, much better than humans, is simulation--following rigidly and repeatedly the consequences of a set of rules, through cause and effect, to see their eventual behaviors.

When constructing a set of rules in the first place (or when first formalizing an existing, but implicit, set of rules), the human can do the requirements gathering, design, bug-finding, etc. But humans are terrible at simulation--they can't foresee each and every consequence a particular rule might have. When constructing a rule-set, a human's productivity is enhanced by the computer taking over the "simulation" part, exploring the consequences for them, so the human can then change the rules in response.

Type systems of all kinds, though, basically assume that the rules defining the code are invariant. They don't help with simulation at all.


I disagree with that. Used correctly, types are exactly a lightweight simulation layer. Values are not always what you need to know about and, again used correctly, types can reveal a lot of interesting emergent structure in your code.
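
(One concrete, if toy, version of that: GHC's typed holes let you "simulate" with types before any values exist. parseAge is a made-up name; leave a _ where you haven't decided yet, and GHC reports the type that must go there:)

    parseAge :: String -> Maybe Int
    parseAge = _ . words
    -- GHC: Found hole: _ :: [String] -> Maybe Int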


assert()s prove that something isn't happening.

Types prove that something cannot happen.
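
(A minimal side-by-side sketch of the distinction, with made-up names:)

    import Control.Exception (assert)

    -- Runtime check: shows emptiness "isn't happening", but only
    -- for the inputs you actually ran.
    headChecked :: [a] -> a
    headChecked xs = assert (not (null xs)) (head xs)

    -- Type-level check: the empty case "cannot happen" at all.
    data NonEmpty a = a :| [a]

    headSafe :: NonEmpty a -> a
    headSafe (x :| _) = x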


Types prove that something cannot happen in the formal system the program represents. In the real world, though, radiation can flip memory bits into impossible positions, drive blocks can fail and spit random garbage, IPC messages can get dropped on their way to the receiver, etc.

Sadly, even when your system is pure and elegant and you've proven it all out, you still need runtime asserts (and more advanced things like quorum voting on the results of computations, with the ability to fail nodes experiencing problems) before you can be "sure." Ada wasn't enough for NASA; they needed redundant processors too.

CS is weird as a discipline: we get all the tools of mathematics (digital logic, proof-verification), and all the tools of engineering (rate-of-failure calculations, high-assurance systems)... and then it turns out that the problem-space (ensuring perfect automation over trillions of repetitions of a task) is so difficult that it requires both! Arguing types vs. asserts is a "rabbit-season, duck-season" debate. Anyone who isn't using both a type system and runtime checks, is working in the dark.

However, I can and should probably point out that useful runtime checks can be derived from static properties of your code--and a Sufficiently-Smart Compiler[1] would automatically insert them as it compiled your code.

---

[1] not all that rare these days, GHC's stream fusion goes way beyond "sufficiently smart"


Yeah I use strong typing, assert()s, and 'if (!x) throw ...' everywhere. Nevertheless, I figure if the biggest risk to my code comes from cosmic rays, then I'm doing pretty good.


They can also prevent programmers from making a certain subset of good programs. This is less useful.



