
In general speech, I can get behind the idea that you shouldn't correct someone if you understand what they mean. In programming, however, I think it's important to be pedantic.

You must mean tail call optimization/elimination here. Clearly, the JVM supports tail calls.



Plus, since in many (most?) cases tail call elimination can be performed just as easily with iteration as by reusing the frame, static analysis can classify those cases and hygienically rewrite those functions.
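The rewrite described above can be sketched concretely. A minimal example (names hypothetical), showing a tail-recursive sum and the mechanical loop form a compiler or programmer could produce from it:

```java
public class TailToLoop {
    // Tail-recursive form: the recursive call is the last action.
    // On the JVM each call consumes a stack frame, so a large n overflows.
    static long sumRec(long n, long acc) {
        if (n == 0) return acc;
        return sumRec(n - 1, acc + n); // call in tail position
    }

    // The mechanical rewrite: parameters become loop variables,
    // the tail call becomes a variable update plus a jump back.
    static long sumLoop(long n, long acc) {
        while (n != 0) {
            acc = acc + n;
            n = n - 1;
        }
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(sumLoop(1_000_000L, 0L)); // constant stack depth
        // sumRec(1_000_000L, 0L) would likely throw StackOverflowError
    }
}
```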

Most attempts I've seen to exploit TCO either aren't optimizations or can be dealt with via another control mechanism. My suspicion is that it is largely a solution in search of a problem for the programmer attempting to employ the technique, much like using AOP to add logging information for every "enter" and "return" from a method/function.

Mind you, I am in no way dismissing tail-call elimination as an important tool. Just that some of its practitioners (such as whichever idiot wrote some of the barely-readable code in old programs of mine!) are a bit zealous in its use.


If you want to be pedantic, the JVM does not support tail (function) _calls_, but jumps, which may be the result of tail call optimization or elimination.

As far as I know, we still cannot efficiently turn mutual recursive calls into a chain of jumps, but I'll be glad to be corrected here.
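Absent direct support for cross-method jumps, the usual JVM-level workaround for mutually recursive tail calls is a hand-written trampoline. A minimal sketch (all names hypothetical): each step returns either a finished result or a thunk for the next call, so the whole chain runs in one loop at constant stack depth.

```java
import java.util.function.Supplier;

public class Trampoline {
    // A step either holds a final result or a thunk producing the next step.
    static final class Step {
        final Boolean result;       // non-null when finished
        final Supplier<Step> next;  // non-null when more work remains
        Step(Boolean result, Supplier<Step> next) {
            this.result = result;
            this.next = next;
        }
    }

    // Mutually recursive definitions, written as steps instead of calls.
    static Step isEven(long n) {
        return n == 0 ? new Step(true, null)
                      : new Step(null, () -> isOdd(n - 1));
    }

    static Step isOdd(long n) {
        return n == 0 ? new Step(false, null)
                      : new Step(null, () -> isEven(n - 1));
    }

    // The driver loop: this is where the "jump" actually happens.
    static boolean run(Step s) {
        while (s.result == null) s = s.next.get(); // constant stack depth
        return s.result;
    }

    public static void main(String[] args) {
        // Direct mutual recursion would overflow the stack at this depth.
        System.out.println(run(isEven(1_000_000L)));
    }
}
```

The cost is an object allocation per "call", which is exactly why this is a workaround rather than the efficient chain of jumps the comment asks about.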


I think there may be a misconception here: A tail call is NOT a call that doesn't allocate a stack frame. A tail call is a call whose result is immediately returned by the enclosing function. This is why we refer to the stack-frame-eliminating optimization as "tail call elimination": we are taking a jump-to-subroutine (call) instruction and replacing it with a normal jump instruction, thus eliminating a "call" from our program.
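The distinction can be made concrete with a tiny sketch (helper names hypothetical):

```java
public class TailPositions {
    static int f(int x) { return x; } // hypothetical helper

    // Tail call: f's result IS g's return value. Nothing remains to do
    // in g's frame, so call-plus-return could be replaced by a jump.
    static int g(int x) {
        return f(x + 1);
    }

    // NOT a tail call: after f returns, we still add 1, so h's frame
    // (and its pending addition) must survive the call.
    static int h(int x) {
        return 1 + f(x);
    }

    public static void main(String[] args) {
        System.out.println(g(41)); // 42
        System.out.println(h(41)); // 42
    }
}
```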

The JVM most certainly does support general tail calls through its invoke instruction, which supports calls to arbitrary methods of arbitrary objects (which, of course, includes tail calls).

Some compilers also support the optimization of _some_ recursive tail calls (usually recursive tail calls to final or local methods) into loops using the goto instruction, which supports jumps within the current method (essentially optimizing the tail-recursive method into a non-recursive method containing a loop).


Thanks for the clarification.

If I read you correctly, you are saying the JVM 'supports tail calls' because it can call other methods. I think the phrase 'supports tail calls' is then about as meaningful as 'supports function calls'.

In any case, I don't think this is a question of optimization. I don't see how, for example, an F# program written in CPS can ever run on the JVM.


Scheme implementations will compile any tail calls into jumps, even if they're mutually recursive. This is easy to understand by doing CPS transformation by hand on such calls.
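The hand CPS transformation the comment alludes to can be sketched in Java (since that is the thread's setting); note that after the transformation every remaining call is in tail position, which is what lets a Scheme compiler turn them all into jumps, and which the JVM, lacking tail call elimination, cannot exploit:

```java
import java.util.function.LongConsumer;

public class CpsByHand {
    // Direct style: the multiplication happens AFTER the recursive call
    // returns, so the recursive call is not a tail call.
    static long fact(long n) {
        return n == 0 ? 1 : n * fact(n - 1);
    }

    // Hand CPS-transformed: the pending multiplication moves into the
    // continuation k, leaving every call in tail position. A Scheme
    // compiler would compile these to jumps; on the JVM each one still
    // consumes a stack frame.
    static void factCps(long n, LongConsumer k) {
        if (n == 0) k.accept(1);
        else factCps(n - 1, r -> k.accept(n * r));
    }

    public static void main(String[] args) {
        System.out.println(fact(10));
        factCps(10, r -> System.out.println(r)); // same result via CPS
    }
}
```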


I think everyone knows what he meant. In programming, everything supports everything anyhow, as long as it's Turing complete.


This isn't really fair: some implementations really do rule out certain optimizations. Turing completeness only refers to what is computable, not to what actual algorithms and optimization techniques are possible.


And to be more specific, Turing completeness refers to what is computable with an unbounded amount of memory and time. This is why we have complexity theory layered on top of computability theory. Some things are theoretically computable but practically infeasible. Compiler optimizations can help greatly in some of those cases.



