To save anyone the trouble: this project is still essentially empty.
That says nothing about the owner's ability to port F# to the JVM, but just know that right now it is a readme, three almost-empty Java classes, and the beginning of an ANTLR parser.
So to answer other questions here: you can't even compare it to F# on Mono. F# on Mono works perfectly. The F# compiler and runtime are huge, and getting to parity will probably take a very dedicated team at least a year.
That said, there is some very interesting work going on towards a sane way to call out to JVM / .NET code from Haskell, though nothing will realistically be in a truly usable form in the next 12-18 months.
I am not so sure Frege would be of interest to people looking for an impure, strict language. A better fit for that would be Yeti.
And given that porting a language as deeply rooted in the .NET world as F# is very difficult, my guess is that going with Yeti, and thus fully embracing the JVM, would be the better choice. All the more so since one could do it right now.
Very pleased to see this. F# is a really exciting language, hamstrung by being tied to the MS platform. Hoping we'll see more open-source F# projects as a result.
What's the quality of code produced by Mono? How does it compare with Microsoft's compiler? I'd like to see Mono C# and F# numbers compared against Microsoft's compilers. Will I get 90% of the performance?
There are games running on Mono (e.g. Unity C# scripts; Bastion was ported to Mono to run in Chrome; etc.). Mono's performance will depend on your specific application, but many people have been using it without any issues.
Mono has been out for years, and I'm sure there have been great improvements. Does anyone have recent comparisons of it vs C# on Windows? I'm sure people must be curious.
C# performance on Mono is quite different from F# performance: while C# performs decently on Mono, F# does not perform nearly as well.
You can run F# on the old "stable" branch of Mono (2.10.x), but I'd highly recommend running Mono 3.0.x instead. The older version uses an older, simpler GC which works OK for C#, but which seems to have trouble with F# (which generates more G0 objects than C#). The new 'sgen' GC in Mono 3.0 is a huge improvement though, so you shouldn't see much (if any) performance difference between F# and C# apps.
No. Those are examples of portability issues when your application depends on platform specific functionality:
Most code containing P/Invoke calls into native Windows libraries (as opposed to P/Invokes into your own C libraries) will need to be adapted to the equivalent call on Linux, or the code will have to be refactored to use a different call. <-- That is, only when you choose to link directly to Windows technology.
Registry <-- Only if you choose to use it. I haven't written any registry-dependent code in the last ten years and don't miss it at all.
you should not assume the order of bytes <-- Not .NET's problem.
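The byte-order point is a general portability concern, not a .NET one. Here is a minimal Java sketch of the safe approach: declare an explicit byte order when serializing, rather than assuming the platform's native order (the class name and values below are illustrative only):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class Endianness {
    public static void main(String[] args) {
        // Serialize an int with an explicit little-endian byte order,
        // instead of relying on whatever the host platform uses.
        ByteBuffer buf = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN);
        buf.putInt(0x01020304);
        byte[] bytes = buf.array();
        // In little-endian order, the least-significant byte comes first.
        System.out.println(bytes[0]); // prints 4
        System.out.println(bytes[3]); // prints 1
    }
}
```

The same bytes read back with `ByteOrder.BIG_ENDIAN` would decode to a different value, which is exactly the bug this guards against.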
It's tied in the sense that Mono's maturity lags a really long way behind .NET's. I tried to use Mono and F# on Windows XP; the answer was "you can't, install Windows 7 and VS2012", which is annoying. The "Mono community" is only a few people compared to the .NET support base. So yes, it really is tied to .NET.
It's really not. Scala is a very interesting language, and it incorporates many functional features, but it's not at all ML-like. In particular, type annotations are needed in many, many places where they would be superfluous in an ML-derived language.
As a recent migrant from OCaml to Scala (due to stagnation of OCaml), I have to say that you are quite right about type annotations, but the style of programming that an OCaml/F# person would be accustomed to is quite easily replicated in Scala.
Yep, I switched from F# to Scala, and I actually know many developers making the same change. Scala looks more verbose at first, but it's actually much more powerful (higher-kinded types, scalaz, macros, etc.). Once you get accustomed to the syntax, you find a clear and concise language (with ugly type annotations).
What do you mean by "stagnation of OCaml"? There was a period some 4-5 years ago where it did seem like there was little development on the language, but things have changed quite a lot since then. Lots of developments in the core language (including first-class modules and GADTs!), and a blossoming of the ecosystem around it. To me, it seems OCaml has never been livelier than now...
Let me start by saying that I <3 Ocaml. I've used it intensively for over 10 years.
By stagnation I mostly meant the comparatively tiny ecosystem of libraries vis-a-vis, e.g., JVM-based languages. In particular, with Akka, Scala has much nicer support for concurrency. The ability to call, and be called by, Java code in the smoothest possible way is also beneficial for my use cases.
I'm surprised that GADTs were added, because a few years ago I visited the Gallium team at INRIA, who develop OCaml, and asked about GADTs. I was told by one of the senior OCaml developers that there were no plans for the inclusion of this feature.
Not quite the same, but you may be interested in the IKVM type provider prototype[1], which allows you to write F# scripts directly against JAR files (using IKVM in the implementation).
That may not be a practically relevant difference. Why would you want to run F# on the JVM? To interface with JVM code. IKVM lets you do that already.
I've rolled out multiple .NET programs that contain Java open source libraries through IKVM. It works just fine, and the amount of extra work you need to do to make the JVM->.NET mapping work is remarkably little.
In general speech, I can get behind the idea that you shouldn't correct someone if you understand what they mean. In programming, however, I think it's important to be pedantic.
You must mean tail call optimization/elimination here. Clearly, the JVM supports tail calls.
Plus, as it is relatively simple to perform tail call elimination with iteration instead of reusing the frame in many (most?) cases, static analysis can classify those cases and hygienically rewrite those functions.
Most attempts I've seen to exploit TCO either aren't optimizations or can be dealt with via another control mechanism. My suspicion is that it is largely a solution in search of a problem for the programmer attempting to employ the technique, much like using AOP to add logging information for every "enter" and "return" from a method/function.
Mind you, I am in no way dismissing tail-call elimination as an important tool. Just that some of its practitioners (such as whichever idiot who wrote some of the barely-readable code in old programs of mine!) are a bit zealous in its use.
If you want to be pedantic, the JVM does not support tail (function) _calls_, but jumps, which may be the result of tail call optimization or elimination.
As far as I know, we still cannot efficiently turn mutually recursive calls into a chain of jumps, but I'll be glad to be corrected here.
I think there may be a misconception here: A tail call is NOT a call that doesn't allocate a stack frame. A tail call is a call whose result is the return value of a function. This is why we refer to the stack-frame-eliminating optimization as "tail call elimination": we take a jump-to-subroutine (call) instruction and replace it with a normal jump instruction, thus eliminating a "call" from our program.
The JVM most certainly does support general tail calls through its invoke instruction, which supports calls to arbitrary methods of arbitrary objects (which, of course, includes tail calls).
Some compilers also support the optimization of _some_ recursive tail calls (usually recursive tail calls to final or local methods) into loops using the goto instruction, which supports jumps within the current method (essentially optimizing the tail-recursive method into a non-recursive method containing a loop).
If I read you correctly, you say, the JVM 'supports tail calls', because it can call other methods. I think, the phrase 'supports tail calls' is as meaningful as 'supports function calls' then.
In any case, I don't think this is a question of optimization. I don't see how, for example, an F# program written in CPS can ever run on the JVM.
Scheme implementations will compile any tail calls into jumps, even if they're mutually recursive. This is easy to understand by doing CPS transformation by hand on such calls.
This isn't really fair: some implementations really do rule out certain optimizations. Turing-Completeness only refers to what is computable, not what actual algorithms and optimization techniques are possible.
And to be more specific, Turing completeness refers to what is computable with an unbounded amount of memory and time. This is why we have complexity theory layered on top of computability theory. Some things are theoretically computable but practically infeasible. Compiler optimizations can help greatly in some of those cases.
Scala doesn't optimize general tail calls, either (only direct tail recursion in final methods). If you want general tail calls on the JVM, you must use trampolining, which adds some overhead to every call. So most languages choose fast calls without tail call elimination over slower calls with tail call elimination (i.e., speed over correctness).
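For illustration, here is a minimal hand-rolled trampoline in Java. The `Step`/`Done`/`More` encoding is a hypothetical sketch of the technique (real libraries, such as Scala's `scala.util.control.TailCalls`, are more elaborate), using mutually recursive even/odd as the classic example:

```java
import java.util.function.Supplier;

public class Trampoline {
    // A computation either finishes with a value (Done)
    // or yields a thunk for the next step (More).
    interface Step<T> {}
    record Done<T>(T value) implements Step<T> {}
    record More<T>(Supplier<Step<T>> next) implements Step<T> {}

    static <T> T run(Step<T> step) {
        // Each "bounce" replaces a nested call with a loop iteration,
        // so arbitrarily deep tail calls use constant stack space.
        while (step instanceof More) {
            step = ((More<T>) step).next().get();
        }
        return ((Done<T>) step).value();
    }

    static Step<Boolean> isEven(long n) {
        if (n == 0) return new Done<>(true);
        return new More<>(() -> isOdd(n - 1)); // tail call, deferred
    }

    static Step<Boolean> isOdd(long n) {
        if (n == 0) return new Done<>(false);
        return new More<>(() -> isEven(n - 1)); // tail call, deferred
    }

    public static void main(String[] args) {
        // This depth would overflow the stack as plain mutual recursion.
        System.out.println(run(isEven(1_000_000))); // prints true
    }
}
```

Each bounce returns control to the driver loop instead of nesting a call, which is why trampolining handles mutually recursive tail calls in constant stack space, at the cost of allocating a closure per call: the overhead mentioned above.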
Clojure has a loop/recur construct which makes tail recursion explicit, so it doesn't need to be done by the JVM. In other words, regular recursive calls should not be used for loops if you want performance.
Scala has a similar explicit @tailrec annotation for its compiler, too:
"A method annotation which verifies that the method will be compiled with tail call optimization. If it is present, the compiler will issue an error if the method cannot be optimized into a loop."
F# works very well on Mono. You can compile and use the open-source edition of the F# compiler [1] on both Linux and on OS X (though I haven't tried OS X).
This port is in very early stages, and only has the start of a lexer as far as I can see.