Clojure is the most productive (and beautiful) language in existence (personal preference), as long as you don't have to leave its world.
I stopped using it because the lack of Clojure libraries pushed me to use Java libraries, and that was a life-sucking experience. Java is fine as a language, but you cannot say the same about the APIs of many of its libraries. I also started sensing that new open source libraries are no longer being created in Java, which worsens the story for Clojure.
The great thing about Clojure is that it's not tied to the JVM. The JVM is just one of its hosts.
I mostly do frontend development these days, and I've known Clojure for a while. Because of those two factors, I chose ClojureScript, so now 99% of my professional time is spent writing ClojureScript, with the remaining 1% being JS when needed. Otherwise, the only time I see JS is when I have to understand how a library works.
So I agree with you on the Java side: I'm not a Java developer and I want to stay far away from it. But I love Clojure and need to be close to it; any other language feels inefficient and slows me down. And there are other hosts you can use, if needed.
And since there is always someone who jumps in saying "Common Lisp is a true lisp while Clojure is not", read https://en.wikipedia.org/wiki/No_true_Scotsman so we can have a real discussion about the difference ;)
"No True Scotsman" is about moving the goalposts regarding the quality or suitability of something. We wouldn't reasonably say that a Dane isn't a true Scotsman, right?
A piece of crap unsuitable for production use could still be a Lisp. Lisp 1.5 is Lisp; I wouldn't use it today. Being a Lisp or not is not about quality, but semantics. Clojure lacks or changes numerous semantics which define Lisp. That's not a statement about quality, just otherness.
Clojure doesn't call itself a Lisp, though its main website claims it is a Lisp dialect. Dialects are defined by mutual intelligibility: an Osaka man speaks a dialect which is understood in Tokyo. The mutual intelligibility between Lisp dialects and Clojure does not extend very far beyond (+ 2 2).
The other way you have to leave the Clojure ecosystem is when you have to communicate across the network or persist to disk. Writing everything-fits-in-memory data analysis in Clojure is some of the cleanest and most fun programming I've ever done.
But once you want to spread between multiple computers or operate on data larger than what comfortably fits in memory, you end up coding just like you would in C#.
I started working on some versions of persistent data structures that serialize to a KV store, which lets you send a hash map to another machine, have that machine add a key, then send it back. It seemed to work pretty well, but there's a ~10x performance penalty over Clojure's built-in data types (which are themselves not lighting the world on fire speed-wise).
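To make the idea concrete, here is a toy sketch of the pattern, not the commenter's actual implementation: a map that lives in a KV store so another machine sharing the store can read it, add a key, and "send it back". An atom stands in for Redis/RocksDB, and pr-str/read-string stand in for a real serializer.

```clojure
;; Toy KV-backed map: values are stored serialized under a key, so any
;; process sharing the store can round-trip and extend the same map.
(defonce kv-store (atom {}))            ; key -> serialized value

(defn kv-put! [k m]
  (swap! kv-store assoc k (pr-str m)))

(defn kv-get [k]
  (some-> (@kv-store k) read-string))

;; "Send" a map to another machine by writing it under a known key,
;; let the other side add an entry, then read it back.
(kv-put! :job-42 {:user "alice"})
(kv-put! :job-42 (assoc (kv-get :job-42) :status :done))

(kv-get :job-42)
;; => {:user "alice", :status :done}
```

A real version would serialize the structural tree nodes of the persistent data structure individually (so updates don't rewrite the whole map), which is where the ~10x overhead mentioned above comes from.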
That's a pretty harsh price to pay: you can use 100 machines to simulate single-address-space Clojure with the power of 10 machines. Awesome, but likely impractical. If you need persistence between program runs, or would like to run incremental computation on large data, I could see it working well, though.
When I was using HoneySQL[0] talking to PostgreSQL over JDBC to deal with persisted data, it did not feel non-idiomatic. When I used Kafka and Onyx[1] to process large dataflows, it did not feel non-idiomatic. Granted, my working set was only a few tens of TB, but there wasn't much pain involved. I tended to use Nippy[2] for serialization.
I didn't use special data structures (except when the problem called for it), with a clear separation using Nippy serialization (in place of pr-str/read-string) at the edges of my app.
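The "serialization only at the edges" approach looks roughly like this. Nippy's freeze/thaw round-trips ordinary Clojure data to bytes; the event map here is an invented example, not anything from the original system.

```clojure
(require '[taoensso.nippy :as nippy])

;; Plain Clojure data inside the app, bytes on the wire / on disk.
;; freeze/thaw handles ordinary maps, sets, keywords, dates, etc.,
;; with no special data structures needed.
(def event {:user-id 42 :tags #{:a :b} :ts #inst "2020-01-01"})

(def wire-bytes (nippy/freeze event))   ; byte[] ready for Kafka/Redis/disk
(def round-trip (nippy/thaw wire-bytes))

(= event round-trip)
;; => true
```

Because serialization happens only at the app's edges, the rest of the code stays idiomatic Clojure regardless of what the transport or store is.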
But did you really get much of the benefit of using Clojure? If you store your data in an SQL database, for instance, that layer provides the equivalent of what Clojure's STM can do within the process. You open a transaction, run some queries, perform some logic, then update the database. Clojure's persistent data structures don't give you extra leverage here - you could just as well write the logic with mutable data structures in C# (that you throw away afterwards).
This is no accident, as Clojure's STM was built around the premise of emulating the MVCC provided by databases to provide ACI (minus the D) guarantees to program against. If you're already getting those guarantees elsewhere, doubling up does no good.
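The parallel is easy to see in code. This is a generic illustration of refs and dosync (not from the thread): a multi-ref update inside dosync has the same transactional shape as BEGIN ... COMMIT against two rows, with atomicity, consistency, and isolation but no durability.

```clojure
;; In-process analogue of a DB transaction: two refs updated atomically.
;; Either both alters commit or neither does, and no other thread can
;; observe the intermediate state - ACI, minus the D.
(def checking (ref 100))
(def savings  (ref 50))

(defn transfer! [amount]
  (dosync
    (alter checking - amount)
    (alter savings  + amount)))

(transfer! 30)
[@checking @savings]
;; => [70 80]
```

If the database is already giving you this via its own MVCC, wrapping the same logic in refs adds ceremony without adding guarantees, which is the point being made above.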
Similarly, code that operates on streams is generally not where you get bogged down by state and multithreading in something like C#.
I say this as somebody that believes Clojure is a powerful addition to the toolbox. I even use refs in production code...
I say you still get a benefit from using it because of the REPL-driven development process. I miss that instantaneous feedback, and the feeling that you are in dialogue with your program, whenever I develop in other languages.
Add to that the fact that the language shepherds you into using immutable data structures and writing pure code, and I still think Clojure is a significant advantage when writing distributed applications.
Yes, we leveraged Clojure a lot. PostgreSQL has far more advanced indexing and querying capabilities, particularly around too-big-to-fit-in-RAM data, than what I'd want to waste time building myself (if my team could even do it). The D in ACID is pretty important, btw, and being able to offload replication/backups/failover to AWS (using RDS) was a huge win operationally.
Clojure's HoneySQL is the best way to programmatically interact with a database I have found, and it's not particularly close. It provides nice data structures to work with (that can even be schema'd or spec'd), and combined with Clojure it makes writing complicated logic a breeze. It is easily extensible to new SQL functions, clauses, and operators, even vendor-specific stuff (we used a lot of PostgreSQL-specific features around function-based indexing). LINQ doesn't compare to the flexibility provided here by data structures and a just-in-time SQL compiler. No ORM does.

As an example, the user might be submitting ajax requests to my backend, filtering some data by adding more filters or other clauses to my query. What I store in the session or in the database is fundamentally just (pr-str my-query). When the user comes back and wants to continue modifying it, I (read-string my-query-str) from my store. Then I just keep assoc'ing, filtering, and manipulating the data. If they decide this is a nice query, I just save it. It can become an alert, a report, etc. The only machinery is basic Clojure data structure manipulation, basic print/reading, and the power of compiling data structures to SQL.
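For readers who haven't seen HoneySQL, the workflow described above looks roughly like this (a minimal sketch; the query, the add-filter helper, and the column names are invented for illustration):

```clojure
(require '[honey.sql :as sql])

;; A query is just a map, so "adding a filter" is ordinary data
;; manipulation, and persisting the query is just printing/reading it.
(def base-query {:select [:*] :from [:orders]})

(defn add-filter [q clause]
  (update q :where (fn [w] (if w [:and w clause] clause))))

(def q (-> base-query
           (add-filter [:= :status "open"])
           (add-filter [:> :total 100])))

;; Persist to the session/DB and restore later - the query survives
;; a plain pr-str/read-string round trip because it's just data.
(def restored (read-string (pr-str q)))

;; sql/format compiles the map to a parameterized SQL string plus params.
(sql/format restored)
```

Because the query is a plain map at every step, saving it as an alert or report is the same operation as saving any other piece of application data.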
As far as operating on streaming data via Kafka (raw) or with Kafka + Onyx, I suspect you've never actually done this in Clojure. It's a breeze, quick to write and performant. I've yet to meet someone who has worked with Onyx who wasn't blown away by how simple it is to use and how quickly you can evolve very complicated data flow graphs. It provides reasonably low latency and customizable batching, and I can still write straightforward, simple Clojure. You don't really deal with state or multithreading when dealing with Onyx (Onyx handles that for you, and you can declare the parallelism behavior you're looking for).

When I consumed Kafka directly, I didn't have many problems that got bogged down by state. We'd keep some local state, but the system was designed either to run in batch over some entire time interval, or to be killable with kill -9 at any point, in which case the data was designed with idempotence in mind (so that we could handle/detect processing data more than once).
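The idempotence pattern mentioned here can be sketched generically (this uses no Kafka client; the message shape and the in-memory "seen" set are stand-ins for real messages and whatever durable store you'd actually use):

```clojure
;; Each message carries an id; the handler skips ids it has already
;; processed, so a kill -9 followed by a replay of the partition is
;; harmless - duplicates are detected and dropped.
(def seen  (atom #{}))
(def total (atom 0))

(defn handle! [{:keys [id amount]}]
  (when-not (contains? @seen id)
    (swap! seen conj id)
    (swap! total + amount)))

;; Replaying messages (as after a crash) does not double-count:
(doseq [msg [{:id 1 :amount 10} {:id 2 :amount 5} {:id 1 :amount 10}]]
  (handle! msg))

@total
;; => 15
```

With at-least-once delivery from Kafka, pushing the dedup key into the data like this is what lets the consumer be killed and restarted at any point.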
I selected and championed Clojure because it was a dynamic language (very similar to Python or Ruby) that had access to battle-tested JVM libraries. I selected it over Python and Ruby because it made choices that resulted in simpler programs with far better locality, for when I or a co-worker must understand and modify a program in the future. The emphasis on immutable data and functional programming allows nice straight-line programming, where all I need to know to reason about the code is in the function arguments. There aren't as many surprises. The reason I've stayed with Clojure is the community's focus on simplicity and the stability of my code. The code I wrote 8-9 years ago works just fine today. That's not a huge brag (cough Fortran, cough Common Lisp), but it's refreshing compared to quickly evolving languages like Rust and Python.
So this got 3 downvotes, and I don't care about the karma, but I'd be quite curious to hear about opposing experiences scaling up Clojure programs since there seems to be some disagreement.
If you started to struggle with fitting everything in one node due to CPU or memory constraints, but still felt like you were writing Clojure in a halfway-idiomatic style when moving to a cluster, I'd love to hear from you.
A ~10x performance penalty doesn't seem that bad. In C, a pointer dereference will beat a web service call by far more than 10x. Also, why a hand-jammed KV store? EDN and other (faster) serialization libraries have been around for quite a while.
I used Nippy originally, then later Java serialization libraries once it became clear optimization was required. I tried using both Redis and RocksDB. The happy case had all but the last layer of nodes sitting cached in memory (as long as your memory is more than 1/32 of the size of your disk).
I think I could do better starting from scratch given what I've learned.
Clojure is a pragmatic language, and I think if you just reject the immense value of being on the JVM and try to treat it as something pure and unsullied then you're missing a lot of its motivation.
There are other VMs that would provide the same level of value for Clojure, though. The fact that it runs on V8 via ClojureScript is proof of that.
Personally, I believe they should have used BEAM originally and stayed as far away from the JVM as they could, but that is my own opinion, and everyone has their own.
I don't think there's a world where Rich Hickey would have had it running on BEAM. From what I've read, a large part of why he originally chose the JVM is because companies he wrote software for would be okay with something that runs on the JVM, but not something like Common Lisp. And surely not BEAM.
At its core, Clojure attempts to be practical. Running on the JVM was a practical decision.
Oh, I understand the reasoning behind the choice. Doesn't change my mind in the slightest that BEAM would have been a better underlying VM for Clojure.
Yeah, good point. I often find myself having to use a Clojure shim over a not-great Java library. It's painful at first, but it can be really great once you've tamed that beast.
If we're talking about Lisps, I've found Racket to be considerably more expressive, and incidentally, more beautiful as well. If we're talking about beauty of any language, I think many would agree that APL takes the cake.