We have been running benchmarks comparing languages relevant to high-performance computing, and unfortunately Julia still lags behind even Numba-JIT-compiled Python. Perhaps my understanding of Julia is limited, but even for the Rodinia SRAD program, which was originally written in Julia, every other implementation outperforms the Julia one.
I don't have much experience with Numba, but Julia, contrary to some online claims, is not really a "reads like Python, runs like C" language. When it does run like C, it doesn't look as clean as Python, and vice versa.
I'm not familiar with the internals of either Julia or Numba, but maybe Numba, being more specialized, can give LLVM information that lets it apply more aggressive optimizations more easily.
Do you have an idea whether these are specific types of problems that give Julia poorer performance? From what I recall, people were reporting better speeds with Julia than with Numba (e.g., [1]). My impression was that Julia basically lets you bring more of your code to LLVM than Numba does, so that would make sense.
Thank you for the article! We're mainly interested in floating-point performance and energy consumption with respect to solving differential equations and tridiagonal systems of equations, while running on a 128-core compute node. Our current results will likely only be presented in May, but here are last year's results: https://www.cs.uni-potsdam.de/bs/research/docs/papers/2025/l...
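For anyone unfamiliar, the tridiagonal solves mentioned above typically come down to the Thomas algorithm. A minimal Julia sketch (variable names are my own; no pivoting, so it's only valid for, e.g., diagonally dominant systems):

```julia
# Thomas algorithm: solve a tridiagonal system A*x = d, where
# a = subdiagonal (length n-1), b = diagonal (n), c = superdiagonal (n-1).
# No pivoting, so this assumes the system is well-conditioned without it.
function thomas(a, b, c, d)
    n = length(d)
    cp = similar(c, Float64)   # modified superdiagonal
    dp = similar(d, Float64)   # modified right-hand side
    cp[1] = c[1] / b[1]
    dp[1] = d[1] / b[1]
    for i in 2:n               # forward sweep
        m = b[i] - a[i-1] * cp[i-1]
        i < n && (cp[i] = c[i] / m)
        dp[i] = (d[i] - a[i-1] * dp[i-1]) / m
    end
    x = similar(dp)
    x[n] = dp[n]
    for i in n-1:-1:1          # back substitution
        x[i] = dp[i] - cp[i] * x[i+1]
    end
    return x
end
```

The forward sweep and back substitution are both O(n) and inherently sequential, which is part of why parallel scaling of these solvers is interesting to benchmark.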
Our Julia code is parallelised with FLoops.jl, but so far Numba has shown surprisingly good performance when executing code in parallel, despite being slower sequentially. I can therefore imagine that Julia might fare better in a regular desktop environment.
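For readers who haven't seen FLoops.jl, the parallel loops look roughly like this (a minimal sketch, not our actual benchmark code):

```julia
using FLoops

# Parallel sum of squares: @floop distributes iterations across threads
# and @reduce declares the (associative) accumulation.
function sumsq(xs)
    @floop ThreadedEx() for x in xs
        @reduce(s += x^2)
    end
    return s
end
```

The `ThreadedEx()` executor runs the loop on Julia's threads; swapping in `SequentialEx()` gives a serial baseline with the same code.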
I have a small benchmark program doing tight-binding calculations of carbon nanostructures that I have implemented in C++ with Eigen, C++ with Armadillo, Fortran, Python/NumPy, and Julia. It's been a while since I've tested it, but IIRC all the other implementations were about on par, except for Python, which was about half the speed of the others. I haven't tried it with Numba.
To bring the Julia performance on par with the compiled languages, I had to do a little bit of profiling and tweaking using @views.
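For context: Julia array slices like `A[:, j]` allocate a copy by default, so wrapping hot code in `@views` (which turns every slice into a non-allocating view) is a common fix. An illustrative sketch, not my actual benchmark code:

```julia
# Slices copy by default: this allocates a fresh vector per column.
function colsum_naive(A)
    s = 0.0
    for j in axes(A, 2)
        s += sum(A[:, j])      # allocation in the hot loop
    end
    return s
end

# @views on the whole function turns every slice in its body into a view.
@views function colsum_views(A)
    s = 0.0
    for j in axes(A, 2)
        s += sum(A[:, j])      # now a view: no per-column allocation
    end
    return s
end
```

Profiling with `@time` or `@allocated` makes the difference easy to spot, since the naive version's allocation count scales with the number of columns.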
The JuliaParallel/rodinia repo says that the focus of those benchmarks is the CUDA versions. I suspect that the CPU versions have not had much optimization effort spent on them. Julia isn't a magic wand, but you can usually get within a factor of 2 of C++ with similar effort.
A cluster environment with virtualized cores may slow down Julia's parallel code. People recommend ThreadPinning.jl to address this.
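Pinning is essentially a one-liner with ThreadPinning.jl's documented API:

```julia
using ThreadPinning

# Pin each Julia thread to its own physical core, avoiding migration
# and hyperthread/virtual-core sharing.
pinthreads(:cores)
```

`threadinfo()` from the same package visualizes where the threads actually landed, which is handy for verifying the pinning on a shared node.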
I found this paper (https://www.cs.uni-potsdam.de/bs/research/docs/papers/2025/l...) from around 2025 (it cites papers from 2025) which shows that the Julia version of SRAD (along with some other benchmarks) is about 5 times slower than the slowest Fortran implementation and consumes at least 5 times more energy; see Table 4 and Figure 1. This paper, however, doesn't seem to be peer-reviewed.
Yes, that's the paper my predecessors worked on! I replicated the measurements with an upgraded version of Julia (1.12), but despite the claimed performance benefits, Julia still performed poorly.
I agree with you. Having lost a close family member to cancer over a period of years, the only regret I have is not trying more to improve her quality of life for as long as I could. Putting in effort to understand the diagnosis is warranted, but there are no miracle cures, even if there are miracles sometimes.
100% agreed. I use Claude often just to bounce ideas back and forth on specs I'd like to create, which I know will never gain traction because they're either way too ambitious or too niche.
And the number of times Claude proposes something that's completely contradictory within the same response, or does a complete 180 two responses later, is ridiculous.
I feel like the use of the term "anti-AI hype" is not really fully explored here. Even limiting myself to tech-related applications - I'm frankly sick of companies trying to shove half-baked "AI features" down my throat, and the enshittification of services that ensues. That has little to do with using LLMs as coding assistants, and yet I think it is still an essential part of the "anti-AI hype".
The dreaded summarize feature: it's in places you wouldn't expect, not to mention the whole "let's record every meeting and then summarize it for leadership" trend. Big Brother at work is back, and it's even more powerful.
At the same time, everything you ever posted online has already been scraped by hundreds (maybe thousands) of entities and distributed/sold to countless other entities. The only difference is that OP shared his project here.
The issue is not even so much generating fake videos as creating plausible deniability. Now everything can be questioned merely on the grounds of seeming AI-generated.
It depends on how it's implemented. Maybe you could train MMO NPCs on this and make it fun, but I agree: I expect this to simply produce a dumpster-fire experience.
I'd like to add that there are different flavours of both capitalism and socialism. The kind of disaster capitalism practiced in Russia and other former Soviet Union countries in the 90s certainly didn't lead to an improvement in living standards for quite a long time, if ever.
China also presents a case of state capitalism, where companies are strictly controlled by the government, as evidenced by the set of regulations published a few weeks ago, and where many companies are owned by the children of party members.