Currently developing a radio communications system in Java, with JNI calls for hardware integration, running on a quad-core 32-bit ARM processor. We have a budget of around 20ms for real-time audio packet processing. Recently, stop-the-world pauses introduced by the GC caused dropped calls. A few GC parameter tweaks brought our latency back below 20ms, and the application no longer drops calls.
This is using the GC that ships with JDK 11. Newer low-pause collectors, such as Shenandoah and ZGC, may be able to further drop the interruptions to 5ms on average.
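For illustration only (the actual flags used in this project aren't stated above), G1 — the default collector in JDK 11 — accepts a soft pause-time goal, and pinning and pre-touching the heap is a common companion tweak; something along these lines is a typical starting point:

```shell
# Hypothetical invocation; jar name and heap size are made up for the example.
# G1 treats -XX:MaxGCPauseMillis as a goal, not a guarantee. Pinning -Xms to
# -Xmx and pre-touching pages avoids heap-resize and first-touch hiccups.
java -Xms512m -Xmx512m \
     -XX:+AlwaysPreTouch \
     -XX:MaxGCPauseMillis=10 \
     -jar repeater.jar

# ZGC ships as experimental in JDK 11, and only on Linux/x64 there -- so it is
# not an option on this 32-bit ARM target. Shenandoah landed in mainline JDK 12.
java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC -jar repeater.jar
```

Note that these pause-time levers are best-effort: the collector may still exceed the goal under allocation pressure, which is why measuring under realistic load matters.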
The radio repeater is designed for real-time, safety-critical operations dealing with voice communications. Java is up for the job; like others have mentioned, it's more about the technical team and whether management understands what it takes to write and maintain an excellent code base than it is about Java versus C++.
It would be interesting to know what the trade-off for those tweaks was. Typically it's increased memory usage.
Another relevant aspect is what happens when one hits GC pauses again. Will there be any tweaks left? Will they conflict with the previous ones? Using such high-level levers to fix performance problems in a specific component is great when it works, but when it doesn't you're out of options.
Another option is to use a JVM that offers hard real-time constraints to achieve guaranteed performance characteristics. Such JVMs are in use in the automotive and airline industries; JamaicaVM, for example, offers deterministic garbage collection.
You’re coding safety critical systems and depending on severely non-deterministic tweaks to GC to get it to run right?! Where is this for? (Just so I can avoid it)
No need to know where this person is deploying this. If you want to avoid people doing stupid shit with safety-critical systems, you should just avoid going outside. Actually, avoid leaving the bed, as everything software in this day and age is basically just duct tape and cable ties put together in a rush to hit deadlines.
I have direct knowledge of, or have worked with, many critical systems in a few different areas (cannot get more specific for several legal reasons). Can confirm. I'm given to understand that legacy systems (think 30-ish years ago) were much better engineered, but there aren't enough Proper Engineers(tm)(r) to design (and build) all the critical systems that are necessary.
The biggest issues: an engineer with the know-how is expensive, and a team of them is prohibitively expensive. So a lot of companies outsource this work to other countries, make the workers sign onerous and dubiously enforceable contracts, pay little, demand too much, and set incredibly unrealistic deadlines for the type of work that needs to happen.
You know how it is, if it never fails, you're throwing money out. That's the mentality of a lot of management types I've dealt with.
> You’re coding safety critical systems and depending on severely non-deterministic tweaks to GC to get it to run right?!
That's certainly one way to look at over five years of engineering effort, extensive systems testing, countless hours of automated regression test development, independent product quality verification, rigorous code reviews, risk assessment processes, mandatory four 9s call quality requirements, integration analysis of JVMs with deterministic GC, and so on.
Most software does. But, let's face it, it's not as if C or C++ compilers come with a manual saying "We strongly recommend using this for safety-critical systems" either.