No, Java's existing compiler is very good, and it generates code about as good as you could want. There is definitely still a cost from objects not yet being inlined in arrays (this will change soon) that hurts some programs, but in practice Java performs more-or-less the same as C++.
In this case, however, it appears that the Java program may have been configured in a suboptimal way. I don't know how much of an impact it has here, but it can be very big.
Even benchmarks that allow for JIT warmup consistently show Java at roughly half the speed of C/C++/Rust. Is there something they're doing wrong? I've seen people write some really unusual Java to eliminate all runtime allocations, but that was about latency, not throughput.
Yes. The most common issues are heap misconfiguration (which matters more in Java than any compiler configuration does in other languages) and benchmarks that don't simulate realistic workloads in terms of both memory usage and concurrency. Another big issue is that the effort put into the programs is not the same. Low-level languages do allow you to get better performance than Java if you put in significant extra work. Java aims to be "the fastest" for a "normal" amount of effort; in exchange, you lose some control that could translate to better performance at the price of significantly more work, not just at initial development time, but especially during evolution/maintenance.
E.g. I know of a project at one of the world's top-5 software companies where they wanted to migrate a real Java program to C++ or Rust to get better performance (it was probably Rust, because there are some people out there who really want to try Rust). Unsurprisingly, they got significantly worse performance (probably because low-level languages are not good at memory management when concurrency is at play, or at concurrency in general). But they wanted the experiment to be a success, so they put in a tonne of effort - I'm talking many months - hand-optimising the code, and in the end they managed to match Java's performance or even exceed it by a bit (but admitted it was ultimately wasted effort).
If the performance of your Java program doesn't more-or-less match or even exceed that of a C++ (or other low-level language) program, then the cause is one of: 1. you've spent more effort optimising the other program, 2. you've misconfigured the Java program (probably a bad heap-size setting), or 3. the program's performance depends on object flattening, which means the Java version will suffer costly cache misses (until Valhalla arrives, which is expected to be very soon).
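To make the heap point concrete, here's a minimal sketch of the kind of configuration involved (MyBenchmark is a placeholder, and the sizes are illustrative, not recommendations). By default the JVM typically caps the heap at around a quarter of physical RAM, which can badly skew a benchmark:

    # Pin the heap size so the run isn't penalised by the default cap
    # or by heap resizing mid-run.
    java -Xms8g -Xmx8g MyBenchmark

    # Optionally pick a collector to match the workload, e.g. a
    # throughput-oriented one for batch jobs:
    java -Xms8g -Xmx8g -XX:+UseParallelGC MyBenchmark

Giving the GC ample headroom is often the single biggest lever, precisely because Java trades RAM for CPU in memory management.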
In my experience, if your C++ or Rust code does not perform as well as Java, it's probably because you are trying to write Java in C++ or Rust. Java handles a large number of small heap-allocated objects shared between threads really well. You can't reasonably expect to meet its performance on such workloads with the rudimentary tools provided by the C++ or Rust standard library. If you want performance, you have to structure the C++/Rust program in a fundamentally different way.
I was not familiar with the term "object flattening", but apparently it just means storing data by value inside a struct. But data layout is exactly the thing you should be thinking about when you are trying to write performant code. As a first approximation, performance means taking advantage of throughput and avoiding latency, and low-level languages give you more tools for that. If you get the layout right, efficient code should be easy to write. Optimization is sometimes necessary, but it's often not very cost-effective, and it can't save you from poor design.
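To make the layout point concrete, here's a sketch in Java of the two layouts being discussed (the names are mine). An array of objects is an array of references, so a scan chases pointers; storing the same data by value in primitive arrays keeps it contiguous:

    class LayoutSketch {
        // Pointer-chasing layout: each Point is a separate heap object
        // (header + fields), and Point[] holds references to them.
        record Point(double x, double y) {}

        static double sumX(Point[] pts) {
            double s = 0;
            for (Point p : pts) s += p.x();  // one dereference per element
            return s;
        }

        // Manually flattened layout: the x coordinates stored by value,
        // contiguously; the scan is sequential and cache-friendly.
        static double sumX(double[] xs) {
            double s = 0;
            for (double x : xs) s += x;
            return s;
        }
    }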
> it's probably because you are trying to write Java in C++ or Rust
Well, sure. In principle, we know that for every Java program there exists a C++ program that performs at least as well, because HotSpot is such a program (i.e. the Java program itself can be seen as a C++ program with some data as input). The question is: can you match Java's performance without significantly increasing the cost of development, and especially of evolution, in a way that makes the tradeoff worthwhile? That is quite hard to do, and it gets harder the bigger the program gets.
> I was not familiar with the term "object flattening", but apparently it just means storing data by value inside a struct. But data layout is exactly the thing you should be thinking about when you are trying to write performant code.
Of course, but that's why Java is getting flattened objects.
> As a first approximation, performance means taking advantage of throughput and avoiding latency, and low-level languages give you more tools for that
Only at the margins. These benefits are small and they're getting smaller. More significant performance benefits can only be had if virtually all objects in the program have very regular lifetimes - in other words, can be allocated in arenas - which is why I think it's Zig that's particularly suited to squeezing out the last drops of performance that are still left on the table.
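As an aside, Java's own FFM API has an Arena abstraction for off-heap memory that illustrates the same lifetime discipline, though not with the scope of Zig's allocators (a sketch; the size is arbitrary):

    import java.lang.foreign.Arena;
    import java.lang.foreign.MemorySegment;
    import java.lang.foreign.ValueLayout;

    class ArenaSketch {
        static int demo() {
            // Everything allocated from a confined arena shares one
            // lifetime and is freed together when the arena closes.
            try (Arena arena = Arena.ofConfined()) {
                MemorySegment buf = arena.allocate(1024);
                buf.set(ValueLayout.JAVA_INT, 0, 42);
                return buf.get(ValueLayout.JAVA_INT, 0);
            } // freed here, all at once
        }
    }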
Other than that, there's not much left to gain in performance (at least after Java gets flattened objects), which is why the use of low-level languages has been shrinking for a couple of decades now and continues to shrink. Perhaps that will change when AI agents can actually code everything, but then they might as well be programming in machine code.
What low-level languages really give you through better hardware control is not performance but the ability to target very restricted environments without much memory (one of Java's greatest performance tricks is its ability to trade RAM for CPU savings on memory management), assuming you're willing to put in the effort. They're also useful, for that reason, for things that are supposed to sit in the background, such as kernels and drivers.
> The question is can you match Java's performance without significantly increasing the cost of development and especially evolution in a way that makes the tradeoff worthwhile?
This question is mostly about the person and their way of thinking.
If you have a system optimized for frequent memory allocations, it encourages you to think in terms of small independently allocated objects. Repeat that for a decade or two, and it shapes you as a person.
If you, on the other hand, have a system that always exposes the raw bytes underlying the abstractions, it encourages you to consider the arrays of raw data you are manipulating. Repeat that long enough, and it shapes you as a person.
There are some performance gains from the latter approach. The gains are effectively free, if the approach is natural for you and appropriate to the problem at hand. Because you are processing arrays of data instead of chasing pointers, you benefit from memory locality. And because you are storing fewer pointers and have less memory management overhead, your working set is smaller.
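A rough back-of-envelope for the working-set claim, assuming a point with two doubles on a typical 64-bit HotSpot with compressed class pointers (exact sizes vary):

    boxed:     12 B header + 2 × 8 B fields = 28 B, padded to 32 B per object,
               + 4 B reference in the array ≈ 36 B/element, scattered
    flattened: 2 × 8 B fields               = 16 B/element, contiguous

Less than half the working set, plus sequential access that the hardware prefetcher can actually exploit.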
What you're saying may (sometimes) be true, but that's not why Java's performance is hard to beat, especially as programs evolve (I've been programming in C and C++ since before Java even existed).
In a low-level language, you pay a higher performance cost for a more general (abstract) construct - e.g. static vs. dynamic dispatch, or the Box/Rc/Arc progression in Rust. If a certain subroutine or object requires the more general access even once, you pay the higher price almost everywhere. In Java, the situation is the opposite: you use a more general construct, and the compiler picks an appropriate implementation per use site. E.g. dispatch is always logically dynamic, but if at a specific use site the compiler sees that the target is known, the call will be inlined (C++ compilers sometimes do that too, but not nearly to the same extent, because a JIT can perform speculative optimisations without proving they're correct); if a specific `new Integer...` doesn't escape, it will be "allocated" in a register, and if it does escape it will be allocated on the heap.
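A hedged sketch of both effects (the classes are mine; whether the JIT actually applies the optimisations depends on the profile it observes):

    interface Shape { double area(); }

    record Circle(double r) implements Shape {
        public double area() { return Math.PI * r * r; }
    }

    class DispatchSketch {
        static double total(Shape[] shapes) {
            double sum = 0;
            for (Shape s : shapes)
                sum += s.area(); // logically a virtual call; if the JIT only
                                 // ever sees Circle here, it can speculatively
                                 // devirtualise and inline it behind a cheap guard
            return sum;
        }

        static double unitArea() {
            Shape s = new Circle(1.0); // never escapes this method: escape
            return s.area();           // analysis can elide the allocation and
                                       // keep the fields in registers
        }
    }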
The problem with Java's approach is that optimisations aren't guaranteed, and sometimes an optimisation can be missed. But on average they work really well.
The problem with a low-level language is that over time, as the program evolves and features (and maintainers) are added, things tend to go in one direction: more generality. So over time, the low-level program's performance degrades and/or you have to rethink and rearchitect to get good performance back.
As to memory locality, there's no issue with Java's approach, only a missing feature: flattening objects into arrays. This feature is now being added (also in a general way: a class can declare that it doesn't depend on identity, and the compiler then transparently decides when to flatten it and when to box it).
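For reference, this is roughly what that declaration looks like under Project Valhalla (JEP 401 preview syntax, which may still change before release):

    // A value class renounces identity (no meaningful ==, no locking on
    // instances), which frees the VM to flatten instances into arrays and
    // enclosing objects whenever it judges that profitable.
    value class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }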
Anyway, this is why it's hard, even for experts, to match Java's performance without significantly higher effort that isn't a one-time thing but persists (in fact, grows) over the software's lifetime. It can be manageable and maybe worthwhile for smaller programs, but cost, performance, or both suffer more and more with bigger programs as time goes on.
From my perspective, the problem with Java's approach is memory, not computation. For example, low-level languages treat types as convenient lies you can choose to ignore at your own peril. If it's more convenient to treat your objects as arrays of bytes/integers (maybe to make certain forms of serialization faster), or the other way around (maybe for direct access to data in a memory-mapped file), you can choose to do that. Java tends to make solutions like that harder.
Java's performance may be hard to beat in the same task. But with low-level languages, you can often beat it by doing something else due to having fewer constraints and more control over the environment.
> or the other way around (maybe for direct access to data in a memory-mapped file), you can choose to do that. Java tends to make solutions like that harder.
Not so much anymore, thanks to the new FFM API (https://openjdk.org/jeps/454). The verbose-looking calls are all compiler intrinsics, and thanks to Java's aggressive inlining, intrinsics can be wrapped and encapsulated in a clean API (i.e. if you use an intrinsic in method bar, which you call from method foo, it's usually as if you'd used the intrinsic directly in foo, even though the call to bar is virtual). So you can efficiently and safely map a data-interface type onto chunks of memory in a memory-mapped file.
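A sketch of what that looks like with the FFM API (the file path and layout choice are illustrative):

    import java.io.IOException;
    import java.lang.foreign.Arena;
    import java.lang.foreign.MemorySegment;
    import java.lang.foreign.ValueLayout;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    class MmapSketch {
        static long firstLong(Path file) throws IOException {
            try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ);
                 Arena arena = Arena.ofConfined()) {
                // Map the file as a MemorySegment tied to the arena's
                // lifetime; nothing is copied onto the Java heap.
                MemorySegment seg = ch.map(FileChannel.MapMode.READ_ONLY,
                                           0, ch.size(), arena);
                // Typed, bounds-checked access that the JIT compiles
                // down to a plain load.
                return seg.get(ValueLayout.JAVA_LONG, 0);
            }
        }
    }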
> But with low-level languages, you can often beat it by doing something else due to having fewer constraints and more control over the environment.
You can, but it's never free, rarely cheap (and the costs are paid throughout the software's lifetime), and the gains aren't all that large (on average). The question isn't "is it possible to write something faster?" but "can you get sufficient gains at a justifiable cost?", and that's already hard and getting harder.
This criticism always forgets that Java is how most folks used to program in ARM-era C++, that 100% of the 1990s GUI frameworks were written in C++, and that the GoF book used C++ and Smalltalk, predating Java by a couple of years.
I don't know what plb2 is, but the benchmark game can demonstrate very little, because the benchmarks are small and uninteresting compared to real programs (I believe there's not a single one with concurrency, plus there's no measure of effort in such small programs) and they compare different algorithms against each other.
For example, what can you learn from the Java vs. C++ comparison? In 7 out of 10 benchmarks there's no clear winner (the programs in one language aren't faster than all programs in the other), and what can you generalise from the 3 where C++ wins? There just isn't much signal there in the first place.
The Techempower benchmarks explore workloads that are probably more interesting, but they also compare apples to oranges. As with the benchmark game, the only conclusion you could conceivably generalise (in an age of optimising compilers, CPU caches, and machine-learning branch predictors, all affected by context) is that C++ (or Rust) and Java are about the same: there are no benchmarks in which all C++ or Rust frameworks are faster than all Java ones or vice versa, so there's no way of telling whether there is some language advantage or whether particular optimisation work helped a specific benchmark (you could try looking at variances, but given the lack of a rigorous comparison, that's probably also meaningless). The differences there are obviously within the level of noise.
Companies that care about and understand performance pick languages based on their own experience and experiments, hopefully ones that are tailored to their particular program types and workloads.
https://wiki.c2.com/?SufficientlySmartCompiler