I don't know what plb2 is, but the benchmarks game can demonstrate very little because the benchmarks are small and uninteresting compared to real programs (I believe there's not a single one with concurrency, plus there's no measure of effort in such small programs), and they compare different algorithms against each other.
For example, what can you learn from the Java vs. C++ comparison? In 7 out of 10 benchmarks there's no clear winner (the programs in one language aren't all faster than all the programs in the other), and what can you generalise from the 3 where C++ wins? There just isn't much signal there in the first place.
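To make the "no clear winner" criterion concrete, here's a minimal sketch with made-up timings (not real benchmark data): one language only "wins" a benchmark if every one of its submissions beats every submission in the other language, i.e. the timing ranges don't overlap.

```python
# Hypothetical timings in seconds for one task; purely illustrative numbers.
cpp_times = [0.82, 0.95, 1.40, 2.10]   # several C++ submissions
java_times = [1.10, 1.30, 1.75, 2.60]  # several Java submissions

def clear_winner(a, b):
    # True only if the ranges don't overlap at all:
    # every program in one language is faster than every program in the other.
    return max(a) < min(b) or max(b) < min(a)

print(clear_winner(cpp_times, java_times))  # False: the ranges overlap, so no clear winner
```

In most of the pairings the ranges overlap like this, which is why there's so little to generalise from.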
The Techempower benchmarks explore workloads that are probably more interesting, but they also compare apples to oranges. As with the benchmarks game, the only conclusion you could conceivably generalise (in an age of optimising compilers, CPU caches, and machine-learning branch predictors, all affected by context) is that C++ (or Rust) and Java are about the same: there are no benchmarks in which all C++ or Rust frameworks are faster than all Java ones or vice versa, so there's no way of telling whether a given result reflects some language advantage or particular optimisation work done for that specific benchmark (you could try looking at variances, but given the lack of a rigorous comparison, that's probably also meaningless). The differences there are obviously within the level of noise.
Companies that care about and understand performance pick languages based on their own experience and experiments, hopefully ones that are tailored to their particular program types and workloads.