Boostr



I'm running the following Java code on a laptop with a 2.7 GHz Intel Core i7. I intended it to measure how long it takes to finish a loop with 2^32 iterations, which I expected to be roughly 1.6 seconds (2^32 / 2.7×10⁹ ≈ 1.59, assuming one iteration per clock cycle).

But it actually takes only about 2 milliseconds instead of ~1.6 s. Is this the result of some JVM optimization underneath?

public static void main(String[] args)
{
    long start = System.nanoTime();

    for (int i = Integer.MIN_VALUE; i < Integer.MAX_VALUE; i++){
    }
    long finish = System.nanoTime();
    long d = (finish - start) / 1000000;

    System.out.println("Used " + d);
}
asked Dec 30, 2017 in Core java by Amrendra



2 Answers

One of two things is going on here:

The compiler realized that the loop is redundant and does nothing, so it optimized it away.

The JIT (just-in-time compiler) realized that the loop is redundant and does nothing, so it optimized it away.

Modern compilers are very intelligent; they can see when code is useless. Try putting an empty loop into Godbolt (Compiler Explorer) and looking at the output, then turn on -O2 optimizations; you will see that the output is something along the lines of

main():
    xor eax, eax
    ret
To clarify one thing: in Java, most of the optimizations are done at runtime by the JIT. In some other languages (like C/C++), most of the optimizations are done ahead of time by the compiler.
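One way to see this from the Java side is to give the loop an observable result, which prevents the JIT from discarding it as dead code. This is a rough sketch, not a rigorous benchmark (the class name LoopTiming is just for illustration, and a sufficiently clever compiler could in principle still reduce the sum to a closed form, though HotSpot generally does not):

```java
public class LoopTiming {
    public static void main(String[] args) {
        long start = System.nanoTime();
        long sum = 0;
        for (int i = Integer.MIN_VALUE; i < Integer.MAX_VALUE; i++) {
            sum += i; // the sum is printed below, so the loop has an observable effect
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Used " + elapsedMs + " ms, sum = " + sum);
    }
}
```

With the result actually used, the timing lands in the expected range of seconds rather than milliseconds.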
answered Dec 30, 2017 by Amrendra
I'll just state the obvious: this is a JVM optimization, and the loop is simply removed entirely. Here is a small test that shows what a huge difference the JIT makes when it is fully enabled, enabled only up to the C1 compiler, and disabled altogether.

Disclaimer: don't write benchmarks like this; this is just to prove that the actual loop "removal" happens in the C2 compiler:

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Fork;

public class Loop {

    @Benchmark
    @Fork(1)
    public void full() {
        long result = 0;
        for (int i = Integer.MIN_VALUE; i < Integer.MAX_VALUE; i++) {
            ++result;
        }
    }

    @Benchmark
    @Fork(1)
    public void minusOne() {
        long result = 0;
        for (int i = Integer.MIN_VALUE; i < Integer.MAX_VALUE - 1; i++) {
            ++result;
        }
    }

    @Benchmark
    @Fork(value = 1, jvmArgsAppend = { "-XX:TieredStopAtLevel=1" })
    public void withoutC2() {
        long result = 0;
        for (int i = Integer.MIN_VALUE; i < Integer.MAX_VALUE - 1; i++) {
            ++result;
        }
    }

    @Benchmark
    @Fork(value = 1, jvmArgsAppend = { "-Xint" })
    public void withoutAll() {
        long result = 0;
        for (int i = Integer.MIN_VALUE; i < Integer.MAX_VALUE - 1; i++) {
            ++result;
        }
    }
}
The results show that, depending on which parts of the JIT are enabled, the method gets faster and faster (so much faster that it looks like it's doing "nothing"). The loop removal appears to happen in the C2 compiler, the highest compilation tier:

 Benchmark        Mode  Cnt      Score   Error  Units
 Loop.full        avgt    2     ≈ 10⁻⁷          ms/op
 Loop.minusOne    avgt    2     ≈ 10⁻⁶          ms/op
 Loop.withoutAll  avgt    2  51782.751          ms/op
 Loop.withoutC2   avgt    2   1699.137          ms/op
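Without JMH, you can get a rough confirmation of the same effect by timing the original empty loop once normally and once with -Xint (pure interpreter, no JIT at all). This is only a sketch; the class name EmptyLoop is just for illustration:

```java
// Run twice and compare:
//   java EmptyLoop        (JIT enabled: finishes in milliseconds)
//   java -Xint EmptyLoop  (interpreter only: takes many seconds)
public class EmptyLoop {
    public static void main(String[] args) {
        long start = System.nanoTime();
        for (int i = Integer.MIN_VALUE; i < Integer.MAX_VALUE; i++) {
            // empty body: with the JIT on, this loop is compiled away
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Used " + elapsedMs + " ms");
    }
}
```

The gap between the two runs mirrors the withoutAll row in the JMH table above.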
answered Dec 30, 2017 by Amrendra
