Micro-experimentation Tools in Java 9


I jumped over to IntelliJ two years ago, and I've been really happy with my choice. However, there is one thing that really irritates me.

Reindexing.

We have a big codebase at my workplace, and when IntelliJ decides to reindex, it can take a *while*. In the worst case, my IDE is unavailable for 10 minutes, but it's most painful in the common case, which is about 30-45 seconds. That's the perfect amount of time to distract me and break my concentration--I check Slack, then my email, check back, get caught up in a customer issue, and by the time I come back, I've forgotten what I was working on and need to spend more time remembering where I was at!


Anything that breaks flow is a frustration, and there are two flow-breakers that I've been thinking about as I've been playing around with the new Java 9 release. The first is one that we all know and love: Java's verbosity. The other is one that you might not have consciously run into yet: JVM optimizations.

JShell

Have you ever tried to do a quick experiment with a new Java library just to see how it works? Creating classes and methods and even variables can get cumbersome when you are just in exploratory mode. As of Java 9, Java has finally joined the ranks of programming languages with a REPL. Sweet!

Working in a REPL is so refreshing because I can simply call a method and see what it does. It's a great way to explore the cool new features in Java 9, like the fact that I can finally create a map and its contents on a single line. Genius!


jshell> Map.of("Why", "did", "this", "take", "so", "long?");
$1 ==> {Why=did, so=long?, this=take}

jshell> Map.ofEntries(
   ...>   Map.entry("verbose_languages", Arrays.asList("Java")),
   ...>   Map.entry("terse_languages", Arrays.asList("Scala")));
$2 ==> {terse_languages=[Scala], verbose_languages=[Java]}

In JShell, I can play with this to my heart's content without needing to create a file, create a class, create a main method, compile, run, compile, run, compile, run, and then delete the file.
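One detail worth knowing about these factory methods (easy to confirm in JShell, too): the maps they return are immutable. A quick sketch outside the REPL, with a class name of my own choosing:

```java
import java.util.Map;

public class MapOfDemo {
    public static void main(String[] args) {
        // Three key-value pairs, built in one line
        Map<String, String> map = Map.of("Why", "did", "this", "take", "so", "long?");
        System.out.println(map.size()); // prints 3

        // Maps from Map.of are immutable: mutating them throws
        try {
            map.put("another", "entry");
        } catch (UnsupportedOperationException e) {
            System.out.println("immutable!"); // prints "immutable!"
        }
    }
}
```

Of course, the whole point of JShell is that I could have verified this in two lines at the prompt instead of writing a class at all.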

JMH

How about trying to find out which algorithm or library is faster/better for your use case? If your algorithm executes in the microsecond range (or less), JVM optimizations start turning into noise, making it more difficult to make a scientific assessment.

I was very surprised by the outcomes Julien Ponge explained in his article about the trouble with writing benchmarks in Java. Here is a fun experiment to try, using three implementations of an algorithm:


private static double mySqrt(double what) {
   // sqrt(x) == e^(0.5 * ln x)
   return Math.exp(0.5 * Math.log(what));
}

private static double javaSqrt(double what) {
   return Math.sqrt(what);
}

private static double constant(double what) {
   return what;
}

Create a simple benchmark that runs all three of these in series, comparing their performance by snapping time at the beginning and ending of each test.

If you want, you can use mine:

https://github.com/jzheaux/micro-experimentation/blob/master/04-jmh/cat-genealogy/src/main/java/com/joshcummings/cats/LoftyBenchmark.java
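Or, if you'd rather roll your own, a naive harness might look like the sketch below (the names are mine, not from the repository above, and this is exactly the kind of hand-rolled benchmark that JMH exists to replace):

```java
import java.util.function.DoubleUnaryOperator;

public class NaiveBenchmark {

    // sqrt(x) == e^(0.5 * ln x)
    private static double mySqrt(double what) {
        return Math.exp(0.5 * Math.log(what));
    }

    private static double javaSqrt(double what) {
        return Math.sqrt(what);
    }

    private static double constant(double what) {
        return what;
    }

    // Snap the time before and after a tight loop -- the naive
    // approach that JVM optimizations can quietly invalidate.
    private static long time(DoubleUnaryOperator f) {
        double sink = 0;
        long start = System.nanoTime();
        for (int i = 1; i <= 10_000_000; i++) {
            sink += f.applyAsDouble(i);
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("sink = " + sink); // use the result so the loop isn't trivially dead
        return elapsed;
    }

    public static void main(String[] args) {
        System.out.println("mySqrt:   " + time(NaiveBenchmark::mySqrt) + " ns");
        System.out.println("javaSqrt: " + time(NaiveBenchmark::javaSqrt) + " ns");
        System.out.println("constant: " + time(NaiveBenchmark::constant) + " ns");
    }
}
```

The back-to-back ordering and the shared JIT warmup are precisely what make these numbers untrustworthy, which is the point of the experiment.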

Crazy but true: Java 8 and earlier will show the silly square root to be the fastest and constant to be the slowest! (You still see it in Java 9, too, but the behavior is less pronounced.)

Learn More


More about each of these can be found in my latest Pluralsight video: Micro-experimentation Tools in Java 9. I'd love your feedback!


Curious JMH results between Java 8 and Java 9


I've recently been playing around with JMH, doing some comparisons between Java 8 and Java 9. I wrote the following toy benchmark, learning from the example that Julien Ponge wrote up in his article Avoiding Benchmarking Pitfalls on the JVM. This is my simple attempt to apply its principles:



import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
@Fork(1)
public class BenchmarkComparison {
  public double sqrt(double what) {
    return Math.exp(0.5 * Math.log(what));
  }

  private double what = 10d;

  // Reference point: the JDK's own square root
  @Benchmark
  public double baseline() {
    return Math.sqrt(what);
  }

  // Correct: reads non-constant state and returns the result
  @Benchmark
  public double correct() {
    return sqrt(what);
  }

  // Pitfall: a literal argument invites constant folding
  @Benchmark
  public double constantFolding() {
    return sqrt(10d);
  }

  // Pitfall: a discarded result invites dead-code elimination
  @Benchmark
  public void deadCodeElimination() {
    sqrt(what);
  }

  // Pitfall: both at once
  @Benchmark
  public void deadCodeAndFolding() {
    sqrt(10d);
  }
}


Julien's post is intended to demonstrate common pitfalls that Java engineers fall into when it comes to benchmarking; three of the methods above show incorrect ways to create a benchmark. I invite you to read his informative post for more background, if you like.

Running the benchmark above in Java 8, I get the following results:


And here are the results in Java 9 on the same machine:


While this is a great example of why benchmarks need to be run on consistent JVM versions, what interests me more is why the results in Java 9 are so much "smoother". Why are they even in the same order of magnitude?

I don't have the example handy, but I had a similar experience with Julien's very first experiment, which runs several benchmarks in the same JVM run (a "no no"). In Java 8, I saw the same behavior he described, but in Java 9, I didn't until I added a third test to the benchmark. With only two, I didn't see the dramatic performance degradation.

Any ideas?

