Using AssertJ to Verify a Complex Exception

AssertJ just went up another notch in my book today

For a particular unit test, I needed to verify the contents of an exception. Originally, I figured said contents were just part of the exception message:


assertThatCode(() -> doMyExceptionalThing())
    .isInstanceOf(SomeException.class)
    .hasMessage("This is the error message");


However, that assertion failed because, when this particular exception has a cause, it has two messages to pick from (more on that in a moment), and it chooses the cause's message instead of the message I wanted.
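
For context, here's a rough sketch of what such an exception might look like (a guess at the shape, reusing the placeholder names from above, not the real class):

// Hypothetical sketch: getMessage() prefers the cause's message when a cause
// is present, which is why hasMessage() wasn't seeing the description I wanted.
public class SomeException extends RuntimeException {
  private final ErrorDomainObject error;

  public SomeException(ErrorDomainObject error, Throwable cause) {
    super(cause != null ? cause.getMessage() : error.getDescription(), cause);
    this.error = error;
  }

  public ErrorDomainObject getError() {
    return this.error;
  }
}

class ErrorDomainObject {
  private final String description;

  ErrorDomainObject(String description) {
    this.description = description;
  }

  public String getDescription() {
    return this.description;
  }
}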

I could do something like this to get it to work:


try {
    doMyExceptionalThing();
    fail("expected SomeException"); // otherwise the test silently passes when nothing is thrown
} catch ( SomeException ex ) {
    ErrorDomainObject o = ex.getError();

    // this is the description I need,
    // not the cause's description!
    assertThat(o.getDescription())
        .isEqualTo("This is the error message");
}


But that would break my heart. (And we wouldn't want that... just to clarify)

So, I tried some of AssertJ's matching features:


assertThatCode(() -> doMyExceptionalThing())
    .isInstanceOf(SomeException.class)
    .matches(ex -> "This is the error message"
        .equals(
            ((SomeException) ex)
                .getError()
                .getDescription()));


Yuck!

And then, I came upon hasFieldOrPropertyWithValue. Hmm...

Could something like this possibly work?


assertThatCode(() -> doMyExceptionalThing())
    .isInstanceOf(SomeException.class)
    .hasFieldOrPropertyWithValue(
        "error.description",
        "This is the error message");


And it did! All hail AssertJ for helping me create a clean, readable assertion, even in this slightly more complex scenario.

Configuring Wacom Tablet in Ubuntu 16.04 for Large Monitors

This morning, I needed to sign a document for work. I hate the process of printing something out, signing it, scanning it, and mailing it back.

So I busted out my Wacom Tablet and plugged it into my Linux box and, lo, it recognized it!

The problem, though, was that writing in my typical lettering (maybe about 1/4 inch high on paper) was coming out 4-5 times that size on the screen, making it impractical to add my signature to the form.

Long story short, I wasn't able to find a setting to change the pen-distance-to-mouse-distance ratio, but I did find that increasing the area the pen is mapped to did the trick (I ended up at five times the default): with a larger mapped area, the same physical pen stroke covers fewer pixels on screen.

So, here is what I did:


> xsetwacom --list devices

Wacom Bamboo Connect Pen stylus  id: 11 type: STYLUS
Wacom Bamboo Connect Pen eraser  id: 18 type: ERASER    
Wacom Bamboo Connect Pad pad     id: 20 type: PAD

> xsetwacom --get 11 Area

Option "Area" "0 0 14720 9200"

> xsetwacom --set 11 Area "0 0 73600 46000"

At that point, the text looked great!

Integrating spring-security with spring-kafka

It's not uncommon for a message on a bus to have a user as part of its metadata. In my particular example at my workplace, we had the following very simple use case:

A user creates a task in our system (not unlike a task in, say, Todoist), and the creation of that task is written to a Kafka topic for propagation to other systems. One service consumes this topic to translate each message into an Elasticsearch record. That service finds it useful to know which user in our application created the task.

To achieve this, we at least need to have user information in the message. And it would be nice for the platform to take care of this concern for us.

In our case, the currently logged-in user is available through the Spring Security API, so ideally we'd configure Spring Kafka to pull the user from the Spring Security SecurityContext when producing messages and to restore it to the SecurityContext when consuming them.

Spring Kafka makes this simple.

Augmenting Kafka Messages with the Logged In User


First, we need to attach the logged-in user to each Kafka message. The way we did this was by extending MessagingMessageConverter:

public class SpringSecurityAwareMessagingMessageConverter
    extends MessagingMessageConverter {
  @Override
  protected Object convertPayload(Message<?> message) {
    String payload = (String) super.convertPayload(message);
    Authentication auth =
      SecurityContextHolder.getContext().getAuthentication();
    if ( auth != null && auth.isAuthenticated() ) {
      return new AsUser(payload, auth);
    } else {
      return new AsUser(payload, null);
    }
  }
}

This converter wraps the payload on the way out in an envelope that contains the user as pulled from the Security Context. While we probably don't want to simply throw the entire Authentication object in our message, I've done it here just to keep the code simple.
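
The AsUser envelope itself isn't anything special; a minimal sketch (with field and accessor names assumed) might be:

// Minimal sketch of the AsUser envelope (assumed shape, not the original class).
public class AsUser {
  private final String message;       // the original payload
  private final Authentication user;  // who produced it, if anyone

  public AsUser(String message, Authentication user) {
    this.message = message;
    this.user = user;
  }

  public String getMessage() {
    return this.message;
  }

  public Authentication getUser() {
    return this.user;
  }
}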

Setting Up a Security Context Based on each Kafka Message


Second, we need to unwrap the message. We can do this in the same message converter, too, placing the user in the SecurityContext:

@Override
protected Object extractAndConvertValue
    (ConsumerRecord<?, ?> record, Type type) {
  Object value = super.extractAndConvertValue(record, type);
  if ( value instanceof AsUser ) {

    UsernamePasswordAuthenticationToken token =
      new UsernamePasswordAuthenticationToken
        (((AsUser) value).getUser(), null, new ArrayList<>());

    SecurityContextHolder.getContext()
      .setAuthentication(token);

    return ((AsUser) value).getMessage();
  }
  return value;
}

We also do some cleanup once the method invocation is completed in case the same thread is used to process another message:

public class SpringSecurityAwareMessageHandlerFactory
    extends DefaultMessageHandlerMethodFactory {
  @Override
  public InvocableHandlerMethod
    createInvocableHandlerMethod(Object bean, Method method) {

    InvocableHandlerMethod m =
      new InvocableHandlerMethod(bean, method) {
        @Override
        public Object invoke
          (Message<?> message, Object... providedArgs)
          throws Exception {

          try {
            return super.invoke(message, providedArgs);
          } finally {
            // don't leak this message's user to the next message
            // processed on the same thread
            SecurityContextHolder.clearContext();
          }
        }
      };

    HandlerMethodArgumentResolverComposite handlers =
      new HandlerMethodArgumentResolverComposite();

    handlers.addResolvers(initArgumentResolvers());

    m.setMessageMethodArgumentResolvers(handlers);

    return m;
  }
}
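
Wiring these two classes into Spring Kafka isn't shown above. Roughly, the configuration might look something like this (the bean names, the String payload type, and the container-factory setup here are just illustrative):

@Configuration
@EnableKafka
public class KafkaSecurityConfig implements KafkaListenerConfigurer {

  @Bean
  public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
      ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
        new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // unwraps AsUser and populates the SecurityContext when consuming
    // (newer spring-kafka versions name this setter setRecordMessageConverter)
    factory.setMessageConverter(new SpringSecurityAwareMessagingMessageConverter());
    return factory;
  }

  @Override
  public void configureKafkaListeners(KafkaListenerEndpointRegistrar registrar) {
    // clears the SecurityContext after each @KafkaListener invocation
    registrar.setMessageHandlerMethodFactory(messageHandlerMethodFactory());
  }

  @Bean
  public SpringSecurityAwareMessageHandlerFactory messageHandlerMethodFactory() {
    return new SpringSecurityAwareMessageHandlerFactory();
  }
}

On the producing side, the same converter can also be set on the KafkaTemplate, so payloads sent as Message instances get wrapped in the envelope automatically.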

The nice thing about this approach is that it abstracts away how the user is transported. Also, consumers can reuse, or otherwise exercise, code that already relies on the SecurityContextHolder to derive who the user is.
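
With that in place, a consumer stays completely unaware of the envelope. For example (hypothetical topic and class names):

public class TaskIndexingListener {

  // "task-created" is a hypothetical topic name
  @KafkaListener(topics = "task-created")
  public void onTaskCreated(String task) {
    // populated by the converter before this method is invoked
    Authentication user =
        SecurityContextHolder.getContext().getAuthentication();

    // ... index the task into Elasticsearch, attributing it to 'user' ...
  }
}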

Never Trust the Client


Of course, there are problems with this approach. Hypothetically, anyone with access to the Kafka cluster can write messages to this topic and claim to be any user. If this is a concern, then we can use a claim-based approach and transmit, say, a signed JWT as the user, which the consumer can validate against the issuer, depending on its needs. This will definitely slow down processing, so you'll have to weigh the benefits.
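
As a sketch of that claim-based idea, the envelope could carry a signed JWT instead of the Authentication object, and the consumer could verify it before trusting it. Something like this, assuming a newer Spring Security (the spring-security-oauth2-jose module) and a made-up JWK Set URI:

public class JwtVerifyingUserExtractor {

  // hypothetical issuer JWK Set endpoint
  private final JwtDecoder jwtDecoder = NimbusJwtDecoder
      .withJwkSetUri("https://issuer.example.com/.well-known/jwks.json")
      .build();

  public Authentication toAuthentication(String token) {
    // throws JwtException if the signature or expiry checks fail
    Jwt jwt = this.jwtDecoder.decode(token);
    return new UsernamePasswordAuthenticationToken(
        jwt.getSubject(), null, new ArrayList<>());
  }
}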

I've posted the code as an example in my GitHub repo. Enjoy!




Micro-experimentation Tools in Java 9

I jumped over to IntelliJ two years ago, and I've been really happy with my choice. However, there is one thing that really irritates me.

Reindexing.

We have a big codebase at my workplace, and when IntelliJ decides to reindex, it can take a *while*. In the worst case, my IDE is unavailable for 10 minutes, but the common case of 30-45 seconds is actually the more painful one. 30-45 seconds is the perfect amount of time to distract me and break my concentration: I check Slack and my email, check back, get caught up in a customer issue, and by the time I come back, I've forgotten what I was working on and need to spend more time remembering where I was!


Anything that breaks flow is a frustration, and there are two flow-breakers that I've been thinking about as I've been playing around with the new Java 9 release. The first is one that we all know and love: Java's verbosity. The other is one that you might not have consciously run into yet: JVM optimizations.

JShell

Have you ever tried to do a quick experiment with a new Java library just to see how it works? Creating classes and methods and even variables can get cumbersome when you are just in exploratory mode. As of Java 9, Java has finally joined the ranks of programming languages with a REPL. Sweet!

Working in a REPL is so refreshing because I can simply call a method and see what it does. Using it, I can explore cool new features in Java 9, like the fact that I can finally create a map and its contents on a single line. Genius!


jshell> Map.of("Why", "did", "this", "take", "so", "long?");
$1 ==> {Why=did, so=long?, this=take}

jshell> Map.ofEntries(
   ...>   Map.entry("verbose_languages", Arrays.asList("Java")),
   ...>   Map.entry("terse_languages", Arrays.asList("Scala")));
$2 ==> {terse_languages=[Scala], verbose_languages=[Java]}

In JShell, I can play with this to my heart's content without needing to create a file, create a class, create a main method, compile, run, compile, run, compile, run, and then delete the file.

JMH

How about trying to find out which algorithm or library is faster/better for your use case? If your algorithm executes in the microsecond range (or less), JVM optimizations start turning into noise, making it more difficult to make a scientific assessment.

I was very surprised by the outcomes Julien Ponge explained in his article about the trouble with writing benchmarks in Java. Here is a fun experiment to try, using these three implementations of an algorithm:


private static final double factor = 0.5; // e^(0.5 * ln x) = sqrt(x)

private static double mySqrt(double what) {
    return Math.exp(factor * Math.log(what));
}

private static double javaSqrt(double what) {
    return Math.sqrt(what);
}

private static double constant(double what) {
    return what;
}

Create a simple benchmark that runs all three of these in series, comparing their performance by snapping time at the beginning and ending of each test.

If you want, you can use mine:

https://github.com/jzheaux/micro-experimentation/blob/master/04-jmh/cat-genealogy/src/main/java/com/joshcummings/cats/LoftyBenchmark.java
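
If you'd rather hand-roll the time-snapping version described above, a minimal sketch might look like this (the iteration count and structure are arbitrary; it's exactly the kind of benchmark JMH exists to replace):

public class NaiveBenchmark {
  private static final int ITERATIONS = 50_000_000;
  private static final double factor = 0.5;

  public static void main(String[] args) {
    // snap time around each test and print the elapsed milliseconds
    long start = System.nanoTime();
    double r1 = testMySqrt();
    System.out.printf("mySqrt:   %d ms (%f)%n", (System.nanoTime() - start) / 1_000_000, r1);

    start = System.nanoTime();
    double r2 = testJavaSqrt();
    System.out.printf("javaSqrt: %d ms (%f)%n", (System.nanoTime() - start) / 1_000_000, r2);

    start = System.nanoTime();
    double r3 = testConstant();
    System.out.printf("constant: %d ms (%f)%n", (System.nanoTime() - start) / 1_000_000, r3);
  }

  // each test accumulates into a result so the work isn't trivially dead code
  private static double testMySqrt() {
    double d = 0;
    for (int i = 1; i <= ITERATIONS; i++) d += mySqrt(i);
    return d;
  }

  private static double testJavaSqrt() {
    double d = 0;
    for (int i = 1; i <= ITERATIONS; i++) d += javaSqrt(i);
    return d;
  }

  private static double testConstant() {
    double d = 0;
    for (int i = 1; i <= ITERATIONS; i++) d += constant(i);
    return d;
  }

  private static double mySqrt(double what) {
    return Math.exp(factor * Math.log(what));
  }

  private static double javaSqrt(double what) {
    return Math.sqrt(what);
  }

  private static double constant(double what) {
    return what;
  }
}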

Crazy but true: Java 8 and earlier will show the silly square root to be the fastest and the constant to be the slowest! (You still see it in Java 9, too, but the behavior is less pronounced.)

Learn More


More about each of these can be found in my latest Pluralsight video: Micro-experimentation Tools in Java 9. I'd love your feedback!

Curious JMH results between Java 8 and Java 9

I've recently been playing around with JMH and doing some comparisons between Java 8 and Java 9. I wrote the following toy benchmark, learning from the example that Julien Ponge wrote up in his article Avoiding Benchmarking Pitfalls on the JVM. This is my simple attempt to apply the principle:



import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Scope;

@State(Scope.Benchmark)
@Fork(1)
public class BenchmarkComparison {
  public double sqrt(double what) {
    return Math.exp(0.5 * Math.log(what));
  }

  private double what = 10d;

  @Benchmark
  public double baseline() {
    return Math.sqrt(what);
  }

  @Benchmark    
  public double correct() {
    return sqrt(what);
  }

  @Benchmark
  public double constantFolding() {
    return sqrt(10d);
  }

  @Benchmark
  public void deadCodeElimination() {
    sqrt(what);
  }

  @Benchmark
  public void deadCodeAndFolding() {
    sqrt(10d);
  }
}


Julien's post demonstrates common pitfalls that Java engineers fall into when it comes to benchmarking; three of the methods above illustrate incorrect ways to write a benchmark. I invite you to read his informative post for more background, if you like.

Running this JMH benchmark in Java 8, I get the following results:


And here are the results in Java 9 on the same machine:


While this is a great example of why benchmarks need to be run on consistent JVM versions, what interests me more is why the results in Java 9 are so much "smoother". Why are they even in the same order of magnitude?

I don't have the example handy, but I had a similar experience with Julien's very first experiment, which runs several benchmarks in the same JVM run (a "no-no"). In Java 8, I saw the same behavior as Julien, but in Java 9, I didn't until I added a third test to the benchmark. With only two, I didn't see the dramatic performance degradation.

Any ideas?


Published Author! Check Out Scaling Java Applications Through Concurrency

I'm very excited to announce that my first Pluralsight course has just been published! You can check out Scaling Java Applications Through Concurrency:

https://app.pluralsight.com/library/courses/scaling-java-applications-through-concurrency

If you happen to have a Pluralsight membership, I would love to get your feedback!

Here is the course description from the website:

"There are several gems inside the existing concurrency API that have been hiding in the background for years, waiting to be discovered by curious software engineers. The existing Java Concurrency API makes it much easier to build a Java application that is scalable and performant without having to settle for lots of low-level wait-notify usage or lots of locking using the synchronized keyword. In this course, Scaling Java Applications Through Concurrency, you'll cover several concurrency patterns simplified by the Java Concurrency API; these patterns will make scaling new and existing Java applications simpler than ever. First, you'll learn about how the Java Concurrency API has changed scalability and how to run processes in the background. Next, you'll cover classes that will help you avoid mistakes like lost updates when sharing resources. Finally, you'll discover how to coordinate dependent processes and implementing throttling. By the end of this course, you will be able to easily scale your Java applications through concurrency so that they work better and faster."

I'd like to give a special thanks to Brian Goetz and his book Java Concurrency in Practice, as well as the collective knowledge in online blogs and, yes, StackOverflow. I feel like I learned so much producing the course, and I hope that you get as much out of it as I did.

DVWA 1.9: File Inclusion Medium and High

Although I've studied and practiced secure coding standards for some time now, I had yet to try my hand at the offensive approach before last Friday when I downloaded DVWA and started working on the exercises.

File Inclusion

The file inclusion exercises were unexpectedly eye-opening. Initially, I thought: "Directory traversal, get the etc/passwd file, etc., etc., not much here I don't already know." Then I stumbled into Ashfaq Ansari's walkthrough of File Inclusion and Log Poisoning on DVWA Low, which showed, to my astonishment, how one could use this security hole to poison the logs and subsequently upload a PHP shell to the DVWA server.

Clever. Not bad for a day's work, right?

Medium Level

Thanks to Mr. Ansari, I learned a lot more than I thought I would about the dangers of file inclusion security holes; however, there was more to come. On the medium level, the same directory traversal attack initially seems defended against with the following code:

$file = str_replace( array("http://", "https://"), "", $file);
$file = str_replace( array("../", "..\""), "", $file);

Now, the URL parameter value "../../../../../etc/passwd" will instead be transformed into etc/passwd, and nothing will show:


Blacklisting is hard, though, and a single-pass search and replace cannot remove all ills. Consider, for example, what happens when you strip "http://" out of "hthttp://tp://": you are, of course, left with "http://", the very thing you were trying to keep out of the string in the first place!
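
You can convince yourself of that single-pass behavior quickly; Java's String.replace behaves the same way for this input, so a JShell one-liner demonstrates it:

jshell> "hthttp://tp://".replace("http://", "")
$1 ==> "http://"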

So, if all the code is going to do is remove the "../" instances from the string, we simply need to construct a string that leaves "../" instances behind in the wake of the search and replace; e.g., "....//....//....//....//....//etc/passwd" or "..././..././..././..././..././etc/passwd" will both do fine.


Now, the same steps of log poisoning and shell uploading can again be performed with relative ease.

The right way to defend against this is whitelisting, which the higher levels of this exercise employ.

High Level

Actually, I'm not quite certain how to leverage this yet, but I thought I'd post some of my initial thoughts. The defense against file inclusion in the high level is incomplete because unintended patterns can get past it:

if ( !fnmatch("file*", $file) && $file != "include.php" ) {
    echo "ERROR: File not found!";
    exit;
}

Here, the fnmatch pattern allows for the file protocol, e.g. page=file:///etc/passwd. Since this would simply serve files from the user's local machine, I'm not sure what could be done with it, but I found it interesting.