What I learned at Java One, Part I - Security


This week, I had the opportunity to attend my first JavaOne conference.  In a word:  Firehose.  Over the next few days, I'm going to try to summarize, by topic, the incredible amount of information I was given.  This is the first in the series.

There were a lot more talks on security than I expected, and I went to several of them (a couple of them to my dismay).

Cloud Identity Management, by Anil Saldhana

Possibly the most interesting thing that I got from Anil's talk was his by-the-way-reference to PicketLink, a JBoss Identity Federation server.  I took a look after the session, and found what looks like a very promising application.  We currently license a commercial, closed-source product, and PicketLink appears to have a great deal of what we would need in order to switch:
  • Management Console for configuring SPs, IDPs, Certificates, Attributes, etc.
  • Support for SAML 2.0, 1.1, and WS-Trust
  • Support for Username/Password auth (via PicketBox APIs)
Plus, the added bonus of future support for OAuth, which we are not using yet, but plan to in the future.

It really is tricky, though, to find one product that does everything I'd like it to.  For example, I like the approach of an authentication reverse proxy, kind of like what Oracle WAM does, where it authenticates the user and then passes HTTP headers specifying the user's identity down to the proxied application.  I'd also like it to have some support for mobile devices.  Maybe a REST service that takes the device's UDID, username, and password on first authentication, and just the UDID thereafter.  Or make it easy for me to add a service like that myself.

Anyway, I digress.  Anil mostly talked about IdM 101 type stuff, so I didn't learn a whole bunch in that regard, but the product he referred to really got me thinking.

Protecting Machine-Only Processed Data, by Noam Camiel

I didn't realize that I was stepping into a vendor session until I sat down.  I didn't want to be impolite, so I figured I would stick around to hear what he had to say.

Basically, he has these innovative black boxes that have no external dependencies and contain within them secure, encrypted information, like passwords, certs, etc.  Any operation where access to those values is needed, like password comparison, is done in the box as well.  Noam's concept is that the secure data goes in, but it never comes out (other than for replication in aggregate).

Cool idea.  Worth checking into more.

IdM Expert Panel, Ludovic Poitou, Matt Hardin, Petr Jakobi, and Shawn McKinney

I was really excited about this one, but it turned out to be a facade for a bunch of vendors to brag about their products.  Ugh.  Afterwards, they completely shut down one of the attendees (Anil from JBoss, no less), basically saying that the business requirement he gave them was not their problem.  McKinney seemed to really want to control the narrative.  Double ugh.  It sounds like the panel already knows what they want to build, and they are going to do it...

Anyway, each expert had their own product that they were pulling for, and a couple sounded compelling.  I downloaded OpenAM (Poitou) and read through a tutorial; I was excited to hear that it is a continuation of the OpenSSO product.  I also perused the Fortress site (McKinney) with reckless abandon.  The identity space is definitely one where open source is far behind the commercial folks; we'll see if we can catch up.

Securing Apps with HTTP Headers, by Frank Kim

I was excited to hear from this guy as I had taken a SANS certification class from him in the past, and I knew that he was a good teacher.  I was not disappointed.  I thought I knew something about securing web applications, but I was wrong.  :)

Kim talked about three kinds of attacks that engineers can defend against simply by using certain HTTP headers in the server response.

Cross-Site Scripting (XSS)

XSS is when a hacker finds a way to execute arbitrary JavaScript on your website.  By executing arbitrary JavaScript, the hacker can steal session cookies, initiate Cross-Site Request Forgery, and do a host of other nasty things.  Kim mentioned three headers here to prevent XSS.

1.  HttpOnly flag

This is one that we already use in our company for sensitive cookies like session cookies.  HttpOnly is a flag that you place in your Set-Cookie header whenever you send a cookie down from the server.  This flag makes it so that JavaScript cannot read the cookie, which prevents an XSS attack from being able to steal it.
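
As a sketch (the helper below is my own illustration, not from Kim's talk), the flag is just an attribute appended to the Set-Cookie header value; Servlet 3.0+ users can call Cookie#setHttpOnly(true) instead of building it by hand:

```java
// Minimal sketch: HttpOnly is just an attribute on the Set-Cookie value.
// The header format is standard; this helper class is illustrative only.
public class SessionCookie {
    static String setCookieValue(String sessionId) {
        return "JSESSIONID=" + sessionId + "; Path=/; HttpOnly";
    }

    public static void main(String[] args) {
        // What the server sends; document.cookie in the browser won't include it.
        System.out.println("Set-Cookie: " + setCookieValue("9f2b7c"));
    }
}
```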

2.  X-XSS-Protection

This header tells the browser to detect reflective XSS attacks.  (I just fixed one of these the other day!)  These are attacks where a given HTTP parameter's contents are written to the page without any intermediate validation or encoding.  It turns out that the latest browser versions have this turned on by default (woohoo!), so you should already be benefiting from it.

Set it to "0" to turn it off (not recommended).  "1" means to render the page but not the bad part if reflective XSS is detected.  "1; mode=block" means to render none of the page if reflective XSS is detected.

3.  Content Security Policy, X-Content-Security-Policy, X-Webkit-CSP

This is cool.  This header allows you to specify hefty restrictions on how the browser will process JavaScript and stylesheets.  At its most secure, it will not render any inline scripts or styles on the page!  There are all kinds of directives that let you tweak it to your needs, including where resources like scripts, stylesheets, images, and fonts can come from.

Because this one can have such a dramatic effect on a large website with millions of lines of code invested, there is also the X-Content-Security-Policy-Report-Only header, which does the same thing, except it only reports the violations to a specified URI instead of refusing to render that content.
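
A minimal sketch of what a policy value looks like, assembled by a hypothetical helper (the directive names are standard CSP; the builder itself is mine, not from the talk).  A servlet would send the result via response.setHeader("Content-Security-Policy", policy), or the Report-Only variant while auditing:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: a Content-Security-Policy value is a semicolon-separated list
// of directives, each a name followed by its allowed sources.
public class CspBuilder {
    static String policy(Map<String, String> directives) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> d : directives.entrySet()) {
            if (sb.length() > 0) sb.append("; ");
            sb.append(d.getKey()).append(' ').append(d.getValue());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> d = new LinkedHashMap<>();
        d.put("default-src", "'self'");    // blocks inline scripts and styles by default
        d.put("img-src", "'self' https://images.example.com");
        System.out.println(policy(d));
    }
}
```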

Session Hijacking

Session Hijacking is when a hacker is able to sniff your connection, like on a public wifi, and steal your session cookie.  The defense here comes in two parts.

First, set the Secure flag on your session cookie.  Like HttpOnly, this is a flag on the Set-Cookie header.  The Secure flag means that the cookie will only be sent on HTTPS requests, never on plain HTTP requests.

Apparently, there is a program out there written by Moxie Marlinspike (who wouldn't want to have that name?) called sslstrip, which can strip the SSL off a request (not quite sure how that works, but I will be watching this video about sslstrip to get a better understanding).  So, a second defense is needed, called Strict-Transport-Security.

This second header mandates that all traffic for a website, regardless of the protocol specified in the link, must be HTTPS.  The format is like this:

Strict-Transport-Security:  max-age=seconds[; includeSubdomains]

This header must be specified over a legitimately-certified HTTPS response, and thereafter (until max-age expires) the browser will send all requests for that domain over HTTPS.
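
A quick sketch of building that value from the format above (the helper is illustrative, not a real API; one year is a commonly used max-age):

```java
// Sketch: assembling the Strict-Transport-Security header value.
public class Hsts {
    static String header(long maxAgeSeconds, boolean includeSubdomains) {
        return "max-age=" + maxAgeSeconds
                + (includeSubdomains ? "; includeSubdomains" : "");
    }

    public static void main(String[] args) {
        // 31536000 seconds = 365 days
        System.out.println("Strict-Transport-Security: " + header(31536000L, true));
    }
}
```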


Clickjacking

The concept of clickjacking had to sink in for a bit before I understood it.  The idea is that there is a button you would like an individual to click, say your Like button on Facebook.  On an unprotected site, an attacker can trick users into clicking your button by placing a decoy button underneath it and making the desired button transparent.

This is done with iframes.  The site that contains the desired button (let's say Facebook in this case) is referenced in an iframe on the attacker's site.  The attacker's site looks like something completely different, maybe a signup form with a button at the end.  So, the attacker's bogus page loads Facebook in an iframe.  He makes the Facebook frame transparent and positions it in just the right spot so that when you click on the bogus page, you actually click the button in the invisible iframe!

Make sense?  It didn't to me for a bit.  Anyway, the defense is another header: X-Frame-Options.

X-Frame-Options is a header to indicate what sites are allowed to have your site inside their iframe.  It can take values DENY, SAMEORIGIN, or ALLOW-FROM.
  • DENY means that no one can put your site in an iframe
  • SAMEORIGIN means that sites from the same domain can put your site in an iframe (recommended)
  • ALLOW-FROM uri means that only the site at the given uri can put your site in an iframe
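
A small sketch of those three choices (the values are standard; this validating helper is hypothetical, just to show the shapes a response header can take):

```java
// Sketch: validating an X-Frame-Options choice before setting it on a response.
public class FrameOptions {
    static String value(String mode, String allowFromUri) {
        switch (mode) {
            case "DENY":
            case "SAMEORIGIN":
                return mode;
            case "ALLOW-FROM":
                return "ALLOW-FROM " + allowFromUri;
            default:
                throw new IllegalArgumentException("Unknown X-Frame-Options mode: " + mode);
        }
    }

    public static void main(String[] args) {
        System.out.println("X-Frame-Options: " + value("SAMEORIGIN", null));
    }
}
```
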
Phew! That's a lot of content.  Kim had more to say about a lot more, but I'll probably have to leave that to another day.

You are Hacked, by Karthik Shyamsunder and Phani Pattapu

I'll be brief on this one.  I was very excited to go to this one because I thought it was going to be a narrative of network forensics, guerilla warfare, and the like.  Sadly, it was two very smart individuals charging through a huge, dry slide deck about the standard JEE 6 security model.  Oi!  Afterwards, I went up to Phani and told him that his presentation inadvertently explained very well why people ought to just use Spring Security!

New Security Features in JDK 8+, by Jeffrey Nisewanger and Brad Whetmore

At this one I found out that JKS keystores aren't encrypted!  Apparently, Java has another kind of keystore, called JCEKS, that is encrypted.

There are a bunch of things that are slated for JDK 8.  Here are just a few:
  • AEAD cipher suite support.  (AEAD, or Authenticated Encryption with Associated Data, combines encryption and integrity protection in a single operation, as in AES/GCM.)
  • doPrivileged is going to be changed to allow code to assert only certain privileges instead of all of them at once.
  • PKCS#11 API spec
  • Better support for PKCS#12 keystores

Cross-Build Injection Attacks, by Sander Mak

This session creeped me out a bit.  What Mak did was create a simple "Hello, World!" application and then built it with the typical "mvn clean compile".  He ran the class and instead of printing the familiar "Hello, World!" it said "You've been p0wned at JavaOne!"  What a neat trick! How did he do it?

It turns out that he purposely corrupted his local maven repository with a poisoned maven-compiler-plugin, which performed the compilation trick.  Then, he posed the questions:
  • What if someone compromised the central Maven repository and uploaded a poisoned version of some broadly-used dependency?
  • What if someone hacked your DNS to make the central repo URL point to their own hacker repository?
  • What if someone stood up a proxy between you and the central Maven repo and replaced the jar in flight to your local machine?
Honestly, these were questions I'd never thought about before, but in that moment I understood why we have an internal repository at work.  Until then, I'd thought it was simply for performance, but now it made sense that it was also necessary for security reasons.

He emphasized that there are three defenses that should be applied in sequence to guard against hacks like this:

Have an internal Maven Repo with no automatic mirroring

This one is pretty simple.  While automatic mirroring is very convenient, it sets you up for a problem should the central maven repo ever get compromised.

Verify PGP signatures

As of three years ago, new jars coming into Maven Central are required to be signed with a PGP key and published with a .asc signature file.  The public key is uploaded to http://pgp.mit.edu, where repository managers can verify the signature against it.

It takes a few steps to do this manually, but apparently Sonatype offers automatic PGP signature verification as part of its paid edition.

Enter into a Web of Trust

PGP signatures aren't verified with certificates issued by certificate authorities like in other protocols.  Instead, there is a Web of Trust, where you specify the people you trust.  These people (you included) indicate which signatures they trust.  This web of trust is overlaid onto the key repository so that a signature can be verified with a public key that is trusted by at least one person you have already said you trust.

Wow.  Security doesn't come easily does it?

Unfortunately, this got me thinking about things like the singularity and the possibility that my consciousness could one day be digitized and uploaded into the brain of another person.  How's that for corrupting a central repository?

Security in the Real World, by Ryan Sciampacone

It is hard to say whether this talk or Kim's was my favorite.  Sciampacone's was more fascinating for the ingenuity of the hackers on display, but I walked away from Kim's with more tools in my pocket to use.

Anyway, I'm getting really tired of typing, so here are the four vulnerabilities that he highlighted:

Hashcode DoS Attack

The basic idea is that it is trivial to come up with an arbitrarily large number of very long strings that all hash to the same value.  The hack is to use this fact to craft many big HashMap keys that all land in the same hash bucket, burning enough CPU on the server to effectively bring it down.  Hackers could very easily launch this attack by sending a long parameter list to a servlet, like this

http://yourjavasite.com/context/page.jsp?param1=BlahBlahBlah..1 MB worth of characters..Aa&param2=BlahBlahBlah..1 MB worth of characters..BB&...16000 parameters

Such a URL would, in the past, take down any Java servlet container.
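
To see why mass-producing colliding keys is so easy: "Aa" and "BB" have the same String hashCode (2112), and because hash(s + t) = hash(s) * 31^len(t) + hash(t), concatenations of colliding blocks collide too, so n blocks give 2^n colliding keys.  A small sketch of my own, not from the talk:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: generating 2^n strings that all share a single hashCode by
// concatenating the colliding two-character blocks "Aa" and "BB".
public class HashCollisions {
    static List<String> collisions(int blocks) {
        List<String> out = new ArrayList<>();
        out.add("");
        for (int i = 0; i < blocks; i++) {
            List<String> next = new ArrayList<>();
            for (String s : out) {
                next.add(s + "Aa");
                next.add(s + "BB");
            }
            out = next;
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> keys = collisions(4);  // 16 strings of length 8
        int h = keys.get(0).hashCode();
        boolean allCollide = keys.stream().allMatch(k -> k.hashCode() == h);
        System.out.println(keys.size() + " keys, all colliding: " + allCollide);
        // 16 keys, all colliding: true
    }
}
```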

There were a few interesting take-aways from this.

The first interesting takeaway was the defense mechanism that was put into the JDK.  A random value is now mixed into the hash code to make it much harder to guess which keys will hash to the same value.  The problem with this is that there is existing code all over the world that relies on HashMap#keySet returning the keys in the same order (even though the spec tells you not to).  If the Java engineers had simply introduced this random value for all hash codes, a great deal of code across the world would have broken.  Because of this, the random value is only used once a map grows beyond a certain size.  That threshold can be tweaked with the command-line property -Djdk.map.althashing.threshold=x

The second is the defense mechanism for Tomcat.  Apparently, Tomcat originally had no limit on the number of parameters it would accept from the request.  This meant that any sufficiently long URL could take down a Tomcat server, regardless of the hash code part of things.  So, at the same time, they set the limit to 10000 parameters and introduced a Connector attribute called maxParameterCount.  Can you imagine a legitimate case where 10000 parameters are needed???  100 is probably more reasonable.

Gondvv Vulnerability

The idea here is that it was temporarily possible in Java applets to use sun.awt.SunToolkit to call a public getField method that returned a read/write handle to any arbitrary field in any class on the classpath.  Eww...  The hacker would get his code deployed in an applet, use this trick to get at the SecurityManager, change it to grant untold access to the user's computer, set the SecurityManager, and boom, the hacker could run arbitrary programs on the user's computer.  Wow!

Apparently this was fixed pretty quickly.  The lesson here was to make sure you know what you are returning from any method, especially if it is public.

Invokespecial Security Fix

This one is a bit more theoretical, but it was apparently once the case that the bytecode verifier did not enforce the rule that you can't skip your parent's constructor during construction.  The compiler will stop you if you try, but if you hand-craft a class file that skips the parent's constructor, some versions of the JDK will actually allow it.

The idea, then, is that if your parent sets up security rules or the like in its constructor and you are able to arbitrarily skip that construction process, you may be able to gain unauthorized access to the trusted portions of your parent's business logic.


MethodHandles

Another theoretical but very tangible place for security holes is MethodHandles.  java.lang.reflect.Method is a class that represents a method that can be invoked on a Java object.  Each time its invoke() method is called, though, the SecurityManager runs to make sure that you have access to do so.  MethodHandles are faster than Methods in part because the SecurityManager runs only when you first get a reference to the MethodHandle.  After that, you have the keys to the kingdom.  This means that if you get a MethodHandle reference and return it from your API, either "woe be unto you" or "you best be sure you know what you are doing!"
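
A small sketch of that caching pattern (my own illustration): the access check happens once, when the handle is resolved, so a cached or leaked handle can be invoked afterwards without further checks.

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

// Sketch: MethodHandle access is checked at lookup time, not per invocation.
public class HandleDemo {
    // Checked once, here, when the handle is resolved...
    static final MethodHandle TO_UPPER = lookupToUpper();

    static MethodHandle lookupToUpper() {
        try {
            return MethodHandles.lookup().findVirtual(
                    String.class, "toUpperCase", MethodType.methodType(String.class));
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static void main(String[] args) throws Throwable {
        // ...but not here: invocations go straight through, so returning
        // TO_UPPER from an API hands the caller an unchecked capability.
        String s = (String) TO_UPPER.invokeExact("keys to the kingdom");
        System.out.println(s);
    }
}
```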

Secure Coding Guidelines for Java, by Marc Shoenfeld

This was the last security class that I went to, and it nearly made me cry.  It was basically a person who had taken the table of contents from a university textbook, put it on some slides, and read it.  Oh.  My.  Goodness.  That was the first time I ever walked out of a presentation.

I did learn about the CVSS vulnerability scale in the first three minutes, so I suppose it wasn't a complete waste.


Wow! I hadn't realized how many security talks I'd gone to until I tried writing it all down.  I can't believe you made it all the way to the bottom! You should be doing something productive.  Like hacking into the university's transcript database and changing your grades or something!

Josh Cummings

"I love to teach, as a painter loves to paint, as a singer loves to sing, as a musician loves to play" - William Lyon Phelps


  1. Thanks for the kind words Joshua! As promised during my Cross-Build Injection talk, here is a follow-up post detailing the example I used: http://branchandbound.net/blog/security/2012/10/cross-build-injection-in-action

  2. Thanks, Sander, especially for the extra work to explain your exploit on your blog.