First, a note about the problem and my solution. The problem was to find the first triangle number (where the n-th triangle number is 1+2+3+...+n) that has at least 500 divisors. My solution was pretty brute force. I took each triangle number and computed its prime factorization. For example, 28 = 2^2 * 7^1, so the number of divisors is (2+1)*(1+1) = 6. Generally, if N = A^a * B^b * ... where A, B, ... are primes, then the number of divisors is (a+1)*(b+1)*...
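The approach above can be sketched in Ruby. This is my reconstruction of the brute-force idea, not the author's actual code; the method names are my own. It factors each triangle number by trial division and applies the (a+1)*(b+1)*... divisor-count formula:

```ruby
# Count divisors of n using its prime factorization:
# if n = p1^a1 * p2^a2 * ..., the divisor count is (a1+1)*(a2+1)*...
def num_divisors(n)
  count = 1
  p = 2
  while p * p <= n
    exponent = 0
    while n % p == 0
      n /= p
      exponent += 1
    end
    count *= exponent + 1
    p += 1
  end
  count *= 2 if n > 1 # any leftover factor is prime with exponent 1
  count
end

# Walk the triangle numbers 1, 3, 6, 10, ... and return the first
# one whose divisor count exceeds `min`.
def first_triangle_with_divisors(min)
  n = 0
  triangle = 0
  loop do
    n += 1
    triangle += n
    return triangle if num_divisors(triangle) > min
  end
end

puts num_divisors(28)                # => 6, since 28 = 2^2 * 7^1
puts first_triangle_with_divisors(5) # => 28, the first triangle number with more than 5 divisors
```

Called as `first_triangle_with_divisors(500)`, this grinds through each triangle number in turn, which is exactly the kind of tight numeric loop discussed below.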
With all of that in mind, why did I think JRuby would be faster than Ruby on a problem like this? This kind of calculation is well suited to JVM optimizations: loop unrolling, JIT compilation, and so on. Thus I thought this might be the kind of problem where the JVM could make JRuby run a lot faster than Ruby. Boy was I wrong!
In general, I found JRuby took twice as long as plain ol' Ruby (or "C Ruby," as the JRuby folks like to call it). This was true both on Windows, where many consider Ruby's implementation to be poor, and on OSX.
This made me think that my conjecture was wrong to begin with. Maybe this was not the kind of code that the JVM could do much with. So I rewrote the algorithm in Java and reran it. It was dramatically faster in Java than in either Ruby or JRuby. Indeed, the JVM was able to optimize the runtime execution of the code and make it fly.
Is this what I should have expected? Is JRuby generally much slower than Ruby? I really thought that part of the idea behind JRuby was to leverage the JVM to make Ruby faster.