
If I want to compare the speed of different versions of the same MASON simulation (Java version vs. Clojure rewrite, Clojure vs. Clojure with optimizations, etc.), is it reasonable to simply compare the "Rate:" outputs from a few runs of each version?


I assume that at the very least:

(a) The first few lines of output should be ignored, since the JIT compiler etc. hasn't yet had a chance to optimize anything.

(b) It's better to set the -time parameter to a larger value, so that fluctuations are averaged over more steps (see the example invocation after this list).
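
For concreteness, the kind of run I have in mind looks something like this (assuming MASON's usual doLoop command-line flags; the classpath and class name are stand-ins, so adjust to taste):

    java -cp mason.jar:. sim.app.students.Students -for 200000 -time 20000

With -time that large, each printed "Rate:" line should reflect an average over 20,000 steps rather than a handful, and the first line or two can be discarded as warmup.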


There are libraries that will run something multiple times, each time with sufficient burn-in for the JIT compiler, and average over the results. In Clojure, Criterium is the standard one for comparing the speed of functions, but I don't think it can be used with pure Java. Would I be better off going that route, or do you feel that the rate output is good enough (with the preceding caveats)?
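
To show what I mean, here's a minimal sketch of benchmarking a fixed number of steps with Criterium from a Clojure REPL. The students.Students class name is a stand-in for whatever the simulation class is actually called; and since Criterium just times an expression, I'd guess it could wrap interop calls into the Java version the same way, even if it can't be driven from pure Java:

    (require '[criterium.core :refer [quick-bench]])

    (defn run-steps
      "Run a fresh simulation for n steps, in MASON's usual
      start/step/finish cycle."
      [n]
      (let [sim (students.Students. (System/currentTimeMillis))]
        (.start sim)
        (dotimes [_ n]
          (.step (.schedule sim) sim))
        (.finish sim)))

    ;; quick-bench does its own JIT warmup and reports the mean
    ;; time per run along with variance:
    (quick-bench (run-steps 10000))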


One odd thing I've noticed on one of my machines (the slower one): after the first 10 or 20 seconds, the rate often drops just a little and then remains hovering around the lower value. This is very consistent.


(I'm playing with versions of the students simulation from Chapter 2 of the manual. With type hints and the latest version of Clojure, I have a Clojure version that runs at close to half the speed of the Java version, rather than the ~1/170 ratio of my first, unoptimized version. I'm using Java 1.6 (not worth explaining), so maybe the ratio would be worse with a more recent javac. This is all on the assumption that the answer to my first question is "yes", of course.)
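
(In case it's useful context, by "type hints" I just mean the standard Clojure annotations that let the compiler emit direct interop calls instead of reflection. A toy illustration, unrelated to the actual simulation code:

    ;; Ask the compiler to report reflective interop call sites:
    (set! *warn-on-reflection* true)

    ;; Unhinted: .length is resolved by reflection on every call.
    (defn len-slow [s] (.length s))

    ;; Hinted: compiles to a direct virtual call on String.
    (defn len-fast [^String s] (.length s))

    ;; Primitive hints avoid boxing in arithmetic-heavy code:
    (defn sq ^double [^double x] (* x x))

Most of the speedup over my first version came from eliminating call sites like the first one.)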


Thanks-

Marshall



Marshall Abrams, Associate Professor 
Department of Philosophy, University of Alabama at Birmingham
http://members.logical.net/~marshall
Email: [log in to unmask]; Phone: (205) 996-7483;  Fax: (205) 975-6610
Mail: HB 414A, 900 13th Street South, Birmingham, AL 35294-1260;  Office: HB 418