I've written a simulation which, when visualised, should run
synchronised with wall-clock time. At the moment I'm using a 1:1 ratio
between sim time and real time.
The obvious way to approximate this is to define:
static final long TICKS_PER_SECOND = 35L;
// Note the integer division: 1000 / 35 = 28 ms, so the real target is ~35.7 ticks per second.
static final long MILLISECONDS_PER_TICK = 1000L / TICKS_PER_SECOND;
then, each frame, advance the simulation, render it, and sleep for MILLISECONDS_PER_TICK.
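Roughly, as a sketch (tick() and render() here just stand in for the real simulation-step and drawing methods):

class SimLoop {
    static final long TICKS_PER_SECOND = 35L;
    static final long MILLISECONDS_PER_TICK = 1000L / TICKS_PER_SECOND;

    volatile boolean running = true;

    void run() throws InterruptedException {
        while (running) {
            tick();                               // advance the simulation by one step
            render();                             // draw the current state
            Thread.sleep(MILLISECONDS_PER_TICK);  // fixed 28 ms delay, regardless of how long the tick took
        }
    }

    void tick()   { /* real simulation step goes here */ }
    void render() { /* real drawing code goes here */ }
}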
Obviously, each sim tick will take some amount of time on top of the
sleep, so the actual frame rate will be somewhat less than desired, but
if the tick time is small compared to the sleep the shortfall should be
negligible.
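For completeness, the obvious refinement would be to time each tick and sleep only for whatever is left of the 28 ms budget; a rough sketch, written as a method that could slot into the SimLoop sketch above:

void runCompensated() throws InterruptedException {
    while (running) {
        long start = System.currentTimeMillis();
        tick();
        render();
        long elapsed = System.currentTimeMillis() - start;
        long remaining = MILLISECONDS_PER_TICK - elapsed;
        if (remaining > 0) {
            Thread.sleep(remaining);  // sleep only for the unused part of the frame budget
        }
    }
}

The numbers below, though, are all for the plain fixed-sleep version.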
Under Windows this works: the frame rate is sustained at very nearly 35
per second. On a comparably-specced Linux machine, however, the frame
rate tops out at about 25.8, roughly 26% slower. If I set the sleep time
to zero, the Linux box sustains a frame rate in excess of 200.
Does this sound right? By my calculations, at 200 frames per second the
work for 25 frames should only take about 0.125 seconds; adding 25
sleeps of 28 ms (0.7 seconds) brings that to roughly 0.825 seconds,
which should give a frame rate (with that delay) of at least 30.
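One way to pin it down might be to time Thread.sleep() itself on both machines and see whether a 28 ms request is actually taking noticeably longer than requested on the Linux box. A quick stand-alone probe along these lines (nothing specific to my code, just timing a batch of sleeps):

public class SleepProbe {
    public static void main(String[] args) throws InterruptedException {
        final long requestedMs = 28;   // same value as MILLISECONDS_PER_TICK above
        final int samples = 100;
        long start = System.currentTimeMillis();
        for (int i = 0; i < samples; i++) {
            Thread.sleep(requestedMs);
        }
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("requested " + (requestedMs * samples)
                + " ms of sleep, took " + elapsed + " ms");
    }
}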
Rob Alexander
Research Associate, Dept of Computer Science, The University of York,
York, YO10 5DD, UK
Tel: 01904 432792 Fax: 01904 432708