Recently, I designed a timing system for my game. Doesn't sound very impressive, eh?
The problem with accurate timing, apart from hardware faults that make timers change speed or jump backwards, is resampling a high-frequency clock running at 3+ MHz to the update rate your game runs at.
The naive approach would be to just cast the clock value to a double and divide it by one 60th of the clock's frequency (if you want 60 Hz updates). But that's not such a good idea. Let me illustrate:
int main() {
  float a = 20000000;
  ++a;
  // a is still 20,000,000. The accuracy of the float has diminished so
  // much that it not only has zero decimal places, it cannot even assume
  // the value 20,000,001 -- only 20,000,002!

  double b = 10000000000000000;
  ++b;
  // b is still 10,000,000,000,000,000. Same reason.
}
Compensation Ticks
If you use floating point variables, you need to make sure your clock values won't grow too large. Typical clock speeds for Windows' performance counters are slightly above 3,000,000 ticks per second, so while a double would take 105.7 years to reach the 10,000,000,000,000,000 mark quoted above, you'll have obscenely big numbers long before that, and you can't avoid rounding errors altogether (e.g. 60 Hz = 1/60 results in a repeating fraction!).
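To see that rounding problem in isolation, here's a quick sketch (my own illustration, not part of the timer code) that adds up sixty supposed 60ths of a second:

#include <cstdio>

int main() {
  // 1/60 has no exact binary floating point representation, so sixty
  // additions typically do not sum to exactly 1.0
  double step = 1.0 / 60.0;
  double sum = 0.0;
  for(int i = 0; i < 60; ++i) {
    sum += step;
  }
  std::printf("%.17f\n", sum); // prints a value slightly off from 1.0
}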
My first attempt was a simple integer division (which always rounds down) combined with a compensation tick to make up for falling short of one second when all the update ticks are added together. If a 1 MHz timer were resampled to 3 Hz this way, each update would span 333,333 clock ticks, except the first update of each second, which would span 333,334 ticks to absorb the remainder.
That sounded promising and wasn’t too difficult to implement:
std::uint64_t clockFrequency = this->clock->GetFrequency();
std::uint64_t updateFrequency = 3;
std::uint64_t updateClocks = clockFrequency / updateFrequency;
// Whatever the integer division dropped goes into the first update of each second
std::uint64_t adjustment = clockFrequency - (updateClocks * updateFrequency);
std::uint64_t firstUpdateClocks = updateClocks + adjustment;
You simply need to count which update you're in and use firstUpdateClocks instead of updateClocks if it's the first update.
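A minimal sketch of what that bookkeeping could look like (the GetClocksForUpdate helper and the updateIndex parameter are my illustration, not the original code):

std::uint64_t GetClocksForUpdate(
  std::uint64_t updateIndex,
  std::uint64_t updateClocks, std::uint64_t firstUpdateClocks,
  std::uint64_t updateFrequency
) {
  // The first update of each second absorbs the leftover ticks,
  // all other updates use the plain integer quotient
  if((updateIndex % updateFrequency) == 0) {
    return firstUpdateClocks;
  } else {
    return updateClocks;
  }
}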
However, when I tried this method with the real performance counter, the compensation step turned out quite a bit larger than I had expected. My system's performance counter runs at 3,233,539 Hz and I wanted to resample it into a microsecond timer. You can do the math: one microsecond is counted for every 3 ticks of the clock, except for the first microsecond of each second, which requires… ready?… 233,542 ticks of the clock!
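Plugging my machine's numbers into the earlier snippet makes the imbalance obvious (the values in the comments follow directly from the integer math):

std::uint64_t clockFrequency = 3233539;  // my performance counter's frequency
std::uint64_t updateFrequency = 1000000; // resampling to microseconds
std::uint64_t updateClocks = clockFrequency / updateFrequency;                // 3
std::uint64_t adjustment = clockFrequency - (updateClocks * updateFrequency); // 233,539
std::uint64_t firstUpdateClocks = updateClocks + adjustment;                  // 233,542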
Bresenham
That wouldn’t do. Looking for another solution, I remembered how I implemented Bresenham’s Line Algorithm in assembly more than a decade ago:
You essentially have two counters: one for the longer axis of the line, one for the shorter. Assuming you wanted to draw a line 100 pixels to the right and 20 pixels upwards, you'd have to step one pixel up on the Y axis for every 5 pixels on the X axis. That's exactly what you do when you want to resample a high-frequency clock to a lower frequency!
The simplest implementation of Bresenham's algorithm calculates the fraction of the shorter axis over the longer axis (1/5 in the previous example). For each step on the X axis, the fraction is added to a counter, and when that counter reaches 1, it performs one step on the Y axis and resets the counter.
std::uint64_t clockFrequency = 100;
std::uint64_t updateFrequency = 20;
std::uint64_t updateTicks = 0;
// Note the casts: without them, the integer division would simply yield 0
float fraction =
  static_cast<float>(updateFrequency) / static_cast<float>(clockFrequency);
float counter = 0.0f;

for each clockTick { // just for easy illustration, real code doesn't use loops
  counter += fraction; // lots of rounding issues here
  if(counter >= 1.0f) {
    ++updateTicks;
    counter -= 1.0f;
  }
}
But an optimized implementation will use no floating point or fixed point at all and achieve perfect accuracy: for each step on the X axis, increment the counter by the total offset on the Y axis. When the counter reaches the total length of the X axis, you go one step sideways and subtract the total length of the X axis from the counter.
If the axes are not evenly divisible by each other, the remainders accumulate in the counter (since you don't reset it but subtract the total X offset), causing the sidestep to occur one pixel earlier at regular intervals.
std::uint64_t clockFrequency = 100;
std::uint64_t updateFrequency = 20;
std::uint64_t updateTicks = 0;
std::uint64_t error = 0;

for each clockTick { // just for easy illustration, real code doesn't use loops
  error += updateFrequency;
  if(error >= clockFrequency) {
    ++updateTicks;
    error -= clockFrequency;
  }
}
That worked out nicely. Here's a 1 MHz timer (microseconds) resampled to 60 Hz, a typical update interval for games: the screen refreshes at this rate, so updating any faster isn't visible and updating any slower causes jerky motion. The update intervals come out as a repeating pattern of 16,667, 16,667 and 16,666 microseconds, which sums to exactly 1,000,000 microseconds per second.
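If you want to verify those interval lengths yourself, here's a small self-contained test (my own harness, not part of the timer) that runs the integer Bresenham counter over one simulated second:

#include <cstdint>
#include <cstdio>

int main() {
  std::uint64_t clockFrequency = 1000000; // 1 MHz clock
  std::uint64_t updateFrequency = 60;     // 60 Hz updates
  std::uint64_t error = 0;
  std::uint64_t ticksSinceUpdate = 0;

  // Feed exactly one second's worth of clock ticks into the counter
  for(std::uint64_t tick = 0; tick < clockFrequency; ++tick) {
    ++ticksSinceUpdate;
    error += updateFrequency;
    if(error >= clockFrequency) {
      error -= clockFrequency;
      // Prints the repeating 16,667 / 16,667 / 16,666 pattern; the 60
      // intervals sum to exactly 1,000,000 ticks
      std::printf("%llu\n", static_cast<unsigned long long>(ticksSinceUpdate));
      ticksSinceUpdate = 0;
    }
  }
}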
My timer removes all processed microseconds from its internal counters and handles overflows of the system-provided clock value, so it could theoretically run for hundreds of years, maintaining perfect accuracy throughout and never sweeping a single clock tick under the carpet. Nice :)
Here’s my final implementation:
std::uint64_t Timer::GetElapsedMicroseconds() const {
  std::uint64_t elapsedTicks = withdrawAllClockTicks();

  // For each whole second, we can just subtract the clock frequency.
  // After this, elapsed ticks are guaranteed to be less than the clock frequency.
  std::uint64_t seconds = elapsedTicks / this->clockFrequency;
  elapsedTicks -= seconds * this->clockFrequency;

  // Apply Bresenham's optimized integer algorithm: for each elapsed clock tick,
  // sum up the number of microseconds. Each time the sum passes the clock frequency,
  // subtract the clock frequency from the sum and count up by one microsecond.
  // This is more accurate than any floating point could ever be.
  this->error += (elapsedTicks * MicrosecondsPerSecond);
  std::uint64_t microseconds = this->error / this->clockFrequency;
  this->error -= microseconds * this->clockFrequency;

  // Add the newly elapsed time to the as of yet unaccounted microseconds to
  // present a consistent view to the user (these counters are declared mutable
  // since we're a const method after all)
  this->unaccountedMicroseconds += seconds * MicrosecondsPerSecond + microseconds;

  return this->unaccountedMicroseconds;
}
// ------------------------------------------------------------------------------------------- //
void Timer::AddAccountedMicroseconds(std::uint64_t microseconds) {
  // assert() comes from the <cassert> header
  assert(
    (microseconds <= this->unaccountedMicroseconds) &&
    "Tried to account for more microseconds than have elapsed"
  );

  this->unaccountedMicroseconds -= microseconds;
}
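To show how the two methods are meant to work together, here's a hypothetical fixed-step game loop (MicrosecondsPerUpdate, UpdateGame() and RenderGame() are placeholders of mine, not part of the timer):

const std::uint64_t MicrosecondsPerUpdate = 1000000 / 60; // 60 Hz fixed timestep

void RunFrame(Timer &timer) {
  // Run one fixed update per whole update interval that has elapsed and
  // tell the timer about each interval we consumed; the remainder stays
  // in the timer's unaccounted microseconds until the next frame
  while(timer.GetElapsedMicroseconds() >= MicrosecondsPerUpdate) {
    UpdateGame(); // placeholder for the game's update step
    timer.AddAccountedMicroseconds(MicrosecondsPerUpdate);
  }

  RenderGame(); // placeholder for the game's draw step
}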
Since I've seen so many suboptimal timers on the net, I decided to publish the full code of my own timers here. Everything is ISO C++ except for the WindowsClock class.