I am implementing a game engine with two versions of time:
- Real time
- Scaled time (for slow / fast motion effects)
Real time is fully implemented and works, but I have serious concerns about scaled time. The issue is specific to the <chrono> header. I wanted to store the scaled time as a `std::chrono::nanoseconds` (a `std::chrono::duration` with a nanosecond period), but I am worried that if a game built on the engine runs for long enough, the scaled time would overflow and cause chaos.
The question came up when I tried to store the following:
- `scaledPhysicsTime`
- `scaledPhysicsDeltaTime`

For `scaledPhysicsDeltaTime`, I would get the real-time physics delta time as a `double`, multiply it by the time scale (the multiplier for slow / fast motion effects), and convert the result to `std::chrono::nanoseconds`. Then I would add `scaledPhysicsDeltaTime` to `scaledPhysicsTime`.
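Roughly what I mean, as a minimal sketch (the `GameClock` class and `tick` signature are just placeholders for however the engine drives its update loop):

```cpp
#include <chrono>

class GameClock  // hypothetical wrapper, just to show the accumulation
{
public:
    void tick(double realPhysicsDeltaSeconds, double timeScale)
    {
        // Scale the real delta, then convert seconds (double) -> integral nanoseconds.
        const std::chrono::duration<double> scaledSeconds{realPhysicsDeltaSeconds * timeScale};
        scaledPhysicsDeltaTime =
            std::chrono::duration_cast<std::chrono::nanoseconds>(scaledSeconds);

        // Accumulate into the running scaled time.
        scaledPhysicsTime += scaledPhysicsDeltaTime;
    }

private:
    std::chrono::nanoseconds scaledPhysicsTime{0};
    std::chrono::nanoseconds scaledPhysicsDeltaTime{0};
};
```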
Does anyone think there is a better way? Every time I begin to code it, I feel like I am going about it the hard way.
Note: I am thinking of storing the time as `std::chrono::nanoseconds` because it preserves time resolution better than `std::chrono::milliseconds` or a plain `double`. The user / programmer could then decide which data type to request from the class.
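Something like this is what I have in mind (just a sketch; `getScaledPhysicsTimeAs` is a made-up name, not existing code):

```cpp
#include <chrono>

// Hypothetical accessor: the caller picks the duration type it wants back.
template <typename Duration = std::chrono::nanoseconds>
Duration getScaledPhysicsTimeAs(std::chrono::nanoseconds scaledPhysicsTime)
{
    return std::chrono::duration_cast<Duration>(scaledPhysicsTime);
}

// e.g. auto ms      = getScaledPhysicsTimeAs<std::chrono::milliseconds>(t);
//      auto seconds = getScaledPhysicsTimeAs<std::chrono::duration<double>>(t);
```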
Am I going about this the right way or the wrong way? Any insight in other directions is welcome, and thanks to everyone in advance.
CodePudding user response:
Can `std::chrono::nanoseconds` run into an overflow?
Yes.
The underlying representation of `std::chrono::nanoseconds` is a "signed integer type", and it may also be a concern that signed integer overflow is undefined behavior. One assurance, however, is that the signed integer type for the `nanoseconds` helper type must be at least 64 bits, which "covers a range of at least ±292 years".
Is there a better way?
What's "better" is subjective of course but we can talk about some alternatives and their pros and cons:
- Store the `double` delta and the time scale individually. Assuming the `double` and the time scale are valid to begin with, what's stored can't be garbage. This takes up more space (8 bytes for the `double` plus however many bytes for the time scale) and potentially just changes the point at which overflow might occur (e.g. whenever the stored values are eventually converted to an integral duration).
- Use `std::chrono::duration<double, std::nano>` as the type for `scaledPhysicsDeltaTime` (see the sketch after this list). This increases the range compared to `std::chrono::nanoseconds` and avoids the undefined behavior of integer overflow, while still storing the delta time in nanoseconds. `std::chrono::duration<double, std::nano>` can have less precision than `std::chrono::nanoseconds`, but that wouldn't be a problem when the real-time physics delta time is already a `double`.
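A sketch of what that second option could look like (the `ScaledClock` type and its functions are made up for illustration, not anything from the question):

```cpp
#include <chrono>

// Double-based nanosecond duration: huge range, no signed-integer overflow UB.
using nanoseconds_d = std::chrono::duration<double, std::nano>;

struct ScaledClock
{
    nanoseconds_d scaledPhysicsTime{0.0};
    nanoseconds_d scaledPhysicsDeltaTime{0.0};

    void tick(double realPhysicsDeltaSeconds, double timeScale)
    {
        // seconds (double) implicitly converts to a double-based nanosecond duration
        scaledPhysicsDeltaTime = std::chrono::duration<double>{realPhysicsDeltaSeconds * timeScale};
        scaledPhysicsTime += scaledPhysicsDeltaTime;
    }

    // Callers who want an integral duration can still convert on the way out.
    std::chrono::nanoseconds asIntegralNanoseconds() const
    {
        return std::chrono::duration_cast<std::chrono::nanoseconds>(scaledPhysicsTime);
    }
};
```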
Am I going about this the right or wrong way?
I don't see there being an objective answer to this question either. Here are some things you may want to consider, however...
How likely is the calculation of `scaledPhysicsDeltaTime` to overflow to begin with, and how bad would it be if it did? You haven't said what the possible range of your time scale is, and you've suggested that the scaled time also depends on how long the game has run. Applications that use an incrementing `float` for time since start can run into numerical precision issues in practice (a small demonstration follows below). That's not analogous, though, to how I understand what you've described for `scaledPhysicsDeltaTime`.
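For illustration, a tiny example of that `float` issue (the numbers are chosen just to show the effect):

```cpp
#include <iostream>

int main()
{
    // After ~10 hours of accumulated seconds, a 1 ms step is smaller than half
    // an ulp of a 32-bit float and is rounded away entirely.
    float t  = 10.0f * 3600.0f;  // 36000 seconds
    float dt = 0.001f;           // 1 millisecond
    std::cout << std::boolalpha << (t + dt == t) << '\n';  // prints true
}
```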
With these caveats:
- Given the assurance that the nanoseconds helper type uses at least a 64-bit signed integer, it doesn't seem likely that overflow is going to occur.
- You say this is for a game engine, which doesn't seem like a system where an overflow would be too serious.
- Are there more important things to address first?
On the other hand, there are guidelines like:
- INT32-C. Ensure that operations on signed integers do not result in overflow from CMU's SEI.
- ES.103: Don't overflow from the C++ Core Guidelines.