This is the reduced code to demonstrate the behavior.
#include <stdio.h>
#include <ncurses.h>
#include <time.h>

#define DELAY 2.0

int main(void) {
    static clock_t start;
    static double delay = 0.0;
    double elapsed = 0.0;
    clock_t stop;

    initscr();
    timeout(20);    /* milliseconds getch() waits for a character */

    if (delay < .0001) {
        start = clock();
        delay = DELAY;
    }

    do {
        // getch ( );
        stop = clock();
        elapsed = (double)(stop - start) / CLOCKS_PER_SEC;
        move(22, 0);
        printw("cps %ld elapsed %f\n", CLOCKS_PER_SEC, elapsed);
        refresh();
    } while (elapsed < delay);

    endwin();
    return 0;
}
Compiled and linked against ncurses, it runs for about two seconds. Uncomment the line `// getch ( );`, recompile and link, and it runs for about two hundred seconds.
The problem has been solved by using clock_gettime(). I am curious why clock() is behaving strangely.
CodePudding user response:
On POSIX systems (like Linux or macOS) the clock() function reports the CPU time used by the process, not the elapsed wall-clock time. If the process uses hardly any CPU, for example because it spends most of its time blocked waiting for input in getch(), then the difference between two clock() readings will be very small, and the loop takes far longer in real time to accumulate two "seconds" of CPU time.
If you use clock_gettime() with either CLOCK_MONOTONIC or CLOCK_REALTIME, those are based on the wall clock of the system instead.
Note that the behavior of clock() is different on Windows, where it represents the wall clock.