Thread.sleep() optimization for small sleep intervals


I am writing a library that involves a caller-defined temporal resolution. In the implementation, this value ends up being the interval a background thread sleeps before doing some housekeeping and going back to sleep again. I allow this resolution to be as small as 1 millisecond, which translates to Thread.sleep(1). My hunch is that this may be more wasteful and less precise than busy-waiting for 1 ms. If that's the case:

  1. Should I fall back to busy-waiting for small enough (how small) time intervals?
  2. Does anyone know if the JVM is already doing this optimization anyway and I don't need to do anything at all?
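
For reference, the loop I have in mind is roughly this (a simplified sketch; doHousekeeping() and resolutionMillis are placeholders for the library's actual details):

// Simplified sketch of the background thread described above;
// doHousekeeping() and resolutionMillis are placeholder names.
Thread worker = new Thread(() -> {
    while (!Thread.currentThread().isInterrupted()) {
        try {
            Thread.sleep(resolutionMillis); // can be as small as 1 ms
        } catch (InterruptedException e) {
            return;
        }
        doHousekeeping();
    }
});
worker.setDaemon(true);
worker.start();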

CodePudding user response:

That's easy to test:

public class Test {

    static int i = 0;
    static long[] measurements = new long[0x100];
    
    static void report(long value) {
        // Record the sample in a 256-entry ring buffer; after roughly
        // 10,000 samples, dump the most recent 256 and exit.
        measurements[i++ & 0xff] = value;
        if (i > 10_000) {
            for (long m : measurements) {
                System.out.println(m);
            }
            System.exit(0);
        }
    }
    
    static void sleepyWait() throws Exception {
        // Measure how long a requested 1 ms Thread.sleep actually takes.
        while (true) {
            long before = System.nanoTime();
            Thread.sleep(1);
            long now = System.nanoTime();
            report(now - before);
        }
    }
    
    static void busyWait() {
        // Spin on System.nanoTime() until a full millisecond (1,000,000 ns) has elapsed.
        while (true) {
            long before = System.nanoTime();
            long now;
            do {
                now = System.nanoTime();
            } while (before + 1_000_000 >= now);
            report(now - before);
        }
    }
    
    
    public static void main(String[] args) throws Exception {
        busyWait(); // swap in sleepyWait() to measure the Thread.sleep(1) variant
    }
}

Run on my Windows system, this shows that busyWait has microsecond accuracy, but fully uses one CPU core.

In contrast, sleepyWait causes no measurable CPU load, but only achieves millisecond accuracy (often taking as much as 2 ms to fire, rather than the 1 ms requested).

At least on Windows, this is therefore a straightforward tradeoff between accuracy and CPU use.
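
If you want better accuracy than sleep alone without spinning for the whole interval, one possible compromise (a sketch I haven't benchmarked here; the ~2 ms margin is a guess based on how far sleep tends to overshoot) is to sleep for the bulk of the interval and only busy-wait for the tail end:

static void hybridWait(long intervalNanos) throws Exception {
    long deadline = System.nanoTime() + intervalNanos;
    long remaining;
    // Sleep while comfortably more than ~2 ms remain; the margin is a guess,
    // since Thread.sleep can overshoot by a millisecond or more.
    while ((remaining = deadline - System.nanoTime()) > 2_000_000) {
        Thread.sleep(Math.max(1, (remaining - 2_000_000) / 1_000_000));
    }
    // Busy-wait the last stretch to hit the deadline precisely.
    while (System.nanoTime() < deadline) {
        Thread.onSpinWait(); // spin-loop hint, available since Java 9
    }
}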

It's also worth noting that there are often alternatives to running a CPU at full speed obsessively checking the time. In many cases there is some other signal you could be waiting for, and offering an API that focuses purely on time-based resolution may steer its users in a bad direction.
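
For example, if the housekeeping really only needs to happen when something changes, the background thread can block on that event rather than on the clock. A rough sketch (the class and queue are made up for illustration, not from the question):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class SignalDrivenHousekeeping {
    // Hypothetical work queue: producers hand over tasks and the background
    // thread wakes up the moment one arrives, instead of polling the clock.
    private final BlockingQueue<Runnable> work = new LinkedBlockingQueue<>();

    public void submit(Runnable task) {
        work.add(task);
    }

    public void start() {
        Thread housekeeper = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    // Block until a task arrives, or time out after the
                    // caller-defined resolution and do a periodic pass.
                    Runnable task = work.poll(1, TimeUnit.MILLISECONDS);
                    if (task != null) {
                        task.run();
                    }
                    // ...time-based housekeeping could still go here...
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        housekeeper.setDaemon(true);
        housekeeper.start();
    }
}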
