C++: Is a data race a big problem in a producer/consumer pattern?


I'm still learning multithreaded programming with networking involved.

My question is: when I design threads in a producer/consumer pattern, where the producer modifies a variable at unpredictable times and the consumer checks the variable and acts based on it, is that still bad (a big problem)?

Like the code below.

int flag = 0;

void producer()
{
    while (true)
    {
        // Waits until a packet arrives (recv() simplified for the example).
        recv(socket, data);
        if (data == 1)
            flag = 1;
    }
}

void consumer()
{
    while (true)
    {
        if (flag == 1)
            doSomething();
    }
}

I guess the doSomething() function will be called eventually. I don't need to synchronize these two threads perfectly, so I think it would be fine even if I don't use std::atomic_int instead of int. (I would use std::atomic_int in this example, but in my real implementation the variable is too complex for std::atomic.)

So is it bad practice to just ignore the data race, or is it rather a good choice for performance?

CodePudding user response:

A data race results in undefined behavior.

C++ draft N4860

6.9.2.1 Data races (21.2)

The execution of a program contains a data race if it contains two potentially concurrent conflicting actions, at least one of which is not atomic, and neither happens before the other, except for the special case for signal handlers described below. Any such data race results in undefined behavior.

and

3.30 undefined behavior

Permissible undefined behavior ranges from ignoring the situation completely with unpredictable results, [...]

So, if you can live with undefined behavior, e.g. a formatted hard disk or nasal demons, that's your choice.

If it's a Hello World program, you may get away with it. If it's a bank transaction of over 2000€ that gets lost, I consider that a problem. In my business, if I accidentally miss a network packet containing measurement points, a plane may crash. I consider that a big problem.

In the comments you wrote:

Actually what I'm doing is game development, where I sometimes need to choose performance over correctness.

Granted, in game development you make compromises between performance and correctness. But typically that means the loss of correctness is measurable or can be estimated: e.g. a physics engine updating only 50 times per second instead of 100. Still, the result is as correct as possible within that budget.

With undefined behavior, it's hard to tell how incorrect those "unpredictable results" actually are. You don't want the result to change with every new compiler, for example.

I suggest you implement it in a thread-safe way and then check whether you actually have a performance problem. I bet there will be a solution which does not require undefined behavior and is still fast enough. It has been done before.
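
For example, since you said the real variable is too complex for std::atomic, a minimal sketch with a mutex and a condition variable could look like the following. (PacketData, receivePacket() and doSomething() are hypothetical placeholders for your real state and logic.)

#include <condition_variable>
#include <mutex>

// Hypothetical stand-in for the "too complex" shared variable.
struct PacketData {
    int id = 0;
};

PacketData receivePacket();          // placeholder: blocks until a packet arrives
void doSomething(const PacketData&); // placeholder: the actual work

std::mutex m;
std::condition_variable cv;
PacketData shared;
bool ready = false;

void producer()
{
    while (true)
    {
        PacketData incoming = receivePacket();
        {
            std::lock_guard<std::mutex> lock(m);
            shared = incoming;
            ready = true;
        } // release the lock before notifying
        cv.notify_one();
    }
}

void consumer()
{
    while (true)
    {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return ready; }); // sleeps instead of busy-waiting
        ready = false;
        PacketData work = shared;
        lock.unlock();
        doSomething(work); // do the work outside the lock
    }
}

Note that, unlike the busy-wait in your question, the consumer here sleeps until it is notified, so this tends to be faster, not slower, than the racy spin loop.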

Premature optimization is the root of all evil.

as Donald Knuth said.

CodePudding user response:

All theory aside, this very simple example is already completely broken in real life.

Because data races cause undefined behavior, the compiler is entitled to assume that non-atomic variables don't change asynchronously. Therefore an optimizing compiler can and will transform your consumer into:

void consumer()
{
    while (true) {
        if (flag == 0) {
            while (true); // infinite loop
        } else {
            doSomething();
        }
    }
}

That is, if on any iteration flag equals 0, the compiler can see that the consumer thread will not write to flag nor call any function before checking it again. Therefore, it assumes, flag will not change before it is checked again. Therefore, it concludes, we already know that it will still be false on the next check. Aha, an optimization! We can save the expense of actually testing flag the next time, and just keep looping forever, as nothing can ever break us out.

You can see that g++ -O3 does exactly this: https://godbolt.org/z/414PG4nnE. Note the infinite loops at lines 13 and 21 of the assembly output. (It split up the two cases into whether flag is false initially or whether it becomes false on some later iteration.)

If flag is initially true, it will call doSomething(), and in this case it will then test the flag again, in case doSomething() itself might have changed it. But if the compiler were able to tell (via inlining, link-time optimization, etc) that doSomething() doesn't write to flag, it could also optimize out the retest, and unconditionally enter an infinite loop after the first call.

Practically speaking, you might be able to get this to work by making flag a volatile variable. However, (1) this would still not fix the data race in theory, and (2) it would impose pretty much all the same optimization penalties as std::atomic. So you might as well just use std::atomic and get correctness while you're at it.
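
For the simple int flag from the question, the atomic version is a small change. A sketch (with the recv() part elided as in the original, and doSomething() assumed to exist):

#include <atomic>

std::atomic<int> flag{0};

void doSomething(); // as in the question

void producer()
{
    while (true)
    {
        // ... recv() as in the question ...
        // Release store: publishes the change to the consumer thread.
        flag.store(1, std::memory_order_release);
    }
}

void consumer()
{
    while (true)
    {
        // Acquire load: the compiler must actually re-read flag on
        // every iteration, so the infinite-loop transformation shown
        // above is no longer legal.
        if (flag.load(std::memory_order_acquire) == 1)
            doSomething();
    }
}

Even plain flag = 1 and flag == 1 on a std::atomic<int> would be correct, since they default to sequential consistency; the explicit acquire/release orders are just cheaper while still publishing any data written before the store.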
