two processes accessing shared memory without lock

Time:10-02

There are two variables in a shared memory (shared by two processes), one is a general int variable (call it intVar) with initial value say 5 and the other variable of pid_t type (call it pidVar) with initial value 0.

Now the code to be developed is this: both processes add a number (fixed, and different for each process) to the intVar variable in shared memory, with no lock ever taken on the memory; and if a process finds the pidVar variable to be 0, only then does it update it with its own pid.

So the point of confusion for me is:

1: Since both the processes are trying to add a number, will the final value of the intVar variable be the same every time, once both processes are done adding?

2: And the pidVar in the shared memory is going to be totally random, right?

CodePudding user response:

  1. Not necessarily. Adding to intVar involves reading from memory, calculating a new value, then writing back to memory. The two processes could read the same old value at the same time; each would then add its own number and write back, and whichever write lands last wins, losing the other process's addition.

  2. pidVar is defined to be initially zero. Either or both of the processes may observe it that way, so either or both may update it, and whichever write lands last sticks. The result is not "totally random": it will always be the pid of one of the two processes, but which one is unpredictable.

Your assignment says no locking, which is a loaded and imprecise term. "Locking" can refer to anything from physical-layer operations such as bus protocols right up through high-level operations such as inter-process communication. My view is that synchronizing is locking; thus everything from a bus protocol to message passing counts as locking.

Some processors create the illusion of atomic memory, for example by having lock-prefixed opcodes like "lock addl $2, (%rsp)"; but in reality that is a form of synchronization implemented inside the processor, historically by holding the memory bus (or cache line) while the read-modify-write completes. Synchronization is locking; thus that would be in violation of your assignment.

Some processors slice the memory protocol finer so you can control interference. Typical implementations use a load-linked / store-conditional pattern, where the store fails if anything has interfered with the cache line identified by the load-linked instruction. This permits composing atomic operations out of retry loops; but it is again synchronization, thus locking.
