Our aim in using multi-threading was parallel computing, but here we are using the synchronized keyword to allow only a single thread at a time. So how are we achieving parallel computing? If possible, please provide some relevant, comprehensible code examples.
class Counter {
    int count;

    public synchronized void increment() {
        count++;   // read-modify-write; synchronized makes it atomic per instance
    }
}

public class SyncDemo {
    public static void main(String[] args) throws Exception {
        Counter c = new Counter();
        Thread t1 = new Thread(new Runnable() {
            public void run() {
                for (int i = 1; i <= 1000; i++) {
                    c.increment();
                }
            }
        });
        Thread t2 = new Thread(new Runnable() {
            public void run() {
                for (int i = 1; i <= 1000; i++) {
                    c.increment();
                }
            }
        });
        t1.start();
        t2.start();
        t1.join();   // wait for both threads before reading the result
        t2.join();
        System.out.println("Count: " + c.count);
    }
}
N.B: This code is from a YouTube video.
CodePudding user response:
Your two threads mutate shared state, the counter. Since incrementing in Java is not an atomic operation, count++ is what's called a critical section, which must be protected from being entered by more than one thread at a time. For this, your code uses the synchronized keyword on the method increment().
If you want to count in parallel, just don't share the counter. Give each thread its own Counter instance; then increment() will never be called by more than one thread on a given Counter instance at a time, and no thread synchronization is needed there.

That doesn't mean you don't have to coordinate the threads at all: since you want to output the total count, that can only be done once all threads have finished their work. One way is to join the threads, as your main method already does. Afterwards you can output the sum of the individual counts, as in the sketch below.
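A minimal sketch of that approach, based on the code in the question (class and variable names are mine):

class Counter {
    int count;

    public void increment() {   // no synchronized needed: one thread per instance
        count++;
    }
}

public class PerThreadDemo {
    public static void main(String[] args) throws Exception {
        Counter c1 = new Counter();
        Counter c2 = new Counter();
        Thread t1 = new Thread(() -> {
            for (int i = 1; i <= 1000; i++) {
                c1.increment();   // only t1 ever touches c1
            }
        });
        Thread t2 = new Thread(() -> {
            for (int i = 1; i <= 1000; i++) {
                c2.increment();   // only t2 ever touches c2
            }
        });
        t1.start();
        t2.start();
        t1.join();   // coordination is still needed before reading the results
        t2.join();
        System.out.println("Count: " + (c1.count + c2.count));
    }
}

Note that join() also gives the main thread visibility of each worker's writes, so reading c1.count and c2.count afterwards is safe.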
CodePudding user response:
Why do we use the synchronized keyword while our aim of using multi-threading is parallel computing?
We don't.
If our aim of using multi-threading is parallel computing,* then we try to design our program to use synchronized only when it is absolutely necessary for the threads to talk to each other, and we try to make sure that seldom happens.
The best parallel computations allow the threads to run independently of each other, each working on its own private data (or, its own private copy of some "shared" data), for long stretches at a time. Then they sync up only just long enough to maybe update a few shared variables and continue.
You often will see things in parallel algorithms that look wasteful: e.g., a thread does some work, and then, when it syncs up with the shared data, it sees that the work was not necessary and throws the result away before starting another task.
Sounds wasteful, but if that strategy allows seven other threads to do big chunks of useful work without needing to talk to each other, it pays off in the long run.
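A rough sketch of that pattern, using AtomicLong from java.util.concurrent (the task, names, and numbers here are made up purely for illustration): each thread does a long stretch of private work, then syncs up only briefly, discarding its result if it turned out to be unnecessary.

import java.util.concurrent.atomic.AtomicLong;

public class OptimisticDemo {
    // Shared "best result so far"; updated only through compareAndSet.
    static final AtomicLong best = new AtomicLong(Long.MIN_VALUE);

    public static void main(String[] args) throws Exception {
        Thread[] workers = new Thread[4];
        for (int w = 0; w < workers.length; w++) {
            final long seed = w * 1_000_000L;
            workers[w] = new Thread(() -> {
                // Long stretch of private work: no shared state touched here.
                long localBest = Long.MIN_VALUE;
                for (long i = seed; i < seed + 1_000_000L; i++) {
                    localBest = Math.max(localBest, score(i));
                }
                // Brief sync-up: publish the result, or throw it away if
                // another thread has already published something better.
                long current;
                do {
                    current = best.get();
                    if (localBest <= current) {
                        return;   // our work turned out to be unnecessary
                    }
                } while (!best.compareAndSet(current, localBest));
            });
            workers[w].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        System.out.println("Best: " + best.get());
    }

    static long score(long i) {
        return (i * 31) % 1_000_003L;   // stand-in for a real computation
    }
}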
* That's not always our aim of using multi-threading. Our aim of using multi-threading is always to achieve concurrency. Sometimes we want the threads to concurrently compute something (a.k.a. "parallel computation"), but other times we want our threads to concurrently wait for events that are driven by external, unsynchronized sources (e.g., a server that waits for commands from multiple different clients).