Since MPI-3 comes with functionality for shared-memory parallelism, and it seems to be a perfect match for my application, I'm seriously considering rewriting my hybrid OpenMP-MPI code into a pure MPI implementation.
In order to drive the last nail into the coffin, I decided to run a small program to test the latency of the OpenMP fork/join mechanism. Here's the code (written for the Intel compiler):
void action1(std::vector<double>& t1, std::vector<double>& t2)
{
#pragma omp parallel for schedule(static) num_threads(std::thread::hardware_concurrency())
for (auto index = std::size_t{}; index < t1.size(); ++index)
{
t1.data()[index] = std::sin(t2.data()[index]) * std::cos(t2.data()[index]);
}
}
void action2(std::vector<double>& t1, std::vector<double>& t2)
{
#pragma omp parallel for schedule(static) num_threads(std::thread::hardware_concurrency())
for (auto index = std::size_t{}; index < t1.size(); ++index)
{
t1.data()[index] = t2.data()[index] * std::sin(t2.data()[index]);
}
}
void action3(std::vector<double>& t1, std::vector<double>& t2)
{
#pragma omp parallel for schedule(static) num_threads(std::thread::hardware_concurrency())
for (auto index = std::size_t{}; index < t1.size(); ++index)
{
t1.data()[index] = t2.data()[index] * t2.data()[index];
}
}
void action4(std::vector<double>& t1, std::vector<double>& t2)
{
#pragma omp parallel for schedule(static) num_threads(std::thread::hardware_concurrency())
for (auto index = std::size_t{}; index < t1.size(); ++index)
{
t1.data()[index] = std::sqrt(t2.data()[index]);
}
}
void action5(std::vector<double>& t1, std::vector<double>& t2)
{
#pragma omp parallel for schedule(static) num_threads(std::thread::hardware_concurrency())
for (auto index = std::size_t{}; index < t1.size(); ++index)
{
t1.data()[index] = t2.data()[index] * 2.0;
}
}
void all_actions(std::vector<double>& t1, std::vector<double>& t2)
{
#pragma omp parallel for schedule(static) num_threads(std::thread::hardware_concurrency())
for (auto index = std::size_t{}; index < t1.size(); ++index)
{
t1.data()[index] = std::sin(t2.data()[index]) * std::cos(t2.data()[index]);
t1.data()[index] = t2.data()[index] * std::sin(t2.data()[index]);
t1.data()[index] = t2.data()[index] * t2.data()[index];
t1.data()[index] = std::sqrt(t2.data()[index]);
t1.data()[index] = t2.data()[index] * 2.0;
}
}
int main()
{
// decide the process parameters
const auto n = std::size_t{8000000};
const auto test_count = std::size_t{500};
// garbage data...
auto t1 = std::vector<double>(n);
auto t2 = std::vector<double>(n);
// /////////////////
// perform actions one after the other
// /////////////////
const auto sp = timer::spot_timer();
const auto dur1 = sp.duration_in_us();
for (auto index = std::size_t{}; index < test_count; ++index)
{
#pragma noinline
action1(t1, t2);
#pragma noinline
action2(t1, t2);
#pragma noinline
action3(t1, t2);
#pragma noinline
action4(t1, t2);
#pragma noinline
action5(t1, t2);
}
const auto dur2 = sp.duration_in_us();
// /////////////////
// perform all actions at once
// /////////////////
const auto dur3 = sp.duration_in_us();
for (auto index = std::size_t{}; index < test_count; ++index)
{
#pragma noinline
all_actions(t1, t2);
}
const auto dur4 = sp.duration_in_us();
const auto a = dur2 - dur1;
const auto b = dur4 - dur3;
if (a < b)
{
throw std::logic_error("negative_latency_error");
}
const auto fork_join_latency = (a - b) / (test_count * 4);
// report
std::cout << "Ran the program with " << omp_get_max_threads() << ", the calculated fork/join latency is: " << fork_join_latency << " us" << std::endl;
return 0;
}
As you can see, the idea is to perform a set of actions separately (each within an OpenMP loop) and to calculate the average duration of this, and then to perform all these actions together (within the same OpenMP loop) and to calculate the average duration of that. Then we have a linear system of equations in two variables, one of which is the latency of the fork/join mechanism, which can be solved to obtain the value.
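To spell out that system (W and L are just ad hoc symbols here for the total arithmetic work per outer iteration and the cost of one fork/join): the first timed block gives a ≈ test_count * (W + 5 * L), since it opens five parallel regions per iteration, while the second gives b ≈ test_count * (W + L), assuming the merged loop performs the same total work W. Subtracting yields a - b ≈ 4 * test_count * L, which is why the code divides by test_count * 4.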
Questions:
- Am I overlooking something?
- Currently, I am using "-O0" to prevent the smarty-pants compiler from doing its funny business. Which compiler optimizations should I use, and would these also have an effect on the latency itself?
- On my Coffee Lake processor with 6 cores, I measured a latency of ~850 us. Does this sound about right?
Edit 3
1) I've included a warm-up calculation in the beginning upon @paleonix's suggestion,
2) I've reduced the number of actions for simplicity, and
3) I've switched to 'omp_get_wtime' to make it universally understandable.
I am now running the following code with flag -O3:
void action1(std::vector<double>& t1)
{
#pragma omp parallel for schedule(static) num_threads(std::thread::hardware_concurrency())
for (auto index = std::size_t{}; index < t1.size(); ++index)
{
t1.data()[index] = std::sin(t1.data()[index]);
}
}
void action2(std::vector<double>& t1)
{
#pragma omp parallel for schedule(static) num_threads(std::thread::hardware_concurrency())
for (auto index = std::size_t{}; index < t1.size(); ++index)
{
t1.data()[index] = std::cos(t1.data()[index]);
}
}
void action3(std::vector<double>& t1)
{
#pragma omp parallel for schedule(static) num_threads(std::thread::hardware_concurrency())
for (auto index = std::size_t{}; index < t1.size(); ++index)
{
t1.data()[index] = std::atan(t1.data()[index]);
}
}
void all_actions(std::vector<double>& t1, std::vector<double>& t2, std::vector<double>& t3)
{
#pragma omp parallel for schedule(static) num_threads(std::thread::hardware_concurrency())
for (auto index = std::size_t{}; index < t1.size(); ++index)
{
#pragma optimize("", off)
t1.data()[index] = std::sin(t1.data()[index]);
t2.data()[index] = std::cos(t2.data()[index]);
t3.data()[index] = std::atan(t3.data()[index]);
#pragma optimize("", on)
}
}
int main()
{
// decide the process parameters
const auto n = std::size_t{1500000}; // 12 MB (way too big for any cache)
const auto experiment_count = std::size_t{1000};
// garbage data...
auto t1 = std::vector<double>(n);
auto t2 = std::vector<double>(n);
auto t3 = std::vector<double>(n);
auto t4 = std::vector<double>(n);
auto t5 = std::vector<double>(n);
auto t6 = std::vector<double>(n);
auto t7 = std::vector<double>(n);
auto t8 = std::vector<double>(n);
auto t9 = std::vector<double>(n);
// /////////////////
// warm-up, initialization of threads etc.
// /////////////////
for (auto index = std::size_t{}; index < experiment_count / 10; ++index)
{
all_actions(t1, t2, t3);
}
// /////////////////
// perform actions (part A)
// /////////////////
const auto dur1 = omp_get_wtime();
for (auto index = std::size_t{}; index < experiment_count; ++index)
{
action1(t4);
action2(t5);
action3(t6);
}
const auto dur2 = omp_get_wtime();
// /////////////////
// perform all actions at once (part B)
// /////////////////
const auto dur3 = omp_get_wtime();
#pragma nofusion
for (auto index = std::size_t{}; index < experiment_count; ++index)
{
all_actions(t7, t8, t9);
}
const auto dur4 = omp_get_wtime();
const auto a = dur2 - dur1;
const auto b = dur4 - dur3;
const auto fork_join_latency = (a - b) / (experiment_count * 2);
// report
std::cout << "Ran the program with " << omp_get_max_threads() << ", the calculated fork/join latency is: "
<< fork_join_latency * 1E+6 << " us" << std::endl;
return 0;
}
With this, the measured latency is now 115 us. What's puzzling me now is that this value changes when the actions are changed. According to my logic, since I'm doing the same actions in both parts A and B, there should actually be no change. Why is this happening?
Answer:
TL;DR: cores do not operate at exactly the same speed due to dynamic frequency scaling, and there is a lot of noise that can impact the execution, resulting in expensive synchronizations. Your benchmark mostly measures this synchronization overhead. Using a single parallel section should solve this problem.
The benchmark is rather flawed. This code does not actually measure the "latency" of OpenMP fork/join sections. It measures a combination of many overheads, including:
- Load balancing and synchronizations: the split loops synchronize more often (5 times more) than the big merged one. Synchronizations are expensive, not because of the communication overhead, but because of the actual synchronization between cores that are inherently not synchronized. Indeed, a slight work imbalance between the threads results in the other threads waiting for the slowest one to finish. You might think this should not happen because of static scheduling, but context switches and dynamic frequency scaling cause some threads to be slower than others. Context switches are especially important if threads are not bound to cores or if other programs get scheduled by the OS during the computation. Dynamic frequency scaling (e.g. Intel Turbo Boost) causes some cores (or groups of cores) to run faster depending on the workload, the temperature of each core and of the overall package, the number of active cores, the estimated power consumption, etc. The higher the number of cores, the higher the synchronization overhead. Note that this overhead depends on the time taken by the loops. For more information, please read the analysis below.
- Performance of loop splitting: merging the 5 loops into a single one changes the generated assembly code (since fewer instructions are needed) and also changes the loads/stores in the cache (since the memory access pattern is a bit different). Not to mention that it could theoretically affect vectorization, although ICC does not vectorize this specific code. That being said, this does not appear to be the main issue on my machine, since I cannot reproduce the problem with Clang when running the program sequentially, while I can with many threads.
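One way to reduce the context-switch part of this noise, independently of how the loops are structured, is to pin the threads to cores before running the benchmark. A minimal illustration using the standard OpenMP affinity environment variables (the binary name is just a placeholder):
$ OMP_PLACES=cores OMP_PROC_BIND=close ./benchmark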
To solve this problem, you can use a single parallel section. The omp for loops must use the nowait clause so as not to introduce synchronizations. Alternatively, task-based constructs like taskloop with the nogroup clause can achieve the same thing. In both cases you should be careful about dependencies, since multiple for-loops/task-loops can then run in parallel. This is fine in your current code.
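As a minimal sketch of that idea, using the three single-array actions from the edited benchmark (the function name is just illustrative; the buffers and headers are the same as in the question's code):
void all_actions_one_region(std::vector<double>& t1, std::vector<double>& t2, std::vector<double>& t3)
{
    #pragma omp parallel num_threads(std::thread::hardware_concurrency())
    {
        // nowait drops the barrier at the end of each work-sharing loop,
        // so the threads are only synchronized once, at the end of the region
        #pragma omp for schedule(static) nowait
        for (std::size_t index = 0; index < t1.size(); ++index)
            t1[index] = std::sin(t1[index]);

        #pragma omp for schedule(static) nowait
        for (std::size_t index = 0; index < t2.size(); ++index)
            t2[index] = std::cos(t2[index]);

        #pragma omp for schedule(static) nowait
        for (std::size_t index = 0; index < t3.size(); ++index)
            t3[index] = std::atan(t3[index]);
    } // single implicit barrier (one fork/join) here
}
Since the three loops touch different arrays, dropping the per-loop barriers with nowait is safe here, and only one fork/join pair remains per call.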
Analysis
Analyzing the effect of short synchronizations caused by execution noise (context switches, frequency scaling, cache effects, OS interruptions, etc.) is pretty hard, since it is likely never the same thread that is the slowest during synchronizations in your case (the work between threads is quite balanced, but their speed is not perfectly equal).
That being said, if this hypothesis is true, fork_join_latency should be dependent on n. Thus, increasing n should also increase fork_join_latency. Here is what I get with Clang 13 and IOMP (using -fopenmp -O3) on my 6-core i5-9600KF processor:
n= 80'000 fork_join_latency<0.000001
n= 800'000 fork_join_latency=0.000036
n= 8'000'000 fork_join_latency=0.000288
n=80'000'000 fork_join_latency=0.003236
Note that the fork_join_latency timings are not very stable in practice, but the behavior is pretty visible: the measured overhead is dependent on n.
A better solution is to measure the synchronization time directly: time the loop in each thread and accumulate the difference between the minimum and maximum per-thread time. Here is an example of code:
#include <algorithm> // std::min_element, std::max_element
#include <cmath>     // std::sin
#include <vector>
#include <omp.h>

double totalSyncTime = 0.0;
void action1(std::vector<double>& t1)
{
constexpr int threadCount = 6;
double timePerThread[threadCount] = {0};
#pragma omp parallel
{
const double start = omp_get_wtime();
#pragma omp for nowait schedule(static) //num_threads(std::thread::hardware_concurrency())
#pragma nounroll
for (auto index = std::size_t{}; index < t1.size(); ++index)
{
t1.data()[index] = std::sin(t1.data()[index]);
}
const double stop = omp_get_wtime();
const double threadLoopTime = (stop - start);
timePerThread[omp_get_thread_num()] = threadLoopTime;
}
const double mini = *std::min_element(timePerThread, timePerThread + threadCount);
const double maxi = *std::max_element(timePerThread, timePerThread + threadCount);
const double syncTime = maxi - mini;
totalSyncTime += syncTime;
}
You can then divide totalSyncTime the same way you did for fork_join_latency and print the result. I get 0.000284 with fork_join_latency=0.000398 (with n=8'000'000), which almost proves that a major part of the overhead is due to synchronizations, and more specifically due to slightly different thread execution speeds. Note that this overhead does not include the implicit barrier at the end of the OpenMP parallel section.
Answer:
Here is my attempt at measuring fork-join overhead:
#include <chrono>
#include <iostream>
#include <string>
#include <thread>
#include <vector>
#include <omp.h>
constexpr int n_warmup = 1000;
constexpr int n_measurement = 10000;
void bench_fork_join(int num_threads) {
std::vector<int> v(num_threads);
using std::chrono::operator""us;
// sleep time should be on the same order of magnitude as
// the expected fork-join overhead
constexpr auto sleep_time = 10us;
omp_set_num_threads(num_threads);
// create threads, warmup
for (int i = 0; i < n_warmup; ++i) {
#pragma omp parallel
std::this_thread::sleep_for(sleep_time);
}
double const start = omp_get_wtime();
for (int i = 0; i < n_measurement; ++i) {
#pragma omp parallel
std::this_thread::sleep_for(sleep_time);
}
double const stop = omp_get_wtime();
double const ptime = (stop - start) * 1e6 / n_measurement;
// warmup
for (int i = 0; i < n_warmup; ++i) {
std::this_thread::sleep_for(sleep_time);
}
double const sstart = omp_get_wtime();
for (int i = 0; i < n_measurement; ++i) {
std::this_thread::sleep_for(sleep_time);
}
double const sstop = omp_get_wtime();
double const stime = (sstop - sstart) * 1e6 / n_measurement;
std::cout << ptime << " us\t" << stime << " us\t" << ptime - stime << " us\n";
}
int main(int argc, char** argv) {
auto const params = argc - 1;
for (int j = 0; j < params; ++j) {
auto num_threads = std::stoi(argv[1 + j]);
std::cout << "-------- num_threads = " << num_threads << " --------\n";
bench_fork_join(num_threads);
}
return 0;
}
You can call it with multiple different numbers of threads, which should not be higher than the number of cores on your machine to give reasonable results. On my machine with 6 cores, compiling with GCC 11.2, I get
$ g++ -fopenmp -O3 -DNDEBUG -o bench-omp-fork-join bench-omp-fork-join.cpp
$ ./bench-omp-fork-join 6 4 2 1
-------- num_threads = 6 --------
82.6869 us 71.5074 us 11.1795 us
-------- num_threads = 4 --------
76.6323 us 70.5836 us 6.04864 us
-------- num_threads = 2 --------
74.0966 us 70.4136 us 3.68297 us
-------- num_threads = 1 --------
71.3578 us 70.8501 us 0.507681 us
In each line the first number is the average (over 10000 iterations) with threads and the second number is the average without threads. The last number is the difference between the first two and should be an upper bound on the fork-join overhead.
As one can see, there is no significant overhead for num_threads=1, which is kind of a sanity check.
Make sure that the numbers in the middle column (no threads) are approximately the same in every row, as they should be independent of the number of threads. If they aren't, make sure there is nothing else running on the computer and/or increase the number of measurements and/or warmup runs.
In regard to exchanging OpenMP for MPI, keep in mind that MPI is still multiprocessing and not multithreading. You might pay a lot of memory overhead because processes tend to be much bigger than threads.
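For completeness, here is a minimal sketch of the MPI-3 shared-memory mechanism the question refers to, based on MPI_Comm_split_type and MPI_Win_allocate_shared (the buffer size and variable names are purely illustrative, and error handling is omitted):
#include <mpi.h>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    // communicator containing only the ranks that share this node's memory
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL, &node_comm);

    int node_rank;
    MPI_Comm_rank(node_comm, &node_rank);

    // rank 0 on the node allocates the shared buffer; the other ranks pass size 0
    const MPI_Aint local_size = (node_rank == 0) ? 1500000 * sizeof(double) : 0;
    double* base = nullptr;
    MPI_Win win;
    MPI_Win_allocate_shared(local_size, sizeof(double), MPI_INFO_NULL, node_comm, &base, &win);

    // every rank queries rank 0's segment to get a pointer into the same memory
    MPI_Aint size;
    int disp_unit;
    double* shared = nullptr;
    MPI_Win_shared_query(win, 0, &size, &disp_unit, &shared);

    // ... the ranks can now partition 'shared' among themselves and access it
    // directly, synchronizing between phases (e.g. with MPI_Win_fence or MPI_Barrier) ...

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
Even though the payload data is shared this way, each rank is still a full process, so the per-process memory footprint mentioned above still applies.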