Segmentation fault when using threads on function with large arrays -C

Time:03-18

I am using threads for the first time and came across a weird segmentation fault whenever the called function takes very large arrays.

#include <iostream>       
#include <thread>
#include <cmath>

const int dimension = 100000; // Dimension of the array
// Create a simple function of an array
void absolut(double *vec) { 
    double res = 0.; 
    for (int i = 0; i < dimension; i++) { 
        res += vec[i] * vec[i];
    }
    std::cout << std::sqrt(res) << std::endl;
}

int main() {
    // Define arrays
    double p[dimension], v[dimension]; 
    for (int i = 1; i < dimension; i++) { 
        p[i] = 1./double(i);
        v[i] = 1./double(i)/double(i);
    }
    // use multithreading
    std::thread t1(absolut, p);
    std::thread t2(absolut, v); 

    t1.join(); 
    t2.join();

    return 0;
}

The program runs fine like this, but if I increase the dimension of the arrays by a factor of 10, I get a segmentation fault. Does anybody know why this occurs and how to fix it?

CodePudding user response:

double *p = new double[dimension];
double *v = new double[dimension];

The original version crashes because `p` and `v` are allocated on the stack, which is typically limited to a few megabytes. Two arrays of one million `double`s need about 8 MB each, which overflows that limit. Allocating them dynamically, as above, puts them on the heap instead, where much larger sizes are available (remember to `delete[]` them, or use `std::vector`).
