memcpy() creates segmentation fault after too many iterations


I am trying to create a multithreading library in C. Here is the link to the whole project (pasting the code here would be too much text).

In the file tests/MultithreadingTests.c I am testing the functionality of lib/systems/multithreading/src/ThreadPool.c. The function add_work adds any routine function to the work queue, which uses the functionality of lib/sds/lists/src/Queue.c and lib/sds/lists/src/LinkedList.c. In MultithreadingTests.c, NUM_TESTS defines the number of jobs I am adding to the work queue to be performed by NUM_THREADS threads.

I am facing a weird issue with the code. If NUM_TESTS is any number less than 349,261, the code works perfectly fine, but any number greater than or equal to 349,261 results in a segmentation fault. I tried to check where exactly the segmentation fault happens and found that it occurs in lib/sds/lists/src/Node.c at line 29, at memcpy(node->data, data, size);.

The flow of code for the error is

  • tests/MultithreadingTests.c line 95 at pool->add_work(pool, new_thread_job(routine, &arguments[i]));
  • lib/systems/multithreading/src/ThreadPool.c line 150 thread_pool->work.push(&thread_pool->work, &job, sizeof(job));
  • lib/sds/lists/src/Queue.c line 54 return q->list.insert(&q->list, q->list.length, data, size);
  • lib/sds/lists/src/LinkedList.c line 107 Node *node_to_insert = new_node(data, size);
  • lib/sds/lists/src/Node.c line 29 memcpy(node->data, data, size);

I am not sure why this issue happens only when the number of jobs is greater than or equal to 349,261 but not when it is smaller.

CodePudding user response:

In function new_thread_pool(), you neither

  • test for allocation failure in thread_pool.pool = malloc(sizeof(pthread_t) * num_threads); nor
  • test for thread creation failure in pthread_create(&thread_pool.pool[i], NULL, generic_thread_function, &thread_pool);

Queuing 349,261 or more jobs on any system looks more like a stress test than a real-life workload. Test for errors and report them in a usable way.

new_node() does not check for allocation failure either. Unless you instrument your code for this, use a wrapper around malloc() that detects allocation failure and aborts the program with an error message.

The issue in your code is in the function mt_test_add_work(): you define an array of arguments with automatic storage:

Arguments arguments[NUM_TESTS];

This object is allocated on the stack, using 8,382,264 bytes of stack space. That exceeds your system's stack limit and causes undefined behavior down the call chain, where further stack usage eventually triggers a segmentation fault: a typical case of stack overflow.

You should allocate this object from the heap and free it before exiting the function:

Arguments *arguments = malloc(sizeof(*arguments) * NUM_TESTS);